diff --git a/2.2/microservices/application/AdvancedTopics/index.html b/2.2/microservices/application/AdvancedTopics/index.html
index c8df2e6a44..0d25ed5fc9 100644
--- a/2.2/microservices/application/AdvancedTopics/index.html
+++ b/2.2/microservices/application/AdvancedTopics/index.html
@@ -1664,6 +1664,13 @@
How it works
+
+
+
+
+ Custom Storage
+
+
@@ -2767,6 +2774,13 @@
How it works
+
+
+
+
+ Custom Storage
+
+
@@ -3074,6 +3088,59 @@ How it works
Note
Changing Writable.Pipeline.ExecutionOrder will invalidate all currently stored data and result in it all being removed from the database on the next retry. This is because the position of the export function can no longer be guaranteed, and there is no way to ensure it is properly executed on the retry.
+Custom Storage
+The default backing store is Redis. Custom implementations of the StoreClient interface can be provided if Redis does not meet your requirements.
+type StoreClient interface {
+ // Store persists a stored object to the data store and returns the assigned UUID.
+ Store(o StoredObject) (id string, err error)
+
+ // RetrieveFromStore gets an object from the data store.
+ RetrieveFromStore(appServiceKey string) (objects []StoredObject, err error)
+
+ // Update replaces the data currently in the store with the provided data.
+ Update(o StoredObject) error
+
+ // RemoveFromStore removes an object from the data store.
+ RemoveFromStore(o StoredObject) error
+
+ // Disconnect ends the connection.
+ Disconnect() error
+}
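To make the contract concrete, here is a minimal in-memory implementation of the interface, useful as a starting point or in tests. This is a hedged sketch: the `StoredObject` and `StoreClient` types below are simplified local stand-ins for the SDK's real types (which live in the app functions SDK's `interfaces` package and carry additional fields), and `MemoryStore` is a hypothetical name, not part of the SDK.

```go
package main

import (
	"fmt"
	"sync"
)

// StoredObject is a simplified stand-in for the SDK's stored-object type.
type StoredObject struct {
	ID            string
	AppServiceKey string
	Payload       []byte
}

// StoreClient mirrors the interface shown above.
type StoreClient interface {
	Store(o StoredObject) (id string, err error)
	RetrieveFromStore(appServiceKey string) ([]StoredObject, error)
	Update(o StoredObject) error
	RemoveFromStore(o StoredObject) error
	Disconnect() error
}

// MemoryStore is a toy in-memory StoreClient (hypothetical, for illustration).
type MemoryStore struct {
	mu      sync.Mutex
	nextID  int
	objects map[string]StoredObject
}

func NewMemoryStore() *MemoryStore {
	return &MemoryStore{objects: make(map[string]StoredObject)}
}

// Store assigns an ID and persists the object.
func (m *MemoryStore) Store(o StoredObject) (string, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.nextID++
	o.ID = fmt.Sprintf("obj-%d", m.nextID)
	m.objects[o.ID] = o
	return o.ID, nil
}

// RetrieveFromStore returns all objects stored for the given app service key.
func (m *MemoryStore) RetrieveFromStore(appServiceKey string) ([]StoredObject, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	var result []StoredObject
	for _, o := range m.objects {
		if o.AppServiceKey == appServiceKey {
			result = append(result, o)
		}
	}
	return result, nil
}

// Update replaces a previously stored object by ID.
func (m *MemoryStore) Update(o StoredObject) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if _, ok := m.objects[o.ID]; !ok {
		return fmt.Errorf("no stored object with id %s", o.ID)
	}
	m.objects[o.ID] = o
	return nil
}

// RemoveFromStore deletes the object by ID.
func (m *MemoryStore) RemoveFromStore(o StoredObject) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.objects, o.ID)
	return nil
}

// Disconnect is a no-op for the in-memory store.
func (m *MemoryStore) Disconnect() error { return nil }

func main() {
	var client StoreClient = NewMemoryStore()
	id, _ := client.Store(StoredObject{AppServiceKey: "my-service", Payload: []byte("event")})
	objs, _ := client.RetrieveFromStore("my-service")
	fmt.Println(id, len(objs))
}
```

A real implementation would additionally need to survive service restarts (persist to disk or a broker), which is the whole point of the store and forward capability.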
+
+A factory function to create these clients can then be registered with your service by calling RegisterCustomStoreFactory:
+service.RegisterCustomStoreFactory("jetstream", func(cfg interfaces.DatabaseInfo, cred config.Credentials) (interfaces.StoreClient, error) {
+    // serviceKey is captured from the enclosing scope (the application service's key).
+    conn, err := nats.Connect(fmt.Sprintf("nats://%s:%d", cfg.Host, cfg.Port))
+    if err != nil {
+        return nil, err
+    }
+
+    js, err := conn.JetStream()
+    if err != nil {
+        return nil, err
+    }
+
+    // Reuse the service's key/value bucket, creating it on first run.
+    kv, err := js.KeyValue(serviceKey)
+    if err != nil {
+        kv, err = js.CreateKeyValue(&nats.KeyValueConfig{Bucket: serviceKey})
+    }
+
+    return &JetstreamStore{
+        conn:       conn,
+        serviceKey: serviceKey,
+        kv:         kv,
+    }, err
+})
+
+and configured using the registered name in the Database section:
+[Database]
+ Type = "jetstream"
+ Host = "broker"
+ Port = 4222
+ Timeout = "5s"
+
Secrets
Configuration
All instances of App Services running in secure mode require a SecretStore to be configured. With the use of Redis Pub/Sub
as the default EdgeX MessageBus all App Services need the redisdb
known secret added to their SecretStore so they can connect to the Secure EdgeX MessageBus. See the Secure MessageBus documentation for more details.
diff --git a/2.2/microservices/application/ApplicationServiceAPI/index.html b/2.2/microservices/application/ApplicationServiceAPI/index.html
index 4017c3f578..a0740ac8c8 100644
--- a/2.2/microservices/application/ApplicationServiceAPI/index.html
+++ b/2.2/microservices/application/ApplicationServiceAPI/index.html
@@ -1816,6 +1816,13 @@
RegisterCustomTriggerFactory
+
+
+
+
+ RegisterCustomStoreFactory
+
+
@@ -3085,6 +3092,13 @@
RegisterCustomTriggerFactory
+
+
+
+
+ RegisterCustomStoreFactory
+
+
@@ -3158,6 +3172,7 @@ Application Service API
AddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error
RequestTimeout() time.Duration
RegisterCustomTriggerFactory(name string, factory func(TriggerConfig) (Trigger, error)) error
+ RegisterCustomStoreFactory(name string, factory func(cfg DatabaseInfo, cred config.Credentials) (StoreClient, error)) error
}
Factory Functions
@@ -3556,6 +3571,9 @@ RequestTimeout()
RegisterCustomTriggerFactory
RegisterCustomTriggerFactory(name string, factory func(TriggerConfig) (Trigger, error)) error
This API registers a trigger factory for a custom trigger to be used. See the Custom Triggers section for more details and example.
+RegisterCustomStoreFactory
+RegisterCustomStoreFactory(name string, factory func(cfg DatabaseInfo, cred config.Credentials) (StoreClient, error)) error
+This API registers a factory to construct a custom store client for the Store and Forward loop.
diff --git a/2.2/search/search_index.json b/2.2/search/search_index.json
index 9b7a2d9bf5..3a9fae3c43 100644
--- a/2.2/search/search_index.json
+++ b/2.2/search/search_index.json
This service may be replaced or augmented by use case specific analytics capability. Scheduler: an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration specified time, the service will call on any EdgeX service API URL via REST to trigger an operation. For example, the scheduler service periodically calls on core data APIs to clean up old sensed events that have been successfully exported out of EdgeX. Alerts and Notifications: provides EdgeX services with a central facility to send out an alert or notification. These are notices sent to another system or to a person monitoring the EdgeX instance (internal service communications are often handled more directly).","title":"Supporting Services Layer"},{"location":"#application-services-layer","text":"Application services are the means to extract, process/transform and send sensed data from EdgeX to an endpoint or process of your choice. EdgeX today offers application service examples to send data to many of the major cloud providers (Amazon IoT Hub, Google IoT Core, Azure IoT Hub, IBM Watson IoT\u2026), to MQTT(s) topics, and HTTP(s) REST endpoints. Application services are based on the idea of a \"functions pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event messages) in the order specified. The first function in a pipeline is a trigger. A trigger begins the functions pipeline execution. A trigger, for example, is something like a message landing in a message queue. Each function then acts on the message. Common functions include filtering, transformation (i.e. to XML or JSON), compression, and encryption functions. The function pipeline ends when the message has gone through all the functions and is sent to a sink. 
Putting the resulting message into an MQTT topic to be sent to Azure or AWS is an example of a sink completing an application service.","title":"Application Services Layer"},{"location":"#device-services-layer","text":"Device services connect \u201cthings\u201d \u2013 that is sensors and devices \u2013 into the rest of EdgeX. Device services are the edge connectors interacting with the \"things\" that include, but are not limited to: alarm systems, heating and air conditioning systems in homes and office buildings, lights, machines in any industry, irrigation systems, drones, currently automated transit such as some rail systems, currently automated factories, and appliances in your home. In the future, this may include driverless cars and trucks, traffic signals, fully automated fast food facilities, fully automated self-serve grocery stores, devices taking medical readings from patients, etc. Device services may service one or a number of things or devices (sensor, actuator, etc.) at one time. A device that a device service manages, could be something other than a simple, single, physical device. The device could be another gateway (and all of that gateway's devices), a device manager, a device aggregator that acts as a device, or collection of devices, to EdgeX Foundry. The device service communicates with the devices, sensors, actuators, and other IoT objects through protocols native to each device object. The device service converts the data produced and communicated by the IoT object into a common EdgeX Foundry data structure, and sends that converted data into the core services layer, and to other micro services in other layers of EdgeX Foundry. 
EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, MQTT, etc.","title":"Device Services Layer"},{"location":"#system-services-layer","text":"Security Infrastructure Security elements of EdgeX Foundry protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Based on the fact that EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules. There are two major EdgeX security components. A security store, which is used to provide a safe place to keep the EdgeX secrets. Examples of EdgeX secrets are the database access passwords used by the other services and tokens to connect to cloud systems. An API gateway serves as the reverse proxy to restrict access to EdgeX REST resources and perform access control related works. System Management System Management facilities provide the central point of contact for external management systems to start/stop/restart EdgeX services, get the status/health of a service, or get metrics on the EdgeX services (such as memory usage) so that the EdgeX services can be monitored.","title":"System Services Layer"},{"location":"#software-development-kits-sdks","text":"Two types of SDKs are provided by EdgeX to assist in creating north and south side services \u2013 specifically to create application services and device services. SDKs for both the north and south side services make connecting new things or new cloud/enterprise systems easier by providing developers all the scaffolding code that takes care of the basic operations of the service. Thereby allowing developers to focus on specifics of their connectivity to the south or north side object without worrying about all the raw plumbing of a micro service. 
SDKs are language specific, meaning an SDK is written to create services in a particular programming language. Today, EdgeX offers the following SDKs: Golang Device Service SDK C Device Service SDK Golang Application Functions SDK","title":"Software Development Kits (SDKs)"},{"location":"#how-edgex-works","text":"","title":"How EdgeX Works"},{"location":"#sensor-data-collection","text":"EdgeX\u2019s primary job is to collect data from sensors and devices and make that data available to north side applications and systems. Data is collected from a sensor by a device service that speaks the protocol of that device. Example: a Modbus device service would communicate in Modbus to get a pressure reading from a Modbus pump. The device service translates the sensor data into an EdgeX event object. The device service can then either: put the event object on a message bus (which may be implemented via Redis Streams or MQTT). Subscribers to the event message on the message bus can be application services or core data or both (see step 1.1 below). send the event object to the core data service via REST communications (see step 1.2). When core data receives the event (either via message bus or REST), it persists the sensor data in the local edge database. EdgeX uses Redis as its persistence store. There is an abstraction in place to allow you to use another database (which has allowed other databases to be used in the past). Persistence is not required and can be turned off. Data is persisted in EdgeX at the edge for two basic reasons: Edge nodes are not always connected. During periods of disconnected operations, the sensor data must be saved so that it can be transmitted northbound when connectivity is restored. This is referred to as store and forward capability. In some cases, analytics of sensor data needs to look back in history in order to understand the trend and to make the right decision based on that history. 
If a sensor reports that it is 72\u00b0 F right now, you might want to know what the temperature was ten minutes ago before you make a decision to adjust a heating or cooling system. If the temperature was 85\u00b0 F, you may decide that adjustments to lower the room temperature you made ten minutes ago were sufficient to cool the room. It is the context of historical data that are important to local analytic decisions. When core data receives event objects from the device service via REST, it will put sensor data events on a message topic destined for application services. Redis Pub/Sub is used as the messaging infrastructure by default (step 2). MQTT or ZMQ can also be used as the messaging infrastructure between core data and the application services. The application service transforms the data as needed and pushes the data to an endpoint. It can also filter, enrich, compress, encrypt or perform other functions on the event before sending it to the endpoint (step 3). The endpoint could be an HTTP/S endpoint, an MQTT topic, a cloud system (cloud topic), etc.","title":"Sensor Data Collection"},{"location":"#edge-analytics-and-actuation","text":"In edge computing, simply collecting sensor data is only part of the job of an edge platform like EdgeX. Another important job of an edge platform is to be able to: Analyze the incoming sensor data locally Act quickly on that analysis Edge or local analytics is the processing that performs an assessment of the sensor data collected at the edge (\u201clocally\u201d) and triggers actuations or actions based on what it sees. Why edge analytics ? Local analytics are important for two reasons: Some decisions cannot afford to wait for sensor collected data to be fed back to an enterprise or cloud system and have a response returned. Additionally, some edge systems are not always connected to the enterprise or cloud \u2013 they have intermittent periods of connectivity. 
Local analytics allows systems to operate independently, at least for some stretches of time. For example: a shipping container\u2019s cooling system must be able to make decisions locally without the benefit of Internet connectivity for long periods of time when the ship is at sea. Local analytics also allow a system to act quickly in a low latent fashion when critical to system operations. As an extreme case, imagine that your car\u2019s airbag fired on the basis of data being sent to the cloud and analyzed for collisions. Your car has local analytics to prevent such a potentially slow and error prone delivery of the safety actuation in your automobile. EdgeX is built to act locally on data it collects from the edge. In other words, events are processed by local analytics and can be used to trigger action back down on a sensor/device. Just as application services prepare data for consumption by north side cloud systems or applications, application services can process and get EdgeX events (and the sensor data they contain) to any analytics package (see step 4). By default, EdgeX ships with a simple rules engine (the default EdgeX rules engine is eKuiper \u2013 an open source rules engine and now a sister project in LF Edge). Your own analytics package (or ML agent) could replace or augment the local rules engine. The analytic package can explore the sensor event data and make a decision to trigger actuation of a device. For example, it could check that the pressure reading of an engine is greater than 60 PSI. When such a rule is determined to be true, the analytic package calls on the core command service to trigger some action, like \u201copen a valve\u201d on some controllable device (see step 5). The core command service gets the actuation request and determines which device it needs to act on with the request; then calling on the owning device service to do the actuation (see step 6). 
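The pressure-rule example above can be made concrete. The sketch below is a hand-rolled stand-in for what a rules engine evaluates, not eKuiper itself, and the core command URL it builds (port, path, and the `OpenValve` command name) is illustrative of the V2 endpoint shape rather than copied from the API reference.

```go
package main

import "fmt"

// evaluatePressure is a hand-rolled stand-in for a rules-engine rule: if the
// pressure exceeds the threshold, it returns the actuation request that would
// be sent to core command (URL shape and command name are illustrative).
func evaluatePressure(deviceName string, psi float64) (fire bool, commandURL string) {
	const threshold = 60.0 // PSI, from the example in the text
	if psi <= threshold {
		return false, ""
	}
	// Core command resolves which device service owns the device and forwards
	// the protocol-specific actuation request to it (steps 5-7 in the text).
	return true, fmt.Sprintf(
		"PUT http://localhost:59882/api/v2/device/name/%s/OpenValve", deviceName)
}

func main() {
	if fire, url := evaluatePressure("engine-01", 72.3); fire {
		fmt.Println("rule fired:", url)
	}
}
```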
Core command allows developers to put additional security measures or checks in place before actuating. The device service receives the request for actuation, translates that into a protocol specific request and forwards the request to the desired device (see step 7).","title":"Edge Analytics and Actuation"},{"location":"#project-release-cadence","text":"Typically, EdgeX releases twice a year; once in the spring and once in the fall. Bug fix releases may occur more often. Each EdgeX release has a code name. The code name follows an alphabetic pattern similar to Android (code names sequentially follow the alphabet). The code name of each release is named after some geographical location in the world. The honor of naming an EdgeX release is given to a community member deemed to have contributed significantly to the project. A release also has a version number. The release version follows semantic versioning to indicate the release is major or minor in scope. Major releases typically contain significant new features and functionality and are not always backward compatible with prior releases. Minor releases are backward compatible and usually contain bug fixes and fewer new features. See the project Wiki for more information on releases, versions and patches. Release Schedule Version Barcelona Oct 2017 0.5.0 California Jun 2018 0.6.0 Delhi Oct 2018 0.7.0 Edinburgh Jul 2019 1.0.0 Fuji Nov 2019 1.1.0 Geneva May 2020 1.2.0 Hanoi November 2020 1.3.0 Ireland Spring 2021 2.0.0 Jakarta Fall 2021 2.1.0 Kamakura Spring 2022 TBD Levski Fall 2022 TBD Note : minor releases of the Device Services and Application Services (along with their associated SDKs) can be released independently. The Graphical User Interface, the command line interface (CLI) and other tools can be released independently. EdgeX community members convene in a meeting right at the time of a release to plan the next release and roadmap future releases. 
See the Project Wiki for more detailed information on releases and roadmap . EdgeX 2.0","title":"Project Release Cadence"},{"location":"#the-ireland-release","text":"The Ireland release, available June 2021, is the second major version of EdgeX. Highlights of the 2.0 release include: A new and improved set of service APIs, which eliminate a lot of technical debt and setting EdgeX up for new features in the future (such as allowing for more message based communications) Direct device service to application service communications via message bus (bypassing core data if desired or allowing it to be a secondary subscriber) Simplified device profiles Improved security New, improved and more comprehensive graphical user interface (for development and demonstration purposes) New device services for CoAP, GPIO, and LLRP (RFID protocol) An LLRP inventory application service Improved application service capability and functions (to include new filter functions) Cleaner/simpler Docker image naming and facilities to create custom Docker Compose files EdgeX 2.0 provides adopters with a platform that Has an improved API that addresses edge application needs of today and tomorrow Is more efficient and lighter (depending on use case) Is more reliable and offers better quality of service (less REST, more messaging and incorporating a number of bug fixes) Has eliminated a lot of technical debt accumulated over 4 years","title":"The Ireland Release"},{"location":"#edgex-history-and-naming","text":"EdgeX Foundry began as a project chartered by Dell IoT Marketing and developed by the Dell Client Office of the CTO as an incubation project called Project Fuse in July 2015. It was initially created to run as the IoT software application on Dell\u2019s introductory line of IoT gateways. Dell entered the project into open source through the Linux Foundation on April 24, 2017. EdgeX was formally announced and demonstrated at Hanover Messe 2017. 
Hanover Messe is one of the world's largest industrial trade fairs. At the fair, the Linux Foundation also announced the association of 50 founding member organizations \u2013 the EdgeX ecosystem \u2013 to help further the project and the goals of creating a universal edge platform. The name \u2018foundry\u2019 was used to draw parallels to Cloud Foundry . EdgeX Foundry is meant to be a foundry for solutions at the edge just like Cloud Foundry is a foundry for solutions in the cloud. Cloud Foundry was originated by VMWare (Dell Technologies is a major shareholder of VMWare - recall that Dell Technologies was the original creator of EdgeX). The \u2018X\u2019 in EdgeX represents the transformational aspects of the platform and allows the project name to be trademarked and to be used in efforts such as certification and certification marks. The EdgeX Foundry Logo represents the nature of its role as transformation engine between the physical OT world and the digital IT world. The EdgeX community selected the octopus as the mascot or \u201cspirit animal\u201d of the project at its inception. Its eight arms and the suckers on the arms represent the sensors. The sensors bring the data into the octopus. Actually, the octopus has nine brains in a way. It has millions of neurons running down each arm; functioning as mini-brains in each of those arms. The arms of the octopus serve as \u201clocal analytics\u201d like that offered by EdgeX. The mascot is affectionately called \u201cEdgey\u201d by the community.","title":"EdgeX History and Naming"},{"location":"V2TopLevelMigration/","text":"V2 Migration Guide EdgeX 2.0 Many backward breaking changes occurred in the EdgeX 2.0 (Ireland) release which may require some migration depending on your use case. This section describes how to migrate from V1 to V2 at a high level and refers the reader to the appropriate detail documents. 
The areas to consider for migrating are: Custom Compose File Database Custom Configuration Custom Device Service Custom Device Profile Custom Pre-Defined Device Custom Applications Service Security eKuiper Rules Custom Compose File The compose files for V2 have many changes from their V1 counter parts. If you have customized a V1 compose file to add additional services and/or add or modify configuration overrides, it is highly recommended that you start with the appropriate V2 compose file and re-add your customizations. It is very likely that the sections for your additional services will need to be migrated to have the proper environment overrides. Best approach is to use one of the V2 service sections that closest matches your service as a template. The latest V2 compose files can be found here: https://github.com/edgexfoundry/edgex-compose/tree/ireland Compose Builder If the add on service(s) in your custom compose file are EdgeX released device or app services, it is highly recommended that you use the Compose Builder to generate your custom compose file. The latest V2 Compose Builder can be found here: https://github.com/edgexfoundry/edgex-compose/tree/ireland/compose-builder#readme Database There currently is no migration path for the data stored in the database. The V2 data collections are stored separately from the V1 data collections in the Redis database. Redis is now the only supported database, i.e. support for Mongo has been removed. Note Since the V1 data and V2 data are stored separately, one could create a migration tool and upstream it to the EdgeX community. Warning If the database is not cleared before starting the V2 services, the old V1 data will still reside in the database taking up useful memory. It is recommended that you first wipe the database clean before starting V2 Services. That is unless you create a DB migration tool, in which case you will not want to clear the V1 data until it has been migrated. 
See Clearing Redis Database section below for details on how to clear the Redis database. The following sections describe what you need to be aware of for the different services that create data in the database. Core Data The Event/Reading data stored by Core Data is considered transient and of little value once it has become old. The V2 versions of these data collections will be empty until new Events/Readings are received from V2 Device Services. The V1 ValueDescriptors have been removed in V2. Core Metadata Most of the data stored by Core Metadata will be recreated when the V2 versions of the Device Services start up. The statically declared devices will automatically be created and device discovery will find and add existing devices. Any device profiles, devices, provision watchers created manually via the V1 REST APIs will have to be recreated using the V2 REST API. Any manually-applied AdministrativeState settings will also need to be re-applied. Support Notifications Any Subscriptions created via the V1 REST API will have to be recreated using the V2 REST API. The Notification and Transmission collections will be empty until new notifications are sent using EdgeX 2.0. Support Scheduler The statically declared Interval and IntervalAction will be created automatically. Any Interval and/or IntervalAction created via the V1 REST API will have to be recreated using the V2 REST API. If you have created a custom configuration with additional statically declared Intervals and IntervalActions, see the TOML File section under Custom Configuration below. Application Services Application services use the database only when the Store and Forward capability is enabled. If you do not use this capability you can skip this section. This data collection only has data when that data could not be exported. It is recommended not to upgrade to V2 until the Store and Forward data collection is empty or you are certain the data is no longer needed. 
You can determine if the Store and Forward data collection is empty by setting the Application Service's log level to DEBUG and looking for the following message which is logged every RetryInterval : msg=\" 0 stored data items found for retrying\" Clearing Redis Database Docker When running EdgeX in Docker the simplest way to clear the database is to remove the db-data volume after stopping the V1 EdgeX services. docker-compose -f down docker volume rm $(docker volume ls -q | grep db-data) Now when the V2 EdgeX services are started, the database will be cleared of the old v1 data. Snaps Because there are no tools to migrate EdgeX configuration and database, it's not possible to update the edgexfoundry snap from a V1 version to a V2 version. You must remove the V1 snap first, and then install a V2 version of the snap (available from the 2.0 track in the Snap Store). This will result in starting fresh with EdgeX V2 and all V1 data removed. Local If you are running EdgeX locally, i.e. not in Docker or snaps and in non-secure mode, you can use the Redis CLI to clear the database. The CLI would have been installed when you installed Redis locally. Run the following command to clear the database: redis-cli FLUSHDB This will not work if running EdgeX V1 in secure mode since you will not have the randomly generated Redis password unless you created an Admin password when you installed Redis. Custom Configuration Consul If you have customized any EdgeX service's configuration (core, support, device, etc.) via Consul, those customizations will need to be re-applied to those services' configuration in Consul once the V2 versions have started and pushed their configuration into Consul. The V2 services now use 2.0 in the Consul path rather than 1.0 . See the TOML File section below for details on migrating configuration for each of the EdgeX services. 
Example Consul path for V2 .../kv/edgex/core/2.0/core-data/ The same applies for custom device and application services once they have been migrated following the guides referenced in the Custom Device Service and Custom Applications Service sections below. Warning If the Consul data is not cleared prior to running the V2 services, the V1 configuration will remain and be taking up useful memory. The configuration data in Consul can be cleared by deleting the .../kv/edgex/ node with the curl command below prior to starting EdgeX 2.0. Consul is secured in EdgeX 2.0 secure-mode, which means running the command below will require an access token if it is not done beforehand. curl --request DELETE http://localhost:8500/v1/kv/edgex?recurse=true TOML File If you have custom configuration TOML files for any EdgeX service (core, support, device, etc.) that configuration will need to be migrated. See V2 Migration of Common Configuration for the details on migrating configuration common to all EdgeX services. The following are where you can find the configuration migration specifics for the individual core/support services: Core Data Core Metadata Core Command Support Notifications Support Scheduler System Management Agent (DEPRECATED) Application Services Device Services (common) Device MQTT Device Camera Custom Environment Overrides If you have custom environment overrides for configuration impacted by the V2 changes you will also need to migrate your overrides to use the new name or value depending on what has changed. Refer to the links above and/or below for details for migration of common and/or the service specific configuration to determine if your overrides require migrating. Custom Device Service If you have custom Device Services they will need to be migrated to the V2 version of the Device SDK. See Device Service V2 Migration Guide for complete details. 
Custom Device Profile If you have custom V1 Device Profile(s) for one of the EdgeX Device Services they will need to be migrated to the V2 version of Device Profiles. See Device Service V2 Migration Guide for complete details. Custom Pre-Defined Device If you have custom V1 Pre-Defined Device(s) for one of the EdgeX Device Services they will need to be migrated to the V2 version of Pre-Defined Devices. See Device Service V2 Migration Guide for complete details. Custom Applications Service If you have custom Application Services they will need to be migrated to the V2 version of the App Functions SDK. See Application Services V2 Migration Guide for complete details. Security Settings If you have an add-on service running in secure mode you will need to set additional security service environment variables in EdgeX V2. See Configuring Add-on Service for more details. API Gateway configuration The API gateway has different tools to set TLS and acquire access tokens. See Configuring API Gateway section for complete details. Secure Consul Consul is now secured when running EdgeX 2.0 in secured mode. See Secure Consul section for complete details. Secured API Gateway Admin Port The API Gateway Admin port is now secured when running EdgeX 2.0 in secured mode. See API Gateway Admin Port (TBD) section for complete details. eKuiper Rules If you have rules defined in the eKuiper rules engine that utilize the meta() directive, you will need to migrate your rule(s) to use the new V2 meta names. The following are the meta names that have changed, been added, or been removed. 
device => deviceName name => resourceName profileName ( new ) pushed ( removed ) created ( removed - use origin) modified ( removed - use origin) floatEncoding ( removed ) Example V1 to V2 rule migration V1 Rule: { \"id\": \"ruleInt64\", \"sql\": \"SELECT Int64 FROM demo WHERE meta(device) = \\\"Random-Integer-Device\\\" \", \"actions\": [ { \"mqtt\": { \"server\": \"tcp://edgex-mqtt-broker:1883\", \"topic\": \"result\", \"clientId\": \"demo_001\" } } ] } V2 Rule: { \"id\": \"ruleInt64\", \"sql\": \"SELECT Int64 FROM demo WHERE meta(deviceName) = \\\"Random-Integer-Device\\\" \", \"actions\": [ { \"mqtt\": { \"server\": \"tcp://edgex-mqtt-broker:1883\", \"topic\": \"result\", \"clientId\": \"demo_001\" } } ] }","title":"V2 Migration Guide"},{"location":"V2TopLevelMigration/#v2-migration-guide","text":"EdgeX 2.0 Many backward breaking changes occurred in the EdgeX 2.0 (Ireland) release which may require some migration depending on your use case. This section describes how to migrate from V1 to V2 at a high level and refers the reader to the appropriate detail documents. The areas to consider for migrating are: Custom Compose File Database Custom Configuration Custom Device Service Custom Device Profile Custom Pre-Defined Device Custom Applications Service Security eKuiper Rules","title":"V2 Migration Guide"},{"location":"V2TopLevelMigration/#custom-compose-file","text":"The compose files for V2 have many changes from their V1 counter parts. If you have customized a V1 compose file to add additional services and/or add or modify configuration overrides, it is highly recommended that you start with the appropriate V2 compose file and re-add your customizations. It is very likely that the sections for your additional services will need to be migrated to have the proper environment overrides. Best approach is to use one of the V2 service sections that closest matches your service as a template. 
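Because the meta-name changes above are mechanical, a small helper can rewrite rule SQL automatically. The `migrateRuleSQL` function below is an illustrative sketch, not an official migration tool; note that keeping the closing parenthesis in each search pattern prevents an already-correct `meta(deviceName)` from being clobbered.

```go
package main

import (
	"fmt"
	"strings"
)

// migrateRuleSQL rewrites V1 eKuiper meta() names to their V2 equivalents,
// per the mapping in the migration guide: device => deviceName,
// name => resourceName. Removed names (pushed, created, modified,
// floatEncoding) need hand-editing, so they are only reported.
func migrateRuleSQL(sql string) (migrated string, removedUsed []string) {
	replacer := strings.NewReplacer(
		"meta(device)", "meta(deviceName)",
		"meta(name)", "meta(resourceName)",
	)
	migrated = replacer.Replace(sql)
	for _, removed := range []string{"pushed", "created", "modified", "floatEncoding"} {
		if strings.Contains(migrated, "meta("+removed+")") {
			removedUsed = append(removedUsed, removed)
		}
	}
	return migrated, removedUsed
}

func main() {
	v1 := `SELECT Int64 FROM demo WHERE meta(device) = "Random-Integer-Device"`
	v2, removed := migrateRuleSQL(v1)
	fmt.Println(v2)
	fmt.Println("removed meta names still referenced:", removed)
}
```

Running this over the V1 rule from the example produces the V2 SQL shown above; the rule's actions block is unchanged by the migration.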
The latest V2 compose files can be found here: https://github.com/edgexfoundry/edgex-compose/tree/ireland","title":"Custom Compose File"},{"location":"V2TopLevelMigration/#compose-builder","text":"If the add on service(s) in your custom compose file are EdgeX released device or app services, it is highly recommended that you use the Compose Builder to generate your custom compose file. The latest V2 Compose Builder can be found here: https://github.com/edgexfoundry/edgex-compose/tree/ireland/compose-builder#readme","title":"Compose Builder"},{"location":"V2TopLevelMigration/#database","text":"There currently is no migration path for the data stored in the database. The V2 data collections are stored separately from the V1 data collections in the Redis database. Redis is now the only supported database, i.e. support for Mongo has been removed. Note Since the V1 data and V2 data are stored separately, one could create a migration tool and upstream it to the EdgeX community. Warning If the database is not cleared before starting the V2 services, the old V1 data will still reside in the database taking up useful memory. It is recommended that you first wipe the database clean before starting V2 Services. That is unless you create a DB migration tool, in which case you will not want to clear the V1 data until it has been migrated. See Clearing Redis Database section below for details on how to clear the Redis database. The following sections describe what you need to be aware for the different services that create data in the database.","title":"Database"},{"location":"V2TopLevelMigration/#core-data","text":"The Event/Reading data stored by Core Data is considered transient and of little value once it has become old. The V2 versions of these data collections will be empty until new Events/Readings are received from V2 Device Services. 
The V1 ValueDescriptors have been removed in V2.","title":"Core Data"},{"location":"V2TopLevelMigration/#core-metadata","text":"Most of the data stored by Core Metadata will be recreated when the V2 versions of the Device Services start up. The statically declared devices will automatically be created and device discovery will find and add existing devices. Any device profiles, devices, or provision watchers created manually via the V1 REST APIs will have to be recreated using the V2 REST API. Any manually-applied AdministrativeState settings will also need to be re-applied.","title":"Core Metadata"},{"location":"V2TopLevelMigration/#support-notifications","text":"Any Subscriptions created via the V1 REST API will have to be recreated using the V2 REST API. The Notification and Transmission collections will be empty until new notifications are sent using EdgeX 2.0","title":"Support Notifications"},{"location":"V2TopLevelMigration/#support-scheduler","text":"The statically declared Interval and IntervalAction will be created automatically. Any Interval and/or IntervalAction created via the V1 REST API will have to be recreated using the V2 REST API. If you have created a custom configuration with additional statically declared Intervals and IntervalActions, see the TOML File section under Custom Configuration below.","title":"Support Scheduler"},{"location":"V2TopLevelMigration/#application-services","text":"Application services use the database only when the Store and Forward capability is enabled. If you do not use this capability you can skip this section. This data collection only has data when that data could not be exported. It is recommended not to upgrade to V2 until the Store and Forward data collection is empty or you are certain the data is no longer needed. 
You can determine if the Store and Forward data collection is empty by setting the Application Service's log level to DEBUG and looking for the following message which is logged every RetryInterval : msg=\" 0 stored data items found for retrying\"","title":"Application Services"},{"location":"V2TopLevelMigration/#clearing-redis-database","text":"","title":"Clearing Redis Database"},{"location":"V2TopLevelMigration/#docker","text":"When running EdgeX in Docker the simplest way to clear the database is to remove the db-data volume after stopping the V1 EdgeX services. docker-compose -f <compose-file> down docker volume rm $(docker volume ls -q | grep db-data) Now when the V2 EdgeX services are started the database will be cleared of the old V1 data.","title":"Docker"},{"location":"V2TopLevelMigration/#snaps","text":"Because there are no tools to migrate EdgeX configuration and database, it's not possible to update the edgexfoundry snap from a V1 version to a V2 version. You must remove the V1 snap first, and then install a V2 version of the snap (available from the 2.0 track in the Snap Store). This will result in starting fresh with EdgeX V2 and all V1 data removed.","title":"Snaps"},{"location":"V2TopLevelMigration/#local","text":"If you are running EdgeX locally, i.e. not in Docker or snaps and in non-secure mode, you can use the Redis CLI to clear the database. The CLI would have been installed when you installed Redis locally. Run the following command to clear the database: redis-cli FLUSHDB This will not work if EdgeX V1 is running in secure mode since you will not have the randomly generated Redis password unless you created an Admin password when you installed Redis.
via Consul, those customizations will need to be re-applied to those services' configuration in Consul once the V2 versions have started and pushed their configuration into Consul. The V2 services now use 2.0 in the Consul path rather than 1.0 . See the TOML File section below for details on migrating configuration for each of the EdgeX services. Example Consul path for V2 .../kv/edgex/core/2.0/core-data/ The same applies for custom device and application services once they have been migrated following the guides referenced in the Custom Device Service and Custom Applications Service sections below. Warning If the Consul data is not cleared prior to running the V2 services, the V1 configuration will remain, needlessly taking up memory. The configuration data in Consul can be cleared by deleting the .../kv/edgex/ node with the curl command below prior to starting EdgeX 2.0. Consul is secured in EdgeX 2.0 secure mode, which means running the command below will require an access token if it is not done prior. curl --request DELETE http://localhost:8500/v1/kv/edgex?recurse=true","title":"Consul"},{"location":"V2TopLevelMigration/#toml-file","text":"If you have custom configuration TOML files for any EdgeX service (core, support, device, etc.) that configuration will need to be migrated. See V2 Migration of Common Configuration for the details on migrating configuration common to all EdgeX services. The following are where you can find the configuration migration specifics for the individual core/support services: Core Data Core Metadata Core Command Support Notifications Support Scheduler System Management Agent (DEPRECATED) Application Services Device Services (common) Device MQTT Device Camera
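The Consul path change described above (1.0 becomes 2.0 in the key path) is mechanical; a minimal sketch in Go, with an illustrative key path, not one copied from a real deployment:

```go
package main

import (
	"fmt"
	"strings"
)

// migrateConsulPath rewrites a V1 Consul KV path to its V2 equivalent by
// replacing the version segment, per the path change described above.
func migrateConsulPath(v1Path string) string {
	return strings.Replace(v1Path, "/1.0/", "/2.0/", 1)
}

func main() {
	// Illustrative key path
	fmt.Println(migrateConsulPath("edgex/core/1.0/core-data/Writable/LogLevel"))
}
```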
Refer to the links above and/or below for details on migration of the common and/or service-specific configuration to determine if your overrides require migrating.","title":"Custom Environment Overrides"},{"location":"V2TopLevelMigration/#custom-device-service","text":"If you have custom Device Services they will need to be migrated to the V2 version of the Device SDK. See Device Service V2 Migration Guide for complete details.","title":"Custom Device Service"},{"location":"V2TopLevelMigration/#custom-device-profile","text":"If you have custom V1 Device Profile(s) for one of the EdgeX Device Services they will need to be migrated to the V2 version of Device Profiles. See Device Service V2 Migration Guide for complete details.","title":"Custom Device Profile"},{"location":"V2TopLevelMigration/#custom-pre-defined-device","text":"If you have custom V1 Pre-Defined Device(s) for one of the EdgeX Device Services they will need to be migrated to the V2 version of Pre-Defined Devices. See Device Service V2 Migration Guide for complete details.","title":"Custom Pre-Defined Device"},{"location":"V2TopLevelMigration/#custom-applications-service","text":"If you have custom Application Services they will need to be migrated to the V2 version of the App Functions SDK. See Application Services V2 Migration Guide for complete details.","title":"Custom Applications Service"},{"location":"V2TopLevelMigration/#security","text":"","title":"Security"},{"location":"V2TopLevelMigration/#settings","text":"If you have an add-on service running in secure mode you will need to set additional security service environment variables in EdgeX V2. See Configuring Add-on Service for more details.","title":"Settings"},{"location":"V2TopLevelMigration/#api-gateway-configuration","text":"The API gateway has different tools to set TLS and acquire access tokens. 
See Configuring API Gateway section for complete details.","title":"API Gateway configuration"},{"location":"V2TopLevelMigration/#secure-consul","text":"Consul is now secured when running EdgeX 2.0 in secured mode. See Secure Consul section for complete details.","title":"Secure Consul"},{"location":"V2TopLevelMigration/#secured-api-gateway-admin-port","text":"The API Gateway Admin port is now secured when running EdgeX 2.0 in secured mode. See API Gateway Admin Port (TBD) section for complete details.","title":"Secured API Gateway Admin Port"},{"location":"V2TopLevelMigration/#ekuiper-rules","text":"If you have rules defined in the eKuiper rules engine that utilize the meta() directive, you will need to migrate your rule(s) to use the new V2 meta names. The following are the meta names that have been changed, added, or removed. device => deviceName name => resourceName profileName ( new ) pushed ( removed ) created ( removed - use origin) modified ( removed - use origin) floatEncoding ( removed ) Example V1 to V2 rule migration V1 Rule: { \"id\": \"ruleInt64\", \"sql\": \"SELECT Int64 FROM demo WHERE meta(device) = \\\"Random-Integer-Device\\\" \", \"actions\": [ { \"mqtt\": { \"server\": \"tcp://edgex-mqtt-broker:1883\", \"topic\": \"result\", \"clientId\": \"demo_001\" } } ] } V2 Rule: { \"id\": \"ruleInt64\", \"sql\": \"SELECT Int64 FROM demo WHERE meta(deviceName) = \\\"Random-Integer-Device\\\" \", \"actions\": [ { \"mqtt\": { \"server\": \"tcp://edgex-mqtt-broker:1883\", \"topic\": \"result\", \"clientId\": \"demo_001\" } } ] }","title":"eKuiper Rules"},{"location":"api/Ch-APIIntroduction/","text":"Introduction Each of the EdgeX services (core, supporting, management, device and application) implements a RESTful API. This section provides details about each service's API. You will see there is a common set of APIs that all services implement, which are: Version Metrics Config Ping Each EdgeX Service's RESTful API is documented via Swagger. 
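Returning to the eKuiper rule migration above: for the renamed meta() fields the SQL rewrite is mechanical, so it can be scripted; a minimal sketch in Go (it covers only the renamed fields — removed fields such as pushed still need manual attention):

```go
package main

import (
	"fmt"
	"strings"
)

// metaRenames maps V1 meta() names to their V2 replacements, per the list above.
var metaRenames = map[string]string{
	"device": "deviceName",
	"name":   "resourceName",
}

// migrateRuleSQL rewrites meta(<old>) references in a rule's SQL to V2 names.
func migrateRuleSQL(sql string) string {
	for oldName, newName := range metaRenames {
		sql = strings.ReplaceAll(sql, "meta("+oldName+")", "meta("+newName+")")
	}
	return sql
}

func main() {
	fmt.Println(migrateRuleSQL(`SELECT Int64 FROM demo WHERE meta(device) = "Random-Integer-Device"`))
}
```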
A link is provided to the Swagger document in the service-specific documentation. Also included in this API Reference are a couple of 3rd party services (Configuration/Registry and Rules Engine). These services do not implement the above common APIs and do not have Swagger documentation. Links are provided to their appropriate documentation. See the left side navigation for the complete list of services to access their API Reference. EdgeX 2.0 For EdgeX 2.0 all the EdgeX services use new DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all or /label/{label}, provide offset and limit query parameters.","title":"Introduction"},{"location":"api/Ch-APIIntroduction/#introduction","text":"Each of the EdgeX services (core, supporting, management, device and application) implements a RESTful API. This section provides details about each service's API. You will see there is a common set of APIs that all services implement, which are: Version Metrics Config Ping Each EdgeX Service's RESTful API is documented via Swagger. A link is provided to the Swagger document in the service-specific documentation. Also included in this API Reference are a couple of 3rd party services (Configuration/Registry and Rules Engine). These services do not implement the above common APIs and do not have Swagger documentation. Links are provided to their appropriate documentation. See the left side navigation for the complete list of services to access their API Reference. EdgeX 2.0 For EdgeX 2.0 all the EdgeX services use new DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. 
All query APIs (GET) which return multiple objects, such as /all or /label/{label}, provide offset and limit query parameters.","title":"Introduction"},{"location":"api/applications/Ch-APIAppFunctionsSDK/","text":"Application Services The App Functions SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a functions pipeline. This functions pipeline processes messages received by the configured trigger. See Application Functions SDK for more details on this SDK. The App Functions SDK provides a RESTful API that all Application Services inherit from the SDK. Application Service SDK V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the App Functions SDK has changed to use DTOs (Data Transfer Objects) for all responses and for POST requests. One exception is the /api/v2/trigger endpoint that is enabled when the Trigger is configured to be http . This endpoint accepts any data POSTed to it.","title":"Application Services"},{"location":"api/applications/Ch-APIAppFunctionsSDK/#application-services","text":"The App Functions SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a functions pipeline. This functions pipeline processes messages received by the configured trigger. See Application Functions SDK for more details on this SDK. The App Functions SDK provides a RESTful API that all Application Services inherit from the SDK. Application Service SDK V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the App Functions SDK has changed to use DTOs (Data Transfer Objects) for all responses and for POST requests. One exception is the /api/v2/trigger endpoint that is enabled when the Trigger is configured to be http . 
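Multi-object query endpoints like those noted above take offset and limit parameters; a minimal sketch of building such a paginated request URL in Go (the host, port, and endpoint path here are illustrative):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// pagedURL builds a multi-object query URL carrying the offset and limit
// query parameters described above.
func pagedURL(base, path string, offset, limit int) string {
	u, err := url.Parse(base)
	if err != nil {
		panic(err)
	}
	u.Path = path
	q := u.Query()
	q.Set("offset", strconv.Itoa(offset))
	q.Set("limit", strconv.Itoa(limit))
	u.RawQuery = q.Encode() // Encode sorts keys alphabetically
	return u.String()
}

func main() {
	// e.g. fetch the second page of 20 events (service address illustrative)
	fmt.Println(pagedURL("http://localhost:59880", "/api/v2/event/all", 20, 20))
}
```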
This endpoint accepts any data POSTed to it.","title":"Application Services"},{"location":"api/applications/Ch-APIRulesEngine/","text":"Rules Engine EdgeX Foundry Rules Engine Microservice receives data from the instance of App Service Configurable running the rules-engine profile (aka app-rules-engine) via the EdgeX MessageBus. EdgeX uses eKuiper for the rules engine, which is a separate LF Edge project. See the eKuiper README for more details on this rules engine. eKuiper's RESTful API documentation","title":"Rules Engine"},{"location":"api/applications/Ch-APIRulesEngine/#rules-engine","text":"EdgeX Foundry Rules Engine Microservice receives data from the instance of App Service Configurable running the rules-engine profile (aka app-rules-engine) via the EdgeX MessageBus. EdgeX uses eKuiper for the rules engine, which is a separate LF Edge project. See the eKuiper README for more details on this rules engine. eKuiper's RESTful API documentation","title":"Rules Engine"},{"location":"api/core/Ch-APICoreCommand/","text":"Core Command EdgeX Foundry's Command microservice is a conduit for other services to trigger action on devices and sensors through their managing Device Services. See Core Command for more details about this service. The service provides an API to get the list of commands that can be issued for all devices or a single device. Commands are divided into two groups for each device: GET commands are issued to a device or sensor to get a current value for a particular attribute on the device, such as the current temperature provided by a thermostat sensor, or the on/off status of a light. SET commands are issued to a device or sensor to change the current state or status of a device or one of its attributes, such as setting the speed in RPMs of a motor, or setting the brightness of a dimmer light. 
Core Command V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Command has changed to use DTOs (Data Transfer Objects) for all responses and for all PUT requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Command"},{"location":"api/core/Ch-APICoreCommand/#core-command","text":"EdgeX Foundry's Command microservice is a conduit for other services to trigger action on devices and sensors through their managing Device Services. See Core Command for more details about this service. The service provides an API to get the list of commands that can be issued for all devices or a single device. Commands are divided into two groups for each device: GET commands are issued to a device or sensor to get a current value for a particular attribute on the device, such as the current temperature provided by a thermostat sensor, or the on/off status of a light. SET commands are issued to a device or sensor to change the current state or status of a device or one of its attributes, such as setting the speed in RPMs of a motor, or setting the brightness of a dimmer light. Core Command V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Command has changed to use DTOs (Data Transfer Objects) for all responses and for all PUT requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Command"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/","text":"Configuration and Registry EdgeX uses the 3rd party Consul microservice as the implementations for Configuration and Registry. The RESTful APIs are provided by Consul directly, and several communities supply Consul client libraries for different programming languages, including Go (official), Python, Java, PHP, Scala, Erlang/OTP, Ruby, Node.js, and C#. 
EdgeX 2.0 New for EdgeX 2.0 is Secure Consul when running EdgeX in secure mode. See the Secure Consul section for more details. For the client libraries of different languages, please refer to the list on this page: https://www.consul.io/downloads_tools.html Configuration Management For the current API documentation, please refer to the official Consul web site: https://www.consul.io/intro/getting-started/kv.html https://www.consul.io/docs/agent/http/kv.html Service Registry For the current API documentation, please refer to the official Consul web site: https://www.consul.io/intro/getting-started/services.html https://www.consul.io/docs/agent/http/catalog.html https://www.consul.io/docs/agent/http/agent.html https://www.consul.io/docs/agent/checks.html https://www.consul.io/docs/agent/http/health.html Service Registration While each microservice is starting up, it will connect to Consul to register its endpoint information, including microservice ID, address, port number, and health checking method. After that, other microservices can locate its URL from Consul, and Consul has the ability to monitor its health status. The RESTful API of registration is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_service_register Service Deregistration Before microservices shut down, they have to deregister themselves from Consul. The RESTful API of deregistration is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_service_deregister Service Discovery The Service Discovery feature allows client microservices to query the endpoint information of a particular microservice by its microservice ID or list all available services registered in Consul. 
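The registration body sent to Consul's agent service register endpoint (linked above) can be sketched as follows; the field names follow Consul's agent API, while the service name, address, port, and check values are illustrative rather than taken from a real deployment:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// check mirrors the health-check portion of Consul's service registration body.
type check struct {
	HTTP     string `json:"HTTP"`
	Interval string `json:"Interval"`
}

// registration mirrors the JSON body PUT to /v1/agent/service/register.
type registration struct {
	ID      string `json:"ID"`
	Name    string `json:"Name"`
	Address string `json:"Address"`
	Port    int    `json:"Port"`
	Check   check  `json:"Check"`
}

// registrationJSON composes an example payload; all values are illustrative.
func registrationJSON() string {
	r := registration{
		ID:      "core-data",
		Name:    "core-data",
		Address: "edgex-core-data",
		Port:    59880,
		Check: check{
			HTTP:     "http://edgex-core-data:59880/api/v2/ping", // health endpoint (illustrative)
			Interval: "10s",
		},
	}
	b, _ := json.Marshal(r)
	return string(b)
}

func main() {
	fmt.Println(registrationJSON())
}
```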
The RESTful API of querying a service by microservice ID is described on the following Consul page: https://www.consul.io/docs/agent/http/catalog.html#catalog_service The RESTful API of listing all available services is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_services Health Checking Health checking is a critical feature that prevents using services that are unhealthy. Consul provides a variety of methods to check the health of services, including Script + Interval, HTTP + Interval, TCP + Interval, Time to Live (TTL), and Docker + Interval. The detailed introduction and examples of each checking method are described on the following Consul page: https://www.consul.io/docs/agent/checks.html The health checks should be established during service registration. Please see the Service Registration paragraph on this page. Consul UI Consul has a UI which allows you to view the health of registered services and view/edit services' individual configuration. Learn more about the UI on the following Consul page: https://learn.hashicorp.com/tutorials/consul/get-started-explore-the-ui EdgeX 2.0 Please note that as of EdgeX 2.0, Consul can be secured. When EdgeX is running in secure mode with secure Consul , you must provide Consul's access token to get to the UI referenced above. See How to get Consul ACL token for details.","title":"Configuration and Registry"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#configuration-and-registry","text":"EdgeX uses the 3rd party Consul microservice as the implementations for Configuration and Registry. The RESTful APIs are provided by Consul directly, and several communities supply Consul client libraries for different programming languages, including Go (official), Python, Java, PHP, Scala, Erlang/OTP, Ruby, Node.js, and C#. EdgeX 2.0 New for EdgeX 2.0 is Secure Consul when running EdgeX in secure mode. See the Secure Consul section for more details. 
For the client libraries of different languages, please refer to the list on this page: https://www.consul.io/downloads_tools.html","title":"Configuration and Registry"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#configuration-management","text":"For the current API documentation, please refer to the official Consul web site: https://www.consul.io/intro/getting-started/kv.html https://www.consul.io/docs/agent/http/kv.html","title":"Configuration Management"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#service-registry","text":"For the current API documentation, please refer to the official Consul web site: https://www.consul.io/intro/getting-started/services.html https://www.consul.io/docs/agent/http/catalog.html https://www.consul.io/docs/agent/http/agent.html https://www.consul.io/docs/agent/checks.html https://www.consul.io/docs/agent/http/health.html Service Registration While each microservice is starting up, it will connect to Consul to register its endpoint information, including microservice ID, address, port number, and health checking method. After that, other microservices can locate its URL from Consul, and Consul has the ability to monitor its health status. The RESTful API of registration is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_service_register Service Deregistration Before microservices shut down, they have to deregister themselves from Consul. The RESTful API of deregistration is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_service_deregister Service Discovery The Service Discovery feature allows client microservices to query the endpoint information of a particular microservice by its microservice ID or list all available services registered in Consul. 
The RESTful API of querying a service by microservice ID is described on the following Consul page: https://www.consul.io/docs/agent/http/catalog.html#catalog_service The RESTful API of listing all available services is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_services Health Checking Health checking is a critical feature that prevents using services that are unhealthy. Consul provides a variety of methods to check the health of services, including Script + Interval, HTTP + Interval, TCP + Interval, Time to Live (TTL), and Docker + Interval. The detailed introduction and examples of each checking method are described on the following Consul page: https://www.consul.io/docs/agent/checks.html The health checks should be established during service registration. Please see the Service Registration paragraph on this page.","title":"Service Registry"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#consul-ui","text":"Consul has a UI which allows you to view the health of registered services and view/edit services' individual configuration. Learn more about the UI on the following Consul page: https://learn.hashicorp.com/tutorials/consul/get-started-explore-the-ui EdgeX 2.0 Please note that as of EdgeX 2.0, Consul can be secured. When EdgeX is running in secure mode with secure Consul , you must provide Consul's access token to get to the UI referenced above. See How to get Consul ACL token for details.","title":"Consul UI"},{"location":"api/core/Ch-APICoreData/","text":"Core Data EdgeX Foundry Core Data microservice includes the Events/Readings database collected from devices/sensors and APIs to expose this database to other services. Its APIs provide access to Add, Query, and Delete Events/Readings. See Core Data for more details about this service. 
Core Data V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Data has changed to use DTOs (Data Transfer Objects) for all responses and for all POST requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Data"},{"location":"api/core/Ch-APICoreData/#core-data","text":"EdgeX Foundry Core Data microservice includes the Events/Readings database collected from devices/sensors and APIs to expose this database to other services. Its APIs provide access to Add, Query, and Delete Events/Readings. See Core Data for more details about this service. Core Data V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Data has changed to use DTOs (Data Transfer Objects) for all responses and for all POST requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Data"},{"location":"api/core/Ch-APICoreMetadata/","text":"Core Metadata The Core Metadata microservice includes the device/sensor metadata database and APIs to expose this database to other services. In particular, the device provisioning service deposits and manages device metadata through this service's API. See Core Metadata for more details about this service. Core Metadata V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Metadata has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Metadata"},{"location":"api/core/Ch-APICoreMetadata/#core-metadata","text":"The Core Metadata microservice includes the device/sensor metadata database and APIs to expose this database to other services. 
In particular, the device provisioning service deposits and manages device metadata through this service's API. See Core Metadata for more details about this service. Core Metadata V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Metadata has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Metadata"},{"location":"api/devices/Ch-APIDeviceSDK/","text":"Device Services The EdgeX Foundry Device Service Software Development Kit (SDK) takes the Developer through the step-by-step process to create an EdgeX Foundry Device Service microservice. See Device Service SDK for more details on this SDK. The Device Service SDK provides a RESTful API that all Device Services inherit from the SDK. Device SDK V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Device Service SDK has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT requests.","title":"Device Services"},{"location":"api/devices/Ch-APIDeviceSDK/#device-services","text":"The EdgeX Foundry Device Service Software Development Kit (SDK) takes the Developer through the step-by-step process to create an EdgeX Foundry Device Service microservice. See Device Service SDK for more details on this SDK. The Device Service SDK provides a RESTful API that all Device Services inherit from the SDK. Device SDK V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Device Service SDK has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT requests.","title":"Device Services"},{"location":"api/management/Ch-APISystemManagement/","text":"System Management Agent EdgeX 2.0 System Management Agent has been deprecated for EdgeX 2.0. 
While it is still available, it may be removed in a future release and no further development is planned for it. The EdgeX System Management Agent (SMA) microservice exposes the EdgeX management service API to 3rd party systems. In other words, the Agent serves as a proxy for system management service API calls into each microservice. See System Management Agent for more details about this service. System Management V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the System Management Agent has changed to use DTOs (Data Transfer Objects) for all responses and for all POST requests.","title":"System Management Agent"},{"location":"api/management/Ch-APISystemManagement/#system-management-agent","text":"EdgeX 2.0 System Management Agent has been deprecated for EdgeX 2.0. While it is still available, it may be removed in a future release and no further development is planned for it. The EdgeX System Management Agent (SMA) microservice exposes the EdgeX management service API to 3rd party systems. In other words, the Agent serves as a proxy for system management service API calls into each microservice. See System Management Agent for more details about this service. System Management V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the System Management Agent has changed to use DTOs (Data Transfer Objects) for all responses and for all POST requests.","title":"System Management Agent"},{"location":"api/support/Ch-APISupportNotifications/","text":"Support Notifications When a person or a system needs to be informed of something discovered on the node by another microservice on the node, EdgeX Foundry's Support Notifications microservice delivers that information. 
Examples of Alerts and Notifications that other services might need to broadcast include sensor data detected outside of certain parameters, usually detected by a Rules Engine service, or a system or service malfunction usually detected by system management services. See Support Notifications for more details about this service. Support Notifications V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Support Notifications has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Support Notifications"},{"location":"api/support/Ch-APISupportNotifications/#support-notifications","text":"When a person or a system needs to be informed of something discovered on the node by another microservice on the node, EdgeX Foundry's Support Notifications microservice delivers that information. Examples of Alerts and Notifications that other services might need to broadcast include sensor data detected outside of certain parameters, usually detected by a Rules Engine service, or a system or service malfunction usually detected by system management services. See Support Notifications for more details about this service. Support Notifications V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Support Notifications has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Support Notifications"},{"location":"api/support/Ch-APISupportScheduler/","text":"Support Scheduler EdgeX Foundry's Support Scheduler microservice schedules actions to occur on specific intervals. See Support Scheduler for more details about this service. 
Support Scheduler V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Support Scheduler has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Support Scheduler"},{"location":"api/support/Ch-APISupportScheduler/#support-scheduler","text":"EdgeX Foundry's Support Scheduler microservice schedules actions to occur on specific intervals. See Support Scheduler for more details about this service. Support Scheduler V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Support Scheduler has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Support Scheduler"},{"location":"design/","text":"Architecture Decision Records Folder This folder contains EdgeX Foundry decision records (ADR) and legacy design / requirement documents. /design /adr (architecture decision Records) /legacy-design (legacy design documents) /legacy-requirements (legacy requirement documents) At the root of the ADR folder (/design/adr) are decisions that are relevant to multiple parts of the project (aka cross cutting concerns). Sub folders under the ADR folder contain decisions relevant to the specific area of the project and are essentially set up along working group lines (security, core, application, etc.). Naming and Formatting ADR documents are requested to follow the RFC (request for comments) naming standard. Specifically, authors should name their documents with a sequentially increasing integer (or serial number) and then the architectural design topic: (sequence number - topic). Example: 0001-SeparateConfigurationInterface. The sequence is a global sequence for all EdgeX ADR. 
Per RFC and Michael Nygard suggestions, the makeup of the ADR document should generally include: Title Status (proposed, accepted, rejected, deprecated, superseded, etc.) Context and Proposed Design Decision Consequences/considerations References Document history is maintained via GitHub history. Ownership EdgeX WG chairmen own the sub folder and included documents associated with their work group. The EdgeX TSC chair/vice chair are responsible for the root level, cross cutting concern documents. Review and Approval ADRs shall be submitted as PRs to the appropriate edgex-docs folder based on the Architecture Decision Records Folder section above. The status of the PR (inside the document) shall be listed as proposed during this period. The PRs shall be left open (not merged) so that comments against the PR can be collected during the proposal period. The PRs can be approved and merged only after a formal vote of approval is conducted by the TSC. On approval of the ADR by the TSC, the status of the ADR should be changed to accepted . If the ADR is not approved by the TSC, the status in the document should be changed to rejected and the PR closed. Legacy A separate folder (/design/legacy-design) is used for legacy design/architecture decisions. A separate folder (/design/legacy-requirements) is used for legacy requirements documents. WG chairmen take responsibility for posting legacy material into the applicable folders. Table of Contents A README with a table of contents for current documents is located here . Legacy Design and Requirements have their own Table of Contents as well and are located in their respective directories at /legacy-design and /legacy-requirements . 
Document authors are asked to keep the TOC updated with each new document entry.","title":"Architecture Decision Records Folder"},{"location":"design/#architecture-decision-records-folder","text":"This folder contains EdgeX Foundry decision records (ADR) and legacy design / requirement documents. /design /adr (architecture decision Records) /legacy-design (legacy design documents) /legacy-requirements (legacy requirement documents) At the root of the ADR folder (/design/adr) are decisions that are relevant to multiple parts of the project (aka cross cutting concerns ). Sub folders under the ADR folder contain decisions relevant to the specific area of the project and essentially set up along working group lines (security, core, application, etc.).","title":"Architecture Decision Records Folder"},{"location":"design/#naming-and-formatting","text":"ADR documents are requested to follow the RFC (request for comments) naming standard. Specifically, authors should name their documents with a sequentially increasing integer (or serial number) and then the architectural design topic: (sequence number - topic). Example: 0001-SeparateConfigurationInterface. The sequence is a global sequence for all EdgeX ADR. Per RFC and Michael Nygard suggestions, the makeup of the ADR document should generally include: Title Status (proposed, accepted, rejected, deprecated, superseded, etc.) Context and Proposed Design Decision Consequences/considerations References Document history is maintained via GitHub history.","title":"Naming and Formatting"},{"location":"design/#ownership","text":"EdgeX WG chairmen own the sub folder and included documents associated with their work group. The EdgeX TSC chair/vice chair are responsible for the root level, cross cutting concern documents.","title":"Ownership"},{"location":"design/#review-and-approval","text":"ADRs shall be submitted as PRs to the appropriate edgex-docs folder based on the Architecture Decision Records Folder section above. 
The status of the PR (inside the document) shall be listed as proposed during this period. The PRs shall be left open (not merged) so that comments against the PR can be collected during the proposal period. The PRs can be approved and merged only after a formal vote of approval is conducted by the TSC. On approval of the ADR by the TSC, the status of the ADR should be changed to accepted . If the ADR is not approved by the TSC, the status in the document should be changed to rejected and the PR closed.","title":"Review and Approval"},{"location":"design/#legacy","text":"A separate folder (/design/legacy-design) is used for legacy design/architecture decisions. A separate folder (/design/legacy-requirements) is used for legacy requirements documents. WG chairmen take responsibility for posting legacy material into the applicable folders.","title":"Legacy"},{"location":"design/#table-of-contents","text":"A README with a table of contents for current documents is located here . Legacy Design and Requirements have their own Table of Contents as well and are located in their respective directories at /legacy-design and /legacy-requirements . 
Document authors are asked to keep the TOC updated with each new document entry.","title":"Table of Contents"},{"location":"design/TOC/","text":"ADR Table of Contents Name/Link Short Description 0001 Registry Refactor Separate out Registry and Configuration APIs 0002 Array Datatypes Allow Arrays to be held in Readings 0003 V2 API Principles Principles and Goals of V2 API Design 0004 Feature Flags Feature Flag Implementation 0005 Service Self Config Init Service Self Config Init & Config Seed Removal 0006 Metrics Collection Collection of service telemetry data 0007 Release Automation Overview of Release Automation Flow for EdgeX 0008 Secret Distribution Creation and Distribution of Secrets 0009 Secure Bootstrapping Secure Bootstrapping of EdgeX 0011 Device Service REST API The REST API for Device Services in EdgeX v2.x 0012 Device Service Filters Device Service event/reading filters 0013 Device Service Events via Message Bus Device Services send Events via Message Bus 0014 Secret Provider for All Secret Provider for All EdgeX Services 0015 Encryption between microservices Details conditions under which TLS is or is not used 0016 Container Image Guidelines Documents best practices for security of docker images 0017 Securing access to Consul Access control and authorization strategy for Consul 0018 Service Registry Service registry usage for EdgeX services 0019 EdgeX-CLI V2 EdgeX-CLI V2 Implementation 0020 Delay start services (SPIFFE/SPIRE) Secret store tokens for delayed start services 0021 Device Profile Changes Rules on device profile modifications","title":"ADR Table of Contents"},{"location":"design/TOC/#adr-table-of-contents","text":"Name/Link Short Description 0001 Registry Refactor Separate out Registry and Configuration APIs 0002 Array Datatypes Allow Arrays to be held in Readings 0003 V2 API Principles Principles and Goals of V2 API Design 0004 Feature Flags Feature Flag Implementation 0005 Service Self Config Init Service Self Config Init & Config Seed 
Removal 0006 Metrics Collection Collection of service telemetry data 0007 Release Automation Overview of Release Automation Flow for EdgeX 0008 Secret Distribution Creation and Distribution of Secrets 0009 Secure Bootstrapping Secure Bootstrapping of EdgeX 0011 Device Service REST API The REST API for Device Services in EdgeX v2.x 0012 Device Service Filters Device Service event/reading filters 0013 Device Service Events via Message Bus Device Services send Events via Message Bus 0014 Secret Provider for All Secret Provider for All EdgeX Services 0015 Encryption between microservices Details conditions under which TLS is or is not used 0016 Container Image Guidelines Documents best practices for security of docker images 0017 Securing access to Consul Access control and authorization strategy for Consul 0018 Service Registry Service registry usage for EdgeX services 0019 EdgeX-CLI V2 EdgeX-CLI V2 Implementation 0020 Delay start services (SPIFFE/SPIRE) Secret store tokens for delayed start services 0021 Device Profile Changes Rules on device profile modifications","title":"ADR Table of Contents"},{"location":"design/adr/0001-Registy-Refactor/","text":"Registry Refactoring Design Status Context Proposed Design Decision Consequences References Status Approved Context Currently the Registry Client in go-mod-registry module provides Service Configuration and Service Registration functionality. The goal of this design is to refactor the go-mod-registry module for separation of concerns. The Service Registry functionality will stay in the go-mod-registry module and the Service Configuration functionality will be separated out into a new go-mod-configuration module. This allows for implementations for different providers for each, another aspect of separation of concerns. Proposed Design Provider Connection information An aspect of using the current Registry Client is \" Where do the services get the Registry Provider connection information? 
\" Currently all services either pull this connection information from the local configuration file or from the edgex_registry environment variable. Device Services also have the option to specify this connection information on the command line. With the refactoring for separation of concerns, this issue changes to \" Where do the services get the Configuration Provider connection information? \" There have been concerns voiced by some in the EdgeX community that storing this Configuration Provider connection information in the configuration which ultimately is provided by that provider is not the right design. This design proposes that all services will use the command line option approach with the ability to override with an environment variable. The Configuration Provider information will not be stored in each service's local configuration file. The edgex_registry environment variable will be deprecated. The Registry Provider connection information will continue to be stored in each service's configuration either locally or from the Configuration Provider same as all other EdgeX Client and Database connection information. Command line option changes The new -cp/-configProvider command line option will be added to each service which will have a value specified using the format {type}.{protocol}://{host}:{port} e.g consul.http://localhost:8500 . This new command line option will be overridden by the edgex_configuration_provider environment variable when it is set. This environment variable's value has the same format as the command line option value. If no value is provided to the -cp/-configProvider option, i.e. just -cp , and no environment variable override is specified, the default value of consul.http://localhost:8500 will be used. if -cp/-configProvider not used and no environment variable override is specified the local configuration file is used, as is it now. All services will log the Configuration Provider connection information that is used. 
The existing -r/-registry command line option will be retained as a Boolean flag to indicate to use the Registry. Bootstrap Changes All services in the edgex-go mono repo use the new common bootstrap functionality. The plan is to move this code to a go module for the Device Service and App Functions SDKs to also use. The current bootstrap modules pkg/bootstrap/configuration/registry.go and pkg/bootstrap/container/registry.go will be refactored to use the new Configuration Client and be renamed appropriately. New bootstrap modules will be created for using the revised version of Registry Client . The current use of useRegistry and registryClient for service configuration will be changed to appropriate names for using the new Configuration Client . The current use of useRegistry and registryClient for service registration will be retained for service registration. A call to the new Unregister() API will be added to the shutdown code for all services. Config-Seed Changes The conf-seed service will have similar changes for specifying the Configuration Provider connection information since it doesn't use the common bootstrap package. Beyond that it will have minor changes for switching to using the Configuration Client interface, which will just be imports and appropriate name refactoring. Config Endpoint Changes Since the Configuration Provider connection information will no longer be in the service's configuration struct, the config endpoint processing will be modified to add the Configuration Provider connection information to the resulting JSON created from the service's configuration. Client Interfaces changes Current Registry Client The following is the current Registry Client Interface type Client interface { Register () error HasConfiguration () ( bool , error ) PutConfigurationToml ( configuration * toml . 
Tree , overwrite bool ) error PutConfiguration ( configStruct interface {}, overwrite bool ) error GetConfiguration ( configStruct interface {}) ( interface {}, error ) WatchForChanges ( updateChannel chan <- interface {}, errorChannel chan <- error , configuration interface {}, waitKey string ) IsAlive () bool ConfigurationValueExists ( name string ) ( bool , error ) GetConfigurationValue ( name string ) ([] byte , error ) PutConfigurationValue ( name string , value [] byte ) error GetServiceEndpoint ( serviceId string ) ( types . ServiceEndpoint , error ) IsServiceAvailable ( serviceId string ) error } New Configuration Client The following is the new Configuration Client Interface which contains the Service Configuration specific portion from the above current Registry Client . type Client interface { HasConfiguration () ( bool , error ) PutConfigurationFromToml ( configuration * toml . Tree , overwrite bool ) error PutConfiguration ( configStruct interface {}, overwrite bool ) error GetConfiguration ( configStruct interface {}) ( interface {}, error ) WatchForChanges ( updateChannel chan <- interface {}, errorChannel chan <- error , configuration interface {}, waitKey string ) IsAlive () bool ConfigurationValueExists ( name string ) ( bool , error ) GetConfigurationValue ( name string ) ([] byte , error ) PutConfigurationValue ( name string , value [] byte ) error } Revised Registry Client The following is the revised Registry Client Interface, which contains the Service Registry specific portion from the above current Registry Client . The UnRegister() API has been added per issue #20 type Client interface { Register () error UnRegister () error IsAlive () bool GetServiceEndpoint ( serviceId string ) ( types . 
ServiceEndpoint , error ) IsServiceAvailable ( serviceId string ) error } Client Configuration Structs Current Registry Client Config The following is the current struct used to configure the current Registry Client type Config struct { Protocol string Host string Port int Type string Stem string ServiceKey string ServiceHost string ServicePort int ServiceProtocol string CheckRoute string CheckInterval string } New Configuration Client Config The following is the new struct that will be used to configure the new Configuration Client from the command line option or environment variable values. The Service Registry portion has been removed from the above existing Registry Client Config type Config struct { Protocol string Host string Port int Type string BasePath string ServiceKey string } New Registry Client Config The following is the revised struct that will be used to configure the new Registry Client from the information in the service's configuration. This is mostly unchanged from the existing Registry Client Config , except that the Stem for configuration has been removed type Config struct { Protocol string Host string Port int Type string ServiceKey string ServiceHost string ServicePort int ServiceProtocol string CheckRoute string CheckInterval string } Provider Implementations The current Consul implementation of the Registry Client will be split up into implementations for the new Configuration Client in the new go-mod-configuration module and the revised Registry Client in the existing go-mod-registry module. Decision It was decided to move forward with the above design. After the initial ADR was approved, it was decided to retain the -r/--registry command-line flag and not add the Enabled field in the Registry provider configuration. Consequences Once the refactoring of go-mod-registry and go-mod-configuration are complete, they will need to be integrated into the new go-mod-bootstrap. Part of this integration will be the Command line option changes above. 
At this point the edgex-go services will be integrated with the new Registry and Configuration providers. The App Services SDK and Device Services SDK will then need to integrate go-mod-bootstrap to take advantage of these new providers. References Registry Abstraction - Decouple EdgeX services from Consul (Previous design)","title":"Registry Refactoring Design"},{"location":"design/adr/0001-Registy-Refactor/#registry-refactoring-design","text":"Status Context Proposed Design Decision Consequences References","title":"Registry Refactoring Design"},{"location":"design/adr/0001-Registy-Refactor/#status","text":"Approved","title":"Status"},{"location":"design/adr/0001-Registy-Refactor/#context","text":"Currently the Registry Client in go-mod-registry module provides Service Configuration and Service Registration functionality. The goal of this design is to refactor the go-mod-registry module for separation of concerns. The Service Registry functionality will stay in the go-mod-registry module and the Service Configuration functionality will be separated out into a new go-mod-configuration module. This allows for implementations for different providers for each, another aspect of separation of concerns.","title":"Context"},{"location":"design/adr/0001-Registy-Refactor/#proposed-design","text":"","title":"Proposed Design"},{"location":"design/adr/0001-Registy-Refactor/#provider-connection-information","text":"An aspect of using the current Registry Client is \" Where do the services get the Registry Provider connection information? 
\" There have been concerns voiced by some in the EdgeX community that storing this Configuration Provider connection information in the configuration which ultimately is provided by that provider is not the right design. This design proposes that all services will use the command line option approach with the ability to override with an environment variable. The Configuration Provider information will not be stored in each service's local configuration file. The edgex_registry environment variable will be deprecated. The Registry Provider connection information will continue to be stored in each service's configuration either locally or from the Configuration Provider same as all other EdgeX Client and Database connection information.","title":"Provider Connection information"},{"location":"design/adr/0001-Registy-Refactor/#command-line-option-changes","text":"The new -cp/-configProvider command line option will be added to each service which will have a value specified using the format {type}.{protocol}://{host}:{port} e.g consul.http://localhost:8500 . This new command line option will be overridden by the edgex_configuration_provider environment variable when it is set. This environment variable's value has the same format as the command line option value. If no value is provided to the -cp/-configProvider option, i.e. just -cp , and no environment variable override is specified, the default value of consul.http://localhost:8500 will be used. if -cp/-configProvider not used and no environment variable override is specified the local configuration file is used, as is it now. All services will log the Configuration Provider connection information that is used. 
The existing -r/-registry command line option will be retained as a Boolean flag to indicate to use the Registry.","title":"Command line option changes"},{"location":"design/adr/0001-Registy-Refactor/#bootstrap-changes","text":"All services in the edgex-go mono repo use the new common bootstrap functionality. The plan is to move this code to a go module for the Device Service and App Functions SDKs to also use. The current bootstrap modules pkg/bootstrap/configuration/registry.go and pkg/bootstrap/container/registry.go will be refactored to use the new Configuration Client and be renamed appropriately. New bootstrap modules will be created for using the revised version of Registry Client . The current use of useRegistry and registryClient for service configuration will be changed to appropriate names for using the new Configuration Client . The current use of useRegistry and registryClient for service registration will be retained for service registration. A call to the new Unregister() API will be added to the shutdown code for all services.","title":"Bootstrap Changes"},{"location":"design/adr/0001-Registy-Refactor/#config-seed-changes","text":"The conf-seed service will have similar changes for specifying the Configuration Provider connection information since it doesn't use the common bootstrap package. 
Beyond that it will have minor changes for switching to using the Configuration Client interface, which will just be imports and appropriate name refactoring.","title":"Config-Seed Changes"},{"location":"design/adr/0001-Registy-Refactor/#config-endpoint-changes","text":"Since the Configuration Provider connection information will no longer be in the service's configuration struct, the config endpoint processing will be modified to add the Configuration Provider connection information to the resulting JSON created from the service's configuration.","title":"Config Endpoint Changes"},{"location":"design/adr/0001-Registy-Refactor/#client-interfaces-changes","text":"","title":"Client Interfaces changes"},{"location":"design/adr/0001-Registy-Refactor/#current-registry-client","text":"The following is the current Registry Client Interface type Client interface { Register () error HasConfiguration () ( bool , error ) PutConfigurationToml ( configuration * toml . Tree , overwrite bool ) error PutConfiguration ( configStruct interface {}, overwrite bool ) error GetConfiguration ( configStruct interface {}) ( interface {}, error ) WatchForChanges ( updateChannel chan <- interface {}, errorChannel chan <- error , configuration interface {}, waitKey string ) IsAlive () bool ConfigurationValueExists ( name string ) ( bool , error ) GetConfigurationValue ( name string ) ([] byte , error ) PutConfigurationValue ( name string , value [] byte ) error GetServiceEndpoint ( serviceId string ) ( types . ServiceEndpoint , error ) IsServiceAvailable ( serviceId string ) error }","title":"Current Registry Client"},{"location":"design/adr/0001-Registy-Refactor/#new-configuration-client","text":"The following is the new Configuration Client Interface which contains the Service Configuration specific portion from the above current Registry Client . type Client interface { HasConfiguration () ( bool , error ) PutConfigurationFromToml ( configuration * toml . 
Tree , overwrite bool ) error PutConfiguration ( configStruct interface {}, overwrite bool ) error GetConfiguration ( configStruct interface {}) ( interface {}, error ) WatchForChanges ( updateChannel chan <- interface {}, errorChannel chan <- error , configuration interface {}, waitKey string ) IsAlive () bool ConfigurationValueExists ( name string ) ( bool , error ) GetConfigurationValue ( name string ) ([] byte , error ) PutConfigurationValue ( name string , value [] byte ) error }","title":"New Configuration Client"},{"location":"design/adr/0001-Registy-Refactor/#revised-registry-client","text":"The following is the revised Registry Client Interface, which contains the Service Registry specific portion from the above current Registry Client . The UnRegister() API has been added per issue #20 type Client interface { Register () error UnRegister () error IsAlive () bool GetServiceEndpoint ( serviceId string ) ( types . ServiceEndpoint , error ) IsServiceAvailable ( serviceId string ) error }","title":"Revised Registry Client"},{"location":"design/adr/0001-Registy-Refactor/#client-configuration-structs","text":"","title":"Client Configuration Structs"},{"location":"design/adr/0001-Registy-Refactor/#current-registry-client-config","text":"The following is the current struct used to configure the current Registry Client type Config struct { Protocol string Host string Port int Type string Stem string ServiceKey string ServiceHost string ServicePort int ServiceProtocol string CheckRoute string CheckInterval string }","title":"Current Registry Client Config"},{"location":"design/adr/0001-Registy-Refactor/#new-configuration-client-config","text":"The following is the new struct that will be used to configure the new Configuration Client from the command line option or environment variable values. 
The Service Registry portion has been removed from the above existing Registry Client Config type Config struct { Protocol string Host string Port int Type string BasePath string ServiceKey string }","title":"New Configuration Client Config"},{"location":"design/adr/0001-Registy-Refactor/#new-registry-client-config","text":"The following is the revised struct that will be used to configure the new Registry Client from the information in the service's configuration. This is mostly unchanged from the existing Registry Client Config , except that the Stem for configuration has been removed type Config struct { Protocol string Host string Port int Type string ServiceKey string ServiceHost string ServicePort int ServiceProtocol string CheckRoute string CheckInterval string }","title":"New Registry Client Config"},{"location":"design/adr/0001-Registy-Refactor/#provider-implementations","text":"The current Consul implementation of the Registry Client will be split up into implementations for the new Configuration Client in the new go-mod-configuration module and the revised Registry Client in the existing go-mod-registry module.","title":"Provider Implementations"},{"location":"design/adr/0001-Registy-Refactor/#decision","text":"It was decided to move forward with the above design. After the initial ADR was approved, it was decided to retain the -r/--registry command-line flag and not add the Enabled field in the Registry provider configuration.","title":"Decision"},{"location":"design/adr/0001-Registy-Refactor/#consequences","text":"Once the refactoring of go-mod-registry and go-mod-configuration are complete, they will need to be integrated into the new go-mod-bootstrap. Part of this integration will be the Command line option changes above. At this point the edgex-go services will be integrated with the new Registry and Configuration providers. 
The App Services SDK and Device Services SDK will then need to integrate go-mod-bootstrap to take advantage of these new providers.","title":"Consequences"},{"location":"design/adr/0001-Registy-Refactor/#references","text":"Registry Abstraction - Decouple EdgeX services from Consul (Previous design)","title":"References"},{"location":"design/adr/0004-Feature-Flags/","text":"Feature Flag Proposal Status Accepted Context Out of the proposal for releasing on time, the community suggested that we take a closer look at feature-flags. Feature-flags are typically intended for users of an application to turn on or off new or unused features. This gives users more control to adopt a feature-set at their own pace \u2013 i.e. disabling store and forward in App Functions SDK without breaking backward compatibility. It can also be used to indicate to developers the features that are more often used than others and can provide valuable feedback to enhance and continue a given feature. To gain that insight of the use of any given feature, we would require not only instrumentation of the code but a central location in the cloud (i.e. a TIG stack) for the telemetry to be ingested and in turn reported in order to provide the feedback to the developers. This becomes infeasible primarily because of the cloud infrastructure costs, privacy concerns, and other unforeseen legal reasons for sending \u201cUsage Metrics\u201d of an EdgeX installation back to a central entity such as the Linux Foundation, among many others. Without the valuable feedback loop, feature-flags don\u2019t provide much value on their own and they certainly don\u2019t assist in increasing velocity to help us deliver on time. Putting aside one of the major value propositions listed above, feasibility of a feature flag \u201cmodule\u201d was still evaluated. The simplest approach would be to leverage configuration following a certain format such as FF_[NewFeatureName]=true/false. This is similar to what is done today. 
Turning on/off security is an example, turning on/off the registry is another. Expanding this further with a module could offer standardization of controlling a given feature such as featurepkg.Register(\u201cMyNewFeature\u201d) or featurepkg.IsOn(\u201cMyNewFeature\u201d) . However, this really is just adding complexity on top of the underlying configuration that is already implemented. If we were to consider doing something like this, it lends itself to a central management of features within the EdgeX framework\u2014either its own service or possibly added as part of the SMA. This could help address concerns around feature dependencies and compatibility. Feature A on Service X requires Feature B and Feature C on Service Y. Continuing down this path starts to beget a fairly large impact on EdgeX for value that cannot be fully realized. Decision The community should NOT pursue a full-fledged feature flag implementation either homegrown or off-the-shelf. However, it should be encouraged to develop features with a holistic perspective and consider leveraging configuration options to turn them on/off. In other words, a feature that compiles, works under common scenarios, and doesn\u2019t impact any other functionality should be encouraged, even if it perhaps isn\u2019t fully tested for edge cases. Consequences Allows more focus on the many more competing priorities for this release. Minimal impact on development cycles and release schedule","title":"Feature Flag Proposal"},{"location":"design/adr/0004-Feature-Flags/#feature-flag-proposal","text":"","title":"Feature Flag Proposal"},{"location":"design/adr/0004-Feature-Flags/#status","text":"Accepted","title":"Status"},{"location":"design/adr/0004-Feature-Flags/#context","text":"Out of the proposal for releasing on time, the community suggested that we take a closer look at feature-flags. Feature-flags are typically intended for users of an application to turn on or off new or unused features. 
This gives users more control to adopt a feature-set at their own pace \u2013 i.e. disabling store and forward in App Functions SDK without breaking backward compatibility. It can also be used to indicate to developers the features that are more often used than others and can provide valuable feedback to enhance and continue a given feature. To gain that insight of the use of any given feature, we would require not only instrumentation of the code but a central location in the cloud (i.e. a TIG stack) for the telemetry to be ingested and in turn reported in order to provide the feedback to the developers. This becomes infeasible primarily because of the cloud infrastructure costs, privacy concerns, and other unforeseen legal reasons for sending \u201cUsage Metrics\u201d of an EdgeX installation back to a central entity such as the Linux Foundation, among many others. Without the valuable feedback loop, feature-flags don\u2019t provide much value on their own and they certainly don\u2019t assist in increasing velocity to help us deliver on time. Putting aside one of the major value propositions listed above, feasibility of a feature flag \u201cmodule\u201d was still evaluated. The simplest approach would be to leverage configuration following a certain format such as FF_[NewFeatureName]=true/false. This is similar to what is done today. Turning on/off security is an example, turning on/off the registry is another. Expanding this further with a module could offer standardization of controlling a given feature such as featurepkg.Register(\u201cMyNewFeature\u201d) or featurepkg.IsOn(\u201cMyNewFeature\u201d) . However, this really is just adding complexity on top of the underlying configuration that is already implemented. If we were to consider doing something like this, it lends itself to a central management of features within the EdgeX framework\u2014either its own service or possibly added as part of the SMA. 
This could help address concerns around feature dependencies and compatibility. Feature A on Service X requires Feature B and Feature C on Service Y. Continuing down this path starts to impose a fairly large impact on EdgeX for value that cannot be fully realized.","title":"Context"},{"location":"design/adr/0004-Feature-Flags/#decision","text":"The community should NOT pursue a full-fledged feature flag implementation, either homegrown or off-the-shelf. However, it should be encouraged to develop features with a holistic perspective and consider leveraging configuration options to turn them on/off. In other words, releasing a feature that compiles, works under common scenarios, and does not impact any other functionality, but perhaps isn\u2019t fully tested against edge cases, should be encouraged.","title":"Decision"},{"location":"design/adr/0004-Feature-Flags/#consequences","text":"Allows more focus on the many more competing priorities for this release. Minimal impact to development cycles and release schedule","title":"Consequences"},{"location":"design/adr/0005-Service-Self-Config/","text":"Service Self Config Init & Config Seed Removal Status approved - TSC vote on 3/25/20 for Geneva release NOTE: this ADR does not address high availability considerations and concerns. EdgeX, in general, has a number of unanswered questions with regard to HA architecture and this design adds to those considerations. Context Since its debut, EdgeX has had a configuration seed service (config-seed) that, on start of EdgeX, deposits configuration for all the services into Consul (our configuration/registry service). For development purposes, or on resource constrained platforms, EdgeX can be run without Consul with services simply reading configuration from the filesystem. 
While this process has nominally worked for several releases of EdgeX, there have always been some issues with this extra initialization process (config-seed), not least of which are: - race conditions on the part of the services, as they bootstrap, coming up before the config-seed completes its deposit of configuration into Consul - how to deal with \"overrides\" such as environmental variable provided configuration overrides, as the override is often specific to a service but has to be in place for config-seed in order to take effect - need for an additional service that is only there for init and then dies (confusing to users) NOTE - for historical purposes, it should be noted that config-seed only writes configuration into the configuration/registry service (Consul) once on the first start of EdgeX. On subsequent starts of EdgeX, config-seed checks to see if it has already populated the configuration/registry service and will not rewrite configuration again (unless the --overwrite flag is used). The design/architectural proposal, therefore, is: - removal of the config-seed service (removing cmd/config-seed from the edgex-go repository) - have each EdgeX micro service \"self seed\" - that is, seed Consul with its own required configuration on bootstrap of the service. Details of that bootstrapping process are below. Command Line Options All EdgeX services support a common set of command-line options, some combination of which are required on startup for a service to interact with the rest of EdgeX. Command line options are not set by any configuration. Command line options include: --configProvider or -cp (the configuration provider location URL - prefixed with consul. 
- for example: -cp=consul.http://localhost:8500 ) --overwrite or -o (overwrite the configuration in the configuration provider) --file or -f (the configuration filename - configuration.toml is used by default if the configuration filename is not provided) --profile or -p (the name of a sub directory in the configuration directory in which a profile-specific configuration file is found. This has no default. If not specified, the configuration file is read from the configuration directory) --confdir or -c (the directory where the configuration file is found - ./res is used by default if the confdir is not specified, where \".\" is the convention on Linux/Unix/MacOS which means current directory) --registry or -r (string indicating use of the registry) The distinction of command line options versus configuration will be important later in this ADR. Two command line options (-o for overwrite and -r for registry) are not overridable by environmental variables. NOTES: The --overwrite command line option should be used sparingly and with expert knowledge of EdgeX; in particular, knowledge of how it operates and where/how it gets its configuration on restarts, etc. Ordinarily, --overwrite is provided as a means to support development needs. Use of --overwrite permanently in production environments is highly discouraged. Configuration Initialization Each service has (or shall have if not providing it already) a local configuration file. The service may use the local configuration file on initialization of the service (aka bootstrap of the service) depending on command line options and environmental variables (see below) provided at startup. Using a configuration provider When the configuration provider is specified, the service will call on the configuration provider (Consul) and check if the top-level (root) namespace for the service exists. 
If configuration at the top-level (root) namespace exists, it indicates that the service has already populated its configuration into the configuration provider in a prior startup. If the service finds the top-level (root) namespace is already populated with configuration information, it will then read that configuration information from the configuration provider under the namespace for that service (and ignore what is in the local configuration file). If the service finds the top-level (root) namespace is not populated with configuration information, it will read its local configuration file and populate the configuration provider (under the namespace for the service) with configuration read from the local configuration file. A configuration provider can be specified with a command line argument (the -cp / --configProvider) or environment variable (the EDGEX_CONFIGURATION_PROVIDER environmental variable which overrides the command line argument). NOTE: the environmental variables are typically uppercase but there have been inconsistencies in environmental variable casing (example: edgex_registry). This should be considered and made consistent in a future major release. Using the local configuration file When a configuration provider isn't specified, the service just uses the configuration in its local configuration file. That is, the service uses the configuration in the file associated with the profile, config filename and config file directory command line options or environmental variables. In this case, the service does not contact the configuration service (Consul) for any configuration information. NOTE: As the services now self seed and deployment specific changes can be made via environment overrides, it will no longer be necessary to have a Docker profile configuration file in each of the service directories (example: https://github.com/edgexfoundry/edgex-go/blob/master/cmd/core-data/res/docker/configuration.toml). See Consequences below. 
It will still be possible for users to use the profile mechanism to specify a Docker configuration, but it will no longer be required and is not the recommended approach to providing Docker container-specific configuration. Overrides Environment variables used to override configuration always take precedence whether configuration is being sourced locally or read from the config provider/Consul. Note - this means that a configuration value that is being overridden by an environment variable will always be the source of truth, even if the same configuration is changed directly in Consul. The name of the environmental variable must match the path names in Consul. NOTES: - Environmental variable overrides remove the need to change the \"docker\" profile in the res/docker/configuration.toml files - Allowing removal of 50% of the existing configuration.toml files. - The override rules in EdgeX between environmental variables and command line options may be counterintuitive compared to other systems. There appears to be no standard practice. Indeed, web searching \"Reddit & Starting Fights Env Variables vs Command Line Args\" will lay out the prevailing differences. - Environment variables used for configuration overrides are named by prepending the configuration element with the configuration section inclusive of sub-path, where sub-path's \".\"s are replaced with underscores. These configuration environment variable overrides must be specified using camel case. Here are two examples: Registry_Host for [Registry] Host = 'localhost' Clients_CoreData_Host for [Clients] [Clients.CoreData] Host = 'localhost' - Going forward, environmental variables that override command line options should be all uppercase. All values overridden get logged (indicating which configuration value or op param and the new value). 
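The naming convention above (section path with its "."s replaced by underscores, prepended to the element name, preserving camel case) can be sketched as a small helper. This is an illustrative sketch only; `envOverrideName` is a hypothetical name, not part of go-mod-bootstrap:

```go
package main

import (
	"fmt"
	"strings"
)

// envOverrideName builds the environment variable name that overrides a
// configuration element, per the convention described above: the section
// path (with "."s replaced by underscores) is prepended to the element
// name, keeping the camel case used in the configuration file.
func envOverrideName(sectionPath, element string) string {
	return strings.ReplaceAll(sectionPath, ".", "_") + "_" + element
}

func main() {
	// [Registry] Host = 'localhost'
	fmt.Println(envOverrideName("Registry", "Host")) // Registry_Host
	// [Clients] [Clients.CoreData] Host = 'localhost'
	fmt.Println(envOverrideName("Clients.CoreData", "Host")) // Clients_CoreData_Host
}
```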
Decision These features have been implemented (with some minor changes to be done) for consideration here: https://github.com/edgexfoundry/go-mod-bootstrap/compare/master...lenny-intel:SelfSeed2. This code branch will be removed once this ADR is approved and implemented on master. The implementation for self-seeding services and environmental overrides is already implemented (for Fuji) per this document in the application services and device services (and instituted in the SDKs of each). Backward compatibility Several aspects of this ADR contain backward compatibility issues for the device service and application service SDKs. Therefore, for the upcoming minor release, the following guidelines and expectations are added to provide for backward compatibility. --registry= for Device SDKs As earlier versions of the device service SDKs accepted a URI for --registry, if specified on the command line, use the given URI as the address of the configuration provider. If both --configProvider and --registry specify URIs, then the service should log an error and exit. --registry (no \u2018=\u2019) and w/o --configProvider for both SDKs If a configProvider URI isn't specified, but --registry (w/out a URI) is specified, then the service will use the Registry provider information from its local configuration file for both configuration and registry providers. Env Var: edgex_registry= for all services (currently has been removed) Add it back and use its value as if it were EDGEX_CONFIGURATION_PROVIDER and enable use of registry with same settings in URL. Default to http as it is in Fuji. Consequences Docker compose files will need to be changed to remove config seed. The main Snap will need to be changed to remove config seed. Config seed code (currently in edgex-go repo) is to be removed. Any service specific environmental overrides currently on config seed need to be moved to the specific service(s). 
The Docker configuration files and directory (example: https://github.com/edgexfoundry/edgex-go/blob/master/cmd/core-data/res/docker/configuration.toml) that are used to populate the config seed for Docker containers can be eliminated from all the services. In cmd/security-secretstore-setup, there is only a docker configuration.toml. This file will be moved rather than deleted. Documentation would need to reflect removal of config seed and \"self seeding\" process. Removes any potential issue with past race conditions (as experienced with the Edinburgh release) as each service is now responsible for its own configuration. There are still high availability concerns that need to be considered and not covered in this ADR at this time. Removes some confusion on the part of users as to why a service (config-seed) starts and immediately exits. Minimal impact to development cycles and release schedule Configuration endpoints in all services need to ensure the environmental variables are reflected in the configuration data returned (this is a system management impact). Docker files will need to be modified to remove setting profile=docker Docker compose files will need to be changed to add environmental overrides for removal of docker profiles. These should go in the global environment section of the compose files for those overrides that apply to all services. 
Example: # all common shared environment variables defined here: x-common-env-variables: &common-variables EDGEX_SECURITY_SECRET_STORE: \"false\" EDGEX_CONFIGURATION_PROVIDER: consul.http://edgex-core-consul:8500 Clients_CoreData_Host: edgex-core-data Clients_Logging_Host: edgex-support-logging Logging_EnableRemote: \"true\"","title":"Service Self Config Init & Config Seed Removal"},{"location":"design/adr/0005-Service-Self-Config/#service-self-config-init-config-seed-removal","text":"","title":"Service Self Config Init & Config Seed Removal"},{"location":"design/adr/0005-Service-Self-Config/#status","text":"approved - TSC vote on 3/25/20 for Geneva release NOTE: this ADR does not address high availability considerations and concerns. EdgeX, in general, has a number of unanswered questions with regard to HA architecture and this design adds to those considerations.","title":"Status"},{"location":"design/adr/0005-Service-Self-Config/#context","text":"Since its debut, EdgeX has had a configuration seed service (config-seed) that, on start of EdgeX, deposits configuration for all the services into Consul (our configuration/registry service). For development purposes, or on resource constrained platforms, EdgeX can be run without Consul with services simply reading configuration from the filesystem. While this process has nominally worked for several releases of EdgeX, there have always been some issues with this extra initialization process (config-seed), not least of which are: - race conditions on the part of the services, as they bootstrap, coming up before the config-seed completes its deposit of configuration into Consul - how to deal with \"overrides\" such as environmental variable provided configuration overrides, as the override is often specific to a service but has to be in place for config-seed in order to take effect. 
- need for an additional service that is only there for init and then dies (confusing to users) NOTE - for historical purposes, it should be noted that config-seed only writes configuration into the configuration/registry service (Consul) once on the first start of EdgeX. On subsequent starts of EdgeX, config-seed checks to see if it has already populated the configuration/registry service and will not rewrite configuration again (unless the --overwrite flag is used). The design/architectural proposal, therefore, is: - removal of the config-seed service (removing cmd/config-seed from the edgex-go repository) - have each EdgeX micro service \"self seed\" - that is seed Consul with their own required configuration on bootstrap of the service. Details of that bootstrapping process are below.","title":"Context"},{"location":"design/adr/0005-Service-Self-Config/#command-line-options","text":"All EdgeX services support a common set of command-line options, some combination of which are required on startup for a service to interact with the rest of EdgeX. Command line options are not set by any configuration. Command line options include: --configProvider or -cp (the configuration provider location URL - prefixed with consul. - for example: -cp=consul.http://localhost:8500 ) --overwrite or -o (overwrite the configuration in the configuration provider) --file or -f (the configuration filename - configuration.toml is used by default if the configuration filename is not provided) --profile or -p (the name of a sub directory in the configuration directory in which a profile-specific configuration file is found. This has no default. 
If not specified, the configuration file is read from the configuration directory) --confdir or -c (the directory where the configuration file is found - ./res is used by default if the confdir is not specified, where \".\" is the convention on Linux/Unix/MacOS which means current directory) --registry or -r (string indicating use of the registry) The distinction of command line options versus configuration will be important later in this ADR. Two command line options (-o for overwrite and -r for registry) are not overridable by environmental variables. NOTES: The --overwrite command line option should be used sparingly and with expert knowledge of EdgeX; in particular, knowledge of how it operates and where/how it gets its configuration on restarts, etc. Ordinarily, --overwrite is provided as a means to support development needs. Use of --overwrite permanently in production environments is highly discouraged.","title":"Command Line Options"},{"location":"design/adr/0005-Service-Self-Config/#configuration-initialization","text":"Each service has (or shall have if not providing it already) a local configuration file. The service may use the local configuration file on initialization of the service (aka bootstrap of the service) depending on command line options and environmental variables (see below) provided at startup. Using a configuration provider When the configuration provider is specified, the service will call on the configuration provider (Consul) and check if the top-level (root) namespace for the service exists. If configuration at the top-level (root) namespace exists, it indicates that the service has already populated its configuration into the configuration provider in a prior startup. 
If the service finds the top-level (root) namespace is already populated with configuration information, it will then read that configuration information from the configuration provider under the namespace for that service (and ignore what is in the local configuration file). If the service finds the top-level (root) namespace is not populated with configuration information, it will read its local configuration file and populate the configuration provider (under the namespace for the service) with configuration read from the local configuration file. A configuration provider can be specified with a command line argument (the -cp / --configProvider) or environment variable (the EDGEX_CONFIGURATION_PROVIDER environmental variable which overrides the command line argument). NOTE: the environmental variables are typically uppercase but there have been inconsistencies in environmental variable casing (example: edgex_registry). This should be considered and made consistent in a future major release. Using the local configuration file When a configuration provider isn't specified, the service just uses the configuration in its local configuration file. That is, the service uses the configuration in the file associated with the profile, config filename and config file directory command line options or environmental variables. In this case, the service does not contact the configuration service (Consul) for any configuration information. NOTE: As the services now self seed and deployment specific changes can be made via environment overrides, it will no longer be necessary to have a Docker profile configuration file in each of the service directories (example: https://github.com/edgexfoundry/edgex-go/blob/master/cmd/core-data/res/docker/configuration.toml). See Consequences below. 
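The bootstrap decision described above — read from the provider when the root namespace is already populated, otherwise seed the provider from the local file — can be sketched as follows. The `ConfigProvider` interface, its method names, and the in-memory provider are hypothetical stand-ins for the real Consul client in go-mod-bootstrap:

```go
package main

import "fmt"

// ConfigProvider is a hypothetical stand-in for the configuration
// provider (Consul) client used during service bootstrap.
type ConfigProvider interface {
	HasConfiguration(serviceKey string) bool
	GetConfiguration(serviceKey string) map[string]string
	PutConfiguration(serviceKey string, cfg map[string]string)
}

// memProvider is an in-memory provider used to illustrate both paths.
type memProvider struct {
	data map[string]map[string]string
}

func (m *memProvider) HasConfiguration(key string) bool { _, ok := m.data[key]; return ok }
func (m *memProvider) GetConfiguration(key string) map[string]string { return m.data[key] }
func (m *memProvider) PutConfiguration(key string, c map[string]string) { m.data[key] = c }

// loadConfig self-seeds: if the service's root namespace is already
// populated, the provider's copy wins and the local file is ignored;
// otherwise the local file's contents seed the provider.
func loadConfig(p ConfigProvider, serviceKey string, local map[string]string) map[string]string {
	if p.HasConfiguration(serviceKey) {
		return p.GetConfiguration(serviceKey)
	}
	p.PutConfiguration(serviceKey, local)
	return local
}

func main() {
	p := &memProvider{data: map[string]map[string]string{}}

	// First start: provider is empty, so the local file seeds it.
	cfg := loadConfig(p, "core-data", map[string]string{"Registry_Host": "localhost"})
	fmt.Println(cfg["Registry_Host"]) // localhost

	// Subsequent start: the (possibly edited) provider copy is used
	// and the local file is ignored.
	p.data["core-data"]["Registry_Host"] = "edgex-core-consul"
	cfg = loadConfig(p, "core-data", map[string]string{"Registry_Host": "localhost"})
	fmt.Println(cfg["Registry_Host"]) // edgex-core-consul
}
```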
It will still be possible for users to use the profile mechanism to specify a Docker configuration, but it will no longer be required and is not the recommended approach to providing Docker container-specific configuration.","title":"Configuration Initialization"},{"location":"design/adr/0005-Service-Self-Config/#overrides","text":"Environment variables used to override configuration always take precedence whether configuration is being sourced locally or read from the config provider/Consul. Note - this means that a configuration value that is being overridden by an environment variable will always be the source of truth, even if the same configuration is changed directly in Consul. The name of the environmental variable must match the path names in Consul. NOTES: - Environmental variable overrides remove the need to change the \"docker\" profile in the res/docker/configuration.toml files - Allowing removal of 50% of the existing configuration.toml files. - The override rules in EdgeX between environmental variables and command line options may be counterintuitive compared to other systems. There appears to be no standard practice. Indeed, web searching \"Reddit & Starting Fights Env Variables vs Command Line Args\" will lay out the prevailing differences. - Environment variables used for configuration overrides are named by prepending the configuration element with the configuration section inclusive of sub-path, where sub-path's \".\"s are replaced with underscores. These configuration environment variable overrides must be specified using camel case. Here are two examples: Registry_Host for [Registry] Host = 'localhost' Clients_CoreData_Host for [Clients] [Clients.CoreData] Host = 'localhost' - Going forward, environmental variables that override command line options should be all uppercase. 
All values overridden get logged (indicating which configuration value or op param and the new value).","title":"Overrides"},{"location":"design/adr/0005-Service-Self-Config/#decision","text":"These features have been implemented (with some minor changes to be done) for consideration here: https://github.com/edgexfoundry/go-mod-bootstrap/compare/master...lenny-intel:SelfSeed2. This code branch will be removed once this ADR is approved and implemented on master. The implementation for self-seeding services and environmental overrides is already implemented (for Fuji) per this document in the application services and device services (and instituted in the SDKs of each).","title":"Decision"},{"location":"design/adr/0005-Service-Self-Config/#backward-compatibility","text":"Several aspects of this ADR contain backward compatibility issues for the device service and application service SDKs. Therefore, for the upcoming minor release, the following guidelines and expectations are added to provide for backward compatibility. --registry= for Device SDKs As earlier versions of the device service SDKs accepted a URI for --registry, if specified on the command line, use the given URI as the address of the configuration provider. If both --configProvider and --registry specify URIs, then the service should log an error and exit. --registry (no \u2018=\u2019) and w/o --configProvider for both SDKs If a configProvider URI isn't specified, but --registry (w/out a URI) is specified, then the service will use the Registry provider information from its local configuration file for both configuration and registry providers. Env Var: edgex_registry= for all services (currently has been removed) Add it back and use its value as if it were EDGEX_CONFIGURATION_PROVIDER and enable use of registry with same settings in URL. 
Default to http as it is in Fuji.","title":"Backward compatibility"},{"location":"design/adr/0005-Service-Self-Config/#consequences","text":"Docker compose files will need to be changed to remove config seed. The main Snap will need to be changed to remove config seed. Config seed code (currently in edgex-go repo) is to be removed. Any service specific environmental overrides currently on config seed need to be moved to the specific service(s). The Docker configuration files and directory (example: https://github.com/edgexfoundry/edgex-go/blob/master/cmd/core-data/res/docker/configuration.toml) that are used to populate the config seed for Docker containers can be eliminated from all the services. In cmd/security-secretstore-setup, there is only a docker configuration.toml. This file will be moved rather than deleted. Documentation would need to reflect removal of config seed and \"self seeding\" process. Removes any potential issue with past race conditions (as experienced with the Edinburgh release) as each service is now responsible for its own configuration. There are still high availability concerns that need to be considered and not covered in this ADR at this time. Removes some confusion on the part of users as to why a service (config-seed) starts and immediately exits. Minimal impact to development cycles and release schedule Configuration endpoints in all services need to ensure the environmental variables are reflected in the configuration data returned (this is a system management impact). Docker files will need to be modified to remove setting profile=docker Docker compose files will need to be changed to add environmental overrides for removal of docker profiles. These should go in the global environment section of the compose files for those overrides that apply to all services. 
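The backward-compatibility rules above can be sketched as a small resolver: a URI supplied to --registry is treated as the configuration provider address, but supplying URIs to both flags is an error on which the service should log and exit. The function name and error text are illustrative only, not the SDK's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// resolveConfigProvider applies the backward-compatibility rules above.
// Returning an empty URI with no error means neither flag carried a URI,
// so the service falls back to its local configuration file.
func resolveConfigProvider(configProviderURI, registryURI string) (string, error) {
	switch {
	case configProviderURI != "" && registryURI != "":
		// Both flags carry URIs: log an error and exit (here, return it).
		return "", errors.New("cannot specify URIs for both --configProvider and --registry")
	case configProviderURI != "":
		return configProviderURI, nil
	case registryURI != "":
		// Legacy form: --registry's URI becomes the config provider address.
		return registryURI, nil
	default:
		return "", nil
	}
}

func main() {
	uri, _ := resolveConfigProvider("", "consul.http://localhost:8500")
	fmt.Println(uri) // consul.http://localhost:8500
}
```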
Example: # all common shared environment variables defined here: x-common-env-variables: &common-variables EDGEX_SECURITY_SECRET_STORE: \"false\" EDGEX_CONFIGURATION_PROVIDER: consul.http://edgex-core-consul:8500 Clients_CoreData_Host: edgex-core-data Clients_Logging_Host: edgex-support-logging Logging_EnableRemote: \"true\"","title":"Consequences"},{"location":"design/adr/0006-Metrics-Collection/","text":"EdgeX Metrics Collection Status Approved Original proposal 10/24/2020 Approved by the TSC on 3/2/22 Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or specific service. Examples of metrics include: the number of EdgeX Events sent from core data to an application service the number of requests on a service API the average time it takes to process a message through an application service The number of errors logged by a service Control plane events (CPE) are defined as events that occur within an EdgeX instance. Examples of CPE include: a device was provisioned (added to core metadata) a service was stopped service configuration has changed CPE should not be confused with core data Events. Core data Events represent a collection (one or more) of sensor/device readings. Core data Events represent sensing of some measured state of the physical world (temperature, vibration, etc.). CPE represents the detection of some happening inside of the EdgeX software. This ADR outlines metrics (or telemetry) collection and handling. Note This ADR initially incorporated metrics collection and control plane event processing. The EdgeX architects felt the scope of the design was too large to cover under one ADR. Control plane event processing will be covered under a separate ADR in the future. Context System Management services (SMA and executors) currently provide a limited set of \u201cmetrics\u201d to requesting clients (3rd party applications and systems external to EdgeX). 
Namely, it provides requesting clients with service CPU and memory usage; both are metrics about the resource utilization of the service (the executable) itself, versus metrics about what is happening inside of the service. Arguably, the current system management metrics can be provided by the container engine and orchestration tools (example: by Docker engine) or by the underlying OS tooling. Info The SMA has been deprecated (since Ireland release) and will be removed in a future, yet-to-be-named release. Going forward, users of EdgeX will want to have more insights \u2013 that is more metrics telemetry \u2013 on what is happening directly in the services and the tasks that they are performing. In other words, users of EdgeX will want more telemetry on service activities to include: sensor data collection (how much, how fast, etc.) command requests handled (how many, to which devices, etc.) sensor data transformation as it is done in application services (how fast, what is filtered, etc.) sensor data export (how much is sent, how many exports have failed, etc.) API requests (how often, how quickly, how many success versus failed attempts, etc.) bootstrapping time (time to come up and be available to other services) activity processing time (amount of time it takes to perform a particular service function - such as respond to a command request) Definitions Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or specific service. 
Examples of metrics include: the number of EdgeX Events sent from core data to an application service via message bus (or via device service to application service in Ireland and beyond) the number of requests on a service API the average time it takes to process a message through an application service The number of errors logged by a service The collection and dissemination of metric data will require internal service level instrumentation (relevant to that service) to capture and send data about relevant EdgeX operations. EdgeX does not currently offer any service instrumentation. Metric Use As a first step in implementation of metrics data, EdgeX will make metric data available to other subscribing 3rd party applications and systems, but will not necessarily consume or use this information itself. In the future, EdgeX may consume its own metric data. For example, EdgeX may, in the future, use a metric on the number of EdgeX events being sent to core data (or app services) as the means to throttle back device data collection. In the future, EdgeX application services may optionally subscribe to a service's metrics message bus (by attaching to the appropriate message pipe for that service), thus allowing additional filtering, transformation, and endpoint control of metric data from that service. At the point where this feature is supported, consideration would need to be made as to whether all events (sensor reading messages and metric messages) go through the same application services. At this time, EdgeX will not persist the metric data (except as it may be retained as part of a message bus subsystem such as in an MQTT broker). Consumers of metric data are responsible for persisting the data if needed, but this is external to EdgeX. Persistence of metric information may be considered in the future based on requirements and adopter demand for such a feature. 
In general, EdgeX metrics are meant to provide internal services and external applications and systems better information about what is happening \"inside\" EdgeX services and the associated devices with which it communicates. Requirements Services will push specified metrics collected for that service to a specified (by configuration) message endpoint (as supported by the EdgeX message bus implementation; currently either Redis Pub/Sub or MQTT implementations are supported) Each service will have configuration that specifies a message endpoint for the service metrics. The metrics message topic communications may be secured or unsecured (just as application services provide the means to export to secured or unsecured message pipes today). The configuration will be placed in the Writable area. When a user wishes to change the configuration dynamically (such as turning on/off a metric), then Consul's UI can be used to change it. Services will have configuration which indicates what metrics are available from the service. Services will have configuration which allows EdgeX system managers to select which metrics are on or off - in other words providing configuration that determines what metrics are collected and reported by default. When a metric is turned off (the default setting) the service does not report the metric. When a metric is turned on the service collects and sends the metric to the designated message topic. Metrics collection must be pushed to the designated message topic on some appointed schedule. The schedule would be designated by configuration and done in a way similar to auto events in device services. For the initial implementation, there will be just one scheduled time when all metrics will be collected and pushed to the designated message topic. In the future, there may be a desire to set up a separate schedule for each metric, but this was deemed too complex for the initial implementation. 
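The per-metric on/off configuration described above can be sketched as a collector that, on each scheduled tick, publishes only the metrics turned on in the Writable configuration. The type names and the publish callback are illustrative assumptions, not the actual SDK API:

```go
package main

import "fmt"

// Metric pairs a metric name with a function that reads its current value.
type Metric struct {
	Name string
	Read func() int64
}

// pushEnabled publishes only metrics turned on in the (Writable)
// configuration; metrics that are off (the default) are not reported.
// publish stands in for sending a message to the configured message bus
// topic (Redis Pub/Sub or MQTT). It returns the number of metrics sent.
func pushEnabled(metrics []Metric, enabled map[string]bool, publish func(name string, value int64)) int {
	sent := 0
	for _, m := range metrics {
		if enabled[m.Name] {
			publish(m.Name, m.Read())
			sent++
		}
	}
	return sent
}

func main() {
	metrics := []Metric{
		{Name: "EventsSent", Read: func() int64 { return 42 }},
		{Name: "ApiErrors", Read: func() int64 { return 3 }},
	}
	// Metrics default to off; only explicitly enabled ones are reported.
	enabled := map[string]bool{"EventsSent": true}

	sent := pushEnabled(metrics, enabled, func(name string, v int64) {
		fmt.Printf("telemetry topic: %s=%d\n", name, v)
	})
	fmt.Println(sent) // 1
}
```

In a service, a single scheduler (e.g. a ticker at the configured interval) would call such a collector for all metrics at once, matching the single scheduled collection time chosen for the initial implementation.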
**Info**: Initially, it was proposed that metrics be associated with a "level", allowing metrics to be turned on or off by level (like the levels associated with log messages in logging). The level of metrics data seems arbitrary at this time and is considered too complex for the initial implementation. This may be reconsidered in a future release based on new requirements/use cases. It was also proposed to categorize or label metrics, essentially allowing grouping of various metrics. This would allow groups of metrics to be turned on or off, and allow metrics to be organized by group when reporting. At this time, this feature is also considered beyond the scope of the initial implementation, to be reconsidered in a future release based on requirements/use case needs. It was also proposed that each service offer a REST API to provide metrics collection information (such as which metrics are being collected) and the ability to turn collection on or off dynamically. This is deemed out of scope for the first implementation and may be brought back if there are use case requirements/demand for it.

## Requested Metrics

The following is a list of example metrics requested by the EdgeX community and adopters for various service areas. Again, metrics would generally be collected and pushed to the message topic at some configured interval (example: 1/5/15 minutes or another defined interval). This is just a sample of metrics thought relevant by each work group. It may not reflect the metrics supported by the implementation. The exact metrics collected by each service will be determined by the service implementers (or SDK implementers in the case of the app functions and device service SDKs).

### General

The following metrics apply to all (or most) services.
- Service uptime (time since last service boot)
- Cumulative number of API requests succeeded / failed / invalid (2xx vs 5xx vs 4xx)
- Average response time (in milliseconds or appropriate unit of measure) on APIs
- Average and max request size

### Core/Supporting

- Latency (measure of time) an event takes to get through core data
- Latency (measure of time) a command request takes to get to a device service
- Indication of health (that events are being processed during a configurable period)
- Number of events in persistence
- Number of readings in persistence
- Number of validation failures (validation of device identification)
- Number of notification transactions
- Number of notifications handled
- Number of failed notification transmissions
- Number of notifications in retry status

### Application Services

- Processing time for a pipeline; latency (measure of time) an event takes to get through an application service pipeline
- DB access times
- How often exports are failing and being persisted to the DB to be retried at a later time
- The current store and forward queue size
- How much data (size in KBs or MBs) of packaged sensor data is being sent to an endpoint (or volume)
- Number of invalid messages that triggered the pipeline
- Number of events processed

### Device Services

- Number of devices managed by this DS
- Device requests (which may be more informative than reading counts and rates)

**Note**: It is envisioned that there may be additional metrics specific to each device service. For example, the ONVIF camera device service may report the number of times camera tampering was detected.

### Security

Security metrics may be more difficult to ascertain as they are cross-service metrics. Given the nature of this design (on a per-service basis), global security metrics may be out of scope, or security metrics collection would have to be copied into each service (leading to lots of duplicate code for now).
Also, true threat detection based on metrics may be a feature best provided by a 3rd party based on particular threats and security profile needs.

- Number of API requests denied due to wrong access token (Kong) per service and within a given time
- Number of secrets accessed per service name
- Count of any accesses and failures to the data persistence layer
- Count of service start and restart attempts

## Design Proposal

### Collect and Push Architecture

Metric data will be collected and cached by each service. At designated times (kicked off by a configurable schedule), the service will collect telemetry data from the cache and push it to a designated message bus topic.

### Metrics Messaging

Cached metric data, at the designated time, will be marshaled into a message and pushed to the pre-configured message bus topic. Each metric message consists of several key/value pairs:

- a required name (the name of the metric), such as service-uptime
- a required value, which is the telemetry value collected, such as 120 as the number of hours the service has been up
- a required timestamp, the time (in Epoch timestamp/milliseconds format) at which the data was collected (similar in nature to the origin of sensed data)
- an optional collection (array) of tags. The tags are key/value pairs of strings that provide amplifying information about the telemetry. Tags may include:
    - originating service name
    - unit of measure associated with the telemetry value
    - value type of the value
    - additional values when the metric is more than just one value (example: when using a histogram, it would include min, max, mean and sum values)

The metric name must be unique for that service. Because some metrics are reported from multiple services (such as service uptime), the name is not required to be unique across all services. All information (keys, values, tags, etc.) is in string format and placed in a JSON array within the message body.
Here are some example representations.

Example metric message body with a single value:

```json
{
  "name": "service-up",
  "value": "120",
  "timestamp": "1602168089665570000",
  "tags": {
    "service": "coredata",
    "uom": "days",
    "type": "int64"
  }
}
```

Example metric message body with multiple values:

```json
{
  "name": "api-requests",
  "value": "24",
  "timestamp": "1602168089665570001",
  "tags": {
    "service": "coredata",
    "uom": "count",
    "type": "int64",
    "mean": "0.0665",
    "rate1": "0.111",
    "rate5": "0.150",
    "rate15": "0.111"
  }
}
```

**Info**: The key or metric name must be unique when using go-metrics, as it requires the metric name to be unique per registry. Metrics are considered immutable.

## Configuration

Configuration, not unlike that provided in core data or any device service, will specify the message bus type and locations where the metrics messages should be sent. In fact, the message bus configuration will use (or reuse, if the service is already using the message bus) the common message bus configuration as defined below.
Common configuration for each service for message queue configuration (inclusive of metrics):

```toml
[MessageQueue]
Protocol = 'redis'                        # or 'tcp'
Host = 'localhost'
Port = 5573
Type = 'redis'                            # or 'mqtt'
PublishTopicPrefix = "edgex/events/core"  # standard and existing core or device topic for publishing
  [MessageQueue.Optional]
  # Default MQTT-specific options that need to be here to enable environment variable overrides of them
  # Client Identifiers
  ClientId = "device-virtual"
  # Connection information
  Qos = "0"                # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)
  KeepAlive = "10"         # Seconds (must be 2 or greater)
  Retained = "false"
  AutoReconnect = "true"
  ConnectTimeout = "5"     # Seconds
  SkipCertVerify = "false" # Only used if Cert/Key file or Cert/Key PEM block are specified
```

Additional configuration must be provided in each service for metrics/telemetry-specific settings. This area of the configuration will likely be different for each type of service. Additional metrics collection configuration to be provided includes:

- the interval that triggers the collection of telemetry from the metrics cache and sends it onto the appointed message bus
- which metrics are available and which are turned off or on (all are false by default). The list of metrics can and likely will be different per service. The keys in this list are the metric names; true and false are used for on and off values.
- the metrics topic prefix where metrics data will be published (example: given the prefix edgex/telemetry, the service and metric name [service-name]/[metric-name] will be appended per metric, allowing subscribers to filter by service or metric name)

These metrics configuration options will be defined in the Writable area of configuration.toml so as to allow dynamic changes to the configuration (when using Consul).
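The topic layout described above, with [service-name]/[metric-name] appended to the configured prefix, can be sketched as a simple helper. The function name here is hypothetical, not part of an EdgeX SDK:

```go
package main

import (
	"fmt"
	"strings"
)

// telemetryTopic builds the publish topic for one metric:
// <prefix>/<service-name>/<metric-name>, so subscribers can filter
// by service (e.g. edgex/telemetry/core-data/#) or by metric name.
// (Hypothetical helper for illustration.)
func telemetryTopic(prefix, serviceName, metricName string) string {
	return strings.Join([]string{prefix, serviceName, metricName}, "/")
}

func main() {
	fmt.Println(telemetryTopic("edgex/telemetry", "core-data", "service-up"))
	// prints edgex/telemetry/core-data/service-up
}
```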
Specifically, the [Writable].[Writable.Telemetry] area will dictate metrics collection configuration like this:

```toml
[Writable]
  [Writable.Telemetry]
  Interval = "30s"
  PublishTopicPrefix = "edgex/telemetry" # [service-name]/[metric-name] will be added to this publish topic prefix
  # Available metrics listed here. All metrics should be listed off (or false) by default
  service-up = false
  api-requests = false
```

**Info**: It was discussed that in future EdgeX releases, services may want separate message bus connections, for example one for sensor data and one for metrics telemetry data. This would allow the QoS and other settings of the message bus connection to differ; sensor data collection, for example, could be messaged with a higher QoS than metrics. As an alternate approach, go-mod-messaging could be modified to allow setting QoS per topic (thereby avoiding multiple connections). For the initial release of this feature, the service will use the same connection (and therefore configuration) for metrics telemetry as well as sensor data.

## Library Support

Each service will now need go-mod-messaging support (for Go services, and the equivalent for C services). Each service will determine when and what metrics to collect and push to the message bus, but will use a common library chosen for each EdgeX language supported (Go or C currently). Use of go-metrics (a Go library to publish application metrics) would allow EdgeX to utilize (versus construct) a library used by over 7 thousand projects. It provides the means to capture various types of metrics in a registry (a sophisticated map). The metrics can then be published (reported) to a number of well-known systems such as InfluxDB, Graphite, DataDog, and Syslog. go-metrics is a Go port of the original Java package https://github.com/dropwizard/metrics. A similar package would need to be selected (or created) for C.
Per the Core WG meeting of 2/24/22, it is important to provide an implementation that is the same in Go or C. The adopter of EdgeX should not see a difference in whether the metrics/telemetry is collected by a C or Go service. Configuration of metrics in a C or Go service should have the same structure. The metrics collection mechanism in C services (specifically as provided for in our C device service SDK) may operate differently "under the covers", but its configuration and the resulting metrics messages on the EdgeX message bus must be formatted/organized the same.

### Considerations in the use of go-metrics

- This is a Go-only library. Using it would not provide any package for the C services. If there are expectations of parity between the services, this may be more difficult to achieve given the features of go-metrics.
- go-metrics will still require the EdgeX team to develop a bootstrapping apparatus to take the metrics configuration and register each of the metrics defined in the configuration with go-metrics.
- go-metrics would also require the EdgeX team to develop the means to periodically extract the metrics data from the registry and ship it via the message bus (something the current go-metrics library does not do).
- While go-metrics offers the ability for data to be reported to other systems, it would require EdgeX to expose these capabilities (possibly through APIs) if a user wanted to export to these subsystems in addition to the message bus.
- Per the Kamakura planning meeting, it was noted that go-metrics is already a dependency in our Go code via other 3rd party packages (see https://github.com/edgexfoundry/edgex-go/blob/4264632f3ddafb0cbc2089cffbea8c0719035c96/go.sum#L18).

### Community questions about go-metrics

Per the Monthly Architect's meeting of 9/20/21: how does go-metrics manage the telemetry data (persistence, in memory, database, etc.)?
In memory, in a "registry"; essentially a key/value store where the key is the metric name.

Does it offer a query API (in order to easily support the ADR-suggested REST API)? Yes: metrics are stored in a "Registry" (MetricRegistry, essentially a map), and Get (or GetAll) methods are provided to query for metrics.

What does the go-metrics package do, so that its features can become requirements for the C side? About a dozen types of metrics collection (from a simple gauge or counter to more sophisticated structures like histograms), all stored in a registry (map).

How is the data made available? It is reported out (exported or published) to various integrated packages (InfluxDB, Graphite, DataDog, Syslog, etc.). Nothing to MQTT or another base message service; this would have to be implemented from scratch.

Can the metric/telemetry count be reset if needed? Does this happen whenever it posts to the message bus? How would this work for REST? Yes: a metric can be unregistered and re-registered. A REST API would have to be constructed to call this capability.

As an alternative to go-metrics, there is another library called OpenCensus. This is a multi-language metrics library, including Go and C++. It is more feature-rich, but OpenCensus is also roughly 5x the size of the go-metrics library.

### Additional Open Questions

- Should consideration be given to allowing metrics to be placed in different topics per name? If so, we would have to add to the topic name, as we do for device names in device services. A future consideration.
- Should consideration be given to incorporating alternate protocols/standards for metric collection such as https://opentelemetry.io/ or https://github.com/statsd/? go-metrics is already a library pulled into all Go services. These packages may be used in C-side implementations.

## Decision

Per the Monthly Architect's meeting of 12/13/21, it was decided to use go-metrics for Go services over creating our own library or using OpenCensus.
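The registry semantics noted in the Q&A above (query by name, and "reset" by unregistering and re-registering a metric) can be illustrated with a toy map-based registry. This is a conceptual stand-in written against the standard library only, not the actual go-metrics MetricRegistry API:

```go
package main

import "fmt"

// registry is a toy stand-in for the go-metrics MetricRegistry:
// a map from metric name to metric value, with get/unregister
// semantics matching the Q&A above.
type registry struct {
	metrics map[string]interface{}
}

func newRegistry() *registry {
	return &registry{metrics: map[string]interface{}{}}
}

// Register ignores duplicates here; go-metrics likewise requires
// metric names to be unique per registry.
func (r *registry) Register(name string, m interface{}) {
	if _, exists := r.metrics[name]; !exists {
		r.metrics[name] = m
	}
}

// Get returns the metric by name, or nil if it is not registered.
func (r *registry) Get(name string) interface{} { return r.metrics[name] }

// Unregister removes the metric; re-registering effectively resets it.
func (r *registry) Unregister(name string) { delete(r.metrics, name) }

func main() {
	r := newRegistry()
	r.Register("api-requests", int64(24))
	fmt.Println(r.Get("api-requests")) // prints 24
	r.Unregister("api-requests")
	r.Register("api-requests", int64(0)) // "reset" by re-registering
	fmt.Println(r.Get("api-requests"))   // prints 0
}
```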
C services will either find/pick a package that provides similar functionality to go-metrics or implement something internally providing MVP capability. Use of go-metrics helps avoid too much service bloat since it is already in most Go services. Per the same Monthly Architect's meeting, it was decided to implement metrics in Go services first.

Per the Monthly Architect's meeting of 1/24/22, it was decided not to support a REST API on all services that would provide information on what metrics the service provides and the ability to turn them on/off. Instead, the decision was to use Writable configuration and allow Consul to be the means to change the configuration (dynamically). If an adopter chooses not to use Consul, then the configuration with regard to metrics collection, as with all configuration in this circumstance, would be static. If an external API need is requested in the future (such as from an external UI or tool), a REST API may be added. See older versions of this PR for ideas on implementation in this case.

Per the Core Working Group meeting of 2/24/22 (and many other previous meetings on this ADR), it was decided that the EdgeX approach should be one of push (via message bus/MQTT) vs. pull (REST API). Both approaches require each service to collect metric telemetry specific to that service. After collecting it, the service must either push it onto a message topic (as a message) or cache it (into memory or some storage mechanism, depending on whether the storage needs to be durable or not) and allow a REST API call to cause the data to be pulled from that cache and provided in a response to the REST call. Given that both mechanisms require the same collection process, the belief is that push is probably preferred today by adopters. In the future, if highly desired, a pull REST API could be added (along with a decision on how to cache the metrics telemetry for that pull).
Per the Core Working Group meeting of 2/24/22: importantly, EdgeX is just making the metrics telemetry available on the internal EdgeX message bus. An adopter would need to create something to pull the data off this bus in order to use it in some way. As voiced by several on the call, it is important for the adopter to realize that today, "we (EdgeX) are not providing the last mile in metrics data". The adopter must provide that last mile, which is to pick the data off the topic, make it available to their systems, and do something with it.

Per the Core Working Group meeting of 2/24/22 (and many other previous meetings on this ADR), it was decided not to use Prometheus (or a Prometheus library) as the means to provide metrics. The reasons for this are many:

- Push vs. pull is favored in the first implementation (see the point above). Also see similar debates online for the pluses/minuses of each approach.
- EdgeX wants to make telemetry data available without dictating the specific mechanism for making the data more widely available. Specific debate centered on the use of Prometheus as a popular collection library (to use inside of services to collect the data) as well as a monitoring system to watch/display the data. While Prometheus is a popular open source approach, it was felt that many organizations choose to use InfluxDB/Grafana, DataDog, AppDynamics, a cloud-provided mechanism, or their own home-grown solution to collect, analyze, visualize and otherwise use the telemetry. Therefore, rather than dictating the selection of the monitoring system, EdgeX would simply make the data available, whereby an organization could choose its own monitoring system/tooling.
- It should be noted that the EdgeX approach merely makes the telemetry data available by message bus. A Prometheus approach would provide collection as well as a backend system to otherwise collect, analyze, display, etc. the data.
Therefore, there is typically work to be done by the adopter to get the telemetry data from the proposed EdgeX message bus solution and do something with it.

- There are some reporters that come with go-metrics that allow data to be taken directly from go-metrics and pushed to an intermediary for Prometheus and other monitoring/telemetry platforms as referenced above. These capabilities may not be very well supported and are beyond the scope of this EdgeX ADR. However, even without reporters, it was felt to be a relatively straightforward exercise (on the part of the adopter) to create an application that listens to the EdgeX metrics message bus and makes that data available via a pull REST API for Prometheus, if desired.
- The Prometheus client libraries would have to be added to each service, which would bloat the services (although they are available for both Go and C).
- The benefit of using go-metrics is that it is already used by Hashicorp Consul (so it is already in the Go services).

## Implementation Details for Go

The go-metrics package offers the following types of metrics collection.

**Gauge**: holds a single integer (int64) value.

- Example use: number of notifications in retry status
- Operations to update the gauge and get the gauge's value

```go
g := metrics.NewGauge()
g.Update(42)           // set the value to 42
g.Update(10)           // now set the value to 10
fmt.Println(g.Value()) // print out the current value in the gauge = 10
```

**Counter**: holds an integer (int64) count. A counter could be implemented with a Gauge.

- Example use: the current store and forward queue size
- Operations to increment, decrement, clear and get the counter's count (or value)

```go
c := metrics.NewCounter()
c.Inc(1)               // add one to the current counter
c.Inc(10)              // add 10 to the current counter, making it 11
c.Dec(5)               // decrement the counter by 5, making it 6
fmt.Println(c.Count()) // print out the current count of the counter = 6
```

**Meter**: measures the rate (int64) of events over time (at one, five and fifteen minute intervals).

- Example use: the number or rate of requests on a service API
- Operations: provide the total count of events as well as the mean and the rates at 1, 5, and 15 minute intervals

```go
m := metrics.NewMeter()
m.Mark(1)                    // add one to the current meter value
time.Sleep(15 * time.Second) // allow some time to go by
m.Mark(1)
time.Sleep(15 * time.Second)
m.Mark(1)
time.Sleep(15 * time.Second)
m.Mark(1)
time.Sleep(15 * time.Second)
fmt.Println(m.Count())    // prints 4
fmt.Println(m.Rate1())    // prints 0.11075889086811593
fmt.Println(m.Rate5())    // prints 0.1755318374350548
fmt.Println(m.Rate15())   // prints 0.19136522498856992
fmt.Println(m.RateMean()) // prints 0.06665062941438574
```

**Histogram**: measures the statistical distribution of values (int64 values) in a collection of values.

- Example use: response times on APIs
- Operations: update and get the min, max, count, percentile, sample, sum and variance from the collection

```go
h := metrics.NewHistogram(metrics.NewUniformSample(4))
h.Update(10)
h.Update(20)
h.Update(30)
h.Update(40)
fmt.Println(h.Max())            // prints 40
fmt.Println(h.Min())            // prints 10
fmt.Println(h.Mean())           // prints 25
fmt.Println(h.Count())          // prints 4
fmt.Println(h.Percentile(0.25)) // prints 12.5
fmt.Println(h.Variance())       // prints 125
fmt.Println(h.Sample())         // prints &{4 {0 0} 4 [10 20 30 40]}
```

**Timer**: measures both the rate at which a particular piece of code is called and the distribution of its duration.

- Example use: how often an app service function gets called and how long it takes to get through the function
- Operations: update and get min, max, count, rate1, rate5, rate15, mean, percentile, sum and variance from the collection

```go
t := metrics.NewTimer()
t.Update(10)
time.Sleep(15 * time.Second)
t.Update(20)
time.Sleep(15 * time.Second)
t.Update(30)
time.Sleep(15 * time.Second)
t.Update(40)
time.Sleep(15 * time.Second)
fmt.Println(t.Max())            // prints 40
fmt.Println(t.Min())            // prints 10
fmt.Println(t.Mean())           // prints 25
fmt.Println(t.Count())          // prints 4
fmt.Println(t.Sum())            // prints 100
fmt.Println(t.Percentile(0.25)) // prints 12.5
fmt.Println(t.Variance())       // prints 125
fmt.Println(t.Rate1())          // prints 0.1116017821771607
fmt.Println(t.Rate5())          // prints 0.1755821073441404
fmt.Println(t.Rate15())         // prints 0.1913711954736821
fmt.Println(t.RateMean())       // prints 0.06665773963998162
```

**Note**: The go-metrics package does offer some variants of these, like GaugeFloat64 to hold 64-bit floats.

## Consequences

- Should there be a global configuration option to turn all metrics off/on? EdgeX doesn't yet have global config, so this will have to be done per service.
- Given the potential that each service publishes metrics to the same message topic, 0MQ is not an implementation option unless each service uses a different 0MQ pipe (0MQ topics do not allow multiple publishers). Like the DS-to-App-Services implementation, do we allow 0MQ to be used, but only if each service sends to a different 0MQ topic? Probably not.
- We need to avoid service bloat. EdgeX is not an enterprise system. How can we implement this in a concise and economical way?
- Use of go-metrics helps on the Go side since it is already a module used by EdgeX modules (and brought in by default). Care and concern must be given to not cause too much bloat on the C side.
- The SMA reports on service CPU, memory, and configuration, and provides the means to start/stop/restart the services. This is currently outside the scope of the new metric collection/monitoring. In the future, 3rd party mechanisms which offer the same capability as the SMA may render all of the SMA irrelevant.
- The existing notifications service serves to send a notification via an alternate protocol outside of EdgeX. This communication service is provided as a generic communication instrument from any micro service and is independent of any type of data or concern. In the future, the notification service could be configured to be a subscriber of the metric messages and trigger appropriate external notification (via email, SMTP, etc.).

## Reference

Possible standards for implementation:

- Open Telemetry
- statsd
- go-metrics
- OpenCensus
Each metric message consists of several key/value pairs: - a required name (the name of the metric) such as service-uptime - a required value which is the telemetry value collected such as 120 as the number of hours the service has been up. - a required timestamp is the time (in Epoch timestamp/milliseconds format) at which the data was collected (similar in nature to the origin of sensed data). - an optional collection (array) of tags. The tags are sets of key/value pairs of strings that provide amplifying information about the telemetry. Tags may include: - originating service name - unit of measure associated with the telemetry value - value type of the value - additional values when the metric is more than just one value (example: when using a histogram, it would include min, max, mean and sum values) The metric name must be unique for that service. Because some metrics are reported from multiple services (such as service uptime), the name is not required to be unique across all services. All information (keys, values, tags, etc.) is in string format and placed in a JSON array within the message body. Here are some example representations: Example metric message body with a single value { \"name\" : \"service-up\" , \"value\" : \"120\" , \"timestamp\" : \"1602168089665570000\" , \"tags\" :{ \"service\" : \"coredata\" , \"uom\" : \"days\" , \"type\" : \"int64\" }} Example metric message body with multiple values { \"name\" : \"api-requests\" , \"value\" : \"24\" , \"timestamp\" : \"1602168089665570001\" , \"tags\" :{ \"service\" : \"coredata\" , \"uom\" : \"count\" , \"type\" : \"int64\" , \"mean\" : \"0.0665\" , \"rate1\" : \"0.111\" , \"rate5\" : \"0.150\" , \"rate15\" : \"0.111\" }} Info The key or metric name must be unique when using go-metrics as it requires the metric name to be unique per the registry. 
Metrics are considered immutable.","title":"Metrics Messaging"},{"location":"design/adr/0006-Metrics-Collection/#configuration","text":"Configuration, not unlike that provided in core data or any device service, will specify the message bus type and locations where the metrics messages should be sent. In fact, the message bus configuration will use (or reuse if the service is already using the message bus) the common message bus configuration as defined below. Common configuration for each service for message queue configuration (inclusive of metrics):

[MessageQueue]
Protocol = 'redis' ## or 'tcp'
Host = 'localhost'
Port = 5573
Type = 'redis' ## or 'mqtt'
PublishTopicPrefix = \"edgex/events/core\" # standard and existing core or device topic for publishing
[MessageQueue.Optional]
# Default MQTT Specific options that need to be here to enable environment variable overrides of them
# Client Identifiers
ClientId = \"device-virtual\"
# Connection information
Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)
KeepAlive = \"10\" # Seconds (must be 2 or greater)
Retained = \"false\"
AutoReconnect = \"true\"
ConnectTimeout = \"5\" # Seconds
SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified

Additional configuration must be provided in each service to provide metrics / telemetry specific configuration. This area of the configuration will likely be different for each type of service. Additional metrics collection configuration to be provided includes: Trigger the collection of telemetry from the metrics cache and sending it into the appointed message bus. Define which metrics are available and which are turned off and on. All are false by default. The list of metrics can and likely will be different per service. The keys in this list are the metric name. True and false are used for on and off values. 
Specify the metrics topic prefix to which metrics data will be published (ex: providing the prefix edgex/telemetry, to which the service and metric name [service-name]/[metric-name] will be appended per metric, allowing subscribers to filter by service or metric name). These metrics configuration options will be defined in the Writable area of configuration.toml so as to allow for dynamic changes to the configuration (when using Consul). Specifically, the [Writable].[Writable.Telemetry] area will dictate metrics collection configuration like this:

[Writable]
  [Writable.Telemetry]
  Interval = \"30s\"
  PublishTopicPrefix = \"edgex/telemetry\" # /[service-name]/[metric-name] will be added to this Publish Topic prefix
  # available metrics listed here. All metrics should be listed off (or false) by default
  service-up = false
  api-requests = false

Info It was discussed that in future EdgeX releases, services may want separate message bus connections, for example one for sensor data and one for metrics telemetry data. This would allow the QoS and other settings of the message bus connection to be different. This would allow sensor data collection, for example, to be messaged with a higher QoS than that of metrics. As an alternate approach, we could modify go-mod-messaging to allow setting QoS per topic (and thereby avoid multiple connections). For the initial release of this feature, the service will use the same connection (and therefore configuration) for metrics telemetry as well as sensor data.","title":"Configuration"},{"location":"design/adr/0006-Metrics-Collection/#library-support","text":"Each service will now need go-mod-messaging support (for GoLang services and the equivalent for C services). 
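The topic assembly described above (prefix plus service and metric name) can be sketched as simple string joining; the helper below is illustrative only, not an SDK function:

```go
package main

import "fmt"

// metricTopic joins the configured PublishTopicPrefix with the service
// and metric names, so subscribers can filter on either path segment.
// Illustrative helper only - not part of any EdgeX SDK.
func metricTopic(prefix, serviceName, metricName string) string {
	return fmt.Sprintf("%s/%s/%s", prefix, serviceName, metricName)
}

func main() {
	fmt.Println(metricTopic("edgex/telemetry", "core-data", "service-up"))
	// edgex/telemetry/core-data/service-up
}
```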
Each service would determine when and what metrics to collect and push to the message bus, but will use a common library chosen for each EdgeX language supported (Go or C currently). Use of go-metrics (a GoLang library to publish application metrics) would allow EdgeX to utilize (versus construct) a library utilized by over 7 thousand projects. It provides the means to capture various types of metrics in a registry (a sophisticated map). The metrics can then be published (reported) to a number of well known systems such as InfluxDB, Graphite, DataDog, and Syslog. go-metrics is a Go library ported from the original Java package https://github.com/dropwizard/metrics. A similar package would need to be selected (or created) for C. Per the Core WG meeting of 2/24/22 - it is important to provide an implementation that is the same in Go or C. The adopter of EdgeX should not see a difference in whether the metrics/telemetry is collected by a C or Go service. Configuration of metrics in a C or Go service should have the same structure. The C based metrics collection mechanism in C services (specifically as provided for in our C device service SDK) may operate differently \"under the covers\" but its configuration and resulting metrics messages on the EdgeX message bus must be formatted/organized the same. Considerations in the use of go-metrics This is a Golang-only library. Using this library would not provide any package to use for the C services. If there are expectations for parity between the services, this may be more difficult to achieve given the features of go-metrics. go-metrics will still require the EdgeX team to develop a bootstrapping apparatus to take the metrics configuration and register each of the metrics defined in the configuration in go-metrics. go-metrics would also require the EdgeX team to develop the means to periodically extract the metrics data from the registry and ship it via message bus (something the current go-metrics library does not do). 
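The "extract from the registry and ship" loop that EdgeX would need to build can be sketched with the standard library alone. The Registry type below is a stand-in for go-metrics' MetricRegistry and the print is a placeholder for a go-mod-messaging publish; these names are ours, not the real library APIs:

```go
package main

import (
	"fmt"
	"sync"
)

// Registry is a stand-in for go-metrics' MetricRegistry: a named map of
// counters guarded by a mutex. The real library offers richer metric types.
type Registry struct {
	mu       sync.Mutex
	counters map[string]int64
}

func NewRegistry() *Registry {
	return &Registry{counters: map[string]int64{}}
}

// Inc adds n to the named counter, creating it on first use.
func (r *Registry) Inc(name string, n int64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.counters[name] += n
}

// Snapshot copies out the current values, as the proposed reporter would
// do on each configured interval before publishing to the message bus.
func (r *Registry) Snapshot() map[string]int64 {
	r.mu.Lock()
	defer r.mu.Unlock()
	out := make(map[string]int64, len(r.counters))
	for k, v := range r.counters {
		out[k] = v
	}
	return out
}

func main() {
	reg := NewRegistry()
	reg.Inc("api-requests", 24)
	for name, v := range reg.Snapshot() {
		// Stand-in for marshaling a metric message and publishing it.
		fmt.Printf("publish %s=%d\n", name, v)
	}
}
```

In the real design the snapshot step would run on the Writable.Telemetry Interval and each value would be marshaled into the metric message format defined earlier.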
While go-metrics offers the ability for data to be reported to other systems, it would require EdgeX to expose these capabilities (possibly through APIs) if a user wanted to export to these subsystems in addition to the message bus. Per the Kamakura Planning Meeting, it was noted that go-metrics is already a dependency in our Go code due to its use by other 3rd party packages (see https://github.com/edgexfoundry/edgex-go/blob/4264632f3ddafb0cbc2089cffbea8c0719035c96/go.sum#L18). Community questions about go-metrics (per the Monthly Architect's meeting of 9/20/21): How does it manage the telemetry data (persistence, in memory, database, etc.)? In memory - in a \"registry\"; essentially a key/value store where the key is the metric name. Does it offer a query API (in order to easily support the ADR suggested REST API)? Yes - metrics are stored in a \"Registry\" (MetricRegistry - essentially a map). Get (or GetAll) methods are provided to query for metrics. What does the go-metrics package do so that its features can become requirements for the C side? About a dozen types of metrics collection (simple gauge or counter to more sophisticated structures like Histograms) - all stored in a registry (map). How is the data made available? Report out (export or publish) to various integrated packages (InfluxDB, Graphite, DataDog, Syslog, etc.). Nothing to MQTT or other base message service. This would have to be implemented from scratch. Can the metric/telemetry count be reset if needed? Does this happen whenever it posts to the message bus? How would this work for REST? Yes, you can unregister and re-register the metric. A REST API would have to be constructed to call this capability. As an alternative to go-metrics, there is another library called OpenCensus. This is a multi-language metrics library, including Go and C++. This library is more feature-rich. 
OpenCensus is also roughly 5x the size of the go-metrics library.","title":"Library Support"},{"location":"design/adr/0006-Metrics-Collection/#additional-open-questions","text":"Should consideration be given to allow metrics to be placed in different topics per name? If so, we would have to add to the topic name as we do for the device name in device services. This is a future consideration. Should consideration be given to incorporate alternate protocols/standards for metric collection such as https://opentelemetry.io/ or https://github.com/statsd/? go-metrics is already a library pulled into all Go services. These packages may be used in C-side implementations.","title":"Additional Open Questions"},{"location":"design/adr/0006-Metrics-Collection/#decision","text":"Per the Monthly Architect's meeting of 12/13/21 - it was decided to use go-metrics for Go services over creating our own library or using OpenCensus. C services will either find/pick a package that provides similar functionality to go-metrics or implement internally something providing MVP capability. Use of go-metrics helps avoid too much service bloat since it is already in most Go services. Per the same Monthly Architect's meeting, it was decided to implement metrics in Go services first. Per the Monthly Architect's meeting of 1/24/22 - it was decided not to support a REST API on all services that would provide information on what metrics the service provides and the ability to turn them on / off. Instead, the decision was to use Writable configuration and allow Consul to be the means to change the configuration (dynamically). If an adopter chooses not to use Consul, then the configuration with regard to metrics collection, as with all configuration in this circumstance, would be static. If an external API need is requested in the future (such as from an external UI or tool), a REST API may be added. See older versions of this PR for ideas on implementation in this case. 
Per Core Working Group meeting of 2/24/22 (and in many other previous meetings on this ADR) - it was decided that the EdgeX approach should be one of push (via message bus/MQTT) vs. pull (REST API). Both approaches require each service to collect metric telemetry specific to that service. After collecting it, the service must either push it onto a message topic (as a message) or cache it (into memory or some storage mechanism depending on whether the storage needs to be durable or not) and allow for a REST API call that would cause the data to be pulled from that cache and provided in a response to the REST call. Given both mechanisms require the same collection process, the belief is that push is probably preferred today by adopters. In the future, if highly desired, a pull REST API could be added (along with a decision on how to cache the metrics telemetry for that pull). Per Core Working Group meeting of 2/24/22 - importantly , EdgeX is just making the metrics telemetry available on the internal EdgeX message bus. An adopter would need to create something to pull the data off this bus to use it in some way. As voiced by several on the call, it is important for the adopter to realize that today, \"we (EdgeX) are not providing the last mile in metrics data\". The adopter must provide that last mile which is to pick the data from the topic, make it available to their systems and do something with it. Per Core Working Group meeting of 2/24/22 (and in many other previous meetings on this ADR) - it was decided not to use Prometheus (or Prometheus library) as the means to provide for metrics. The reasons for this are many: Push vs pull is favored in the first implementation (see point above). Also see similar debate online for the pluses/minuses of each approach. EdgeX wants to make telemetry data available without dictating the specific mechanism for making the data more widely available. 
Specific debate centered on the use of Prometheus as a popular collection library (to use inside of services to collect the data) as well as a monitoring system to watch/display the data. While Prometheus is a popular open source approach, it was felt that many organizations choose to use InfluxDB/Grafana, DataDog, AppDynamics, a cloud provided mechanism, or their own home-grown solution to collect, analyse, visualize and otherwise use the telemetry. Therefore, rather than dictating the selection of the monitoring system, EdgeX would simply make the data available, whereby an organization could choose their own monitoring system/tooling. It should be noted that the EdgeX approach merely makes the telemetry data available by message bus. A Prometheus approach would provide collection as well as a backend system to otherwise collect, analyse, display, etc. the data. Therefore, there is typically work to be done by the adopter to get the telemetry data from the proposed EdgeX message bus solution and do something with it. There are some reporters that come with go-metrics that allow for data to be taken directly from go-metrics and pushed to an intermediary for Prometheus and other monitoring/telemetry platforms as referenced above. These capabilities may not be very well supported and are beyond the scope of this EdgeX ADR. However, even without reporters, it was felt a relatively straightforward exercise (on the part of the adopter) to create an application that listens to the EdgeX metrics message bus and makes that data available via a pull REST API for Prometheus if desired. The Prometheus client libraries would have to be added to each service, which would bloat the services (although they are available for both Go and C). 
The benefit of using go-metrics is that it is used already by Hashicorp Consul (so already in the Go services).","title":"Decision"},{"location":"design/adr/0006-Metrics-Collection/#implementation-details-for-go","text":"The go-metrics package offers the following types of metrics collection: Gauge: holds a single integer (int64) value. Example use: number of notifications in retry status. Operations: update the gauge and get the gauge's value. Example code:

g := metrics.NewGauge()
g.Update(42)           // set the value to 42
g.Update(10)           // now set the value to 10
fmt.Println(g.Value()) // print out the current value in the gauge = 10

Counter: holds an integer (int64) count. A counter could be implemented with a Gauge. Example use: the current store and forward queue size. Operations: increment, decrement, clear and get the counter's count (or value).

c := metrics.NewCounter()
c.Inc(1)               // add one to the current counter
c.Inc(10)              // add 10 to the current counter, making it 11
c.Dec(5)               // decrement the counter by 5, making it 6
fmt.Println(c.Count()) // print out the current count of the counter = 6

Meter: measures the rate (int64) of events over time (at one, five and fifteen minute intervals). Example use: the number or rate of requests on a service API. Operations: provide the total count of events as well as the mean rate and the rates at 1, 5, and 15 minutes.

m := metrics.NewMeter()
m.Mark(1)                    // add one to the current meter value
time.Sleep(15 * time.Second) // allow some time to go by
m.Mark(1)                    // add one to the current meter value
time.Sleep(15 * time.Second) // allow some time to go by
m.Mark(1)                    // add one to the current meter value
time.Sleep(15 * time.Second) // allow some time to go by
m.Mark(1)                    // add one to the current meter value
time.Sleep(15 * time.Second) // allow some time to go by
fmt.Println(m.Count())    // prints 4
fmt.Println(m.Rate1())    // prints 0.11075889086811593
fmt.Println(m.Rate5())    // prints 0.1755318374350548
fmt.Println(m.Rate15())   // prints 0.19136522498856992
fmt.Println(m.RateMean()) // prints 0.06665062941438574

Histogram: measures the statistical distribution of values (int64 values) in a collection of values. Example use: response times on APIs. Operations: update and get the min, max, count, percentile, sample, sum and variance of the collection.

h := metrics.NewHistogram(metrics.NewUniformSample(4))
h.Update(10)
h.Update(20)
h.Update(30)
h.Update(40)
fmt.Println(h.Max())            // prints 40
fmt.Println(h.Min())            // prints 10
fmt.Println(h.Mean())           // prints 25
fmt.Println(h.Count())          // prints 4
fmt.Println(h.Percentile(0.25)) // prints 12.5
fmt.Println(h.Variance())       // prints 125
fmt.Println(h.Sample())         // prints &{4 {0 0} 4 [10 20 30 40]}

Timer: measures both the rate at which a particular piece of code is called and the distribution of its duration. Example use: how often an app service function gets called and how long it takes to get through the function. Operations: update and get the min, max, count, rate1, rate5, rate15, mean, percentile, sum and variance of the collection.

t := metrics.NewTimer()
t.Update(10)
time.Sleep(15 * time.Second)
t.Update(20)
time.Sleep(15 * time.Second)
t.Update(30)
time.Sleep(15 * time.Second)
t.Update(40)
time.Sleep(15 * time.Second)
fmt.Println(t.Max())            // prints 40
fmt.Println(t.Min())            // prints 10
fmt.Println(t.Mean())           // prints 25
fmt.Println(t.Count())          // prints 4
fmt.Println(t.Sum())            // prints 100
fmt.Println(t.Percentile(0.25)) // prints 12.5
fmt.Println(t.Variance())       // prints 125
fmt.Println(t.Rate1())          // prints 0.1116017821771607
fmt.Println(t.Rate5())          // prints 0.1755821073441404
fmt.Println(t.Rate15())         // prints 0.1913711954736821
fmt.Println(t.RateMean())       // prints 0.06665773963998162

Note The go-metrics package does offer some variants of these, such as GaugeFloat64 to hold 64-bit floats.","title":"Implementation Details for Go"},{"location":"design/adr/0006-Metrics-Collection/#consequences","text":"Should there be a global configuration option to turn all metrics off/on? EdgeX doesn't yet have global config so this will have to be by service. Given the potential that each service publishes metrics to the same message topic, 0MQ is not an implementation option unless each service uses a different 0MQ pipe (0MQ topics do not allow multiple publishers). Like the DS to App Services implementation, do we allow 0MQ to be used, but only if each service sends to a different 0MQ topic? Probably not. We need to avoid service bloat. EdgeX is not an enterprise system. How can we implement this in a concise and economical way? Use of go-metrics helps on the Go side since this is already a module used by EdgeX modules (and brought in by default). Care and concern must be given to not cause too much bloat on the C side. The SMA reports on service CPU, memory, configuration and provides the means to start/stop/restart the services. This is currently outside the scope of the new metric collection/monitoring. In the future, 3rd party mechanisms which offer the same capability as the SMA may render all of the SMA irrelevant. The existing notifications service serves to send a notification via an alternate protocol outside of EdgeX. This communication service is provided as a generic communication instrument from any microservice and is independent of any type of data or concern. 
In the future, the notification service could be configured to be a subscriber of the metric messages and trigger appropriate external notification (via email, SMTP, etc.).","title":"Consequences"},{"location":"design/adr/0006-Metrics-Collection/#reference","text":"Possible standards for implementation Open Telemetry statsd go-metrics OpenCensus","title":"Reference"},{"location":"design/adr/0018-Service-Registry/","text":"Service Registry Status Context Existing Behavior Device Services Registry Client Interface Usage Core and Support Services Security Proxy Setup History Problem Statement Decision References Status Approved (by TSC vote on 3/25/21) Context An EdgeX system may be run with an optional service registry, the use of which (see the related ADR 0001-Registry-Refactor [1]) can be controlled on a per-service basis via the -r/-registry command line options. For the purposes of this ADR, a base assumption is that the registry has been enabled for all services. The default service registry used by EdgeX is Consul [2] from Hashicorp. Consul is also the default configuration provider for EdgeX. This ADR is meant to address the current usage of the registry by EdgeX services, and in particular whether the EdgeX services are using the registry to determine the location of peer services vs. using static per-service configuration. The reason this is being investigated is that there has been a proposal that EdgeX do away with the registry functionality, as the current implementation is not considered secure, due to the current configuration of Consul as used by the latest version of EdgeX (Hanoi/1.3.0). 
According to the original Service Name Design document (v6) [3] written during the California (0.6) release of EdgeX, all EdgeX Foundry microservices should be able to accomplish the following tasks: Register with the configuration/registration (referred to simply as \u201cthe registry\u201d for the rest of this document) provider (today Consul) Respond to availability requests Respond to shutdown requests by: Cleaning up resources in an orderly fashion Unregistering itself from the registry Get the address (host & port) of another EdgeX microservice by service name through the registry (when enabled) The purpose of this design is to ensure that services themselves advertise their location to the rest of the system by first self-registering. Most service registries (including Consul) implement some sort of health check mechanism. If a service is failing one or more health checks, the registry will stop reporting its availability when queried. Note - the design specifically excludes device services from this service lookup, as Core Metadata maintains a persistent store of DeviceService objects which provide service location for device services. Existing Behavior This section documents the existing behavior in the Hanoi (1.3.x) version of EdgeX. Device Services Device Virtual's behavior was first tested using the edgexfoundry snap (which is configured to always use the registry) by doing the following:

$ sudo snap install edgexfoundry
$ cp /var/snap/edgexfoundry/current/config/device-virtual/res/configuration.toml .

I edited the file, removing the [Client.Data] section completely and copied the file back into place. Next I enabled device-virtual while monitoring the journal output. 
$ sudo cp configuration.toml /var/snap/edgexfoundry/current/config/device-virtual/res/
$ sudo snap set edgexfoundry device-virtual=on

The following error was seen in the journal: level=INFO app=device-virtual source=httpserver.go:94 msg=\"Web server starting (0.0.0.0:49990)\" error: fatal error; Host setting for Core Data client not configured Next I followed the same steps, but instead of completely removing the client, I instead set the client ports to invalid values. In this case the service logged the following errors and exited: level=ERROR app=device-virtual source=service.go:149 msg=\"DeviceServicForName failed: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\" level=ERROR app=device-virtual source=init.go:45 msg=\"Couldn't register to metadata service: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\\n\" Note - in order to run this second test, the easiest way to do so is to remove and reinstall the snap vs. manually wiping out device-virtual's configuration in Consul. I could have also stopped the service, modified the configuration directly in Consul, and restarted the service. Registry Client Interface Usage Next the service's usage of the go-mod-registry Client interface was examined:

type Client interface {
	// Registers the current service with Registry for discover and health check
	Register() error

	// Un-registers the current service with Registry for discover and health check
	Unregister() error

	// Simply checks if Registry is up and running at the configured URL
	IsAlive() bool

	// Gets the service endpoint information for the target ID from the Registry
	GetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error)

	// Checks with the Registry if the target service is available, i.e. registered and healthy
	IsServiceAvailable(serviceId string) (bool, error)
}

Summary If a device service is started with the registry flag set: Both Device SDKs register with the registry on startup, and unregister from the registry on normal shutdown. The Go SDK (device-sdk-go) queries the registry to check dependent service availability and health (via IsServiceAvailable) on startup. Regardless of the registry setting, the Go SDK always sources the addresses of its dependent services from the Client* configuration stanzas. The C SDK queries the registry for the addresses of its dependent services. It pings the services directly to determine their availability and health. Core and Support Services The same approach was used for Core and Support services (i.e. reviewing the usage of go-mod-bootstrap's Client interface), and ironically, the SMA seems to be the only service in edgex-go that actually queries the registry for service location:

./internal/system/agent/getconfig/executor.go: ep, err := e.registryClient.GetServiceEndpoint(serviceName)
./internal/system/agent/direct/metrics.go: e, err := m.registryClient.GetServiceEndpoint(serviceName)

In summary, other than the SMA's configuration and metrics logic, the Core and Support services behave in the same manner as device-sdk-go. Note - the SMA also has a longstanding issue #2486 where it continuously logs errors if one (or more) of the Support Services are not running. As described in the issue, this could be avoided if the SMA used the registry to determine if the services were actually available. See related issue #1662 ('Look at Driving \"Default Services List\" via Configuration'). Security Proxy Setup The security-proxy-setup service also relies on static service address configuration to configure the server routes for each of the services accessible through the API Gateway (aka Kong). 
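To make the GetServiceEndpoint usage above concrete, here is a sketch of turning a registry lookup result into the client base URL a service would otherwise read from its Client* configuration stanza. The ServiceEndpoint struct below is an illustrative stand-in, not the exact go-mod-registry types.ServiceEndpoint definition:

```go
package main

import "fmt"

// ServiceEndpoint stands in for the value returned by GetServiceEndpoint.
// Field names here are illustrative, not the exact go-mod-registry type.
type ServiceEndpoint struct {
	ServiceId string
	Host      string
	Port      int
}

// baseURL builds the base URL a service client would use, from the
// address information the registry returned for a dependency.
func baseURL(ep ServiceEndpoint) string {
	return fmt.Sprintf("http://%s:%d", ep.Host, ep.Port)
}

func main() {
	ep := ServiceEndpoint{ServiceId: "core-metadata", Host: "localhost", Port: 48081}
	fmt.Println(baseURL(ep)) // http://localhost:48081
}
```

This is the pattern the Decision section below proposes for the core, support, and security-proxy-setup services, in place of static Client* configuration.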
Although it uses the same TOML-based client config keys as the other services, these configuration values are only ever read from the security-proxy-setup's local configuration.toml file, as the security services have never supported using our configuration provider (aka Consul). Note - Another point worth mentioning with respect to security services is that in the Geneva and Hanoi releases the service health checks registered by the services (and the associated IsServiceAvailable method) are used to orchestrate the ordered startup of the security services via a set of Consul scripts. This additional orchestration is only performed when EdgeX is deployed via docker, and is slated to be removed as part of the Ireland release. History After a bit of research reaching as far back as the California (0.6.1) release of EdgeX, I've managed to piece together why the current implementation works the way it does. This history focuses solely on the core and support services. The California release of EdgeX was released in June of 2018 and was the first to include services written using Go. This version of EdgeX as well as versions through the Fuji release all relied on a bootstrapping service called core-config-seed which was responsible for seeding the configuration of all of the core and support services into Consul prior to any of the services being started. This release actually preceded usage of TOML for configuration files, and instead just used a flat key/value format, with keys converted from legacy Java property names (e.g. meta.db.device.url) to Camel[Pascal]Case (e.g. MetaDeviceURL). I chose the config key mentioned above on purpose: MetaDeviceURL = \"http://edgex-core-metadata:48081/api/v1/device\" Not only did this config key provide the address of core metadata, it also provided the path of a specific REST endpoint. In later releases of EdgeX, the address of the service and the specific endpoint paths were de-coupled. 
Instead of following the Service Name design (which was finalized two months earlier), the initial implementation followed the legacy Java implementation and initialized its service clients for each required REST endpoint (belonging to another EdgeX service) directly from the associated *URL config key read from Consul (if enabled) or directly from the configuration file. The shared client initialization code also created an Endpoint monitor goroutine and passed it a Go channel used by the service to receive updates to the REST API endpoint URL. This monitor goroutine effectively polled Consul every 15s (this became configurable in later versions) for the client's service address and if a change was detected, would write the updated endpoint URL to the given channel, effectively ensuring that the service started using the new URL. It wasn't until late in the Geneva development cycle that I noticed log messages which made me aware of the fact that every one of our services was making a REST call to check the address of a service endpoint every 15s, for every REST endpoint it used! An issue was filed (https://github.com/edgexfoundry/edgex-go/issues/2594), and the client monitoring was removed as part of the Geneva 1.2.1 release. Problem Statement The fundamental problem with the existing implementations (as described above) is that there is too much duplication of configuration across services. For instance, Core Data's service port can easily be changed by passing the environment variable SERVICE_PORT to the service on startup. This overrides the configuration read from the configuration provider, and will cause Core Data to listen on the new port; however, it has no impact on any services which use Core Data, as the client config for each is read from the configuration provider (excluding security-proxy-setup). This means in order to change a service port, environment variable overrides (e.g. 
CLIENTS_COREDATA_PORT) need to be set for every client service as well as security-proxy-setup (if required). Decision Update the core, support, and security-proxy-setup services to use go-mod-registry's Client.GetServiceEndpoint method (if started with the --registry option) to determine (a) if a service dependency is available and (b) use the returned address information to initialize client endpoints (or setup the correct route in the case of proxy-setup). The same changes also need to be applied to the App Functions SDK and Go Device SDK, with only minor changes required in the C Device SDK (see previous comments re: the current implementation). Note - this design only works if service registration occurs before the service initializes its clients. For instance, Core Data and Core Metadata both depend on the other, and thus if both defer service registration till after client initialization, neither will be able to successfully look up the address of the other service. Consequences One impact of this decision is that since the security-proxy-setup service currently runs before any of the core and support services are started, it would not be possible to implement this proposal without also modifying the service to use a lazy initialization of the API Gateway's routes. As such, the implementation of this ADR will require more design work with respect to security-proxy-setup. Some of the issues include: Splitting the configuration of the API Gateway from the service route initialization logic, either by making the service long-running or splitting route initialization into its own service. Handling registry and non-registry scenarios (i.e. add --registry command-line support to security-proxy-setup). Handling changes to service address information (i.e. dynamically update API Gateway routes if/when service addresses change). Finally, the proxy-setup's configuration needs to be updated so that its Route entries use service-keys instead of arbitrary names (e.g. 
Route.core-data vs. Route.CoreData ). References [1] ADR 0001-Registry-Refactor [2] Consul [3] Service Name Design v6","title":"Service Registry"},{"location":"design/adr/0018-Service-Registry/#service-registry","text":"Status Context Existing Behavior Device Services Registry Client Interface Usage Core and Support Services Security Proxy Setup History Problem Statement Decision References","title":"Service Registry"},{"location":"design/adr/0018-Service-Registry/#status","text":"Approved (by TSC vote on 3/25/21)","title":"Status"},{"location":"design/adr/0018-Service-Registry/#context","text":"An EdgeX system may be run with an optional service registry, the use of which (see the related ADR 0001-Registry-Refactor [1]) can be controlled on a per-service basis via the -r/-registry command line options. For the purposes of this ADR, a base assumption is that the registry has been enabled for all services. The default service registry used by EdgeX is Consul [2] from Hashicorp. Consul is also the default configuration provider for EdgeX. This ADR is meant to address the current usage of the registry by EdgeX services, and in particular whether the EdgeX services are using the registry to determine the location of peer services vs. using static per-service configuration. The reason this is being investigated is that there has been a proposal that EdgeX do away with the registry functionality, as the current implementation is not considered secure, due to the current configuration of Consul as used by the latest version of EdgeX (Hanoi/1.3.0). 
According to the original Service Name Design document (v6) [3] written during the California (0.6) release of EdgeX, all EdgeX Foundry microservices should be able to accomplish the following tasks: Register with the configuration/registration (referred to simply as \u201cthe registry\u201d for the rest of this document) provider (today Consul) Respond to availability requests Respond to shutdown requests by: Cleaning up resources in an orderly fashion Unregistering itself from the registry Get the address (host & port) of another EdgeX microservice by service name through the registry (when enabled) The purpose of this design is to ensure that services themselves advertise their location to the rest of the system by first self-registering. Most service registries (including Consul) implement some sort of health check mechanism. If a service is failing one or more health checks, the registry will stop reporting its availability when queried. Note - the design specifically excludes device services from this service lookup, as Core Metadata maintains a persistent store of DeviceService objects which provide service location for device services.","title":"Context"},{"location":"design/adr/0018-Service-Registry/#existing-behavior","text":"This section documents the existing behavior in the Hanoi (1.3.x) version of EdgeX.","title":"Existing Behavior"},{"location":"design/adr/0018-Service-Registry/#device-services","text":"Device Virtual's behavior was first tested using the edgexfoundry snap (which is configured to always use the registry) by doing the following: $ sudo snap install edgexfoundry $ cp /var/snap/edgexfoundry/current/config/device-virtual/res/configuration.toml . I edited the file, removing the [Client.Data] section completely and copied the file back into place. Next I enabled device-virtual while monitoring the journal output. 
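The register / unregister / health-check lifecycle listed above can be sketched in Go. This is a minimal, self-contained illustration, not EdgeX code: the Client interface mirrors the go-mod-registry interface quoted later in this ADR, and fakeRegistry is a hypothetical in-memory stand-in for Consul.

```go
package main

import "fmt"

// Client mirrors the go-mod-registry interface quoted later in this ADR.
type Client interface {
	Register() error
	Unregister() error
	IsAlive() bool
}

// fakeRegistry is a hypothetical in-memory stand-in for Consul,
// used only to illustrate the lifecycle.
type fakeRegistry struct{ registered bool }

func (r *fakeRegistry) Register() error   { r.registered = true; return nil }
func (r *fakeRegistry) Unregister() error { r.registered = false; return nil }
func (r *fakeRegistry) IsAlive() bool     { return true }

// runService sketches the self-register on startup / unregister on
// orderly shutdown flow the design document calls for.
func runService(rc Client, work func()) error {
	if !rc.IsAlive() {
		return fmt.Errorf("registry is not reachable")
	}
	if err := rc.Register(); err != nil {
		return err
	}
	defer rc.Unregister() // orderly shutdown unregisters the service
	work()
	return nil
}

func main() {
	reg := &fakeRegistry{}
	_ = runService(reg, func() { fmt.Println("registered while running:", reg.registered) })
	fmt.Println("registered after shutdown:", reg.registered)
}
```

The health-check side (a registry ceasing to report a failing service) is handled by the registry itself and is not shown here.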
$ sudo cp configuration.toml /var/snap/edgexfoundry/current/config/device-virtual/res/ $ sudo snap set edgexfoundry device-virtual=on The following error was seen in the journal: level=INFO app=device-virtual source=httpserver.go:94 msg=\"Web server starting (0.0.0.0:49990)\" error: fatal error; Host setting for Core Data client not configured Next I followed the same steps, but instead of completely removing the client, I instead set the client ports to invalid values. In this case the service logged the following errors and exited: level=ERROR app=device-virtual source=service.go:149 msg=\"DeviceServicForName failed: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\" level=ERROR app=device-virtual source=init.go:45 msg=\"Couldn't register to metadata service: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\\n\" Note - in order to run this second test, the easiest way to do so is to remove and reinstall the snap vs. manually wiping out device-virtual's configuration in Consul. I could have also stopped the service, modified the configuration directly in Consul, and restarted the service.","title":"Device Services"},{"location":"design/adr/0018-Service-Registry/#registry-client-interface-usage","text":"Next the service's usage of the go-mod-registry Client interface was examined: type Client interface { // Registers the current service with Registry for discover and health check Register() error // Un-registers the current service with Registry for discover and health check Unregister() error // Simply checks if Registry is up and running at the configured URL IsAlive() bool // Gets the service endpoint information for the target ID from the Registry GetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error) // Checks with the Registry if the target service is available, i.e. 
registered and healthy IsServiceAvailable(serviceId string) (bool, error) }","title":"Registry Client Interface Usage"},{"location":"design/adr/0018-Service-Registry/#summary","text":"If a device service is started with the registry flag set: Both Device SDKs register with the registry on startup, and unregister from the registry on normal shutdown. The Go SDK (device-sdk-go) queries the registry to check dependent service availability and health (via IsServiceAvailable ) on startup. Regardless of the registry setting, the Go SDK always sources the addresses of its dependent services from the Client* configuration stanzas. The C SDK queries the registry for the addresses of its dependent services. It pings the services directly to determine their availability and health.","title":"Summary"},{"location":"design/adr/0018-Service-Registry/#core-and-support-services","text":"The same approach was used for Core and Support services (i.e. reviewing the usage of go-mod-bootstrap's Client interface), and ironically, the SMA seems to be the only service in edgex-go that actually queries the registry for service location: ./internal/system/agent/getconfig/executor.go: ep, err := e.registryClient.GetServiceEndpoint(serviceName) ./internal/system/agent/direct/metrics.go: e, err := m.registryClient.GetServiceEndpoint(serviceName) In summary, other than the SMA's configuration and metrics logic, the Core and Support services behave in the same manner as device-sdk-go. Note - the SMA also has a longstanding issue #2486 where it continuously logs errors if one (or more) of the Support Services are not running. As described in the issue, this could be avoided if the SMA used the registry to determine if the services were actually available. 
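The registry-based lookup the SMA performs (and that the Decision below proposes for all services) amounts to building a client's base URL from a GetServiceEndpoint result instead of from a static Client* stanza. A minimal sketch, assuming only the interface quoted above; ServiceEndpoint mirrors go-mod-registry's types.ServiceEndpoint, and the lookup result shown is hypothetical:

```go
package main

import "fmt"

// ServiceEndpoint mirrors go-mod-registry's types.ServiceEndpoint.
type ServiceEndpoint struct {
	ServiceId string
	Host      string
	Port      int
}

// clientBaseURL shows how a service could derive a peer's base URL from a
// registry lookup result rather than from its Client* configuration stanza.
// In a real service the endpoint would come from
// registryClient.GetServiceEndpoint(serviceKey).
func clientBaseURL(ep ServiceEndpoint) string {
	return fmt.Sprintf("http://%s:%d", ep.Host, ep.Port)
}

func main() {
	// Hypothetical lookup result for Core Data.
	ep := ServiceEndpoint{ServiceId: "edgex-core-data", Host: "edgex-core-data", Port: 48080}
	fmt.Println(clientBaseURL(ep))
}
```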
See related issue #1662 ('Look at Driving \"Default Services List\" via Configuration').","title":"Core and Support Services"},{"location":"design/adr/0018-Service-Registry/#security-proxy-setup","text":"The security-proxy-setup service also relies on static service address configuration to configure the server routes for each of the services accessible through the API Gateway (aka Kong). Although it uses the same TOML-based client config keys as the other services, these configuration values are only ever read from the security-proxy-setup's local configuration.toml file, as the security services have never supported using our configuration provider (aka Consul). Note - Another point worth mentioning with respect to security services is that in the Geneva and Hanoi releases the service health checks registered by the services (and the associated IsServiceAvailable method) are used to orchestrate the ordered startup of the security services via a set of Consul scripts. This additional orchestration is only performed when EdgeX is deployed via docker, and is slated to be removed as part of the Ireland release.","title":"Security Proxy Setup"},{"location":"design/adr/0018-Service-Registry/#history","text":"After a bit of research reaching as far back as the California (0.6.1) release of EdgeX, I've managed to piece together why the current implementation works the way it does. This history focuses solely on the core and support services. The California release of EdgeX was released in June of 2018 and was the first to include services written using Go. This version of EdgeX as well as versions through the Fuji release all relied on a bootstrapping service called core-config-seed which was responsible for seeding the configuration of all of the core and support services into Consul prior to any of the services being started. 
This release actually preceded usage of TOML for configuration files, and instead just used a flat key/value format, with keys converted from legacy Java property names (e.g. meta.db.device.url ) to Camel[Pascal]/Case (e.g. MetaDeviceServiceURL). I chose the config key mentioned above on purpose: MetaDeviceURL = \"http://edgex-core-metadata:48081/api/v1/device\" Not only did this config key provide the address of core metadata, it also provided the path of a specific REST endpoint. In later releases of EdgeX, the address of the service and the specific endpoint paths were de-coupled. Instead of following the Service Name design (which was finalized two months earlier), the initial implementation followed the legacy Java implementation and initialized its service clients for each required REST endpoint (belonging to another EdgeX service) directly from the associated *URL config key read from Consul (if enabled) or directly from the configuration file. The shared client initialization code also created an Endpoint monitor goroutine and passed it a Go channel used by the service to receive updates to the REST API endpoint URL. This monitor goroutine effectively polled Consul every 15s (this became configurable in later versions) for the client's service address and if a change was detected, would write the updated endpoint URL to the given channel, effectively ensuring that the service started using the new URL. It wasn't until late in the Geneva development cycle that I noticed log messages which made me aware of the fact that every one of our services was making a REST call to check the address of a service endpoint every 15s, for every REST endpoint it used! 
An issue was filed (https://github.com/edgexfoundry/edgex-go/issues/2594), and the client monitoring was removed as part of the Geneva 1.2.1 release.","title":"History"},{"location":"design/adr/0018-Service-Registry/#problem-statement","text":"The fundamental problem with the existing implementations (as described above) is that there is too much duplication of configuration across services. For instance, Core Data's service port can easily be changed by passing the environment variable SERVICE_PORT to the service on startup. This overrides the configuration read from the configuration provider, and will cause Core Data to listen on the new port; however, it has no impact on any services which use Core Data, as the client config for each is read from the configuration provider (excluding security-proxy-setup). This means in order to change a service port, environment variable overrides (e.g. CLIENTS_COREDATA_PORT) need to be set for every client service as well as security-proxy-setup (if required).","title":"Problem Statement"},{"location":"design/adr/0018-Service-Registry/#decision","text":"Update the core, support, and security-proxy-setup services to use go-mod-registry's Client.GetServiceEndpoint method (if started with the --registry option) to determine (a) if a service dependency is available and (b) use the returned address information to initialize client endpoints (or setup the correct route in the case of proxy-setup). The same changes also need to be applied to the App Functions SDK and Go Device SDK, with only minor changes required in the C Device SDK (see previous comments re: the current implementation). Note - this design only works if service registration occurs before the service initializes its clients. 
For instance, Core Data and Core Metadata both depend on the other, and thus if both defer service registration till after client initialization, neither will be able to successfully look up the address of the other service.","title":"Decision"},{"location":"design/adr/0018-Service-Registry/#consquences","text":"One impact of this decision is that since the security-proxy-setup service currently runs before any of the core and support services are started, it would not be possible to implement this proposal without also modifying the service to use a lazy initialization of the API Gateway's routes. As such, the implementation of this ADR will require more design work with respect to security-proxy-setup. Some of the issues include: Splitting the configuration of the API Gateway from the service route initialization logic, either by making the service long-running or splitting route initialization into its own service. Handling registry and non-registry scenarios (i.e. add --registry command-line support to security-proxy-setup). Handling changes to service address information (i.e. dynamically update API Gateway routes if/when service addresses change). Finally, the proxy-setup's configuration needs to be updated so that its Route entries use service-keys instead of arbitrary names (e.g. Route.core-data vs. Route.CoreData ).","title":"Consequences"},{"location":"design/adr/0018-Service-Registry/#references","text":"[1] ADR 0001-Registry-Refactor [2] Consul [3] Service Name Design v6","title":"References"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/","text":"Device Services Send Events via Message Bus Status Context Decision Which Message Bus implementations? 
Go Device SDK C Device SDK Core Data and Persistence V2 Event DTO Validation Message Envelope Application Services MessageBus Topics Configuration Device Services [MessageQueue] Core Data [MessageQueue] Application Services [MessageBus] [Binding] Secure Connections Consequences Status Approved Context Currently EdgeX Events are sent from Device Services via HTTP to Core Data, which then puts the Events on the MessageBus after optionally persisting them to the database. This ADR details how Device Services will send EdgeX Events to other services via the EdgeX MessageBus. Note: Though this design is centered on device services, it does have cross-cutting impacts with other EdgeX services and modules. Note: This ADR is dependent on the Secret Provider for All to provide the secrets for secure Message Bus connections. Decision Which Message Bus implementations? Multiple Device Services may need to be publishing Events to the MessageBus concurrently. ZMQ will not be a valid option if multiple Device Services are configured to publish. This is because ZMQ only allows for a single publisher. ZMQ will still be valid if only one Device Service is publishing Events. MQTT and Redis Streams are valid options to use when multiple Device Services are required, as they both support multiple publishers. These are the only other implementations currently available for Go services. The C-based device services do not yet have a MessageBus implementation. See the C Device SDK below for details. Note: Documentation will need to be clear when ZMQ can be used and when it cannot be used. Go Device SDK The Go Device SDK will take advantage of the existing go-mod-messaging module to enable use of the EdgeX MessageBus. A new bootstrap handler will be created which initializes the MessageBus client based on configuration. See Configuration section below for details. 
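The publish-vs-POST switch the SDK sections describe can be sketched as a configuration-driven decision. This is an illustration only: the Event and MessageQueueInfo types below are simplified stand-ins for the V2 Event DTO and the [MessageQueue] configuration section proposed later in this ADR, and sendEvent is a hypothetical helper, not SDK API.

```go
package main

import "fmt"

// Event stands in for the V2 Event DTO (simplified for illustration).
type Event struct{ DeviceName string }

// MessageQueueInfo sketches the new [MessageQueue] configuration section.
type MessageQueueInfo struct{ Enabled bool }

// sendEvent illustrates the behavior described here: publish to the
// MessageBus when [MessageQueue].Enabled is true (the default), otherwise
// fall back to the existing POST to Core Data.
func sendEvent(cfg MessageQueueInfo, e Event) string {
	if cfg.Enabled {
		return "published " + e.DeviceName + " to MessageBus"
	}
	return "POSTed " + e.DeviceName + " to Core Data"
}

func main() {
	fmt.Println(sendEvent(MessageQueueInfo{Enabled: true}, Event{DeviceName: "Random-Integer-Device1"}))
	fmt.Println(sendEvent(MessageQueueInfo{Enabled: false}, Event{DeviceName: "Random-Integer-Device1"}))
}
```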
The Go Device SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details. C Device SDK The C Device SDK will implement its own MessageBus abstraction similar to the one in go-mod-messaging . The first implementation type (MQTT or Redis Streams) is TBD. Using this abstraction allows for future implementations to be added when use cases warrant the additional implementations. As with the Go SDK, the C SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details. Core Data and Persistence With this design, Events will be sent directly to Application Services without going through Core Data and thus will not be persisted unless changes are made to Core Data. To allow Events to optionally continue to be persisted, Core Data will become an additional or secondary (and optional) subscriber for the Events from the MessageBus. The Events will be persisted when they are received. Core Data will also retain the ability to receive Events via HTTP, persist them and publish them to the MessageBus as is done today. This allows for the flexibility to have some device services configured to POST Events and some configured to publish Events while we transition all Device Services to be capable of publishing Events. In the future, once this new Publish approach has been proven, we may decide to remove POSTing Events to Core Data from the Device SDKs. The existing PersistData setting will be ignored by the code path subscribing to Events since the only reason to do this is to persist the Events. 
There is a race condition for Marked As Pushed when Core Data is persisting Events received from the MessageBus. Core Data may not have finished persisting an Event before the Application Service has processed the Event and requested the Event be Marked As Pushed . It was decided to remove the Mark as Pushed capability and just rely on time-based scrubbing of old Events. V2 Event DTO As this development will be part of the Ireland release, all Events published to the MessageBus will use the V2 Event DTO. This is already implemented in Core Data for the V2 AddEvent API. Validation Services receiving the Event DTO from the MessageBus will log validation errors and stop processing the Event. Message Envelope EdgeX Go Services currently use a custom Message Envelope for all data that is published to the MessageBus. This envelope wraps the data with metadata, which is ContentType (JSON or CBOR), Correlation-Id and the obsolete Checksum . The Checksum is used when the data is CBOR encoded to identify the Event in the V1 API to mark it as pushed. This checksum is no longer needed as the V2 Event DTO requires the ID be set by the Device Services which will always be used in the V2 API to mark the Events as pushed. The Message Envelope will be updated to remove this property. The C SDK will recreate this Message Envelope. Application Services As part of the V2 API consumption work in Ireland the App Services SDK will be changed to expect to receive V2 Event DTOs rather than the V1 Event model. It will also be updated to no longer expect or use the Checksum currently on the Message Envelope. Note these changes must occur for the V2 consumption and are not directly tied to this effort. The App Service SDK will be enhanced for the secure MessageBus connection described below. See Secure Connections for details. MessageBus Topics Note: The change recommended here is not required for this design, but it provides a good opportunity to adopt it. 
Currently Core Data publishes Events to the simple events topic. All Application Services running receive every Event published, whether they want them or not. The Events can be filtered out using the FilterByDeviceName or FilterByResourceName pipeline functions, but the Application Services still receives every Event and processes all the Events to some extent. This could cause load issues in a deployment with many devices and a large volume of Events from various devices or a very verbose device that the Application Services is not interested in. Note: The current FilterByDeviceName is only good if the device name is known statically and the only instance of the device defined by the DeviceProfileName . What we really need is FilterByDeviceProfileName which allows multiple instances of a device to be filtered for, rather than a single instance as it is now. The V2 API will be adding DeviceProfileName to the Events, so in Ireland this filter will be possible. Pub/Sub systems have advanced topic schema, which we can take advantage of from Application Services to filter for just the Events the Application Service actually wants. Publishers of Events must add the DeviceProfileName , DeviceName and SourceName to the topic in the form edgex/events/<DeviceProfileName>/<DeviceName>/<SourceName> . The SourceName is the Resource or Command name used to create the Event. 
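Forming the publish topic described above is simple string concatenation of the prefix and the Event's three names. A minimal sketch; the function name is illustrative, not SDK API:

```go
package main

import "fmt"

// publishTopic sketches how a publisher forms the topic described above by
// appending the Event's DeviceProfileName, DeviceName and SourceName to the
// "edgex/events" prefix.
func publishTopic(prefix, profileName, deviceName, sourceName string) string {
	return fmt.Sprintf("%s/%s/%s/%s", prefix, profileName, deviceName, sourceName)
}

func main() {
	fmt.Println(publishTopic("edgex/events", "Random-Integer-Device", "Random-Integer-Device1", "Int16"))
}
```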
This allows Application Services to filter for just the Events from the device(s) it wants by only subscribing to those DeviceProfileNames or the specific DeviceNames or just the specific SourceNames. Example subscribe topics if the above schema is used: edgex/events/# All Events Core Data will subscribe using this topic schema edgex/events/Random-Integer-Device/# Any Events from devices created from the Random-Integer-Device device profile edgex/events/Random-Integer-Device/Random-Integer-Device1 Only Events from the Random-Integer-Device1 Device edgex/events/Random-Integer-Device/#/Int16 Any Events with Readings from Int16 device resource from devices created from the Random-Integer-Device device profile. edgex/events/Modbus-Device/#/HVACValues Any Events with Readings from HVACValues device command from devices created from the Modbus-Device device profile. The MessageBus abstraction allows for multiple subscriptions, so an Application Service could specify to receive data from multiple specific device profiles or devices by creating multiple subscriptions. i.e. edgex/Events/Random-Integer-Device/# and edgex/Events/Random-Boolean-Device/# . Currently the App SDK only allows for a single subscription topic to be configured, but that could easily be expanded to handle a list of subscriptions. See Configuration section below for details. Core Data's existing publishing of Events would also need to be changed to use this new topic schema. One challenge with this is Core Data doesn't currently know the DeviceProfileName or DeviceName when it receives a CBOR encoded event. This is because it doesn't decode the Event until after it has published it to the MessageBus. Also, Core Data doesn't know of SourceName at all. The V2 API will be enhanced to change the AddEvent endpoint from /event to /event/{profile}/{device}/{source} so that DeviceProfileName , DeviceName , and SourceName are always known no matter how the request is encoded. 
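The subscription examples above imply a wildcard-matching rule. The sketch below is an illustration of that rule as the examples use it, not a real MessageBus implementation: a trailing "#" matches any remaining levels, while a "#" in the middle of a filter (as in edgex/events/Random-Integer-Device/#/Int16) is assumed to match exactly one level. Actual implementations (e.g. MQTT brokers) have their own wildcard semantics.

```go
package main

import (
	"fmt"
	"strings"
)

// topicMatches is a minimal sketch of the subscription matching implied by
// the examples above: trailing "#" matches everything below that level,
// and a mid-filter "#" matches exactly one level (an assumption).
func topicMatches(filter, topic string) bool {
	f := strings.Split(filter, "/")
	t := strings.Split(topic, "/")
	for i, seg := range f {
		if seg == "#" && i == len(f)-1 {
			return true // trailing # matches any remaining levels
		}
		if i >= len(t) {
			return false
		}
		if seg != "#" && seg != t[i] {
			return false
		}
	}
	return len(f) == len(t)
}

func main() {
	topic := "edgex/events/Random-Integer-Device/Random-Integer-Device1/Int16"
	fmt.Println(topicMatches("edgex/events/#", topic))
	fmt.Println(topicMatches("edgex/events/Random-Integer-Device/#/Int16", topic))
	fmt.Println(topicMatches("edgex/events/Random-Boolean-Device/#", topic))
}
```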
This new topic approach will be enabled via each publisher's PublishTopic having the DeviceProfileName , DeviceName and SourceName added to the configured PublishTopicPrefix : PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix See Configuration section below for details. Configuration Device Services All Device services will have the following additional configuration to allow connecting and publishing to the MessageBus. As described above in the MessageBus Topics section, the PublishTopic will include the DeviceProfileName and DeviceName . [MessageQueue] A MessageQueue section will be added, which is similar to that used in Core Data today, but with PublishTopicPrefix instead of Topic . To enable secure connections, the Username & Password have been replaced with ClientAuth & SecretPath . See Secure Connections section below for details. The added Enabled property controls whether the Device Service publishes to the MessageBus or POSTs to Core Data. [MessageQueue] Enabled = true Protocol = \"tcp\" Host = \"localhost\" Port = 1883 Type = \"mqtt\" PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none` Core Data Core data will also require additional configuration to be able to subscribe to receive Events from the MessageBus. 
As described above in the MessageBus Topics section, the PublishTopicPrefix will have DeviceProfileName and DeviceName added to create the actual Publish Topic. [MessageQueue] The MessageQueue section will be changed so that the Topic property changes to PublishTopicPrefix and SubscribeEnabled and SubscribeTopic will be added. As with device services configuration, the Username & Password have been replaced with ClientAuth & SecretPath for secure connections. See Secure Connections section below for details. In addition, the Boolean SubscribeEnabled property will be used to control if the service subscribes to Events from the MessageBus or not. [MessageQueue] Protocol = \"tcp\" Host = \"localhost\" Port = 1883 Type = \"mqtt\" PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix SubscribeEnabled = true SubscribeTopic = \"edgex/events/#\" [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"edgex-core-data\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none` Application Services [MessageBus] Similar to above, the Application Services MessageBus configuration will change to allow for secure connection to the MessageBus. The Username & Password have been replaced with ClientAuth & SecretPath for secure connections. See Secure Connections section below for details. 
[MessageBus.Optional] # MQTT Specific options # Client Identifiers ClientId = \"\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none` [Binding] The Binding configuration section will require changes for the subscribe topics scheme described in the MessageBus Topics section above to filter for Events from specific device profiles or devices. SubscribeTopic will change from a string property containing a single topic to the SubscribeTopics string property containing a comma-separated list of topics. This allows for the flexibility for the property to be a single topic with the # wildcard so the Application Service receives all Events as it does today. Receive only Events from the Random-Integer-Device and Random-Boolean-Device profiles [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device\" Receive only Events from the Random-Integer-Device1 device from the Random-Integer-Device profile [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/Random-Integer-Device/Random-Integer-Device1\" or receive all Events: [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/#\" Secure Connections As stated earlier, this ADR is dependent on the Secret Provider for All ADR to provide a common Secret Provider for all EdgeX Services to access their secrets. 
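Expanding the new SubscribeTopics setting shown in the [Binding] examples above into one subscription per topic is a simple split-and-trim. A sketch, assuming only the comma-separated format the ADR describes; the function name is hypothetical, not App SDK API:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSubscribeTopics splits the comma-separated SubscribeTopics setting
// into individual topics, trimming whitespace and dropping empty entries,
// so the SDK can create one subscription per topic.
func parseSubscribeTopics(setting string) []string {
	var topics []string
	for _, t := range strings.Split(setting, ",") {
		if t = strings.TrimSpace(t); t != "" {
			topics = append(topics, t)
		}
	}
	return topics
}

func main() {
	fmt.Println(parseSubscribeTopics("edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device"))
}
```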
Once this is available, the MessageBus connection can be secured via the following configurable client authentication modes, which follow a similar implementation to the secure MQTT Export and secure MQTT Trigger used in Application Services. none - No authentication usernamepassword - Username & password authentication. clientcert - Client certificate and key for authentication. The secrets specified for the above options are pulled from the Secret Provider using the configured SecretPath . How the secrets are injected into the Secret Provider is out of scope for this ADR and covered in the Secret Provider for All ADR. Consequences If the C SDK doesn't support ZMQ or Redis Streams then there must be an MQTT Broker running when a C Device service is in use and configured to publish to the MessageBus. Since we've adopted the publish topic scheme with DeviceProfileName and DeviceName the V2 API must restrict the characters used in device names to those allowed in a topic. An issue for the V2 API already exists for restricting the allowable characters to RFC 3986 , which will suffice. Newer ZMQ may allow for multiple publishers. Requires investigation and very likely rework of the ZMQ implementation in go-mod-messaging. No alternative has been found. The Mark as Pushed V2 API will be removed from Core Data, the Core Data Client and the App SDK Consider moving App Service Binding to Writable. (out of scope for this ADR)","title":"Device Services Send Events via Message Bus"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#device-services-send-events-via-message-bus","text":"Status Context Decision Which Message Bus implementations? 
Go Device SDK C Device SDK Core Data and Persistence V2 Event DTO Validation Message Envelope Application Services MessageBus Topics Configuration Device Services [MessageQueue] Core Data [MessageQueue] Application Services [MessageBus] [Binding] Secure Connections Consequences","title":"Device Services Send Events via Message Bus"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#status","text":"Approved","title":"Status"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#context","text":"Currently EdgeX Events are sent from Device Services via HTTP to Core Data, which then puts the Events on the MessageBus after optionally persisting them to the database. This ADR details how Device Services will send EdgeX Events to other services via the EdgeX MessageBus. Note: Though this design is centered on device services, it does have cross-cutting impacts with other EdgeX services and modules Note: This ADR is dependent on the Secret Provider for All to provide the secrets for secure Message Bus connections.","title":"Context"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#decision","text":"","title":"Decision"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#which-message-bus-implementations","text":"Multiple Device Services may need to be publishing Events to the MessageBus concurrently. ZMQ will not be a valid option if multiple Device Services are configured to publish. This is because ZMQ only allows for a single publisher. ZMQ will still be valid if only one Device Service is publishing Events. The MQTT and Redis Streams are valid options to use when multiple Device Services are required, as they both support multiple publishers. These are the only other implementations currently available for Go services. The C-based device services do not yet have a MessageBus implementation. See the C Device SDK below for details. 
Note: Documentation will need to be clear when ZMQ can be used and when it cannot be used.","title":"Which Message Bus implementations?"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#go-device-sdk","text":"The Go Device SDK will take advantage of the existing go-mod-messaging module to enable use of the EdgeX MessageBus. A new bootstrap handler will be created which initializes the MessageBus client based on configuration. See Configuration section below for details. The Go Device SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details.","title":"Go Device SDK"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#c-device-sdk","text":"The C Device SDK will implement its own MessageBus abstraction similar to the one in go-mod-messaging . The first implementation type (MQTT or Redis Streams) is TBD. Using this abstraction allows for future implementations to be added when use cases warrant the additional implementations. As with the Go SDK, the C SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details.","title":"C Device SDK"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#core-data-and-persistence","text":"With this design, Events will be sent directly to Application Services without going through Core Data and thus will not be persisted unless changes are made to Core Data. To allow Events to optionally continue to be persisted, Core Data will become an additional or secondary (and optional) subscriber for the Events from the MessageBus. The Events will be persisted when they are received. 
Core Data will also retain the ability to receive Events via HTTP, persist them and publish them to the MessageBus as is done today. This allows for the flexibility to have some device services configured to POST Events and some configured to publish Events while we transition the Device Services to all having the capability to publish Events. In the future, once this new Publish approach has been proven, we may decide to remove POSTing Events to Core Data from the Device SDKs. The existing PersistData setting will be ignored by the code path subscribing to Events since the only reason to do this is to persist the Events. There is a race condition for Marked As Pushed when Core Data is persisting Events received from the MessageBus. Core Data may not have finished persisting an Event before the Application Service has processed the Event and requested the Event be Marked As Pushed . It was decided to remove the Mark as Pushed capability and just rely on time based scrubbing of old Events.","title":"Core Data and Persistence"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#v2-event-dto","text":"As this development will be part of the Ireland release, all Events published to the MessageBus will use the V2 Event DTO. This is already implemented in Core Data for the V2 AddEvent API.","title":"V2 Event DTO"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#validation","text":"Services receiving the Event DTO from the MessageBus will log validation errors and stop processing the Event.","title":"Validation"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#message-envelope","text":"EdgeX Go Services currently use a custom Message Envelope for all data that is published to the MessageBus. This envelope wraps the data with metadata, which is ContentType (JSON or CBOR), Correlation-Id and the obsolete Checksum . The Checksum is used when the data is CBOR encoded to identify the Event in the V1 API in order to mark it as pushed. 
This checksum is no longer needed as the V2 Event DTO requires the ID be set by the Device Services, which will always be used in the V2 API to mark the Events as pushed. The Message Envelope will be updated to remove this property. The C SDK will recreate this Message Envelope.","title":"Message Envelope"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#application-services","text":"As part of the V2 API consumption work in Ireland the App Services SDK will be changed to expect to receive V2 Event DTOs rather than the V1 Event model. It will also be updated to no longer expect or use the Checksum currently on the Message Envelope. Note these changes must occur for the V2 consumption and are not directly tied to this effort. The App Service SDK will be enhanced for the secure MessageBus connection described below. See Secure Connections for details","title":"Application Services"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagebus-topics","text":"Note: The change recommended here is not required for this design, but it provides a good opportunity to adopt it. Currently Core Data publishes Events to the simple events topic. All Application Services running receive every Event published, whether they want them or not. The Events can be filtered out using the FilterByDeviceName or FilterByResourceName pipeline functions, but the Application Services still receives every Event and process all the Events to some extent. This could cause load issues in a deployment with many devices and large volume of Events from various devices or a very verbose device that the Application Services is not interested in. Note: The current FilterByDeviceName is only good if the device name is known statically and the only instance of the device defined by the DeviceProfileName . What we really need is FilterByDeviceProfileName which allows multiple instances of a device to be filtered for, rather than a single instance as it is now. 
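The updated Message Envelope with the obsolete Checksum property removed can be sketched as follows. This is an illustrative struct, not the actual go-mod-messaging type; the field names follow the metadata described above (ContentType, Correlation-Id, payload):

```go
package main

import "fmt"

// MessageEnvelope sketches the updated envelope after the obsolete
// Checksum property is removed. Illustrative only; field names are
// assumptions based on the metadata described in the text above.
type MessageEnvelope struct {
	CorrelationID string // Correlation-Id used for tracing the request
	ContentType   string // "application/json" or "application/cbor"
	Payload       []byte // encoded V2 Event DTO
}

func main() {
	env := MessageEnvelope{
		CorrelationID: "14a42ea6-c394-41c3-8bcd-a29b9f5e6835",
		ContentType:   "application/json",
		Payload:       []byte(`{"apiVersion":"v2"}`),
	}
	fmt.Println(env.ContentType) // application/json
}
```

The Event's ID inside the payload, rather than an envelope-level checksum, is what identifies the Event in the V2 API.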
The V2 API will be adding DeviceProfileName to the Events, so in Ireland this filter will be possible. Pub/Sub systems have advanced topic schema, which we can take advantage of from Application Services to filter for just the Events the Application Service actually wants. Publishers of Events must add the DeviceProfileName , DeviceName and SourceName to the topic in the form edgex/events/<DeviceProfileName>/<DeviceName>/<SourceName> . The SourceName is the Resource or Command name used to create the Event. This allows Application Services to filter for just the Events from the device(s) it wants by only subscribing to those DeviceProfileNames or the specific DeviceNames or just the specific SourceNames Example subscribe topics if the above schema is used: edgex/events/# All Events Core Data will subscribe using this topic schema edgex/events/Random-Integer-Device/# Any Events from devices created from the Random-Integer-Device device profile edgex/events/Random-Integer-Device/Random-Integer-Device1 Only Events from the Random-Integer-Device1 Device edgex/events/Random-Integer-Device/#/Int16 Any Events with Readings from the Int16 device resource from devices created from the Random-Integer-Device device profile. edgex/events/Modbus-Device/#/HVACValues Any Events with Readings from the HVACValues device command from devices created from the Modbus-Device device profile. The MessageBus abstraction allows for multiple subscriptions, so an Application Service could specify to receive data from multiple specific device profiles or devices by creating multiple subscriptions. i.e. edgex/events/Random-Integer-Device/# and edgex/events/Random-Boolean-Device/# . Currently the App SDK only allows for a single subscription topic to be configured, but that could easily be expanded to handle a list of subscriptions. See Configuration section below for details. Core Data's existing publishing of Events would also need to be changed to use this new topic schema. 
One challenge with this is that Core Data doesn't currently know the DeviceProfileName or DeviceName when it receives a CBOR encoded event. This is because it doesn't decode the Event until after it has published it to the MessageBus. Also, Core Data doesn't know of SourceName at all. The V2 API will be enhanced to change the AddEvent endpoint from /event to /event/{profile}/{device}/{source} so that DeviceProfileName , DeviceName , and SourceName are always known no matter how the request is encoded. This new topic approach will be enabled via each publisher's PublishTopic having the DeviceProfileName , DeviceName and SourceName added to the configured PublishTopicPrefix PublishTopicPrefix = \"edgex/events\" # <DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix See Configuration section below for details.","title":"MessageBus Topics"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#configuration","text":"","title":"Configuration"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#device-services","text":"All Device services will have the following additional configuration to allow connecting and publishing to the MessageBus. As described above in the MessageBus Topics section, the PublishTopic will include the DeviceProfileName and DeviceName .","title":"Device Services"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagequeue","text":"A MessageQueue section will be added, which is similar to that used in Core Data today, but with PublishTopicPrefix instead of Topic . To enable secure connections, the Username & Password have been replaced with ClientAuth & SecretPath . See Secure Connections section below for details. The added Enabled property controls whether the Device Service publishes to the MessageBus or POSTs to Core Data. 
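The publish-topic construction described above is a simple string concatenation of the configured prefix and the three Event identifiers. A minimal Go sketch (the function name is illustrative, not the actual SDK code):

```go
package main

import "fmt"

// BuildPublishTopic appends DeviceProfileName, DeviceName and SourceName
// to the configured PublishTopicPrefix, per the topic schema described
// above. Hypothetical helper, not the go-mod-messaging or SDK API.
func BuildPublishTopic(prefix, profileName, deviceName, sourceName string) string {
	return fmt.Sprintf("%s/%s/%s/%s", prefix, profileName, deviceName, sourceName)
}

func main() {
	topic := BuildPublishTopic("edgex/events",
		"Random-Integer-Device", "Random-Integer-Device1", "Int16")
	fmt.Println(topic) // edgex/events/Random-Integer-Device/Random-Integer-Device1/Int16
}
```

This is why the V2 AddEvent endpoint must carry profile, device and source in the URL: Core Data needs all three values to build the topic without decoding a CBOR payload.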
[MessageQueue] Enabled = true Protocol = \"tcp\" Host = \"localhost\" Port = 1883 Type = \"mqtt\" PublishTopicPrefix = \"edgex/events\" # <DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`","title":"[MessageQueue]"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#core-data","text":"Core Data will also require additional configuration to be able to subscribe to receive Events from the MessageBus. As described above in the MessageBus Topics section, the PublishTopicPrefix will have DeviceProfileName and DeviceName added to create the actual Publish Topic.","title":"Core Data"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagequeue_1","text":"The MessageQueue section will be changed so that the Topic property changes to PublishTopicPrefix and SubscribeEnabled and SubscribeTopic will be added. As with the device services configuration, the Username & Password have been replaced with ClientAuth & SecretPath for secure connections. See Secure Connections section below for details. In addition, the Boolean SubscribeEnabled property will be used to control if the service subscribes to Events from the MessageBus or not. 
[MessageQueue] Protocol = \"tcp\" Host = \"localhost\" Port = 1883 Type = \"mqtt\" PublishTopicPrefix = \"edgex/events\" # <DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix SubscribeEnabled = true SubscribeTopic = \"edgex/events/#\" [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"edgex-core-data\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`","title":"[MessageQueue]"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#application-services_1","text":"","title":"Application Services"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagebus","text":"Similar to above, the Application Services MessageBus configuration will change to allow for a secure connection to the MessageBus. The Username & Password have been replaced with ClientAuth & SecretPath for secure connections. See Secure Connections section below for details. 
[MessageBus.Optional] # MQTT Specific options # Client Identifiers ClientId = \"\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`","title":"[MessageBus]"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#binding","text":"The Binding configuration section will require changes for the subscribe topics scheme described in the MessageBus Topics section above to filter for Events from specific device profiles or devices. SubscribeTopic will change from a string property containing a single topic to the SubscribeTopics string property containing a comma separated list of topics. This provides the flexibility for the property to be a single topic with the # wild card so the Application Service receives all Events as it does today. Receive only Events from the Random-Integer-Device and Random-Boolean-Device profiles [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device\" Receive only Events from the Random-Integer-Device1 from the Random-Integer-Device profile [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/Random-Integer-Device/Random-Integer-Device1\" or receive all Events: [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/#\"","title":"[Binding]"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#secure-connections","text":"As stated earlier, this ADR is dependent on the Secret Provider for All ADR to provide a common Secret Provider for all EdgeX Services to access their secrets. 
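Splitting the comma separated SubscribeTopics value into individual subscriptions could be done along these lines in Go (the function is a hypothetical sketch of what the App SDK would do, not its actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// ParseSubscribeTopics splits the comma separated SubscribeTopics
// configuration value into individual topics, trimming whitespace around
// each entry. Hypothetical helper illustrating the new setting.
func ParseSubscribeTopics(value string) []string {
	var topics []string
	for _, t := range strings.Split(value, ",") {
		if t = strings.TrimSpace(t); t != "" {
			topics = append(topics, t)
		}
	}
	return topics
}

func main() {
	value := "edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device"
	for _, topic := range ParseSubscribeTopics(value) {
		fmt.Println(topic) // one subscription per topic
	}
}
```

Each resulting topic would become its own subscription on the MessageBus, which is how an Application Service filters for multiple device profiles at once.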
Once this is available, the MessageBus connection can be secured via the following configurable client authentication modes, which follow a similar implementation to the secure MQTT Export and secure MQTT Trigger used in Application Services. none - No authentication usernamepassword - Username & password authentication. clientcert - Client certificate and key for authentication. The secrets specified for the above options are pulled from the Secret Provider using the configured SecretPath . How the secrets are injected into the Secret Provider is out of scope for this ADR and covered in the Secret Provider for All ADR.","title":"Secure Connections"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#consequences","text":"If the C SDK doesn't support ZMQ or Redis Streams then there must be an MQTT Broker running when a C Device service is in use and configured to publish to the MessageBus. Since we've adopted the publish topic scheme with DeviceProfileName and DeviceName the V2 API must restrict the characters used in device names to those allowed in a topic. An issue for the V2 API already exists for restricting the allowable characters to RFC 3986 , which will suffice. Newer ZMQ may allow for multiple publishers. Requires investigation and very likely rework of the ZMQ implementation in go-mod-messaging. No alternative has been found. The Mark as Pushed V2 API will be removed from Core Data, the Core Data Client and the App SDK Consider moving App Service Binding to Writable. (out of scope for this ADR)","title":"Consequences"},{"location":"design/adr/014-Secret-Provider-For-All/","text":"Secret Provider for All Status Context Existing Implementations What is a Secret? 
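Validating the configured ClientAuth mode against the three values above is a simple switch. A hedged sketch (the function name is illustrative, not from go-mod-messaging):

```go
package main

import "fmt"

// ValidateClientAuth checks a configured ClientAuth value against the
// three authentication modes described above. Hypothetical helper, not
// the actual EdgeX implementation.
func ValidateClientAuth(mode string) error {
	switch mode {
	case "none", "usernamepassword", "clientcert":
		return nil
	default:
		return fmt.Errorf(
			"invalid ClientAuth value '%s': must be `none`, `usernamepassword` or `clientcert`", mode)
	}
}

func main() {
	fmt.Println(ValidateClientAuth("clientcert")) // <nil>
	fmt.Println(ValidateClientAuth("token"))      // invalid ClientAuth value ...
}
```

When the mode is not `none`, the service would then pull the corresponding username/password or certificate/key secrets from the Secret Provider at the configured SecretPath.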
Service Exclusive vs Service Shared Secrets Known and Unknown Services Static Secrets and Runtime Secrets Interfaces and factory methods Bootstrap's current implementation Interfaces Factory and bootstrap handler methods App SDK's current implementation Interface Factory and bootstrap handler methods Secret Store for non-secure mode InsecureSecrets Configuration Decision Only Exclusive Secret Stores Abstraction Interface Implementation Factory Method and Bootstrap Handler Caching of Secrets Insecure Secrets Handling on-the-fly changes to InsecureSecrets Mocks Where will SecretProvider reside? Go Services C Device Service Consequences Status Approved Context This ADR defines the new SecretProvider abstraction that will be used by all EdgeX services, including Device Services. The Secret Provider is used by services to retrieve secrets from the Secret Store. The Secret Store, in secure mode, is currently Vault. In non-secure mode it is configuration in some form, i.e. DatabaseInfo configuration or InsecureSecrets configuration for Application Services. Existing Implementations The Secret Provider abstraction defined in this ADR is based on the Secret Provider abstraction implementations in the Application Functions SDK (App SDK) for Application Services and the one in go-mod-bootstrap (Bootstrap) used by the Core, Support & Security services in edgex-go. Device Services do not currently use secure secrets. The App SDK implementation was initially based on the Bootstrap implementation. 
The similarities and differences between these implementations are: Both wrap the SecretClient from go-mod-secrets Both initialize the SecretClient based on the SecretStore configuration(s) Both have factory methods, but they differ greatly Both implement the GetDatabaseCredentials API Bootstrap's uses split interface definitions ( CredentialsProvider & CertificateProvider ) while the App SDK's uses a single interface ( SecretProvider ) for the abstraction Bootstrap's includes the bootstrap handler while the App SDK's has the bootstrap handler separated out Bootstrap's implements the GetCertificateKeyPair API, which the App SDK's does not App SDK's implements the following, which the Bootstrap's does not Initialize API (Bootstrap's initialization is done by the bootstrap handler) StoreSecrets API GetSecrets API InsecureSecretsUpdated API SecretsLastUpdated API Wraps a second SecretClient for the Application Service instance's exclusive secrets. Used by the StoreSecrets & GetSecrets APIs The standard SecretClient is considered the shared client for secrets that all Application Service instances share. It is only used by the GetDatabaseCredentials API Configuration based secret store for non-secure mode called InsecureSecrets Caching of secrets Needed so that secrets used by pipeline functions do not cause a call out to Vault for every Event processed What is a Secret? A secret is a collection of key/value pairs stored in a SecretStore at a specified path whose values are sensitive in nature. Redis database credentials are an example of a Secret which contains the username and password key/values stored at the redisdb path. Service Exclusive vs Service Shared Secrets Service Exclusive secrets are those that are exclusive to the instance of the running service. 
An example of exclusive secrets are the HTTP Auth tokens used by two running instances of app-service-configurable (http-export) which export different device Events to different endpoints with different Auth tokens in the HTTP headers. Service Exclusive secrets are seeded by POSTing the secrets to the /api/vX/secrets endpoint on the running instance of each Application Service. Service Shared secrets are those that all instances of a class of service, such as Application Services, share. Think of Core Data as its own class of service. An example of shared secrets are the database credentials for the single database instance for Store and Forward data that all Application Services may need to access. Another example is the database credentials for each instance of Core Data. It is shared, but only one instance of Core Data is currently ever run. Service Shared secrets are seeded by security-secretstore-setup using static configuration for static secrets for known services. Currently database credentials are the only shared secrets. In the future we may have Message Bus credentials as shared secrets, but these will be truly shared secrets for all services to securely connect to the Message Bus, not just shared between instances of a service. Application Services currently have the ability to configure SecretStores for Service Exclusive and/or Service Shared secrets depending on their needs. Known and Unknown Services Known Services are those identified in the static configuration by security-secretstore-setup These currently are Core Data, Core Metadata, Support Notifications, Support Scheduler and Application Service (class) Unknown Services are those not known in the static configuration that become known when added to the Docker compose file or Snap. Application Service instances are examples of these services. A service exclusive SecretStore can be created for these services by adding the service's unique name , i.e. 
appservice-http-export, to the ADD_SECRETSTORE_TOKENS environment variable for security-secretstore-setup ADD_SECRETSTORE_TOKENS: \"appservice-http-export, appservice-mqtt-export\" This creates an exclusive secret store token for each service listed. The name provided for each service must be used in the service's SecretStore configuration and Docker volume mount (if applicable). Typically the configuration is set via environment overrides or is already in an existing configuration profile ( http-export profile for app-service-configurable). Example docker-compose file entries: environment : ... SecretStoreExclusive_Path : \"/v1/secret/edgex/appservice-http-export/\" TokenFile : \"/tmp/edgex/secrets/appservice-http-export/secrets-token.json\" volumes : ... - /tmp/edgex/secrets/appservice-http-export:/tmp/edgex/secrets/appservice-http-export:ro,z Static Secrets and Runtime Secrets Static Secrets are those identified by name in the static configuration whose values are randomly generated at seed time. These secrets are seeded on start-up of EdgeX. Database credentials are currently the only secrets of this type. Runtime Secrets are those not known in the static configuration and that become known during run time. These secrets are seeded at run time via the Application Services /api/vX/secrets endpoint. HTTP header authorization credentials for HTTP Export are examples of these secrets Interfaces and factory methods Bootstrap's current implementation Interfaces type CredentialsProvider interface { GetDatabaseCredentials(database config.Database) (config.Credentials, error) } and type CertificateProvider interface { GetCertificateKeyPair(path string) (config.CertKeyPair, error) } Factory and bootstrap handler methods type SecretProvider struct { secretClient pkg.SecretClient } func NewSecret() *SecretProvider { return &SecretProvider{} } func (s *SecretProvider) BootstrapHandler(ctx context.Context, _ *sync.
WaitGroup, startupTimer startup.Timer, dic *di.Container) bool { ... Initializes the SecretClient and adds it to the DIC for both interfaces. ... } App SDK's current implementation Interface type SecretProvider interface { Initialize(_ context.Context) bool StoreSecrets(path string, secrets map[string]string) error GetSecrets(path string, _ ...string) (map[string]string, error) GetDatabaseCredentials(database db.DatabaseInfo) (common.Credentials, error) InsecureSecretsUpdated() SecretsLastUpdated() time.Time } Factory and bootstrap handler methods type SecretProviderImpl struct { SharedSecretClient pkg.SecretClient ExclusiveSecretClient pkg.SecretClient secretsCache map[string]map[string]string // secret's path, key, value configuration *common.ConfigurationStruct cacheMuxtex *sync.Mutex loggingClient logger.LoggingClient //used to track when secrets have last been retrieved LastUpdated time.Time } func NewSecretProvider(loggingClient logger.LoggingClient, configuration *common.ConfigurationStruct) *SecretProviderImpl { sp := &SecretProviderImpl{ secretsCache: make(map[string]map[string]string), cacheMuxtex: &sync.Mutex{}, configuration: configuration, loggingClient: loggingClient, LastUpdated: time.Now(), } return sp } type Secrets struct { } func NewSecrets() *Secrets { return &Secrets{} } func (_ *Secrets) BootstrapHandler(ctx context.Context, _ *sync.WaitGroup, startupTimer startup.Timer, dic *di.Container) bool { ... Creates NewSecretProvider, calls Initialize() and adds it to the DIC ... } Secret Store for non-secure mode Both Bootstrap's and App SDK's implementations use the DatabaseInfo configuration for the GetDatabaseCredentials API in non-secure mode. The App SDK only uses it, for backward compatibility, if the database credentials are not found in the new InsecureSecrets configuration section. 
For Ireland it was planned to only use the new InsecureSecrets configuration section in non-secure mode. Note: Redis credentials are blank in non-secure mode Core Data [Databases] [Databases.Primary] Host = \"localhost\" Name = \"coredata\" Username = \"\" Password = \"\" Port = 6379 Timeout = 5000 Type = \"redisdb\" Application Services [Database] Type = \"redisdb\" Host = \"localhost\" Port = 6379 Username = \"\" Password = \"\" Timeout = \"30s\" InsecureSecrets Configuration The App SDK defines a new Writable configuration section called InsecureSecrets . This structure mimics that of the secure SecretStore when the EDGEX_SECURITY_SECRET_STORE environment variable is set to false . Having the InsecureSecrets in the Writable section allows for the secrets to be updated without restarting the service. Some minor processing must occur when the InsecureSecrets section is updated. This is to call the InsecureSecretsUpdated API. This API simply sets the time the secrets were last updated. The SecretsLastUpdated API returns this timestamp so pipeline functions that use credentials for exporting know if their client needs to be recreated with new credentials, i.e. MQTT export. type WritableInfo struct { LogLevel string ... InsecureSecrets InsecureSecrets } type InsecureSecrets map[string]InsecureSecretsInfo type InsecureSecretsInfo struct { Path string Secrets map[string]string } [Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\" [Writable.InsecureSecrets.mqtt] path = \"mqtt\" [Writable.InsecureSecrets.mqtt.Secrets] username = \"\" password = \"\" cacert = \"\" clientcert = \"\" clientkey = \"\" Decision The new SecretProvider abstraction defined by this ADR is a combination of the two implementations described above in the Existing Implementations section. 
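Using the InsecureSecrets types above, the non-secure lookup amounts to finding the entry whose Path matches the requested secret path. A sketch of that lookup, assuming the types shown in the configuration (the GetSecrets method here is illustrative, not the App SDK's actual code):

```go
package main

import "fmt"

// Types mirroring the InsecureSecrets configuration structure above.
type InsecureSecretsInfo struct {
	Path    string
	Secrets map[string]string
}

type InsecureSecrets map[string]InsecureSecretsInfo

// GetSecrets looks secrets up by path in non-secure mode, i.e. when
// EDGEX_SECURITY_SECRET_STORE is false. Illustrative sketch only.
func (is InsecureSecrets) GetSecrets(path string) (map[string]string, error) {
	for _, info := range is {
		if info.Path == path {
			return info.Secrets, nil
		}
	}
	return nil, fmt.Errorf("no insecure secrets found at path '%s'", path)
}

func main() {
	// Equivalent of the [Writable.InsecureSecrets.DB] section above.
	writable := InsecureSecrets{
		"DB": {Path: "redisdb", Secrets: map[string]string{"username": "", "password": ""}},
	}
	creds, _ := writable.GetSecrets("redisdb")
	fmt.Println(len(creds)) // 2
}
```

Because this lives in the Writable section, the same lookup keeps working after an on-the-fly update via Consul; only the last-updated timestamp processing is extra.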
Only Exclusive Secret Stores To simplify the SecretProvider abstraction, we need to reduce to using only exclusive SecretStores . This allows all the APIs to deal with a single SecretClient , rather than the split up way we currently have in Application Services. This requires that the current Application Service shared secrets (database credentials) must be copied into each Application Service's exclusive SecretStore when it is created. The challenge is how to seed static secrets for unknown services when they become known. As described in the Known and Unknown Services section above, services currently identify themselves for exclusive SecretStore creation via the ADD_SECRETSTORE_TOKENS environment variable on security-secretstore-setup. This environment variable simply takes a comma separated list of service names. ADD_SECRETSTORE_TOKENS : \"<service-name>,<service-name>\" If we expanded this to add an optional list of static secret identifiers for each service, i.e. appservice/redisdb , the exclusive store could also be seeded with a copy of static shared secrets. In this case the Redis database credentials for the Application Services' shared database. The environment variable name will change to ADD_SECRETSTORE now that it is more than just tokens. ADD_SECRETSTORE : \"app-service-xyz[appservice/redisdb]\" Note: The secret identifier here is the short path to the secret in the existing appservice SecretStore . In the above example this expands to the full path of /secret/edgex/appservice/redisdb The above example results in the Redis credentials being copied into app-service-xyz's SecretStore at /secret/edgex/app-service-xyz/redis . A similar approach could be taken for Message Bus credentials where a common SecretStore is created with the Message Bus credentials saved. The services request the credentials be copied into their exclusive SecretStore using common/messagebus as the secret identifier. 
Full specification for the environment variable's value is a comma separated list of service entries defined as: <service-name1>[optional list of static secret IDs separated by ;],<service-name2>[optional list of static secret IDs separated by ;],... Example with one service specifying IDs for static secrets and one without static secrets ADD_SECRETSTORE : \"appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export\" When the ADD_SECRETSTORE environment variable is processed to create these SecretStores , it will copy the specified saved secrets from the initial SecretStore into the service's SecretStore . This all depends on the completion of database or other credential bootstrapping and the secrets having been stored prior to the environment variable being processed. security-secretstore-setup will need to be refactored to ensure this sequencing. Abstraction Interface The following will be the new SecretProvider abstraction interface used by all EdgeX services type SecretProvider interface { // Stores new secrets into the service's exclusive SecretStore at the specified path. StoreSecrets ( path string , secrets map [ string ] string ) error // Retrieves secrets from the service's exclusive SecretStore at the specified path. GetSecrets ( path string , _ ... string ) ( map [ string ] string , error ) // Sets the secrets last updated time to the current time. SecretsUpdated () // Returns the secrets last updated time SecretsLastUpdated () time . Time } Note: The GetDatabaseCredentials and GetCertificateKeyPair APIs have been removed. These are no longer needed since insecure database credentials will no longer be stored in the DatabaseInfo configuration and certificate key pairs are secrets like any others. This allows these secrets to be retrieved via the GetSecrets API. Implementation Factory Method and Bootstrap Handler The factory method and bootstrap handler will follow what is currently in the Bootstrap implementation with some tweaks. 
Rather than putting the two split interfaces into the DIC, it will put just the single interface instance into the DIC. See details in the Interfaces and factory methods section above under Existing Implementations . Caching of Secrets Secrets will be cached as they are currently in the Application Service implementation Insecure Secrets Insecure Secrets will be handled as they are currently in the Application Service implementation. DatabaseInfo configuration will no longer be an option for storing the insecure database credentials. They will be stored in the InsecureSecrets configuration only. [Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\" Handling on-the-fly changes to InsecureSecrets All services will need to handle the special processing when InsecureSecrets are changed on-the-fly via Consul. Since this will now be a common configuration item in Writable it can be handled in go-mod-bootstrap along with the existing log level processing. This special processing will be taken from the App SDK. Mocks A proper mock of the SecretProvider interface will be created with Mockery to be used in unit tests. The current mock in the App SDK is hand written rather than generated with Mockery . Where will SecretProvider reside? Go Services The final decision to make is where will this new SecretProvider abstraction reside? Originally it was assumed that it would reside in go-mod-secrets , which seems logical. If we were to attempt this with the implementation including the bootstrap handler, go-mod-secrets would have a dependency on go-mod-bootstrap which will likely create a circular dependency. Refactoring the existing implementation in go-mod-bootstrap and having it reside there now seems to be the best choice. C Device Service The C Device SDK will implement the same SecretProvider abstraction, InsecureSecrets configuration and the underlying SecretStore client. 
Consequences All services will have the Writable.InsecureSecrets section added to their configuration InsecureSecrets definition will be moved from App SDK to go-mod-bootstrap Go Device SDK will add the SecretProvider to its bootstrapping C Device SDK implementation could be a big lift? SecretStore configuration section will be added to all Device Services edgex-go services will be modified to use the single SecretProvider interface from the DIC in place of current usage of the GetDatabaseCredentials and GetCertificateKeyPair interfaces. Calls to GetDatabaseCredentials and GetCertificateKeyPair will be replaced with calls to the GetSecrets API and appropriate processing of the returned secrets will be added. App SDK will be modified to use the GetSecrets API in place of the GetDatabaseCredentials API App SDK will be modified to use the new SecretProvider bootstrap handler app-service-configurable's configuration profiles as well as all the Application Service examples configurations will be updated to remove the SecretStoreExclusive configuration and just use the existing SecretStore configuration security-secretstore-setup will be enhanced as described in the Exclusive Secret Stores only section above Adding new services that need static secrets added to their SecretStore requires stopping and restarting all the services. This is because security-secretstore-setup has completed but not stopped. If it is rerun without stopping the other services, their tokens and static secrets will have changed. The planned refactor of security-secretstore-setup will attempt to resolve this. Snaps do not yet support setting the environment variable for adding SecretStore. It is planned for the Ireland release.","title":"Secret Provider for All"},{"location":"design/adr/014-Secret-Provider-For-All/#secret-provider-for-all","text":"Status Context Existing Implementations What is a Secret? 
Service Exclusive vs Service Shared Secrets Known and Unknown Services Static Secrets and Runtime Secrets Interfaces and factory methods Bootstrap's current implementation Interfaces Factory and bootstrap handler methods App SDK's current implementation Interface Factory and bootstrap handler methods Secret Store for non-secure mode InsecureSecrets Configuration Decision Only Exclusive Secret Stores Abstraction Interface Implementation Factory Method and Bootstrap Handler Caching of Secrets Insecure Secrets Handling on-the-fly changes to InsecureSecrets Mocks Where will SecretProvider reside? Go Services C Device Service Consequences","title":"Secret Provider for All"},{"location":"design/adr/014-Secret-Provider-For-All/#status","text":"Approved","title":"Status"},{"location":"design/adr/014-Secret-Provider-For-All/#context","text":"This ADR defines the new SecretProvider abstraction that will be used by all EdgeX services, including Device Services. The Secret Provider is used by services to retrieve secrets from the Secret Store. The Secret Store, in secure mode, is currently Vault. In non-secure mode it is configuration in some form, i.e. DatabaseInfo configuration or InsecureSecrets configuration for Application Services.","title":"Context"},{"location":"design/adr/014-Secret-Provider-For-All/#existing-implementations","text":"The Secret Provider abstraction defined in this ADR is based on the Secret Provider abstraction implementations in the Application Functions SDK (App SDK) for Application Services and the one in go-mod-bootstrap (Bootstrap) used by the Core, Support & Security services in edgex-go. Device Services do not currently use secure secrets. The App SDK implementation was initially based on the Bootstrap implementation. 
The similarities and differences between these implementations are: Both wrap the SecretClient from go-mod-secrets Both initialize the SecretClient based on the SecretStore configuration(s) Both have factory methods, but they differ greatly Both implement the GetDatabaseCredentials API Bootstrap's uses split interface definitions ( CredentialsProvider & CertificateProvider ) while the App SDK's uses a single interface ( SecretProvider ) for the abstraction Bootstrap's includes the bootstrap handler while the App SDK's has the bootstrap handler separated out Bootstrap's implements the GetCertificateKeyPair API, which the App SDK's does not App SDK's implements the following, which the Bootstrap's does not Initialize API (Bootstrap's initialization is done by the bootstrap handler) StoreSecrets API GetSecrets API InsecureSecretsUpdated API SecretsLastUpdated API Wraps a second SecretClient for the Application Service instance's exclusive secrets. Used by the StoreSecrets & GetSecrets APIs The standard SecretClient is considered the shared client for secrets that all Application Service instances share. It is only used by the GetDatabaseCredentials API Configuration based secret store for non-secure mode called InsecureSecrets Caching of secrets Needed so that secrets used by pipeline functions do not cause a call out to Vault for every Event processed","title":"Existing Implementations"},{"location":"design/adr/014-Secret-Provider-For-All/#what-is-a-secret","text":"A secret is a collection of key/value pairs stored in a SecretStore at a specified path whose values are sensitive in nature. Redis database credentials are an example of a Secret which contains the username and password key/values stored at the redisdb path.","title":"What is a Secret?"},{"location":"design/adr/014-Secret-Provider-For-All/#service-exclusive-vs-service-shared-secrets","text":"Service Exclusive secrets are those that are exclusive to the instance of the running service. 
An example of exclusive secrets are the HTTP Auth tokens used by two running instances of app-service-configurable (http-export) which export different device Events to different endpoints with different Auth tokens in the HTTP headers. Service Exclusive secrets are seeded by POSTing the secrets to the /api/vX/secrets endpoint on the running instance of each Application Service. Service Shared secrets are those that all instances of a class of service, such as Application Services, share. Think of Core Data as its own class of service. An example of shared secrets are the database credentials for the single database instance for Store and Forward data that all Application Services may need to access. Another example is the database credentials for each instance of Core Data. It is shared, but only one instance of Core Data is currently ever run. Service Shared secrets are seeded by security-secretstore-setup using static configuration for static secrets for known services. Currently database credentials are the only shared secrets. In the future we may have Message Bus credentials as shared secrets, but these will be truly shared secrets for all services to securely connect to the Message Bus, not just shared between instances of a service. Application Services currently have the ability to configure SecretStores for Service Exclusive and/or Service Shared secrets depending on their needs.","title":"Service Exclusive vs Service Shared Secrets"},{"location":"design/adr/014-Secret-Provider-For-All/#known-and-unknown-services","text":"Known Services are those identified in the static configuration by security-secretstore-setup These currently are Core Data, Core Metadata, Support Notifications, Support Scheduler and Application Service (class) Unknown Services are those not known in the static configuration that become known when added to the Docker compose file or Snap. Application Service (instance) are examples of these services. 
Service exclusive SecretStores can be created for these services by adding the service's unique name , i.e. appservice-http-export, to the ADD_SECRETSTORE_TOKENS environment variable for security-secretstore-setup ADD_SECRETSTORE_TOKENS: \"appservice-http-export, appservice-mqtt-export\" This creates an exclusive secret store token for each service listed. The name provided for each service must be used in the service's SecretStore configuration and Docker volume mount (if applicable). Typically the configuration is set via environment overrides or is already in an existing configuration profile ( http-export profile for app-service-configurable). Example docker-compose file entries: environment : ... SecretStoreExclusive_Path : \"/v1/secret/edgex/appservice-http-export/\" TokenFile : \"/tmp/edgex/secrets/appservice-http-export/secrets-token.json\" volumes : ... - /tmp/edgex/secrets/appservice-http-export:/tmp/edgex/secrets/appservice-http-export:ro,z","title":"Known and Unknown Services"},{"location":"design/adr/014-Secret-Provider-For-All/#static-secrets-and-runtime-secrets","text":"Static Secrets are those identified by name in the static configuration whose values are randomly generated at seed time. These secrets are seeded on start-up of EdgeX. Database credentials are currently the only secrets of this type Runtime Secrets are those not known in the static configuration and that become known during run time. 
These secrets are seeded at run time via the Application Services /api/vX/secrets endpoint HTTP header authorization credentials for HTTP Export are types of these secrets","title":"Static Secrets and Runtime Secrets"},{"location":"design/adr/014-Secret-Provider-For-All/#interfaces-and-factory-methods","text":"","title":"Interfaces and factory methods"},{"location":"design/adr/014-Secret-Provider-For-All/#bootstraps-current-implementation","text":"","title":"Bootstrap's current implementation"},{"location":"design/adr/014-Secret-Provider-For-All/#interfaces","text":"type CredentialsProvider interface { GetDatabaseCredentials ( database config . Database ) ( config . Credentials , error ) } and type CertificateProvider interface { GetCertificateKeyPair ( path string ) ( config . CertKeyPair , error ) }","title":"Interfaces"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-and-bootstrap-handler-methods","text":"type SecretProvider struct { secretClient pkg . SecretClient } func NewSecret () * SecretProvider { return & SecretProvider {} } func ( s * SecretProvider ) BootstrapHandler ( ctx context . Context , _ * sync . WaitGroup , startupTimer startup . Timer , dic * di . Container ) bool { ... Initializes the SecretClient and adds it to the DIC for both interfaces . ... }","title":"Factory and bootstrap handler methods"},{"location":"design/adr/014-Secret-Provider-For-All/#app-sdks-current-implementation","text":"","title":"App SDK's current implementation"},{"location":"design/adr/014-Secret-Provider-For-All/#interface","text":"type SecretProvider interface { Initialize ( _ context . Context ) bool StoreSecrets ( path string , secrets map [ string ] string ) error GetSecrets ( path string , _ ... string ) ( map [ string ] string , error ) GetDatabaseCredentials ( database db . DatabaseInfo ) ( common . Credentials , error ) InsecureSecretsUpdated () SecretsLastUpdated () time . 
Time }","title":"Interface"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-and-bootstrap-handler-methods_1","text":"type SecretProviderImpl struct { SharedSecretClient pkg . SecretClient ExclusiveSecretClient pkg . SecretClient secretsCache map [ string ] map [ string ] string // secret's path, key, value configuration * common . ConfigurationStruct cacheMuxtex * sync . Mutex loggingClient logger . LoggingClient //used to track when secrets have last been retrieved LastUpdated time . Time } func NewSecretProvider ( loggingClient logger . LoggingClient , configuration * common . ConfigurationStruct ) * SecretProviderImpl { sp := & SecretProviderImpl { secretsCache : make ( map [ string ] map [ string ] string ), cacheMuxtex : & sync . Mutex {}, configuration : configuration , loggingClient : loggingClient , LastUpdated : time . Now (), } return sp } type Secrets struct { } func NewSecrets () * Secrets { return & Secrets {} } func ( _ * Secrets ) BootstrapHandler ( ctx context . Context , _ * sync . WaitGroup , startupTimer startup . Timer , dic * di . Container ) bool { ... Creates NewSecretProvider , calls Initialize () and adds it to the DIC ... }","title":"Factory and bootstrap handler methods"},{"location":"design/adr/014-Secret-Provider-For-All/#secret-store-for-non-secure-mode","text":"Both Bootstrap's and App SDK's implementation use the DatabaseInfo configuration for the GetDatabaseCredentials API in non-secure mode. The App SDK only uses it, for backward compatibility, if the database credentials are not found in the new InsecureSecrets configuration section. For Ireland it was planned to only use the new InsecureSecrets configuration section in non-secure mode. 
Note: Redis credentials are blank in non-secure mode Core Data [Databases] [Databases.Primary] Host = \"localhost\" Name = \"coredata\" Username = \"\" Password = \"\" Port = 6379 Timeout = 5000 Type = \"redisdb\" Application Services [Database] Type = \"redisdb\" Host = \"localhost\" Port = 6379 Username = \"\" Password = \"\" Timeout = \"30s\"","title":"Secret Store for non-secure mode"},{"location":"design/adr/014-Secret-Provider-For-All/#insecuresecrets-configuration","text":"The App SDK defines a new Writable configuration section called InsecureSecrets . This structure mimics that of the secure SecretStore when the EDGEX_SECURITY_SECRET_STORE environment variable is set to false . Having the InsecureSecrets in the Writable section allows for the secrets to be updated without restarting the service. Some minor processing must occur when the InsecureSecrets section is updated. This is to call the InsecureSecretsUpdated API. This API simply sets the time the secrets were last updated. The SecretsLastUpdated API returns this timestamp so pipeline functions that use credentials for exporting know if their client needs to be recreated with new credentials, i.e. MQTT export. type WritableInfo struct { LogLevel string ... 
InsecureSecrets InsecureSecrets } type InsecureSecrets map [ string ] InsecureSecretsInfo type InsecureSecretsInfo struct { Path string Secrets map [ string ] string } [Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\" [Writable.InsecureSecrets.mqtt] path = \"mqtt\" [Writable.InsecureSecrets.mqtt.Secrets] username = \"\" password = \"\" cacert = \"\" clientcert = \"\" clientkey = \"\"","title":"InsecureSecrets Configuration"},{"location":"design/adr/014-Secret-Provider-For-All/#decision","text":"The new SecretProvider abstraction defined by this ADR is a combination of the two implementations described above in the Existing Implementations section.","title":"Decision"},{"location":"design/adr/014-Secret-Provider-For-All/#only-exclusive-secret-stores","text":"To simplify the SecretProvider abstraction, we need to reduce to using only exclusive SecretStores . This allows all the APIs to deal with a single SecretClient , rather than the split approach we currently have in Application Services. This requires that the current Application Service shared secrets (database credentials) must be copied into each Application Service's exclusive SecretStore when it is created. The challenge is how to seed static secrets for unknown services when they become known. As described in the Known and Unknown Services section above, services currently identify themselves for exclusive SecretStore creation via the ADD_SECRETSTORE_TOKENS environment variable on security-secretstore-setup. This environment variable simply takes a comma separated list of service names. ADD_SECRETSTORE_TOKENS : \"<service-name1>,<service-name2>\" If we expanded this to add an optional list of static secret identifiers for each service, i.e. appservice/redisdb , the exclusive store could also be seeded with a copy of static shared secrets. In this case, the Redis database credentials for the Application Services' shared database. 
The environment variable name will change to ADD_SECRETSTORE now that it is more than just tokens. ADD_SECRETSTORE : \"app-service-xyz[appservice/redisdb]\" Note: The secret identifier here is the short path to the secret in the existing appservice SecretStore . In the above example this expands to the full path of /secret/edgex/appservice/redisdb The above example results in the Redis credentials being copied into app-service-xyz's SecretStore at /secret/edgex/app-service-xyz/redis . A similar approach could be taken for Message Bus credentials where a common SecretStore is created with the Message Bus credentials saved. The services request that the credentials be copied into their exclusive SecretStore using common/messagebus as the secret identifier. Full specification for the environment variable's value is a comma separated list of service entries defined as: <service-name1>[optional list of static secret IDs separated by ;],<service-name2>[optional list of static secret IDs separated by ;],... Example with one service specifying IDs for static secrets and one without static secrets ADD_SECRETSTORE : \"appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export\" When the ADD_SECRETSTORE environment variable is processed to create these SecretStores , it will copy the specified saved secrets from the initial SecretStore into the service's SecretStore . This all depends on the completion of database or other credential bootstrapping and the secrets having been stored prior to the environment variable being processed. security-secretstore-setup will need to be refactored to ensure this sequencing.","title":"Only Exclusive Secret Stores"},{"location":"design/adr/014-Secret-Provider-For-All/#abstraction-interface","text":"The following will be the new SecretProvider abstraction interface used by all EdgeX services type SecretProvider interface { // Stores new secrets into the service's exclusive SecretStore at the specified path. 
StoreSecrets ( path string , secrets map [ string ] string ) error // Retrieves secrets from the service's exclusive SecretStore at the specified path. GetSecrets ( path string , _ ... string ) ( map [ string ] string , error ) // Sets the secrets last updated time to the current time. SecretsUpdated () // Returns the secrets last updated time SecretsLastUpdated () time . Time } Note: The GetDatabaseCredentials and GetCertificateKeyPair APIs have been removed. These are no longer needed since insecure database credentials will no longer be stored in the DatabaseInfo configuration and certificate key pairs are secrets like any others. This allows these secrets to be retrieved via the GetSecrets API.","title":"Abstraction Interface"},{"location":"design/adr/014-Secret-Provider-For-All/#implementation","text":"","title":"Implementation"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-method-and-bootstrap-handler","text":"The factory method and bootstrap handler will follow what is currently in the Bootstrap implementation with some tweaks. Rather than putting the two split interfaces into the DIC, it will put just the single interface instance into the DIC. See details in the Interfaces and factory methods section above under Existing Implementations .","title":"Factory Method and Bootstrap Handler"},{"location":"design/adr/014-Secret-Provider-For-All/#caching-of-secrets","text":"Secrets will be cached as they are currently in the Application Service implementation","title":"Caching of Secrets"},{"location":"design/adr/014-Secret-Provider-For-All/#insecure-secrets","text":"Insecure Secrets will be handled as they are currently in the Application Service implementation. DatabaseInfo configuration will no longer be an option for storing the insecure database credentials. They will be stored in the InsecureSecrets configuration only. 
[Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\"","title":"Insecure Secrets"},{"location":"design/adr/014-Secret-Provider-For-All/#handling-on-the-fly-changes-to-insecuresecrets","text":"All services will need to handle the special processing when InsecureSecrets are changed on-the-fly via Consul. Since this will now be a common configuration item in Writable it can be handled in go-mod-bootstrap along with the existing log level processing. This special processing will be taken from the App SDK.","title":"Handling on-the-fly changes to InsecureSecrets"},{"location":"design/adr/014-Secret-Provider-For-All/#mocks","text":"A proper mock of the SecretProvider interface will be created with Mockery to be used in unit tests. The current mock in the App SDK is hand written rather than generated with Mockery .","title":"Mocks"},{"location":"design/adr/014-Secret-Provider-For-All/#where-will-secretprovider-reside","text":"","title":"Where will SecretProvider reside?"},{"location":"design/adr/014-Secret-Provider-For-All/#go-services","text":"The final decision to make is where will this new SecretProvider abstraction reside? Originally it was assumed that it would reside in go-mod-secrets , which seems logical. If we were to attempt this with the implementation including the bootstrap handler, go-mod-secrets would have a dependency on go-mod-bootstrap which will likely create a circular dependency. 
Refactoring the existing implementation in go-mod-bootstrap and having it reside there now seems to be the best choice.","title":"Go Services"},{"location":"design/adr/014-Secret-Provider-For-All/#c-device-service","text":"The C Device SDK will implement the same SecretProvider abstraction, InsecureSecrets configuration and the underlying SecretStore client.","title":"C Device Service"},{"location":"design/adr/014-Secret-Provider-For-All/#consequences","text":"All services will have the Writable.InsecureSecrets section added to their configuration InsecureSecrets definition will be moved from App SDK to go-mod-bootstrap Go Device SDK will add the SecretProvider to its bootstrapping C Device SDK implementation could be a big lift? SecretStore configuration section will be added to all Device Services edgex-go services will be modified to use the single SecretProvider interface from the DIC in place of current usage of the GetDatabaseCredentials and GetCertificateKeyPair interfaces. Calls to GetDatabaseCredentials and GetCertificateKeyPair will be replaced with calls to the GetSecrets API and appropriate processing of the returned secrets will be added. App SDK will be modified to use the GetSecrets API in place of the GetDatabaseCredentials API App SDK will be modified to use the new SecretProvider bootstrap handler app-service-configurable's configuration profiles as well as all the Application Service examples configurations will be updated to remove the SecretStoreExclusive configuration and just use the existing SecretStore configuration security-secretstore-setup will be enhanced as described in the Exclusive Secret Stores only section above Adding new services that need static secrets added to their SecretStore requires stopping and restarting all the services. This is because security-secretstore-setup has completed but not stopped. If it is rerun without stopping the other services, their tokens and static secrets will have changed. 
The planned refactor of security-secretstore-setup will attempt to resolve this. Snaps do not yet support setting the environment variable for adding SecretStore. It is planned for the Ireland release.","title":"Consequences"},{"location":"design/adr/core/0003-V2-API-Principles/","text":"Geneva API Guiding Principles Status Accepted by EdgeX Foundry working groups as of Core Working Group meeting 16-Jan-2020 Note This ADR was written pre-Geneva with an assumption that the V2 APIs would be available in Geneva. In actuality, the full V2 APIs will be delivered in the Ireland release (Spring 2021) Context A redesign of the EdgeX Foundry API is proposed for the Geneva release. This is understood by the community to warrant a 2.0 release that will not be backward compatible. The goal is to rework the API using solid principles that will allow for extension over the course of several release cycles, avoiding the necessity of yet another major release version in a short period of time. Briefly, this effort grew from the acknowledgement that the current models used to facilitate requests and responses via the EdgeX Foundry API were legacy definitions that were once used as internal representations of state within the EdgeX services themselves. Thus if you want to add or update a device, you populate a full device model rather than a specific Add/UpdateDeviceRequest. Currently, your request model has the same definition, and thus validation constraints, as the response model because they are one and the same! It is desirable to separate and be specific about what is required for a given request, as well as its state validity, and the bare minimum that must be returned within a response. Following from that central need, other considerations have been used when designing this proposed API. These will be enumerated and briefly explained below. 1.) 
Transport-agnostic Define the request/response data transfer objects (DTO) in a manner whereby they can be used independent of transport. For example, although an OpenAPI doc is implicitly coupled to HTTP/REST, define the DTOs in such a way that they could also be used if the platform were to evolve to a pub/sub architecture. 2.) Support partial updates via PATCH Given a request to, for example, update a device, the user should be able to update only some properties of the device. Previously this would require an endpoint for each individual property to be updated since the \"update device\" endpoint, facilitated by a PUT, would perform a complete replacement of the device's data. If you only wanted to update the LastConnected timestamp, then a separate endpoint for that property was required. We will leverage PATCH in order to update an entity and only those properties populated on the request will be considered. Properties that are missing or left blank will not be touched. 3.) Support multiple requests at once Endpoints for the addition or updating of data (POST/PATCH) should accept multiple requests at once. If it were desirable to add or update multiple devices with one request, for example, the API should facilitate this. 4.) Support multiple correlated responses at once Following from #3 above, each request sent to the endpoint must result in a corresponding response. In the case of HTTP/REST, this means if four requests are sent to a POST operation, the return payload will have four responses. Each response must expose a \"code\" property containing a numeric result for what occurred. These could be equivalent to HTTP status codes, for example. So while the overall call might succeed, one or more of the child requests may not have. It is up to the caller to examine each response and handle accordingly. In order to correlate each response to its original request, each request must be assigned its own ID (in GUID format). 
The caller can then tie a response to an individual request and handle the result accordingly, or otherwise track that a response to a given request was not received. 5.) Use of 207 HTTP Status (Multi-Result) In the case where an endpoint can support multiple responses, the returned HTTP code from a REST API will be 207 (Multi-status) 6.) Each service should provide a \"batch\" request endpoint In addition to use-case specific endpoints that you'd find in any REST API, each service should provide a \"batch\" endpoint that can take any kind of request. This is a generic endpoint that allows you to group requests of different types within a single call. For example, instead of having to call two endpoints to get two jobs done, you can call a single endpoint passing the specific requests and have them routed appropriately within the service. Also, when considering agnostic transport, the batch endpoint would allow for the definition and handling of \"GET\" equivalent DTOs which are now implicit in the format of a URL. 7.) GET endpoints returning a list of items must support pagination URL parameters must be supported for every GET endpoint to support pagination. These parameters should indicate the current page of results and the number of results on a page. Decision Community has accepted the reasoning for the new API and the design principles outlined above. The approach will be to gradually implement the V2 API side-by-side with the current V1 APIs. We believe it will take more than a single release cycle to implement the new specification. Releases that occur prior to the V2 API implementation completion will continue to be major versioned as 1.x. Subsequent to completion, releases will be major versioned as 2.x. Consequences Backward incompatibility with EdgeX Foundry's V1 API requires a major version increment (e.g. v2.x). Service-level testing (e.g. blackbox tests) needs to be rewritten. 
Specification-first development allows for different implementations of EdgeX services to be certified as \"EdgeX Compliant\" in reference to an objective standard. Transport-agnostic focus enables different architectural patterns (pub/sub versus REST) using the same data representation.","title":"Geneva API Guiding Principles"},{"location":"design/adr/core/0003-V2-API-Principles/#geneva-api-guiding-principles","text":"","title":"Geneva API Guiding Principles"},{"location":"design/adr/core/0003-V2-API-Principles/#status","text":"Accepted by EdgeX Foundry working groups as of Core Working Group meeting 16-Jan-2020 Note This ADR was written pre-Geneva with an assumption that the V2 APIs would be available in Geneva. In actuality, the full V2 APIs will be delivered in the Ireland release (Spring 2021)","title":"Status"},{"location":"design/adr/core/0003-V2-API-Principles/#context","text":"A redesign of the EdgeX Foundry API is proposed for the Geneva release. This is understood by the community to warrant a 2.0 release that will not be backward compatible. The goal is to rework the API using solid principles that will allow for extension over the course of several release cycles, avoiding the necessity of yet another major release version in a short period of time. Briefly, this effort grew from the acknowledgement that the current models used to facilitate requests and responses via the EdgeX Foundry API were legacy definitions that were once used as internal representations of state within the EdgeX services themselves. Thus if you want to add or update a device, you populate a full device model rather than a specific Add/UpdateDeviceRequest. Currently, your request model has the same definition, and thus validation constraints, as the response model because they are one and the same! It is desirable to separate and be specific about what is required for a given request, as well as its state validity, and the bare minimum that must be returned within a response. 
Following from that central need, other considerations have been used when designing this proposed API. These will be enumerated and briefly explained below. 1.) Transport-agnostic Define the request/response data transfer objects (DTO) in a manner whereby they can be used independent of transport. For example, although an OpenAPI doc is implicitly coupled to HTTP/REST, define the DTOs in such a way that they could also be used if the platform were to evolve to a pub/sub architecture. 2.) Support partial updates via PATCH Given a request to, for example, update a device, the user should be able to update only some properties of the device. Previously, this would require an endpoint for each individual property to be updated since the \"update device\" endpoint, facilitated by a PUT, would perform a complete replacement of the device's data. If you only wanted to update the LastConnected timestamp, then a separate endpoint for that property was required. We will leverage PATCH to update an entity; only those properties populated on the request will be considered. Properties that are missing or left blank will not be touched. 3.) Support multiple requests at once Endpoints for the addition or updating of data (POST/PATCH) should accept multiple requests at once. If it were desirable to add or update multiple devices with one request, for example, the API should facilitate this. 4.) Support multiple correlated responses at once Following from #3 above, each request sent to the endpoint must result in a corresponding response. In the case of HTTP/REST, this means if four requests are sent to a POST operation, the return payload will have four responses. Each response must expose a \"code\" property containing a numeric result for what occurred. These could be equivalent to HTTP status codes, for example. So while the overall call might succeed, one or more of the child requests may not have. 
It is up to the caller to examine each response and handle accordingly. In order to correlate each response to its original request, each request must be assigned its own ID (in GUID format). The caller can then tie a response to an individual request and handle the result accordingly, or otherwise track that a response to a given request was not received. 5.) Use of 207 HTTP Status (Multi-Result) In the case where an endpoint can support multiple responses, the returned HTTP code from a REST API will be 207 (Multi-Status). 6.) Each service should provide a \"batch\" request endpoint In addition to use-case specific endpoints that you'd find in any REST API, each service should provide a \"batch\" endpoint that can take any kind of request. This is a generic endpoint that allows you to group requests of different types within a single call. For example, instead of having to call two endpoints to get two jobs done, you can call a single endpoint passing the specific requests and have them routed appropriately within the service. Also, when considering agnostic transport, the batch endpoint would allow for the definition and handling of \"GET\" equivalent DTOs which are now implicit in the format of a URL. 7.) GET endpoints returning a list of items must support pagination URL parameters must be supported for every GET endpoint to support pagination. These parameters should indicate the current page of results and the number of results on a page.","title":"Context"},{"location":"design/adr/core/0003-V2-API-Principles/#decision","text":"The community has accepted the reasoning for the new API and the design principles outlined above. The approach will be to gradually implement the V2 API side-by-side with the current V1 APIs. We believe it will take more than a single release cycle to implement the new specification. Releases that occur prior to completion of the V2 API implementation will continue to be major versioned as 1.x. 
Subsequent to completion, releases will be major versioned as 2.x.","title":"Decision"},{"location":"design/adr/core/0003-V2-API-Principles/#consequences","text":"Backward incompatibility with EdgeX Foundry's V1 API requires a major version increment (e.g. v2.x). Service-level testing (e.g. blackbox tests) needs to be rewritten. Specification-first development allows for different implementations of EdgeX services to be certified as \"EdgeX Compliant\" in reference to an objective standard. Transport-agnostic focus enables different architectural patterns (pub/sub versus REST) using the same data representation.","title":"Consequences"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/","text":"EdgeX-CLI V2 Design Status Approved (by TSC vote on 10/6/21) Context This ADR presents a technical plan for creation of a 2.0 version of edgex-cli which supports the new V2 REST APIs developed as part of the Ireland release of EdgeX. Existing Behavior The latest version of edgex-cli (1.0.1) only supports the V1 REST APIs and thus cannot be used with V2 releases of EdgeX. As the edgex-cli was developed organically over time, the current implementation has a number of bugs mostly involving a lack of consistent behavior, especially with respect to formatting of output. Other issues with the existing client include: lack of tab completion default output of commands is too verbose verbose output sometimes prevents use of jq static configuration file required (i.e. no registry support) project hierarchy not conforming to best practice guidelines History The original Hanoi V1 client was created by a team at VMware which is no longer participating in the project. Canonical will lead the development of the Ireland/Jakarta V2 client. Decision Use standardized command-line args/flags Argument/Flag Description -d , --debug show additional output for debugging purposes (e.g. REST URL, request JSON, \u2026). 
This command-line arg will replace -v, --verbose and will no longer trigger output of the response JSON (see -j, --json). -j , --json output the raw JSON response returned by the EdgeX REST API and nothing else. This output mode is used for script-based usage of the client. --version output the version of the client and if available, the version of EdgeX installed on the system (using the version of the metadata service) Restructure the Go code hierarchy to follow the most recent recommended guidelines . For instance /cmd should just contain the main application for the project, not an implementation for each command - that should be in /internal/cmd Take full advantage of the features of the underlying command-line library, Cobra , such as tab-completion of commands. Allow overlap of command names across services by supporting an argument to specify the service to use: -m/--metadata , -c/--command , -n/--notification , -s/--scheduler or --data (which is the default). Examples: edgex-cli ping --data edgex-cli ping -m edgex-cli version -c Implement all required V2 endpoints for core services Core Command - edgex-cli command read | write | list Core Data - edgex-cli event add | count | list | rm | scrub** - edgex-cli reading count | list Metadata - edgex-cli device add | adminstate | list | operstate | rm | update - edgex-cli deviceprofile add | list | rm | update - edgex-cli deviceservice add | list | rm | update - edgex-cli provisionwatcher add | list | rm | update Support Notifications - edgex-cli notification add | list | rm - edgex-cli subscription add | list | rm Support Scheduler - edgex-cli interval add | list | rm | update Common endpoints in all services - edgex-cli version - edgex-cli ping - edgex-cli metrics - edgex-cli status The commands will support arguments as appropriate. 
For instance: - event list using /event/all to return all events - event list --device {name} using /event/device/name/{name} to return the events sourced from the specified device. Currently, some commands default to always displaying GUIDs in objects when they're not really needed. Change this so that by default GUIDs aren't displayed, but add a flag which causes them to be displayed. scrub may not work with Redis being secured by default. That might also apply to the top-level db command (used to wipe the entire db). If so, then the commands will be disabled in secure mode, but permitted in non-secure mode. Have built-in defaults with port numbers for all core services and allow overrides, avoiding the need for a static configuration file or configuration provider. (Stretch) implement a -o / --output argument which could be used to customize the pretty-printed objects (i.e. non-JSON). (Stretch) Implement support for use of the client via the API Gateway, including being able to connect to a remote EdgeX instance. This might require updates in go-mod-core-contracts. References Command Line Interface Guidelines The Unix Programming Environment, Brian W. 
Kernighan and Rob Pike POSIX Utility Conventions Program Behavior for All Programs, GNU Coding Standards 12 Factor CLI Apps, Jeff Dickey CLI Style Guide, Heroku Standard Go Project Layout","title":"EdgeX-CLI V2 Design"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#edgex-cli-v2-design","text":"","title":"EdgeX-CLI V2 Design"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#status","text":"Approved (by TSC vote on 10/6/21)","title":"Status"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#context","text":"This ADR presents a technical plan for creation of a 2.0 version of edgex-cli which supports the new V2 REST APIs developed as part of the Ireland release of EdgeX.","title":"Context"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#existing-behavior","text":"The latest version of edgex-cli (1.0.1) only supports the V1 REST APIs and thus cannot be used with V2 releases of EdgeX. As the edgex-cli was developed organically over time, the current implementation has a number of bugs mostly involving a lack of consistent behavior, especially with respect to formatting of output. Other issues with the existing client include: lack of tab completion default output of commands is too verbose verbose output sometimes prevents use of jq static configuration file required (i.e. no registry support) project hierarchy not conforming to best practice guidelines","title":"Existing Behavior"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#history","text":"The original Hanoi V1 client was created by a team at VMware which is no longer participating in the project. Canonical will lead the development of the Ireland/Jakarta V2 client.","title":"History"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#decision","text":"Use standardized command-line args/flags Argument/Flag Description -d , --debug show additional output for debugging purposes (e.g. REST URL, request JSON, \u2026). 
This command-line arg will replace -v, --verbose and will no longer trigger output of the response JSON (see -j, --json). -j , --json output the raw JSON response returned by the EdgeX REST API and nothing else. This output mode is used for script-based usage of the client. --version output the version of the client and if available, the version of EdgeX installed on the system (using the version of the metadata service) Restructure the Go code hierarchy to follow the most recent recommended guidelines . For instance /cmd should just contain the main application for the project, not an implementation for each command - that should be in /internal/cmd Take full advantage of the features of the underlying command-line library, Cobra , such as tab-completion of commands. Allow overlap of command names across services by supporting an argument to specify the service to use: -m/--metadata , -c/--command , -n/--notification , -s/--scheduler or --data (which is the default). Examples: edgex-cli ping --data edgex-cli ping -m edgex-cli version -c Implement all required V2 endpoints for core services Core Command - edgex-cli command read | write | list Core Data - edgex-cli event add | count | list | rm | scrub** - edgex-cli reading count | list Metadata - edgex-cli device add | adminstate | list | operstate | rm | update - edgex-cli deviceprofile add | list | rm | update - edgex-cli deviceservice add | list | rm | update - edgex-cli provisionwatcher add | list | rm | update Support Notifications - edgex-cli notification add | list | rm - edgex-cli subscription add | list | rm Support Scheduler - edgex-cli interval add | list | rm | update Common endpoints in all services - edgex-cli version - edgex-cli ping - edgex-cli metrics - edgex-cli status The commands will support arguments as appropriate. 
For instance: - event list using /event/all to return all events - event list --device {name} using /event/device/name/{name} to return the events sourced from the specified device. Currently, some commands default to always displaying GUIDs in objects when they're not really needed. Change this so that by default GUIDs aren't displayed, but add a flag which causes them to be displayed. scrub may not work with Redis being secured by default. That might also apply to the top-level db command (used to wipe the entire db). If so, then the commands will be disabled in secure mode, but permitted in non-secure mode. Have built-in defaults with port numbers for all core services and allow overrides, avoiding the need for a static configuration file or configuration provider. (Stretch) implement a -o / --output argument which could be used to customize the pretty-printed objects (i.e. non-JSON). (Stretch) Implement support for use of the client via the API Gateway, including being able to connect to a remote EdgeX instance. This might require updates in go-mod-core-contracts.","title":"Decision"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#references","text":"Command Line Interface Guidelines The Unix Programming Environment, Brian W. Kernighan and Rob Pike POSIX Utility Conventions Program Behavior for All Programs, GNU Coding Standards 12 Factor CLI Apps, Jeff Dickey CLI Style Guide, Heroku Standard Go Project Layout","title":"References"},{"location":"design/adr/core/0021-Device-Profile-Changes/","text":"Changes to Device Profiles Status Approved By TSC Vote on 2/14/22 Please see a prior PR on this topic that detailed much of the debate and context on this issue. For clarity and simplicity, that PR was closed in favor of this simpler ADR. 
Context While the device profile has always been the way to describe a device/sensor and template its communications to the rest of the EdgeX platform, over the course of EdgeX evolution there have been changes in what could change in a profile (often based on its associations to other EdgeX objects). This document is meant to address the issue of change surrounding device profiles in EdgeX going forward \u2013 specifically when can a device profile (or its sub-elements such as device resources) be added, modified or removed. Summary of Device Profile Rules These rules will be implemented in core metadata on device profile API calls. A device profile can be added anytime Device resources or device commands can be added to a device profile anytime Attributes can be added to a device profile anytime A device profile can be removed or modified when the device profile is not associated to a device or provision watcher this includes modifying any field (except identifiers like names and ids) this includes changes to the array of device resources, device commands this includes changes to attributes (of device resources) even when a device profile is associated to a device or provision watcher, fields of the device profile or device resource can be modified when the field change will not affect the behavior of the system. on profile, the following fields do not affect the behavior: description, manufacturer, model, labels. on device resource, the following fields do not affect the behavior: description and tag A device profile cannot be removed when it is associated to a device or provision watcher. A device profile can be removed or modified even when associated to an event or reading. However, configuration options (see New Configuration Settings below) are available to block the change or removal of a device profile for any reason. the rationale behind the new configuration settings was specifically to protect the event/reading association to device profiles. 
Events and readings are generally considered short lived (ephemeral) objects and already contain the necessary device profile information that is needed by the system during their short life without having to refer to and keep the device profile. But if an adopter wants to make sure the device profile is unmodified and still exists for any event/readings association (or for any reason), then the new config setting will block device profile changes or removals. see note below in Consequences that a new Units property must be added to the Reading object in order to support this rule and the need for all relevant profile data to be in the reading. Ancillary Rules associated to Device Profiles Name and ID fields (identifying fields) for device profiles, device resources, etc. cannot be modified and can never be null. A device profile can begin life \u201cempty\u201d - meaning that it has no device resources or device commands. New APIs The following APIs would be added to the metadata REST service in order to meet the design specified above. Add Profile General Property PATCH API (allow to modify profile's description, manufacturer, model and label fields) Add Profile Device Resource POST API Add Profile Device Resource PATCH API (allow to modify Description and IsHidden only) Add Profile Device Resource DELETE API (allow as described above) Add Profile Device Command POST API Add Profile Device Command PATCH API (allow as described above) Add Profile Device Command DELETE API (allow as described above) New Configuration Settings Some adopters may not view event/reading data as ephemeral or short lived. These adopters may choose not to allow device profiles to be modified or removed when associated to an event or reading. For this reason, two new configuration options, in the [Writable.ProfileChange] section, will be added to metadata configuration that are used to reject modifications or deletions. 
StrictDeviceProfileChanges (set to false by default) StrictDeviceProfileDeletes (set to false by default) When either of these config settings is set to true, metadata would accordingly reject changes to or removal of profiles (note: metadata will not check that there are actually events or readings - or any object - associated to the device profile when these are set to true. It simply rejects all modifications or deletes to device profiles with the assumption that there could be events, readings or other objects associated and which need to be preserved). Consequences/Considerations In order to allow device profiles to be updated or removed even when associated to an EdgeX event/reading, a new property needs to be added to the reading object. Readings will now contain a \u201cUnits\u201d (string) property. This property will indicate the units of measure for the Value in the Reading and will be populated based on the Units for the device resource. A new device service configuration property, ReadingUnits (set to true by default), will allow adopters to indicate they do not want units to be added to the readings (for cases where there is a concern about the number of readings and the extra data of adding units). The ReadingUnits configuration option will be added to the [Writable.Reading] section of device services (and addressed in the device service SDKs). This allows the event/reading to contain all relevant information from the device profile that is needed by the system during the course of the event/reading\u2019s life. This allows the device profile to be modified or even removed even when there are events/readings in the system that were created from information in the device profile. 
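As a sketch, the resulting metadata configuration fragment would look roughly like this (TOML, using the section and setting names stated above; values shown are the stated defaults):

```toml
[Writable]
  [Writable.ProfileChange]
  # Reject any modification of a device profile when true.
  StrictDeviceProfileChanges = false
  # Reject any deletion of a device profile when true.
  StrictDeviceProfileDeletes = false
```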
References Metadata API Device Service SDK Required Functionality","title":"Changes to Device Profiles"},{"location":"design/adr/core/0021-Device-Profile-Changes/#changes-to-device-profiles","text":"","title":"Changes to Device Profiles"},{"location":"design/adr/core/0021-Device-Profile-Changes/#status","text":"Approved By TSC Vote on 2/14/22 Please see a prior PR on this topic that detailed much of the debate and context on this issue. For clarity and simplicity, that PR was closed in favor of this simpler ADR.","title":"Status"},{"location":"design/adr/core/0021-Device-Profile-Changes/#context","text":"While the device profile has always been the way to describe a device/sensor and template its communications to the rest of the EdgeX platform, over the course of EdgeX evolution there have been changes in what could change in a profile (often based on its associations to other EdgeX objects). This document is meant to address the issue of change surrounding device profiles in EdgeX going forward \u2013 specifically when can a device profile (or its sub-elements such as device resources) be added, modified or removed.","title":"Context"},{"location":"design/adr/core/0021-Device-Profile-Changes/#summary-of-device-profile-rules","text":"These rules will be implemented in core metadata on device profile API calls. 
A device profile can be added anytime Device resources or device commands can be added to a device profile anytime Attributes can be added to a device profile anytime A device profile can be removed or modified when the device profile is not associated to a device or provision watcher this includes modifying any field (except identifiers like names and ids) this includes changes to the array of device resources, device commands this includes changes to attributes (of device resources) even when a device profile is associated to a device or provision watcher, fields of the device profile or device resource can be modified when the field change will not affect the behavior of the system. on profile, the following fields do not affect the behavior: description, manufacturer, model, labels. on device resource, the following fields do not affect the behavior: description and tag A device profile cannot be removed when it is associated to a device or provision watcher. A device profile can be removed or modified even when associated to an event or reading. However, configuration options (see New Configuration Settings below) are available to block the change or removal of a device profile for any reason. the rationale behind the new configuration settings was specifically to protect the event/reading association to device profiles. Events and readings are generally considered short lived (ephemeral) objects and already contain the necessary device profile information that is needed by the system during their short life without having to refer to and keep the device profile. But if an adopter wants to make sure the device profile is unmodified and still exists for any event/readings association (or for any reason), then the new config setting will block device profile changes or removals. 
see note below in Consequences that a new Units property must be added to the Reading object in order to support this rule and the need for all relevant profile data to be in the reading.","title":"Summary of Device Profile Rules"},{"location":"design/adr/core/0021-Device-Profile-Changes/#ancillary-rules-associated-to-device-profiles","text":"Name and ID fields (identifying fields) for device profiles, device resources, etc. cannot be modified and can never be null. A device profile can begin life \u201cempty\u201d - meaning that it has no device resources or device commands.","title":"Ancillary Rules associated to Device Profiles"},{"location":"design/adr/core/0021-Device-Profile-Changes/#new-apis","text":"The following APIs would be added to the metadata REST service in order to meet the design specified above. Add Profile General Property PATCH API (allow to modify profile's description, manufacturer, model and label fields) Add Profile Device Resource POST API Add Profile Device Resource PATCH API (allow to modify Description and IsHidden only) Add Profile Device Resource DELETE API (allow as described above) Add Profile Device Command POST API Add Profile Device Command PATCH API (allow as described above) Add Profile Device Command DELETE API (allow as described above)","title":"New APIs"},{"location":"design/adr/core/0021-Device-Profile-Changes/#new-configuration-settings","text":"Some adopters may not view event/reading data as ephemeral or short lived. These adopters may choose not to allow device profiles to be modified or removed when associated to an event or reading. For this reason, two new configuration options, in the [Writable.ProfileChange] section, will be added to metadata configuration that are used to reject modifications or deletions. 
StrictDeviceProfileChanges (set to false by default) StrictDeviceProfileDeletes (set to false by default) When either of these config settings is set to true, metadata would accordingly reject changes to or removal of profiles (note: metadata will not check that there are actually events or readings - or any object - associated to the device profile when these are set to true. It simply rejects all modifications or deletes to device profiles with the assumption that there could be events, readings or other objects associated and which need to be preserved).","title":"New Configuration Settings"},{"location":"design/adr/core/0021-Device-Profile-Changes/#consequencesconsiderations","text":"In order to allow device profiles to be updated or removed even when associated to an EdgeX event/reading, a new property needs to be added to the reading object. Readings will now contain a \u201cUnits\u201d (string) property. This property will indicate the units of measure for the Value in the Reading and will be populated based on the Units for the device resource. A new device service configuration property, ReadingUnits (set to true by default), will allow adopters to indicate they do not want units to be added to the readings (for cases where there is a concern about the number of readings and the extra data of adding units). The ReadingUnits configuration option will be added to the [Writable.Reading] section of device services (and addressed in the device service SDKs). This allows the event/reading to contain all relevant information from the device profile that is needed by the system during the course of the event/reading\u2019s life. 
This allows the device profile to be modified or even removed even when there are events/readings in the system that were created from information in the device profile.","title":"Consequences/Considerations"},{"location":"design/adr/core/0021-Device-Profile-Changes/#references","text":"Metadata API Device Service SDK Required Functionality","title":"References"},{"location":"design/adr/device-service/0002-Array-Datatypes/","text":"Array Datatypes Design Status Context Decision Consequences Status Approved Context The current data model does not directly provide for devices which provide array data. Small fixed-length arrays may be handled by defining multiple device resources - one for each element - and aggregating them via a resource command. Other array data may be passed using the Binary type. Neither of these approaches is ideal: the binary data is opaque and any service processing it would need specific knowledge to do so, and aggregation presents the device service implementation with a multiple-read request that could in many cases be better handled by a single request. This design adds arrays of primitives to the range of supported types in EdgeX. It comprises an extension of the DeviceProfile model, and an update to the definition of Reading. Decision DeviceProfile extension The permitted values of the Type field in PropertyValue are extended to include: \"BoolArray\", \"Uint8Array\", \"Uint16Array\", \"Uint32Array\", \"Uint64Array\", \"Int8Array\", \"Int16Array\", \"Int32Array\", \"Int64Array\", \"Float32Array\", \"Float64Array\" Readings In the API (v1 and v2), Reading.Value is a string representation of the data. If this is maintained, the representation for Array types will follow the JSON array syntax, ie [\"value1\", \"value2\", ...] Consequences Any service which processes Readings will need to be reworked to account for the new Reading type. 
Device Service considerations The API used for interfacing between device SDKs and device service implementations contains a local representation of reading values. This will need to be updated in line with the changes outlined here. For C, this will involve an extension of the existing union type. For Go, additional fields may be added to the CommandValue structure. Processing of numeric data in the device service, ie offset, scale etc will not be applied to the values in an array.","title":"Array Datatypes Design"},{"location":"design/adr/device-service/0002-Array-Datatypes/#array-datatypes-design","text":"Status Context Decision Consequences","title":"Array Datatypes Design"},{"location":"design/adr/device-service/0002-Array-Datatypes/#status","text":"Approved","title":"Status"},{"location":"design/adr/device-service/0002-Array-Datatypes/#context","text":"The current data model does not directly provide for devices which provide array data. Small fixed-length arrays may be handled by defining multiple device resources - one for each element - and aggregating them via a resource command. Other array data may be passed using the Binary type. Neither of these approaches is ideal: the binary data is opaque and any service processing it would need specific knowledge to do so, and aggregation presents the device service implementation with a multiple-read request that could in many cases be better handled by a single request. This design adds arrays of primitives to the range of supported types in EdgeX. 
It comprises an extension of the DeviceProfile model, and an update to the definition of Reading.","title":"Context"},{"location":"design/adr/device-service/0002-Array-Datatypes/#decision","text":"","title":"Decision"},{"location":"design/adr/device-service/0002-Array-Datatypes/#deviceprofile-extension","text":"The permitted values of the Type field in PropertyValue are extended to include: \"BoolArray\", \"Uint8Array\", \"Uint16Array\", \"Uint32Array\", \"Uint64Array\", \"Int8Array\", \"Int16Array\", \"Int32Array\", \"Int64Array\", \"Float32Array\", \"Float64Array\"","title":"DeviceProfile extension"},{"location":"design/adr/device-service/0002-Array-Datatypes/#readings","text":"In the API (v1 and v2), Reading.Value is a string representation of the data. If this is maintained, the representation for Array types will follow the JSON array syntax, ie [\"value1\", \"value2\", ...]","title":"Readings"},{"location":"design/adr/device-service/0002-Array-Datatypes/#consequences","text":"Any service which processes Readings will need to be reworked to account for the new Reading type.","title":"Consequences"},{"location":"design/adr/device-service/0002-Array-Datatypes/#device-service-considerations","text":"The API used for interfacing between device SDKs and device service implementations contains a local representation of reading values. This will need to be updated in line with the changes outlined here. For C, this will involve an extension of the existing union type. For Go, additional fields may be added to the CommandValue structure. Processing of numeric data in the device service, ie offset , scale etc will not be applied to the values in an array.","title":"Device Service considerations"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/","text":"Device Service REST API Status Approved Context This ADR details the REST API to be provided by Device Service implementations in EdgeX version 2.x. 
As such, it supersedes the equivalent sections of the earlier \"Device Service Functional Requirements\" document. These requirements should be implemented as far as possible within the Device Service SDKs, but they also apply to any Device Service implementation. Decision Common endpoints The DS should provide the REST endpoints that are expected of all EdgeX microservices, specifically: config metrics ping version Callback Endpoint Methods callback/device PUT and POST callback/device/name/{name} DELETE callback/profile PUT callback/watcher PUT and POST callback/watcher/name/{name} DELETE parameter meaning {name} the name of the device or watcher These endpoints are used by the Core Metadata service to inform the device service of metadata updates. Endpoints are defined for each of the objects of interest to a device service, ie Devices, Device Profiles and Provision Watchers. On receipt of calls to these endpoints the device service should update its internal state accordingly. Note that the device service does not need to be informed of the creation or deletion of device profiles, as these operations may only occur where no devices are associated with the profile. To avoid stale profile entries the device service should delete a profile from its cache when the last device using it is deleted. Object deletion When an object is deleted, the Metadata service makes a DELETE request to the relevant callback/{type}/name/{name} endpoint. Object creation and updates When an object is created or updated, the Metadata service makes a POST or PUT request respectively to the relevant callback/{type} endpoint. The payload of the request is the new or updated object, ie one of the Device, DeviceProfile or ProvisionWatcher DTOs. 
Device Endpoint Methods device/name/{name}/{command} GET and PUT parameter meaning {name} the name of the device {command} the command name The command specified must match a deviceCommand or deviceResource name in the device's profile body (for PUT ): An application/json SettingRequest, which is a set of key/value pairs where the keys are valid deviceResource names, and the values provide the command argument for that resource. Example: {\"AHU-TargetTemperature\": \"28.5\", \"AHU-TargetBand\": \"4.0\"} Return code Meaning 200 the command was successful 404 the specified device does not exist, or the command/resource is unknown 405 attempted write to a read-only resource 423 the specified device is locked (admin state) or disabled (operating state) 500 the device driver is unable to process the request response body : A successful GET operation will return a JSON-encoded EventResponse object, which contains one or more Readings. Example: {\"apiVersion\":\"v2\",\"deviceName\":\"Gyro\",\"origin\":1592405201763915855,\"readings\":[{\"deviceName\":\"Gyro\",\"name\":\"Xrotation\",\"value\":\"124\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Yrotation\",\"value\":\"-54\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Zrotation\",\"value\":\"122\",\"origin\":1592405201763915855,\"valueType\":\"int32\"}]} This endpoint is used for obtaining readings from a device, and for writing settings to a device. Data formats The values obtained when readings are taken, or used to make settings, are expressed as strings. 
Type EdgeX types Representation Boolean Bool \"true\" or \"false\" Integer Uint8-Uint64 , Int8-Int64 Numeric string, eg \"-132\" Float Float32 , Float64 Decimal with exponent, eg \"1.234e-5\" String String string Binary Bytes octet array Array BoolArray , Uint8Array-Uint64Array , Int8Array-Int64Array , Float32Array , Float64Array JSON Array, eg \"[\"1\", \"34\", \"-5\"]\" Notes: - The presence of a Binary reading will cause the entire Event to be encoded using CBOR rather than JSON - Arrays of String and Binary data are not supported Readings and Events A Reading represents a value obtained from a deviceResource. It contains the following fields Field name Description deviceName The name of the device profileName The name of the Profile describing the Device resourceName The name of the deviceResource origin A timestamp indicating when the reading was taken value The reading value valueType The type of the data Or for binary Readings, the following fields Field name Description deviceName The name of the device profileName The name of the Profile describing the Device resourceName The name of the deviceResource origin A timestamp indicating when the reading was taken binaryValue The reading value mediaType The MIME type of the data An Event represents the result of a GET command. If the command names a deviceResource, the Event will contain a single Reading. If the command names a deviceCommand, the Event will contain as many Readings as there are deviceResources listed in the deviceCommand. The fields of an Event are as follows: Field name Description deviceName The name of the Device from which the Readings are taken profileName The name of the Profile describing the Device origin The time at which the Event was created readings An array of Readings Query Parameters Calls to the device endpoints may include a Query String in the URL. This may be used to pass parameters relating to the request to the device service. 
Individual device services may define their own parameters to control specific behaviors. Parameters beginning with the prefix ds- are reserved to the Device SDKs and the following parameters are defined for GET requests: Parameter Valid Values Default Meaning ds-pushevent \"yes\" or \"no\" \"no\" If set to yes, a successful GET will result in an event being pushed to the EdgeX system ds-returnevent \"yes\" or \"no\" \"yes\" If set to no, there will be no Event returned in the http response Device States A Device in EdgeX has two states associated with it: the Administrative state and the Operational state. The Administrative state may be set to LOCKED (normally UNLOCKED ) to block access to the device for administrative reasons. The Operational state may be set to DOWN (normally UP ) to indicate that the device is not currently working. In either case access to the device via this endpoint will be denied and HTTP 423 (\"Locked\") will be returned. Data Transformations A number of simple data transformations may be defined in the deviceResource. The table below shows these transformations in the order in which they are applied to outgoing data, ie Readings. The transformations are inverted and applied in reverse order for incoming data. Transform Applicable reading types Effect mask Integers The reading is masked (bitwise-and operation) with the specified value. shift Integers The reading is bit-shifted by the specified value. Positive values indicate right-shift, negative for left. base Integers and Floats The reading is replaced by the specified value raised to the power of the reading. scale Integers and Floats The reading is multiplied by the specified value. offset Integers and Floats The reading is increased by the specified value. The operation of the mask transform on incoming data (a setting) is that the value to be set on the resource is the existing value bitwise-anded with the complement of the mask, bitwise-ored with the value specified in the request. 
ie, new-value = (current-value & !mask) | request-value The combination of mask and shift can therefore be used to access data contained in a subdivision of an octet. It is possible that following the application of the specified transformations, a value may exceed the range that may be represented by its type. Should this occur on a set operation, a suitable error should be logged and returned, along with the Bad Request http code 400. If it occurs as part of a get operation, the Reading's value should be set to the String \"overflow\" and its valueType to String . Assertions and Mappings Assertions are another attribute in a device resource's PropertyValue, which specify a string which the reading value is compared against. If the comparison fails, then the http request returns a string of the form \"Assertion failed for device resource: \\ , with value: \\ \" , this also has a side-effect of setting the device operatingstate to DISABLED . A 500 status code is also returned. Note that the error response and status code should be returned regardless of the ds-returnevent setting. Assertions are also checked where an event is being generated due to an AutoEvent, or asynchronous readings are pushed. In these cases if the assertion is triggered, an error should be logged and the operating state should be set as above. Assertions are not checked for settings, only for readings. Mappings may be defined in a deviceCommand. These allow Readings of string type to be remapped. Mappings are applied after assertions are checked, and are the final transformation before Readings are created. Mappings are also applied, but in reverse, to settings ( PUT request data). lastConnected timestamp Each Device has as part of its metadata a timestamp named lastConnected , this indicates the most recent occasion when the device was successfully interacted with. 
The device service should update this timestamp every time a GET or PUT operation succeeds, unless it has been configured not to do so (eg for performance reasons). Discovery Endpoint Methods discovery POST A call to this endpoint triggers the device discovery process, if enabled. See Discovery Design for details. Consequences Changes from v1.x API The callback endpoint is split according to the type of object being updated Callbacks for new and updated objects take the object in the request body The device/all form is removed GET requests take parameters controlling what is to be done with resulting Events, and the default behavior does not send the Event to core-data References OpenAPI definition of v2 API : https://github.com/edgexfoundry/device-sdk-go/blob/master/openapi/v2/device-sdk.yaml Device Service Functional Requirements (Geneva) : https://wiki.edgexfoundry.org/download/attachments/329488/edgex-device-service-requirements-v11.pdf?version=1&modificationDate=1591621033000&api=v2","title":"Device Service REST API"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#device-service-rest-api","text":"","title":"Device Service REST API"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#status","text":"Approved","title":"Status"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#context","text":"This ADR details the REST API to be provided by Device Service implementations in EdgeX version 2.x. As such, it supersedes the equivalent sections of the earlier \"Device Service Functional Requirements\" document. 
These requirements should be implemented as far as possible within the Device Service SDKs, but they also apply to any Device Service implementation.","title":"Context"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#decision","text":"","title":"Decision"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#common-endpoints","text":"The DS should provide the REST endpoints that are expected of all EdgeX microservices, specifically: config metrics ping version","title":"Common endpoints"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#callback","text":"Endpoint Methods callback/device PUT and POST callback/device/name/{name} DELETE callback/profile PUT callback/watcher PUT and POST callback/watcher/name/{name} DELETE parameter meaning {name} the name of the device or watcher These endpoints are used by the Core Metadata service to inform the device service of metadata updates. Endpoints are defined for each of the objects of interest to a device service, ie Devices, Device Profiles and Provision Watchers. On receipt of calls to these endpoints the device service should update its internal state accordingly. Note that the device service does not need to be informed of the creation or deletion of device profiles, as these operations may only occur where no devices are associated with the profile. 
To avoid stale profile entries the device service should delete a profile from its cache when the last device using it is deleted.","title":"Callback"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#object-deletion","text":"When an object is deleted, the Metadata service makes a DELETE request to the relevant callback/{type}/name/{name} endpoint.","title":"Object deletion"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#object-creation-and-updates","text":"When an object is created or updated, the Metadata service makes a POST or PUT request respectively to the relevant callback/{type} endpoint. The payload of the request is the new or updated object, ie one of the Device, DeviceProfile or ProvisionWatcher DTOs.","title":"Object creation and updates"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#device","text":"Endpoint Methods device/name/{name}/{command} GET and PUT parameter meaning {name} the name of the device {command} the command name The command specified must match a deviceCommand or deviceResource name in the device's profile body (for PUT ): An application/json SettingRequest, which is a set of key/value pairs where the keys are valid deviceResource names, and the values provide the command argument for that resource. Example: {\"AHU-TargetTemperature\": \"28.5\", \"AHU-TargetBand\": \"4.0\"} Return code Meaning 200 the command was successful 404 the specified device does not exist, or the command/resource is unknown 405 attempted write to a read-only resource 423 the specified device is locked (admin state) or disabled (operating state) 500 the device driver is unable to process the request response body : A successful GET operation will return a JSON-encoded EventResponse object, which contains one or more Readings. 
Example: {\"apiVersion\":\"v2\",\"deviceName\":\"Gyro\",\"origin\":1592405201763915855,\"readings\":[{\"deviceName\":\"Gyro\",\"name\":\"Xrotation\",\"value\":\"124\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Yrotation\",\"value\":\"-54\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Zrotation\",\"value\":\"122\",\"origin\":1592405201763915855,\"valueType\":\"int32\"}]} This endpoint is used for obtaining readings from a device, and for writing settings to a device.","title":"Device"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#data-formats","text":"The values obtained when readings are taken, or used to make settings, are expressed as strings. Type EdgeX types Representation Boolean Bool \"true\" or \"false\" Integer Uint8-Uint64 , Int8-Int64 Numeric string, eg \"-132\" Float Float32 , Float64 Decimal with exponent, eg \"1.234e-5\" String String string Binary Bytes octet array Array BoolArray , Uint8Array-Uint64Array , Int8Array-Int64Array , Float32Array , Float64Array JSON Array, eg \"[\"1\", \"34\", \"-5\"]\" Notes: - The presence of a Binary reading will cause the entire Event to be encoded using CBOR rather than JSON - Arrays of String and Binary data are not supported","title":"Data formats"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#readings-and-events","text":"A Reading represents a value obtained from a deviceResource. 
It contains the following fields Field name Description deviceName The name of the device profileName The name of the Profile describing the Device resourceName The name of the deviceResource origin A timestamp indicating when the reading was taken value The reading value valueType The type of the data Or for binary Readings, the following fields Field name Description deviceName The name of the device profileName The name of the Profile describing the Device resourceName The name of the deviceResource origin A timestamp indicating when the reading was taken binaryValue The reading value mediaType The MIME type of the data An Event represents the result of a GET command. If the command names a deviceResource, the Event will contain a single Reading. If the command names a deviceCommand, the Event will contain as many Readings as there are deviceResources listed in the deviceCommand. The fields of an Event are as follows: Field name Description deviceName The name of the Device from which the Readings are taken profileName The name of the Profile describing the Device origin The time at which the Event was created readings An array of Readings","title":"Readings and Events"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#query-parameters","text":"Calls to the device endpoints may include a Query String in the URL. This may be used to pass parameters relating to the request to the device service. Individual device services may define their own parameters to control specific behaviors. 
Parameters beginning with the prefix ds- are reserved to the Device SDKs and the following parameters are defined for GET requests: Parameter Valid Values Default Meaning ds-pushevent \"yes\" or \"no\" \"no\" If set to yes, a successful GET will result in an event being pushed to the EdgeX system ds-returnevent \"yes\" or \"no\" \"yes\" If set to no, there will be no Event returned in the http response","title":"Query Parameters"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#device-states","text":"A Device in EdgeX has two states associated with it: the Administrative state and the Operational state. The Administrative state may be set to LOCKED (normally UNLOCKED ) to block access to the device for administrative reasons. The Operational state may be set to DOWN (normally UP ) to indicate that the device is not currently working. In either case access to the device via this endpoint will be denied and HTTP 423 (\"Locked\") will be returned.","title":"Device States"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#data-transformations","text":"A number of simple data transformations may be defined in the deviceResource. The table below shows these transformations in the order in which they are applied to outgoing data, ie Readings. The transformations are inverted and applied in reverse order for incoming data. Transform Applicable reading types Effect mask Integers The reading is masked (bitwise-and operation) with the specified value. shift Integers The reading is bit-shifted by the specified value. Positive values indicate right-shift, negative for left. base Integers and Floats The reading is replaced by the specified value raised to the power of the reading. scale Integers and Floats The reading is multiplied by the specified value. offset Integers and Floats The reading is increased by the specified value. 
The operation of the mask transform on incoming data (a setting) is that the value to be set on the resource is the existing value bitwise-anded with the complement of the mask, bitwise-ored with the value specified in the request. ie, new-value = (current-value & !mask) | request-value The combination of mask and shift can therefore be used to access data contained in a subdivision of an octet. It is possible that following the application of the specified transformations, a value may exceed the range that may be represented by its type. Should this occur on a set operation, a suitable error should be logged and returned, along with the Bad Request http code 400. If it occurs as part of a get operation, the Reading's value should be set to the String \"overflow\" and its valueType to String .","title":"Data Transformations"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#assertions-and-mappings","text":"Assertions are another attribute in a device resource's PropertyValue, which specify a string which the reading value is compared against. If the comparison fails, then the http request returns a string of the form \"Assertion failed for device resource: \\ , with value: \\ \" , this also has a side-effect of setting the device operatingstate to DISABLED . A 500 status code is also returned. Note that the error response and status code should be returned regardless of the ds-returnevent setting. Assertions are also checked where an event is being generated due to an AutoEvent, or asynchronous readings are pushed. In these cases if the assertion is triggered, an error should be logged and the operating state should be set as above. Assertions are not checked for settings, only for readings. Mappings may be defined in a deviceCommand. These allow Readings of string type to be remapped. Mappings are applied after assertions are checked, and are the final transformation before Readings are created. 
Mappings are also applied, but in reverse, to settings ( PUT request data).","title":"Assertions and Mappings"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#lastconnected-timestamp","text":"Each Device has as part of its metadata a timestamp named lastConnected , this indicates the most recent occasion when the device was successfully interacted with. The device service should update this timestamp every time a GET or PUT operation succeeds, unless it has been configured not to do so (eg for performance reasons).","title":"lastConnected timestamp"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#discovery","text":"Endpoint Methods discovery POST A call to this endpoint triggers the device discovery process, if enabled. See Discovery Design for details.","title":"Discovery"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#consequences","text":"","title":"Consequences"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#changes-from-v1x-api","text":"The callback endpoint is split according to the type of object being updated Callbacks for new and updated objects take the object in the request body The device/all form is removed GET requests take parameters controlling what is to be done with resulting Events, and the default behavior does not send the Event to core-data","title":"Changes from v1.x API"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#references","text":"OpenAPI definition of v2 API : https://github.com/edgexfoundry/device-sdk-go/blob/master/openapi/v2/device-sdk.yaml Device Service Functional Requirements (Geneva) : https://wiki.edgexfoundry.org/download/attachments/329488/edgex-device-service-requirements-v11.pdf?version=1&modificationDate=1591621033000&api=v2","title":"References"},{"location":"design/adr/device-service/0012-DeviceService-Filters/","text":"Device Service Filters Status Approved (by TSC vote on 3/15/21) design (initially) for Hanoi - 
but now being considered for Ireland implementation TBD (desired feature targeted for Ireland or Jakarta) Context In EdgeX today, sensor/device data collected can be \"filtered\" by application services before being exported or sent to some north side application or system. Built-in application service functions (available through the app services SDK) allow EdgeX event/reading objects to be filtered by device name or by device ResourceName. That is, event/readings can be filtered by: which device sent the event/reading (as determined by the Event device property). the classification or origin (such as temperature or humidity) of data produced by the device as determined by the Reading's name property (which used to be the value descriptor and now refers to the device ResourceName). Two Levels of Device Service Filtering There are potentially two places where \"filtering\" in a device service could be useful. One (Sensor Data Filter) - after the device service has communicated with the sensor or device to get sensor values (but before the service creates Event/Reading objects and pushes those to core data). A sensor data filter would allow the device service to essentially ignore some of the raw sensed data. This would allow for some device service optimization in that the device service would not have to perform type transformations and creation of event/reading objects if the data can be eliminated at this early stage. This first level filtering would, if put in place, likely occur in the code where the read command gets done by the ProtocolDriver . Two (Reading Filter) - after the sensor data has been collected and read and put into Event/Reading objects, there is a desire to filter some of the Readings based on the Reading values or Reading name (which is the device ResourceName) or some combination of value and name. At this time, this design only addresses the need for the second filter (Reading Filter) . 
At the time of this writing, no applicable use case has yet been defined that warrants the Sensor Data Filter. Reading Filters Reading filters will allow, not unlike application service filter functions today, Readings in an Event to be removed if: the value was outside or inside some range, or the value was greater than, less than or equal to some value based on the Reading value (numeric) of a Reading outside a specified range (min/max) described in the service configuration. Thus avoiding sending in outlier or jittery data Readings that could negatively affect analytics. Future scope: based on the Reading value (numeric) equal to or near (within some specified range) the last reading. This allows a device service to reduce sending in Event/Readings that do not represent any significant change. This differs from the already implemented onChangeOnly in that it is filtering Readings within a specified degree of change. Note: this feature would require caching of readings which has not fully been implemented in the SDK. The existing mechanism for autoevents provides a partial cache. Added for future reference, but this feature would not be accomplished in the initial implementation; requiring extra design work on caching to be implemented. the value was, or was not, the same as some specified value or values (for strings, boolean and other non-numeric values) the value matches a pattern (glob and/or regex) when the value is a string. the name (the device ResourceName) matched a particular value; in other words match temperature or humidity as example device resources. Unlike application services, there is not a need to filter on a device name (or identifier). Simply disable the device in the device service if all Event/Readings are to be stopped for the device. In the case that all Readings of an Event are filtered, it is assumed the entire Event is deemed to be worthless and not sent to core data by the device service. 
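The range-based filtering described above can be sketched as a simple predicate over readings. This is an illustrative sketch only: the `Reading` struct is a minimal stand-in for the EdgeX DTO, the function names are hypothetical, and whether in-range readings are kept or dropped is an assumption (here, out-of-range outliers are dropped):

```go
package main

import (
	"fmt"
	"strconv"
)

// Reading is a minimal stand-in for the EdgeX Reading DTO; only the
// fields needed for filtering are included here.
type Reading struct {
	ResourceName string
	Value        string
}

// inRange reports whether a numeric reading value lies inside the
// inclusive [min, max] range; non-numeric values never match.
func inRange(r Reading, min, max float64) bool {
	v, err := strconv.ParseFloat(r.Value, 64)
	if err != nil {
		return false
	}
	return v >= min && v <= max
}

// filterOutOfRange drops readings whose values fall outside [min, max],
// i.e. it suppresses outlier or jittery data as described above.
func filterOutOfRange(readings []Reading, min, max float64) []Reading {
	var kept []Reading
	for _, r := range readings {
		if inRange(r, min, max) {
			kept = append(kept, r)
		}
	}
	return kept
}

func main() {
	rs := []Reading{{"Int64", "15"}, {"Int64", "99"}, {"Int64", "12"}}
	fmt.Println(len(filterOutOfRange(rs, 10, 20))) // 2
}
```

If the returned slice is empty, the enclosing Event would be discarded entirely, matching the behavior described above.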
If only some Readings from an Event are filtered, the Event minus the filtered Readings would be sent to core data. The filter behaves the same whether the collection of Readings and Events is triggered by a scheduled collection of data from the underlying sensor/device or triggered by a command request (as from the command service). Therefore, the call for a command request still results in a successful status code and a return of no results (or partial results) if the filter causes all or some of the readings to be removed. Design / Architecture A new function interface shall be defined that, when implemented, performs a Reading Filter operation. A ReadingFilter function would take a parameter (an Event containing readings), check whether the Readings of the Event match on the filtering configuration (see below) and if they do then remove them from the Event . The ReadingFilter function would return the Event object (minus filtered Readings ) or nil if the Event held no more Readings . Pseudo code for the generic function is provided below. The results returned will include a boolean to indicate whether any Reading objects were removed from the Event (allowing the receiver to know if some were filtered from the original list). func ( f Filter ) ReadingFilter ( lc logger . LoggingClient , event * models . Event ) ( * models . Event , error , boolean ) { // depending on impl; filtering for values in/out of a range, >, <, =, same, not same, from a particular name (device resource), etc. // The boolean will indicate whether any Readings were filtered from the Event. if ( len ( event . Readings ) > 0 ) if ( len filteredReadings > 0 ) return event , true else return event , false else return nil , true } Based on current needs/use cases, implementations of the function interface could include the following filter functions: func ( f Filter ) FilterByValue ( lc logger . LoggingClient , event * models . Event ) ( * models . 
Event , error , boolean ) {} func ( f Filter ) FilterByResourceNamesMatch ( lc logger . LoggingClient , event * models . Event ) ( * models . Event , error , boolean ) {} Note The app functions SDK comes with FilterByDeviceName and FilterByResourceName functions today. The FilterByResourceName would behave similarly to FilterByResourceNameMatch. The Filter structure houses the configuration parameters for which the filter functions work and filter on. Note The app functions SDK uses a fairly simple Filter structure. type Filter struct { FilterValues [] string FilterOut bool } Given the collection of filter operations (in range, out of range, equal or not equal), the following structure is proposed: type Filter struct { FilterValues [] string TargetResourceName string FilterOp string // enum of in (in range inclusive), out (outside a range exclusive), eq (equal) or ne (not equal) } Example uses of the Filter structure to specify filtering: Filter { FilterValues : { 10 , 20 }, TargetResourceName : \"Int64\" , FilterOp : \"in\" } // filter for those Int64 readings with values between 10-20 inclusive Filter { FilterValues : { 10 , 20 }, TargetResourceName : \"Int64\" , FilterOp : \"out\" } // filter for those Int64 readings with values outside of 10-20. Filter { FilterValues : { 8 , 10 , 12 }, TargetResourceName : \"Int64\" , FilterOp : \"eq\" } //filter for those Int64 readings with values of 8, 10, or 12. Filter { FilterValues : { 8 , 10 }, TargetResourceName : \"Int64\" , FilterOp : \"ne\" } //filter for those Int64 readings with values not equal to 8 or 10 Filter { FilterValues : { \"Int32\" , \"Int64\" }, TargetResourceName : nil , FilterOp : \"eq\" } //filter to be used with FilterByResourceNameMatch. Filter for resource names of Int32 or Int64. Filter { FilterValues : { \"Int32\" }, TargetResourceName : nil , FilterOp : \"ne\" } //filter to be used with FilterByResourceNameMatch. Filter for resource names not equal to (excluding) Int32. A NewFilter function creates, initializes and returns a new instance of the filter based on the configuration provided. 
func NewReadingNameFilter(filterValues []string, filterOp string) Filter { return Filter{FilterValues: filterValues, FilterOp: filterOp} } Sharing filter functions If one were to explore the filtering functions in the app functions SDK filter.go (both FilterByDeviceName and FilterByValueDescriptor ), the filters operate on the Event model object and return the same objects ( Event or nil). Ideally, since both app services and device services generally share the same interface model (from go-mod-core-contracts ), it would be desirable to share the same filter functions between SDKs and associated services. Decisions on how to do this in Go - whether by a shared module, for example - are left as a future release design and implementation task, as the need for common filter functions across device services and application services is identified in use cases. C needs are likely to be handled in the SDK directly. Additional Design Considerations As Device Services do not have the concept of a functions pipeline like application services do, consideration must be given as to how and where to: provide configuration to specify which filter functions to invoke create the filter invoke the filtering functions At this time, custom filters will not be supported, as custom filters would not be known by the SDK and therefore could not be specified in configuration. This is consistent with the app functions SDK and filtering. Function Inflection Point It is precisely after the conversion to Event/Reading objects (after the async readings are assembled into events) and before returning that result in the common.SendEvent (in utils.go) function that the device service should invoke the required filter functions. In the existing V1 implementation of the device-sdk-go, commands, async readings, and auto-events all call the function common.SendEvent() . Note: V2 implementation will require some re-evaluation of this inflection point.
Where possible, the implementation should locate a single point of inflection. In the C SDK, it is likely that the filters will be called before conversion to Event/Reading objects - they will operate on commandresult objects (equivalent to CommandValues). The order in which functions are called is important when more than one filter is provided. The order in which functions are called should reflect the order listed in the configuration of the filters. Events containing binary values (event.HasBinaryValue) will not be filtered. Future releases may include binary value filters. Setting Filter Function and Configuration When filter functions are shared (or appear to be doing the same type of work) between SDKs, the configuration of the similar filter functions should also look similar. The app functions SDK configuration model for filters should therefore be followed. While device services do not have pipelines, the inclusion and configuration of filters for device services should have a similar look (to provide symmetry with app services). The configuration has to provide the functions required and the parameters to make the functions work - even though the association to a pipeline is not required. Below is the common app service configuration as it relates to filters: [Writable.Pipeline] ExecutionOrder = \"FilterByDeviceName, TransformToXML, SetOutputData\" [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] DeviceNames = \"Random-Float-Device,Random-Integer-Device\" FilterOut = \"false\" Suggested and hypothetical configuration for the device service reading filters should look something like the example below.
[Writable.Filters] # filter readings where resource name equals Int32 ExecutionOrder = \"FilterByResourceNamesMatch, FilterByValue\" [Writable.Filters.Functions.FilterByResourceNamesMatch] [Writable.Filters.Functions.FilterByResourceNamesMatch.Parameters] FilterValues = \"Int32\" FilterOp = \"eq\" # filter readings where the resource name is Int64 and the values are between 10 and 20 [Writable.Filters.Functions.FilterByValue] [Writable.Filters.Functions.FilterByValue.Parameters] TargetResourceName = \"Int64\" FilterValues = [ 10, 20 ] FilterOp = \"in\" Decision To be determined Consequences This design does not take into account potential changes found with the V2 API. References","title":"Device Service Filters"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#device-service-filters","text":"","title":"Device Service Filters"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#status","text":"Approved (by TSC vote on 3/15/21) design (initially) for Hanoi - but now being considered for Ireland implementation TBD (desired feature targeted for Ireland or Jakarta)","title":"Status"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#context","text":"In EdgeX today, sensor/device data collected can be \"filtered\" by application services before being exported or sent to some north side application or system. Built-in application service functions (available through the app services SDK) allow EdgeX event/reading objects to be filtered by device name or by device ResourceName. That is, event/readings can be filtered by: which device sent the event/reading (as determined by the Event device property).
the classification or origin (such as temperature or humidity) of data produced by the device as determined by the Reading's name property (which used to be the value descriptor and now refers to the device ResourceName).","title":"Context"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#two-levels-of-device-service-filtering","text":"There are potentially two places where \"filtering\" in a device service could be useful. One (Sensor Data Filter) - after the device service has communicated with the sensor or device to get sensor values (but before the service creates Event/Reading objects and pushes those to core data). A sensor data filter would allow the device service to essentially ignore some of the raw sensed data. This would allow for some device service optimization in that the device service would not have to perform type transformations and create event/reading objects if the data can be eliminated at this early stage. This first-level filtering would, if put in place, likely occur in the code associated with the read commands performed by the ProtocolDriver . Two (Reading Filter) - after the sensor data has been collected and read and put into Event/Reading objects, there is a desire to filter some of the Readings based on the Reading values or Reading name (which is the device ResourceName) or some combination of value and name. At this time, this design only addresses the need for the second filter (Reading Filter) .
At the time of this writing, no applicable use case has yet been defined to warrant the Sensor Data Filter.","title":"Two Levels of Device Service Filtering"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#reading-filters","text":"Reading filters will allow, not unlike application service filter functions today, Readings in an Event to be removed if: the value was outside or inside some range, or the value was greater than, less than or equal to some value based on the Reading value (numeric) of a Reading outside a specified range (min/max) described in the service configuration. This avoids sending in outlier or jittery data Readings that could negatively affect analytics. Future scope: based on the Reading value (numeric) equal to or near (within some specified range) the last reading. This allows a device service to reduce sending in Event/Readings that do not represent any significant change. This differs from the already implemented onChangeOnly in that it is filtering Readings within a specified degree of change. Note: this feature would require caching of readings, which has not fully been implemented in the SDK. The existing mechanism for autoevents provides a partial cache. Added for future reference, but this feature would not be accomplished in the initial implementation; it requires extra design work on caching to be implemented. the value was the same as (or not the same as) some specified value or values (for strings, booleans and other non-numeric values) the value matches a pattern (glob and/or regex) when the value is a string. the name (the device ResourceName) matched a particular value; in other words, match temperature or humidity as example device resources. Unlike application services, there is not a need to filter on a device name (or identifier). Simply disable the device in the device service if all Event/Readings are to be stopped for the device.
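As an illustration of the filtering rules described above, a minimal, self-contained sketch of a value filter is shown below. The Reading type and helper names are simplified stand-ins (hypothetical; not the actual go-mod-core-contracts model or SDK code), and it assumes matching readings are removed from the result, per the design discussed in this ADR.

```go
package main

import (
	"fmt"
	"strconv"
)

// Filter mirrors the configuration structure proposed in this ADR.
type Filter struct {
	FilterValues       []string
	TargetResourceName string
	FilterOp           string // "in", "out", "eq" or "ne"
}

// Reading is a simplified stand-in for the core-contracts Reading model.
type Reading struct {
	Name  string
	Value string
}

// FilterByValue returns the readings that survive the filter and a bool
// indicating whether any readings were removed (matching readings are dropped).
func (f Filter) FilterByValue(readings []Reading) ([]Reading, bool) {
	var kept []Reading
	for _, r := range readings {
		if r.Name != f.TargetResourceName || !f.match(r.Value) {
			kept = append(kept, r)
		}
	}
	return kept, len(kept) != len(readings)
}

// match reports whether a reading value matches the configured operation.
func (f Filter) match(value string) bool {
	switch f.FilterOp {
	case "in", "out": // numeric range checks; FilterValues holds [min, max]
		v, err := strconv.ParseFloat(value, 64)
		if err != nil || len(f.FilterValues) != 2 {
			return false
		}
		lo, _ := strconv.ParseFloat(f.FilterValues[0], 64)
		hi, _ := strconv.ParseFloat(f.FilterValues[1], 64)
		inRange := v >= lo && v <= hi
		if f.FilterOp == "in" {
			return inRange
		}
		return !inRange
	case "eq", "ne": // set membership checks on the raw value
		found := false
		for _, fv := range f.FilterValues {
			if fv == value {
				found = true
			}
		}
		if f.FilterOp == "eq" {
			return found
		}
		return !found
	}
	return false
}

func main() {
	f := Filter{FilterValues: []string{"10", "20"}, TargetResourceName: "Int64", FilterOp: "in"}
	readings := []Reading{{Name: "Int64", Value: "15"}, {Name: "Int64", Value: "42"}}
	kept, removed := f.FilterByValue(readings)
	fmt.Println(len(kept), removed) // prints: 1 true (the in-range value 15 was filtered out)
}
```

The example filters out the reading with value 15 (inside the inclusive 10-20 range) while keeping 42, and reports via the bool that at least one reading was removed.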
In the case that all Readings of an Event are filtered, it is assumed the entire Event is deemed to be worthless and not sent to core data by the device service. If only some Readings from an Event are filtered, the Event minus the filtered Readings would be sent to core data. The filter behaves the same whether the collection of Readings and Events is triggered by a scheduled collection of data from the underlying sensor/device or triggered by a command request (as from the command service). Therefore, the call for a command request still results in a successful status code and a return of no results (or partial results) if the filter causes all or some of the readings to be removed.","title":"Reading Filters"},{"location":"design/adr/devops/0007-Release-Automation/","text":"Release Automation Status Approved by TSC 04/08/2020 Context EdgeX Foundry is a framework composed of microservices to ease development of IoT/Edge solutions.
With the framework getting richer and the project growing, the number of artifacts to be released has increased. This proposal outlines a method for automating the release process for the base artifacts. Requirements Release Artifact Definition For the scope of the Hanoi release, artifact types are defined as: GitHub tags in the repositories. Docker images in our Nexus repository and Docker Hub. *Snaps in the Snapcraft store. This list is likely to expand in future releases. *The building and publishing of snaps was removed from community scope in September 2020 and is managed outside the community by Canonical. General Requirements As the EdgeX Release Czar I gathered the following requirements for automating this part of the release. The release automation needs a manual trigger, initiated by the EdgeX Release Czar or the Linux Foundation Release Engineers. The goal of this automation is to have a \"push button\" release mechanism to reduce human error in our release process. Release artifacts can come from one or more GitHub repositories at a time. GitHub repositories can have one or more release artifact types to release. GitHub repositories can have one or more artifacts of a specific type to release. (For example: The mono repository, edgex-go, has more than 20 docker images to release.) GitHub repositories may be released at different times. (For example: Application and Device service repositories can be released on a different day than the Core services in the mono repository.) Ability to track multiple release streams for the project. An audit trail history for releases. Location The code that will manage the release automation for EdgeX Foundry will live in a repository called cd-management . This repository will have a branch named release that will track the releases of artifacts off the main branch of the EdgeX Foundry repositories.
Multiple Release Streams EdgeX Foundry has this idea of multiple release streams that basically coincide with different named branches in GitHub. For the majority of the main releases we will be targeting those off the main branch. In our cd-management repository we will have a release branch that will track the main branches of the EdgeX repositories. In the future we will mark a specific release for long term support (LTS). When this happens we will have to branch off main in the EdgeX repositories and create a separate release stream for the LTS. The suggestion at that point will be to branch off the release branch in cd-management as well and use this new release branch to track the LTS branches in the EdgeX repositories. Release Flow Go Modules, Device and Application SDKs During Development Go modules, Application and Device SDKs only release a GitHub tag as their release. Go modules, Application and Device SDKs are set up to automatically increment a developmental version tag on each merge to main . (IE: 1.0.0-dev.1 -> 1.0.0-dev.2) Release The release automation for Go Modules, Device and Application SDKs is used to set the final release version git tag. (IE: 1.0.0-dev.X -> 1.0.0) For each release, the Go Modules, Device and Application SDK repositories will be tagged with the release version. Core Services (Including Security and System Management services), Application Services, Device Services and Supporting Docker Images During Development For the Core Services, Application Services, Device Services and Supporting Docker Images we release GitHub tags and docker images. On every merge to the main branch we will do the following: increment a developmental version tag on GitHub (IE: 1.0.0-dev.1 -> 1.0.0-dev.2) and stage docker images in our Nexus repository (docker.staging). Release The release automation will need to do the following: Set version tag on GitHub.
(IE: 1.0.0-dev.X -> 1.0.0) Promote docker images in our Nexus repository from docker.staging to docker.release and the public Docker Hub. Supporting Assets (e.g. edgex-cli) During Development For supporting release assets (e.g. edgex-cli) we release GitHub tags on every merge to the main branch. For every merge to main we will do the following: increment a developmental version tag on GitHub (IE: 1.0.0-dev.1 -> 1.0.0-dev.2) and store the build artifacts in our Nexus repository. Release For EdgeX releases the release automation will set the final release version by creating a git tag (e.g. 1.0.0-dev.X -> 1.0.0) and produce a GitHub Release containing the binary assets targeted for release.","title":"Release Automation"},{"location":"design/adr/devops/0010-Release-Artifacts/","text":"Release Artifacts Status Approved Context During the Geneva release of EdgeX Foundry the DevOps WG transformed the CI/CD process with new Jenkins pipeline functionality. After this new functionality was added we also started adding release automation. This new automation is outlined in ADR 0007 Release Automation. However, ADR 0007 Release Automation outlines only two release artifact types. This document is meant to be a living document that outlines all currently supported artifacts associated with an EdgeX Foundry release, and should be updated if/when this list changes. Release Artifact Types Docker Images Tied to Code Release? Yes Docker images are released for every named release of EdgeX Foundry. During development the community releases images to the docker.staging repository in Nexus . At the time of release we promote the last tested image from docker.staging to docker.release . In addition, we publish the docker image on DockerHub . Nexus Retention Policy docker.snapshots Retention Policy: 90 days since last download Contains: Docker images that are not expected to be released. This contains images to optimize the builds in the CI infrastructure. The definitions of these docker images can be found in the edgexfoundry/ci-build-images Github repository.
Docker Tags Used: Version, Latest docker.staging Retention Policy: 180 days since last download Contains: Docker images built for potential release and testing purposes during development. Docker Tags Used: Version (ie: v1.x), Release Branch (master, fuji, etc), Latest docker.release Retention Policy: No automatic removal. Requires TSC approval to remove images from this repository. Contains: Officially released docker images for EdgeX. Docker Tags Used: Version (ie: v1.x), Latest Nexus Cleanup Policies Reference Docker Compose Files Tied to Code Release? Yes Docker compose files are released alongside the docker images for every release of EdgeX Foundry. During development the community maintains compose files in a folder named nightly-build . These compose files are meant to be used by our testing frameworks. At the time of release the community makes compose files for that release in a folder matching its name. (ie: geneva ) DockerHub Image Descriptions and Overviews Tied to Code Release? No After Docker images are published to DockerHub , automation should be run to update the image Overviews and Descriptions of the necessary images. This automation is located in the edgex-docker-hub-documentation branch of the cd-management repository. In preparation for the release the community makes changes to the Overview and Description metadata as appropriate. The Release Czar will coordinate the execution of the automation near the release time. Github Page: EdgeX Docs Tied to Code Release? No EdgeX Foundry releases a set of documentation for our project at http://docs.edgexfoundry.org . This page is a Github page that is managed by the edgexfoundry/edgex-docs Github repository. As a community we make our best effort to keep these docs up to date. On this page we are also versioning the docs with the semantic versions of the named releases.
As a community we try to version our documentation site shortly after the official release date but documentation changes are addressed as we find them throughout the release cycle. GitHub Tags Tied to Code Release? Yes, for the final semantic version Github tags are used to track the releases of EdgeX Foundry. During development the tags are incremented automatically for each commit using a development suffix (ie: v1.1.1-dev.1 -> v1.1.1-dev.2 ). At the time of release we create a tag with the final semantic version (ie: v1.1.1 ). Snaps Tied to Code Release? Yes The building of snaps was removed from community scope in September 2020 but snaps are still available on the snapcraft store . Canonical publishes daily arm64 and amd64 releases of the following snaps to latest/edge in the Snap Store. These builds take place on the Canonical Launchpad platform and use the latest code from the master branch of each EdgeX repository, versioned using the latest git tag. edgexfoundry edgex-app-service-configurable edgex-device-camera edgex-device-rest edgex-device-modbus edgex-device-mqtt edgex-device-grove edgex-cli (work-in-progress) Note - this list may expand over time. At code freeze the edgexfoundry snap revision in the edge channel is promoted to latest/beta and $TRACK/beta. Publishing to beta will trigger the Canonical checkbox automated tests, which include tests on a variety of hardware hosted by Canonical. When the project tags a release of any of the snaps listed above, the resulting snap revision is first promoted from the edge channel to latest/candidate and $TRACK/candidate. Canonical tests this revision, and if all looks good, releases to latest/stable and $TRACK/stable. Canonical may also publish updates to the EdgeX snaps after release to address high/critical bugs and CVEs (common vulnerabilities and exposures). Note - in the above descriptions, $TRACK corresponds to the named release tracks (e.g. fuji, geneva, hanoi, ...) 
which are created for every major/minor release of EdgeX Foundry. SwaggerHub API Docs Tied to Code Release? No In addition to our documentation site EdgeX Foundry also releases our API specifications on Swaggerhub. Testing Framework Tied to Code Release? Yes The EdgeX Foundry community maintains a set of tests for regression testing; during development this framework tracks the master branch of the components of EdgeX. At the time of release we will update the testing frameworks to point at the released Github tags and add a version tag to the testing frameworks themselves. This creates a snapshot of the testing framework at the time of release for validation of the official release. GitHub Release Artifacts Tied to Code Release? Yes GitHub release functionality is utilized on some repositories to release binary artifacts/assets (e.g. zip/tar files). These are versioned with the semantic version and found on the repository's GitHub Release page under 'Assets'. Known Build Dependencies for EdgeX Foundry There are some internal build dependencies within the EdgeX Foundry organization. When building artifacts for validation or a release you will need to take into account the build dependencies to make sure you build them in the correct order. Application services have a dependency on the Application Functions SDK. Go Device services have a dependency on the Go Device SDK. C Device services have a dependency on the C Device SDK. Decision Consequences This document is meant to be a living document of all the release artifacts of EdgeX Foundry. With this ADR we will have a good understanding of what needs to be released and when. 
Without this document this information will remain tribal knowledge within the community.","title":"Release Artifacts"},{"location":"design/adr/devops/0010-Release-Artifacts/#release-artifacts","text":"","title":"Release Artifacts"},{"location":"design/adr/devops/0010-Release-Artifacts/#status","text":"Approved","title":"Status"},{"location":"design/adr/devops/0010-Release-Artifacts/#context","text":"During the Geneva release of EdgeX Foundry the DevOps WG transformed the CI/CD process with new Jenkins pipeline functionality. After this new functionality was added we also started adding release automation. This new automation is outlined in ADR 0007 Release Automation. However, in ADR 0007 Release Automation only two release artifact types are outlined. This document is meant to be a living document that outlines all currently supported artifacts associated with an EdgeX Foundry release, and should be updated if/when this list changes.","title":"Context"},{"location":"design/adr/devops/0010-Release-Artifacts/#release-artifact-types","text":"","title":"Release Artifact Types"},{"location":"design/adr/devops/0010-Release-Artifacts/#docker-images","text":"Tied to Code Release? Yes Docker images are released for every named release of EdgeX Foundry. During development the community releases images to the docker.staging repository in Nexus . At the time of release we promote the last tested image from docker.staging to docker.release . In addition to that we will publish the docker image on DockerHub .","title":"Docker Images"},{"location":"design/adr/devops/0010-Release-Artifacts/#nexus-retention-policy","text":"","title":"Nexus Retention Policy"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockersnapshots","text":"Retention Policy: 90 days since last download Contains: Docker images that are not expected to be released. This contains images to optimize the builds in the CI infrastructure. 
The definitions of these docker images can be found in the edgexfoundry/ci-build-images Github repository. Docker Tags Used: Version, Latest","title":"docker.snapshots"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerstaging","text":"Retention Policy: 180 days since last download Contains: Docker images built for potential release and testing purposes during development. Docker Tags Used: Version (ie: v1.x), Release Branch (master, fuji, etc), Latest","title":"docker.staging"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerrelease","text":"Retention Policy: No automatic removal. Requires TSC approval to remove images from this repository. Contains: Officially released docker images for EdgeX. Docker Tags Used: Version (ie: v1.x), Latest Nexus Cleanup Policies Reference","title":"docker.release"},{"location":"design/adr/devops/0010-Release-Artifacts/#docker-compose-files","text":"Tied to Code Release? Yes Docker compose files are released alongside the docker images for every release of EdgeX Foundry. During development the community maintains compose files in a folder named nightly-build . These compose files are meant to be used by our testing frameworks. At the time of release the community makes compose files for that release in a folder matching its name. (ie: geneva )","title":"Docker Compose Files"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerhub-image-descriptions-and-overviews","text":"Tied to Code Release? No After Docker images are published to DockerHub , automation should be run to update the image Overviews and Descriptions of the necessary images. This automation is located in the edgex-docker-hub-documentation branch of the cd-management repository. In preparation for the release the community makes changes to the Overview and Description metadata as appropriate. 
The Release Czar will coordinate the execution of the automation near the release time.","title":"DockerHub Image Descriptions and Overviews"},{"location":"design/adr/devops/0010-Release-Artifacts/#github-page-edgex-docs","text":"Tied to Code Release? No EdgeX Foundry releases a set of documentation for our project at http://docs.edgexfoundry.org . This page is a Github page that is managed by the edgexfoundry/edgex-docs Github repository. As a community we make our best effort to keep these docs up to date. On this page we are also versioning the docs with the semantic versions of the named releases. As a community we try to version our documentation site shortly after the official release date but documentation changes are addressed as we find them throughout the release cycle.","title":"Github Page: EdgeX Docs"},{"location":"design/adr/devops/0010-Release-Artifacts/#github-tags","text":"Tied to Code Release? Yes, for the final semantic version Github tags are used to track the releases of EdgeX Foundry. During development the tags are incremented automatically for each commit using a development suffix (ie: v1.1.1-dev.1 -> v1.1.1-dev.2 ). At the time of release we create a tag with the final semantic version (ie: v1.1.1 ).","title":"GitHub Tags"},{"location":"design/adr/devops/0010-Release-Artifacts/#snaps","text":"Tied to Code Release? Yes The building of snaps was removed from community scope in September 2020 but snaps are still available on the snapcraft store . Canonical publishes daily arm64 and amd64 releases of the following snaps to latest/edge in the Snap Store. These builds take place on the Canonical Launchpad platform and use the latest code from the master branch of each EdgeX repository, versioned using the latest git tag. edgexfoundry edgex-app-service-configurable edgex-device-camera edgex-device-rest edgex-device-modbus edgex-device-mqtt edgex-device-grove edgex-cli (work-in-progress) Note - this list may expand over time. 
At code freeze the edgexfoundry snap revision in the edge channel is promoted to latest/beta and $TRACK/beta. Publishing to beta will trigger the Canonical checkbox automated tests, which include tests on a variety of hardware hosted by Canonical. When the project tags a release of any of the snaps listed above, the resulting snap revision is first promoted from the edge channel to latest/candidate and $TRACK/candidate. Canonical tests this revision, and if all looks good, releases to latest/stable and $TRACK/stable. Canonical may also publish updates to the EdgeX snaps after release to address high/critical bugs and CVEs (common vulnerabilities and exposures). Note - in the above descriptions, $TRACK corresponds to the named release tracks (e.g. fuji, geneva, hanoi, ...) which are created for every major/minor release of EdgeX Foundry.","title":"Snaps"},{"location":"design/adr/devops/0010-Release-Artifacts/#swaggerhub-api-docs","text":"Tied to Code Release? No In addition to our documentation site EdgeX Foundry also releases our API specifications on Swaggerhub.","title":"SwaggerHub API Docs"},{"location":"design/adr/devops/0010-Release-Artifacts/#testing-framework","text":"Tied to Code Release? Yes The EdgeX Foundry community maintains a set of tests for regression testing; during development this framework tracks the master branch of the components of EdgeX. At the time of release we will update the testing frameworks to point at the released Github tags and add a version tag to the testing frameworks themselves. This creates a snapshot of the testing framework at the time of release for validation of the official release.","title":"Testing Framework"},{"location":"design/adr/devops/0010-Release-Artifacts/#github-release-artifacts","text":"Tied to Code Release? Yes GitHub release functionality is utilized on some repositories to release binary artifacts/assets (e.g. zip/tar files). 
These are versioned with the semantic version and found on the repository's GitHub Release page under 'Assets'.","title":"GitHub Release Artifacts"},{"location":"design/adr/devops/0010-Release-Artifacts/#known-build-dependencies-for-edgex-foundry","text":"There are some internal build dependencies within the EdgeX Foundry organization. When building artifacts for validation or a release you will need to take into account the build dependencies to make sure you build them in the correct order. Application services have a dependency on the Application Functions SDK. Go Device services have a dependency on the Go Device SDK. C Device services have a dependency on the C Device SDK.","title":"Known Build Dependencies for EdgeX Foundry"},{"location":"design/adr/devops/0010-Release-Artifacts/#decision","text":"","title":"Decision"},{"location":"design/adr/devops/0010-Release-Artifacts/#consequences","text":"This document is meant to be a living document of all the release artifacts of EdgeX Foundry. With this ADR we will have a good understanding of what needs to be released and when. Without this document this information will remain tribal knowledge within the community.","title":"Consequences"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/","text":"Creation and Distribution of Secrets Status Approved Context This ADR seeks to clarify and prioritize the secret handling approach taken by EdgeX. EdgeX microservices need a number of secrets to be created and distributed in order to create a functional, secure system. Among these secrets are: Privileged administrator passwords (such as a database superuser password) Service account passwords (e.g. non-privileged database accounts) PKI private keys There is a lack of consistency in how secrets are created and distributed to EdgeX microservices, and when developers need to add new components to the system, it is unclear what the preferred approach should be. 
This document assumes a threat model wherein the EdgeX services are sandboxed (such as in a snap or a container) and the host system is trusted, and all services running in a single snap share a trust boundary. Terms The following terms will be helpful for understanding the subsequent discussion: SECRETSLOC is a protected file system path where bootstrapping secrets are stored. While EdgeX implements a sophisticated secret handling mechanism, that mechanism itself requires secrets. For example, every microservice that talks to Vault must have its own unique secret to authenticate: Vault itself cannot be used to distribute these secrets. SECRETSLOC fulfills the role that the non-routable instance data IP address, 169.254.169.254, fulfills in the public cloud: delivery of bootstrapping secrets. As EdgeX does not have a hypervisor nor virtual machines for this purpose, a protected file system path is used instead. SECRETSLOC is implementation-dependent. A desirable feature of SECRETSLOC would be that data written here is kept in RAM and is not persisted to storage media. This property is not achievable in all circumstances. For Docker, a list of suggested paths--in preference order--is: /run/edgex/secrets (a tmpfs volume on a Linux host) /tmp/edgex/secrets (a temporary file area on Linux and MacOS hosts) A persistent docker volume (use when host bind mounts are not available) For snaps, a list of suggested paths--in preference order--is: * /run/snap. $SNAP_NAME / (a tmpfs volume on a Linux host) * $SNAP_DATA /secrets (a snap-specific persistent data area) * TBD (a content interface that allows for sharing of secrets from the core snap) Current practices survey A survey of the existing EdgeX secrets reveals the following approaches. A designation of \"compliant\" means that the current implementation is aligned with the recommended practices documented in the next section. 
A designation of \"non-compliant\" means that the current implementation uses an implementation mechanism outside of the recommended practices documented in the next section. A \"non-compliant\" implementation is a candidate for refactoring to bring the implementation into conformance with the recommended practices. System-managed secrets PKI private keys Docker: PKI generated by standalone utility every cold start of the framework. Distribution via SECRETSLOC . (Compliant.) Snaps: PKI generated by standalone utility every cold start of the framework. Deployed to SECRETSLOC . (Compliant.) Secret store master password Docker: Distribution via persistent docker volume. (Non-compliant.) Snaps: Stored in $SNAP_DATA/config/security-secrets-setup/res . (Non-compliant.) Secret store per-service authentication tokens Docker: Distribution via SECRETSLOC generated every cold start of the framework. (Compliant.) Snaps: Distribution via SECRETSLOC , generated every cold start of the framework. (Compliant.) Postgres superuser password Docker: Hard-coded into docker-compose file, checked in to source control. (Non-compliant.) Snaps: Generated at snap install time via \"apg\" (\"automatic password generator\") tool, installed into Postgres, cached to $SNAP_DATA/config/postgres/kongpw (non-compliant), and passed to Kong via $KONG_PG_PASSWORD . MongoDB service account passwords Docker: Direct consumption from secret store. (Compliant.) Snaps: Direct consumption from secret store. (Compliant.) Redis authentication password Docker: Server--staged to secrets volume and injected via command line. (Non-compliant.) Clients--direct consumption from secret store. (Compliant.) Snaps: Server--staged to $SNAP_DATA/secrets/edgex-redis/redis5-password and injected via command line. (Non-compliant.) Clients--direct consumption from secret store. (Compliant.) Kong client authentication tokens Docker: System of reference is unencrypted Postgres database. (Non-compliant.) 
Snaps: System of reference is unencrypted Postgres database. (Non-compliant.) Note: in the current implementation, Consul is being operated as a public service. Consul will be a subject of a future \"bootstrapping ADR\" due to its role in service location. User-managed secrets User-managed secrets functionality is provided by app-functions-sdk-go . If security is enabled, secrets are retrieved from Vault. If security is disabled, secrets are retrieved from the configuration provider. If the configuration provider is not available, secrets are read from the underlying .toml . It is taken as granted in this ADR that secrets originating in the configuration provider or from .toml configuration files are not secret. The fallback mechanism is provided as a convenience to the developer, who would otherwise have to litter their code with \"if (isSecurityEnabled())\" logic leading to implementation inconsistencies. The central database credential is supplied by GetDatabaseCredentials() and returns the database credential assigned to app-service-configurable . If security is enabled, database credentials are retrieved using the standard flow. If security is disabled, secrets are retrieved from the configuration provider from a special section called [Writable.InsecureSecrets] . If not found there, the configuration provider is searched for credentials stored in the legacy [Databases.Primary] section using the Username and Password keys. Each user application has its own exclusive-use area of the secret store that is accessed via GetSecrets() . If security is enabled, secret requests are passed along to go-mod-secrets using an application-specific access token. If security is disabled, secret requests are made to the configuration provider from the [Writable.InsecureSecrets] section. There is no fallback configuration location. 
As user-managed secrets have no framework support for initialization, a special StoreSecrets() method is made available to the application for the application to initialize its own secrets. This method is only available in security-enabled mode. No changes to user-managed secrets are being proposed in this ADR. Decision Creation of secrets Management of hardware-bound secrets is platform-specific and out-of-scope for the EdgeX framework. EdgeX open source will contain only the necessary hooks to integrate platform-specific functionality. For software-managed secrets, the system of reference of secrets in EdgeX is the EdgeX secret store. The EdgeX secret store provides for encryption of secrets at rest. This term means that if a secret is replicated, the EdgeX secret store is the authoritative source of truth of the secret. Whenever possible, the EdgeX secret store should also be the record of origin of a secret. This means creating secrets inside of the EdgeX secret store is preferable to importing an externally-created secret into the secret store. This can often be done for framework-managed secrets, but is not possible for user-managed secrets. Choosing between alternative forms of secrets When given a choice between plain-text secrets and cryptographic keys, cryptographic keys should be preferred. An example situation would be the introduction of an MQTT message broker. A broker may support both TLS client authentication as well as username/password authentication. In such a situation, TLS client authentication would be preferred: The cryptographic key is typically longer in bits than a plain-text secret. A plain-text secret will require transport encryption in order to protect confidentiality of the secret, such as server-side TLS. Use of TLS client authentication typically eliminates the need for additional assets on the server side (such as a password database) to authenticate the client, by relying on digital signature instead. 
TLS client authentication should not be used unless there is a capability to revoke a compromised certificate, such as by replacing the certificate authority, or providing a certificate revocation list to the server. If certificate revocation is not supported, plain-text secrets (such as username/password) should be used instead, as they are typically easier to revoke. Distribution and consumption of secrets Prohibited practices Use of hard-coded secrets is an instance of CWE-798: Use of hard-coded credentials and is not allowed. A hard-coded secret is a secret that is the same across multiple EdgeX instances. Hard-coded secrets make devices susceptible to BORE (break-once-run-everywhere) attacks, where collections of machines can be compromised by a single replicated secret. Specific cases where this is likely to come up are: Secrets embedded in source control EdgeX is an open-source project. Any secret that is present in an EdgeX repository is public to the world, and therefore not a secret, by definition. Configuration files, such as .toml files, .json files, .yaml files (including docker-compose.yml ) are specific instances of this practice. Secrets embedded in binaries Binaries are usually not protected against confidentiality threats, and binaries can be easily reverse-engineered to find any secrets therein. Binaries include compiled executables as well as Docker images. Recommended practices Direct consumption from process-to-process interaction with secret store This approach is only possible for components that have native support for Hashicorp Vault . This includes any EdgeX service that links to go-mod-secrets. For example, if secretClient is an instance of the go-mod-secrets secret store client: secrets , err := secretClient . GetSecrets ( \"myservice\" , \"username\" , \"password\" ) The above code will retrieve the username and password properties of the myservice secret. 
Dynamic injection of secret into process environment space Environment variables are part of a process' environment block and are mapped into a process' memory. In this scenario, an intermediary makes a connection to the secret store to fetch a secret, stores it into an environment variable, and then launches a target executable, thereby passing the secret in-memory to the target process. Existing examples of this functionality include vaultenv , envconsul , or env-aws-params . These tools authenticate to a remote network service, inject secrets into the process environment, and then exec a replacement process that inherits the secret-enriched environment block. There are a few potential risks with this approach: Environment blocks are passed to child processes by default. Environment-variable-sniffing malware (introduced by compromised 3rd party libraries) is a proven attack method. Dynamic injection of secret into container-scoped tmpfs volume An example of this approach is consul-template . This approach is useful when a secret is required to be in a configuration file and cannot be passed via an environment variable or directly consumed from a secret store. Distribution via SECRETSLOC This option is the most widely supported secret distribution mechanism by container orchestrators. EdgeX supports runtime environments such as standard Docker and snaps that have no built-in secret management features. Generic Docker does not have a built-in secrets mechanism. Manual configuration of a SECRETSLOC should utilize either a host file system path or a Docker volume. Snaps also do not have a built-in secrets mechanism. The options for SECRETSLOC are limited to designated snap-writable directories. For comparison: Docker Swarm: Swarm mode is not officially supported by the EdgeX project. Docker Swarm secrets are shared via the /run/secrets volume, which is a Linux tmpfs volume created on the host and shared with the container. 
For an example of Docker Swarm secrets, see the docker-compose secrets stanza . Secrets distributed in this manner become part of the RaftDB, and thus it becomes necessary to enable swarm autolock mode, which prevents the Raft database encryption key from being stored plaintext on disk. Swarm secrets have an additional limitation in that they are not mutable at runtime. Kubernetes: Kubernetes is not officially supported by the EdgeX project. Kubernetes also supports the secrets volume approach, though the secrets volume can be mounted anywhere in the container namespace. For an example of Kubernetes secrets volumes, see the Kubernetes secrets documentation . Secrets distributed in this manner become part of the etcd database, and thus it becomes necessary to specify a KMS provider for data encryption to prevent etcd from storing plaintext versions of secrets. Consequences As the existing implementation is not fully-compliant with this ADR, significant scope will be added to current and future EdgeX releases in order to bring the project into compliance. List of needed improvements: PKI private keys All: Move to using Vault as system of origin for the PKI instead of the standalone security-secrets-setup utility. All: Cache the PKI for Consul and Vault on persistent disk; rotate occasionally. All: Investigate hardware protection of cached Consul and Vault PKI secret keys. (Vault cannot unseal its own TLS certificate.) Special case: Bring-your-own external Kong certificate and key The Kong external certificate and key is already stored in Vault, however, additional metadata is needed to signal whether these are auto-generated or manually-installed. A manually-installed certificate and key would not be overwritten by the framework bringup logic. Installing a custom certificate and key can then be implemented by overwriting the system-generated ones and setting a flag indicating that they were manually-installed. 
Secret store master password All: Enable hooks for hardware protection of secret store master password. Secret store per-service authentication tokens No changes required. Postgres superuser password Generate at install time or on cold start of the framework. Cache in Vault and inject into Kong using environment variable injection. MongoDB service account passwords No changes required. Redis(v5) authentication password All: Implement process-to-process injection: start Redis unauthenticated, with a post-start hook to read the secret out of Vault and set the Redis password. (Short race condition between Redis starting, password being set, and dependent services starting.) No changes on client side. Redis(v6) passwords (v6 adds multiple user support) Interim solution: handle like MongoDB service account passwords. Future ADR to propose use of a Vault database secrets engine. No changes on client side (each service accesses its own credential) Kong authentication tokens All: Implement in-transit authentication with TLS-protected Postgres interface. (Subject to change if it is decided not to enable a Postgres backend out of the box.) Additional research needed as PostgreSQL does not support transparent data encryption. References ADR for secret creation and distribution CWE-798: Use of hard-coded credentials Docker Swarm secrets EdgeX go-mod-secrets Hashicorp Vault","title":"Creation and Distribution of Secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#creation-and-distribution-of-secrets","text":"","title":"Creation and Distribution of Secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#context","text":"This ADR seeks to clarify and prioritize the secret handling approach taken by EdgeX. 
EdgeX microservices need a number of secrets to be created and distributed in order to create a functional, secure system. Among these secrets are: Privileged administrator passwords (such as a database superuser password) Service account passwords (e.g. non-privileged database accounts) PKI private keys There is a lack of consistency in how secrets are created and distributed to EdgeX microservices, and when developers need to add new components to the system, it is unclear what the preferred approach should be. This document assumes a threat model wherein the EdgeX services are sandboxed (such as in a snap or a container) and the host system is trusted, and all services running in a single snap share a trust boundary.","title":"Context"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#terms","text":"The following terms will be helpful for understanding the subsequent discussion: SECRETSLOC is a protected file system path where bootstrapping secrets are stored. While EdgeX implements a sophisticated secret handling mechanism, that mechanism itself requires secrets. For example, every microservice that talks to Vault must have its own unique secret to authenticate: Vault itself cannot be used to distribute these secrets. SECRETSLOC fulfills the role that the non-routable instance data IP address, 169.254.169.254, fulfills in the public cloud: delivery of bootstrapping secrets. As EdgeX does not have a hypervisor nor virtual machines for this purpose, a protected file system path is used instead. SECRETSLOC is implementation-dependent. A desirable feature of SECRETSLOC would be that data written here is kept in RAM and is not persisted to storage media. This property is not achievable in all circumstances. 
For Docker, a list of suggested paths--in preference order--is: /run/edgex/secrets (a tmpfs volume on a Linux host) /tmp/edgex/secrets (a temporary file area on Linux and MacOS hosts) A persistent docker volume (use when host bind mounts are not available) For snaps, a list of suggested paths--in preference order--is: * /run/snap. $SNAP_NAME / (a tmpfs volume on a Linux host) * $SNAP_DATA /secrets (a snap-specific persistent data area) * TBD (a content interface that allows for sharing of secrets from the core snap)","title":"Terms"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#current-practices-survey","text":"A survey of the existing EdgeX secrets reveals the following approaches. A designation of \"compliant\" means that the current implementation is aligned with the recommended practices documented in the next section. A designation of \"non-compliant\" means that the current implementation uses an implementation mechanism outside of the recommended practices documented in the next section. A \"non-compliant\" implementation is a candidate for refactoring to bring the implementation into conformance with the recommended practices.","title":"Current practices survey"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#system-managed-secrets","text":"PKI private keys Docker: PKI generated by standalone utility every cold start of the framework. Distribution via SECRETSLOC . (Compliant.) Snaps: PKI generated by standalone utility every cold start of the framework. Deployed to SECRETSLOC . (Compliant.) Secret store master password Docker: Distribution via persistent docker volume. (Non-compliant.) Snaps: Stored in $SNAP_DATA/config/security-secrets-setup/res . (Non-compliant.) Secret store per-service authentication tokens Docker: Distribution via SECRETSLOC generated every cold start of the framework. (Compliant.) Snaps: Distribution via SECRETSLOC , generated every cold start of the framework. (Compliant.) 
Postgres superuser password Docker: Hard-coded into docker-compose file, checked in to source control. (Non-compliant.) Snaps: Generated at snap install time via \"apg\" (\"automatic password generator\") tool, installed into Postgres, cached to $SNAP_DATA/config/postgres/kongpw (non-compliant), and passed to Kong via $KONG_PG_PASSWORD . MongoDB service account passwords Docker: Direct consumption from secret store. (Compliant.) Snaps: Direct consumption from secret store. (Compliant.) Redis authentication password Docker: Server--staged to secrets volume and injected via command line. (Non-compliant.) Clients--direct consumption from secret store. (Compliant.) Snaps: Server--staged to $SNAP_DATA/secrets/edgex-redis/redis5-password and injected via command line. (Non-compliant.) Clients--direct consumption from secret store. (Compliant.) Kong client authentication tokens Docker: System of reference is unencrypted Postgres database. (Non-compliant.) Snaps: System of reference is unencrypted Postgres database. (Non-compliant.) Note: in the current implementation, Consul is being operated as a public service. Consul will be a subject of a future \"bootstrapping ADR\" due to its role in service location.","title":"System-managed secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#user-managed-secrets","text":"User-managed secrets functionality is provided by app-functions-sdk-go . If security is enabled, secrets are retrieved from Vault. If security is disabled, secrets are retrieved from the configuration provider. If the configuration provider is not available, secrets are read from the underlying .toml . It is taken for granted in this ADR that secrets originating in the configuration provider or from .toml configuration files are not secret. 
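The security-enabled/security-disabled fallback just described (secret store first, then the configuration provider, then the local .toml file) can be sketched as an ordered walk over secret sources. Everything below is illustrative only; the type names, fields, and helper function are assumptions for the sketch, not the actual app-functions-sdk-go implementation.

```go
package main

import "fmt"

// secretSource is a stand-in for one tier of the lookup chain described
// above: the secret store (Vault), the configuration provider, or .toml.
type secretSource struct {
	name      string
	available bool
	secrets   map[string]string
}

// getSecret walks the sources in order and returns the first match,
// mirroring the Vault -> configuration provider -> .toml fallback.
func getSecret(sources []secretSource, key string) (string, error) {
	for _, src := range sources {
		if !src.available {
			continue
		}
		if v, ok := src.secrets[key]; ok {
			return v, nil
		}
	}
	return "", fmt.Errorf("secret %q not found in any source", key)
}

func main() {
	sources := []secretSource{
		// Security disabled in this example, so Vault is not consulted.
		{name: "vault", available: false},
		{name: "config-provider", available: true, secrets: map[string]string{
			"password": "dev-only-password",
		}},
		{name: "toml", available: true, secrets: map[string]string{}},
	}
	v, err := getSecret(sources, "password")
	if err != nil {
		panic(err)
	}
	fmt.Println(v)
}
```

The chain keeps the "if (isSecurityEnabled())" branching in one place, which is exactly the inconsistency the SDK's fallback is meant to avoid in application code.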
The fallback mechanism is provided as a convenience to the developer, who would otherwise have to litter their code with \"if (isSecurityEnabled())\" logic leading to implementation inconsistencies. The central database credential is supplied by GetDatabaseCredentials() and returns the database credential assigned to app-service-configurable . If security is enabled, database credentials are retrieved using the standard flow. If security is disabled, secrets are retrieved from the configuration provider from a special section called [Writable.InsecureSecrets] . If not found there, the configuration provider is searched for credentials stored in the legacy [Databases.Primary] section using the Username and Password keys. Each user application has its own exclusive-use area of the secret store that is accessed via GetSecrets() . If security is enabled, secret requests are passed along to go-mod-secrets using an application-specific access token. If security is disabled, secret requests are made to the configuration provider from the [Writable.InsecureSecrets] section. There is no fallback configuration location. As user-managed secrets have no framework support for initialization, a special StoreSecrets() method is made available to the application for the application to initialize its own secrets. This method is only available in security-enabled mode. No changes to user-managed secrets are being proposed in this ADR.","title":"User-managed secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#decision","text":"","title":"Decision"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#creation-of-secrets","text":"Management of hardware-bound secrets is platform-specific and out-of-scope for the EdgeX framework. EdgeX open source will contain only the necessary hooks to integrate platform-specific functionality. For software-managed secrets, the system of reference of secrets in EdgeX is the EdgeX secret store. 
This term means that if a secret is replicated, the EdgeX secret store is the authoritative source of truth of the secret. The EdgeX secret store also provides for encryption of secrets at rest. Whenever possible, the EdgeX secret store should also be the record of origin of a secret. This means creating secrets inside of the EdgeX secret store is preferable to importing an externally-created secret into the secret store. This can often be done for framework-managed secrets, but is not possible for user-managed secrets.","title":"Creation of secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#choosing-between-alternative-forms-of-secrets","text":"When given a choice between plain-text secrets and cryptographic keys, cryptographic keys should be preferred. An example situation would be the introduction of an MQTT message broker. A broker may support both TLS client authentication as well as username/password authentication. In such a situation, TLS client authentication would be preferred: The cryptographic key is typically longer in bits than a plain-text secret. A plain-text secret will require transport encryption in order to protect confidentiality of the secret, such as server-side TLS. Use of TLS client authentication typically eliminates the need for additional assets on the server side (such as a password database) to authenticate the client, by relying on digital signature instead. TLS client authentication should not be used unless there is a capability to revoke a compromised certificate, such as by replacing the certificate authority, or providing a certificate revocation list to the server. 
If certificate revocation is not supported, plain-text secrets (such as username/password) should be used instead, as they are typically easier to revoke.","title":"Choosing between alternative forms of secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#distribution-and-consumption-of-secrets","text":"","title":"Distribution and consumption of secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#prohibited-practices","text":"Use of hard-coded secrets is an instance of CWE-798: Use of hard-coded credentials and is not allowed. A hard-coded secret is a secret that is the same across multiple EdgeX instances. Hard-coded secrets make devices susceptible to BORE (break-once-run-everywhere) attacks, where collections of machines can be compromised by a single replicated secret. Specific cases where this is likely to come up are: Secrets embedded in source control EdgeX is an open-source project. Any secret that is present in an EdgeX repository is public to the world, and therefore not a secret, by definition. Configuration files, such as .toml files, .json files, .yaml files (including docker-compose.yml ) are specific instances of this practice. Secrets embedded in binaries Binaries are usually not protected against confidentiality threats, and binaries can be easily reverse-engineered to find any secrets therein. Binaries include compiled executables as well as Docker images.","title":"Prohibited practices"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#recommended-practices","text":"Direct consumption from process-to-process interaction with secret store This approach is only possible for components that have native support for Hashicorp Vault . This includes any EdgeX service that links to go-mod-secrets. For example, if secretClient is an instance of the go-mod-secrets secret store client: secrets, err := secretClient. 
GetSecrets ( \"myservice\" , \"username\" , \"password\" ) The above code will retrieve the username and password properties of the myservice secret. Dynamic injection of secret into process environment space Environment variables are part of a process' environment block and are mapped into a process' memory. In this scenario, an intermediary makes a connection to the secret store to fetch a secret, store it into an environment variable, and then launches a target executable, thereby passing the secret in-memory to the target process. Existing examples of this functionality include vaultenv , envconsul , or env-aws-params . These tools authenticate to a remote network service, inject secrets into the process environment, and then exec's a replacment process that inherits the secret-enriched enviornment block. There are a few potential risks with this approach: Environment blocks are passed to child processes by default. Environment-variable-sniffing malware (introduced by compromised 3rd party libaries) is a proven attack method. Dynamic injection of secret into container-scoped tmpfs volume An example of this approach is consul-template . This approach is useful when a secret is required to be in a configuration file and cannot be passed via an environment variable or directly consumed from a secret store. Distribution via SECRETSLOC This option is the most widely supported secret distribution mechanism by container orchestrators. EdgeX supports runtime environments such as standard Docker and snaps that have no built-in secret management features. Generic Docker does not have a built-in secrets mechanism. Manual configuration of a SECRETSLOC should utilize either a host file file system path or a Docker volume. Snaps also do not have a built-in secrets mechanism. The options for SECRETSLOC are limited to designated snap-writable directories. For comparison: Docker Swarm: Swarm swarm mode is not officially supported by the EdgeX project. 
Docker Swarm secrets are shared via the /run/secrets volume, which is a Linux tmpfs volume created on the host and shared with the container. For an example of Docker Swarm secrets, see the docker-compose secrets stanza . Secrets distributed in this manner become part of the RaftDB, and thus it becomes necessary to enable swarm autolock mode, which prevents the Raft database encryption key from being stored plaintext on disk. Swarm secrets have an additional limitation in that they are not mutable at runtime. Kubernetes: Kubernetes is not officially supported by the EdgeX project. Kubernetes also supports the secrets volume approach, though the secrets volume can be mounted anywhere in the container namespace. For an example of Kubernetes secrets volumes, see the Kubernetes secrets documentation . Secrets distributed in this manner become part of the etcd database, and thus it becomes necessary to specify a KMS provider for data encryption to prevent etcd from storing plaintext versions of secrets.","title":"Recommended practices"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#consequences","text":"As the existing implementation is not fully-compliant with this ADR, significant scope will be added to current and future EdgeX releases in order to bring the project into compliance. List of needed improvements: PKI private keys All: Move to using Vault as system of origin for the PKI instead of the standalone security-secrets-setup utility. All: Cache the PKI for Consul and Vault on persistent disk; rotate occasionally. All: Investigate hardware protection of cached Consul and Vault PKI secret keys. (Vault cannot unseal its own TLS certificate.) Special case: Bring-your-own external Kong certificate and key The Kong external certificate and key is already stored in Vault, however, additional metadata is needed to signal whether these are auto-generated or manually-installed. 
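The direct-consumption call shown earlier, secretClient.GetSecrets("myservice", "username", "password"), has roughly the shape below. The interface and the in-memory client are a hedged sketch standing in for a real Vault-backed go-mod-secrets client; they are assumptions for illustration, not the library's actual API surface.

```go
package main

import "fmt"

// SecretClient mirrors the call shape used in the text: fetch named
// properties of a secret stored at a service-specific path.
type SecretClient interface {
	GetSecrets(path string, keys ...string) (map[string]string, error)
}

// mockClient is an in-memory stand-in for a Vault-backed client.
type mockClient struct {
	store map[string]map[string]string
}

func (m mockClient) GetSecrets(path string, keys ...string) (map[string]string, error) {
	entry, ok := m.store[path]
	if !ok {
		return nil, fmt.Errorf("no secrets at path %q", path)
	}
	out := make(map[string]string, len(keys))
	for _, k := range keys {
		v, ok := entry[k]
		if !ok {
			return nil, fmt.Errorf("missing key %q at path %q", k, path)
		}
		out[k] = v
	}
	return out, nil
}

func main() {
	var secretClient SecretClient = mockClient{store: map[string]map[string]string{
		"myservice": {"username": "appuser", "password": "s3cret"},
	}}
	// Same call shape as the go-mod-secrets example in the text.
	secrets, err := secretClient.GetSecrets("myservice", "username", "password")
	if err != nil {
		panic(err)
	}
	fmt.Println(secrets["username"]) // appuser
}
```

Because each service holds its own access token, a real client would fail this call for any path outside the service's exclusive-use area of the secret store.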
A manually-installed certificate and key would not be overwritten by the framework bringup logic. Installing a custom certificate and key can then be implemented by overwriting the system-generated ones and setting a flag indicating that they were manually-installed. Secret store master password All: Enable hooks for hardware protection of secret store master password. Secret store per-service authentication tokens No changes required. Postgres superuser password Generate at install time or on cold start of the framework. Cache in Vault and inject into Kong using environment variable injection. MongoDB service account passwords No changes required. Redis(v5) authentication password All: Implement process-to-process injection: start Redis unauthenticated, with a post-start hook to read the secret out of Vault and set the Redis password. (Short race condition between Redis starting, password being set, and dependent services starting.) No changes on client side. Redis(v6) passwords (v6 adds multiple user support) Interim solution: handle like MongoDB service account passwords. Future ADR to propose use of a Vault database secrets engine. No changes on client side (each service accesses its own credential) Kong authentication tokens All: Implement in-transit authentication with TLS-protected Postgres interface. (Subject to change if it is decided not to enable a Postgres backend out of the box.) 
Additional research needed as PostgreSQL does not support transparent data encryption.","title":"Consequences"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#references","text":"ADR for secret creation and distribution CWE-798: Use of hard-coded credentials Docker Swarm secrets EdgeX go-mod-secrets Hashicorp Vault","title":"References"},{"location":"design/adr/security/0009-Secure-Bootstrapping/","text":"Secure Bootstrapping of EdgeX Secure Bootstrapping of EdgeX Status Context History Decision Stage-gate mechanism Docker-specific service changes \"As-is\" startup flow \"To-be\" startup flow New Bootstrap/RTR container Consequences Benefits Drawbacks Alternatives Event-driven vs commanded staging System management agent (SMA) as the coordinator Create a mega-install container Manual secret provisioning References Status Approved Context Docker-compose, the tool used by EdgeX to manage its Docker-based stack, lags in its support for initialization logic. Docker-compose v2.x used to have a depends_on / condition directive that would test a service's HEALTHCHECK and block startup until the service was \"healthy\". Unfortunately, this feature was removed in 3.x docker-compose. (This feature is also unsupported in swarm mode.) Snaps have an explicit install phase and Kubernetes PODs have optional init containers. In other frameworks, initialization is allowed to run to completion prior to application components being started in production mode. This functionality does not exist in Docker nor docker-compose. The current lack of an initialization phase is a blocking issue for implementing microservice communication security, as critical EdgeX core components that are involved with microservice communication (specifically Consul) are being brought up in an insecure configuration. (Consul's insecure configuration will be addressed in a separate ADR .) 
Activities that are best done in the initialization phase include the following: Bootstrapping of cryptographic secrets needed by the application. Bootstrapping of database users and passwords. Installation of database schema needed for application logic to function. Initialization of authorization frameworks such as configuring RBAC or ACLs. Other one-time initialization activities. Workarounds when an installation phase is not present include: Perform initialization tasks manually, and manually seed secrets into static configuration files. Ship with known hard-coded secrets in static configuration files. Start in an insecure configuration and remain that way. Provision some secrets at runtime. EdgeX does not have a manual installation flow, and uses a combination of the last three approaches. The objective of this ADR is to define a framework for Docker-based initialization logic in EdgeX. This will enable the removal of certain hard-coded secrets in EdgeX and enable certain components (such as Consul) to be started in a secure configuration. These improvements are necessary prerequisites to implementing microservice communication security. History In previous releases, container startup sequencing has primarily been driven by Consul service health checks backed by healthcheck endpoints of particular services or by sentinel files placed in the file system when certain initialization milestones are reached. The implementation has been plagued by several issues: Sentinel files are not cleaned up if the framework fails or is shut down. Invalid state left over from previous instantiations of the framework causes difficult-to-resolve race conditions. (Implementation of this ADR will try to remove as many as possible, focusing on those that are used to gate startup. 
Some use of sentinel files may still be required to indicate completion of initialization steps so that they are not re-done if there is no API-based mechanism to determine if such initialization has been completed.) Consul health checks are reported in a difficult-to-parse JSON structure, which has led to the creation of specialized tools that are insensitive to libc implementations used by different container images. Consul is being used not only for service health, but for service location and configuration as well . The requirement to synchronize framework startup for the purpose of securely initializing Consul means that a non-Consul mechanism must be used to stage-gate EdgeX initialization. This last point is the primary motivator of this ADR. Decision Stage-gate mechanism The stage-gate mechanism must work in the following environments: docker-compose in Linux on a single node/system docker-compose in Microsoft Windows on a single node/system docker-compose in Apple MacOS on a single node/system Startup sequencing will be driven by two primary mechanisms: Use of entrypoint scripts to: Block on stage-gate and service dependencies Perform first-boot initialization phase activities as noted in Context The bootstrap container will inject entrypoint scripts into the other containers in the case where EdgeX is directly consuming an upstream container. Docker will automatically retry restarting containers if its entrypoint script is missing. Use of open TCP sockets as semaphores to gate startup sequencing Use of TCP sockets for startup sequencing is commonly used in Docker environments. Due to its popularity, there are several existing tools for this, including wait-for-it , dockerize , and wait-for . The TCP mechanism is portable across platforms and will work in distributed multi-node scenarios. At least three new ports will be added to EdgeX for sequencing purposes: bootstrap port. This port will be opened once first-time initialization has been completed. 
tokens_ready port. This port signals that secret-store tokens have been provisioned and are valid. ready_to_run port. This port will be opened once stateful services have completed initialization and it is safe for the majority of EdgeX core services to start. The stateless EdgeX services should block on ready_to_run port. Docker-specific service changes \"As-is\" startup flow The following diagram shows the \"as-is\" startup flow. There are several components being removed via activity unrelated to this ADR. These proposed edits are shown to reduce clutter in the TO-BE diagram. * secrets-setup is being eliminated through a separate ADR to eliminate TLS for single-node usage. * kong-migrations is being combined with the kong service via an entrypoint script. * bootstrap-redis will be incorporated into the Redis entrypoint script to set the Redis password before Redis starts to fix the time delay before a Redis password is set. \"To-be\" startup flow The following diagram shows the \"to-be\" startup flow. Note that the bootstrap flows are always processed, but can be short-circuited. Another difference to note in the \"to-be\" diagram is that the Vault dependency on Consul is reversed in order to provide better security . New Bootstrap/RTR container The purpose of this new container is to: Inject entrypoint scripts into third-party containers (such as Vault, Redis, Consul, PostgreSQL, Kong) in order to perform first-time initialization and wait on service dependencies Raise the bootstrap semaphore Wait on dependent semaphores required to raise the ready_to_run semaphore (these are the stateful components such as databases, and blocking waiting for secret store tokens to be provisioned) Raise the ready_to_run semaphore Wait forever (in order to leave TCP sockets open) Consequences Benefits This ADR is expected to yield the following benefits after completion of the related engineering tasks: Standardization of the stage-gate mechanism. 
Standardized approach to component initialization in Docker. Reduced fragility in the framework startup flow. Vault no longer uses Consul as its data store (uses file system instead). Ability to use a stock Consul container instead of creating a custom one for EdgeX Elimination of several sentinel files used for Consul health checks /tmp/edgex/secrets/ca/.security-secrets-setup.complete /tmp/edgex/secrets/edgex-consul/.secretstore-setup-done Drawbacks Introduction of a new container into the startup flow (but other containers are eliminated or combined). Expanded scope and responsibility of entrypoint scripts, which must not only block component startup, but now must also configure a component for secure operation. Alternatives Event-driven vs commanded staging In this scenario, instead of a service waiting on a TCP-socket semaphore created by another service, services would open a socket and wait for a coordinator/controller to issue a \"go\" command. This solution was not chosen for several reasons: The code required to open a socket and wait for a command is much more complicated than the code required to check for an open socket. Many open source utilities exist to block on a socket opening; there are no such examples for the reverse. This solution would duplicate the information regarding which services need to run: once in the docker-compose file, and once as a configuration file to the coordinator/controller. System management agent (SMA) as the coordinator In this scenario, the system management agent is responsible for bringing up the EdgeX framework. Since the system management agent has access to the Docker socket, it has the ability to start services in a prescribed order, and as a management agent, has knowledge about the desired state of the framework. This solution was not chosen for several reasons: SMA is an optional EdgeX component--use in this way would make SMA a required core component. 
SMA, in order to authenticate and authorize remote management requests, requires access to persistent state and secrets. To make the same component responsible for initializing that state and secrets upon which it depends would make the design convoluted. Create a mega-install container This alternative would create a mega-install container that has locally installed versions of critical components needed for bootstrapping such as Vault, Consul, PostgreSQL, and others. A sequential script would start each component in turn, initializing each to run in a secure configuration, and then shut them all down again. The same stage-gate mechanism would be used to block startup of these same components, but Docker would start them in production configuration. Manual secret provisioning A typical cloud-based microservice architecture has a manual provisioning step. This step would include activities such as configuring Vault, installing a database schema, setting up database service account passwords, and seeding initial secrets such as PKI private keys that have been generated offline (possibly requiring several days of lead time). A cloud team may have weeks or months to prepare for this event, and it might take the greater part of a day. In contrast, EdgeX up to this point has been a \"turnkey\" middleware framework: it can be deployed with the same ease as an application, such as via a docker-compose file, or via a snap install. This means that most of the secret provisioning must be automated and the provisioning logic must be built into the framework in some way. The proposals presented in this ADR are compatible with continuance of this functionality. 
References ADR 0008 - Creation and Distribution of Secrets ADR 0015 - Encryption between microservices , Hashicorp Consul Hashicorp Vault Issue: ADR for securing access to Consul Issue: Service registry ADR","title":"Secure Bootstrapping of EdgeX"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#secure-bootstrapping-of-edgex","text":"Secure Bootstrapping of EdgeX Status Context History Decision Stage-gate mechanism Docker-specific service changes \"As-is\" startup flow \"To-be\" startup flow New Bootstrap/RTR container Consequences Benefits Drawbacks Alternatives Event-driven vs commanded staging System management agent (SMA) as the coordinator Create a mega-install container Manual secret provisioning References","title":"Secure Bootstrapping of EdgeX"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#context","text":"Docker-compose, the tool used by EdgeX to manage its Docker-based stack, lags in its support for initialization logic. Docker-compose v2.x used to have a depends_on / condition directive that would test a service's HEALTHCHECK and block startup until the service was \"healthy\". Unfortunately, this feature was removed in 3.x docker-compose. (This feature is also unsupported in swarm mode.) Snaps have an explicit install phase and Kubernetes PODs have optional init containers. In other frameworks, initialization is allowed to run to completion prior to application components being started in production mode. This functionality does not exist in Docker nor docker-compose. The current lack of an initialization phase is a blocking issue for implementing microservice communication security, as critical EdgeX core components that are involved with microservice communication (specifically Consul) are being brought up in an insecure configuration. (Consul's insecure configuration will be addressed in a separate ADR .) 
Activities that are best done in the initialization phase include the following: Bootstrapping of cryptographic secrets needed by the application. Bootstrapping of database users and passwords. Installation of database schema needed for application logic to function. Initialization of authorization frameworks such as configuring RBAC or ACLs. Other one-time initialization activities. Workarounds when an installation phase is not present include: Perform initialization tasks manually, and manually seed secrets into static configuration files. Ship with known hard-coded secrets in static configuration files. Start in an insecure configuration and remain that way. Provision some secrets at runtime. EdgeX does not have a manual installation flow, and uses a combination of the last three approaches. The objective of this ADR is to define a framework for Docker-based initialization logic in EdgeX. This will enable the removal of certain hard-coded secrets in EdgeX and enable certain components (such as Consul) to be started in a secure configuration. These improvements are necessary prerequisites to implementing microservice communication security.","title":"Context"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#history","text":"In previous releases, container startup sequencing has primarily been driven by Consul service health checks backed by healthcheck endpoints of particular services or by sentinel files placed in the file system when certain initialization milestones are reached. The implementation has been plagued by several issues: Sentinel files are not cleaned up if the framework fails or is shut down. Invalid state left over from previous instantiations of the framework causes difficult-to-resolve race conditions. (Implementation of this ADR will try to remove as many as possible, focusing on those that are used to gate startup. 
Some use of sentinel files may still be required to indicate completion of initialization steps so that they are not re-done if there is no API-based mechanism to determine if such initialization has been completed.) Consul health checks are reported in a difficult-to-parse JSON structure, which has led to the creation of specialized tools that are insensitive to libc implementations used by different container images. Consul is being used not only for service health, but for service location and configuration as well . The requirement to synchronize framework startup for the purpose of securely initializing Consul means that a non-Consul mechanism must be used to stage-gate EdgeX initialization. This last point is the primary motivator of this ADR.","title":"History"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#decision","text":"","title":"Decision"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#stage-gate-mechanism","text":"The stage-gate mechanism must work in the following environments: docker-compose in Linux on a single node/system docker-compose in Microsoft Windows on a single node/system docker-compose in Apple MacOS on a single node/system Startup sequencing will be driven by two primary mechanisms: Use of entrypoint scripts to: Block on stage-gate and service dependencies Perform first-boot initialization phase activities as noted in Context The bootstrap container will inject entrypoint scripts into the other containers in the case where EdgeX is directly consuming an upstream container. Docker will automatically retry restarting containers if its entrypoint script is missing. Use of open TCP sockets as semaphores to gate startup sequencing Use of TCP sockets for startup sequencing is commonly used in Docker environments. Due to its popularity, there are several existing tools for this, including wait-for-it , dockerize , and wait-for . 
The TCP mechanism is portable across platforms and will work in distributed multi-node scenarios. At least three new ports will be added to EdgeX for sequencing purposes: bootstrap port. This port will be opened once first-time initialization has been completed. tokens_ready port. This port signals that secret-store tokens have been provisioned and are valid. ready_to_run port. This port will be opened once stateful services have completed initialization and it is safe for the majority of EdgeX core services to start. The stateless EdgeX services should block on ready_to_run port.","title":"Stage-gate mechanism"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#docker-specific-service-changes","text":"","title":"Docker-specific service changes"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#as-is-startup-flow","text":"The following diagram shows the \"as-is\" startup flow. There are several components being removed via activity unrelated to this ADR. These proposed edits are shown to reduce clutter in the TO-BE diagram. * secrets-setup is being eliminated through a separate ADR to eliminate TLS for single-node usage. * kong-migrations is being combined with the kong service via an entrypoint script. * bootstrap-redis will be incorporated into the Redis entrypoint script to set the Redis password before Redis starts to fix the time delay before a Redis password is set.","title":"\"As-is\" startup flow"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#to-be-startup-flow","text":"The following diagram shows the \"to-be\" startup flow. Note that the bootstrap flows are always processed, but can be short-circuited. 
Another difference to note in the \"to-be\" diagram is that the Vault dependency on Consul is reversed in order to provide better security .","title":"\"To-be\" startup flow"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#new-bootstraprtr-container","text":"The purpose of this new container is to: Inject entrypoint scripts into third-party containers (such as Vault, Redis, Consul, PostgreSQL, Kong) in order to perform first-time initialization and wait on service dependencies Raise the bootstrap semaphore Wait on dependent semaphores required to raise the ready_to_run semaphore (these are the stateful components such as databases, and blocking waiting for secret store tokens to be provisioned) Raise the ready_to_run semaphore Wait forever (in order to leave TCP sockets open)","title":"New Bootstrap/RTR container"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#consequences","text":"","title":"Consequences"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#benefits","text":"This ADR is expected to yield the following benefits after completion of the related engineering tasks: Standardization of the stage-gate mechanism. Standardized approach to component initialization in Docker. Reduced fragility in the framework startup flow. Vault no longer uses Consul as its data store (uses file system instead). Ability to use a stock Consul container instead of creating a custom one for EdgeX Elimination of several sentinel files used for Consul health checks /tmp/edgex/secrets/ca/.security-secrets-setup.complete /tmp/edgex/secrets/edgex-consul/.secretstore-setup-done","title":"Benefits"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#drawbacks","text":"Introduction of a new container into the startup flow (but other containers are eliminated or combined). 
Expanded scope and responsibility of entrypoint scripts, which must not only block component startup, but now must also configure a component for secure operation.","title":"Drawbacks"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#alternatives","text":"","title":"Alternatives"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#event-driven-vs-commanded-staging","text":"In this scenario, instead of a service waiting on a TCP-socket semaphore created by another service, services would open a socket and wait for a coordinator/controller to issue a \"go\" command. This solution was not chosen for several reasons: The code required to open a socket and wait for a command is much more complicated than the code required to check for an open socket. Many open source utilities exist to block on a socket opening; there are no such examples for the reverse. This solution would duplicate the information regarding which services need to run: once in the docker-compose file, and once as a configuration file to the coordinator/controller.","title":"Event-driven vs commanded staging"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#system-management-agent-sma-as-the-coordinator","text":"In this scenario, the system management agent is responsible for bringing up the EdgeX framework. Since the system management agent has access to the Docker socket, it has the ability to start services in a prescribed order, and as a management agent, has knowledge about the desired state of the framework. This solution was not chosen for several reasons: SMA is an optional EdgeX component--use in this way would make SMA a required core component. SMA, in order to authenticate and authorize remote management requests, requires access to persistent state and secrets. 
To make the same component responsible for initializing that state and secrets upon which it depends would make the design convoluted.","title":"System management agent (SMA) as the coordinator"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#create-a-mega-install-container","text":"This alternative would create a mega-install container that has locally installed versions of critical components needed for bootstrapping such as Vault, Consul, PostgreSQL, and others. A sequential script would start each component in turn, initializing each to run in a secure configuration, and then shut them all down again. The same stage-gate mechanism would be used to block startup of these same components, but Docker would start them in production configuration.","title":"Create a mega-install container"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#manual-secret-provisioning","text":"A typical cloud-based microservice architecture has a manual provisioning step. This step would include activities such as configuring Vault, installing a database schema, setting up database service account passwords, and seeding initial secrets such as PKI private keys that have been generated offline (possibly requiring several days of lead time). A cloud team may have weeks or months to prepare for this event, and it might take the greater part of a day. In contrast, EdgeX up to this point has been a \"turnkey\" middleware framework: it can be deployed with the same ease as an application, such as via a docker-compose file, or via a snap install. This means that most of the secret provisioning must be automated and the provisioning logic must be built into the framework in some way. 
The proposals presented in this ADR are compatible with continuance of this functionality.","title":"Manual secret provisioning"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#references","text":"ADR 0008 - Creation and Distribution of Secrets ADR 0015 - Encryption between microservices , Hashicorp Consul Hashicorp Vault Issue: ADR for securing access to Consul Issue: Service registry ADR","title":"References"},{"location":"design/adr/security/0015-in-cluster-tls/","text":"Use of encryption to secure in-cluster EdgeX communications Status Approved Context This ADR seeks to define the EdgeX direction on using encryption to secure \"in-cluster\" EdgeX communications, that is, internal microservice-to-microservice communication. This ADR will seek to clarify the EdgeX direction in several aspects with regard to: EdgeX services communicating within a single host EdgeX services communicating across multiple hosts Using encryption for confidentiality or integrity in communication Using encryption for authentication between microservices This ADR will be used to triage EdgeX feature requests in this space. Background Why encrypt? Why consider encryption in the first place? Simple. Encryption helps with the following problems: Client authentication of servers. The client knows that it is talking to the correct server. This is typically achieved using TLS server certificates that the client checks against a trusted root certificate authority. Since the client is not in charge of network routing, TLS server authentication provides a good assurance that the requests are being routed to the correct server. Server authentication of clients. The server knows the identity of the client that has connected to it. There are a variety of mechanisms to achieve this, such as usernames and passwords, tokens, claims, et cetera, but the mechanism under consideration by this ADR is TLS client authentication using TLS client certificates. 
Confidentiality of messages exchanged between services. Confidentiality is needed to protect authentication data flowing between communicating microservices as well as to protect the message payloads if they contain nonpublic data. TLS provides communication channel confidentiality. Integrity of messages exchanged between services. Integrity is needed to ensure that messages between communicating microservices are not maliciously altered, such as inserting or deleting data in the middle of the exchange. TLS provides communication channel integrity. A microservice architecture normally strives for all of the above protections. Besides TLS, there are other mechanisms that can be used to provide some of the above properties. For example, IPSec tunnels provide confidentiality, integrity, and authentication of the hosts (network-level protection). SSH tunnels provide confidentiality, integrity, and authentication of the tunnel endpoints (also network-level protection). TLS, however, is preferred, because it operates in-process at the application level and provides better point-to-point security. Why to not encrypt? In the case of TLS communications, microservices depend on an asymmetric private key to prove their identity. To be of value, this private key must be kept secret. Applications typically depend on process-level isolation and/or file system protections for the private key. Moreover, interprocess communication using sockets is mediated by the operating system kernel. An attacker running at the privilege of the operating system has the ability to compromise TLS protections, such as by substituting a private key or certificate authority of their choice, accessing the unencrypted data in process memory, or intercepting the network communications that flow through the kernel. Therefore, within a single host, TLS protections may slow down an attacker, but are not likely to stop them. 
Additionally, use of TLS requires management of additional security assets in the form of TLS private keys. Microservice communication across hosts, however, is vulnerable to interception, and must be protected via some mechanism such as, but not limited to: IPSec or SSH tunnels, encrypted overlay networks, service mesh middlewares, or application-level TLS. Another reason to not encrypt is that TLS adds overhead to microservice communication in the form of additional network round-trips when opening connections and performing cryptographic public key and symmetric key operations. Decision At this time, EdgeX is primarily a single-node IoT application framework. Should this position change, this ADR should be revisited. Based on the single-node assumption: TLS will not be used for confidentiality and integrity of internal on-host microservice communication. TLS will be avoided as an authentication mechanism of peer microservices. Integrity and confidentiality of microservice communications crossing host boundaries are required to secure EdgeX, but are an EdgeX customer responsibility. EdgeX customers are welcome to add extra security to their own EdgeX deployments. Consequences This ADR, if approved, would close the following issues as will-not-fix. https://github.com/edgexfoundry/edgex-go/issues/1942 https://github.com/edgexfoundry/edgex-go/issues/1941 https://github.com/edgexfoundry/edgex-go/issues/2454 https://github.com/edgexfoundry/developer-scripts/issues/240 https://github.com/edgexfoundry/edgex-go/issues/2495 It would also close https://github.com/edgexfoundry/edgex-go/issues/1925 as there is no current need for TLS as a mutual authentication strategy. Alternatives Encrypted overlay networks Encrypted overlay networks provide varying protection based on the product used. Some can only encrypt data, such as an IPsec tunnel. Some can encrypt and provide for network microsegmentation, such as Docker Swarm networks with encryption enabled. 
Some can encrypt and enforce network policy such as restrictions on ingress traffic or restrictions on egress traffic. Service mesh middleware Service mesh middleware is an alternative that should be investigated if EdgeX decides to fully support a Kubernetes-based deployment using distributed Kubernetes pods. A service mesh typically achieves most of the security objectives of securing microservice communication by intercepting microservice communications and imposing a configuration-driven policy that typically includes confidentiality and integrity protection. These middlewares typically rely on the Kubernetes pod construct and are difficult to support for non-Kubernetes deployments. EdgeX public key infrastructure An EdgeX public key infrastructure that is natively supported by the architecture should be considered if EdgeX decides to support an out-of-box distributed deployment on non-Kubernetes platforms. Native support of TLS requires a significant amount of glue logic, and exceeds the available resources in the security working group to implement this strategy. The following text outlines a proposed strategy for supporting native TLS in the EdgeX framework: EdgeX will use Hashicorp Vault to secure the EdgeX PKI, through the use of the Vault PKI secrets engine. Vault will be configured with a root CA at initialization time, and a Vault-based sub-CA for dynamic generation of TLS leaf certificates. The root CA will be restricted to be used only by the Vault root token. EdgeX microservices that are based on third-party containers require special support unless they can talk natively to Vault for their secrets. Certain tools, such as those mentioned in the \"Creation and Distribution of Secrets\" ADR ( envconsul , consul-template , and others) can be used to facilitate third-party container integration. These services are: Consul : Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container. 
Vault : As Vault's database is encrypted, Vault cannot natively bootstrap its own TLS certificate. Requires TLS certificate to be injected into container and its location set in a configuration file. PostgreSQL : Requires TLS certificate to be injected into '$PGDATA' (default: /var/lib/postgresql/data ) which is where the writable database files are kept. Kong (admin) : Requires environment variable to be set to secure admin port with TLS, with a TLS certificate injected into the container. Kong (external) : Requires a bring-your-own (BYO) external certificate, or as a fallback, a default one should be generated using a configurable external hostname. (The Kong ACME plugin could possibly be used to automate this process.) Redis (v6) : Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container. Mosquitto : Requires TLS certificate set by configuration file, with a TLS certificate injected into the container. Additionally, every EdgeX microservice consumer will require access to the root CA for certificate verification purposes, and every EdgeX microservice server will need a TLS leaf certificate and private key. Note that Vault bootstrapping its own PKI is tricky and not natively supported by Vault. Expect that a non-trivial amount of effort will need to be put into starting Vault in non-secure mode to create the CA hierarchy and a TLS certificate for Vault itself, and then restarting Vault in a TLS-enabled configuration. Periodic certificate rotation is a non-trivial challenge as well. 
The Vault bootstrapping flow would look something like this: Bring up Vault on localhost with TLS disabled (bootstrapping configuration) Initialize a blank Vault and immediately unseal it Encrypt the Vault keyshares and revoke the root token Generate a new root from the keyshares Generate an on-device root CA (see https://learn.hashicorp.com/vault/secrets-management/sm-pki-engine) Create an intermediate CA for TLS server authentication Sign the intermediate CA using the root CA Configure policy for intermediate CA Generate and store leaf certificates for Consul, Vault, PostgreSQL, Kong (admin), Kong (external), Redis (v6), Mosquitto Deploy the PKI to the respective services' secrets area Write the production Vault configuration (TLS-enabled) to a Docker volume There are no current plans for mutual auth TLS. Supporting mutual auth TLS would require creation of a separate PKI hierarchy for generation of TLS client certificates and glue logic to persist the certificates in the service's key-value secret store and provide them when connecting to other EdgeX services.","title":"Use of encryption to secure in-cluster EdgeX communications"},{"location":"design/adr/security/0015-in-cluster-tls/#use-of-encryption-to-secure-in-cluster-edgex-communications","text":"","title":"Use of encryption to secure in-cluster EdgeX communications"},{"location":"design/adr/security/0015-in-cluster-tls/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0015-in-cluster-tls/#context","text":"This ADR seeks to define the EdgeX direction on using encryption to secure \"in-cluster\" EdgeX communications, that is, internal microservice-to-microservice communication. 
This ADR will seek to clarify the EdgeX direction in several aspects with regard to: EdgeX services communicating within a single host EdgeX services communicating across multiple hosts Using encryption for confidentiality or integrity in communication Using encryption for authentication between microservices This ADR will be used to triage EdgeX feature requests in this space.","title":"Context"},{"location":"design/adr/security/0015-in-cluster-tls/#background","text":"","title":"Background"},{"location":"design/adr/security/0015-in-cluster-tls/#why-encrypt","text":"Why consider encryption in the first place? Simple. Encryption helps with the following problems: Client authentication of servers. The client knows that it is talking to the correct server. This is typically achieved using TLS server certificates that the client checks against a trusted root certificate authority. Since the client is not in charge of network routing, TLS server authentication provides a good assurance that the requests are being routed to the correct server. Server authentication of clients. The server knows the identity of the client that has connected to it. There are a variety of mechanisms to achieve this, such as usernames and passwords, tokens, claims, et cetera, but the mechanism under consideration by this ADR is TLS client authentication using TLS client certificates. Confidentiality of messages exchanged between services. Confidentiality is needed to protect authentication data flowing between communicating microservices as well as to protect the message payloads if they contain nonpublic data. TLS provides communication channel confidentiality. Integrity of messages exchanged between services. Integrity is needed to ensure that messages between communicating microservices are not maliciously altered, such as inserting or deleting data in the middle of the exchange. TLS provides communication channel integrity. 
A microservice architecture normally strives for all of the above protections. Besides TLS, there are other mechanisms that can be used to provide some of the above properties. For example, IPSec tunnels provide confidentiality, integrity, and authentication of the hosts (network-level protection). SSH tunnels provide confidentiality, integrity, and authentication of the tunnel endpoints (also network-level protection). TLS, however, is preferred, because it operates in-process at the application level and provides better point-to-point security.","title":"Why encrypt?"},{"location":"design/adr/security/0015-in-cluster-tls/#why-to-not-encrypt","text":"In the case of TLS communications, microservices depend on an asymmetric private key to prove their identity. To be of value, this private key must be kept secret. Applications typically depend on process-level isolation and/or file system protections for the private key. Moreover, interprocess communication using sockets is mediated by the operating system kernel. An attacker running at the privilege of the operating system has the ability to compromise TLS protections, such as by substituting a private key or certificate authority of their choice, accessing the unencrypted data in process memory, or intercepting the network communications that flow through the kernel. Therefore, within a single host, TLS protections may slow down an attacker, but are not likely to stop them. Additionally, use of TLS requires management of additional security assets in the form of TLS private keys. Microservice communication across hosts, however, is vulnerable to interception, and must be protected via some mechanism such as, but not limited to: IPSec or SSH tunnels, encrypted overlay networks, service mesh middlewares, or application-level TLS. 
Another reason to not encrypt is that TLS adds overhead to microservice communication in the form of additional network round-trips when opening connections and performing cryptographic public key and symmetric key operations.","title":"Why to not encrypt?"},{"location":"design/adr/security/0015-in-cluster-tls/#decision","text":"At this time, EdgeX is primarily a single-node IoT application framework. Should this position change, this ADR should be revisited. Based on the single-node assumption: TLS will not be used for confidentiality and integrity of internal on-host microservice communication. TLS will be avoided as an authentication mechanism of peer microservices. Integrity and confidentiality of microservice communications crossing host boundaries are required to secure EdgeX, but are an EdgeX customer responsibility. EdgeX customers are welcome to add extra security to their own EdgeX deployments.","title":"Decision"},{"location":"design/adr/security/0015-in-cluster-tls/#consequences","text":"This ADR, if approved, would close the following issues as will-not-fix. https://github.com/edgexfoundry/edgex-go/issues/1942 https://github.com/edgexfoundry/edgex-go/issues/1941 https://github.com/edgexfoundry/edgex-go/issues/2454 https://github.com/edgexfoundry/developer-scripts/issues/240 https://github.com/edgexfoundry/edgex-go/issues/2495 It would also close https://github.com/edgexfoundry/edgex-go/issues/1925 as there is no current need for TLS as a mutual authentication strategy.","title":"Consequences"},{"location":"design/adr/security/0015-in-cluster-tls/#alternatives","text":"","title":"Alternatives"},{"location":"design/adr/security/0015-in-cluster-tls/#encrypted-overlay-networks","text":"Encrypted overlay networks provide varying protection based on the product used. Some can only encrypt data, such as an IPsec tunnel. Some can encrypt and provide for network microsegmentation, such as Docker Swarm networks with encryption enabled. 
Some can encrypt and enforce network policy such as restrictions on ingress traffic or restrictions on egress traffic.","title":"Encrypted overlay networks"},{"location":"design/adr/security/0015-in-cluster-tls/#service-mesh-middleware","text":"Service mesh middleware is an alternative that should be investigated if EdgeX decides to fully support a Kubernetes-based deployment using distributed Kubernetes pods. A service mesh typically achieves most of the security objectives of securing microservice communication by intercepting microservice communications and imposing a configuration-driven policy that typically includes confidentiality and integrity protection. These middlewares typically rely on the Kubernetes pod construct and are difficult to support for non-Kubernetes deployments.","title":"Service mesh middleware"},{"location":"design/adr/security/0015-in-cluster-tls/#edgex-public-key-infrastructure","text":"An EdgeX public key infrastructure that is natively supported by the architecture should be considered if EdgeX decides to support an out-of-box distributed deployment on non-Kubernetes platforms. Native support of TLS requires a significant amount of glue logic, and exceeds the available resources in the security working group to implement this strategy. The following text outlines a proposed strategy for supporting native TLS in the EdgeX framework: EdgeX will use Hashicorp Vault to secure the EdgeX PKI, through the use of the Vault PKI secrets engine. Vault will be configured with a root CA at initialization time, and a Vault-based sub-CA for dynamic generation of TLS leaf certificates. The root CA will be restricted to be used only by the Vault root token. EdgeX microservices that are based on third-party containers require special support unless they can talk natively to Vault for their secrets. 
Certain tools, such as those mentioned in the \"Creation and Distribution of Secrets\" ADR ( envconsul , consul-template , and others) can be used to facilitate third-party container integration. These services are: Consul : Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container. Vault : As Vault's database is encrypted, Vault cannot natively bootstrap its own TLS certificate. Requires TLS certificate to be injected into container and its location set in a configuration file. PostgreSQL : Requires TLS certificate to be injected into '$PGDATA' (default: /var/lib/postgresql/data ) which is where the writable database files are kept. Kong (admin) : Requires environment variable to be set to secure admin port with TLS, with a TLS certificate injected into the container. Kong (external) : Requires a bring-your-own (BYO) external certificate, or as a fallback, a default one should be generated using a configurable external hostname. (The Kong ACME plugin could possibly be used to automate this process.) Redis (v6) : Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container. Mosquitto : Requires TLS certificate set by configuration file, with a TLS certificate injected into the container. Additionally, every EdgeX microservice consumer will require access to the root CA for certificate verification purposes, and every EdgeX microservice server will need a TLS leaf certificate and private key. Note that Vault bootstrapping its own PKI is tricky and not natively supported by Vault. Expect that a non-trivial amount of effort will need to be put into starting Vault in non-secure mode to create the CA hierarchy and a TLS certificate for Vault itself, and then restarting Vault in a TLS-enabled configuration. Periodic certificate rotation is a non-trivial challenge as well. 
The Vault bootstrapping flow would look something like this: Bring up Vault on localhost with TLS disabled (bootstrapping configuration) Initialize a blank Vault and immediately unseal it Encrypt the Vault keyshares and revoke the root token Generate a new root from the keyshares Generate an on-device root CA (see https://learn.hashicorp.com/vault/secrets-management/sm-pki-engine) Create an intermediate CA for TLS server authentication Sign the intermediate CA using the root CA Configure policy for intermediate CA Generate and store leaf certificates for Consul, Vault, PostgreSQL, Kong (admin), Kong (external), Redis (v6), Mosquitto Deploy the PKI to the respective services' secrets area Write the production Vault configuration (TLS-enabled) to a Docker volume There are no current plans for mutual auth TLS. Supporting mutual auth TLS would require creation of a separate PKI hierarchy for generation of TLS client certificates and glue logic to persist the certificates in the service's key-value secret store and provide them when connecting to other EdgeX services.","title":"EdgeX public key infrastructure"},{"location":"design/adr/security/0016-docker-image-guidelines/","text":"Docker image guidelines Status Approved Context When deploying the EdgeX Docker containers, some security measures are recommended to ensure the integrity of the software stack. Decision When deploying Docker images, the following flags should be set for heightened security. To avoid escalation of privileges, each Docker container should use the no-new-privileges option in its Docker compose file (example below). More details about this flag can be found here . This follows Rule #4 for Docker security found here . security_opt: - \"no-new-privileges:true\" NOTE: Alternatively an AppArmor security profile can be used to isolate the Docker container. 
More details about apparmor profiles can be found here . security_opt: [ \"apparmor:unconfined\" ] To further prevent privilege escalation attacks, the user should be set for the Docker container using the --user= or -u= option in its Docker compose file (example below). More details about this flag can be found here . This follows Rule #2 for Docker security found here . services: device-virtual: image: ${ REPOSITORY } /docker-device-virtual-go ${ ARCH } : ${ DEVICE_VIRTUAL_VERSION } user: $CONTAINER -PORT: $CONTAINER -PORT # user option using an unprivileged user ports: - \"127.0.0.1:49990:49990\" container_name: edgex-device-virtual hostname: edgex-device-virtual networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-virtual depends_on: - consul - data - metadata NOTE: exception Sometimes containers will require root access to perform their functions. For example, the System Management Agent requires root access to control other Docker containers. In this case you would allow it to run as the default root user. To prevent faulty or compromised containers from consuming excessive amounts of the host's resources, resource limits should be set for each container. More details about resource limits can be found here . This follows Rule #7 for Docker security found here . services: device-virtual: image: ${ REPOSITORY } /docker-device-virtual-go ${ ARCH } : ${ DEVICE_VIRTUAL_VERSION } user: 4000 :4000 # user option using an unprivileged user ports: - \"127.0.0.1:49990:49990\" container_name: edgex-device-virtual hostname: edgex-device-virtual networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-virtual depends_on: - consul - data - metadata deploy: # Deployment resource limits resources: limits: cpus: '0.001' memory: 50M reservations: cpus: '0.0001' memory: 20M To prevent attackers from writing data to the containers and modifying their files, the --read_only flag should be set. 
More details about this flag can be found here . This follows Rule #8 for Docker security found here . device-rest: image: ${ REPOSITORY } /docker-device-rest-go ${ ARCH } : ${ DEVICE_REST_VERSION } ports: - \"127.0.0.1:49986:49986\" container_name: edgex-device-rest hostname: edgex-device-rest read_only: true # read_only option networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-rest depends_on: - data - command NOTE: exception If a container is required to have write permission to function, then this flag will not work. For example, Vault needs to run setcap in order to lock pages in memory. In this case the --read_only flag will not be used. NOTE: Volumes If writing persistent data is required then a volume can be used. A volume can be attached to the container in the following way: device-rest: image: ${ REPOSITORY } /docker-device-rest-go ${ ARCH } : ${ DEVICE_REST_VERSION } ports: - \"127.0.0.1:49986:49986\" container_name: edgex-device-rest hostname: edgex-device-rest read_only: true # read_only option networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-rest depends_on: - data - command volumes: - consul-config:/consul/config:z NOTE: alternatives If writing non-persistent data is required (e.g. a config file) then a temporary filesystem mount can be used to accomplish this goal while still enforcing --read_only . Mounting a tmpfs in Docker gives the container a temporary location in the host system's memory to modify files. This location will be removed once the container is stopped. 
More details about tmpfs can be found here . For additional Docker security rules and guidelines, please check the Docker security cheatsheet Consequences Create a more secure Docker environment References Docker-compose reference https://docs.docker.com/compose/compose-file OWASP Docker Recommendations https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html CIS Docker Benchmark https://workbench.cisecurity.org/files/2433/download/2786 (registration required)","title":"Docker image guidelines"},{"location":"design/adr/security/0016-docker-image-guidelines/#docker-image-guidelines","text":"","title":"Docker image guidelines"},{"location":"design/adr/security/0016-docker-image-guidelines/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0016-docker-image-guidelines/#context","text":"When deploying the EdgeX Docker containers, some security measures are recommended to ensure the integrity of the software stack.","title":"Context"},{"location":"design/adr/security/0016-docker-image-guidelines/#decision","text":"When deploying Docker images, the following flags should be set for heightened security. To avoid escalation of privileges, each Docker container should use the no-new-privileges option in its Docker compose file (example below). More details about this flag can be found here . This follows Rule #4 for Docker security found here . security_opt: - \"no-new-privileges:true\" NOTE: Alternatively an AppArmor security profile can be used to isolate the Docker container. More details about apparmor profiles can be found here . security_opt: [ \"apparmor:unconfined\" ] To further prevent privilege escalation attacks, the user should be set for the Docker container using the --user= or -u= option in its Docker compose file (example below). More details about this flag can be found here . This follows Rule #2 for Docker security found here . 
services: device-virtual: image: ${ REPOSITORY } /docker-device-virtual-go ${ ARCH } : ${ DEVICE_VIRTUAL_VERSION } user: $CONTAINER -PORT: $CONTAINER -PORT # user option using an unprivileged user ports: - \"127.0.0.1:49990:49990\" container_name: edgex-device-virtual hostname: edgex-device-virtual networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-virtual depends_on: - consul - data - metadata NOTE: exception Sometimes containers will require root access to perform their functions. For example, the System Management Agent requires root access to control other Docker containers. In this case you would allow it to run as the default root user. To prevent faulty or compromised containers from consuming excessive amounts of the host's resources, resource limits should be set for each container. More details about resource limits can be found here . This follows Rule #7 for Docker security found here . services: device-virtual: image: ${ REPOSITORY } /docker-device-virtual-go ${ ARCH } : ${ DEVICE_VIRTUAL_VERSION } user: 4000 :4000 # user option using an unprivileged user ports: - \"127.0.0.1:49990:49990\" container_name: edgex-device-virtual hostname: edgex-device-virtual networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-virtual depends_on: - consul - data - metadata deploy: # Deployment resource limits resources: limits: cpus: '0.001' memory: 50M reservations: cpus: '0.0001' memory: 20M To prevent attackers from writing data to the containers and modifying their files, the --read_only flag should be set. More details about this flag can be found here . This follows Rule #8 for Docker security found here . 
device-rest: image: ${ REPOSITORY } /docker-device-rest-go ${ ARCH } : ${ DEVICE_REST_VERSION } ports: - \"127.0.0.1:49986:49986\" container_name: edgex-device-rest hostname: edgex-device-rest read_only: true # read_only option networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-rest depends_on: - data - command NOTE: exception If a container is required to have write permission to function, then this flag will not work. For example, Vault needs to run setcap in order to lock pages in memory. In this case the --read_only flag will not be used. NOTE: Volumes If writing persistent data is required, then a volume can be used. A volume can be attached to the container in the following way: device-rest: image: ${ REPOSITORY } /docker-device-rest-go ${ ARCH } : ${ DEVICE_REST_VERSION } ports: - \"127.0.0.1:49986:49986\" container_name: edgex-device-rest hostname: edgex-device-rest read_only: true # read_only option networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-rest depends_on: - data - command volumes: - consul-config:/consul/config:z NOTE: alternatives If writing non-persistent data is required (e.g. a config file), then a temporary filesystem mount can be used to accomplish this goal while still enforcing --read_only . Mounting a tmpfs in Docker gives the container a temporary location in the host system's memory to modify files. This location will be removed once the container is stopped. 
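As a sketch of the tmpfs alternative described above, a compose service can combine read_only with a memory-backed scratch mount. The mount path and size options below are illustrative assumptions, not taken from the EdgeX compose files:

```yaml
device-rest:
  image: ${REPOSITORY}/docker-device-rest-go${ARCH}:${DEVICE_REST_VERSION}
  read_only: true          # root filesystem stays immutable
  tmpfs:
    - /tmp:size=16m        # in-memory scratch space, discarded when the container stops
```

Files written under /tmp never touch the host disk and disappear when the container stops, so --read_only remains enforced for everything else.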
More details about tmpfs can be found here . For additional Docker security rules and guidelines, please check the Docker security cheatsheet .","title":"Decision"},{"location":"design/adr/security/0016-docker-image-guidelines/#consequences","text":"Create a more secure Docker environment","title":"Consequences"},{"location":"design/adr/security/0016-docker-image-guidelines/#references","text":"Docker-compose reference https://docs.docker.com/compose/compose-file OWASP Docker Recommendations https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html CIS Docker Benchmark https://workbench.cisecurity.org/files/2433/download/2786 (registration required)","title":"References"},{"location":"design/adr/security/0017-consul-security/","text":"Securing access to Consul Status Approved Context This ADR defines the motivation and approach used to secure access to the Consul component in the EdgeX architecture for security-enabled configurations only . Non-secure configurations continue to use Consul in anonymous read-write mode. As this Consul security feature requires Vault to function, if EDGEX_SECURITY_SECRET_STORE=false and Vault is not present, the legacy behavior (unauthenticated Consul access) will be preserved. Consul provides several services for the EdgeX architecture: Service registry (see ADR in references below) Service health monitoring Mutable configuration data Use of the services provided by Consul is optional on a service-by-service basis. Use of the registry is controlled by the -r or --registry flag provided to an EdgeX service. Use of mutable configuration data is controlled by the -cp or --configProvider flag provided to an EdgeX service. When Consul is enabled as a configuration provider, the configuration.toml is parsed into individual settings and seeded into the Consul key-value store on the first start of a service. 
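To make the seeding behavior concrete: each setting in configuration.toml becomes an individual key under a per-service prefix in Consul's KV store. The exact keys below are an illustrative sketch of the EdgeX 2.x layout and should be verified against your deployment:

```
edgex/core/2.0/core-data/Writable/LogLevel = INFO
edgex/core/2.0/core-data/Service/Port      = 59880
```

A change to a key under the Writable/ prefix triggers the service's watch callback; keys outside Writable/ are read only once at startup.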
Configuration reads and writes are then done to Consul if it is specified as the configuration provider, otherwise the static configuration.toml is used. Writes to the [Writable] section in Consul trigger per-service callbacks notifying the application of the changed data. Updates to non- [Writable] sections are parsed only once at startup and require a service restart to take effect. Since configuration data can affect the runtime behavior of services, compensating controls must be introduced in order to mitigate the risks introduced by moving configuration from a static file into an HTTP-accessible service with mutable state. The current practice is that Consul is exposed via unencrypted HTTP in anonymous read/write mode to all processes and EdgeX services running on the host machine. Decision Consul will be configured with access control list (ACL) functionality enabled, and each EdgeX service will utilize a Consul access token to authenticate to Consul. Consul access tokens will be requested from the Vault Consul secrets engine (to avoid introducing additional bootstrapping secrets). DNS will be disabled via configuration as it is not used in EdgeX. Consul Access Via API Gateway In security-enabled EdgeX, the API gateway will be configured to proxy the Consul service over the /consul path, using the request-transformer plugin to add the global management token to incoming requests via the X-Consul-Token HTTP header. Thus, ability to access remote APIs also grants the ability to modify Consul's key-value store. At this time, service access via API gateway is all-or-nothing, but this does not preclude future fine-grained authorization at the API gateway layer to specific microservices, including Consul. 
Proxying of the Consul UI is problematic and there is no current solution, which would involve proper balancing of the externally-visible URL, the path-stripping effect (or not) of the proxy, Consul's ui_content_path , and UI authentication (the request-transformer does not work on the UI). Consequences Full implementation of this ADR will deny Consul access to all existing Consul clients. To limit the impacts of the change, deployment will take place in phases. Phase 1 is basic plumbing work and leaves Consul configured in a permissive mode and thus is not a breaking change. Phase 2 will affect the APIs of Go modules and will change the default policy to \"deny\", both of which are breaking changes. Phase 3 is a refinement of access control; presuming the existing services are \"well-behaved\", that is, they do not access configuration of other services, Phase 3 will not introduce any breaking changes on top of the Phase 2 breaking changes. Phase 1 (completed in Ireland release) Vault bootstrapper will install Vault Consul secrets engine. Secretstore-setup will create a Vault token for consul secrets engine configuration. Consul will be started with Consul ACLs enabled with persistent agent tokens and a default \"allow\" policy. Consul bootstrapper will create a bootstrap management token and use the provided Vault token to (re)configure the Consul secrets engine in Vault. Due to a quirk in Consul's ACL behavior that inverts the meaning of an ACL in default-allow mode, in phase 1 the Consul bootstrapper will create an agent token with the global-management policy and install it into the agent. During phase 2, it will be changed to a specific, limited, policy. (This change should not be visible to Consul API clients.) The bootstrap management token will also be stored persistently to be used by the API gateway for proxy authentication, and will also be needed for local access to Consul's web user interface. 
(Docker-only) Open a port to signal that Consul bootstrapping is completed. (Integrate with ready_to_run signal.) Phase 2 (completed in Ireland release) Consul bootstrapper will install a role in Vault that creates global-management tokens in Consul with no TTL. Registry and configuration client libraries will be modified to accept a Consul access token. go-mod-bootstrap will contain the necessary glue logic to request a service-specific Consul access token from Vault every time the service is started. Consul configuration will be changed to a default \"deny\" policy once all services have been changed to authenticated access mode. The agent tokens' policy will be changed to a specific agent policy instead of the global-management policy. Phase 3 (for Jakarta release) Introduce per-service roles and ACL policies that give each service access to its own subset of the Consul key-value store and to register in the service registry. Consul access tokens will be scoped to the needs of the particular service (ability to update that service's registry data, and access that service's KV store). Create a separate management token (non-bootstrap) for API gateway proxy authentication and Consul UI access that is different from the bootstrap management token stored in Vault. This token will need to be requested outside of Vault in order for it to be non-expiring. Glue logic will ensure that expired Consul tokens are replaced with fresh ones (token freshness can be pre-checked by a request made to /acl/token/self ). Unintended consequences and mitigation (for Jakarta stabilization release) Consul token lifetime will be tied to the Vault token lifetime. Vault deliberately revokes any Consul tokens that it issues in order to ensure that they don't outlive the parent token's lifetime. If Consul is not fully initialized when token revocation is attempted, Vault will be unable to revoke these tokens. 
Mitigations: Consul will be started concurrently with Vault to give time for Consul to fully initialize. secretstore-setup will delay starting until Consul has completed leader election. secretstore-setup will be modified to less aggressively revoke tokens. Alternatives include revoke-and-orphan which should leave the Consul tokens intact if the secret store is restarted but may leave garbage tokens in the Consul database, or tidy-tokens which cleans up invalid entries in the token database, or simply leave Vault to its own devices and let Vault clean itself up. Testing will be performed and an appropriate mechanism selected. References ADR for secret creation and distribution ADR for secure bootstrapping ADR for service registry Hashicorp Vault","title":"Securing access to Consul"},{"location":"design/adr/security/0017-consul-security/#securing-access-to-consul","text":"","title":"Securing access to Consul"},{"location":"design/adr/security/0017-consul-security/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0017-consul-security/#context","text":"This ADR defines the motivation and approach used to secure access to the Consul component in the EdgeX architecture for security-enabled configurations only . Non-secure configurations continue to use Consul in anonymous read-write mode. As this Consul security feature requires Vault to function, if EDGEX_SECURITY_SECRET_STORE=false and Vault is not present, the legacy behavior (unauthenticated Consul access) will be preserved. Consul provides several services for the EdgeX architecture: Service registry (see ADR in references below) Service health monitoring Mutable configuration data Use of the services provided by Consul is optional on a service-by-service basis. Use of the registry is controlled by the -r or --registry flag provided to an EdgeX service. Use of mutable configuration data is controlled by the -cp or --configProvider flag provided to an EdgeX service. 
When Consul is enabled as a configuration provider, the configuration.toml is parsed into individual settings and seeded into the Consul key-value store on the first start of a service. Configuration reads and writes are then done to Consul if it is specified as the configuration provider, otherwise the static configuration.toml is used. Writes to the [Writable] section in Consul trigger per-service callbacks notifying the application of the changed data. Updates to non- [Writable] sections are parsed only once at startup and require a service restart to take effect. Since configuration data can affect the runtime behavior of services, compensating controls must be introduced in order to mitigate the risks introduced by moving configuration from a static file into an HTTP-accessible service with mutable state. The current practice is that Consul is exposed via unencrypted HTTP in anonymous read/write mode to all processes and EdgeX services running on the host machine.","title":"Context"},{"location":"design/adr/security/0017-consul-security/#decision","text":"Consul will be configured with access control list (ACL) functionality enabled, and each EdgeX service will utilize a Consul access token to authenticate to Consul. Consul access tokens will be requested from the Vault Consul secrets engine (to avoid introducing additional bootstrapping secrets). DNS will be disabled via configuration as it is not used in EdgeX. Consul Access Via API Gateway In security-enabled EdgeX, the API gateway will be configured to proxy the Consul service over the /consul path, using the request-transformer plugin to add the global management token to incoming requests via the X-Consul-Token HTTP header. Thus, ability to access remote APIs also grants the ability to modify Consul's key-value store. 
At this time, service access via API gateway is all-or-nothing, but this does not preclude future fine-grained authorization at the API gateway layer to specific microservices, including Consul. Proxying of the Consul UI is problematic and there is no current solution, which would involve proper balancing of the externally-visible URL, the path-stripping effect (or not) of the proxy, Consul's ui_content_path , and UI authentication (the request-transformer does not work on the UI).","title":"Decision"},{"location":"design/adr/security/0017-consul-security/#consequences","text":"Full implementation of this ADR will deny Consul access to all existing Consul clients. To limit the impacts of the change, deployment will take place in phases. Phase 1 is basic plumbing work and leaves Consul configured in a permissive mode and thus is not a breaking change. Phase 2 will affect the APIs of Go modules and will change the default policy to \"deny\", both of which are breaking changes. Phase 3 is a refinement of access control; presuming the existing services are \"well-behaved\", that is, they do not access configuration of other services, Phase 3 will not introduce any breaking changes on top of the Phase 2 breaking changes.","title":"Consequences"},{"location":"design/adr/security/0017-consul-security/#phase-1-completed-in-ireland-release","text":"Vault bootstrapper will install Vault Consul secrets engine. Secretstore-setup will create a Vault token for consul secrets engine configuration. Consul will be started with Consul ACLs enabled with persistent agent tokens and a default \"allow\" policy. Consul bootstrapper will create a bootstrap management token and use the provided Vault token to (re)configure the Consul secrets engine in Vault. Due to a quirk in Consul's ACL behavior that inverts the meaning of an ACL in default-allow mode, in phase 1 the Consul bootstrapper will create an agent token with the global-management policy and install it into the agent. 
During phase 2, it will be changed to a specific, limited, policy. (This change should not be visible to Consul API clients.) The bootstrap management token will also be stored persistently to be used by the API gateway for proxy authentication, and will also be needed for local access to Consul's web user interface. (Docker-only) Open a port to signal that Consul bootstrapping is completed. (Integrate with ready_to_run signal.)","title":"Phase 1 (completed in Ireland release)"},{"location":"design/adr/security/0017-consul-security/#phase-2-completed-in-ireland-release","text":"Consul bootstrapper will install a role in Vault that creates global-management tokens in Consul with no TTL. Registry and configuration client libraries will be modified to accept a Consul access token. go-mod-bootstrap will contain the necessary glue logic to request a service-specific Consul access token from Vault every time the service is started. Consul configuration will be changed to a default \"deny\" policy once all services have been changed to authenticated access mode. The agent tokens' policy will be changed to a specific agent policy instead of the global-management policy.","title":"Phase 2 (completed in Ireland release)"},{"location":"design/adr/security/0017-consul-security/#phase-3-for-jakarta-release","text":"Introduce per-service roles and ACL policies that give each service access to its own subset of the Consul key-value store and to register in the service registry. Consul access tokens will be scoped to the needs of the particular service (ability to update that service's registry data, and access that service's KV store). Create a separate management token (non-bootstrap) for API gateway proxy authentication and Consul UI access that is different from the bootstrap management token stored in Vault. This token will need to be requested outside of Vault in order for it to be non-expiring. 
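A per-service Consul ACL policy of the kind Phase 3 describes could look roughly like the following HCL. The key-prefix naming and service name are assumptions for illustration, not the actual generated policies:

```hcl
# Hypothetical scoped policy for a single service
key_prefix "edgex/core/2.0/core-data/" {
  policy = "write"   # read/update only this service's configuration subtree
}
service "core-data" {
  policy = "write"   # register and update only this service's registry entry
}
```

A token carrying only this policy cannot read or modify another service's KV subtree or registry entry, which is the access partitioning Phase 3 aims for.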
Glue logic will ensure that expired Consul tokens are replaced with fresh ones (token freshness can be pre-checked by a request made to /acl/token/self ).","title":"Phase 3 (for Jakarta release)"},{"location":"design/adr/security/0017-consul-security/#unintended-consequences-and-mitigation-for-jakarta-stabilization-release","text":"Consul token lifetime will be tied to the Vault token lifetime. Vault deliberately revokes any Consul tokens that it issues in order to ensure that they don't outlive the parent token's lifetime. If Consul is not fully initialized when token revocation is attempted, Vault will be unable to revoke these tokens. Mitigations: Consul will be started concurrently with Vault to give time for Consul to fully initialize. secretstore-setup will delay starting until Consul has completed leader election. secretstore-setup will be modified to less aggressively revoke tokens. Alternatives include revoke-and-orphan which should leave the Consul tokens intact if the secret store is restarted but may leave garbage tokens in the Consul database, or tidy-tokens which cleans up invalid entries in the token database, or simply leave Vault to its own devices and let Vault clean itself up. Testing will be performed and an appropriate mechanism selected.","title":"Unintended consequences and mitigation (for Jakarta stabilization release)"},{"location":"design/adr/security/0017-consul-security/#references","text":"ADR for secret creation and distribution ADR for secure bootstrapping ADR for service registry Hashicorp Vault","title":"References"},{"location":"design/adr/security/0020-spiffe/","text":"Use SPIFFE/SPIRE for On-demand Secret Store Token Generation Status Approved via TSC vote on 2021-12-14 Context In security-enabled EdgeX, there is a component called security-secretstore-setup that seeds authentication tokens for Hashicorp Vault--EdgeX's secret store--into directories reserved for each EdgeX microservice. 
The implementation is provided by a sub-component, security-file-token-provider , that works off of a static configuration file ( token-config.json ) that configures known EdgeX services, and an environment variable that lists additional services that require tokens. The token provider creates a unique token for each service and attaches a custom policy to each token that limits token access in a manner that partitions the secret store's namespace. The current solution has some problematic aspects: These tokens have an initial TTL of one hour (1h) and become invalid if not used and renewed within that time period. It is not possible to delay the start of EdgeX services until a later time (that is, greater than the default token TTL), as they will not be able to connect to the EdgeX secret store to obtain required secrets. Transmission of the authentication token requires one or more shared file systems between the service and security-secretstore-setup . In the Docker implementation, this shared file system is constructed by bind-mounting a host-based directory to multiple containers. The snap implementation is similar, utilizing a content-interface between snaps. In a Kubernetes implementation limited to a single worker node, a CSI storage driver that provided RWO volumes would suffice. The current approach cannot support distributed services without an underlying distributed file system to distribute tokens, such as GlusterFS, running across the participating nodes. For Kubernetes, the requirement would be a remote shared file system persistent volume (RWX volume). Decision EdgeX will create a new service, security-spiffe-token-provider . This service will be a mutual-auth TLS service that exchanges a SPIFFE X.509 SVID for a secret store token. A SPIFFE identifier is a URI of the format spiffe://trust domain/workload identifier . For example: spiffe://edgexfoundry.org/service/core-data . 
A SPIFFE Verifiable Identity Document (SVID) is a cryptographically-signed version of a SPIFFE ID, typically an X.509 certificate with the SPIFFE ID encoded into the subjectAltName certificate extension, or a JSON web token (encoded into the sub claim). The EdgeX implementation will use a naming convention on the path component, such as the above, in order to be able to extract the requesting service from the SPIFFE ID. The SPIFFE token provider will take three parameters: An X.509 SVID used in mutual-auth TLS for the token provider and the service to cross-authenticate. The requested service key. If blank, the service key will default to the service name encoded in the SVID. If the service name follows the pattern device-(name) , then the service key must follow the format device-(name) or device-name-* . If the service name is app-service-configurable , then the service key must follow the format app-* . (This is an accommodation for the Unix workload attester not being able to distinguish workloads that are launched using the same executable binary. Custom app services that support multiple instances won't be supported unless they name the executable the same as the standard app service binary or modify this logic.) A list of \"known secret\" identifiers that will allow new services to request database passwords or other \"known secrets\" to be seeded into their service's partition in the secret store. The go-mod-secrets module will be modified to enable a new mode whereby a secret store token is obtained by: Obtaining an X.509 SVID by contacting a local SPIFFE agent's workload API on a local Unix domain socket. Connecting to the security-spiffe-token-provider service using the X.509 SVID to request a secret store token. The SPIFFE authentication mode will be an opt-in feature. The SPIFFE implementation will be user-replaceable; specifically, the workload API socket will be configurable, as well as the parsing of the SPIFFE ID. 
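The naming convention and the service-key matching rules above can be sketched in Go. This is an illustrative sketch, not the actual security-spiffe-token-provider code; the function names and the exact wildcard handling are assumptions:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// serviceKeyFromSPIFFEID extracts the workload identifier from a SPIFFE ID of
// the form spiffe://<trust domain>/service/<service key>.
func serviceKeyFromSPIFFEID(id string) (string, error) {
	u, err := url.Parse(id)
	if err != nil {
		return "", err
	}
	const prefix = "/service/"
	if u.Scheme != "spiffe" || !strings.HasPrefix(u.Path, prefix) {
		return "", fmt.Errorf("unexpected SPIFFE ID: %s", id)
	}
	return strings.TrimPrefix(u.Path, prefix), nil
}

// keyAllowed applies the matching rules described above: a blank request
// defaults to the SVID's service name, a device-(name) workload may request
// device-(name) or device-(name)-* keys, and app-service-configurable may
// request app-* keys.
func keyAllowed(svidService, requestedKey string) bool {
	switch {
	case requestedKey == "" || requestedKey == svidService:
		return true
	case strings.HasPrefix(svidService, "device-"):
		return strings.HasPrefix(requestedKey, svidService+"-")
	case svidService == "app-service-configurable":
		return strings.HasPrefix(requestedKey, "app-")
	default:
		return false
	}
}

func main() {
	key, err := serviceKeyFromSPIFFEID("spiffe://edgexfoundry.org/service/core-data")
	fmt.Println(key, err)
	fmt.Println(keyAllowed("device-virtual", "device-virtual-2"))
	fmt.Println(keyAllowed("device-virtual", "core-data"))
}
```

The point of the sketch is that the trust domain carries no authorization weight here; only the path component is mapped to a service key, and the key check prevents one workload from requesting another service's token.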
Reasons for doing so might include: changing the name of the trust domain in the SPIFFE ID, or moving the SPIFFE server out of the edge. This feature is estimated to be a \"large\" or \"extra large\" effort that could be implemented in a single release cycle. Technical Architecture The work flow is as follows: Create a root CA for the SPIFFE user to use for creation of sub-CAs. The SPIFFE server is started. The server creates a sub-CA for issuing new identities. The trust bundle (certificate authority) data is exported from the SPIFFE server and stored on a shared volume readable by other EdgeX microservices (i.e. the existing secrets volume used for sharing secret store tokens). A join token for the SPIFFE agent is created using token generate and shared to the EdgeX secrets volume. Workload entries are loaded into the SPIFFE server database, using the join-identity of the agent created in the previous step as the parent ID of the workload. The SPIFFE agent is started with the join token created in a previous step to add it to the cluster. Vault is started and security-secret-store-setup initializes it and creates an admin token for security-spiffe-token-provider to use. The security-spiffe-token-provider service is started. It obtains an SVID from the SPIFFE agent and uses it as a TLS server certificate. An EdgeX microservice starts and obtains another SVID from the SPIFFE agent and uses it as a TLS client certificate to contact the security-spiffe-token-provider service. The EdgeX microservice uses the trust bundle as a server CA to verify the TLS certificate of the remote service. security-spiffe-token-provider verifies the SVID using the trust bundle as client CA to verify the client, extracts the service key, and issues an appropriate Vault service token. The EdgeX microservice accesses Vault as usual. 
Workload Registration and Agent Sockets The server uses a workload registration Unix domain socket that allows authorization entries to be added to the authorization database. This socket is protected by Unix file system permissions to control who is allowed to add entries to the database. In this proposal, a subcommand will be added to the EdgeX secrets-config utility to simplify the process of registering new services that uses the registration socket above. The agent uses a workload attestation Unix domain socket that is open to the world. This socket is shared via a snap content-interface or via a shared host bind mount for Docker. There is one agent per node. Trust Bundle SVIDs must be traceable back to a known issuing authority (certificate authority) to determine their validity. In the proposed implementation, we will generate a CA on first boot and store it persistently. This root CA will be distributed as the trust bundle. The SPIFFE server will then generate a rotating sub-CA for issuing SVIDs, and the issued SVID will include both the leaf certificate and the intermediate certificate. This implementation differs from the default implementation, which uses a transient CA that is rotated periodically and that keeps a log of past CAs. The default implementation is not suitable because only the Kubernetes reference implementation of the SPIRE server has a notification hook that is invoked when the CA is rotated. CA rotation would just result in issuing of SVIDs that are not trusted by microservices that received only the initial CA. The SPIFFE implementation is replaceable. The user is free to replace this default implementation with potentially a cloud-based SPIFFE server and a cloud-based CA. Workload Authorization Workloads are authenticated by connecting to the spiffe-agent via a Unix domain socket, which is capable of identifying the process ID of the remote client. 
The process ID is fed into one of the following workload attesters, which gather additional metadata about the caller: The Unix workload attester gathers UID, GID, path, and SHA-256 hash of the executable. The Unix workload attester would be used for native services and snaps. The Docker workload attester gathers container labels that are added by docker-compose when the container is launched. The Docker workload attester would be used for Docker-based EdgeX deployments. An example label is docker:label:com.docker.compose.service:edgex-core-data where the service label is the key value in the services section of the docker-compose.yml . It is also possible to refer to labels built-in to the container image. The Kubernetes workload attester gathers a wealth of pod and container metadata. Once authenticated, the metadata is sent to the SPIFFE server to authorize the workload. Workloads are authorized via an authorization database connected to the SPIFFE server. Supported databases are SQLite (default), PostgreSQL, and MySQL. Due to startup ordering issues, SQLite will be used. (Disclaimer: SQLite, according to the Turtle book, is intended for development and test only. We will use SQLite anyway because Redis is not supported.) The only service that needs to be seeded to the database at this time is security-spiffe-token-provider . For example: spire-server entry create -parentID \" ${ local_agent_svid } \" -dns edgex-spiffe-token-provider -spiffeID \" ${ svid_service_base } /edgex-spiffe-token-provider\" -selector \"docker:label:com.docker.compose.service:edgex-spiffe-token-provider\" The above command associates a SPIFFE ID with a selector , in this case, a container label, and configures a DNS subjectAltName in the X.509 certificate for server-side TLS. A snap-based installation of EdgeX would use a unix:path or unix:sha256 selector instead. 
There are two extension mechanisms for authorizing additional workloads: Inject a config file or environment variable to authorize additional workloads. The container will parse and issue spire-server entry create commands for each additional service. Run the edgex-secrets-config utility (that will wrap the spire-server entry create command) for ad-hoc authorization of new services. The authorization database is persistent across reboots. Consequences This proposal will require addition of several new, optional, EdgeX microservices: security-spiffe-token-provider , running on the main node spiffe-agent , running on the main node and each remote node spiffe-server , running on the main node spiffe-config , a one-shot service running on the main node Note that like Vault, the recommended SPIFFE configuration is to run the SPIFFE server on a dedicated node. If this is a concern, bring your own SPIFFE implementation. Minor changes will be needed to security-secretstore-setup to preserve the token-creating-token used by security-file-token-provider so that it can be used by security-spiffe-token-provider . The startup flow of the framework will be adjusted as follows: Bootstrap service (original) spiffe-server spiffe-config (can be combined with spiffe-server ) spiffe-agent Vault service (original) Secret store setup service (original) security-spiffe-token-provider Consul (original) Postgres (original) There is no direct dependency between spiffe-server and any other microservice. security-spiffe-token-provider requires an SVID from spiffe-agent and a Vault admin token. None of these new services will be proxied via the API gateway. In the future, this mechanism may become the default secret store distribution mechanism, as it eliminates several secrets volumes used to share secrets between security-secretstore-setup and various EdgeX microservices. The EdgeX automation will only configure the SPIFFE agent on the main node. 
Additional nodes can be manually added by the operator by obtaining a join token from the main node and using it to bootstrap a remote node. SPIFFE/SPIRE has native support for Kubernetes and can distribute the trust bundle via a Kubernetes ConfigMap to more easily enable distributed scenarios, removing a major roadblock to usage of EdgeX in a Kubernetes environment. Footprint NOTE: This data is limited by the fact that the pre-built SPIRE reference binaries are compiled with CGO enabled. SPIRE Server 69 MB executable, dynamically linked 151 MB inside of a Debian-slim container 30 MB memory usage, as container SPIRE Agent 33 MB executable, dynamically linked 114 MB inside of a Debian-slim container 64 MB memory usage, as container SPIFFE-base Secret Store Token Provider The following is the minimum size: > 6 MB executable (likely much larger) > 29 MB memory usage, as container Limitations The following are known limitations with this proposal: The capabilities enabled by this solution would only be enabled on Linux platforms. SPIFFE/SPIRE Agent is not available for native Windows and pre-built binaries are only available for Linux. (It is unclear as to whether other *nix'es are supported.) The capabilities enabled by this solution would only be supported for Go-based services. The SPIFFE APIs are implemented in gRPC, which is only ported to C#, C++, Dart, Go, Java, Kotlin, Node, Objective-C, PHP, Python, and Ruby. Notably, the C language is not supported, and the only other EdgeX supported language is Go. The default TTL of an X.509 SVID is one hour. As such, all SVID consumers must be capable of auto-renewal of SVIDs on both the client and server side. Alternatives Overcoming lack of a supported GRPC-C library Leave C-SDK device services behind. In this option, C device services would be unable to participate in the delayed-start services architecture. Fork a grpc-c library. Forking a grpc-c library and rehabilitating it is one option. 
There is at least one grpc-c library that has been proven to work, but it requires additional features to make it compatible with the SPIRE workload agent. However, the project is extremely large and it is unlikely that EdgeX is big enough to carry the project. Available libraries include: https://github.com/lixiangyun/grpc-c This library is several years out-of-date, does not compile on current Linux distributions without some rework, and does not pass per-request metadata tags. Proved to work via manual patching. Not supportable. https://github.com/Juniper/grpc-c This library is several years out-of-date, also does not compile on current Linux distributions without some rework. Uses hard-coded Unix domain socket paths. May support per-request metadata tags, but did not test. Not supportable. https://github.com/HewlettPackard/c-spiffe This library is yet untested. Rather than a gRPC library, this library implements the workload API client directly. Ultimately, this library also wraps the gRPC C++ library, and statically links to it. There is no benefit to the EdgeX project to use this library as we can call the underlying library directly. Hybrid device services. In this model, device services would always be written in Go, but in the case where linking to a C language library is required, CGO features would be used to invoke native C functions from golang. This option would commit the EdgeX project to a one-time investment to port the existing C device services to the new hybrid model. This option is the best choice if the long-term strategy is to end-of-life the C Device SDK. Bridge. In this model, the C++ implementation to invoke the SPIFFE/SPIRE workload API would be hidden behind a dynamic shared library with C linkage. This would require minimal change to the existing C SDK. However, the resulting binaries would have to be based on GLIBC vs MUSL in order to get dlopen() support. 
This will also limit the choice of container base images for containerized services. Modernize. In this model, the Device SDK would be rewritten either partially or in-full in C++. Under this model, the SPIFFE/SPIRE workload API could be accessed via a community-supported C++ GRPC SDK. There are many implementation options: A \"C++ compilation-switch\" where the C SDK could be compiled in C-mode or C++-mode with enhanced functionality. A C++ extension API. The original C SDK would remain as-is, but if compiling with __cplusplus defined, additional API methods would be exposed. The SDK could thus be composed of a mixture of .c files with C linkage and .cc files with C++ linkage. The linker would ultimately determine whether or not the C++ runtime library needed to be linked in. Native C++ device SDK with legacy C wrapper facade. Compile existing code in C++ mode, with optional C++ facade. Opt-in or Standard Feature If one of the following things were to happen, it would push this proposal \"over the edge\" from being an optional opt-in feature to a required standard feature for security: The \"on-demand\" method of obtaining a secret store token is the default method of obtaining a token for non-core EdgeX services. The \"on-demand\" method of obtaining a secret store token is the default method for all EdgeX services. SPIFFE SVID's become the implementation mechanism for microservice-level authentication. (Not in scope for this ADR.) Merge security-file-token-provider and security-spiffe-token-provider Keeping these as separate executables clearly separates the on-demand secret store tokens feature as an optional service. It is possible to combine the services, but there would need to be a configuration switch in order to enable the SPIFFE feature. It would also increase the base executable size to include the extra logic. 
Alternatives regarding SPIFFE CA Transient CA option The SPIFFE server can be configured with no \"upstream authority\" (certificate authority), and the server will periodically generate a new, transient CA, and keep a bounded history of previous CA's. A rotating trust bundle only practically works in a Kubernetes environment, since a configmap can be updated real-time. For everyone else, we need a static CA that can be pre-distributed to remote nodes. Thus, this solution was not chosen. Vault-based CA option The SPIFFE server can be configured to make requests to a Hashicorp Vault PKI secrets engine to generate intermediate CA certificates for signing SVID's. This is an option for future integrations, but is omitted from this proposal due to the jump in implementation complexity and the desire that the current proposal be an add-on feature. The current implementation allows the SPIFFE server and Vault to be started simultaneously. Using a Vault-based CA would require a complex interlocking sequence of steps. References Issue to create ADR for handling delayed-start services 0018 Service Registry ADR Service List ADR SPIFFE SPIFFE ID X.509 SVID JWT SVID Turtle book","title":"Use SPIFFE/SPIRE for On-demand Secret Store Token Generation"},{"location":"design/adr/security/0020-spiffe/#use-spiffespire-for-on-demand-secret-store-token-generation","text":"","title":"Use SPIFFE/SPIRE for On-demand Secret Store Token Generation"},{"location":"design/adr/security/0020-spiffe/#status","text":"Approved via TSC vote on 2021-12-14","title":"Status"},{"location":"design/adr/security/0020-spiffe/#context","text":"In security-enabled EdgeX, there is a component called security-secretstore-setup that seeds authentication tokens for Hashicorp Vault--EdgeX's secret store--into directories reserved for each EdgeX microservice. 
The implementation is provided by a sub-component, security-file-token-provider , that works off of a static configuration file ( token-config.json ) that configures known EdgeX services, and an environment variable that lists additional services that require tokens. The token provider creates a unique token for each service and attaches a custom policy to each token that limits token access in a manner that partitions the secret store's namespace. The current solution has some problematic aspects: These tokens have an initial TTL of one hour (1h) and become invalid if not used and renewed within that time period. It is not possible to delay the start of EdgeX services until a later time (that is, greater than the default token TTL), as they will not be able to connect to the EdgeX secret store to obtain required secrets. Transmission of the authentication token requires one or more shared file systems between the service and security-secretstore-setup . In the Docker implementation, this shared file system is constructed by bind-mounting a host-based directory to multiple containers. The snap implementation is similar, utilizing a content-interface between snaps. In a Kubernetes implementation limited to a single worker node, a CSI storage driver that provided RWO volumes would suffice. The current approach cannot support distributed services without an underlying distributed file system to distribute tokens, such as GlusterFS, running across the participating nodes. For Kubernetes, the requirement would be a remote shared file system persistent volume (RWX volume).","title":"Context"},{"location":"design/adr/security/0020-spiffe/#decision","text":"EdgeX will create a new service, security-spiffe-token-provider . This service will be a mutual-auth TLS service that exchanges a SPIFFE X.509 SVID for a secret store token. A SPIFFE identifier is a URI of the format spiffe://trust domain/workload identifier . For example: spiffe://edgexfoundry.org/service/core-data . 
A SPIFFE Verifiable Identity Document (SVID) is a cryptographically-signed version of a SPIFFE ID, typically a X.509 certificate with the SPIFFE ID encoded into the subjectAltName certificate extension, or a JSON web token (encoded into the sub claim). The EdgeX implementation will use a naming convention on the path component, such as the above, in order to be able to extract the requesting service from the SPIFFE ID. The SPIFFE token provider will take three parameters: An X.509 SVID used in mutual-auth TLS for the token provider and the service to cross-authenticate. The requested service key. If blank, the service key will default to the service name encoded in the SVID. If the service name follows the pattern device-(name) , then the service key must follow the format device-(name) or device-name-* . If the service name is app-service-configurable , then the service key must follow the format app-* . (This is an accommodation for the Unix workload attester not being able to distinguish workloads that are launched using the same executable binary. Custom app services that support multiple instances won't be supported unless they name the executable the same as the standard app service binary or modify this logic.) A list of \"known secret\" identifiers that will allow new services to request database passwords or other \"known secrets\" to be seeded into their service's partition in the secret store. The go-mod-secrets module will be modified to enable a new mode whereby a secret store token is obtained by: Obtaining an X.509 SVID by contacting a local SPIFFE agent's workload API on a local Unix domain socket. Connecting to the security-spiffe-token-provider service using the X.509 SVID to request a secret store token. The SPIFFE authentication mode will be an opt-in feature. The SPIFFE implementation will be user-replaceable; specifically, the workload API socket will be configurable, as well as the parsing of the SPIFFE ID. 
Reasons for doing so might include: changing the name of the trust domain in the SPIFFE ID, or moving the SPIFFE server out of the edge. This feature is estimated to be a \"large\" or \"extra large\" effort that could be implemented in a single release cycle.","title":"Decision"},{"location":"design/adr/security/0020-spiffe/#technical-architecture","text":"The work flow is as follows: Create a root CA for the SPIFFE user to use for creation of sub-CA's. The SPIFFE server is started. The server creates a sub-CA for issuing new identities. The trust bundle (certificate authority) data is exported from the SPIFFE server and stored on a shared volume readable by other EdgeX microservices (i.e. the existing secrets volume used for sharing secret store tokens). A join token for the SPIFFE agent is created using token generate and shared to the EdgeX secrets volume. Workload entries are loaded into the SPIFFE server database, using the join-identity of the agent created in the previous step as the parent ID of the workload. The SPIFFE agent is started with the join token created in a previous step to add it to the cluster. Vault is started and security-secret-store-setup initializes it and creates an admin token for security-spiffe-token-provider to use. The security-spiffe-token-provider service is started. It obtains an SVID from the SPIFFE agent and uses it as a TLS server certificate. An EdgeX microservice starts and obtains another SVID from the SPIFFE agent and uses it as a TLS client certificate to contact the security-spiffe-token-provider service. The EdgeX microservice uses the trust bundle as a server CA to verify the TLS certificate of the remote service. security-spiffe-token-provider verifies the SVID using the trust bundle as client CA to verify the client, extracts the service key, and issues an appropriate Vault service token. 
The EdgeX microservice accesses Vault as usual.","title":"Technical Architecture"},{"location":"design/adr/security/0020-spiffe/#workload-registration-and-agent-sockets","text":"The server uses a workload registration Unix domain socket that allows authorization entries to be added to the authorization database. This socket is protected by Unix file system permissions to control who is allowed to add entries to the database. In this proposal, a subcommand will be added to the EdgeX secrets-config utility to simplify the process of registering new services that uses the registration socket above. The agent uses a workload attestation Unix domain socket that is open to the world. This socket is shared via a snap content-interface or via a shared host bind mount for Docker. There is one agent per node.","title":"Workload Registration and Agent Sockets"},{"location":"design/adr/security/0020-spiffe/#trust-bundle","text":"SVID's must be traceable back to a known issuing authority (certificate authority) to determine their validity. In the proposed implementation, we will generate a CA on first boot and store it persistently. This root CA will be distributed as the trust bundle. The SPIFFE server will then generate a rotating sub-CA for issuing SVIDs, and the issued SVID will include both the leaf certificate and the intermediate certificate. This implementation differs from the default implementation, which uses a transient CA that is rotated periodically and that keeps a log of past CA's. The default implementation is not suitable because only the Kubernetes reference implementation of the SPIRE server has a notification hook that is invoked when the CA is rotated. CA rotation would just result in issuing of SVIDs that are not trusted by microservices that received only the initial CA. The SPIFFE implementation is replaceable. 
The user is free to replace this default implementation with potentially a cloud-based SPIFFE server and a cloud-based CA.","title":"Trust Bundle"},{"location":"design/adr/security/0020-spiffe/#workload-authorization","text":"Workloads are authenticated by connecting to the spiffe-agent via a Unix domain socket, which is capable of identifying the process ID of the remote client. The process ID is fed into one of the following workload attesters, which gather additional metadata about the caller: The Unix workload attester gathers UID, GID, path, and SHA-256 hash of the executable. The Unix workload attester would be used for native services and snaps. The Docker workload attester gathers container labels that are added by docker-compose when the container is launched. The Docker workload attester would be used for Docker-based EdgeX deployments. An example label is docker:label:com.docker.compose.service:edgex-core-data where the service label is the key value in the services section of the docker-compose.yml . It is also possible to refer to labels built-in to the container image. The Kubernetes workload attester gathers a wealth of pod and container metadata. Once authenticated, the metadata is sent to the SPIFFE server to authorize the workload. Workloads are authorized via an authorization database connected to the SPIFFE server. Supported databases are SQLite (default), PostgreSQL, and MySQL. Due to startup ordering issues, SQLite will be used. (Disclaimer: SQLite, according to the Turtle book, is intended for development and test only. We will use SQLite anyway because Redis is not supported.) The only service that needs to be seeded to the database at this time is security-spiffe-token-provider . 
For example: spire-server entry create -parentID \" ${ local_agent_svid } \" -dns edgex-spiffe-token-provider -spiffeID \" ${ svid_service_base } /edgex-spiffe-token-provider\" -selector \"docker:label:com.docker.compose.service:edgex-spiffe-token-provider\" The above command associates a SPIFFE ID with a selector , in this case, a container label, and configures a DNS subjectAltName in the X.509 certificate for server-side TLS. A snap-based installation of EdgeX would use a unix:path or unix:sha256 selector instead. There are two extension mechanisms for authorizing additional workloads: Inject a config file or environment variable to authorize additional workloads. The container will parse and issue spire-server entry create commands for each additional service. Run the edgex-secrets-config utility (that will wrap the spire-server entry create command) for ad-hoc authorization of new services. The authorization database is persistent across reboots.","title":"Workload Authorization"},{"location":"design/adr/security/0020-spiffe/#consequences","text":"This proposal will require addition of several new, optional, EdgeX microservices: security-spiffe-token-provider , running on the main node spiffe-agent , running on the main node and each remote node spiffe-server , running on the main node spiffe-config , a one-shot service running on the main node Note that like Vault, the recommended SPIFFE configuration is to run the SPIFFE server on a dedicated node. If this is a concern, bring your own SPIFFE implementation. Minor changes will be needed to security-secretstore-setup to preserve the token-creating-token used by security-file-token-provider so that it can be used by security-spiffe-token-provider . 
The startup flow of the framework will be adjusted as follows: Bootstrap service (original) spiffe-server spiffe-config (can be combined with spiffe-server ) spiffe-agent Vault service (original) Secret store setup service (original) security-spiffe-token-provider Consul (original) Postgres (original) There is no direct dependency between spiffe-server and any other microservice. security-spiffe-token-provider requires an SVID from spiffe-agent and a Vault admin token. None of these new services will be proxied via the API gateway. In the future, this mechanism may become the default secret store distribution mechanism, as it eliminates several secrets volumes used to share secrets between security-secretstore-setup and various EdgeX microservices. The EdgeX automation will only configure the SPIFFE agent on the main node. Additional nodes can be manually added by the operator by obtaining a join token from the main node and using it to bootstrap a remote node. SPIFFE/SPIRE has native support for Kubernetes and can distribute the trust bundle via a Kubernetes ConfigMap to more easily enable distributed scenarios, removing a major roadblock to usage of EdgeX in a Kubernetes environment.","title":"Consequences"},{"location":"design/adr/security/0020-spiffe/#footprint","text":"NOTE: This data is limited by the fact that the pre-built SPIRE reference binaries are compiled with CGO enabled.","title":"Footprint"},{"location":"design/adr/security/0020-spiffe/#spire-server","text":"69 MB executable, dynamically linked 151 MB inside of a Debian-slim container 30 MB memory usage, as container","title":"SPIRE Server"},{"location":"design/adr/security/0020-spiffe/#spire-agent","text":"33 MB executable, dynamically linked 114 MB inside of a Debian-slim container 64 MB memory usage, as container","title":"SPIRE Agent"},{"location":"design/adr/security/0020-spiffe/#spiffe-base-secret-store-token-provider","text":"The following is the minimum size: > 6 MB executable (likely much 
larger) > 29 MB memory usage, as container","title":"SPIFFE-base Secret Store Token Provider"},{"location":"design/adr/security/0020-spiffe/#limitations","text":"The following are known limitations with this proposal: The capabilities enabled by this solution would only be enabled on Linux platforms. SPIFFE/SPIRE Agent is not available for native Windows and pre-built binaries are only available for Linux. (It is unclear as to whether other *nix'es are supported.) The capabilities enabled by this solution would only be supported for Go-based services. The SPIFFE APIs are implemented in gRPC, which is only ported to C#, C++, Dart, Go, Java, Kotlin, Node, Objective-C, PHP, Python, and Ruby. Notably, the C language is not supported, and the only other EdgeX supported language is Go. The default TTL of an X.509 SVID is one hour. As such, all SVID consumers must be capable of auto-renewal of SVIDs on both the client and server side.","title":"Limitations"},{"location":"design/adr/security/0020-spiffe/#alternatives","text":"","title":"Alternatives"},{"location":"design/adr/security/0020-spiffe/#overcoming-lack-of-a-supported-grpc-c-library","text":"Leave C-SDK device services behind. In this option, C device services would be unable to participate in the delayed-start services architecture. Fork a grpc-c library. Forking a grpc-c library and rehabilitating it is one option. There is at least one grpc-c library that has been proven to work, but it requires additional features to make it compatible with the SPIRE workload agent. However, the project is extremely large and it is unlikely that EdgeX is big enough to carry the project. Available libraries include: https://github.com/lixiangyun/grpc-c This library is several years out-of-date, does not compile on current Linux distributions without some rework, and does not pass per-request metadata tags. Proved to work via manual patching. Not supportable. 
https://github.com/Juniper/grpc-c This library is several years out-of-date, also does not compile on current Linux distributions without some rework. Uses hard-coded Unix domain socket paths. May support per-request metadata tags, but did not test. Not supportable. https://github.com/HewlettPackard/c-spiffe This library is yet untested. Rather than a gRPC library, this library implements the workload API client directly. Ultimately, this library also wraps the gRPC C++ library, and statically links to it. There is no benefit to the EdgeX project to use this library as we can call the underlying library directly. Hybrid device services. In this model, device services would always be written in Go, but in the case where linking to a C language library is required, CGO features would be used to invoke native C functions from golang. This option would commit the EdgeX project to a one-time investment to port the existing C device services to the new hybrid model. This option is the best choice if the long-term strategy is to end-of-life the C Device SDK. Bridge. In this model, the C++ implementation to invoke the SPIFFE/SPIRE workload API would be hidden behind a dynamic shared library with C linkage. This would require minimal change to the existing C SDK. However, the resulting binaries would have to be based on GLIBC vs MUSL in order to get dlopen() support. This will also limit the choice of container base images for containerized services. Modernize. In this model, the Device SDK would be rewritten either partially or in-full in C++. Under this model, the SPIFFE/SPIRE workload API could be accessed via a community-supported C++ GRPC SDK. There are many implementation options: A \"C++ compilation-switch\" where the C SDK could be compiled in C-mode or C++-mode with enhanced functionality. A C++ extension API. The original C SDK would remain as-is, but if compiling with __cplusplus defined, additional API methods would be exposed. 
The SDK could thus be composed of a mixture of .c files with C linkage and .cc files with C++ linkage. The linker would ultimately determine whether or not the C++ runtime library needed to be linked in. Native C++ device SDK with legacy C wrapper facade. Compile existing code in C++ mode, with optional C++ facade.","title":"Overcoming lack of a supported GRPC-C library"},{"location":"design/adr/security/0020-spiffe/#opt-in-or-standard-feature","text":"If one of the following things were to happen, it would push this proposal \"over the edge\" from being an optional opt-in feature to a required standard feature for security: The \"on-demand\" method of obtaining a secret store token is the default method of obtaining a token for non-core EdgeX services. The \"on-demand\" method of obtaining a secret store token is the default method for all EdgeX services. SPIFFE SVID's become the implementation mechanism for microservice-level authentication. (Not in scope for this ADR.)","title":"Opt-in or Standard Feature"},{"location":"design/adr/security/0020-spiffe/#merge-security-file-token-provider-and-security-spiffe-token-provider","text":"Keeping these as separate executables clearly separates the on-demand secret store tokens feature as an optional service. It is possible to combine the services, but there would need to be a configuration switch in order to enable the SPIFFE feature. It would also increase the base executable size to include the extra logic.","title":"Merge security-file-token-provider and security-spiffe-token-provider"},{"location":"design/adr/security/0020-spiffe/#alternatives-regarding-spiffe-ca","text":"","title":"Alternatives regarding SPIFFE CA"},{"location":"design/adr/security/0020-spiffe/#transient-ca-option","text":"The SPIFFE server can be configured with no \"upstream authority\" (certificate authority), and the server will periodically generate a new, transient CA, and keep a bounded history of previous CA's. 
A rotating trust bundle only practically works in a Kubernetes environment, since a configmap can be updated real-time. For everyone else, we need a static CA that can be pre-distributed to remote nodes. Thus, this solution was not chosen.","title":"Transient CA option"},{"location":"design/adr/security/0020-spiffe/#vault-based-ca-option","text":"The SPIFFE server can be configured to make requests to a Hashicorp Vault PKI secrets engine to generate intermediate CA certificates for signing SVID's. This is an option for future integrations, but is omitted from this proposal due to the jump in implementation complexity and the desire that the current proposal be an add-on feature. The current implementation allows the SPIFFE server and Vault to be started simultaneously. Using a Vault-based CA would require a complex interlocking sequence of steps.","title":"Vault-based CA option"},{"location":"design/adr/security/0020-spiffe/#references","text":"Issue to create ADR for handling delayed-start services 0018 Service Registry ADR Service List ADR SPIFFE SPIFFE ID X.509 SVID JWT SVID Turtle book","title":"References"},{"location":"design/legacy-design/","text":"Legacy Design Documents Name/Link Short Description Registry Abstraction Decouple EdgeX services from Consul device-service/Discovery Dynamically discover new devices","title":"Legacy Design Documents"},{"location":"design/legacy-design/#legacy-design-documents","text":"Name/Link Short Description Registry Abstraction Decouple EdgeX services from Consul device-service/Discovery Dynamically discover new devices","title":"Legacy Design Documents"},{"location":"design/legacy-design/device-service/discovery/","text":"Dynamic Device Discovery Overview Some device protocols allow for devices to be discovered automatically. A Device Service may include a capability for discovering devices and creating the corresponding Device objects within EdgeX. A framework for doing so will be implemented in the Device Service SDKs. 
The discovery process will operate as follows: Discovery is triggered either on an internal timer or by a call to a REST endpoint The SDK will call a function provided by the DS implementation to request a device scan The implementation calls back to the SDK with details of devices which it has found The SDK filters these devices against a set of acceptance criteria The SDK adds accepted devices in core-metadata. These are now available in the EdgeX system Triggering Discovery A boolean configuration value Device/Discovery/Enabled defaults to false. If this value is set true, and the DS implementation supports discovery, discovery is enabled. The SDK will respond to POST requests on the /discovery endpoint. No content is required in the request. This call will return one of the following codes: 202: discovery has been triggered or is already running. The response should indicate which, and contain the correlation id that will be used by any resulting requests for device addition 423: the service is locked (admin state) or disabled (operating state) 500: unknown or unanticipated issues exist 501: discovery is not supported by this protocol implementation 503: discovery is disabled by configuration In each of the failure cases a meaningful error message should be returned. In the case where discovery is triggered, the discovery process will run in a new thread or goroutine, so that the REST call may return immediately. An integer configuration value Device/Discovery/Interval defaults to zero. If this value is set to a positive value, and discovery is enabled, the discovery process will be triggered at the specified interval (in seconds). Finding Devices When discovery is triggered, the SDK calls the implementation function provided by the Device Service. This should perform whatever protocol-specific procedure is necessary to find devices, and pass these devices into the SDK by calling the SDK's filtered device addition function. 
Note: The implementation should call back for every device found. The SDK is to take responsibility for filtering out devices which have already been added. The information required for a found device is as follows: An autogenerated device name The Protocol Properties of the device Optionally, a description string Optionally, a list of label strings The filtered device addition function will take as an argument a collection of structs containing the above data. An implementation may choose to make one call per discovered device, but implementors are encouraged to batch the devices if practical, as in future EdgeX versions it will be possible for the SDK to create all required new devices in a single call to core-metadata. Rationale: An alternative design would have the implementation function return the collection of discovered devices to the SDK. Using a callback mechanism instead has the following advantages: Allows for asynchronous operation. In this mode the DS implementation will initiate discovery and return immediately. For example discovery may be initiated by sending a broadcast packet. Devices will then send return packets indicating their existence. The thread handling inbound network traffic can on receipt of such packets call the filtered device addition function directly. Allows DS implementations where devices self-announce to call the filtered device addition function independent of the discovery process Filtered Device Addition The filter criteria for discovered devices are represented by Provision Watchers. 
A Provision Watcher contains the following fields: Identifiers : A set of name-value pairs against which a new device's ProtocolProperties are matched BlockingIdentifiers : A further set of name-value pairs which are also matched against a new device's ProtocolProperties Profile : The name of a DeviceProfile which should be assigned to new devices which pass this ProvisionWatcher AdminState : The initial Administrative State for new devices which pass this ProvisionWatcher A candidate new device passes a ProvisionWatcher if all of the Identifiers match, and none of the BlockingIdentifiers . For devices with multiple Device.Protocols , each Device.Protocol is considered separately. A pass (as described above) on any of the protocols results in the device being added. The values specified in Identifiers are regular expressions. Note: If a discovered Device is manually removed from EdgeX, it will be necessary to adjust the ProvisionWatcher via which it was added, either by making the Identifiers more specific or by adding BlockingIdentifiers , otherwise the Device will be re-added the next time Discovery is initiated. Note: ProvisionWatchers are stored in core-metadata. A facility for managing ProvisionWatchers is needed, e.g. edgex-cli could be extended","title":"Discovery"},{"location":"design/legacy-design/device-service/discovery/#dynamic-device-discovery","text":"","title":"Dynamic Device Discovery"},{"location":"design/legacy-design/device-service/discovery/#overview","text":"Some device protocols allow for devices to be discovered automatically. A Device Service may include a capability for discovering devices and creating the corresponding Device objects within EdgeX. A framework for doing so will be implemented in the Device Service SDKs. 
The discovery process will operate as follows: Discovery is triggered either on an internal timer or by a call to a REST endpoint The SDK will call a function provided by the DS implementation to request a device scan The implementation calls back to the SDK with details of devices which it has found The SDK filters these devices against a set of acceptance criteria The SDK adds accepted devices in core-metadata. These are now available in the EdgeX system","title":"Overview"},{"location":"design/legacy-design/device-service/discovery/#triggering-discovery","text":"A boolean configuration value Device/Discovery/Enabled defaults to false. If this value is set true, and the DS implementation supports discovery, discovery is enabled. The SDK will respond to POST requests on the /discovery endpoint. No content is required in the request. This call will return one of the following codes: 202: discovery has been triggered or is already running. The response should indicate which, and contain the correlation id that will be used by any resulting requests for device addition 423: the service is locked (admin state) or disabled (operating state) 500: unknown or unanticipated issues exist 501: discovery is not supported by this protocol implementation 503: discovery is disabled by configuration In each of the failure cases a meaningful error message should be returned. In the case where discovery is triggered, the discovery process will run in a new thread or goroutine, so that the REST call may return immediately. An integer configuration value Device/Discovery/Interval defaults to zero. If this value is set to a positive value, and discovery is enabled, the discovery process will be triggered at the specified interval (in seconds).","title":"Triggering Discovery"},{"location":"design/legacy-design/device-service/discovery/#finding-devices","text":"When discovery is triggered, the SDK calls the implementation function provided by the Device Service. 
This should perform whatever protocol-specific procedure is necessary to find devices, and pass these devices into the SDK by calling the SDK's filtered device addition function. Note: The implementation should call back for every device found. The SDK is to take responsibility for filtering out devices which have already been added. The information required for a found device is as follows: An autogenerated device name The Protocol Properties of the device Optionally, a description string Optionally, a list of label strings The filtered device addition function will take as an argument a collection of structs containing the above data. An implementation may choose to make one call per discovered device, but implementors are encouraged to batch the devices if practical, as in future EdgeX versions it will be possible for the SDK to create all required new devices in a single call to core-metadata. Rationale: An alternative design would have the implementation function return the collection of discovered devices to the SDK. Using a callback mechanism instead has the following advantages: Allows for asynchronous operation. In this mode the DS implementation will initiate discovery and return immediately. For example discovery may be initiated by sending a broadcast packet. Devices will then send return packets indicating their existence. The thread handling inbound network traffic can, on receipt of such packets, call the filtered device addition function directly. Allows DS implementations where devices self-announce to call the filtered device addition function independent of the discovery process","title":"Finding Devices"},{"location":"design/legacy-design/device-service/discovery/#filtered-device-addition","text":"The filter criteria for discovered devices are represented by Provision Watchers.
A Provision Watcher contains the following fields: Identifiers : A set of name-value pairs against which a new device's ProtocolProperties are matched BlockingIdentifiers : A further set of name-value pairs which are also matched against a new device's ProtocolProperties Profile : The name of a DeviceProfile which should be assigned to new devices which pass this ProvisionWatcher AdminState : The initial Administrative State for new devices which pass this ProvisionWatcher A candidate new device passes a ProvisionWatcher if all of the Identifiers match, and none of the BlockingIdentifiers . For devices with multiple Device.Protocols , each Device.Protocol is considered separately. A pass (as described above) on any of the protocols results in the device being added. The values specified in Identifiers are regular expressions. Note: If a discovered Device is manually removed from EdgeX, it will be necessary to adjust the ProvisionWatcher via which it was added, either by making the Identifiers more specific or by adding BlockingIdentifiers , otherwise the Device will be re-added the next time Discovery is initiated. Note: ProvisionWatchers are stored in core-metadata. A facility for managing ProvisionWatchers is needed, eg edgex-cli could be extended","title":"Filtered Device Addition"},{"location":"design/legacy-requirements/","text":"Legacy Requirements Name/Link Short Description Device Service Device Service SDK required functionality","title":"Legacy Requirements"},{"location":"design/legacy-requirements/#legacy-requirements","text":"Name/Link Short Description Device Service Device Service SDK required functionality","title":"Legacy Requirements"},{"location":"design/legacy-requirements/device-service/","text":"Device SDK Required Functionality Overview This document sets out the required functionality of a Device SDK other than the implementation of its REST API (see ADR 0011 ) and the Dynamic Discovery mechanism (see Discovery ). 
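The matching rule above (all Identifiers match as regular expressions, none of the BlockingIdentifiers match) can be sketched in Go. The struct and method names here are illustrative, not the actual core-metadata types.

```go
package main

import (
	"fmt"
	"regexp"
)

// ProvisionWatcher holds the matching criteria described above.
// Field names are illustrative, not the SDK's actual types.
type ProvisionWatcher struct {
	Identifiers         map[string]string   // values are regular expressions
	BlockingIdentifiers map[string][]string // exact values that block addition
}

// matches reports whether a candidate device's ProtocolProperties pass the
// watcher: every Identifier regex must match, and no BlockingIdentifier may.
func (w ProvisionWatcher) matches(props map[string]string) bool {
	for name, pattern := range w.Identifiers {
		val, ok := props[name]
		if !ok {
			return false
		}
		matched, err := regexp.MatchString(pattern, val)
		if err != nil || !matched {
			return false
		}
	}
	for name, blocked := range w.BlockingIdentifiers {
		for _, b := range blocked {
			if props[name] == b {
				return false
			}
		}
	}
	return true
}

func main() {
	w := ProvisionWatcher{
		Identifiers:         map[string]string{"Address": "^10\\.0\\.0\\."},
		BlockingIdentifiers: map[string][]string{"Port": {"502"}},
	}
	fmt.Println(w.matches(map[string]string{"Address": "10.0.0.7", "Port": "8080"})) // true
	fmt.Println(w.matches(map[string]string{"Address": "10.0.0.7", "Port": "502"}))  // false
}
```

For a device with multiple protocols, this check would be run once per Device.Protocol, and a pass on any one of them admits the device.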
This functionality is categorised into three areas - actions required at startup, configuration options to be supported, and support for push-style event generation. Startup When the device service is started, in addition to any actions required to support functionality defined elsewhere, the SDK must: Manage the device service's registration in metadata Provide initialization information to the protocol-specific implementation Registration The core-metadata service maintains an extent of device service registrations so that it may route requests relating to particular devices to the correct device service. The SDK should create (on first run) or update its record appropriately. Device service registrations contain the following fields: Name - the name of the device service Description - an optional brief description of the service Labels - optional string labels BaseAddress - URL of the base of the service's REST API The default device service Name is to be hardcoded into every device service implementation. A suffix may be added to this name at runtime by means of commandline option or environment variable. Service names must be unique in a particular EdgeX instance; the suffix mechanism allows for running multiple instances of a given device service. The Description and Labels are configured in the [Service] section of the device service configuration. BaseAddress may be constructed using the [Service]/Host and [Service]/Port entries in the device service configuration. Initialization During startup the SDK must supply to the implementation that part of the service configuration which is specific to the implementation. This configuration is held in the Driver section of the configuration file or registry. The SDK must also supply a logging facility at this stage. This facility should by default emit logs locally (configurable to file or to stdout) but instead should use the optional logging service if the configuration element Logging/EnableRemote is set true . 
Note: the logging service is deprecated and support for it will be removed in EdgeX v2.0. The implementation, on receipt of its configuration, should perform any necessary initialization of its own. It may return an error in the event of unrecoverable problems; this should cause the service startup itself to fail. Configuration Configuration should be supported by the SDK, in accordance with ADR 0005 Commandline processing The SDK should handle commandline processing on behalf of the device service. In addition to the common EdgeX service options, the --instance / -i flag should be supported. This specifies a suffix to append to the device service name. Environment variables The SDK should also handle environment variables. In addition to the common EdgeX variables, EDGEX_INSTANCE_NAME should, if set, override the --instance setting. Configuration file and Registry The SDK should use (or for non-Go implementations, re-implement) the standard mechanisms for obtaining configuration from a file or registry. The configuration parameters to be supported are: Service section Option Type Notes Host String This is the hostname to use when registering the service in core-metadata. As such it is used by other services to connect to the device service, and therefore must be resolvable by other services in the EdgeX deployment. Port Int Port on which to accept the device service's REST API. The assigned port for experimental / in-development device services is 49999. Timeout Int Time (in milliseconds) to wait between attempts to contact core-data and core-metadata when starting up. ConnectRetries Int Number of times to attempt to contact core-data and core-metadata when starting up. StartupMsg String Message to log on successful startup. CheckInterval String The checking interval to request if registering with Consul. Consul will ping the service at this interval to monitor its liveness. ServerBindAddr String The interface on which the service's REST server should listen.
By default the server is to listen on the interface to which the Host option resolves. A value of 0.0.0.0 means listen on all available interfaces. Clients section Defines the endpoints for other microservices in an EdgeX system. Not required when using Registry. Data Option Type Notes Host String Hostname on which to contact the core-data service. Port Int Port on which to contact the core-data service. Metadata Option Type Notes Host String Hostname on which to contact the core-metadata service. Port Int Port on which to contact the core-metadata service. Device section Option Type Notes DataTransform Bool For enabling/disabling transformations on data between the device and EdgeX. Defaults to true (enabled). Discovery/Enabled Bool For enabling/disabling device discovery. Defaults to true (enabled). Discovery/Interval Int Time between automatic discovery runs, in seconds. Defaults to zero (do not run discovery automatically). MaxCmdOps Int Defines the maximum number of resource operations that can be sent to the driver in a single command. MaxCmdResultLen Int Maximum string length for command results returned from the driver. UpdateLastConnected Bool If true, update the LastConnected attribute of a device whenever it is successfully accessed (read or write). Defaults to false. Logging section Option Type Notes LogLevel String Sets the logging level. Available settings in order of increasing severity are: TRACE , DEBUG , INFO , WARNING , ERROR . Driver section This section is for options specific to the protocol driver. Any configuration specified here will be passed to the driver implementation during initialization. Push Events The SDK should implement methods for generating Events other than on receipt of device GET requests. The AutoEvent mechanism provides for generating Events at fixed intervals. The asynchronous event queue enables the device service to generate events at arbitrary times, according to implementation-specific logic. 
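Pulling the sections above together, a configuration file for a hypothetical device service might look like the TOML fragment below. Section and option names follow the tables above; the service name, hosts, and values are examples only.

```toml
# Illustrative excerpt of a device service configuration file.
[Service]
Host = "device-example"
Port = 49999              # assigned port for experimental / in-development services
Timeout = 5000            # milliseconds between startup connection attempts
ConnectRetries = 10
StartupMsg = "device example started"
CheckInterval = "10s"     # Consul health-check interval
ServerBindAddr = "0.0.0.0"  # listen on all available interfaces

[Clients]                 # not required when using the Registry
  [Clients.Data]
  Host = "edgex-core-data"
  Port = 48080
  [Clients.Metadata]
  Host = "edgex-core-metadata"
  Port = 48081

[Device]
DataTransform = true
MaxCmdOps = 128
UpdateLastConnected = false
  [Device.Discovery]
  Enabled = true
  Interval = 30           # seconds; 0 disables automatic discovery

[Logging]
LogLevel = "INFO"

[Driver]
# protocol-specific options, passed verbatim to the implementation
```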
AutoEvents Each device may have as part of its definition in Metadata a number of AutoEvents associated with it. An AutoEvent has the following fields: resource : the name of a deviceResource or deviceCommand indicating what to read. frequency : a string indicating the time to wait between reading events, expressed as an integer followed by units of ms, s, m or h. onchange : a boolean: if set to true, only generate new events if one or more of the contained readings has changed since the last event. The device SDK should schedule device readings from the implementation according to these AutoEvent definitions. It should use the same logic as it would if the readings were being requested via REST. Asynchronous Event Queue The SDK should provide a mechanism whereby the implementation may submit device readings at any time without blocking. This may be done in a manner appropriate to the implementation language, eg the Go SDK provides a channel on which readings may be pushed, while the C SDK provides a function which submits readings to a workqueue.","title":"Device SDK Required Functionality"},{"location":"design/legacy-requirements/device-service/#device-sdk-required-functionality","text":"","title":"Device SDK Required Functionality"},{"location":"design/legacy-requirements/device-service/#overview","text":"This document sets out the required functionality of a Device SDK other than the implementation of its REST API (see ADR 0011 ) and the Dynamic Discovery mechanism (see Discovery ).
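The non-blocking submission described for the asynchronous event queue can be sketched in Go with a buffered channel and a select/default send. The Reading type, channel name, and drop-on-full policy are illustrative assumptions, not the Go SDK's actual API.

```go
package main

import "fmt"

// Reading is an illustrative stand-in for the SDK's asynchronous reading type.
type Reading struct {
	DeviceName string
	Resource   string
	Value      interface{}
}

// A buffered channel lets the protocol implementation hand readings to the
// SDK without blocking its own I/O loop.
var asyncValues = make(chan Reading, 16)

// submitReading attempts a non-blocking send; it reports false (dropping the
// reading) when the buffer is full, rather than stalling the caller.
func submitReading(r Reading) bool {
	select {
	case asyncValues <- r:
		return true
	default:
		return false
	}
}

func main() {
	ok := submitReading(Reading{DeviceName: "sensor-1", Resource: "Int8", Value: 42})
	fmt.Println(ok) // buffer had room
	r := <-asyncValues // the SDK side drains the channel and builds Events
	fmt.Println(r.Resource)
}
```

Whether to drop, block, or grow the buffer on overflow is a policy choice for the SDK; the select/default form shown here is simply the idiomatic Go way to guarantee the caller never blocks.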
This functionality is categorised into three areas - actions required at startup, configuration options to be supported, and support for push-style event generation.","title":"Overview"},{"location":"design/legacy-requirements/device-service/#startup","text":"When the device service is started, in addition to any actions required to support functionality defined elsewhere, the SDK must: Manage the device service's registration in metadata Provide initialization information to the protocol-specific implementation","title":"Startup"},{"location":"design/legacy-requirements/device-service/#registration","text":"The core-metadata service maintains an extent of device service registrations so that it may route requests relating to particular devices to the correct device service. The SDK should create (on first run) or update its record appropriately. Device service registrations contain the following fields: Name - the name of the device service Description - an optional brief description of the service Labels - optional string labels BaseAddress - URL of the base of the service's REST API The default device service Name is to be hardcoded into every device service implementation. A suffix may be added to this name at runtime by means of commandline option or environment variable. Service names must be unique in a particular EdgeX instance; the suffix mechanism allows for running multiple instances of a given device service. The Description and Labels are configured in the [Service] section of the device service configuration. BaseAddress may be constructed using the [Service]/Host and [Service]/Port entries in the device service configuration.","title":"Registration"},{"location":"design/legacy-requirements/device-service/#initialization","text":"During startup the SDK must supply to the implementation that part of the service configuration which is specific to the implementation. This configuration is held in the Driver section of the configuration file or registry. 
The SDK must also supply a logging facility at this stage. This facility should by default emit logs locally (configurable to file or to stdout) but instead should use the optional logging service if the configuration element Logging/EnableRemote is set true . Note: the logging service is deprecated and support for it will be removed in EdgeX v2.0. The implementation, on receipt of its configuration, should perform any necessary initialization of its own. It may return an error in the event of unrecoverable problems; this should cause the service startup itself to fail.","title":"Initialization"},{"location":"design/legacy-requirements/device-service/#configuration","text":"Configuration should be supported by the SDK, in accordance with ADR 0005","title":"Configuration"},{"location":"design/legacy-requirements/device-service/#commandline-processing","text":"The SDK should handle commandline processing on behalf of the device service. In addition to the common EdgeX service options, the --instance / -i flag should be supported. This specifies a suffix to append to the device service name.","title":"Commandline processing"},{"location":"design/legacy-requirements/device-service/#environment-variables","text":"The SDK should also handle environment variables. In addition to the common EdgeX variables, EDGEX_INSTANCE_NAME should, if set, override the --instance setting.","title":"Environment variables"},{"location":"design/legacy-requirements/device-service/#configuration-file-and-registry","text":"The SDK should use (or for non-Go implementations, re-implement) the standard mechanisms for obtaining configuration from a file or registry. The configuration parameters to be supported are:","title":"Configuration file and Registry"},{"location":"design/legacy-requirements/device-service/#service-section","text":"Option Type Notes Host String This is the hostname to use when registering the service in core-metadata.
As such it is used by other services to connect to the device service, and therefore must be resolvable by other services in the EdgeX deployment. Port Int Port on which to accept the device service's REST API. The assigned port for experimental / in-development device services is 49999. Timeout Int Time (in milliseconds) to wait between attempts to contact core-data and core-metadata when starting up. ConnectRetries Int Number of times to attempt to contact core-data and core-metadata when starting up. StartupMsg String Message to log on successful startup. CheckInterval String The checking interval to request if registering with Consul. Consul will ping the service at this interval to monitor its liveness. ServerBindAddr String The interface on which the service's REST server should listen. By default the server is to listen on the interface to which the Host option resolves. A value of 0.0.0.0 means listen on all available interfaces.","title":"Service section"},{"location":"design/legacy-requirements/device-service/#clients-section","text":"Defines the endpoints for other microservices in an EdgeX system. Not required when using Registry.","title":"Clients section"},{"location":"design/legacy-requirements/device-service/#data","text":"Option Type Notes Host String Hostname on which to contact the core-data service. Port Int Port on which to contact the core-data service.","title":"Data"},{"location":"design/legacy-requirements/device-service/#metadata","text":"Option Type Notes Host String Hostname on which to contact the core-metadata service. Port Int Port on which to contact the core-metadata service.","title":"Metadata"},{"location":"design/legacy-requirements/device-service/#device-section","text":"Option Type Notes DataTransform Bool For enabling/disabling transformations on data between the device and EdgeX. Defaults to true (enabled). Discovery/Enabled Bool For enabling/disabling device discovery. Defaults to true (enabled).
Discovery/Interval Int Time between automatic discovery runs, in seconds. Defaults to zero (do not run discovery automatically). MaxCmdOps Int Defines the maximum number of resource operations that can be sent to the driver in a single command. MaxCmdResultLen Int Maximum string length for command results returned from the driver. UpdateLastConnected Bool If true, update the LastConnected attribute of a device whenever it is successfully accessed (read or write). Defaults to false.","title":"Device section"},{"location":"design/legacy-requirements/device-service/#logging-section","text":"Option Type Notes LogLevel String Sets the logging level. Available settings in order of increasing severity are: TRACE , DEBUG , INFO , WARNING , ERROR .","title":"Logging section"},{"location":"design/legacy-requirements/device-service/#driver-section","text":"This section is for options specific to the protocol driver. Any configuration specified here will be passed to the driver implementation during initialization.","title":"Driver section"},{"location":"design/legacy-requirements/device-service/#push-events","text":"The SDK should implement methods for generating Events other than on receipt of device GET requests. The AutoEvent mechanism provides for generating Events at fixed intervals. The asynchronous event queue enables the device service to generate events at arbitrary times, according to implementation-specific logic.","title":"Push Events"},{"location":"design/legacy-requirements/device-service/#autoevents","text":"Each device may have as part of its definition in Metadata a number of AutoEvents associated with it. An AutoEvent has the following fields: resource : the name of a deviceResource or deviceCommand indicating what to read. frequency : a string indicating the time to wait between reading events, expressed as an integer followed by units of ms, s, m or h. 
onchange : a boolean: if set to true, only generate new events if one or more of the contained readings has changed since the last event. The device SDK should schedule device readings from the implementation according to these AutoEvent definitions. It should use the same logic as it would if the readings were being requested via REST.","title":"AutoEvents"},{"location":"design/legacy-requirements/device-service/#asynchronous-event-queue","text":"The SDK should provide a mechanism whereby the implementation may submit device readings at any time without blocking. This may be done in a manner appropriate to the implementation language, eg the Go SDK provides a channel on which readings may be pushed, while the C SDK provides a function which submits readings to a workqueue.","title":"Asynchronous Event Queue"},{"location":"examples/","text":"EdgeX Examples In addition to the examples listed in this section of the documentation, you will find other examples in the EdgeX Examples Repository . The tabs below provide a listing (may be partial based on latest updates) for reference. Application Services See App Service Examples for a listing of custom and configurable application service examples. Deployment Example Location Kubernetes Github - examples, deployment Raspberry Pi 4 Github - examples, raspberry-pi-4 Cloud deployments Github - examples, cloud deployment templates Device Services Example Location Random Number Device Service (simulation) Github - examples, device-random Grove Device Service in C Github - examples, device-grove-c Security Example Location Docker Swarm, remote device service via overlay network Github - Docker Swarm SSH Tunneling, remote device service via SSH tunneling Github - SSH Tunneling Warning Not all the examples in the EdgeX Examples repository are available for all EdgeX releases.
Check the documentation for details.","title":"EdgeX Examples"},{"location":"examples/#edgex-examples","text":"In addition to the examples listed in this section of the documentation, you will find other examples in the EdgeX Examples Repository . The tabs below provide a listing (may be partial based on latest updates) for reference. Application Services See App Service Examples for a listing of custom and configurable application service examples. Deployment Example Location Kubernetes Github - examples, deployment Raspberry Pi 4 Github - examples, raspberry-pi-4 Cloud deployments Github - examples, cloud deployment templates Device Services Example Location Random Number Device Service (simulation) Github - examples, device-random Grove Device Service in C Github - examples, device-grove-c Security Example Location Docker Swarm, remote device service via overlay network Github - Docker Swarm SSH Tunneling, remote device service via SSH tunneling Github - SSH Tunneling Warning Not all the examples in the EdgeX Examples repository are available for all EdgeX releases. Check the documentation for details.","title":"EdgeX Examples"},{"location":"examples/AppServiceExamples/","text":"App Service Examples The following is a list of examples we currently have available that demonstrate various ways that the Application Functions SDK or App Service Configurable can be used. All of the examples can be found here in the edgex-examples repo. They focus on how to leverage the various built-in functions mentioned above as well as how to write your own in the case that the SDK does not provide what is needed.
Example Name Description Simple Filter XML Demonstrates Filtering of Events by Device names and transforming data to XML Simple Filter XML HTTP Same example as #1, but result published to HTTP Endpoint Simple Filter XML MQTT Same example as #1, but result published to MQTT Broker Simple CBOR Filter Demonstrates Filtering of Events by Resource names for Event that is CBOR encoded containing a binary reading Advanced Filter Convert Publish Demonstrates Filtering of Events by Resource names, custom function to convert the reading and then publish the modified Event back to the MessageBus under a different topic. Advanced Target Type Demonstrates use of custom Target Type and use of HTTP Trigger Cloud Export MQTT Demonstrates simple custom Cloud transform and exporting to Cloud MQTT Broker. Cloud Event Transform Demonstrates custom transforms that convert Event/Readings to and from Cloud Events Send Command Demonstrates sending commands to a Device via the Command Client. Secrets Demonstrates how to retrieve secrets from the service SecretStore Custom Trigger Demonstrates how to create and use a custom trigger Fledge Export Demonstrates custom conversion of Event/Reading to Fledge format and then exporting to Fledge service REST endpoint Influxdb Export Demonstrates custom conversion of Event/Reading to InfluxDB timeseries format and then exporting to InfluxDB via MQTT Json Logic Demonstrates using the built-in JSONLogic Evaluate pipeline function IBM Export Profile Demonstrates a custom App Service Configurable profile for exporting to IBM Cloud","title":"App Service Examples"},{"location":"examples/AppServiceExamples/#app-service-examples","text":"The following is a list of examples we currently have available that demonstrate various ways that the Application Functions SDK or App Service Configurable can be used. All of the examples can be found here in the edgex-examples repo.
They focus on how to leverage the various built-in functions mentioned above as well as how to write your own in the case that the SDK does not provide what is needed. Example Name Description Simple Filter XML Demonstrates Filtering of Events by Device names and transforming data to XML Simple Filter XML HTTP Same example as #1, but result published to HTTP Endpoint Simple Filter XML MQTT Same example as #1, but result published to MQTT Broker Simple CBOR Filter Demonstrates Filtering of Events by Resource names for Event that is CBOR encoded containing a binary reading Advanced Filter Convert Publish Demonstrates Filtering of Events by Resource names, custom function to convert the reading and then publish the modified Event back to the MessageBus under a different topic. Advanced Target Type Demonstrates use of custom Target Type and use of HTTP Trigger Cloud Export MQTT Demonstrates simple custom Cloud transform and exporting to Cloud MQTT Broker. Cloud Event Transform Demonstrates custom transforms that convert Event/Readings to and from Cloud Events Send Command Demonstrates sending commands to a Device via the Command Client. Secrets Demonstrates how to retrieve secrets from the service SecretStore Custom Trigger Demonstrates how to create and use a custom trigger Fledge Export Demonstrates custom conversion of Event/Reading to Fledge format and then exporting to Fledge service REST endpoint Influxdb Export Demonstrates custom conversion of Event/Reading to InfluxDB timeseries format and then exporting to InfluxDB via MQTT Json Logic Demonstrates using the built-in JSONLogic Evaluate pipeline function IBM Export Profile Demonstrates a custom App Service Configurable profile for exporting to IBM Cloud","title":"App Service Examples"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/","text":"Command Devices with eKuiper Rules Engine Overview This document describes how to actuate a device with rules triggered by the eKuiper rules engine.
To make the example simple, the virtual device device-virtual is used as the actuated device. The eKuiper rules engine analyzes the data sent from device-virtual services, and then sends a command to the virtual device based on a rule firing in eKuiper as a result of that analysis. It should be noted that an application service is used to route core data through the rules engine. Use Case Scenarios Rules will be created in eKuiper to watch for two circumstances: monitor for events coming from the Random-UnsignedInteger-Device device (one of the default virtual device managed devices), and if a uint8 reading value is found larger than 20 in the event, then send a command to Random-Boolean-Device device to start generating random numbers (specifically - set random generation bool to true). monitor for events coming from the Random-Integer-Device device (another of the default virtual device managed devices), and if the average for int8 reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device device to stop generating random numbers (specifically - set random generation bool to false). These use case scenarios do not have any real business meaning, but easily demonstrate the features of EdgeX automatic actuation accomplished via the eKuiper rule engine. Prerequisite Knowledge This document will not cover basic operations of EdgeX or LF Edge eKuiper. Readers should have basic knowledge of: Get and start EdgeX. Refer to Quick Start for how to get and start EdgeX with the virtual device service. Run the eKuiper Rules Engine. Refer to EdgeX eKuiper Rule Engine Tutorial to understand the basics of eKuiper and EdgeX. Start eKuiper and Create an EdgeX Stream Make sure you read the EdgeX eKuiper Rule Engine Tutorial and successfully run eKuiper with EdgeX. First create a stream that can consume streaming data from the EdgeX application service (rules engine profile).
This step is not required if you already finished the EdgeX eKuiper Rule Engine Tutorial . curl -X POST \\ http:// $ekuiper_docker :59720/streams \\ -H 'Content-Type: application/json' \\ -d '{\"sql\": \"create stream demo() WITH (FORMAT=\\\"JSON\\\", TYPE=\\\"edgex\\\")\"}' Get and Test the Command URL Since both use case scenario rules will send commands to the Random-Boolean-Device virtual device, use the curl request below to get a list of available commands for this device. curl http://127.0.0.1:59882/api/v2/device/name/Random-Boolean-Device | jq It should print results like those below. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"deviceCoreCommand\" : { \"deviceName\" : \"Random-Boolean-Device\" , \"profileName\" : \"Random-Boolean-Device\" , \"coreCommands\" : [ { \"name\" : \"WriteBoolValue\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Bool\" , \"valueType\" : \"Bool\" }, { \"resourceName\" : \"EnableRandomization_Bool\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"WriteBoolArrayValue\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/WriteBoolArrayValue\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"BoolArray\" , \"valueType\" : \"BoolArray\" }, { \"resourceName\" : \"EnableRandomization_BoolArray\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"Bool\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/Bool\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Bool\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"BoolArray\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/BoolArray\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"BoolArray\" , \"valueType\" : 
\"BoolArray\" } ] } ] } } From this output, look for the URL associated to the PUT command (the first URL listed). This is the command eKuiper will use to call on the device. There are two parameters for this command: Bool : Set the returned value when other services want to get device data. The parameter will be used only when EnableRandomization_Bool is set to false. EnableRandomization_Bool : Enable/disable the randomization generation of bool values. If this value is set to true, then the 1st parameter will be ignored. You can test calling this command with its parameters using curl as shown below. curl -X PUT \\ http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue \\ -H 'Content-Type: application/json' \\ -d '{\"Bool\":\"true\", \"EnableRandomization_Bool\": \"true\"}' Create rules Now that you have EdgeX and eKuiper running, the EdgeX stream defined, and you know the command to actuate Random-Boolean-Device , it is time to build the eKuiper rules. The first rule Again, the 1st rule is to monitor for events coming from the Random-UnsignedInteger-Device device (one of the default virtual device managed devices), and if a uint8 reading value is found larger than 20 in the event, then send the command to Random-Boolean-Device device to start generating random numbers (specifically - set random generation bool to true). Given the URL and parameters to the command, below is the curl command to declare the first rule in eKuiper. 
curl -X POST \\ http:// $ekuiper_server :59720/rules \\ -H 'Content-Type: application/json' \\ -d '{ \"id\": \"rule1\", \"sql\": \"SELECT uint8 FROM demo WHERE uint8 > 20\", \"actions\": [ { \"rest\": { \"url\": \"http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\", \"method\": \"put\", \"dataTemplate\": \"{\\\"Bool\\\":\\\"true\\\", \\\"EnableRandomization_Bool\\\": \\\"true\\\"}\", \"sendSingle\": true } }, { \"log\":{} } ] }' The second rule The 2nd rule is to monitor for events coming from the Random-Integer-Device device (another of the default virtual device managed devices), and if the average for int8 reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device device to stop generating random numbers (specifically - set random generation bool to false). Here is the curl request to setup the second rule in eKuiper. The same command URL is used as the same device action ( Random-Boolean-Device's PUT bool command ) is being actuated, but with different parameters. curl -X POST \\ http:// $ekuiper_server :59720/rules \\ -H 'Content-Type: application/json' \\ -d '{ \"id\": \"rule2\", \"sql\": \"SELECT avg(int8) AS avg_int8 FROM demo WHERE int8 != nil GROUP BY TUMBLINGWINDOW(ss, 20) HAVING avg(int8) > 0\", \"actions\": [ { \"rest\": { \"url\": \"http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\", \"method\": \"put\", \"dataTemplate\": \"{\\\"Bool\\\":\\\"false\\\", \\\"EnableRandomization_Bool\\\": \\\"false\\\"}\", \"sendSingle\": true } }, { \"log\":{} } ] }' Watch the eKuiper Logs Both rules are now created in eKuiper. eKuiper is busy analyzing the event data coming for the virtual devices looking for readings that match the rules you created. You can watch the edgex-kuiper container logs for the rule triggering and command execution. 
docker logs edgex-kuiper Explore the Results You can also explore the eKuiper analysis that caused the commands to be sent to the service. To see the data from the analysis, use the SQL below to query eKuiper filtering data. SELECT int8 , \"true\" AS randomization FROM demo WHERE uint8 > 20 The output of the SQL should look similar to the results below. [{ \"int8\" : -75 , \"randomization\" : \"true\" }] Extended Reading Use these resources to learn more about the features of LF Edge eKuiper. eKuiper GitHub code repository eKuiper reference guide","title":"Command Devices with eKuiper Rules Engine"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#command-devices-with-ekuiper-rules-engine","text":"","title":"Command Devices with eKuiper Rules Engine"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#overview","text":"This document describes how to actuate a device with rules triggered by the eKuiper rules engine. To make the example simple, the virtual device device-virtual is used as the actuated device. The eKuiper rules engine analyzes the data sent from device-virtual services, and then sends a command to the virtual device based on a rule firing in eKuiper from that analysis. It should be noted that an application service is used to route core data through the rules engine.","title":"Overview"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#use-case-scenarios","text":"Rules will be created in eKuiper to watch for two circumstances: monitor for events coming from the Random-UnsignedInteger-Device device (one of the default virtual device managed devices), and if a uint8 reading value is found larger than 20 in the event, then send a command to Random-Boolean-Device device to start generating random numbers (specifically - set random generation bool to true).
monitor for events coming from the Random-Integer-Device device (another of the default virtual device managed devices), and if the average for int8 reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device device to stop generating random numbers (specifically - set random generation bool to false). These use case scenarios do not have any real business meaning, but easily demonstrate the features of EdgeX automatic actuation accomplished via the eKuiper rule engine.","title":"Use Case Scenarios"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#prerequisite-knowledge","text":"This document will not cover basic operations of EdgeX or LF Edge eKuiper. Readers should have basic knowledge of: Get and start EdgeX. Refer to Quick Start for how to get and start EdgeX with the virtual device service. Run the eKuiper Rules Engine. Refer to EdgeX eKuiper Rule Engine Tutorial to understand the basics of eKuiper and EdgeX.","title":"Prerequisite Knowledge"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#start-ekuiper-and-create-an-edgex-stream","text":"Make sure you read the EdgeX eKuiper Rule Engine Tutorial and successfully run eKuiper with EdgeX. First create a stream that can consume streaming data from the EdgeX application service (rules engine profile). This step is not required if you already finished the EdgeX eKuiper Rule Engine Tutorial . curl -X POST \\ http:// $ekuiper_docker :59720/streams \\ -H 'Content-Type: application/json' \\ -d '{\"sql\": \"create stream demo() WITH (FORMAT=\\\"JSON\\\", TYPE=\\\"edgex\\\")\"}'","title":"Start eKuiper and Create an EdgeX Stream"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#get-and-test-the-command-url","text":"Since both use case scenario rules will send commands to the Random-Boolean-Device virtual device, use the curl request below to get a list of available commands for this device. 
curl http://127.0.0.1:59882/api/v2/device/name/Random-Boolean-Device | jq It should print results like those below. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"deviceCoreCommand\" : { \"deviceName\" : \"Random-Boolean-Device\" , \"profileName\" : \"Random-Boolean-Device\" , \"coreCommands\" : [ { \"name\" : \"WriteBoolValue\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Bool\" , \"valueType\" : \"Bool\" }, { \"resourceName\" : \"EnableRandomization_Bool\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"WriteBoolArrayValue\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/WriteBoolArrayValue\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"BoolArray\" , \"valueType\" : \"BoolArray\" }, { \"resourceName\" : \"EnableRandomization_BoolArray\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"Bool\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/Bool\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Bool\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"BoolArray\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/BoolArray\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"BoolArray\" , \"valueType\" : \"BoolArray\" } ] } ] } } From this output, look for the URL associated with the PUT command (the first URL listed). This is the command eKuiper will use to call on the device. There are two parameters for this command: Bool : Set the returned value when other services want to get device data. The parameter will be used only when EnableRandomization_Bool is set to false. EnableRandomization_Bool : Enable/disable the random generation of bool values.
If this value is set to true, then the 1st parameter will be ignored. You can test calling this command with its parameters using curl as shown below. curl -X PUT \\ http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue \\ -H 'Content-Type: application/json' \\ -d '{\"Bool\":\"true\", \"EnableRandomization_Bool\": \"true\"}'","title":"Get and Test the Command URL"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#create-rules","text":"Now that you have EdgeX and eKuiper running, the EdgeX stream defined, and you know the command to actuate Random-Boolean-Device , it is time to build the eKuiper rules.","title":"Create rules"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#the-first-rule","text":"Again, the 1st rule is to monitor for events coming from the Random-UnsignedInteger-Device device (one of the default virtual device managed devices), and if a uint8 reading value is found larger than 20 in the event, then send the command to Random-Boolean-Device device to start generating random numbers (specifically - set random generation bool to true). Given the URL and parameters to the command, below is the curl command to declare the first rule in eKuiper. 
curl -X POST \\ http:// $ekuiper_server :59720/rules \\ -H 'Content-Type: application/json' \\ -d '{ \"id\": \"rule1\", \"sql\": \"SELECT uint8 FROM demo WHERE uint8 > 20\", \"actions\": [ { \"rest\": { \"url\": \"http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\", \"method\": \"put\", \"dataTemplate\": \"{\\\"Bool\\\":\\\"true\\\", \\\"EnableRandomization_Bool\\\": \\\"true\\\"}\", \"sendSingle\": true } }, { \"log\":{} } ] }'","title":"The first rule"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#the-second-rule","text":"The second rule is to monitor for events coming from the Random-Integer-Device device (another of the default virtual device managed devices), and if the average of int8 reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device device to stop generating random numbers (specifically - set random generation bool to false). Here is the curl request to set up the second rule in eKuiper. The same command URL is used since the same device action ( Random-Boolean-Device's PUT bool command ) is being actuated, but with different parameters. curl -X POST \\ http:// $ekuiper_server :59720/rules \\ -H 'Content-Type: application/json' \\ -d '{ \"id\": \"rule2\", \"sql\": \"SELECT avg(int8) AS avg_int8 FROM demo WHERE int8 != nil GROUP BY TUMBLINGWINDOW(ss, 20) HAVING avg(int8) > 0\", \"actions\": [ { \"rest\": { \"url\": \"http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\", \"method\": \"put\", \"dataTemplate\": \"{\\\"Bool\\\":\\\"false\\\", \\\"EnableRandomization_Bool\\\": \\\"false\\\"}\", \"sendSingle\": true } }, { \"log\":{} } ] }'","title":"The second rule"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#watch-the-ekuiper-logs","text":"Both rules are now created in eKuiper. eKuiper is busy analyzing the event data coming from the virtual devices looking for readings that match the rules you created.
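The TUMBLINGWINDOW(ss, 20) clause in rule2 groups readings into fixed, non-overlapping 20-second windows and computes avg(int8) per window. The sketch below is a plain-JavaScript illustration of those windowing semantics, not eKuiper internals; tumblingAverages is a hypothetical helper invented for this example.

```javascript
// Hypothetical illustration of TUMBLINGWINDOW(ss, 20) semantics -- not eKuiper code.
// Each reading falls into exactly one fixed 20 s bucket; the rule "fires" for
// any bucket whose int8 average is greater than 0.
function tumblingAverages(readings, windowSeconds) {
  const buckets = new Map();
  for (const { ts, int8 } of readings) {
    const bucket = Math.floor(ts / windowSeconds); // tumbling, not sliding
    if (!buckets.has(bucket)) buckets.set(bucket, []);
    buckets.get(bucket).push(int8);
  }
  const windows = [];
  for (const [bucket, values] of buckets) {
    const avg = values.reduce((a, b) => a + b, 0) / values.length;
    windows.push({ windowStart: bucket * windowSeconds, avg_int8: avg, fires: avg > 0 });
  }
  return windows;
}

// The first window averages below 0 (no action); the second averages above 0,
// which is when rule2 would send the PUT command to Random-Boolean-Device.
const sample = [
  { ts: 1, int8: -10 }, { ts: 12, int8: -20 }, // window [0, 20)
  { ts: 25, int8: 30 }, { ts: 38, int8: 10 },  // window [20, 40)
];
console.log(tumblingAverages(sample, 20));
```

Because the windows tumble rather than slide, each reading contributes to exactly one average, so the rule can fire at most once per 20-second interval.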
You can watch the edgex-kuiper container logs for the rule triggering and command execution. docker logs edgex-kuiper","title":"Watch the eKuiper Logs"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#explore-the-results","text":"You can also explore the eKuiper analysis that caused the commands to be sent to the service. To see the data from the analysis, use the SQL below to query eKuiper filtering data. SELECT int8 , \"true\" AS randomization FROM demo WHERE uint8 > 20 The output of the SQL should look similar to the results below. [{ \"int8\" : -75 , \"randomization\" : \"true\" }]","title":"Explore the Results"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#extended-reading","text":"Use these resources to learn more about the features of LF Edge eKuiper. eKuiper GitHub code repository eKuiper reference guide","title":"Extended Reading"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/","text":"MQTT EdgeX - Jakarta Release Overview In this example, we use a script to simulate a custom-defined MQTT device, instead of a real device. This provides a straightforward way to test the device-mqtt features using an MQTT broker. Note Multi-Level Topics move metadata (i.e., device name, command name, etc.) from the payload into the MQTT topics. Notice the sections marked with Using Multi-level Topic: for relevant input/output throughout this example. Prepare the Custom Device Configuration In this section, we create folders that contain files required for deployment of a customized device configuration to work with the existing device service: - custom-config |- devices |- my.custom.device.config.toml |- profiles |- my.custom.device.profile.yml Device Configuration Use this configuration file to define devices and schedule jobs. device-mqtt creates the corresponding device instances on start-up.
Create the device configuration file, named my.custom.device.config.toml , as shown below: # Pre-define Devices [[DeviceList]] Name = \"my-custom-device\" ProfileName = \"my-custom-device-profile\" Description = \"MQTT device is created for test purpose\" Labels = [ \"MQTT\" , \"test\" ] [DeviceList.Protocols] [DeviceList.Protocols.mqtt] # Comment out/remove below to use multi-level topics CommandTopic = \"CommandTopic\" # Uncomment below to use multi-level topics # CommandTopic = \"command/my-custom-device\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"message\" Note CommandTopic is used to publish the GET or SET command request Device Profile The DeviceProfile defines the device's values and operation method, which can be Read or Write. Create a device profile, named my.custom.device.profile.yml , with the following content: name : \"my-custom-device-profile\" manufacturer : \"iot\" model : \"MQTT-DEVICE\" description : \"Test device profile\" labels : - \"mqtt\" - \"test\" deviceResources : - name : randnum isHidden : true description : \"device random number\" properties : valueType : \"Float32\" readWrite : \"R\" - name : ping isHidden : true description : \"device awake\" properties : valueType : \"String\" readWrite : \"R\" - name : message isHidden : false description : \"device message\" properties : valueType : \"String\" readWrite : \"RW\" - name : json isHidden : false description : \"JSON message\" properties : valueType : \"Object\" readWrite : \"RW\" mediaType : \"application/json\" deviceCommands : - name : values readWrite : \"R\" isHidden : false resourceOperations : - { deviceResource : \"randnum\" } - { deviceResource : \"ping\" } - { deviceResource : \"message\" } Prepare docker-compose file Clone edgex-compose $ git clone git@github.com:edgexfoundry/edgex-compose.git $ git checkout main !!! note Use main branch until jakarta is released. 
Generate the docker-compose.yml file (notice this includes mqtt-broker) $ cd edgex-compose/compose-builder $ make gen ds-mqtt mqtt-broker no-secty ui Check the generated file $ ls | grep 'docker-compose.yml' docker-compose.yml Mount the custom-config Open the edgex-compose/compose-builder/docker-compose.yml file and then add volumes path and environment as shown below: # docker-compose.yml device-mqtt : ... environment : DEVICE_DEVICESDIR : /custom-config/devices DEVICE_PROFILESDIR : /custom-config/profiles ... volumes : - /path/to/custom-config:/custom-config ... Note Replace the /path/to/custom-config in the example with the correct path Enabling Multi-Level Topics To use the optional setting for MQTT device services with multi-level topics, make the following changes in the device service configuration files: There are two ways to set the environment variables for multi-level topics. If the code is built with compose builder, modify the docker-compose.yml file in edgex-compose/compose-builder: # docker-compose.yml device-mqtt : ... environment : MQTTBROKERINFO_INCOMINGTOPIC : \"incoming/data/#\" MQTTBROKERINFO_RESPONSETOPIC : \"command/response/#\" MQTTBROKERINFO_USETOPICLEVELS : \"true\" ... Otherwise if the device service is built locally, modify these lines in configuration.toml : # Comment out/remove when using multi-level topics #IncomingTopic = \"DataTopic\" #ResponseTopic = \"ResponseTopic\" #UseTopicLevels = false # Uncomment to use multi-level topics IncomingTopic = \"incoming/data/#\" ResponseTopic = \"command/response/#\" UseTopicLevels = true Note If you have previously run Device MQTT locally, you will need to remove the services configuration from Consul. 
This can be done with: curl --request DELETE http://localhost:8500/v1/kv/edgex/devices/2.0/device-mqtt?recurse=true In my.custom.device.config.toml : [DeviceList.Protocols] [DeviceList.Protocols.mqtt] # Comment out/remove below to use multi-level topics # CommandTopic = \"CommandTopic\" # Uncomment below to use multi-level topics CommandTopic = \"command/my-custom-device\" Note If you have run Device-MQTT before, you will need to delete the previously registered device(s) by replacing <device-name> in the command below: curl --request DELETE http://localhost:59881/api/v2/device/name/<device-name> where <device-name> can be found by running: curl --request GET http://localhost:59881/api/v2/device/all | json_pp Start EdgeX Foundry on Docker Deploy EdgeX using the following commands: $ cd edgex-compose/compose-builder $ docker-compose pull $ docker-compose up -d Using a MQTT Device Simulator Overview Expected Behaviors Using the detailed script below as a simulator, there are three behaviors: Publish random number data every 15 seconds. Default (single-level) Topic: The simulator publishes the data to the MQTT broker with topic DataTopic and the message is similar to the following: {\"name\":\"my-custom-device\", \"cmd\":\"randnum\", \"method\":\"get\", \"randnum\":4161.3549} Using Multi-level Topic: The simulator publishes the data to the MQTT broker with topic incoming/data/my-custom-device/randnum and the message is similar to the following: {\"randnum\":4161.3549} Receive the reading request, then return the response.
Default (single-level) Topic: The simulator receives the request from the MQTT broker, the topic is CommandTopic and the message is similar to the following: {\"cmd\":\"randnum\", \"method\":\"get\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\"} The simulator returns the response to the MQTT broker, the topic is ResponseTopic and the message is similar to the following: {\"cmd\":\"randnum\", \"method\":\"get\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\", \"randnum\":42.0} Using Multi-level Topic: The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f and message returned is similar to the following: {\"randnum\":\"42.0\"} The simulator returns the response to the MQTT broker, the topic is command/response/# and the message is similar to the following: {\"randnum\":\"4.20e+01\"} Receive the set request, then change the device value. Default (single-level) Topic: The simulator receives the request from the MQTT broker, the topic is CommandTopic and the message is similar to the following: {\"cmd\":\"message\", \"method\":\"set\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\", \"message\":\"test message...\"} The simulator changes the device value and returns the response to the MQTT broker, the topic is ResponseTopic and the message is similar to the following: {\"cmd\":\"message\", \"method\":\"set\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\"} Using Multi-level Topic: The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/testmessage/set/293d7a00-66e1-4374-ace0-07520103c95f and the message is similar to the following: {\"message\":\"test message...\"} The simulator changes the device value and returns the response to the MQTT broker, the topic is command/response/# and the message is similar to the following: {\"message\":\"test message...\"} Creating and Running a MQTT Device Simulator To implement the simulated 
custom-defined MQTT device, create a javascript, named mock-device.js , with the following content: Default (single-level) Topic: function getRandomFloat ( min , max ) { return Math . random () * ( max - min ) + min ; } const deviceName = \"my-custom-device\" ; let message = \"test-message\" ; let json = { \"name\" : \"My JSON\" }; // DataSender sends async value to MQTT broker every 15 seconds schedule ( '*/15 * * * * *' , ()=>{ let body = { \"name\" : deviceName , \"cmd\" : \"randnum\" , \"randnum\" : getRandomFloat ( 25 , 29 ). toFixed ( 1 ) }; publish ( 'DataTopic' , JSON . stringify ( body )); }); // CommandHandler receives commands and sends response to MQTT broker // 1. Receive the reading request, then return the response // 2. Receive the set request, then change the device value subscribe ( \"CommandTopic\" , ( topic , val ) => { var data = val ; if ( data . method == \"set\" ) { switch ( data . cmd ) { case \"message\" : message = data [ data . cmd ]; break ; case \"json\" : json = data [ data . cmd ]; break ; } } else { switch ( data . cmd ) { case \"ping\" : data . ping = \"pong\" ; break ; case \"message\" : data . message = message ; break ; case \"randnum\" : data . randnum = 12.123 ; break ; case \"json\" : data . json = json ; break ; } } publish ( \"ResponseTopic\" , JSON . stringify ( data )); }); Using Multi-level Topic: function getRandomFloat ( min , max ) { return Math . random () * ( max - min ) + min ; } const deviceName = \"my-custom-device\" ; let message = \"test-message\" ; let json = { \"name\" : \"My JSON\" }; // DataSender sends async value to MQTT broker every 15 seconds schedule ( '*/15 * * * * *' , ()=>{ let body = getRandomFloat ( 25 , 29 ). toFixed ( 1 ); publish ( 'incoming/data/my-custom-device/randnum' , body ); }); // CommandHandler receives commands and sends response to MQTT broker // 1. Receive the reading request, then return the response // 2. 
Receive the set request, then change the device value subscribe ( \"command/my-custom-device/#\" , ( topic , val ) => { const words = topic . split ( '/' ); var cmd = words [ 2 ]; var method = words [ 3 ]; var uuid = words [ 4 ]; var response = {}; var data = val ; if ( method == \"set\" ) { switch ( cmd ) { case \"message\" : message = data [ cmd ]; break ; case \"json\" : json = data [ cmd ]; break ; } } else { switch ( cmd ) { case \"ping\" : response . ping = \"pong\" ; break ; case \"message\" : response . message = message ; break ; case \"randnum\" : response . randnum = 12.123 ; break ; case \"json\" : response . json = json ; break ; } } var sendTopic = \"command/response/\" + uuid ; publish ( sendTopic , JSON . stringify ( response )); }); To run the device simulator, enter the commands shown below with the following changes: $ mv mock-device.js /path/to/mqtt-scripts $ docker run -d --restart=always --name=mqtt-scripts \\ -v /path/to/mqtt-scripts:/scripts \\ dersimn/mqtt-scripts --url mqtt://172.17.0.1 --dir /scripts Note Replace the /path/to/mqtt-scripts in the example mv command with the correct path Execute Commands Now we're ready to run some commands. 
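The multi-level command handling in mock-device.js above hinges on splitting the topic into its path segments. The sketch below isolates that logic; parseCommandTopic is a hypothetical helper written for this example, not part of device-mqtt or the simulator image.

```javascript
// Hypothetical helper (not part of device-mqtt): dissect a multi-level command
// topic of the form command/<device>/<command>/<method>/<uuid> and derive the
// response topic command/response/<uuid>, mirroring the subscribe handler in
// mock-device.js above.
function parseCommandTopic(topic) {
  const words = topic.split('/');
  if (words[0] !== 'command' || words.length < 5) {
    throw new Error('not a command topic: ' + topic);
  }
  const [, device, cmd, method, uuid] = words;
  return { device, cmd, method, uuid, responseTopic: 'command/response/' + uuid };
}

const parsed = parseCommandTopic(
  'command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f'
);
console.log(parsed);
```

Encoding the device name, command, method, and correlation UUID in the topic is what lets the multi-level payloads stay minimal (e.g. just {"randnum":"42.0"}) compared to the single-level format.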
Find Executable Commands Use the following query to find executable commands: $ curl http://localhost:59882/api/v2/device/all | json_pp { \"deviceCoreCommands\" : [ { \"profileName\" : \"my-custom-device-profile\" , \"coreCommands\" : [ { \"name\" : \"values\" , \"get\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/values\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"randnum\" , \"valueType\" : \"Float32\" }, { \"resourceName\" : \"ping\" , \"valueType\" : \"String\" }, { \"valueType\" : \"String\" , \"resourceName\" : \"message\" } ] }, { \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"message\" , \"valueType\" : \"String\" } ], \"name\" : \"message\" , \"get\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/message\" , \"set\" : true }, { \"name\" : \"json\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/MQTT-test-device/json\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"json\" , \"valueType\" : \"Object\" } ] } ], \"deviceName\" : \"my-custom-device\" } ], \"apiVersion\" : \"v2\" , \"statusCode\" : 200 } Execute SET Command Execute a SET command according to the url and parameterNames, replacing [host] with the server IP when running the SET command.
$ curl http://localhost:59882/api/v2/device/name/my-custom-device/message \\ -H \"Content-Type:application/json\" -X PUT \\ -d '{\"message\":\"Hello!\"}' Execute GET Command Execute a GET command as follows: $ curl http://localhost:59882/api/v2/device/name/my-custom-device/message | json_pp { \"event\" : { \"origin\" : 1624417689920618131 , \"readings\" : [ { \"resourceName\" : \"message\" , \"binaryValue\" : null , \"profileName\" : \"my-custom-device-profile\" , \"deviceName\" : \"my-custom-device\" , \"id\" : \"a3bb78c5-e76f-49a2-ad9d-b220a86c3e36\" , \"value\" : \"Hello!\" , \"valueType\" : \"String\" , \"origin\" : 1624417689920615828 , \"mediaType\" : \"\" } ], \"sourceName\" : \"message\" , \"deviceName\" : \"my-custom-device\" , \"apiVersion\" : \"v2\" , \"profileName\" : \"my-custom-device-profile\" , \"id\" : \"e0b29735-8b39-44d1-8f68-4d7252e14cc7\" }, \"apiVersion\" : \"v2\" , \"statusCode\" : 200 } Schedule Job The schedule job is defined in the [[DeviceList.AutoEvents]] section of the device configuration file: [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"message\" After the service starts, query core-data's reading API.
The results show that the service auto-executes the command every 30 secs, as shown below: $ curl http://localhost:59880/api/v2/reading/resourceName/message | json_pp { \"statusCode\" : 200 , \"readings\" : [ { \"value\" : \"test-message\" , \"id\" : \"e91b8ca6-c5c4-4509-bb61-bd4b09fe835c\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"resourceName\" : \"message\" , \"origin\" : 1624418361324331392 , \"profileName\" : \"my-custom-device-profile\" , \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"String\" }, { \"mediaType\" : \"\" , \"binaryValue\" : null , \"resourceName\" : \"message\" , \"value\" : \"test-message\" , \"id\" : \"1da58cb7-2bf4-47f0-bbb8-9519797149a2\" , \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"String\" , \"profileName\" : \"my-custom-device-profile\" , \"origin\" : 1624418330822988843 }, ... ], \"apiVersion\" : \"v2\" } Async Device Reading The device-mqtt service subscribes to the DataTopic and waits for the real device to send a value to the MQTT broker; device-mqtt then parses the value and forwards it northbound.
The data format contains the following values: name = device name, cmd = deviceResource name, method = get or set, and a field named after cmd (e.g. randnum ) holding the device reading. The following results show that the mock device sent the reading every 15 secs: $ curl http://localhost:59880/api/v2/reading/resourceName/randnum | json_pp { \"readings\" : [ { \"origin\" : 1624418475007110946 , \"valueType\" : \"Float32\" , \"deviceName\" : \"my-custom-device\" , \"id\" : \"9b3d337e-8a8a-4a6c-8018-b4908b57abb8\" , \"binaryValue\" : null , \"resourceName\" : \"randnum\" , \"profileName\" : \"my-custom-device-profile\" , \"mediaType\" : \"\" , \"value\" : \"2.630000e+01\" }, { \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"Float32\" , \"id\" : \"06918cbb-ada0-4752-8877-0ef8488620f6\" , \"origin\" : 1624418460007833720 , \"mediaType\" : \"\" , \"profileName\" : \"my-custom-device-profile\" , \"value\" : \"2.570000e+01\" , \"resourceName\" : \"randnum\" , \"binaryValue\" : null }, ... ], \"statusCode\" : 200 , \"apiVersion\" : \"v2\" } MQTT Device Service Configuration MQTT Device Service has the following configurations to implement the MQTT protocol. Configuration Default Value Description MQTTBrokerInfo.Schema tcp The URL scheme MQTTBrokerInfo.Host 0.0.0.0 The URL host MQTTBrokerInfo.Port 1883 The URL port MQTTBrokerInfo.Qos 0 Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) MQTTBrokerInfo.KeepAlive 3600 Seconds between client pings when no data is flowing, to avoid the client being disconnected.
Must be greater than 2 MQTTBrokerInfo.ClientId device-mqtt ClientId to connect to the broker with MQTTBrokerInfo.CredentialsRetryTime 120 Number of retries to get the credentials MQTTBrokerInfo.CredentialsRetryWait 1 Wait time (seconds) between retries to get the credentials MQTTBrokerInfo.ConnEstablishingRetry 10 Number of retries to establish the MQTT connection MQTTBrokerInfo.ConnRetryWaitTime 5 Wait time (seconds) between retries to establish the MQTT connection MQTTBrokerInfo.AuthMode none Indicates what to use when connecting to the broker. Must be one of \"none\" , \"usernamepassword\" MQTTBrokerInfo.CredentialsPath credentials Name of the path in secret provider to retrieve your secrets. Must be non-blank. MQTTBrokerInfo.IncomingTopic DataTopic (incoming/data/#) IncomingTopic is used to receive the async value MQTTBrokerInfo.ResponseTopic ResponseTopic (command/response/#) ResponseTopic is used to receive the command response from the device MQTTBrokerInfo.UseTopicLevels false (true) Boolean setting to use multi-level topics MQTTBrokerInfo.Writable.ResponseFetchInterval 500 ResponseFetchInterval specifies the retry interval (milliseconds) to fetch the command response from the MQTT broker Note Using Multi-level Topic: Remember to change the defaults in parentheses in the table above. Overriding with Environment Variables The user can override any of the above configurations using environment variables to meet their requirements, for example: # docker-compose.yml device-mqtt : ... environment : MQTTBROKERINFO_CLIENTID : \"my-device-mqtt\" MQTTBROKERINFO_CONNRETRYWAITTIME : \"10\" MQTTBROKERINFO_USETOPICLEVELS : \"false\" ...","title":"MQTT"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#mqtt","text":"EdgeX - Jakarta Release","title":"MQTT"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overview","text":"In this example, we use a script to simulate a custom-defined MQTT device, instead of a real device.
This provides a straightforward way to test the device-mqtt features using an MQTT broker. Note Multi-Level Topics move metadata (i.e., device name, command name, etc.) from the payload into the MQTT topics. Notice the sections marked with Using Multi-level Topic: for relevant input/output throughout this example.","title":"Overview"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#prepare-the-custom-device-configuration","text":"In this section, we create folders that contain files required for deployment of a customized device configuration to work with the existing device service: - custom-config |- devices |- my.custom.device.config.toml |- profiles |- my.custom.device.profile.yml","title":"Prepare the Custom Device Configuration"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#device-configuration","text":"Use this configuration file to define devices and schedule jobs. device-mqtt creates the corresponding device instances on start-up. Create the device configuration file, named my.custom.device.config.toml , as shown below: # Pre-define Devices [[DeviceList]] Name = \"my-custom-device\" ProfileName = \"my-custom-device-profile\" Description = \"MQTT device is created for test purpose\" Labels = [ \"MQTT\" , \"test\" ] [DeviceList.Protocols] [DeviceList.Protocols.mqtt] # Comment out/remove below to use multi-level topics CommandTopic = \"CommandTopic\" # Uncomment below to use multi-level topics # CommandTopic = \"command/my-custom-device\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"message\" Note CommandTopic is used to publish the GET or SET command request","title":"Device Configuration"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#device-profile","text":"The DeviceProfile defines the device's values and operation method, which can be Read or Write.
Create a device profile, named my.custom.device.profile.yml , with the following content: name : \"my-custom-device-profile\" manufacturer : \"iot\" model : \"MQTT-DEVICE\" description : \"Test device profile\" labels : - \"mqtt\" - \"test\" deviceResources : - name : randnum isHidden : true description : \"device random number\" properties : valueType : \"Float32\" readWrite : \"R\" - name : ping isHidden : true description : \"device awake\" properties : valueType : \"String\" readWrite : \"R\" - name : message isHidden : false description : \"device message\" properties : valueType : \"String\" readWrite : \"RW\" - name : json isHidden : false description : \"JSON message\" properties : valueType : \"Object\" readWrite : \"RW\" mediaType : \"application/json\" deviceCommands : - name : values readWrite : \"R\" isHidden : false resourceOperations : - { deviceResource : \"randnum\" } - { deviceResource : \"ping\" } - { deviceResource : \"message\" }","title":"Device Profile"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#prepare-docker-compose-file","text":"Clone edgex-compose $ git clone git@github.com:edgexfoundry/edgex-compose.git $ git checkout main !!! note Use main branch until jakarta is released. Generate the docker-compose.yml file (notice this includes mqtt-broker) $ cd edgex-compose/compose-builder $ make gen ds-mqtt mqtt-broker no-secty ui Check the generated file $ ls | grep 'docker-compose.yml' docker-compose.yml","title":"Prepare docker-compose file"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#mount-the-custom-config","text":"Open the edgex-compose/compose-builder/docker-compose.yml file and then add volumes path and environment as shown below: # docker-compose.yml device-mqtt : ... environment : DEVICE_DEVICESDIR : /custom-config/devices DEVICE_PROFILESDIR : /custom-config/profiles ... volumes : - /path/to/custom-config:/custom-config ... 
Note Replace the /path/to/custom-config in the example with the correct path","title":"Mount the custom-config"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#enabling-multi-level-topics","text":"To use the optional setting for MQTT device services with multi-level topics, make the following changes in the device service configuration files: There are two ways to set the environment variables for multi-level topics. If the code is built with compose builder, modify the docker-compose.yml file in edgex-compose/compose-builder: # docker-compose.yml device-mqtt : ... environment : MQTTBROKERINFO_INCOMINGTOPIC : \"incoming/data/#\" MQTTBROKERINFO_RESPONSETOPIC : \"command/response/#\" MQTTBROKERINFO_USETOPICLEVELS : \"true\" ... Otherwise if the device service is built locally, modify these lines in configuration.toml : # Comment out/remove when using multi-level topics #IncomingTopic = \"DataTopic\" #ResponseTopic = \"ResponseTopic\" #UseTopicLevels = false # Uncomment to use multi-level topics IncomingTopic = \"incoming/data/#\" ResponseTopic = \"command/response/#\" UseTopicLevels = true Note If you have previously run Device MQTT locally, you will need to remove the services configuration from Consul. 
This can be done with: curl --request DELETE http://localhost:8500/v1/kv/edgex/devices/2.0/device-mqtt?recurse=true In my.custom.device.config.toml : [DeviceList.Protocols] [DeviceList.Protocols.mqtt] # Comment out/remove below to use multi-level topics # CommandTopic = \"CommandTopic\" # Uncomment below to use multi-level topics CommandTopic = \"command/my-custom-device\" Note If you have run Device-MQTT before, you will need to delete the previously registered device(s) by replacing <device-name> in the command below: curl --request DELETE http://localhost:59881/api/v2/device/name/<device-name> where <device-name> can be found by running: curl --request GET http://localhost:59881/api/v2/device/all | json_pp","title":"Enabling Multi-Level Topics"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#start-edgex-foundry-on-docker","text":"Deploy EdgeX using the following commands: $ cd edgex-compose/compose-builder $ docker-compose pull $ docker-compose up -d","title":"Start EdgeX Foundry on Docker"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#using-a-mqtt-device-simulator","text":"","title":"Using a MQTT Device Simulator"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overview_1","text":"","title":"Overview"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#expected-behaviors","text":"Using the detailed script below as a simulator, there are three behaviors: Publish random number data every 15 seconds. Default (single-level) Topic: The simulator publishes the data to the MQTT broker with topic DataTopic and the message is similar to the following: {\"name\":\"my-custom-device\", \"cmd\":\"randnum\", \"method\":\"get\", \"randnum\":4161.3549} Using Multi-level Topic: The simulator publishes the data to the MQTT broker with topic incoming/data/my-custom-device/randnum and the message is similar to the following: {\"randnum\":4161.3549} Receive the reading request, then return the response.
Default (single-level) Topic: The simulator receives the request from the MQTT broker, the topic is CommandTopic and the message is similar to the following: {\"cmd\":\"randnum\", \"method\":\"get\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\"} The simulator returns the response to the MQTT broker, the topic is ResponseTopic and the message is similar to the following: {\"cmd\":\"randnum\", \"method\":\"get\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\", \"randnum\":42.0} Using Multi-level Topic: The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f and message returned is similar to the following: {\"randnum\":\"42.0\"} The simulator returns the response to the MQTT broker, the topic is command/response/# and the message is similar to the following: {\"randnum\":\"4.20e+01\"} Receive the set request, then change the device value. Default (single-level) Topic: The simulator receives the request from the MQTT broker, the topic is CommandTopic and the message is similar to the following: {\"cmd\":\"message\", \"method\":\"set\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\", \"message\":\"test message...\"} The simulator changes the device value and returns the response to the MQTT broker, the topic is ResponseTopic and the message is similar to the following: {\"cmd\":\"message\", \"method\":\"set\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\"} Using Multi-level Topic: The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/testmessage/set/293d7a00-66e1-4374-ace0-07520103c95f and the message is similar to the following: {\"message\":\"test message...\"} The simulator changes the device value and returns the response to the MQTT broker, the topic is command/response/# and the message is similar to the following: {\"message\":\"test message...\"}","title":"Expected 
Behaviors"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#creating-and-running-a-mqtt-device-simulator","text":"To implement the simulated custom-defined MQTT device, create a javascript, named mock-device.js , with the following content: Default (single-level) Topic: function getRandomFloat ( min , max ) { return Math . random () * ( max - min ) + min ; } const deviceName = \"my-custom-device\" ; let message = \"test-message\" ; let json = { \"name\" : \"My JSON\" }; // DataSender sends async value to MQTT broker every 15 seconds schedule ( '*/15 * * * * *' , ()=>{ let body = { \"name\" : deviceName , \"cmd\" : \"randnum\" , \"randnum\" : getRandomFloat ( 25 , 29 ). toFixed ( 1 ) }; publish ( 'DataTopic' , JSON . stringify ( body )); }); // CommandHandler receives commands and sends response to MQTT broker // 1. Receive the reading request, then return the response // 2. Receive the set request, then change the device value subscribe ( \"CommandTopic\" , ( topic , val ) => { var data = val ; if ( data . method == \"set\" ) { switch ( data . cmd ) { case \"message\" : message = data [ data . cmd ]; break ; case \"json\" : json = data [ data . cmd ]; break ; } } else { switch ( data . cmd ) { case \"ping\" : data . ping = \"pong\" ; break ; case \"message\" : data . message = message ; break ; case \"randnum\" : data . randnum = 12.123 ; break ; case \"json\" : data . json = json ; break ; } } publish ( \"ResponseTopic\" , JSON . stringify ( data )); }); Using Multi-level Topic: function getRandomFloat ( min , max ) { return Math . random () * ( max - min ) + min ; } const deviceName = \"my-custom-device\" ; let message = \"test-message\" ; let json = { \"name\" : \"My JSON\" }; // DataSender sends async value to MQTT broker every 15 seconds schedule ( '*/15 * * * * *' , ()=>{ let body = getRandomFloat ( 25 , 29 ). 
toFixed ( 1 ); publish ( 'incoming/data/my-custom-device/randnum' , body ); }); // CommandHandler receives commands and sends response to MQTT broker // 1. Receive the reading request, then return the response // 2. Receive the set request, then change the device value subscribe ( \"command/my-custom-device/#\" , ( topic , val ) => { const words = topic . split ( '/' ); var cmd = words [ 2 ]; var method = words [ 3 ]; var uuid = words [ 4 ]; var response = {}; var data = val ; if ( method == \"set\" ) { switch ( cmd ) { case \"message\" : message = data [ cmd ]; break ; case \"json\" : json = data [ cmd ]; break ; } } else { switch ( cmd ) { case \"ping\" : response . ping = \"pong\" ; break ; case \"message\" : response . message = message ; break ; case \"randnum\" : response . randnum = 12.123 ; break ; case \"json\" : response . json = json ; break ; } } var sendTopic = \"command/response/\" + uuid ; publish ( sendTopic , JSON . stringify ( response )); }); To run the device simulator, enter the commands shown below: $ mv mock-device.js /path/to/mqtt-scripts $ docker run -d --restart=always --name=mqtt-scripts \\ -v /path/to/mqtt-scripts:/scripts \\ dersimn/mqtt-scripts --url mqtt://172.17.0.1 --dir /scripts Note Replace the /path/to/mqtt-scripts in the example mv command with the correct path","title":"Creating and Running a MQTT Device Simulator"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-commands","text":"Now we're ready to run some commands.","title":"Execute Commands"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#find-executable-commands","text":"Use the following query to find executable commands: $ curl http://localhost:59882/api/v2/device/all | json_pp { \"deviceCoreCommands\" : [ { \"profileName\" : \"my-custom-device-profile\" , \"coreCommands\" : [ { \"name\" : \"values\" , \"get\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/values\" , \"url\" :
\"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"randnum\" , \"valueType\" : \"Float32\" }, { \"resourceName\" : \"ping\" , \"valueType\" : \"String\" }, { \"valueType\" : \"String\" , \"resourceName\" : \"message\" } ] }, { \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"message\" , \"valueType\" : \"String\" } ], \"name\" : \"message\" , \"get\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/message\" , \"set\" : true }, { \"name\" : \"json\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/json\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"json\" , \"valueType\" : \"Object\" } ] } ], \"deviceName\" : \"my-custom-device\" } ], \"apiVersion\" : \"v2\" , \"statusCode\" : 200 }","title":"Find Executable Commands"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-set-command","text":"Execute a SET command according to the url and parameterNames, replacing [host] with the server IP when running the SET command.
$ curl http://localhost:59882/api/v2/device/name/my-custom-device/message \\ -H \"Content-Type:application/json\" -X PUT \\ -d '{\"message\":\"Hello!\"}'","title":"Execute SET Command"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-get-command","text":"Execute a GET command as follows: $ curl http://localhost:59882/api/v2/device/name/my-custom-device/message | json_pp { \"event\" : { \"origin\" : 1624417689920618131 , \"readings\" : [ { \"resourceName\" : \"message\" , \"binaryValue\" : null , \"profileName\" : \"my-custom-device-profile\" , \"deviceName\" : \"my-custom-device\" , \"id\" : \"a3bb78c5-e76f-49a2-ad9d-b220a86c3e36\" , \"value\" : \"Hello!\" , \"valueType\" : \"String\" , \"origin\" : 1624417689920615828 , \"mediaType\" : \"\" } ], \"sourceName\" : \"message\" , \"deviceName\" : \"my-custom-device\" , \"apiVersion\" : \"v2\" , \"profileName\" : \"my-custom-device-profile\" , \"id\" : \"e0b29735-8b39-44d1-8f68-4d7252e14cc7\" }, \"apiVersion\" : \"v2\" , \"statusCode\" : 200 }","title":"Execute GET Command"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#schedule-job","text":"The schedule job is defined in the [[DeviceList.AutoEvents]] section of the device configuration file: [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"message\" After the service starts, query core-data's reading API.
The results show that the service auto-executes the command every 30 secs, as shown below: $ curl http://localhost:59880/api/v2/reading/resourceName/message | json_pp { \"statusCode\" : 200 , \"readings\" : [ { \"value\" : \"test-message\" , \"id\" : \"e91b8ca6-c5c4-4509-bb61-bd4b09fe835c\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"resourceName\" : \"message\" , \"origin\" : 1624418361324331392 , \"profileName\" : \"my-custom-device-profile\" , \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"String\" }, { \"mediaType\" : \"\" , \"binaryValue\" : null , \"resourceName\" : \"message\" , \"value\" : \"test-message\" , \"id\" : \"1da58cb7-2bf4-47f0-bbb8-9519797149a2\" , \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"String\" , \"profileName\" : \"my-custom-device-profile\" , \"origin\" : 1624418330822988843 }, ... ], \"apiVersion\" : \"v2\" }","title":"Schedule Job"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#async-device-reading","text":"The device-mqtt service subscribes to the DataTopic and waits for the real device to send a value to the MQTT broker; device-mqtt then parses the value and forwards it to the northbound services.
The data format contains the following values: name = device name cmd = deviceResource name method = get or set <cmd> = device reading The following results show that the mock device sent the reading every 15 secs: $ curl http://localhost:59880/api/v2/reading/resourceName/randnum | json_pp { \"readings\" : [ { \"origin\" : 1624418475007110946 , \"valueType\" : \"Float32\" , \"deviceName\" : \"my-custom-device\" , \"id\" : \"9b3d337e-8a8a-4a6c-8018-b4908b57abb8\" , \"binaryValue\" : null , \"resourceName\" : \"randnum\" , \"profileName\" : \"my-custom-device-profile\" , \"mediaType\" : \"\" , \"value\" : \"2.630000e+01\" }, { \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"Float32\" , \"id\" : \"06918cbb-ada0-4752-8877-0ef8488620f6\" , \"origin\" : 1624418460007833720 , \"mediaType\" : \"\" , \"profileName\" : \"my-custom-device-profile\" , \"value\" : \"2.570000e+01\" , \"resourceName\" : \"randnum\" , \"binaryValue\" : null }, ... ], \"statusCode\" : 200 , \"apiVersion\" : \"v2\" }","title":"Async Device Reading"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#mqtt-device-service-configuration","text":"MQTT Device Service has the following configurations to implement the MQTT protocol. Configuration Default Value Description MQTTBrokerInfo.Schema tcp The URL schema MQTTBrokerInfo.Host 0.0.0.0 The URL host MQTTBrokerInfo.Port 1883 The URL port MQTTBrokerInfo.Qos 0 Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) MQTTBrokerInfo.KeepAlive 3600 Seconds between client pings when no data is flowing, to avoid the client being disconnected.
Must be greater than 2 MQTTBrokerInfo.ClientId device-mqtt ClientId to connect to the broker with MQTTBrokerInfo.CredentialsRetryTime 120 Number of retries to get the credentials MQTTBrokerInfo.CredentialsRetryWait 1 Wait time (seconds) between retries to get the credentials MQTTBrokerInfo.ConnEstablishingRetry 10 Number of retries to establish the MQTT connection MQTTBrokerInfo.ConnRetryWaitTime 5 Wait time (seconds) between retries to establish the MQTT connection MQTTBrokerInfo.AuthMode none Indicates what to use when connecting to the broker. Must be one of \"none\" , \"usernamepassword\" MQTTBrokerInfo.CredentialsPath credentials Name of the path in secret provider to retrieve your secrets. Must be non-blank. MQTTBrokerInfo.IncomingTopic DataTopic (incoming/data/#) IncomingTopic is used to receive the async value MQTTBrokerInfo.ResponseTopic ResponseTopic (command/response/#) ResponseTopic is used to receive the command response from the device MQTTBrokerInfo.UseTopicLevels false (true) Boolean setting to use multi-level topics MQTTBrokerInfo.Writable.ResponseFetchInterval 500 ResponseFetchInterval specifies the retry interval (milliseconds) to fetch the command response from the MQTT broker Note Using Multi-level Topic: Remember to change the defaults in parentheses in the table above.","title":"MQTT Device Service Configuration"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overriding-with-environment-variables","text":"The user can override any of the above configurations using environment variables to meet their requirements, for example: # docker-compose.yml device-mqtt : . . . environment : MQTTBROKERINFO_CLIENTID : \"my-device-mqtt\" MQTTBROKERINFO_CONNRETRYWAITTIME : \"10\" MQTTBROKERINFO_USETOPICLEVELS : \"false\" ...","title":"Overriding with Environment Variables"},{"location":"examples/Ch-ExamplesAddingModbusDevice/","text":"Modbus EdgeX - Ireland Release This page describes how to connect Modbus devices to EdgeX.
In this example, we simulate the temperature sensor instead of using a real device. This provides a straightforward way to test the device service features. Temperature sensor: https://www.audon.co.uk/ethernet_sensors/NANO_TEMP.html User manual: http://download.inveo.com.pl/manual/nano_t/user_manual_en.pdf Important Notice To fulfill issue #61 , there is an important incompatible change after v2 (Ireland release). In the Device Profile attributes section, the startingAddress becomes an integer data type and zero-based value. In v1, startingAddress was a string data type and one-based value. Environment You can use any operating system that can install docker and docker-compose. In this example, we use Ubuntu to deploy EdgeX using docker. Modbus Device Simulator 1. Download ModbusPal Download the fixed version of ModbusPal from https://sourceforge.net/p/modbuspal/discussion/899955/thread/72cf35ee/cd1f/attachment/ModbusPal.jar . 2. Install the required lib: sudo apt install librxtx-java 3. Start ModbusPal: sudo java -jar ModbusPal.jar Modbus Register Table You can find the available registers in the user manual. Modbus TCP \u2013 Holding Registers Address Name R/W Description 4000 ThermostatL R/W Lower alarm threshold 4001 ThermostatH R/W Upper alarm threshold 4002 Alarm mode R/W 1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher 4004 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105) Setup ModbusPal To simulate the sensor, do the following: Add mock device: Add registers according to the register table: Add the ModbusPal value auto-generator, which can bind to the registers: Run the Simulator Enable the value generator and click the Run button. Set Up Before Starting Services The following sections describe how to complete the set up before starting the services.
If you prefer to start the services and then add the device, see Set Up After Starting Services Create a Custom configuration folder Run the following command: mkdir -p custom-config Set Up Device Profile Run the following command to create your device profile: cd custom-config nano temperature.profile.yml Fill in the device profile according to the Modbus Register Table , as shown below: name : \"Ethernet-Temperature-Sensor\" manufacturer : \"Audon Electronics\" model : \"Temperature\" labels : - \"Web\" - \"Modbus TCP\" - \"SNMP\" description : \"The NANO_TEMP is a Ethernet Thermometer measuring from -55\u00b0C to 125\u00b0C with a web interface and Modbus TCP communications.\" deviceResources : - name : \"ThermostatL\" isHidden : true description : \"Lower alarm threshold of the temperature\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 3999 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"RW\" scale : \"0.1\" - name : \"ThermostatH\" isHidden : true description : \"Upper alarm threshold of the temperature\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4000 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"RW\" scale : \"0.1\" - name : \"AlarmMode\" isHidden : true description : \"1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4001 } properties : valueType : \"Int16\" readWrite : \"RW\" - name : \"Temperature\" isHidden : false description : \"Temperature x 10 (np. 
10,5 st.C to 105)\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4003 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.1\" deviceCommands : - name : \"AlarmThreshold\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"ThermostatL\" } - { deviceResource : \"ThermostatH\" } - name : \"AlarmMode\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"AlarmMode\" , mappings : { \"1\" : \"OFF\" , \"2\" : \"Lower\" , \"3\" : \"Higher\" , \"4\" : \"Lower or Higher\" } } In the Modbus protocol, we provide the following attributes: 1. primaryTable : HOLDING_REGISTERS, INPUT_REGISTERS, COILS, DISCRETES_INPUT 2. startingAddress This attribute defines the zero-based startingAddress in Modbus device. For example, the GET command requests data from the Modbus address 4004 to get the temperature data, so the starting register address should be 4003. Address Starting Address Name R/W Description 4004 4003 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105) 3. IS_BYTE_SWAP , IS_WORD_SWAP : To handle the different Modbus binary data order, we support Int32, Uint32, Float32 to do the swap operation before decoding the binary data. For example: { primaryTable: \"INPUT_REGISTERS\", startingAddress: \"4\", isByteSwap: \"false\", isWordSwap: \"true\" } 4. RAW_TYPE : This attribute defines the binary data read from the Modbus device, then we can use the value type to indicate the data type that the user wants to receive. We only support Int16 and Uint16 for rawType. The corresponding value type must be Float32 and Float64 . For example: deviceResources : - name : \"Temperature\" isHidden : false description : \"Temperature x 10 (np. 
10,5 st.C to 105)\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4003 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.1\" In the device-modbus, the Property valueType decides how many registers will be read. A register (for example, a holding register) holds 16 bits. If the Modbus device's user manual specifies that a value spans two registers, define it as Float32, Int32, or Uint32 in the deviceProfile. Once we execute a command, device-modbus knows its value type and register type, startingAddress, and register length, so it can read or write the value using the Modbus protocol. Set Up Device Service Configuration Run the following command to create your device configuration: cd custom-config nano device.config.toml Fill in the device.config.toml file, as shown below: [[DeviceList]] Name = \"Modbus-TCP-Temperature-Sensor\" ProfileName = \"Ethernet-Temperature-Sensor\" Description = \"This device is a product for monitoring the temperature via the ethernet\" labels = [ \"temperature\" , \"modbus TCP\" ] [DeviceList.Protocols] [DeviceList.Protocols.modbus-tcp] Address = \"172.17.0.1\" Port = \"502\" UnitID = \"1\" Timeout = \"5\" IdleTimeout = \"5\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Temperature\" The address 172.17.0.1 points to the docker bridge network, which forwards requests from the docker network to the host. Use this configuration file to define devices and AutoEvent. Then the device-modbus will generate the corresponding device instances on startup.
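As a worked illustration of the register decoding described above — a rawType of Int16 with scale 0.1, and the byte/word swap attributes for Float32 — here is a minimal Python sketch. This is illustrative only: these helper functions are invented for the example and are not part of device-modbus, which performs the equivalent conversions internally based on the profile attributes.

```python
import struct

def decode_int16_scaled(register, scale=0.1):
    # Interpret one 16-bit register value as a signed Int16, then apply the
    # profile's scale (rawType: "Int16", valueType: "Float32", scale: "0.1").
    raw = struct.unpack(">h", register.to_bytes(2, "big"))[0]
    return raw * scale

def decode_float32(registers, is_byte_swap=False, is_word_swap=False):
    # Interpret two 16-bit registers as a big-endian Float32, applying the
    # optional isWordSwap / isByteSwap attributes before decoding.
    words = list(registers)
    if is_word_swap:
        words.reverse()
    data = b"".join(w.to_bytes(2, "big") for w in words)
    if is_byte_swap:
        data = bytes(b for pair in zip(data[1::2], data[0::2]) for b in pair)
    return struct.unpack(">f", data)[0]

# Register 4003 (zero-based) holding the raw value 265 decodes to 26.5 degrees:
print(decode_int16_scaled(265))          # 26.5
# 26.5 as an IEEE-754 Float32 is 0x41D40000, i.e. registers [0x41D4, 0x0000]:
print(decode_float32([0x41D4, 0x0000]))  # 26.5
```

The same swap flags shown in the attributes example ({ isByteSwap: \"false\", isWordSwap: \"true\" }) simply reorder the bytes before the Float32 is unpacked.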
The device-modbus offers two types of protocol, Modbus TCP and Modbus RTU, which can be defined as shown below: protocol Name Protocol Address Port UnitID BaudRate DataBits StopBits Parity Timeout IdleTimeout Modbus TCP Gateway address TCP 10.211.55.6 502 1 5 5 Modbus RTU Gateway address RTU /tmp/slave 502 2 19200 8 1 N 5 5 In the RTU protocol, Parity can be: N - None is 0 O - Odd is 1 E - Even is 2, default is E Prepare docker-compose file Clone edgex-compose $ git clone git@github.com:edgexfoundry/edgex-compose.git Generate the docker-compose.yml file $ cd edgex-compose/compose-builder $ make gen ds-modbus Add Custom Configuration to docker-compose File Add the prepared configuration files to the docker-compose file: you can mount them using volumes and change the environment for device-modbus internal use. Open the docker-compose.yml file and then add volumes path and environment as shown below: device-modbus : ... environment : ... DEVICE_DEVICESDIR : /custom-config DEVICE_PROFILESDIR : /custom-config volumes : ... - /path/to/custom-config:/custom-config Start EdgeX Foundry on Docker Since we generated the docker-compose.yml file in the previous step, we can deploy EdgeX as shown below: $ cd edgex-compose/compose-builder $ docker-compose up -d Creating network \"compose-builder_edgex-network\" with driver \"bridge\" Creating volume \"compose-builder_consul-acl-token\" with default driver ... Creating edgex-core-metadata ... done Creating edgex-core-command ... done Creating edgex-core-data ... done Creating edgex-device-modbus ... done Creating edgex-app-rules-engine ... done Creating edgex-sys-mgmt-agent ... done Set Up After Starting Services If the services are already running and you want to add a device, you can use the Core Metadata API as outlined in this section. If you set up the device profile and Service as described in Set Up Before Starting Services , you can skip this section.
To add a device after starting the services, complete the following steps: Upload the device profile above to metadata with a POST to http://localhost:59881/api/v2/deviceprofile/uploadfile and add the file as key \"file\" to the body in form-data format, and the created ID will be returned. The following example command uses curl to send the request: $ curl http://localhost:59881/api/v2/deviceprofile/uploadfile \\ -F \"file=@temperature.profile.yml\" Ensure the Modbus device service is running, adjust the service name below to match if necessary or if using other device services. Add the device with a POST to http://localhost:59881/api/v2/device , the body will look something like: $ curl http://localhost:59881/api/v2/device -H \"Content-Type:application/json\" -X POST \\ -d '[ { \"apiVersion\": \"v2\", \"device\": { \"name\" :\"Modbus-TCP-Temperature-Sensor\", \"description\":\"This device is a product for monitoring the temperature via the ethernet\", \"labels\":[ \"Temperature\", \"Modbus TCP\" ], \"serviceName\": \"device-modbus\", \"profileName\": \"Ethernet-Temperature-Sensor\", \"protocols\":{ \"modbus-tcp\":{ \"Address\" : \"172.17.0.1\", \"Port\" : \"502\", \"UnitID\" : \"1\", \"Timeout\" : \"5\", \"IdleTimeout\" : \"5\" } }, \"autoEvents\":[ { \"Interval\":\"30s\", \"onChange\":false, \"SourceName\":\"Temperature\" } ], \"adminState\":\"UNLOCKED\", \"operatingState\":\"UP\" } } ]' The service name must match/refer to the target device service, and the profile name must match the device profile name from the previous steps. Execute Commands Now we're ready to run some commands. 
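The "find executable commands" query that follows returns a JSON document listing each device's core commands. When scripting against it, the useful fields can be pulled out programmatically — a minimal Python sketch, where the `executable_commands` helper and the trimmed `sample` payload are invented for illustration (field names follow the sample responses on this page):

```python
import json

# A trimmed, hypothetical sample shaped like core-command's
# /api/v2/device/all response.
sample = json.dumps({
    "apiVersion": "v2",
    "statusCode": 200,
    "deviceCoreCommands": [{
        "deviceName": "Modbus-TCP-Temperature-Sensor",
        "profileName": "Ethernet-Temperature-Sensor",
        "coreCommands": [{
            "name": "AlarmThreshold", "get": True, "set": True,
            "url": "http://edgex-core-command:59882",
            "path": "/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold",
        }],
    }],
})

def executable_commands(payload):
    # Yield (device, command, supported methods, REST path) for every
    # core command in the response.
    doc = json.loads(payload)
    return [
        (dev["deviceName"], cmd["name"],
         [m for m in ("get", "set") if cmd.get(m)], cmd["path"])
        for dev in doc["deviceCoreCommands"]
        for cmd in dev["coreCommands"]
    ]

for device, command, methods, path in executable_commands(sample):
    print(device, command, methods, path)
```

The `path` values returned this way are the same ones used with curl against core-command (port 59882) in the GET/SET examples below.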
Find Executable Commands Use the following query to find executable commands: $ curl http://localhost:59882/api/v2/device/all | json_pp { \"apiVersion\" : \"v2\" , \"deviceCoreCommands\" : [ { \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"coreCommands\" : [ { \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"AlarmThreshold\" , \"get\" : true , \"set\" : true , \"parameters\" : [ { \"valueType\" : \"Float32\" , \"resourceName\" : \"ThermostatL\" }, { \"valueType\" : \"Float32\" , \"resourceName\" : \"ThermostatH\" } ], \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold\" }, { \"get\" : true , \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"AlarmMode\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmMode\" , \"parameters\" : [ { \"resourceName\" : \"AlarmMode\" , \"valueType\" : \"Int16\" } ] }, { \"get\" : true , \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"Temperature\" , \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/Temperature\" , \"parameters\" : [ { \"valueType\" : \"Float32\" , \"resourceName\" : \"Temperature\" } ] } ] } ], \"statusCode\" : 200 } Execute SET command Execute SET command according to url and parameterNames , replacing [host] with the server IP when running the SET command. $ curl http://localhost:59882/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold \\ -H \"Content-Type:application/json\" -X PUT \\ -d '{\"ThermostatL\":\"15\",\"ThermostatH\":\"100\"}' Execute GET command Replace [host] with the server IP when running the GET command.
$ curl http://localhost:59882/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold | json_pp { \"statusCode\" : 200 , \"apiVersion\" : \"v2\" , \"event\" : { \"origin\" : 1624324686964377495 , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"f3d44a0f-d2c3-4ef6-9441-ad6b1bfb8a9e\" , \"sourceName\" : \"AlarmThreshold\" , \"readings\" : [ { \"resourceName\" : \"ThermostatL\" , \"value\" : \"1.500000e+01\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"9aa879a0-c184-476b-8124-34d35a2a51f3\" , \"valueType\" : \"Float32\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"origin\" : 1624324686963970614 , \"profileName\" : \"Ethernet-Temperature-Sensor\" }, { \"value\" : \"1.000000e+02\" , \"resourceName\" : \"ThermostatH\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"bf7df23b-4338-4b93-a8bd-7abd5e848379\" , \"valueType\" : \"Float32\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"origin\" : 1624324686964343768 , \"profileName\" : \"Ethernet-Temperature-Sensor\" } ], \"apiVersion\" : \"v2\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" } } AutoEvent The AutoEvent is defined in the [[DeviceList.AutoEvents]] section of the device configuration file: [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Temperature\" After service startup, query core-data's API. The results show that the service auto-executes the command every 30 seconds.
$ curl http://localhost:59880/api/v2/event/device/name/Modbus-TCP-Temperature-Sensor | json_pp { \"events\" : [ { \"readings\" : [ { \"value\" : \"5.300000e+01\" , \"binaryValue\" : null , \"origin\" : 1624325219186870396 , \"id\" : \"68a66a35-d3cf-48a2-9bf0-09578267a3f7\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"mediaType\" : \"\" , \"valueType\" : \"Float32\" , \"resourceName\" : \"Temperature\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" } ], \"apiVersion\" : \"v2\" , \"origin\" : 1624325219186977564 , \"id\" : \"4b235616-7304-419e-97ae-17a244911b1c\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"sourceName\" : \"Temperature\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" }, { \"readings\" : [ { \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"resourceName\" : \"Temperature\" , \"valueType\" : \"Float32\" , \"id\" : \"56b7e8be-7ce8-4fa9-89e2-3a1a7ef09050\" , \"origin\" : 1624325189184675483 , \"value\" : \"5.300000e+01\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" } ], \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"sourceName\" : \"Temperature\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"fbab44f5-9775-4c09-84bd-cbfb00001115\" , \"origin\" : 1624325189184721223 , \"apiVersion\" : \"v2\" }, ... ], \"apiVersion\" : \"v2\" , \"statusCode\" : 200 } Set up the Modbus RTU Device This section describes how to connect the Modbus RTU device. We use Ubuntu OS and a Modbus RTU device for this example. Modbus RTU device: http://www.icpdas.com/root/product/solutions/remote_io/rs-485/i-7000_m-7000/i-7055.html User manual: http://ftp.icpdas.com/pub/cd/8000cd/napdos/7000/manual/7000dio.pdf Connect the device Connect the device to your machine (laptop, gateway, etc.) via an RS485/USB adaptor and power it on. Execute a command on the machine, and you can find a message like the following: $ dmesg | grep tty ... ...
[18006.167625] usb 1-1: FTDI USB Serial Device converter now attached to ttyUSB0 This shows the USB device attached to ttyUSB0; you can then check whether the device path exists: $ ls /dev/ttyUSB0 /dev/ttyUSB0 Deploy the EdgeX Modify the docker-compose.yml file to mount the device path to the device-modbus: Change the permission of the device path sudo chmod 777 /dev/ttyUSB0 Open docker-compose.yml file with text editor. $ nano /docker-compose.yml Modify the device-modbus section and save the file device-modbus: ... devices: - /dev/ttyUSB0 Deploy the EdgeX $ docker-compose up -d Add device to EdgeX Create the device profile according to the register table $ nano modbus.rtu.demo.profile.yml name : \"Modbus-RTU-IO-Module\" manufacturer : \"icpdas\" model : \"M-7055\" labels : - \"Modbus RTU\" - \"IO Module\" description : \"This IO module offers 8 isolated channels for digital input and 8 isolated channels for digital output.\" deviceResources : - name : \"DO0\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 0 } properties : valueType : \"Bool\" readWrite : \"RW\" - name : \"DO1\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 1 } properties : valueType : \"Bool\" readWrite : \"RW\" - name : \"DO2\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 2 } properties : valueType : \"Bool\" readWrite : \"RW\" deviceCommands : - name : \"DO\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"DO0\" } - { deviceResource : \"DO1\" } - { deviceResource : \"DO2\" } Upload the device profile $ curl http://localhost:59881/api/v2/deviceprofile/uploadfile \\ -F \"file=@modbus.rtu.demo.profile.yml\" Create the device entity in EdgeX. You can find the Modbus RTU settings on the device or in the user manual.
$ curl http://localhost:59881/api/v2/device -H \"Content-Type:application/json\" -X POST \\ -d ' [ { \"apiVersion\" : \"v2\" , \"device\" : { \"name\" : \"Modbus-RTU-IO-Module\" , \"description\" : \"The device can be used to monitor the status of the digital input and digital output channels.\" , \"labels\" :[ \"IO Module\" , \"Modbus RTU\" ], \"serviceName\" : \"device-modbus\" , \"profileName\" : \"Modbus-RTU-IO-Module\" , \"protocols\" :{ \"modbus-rtu\" :{ \"Address\" : \"/dev/ttyUSB0\" , \"BaudRate\" : \"19200\" , \"DataBits\" : \"8\" , \"StopBits\" : \"1\" , \"Parity\" : \"N\" , \"UnitID\" : \"1\" , \"Timeout\" : \"5\" , \"IdleTimeout\" : \"5\" } }, \"adminState\" : \"UNLOCKED\" , \"operatingState\" : \"UP\" } } ] ' Test the GET or SET command","title":"Modbus"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus","text":"EdgeX - Ireland Release This page describes how to connect Modbus devices to EdgeX. In this example, we simulate the temperature sensor instead of using a real device. This provides a straightforward way to test the device service features. Temperature sensor: https://www.audon.co.uk/ethernet_sensors/NANO_TEMP.html User manual: http://download.inveo.com.pl/manual/nano_t/user_manual_en.pdf","title":"Modbus"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#important-notice","text":"To fulfill issue #61 , there is an important incompatible change as of v2 (Ireland release). In the Device Profile attributes section, startingAddress becomes an integer data type with a zero-based value. In v1, startingAddress was a string data type with a one-based value.","title":"Important Notice"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#environment","text":"You can use any operating system that can install docker and docker-compose. 
In this example, we use Ubuntu to deploy EdgeX using docker.","title":"Environment"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus-device-simulator","text":"1. Download ModbusPal: download the fixed version of ModbusPal from https://sourceforge.net/p/modbuspal/discussion/899955/thread/72cf35ee/cd1f/attachment/ModbusPal.jar . 2. Install the required lib: sudo apt install librxtx-java 3. Start up ModbusPal: sudo java -jar ModbusPal.jar","title":"Modbus Device Simulator"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus-register-table","text":"You can find the available registers in the user manual. Modbus TCP \u2013 Holding Registers Address Name R/W Description 4000 ThermostatL R/W Lower alarm threshold 4001 ThermostatH R/W Upper alarm threshold 4002 Alarm mode R/W 1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher 4004 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105)","title":"Modbus Register Table"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#setup-modbuspal","text":"To simulate the sensor, do the following: Add a mock device: Add registers according to the register table: Add the value auto-generator supported by ModbusPal, which can bind to the registers:","title":"Setup ModbusPal"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#run-the-simulator","text":"Enable the value generator and click the Run button.","title":"Run the Simulator"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-before-starting-services","text":"The following sections describe how to complete the set up before starting the services. 
If you prefer to start the services and then add the device, see Set Up After Starting Services","title":"Set Up Before Starting Services"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#create-a-custom-configuration-folder","text":"Run the following command: mkdir -p custom-config","title":"Create a Custom configuration folder"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-device-profile","text":"Run the following command to create your device profile: cd custom-config nano temperature.profile.yml Fill in the device profile according to the Modbus Register Table , as shown below: name : \"Ethernet-Temperature-Sensor\" manufacturer : \"Audon Electronics\" model : \"Temperature\" labels : - \"Web\" - \"Modbus TCP\" - \"SNMP\" description : \"The NANO_TEMP is a Ethernet Thermometer measuring from -55\u00b0C to 125\u00b0C with a web interface and Modbus TCP communications.\" deviceResources : - name : \"ThermostatL\" isHidden : true description : \"Lower alarm threshold of the temperature\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 3999 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"RW\" scale : \"0.1\" - name : \"ThermostatH\" isHidden : true description : \"Upper alarm threshold of the temperature\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4000 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"RW\" scale : \"0.1\" - name : \"AlarmMode\" isHidden : true description : \"1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4001 } properties : valueType : \"Int16\" readWrite : \"RW\" - name : \"Temperature\" isHidden : false description : \"Temperature x 10 (np. 
10,5 st.C to 105)\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4003 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.1\" deviceCommands : - name : \"AlarmThreshold\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"ThermostatL\" } - { deviceResource : \"ThermostatH\" } - name : \"AlarmMode\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"AlarmMode\" , mappings : { \"1\" : \"OFF\" , \"2\" : \"Lower\" , \"3\" : \"Higher\" , \"4\" : \"Lower or Higher\" } } In the Modbus protocol, we provide the following attributes: 1. primaryTable : HOLDING_REGISTERS, INPUT_REGISTERS, COILS, DISCRETES_INPUT 2. startingAddress This attribute defines the zero-based startingAddress in Modbus device. For example, the GET command requests data from the Modbus address 4004 to get the temperature data, so the starting register address should be 4003. Address Starting Address Name R/W Description 4004 4003 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105) 3. IS_BYTE_SWAP , IS_WORD_SWAP : To handle the different Modbus binary data order, we support Int32, Uint32, Float32 to do the swap operation before decoding the binary data. For example: { primaryTable: \"INPUT_REGISTERS\", startingAddress: \"4\", isByteSwap: \"false\", isWordSwap: \"true\" } 4. RAW_TYPE : This attribute defines the binary data read from the Modbus device, then we can use the value type to indicate the data type that the user wants to receive. We only support Int16 and Uint16 for rawType. The corresponding value type must be Float32 and Float64 . For example: deviceResources : - name : \"Temperature\" isHidden : false description : \"Temperature x 10 (np. 
10,5 st.C to 105)\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4003 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.1\" In the device-modbus, the Property valueType decides how many registers will be read. Like Holding registers, a register has 16 bits. If the Modbus device's user manual specifies that a value has two registers, define it as Float32 or Int32 or Uint32 in the deviceProfile. Once we execute a command, device-modbus knows its value type and register type, startingAddress, and register length. So it can read or write value using the modbus protocol.","title":"Set Up Device Profile"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-device-service-configuration","text":"Run the following command to create your device configuration: cd custom-config nano device.config.toml Fill in the device.config.toml file, as shown below: [[DeviceList]] Name = \"Modbus-TCP-Temperature-Sensor\" ProfileName = \"Ethernet-Temperature-Sensor\" Description = \"This device is a product for monitoring the temperature via the ethernet\" labels = [ \"temperature\" , \"modbus TCP\" ] [DeviceList.Protocols] [DeviceList.Protocols.modbus-tcp] Address = \"172.17.0.1\" Port = \"502\" UnitID = \"1\" Timeout = \"5\" IdleTimeout = \"5\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Temperature\" The address 172.17.0.1 is point to the docker bridge network which means it can forward the request from docker network to the host. Use this configuration file to define devices and AutoEvent. Then the device-modbus will generate the relative instance on startup. 
The device-modbus service supports two protocols, Modbus TCP and Modbus RTU, which can be defined as shown below: protocol Name Protocol Address Port UnitID BaudRate DataBits StopBits Parity Timeout IdleTimeout Modbus TCP Gateway address TCP 10.211.55.6 502 1 5 5 Modbus RTU Gateway address RTU /tmp/slave 502 2 19200 8 1 N 5 5 In the RTU protocol, Parity can be: N - None (0), O - Odd (1), E - Even (2); the default is E","title":"Set Up Device Service Configuration"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#prepare-docker-compose-file","text":"Clone edgex-compose $ git clone git@github.com:edgexfoundry/edgex-compose.git Generate the docker-compose.yml file $ cd edgex-compose/compose-builder $ make gen ds-modbus","title":"Prepare docker-compose file"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#add-custom-configuration-to-docker-compose-file","text":"Add the prepared configuration files to the docker-compose file; you can mount them using volumes and change the environment for device-modbus internal use. Open the docker-compose.yml file and then add the volumes path and environment as shown below: device-modbus : ... environment : ... DEVICE_DEVICESDIR : /custom-config DEVICE_PROFILESDIR : /custom-config volumes : ... - /path/to/custom-config:/custom-config","title":"Add Custom Configuration to docker-compose File"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#start-edgex-foundry-on-docker","text":"Since we generated the docker-compose.yml file in the previous step, we can deploy EdgeX as shown below: $ cd edgex-compose/compose-builder $ docker-compose up -d Creating network \"compose-builder_edgex-network\" with driver \"bridge\" Creating volume \"compose-builder_consul-acl-token\" with default driver ... Creating edgex-core-metadata ... done Creating edgex-core-command ... done Creating edgex-core-data ... done Creating edgex-device-modbus ... done Creating edgex-app-rules-engine ... done Creating edgex-sys-mgmt-agent ... 
done","title":"Start EdgeX Foundry on Docker"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-after-starting-services","text":"If the services are already running and you want to add a device, you can use the Core Metadata API as outlined in this section. If you set up the device profile and Service as described in Set Up Before Starting Services , you can skip this section. To add a device after starting the services, complete the following steps: Upload the device profile above to metadata with a POST to http://localhost:59881/api/v2/deviceprofile/uploadfile and add the file as key \"file\" to the body in form-data format, and the created ID will be returned. The following example command uses curl to send the request: $ curl http://localhost:59881/api/v2/deviceprofile/uploadfile \\ -F \"file=@temperature.profile.yml\" Ensure the Modbus device service is running, adjust the service name below to match if necessary or if using other device services. Add the device with a POST to http://localhost:59881/api/v2/device , the body will look something like: $ curl http://localhost:59881/api/v2/device -H \"Content-Type:application/json\" -X POST \\ -d '[ { \"apiVersion\": \"v2\", \"device\": { \"name\" :\"Modbus-TCP-Temperature-Sensor\", \"description\":\"This device is a product for monitoring the temperature via the ethernet\", \"labels\":[ \"Temperature\", \"Modbus TCP\" ], \"serviceName\": \"device-modbus\", \"profileName\": \"Ethernet-Temperature-Sensor\", \"protocols\":{ \"modbus-tcp\":{ \"Address\" : \"172.17.0.1\", \"Port\" : \"502\", \"UnitID\" : \"1\", \"Timeout\" : \"5\", \"IdleTimeout\" : \"5\" } }, \"autoEvents\":[ { \"Interval\":\"30s\", \"onChange\":false, \"SourceName\":\"Temperature\" } ], \"adminState\":\"UNLOCKED\", \"operatingState\":\"UP\" } } ]' The service name must match/refer to the target device service, and the profile name must match the device profile name from the previous steps.","title":"Set Up After Starting 
Services"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-commands","text":"Now we're ready to run some commands.","title":"Execute Commands"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#find-executable-commands","text":"Use the following query to find executable commands: $ curl http://localhost:59882/api/v2/device/all | json_pp { \"apiVersion\" : \"v2\" , \"deviceCoreCommands\" : [ { \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"coreCommands\" : [ { \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"AlarmThreshold\" , \"get\" : true , \"set\" : true , \"parameters\" : [ { \"valueType\" : \"Float32\" , \"resourceName\" : \"ThermostatL\" }, { \"valueType\" : \"Float32\" , \"resourceName\" : \"ThermostatH\" } ], \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold\" }, { \"get\" : true , \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"AlarmMode\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmMode\" , \"parameters\" : [ { \"resourceName\" : \"AlarmMode\" , \"valueType\" : \"Int16\" } ] }, { \"get\" : true , \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"Temperature\" , \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/Temperature\" , \"parameters\" : [ { \"valueType\" : \"Float32\" , \"resourceName\" : \"Temperature\" } ] } ] } ], \"statusCode\" : 200 }","title":"Find Executable Commands"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-set-command","text":"Execute the SET command according to the url and parameterNames , replacing [host] with the server IP when running the SET command. 
$ curl http://localhost:59882/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold \\ -H \"Content-Type:application/json\" -X PUT \\ -d '{\"ThermostatL\":\"15\",\"ThermostatH\":\"100\"}'","title":"Execute SET command"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-get-command","text":"Replace [host] with the server IP when running the GET command. $ curl http://localhost:59882/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold | json_pp { \"statusCode\" : 200 , \"apiVersion\" : \"v2\" , \"event\" : { \"origin\" : 1624324686964377495 , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"f3d44a0f-d2c3-4ef6-9441-ad6b1bfb8a9e\" , \"sourceName\" : \"AlarmThreshold\" , \"readings\" : [ { \"resourceName\" : \"ThermostatL\" , \"value\" : \"1.500000e+01\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"9aa879a0-c184-476b-8124-34d35a2a51f3\" , \"valueType\" : \"Float32\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"origin\" : 1624324686963970614 , \"profileName\" : \"Ethernet-Temperature-Sensor\" }, { \"value\" : \"1.000000e+02\" , \"resourceName\" : \"ThermostatH\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"bf7df23b-4338-4b93-a8bd-7abd5e848379\" , \"valueType\" : \"Float32\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"origin\" : 1624324686964343768 , \"profileName\" : \"Ethernet-Temperature-Sensor\" } ], \"apiVersion\" : \"v2\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" } }","title":"Execute GET command"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#autoevent","text":"The AutoEvent is defined in the [[DeviceList.AutoEvents]] section of the device configuration file: [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Temperature\" After service startup, query core-data's API. The results show that the service auto-executes the command every 30 seconds. 
$ curl http://localhost:59880/api/v2/event/device/name/Modbus-TCP-Temperature-Sensor | json_pp { \"events\" : [ { \"readings\" : [ { \"value\" : \"5.300000e+01\" , \"binaryValue\" : null , \"origin\" : 1624325219186870396 , \"id\" : \"68a66a35-d3cf-48a2-9bf0-09578267a3f7\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"mediaType\" : \"\" , \"valueType\" : \"Float32\" , \"resourceName\" : \"Temperature\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" } ], \"apiVersion\" : \"v2\" , \"origin\" : 1624325219186977564 , \"id\" : \"4b235616-7304-419e-97ae-17a244911b1c\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"sourceName\" : \"Temperature\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" }, { \"readings\" : [ { \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"resourceName\" : \"Temperature\" , \"valueType\" : \"Float32\" , \"id\" : \"56b7e8be-7ce8-4fa9-89e2-3a1a7ef09050\" , \"origin\" : 1624325189184675483 , \"value\" : \"5.300000e+01\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" } ], \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"sourceName\" : \"Temperature\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"fbab44f5-9775-4c09-84bd-cbfb00001115\" , \"origin\" : 1624325189184721223 , \"apiVersion\" : \"v2\" }, ... ], \"apiVersion\" : \"v2\" , \"statusCode\" : 200 }","title":"AutoEvent"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-the-modbus-rtu-device","text":"This section describes how to connect the Modbus RTU device. We use Ubuntu OS and a Modbus RTU device for this example. 
Modbus RTU device: http://www.icpdas.com/root/product/solutions/remote_io/rs-485/i-7000_m-7000/i-7055.html User manual: http://ftp.icpdas.com/pub/cd/8000cd/napdos/7000/manual/7000dio.pdf","title":"Set up the Modbus RTU Device"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#connect-the-device","text":"Connect the device to your machine (laptop, gateway, etc.) via an RS485/USB adaptor and power it on. Execute a command on the machine, and you can find a message like the following: $ dmesg | grep tty ... ... [18006.167625] usb 1-1: FTDI USB Serial Device converter now attached to ttyUSB0 This shows the USB device attached to ttyUSB0; you can then check whether the device path exists: $ ls /dev/ttyUSB0 /dev/ttyUSB0","title":"Connect the device"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#deploy-the-edgex","text":"Modify the docker-compose.yml file to mount the device path to the device-modbus: Change the permission of the device path sudo chmod 777 /dev/ttyUSB0 Open the docker-compose.yml file with a text editor. $ nano docker-compose.yml Modify the device-modbus section and save the file device-modbus: ... 
devices: - /dev/ttyUSB0 Deploy the EdgeX $ docker-compose up -d","title":"Deploy the EdgeX"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#add-device-to-edgex","text":"Create the device profile according to the register table $ nano modbus.rtu.demo.profile.yml name : \"Modbus-RTU-IO-Module\" manufacturer : \"icpdas\" model : \"M-7055\" labels : - \"Modbus RTU\" - \"IO Module\" description : \"This IO module offers 8 isolated channels for digital input and 8 isolated channels for digital output.\" deviceResources : - name : \"DO0\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 0 } properties : valueType : \"Bool\" readWrite : \"RW\" - name : \"DO1\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 1 } properties : valueType : \"Bool\" readWrite : \"RW\" - name : \"DO2\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 2 } properties : valueType : \"Bool\" readWrite : \"RW\" deviceCommands : - name : \"DO\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"DO0\" } - { deviceResource : \"DO1\" } - { deviceResource : \"DO2\" } Upload the device profile $ curl http://localhost:59881/api/v2/deviceprofile/uploadfile \\ -F \"file=@modbus.rtu.demo.profile.yml\" Create the device entity in EdgeX. You can find the Modbus RTU settings on the device or in the user manual. 
$ curl http://localhost:59881/api/v2/device -H \"Content-Type:application/json\" -X POST \\ -d ' [ { \"apiVersion\" : \"v2\" , \"device\" : { \"name\" : \"Modbus-RTU-IO-Module\" , \"description\" : \"The device can be used to monitor the status of the digital input and digital output channels.\" , \"labels\" :[ \"IO Module\" , \"Modbus RTU\" ], \"serviceName\" : \"device-modbus\" , \"profileName\" : \"Modbus-RTU-IO-Module\" , \"protocols\" :{ \"modbus-rtu\" :{ \"Address\" : \"/dev/ttyUSB0\" , \"BaudRate\" : \"19200\" , \"DataBits\" : \"8\" , \"StopBits\" : \"1\" , \"Parity\" : \"N\" , \"UnitID\" : \"1\" , \"Timeout\" : \"5\" , \"IdleTimeout\" : \"5\" } }, \"adminState\" : \"UNLOCKED\" , \"operatingState\" : \"UP\" } } ] ' Test the GET or SET command","title":"Add device to EdgeX"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/","text":"SNMP EdgeX - Ireland Release Overview In this example, you add a new Patlite Signal Tower which communicates via SNMP. This example demonstrates how to connect a device through the SNMP Device Service. Patlite Signal Tower, model NHL-FB2 Setup Hardware needed In order to exercise this example, you will need the following hardware A computer able to run EdgeX Foundry A Patlite Signal Tower (NHL-FB2 model) Both the computer and Patlite must be connected to the same ethernet network Software needed In addition to the hardware, you will need the following software Docker Docker Compose EdgeX Foundry V2 (Ireland release) curl to run REST commands (you can also use a tool like Postman) If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry. Add the SNMP Device Service to your docker-compose.yml The EdgeX docker-compose.yml file used to run EdgeX must include the SNMP device service for this example. 
You can either: download and use the docker-compose.yml file provided with this example or use the EdgeX Compose Builder tool to create your own custom docker-compose.yml file adding device-snmp. See Getting Started using Docker if you need assistance running EdgeX once you have your Docker Compose file. Add the SNMP Device Profile and Device SNMP devices, like the Patlite Signal Tower, provide a set of managed objects to get and set property information on the associated device. Each managed object has an address called an object identifier (or OID) that you use to interact with the SNMP device's managed object. You use the OID to query the state of the device or to set properties on the device. In the case of the Patlite, there are managed objects for the colored lights and the buzzer of the device. You can read the current state of a colored light (get) or turn the light on (set) by making a call to the proper OIDs for the associated managed object. For example, on the NH series signal towers used in this example, a \"get\" call to the 1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1 OID returns the current state of the Red signal light. A return value of 1 would signal the light is off. A return value of 2 says the light is on. A return value of 3 says the light is flashing. Read this SNMP tutorial to learn more about the basics of the SNMP protocol. See the Patlite NH Series User's Manual for more information on the SNMP OIDs and function calls and parameters needed for some requests. Add the Patlite Device Profile A device profile has been created for you to get and set the signal tower's three colored lights and to get and set the buzzer. The patlite-snmp device profile defines three device resources for each of the lights and the buzzer. 
Current State, a read request device resource to get the current state of the requested light or buzzer Control State, a write request device resource to set the current state of the light or buzzer Timer, a write request device resource used in combination with the control state to set the state after the number of seconds provided by the timer resource Note that the attributes of each device resource specify the SNMP OID that the device service will use to make a request of the signal tower. For example, the device resource YAML below (taken from the profile) provides the means to get the current Red light state. Note that a specific OID is provided that is unique to the RED light, current state property. - name : \"RedLightCurrentState\" isHidden : false description : \"red light current state\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"R\" defaultValue : \"1\" Below are the device resource definitions for the Red light control state and timer. Again, unique OIDs are provided as attributes for each property. - name : \"RedLightControlState\" isHidden : true description : \"red light state\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"W\" defaultValue : \"1\" - name : \"RedLightTimer\" isHidden : true description : \"red light timer\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"W\" defaultValue : \"1\" In order to set the Red light on, one would need to send an SNMP request to set OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1 to a value of 2 (on state) along with a number of seconds delay to the timer at OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1 . Sending a zero value (0) to the timer means you want to turn the light on immediately. 
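Because the control state and timer resources travel together, a client's PUT body carries both at once. A minimal sketch of building that body; `light_command_body` and the state constants are illustrative, not part of EdgeX:

```python
import json

# Illustrative state codes from the manual excerpt above: 1 = off, 2 = on.
OFF, ON = 1, 2

def light_command_body(color, state, delay_seconds=0):
    """Build the PUT body for a <color>Light device command.

    The control state and timer must be set together, which is why the
    profile pairs them in one deviceCommand; delay_seconds=0 applies
    the new state immediately.
    """
    return json.dumps({
        f"{color}LightControlState": str(state),
        f"{color}LightTimer": str(delay_seconds),
    })

print(light_command_body("Red", ON))
# {"RedLightControlState": "2", "RedLightTimer": "0"}
```

Note that the values are strings, matching how reading values are represented elsewhere in the EdgeX v2 API examples.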
Because setting a light or buzzer requires both the control state and timer OIDs to be set together (simultaneously), the device profile contains deviceCommands to set the light and timer device resources (and therefore their SNMP property OIDs) in a single operation. Here is the device command to set the Red light. - name : \"RedLight\" readWrite : \"W\" isHidden : false resourceOperations : - { deviceResource : \"RedLightControlState\" } - { deviceResource : \"RedLightTimer\" } You will need to upload this profile into core metadata. Download the Patlite device profile to a convenient directory. Then, using the following curl command, request the profile be uploaded into core metadata. curl -X 'POST' 'http://localhost:59881/api/v2/deviceprofile/uploadfile' --form 'file=@\"/home/yourfilelocationhere/patlite-snmp.yml\"' Alert Note that the curl command above assumes that core metadata is available at localhost . Change localhost to the host address of your core metadata service. Also note that you will need to replace the /home/yourfilelocationhere path with the path where the profile resides. Add the Patlite Device With the Patlite device profile now in metadata, you can add the Patlite device in metadata. When adding the device, you typically need to provide the name, description, labels and admin/op states of the device when creating it. You will also need to associate the device to a device service (in this case the device-snmp device service). You will need to associate the new device to a profile - the patlite profile just added in the step above. And you will need to provide the protocol information (such as the address and port of the device) to tell the device service where it can find the physical device. If you wish the device service to automatically get readings from the device, you will also need to provide AutoEvent properties when creating the device. The curl command to POST the new Patlite device (named patlite1 ) into metadata is provided below. 
You will need to change the protocol Address (currently 10.0.0.14 ) and Port (currently 161 ) to point to your Patlite on your network. In this request to add a new device, AutoEvents are setup to collect the current state of the 3 lights and buzzer every 10 seconds. Notice the reference to the current state device resources in setting up the AutoEvents. curl -X 'POST' 'http://localhost:59881/api/v2/device' -d '[{\"apiVersion\": \"v2\", \"device\": {\"name\": \"patlite1\",\"description\": \"patlite #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"patlite\"],\"serviceName\": \"device-snmp\",\"profileName\": \"patlite-snmp-profile\",\"protocols\": {\"TCP\": {\"Address\": \"10.0.0.14\",\"Port\": \"161\"}}, \"AutoEvents\":[{\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"RedLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"GreenLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"AmberLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"BuzzerCurrentState\"}]}}]' Info Rather than making a REST API call into metadata to add the device, you could alternately provide device configuration files that define the device. These device configuration files would then have to be provided to the service when it starts up. Since you did not create a new Docker image containing the device configuration and just used the existing SNMP device service Docker image, it was easier to make simple API calls to add the profile and device. However, this would mean the profile and device would need to be added each time metadata's database is cleaned out and reset. Test If the device service is up and running and the profile and device have been added correctly, you should now be able to interact with the Patlite via the core command service (and SNMP under the covers via the SNMP device service). 
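The test requests that follow all target the core command service with the same URL shape, visible in the responses above. A tiny helper makes the pattern explicit; `command_url` is my own illustration, not an EdgeX API, and localhost:59882 is the default core command address used throughout these examples:

```python
# Core command requests follow /api/v2/device/name/<device>/<command>.
# command_url is an illustrative helper, not part of EdgeX.
def command_url(device_name, command_name, host="localhost", port=59882):
    return f"http://{host}:{port}/api/v2/device/name/{device_name}/{command_name}"

print(command_url("patlite1", "GreenLightCurrentState"))
# http://localhost:59882/api/v2/device/name/patlite1/GreenLightCurrentState
```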
Get the Current State To get the current state of a light (in the example below the Green light), make a curl request like the following to the command service. curl 'http://localhost:59882/api/v2/device/name/patlite1/GreenLightCurrentState' | json_pp Alert Note that the curl command above assumes that the core command service is available at localhost . Change the host address of your core command service if it is not available at localhost . The results should look something like that below. { \"statusCode\" : 200 , \"apiVersion\" : \"v2\" , \"event\" : { \"origin\" : 1632188382048586660 , \"deviceName\" : \"patlite1\" , \"sourceName\" : \"GreenLightCurrentState\" , \"id\" : \"1e2a7ba1-c273-46d1-b919-207aafbc60ba\" , \"profileName\" : \"patlite-snmp-profile\" , \"apiVersion\" : \"v2\" , \"readings\" : [ { \"origin\" : 1632188382048586660 , \"resourceName\" : \"GreenLightCurrentState\" , \"deviceName\" : \"patlite1\" , \"id\" : \"a41ac1cf-703b-4572-bdef-8487e9a7100e\" , \"valueType\" : \"Int32\" , \"value\" : \"1\" , \"profileName\" : \"patlite-snmp-profile\" } ] } } Info Note the value will be one of 4 numbers indicating the current state of the light Value Description 1 Off 2 On - solid and not flashing 3 Flashing on 4 Flashing quickly on Set a light or buzzer on To turn a signal tower light or the buzzer on, you can issue a PUT device command via the core command service. The example below turns on the Green light. curl --location --request PUT 'http://localhost:59882/api/v2/device/name/patlite1/GreenLight' --header 'Content-Type: application/json' --data-raw '{\"GreenLightControlState\":\"2\",\"GreenLightTimer\":\"0\"}' This command sets the light on (solid versus flashing) immediately (as denoted by the GreenLightTimer parameter being set to 0). The timer value is the number of seconds delay in making the request to the light or buzzer. Again, the control state can be set to one of four values as listed in the table above. 
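The four current-state codes in the table above can be mapped directly when handling responses; `PATLITE_STATE` below simply restates that table, and `describe_reading` is an illustrative helper:

```python
# Current-state codes from the table above.
PATLITE_STATE = {
    1: "Off",
    2: "On - solid and not flashing",
    3: "Flashing on",
    4: "Flashing quickly on",
}

def describe_reading(reading):
    """Translate a CurrentState reading from core command into words."""
    return PATLITE_STATE[int(reading["value"])]

# The GreenLightCurrentState response above carried value "1".
print(describe_reading({"resourceName": "GreenLightCurrentState",
                        "value": "1"}))  # Off
```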
Alert Again note that the curl command above assumes that the core command service is available at localhost . Change the host address of your core command service if it is not available at localhost . Observations Did you notice that EdgeX obfuscates almost all information about SNMP, and managed objects and OIDs? The power of EdgeX is to abstract away protocol differences so that to a user, getting data from a device or setting properties on a device such as this Patlite signal tower is as easy as making simple REST calls into the command service. The only place that protocol information is really seen is in the device profile (where the attributes specify the SNMP OIDs). Of course, the device service must be coded to deal with the protocol specifics and it must know how to translate the simple command REST calls into protocol specific requests of the device. But even device service creation is made easier with the use of the SDKs which provide much of the boilerplate code found in almost every device service regardless of the underlying device protocol.","title":"SNMP"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#snmp","text":"EdgeX - Ireland Release","title":"SNMP"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#overview","text":"In this example, you add a new Patlite Signal Tower which communicates via SNMP. This example demonstrates how to connect a device through the SNMP Device Service. 
Patlite Signal Tower, model NHL-FB2","title":"Overview"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#setup","text":"","title":"Setup"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#hardware-needed","text":"In order to exercise this example, you will need the following hardware A computer able to run EdgeX Foundry A Patlite Signal Tower (NHL-FB2 model) Both the computer and Patlite must be connected to the same ethernet network","title":"Hardware needed"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#software-needed","text":"In addition to the hardware, you will need the following software Docker Docker Compose EdgeX Foundry V2 (Ireland release) curl to run REST commands (you can also use a tool like Postman) If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry.","title":"Software needed"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-snmp-device-service-to-your-docker-composeyml","text":"The EdgeX docker-compose.yml file used to run EdgeX must include the SNMP device service for this example. You can either: download and use the docker-compose.yml file provided with this example or use the EdgeX Compose Builder tool to create your own custom docker-compose.yml file adding device-snmp. See Getting Started using Docker if you need assistance running EdgeX once you have your Docker Compose file.","title":"Add the SNMP Device Service to your docker-compose.yml"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-snmp-device-profile-and-device","text":"SNMP devices, like the Patlite Signal Tower, provide a set of managed objects to get and set property information on the associated device. Each managed object has an address called an object identifier (or OID) that you use to interact with the SNMP device's managed object. You use the OID to query the state of the device or to set properties on the device. 
In the case of the Patlite, there are managed objects for the colored lights and the buzzer of the device. You can read the current state of a colored light (get) or turn the light on (set) by making a call to the proper OIDs for the associated managed object. For example, on the NH series signal towers used in this example, a \"get\" call to the 1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1 OID returns the current state of the Red signal light. A return value of 1 would signal the light is off. A return value of 2 says the light is on. A return value of 3 says the light is flashing. Read this SNMP tutorial to learn more about the basics of the SNMP protocol. See the Patlite NH Series User's Manual for more information on the SNMP OIDs and function calls and parameters needed for some requests.","title":"Add the SNMP Device Profile and Device"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-patlite-device-profile","text":"A device profile has been created for you to get and set the signal tower's three colored lights and to get and set the buzzer. The patlite-snmp device profile defines three device resources for each of the lights and the buzzer. Current State, a read request device resource to get the current state of the requested light or buzzer Control State, a write request device resource to set the current state of the light or buzzer Timer, a write request device resource used in combination with the control state to set the state after the number of seconds provided by the timer resource Note that the attributes of each device resource specify the SNMP OID that the device service will use to make a request of the signal tower. For example, the device resource YAML below (taken from the profile) provides the means to get the current Red light state. Note that a specific OID is provided that is unique to the Red light's current state property. 
- name : \"RedLightCurrentState\" isHidden : false description : \"red light current state\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"R\" defaultValue : \"1\" Below is the device resource definitions for the Red light control state and timer. Again, unique OIDs are provided as attributes for each property. - name : \"RedLightControlState\" isHidden : true description : \"red light state\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"W\" defaultValue : \"1\" - name : \"RedLightTimer\" isHidden : true description : \"red light timer\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"W\" defaultValue : \"1\" In order to set the Red light on, one would need to send an SNMP request to set OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1 to a value of 2 (on state) along with a number of seconds delay to the time at OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1 . Sending a zero value (0) to the timer would say you want to turn the light on immediately. Because setting a light or buzzer requires both of the control state and timer OIDs to be set together (simultaneously), the device profile contains deviceCommands to set the light and timer device resources (and therefore their SNMP property OIDs) in a single operation. Here is the device command to set the Red light. - name : \"RedLight\" readWrite : \"W\" isHidden : false resourceOperations : - { deviceResource : \"RedLightControlState\" } - { deviceResource : \"RedLightTimer\" } You will need to upload this profile into core metadata. Download the Patlite device profile to a convenient directory. Then, using the following curl command, request the profile be uploaded into core metadata. 
curl -X 'POST' 'http://localhost:59881/api/v2/deviceprofile/uploadfile' --form 'file=@\"/home/yourfilelocationhere/patlite-snmp.yml\"' Alert Note that the curl command above assumes that core metadata is available at localhost . Change localhost to the host address of your core metadata service. Also note that you will need to replace the /home/yourfilelocationhere path with the path where the profile resides.","title":"Add the Patlite Device Profile"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-patlite-device","text":"With the Patlite device profile now in metadata, you can add the Patlite device in metadata. When adding the device, you typically need to provide the name, description, labels and admin/op states of the device when creating it. You will also need to associate the device to a device service (in this case the device-snmp device service). You will need to associate the new device to a profile - the patlite profile just added in the step above. And you will need to provide the protocol information (such as the address and port of the device) to tell the device service where it can find the physical device. If you wish the device service to automatically get readings from the device, you will also need to provide AutoEvent properties when creating the device. The curl command to POST the new Patlite device (named patlite1 ) into metadata is provided below. You will need to change the protocol Address (currently 10.0.0.14 ) and Port (currently 161 ) to point to your Patlite on your network. In this request to add a new device, AutoEvents are set up to collect the current state of the 3 lights and buzzer every 10 seconds. Notice the reference to the current state device resources in setting up the AutoEvents. 
curl -X 'POST' 'http://localhost:59881/api/v2/device' -d '[{\"apiVersion\": \"v2\", \"device\": {\"name\": \"patlite1\",\"description\": \"patlite #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"patlite\"],\"serviceName\": \"device-snmp\",\"profileName\": \"patlite-snmp-profile\",\"protocols\": {\"TCP\": {\"Address\": \"10.0.0.14\",\"Port\": \"161\"}}, \"AutoEvents\":[{\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"RedLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"GreenLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"AmberLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"BuzzerCurrentState\"}]}}]' Info Rather than making a REST API call into metadata to add the device, you could alternately provide device configuration files that define the device. These device configuration files would then have to be provided to the service when it starts up. Since you did not create a new Docker image containing the device configuration and just used the existing SNMP device service Docker image, it was easier to make simple API calls to add the profile and device. However, this would mean the profile and device would need to be added each time metadata's database is cleaned out and reset.","title":"Add the Patlite Device"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#test","text":"If the device service is up and running and the profile and device have been added correctly, you should now be able to interact with the Patlite via the core command service (and SNMP under the covers via the SNMP device service).","title":"Test"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#get-the-current-state","text":"To get the current state of a light (in the example below the Green light), make a curl request like the following of the command service. 
curl 'http://localhost:59882/api/v2/device/name/patlite1/GreenLightCurrentState' | json_pp Alert Note that the curl command above assumes that the core command service is available at localhost . Change the host address of your core command service if it is not available at localhost . The results should look something like that below. { \"statusCode\" : 200 , \"apiVersion\" : \"v2\" , \"event\" : { \"origin\" : 1632188382048586660 , \"deviceName\" : \"patlite1\" , \"sourceName\" : \"GreenLightCurrentState\" , \"id\" : \"1e2a7ba1-c273-46d1-b919-207aafbc60ba\" , \"profileName\" : \"patlite-snmp-profile\" , \"apiVersion\" : \"v2\" , \"readings\" : [ { \"origin\" : 1632188382048586660 , \"resourceName\" : \"GreenLightCurrentState\" , \"deviceName\" : \"patlite1\" , \"id\" : \"a41ac1cf-703b-4572-bdef-8487e9a7100e\" , \"valueType\" : \"Int32\" , \"value\" : \"1\" , \"profileName\" : \"patlite-snmp-profile\" } ] } } Info Note the value will be one of 4 numbers indicating the current state of the light Value Description 1 Off 2 On - solid and not flashing 3 Flashing on 4 Flashing quickly on","title":"Get the Current State"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#set-a-light-or-buzzer-on","text":"To turn a signal tower light or the buzzer on, you can issue a PUT device command via the core command service. The example below turns on the Green light. curl --location --request PUT 'http://localhost:59882/api/v2/device/name/patlite1/GreenLight' --header 'Content-Type: application/json' --data-raw '{\"GreenLightControlState\":\"2\",\"GreenLightTimer\":\"0\"}' This command sets the light on (solid versus flashing) immediately (as denoted by the GreenLightTimer parameter being set to 0). The timer value is the number of seconds delay in making the request to the light or buzzer. Again, the control state can be set to one of four values as listed in the table above. Alert Again note that the curl command above assumes that the core command service is available at localhost . 
Change the host address of your core command service if it is not available at localhost .","title":"Set a light or buzzer on"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#observations","text":"Did you notice that EdgeX hides almost all information about SNMP, managed objects, and OIDs? The power of EdgeX is to abstract away protocol differences so that to a user, getting data from a device or setting properties on a device such as this Patlite signal tower is as easy as making simple REST calls into the command service. The only place that protocol information is really seen is in the device profile (where the attributes specify the SNMP OIDs). Of course, the device service must be coded to deal with the protocol specifics and it must know how to translate the simple command REST calls into protocol specific requests of the device. But even device service creation is made easier with the use of the SDKs which provide much of the boilerplate code found in almost every device service regardless of the underlying device protocol.","title":"Observations"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/","text":"Modbus - Data Type Conversion In use cases where the device resource uses an integer data type with a float scale, precision can be lost following transformation. For example, a Modbus device stores the temperature and humidity in an Int16 data type with a float scale of 0.01. If the temperature is 26.53, the read value is 2653. However, following transformation, the value is 26. To avoid this scenario, the device resource data type must differ from the value descriptor data type. This is achieved using the optional rawType attribute in the device profile to define the binary data read from the Modbus device, and a valueType to indicate what data type the user wants to receive. 
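The rawType/valueType conversion described here can be sketched in Go. This is my own illustration, not device-sdk-go code, and it assumes big-endian register bytes (the Modbus convention):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readInt16AsFloat32 mirrors the read path: the register bytes are parsed
// as the rawType (Int16), cast to the valueType (Float32), and then the
// profile's scale (0.01 in the temperature/humidity example) is applied.
func readInt16AsFloat32(registerBytes []byte, scale float32) float32 {
	raw := int16(binary.BigEndian.Uint16(registerBytes))
	return float32(raw) * scale
}

func main() {
	// 0x0A5D = 2653, the raw reading stored for a temperature of 26.53.
	fmt.Println(readInt16AsFloat32([]byte{0x0A, 0x5D}, 0.01))
}
```

Forcing 2653 through an integer valueType instead would truncate the result to 26, which is the precision loss this feature avoids.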
If the rawType attribute exists, the device service parses the binary data according to the defined rawType , then casts the value according to the valueType defined in the properties of the device resources. The following extract from a device profile defines the rawType as Int16 and the valueType as Float32: EdgeX 2.0 For EdgeX 2.0 the device profile has many changes. Please see Device Profile section for more details. Example - Device Profile deviceResources : - name : \"humidity\" description : \"The response value is the result of the original value multiplied by 100.\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : \"1\" , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.01\" units : \"%RH\" - name : \"temperature\" description : \"The response value is the result of the original value multiplied by 100.\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : \"2\" , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.01\" units : \"degrees Celsius\" Read Command A Read command is executed as follows: The device service executes the Read command to read binary data The binary reading data is parsed as an Int16 data type The integer value is cast to a Float32 value Write Command A Write command is executed as follows: The device service casts the requested Float32 value to an integer value The integer value is converted to binary data The device service executes the Write command When to Transform Data You generally need to transform data when scaling readings between a 16-bit integer and a float value. The following limitations apply: rawType supports only Int16 and Uint16 data types The corresponding valueType must be Float32 or Float64 If an unsupported data type is defined for the rawType attribute, the device service throws an exception similar to the following: Read command failed. 
Cmd:temperature err:the raw type Int32 is not supported Supported Transformations The supported transformations are as follows: From rawType To valueType Int16 Float32 Int16 Float64 Uint16 Float32 Uint16 Float64","title":"Modbus - Data Type Conversion"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#modbus-data-type-conversion","text":"In use cases where the device resource uses an integer data type with a float scale, precision can be lost following transformation. For example, a Modbus device stores the temperature and humidity in an Int16 data type with a float scale of 0.01. If the temperature is 26.53, the read value is 2653. However, following transformation, the value is 26. To avoid this scenario, the device resource data type must differ from the value descriptor data type. This is achieved using the optional rawType attribute in the device profile to define the binary data read from the Modbus device, and a valueType to indicate what data type the user wants to receive. If the rawType attribute exists, the device service parses the binary data according to the defined rawType , then casts the value according to the valueType defined in the properties of the device resources. The following extract from a device profile defines the rawType as Int16 and the valueType as Float32: EdgeX 2.0 For EdgeX 2.0 the device profile has many changes. Please see Device Profile section for more details. 
Example - Device Profile deviceResources : - name : \"humidity\" description : \"The response value is the result of the original value multiplied by 100.\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : \"1\" , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.01\" units : \"%RH\" - name : \"temperature\" description : \"The response value is the result of the original value multiplied by 100.\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : \"2\" , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.01\" units : \"degrees Celsius\"","title":"Modbus - Data Type Conversion"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#read-command","text":"A Read command is executed as follows: The device service executes the Read command to read binary data The binary reading data is parsed as an Int16 data type The integer value is cast to a Float32 value","title":"Read Command"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#write-command","text":"A Write command is executed as follows: The device service casts the requested Float32 value to an integer value The integer value is converted to binary data The device service executes the Write command","title":"Write Command"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#when-to-transform-data","text":"You generally need to transform data when scaling readings between a 16-bit integer and a float value. The following limitations apply: rawType supports only Int16 and Uint16 data types The corresponding valueType must be Float32 or Float64 If an unsupported data type is defined for the rawType attribute, the device service throws an exception similar to the following: Read command failed. 
Cmd:temperature err:the raw type Int32 is not supported","title":"When to Transform Data"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#supported-transformations","text":"The supported transformations are as follows: From rawType To valueType Int16 Float32 Int16 Float64 Uint16 Float32 Uint16 Float64","title":"Supported Transformations"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/","text":"Sending and Consuming Binary Data From EdgeX Device Services EdgeX - Ireland Release Overview In this example, we will demonstrate how to send EdgeX Events and Readings that contain arbitrary binary data. DeviceService Implementation Device Profile To indicate that a deviceResource represents a Binary type, the following format is used: deviceResources : - name : \"camera_snapshot\" isHidden : false description : \"snapshot from camera\" properties : valueType : \"Binary\" readWrite : \"R\" mediaType : \"image/jpeg\" deviceCommands : - name : \"OnvifSnapshot\" isHidden : false readWrite : \"R\" resourceOperations : - { deviceResource : \"camera_snapshot\" } Device Service Here is a snippet from a hypothetical Device Service's HandleReadCommands() method that produces an event that represents a JPEG image captured from a camera: if req . DeviceResourceName == \"camera_snapshot\" { data , err := cameraClient . GetSnapshot () // returns ([]byte, error) check ( err ) cv , err := sdkModels . NewCommandValue ( reqs [ i ]. DeviceResourceName , common . ValueTypeBinary , data ) check ( err ) responses [ i ] = cv } Calling Device Service Command Querying core-metadata for the Device's Commands and DeviceName provides the following as the URL to request a reading from the snapshot command: http://localhost:59990/api/v2/device/name/camera-device/OnvifSnapshot Unlike with non-binary Events, making a request to this URL will return an event in CBOR representation. CBOR is a representation of binary data loosely based off of the JSON data model. 
This Event will not be human-readable. Parsing CBOR Encoded Events To access the data enclosed in these Events and Readings, they will first need to be decoded from CBOR. The following is a simple Go program that reads in the CBOR response from a file containing the response from the previous HTTP request. The Go library recommended for parsing these events can be found at https://github.com/fxamacker/cbor/ package main import ( \"io/ioutil\" \"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos/requests\" \"github.com/fxamacker/cbor/v2\" ) func check ( e error ) { if e != nil { panic ( e ) } } func main () { // Read in our cbor data fileBytes , err := ioutil . ReadFile ( \"/Users/johndoe/Desktop/image.cbor\" ) check ( err ) // Decode into an EdgeX Event eventRequest := & requests . AddEventRequest {} err = cbor . Unmarshal ( fileBytes , eventRequest ) check ( err ) // Grab binary data and write to a file imgBytes := eventRequest . Event . Readings [ 0 ]. BinaryValue ioutil . WriteFile ( \"/Users/johndoe/Desktop/image.jpeg\" , imgBytes , 0644 ) } In the code above, the CBOR data is read into a byte array , an EdgeX Event struct is created, and cbor.Unmarshal parses the CBOR-encoded data and stores the result in the Event struct. Finally, the binary payload is written to a file from the BinaryValue field of the Reading. This method would work as well for decoding Events off the EdgeX message bus. Encoding Arbitrary Structures in Events The Device SDK's NewCommandValue() function above only accepts a byte slice as binary data. Any arbitrary Go structure can be encoded in a binary reading by first encoding the structure into a byte slice using CBOR. The following illustrates this method: // DeviceService HandleReadCommands() code: foo := struct { X int Y int Z int Bar string } { X : 7 , Y : 3 , Z : 100 , Bar : \"Hello world!\" , } data , err := cbor . Marshal ( & foo ) check ( err ) cv , err := sdkModels . NewCommandValue ( reqs [ i ]. DeviceResourceName , common . 
ValueTypeBinary , data ) responses [ i ] = cv This code takes the anonymous struct with fields X, Y, Z, and Bar (of different types) and serializes it into a byte slice using the same cbor library, passing the output to NewCommandValue() . When consuming these events, another level of decoding will need to take place to get the structure out of the binary payload. func main () { // Read in our cbor data fileBytes , err := ioutil . ReadFile ( \"/Users/johndoe/Desktop/foo.cbor\" ) check ( err ) // Decode into an EdgeX Event eventRequest := & requests . AddEventRequest {} err = cbor . Unmarshal ( fileBytes , eventRequest ) check ( err ) // Decode into arbitrary type foo := struct { X int Y int Z int Bar string }{} err = cbor . Unmarshal ( eventRequest . Event . Readings [ 0 ]. BinaryValue , & foo ) check ( err ) fmt . Println ( foo ) } This code takes a command response in the same format as the previous example, but uses the cbor library to decode the CBOR data inside the EdgeX Reading's BinaryValue field. Using this approach, an Event can be sent containing an arbitrary, flexible structure. 
Use cases could be a Reading containing multiple images, a variable length list of integer read-outs, etc.","title":"Sending and Consuming Binary Data From EdgeX Device Services"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#sending-and-consuming-binary-data-from-edgex-device-services","text":"EdgeX - Ireland Release","title":"Sending and Consuming Binary Data From EdgeX Device Services"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#overview","text":"In this example, we will demonstrate how to send EdgeX Events and Readings that contain arbitrary binary data.","title":"Overview"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#deviceservice-implementation","text":"","title":"DeviceService Implementation"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#device-profile","text":"To indicate that a deviceResource represents a Binary type, the following format is used: deviceResources : - name : \"camera_snapshot\" isHidden : false description : \"snapshot from camera\" properties : valueType : \"Binary\" readWrite : \"R\" mediaType : \"image/jpeg\" deviceCommands : - name : \"OnvifSnapshot\" isHidden : false readWrite : \"R\" resourceOperations : - { deviceResource : \"camera_snapshot\" }","title":"Device Profile"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#device-service","text":"Here is a snippet from a hypothetical Device Service's HandleReadCommands() method that produces an event that represents a JPEG image captured from a camera: if req . DeviceResourceName == \"camera_snapshot\" { data , err := cameraClient . GetSnapshot () // returns ([]byte, error) check ( err ) cv , err := sdkModels . NewCommandValue ( reqs [ i ]. DeviceResourceName , common . 
ValueTypeBinary , data ) check ( err ) responses [ i ] = cv }","title":"Device Service"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#calling-device-service-command","text":"Querying core-metadata for the Device's Commands and DeviceName provides the following as the URL to request a reading from the snapshot command: http://localhost:59990/api/v2/device/name/camera-device/OnvifSnapshot Unlike with non-binary Events, making a request to this URL will return an event in CBOR representation. CBOR is a representation of binary data loosely based off of the JSON data model. This Event will not be human-readable.","title":"Calling Device Service Command"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#parsing-cbor-encoded-events","text":"To access the data enclosed in these Events and Readings, they will first need to be decoded from CBOR. The following is a simple Go program that reads in the CBOR response from a file containing the response from the previous HTTP request. The Go library recommended for parsing these events can be found at https://github.com/fxamacker/cbor/ package main import ( \"io/ioutil\" \"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos/requests\" \"github.com/fxamacker/cbor/v2\" ) func check ( e error ) { if e != nil { panic ( e ) } } func main () { // Read in our cbor data fileBytes , err := ioutil . ReadFile ( \"/Users/johndoe/Desktop/image.cbor\" ) check ( err ) // Decode into an EdgeX Event eventRequest := & requests . AddEventRequest {} err = cbor . Unmarshal ( fileBytes , eventRequest ) check ( err ) // Grab binary data and write to a file imgBytes := eventRequest . Event . Readings [ 0 ]. BinaryValue ioutil . WriteFile ( \"/Users/johndoe/Desktop/image.jpeg\" , imgBytes , 0644 ) } In the code above, the CBOR data is read into a byte array , an EdgeX Event struct is created, and cbor.Unmarshal parses the CBOR-encoded data and stores the result in the Event struct. 
Finally, the binary payload is written to a file from the BinaryValue field of the Reading. This method would work as well for decoding Events off the EdgeX message bus.","title":"Parsing CBOR Encoded Events"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#encoding-arbitrary-structures-in-events","text":"The Device SDK's NewCommandValue() function above only accepts a byte slice as binary data. Any arbitrary Go structure can be encoded in a binary reading by first encoding the structure into a byte slice using CBOR. The following illustrates this method: // DeviceService HandleReadCommands() code: foo := struct { X int Y int Z int Bar string } { X : 7 , Y : 3 , Z : 100 , Bar : \"Hello world!\" , } data , err := cbor . Marshal ( & foo ) check ( err ) cv , err := sdkModels . NewCommandValue ( reqs [ i ]. DeviceResourceName , common . ValueTypeBinary , data ) responses [ i ] = cv This code takes the anonymous struct with fields X, Y, Z, and Bar (of different types) and serializes it into a byte slice using the same cbor library, passing the output to NewCommandValue() . When consuming these events, another level of decoding will need to take place to get the structure out of the binary payload. func main () { // Read in our cbor data fileBytes , err := ioutil . ReadFile ( \"/Users/johndoe/Desktop/foo.cbor\" ) check ( err ) // Decode into an EdgeX Event eventRequest := & requests . AddEventRequest {} err = cbor . Unmarshal ( fileBytes , eventRequest ) check ( err ) // Decode into arbitrary type foo := struct { X int Y int Z int Bar string }{} err = cbor . Unmarshal ( eventRequest . Event . Readings [ 0 ]. BinaryValue , & foo ) check ( err ) fmt . Println ( foo ) } This code takes a command response in the same format as the previous example, but uses the cbor library to decode the CBOR data inside the EdgeX Reading's BinaryValue field. Using this approach, an Event can be sent containing an arbitrary, flexible structure. 
Use cases could be a Reading containing multiple images, a variable length list of integer read-outs, etc.","title":"Encoding Arbitrary Structures in Events"},{"location":"examples/Ch-ExamplesVirtualDeviceService/","text":"Using the Virtual Device Service Overview The Virtual Device Service GO can simulate different kinds of devices to generate Events and Readings to the Core Data Micro Service. Furthermore, users can send commands and get responses through the Command and Control Micro Service. The Virtual Device Service allows you to execute functional or performance tests without any real devices. This version of the Virtual Device Service is implemented based on Device SDK GO , and uses ql (an embedded SQL database engine) to simulate virtual resources. Introduction For information on the virtual device service see virtual device under the Microservices tab. Working with the Virtual Device Service Running the Virtual Device Service Container The virtual device service depends on the EdgeX core services. By default, the virtual device service is part of the EdgeX community provided Docker Compose files. If you use one of the community provided Compose files , you can pull and run EdgeX inclusive of the virtual device service without having to make any changes. Running the Virtual Device Service Natively (in development mode) If you're going to download the source code and run the virtual device service in development mode, make sure that the EdgeX core service containers are up before starting the virtual device service. See how to work with EdgeX in a hybrid environment in order to run the virtual device service outside of containers. This same file will instruct you on how to get and run the virtual device service code . GET command example The virtual device service is configured to send simulated data to core data every few seconds (from 10-30 seconds depending on device - see the device configuration file for AutoEvent details). 
You can exercise the GET request on the command service to see the generated value produced by any of the virtual device's simulated devices. Use the curl command below to exercise the virtual device service API (via core command service). curl -X GET localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Warning The example above assumes your core command service is available on localhost at the default service port of 59882. Also, you must replace your device name and command name in the example above with your virtual device service's identifiers. If you are not sure of the identifiers to use, query the command service for the full list of commands and devices at http://localhost:59882/api/v2/device/all . The virtual device should respond (via the core command service) with event/reading JSON similar to that below. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"event\" : { \"apiVersion\" : \"v2\" , \"id\" : \"3beb5b83-d923-4c8a-b949-c1708b6611c1\" , \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"sourceName\" : \"Int8\" , \"origin\" : 1626227770833093400 , \"readings\" : [ { \"id\" : \"baf42bc7-307a-4647-8876-4e84759fd2ba\" , \"origin\" : 1626227770833093400 , \"deviceName\" : \"Random-Integer-Device\" , \"resourceName\" : \"Int8\" , \"profileName\" : \"Random-Integer-Device\" , \"valueType\" : \"Int8\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"value\" : \"-5\" } ] } } PUT command example - Assign a value to a resource The virtual devices managed by the virtual device service can also be actuated. The virtual device can be told to enable or disable random number generation. When disabled, the virtual device service can be told what value to respond with for all GET operations. When setting the fixed value, the value must be valid for the data type of the virtual device. For example, the minimum value of Int8 cannot be less than -128 and the maximum value cannot be greater than 127. 
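The Int8 range constraint just described can be checked before issuing the PUT. A small sketch using only the Go standard library (the helper name is my own, not part of the virtual device service):

```go
package main

import (
	"fmt"
	"strconv"
)

// validInt8 reports whether s parses as a value within the Int8 range
// (-128 to 127), the limit noted above for fixed GET return values.
func validInt8(s string) bool {
	_, err := strconv.ParseInt(s, 10, 8) // bitSize 8 enforces the Int8 range
	return err == nil
}

func main() {
	fmt.Println(validInt8("123")) // true: a legal fixed value for Int8
	fmt.Println(validInt8("300")) // false: outside -128..127
}
```

The same check generalizes to the other integer resources by changing the bit size passed to strconv.ParseInt.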
Below is an example actuation of one of the virtual devices. In this example, it sets the fixed GET return value to 123 and turns off random generation. curl -X PUT -d '{\"Int8\": \"123\", \"EnableRandomization_Int8\": \"false\"}' localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Note The value of the resource's EnableRandomization property is simultaneously updated to false when sending a put command to assign a specified value to the resource. Therefore, explicitly setting EnableRandomization_Int8 to false is not actually required in the call above. Return the virtual device to randomly generating numbers with another PUT call. curl -X PUT -d '{\"EnableRandomization_Int8\": \"true\"}' localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Reference Architectural Diagram Sequence Diagram Virtual Resource Table Schema Column Type DEVICE_NAME STRING COMMAND_NAME STRING DEVICE_RESOURCE_NAME STRING ENABLE_RANDOMIZATION BOOL DATA_TYPE STRING VALUE STRING","title":"Using the Virtual Device Service"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#using-the-virtual-device-service","text":"","title":"Using the Virtual Device Service"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#overview","text":"The Virtual Device Service GO can simulate different kinds of devices to generate Events and Readings to the Core Data Micro Service. Furthermore, users can send commands and get responses through the Command and Control Micro Service. The Virtual Device Service allows you to execute functional or performance tests without any real devices. 
This version of the Virtual Device Service is implemented based on Device SDK GO , and uses ql (an embedded SQL database engine) to simulate virtual resources.","title":"Overview"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#introduction","text":"For information on the virtual device service see virtual device under the Microservices tab.","title":"Introduction"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#working-with-the-virtual-device-service","text":"","title":"Working with the Virtual Device Service"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#running-the-virtual-device-service-container","text":"The virtual device service depends on the EdgeX core services. By default, the virtual device service is part of the EdgeX community provided Docker Compose files. If you use one of the community provided Compose files , you can pull and run EdgeX inclusive of the virtual device service without having to make any changes.","title":"Running the Virtual Device Service Container"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#running-the-virtual-device-service-natively-in-development-mode","text":"If you're going to download the source code and run the virtual device service in development mode, make sure that the EdgeX core service containers are up before starting the virtual device service. See how to work with EdgeX in a hybrid environment in order to run the virtual device service outside of containers. This same file will instruct you on how to get and run the virtual device service code .","title":"Running the Virtual Device Service Natively (in development mode)"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#get-command-example","text":"The virtual device service is configured to send simulated data to core data every few seconds (from 10-30 seconds depending on device - see the device configuration file for AutoEvent details). 
You can exercise the GET request on the command service to see the generated value produced by any of the virtual device's simulated devices. Use the curl command below to exercise the virtual device service API (via core command service). curl -X GET localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Warning The example above assumes your core command service is available on localhost at the default service port of 59882. Also, you must replace your device name and command name in the example above with your virtual device service's identifiers. If you are not sure of the identifiers to use, query the command service for the full list of commands and devices at http://localhost:59882/api/v2/device/all . The virtual device should respond (via the core command service) with event/reading JSON similar to that below. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"event\" : { \"apiVersion\" : \"v2\" , \"id\" : \"3beb5b83-d923-4c8a-b949-c1708b6611c1\" , \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"sourceName\" : \"Int8\" , \"origin\" : 1626227770833093400 , \"readings\" : [ { \"id\" : \"baf42bc7-307a-4647-8876-4e84759fd2ba\" , \"origin\" : 1626227770833093400 , \"deviceName\" : \"Random-Integer-Device\" , \"resourceName\" : \"Int8\" , \"profileName\" : \"Random-Integer-Device\" , \"valueType\" : \"Int8\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"value\" : \"-5\" } ] } }","title":"GET command example"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#put-command-example-assign-a-value-to-a-resource","text":"The virtual devices managed by the virtual device service can also be actuated. The virtual device can be told to enable or disable random number generation. When disabled, the virtual device service can be told what value to respond with for all GET operations. When setting the fixed value, the value must be valid for the data type of the virtual device. 
For example, the minimum value of Int8 cannot be less than -128 and the maximum value cannot be greater than 127. Below is an example actuation of one of the virtual devices. In this example, it sets the fixed GET return value to 123 and turns off random generation. curl -X PUT -d '{\"Int8\": \"123\", \"EnableRandomization_Int8\": \"false\"}' localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Note The value of the resource's EnableRandomization property is simultaneously updated to false when sending a put command to assign a specified value to the resource. Therefore, explicitly setting EnableRandomization_Int8 to false is not actually required in the call above. Return the virtual device to randomly generating numbers with another PUT call. curl -X PUT -d '{\"EnableRandomization_Int8\": \"true\"}' localhost:59882/api/v2/device/name/Random-Integer-Device/Int8","title":"PUT command example - Assign a value to a resource"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#reference","text":"","title":"Reference"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#architectural-diagram","text":"","title":"Architectural Diagram"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#sequence-diagram","text":"","title":"Sequence Diagram"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#virtual-resource-table-schema","text":"Column Type DEVICE_NAME STRING COMMAND_NAME STRING DEVICE_RESOURCE_NAME STRING ENABLE_RANDOMIZATION BOOL DATA_TYPE STRING VALUE STRING","title":"Virtual Resource Table Schema"},{"location":"general/ContainerNames/","text":"EdgeX Container Names The following table provides the list of the default EdgeX Docker image names to the Docker container name and Docker Compose names. EdgeX 2.0 For EdgeX 2.0 the EdgeX docker image names have been simplified and made consistent across all EdgeX services. 
Core Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/core-data edgex-core-data edgex-core-data data edgexfoundry/core-metadata edgex-core-metadata edgex-core-metadata metadata edgexfoundry/core-command edgex-core-command edgex-core-command command Supporting Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/support-notifications edgex-support-notifications edgex-support-notifications notifications edgexfoundry/support-scheduler edgex-support-scheduler edgex-support-scheduler scheduler Application & Analytics Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/app-service-configurable edgex-app-rules-engine edgex-app-rules-engine app-service-rules edgexfoundry/app-service-configurable edgex-app-http-export edgex-app-http-export app-service-http-export edgexfoundry/app-service-configurable edgex-app-mqtt-export edgex-app-mqtt-export app-service-mqtt-export emqx/kuiper edgex-kuiper edgex-kuiper rulesengine Device Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/device-virtual edgex-device-virtual edgex-device-virtual device-virtual edgexfoundry/device-mqtt edgex-device-mqtt edgex-device-mqtt device-mqtt edgexfoundry/device-rest edgex-device-rest edgex-device-rest device-rest edgexfoundry/device-modbus edgex-device-modbus edgex-device-modbus device-modbus edgexfoundry/device-snmp edgex-device-snmp edgex-device-snmp device-snmp edgexfoundry/device-bacnet edgex-device-bacnet edgex-device-bacnet device-bacnet edgexfoundry/device-camera edgex-device-camera edgex-device-camera device-camera edgexfoundry/device-grove edgex-device-grove edgex-device-grove device-grove edgexfoundry/device-coap edgex-device-coap edgex-device-coap device-coap Security Docker image name Docker container name Docker network hostname Docker Compose service name vault edgex-vault 
edgex-vault vault postgres edgex-kong-db edgex-kong-db kong-db kong edgex-kong edgex-kong kong edgexfoundry/security-proxy-setup edgex-security-proxy-setup edgex-security-proxy-setup proxy-setup edgexfoundry/security-secretstore-setup edgex-security-secretstore-setup edgex-security-secretstore-setup secretstore-setup edgexfoundry/security-bootstrapper edgex-security-bootstrapper edgex-security-bootstrapper security-bootstrapper Miscellaneous Docker image name Docker container name Docker network hostname Docker Compose service name consul edgex-core-consul edgex-core-consul consul redis edgex-redis edgex-redis database edgexfoundry/sys-mgmt-agent edgex-sys-mgmt-agent edgex-sys-mgmt-agent system","title":"EdgeX Container Names"},{"location":"general/ContainerNames/#edgex-container-names","text":"The following table provides the list of the default EdgeX Docker image names to the Docker container name and Docker Compose names. EdgeX 2.0 For EdgeX 2.0 the EdgeX docker image names have been simplified and made consistent across all EdgeX services. 
Core Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/core-data edgex-core-data edgex-core-data data edgexfoundry/core-metadata edgex-core-metadata edgex-core-metadata metadata edgexfoundry/core-command edgex-core-command edgex-core-command command Supporting Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/support-notifications edgex-support-notifications edgex-support-notifications notifications edgexfoundry/support-scheduler edgex-support-scheduler edgex-support-scheduler scheduler Application & Analytics Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/app-service-configurable edgex-app-rules-engine edgex-app-rules-engine app-service-rules edgexfoundry/app-service-configurable edgex-app-http-export edgex-app-http-export app-service-http-export edgexfoundry/app-service-configurable edgex-app-mqtt-export edgex-app-mqtt-export app-service-mqtt-export emqx/kuiper edgex-kuiper edgex-kuiper rulesengine Device Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/device-virtual edgex-device-virtual edgex-device-virtual device-virtual edgexfoundry/device-mqtt edgex-device-mqtt edgex-device-mqtt device-mqtt edgexfoundry/device-rest edgex-device-rest edgex-device-rest device-rest edgexfoundry/device-modbus edgex-device-modbus edgex-device-modbus device-modbus edgexfoundry/device-snmp edgex-device-snmp edgex-device-snmp device-snmp edgexfoundry/device-bacnet edgex-device-bacnet edgex-device-bacnet device-bacnet edgexfoundry/device-camera edgex-device-camera edgex-device-camera device-camera edgexfoundry/device-grove edgex-device-grove edgex-device-grove device-grove edgexfoundry/device-coap edgex-device-coap edgex-device-coap device-coap Security Docker image name Docker container name Docker network hostname Docker Compose service name vault edgex-vault 
edgex-vault vault postgres edgex-kong-db edgex-kong-db kong-db kong edgex-kong edgex-kong kong edgexfoundry/security-proxy-setup edgex-security-proxy-setup edgex-security-proxy-setup proxy-setup edgexfoundry/security-secretstore-setup edgex-security-secretstore-setup edgex-security-secretstore-setup secretstore-setup edgexfoundry/security-bootstrapper edgex-security-bootstrapper edgex-security-bootstrapper security-bootstrapper Miscellaneous Docker image name Docker container name Docker network hostname Docker Compose service name consul edgex-core-consul edgex-core-consul consul redis edgex-redis edgex-redis database edgexfoundry/sys-mgmt-agent edgex-sys-mgmt-agent edgex-sys-mgmt-agent system","title":"EdgeX Container Names"},{"location":"general/Definitions/","text":"Definitions The following glossary provides terms used in EdgeX Foundry. The definitions are based on how EdgeX and its community use the terms versus any strict technical or industry definition. Actuate To cause a machine or device to operate. In EdgeX terms, to command a device or sensor under management of EdgeX to do something (example: stop a motor) or to reconfigure itself (example: set a thermostat's cooling point). Brownfield and Greenfield Brownfield refers to older legacy equipment (nodes, devices, sensors) in an edge/IoT deployment, which typically uses older protocols. Greenfield refers to, typically, new equipment with modern protocols. CBOR An acronym for \"concise binary object representation.\" A binary data serialization format used by EdgeX to transport binary sensed data (like an image). The user can also choose to send all data via CBOR for efficiency purposes, but at the expense of having EdgeX convert the CBOR into another format whenever the data needs to be understood and inspected or to persist the data. Containerized EdgeX micro services and infrastructure (i.e. databases, registry, etc.) 
are built as executable programs, put into Docker images, and made available via Docker Hub (and Nexus repository for nightly builds). A service (or infrastructure element) that is available in Docker Hub (or Nexus) is said to be containerized. Docker images can be quickly downloaded and new Docker containers created from the images. Contributor/Developer If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort. Created time stamp The Created time stamp is the time the data was created in the database and is unchangeable. The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data was sent to EdgeX Foundry and the database. Usually, the Origin and Created time stamps are the same, or very close to being the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different. If persistence is disabled in core-data, the time stamp will default to 0. Device In EdgeX parlance, \"device\" is used to refer to a sensor, actuator, or IoT \"thing\". A sensor generally collects information from the physical world - like a temperature or vibration sensor. Actuators are machines that can be told to do something. Actuators move or otherwise control a mechanism or system - like a valve on a pump. While there may be some technical differences, for the purposes of EdgeX documentation, device will refer to a sensor, actuator or \"thing\". 
Edge Analytics The terms edge or local analytics (the terms are used interchangeably and have the same meaning in this context) for the purposes of edge computing (and EdgeX), refer to an \u201canalytics\u201d service that: - Receives and interprets the EdgeX sensor data to some degree; some analytics services are more sophisticated and able to provide more insights than others - Makes determinations on what actions and actuations need to occur based on the insights it has achieved, thereby driving actuation requests to EdgeX associated devices or other services (like notifications) The analytics service could be some simple logic built into an app service, a rules engine package, or an agent of some artificial intelligence/machine learning system. From an EdgeX perspective, actionable intelligence generation is all the same. From an EdgeX perspective, edge analytics = seeing the edge data and being able to make requests to act on what is seen. While EdgeX provides a rules engine service as its reference implementation of local analytics, app services and its data preparation capability allow sensor data to be streamed to any analytics package. Because of EdgeX\u2019s micro service architecture and distributed nature, the analytics service would not necessarily have to run local to the devices / sensors. In other words, it would not have to run at the edge. App services could deliver the edge data to analytics living in the cloud. However, in these scenarios, the insight intelligence would not be considered local or edge in context. Because of latency concerns, data security and privacy needs, intermittent connectivity of edge systems, and other reasons, it is often vital for edge platforms to retain an analytic capability at the edge or local. Gateway An IoT gateway is a compute platform at the farthest ends of an edge or IoT network. 
It is the host or \u201cbox\u201d to which physical sensors and devices connect and that is, in turn, connected to the networks (wired or wirelessly) of the information technology realm. IoT or edge gateways are compute platforms that connect \u201cthings\u201d (sensors and devices) to IT networks and systems. Micro service In a micro service architecture, each component has its own process. This is in contrast to a monolithic architecture in which all components of the application run in the same process. Benefits of micro service architectures include: - Allow any one service to be replaced and upgraded more easily - Allow services to be programmed using different programming languages and underlying technical solutions (use the best technology for each specific service) - Ex: services written in C can communicate and work with services written in Go - This allows organizations building solutions to maximize available developer resources and some legacy code - Allow services to be distributed across host compute platforms - allowing better utilization of available compute resources - Allow for more scalable solutions by adding copies of services when needed Origin time stamp The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data is sent to EdgeX Foundry and the database. The Created time stamp is the time the data was created in the database. Usually, the Origin and Created time stamps are the same or very close to the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different. Reference Implementation Default and example implementation(s) offered by the EdgeX community. Other implementations may be offered by 3rd parties or for specialization. Resource A piece of information or data available from a sensor or \"thing\". 
For example, a thermostat would have temperature and humidity resources. A resource has a name (ResourceName) to identify it (\"temperature\" or \"humidity\" in this example) and a value (the sensed data - like 72 degrees). A resource may also have additional properties or attributes associated with it. The data type of the value (e.g., integer, float, string, etc.) would be an example of a resource property. Rules Engine Rules engines are important to the IoT edge system. A rules engine is a software system that is connected to a collection of data (either database or data stream). The rules engine examines various elements of the data and monitors the data, and then triggers some action based on the results of the monitoring of the data. A rules engine is a collection of \"If-Then\" conditional statements. The \"If\" informs the rules engine what data to look at and what ranges or values of data must match in order to trigger the \"Then\" part of the statement, which then informs the rules engine what action to take or what external resource to call on, when the data is a match to the \"If\" statement. Most rules engines can be dynamically programmed meaning that new \"If-Then\" statements or rules, can be provided while the engine is running. 
Software Development Kit In EdgeX, a software development kit (or SDK) is a library or module to be incorporated into a new micro service. It provides a lot of the boilerplate code and scaffolding associated with the type of service being created. The SDK allows the developer to focus on the details of the service functionality and not have to worry about the mundane tasks associated with EdgeX services. South and North Side South Side: All IoT objects, within the physical realm, and the edge of the network that communicates directly with those devices, sensors, actuators, and other IoT objects, and collects the data from them, is known collectively as the \"south side.\" North Side: The cloud (or enterprise system) where data is collected, stored, aggregated, analyzed, and turned into information, and the part of the network that communicates with the cloud, is referred to as the \"north side\" of the network. EdgeX enables data to be sent \"north, \" \"south, \" or laterally as needed and as directed. \"Snappy\" / Ubuntu Core & Snaps A Linux-based Operating System provided by Ubuntu - formally called Ubuntu Core but often referred to as \"Snappy\". The packages are called 'snaps' and the tool for using them 'snapd', and they work for phone, cloud, internet of things, and desktop computers. The \"Snap\" packages are self-contained and have no dependency on external stores. \"Snaps\" can be used to create command line tools, background services, and desktop applications. User If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\".","title":"Definitions"},{"location":"general/Definitions/#definitions","text":"The following glossary provides terms used in EdgeX Foundry. 
The definitions are based on how EdgeX and its community use the terms versus any strict technical or industry definition.","title":"Definitions"},{"location":"general/Definitions/#actuate","text":"To cause a machine or device to operate. In EdgeX terms, to command a device or sensor under management of EdgeX to do something (example: stop a motor) or to reconfigure itself (example: set a thermostat's cooling point).","title":"Actuate"},{"location":"general/Definitions/#brownfield-and-greenfield","text":"Brownfield refers to older legacy equipment (nodes, devices, sensors) in an edge/IoT deployment, which typically uses older protocols. Greenfield refers to, typically, new equipment with modern protocols.","title":"Brownfield and Greenfield"},{"location":"general/Definitions/#cbor","text":"An acronym for \"concise binary object representation.\" A binary data serialization format used by EdgeX to transport binary sensed data (like an image). The user can also choose to send all data via CBOR for efficiency purposes, but at the expense of having EdgeX convert the CBOR into another format whenever the data needs to be understood and inspected or to persist the data.","title":"CBOR"},{"location":"general/Definitions/#containerized","text":"EdgeX micro services and infrastructure (i.e. databases, registry, etc.) are built as executable programs, put into Docker images, and made available via Docker Hub (and Nexus repository for nightly builds). A service (or infrastructure element) that is available in Docker Hub (or Nexus) is said to be containerized. Docker images can be quickly downloaded and new Docker containers created from the images.","title":"Containerized"},{"location":"general/Definitions/#contributordeveloper","text":"If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". 
\"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort.","title":"Contributor/Developer"},{"location":"general/Definitions/#created-time-stamp","text":"The Created time stamp is the time the data was created in the database and is unchangeable. The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data was sent to EdgeX Foundry and the database. Usually, the Origin and Created time stamps are the same, or very close to being the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different. If persistence is disable in core-data, the time stamp will default to 0.","title":"Created time stamp"},{"location":"general/Definitions/#device","text":"In EdgeX parlance, \"device\" is used to refer to a sensor, actuator, or IoT \"thing\". A sensor generally collects information from the physical world - like a temperature or vibration sensor. Actuators are machines that can be told to do something. Actuators move or otherwise control a mechanism or system - like a value on a pump. 
While there may be some technical differences, for the purposes of EdgeX documentation, device will refer to a sensor, actuator or \"thing\".","title":"Device"},{"location":"general/Definitions/#edge-analytics","text":"The terms edge or local analytics (the terms are used interchangeably and have the same meaning in this context) for the purposes of edge computing (and EdgeX), refer to an \u201canalytics\u201d service that: - Receives and interprets the EdgeX sensor data to some degree; some analytics services are more sophisticated and able to provide more insights than others - Makes determinations on what actions and actuations need to occur based on the insights it has achieved, thereby driving actuation requests to EdgeX associated devices or other services (like notifications) The analytics service could be some simple logic built into an app service, a rules engine package, or an agent of some artificial intelligence/machine learning system. From an EdgeX perspective, actionable intelligence generation is all the same. From an EdgeX perspective, edge analytics = seeing the edge data and being able to make requests to act on what is seen. While EdgeX provides a rules engine service as its reference implementation of local analytics, app services and its data preparation capability allow sensor data to be streamed to any analytics package. Because of EdgeX\u2019s micro service architecture and distributed nature, the analytics service would not necessarily have to run local to the devices / sensors. In other words, it would not have to run at the edge. App services could deliver the edge data to analytics living in the cloud. However, in these scenarios, the insight intelligence would not be considered local or edge in context. 
Because of latency concerns, data security and privacy needs, intermittent connectivity of edge systems, and other reasons, it is often vital for edge platforms to retain an analytic capability at the edge or local.","title":"Edge Analytics"},{"location":"general/Definitions/#gateway","text":"An IoT gateway is a compute platform at the farthest ends of an edge or IoT network. It is the host or \u201cbox\u201d to which physical sensors and devices connect and that is, in turn, connected to the networks (wired or wirelessly) of the information technology realm. IoT or edge gateways are compute platforms that connect \u201cthings\u201d (sensors and devices) to IT networks and systems.","title":"Gateway"},{"location":"general/Definitions/#micro-service","text":"In a micro service architecture, each component has its own process. This is in contrast to a monolithic architecture in which all components of the application run in the same process. Benefits of micro service architectures include: - Allow any one service to be replaced and upgraded more easily - Allow services to be programmed using different programming languages and underlying technical solutions (use the best technology for each specific service) - Ex: services written in C can communicate and work with services written in Go - This allows organizations building solutions to maximize available developer resources and some legacy code - Allow services to be distributed across host compute platforms - allowing better utilization of available compute resources - Allow for more scalable solutions by adding copies of services when needed","title":"Micro service"},{"location":"general/Definitions/#origin-time-stamp","text":"The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data is sent to EdgeX Foundry and the database. The Created time stamp is the time the data was created in the database. 
Usually, the Origin and Created time stamps are the same or very close to the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different.","title":"Origin time stamp"},{"location":"general/Definitions/#reference-implementation","text":"Default and example implementation(s) offered by the EdgeX community. Other implementations may be offered by 3rd parties or for specialization.","title":"Reference Implementation"},{"location":"general/Definitions/#resource","text":"A piece of information or data available from a sensor or \"thing\". For example, a thermostat would have temperature and humidity resources. A resource has a name (ResourceName) to identify it (\"temperature\" or \"humidity\" in this example) and a value (the sensed data - like 72 degrees). A resource may also have additional properties or attributes associated with it. The data type of the value (e.g., integer, float, string, etc.) would be an example of a resource property.","title":"Resource"},{"location":"general/Definitions/#rules-engine","text":"Rules engines are important to the IoT edge system. A rules engine is a software system that is connected to a collection of data (either database or data stream). The rules engine examines various elements of the data and monitors the data, and then triggers some action based on the results of the monitoring of the data. A rules engine is a collection of \"If-Then\" conditional statements. The \"If\" informs the rules engine what data to look at and what ranges or values of data must match in order to trigger the \"Then\" part of the statement, which then informs the rules engine what action to take or what external resource to call on, when the data is a match to the \"If\" statement. Most rules engines can be dynamically programmed meaning that new \"If-Then\" statements or rules, can be provided while the engine is running. 
The rules are often defined by some type of rule language with simple syntax to enable non-Developers to provide the new rules. Rules engines are one of the simplest forms of \"edge analytics\" provided in IoT systems. Rules engines enable data picked up by IoT sensors to be monitored and acted upon (actuated). Typically, the actuation is accomplished on another IoT device or sensor. For example, a temperature sensor in an equipment enclosure may be monitored by a rules engine to detect when the temperature is getting too warm (or too cold) for safe or optimum operation of the equipment. The rules engine, upon detecting temperatures outside of the acceptable range, shuts off the equipment in the enclosure.","title":"Rules Engine"},{"location":"general/Definitions/#software-development-kit","text":"In EdgeX, a software development kit (or SDK) is a library or module to be incorporated into a new micro service. It provides a lot of the boilerplate code and scaffolding associated with the type of service being created. The SDK allows the developer to focus on the details of the service functionality and not have to worry about the mundane tasks associated with EdgeX services.","title":"Software Development Kit"},{"location":"general/Definitions/#south-and-north-side","text":"South Side: All IoT objects, within the physical realm, and the edge of the network that communicates directly with those devices, sensors, actuators, and other IoT objects, and collects the data from them, is known collectively as the \"south side.\" North Side: The cloud (or enterprise system) where data is collected, stored, aggregated, analyzed, and turned into information, and the part of the network that communicates with the cloud, is referred to as the \"north side\" of the network. 
EdgeX enables data to be sent \"north,\" \"south,\" or laterally as needed and as directed.","title":"South and North Side"},{"location":"general/Definitions/#snappy-ubuntu-core-snaps","text":"A Linux-based Operating System provided by Ubuntu - formally called Ubuntu Core but often referred to as \"Snappy\". The packages are called 'snaps' and the tool for using them 'snapd'; they work for phone, cloud, internet of things, and desktop computers. The \"Snap\" packages are self-contained and have no dependency on external stores. \"Snaps\" can be used to create command line tools, background services, and desktop applications.","title":"\"Snappy\" / Ubuntu Core & Snaps"},{"location":"general/Definitions/#user","text":"If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\".","title":"User"},{"location":"general/PlatformRequirements/","text":"Platform Requirements EdgeX Foundry is an operating system (OS)-agnostic and hardware (HW)-agnostic IoT edge platform. At this time the following platform minimums are recommended: Memory Memory: minimum of 1 GB When considering memory for your EdgeX platform consider your use of database - Redis is the current default. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory. Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see list below). Those strategies have continued to evolve. When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual). 
Antirez Redis RAM Ramifications Redis IO Memory Optimization Storage Hard drive space: minimum of 3 GB of space to run the EdgeX Foundry containers, but you may want more depending on how long sensor and device data is to be retained. Approximately 32GB of storage is minimally recommended to start. Operating Systems EdgeX Foundry has been run successfully on many systems, including, but not limited to, the following systems: Windows (ver 7 - 10) Ubuntu Desktop (ver 14-20) Ubuntu Server (ver 14-20) Ubuntu Core (ver 16-18) Mac OS X 10 Info EdgeX is agnostic with regard to hardware (x86 and ARM), but only releases artifacts for x86 and ARM 64 systems. EdgeX has been successfully run on ARM 32 platforms but has required users to build their own executable from source. EdgeX does not officially support ARM 32.","title":"Platform Requirements"},{"location":"general/PlatformRequirements/#platform-requirements","text":"EdgeX Foundry is an operating system (OS)-agnostic and hardware (HW)-agnostic IoT edge platform. At this time the following platform minimums are recommended: Memory Memory: minimum of 1 GB When considering memory for your EdgeX platform consider your use of database - Redis is the current default. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory. Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see list below). Those strategies have continued to evolve. When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual). 
Antirez Redis RAM Ramifications Redis IO Memory Optimization Storage Hard drive space: minimum of 3 GB of space to run the EdgeX Foundry containers, but you may want more depending on how long sensor and device data is to be retained. Approximately 32GB of storage is minimally recommended to start. Operating Systems EdgeX Foundry has been run successfully on many systems, including, but not limited to, the following systems: Windows (ver 7 - 10) Ubuntu Desktop (ver 14-20) Ubuntu Server (ver 14-20) Ubuntu Core (ver 16-18) Mac OS X 10 Info EdgeX is agnostic with regard to hardware (x86 and ARM), but only releases artifacts for x86 and ARM 64 systems. EdgeX has been successfully run on ARM 32 platforms but has required users to build their own executable from source. EdgeX does not officially support ARM 32.","title":"Platform Requirements"},{"location":"general/ServiceConfiguration/","text":"Service Configuration Each EdgeX micro service requires configuration (i.e. - a repository of initialization and operating values). The configuration is initially provided by a TOML file but a service can utilize the centralized configuration management provided by EdgeX for its configuration. See the Configuration and Registry documentation for more details about initialization of services and the use of the configuration service. Please refer to the EdgeX Foundry architectural decision record for details (and design decisions) behind the configuration in EdgeX. Please refer to the general Common Configuration documentation for configuration properties common to all services. Find service specific configuration references in the tabs below. EdgeX 2.0 For EdgeX 2.0 the Service configuration section has been standardized across all EdgeX services. 
Core Service Name Configuration Reference core-data Core Data Configuration core-metadata Core Metadata Configuration core-command Core Command Configuration Supporting Service Name Configuration Reference support-notifications Support Notifications Configuration support-scheduler Support Scheduler Configuration Application & Analytics Services Name Configuration Reference app-service General Application Service Configuration app-service-configurable Configurable Application Service Configuration eKuiper rules engine/eKuiper Basic eKuiper Configuration Device Services Name Configuration Reference device-service General Device Service Configuration device-virtual Virtual Device Service Configuration Security Services Name Configuration Reference API Gateway Kong Configuration Add-on Services Configuring Add-on Service System Management Services Name Configuration Reference system management System Management Agent Configuration","title":"Service Configuration"},{"location":"general/ServiceConfiguration/#service-configuration","text":"Each EdgeX micro service requires configuration (i.e. - a repository of initialization and operating values). The configuration is initially provided by a TOML file but a service can utilize the centralized configuration management provided by EdgeX for its configuration. See the Configuration and Registry documentation for more details about initialization of services and the use of the configuration service. Please refer to the EdgeX Foundry architectural decision record for details (and design decisions) behind the configuration in EdgeX. Please refer to the general Common Configuration documentation for configuration properties common to all services. Find service specific configuration references in the tabs below. EdgeX 2.0 For EdgeX 2.0 the Service configuration section has been standardized across all EdgeX services. 
Core Service Name Configuration Reference core-data Core Data Configuration core-metadata Core Metadata Configuration core-command Core Command Configuration Supporting Service Name Configuration Reference support-notifications Support Notifications Configuration support-scheduler Support Scheduler Configuration Application & Analytics Services Name Configuration Reference app-service General Application Service Configuration app-service-configurable Configurable Application Service Configuration eKuiper rules engine/eKuiper Basic eKuiper Configuration Device Services Name Configuration Reference device-service General Device Service Configuration device-virtual Virtual Device Service Configuration Security Services Name Configuration Reference API Gateway Kong Configuration Add-on Services Configuring Add-on Service System Management Services Name Configuration Reference system management System Management Agent Configuration","title":"Service Configuration"},{"location":"general/ServicePorts/","text":"Default Service Ports The following tables (organized by type of service) capture the default service ports. These default ports are also used in the EdgeX provided service routes defined in the Kong API Gateway for access control. 
Core Services Name Port Definition core-data 59880 ZMQ - to be deprecated in a future release 5563 core-metadata 59881 core-command 59882 Supporting Services Name Port Definition support-notifications 59860 support-scheduler 59861 Application & Analytics Services Name Port Definition app-sample 59700 app-service-rules 59701 app-push-to-core 59702 app-mqtt-export 59703 app-http-export 59704 app-functional-tests 59705 app-rfid-llrp-inventory 59711 rules engine/eKuiper 59720 Device Services Name Port Definition device-virtual 59900 device-modbus 59901 device-bacnet 59980 device-mqtt 59982 device-camera 59985 device-rest 59986 device-coap 59988 device-llrp 59989 device-grove 59992 device-snmp 59993 device-gpio 59994 Security Services Name Port Definition kong-db 5432 vault 8200 kong 8000 8100 8443 security-spire-server 59840 security-spiffe-token-provider 59841 Miscellaneous Services Name Port Definition Modbus simulator 1502 MQTT broker 1883 redis 6379 consul 8500 system management 58890","title":"Default Service Ports"},{"location":"general/ServicePorts/#default-service-ports","text":"The following tables (organized by type of service) capture the default service ports. These default ports are also used in the EdgeX provided service routes defined in the Kong API Gateway for access control. 
Core Services Name Port Definition core-data 59880 ZMQ - to be deprecated in a future release 5563 core-metadata 59881 core-command 59882 Supporting Services Name Port Definition support-notifications 59860 support-scheduler 59861 Application & Analytics Services Name Port Definition app-sample 59700 app-service-rules 59701 app-push-to-core 59702 app-mqtt-export 59703 app-http-export 59704 app-functional-tests 59705 app-rfid-llrp-inventory 59711 rules engine/eKuiper 59720 Device Services Name Port Definition device-virtual 59900 device-modbus 59901 device-bacnet 59980 device-mqtt 59982 device-camera 59985 device-rest 59986 device-coap 59988 device-llrp 59989 device-grove 59992 device-snmp 59993 device-gpio 59994 Security Services Name Port Definition kong-db 5432 vault 8200 kong 8000 8100 8443 security-spire-server 59840 security-spiffe-token-provider 59841 Miscellaneous Services Name Port Definition Modbus simulator 1502 MQTT broker 1883 redis 6379 consul 8500 system management 58890","title":"Default Service Ports"},{"location":"getting-started/","text":"Getting Started To get started you need to get EdgeX Foundry either as a User or as a Developer/Contributor. User If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". You will want to follow the Getting Started as a User guide which takes you through the process of deploying the latest EdgeX releases. For demo purposes and to run EdgeX on your machine in just a few minutes, please refer to the Quick Start guide. Developer and Contributor If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort. You will want to follow the Getting Started for Developers guide. 
Hybrid See Getting Started Hybrid if you are developing or working on a particular micro service, but want to run the other micro services via Docker Containers. When working on something like an analytics service (as a developer or contributor) you may not wish to download, build and run all the EdgeX code - you only want to work with the code of your service. Your new service may still need to communicate with other services while you test your new service. Unless you want to get and build all the services, developers will often get and run the containers for the other EdgeX micro services and run only their service natively in a development environment. The EdgeX community refers to this as \"Hybrid\" development. Device Service Developer As a developer, if you intend to connect IoT objects (device, sensor or other \"thing\") that are not currently connected to EdgeX Foundry, you may also want to obtain the Device Service Software Development Kit (DS SDK) and create new device services. The DS SDK creates all the scaffolding code for a new EdgeX Foundry device service; allowing you to focus on the details of interfacing with the device in its native protocol. See Getting Started with Device SDK for help on using the DS SDK to create a new device service. Learn more about Device Services and the Device Service SDK at Device Services . Application Service Developer As a developer, if you intend to get EdgeX sensor data to external systems (be that an enterprise application, on-prem server or Cloud platform like Azure IoT Hub, AWS IoT, Google Cloud IOT, etc.), you will likely want to obtain the Application Functions SDK (App Func SDK) and create new application services. The App Func SDK creates all the scaffolding code for a new EdgeX Foundry application service; allowing you to focus on the details of data transformation, filtering, and otherwise prepare the sensor data for the external endpoint. 
Learn more about Application Services and the Application Functions SDK at Application Services . Versioning Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX. Long Term Support Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.","title":"Getting Started"},{"location":"getting-started/#getting-started","text":"To get started you need to get EdgeX Foundry either as a User or as a Developer/Contributor.","title":"Getting Started"},{"location":"getting-started/#user","text":"If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". You will want to follow the Getting Started as a User guide which takes you through the process of deploying the latest EdgeX releases. For demo purposes and to run EdgeX on your machine in just a few minutes, please refer to the Quick Start guide.","title":"User"},{"location":"getting-started/#developer-and-contributor","text":"If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort. 
You will want to follow the Getting Started for Developers guide.","title":"Developer and Contributor"},{"location":"getting-started/#hybrid","text":"See Getting Started Hybrid if you are developing or working on a particular micro service, but want to run the other micro services via Docker Containers. When working on something like an analytics service (as a developer or contributor) you may not wish to download, build and run all the EdgeX code - you only want to work with the code of your service. Your new service may still need to communicate with other services while you test your new service. Unless you want to get and build all the services, developers will often get and run the containers for the other EdgeX micro services and run only their service natively in a development environment. The EdgeX community refers to this as \"Hybrid\" development.","title":"Hybrid"},{"location":"getting-started/#device-service-developer","text":"As a developer, if you intend to connect IoT objects (device, sensor or other \"thing\") that are not currently connected to EdgeX Foundry, you may also want to obtain the Device Service Software Development Kit (DS SDK) and create new device services. The DS SDK creates all the scaffolding code for a new EdgeX Foundry device service; allowing you to focus on the details of interfacing with the device in its native protocol. See Getting Started with Device SDK for help on using the DS SDK to create a new device service. Learn more about Device Services and the Device Service SDK at Device Services .","title":"Device Service Developer"},{"location":"getting-started/#application-service-developer","text":"As a developer, if you intend to get EdgeX sensor data to external systems (be that an enterprise application, on-prem server or Cloud platform like Azure IoT Hub, AWS IoT, Google Cloud IOT, etc.), you will likely want to obtain the Application Functions SDK (App Func SDK) and create new application services. 
The App Func SDK creates all the scaffolding code for a new EdgeX Foundry application service; allowing you to focus on the details of data transformation, filtering, and otherwise prepare the sensor data for the external endpoint. Learn more about Application Services and the Application Functions SDK at Application Services .","title":"Application Service Developer"},{"location":"getting-started/#versioning","text":"Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX.","title":"Versioning"},{"location":"getting-started/#long-term-support","text":"Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.","title":"Long Term Support"},{"location":"getting-started/ApplicationFunctionsSDK/","text":"Getting Started The Application Functions SDK The SDK is built around the idea of a \"Functions Pipeline\". A functions pipeline is a collection of various functions that process the data in the order that you've specified. The functions pipeline is executed by the specified trigger in the configuration.toml . The first function in the pipeline is called with the event that triggered the pipeline (ex. dtos.Event ). Each successive call in the pipeline is called with the return result of the previous function. 
Let's take a look at a simple example that creates a pipeline to filter particular device IDs and subsequently transform the data to XML: package main import ( \"errors\" \"fmt\" \"os\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/transforms\" ) const ( serviceKey = \"app-simple-filter-xml\" ) func main () { // turn off secure mode for examples. Not recommended for production _ = os . Setenv ( \"EDGEX_SECURITY_SECRET_STORE\" , \"false\" ) // 1) First thing to do is to create a new instance of an EdgeX Application Service. service , ok := pkg . NewAppService ( serviceKey ) if ! ok { os . Exit ( - 1 ) } // Leverage the built in logging service in EdgeX lc := service . LoggingClient () // 2) shows how to access the application's specific configuration settings. deviceNames , err := service . GetAppSettingStrings ( \"DeviceNames\" ) if err != nil { lc . Error ( err . Error ()) os . Exit ( - 1 ) } lc . Info ( fmt . Sprintf ( \"Filtering for devices %v\" , deviceNames )) // 3) This is our pipeline configuration, the collection of functions to // execute every time an event is triggered. if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML ); err != nil { lc . Errorf ( \"SetFunctionsPipeline returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } // 4) Lastly, we'll go ahead and tell the SDK to \"start\" and begin listening for events // to trigger the pipeline. err = service . MakeItRun () if err != nil { lc . Errorf ( \"MakeItRun returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } // Do any required cleanup here os . Exit ( 0 ) } The above example is meant to merely demonstrate the structure of your application. Notice that the output of the last function is not available anywhere inside this application. 
You must provide a function in order to work with the data from the previous function. Let's go ahead and add the following function that prints the output to the console. func printXMLToConsole ( ctx interfaces . AppFunctionContext , data interface {}) ( bool , interface {}) { // Leverage the built in logging service in EdgeX lc := ctx . LoggingClient () if data == nil { return false , errors . New ( \"printXMLToConsole: No data received\" ) } xml , ok := data .( string ) if ! ok { return false , errors . New ( \"printXMLToConsole: Data received is not the expected 'string' type\" ) } println ( xml ) return true , nil } After placing the above function in your code, the next step is to modify the pipeline to call this function: if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML , printXMLToConsole //notice this is not a function call, but simply a function pointer. ); err != nil { ... } Set the Trigger type to http in res/configuration.toml [Trigger] Type = \"http\" Using Postman or curl, send the following JSON to localhost:/api/v2/trigger { \"requestId\" : \"82eb2e26-0f24-48ba-ae4c-de9dac3fb9bc\" , \"apiVersion\" : \"v2\" , \"event\" : { \"apiVersion\" : \"v2\" , \"deviceName\" : \"Random-Float-Device\" , \"profileName\" : \"Random-Float-Device\" , \"sourceName\" : \"Float32\" , \"origin\" : 1540855006456 , \"id\" : \"94eb2e26-0f24-5555-2222-de9dac3fb228\" , \"readings\" : [ { \"apiVersion\" : \"v2\" , \"resourceName\" : \"Float32\" , \"profileName\" : \"Random-Float-Device\" , \"deviceName\" : \"Random-Float-Device\" , \"value\" : \"76677\" , \"origin\" : 1540855006469 , \"ValueType\" : \"Float32\" , \"id\" : \"82eb2e36-0f24-48aa-ae4c-de9dac3fb920\" } ] } } After making the above modifications, you should now see data printing out to the console in XML when an event is triggered. 
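As an illustrative aside (not part of the original walkthrough), the request can also be scripted. The docs leave the service's HTTP port unspecified, so PORT below is only an assumed example value, and the JSON body is assumed to be saved locally as event.json:

```shell
# Illustrative sketch only: compose the curl command for the HTTP trigger.
# PORT is an assumed example value -- the docs leave the port unspecified,
# so substitute whatever HTTP port your app service is configured to use.
# The JSON body shown above is assumed to be saved locally as event.json.
PORT=59700
URL=http://localhost:$PORT/api/v2/trigger
# echo prints the command instead of executing it, since no service runs here
echo curl -s -X POST -H 'Content-Type: application/json' --data @event.json $URL
```

Dropping the leading echo runs the request for real once the service is up and listening on the configured port.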
Note You can find this complete example \" Simple Filter XML \" and more examples located in the examples section. Up until this point, the pipeline has been triggered by an event over HTTP and the data at the end of that pipeline lands in the last function specified. In the example, data ends up printed to the console. Perhaps we'd like to send the data back to where it came from. In the case of an HTTP trigger, this would be the HTTP response. In the case of EdgeX MessageBus, this could be a new topic to send the data back to the MessageBus for other applications that wish to receive it. To do this, simply call ctx.SetResponseData(data []byte) passing in the data you wish to \"respond\" with. In the above printXMLToConsole(...) function, replace println(xml) with ctx.SetResponseData([]byte(xml)) . You should now see the response in your Postman window when testing the pipeline.","title":"Application Functions SDK"},{"location":"getting-started/ApplicationFunctionsSDK/#getting-started","text":"","title":"Getting Started"},{"location":"getting-started/ApplicationFunctionsSDK/#the-application-functions-sdk","text":"The SDK is built around the idea of a \"Functions Pipeline\". A functions pipeline is a collection of various functions that process the data in the order that you've specified. The functions pipeline is executed by the specified trigger in the configuration.toml . The first function in the pipeline is called with the event that triggered the pipeline (ex. dtos.Event ). Each successive call in the pipeline is called with the return result of the previous function. 
Let's take a look at a simple example that creates a pipeline to filter particular device IDs and subsequently transform the data to XML: package main import ( \"errors\" \"fmt\" \"os\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/transforms\" ) const ( serviceKey = \"app-simple-filter-xml\" ) func main () { // turn off secure mode for examples. Not recommended for production _ = os . Setenv ( \"EDGEX_SECURITY_SECRET_STORE\" , \"false\" ) // 1) First thing to do is to create a new instance of an EdgeX Application Service. service , ok := pkg . NewAppService ( serviceKey ) if ! ok { os . Exit ( - 1 ) } // Leverage the built in logging service in EdgeX lc := service . LoggingClient () // 2) shows how to access the application's specific configuration settings. deviceNames , err := service . GetAppSettingStrings ( \"DeviceNames\" ) if err != nil { lc . Error ( err . Error ()) os . Exit ( - 1 ) } lc . Info ( fmt . Sprintf ( \"Filtering for devices %v\" , deviceNames )) // 3) This is our pipeline configuration, the collection of functions to // execute every time an event is triggered. if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML ); err != nil { lc . Errorf ( \"SetFunctionsPipeline returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } // 4) Lastly, we'll go ahead and tell the SDK to \"start\" and begin listening for events // to trigger the pipeline. err = service . MakeItRun () if err != nil { lc . Errorf ( \"MakeItRun returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } // Do any required cleanup here os . Exit ( 0 ) } The above example is meant to merely demonstrate the structure of your application. Notice that the output of the last function is not available anywhere inside this application. 
You must provide a function in order to work with the data from the previous function. Let's go ahead and add the following function that prints the output to the console. func printXMLToConsole ( ctx interfaces . AppFunctionContext , data interface {}) ( bool , interface {}) { // Leverage the built in logging service in EdgeX lc := ctx . LoggingClient () if data == nil { return false , errors . New ( \"printXMLToConsole: No data received\" ) } xml , ok := data .( string ) if ! ok { return false , errors . New ( \"printXMLToConsole: Data received is not the expected 'string' type\" ) } println ( xml ) return true , nil } After placing the above function in your code, the next step is to modify the pipeline to call this function: if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML , printXMLToConsole //notice this is not a function call, but simply a function pointer. ); err != nil { ... } Set the Trigger type to http in res/configuration.toml [Trigger] Type = \"http\" Using Postman or curl, send the following JSON to localhost:/api/v2/trigger { \"requestId\" : \"82eb2e26-0f24-48ba-ae4c-de9dac3fb9bc\" , \"apiVersion\" : \"v2\" , \"event\" : { \"apiVersion\" : \"v2\" , \"deviceName\" : \"Random-Float-Device\" , \"profileName\" : \"Random-Float-Device\" , \"sourceName\" : \"Float32\" , \"origin\" : 1540855006456 , \"id\" : \"94eb2e26-0f24-5555-2222-de9dac3fb228\" , \"readings\" : [ { \"apiVersion\" : \"v2\" , \"resourceName\" : \"Float32\" , \"profileName\" : \"Random-Float-Device\" , \"deviceName\" : \"Random-Float-Device\" , \"value\" : \"76677\" , \"origin\" : 1540855006469 , \"ValueType\" : \"Float32\" , \"id\" : \"82eb2e36-0f24-48aa-ae4c-de9dac3fb920\" } ] } } After making the above modifications, you should now see data printing out to the console in XML when an event is triggered. 
Note You can find this complete example \" Simple Filter XML \" and more examples located in the examples section. Up until this point, the pipeline has been triggered by an event over HTTP and the data at the end of that pipeline lands in the last function specified. In the example, data ends up printed to the console. Perhaps we'd like to send the data back to where it came from. In the case of an HTTP trigger, this would be the HTTP response. In the case of EdgeX MessageBus, this could be a new topic to send the data back to the MessageBus for other applications that wish to receive it. To do this, simply call ctx.SetResponseData(data []byte) passing in the data you wish to \"respond\" with. In the above printXMLToConsole(...) function, replace println(xml) with ctx.SetResponseData([]byte(xml)) . You should now see the response in your Postman window when testing the pipeline.","title":"The Application Functions SDK"},{"location":"getting-started/Ch-GettingStartedCDevelopers/","text":"Getting Started - C Developers Introduction These instructions are for C Developers and Contributors to get, run and otherwise work with C-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements . If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User . What You Need For C Development Many of the EdgeX device services are built in C. In the future, other services could be built in C. 
In addition to the hardware and software listed in the Developers guide , to build EdgeX C services, you will need the following: libmicrohttpd libcurl libyaml libcbor paho libuuid hiredis You can install these on Debian 11 (Bullseye) by running: sudo apt-get install libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev libcbor-dev libpaho-mqtt-dev uuid-dev libhiredis-dev Some of these supporting packages have dependencies of their own, which will be automatically installed when using package managers such as APT , DNF etc. libpaho-mqtt-dev is not included in Ubuntu prior to Groovy (20.10). IOTech provides a package for Focal (20.04 LTS) which may be installed as follows: sudo curl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public -o /etc/apt/trusted.gpg.d/iotech-public.asc sudo echo \"deb https://iotech.jfrog.io/iotech/debian-release $( lsb_release -cs ) main\" | tee -a /etc/apt/sources.list.d/iotech.list sudo apt-get update sudo apt-get install libpaho-mqtt EdgeX 2.0 For EdgeX 2.0 the C SDK now supports MQTT and Redis implementations of the EdgeX MessageBus CMake is required to build the SDKs. Version 3 or better is required. 
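As a quick check (an illustrative sketch, not from the original docs; it assumes a POSIX shell with cmake possibly on the PATH), you can first see whether a suitable CMake is already present before installing one:

```shell
# Illustrative sketch (assumes a POSIX shell): report whether cmake is
# already installed; version 3 or newer is what the SDK build requires.
if command -v cmake >/dev/null 2>&1
then
    cmake --version | head -n 1
else
    echo cmake not found
fi
```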
You can install CMake on Debian by running: sudo apt-get install cmake Check that your C development environment includes the following: a version of GCC supporting C11 CMake version 3 or greater Development libraries and headers for: curl (version 7.56 or later) microhttpd (version 0.9) libyaml (version 0.1.6 or later) libcbor (version 0.5) libuuid (from util-linux v2.x) paho (version 1.3.x) hiredis (version 0.14) Next Steps To explore how to create and build EdgeX device services in C, head to the Device Services, C SDK guide .","title":"Getting Started - C Developers"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#getting-started-c-developers","text":"","title":"Getting Started - C Developers"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#introduction","text":"These instructions are for C Developers and Contributors to get, run and otherwise work with C-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements . If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User .","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#what-you-need-for-c-development","text":"Many of the EdgeX device services are built in C. In the future, other services could be built in C. In addition to the hardware and software listed in the Developers guide , to build EdgeX C services, you will need the following: libmicrohttpd libcurl libyaml libcbor paho libuuid hiredis You can install these on Debian 11 (Bullseye) by running: sudo apt-get install libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev libcbor-dev libpaho-mqtt-dev uuid-dev libhiredis-dev Some of these supporting packages have dependencies of their own, which will be automatically installed when using package managers such as APT , DNF etc.
libpaho-mqtt-dev is not included in Ubuntu prior to Groovy (20.10). IOTech provides a package for Focal (20.04 LTS) which may be installed as follows: sudo curl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public -o /etc/apt/trusted.gpg.d/iotech-public.asc echo \"deb https://iotech.jfrog.io/iotech/debian-release $( lsb_release -cs ) main\" | sudo tee -a /etc/apt/sources.list.d/iotech.list sudo apt-get update sudo apt-get install libpaho-mqtt EdgeX 2.0 For EdgeX 2.0 the C SDK now supports MQTT and Redis implementations of the EdgeX MessageBus CMake is required to build the SDKs. Version 3 or better is required. You can install CMake on Debian by running: sudo apt-get install cmake Check that your C development environment includes the following: a version of GCC supporting C11 CMake version 3 or greater Development libraries and headers for: curl (version 7.56 or later) microhttpd (version 0.9) libyaml (version 0.1.6 or later) libcbor (version 0.5) libuuid (from util-linux v2.x) paho (version 1.3.x) hiredis (version 0.14)","title":"What You Need For C Development"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#next-steps","text":"To explore how to create and build EdgeX device services in C, head to the Device Services, C SDK guide .","title":"Next Steps"},{"location":"getting-started/Ch-GettingStartedDevelopers/","text":"Getting Started as a Developer Introduction These instructions are for Developers and Contributors to get and run EdgeX Foundry. If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User . EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. EdgeX consists of a collection of reference implementation services and SDK tools. The micro services and SDKs are written in Go or C.
These documentation pages provide a developer with the information and instructions to get and run EdgeX Foundry in development mode - that is, running natively outside of containers and with the intent of adding to or changing the existing code base. What You Need Hardware EdgeX Foundry is an operating system (OS) and hardware (HW)-agnostic edge software platform. See the reference page for platform requirements . These provide guidance on a minimal platform to run the EdgeX platform. However, as a developer, you may find that additional memory, disk space, and improved CPU are essential to building and debugging. Software Developers need to install the following software to get, run and develop EdgeX Foundry micro services: Git Use this free and open source version control system (VCS) to download (and upload) the EdgeX Foundry source code from the project's GitHub repositories. See https://git-scm.com/downloads for download and install instructions. Alternative tools (Easy Git for example) could be used, but this document assumes use of git and leaves how to use alternative VCS tools to the reader. Redis By default, EdgeX Foundry uses Redis (version 5 starting with the Geneva release) as the persistence mechanism for sensor data as well as metadata about the devices/sensors that are connected. See https://redis.io/ for download and installation instructions. MongoDB As an alternative, EdgeX Foundry allows use of MongoDB (version 4.2 as of Geneva) as the alternative persistence mechanism in place of Redis for sensor data as well as metadata about the connected devices/sensors. See https://www.mongodb.com/download-center?jmp=nav#community for download and installation instructions. Warning Use of MongoDB is deprecated with the Geneva release. EdgeX will remove MongoDB support in a future release. Developers should start to migrate to Redis in all development efforts targeting future EdgeX releases.
ZeroMQ Several EdgeX Foundry services depend on ZeroMQ for communications by default. See the installation for your OS. Linux/Unix The easiest way to get and install ZeroMQ on Linux is to use this setup script: https://gist.github.com/katopz/8b766a5cb0ca96c816658e9407e83d00 . Note The 0MQ install script above assumes bash is available on your system and the bash executable is in /usr/bin. Before running the script at the link, run which bash at your Linux terminal to ensure that bash is in /usr/bin. If not, change the first line of the script so that it points to the correct location of bash. MacOS For MacOS, use brew to install ZeroMQ. brew install zeromq Windows For directions on installing ZeroMQ on Windows, please see the Windows documentation: https://github.com/edgexfoundry/edgex-go/blob/master/ZMQWindows.md Docker (Optional) If you intend to create Docker images for your updated or newly created EdgeX services, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides educational information. Additional Programming Tools and Next Steps Depending on which part of EdgeX you work on, you need to install one or more programming languages (Go, C, etc.) and associated tooling. These tools are covered under the documentation specific to each type of development. Go (Golang) C Versioning Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX. Long Term Support Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases.
The EdgeX community does not offer support on any non-LTS release outside of the latest release.","title":"Getting Started as a Developer"},{"location":"getting-started/Ch-GettingStartedDevelopers/#getting-started-as-a-developer","text":"","title":"Getting Started as a Developer"},{"location":"getting-started/Ch-GettingStartedDevelopers/#introduction","text":"These instructions are for Developers and Contributors to get and run EdgeX Foundry. If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User . EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. EdgeX consists of a collection of reference implementation services and SDK tools. The micro services and SDKs are written in Go or C. These documentation pages provide a developer with the information and instructions to get and run EdgeX Foundry in development mode - that is, running natively outside of containers and with the intent of adding to or changing the existing code base.","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedDevelopers/#what-you-need","text":"","title":"What You Need"},{"location":"getting-started/Ch-GettingStartedDevelopers/#hardware","text":"EdgeX Foundry is an operating system (OS) and hardware (HW)-agnostic edge software platform. See the reference page for platform requirements . These provide guidance on a minimal platform to run the EdgeX platform.
However, as a developer, you may find that additional memory, disk space, and improved CPU are essential to building and debugging.","title":"Hardware"},{"location":"getting-started/Ch-GettingStartedDevelopers/#software","text":"Developers need to install the following software to get, run and develop EdgeX Foundry micro services:","title":"Software"},{"location":"getting-started/Ch-GettingStartedDevelopers/#git","text":"Use this free and open source version control system (VCS) to download (and upload) the EdgeX Foundry source code from the project's GitHub repositories. See https://git-scm.com/downloads for download and install instructions. Alternative tools (Easy Git for example) could be used, but this document assumes use of git and leaves how to use alternative VCS tools to the reader.","title":"Git"},{"location":"getting-started/Ch-GettingStartedDevelopers/#redis","text":"By default, EdgeX Foundry uses Redis (version 5 starting with the Geneva release) as the persistence mechanism for sensor data as well as metadata about the devices/sensors that are connected. See https://redis.io/ for download and installation instructions.","title":"Redis"},{"location":"getting-started/Ch-GettingStartedDevelopers/#mongodb","text":"As an alternative, EdgeX Foundry allows use of MongoDB (version 4.2 as of Geneva) as the alternative persistence mechanism in place of Redis for sensor data as well as metadata about the connected devices/sensors. See https://www.mongodb.com/download-center?jmp=nav#community for download and installation instructions. Warning Use of MongoDB is deprecated with the Geneva release. EdgeX will remove MongoDB support in a future release. Developers should start to migrate to Redis in all development efforts targeting future EdgeX releases.","title":"MongoDB"},{"location":"getting-started/Ch-GettingStartedDevelopers/#zeromq","text":"Several EdgeX Foundry services depend on ZeroMQ for communications by default. See the installation for your OS.
Linux/Unix The easiest way to get and install ZeroMQ on Linux is to use this setup script: https://gist.github.com/katopz/8b766a5cb0ca96c816658e9407e83d00 . Note The 0MQ install script above assumes bash is available on your system and the bash executable is in /usr/bin. Before running the script at the link, run which bash at your Linux terminal to ensure that bash is in /usr/bin. If not, change the first line of the script so that it points to the correct location of bash. MacOS For MacOS, use brew to install ZeroMQ. brew install zeromq Windows For directions on installing ZeroMQ on Windows, please see the Windows documentation: https://github.com/edgexfoundry/edgex-go/blob/master/ZMQWindows.md","title":"ZeroMQ"},{"location":"getting-started/Ch-GettingStartedDevelopers/#docker-optional","text":"If you intend to create Docker images for your updated or newly created EdgeX services, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides educational information.","title":"Docker (Optional)"},{"location":"getting-started/Ch-GettingStartedDevelopers/#additional-programming-tools-and-next-steps","text":"Depending on which part of EdgeX you work on, you need to install one or more programming languages (Go, C, etc.) and associated tooling. These tools are covered under the documentation specific to each type of development. Go (Golang) C","title":"Additional Programming Tools and Next Steps"},{"location":"getting-started/Ch-GettingStartedDevelopers/#versioning","text":"Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another.
Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX.","title":"Versioning"},{"location":"getting-started/Ch-GettingStartedDevelopers/#long-term-support","text":"Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.","title":"Long Term Support"},{"location":"getting-started/Ch-GettingStartedDockerUsers/","text":"Getting Started using Docker Introduction These instructions are for users to get and run EdgeX Foundry using the latest stable Docker images. If you wish to get the latest builds of EdgeX Docker images (prior to releases), then see the EdgeX Nexus Repository guide. Get & Run EdgeX Foundry Install Docker & Docker Compose To run Dockerized EdgeX, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides educational information. The following short video is also very informative: https://www.youtube.com/watch?time_continue=3&v=VhabrYF1nms Use Docker Compose to orchestrate the fetch (or pull), install, and start the EdgeX micro service containers. Also use Docker Compose to stop the micro service containers. See: https://docs.docker.com/compose/ to learn more about Docker Compose. You do not need to be an expert with Docker (or Docker Compose) to get and run EdgeX. This guide provides the steps to get EdgeX running in your environment. Some knowledge of Docker and Docker Compose is nice to have, but not required. Basic Docker and Docker Compose commands provided here enable you to run, update, and diagnose issues within EdgeX.
Select a EdgeX Foundry Compose File After installing Docker and Docker Compose, you need an EdgeX Docker Compose file. EdgeX Foundry has over a dozen micro services, each deployed in its own Docker container. This file is a manifest of all the EdgeX Foundry micro services to run. The Docker Compose file provides details about how to run each of the services. Specifically, a Docker Compose file is a manifest file, which lists: The Docker container images that should be downloaded, The order in which the containers should be started, The parameters (such as ports) under which the containers should be run The EdgeX development team provides Docker Compose files for each release. Visit the project's GitHub and find the edgex-compose repository . This repository holds all of the EdgeX Docker Compose files for each of the EdgeX releases/versions. The Compose files for each release are found in separate branches. Click on the main button to see all the branches. The edgex-compose repository contains branches for each release. Select the release branch to locate the Docker Compose files for each release. Locate the branch containing the EdgeX Docker Compose file for the version of EdgeX you want to run. Note The main branch contains the Docker Compose files that use artifacts created from the latest code submitted by contributors (from the nightly builds). Most end users should avoid using these Docker Compose files. They are work-in-progress. Users should use the Docker Compose files for the latest version of EdgeX. In each edgex-compose branch, you will find several Docker Compose files (all with a .yml extension). The name of the file will suggest the type of EdgeX instance the Compose file will help set up. The table below provides a list of the Docker Compose filenames for the latest release (Ireland).
Find the Docker Compose file that matches: your hardware (x86 or ARM) your desire to have security services on or off filename Docker Compose contents docker-compose-arm64.yml Specifies ARM 64 containers, uses Redis database for persistence, and includes security services docker-compose-no-secty-arm64.yml Specifies ARM 64 containers, uses Redis database for persistence, but does not include security services docker-compose-no-secty.yml Specifies x86 containers, uses Redis database for persistence, but does not include security services docker-compose.yml Specifies x86 containers, uses Redis database for persistence, and includes security services docker-compose-no-secty-with-ui-arm64. Same as docker-compose-no-secty-arm64.yml but also includes EdgeX user interface docker-compose-no-secty-with-ui.yml Same as docker-compose-no-secty.yml but also includes EdgeX user interface docker-compose-portainer.yml Specifies the Portainer user interface extension (to be used with the x86 or ARM EdgeX platform) Download a EdgeX Foundry Compose File Once you have selected the release branch of edgex-compose you want to use, download it using your favorite tool. The examples below use wget to fetch the Docker Compose file for the Ireland release with no security. x86 wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/docker-compose-no-secty.yml -O docker-compose.yml ARM wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/docker-compose-no-secty-arm64.yml -O docker-compose.yml Note The commands above save the fetched Docker Compose file as 'docker-compose.yml' in the current directory. Docker Compose commands look for a file named 'docker-compose.yml' by default. You can use an alternate file name but then must specify that file name when issuing Docker Compose commands. See Compose reference documentation for help.
Generate a custom Docker Compose file The Docker Compose files in the ireland branch contain the standard set of EdgeX services configured to use the Redis message bus and include only the Virtual and REST device services. If you need to have different device services running or use MQTT for the message bus, you need a modified version of one of the standard Docker Compose files. You could manually add the device services to one of the existing EdgeX Compose files or use the EdgeX Compose Builder tool to generate a new custom Compose file that contains the services you would like included. When you use Compose Builder, you don't have to worry about adding all the necessary ports, variables, etc. as the tool will generate the service elements in the file for you. The Compose Builder tool was added with the Hanoi release. You will find the Compose Builder tool in each of the release branches since Hanoi under the compose-builder folder of those branches. You will also find a compose-builder folder on the main branch for creating custom Compose files for the nightly builds. Do the following to use this tool to generate a custom Compose file: Clone the edgex-compose repository. git clone https://github.com/edgexfoundry/edgex-compose.git 2. Change directories to the clone and check out the appropriate release branch. Checkout of the Ireland release branch is shown here. cd edgex-compose/ git checkout ireland 3. Change directories to the compose-builder folder and then use the make gen command to generate your custom compose file. The generated Docker Compose file is named docker-compose.yaml . Here are some examples: cd compose-builder/ make gen ds-mqtt mqtt-broker - Generates secure Compose file configured to use MQTT for the message bus, then adds the MQTT broker and the Device MQTT services. make gen no-secty ds-modbus - Generates non-secure compose file with just the Device Modbus device service.
make gen no-secty arm64 ds-grove - Generates non-secure compose file for ARM64 with just the Device Grove device service. See the README document in the compose-builder directory for details on all the available options. The Compose Builder is different per release, so make sure to consult the README in the appropriate release branch. See Ireland's Compose Builder README for details on the latest release Compose Builder options for make gen . Note The generated Docker Compose file may require additional customizations for your specific needs, such as environment override(s) to set the appropriate Host IP address, etc. Run EdgeX Foundry Now that you have the EdgeX Docker Compose file, you are ready to run EdgeX. Follow these steps to get the container images and start EdgeX! In a command terminal, change directories to the location of your docker-compose.yml. Run the following command in the terminal to pull (fetch) and then start the EdgeX containers. docker-compose up -d Info If you wish, you can fetch the images first and then run them. This allows you to make sure the EdgeX images you need are all available before trying to run. docker-compose pull docker-compose up -d Note The -d option indicates you want Docker Compose to run the EdgeX containers in detached mode - that is, to run the containers in the background. Without -d, the containers will all start in the terminal and in order to use the terminal further you have to stop the containers. Verify EdgeX Foundry Running In the same terminal, run the process status command shown below to confirm that all the containers downloaded and started. docker-compose ps If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above. If you are using a custom Compose file, your containers list may vary. Also note that some \"setup\" containers are designed to start and then exit after configuring your EdgeX instance.
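The pull, start, and verify steps above can be wrapped in one small script. This is only a sketch: it assumes docker-compose.yml is in the current directory and guards against Docker Compose not being installed:

```shell
# Sketch: fetch images, start EdgeX detached, then show container status.
run_edgex() {
  if ! command -v docker-compose >/dev/null 2>&1; then
    echo "docker-compose is not installed; nothing to do"
    return 0
  fi
  docker-compose pull     # fetch all images before starting anything
  docker-compose up -d    # start the containers in the background
  docker-compose ps       # confirm the containers came up
}
run_edgex || echo "EdgeX did not start cleanly; check the output above"
```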
Checking the Status of EdgeX Foundry In addition to the process status of the EdgeX containers, there are a number of other tools to check on the health and status of your EdgeX instance. EdgeX Foundry Container Logs Use the command below to see the log of any service. # see the logs of a service docker-compose logs -f [ compose-service-name ] # example - core data docker-compose logs -f data See EdgeX Container Names for a list of the EdgeX Docker Compose service names. A check of an EdgeX service log usually indicates if the service is running normally or has errors. When you are done reviewing the content of the log, select Control-c to stop the output to your terminal. Ping Check Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reach-ability of a network resource. EdgeX uses the same concept to check the availability or reach-ability of a micro service. After the EdgeX micro service containers are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that it is available. http://localhost:[service port]/api/v2/ping See EdgeX Default Service Ports for a list of the EdgeX default service ports. \"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues. Consul Registry Check EdgeX uses the open source Consul project as its registry service. All EdgeX micro services are expected to register with Consul as they start. Going to Consul's dashboard UI enables you to see which services are up. Find the Consul UI at http://localhost:8500/ui . EdgeX 2.0 Please note that as of EdgeX 2.0, Consul can be secured. When EdgeX is running in secure mode with secure Consul , you must provide Consul's access token to get to the dashboard UI referenced above.
See How to get Consul ACL token for details.","title":"Getting Started using Docker"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#getting-started-using-docker","text":"","title":"Getting Started using Docker"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#introduction","text":"These instructions are for users to get and run EdgeX Foundry using the latest stable Docker images. If you wish to get the latest builds of EdgeX Docker images (prior to releases), then see the EdgeX Nexus Repository guide.","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#get-run-edgex-foundry","text":"","title":"Get & Run EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#install-docker-docker-compose","text":"To run Dockerized EdgeX, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides educational information. The following short video is also very informative: https://www.youtube.com/watch?time_continue=3&v=VhabrYF1nms Use Docker Compose to orchestrate the fetch (or pull), install, and start the EdgeX micro service containers. Also use Docker Compose to stop the micro service containers. See: https://docs.docker.com/compose/ to learn more about Docker Compose. You do not need to be an expert with Docker (or Docker Compose) to get and run EdgeX. This guide provides the steps to get EdgeX running in your environment. Some knowledge of Docker and Docker Compose is nice to have, but not required. Basic Docker and Docker Compose commands provided here enable you to run, update, and diagnose issues within EdgeX.","title":"Install Docker & Docker Compose"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#select-a-edgex-foundry-compose-file","text":"After installing Docker and Docker Compose, you need an EdgeX Docker Compose file.
EdgeX Foundry has over a dozen micro services, each deployed in its own Docker container. This file is a manifest of all the EdgeX Foundry micro services to run. The Docker Compose file provides details about how to run each of the services. Specifically, a Docker Compose file is a manifest file, which lists: The Docker container images that should be downloaded, The order in which the containers should be started, The parameters (such as ports) under which the containers should be run The EdgeX development team provides Docker Compose files for each release. Visit the project's GitHub and find the edgex-compose repository . This repository holds all of the EdgeX Docker Compose files for each of the EdgeX releases/versions. The Compose files for each release are found in separate branches. Click on the main button to see all the branches. The edgex-compose repository contains branches for each release. Select the release branch to locate the Docker Compose files for each release. Locate the branch containing the EdgeX Docker Compose file for the version of EdgeX you want to run. Note The main branch contains the Docker Compose files that use artifacts created from the latest code submitted by contributors (from the nightly builds). Most end users should avoid using these Docker Compose files. They are work-in-progress. Users should use the Docker Compose files for the latest version of EdgeX. In each edgex-compose branch, you will find several Docker Compose files (all with a .yml extension). The name of the file will suggest the type of EdgeX instance the Compose file will help set up. The table below provides a list of the Docker Compose filenames for the latest release (Ireland).
Find the Docker Compose file that matches: your hardware (x86 or ARM) your desire to have security services on or off filename Docker Compose contents docker-compose-arm64.yml Specifies ARM 64 containers, uses Redis database for persistence, and includes security services docker-compose-no-secty-arm64.yml Specifies ARM 64 containers, uses Redis database for persistence, but does not include security services docker-compose-no-secty.yml Specifies x86 containers, uses Redis database for persistence, but does not include security services docker-compose.yml Specifies x86 containers, uses Redis database for persistence, and includes security services docker-compose-no-secty-with-ui-arm64. Same as docker-compose-no-secty-arm64.yml but also includes EdgeX user interface docker-compose-no-secty-with-ui.yml Same as docker-compose-no-secty.yml but also includes EdgeX user interface docker-compose-portainer.yml Specifies the Portainer user interface extension (to be used with the x86 or ARM EdgeX platform)","title":"Select a EdgeX Foundry Compose File"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#download-a-edgex-foundry-compose-file","text":"Once you have selected the release branch of edgex-compose you want to use, download it using your favorite tool. The examples below use wget to fetch the Docker Compose file for the Ireland release with no security. x86 wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/docker-compose-no-secty.yml -O docker-compose.yml ARM wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/docker-compose-no-secty-arm64.yml -O docker-compose.yml Note The commands above save the fetched Docker Compose file as 'docker-compose.yml' in the current directory. Docker Compose commands look for a file named 'docker-compose.yml' by default. You can use an alternate file name but then must specify that file name when issuing Docker Compose commands.
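An alternate file name works as long as every command names it explicitly with -f. A sketch (the file name edgex-ireland.yml is hypothetical, and the commands only run if both Docker Compose and the file are present):

```shell
# Sketch: run Docker Compose against a non-default Compose file name.
COMPOSE_FILE_NAME=edgex-ireland.yml   # hypothetical name for illustration
if command -v docker-compose >/dev/null 2>&1 && [ -f "$COMPOSE_FILE_NAME" ]; then
  docker-compose -f "$COMPOSE_FILE_NAME" up -d
  docker-compose -f "$COMPOSE_FILE_NAME" ps
else
  echo "skipping: docker-compose or $COMPOSE_FILE_NAME not present"
fi
```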
See Compose reference documentation for help.","title":"Download a EdgeX Foundry Compose File"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#generate-a-custom-docker-compose-file","text":"The Docker Compose files in the ireland branch contain the standard set of EdgeX services configured to use the Redis message bus and include only the Virtual and REST device services. If you need to have different device services running or use MQTT for the message bus, you need a modified version of one of the standard Docker Compose files. You could manually add the device services to one of the existing EdgeX Compose files or use the EdgeX Compose Builder tool to generate a new custom Compose file that contains the services you would like included. When you use Compose Builder, you don't have to worry about adding all the necessary ports, variables, etc. as the tool will generate the service elements in the file for you. The Compose Builder tool was added with the Hanoi release. You will find the Compose Builder tool in each of the release branches since Hanoi under the compose-builder folder of those branches. You will also find a compose-builder folder on the main branch for creating custom Compose files for the nightly builds. Do the following to use this tool to generate a custom Compose file: Clone the edgex-compose repository. git clone https://github.com/edgexfoundry/edgex-compose.git 2. Change directories to the clone and check out the appropriate release branch. Checkout of the Ireland release branch is shown here. cd edgex-compose/ git checkout ireland 3. Change directories to the compose-builder folder and then use the make gen command to generate your custom compose file. The generated Docker Compose file is named docker-compose.yaml . Here are some examples: cd compose-builder/ make gen ds-mqtt mqtt-broker - Generates secure Compose file configured to use MQTT for the message bus, then adds the MQTT broker and the Device MQTT services.
make gen no-secty ds-modbus - Generates a non-secure Compose file with just the Device Modbus device service. make gen no-secty arm64 ds-grove - Generates a non-secure Compose file for ARM64 with just the Device Grove device service. See the README document in the compose-builder directory for details on all the available options. The Compose Builder is different per release, so make sure to consult the README in the appropriate release branch. See Ireland's Compose Builder README for details on the latest release Compose Builder options for make gen . Note The generated Docker Compose file may require additional customizations for your specific needs, such as environment overrides to set the appropriate host IP address, etc.","title":"Generate a custom Docker Compose file"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#run-edgex-foundry","text":"Now that you have the EdgeX Docker Compose file, you are ready to run EdgeX. Follow these steps to get the container images and start EdgeX! In a command terminal, change directories to the location of your docker-compose.yml. Run the following command in the terminal to pull (fetch) and then start the EdgeX containers. docker-compose up -d Info If you wish, you can fetch the images first and then run them. This allows you to make sure the EdgeX images you need are all available before trying to run. docker-compose pull docker-compose up -d Note The -d option indicates you want Docker Compose to run the EdgeX containers in detached mode - that is, to run the containers in the background. Without -d, the containers will all start in the terminal, and in order to use the terminal further you have to stop the containers.","title":"Run EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#verify-edgex-foundry-running","text":"In the same terminal, run the process status command shown below to confirm that all the containers downloaded and started. 
docker-compose ps If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above. If you are using a custom Compose file, your containers list may vary. Also note that some \"setup\" containers are designed to start and then exit after configuring your EdgeX instance.","title":"Verify EdgeX Foundry Running"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#checking-the-status-of-edgex-foundry","text":"In addition to the process status of the EdgeX containers, there are a number of other tools to check on the health and status of your EdgeX instance.","title":"Checking the Status of EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#edgex-foundry-container-logs","text":"Use the command below to see the log of any service. # see the logs of a service docker-compose logs -f [ compose-service-name ] # example - core data docker-compose logs -f data See EdgeX Container Names for a list of the EdgeX Docker Compose service names. A check of an EdgeX service log usually indicates if the service is running normally or has errors. When you are done reviewing the content of the log, select Control-c to stop the output to your terminal.","title":"EdgeX Foundry Container Logs"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#ping-check","text":"Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reachability of a network resource. EdgeX uses the same concept to check the availability or reachability of a micro service. After the EdgeX micro service containers are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that it is available. 
http://localhost:[service port]/api/v2/ping See EdgeX Default Service Ports for a list of the EdgeX default service ports. \"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues.","title":"Ping Check"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#consul-registry-check","text":"EdgeX uses the open source Consul project as its registry service. All EdgeX micro services are expected to register with Consul as they start. Going to Consul's dashboard UI enables you to see which services are up. Find the Consul UI at http://localhost:8500/ui . EdgeX 2.0 Please note that as of EdgeX 2.0, Consul can be secured. When EdgeX is running in secure mode with secure Consul , you must provide Consul's access token to get to the dashboard UI referenced above. See How to get Consul ACL token for details.","title":"Consul Registry Check"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/","text":"Getting Started - Go Developers Introduction These instructions are for Go Lang Developers and Contributors to get, run and otherwise work with Go-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements . If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User . What You Need For Go Development In addition to the hardware and software listed in the Developers guide , you will need the following to work with the EdgeX Go-based micro services. Go The open source micro services of EdgeX Foundry are written in Go 1.16. See https://golang.org/dl/ for download and installation instructions. Newer versions of Go are available and may work, but the project has not been built and tested with these newer versions of the language. 
Older versions of Go, especially 1.10 or older, are likely to cause issues (EdgeX now uses Go Modules, which were introduced with Go Lang 1.11). Build Essentials In order to compile and build some elements of EdgeX, the GNU C compiler, utilities (like make), and associated libraries need to be installed. Some IDEs or OS environments may already come with these tools; other environments may require you to install them. For Ubuntu environments, you can install a convenience package called Build Essentials . Note If you are installing Build Essentials, note that there is a build-essential package for each Ubuntu release. Search for 'build-essential' associated with your Ubuntu version via Ubuntu Packages Search . IDE (Optional) There are many tool options for writing and editing Go Lang code. You could use a simple text editor. For more convenience, you may choose to use an integrated development environment (IDE). The list below highlights IDEs used by some of the EdgeX community (without any project endorsement). GoLand GoLand is a popular, although subscription-based, Go-specific IDE. Learn how to purchase and download GoLand here: https://www.jetbrains.com/go/ . Visual Studio Code Visual Studio Code is a free, open source IDE developed by Microsoft. Find and download Visual Studio Code here: https://code.visualstudio.com/ . Atom Atom is also a free, open source IDE used with many languages. Find and download Atom here: https://ide.atom.io/ . Get the code This part of the documentation assumes you wish to get and work with the key EdgeX services. This includes but is not limited to Core, Supporting, some security, and system management services. To work with other Go-based security services, device services, application services, SDKs, the user interface, or other services, you may need to pull in other EdgeX repository code. See other getting started guides for working with other Go-based services. 
As you will see below, you do not need to explicitly pull in dependency modules (whether EdgeX or 3rd party provided). Dependencies will automatically be pulled in during the build process. To work with the key services, you will need to download the source code from the EdgeX Go repository . The EdgeX Go-based micro services are all available in a single GitHub repository download. Once the code is pulled, the Go micro services are built and packaged as platform dependent executables. If Docker is installed, the executable can also be containerized for end user deployment/use. To download the EdgeX Go code, first change directories to the location where you want to download the code (to edgex in the image below). Then use your git tool to clone this repository with the following command: git clone https://github.com/edgexfoundry/edgex-go.git Note If you plan to contribute code back to the EdgeX project (as a Contributor), you are going to want to fork the repositories you plan to work with and then pull your fork versus the EdgeX repositories directly. This documentation does not address the process and procedures for working with an EdgeX fork, committing changes and submitting contribution pull requests (PRs). See some of the links below in the EdgeX Wiki for help on how to fork and contribute EdgeX code. https://wiki.edgexfoundry.org/display/FA/Contributor%27s+Guide https://wiki.edgexfoundry.org/display/FA/Contributor%27s+Guide+-+Go+Lang https://wiki.edgexfoundry.org/display/FA/Contributor+Process?searchId=AW768BAW7 Also note that this pulls and works with the latest code from the main branch. The main branch contains code that is \"work in progress\" for the upcoming release. If you want to work with a specific release, check out code from the specific release branch or tag (e.g. v2.0.0 , hanoi , v1.3.11 , etc.) 
Build EdgeX Foundry To build the Go Lang services found in edgex-go, first change directories to the root of the edgex-go code cd edgex-go Second, use the community provided Makefile to build all the services in a single call make build Info The first time EdgeX builds, it will take longer than other builds as it has to download all dependencies. Depending on the size of your host machine, an initial build can take several minutes. Make sure the build completes and has no errors. If the build succeeds, you should find new service executables in each of the service folders under the service directories found in the /edgex-go/cmd folder. Run EdgeX Foundry Run the Database Several of the EdgeX Foundry micro services use a database. This includes core-data, core-metadata, support-scheduler, among others. Therefore, when working with EdgeX Foundry, it's a good idea to have the database up and running as a general rule. See the Redis Quick Start Guide for how to run Redis in a Linux environment (or find similar documentation for other environments). Run EdgeX Services With the services built, and the database up and running, you can now run each of the services. In this example, the services will run without security services turned on. If you wish to run with security, you will need to clone, build and run the security services. In order to turn security off, first set the EDGEX_SECURITY_SECRET_STORE environment variable to false with an export call. Simply call export EDGEX_SECURITY_SECRET_STORE = false Next, move to the cmd folder and then change folders to the service folder for the service you want to run. Start the executable (with default configuration) that is in that folder. For example, to start Core Metadata, enter the cmd/core-metadata folder and start core-metadata. cd cmd/core-metadata/ ./core-metadata & Note When running the services from the command line, you will usually want to start the service with the & character after the command. 
This makes the command run in the background. If you do not run the service in the background, then you will need to leave the service running in the terminal and open another terminal to start the other services. This will start the EdgeX go service and leave it running in the background until you kill it. The log entries from the service will still display in the terminal. Watch the log entries for any ERROR indicators. Info To kill a service there are several options, but an easy means is to use pkill with the service name. pkill core-metadata Start as many services as you need in order to carry out your development, testing, etc. As an absolute minimal set, you will typically need to run core-metadata, core-data, core-command and a device service. Selection of the device service will depend on which physical sensor or device you want to use (or use the virtual device to simulate a sensor). Here is the set of commands to launch core-data and core-command (in addition to core-metadata above) cd ../core-data/ ./core-data & cd ../core-command/ ./core-command & Tip You can run some services via Docker containers while working on specific services in Go. See Working in a Hybrid Environment for more details. While the EdgeX services are running, you can make EdgeX API calls to localhost . Info No sensor data will flow yet as this just gets the key services up and running. To get sensor data flowing into EdgeX, you will need to get, build and run an EdgeX device service in a similar fashion. The community provides a virtual device service to test and experiment with ( https://github.com/edgexfoundry/device-virtual-go ). Verify EdgeX is Working Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reachability of a network resource. EdgeX uses the same concept to check the availability or reachability of a micro service. 
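The ping convention just described can be sketched with a small helper. `ping_url` is a hypothetical name of our own (not an EdgeX tool), and the ports in the usage comment are assumed to be the EdgeX defaults for core-data, core-metadata and core-command; consult the EdgeX Default Service Ports list to confirm them.

```shell
#!/bin/sh
# Sketch only: build the ping URL for a given service port, following
# the http://localhost:[port]/api/v2/ping pattern described above.
# ping_url is a hypothetical helper, not part of EdgeX.
ping_url() {
  echo "http://localhost:$1/api/v2/ping"
}

# Example against a running stack (59880-59882 assumed to be the
# default ports for core-data, core-metadata and core-command):
#   for port in 59880 59881 59882; do
#     curl -fsS "$(ping_url "$port")" && echo " <- $port is up"
#   done
ping_url 59880
```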
After the EdgeX micro services are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that it is available. http://localhost:[port]/api/v2/ping See EdgeX Default Service Ports for a list of the EdgeX default service ports. \"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues. The example above shows the ping of core-data. Next Steps Application services and some device services are also built in Go. To explore how to create and build EdgeX application and device services in Go, head to SDK documentation covering these EdgeX elements. Application Services and the Application Functions SDK Device Services in Go EdgeX Foundry in GoLand IDEs offer many code editing conveniences. GoLand was specifically built to edit and work with Go code. So if you are doing any significant code work with the EdgeX Go micro services, you will likely find it convenient to edit, build, run, test, etc. from GoLand or other IDE. Import EdgeX To bring the EdgeX repository code into GoLand, use the File \u2192 Open... menu option in GoLand to open the Open File or Project Window. In the \"Open File or Project\" popup, select the location of the folder containing your cloned edgex-go repo. Open the Terminal From the View menu in GoLand, select the Terminal menu option. This will open a command terminal from which you can issue commands to install the dependencies, build the micro services, run the micro services, etc. Build the EdgeX Micro Services Run \"make build\" in the Terminal view (as shown below) to build the services. This can take a few minutes to build all the services. 
Just as when running make build from the command line in a terminal, the micro service executables that get built in GoLand's terminal will be created in each of the service folders under the service directories found in the /edgex-go/cmd folder. Run EdgeX With all the micro services built, you can now run EdgeX services. You may first want to make sure the database is running. Then, set any environment variables, change directories to the /cmd and service subfolder, and run the service right from the terminal (same as in Run EdgeX Services ). You can now call on the service APIs to make sure they are running correctly. Namely, call on http://localhost:\\[service port\\]/api/v2/ping to see each service respond to the simplest of requests.","title":"Getting Started - Go Developers"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#getting-started-go-developers","text":"","title":"Getting Started - Go Developers"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#introduction","text":"These instructions are for Go Lang Developers and Contributors to get, run and otherwise work with Go-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements . If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User .","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#what-you-need-for-go-development","text":"In addition to the hardware and software listed in the Developers guide , you will need the following to work with the EdgeX Go-based micro services.","title":"What You Need For Go Development"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#go","text":"The open source micro services of EdgeX Foundry are written in Go 1.16. See https://golang.org/dl/ for download and installation instructions. 
Newer versions of Go are available and may work, but the project has not been built and tested with these newer versions of the language. Older versions of Go, especially 1.10 or older, are likely to cause issues (EdgeX now uses Go Modules, which were introduced with Go Lang 1.11).","title":"Go"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-essentials","text":"In order to compile and build some elements of EdgeX, the GNU C compiler, utilities (like make), and associated libraries need to be installed. Some IDEs or OS environments may already come with these tools; other environments may require you to install them. For Ubuntu environments, you can install a convenience package called Build Essentials . Note If you are installing Build Essentials, note that there is a build-essential package for each Ubuntu release. Search for 'build-essential' associated with your Ubuntu version via Ubuntu Packages Search .","title":"Build Essentials"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#ide-optional","text":"There are many tool options for writing and editing Go Lang code. You could use a simple text editor. For more convenience, you may choose to use an integrated development environment (IDE). The list below highlights IDEs used by some of the EdgeX community (without any project endorsement).","title":"IDE (Optional)"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#goland","text":"GoLand is a popular, although subscription-based, Go-specific IDE. Learn how to purchase and download GoLand here: https://www.jetbrains.com/go/ .","title":"GoLand"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#visual-studio-code","text":"Visual Studio Code is a free, open source IDE developed by Microsoft. 
Find and download Visual Studio Code here: https://code.visualstudio.com/ .","title":"Visual Studio Code"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#atom","text":"Atom is also a free, open source IDE used with many languages. Find and download Atom here: https://ide.atom.io/ .","title":"Atom"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#get-the-code","text":"This part of the documentation assumes you wish to get and work with the key EdgeX services. This includes but is not limited to Core, Supporting, some security, and system management services. To work with other Go-based security services, device services, application services, SDKs, the user interface, or other services, you may need to pull in other EdgeX repository code. See other getting started guides for working with other Go-based services. As you will see below, you do not need to explicitly pull in dependency modules (whether EdgeX or 3rd party provided). Dependencies will automatically be pulled in during the build process. To work with the key services, you will need to download the source code from the EdgeX Go repository . The EdgeX Go-based micro services are all available in a single GitHub repository download. Once the code is pulled, the Go micro services are built and packaged as platform dependent executables. If Docker is installed, the executable can also be containerized for end user deployment/use. To download the EdgeX Go code, first change directories to the location where you want to download the code (to edgex in the image below). Then use your git tool to clone this repository with the following command: git clone https://github.com/edgexfoundry/edgex-go.git Note If you plan to contribute code back to the EdgeX project (as a Contributor), you are going to want to fork the repositories you plan to work with and then pull your fork versus the EdgeX repositories directly. 
This documentation does not address the process and procedures for working with an EdgeX fork, committing changes and submitting contribution pull requests (PRs). See some of the links below in the EdgeX Wiki for help on how to fork and contribute EdgeX code. https://wiki.edgexfoundry.org/display/FA/Contributor%27s+Guide https://wiki.edgexfoundry.org/display/FA/Contributor%27s+Guide+-+Go+Lang https://wiki.edgexfoundry.org/display/FA/Contributor+Process?searchId=AW768BAW7 Also note that this pulls and works with the latest code from the main branch. The main branch contains code that is \"work in progress\" for the upcoming release. If you want to work with a specific release, check out code from the specific release branch or tag (e.g. v2.0.0 , hanoi , v1.3.11 , etc.)","title":"Get the code"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-edgex-foundry","text":"To build the Go Lang services found in edgex-go, first change directories to the root of the edgex-go code cd edgex-go Second, use the community provided Makefile to build all the services in a single call make build Info The first time EdgeX builds, it will take longer than other builds as it has to download all dependencies. Depending on the size of your host machine, an initial build can take several minutes. Make sure the build completes and has no errors. If the build succeeds, you should find new service executables in each of the service folders under the service directories found in the /edgex-go/cmd folder.","title":"Build EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex-foundry","text":"","title":"Run EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-the-database","text":"Several of the EdgeX Foundry micro services use a database. This includes core-data, core-metadata, support-scheduler, among others. Therefore, when working with EdgeX Foundry, it's a good idea to have the database up and running as a general rule. 
See the Redis Quick Start Guide for how to run Redis in a Linux environment (or find similar documentation for other environments).","title":"Run the Database"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex-services","text":"With the services built, and the database up and running, you can now run each of the services. In this example, the services will run without security services turned on. If you wish to run with security, you will need to clone, build and run the security services. In order to turn security off, first set the EDGEX_SECURITY_SECRET_STORE environment variable to false with an export call. Simply call export EDGEX_SECURITY_SECRET_STORE = false Next, move to the cmd folder and then change folders to the service folder for the service you want to run. Start the executable (with default configuration) that is in that folder. For example, to start Core Metadata, enter the cmd/core-metadata folder and start core-metadata. cd cmd/core-metadata/ ./core-metadata & Note When running the services from the command line, you will usually want to start the service with the & character after the command. This makes the command run in the background. If you do not run the service in the background, then you will need to leave the service running in the terminal and open another terminal to start the other services. This will start the EdgeX go service and leave it running in the background until you kill it. The log entries from the service will still display in the terminal. Watch the log entries for any ERROR indicators. Info To kill a service there are several options, but an easy means is to use pkill with the service name. pkill core-metadata Start as many services as you need in order to carry out your development, testing, etc. As an absolute minimal set, you will typically need to run core-metadata, core-data, core-command and a device service. 
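That minimal set can be launched with a short script. This is a sketch only: it assumes `make build` has completed in an edgex-go tree and that Redis is already running, and `exe_path` is a hypothetical helper name of our own, not part of EdgeX.

```shell
#!/bin/sh
# Sketch only: start the minimal service set named above from the
# edgex-go repo root. Assumes `make build` has completed and Redis is
# running; exe_path is a hypothetical helper, not part of EdgeX.
exe_path() {
  # After `make build`, each executable lives at cmd/<name>/<name>.
  echo "cmd/$1/$1"
}

export EDGEX_SECURITY_SECRET_STORE=false
for svc in core-metadata core-data core-command; do
  exe="$(exe_path "$svc")"
  echo "would start $exe"
  # "./$exe" &   # uncomment in a real, built edgex-go tree
done
```

Stopping the set afterwards follows the same pattern with `pkill <service name>` for each service.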
Selection of the device service will depend on which physical sensor or device you want to use (or use the virtual device to simulate a sensor). Here is the set of commands to launch core-data and core-command (in addition to core-metadata above) cd ../core-data/ ./core-data & cd ../core-command/ ./core-command & Tip You can run some services via Docker containers while working on specific services in Go. See Working in a Hybrid Environment for more details. While the EdgeX services are running, you can make EdgeX API calls to localhost . Info No sensor data will flow yet as this just gets the key services up and running. To get sensor data flowing into EdgeX, you will need to get, build and run an EdgeX device service in a similar fashion. The community provides a virtual device service to test and experiment with ( https://github.com/edgexfoundry/device-virtual-go ).","title":"Run EdgeX Services"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#verify-edgex-is-working","text":"Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reachability of a network resource. EdgeX uses the same concept to check the availability or reachability of a micro service. After the EdgeX micro services are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that it is available. http://localhost:[port]/api/v2/ping See EdgeX Default Service Ports for a list of the EdgeX default service ports. \"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues. 
The example above shows the ping of core-data.","title":"Verify EdgeX is Working"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#next-steps","text":"Application services and some device services are also built in Go. To explore how to create and build EdgeX application and device services in Go, head to SDK documentation covering these EdgeX elements. Application Services and the Application Functions SDK Device Services in Go","title":"Next Steps"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#edgex-foundry-in-goland","text":"IDEs offer many code editing conveniences. GoLand was specifically built to edit and work with Go code. So if you are doing any significant code work with the EdgeX Go micro services, you will likely find it convenient to edit, build, run, test, etc. from GoLand or other IDE.","title":"EdgeX Foundry in GoLand"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#import-edgex","text":"To bring the EdgeX repository code into GoLand, use the File \u2192 Open... menu option in GoLand to open the Open File or Project Window. In the \"Open File or Project\" popup, select the location of the folder containing your cloned edgex-go repo.","title":"Import EdgeX"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#open-the-terminal","text":"From the View menu in GoLand, select the Terminal menu option. This will open a command terminal from which you can issue commands to install the dependencies, build the micro services, run the micro services, etc.","title":"Open the Terminal"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-the-edgex-micro-services","text":"Run \"make build\" in the Terminal view (as shown below) to build the services. This can take a few minutes to build all the services. 
Just as when running make build from the command line in a terminal, the micro service executables that get built in GoLand's terminal will be created in each of the service folders under the service directories found in the /edgex-go/cmd folder.","title":"Build the EdgeX Micro Services"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex","text":"With all the micro services built, you can now run EdgeX services. You may first want to make sure the database is running. Then, set any environment variables, change directories to the /cmd and service subfolder, and run the service right from the terminal (same as in Run EdgeX Services ). You can now call on the service APIs to make sure they are running correctly. Namely, call on http://localhost:\\[service port\\]/api/v2/ping to see each service respond to the simplest of requests.","title":"Run EdgeX"},{"location":"getting-started/Ch-GettingStartedHybrid/","text":"Working in a Hybrid Environment In some cases, as a developer or contributor , you want to work on a particular micro service. Yet, you don't want to have to download all the source code, and then build and run all the micro services. There is an alternative approach! You can download and run the EdgeX Docker containers for all the micro services you need and run your single micro service (the one you are presumably working on) natively or from a developer tool of choice outside of a container. Within EdgeX, we call this a \"hybrid\" environment - where part of your EdgeX platform is running from a development environment, while other parts are running from Docker containers. This page outlines how to work in a hybrid development environment. As an example of this process, let's say you want to do coding work with/on the Virtual Device service. You want the rest of the EdgeX environment up and running via Docker containers. How would you set up this hybrid environment? Let's take a look. 
Get and Run the EdgeX Docker Containers If you haven't already, follow the Getting Started using Docker guide to set up your environment (Docker, Docker Compose, etc.) before continuing. Since we plan to work with the virtual device service in this example, you don't need or want to run the virtual device service. You will run all the other services via Docker Compose. Based on the instructions found in the Getting Started using Docker , locate and download the appropriate Docker Compose file for your development environment. Next, issue the following commands to start the EdgeX containers and then stop the virtual device service (which is the service you are working on in this example). docker-compose up -d docker-compose stop device-virtual Run the EdgeX containers and then stop the service container that you are going to work on - in this case, the virtual device service container. Note These notes assume you are working with the EdgeX Ireland release. They also assume you have downloaded the appropriate Docker Compose file and have named it docker-compose.yml so you don't have to specify the file name each time you run a Docker Compose command. Some versions of EdgeX may require other or additional containers to run. Tip You can also use the EdgeX Compose Builder tool to create a custom Docker Compose file with just the services you want. See the Compose Builder documentation and check out the Compose Builder tool on GitHub . Run the command below to confirm that all the containers have started and that the virtual device container is no longer running. docker-compose ps Get, Build and Run the (non-Docker) Service With the EdgeX containers running, you can now download, build and run natively (outside of a container) the service you want to work on. In this example, the virtual device service is used to exemplify the steps necessary to get, build and run the native service with the EdgeX containerized services. 
However, the practice could be applied to any service. Get the service code Per Getting Started Go Developers , pull the micro service code you want to work on from GitHub. In this example, we use the device-virtual-go as the micro service that is going to be worked on. git clone https://github.com/edgexfoundry/device-virtual-go.git Build the service code At this time, you can add or modify the code to make the service changes you need. Once ready, you must compile and build the service into an executable. Change folders to the cloned micro service directory and build the service. cd device-virtual-go/ make build Clone the service from GitHub, make your code changes and then build the service locally. Change the configuration Depending on the service you are working on, you may need to change the configuration of the service to point to and use the other services that are containerized (running in Docker). In particular, if the service you are working on is not on the same host as the Docker Engine running the containerized services, you will likely need to change the configuration. Examine the configuration.toml file in the cmd/res folder of the device-virtual-go. Note that the Service (located in the [Service] section of the configuration), Registry (located in the [Registry] section) and all the \"Clients\" (located in the [Clients] section) show that the Host of these services is \"localhost\". These and other host configuration elements need to change when the services are not running on the same host - specifically the localhost. When your service is running on a different host than the rest of EdgeX, change the [Service] Host to be the address of the machine hosting your service. Change the [Registry] and [Clients] Host configuration to specify the location of the machine hosting these services. If you do have to change the configuration, save the configuration.toml file after making changes. Run the service code natively. 
The executable created by the make build command is found in the cmd folder of the service. Change folders to the location of the executable. Set any environment variables needed depending on your EdgeX setup. In this example, we did not start the security elements so we need to set EDGEX_SECURITY_SECRET_STORE to false in order to turn off security. Finally, run the service right from a terminal. cd cmd export EDGEX_SECURITY_SECRET_STORE = false ./device-virtual Change folders to the service's cmd/ folder, set env vars, and then execute the service executable in the cmd folder. Check the results At this time, your virtual device micro service should be communicating with the other EdgeX micro services running in their Docker containers. Because Core Metadata callbacks do not work in the hybrid environment, the virtual device service will not receive the Add Device callbacks on the initial run after creating them in Core Metadata. The simple workaround for this issue is to stop ( Ctrl-c from the terminal) and restart the virtual device service (again with ./device-virtual execution). The virtual device service log after stopping and restarting. Give the virtual device a few seconds or so to initialize itself and start sending data to Core Data. To check that it is working properly, open a browser and point your browser to Core Data to check that events are being deposited. You can do this by calling on the Core Data API that checks the count of events in Core Data. http://localhost:59880/api/v2/event/count For this example, you can check that the virtual device service is sending data into Core Data by checking the event count. Note If you choose, you can also import the service into GoLand and then code and run the service from GoLand. 
Follow the instructions in the Getting Started - Go Developers to learn how to import, build and run a service in GoLand.","title":"Working in a Hybrid Environment"},{"location":"getting-started/Ch-GettingStartedHybrid/#working-in-a-hybrid-environment","text":"In some cases, as a developer or contributor , you want to work on a particular micro service. Yet, you don't want to have to download all the source code, and then build and run all the micro services. There is an alternative approach! You can download and run the EdgeX Docker containers for all the micro services you need and run your single micro service (the one you are presumably working on) natively or from a developer tool of choice outside of a container. Within EdgeX, we call this a \"hybrid\" environment - where part of your EdgeX platform is running from a development environment, while other parts are running from Docker containers. This page outlines how to work in a hybrid development environment. As an example of this process, let's say you want to do coding work with/on the Virtual Device service. You want the rest of the EdgeX environment up and running via Docker containers. How would you set up this hybrid environment? Let's take a look.","title":"Working in a Hybrid Environment"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-and-run-the-edgex-docker-containers","text":"If you haven't already, follow the Getting Started using Docker guide to set up your environment (Docker, Docker Compose, etc.) before continuing. Since we plan to work with the virtual device service in this example, you don't need or want to run the virtual device service. You will run all the other services via Docker Compose. Based on the instructions found in the Getting Started using Docker , locate and download the appropriate Docker Compose file for your development environment. 
Next, issue the following commands to start the EdgeX containers and then stop the virtual device service (which is the service you are working on in this example). docker-compose up -d docker-compose stop device-virtual Run the EdgeX containers and then stop the service container that you are going to work on - in this case the virtual device service container. Note These notes assume you are working with the EdgeX Ireland release. They also assume you have downloaded the appropriate Docker Compose file and have named it docker-compose.yml so you don't have to specify the file name each time you run a Docker Compose command. Some versions of EdgeX may require other or additional containers to run. Tip You can also use the EdgeX Compose Builder tool to create a custom Docker Compose file with just the services you want. See the Compose Builder documentation and check out the Compose Builder tool on GitHub . Run the command below to confirm that all the containers have started and that the virtual device container is no longer running. docker-compose ps","title":"Get and Run the EdgeX Docker Containers"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-build-and-run-the-non-docker-service","text":"With the EdgeX containers running, you can now download, build and run natively (outside of a container) the service you want to work on. In this example, the virtual device service is used to exemplify the steps necessary to get, build and run the native service with the EdgeX containerized services. However, the practice could be applied to any service.","title":"Get, Build and Run the (non-Docker) Service"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-the-service-code","text":"Per Getting Started Go Developers , pull the micro service code you want to work on from GitHub. In this example, we use the device-virtual-go as the micro service that is going to be worked on. 
git clone https://github.com/edgexfoundry/device-virtual-go.git","title":"Get the service code"},{"location":"getting-started/Ch-GettingStartedHybrid/#build-the-service-code","text":"At this time, you can add or modify the code to make the service changes you need. Once ready, you must compile and build the service into an executable. Change folders to the cloned micro service directory and build the service. cd device-virtual-go/ make build Clone the service from GitHub, make your code changes and then build the service locally.","title":"Build the service code"},{"location":"getting-started/Ch-GettingStartedHybrid/#change-the-configuration","text":"Depending on the service you are working on, you may need to change the configuration of the service to point to and use the other services that are containerized (running in Docker). In particular, if the service you are working on is not on the same host as the Docker Engine running the containerized services, you will likely need to change the configuration. Examine the configuration.toml file in the cmd/res folder of the device-virtual-go. Note that the Service (located in the [Service] section of the configuration), Registry (located in the [Registry] section) and all the \"Clients\" (located in the [Clients] section) show that the Host of these services is \"localhost\". These and other host configuration elements need to change when the services are not running on the same host - specifically the localhost. When your service is running on a different host than the rest of EdgeX, change the [Service] Host to be the address of the machine hosting your service. Change the [Registry] and [Clients] Host configuration to specify the location of the machine hosting these services. 
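As a concrete sketch of these host changes, assume the containerized services run on a machine at 192.168.0.100 and your native service runs on 192.168.0.50 (both addresses are invented here for illustration; they do not come from this guide). The relevant configuration.toml entries might then look like the fragment below. The section and key names follow the EdgeX 2.x configuration layout; verify them against your service's actual file before editing.

```toml
[Service]
Host = "192.168.0.50"    # machine running your native device service (illustrative)

[Registry]
Host = "192.168.0.100"   # machine hosting the containerized Consul registry (illustrative)
Port = 8500
Type = "consul"

[Clients]
  [Clients.core-data]
  Protocol = "http"
  Host = "192.168.0.100" # machine hosting the containerized EdgeX services
  Port = 59880

  [Clients.core-metadata]
  Protocol = "http"
  Host = "192.168.0.100"
  Port = 59881
```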
If you do have to change the configuration, save the configuration.toml file after making changes.","title":"Change the configuration"},{"location":"getting-started/Ch-GettingStartedHybrid/#run-the-service-code-natively","text":"The executable created by the make build command is found in the cmd folder of the service. Change folders to the location of the executable. Set any environment variables needed depending on your EdgeX setup. In this example, we did not start the security elements so we need to set EDGEX_SECURITY_SECRET_STORE to false in order to turn off security. Finally, run the service right from a terminal. cd cmd export EDGEX_SECURITY_SECRET_STORE = false ./device-virtual Change folders to the service's cmd/ folder, set env vars, and then execute the service executable in the cmd folder.","title":"Run the service code natively."},{"location":"getting-started/Ch-GettingStartedHybrid/#check-the-results","text":"At this time, your virtual device micro service should be communicating with the other EdgeX micro services running in their Docker containers. Because Core Metadata callbacks do not work in the hybrid environment, the virtual device service will not receive the Add Device callbacks on the initial run after creating them in Core Metadata. The simple workaround for this issue is to stop ( Ctrl-c from the terminal) and restart the virtual device service (again with ./device-virtual execution). The virtual device service log after stopping and restarting. Give the virtual device a few seconds or so to initialize itself and start sending data to Core Data. To check that it is working properly, open a browser and point your browser to Core Data to check that events are being deposited. You can do this by calling on the Core Data API that checks the count of events in Core Data. http://localhost:59880/api/v2/event/count For this example, you can check that the virtual device service is sending data into Core Data by checking the event count. 
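The browser check above can also be scripted. The Go sketch below parses the body returned by the count endpoint; the response shape used here ({"apiVersion":"v2","statusCode":200,"count":N}) is an assumption based on the EdgeX v2 common count response, so verify it against your running Core Data instance. In a live setup you would GET http://localhost:59880/api/v2/event/count and pass the body to parseCount.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// countResponse models the fields of interest in the assumed EdgeX v2
// event count response body.
type countResponse struct {
	APIVersion string `json:"apiVersion"`
	StatusCode int    `json:"statusCode"`
	Count      uint32 `json:"count"`
}

// parseCount extracts the event count from a Core Data count response body.
func parseCount(body []byte) (uint32, error) {
	var r countResponse
	if err := json.Unmarshal(body, &r); err != nil {
		return 0, err
	}
	return r.Count, nil
}

func main() {
	// A sample body is parsed here; in a live setup the body would come
	// from an HTTP GET against the Core Data count endpoint.
	sample := []byte(`{"apiVersion":"v2","statusCode":200,"count":42}`)
	n, err := parseCount(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println("event count:", n) // prints: event count: 42
}
```

A count that grows between two calls confirms the virtual device service is depositing events.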
Note If you choose, you can also import the service into GoLand and then code and run the service from GoLand. Follow the instructions in the Getting Started - Go Developers to learn how to import, build and run a service in GoLand.","title":"Check the results"},{"location":"getting-started/Ch-GettingStartedSDK-C/","text":"C SDK In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some of the SDK framework and work necessary to complete a device service without actually having a device to talk to. Install dependencies See the Getting Started - C Developers guide to install the necessary tools and infrastructure needed to develop a C service. Get the EdgeX Device SDK for C The next step is to download and build the EdgeX device service SDK for C. First, clone the device-sdk-c from GitHub: git clone -b v2.0.0 https://github.com/edgexfoundry/device-sdk-c.git cd ./device-sdk-c Note The clone command above has you pull v2.0.0 of the C SDK which is the version compatible with the Ireland release. Then, build the device-sdk-c: make Starting a new Device Service For this guide, you use the example template provided by the C SDK as a starting point for a new device service. You modify the device service to generate random integer values. Begin by copying the template example source into a new directory named example-device-c : mkdir -p ../example-device-c/res/profiles mkdir -p ../example-device-c/res/devices cp ./src/c/examples/template.c ../example-device-c cd ../example-device-c EdgeX 2.0 In EdgeX 2.0 the profiles have been moved to their own res/profiles directory and device definitions have been moved out of the configuration file into the res/devices directory. Build your Device Service Now you are ready to build your new device service using the C SDK you compiled in an earlier step. 
Tell the compiler where to find the C SDK files: export CSDK_DIR = ../device-sdk-c/build/release/_CPack_Packages/Linux/TGZ/csdk-2.0.0 Note The exact path to your compiled CSDK_DIR may differ depending on the tagged version number on the SDK. The version of the SDK can be found in the VERSION file located in the ./device-sdk-c/VERSION file. In the example above, the Ireland release of 2.0.0 is used. Now build your device service executable: gcc -I $CSDK_DIR /include -L $CSDK_DIR /lib -o device-example-c template.c -lcsdk If everything is working properly, a device-example-c executable will be created in the directory. Customize your Device Service Up to now you've been building the example device service provided by the C SDK. In order to change it to a device service that generates random numbers, you need to modify your template.c method template_get_handler . Replace the following code: for ( uint32_t i = 0 ; i < nreadings ; i ++ ) { /* Log the attributes for each requested resource */ iot_log_debug ( driver -> lc , \" Requested reading %u:\" , i ); dump_attributes ( driver -> lc , requests [ i ]. resource -> attrs ); /* Fill in a result regardless */ readings [ i ]. value = iot_data_alloc_string ( \"Template result\" , IOT_DATA_REF ); } return true ; with this code: for ( uint32_t i = 0 ; i < nreadings ; i ++ ) { const char * rdtype = iot_data_string_map_get_string ( requests [ i ]. resource -> attrs , \"type\" ); if ( rdtype ) { if ( strcmp ( rdtype , \"random\" ) == 0 ) { /* Set the reading as a random value between 0 and 100 */ readings [ i ]. value = iot_data_alloc_i32 ( rand () % 100 ); } else { * exception = iot_data_alloc_string ( \"Unknown sensor type requested\" , IOT_DATA_REF ); return false ; } } else { * exception = iot_data_alloc_string ( \"Unable to read value, no \\\" type \\\" attribute given\" , IOT_DATA_REF ); return false ; } } return true ; Here the reading value is set to a random signed integer. 
Various iot_data_alloc_ functions are defined in the iot/data.h header allowing readings of different types to be generated. Creating your Device Profile A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it. Follow these steps to create a device profile for the simple random number generating device service. Explore the files in the device-sdk-c/src/c/examples/res/profiles folder. Note the example TemplateProfile.json device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources in the file represent properties of a device (properties like SensorOne, SensorTwo and Switch). A pre-created device profile for the random number device is provided in this documentation. It is supplied in the alternative .yaml file format. Download random-generator-device.yaml and save the file to the ./res/profiles folder. Open the random-generator-device.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber . Note the association of a type with the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an Int32. In real-world IoT situations, this deviceResource list could be extensive and filled with many deviceResources of many different data types. Creating your Device Device Service accepts pre-defined devices to be added to EdgeX during device service startup. Follow these steps to create a pre-defined device for the simple random number generating device service. Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.json that is already in this folder. 
Open the file with your favorite editor and explore its contents. Note how the file contents represent an actual device with its properties (properties like Name, ProfileName, AutoEvents). A pre-created device for the random number device is provided in this documentation. Download random-generator-device.json and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices folder. Open the random-generator-device.json file in a text editor. In this example, the device described has a profileName: RandNum-Device . In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile Configuring your Device Service Now update the configuration for the new device service. This documentation provides a new configuration.toml file. This configuration file: - changes the port the service operates on so as not to conflict with other device services Download configuration.toml and save the file to the ./res folder. Custom Structured Configuration C Device Services support structured custom configuration as part of the [Driver] section in the configuration.toml file. View the main function of template.c . The confparams variable is initialized with default values for three test parameters. These values may be overridden by entries in the configuration file or by environment variables in the usual way. The resulting configuration is passed to the init function when the service starts. Configuration parameters X , Y/Z and Writable/Q correspond to configuration file entries as follows: [Writable] [Writable.Driver] Q = \"foo\" [Driver] X = \"bar\" [Driver.Y] Z = \"baz\" Entries in the writable section can be changed dynamically if using the registry; the reconfigure callback will be invoked with the new configuration when changes are made. In addition to strings, configuration entries may be integer, float or boolean typed. 
Use the different iot_data_alloc_ functions when setting up the defaults as appropriate. Rebuild your Device Service Now you have your new device service, modified to return a random number, a device profile that will tell EdgeX how to read that random number, as well as a configuration file that will let your device service register itself and its device profile with EdgeX, and begin taking readings every 10 seconds. Rebuild your Device Service to reflect the changes that you have made: gcc -I $CSDK_DIR /include -L $CSDK_DIR /lib -o device-example-c template.c -lcsdk Run your Device Service Allow your newly created Device Service, which was formed out of the Device Service C SDK, to create sensor-mimicking data, which it then sends to EdgeX. Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call: docker-compose up -d Back in your custom device service directory, tell your device service where to find the libcsdk.so : export LD_LIBRARY_PATH = $CSDK_DIR /lib Run your device service: ./device-example-c You should now see your device service having its /Random command called every 10 seconds. You can verify that it is sending data into EdgeX by watching the logs of the edgex-core-data service: docker logs -f edgex-core-data This prints an event record every time your device service is called. 
You can manually generate an event using curl to query the device service directly: curl 0 :59992/api/v2/device/name/RandNum-Device01/RandomNumber Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX: http://localhost:59880/api/v2/event/device/name/RandNum-Device01?limit=100 This request asks core data to provide the last 100 events/readings associated with the RandNum-Device01 device.","title":"C SDK"},{"location":"getting-started/Ch-GettingStartedSDK-C/#c-sdk","text":"In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some of the SDK framework and work necessary to complete a device service without actually having a device to talk to.","title":"C SDK"},{"location":"getting-started/Ch-GettingStartedSDK-C/#install-dependencies","text":"See the Getting Started - C Developers guide to install the necessary tools and infrastructure needed to develop a C service.","title":"Install dependencies"},{"location":"getting-started/Ch-GettingStartedSDK-C/#get-the-edgex-device-sdk-for-c","text":"The next step is to download and build the EdgeX device service SDK for C. First, clone the device-sdk-c from GitHub: git clone -b v2.0.0 https://github.com/edgexfoundry/device-sdk-c.git cd ./device-sdk-c Note The clone command above has you pull v2.0.0 of the C SDK which is the version compatible with the Ireland release. Then, build the device-sdk-c: make","title":"Get the EdgeX Device SDK for C"},{"location":"getting-started/Ch-GettingStartedSDK-C/#starting-a-new-device-service","text":"For this guide, you use the example template provided by the C SDK as a starting point for a new device service. You modify the device service to generate random integer values. 
Begin by copying the template example source into a new directory named example-device-c : mkdir -p ../example-device-c/res/profiles mkdir -p ../example-device-c/res/devices cp ./src/c/examples/template.c ../example-device-c cd ../example-device-c EdgeX 2.0 In EdgeX 2.0 the profiles have been moved to their own res/profiles directory and device definitions have been moved out of the configuration file into the res/devices directory.","title":"Starting a new Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#build-your-device-service","text":"Now you are ready to build your new device service using the C SDK you compiled in an earlier step. Tell the compiler where to find the C SDK files: export CSDK_DIR = ../device-sdk-c/build/release/_CPack_Packages/Linux/TGZ/csdk-2.0.0 Note The exact path to your compiled CSDK_DIR may differ depending on the tagged version number on the SDK. The version of the SDK can be found in the VERSION file located in the ./device-sdk-c/VERSION file. In the example above, the Ireland release of 2.0.0 is used. Now build your device service executable: gcc -I $CSDK_DIR /include -L $CSDK_DIR /lib -o device-example-c template.c -lcsdk If everything is working properly, a device-example-c executable will be created in the directory.","title":"Build your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#customize-your-device-service","text":"Up to now you've been building the example device service provided by the C SDK. In order to change it to a device service that generates random numbers, you need to modify your template.c method template_get_handler . Replace the following code: for ( uint32_t i = 0 ; i < nreadings ; i ++ ) { /* Log the attributes for each requested resource */ iot_log_debug ( driver -> lc , \" Requested reading %u:\" , i ); dump_attributes ( driver -> lc , requests [ i ]. resource -> attrs ); /* Fill in a result regardless */ readings [ i ]. 
value = iot_data_alloc_string ( \"Template result\" , IOT_DATA_REF ); } return true ; with this code: for ( uint32_t i = 0 ; i < nreadings ; i ++ ) { const char * rdtype = iot_data_string_map_get_string ( requests [ i ]. resource -> attrs , \"type\" ); if ( rdtype ) { if ( strcmp ( rdtype , \"random\" ) == 0 ) { /* Set the reading as a random value between 0 and 100 */ readings [ i ]. value = iot_data_alloc_i32 ( rand () % 100 ); } else { * exception = iot_data_alloc_string ( \"Unknown sensor type requested\" , IOT_DATA_REF ); return false ; } } else { * exception = iot_data_alloc_string ( \"Unable to read value, no \\\" type \\\" attribute given\" , IOT_DATA_REF ); return false ; } } return true ; Here the reading value is set to a random signed integer. Various iot_data_alloc_ functions are defined in the iot/data.h header allowing readings of different types to be generated.","title":"Customize your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#creating-your-device-profile","text":"A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it. Follow these steps to create a device profile for the simple random number generating device service. Explore the files in the device-sdk-c/src/c/examples/res/profiles folder. Note the example TemplateProfile.json device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources in the file represent properties of a device (properties like SensorOne, SensorTwo and Switch). A pre-created device profile for the random number device is provided in this documentation. It is supplied in the alternative .yaml file format. 
Download random-generator-device.yaml and save the file to the ./res/profiles folder. Open the random-generator-device.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber . Note the association of a type with the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an Int32. In real-world IoT situations, this deviceResource list could be extensive and filled with many deviceResources of many different data types.","title":"Creating your Device Profile"},{"location":"getting-started/Ch-GettingStartedSDK-C/#creating-your-device","text":"Device Service accepts pre-defined devices to be added to EdgeX during device service startup. Follow these steps to create a pre-defined device for the simple random number generating device service. Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.json that is already in this folder. Open the file with your favorite editor and explore its contents. Note how the file contents represent an actual device with its properties (properties like Name, ProfileName, AutoEvents). A pre-created device for the random number device is provided in this documentation. Download random-generator-device.json and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices folder. Open the random-generator-device.json file in a text editor. In this example, the device described has a profileName: RandNum-Device . In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile","title":"Creating your Device"},{"location":"getting-started/Ch-GettingStartedSDK-C/#configuring-your-device-service","text":"Now update the configuration for the new device service. This documentation provides a new configuration.toml file. 
This configuration file: - changes the port the service operates on so as not to conflict with other device services Download configuration.toml and save the file to the ./res folder.","title":"Configuring your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#custom-structured-configuration","text":"C Device Services support structured custom configuration as part of the [Driver] section in the configuration.toml file. View the main function of template.c . The confparams variable is initialized with default values for three test parameters. These values may be overridden by entries in the configuration file or by environment variables in the usual way. The resulting configuration is passed to the init function when the service starts. Configuration parameters X , Y/Z and Writable/Q correspond to configuration file entries as follows: [Writable] [Writable.Driver] Q = \"foo\" [Driver] X = \"bar\" [Driver.Y] Z = \"baz\" Entries in the writable section can be changed dynamically if using the registry; the reconfigure callback will be invoked with the new configuration when changes are made. In addition to strings, configuration entries may be integer, float or boolean typed. Use the different iot_data_alloc_ functions when setting up the defaults as appropriate.","title":"Custom Structured Configuration"},{"location":"getting-started/Ch-GettingStartedSDK-C/#rebuild-your-device-service","text":"Now you have your new device service, modified to return a random number, a device profile that will tell EdgeX how to read that random number, as well as a configuration file that will let your device service register itself and its device profile with EdgeX, and begin taking readings every 10 seconds. 
Rebuild your Device Service to reflect the changes that you have made: gcc -I $CSDK_DIR /include -L $CSDK_DIR /lib -o device-example-c template.c -lcsdk","title":"Rebuild your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#run-your-device-service","text":"Allow your newly created Device Service, which was formed out of the Device Service C SDK, to create sensor-mimicking data, which it then sends to EdgeX. Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call: docker-compose up -d Back in your custom device service directory, tell your device service where to find the libcsdk.so : export LD_LIBRARY_PATH = $CSDK_DIR /lib Run your device service: ./device-example-c You should now see your device service having its /Random command called every 10 seconds. You can verify that it is sending data into EdgeX by watching the logs of the edgex-core-data service: docker logs -f edgex-core-data This prints an event record every time your device service is called. You can manually generate an event using curl to query the device service directly: curl 0 :59992/api/v2/device/name/RandNum-Device01/RandomNumber Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX: http://localhost:59880/api/v2/event/device/name/RandNum-Device01?limit=100 This request asks core data to provide the last 100 events/readings associated with the RandNum-Device01 device.","title":"Run your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/","text":"Golang SDK In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some of the SDK framework and work necessary to complete a device service without actually having a device to talk to. 
Install dependencies See the Getting Started - Go Developers guide to install the necessary tools and infrastructure needed to develop a GoLang service. Get the EdgeX Device SDK for Go Follow these steps to create a folder on your file system, download the Device SDK , and get the GoLang device service SDK on your system. Create a collection of nested folders, ~/edgexfoundry on your file system. This folder will hold your new Device Service. In Linux, create a directory with a single mkdir command mkdir -p ~/edgexfoundry In a terminal window, change directories to the folder just created and pull down the SDK in Go with the commands as shown. cd ~/edgexfoundry git clone --depth 1 --branch v2.0.0 https://github.com/edgexfoundry/device-sdk-go.git Note The clone command above has you pull v2.0.0 of the Go SDK which is the version associated with Ireland. There are later releases of EdgeX, and it is always a good idea to pull and use the latest version associated with the major version of EdgeX you are using. You may want to check for the latest released version by going to https://github.com/edgexfoundry/device-sdk-go and looking for the latest release. Create a folder that will hold the new device service. The name of the folder is also the name you want to give your new device service. Standard practice in EdgeX is to prefix the name of a device service with device- . In this example, the name 'device-simple' is used. mkdir -p ~/edgexfoundry/device-simple Copy the example code from device-sdk-go to device-simple : cd ~/edgexfoundry cp -rf ./device-sdk-go/example/* ./device-simple/ Copy Makefile to device-simple: cp ./device-sdk-go/Makefile ./device-simple Copy version.go to device-simple: cp ./device-sdk-go/version.go ./device-simple/ After completing these steps, your device-simple folder should look like the listing below. 
Start a new Device Service With the device service application structure in place, it is now time to program the service to act like a sensor data fetching service. Change folders to the device-simple directory. cd ~/edgexfoundry/device-simple Open the main.go file in the cmd/device-simple folder with your favorite text editor. Modify the import statements. Replace github.com/edgexfoundry/device-sdk-go/v2/example/driver with github.com/edgexfoundry/device-simple/driver in the import statements. Also replace github.com/edgexfoundry/device-sdk-go/v2 with github.com/edgexfoundry/device-simple . Save the file when you have finished editing. Open the Makefile found in the base folder (~/edgexfoundry/device-simple) in your favorite text editor and make the following changes. Replace: MICROSERVICES=example/cmd/device-simple/device-simple with: MICROSERVICES=cmd/device-simple/device-simple Change: GOFLAGS=-ldflags \"-X github.com/edgexfoundry/device-sdk-go/v2.Version=$(VERSION)\" to refer to the new service with: GOFLAGS=-ldflags \"-X github.com/edgexfoundry/device-simple.Version=$(VERSION)\" Change: example/cmd/device-simple/device-simple: go mod tidy $(GOCGO) build $(GOFLAGS) -o $@ ./example/cmd/device-simple to: cmd/device-simple/device-simple: go mod tidy $(GOCGO) build $(GOFLAGS) -o $@ ./cmd/device-simple Save the file. Enter the following command to create the initial module definition and write it to the go.mod file: GO111MODULE=on go mod init github.com/edgexfoundry/device-simple Use an editor to open and edit the go.mod file created in ~/edgexfoundry/device-simple. Add the code highlighted below to the bottom of the file. This code indicates which version of the device service SDK and the associated EdgeX contracts module to use. require ( github.com/edgexfoundry/device-sdk-go/v2 v2.0.0
github.com/edgexfoundry/go-mod-core-contracts/v2 v2.0.0 ) Note You should always check the go.mod file in the latest released version of the SDK for the correct versions of the Go SDK and go-mod-core-contracts to use in your go.mod. Build your Device Service To ensure that the code you have moved and updated still works, build the device service. In a terminal window, make sure you are still in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command: make build If there are no errors, your service is ready for you to add custom code to generate data values as if there was a sensor attached. Customize your Device Service The device service you are creating isn't going to talk to a real device. Instead, it is going to generate a random number where the service would ordinarily make a call to get sensor data from the actual device. Locate the simpledriver.go file in the /driver folder and open it with your favorite editor. In the import() area at the top of the file, add \"math/rand\" under \"time\". Locate the HandleReadCommands() function in this same file (simpledriver.go). Find the following lines of code in this file (around line 139): if reqs[0].DeviceResourceName == \"SwitchButton\" { cv, _ := sdkModels.NewCommandValue(reqs[0].DeviceResourceName, common.ValueTypeBool, s.switchButton) res[0] = cv } Add the following conditional (if-else) code in front of the above conditional: if reqs[0].DeviceResourceName == \"RandomNumber\" { cv, _ := sdkModels.NewCommandValue(reqs[0].DeviceResourceName, common.ValueTypeInt32, int32(rand.Intn(100))) res[0] = cv } else The first line of code checks that the current request is for a resource called \"RandomNumber\". The second line of code generates an integer (between 0 and 99) and uses that as the value the device service sends to EdgeX -- mimicking the collection of data from a real device.
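The branch added to HandleReadCommands can be sketched as a self-contained program. The CommandRequest and CommandValue types below are simplified stand-ins for the SDK's sdkModels types, not the real SDK definitions:

```go
package main

import (
	"fmt"
	"math/rand"
)

// CommandRequest is a stand-in for the SDK's sdkModels.CommandRequest.
type CommandRequest struct {
	DeviceResourceName string
}

// CommandValue is a stand-in for the SDK's sdkModels.CommandValue.
type CommandValue struct {
	DeviceResourceName string
	Value              interface{}
}

// handleRead mimics the branch added to HandleReadCommands: when the
// requested resource is "RandomNumber", return a random Int32 in [0, 100).
func handleRead(reqs []CommandRequest) []*CommandValue {
	res := make([]*CommandValue, len(reqs))
	if reqs[0].DeviceResourceName == "RandomNumber" {
		res[0] = &CommandValue{reqs[0].DeviceResourceName, int32(rand.Intn(100))}
	}
	return res
}

func main() {
	cv := handleRead([]CommandRequest{{DeviceResourceName: "RandomNumber"}})[0]
	fmt.Println(cv.DeviceResourceName, cv.Value)
}
```

In the real driver, this is the point where code to talk to actual hardware would replace the call to rand.Intn.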
It is here that the device service would normally capture some sensor reading from a device and send the data to EdgeX. The HandleReadCommands is where you'd need to do some customization work to talk to the device, get the latest sensor values and send them into EdgeX. Save the simpledriver.go file. Creating your Device Profile A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it. Follow these steps to create a device profile for the simple random number generating device service. Explore the files in the cmd/device-simple/res/profiles folder. Note the example Simple-Driver.yaml device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources in the file represent properties of a device (properties like SwitchButton, X, Y and Z rotation). A pre-created device profile for the random number device is provided in this documentation. Download random-generator-device.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/profiles folder. Open the random-generator-device.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber . Note the association of a type to the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an INT32. In real-world IoT situations, this deviceResource list could be extensive. Rather than a single deviceResource, you might find this section filled with many deviceResources and each deviceResource associated with a different type. Creating your Device A device service accepts pre-defined devices to be added to EdgeX during device service startup.
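The downloadable random-generator-device.yaml is the authoritative profile; as an illustrative sketch of its shape (the manufacturer, model, labels, and description values below are placeholders, not the real file's contents), a v2 device profile with a single Int32 deviceResource looks roughly like this:

```yaml
# Illustrative sketch only -- use the downloadable
# random-generator-device.yaml for the actual profile.
name: "RandNum-Device"
manufacturer: "Example"        # placeholder value
model: "RNG-01"                # placeholder value
labels: ["random"]             # placeholder value
description: "Random number generating device"
deviceResources:
  - name: "RandomNumber"
    isHidden: false
    description: "Generated random number"
    properties:
      valueType: "Int32"
      readWrite: "R"
```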
Follow these steps to create a pre-defined device for the simple random number generating device service. Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.toml that is already in this folder. Open the file with your favorite editor and explore its contents. Note how DeviceList in the file represents an actual device with its properties (properties like Name, ProfileName, AutoEvents). A pre-created device for the random number device is provided in this documentation. Download random-generator-device.toml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices folder. Open the random-generator-device.toml file in a text editor. In this example, the device described has a ProfileName: RandNum-Device . In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile. Validating your Device Go device services provide the /api/v2/validate/device API to validate a device's ProtocolProperties. This feature allows device services whose protocols have strict rules to validate their devices before adding them into EdgeX. The Go SDK provides the DeviceValidator interface: // DeviceValidator is a low-level device-specific interface implemented // by device services that validate device's protocol properties. type DeviceValidator interface { // ValidateDevice triggers device's protocol properties validation, returns error // if validation failed and the incoming device will not be added into EdgeX. ValidateDevice(device models.Device) error } When a device service implements the DeviceValidator interface, the ValidateDevice function is called whenever a device is added or updated, validating the incoming device's ProtocolProperties and rejecting the request if validation fails. Configuring your Device Service Now update the configuration for the new device service. This documentation provides a new configuration.toml file.
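The DeviceValidator interface described above might be implemented as in the following self-contained sketch. The Device type is a minimal stand-in for models.Device from go-mod-core-contracts, and the "other"/"Address" validation rule is purely illustrative:

```go
package main

import "fmt"

// Device is a minimal stand-in for models.Device, holding only the
// fields this illustrative validator needs.
type Device struct {
	Name      string
	Protocols map[string]map[string]string
}

// SimpleDriver is a hypothetical driver implementing the SDK's
// DeviceValidator contract (ValidateDevice(device models.Device) error).
type SimpleDriver struct{}

// ValidateDevice rejects devices whose "other" protocol section is missing
// or lacks a required "Address" property. The rule itself is illustrative.
func (d *SimpleDriver) ValidateDevice(device Device) error {
	props, ok := device.Protocols["other"]
	if !ok {
		return fmt.Errorf("device %s: missing 'other' protocol section", device.Name)
	}
	if props["Address"] == "" {
		return fmt.Errorf("device %s: 'Address' protocol property is required", device.Name)
	}
	return nil
}

func main() {
	d := &SimpleDriver{}
	good := Device{Name: "ok", Protocols: map[string]map[string]string{"other": {"Address": "simple01"}}}
	bad := Device{Name: "bad", Protocols: map[string]map[string]string{}}
	fmt.Println(d.ValidateDevice(good), d.ValidateDevice(bad))
}
```

Returning a non-nil error from ValidateDevice causes the SDK to reject the add or update request for that device.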
This configuration file: changes the port the service operates on so as not to conflict with other device services Download configuration.toml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res folder (overwrite the existing configuration file). Change the host address of the device service to your system's IP address. Warning In the configuration.toml, change the host address (around line 14) to the IP address of the system host. This allows core metadata to call back to your new device service when a new device is created. Because the rest of EdgeX, including core metadata, will be running in Docker, the IP address of the host system on the Docker network must be provided to allow metadata in Docker to call out from Docker to the new device service running on your host system. Custom Structured Configuration EdgeX 2.0 New for EdgeX 2.0 Go Device Services can now define their own custom structured configuration section in the configuration.toml file. Any additional sections in the TOML are ignored by the SDK when it parses the file for the SDK-defined sections. This feature allows a Device Service to define and watch its own structured section in the service's TOML configuration file. The SDK provides the following APIs to enable structured custom configuration: LoadCustomConfig(config UpdatableConfig, sectionName string) error Loads the service's custom configuration from the local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration the first time the service is started, if the service is using the Configuration Provider. The UpdateFromRaw interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider.
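The UpdateFromRaw method that LoadCustomConfig relies on can be sketched as follows. The ServiceConfig and AppCustomConfig type names and fields are illustrative, not taken from a real service; the type-assertion-and-copy pattern is the essential part:

```go
package main

import "fmt"

// AppCustomConfig is a hypothetical custom configuration section;
// the field names are illustrative.
type AppCustomConfig struct {
	ResourceNames string
	SomeValue     int
}

// ServiceConfig wraps the custom section, mirroring how a service
// would model its own block of the TOML file.
type ServiceConfig struct {
	AppCustom AppCustomConfig
}

// UpdateFromRaw satisfies the UpdatableConfig contract used by
// LoadCustomConfig: assert the raw value to the concrete config type
// and copy it into the receiver, reporting whether that succeeded.
func (c *ServiceConfig) UpdateFromRaw(rawConfig interface{}) bool {
	configuration, ok := rawConfig.(*ServiceConfig)
	if !ok {
		return false // raw value was not the expected config type
	}
	*c = *configuration
	return true
}

func main() {
	cfg := &ServiceConfig{}
	raw := &ServiceConfig{AppCustom: AppCustomConfig{ResourceNames: "RandomNumber", SomeValue: 5}}
	fmt.Println(cfg.UpdateFromRaw(raw), cfg.AppCustom.SomeValue)
}
```

Returning false signals to the SDK that the raw configuration could not be applied.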
ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error Starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider, the UpdateWritableFromRaw interface will be called on the custom configuration to apply the updates and then signal that the changes occurred via changedCallback. See the Device MQTT Service for an example of using the new Structured Custom Configuration capability. See here for defining the structured custom configuration See here for the custom section in the configuration.toml file See here for loading, validating and watching the configuration Retrieving Secrets The Go Device SDK provides the SecretProvider.GetSecret() API to retrieve the Device Service's secrets. See the Device MQTT Service for an example of using the SecretProvider.GetSecret() API. Note that this code implements a retry loop allowing time for the secret(s) to be pushed into the service's SecretStore via the /secret endpoint. See the Storing Secrets section for more details. Rebuild your Device Service Just as you did in the Build your Device Service step above, build the device-simple service, which creates the executable program that is your device service. In a terminal window, make sure you are in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command: cd ~/edgexfoundry/device-simple make build If there are no errors, your service is created and put in the ~/edgexfoundry/device-simple/cmd/device-simple folder. Look for the device-simple executable in the folder. Run your Device Service Allow the newly created device service, which was formed out of the Device Service Go SDK, to create sensor-mimicking data that it then sends to EdgeX: Follow the Getting Started using Docker guide to start all of EdgeX.
From the folder containing the docker-compose file, start EdgeX with the following call (we're using non-security EdgeX in this example): docker-compose -f docker-compose-no-secty.yml up -d In a terminal window, change directories to the device-simple's cmd/device-simple folder and run the new device-simple service. cd ~/edgexfoundry/device-simple/cmd/device-simple ./device-simple This starts the service and immediately displays log entries in the terminal. Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX: http://localhost:59880/api/v2/event/device/name/RandNum-Device01 This request asks core data to provide the events associated to the RandNum-Device-01.","title":"Golang SDK"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#golang-sdk","text":"In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some SDK framework and work necessary to complete a device service without actually having a device to talk to.","title":"Golang SDK"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#install-dependencies","text":"See the Getting Started - Go Developers guide to install the necessary tools and infrastructure needed to develop a GoLang service.","title":"Install dependencies"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#get-the-edgex-device-sdk-for-go","text":"Follow these steps to create a folder on your file system, download the Device SDK , and get the GoLang device service SDK on your system. Create a collection of nested folders, ~/edgexfoundry on your file system. This folder will hold your new Device Service. In Linux, create a directory with a single mkdir command mkdir -p ~/edgexfoundry In a terminal window, change directories to the folder just created and pull down the SDK in Go with the commands as shown. 
cd ~/edgexfoundry git clone --depth 1 --branch v2.0.0 https://github.com/edgexfoundry/device-sdk-go.git Note The clone command above has you pull v2.0.0 of the Go SDK which is the version associated to Ireland. There are later releases of EdgeX, and it is always a good idea to pull and use the latest version associated with the major version of EdgeX you are using. You may want to check for the latest released version by going to https://github.com/edgexfoundry/device-sdk-go and look for the latest release. Create a folder that will hold the new device service. The name of the folder is also the name you want to give your new device service. Standard practice in EdgeX is to prefix the name of a device service with device- . In this example, the name 'device-simple' is used. mkdir -p ~/edgexfoundry/device-simple Copy the example code from device-sdk-go to device-simple : cd ~/edgexfoundry cp -rf ./device-sdk-go/example/* ./device-simple/ Copy Makefile to device-simple: cp ./device-sdk-go/Makefile ./device-simple Copy version.go to device-simple: cp ./device-sdk-go/version.go ./device-simple/ After completing these steps, your device-simple folder should look like the listing below.","title":"Get the EdgeX Device SDK for Go"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#start-a-new-device-service","text":"With the device service application structure in place, time now to program the service to act like a sensor data fetching service. Change folders to the device-simple directory. cd ~/edgexfoundry/device-simple Open main.go file in the cmd/device-simple folder with your favorite text editor. Modify the import statements. Replace github.com/edgexfoundry/device-sdk-go/v2/example/driver with github.com/edgexfoundry/device-simple/driver in the import statements. Also replace github.com/edgexfoundry/device-sdk-go/v2 with github.com/edgexfoundry/device-simple . Save the file when you have finished editing. 
Open the Makefile found in the base folder (~/edgexfoundry/device-simple) in your favorite text editor and make the following changes. Replace: MICROSERVICES=example/cmd/device-simple/device-simple with: MICROSERVICES=cmd/device-simple/device-simple Change: GOFLAGS=-ldflags \"-X github.com/edgexfoundry/device-sdk-go/v2.Version=$(VERSION)\" to refer to the new service with: GOFLAGS=-ldflags \"-X github.com/edgexfoundry/device-simple.Version=$(VERSION)\" Change: example/cmd/device-simple/device-simple: go mod tidy $(GOCGO) build $(GOFLAGS) -o $@ ./example/cmd/device-simple to: cmd/device-simple/device-simple: go mod tidy $(GOCGO) build $(GOFLAGS) -o $@ ./cmd/device-simple Save the file. Enter the following command to create the initial module definition and write it to the go.mod file: GO111MODULE=on go mod init github.com/edgexfoundry/device-simple Use an editor to open and edit the go.mod file created in ~/edgexfoundry/device-simple. Add the code highlighted below to the bottom of the file. This code indicates which version of the device service SDK and the associated EdgeX contracts module to use. require ( github.com/edgexfoundry/device-sdk-go/v2 v2.0.0 github.com/edgexfoundry/go-mod-core-contracts/v2 v2.0.0 ) Note You should always check the go.mod file in the latest released version of the SDK for the correct versions of the Go SDK and go-mod-core-contracts to use in your go.mod.","title":"Start a new Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#build-your-device-service","text":"To ensure that the code you have moved and updated still works, build the device service. In a terminal window, make sure you are still in the device-simple folder (the folder containing the Makefile).
Build the service by issuing the following command: make build If there are no errors, your service is ready for you to add custom code to generate data values as if there was a sensor attached.","title":"Build your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#customize-your-device-service","text":"The device service you are creating isn't going to talk to a real device. Instead, it is going to generate a random number where the service would ordinarily make a call to get sensor data from the actual device. Locate the simpledriver.go file in the /driver folder and open it with your favorite editor. In the import() area at the top of the file, add \"math/rand\" under \"time\". Locate the HandleReadCommands() function in this same file (simpledriver.go). Find the following lines of code in this file (around line 139): if reqs[0].DeviceResourceName == \"SwitchButton\" { cv, _ := sdkModels.NewCommandValue(reqs[0].DeviceResourceName, common.ValueTypeBool, s.switchButton) res[0] = cv } Add the following conditional (if-else) code in front of the above conditional: if reqs[0].DeviceResourceName == \"RandomNumber\" { cv, _ := sdkModels.NewCommandValue(reqs[0].DeviceResourceName, common.ValueTypeInt32, int32(rand.Intn(100))) res[0] = cv } else The first line of code checks that the current request is for a resource called \"RandomNumber\". The second line of code generates an integer (between 0 and 99) and uses that as the value the device service sends to EdgeX -- mimicking the collection of data from a real device. It is here that the device service would normally capture some sensor reading from a device and send the data to EdgeX. The HandleReadCommands is where you'd need to do some customization work to talk to the device, get the latest sensor values and send them into EdgeX.
Save the simpledriver.go file.","title":"Customize your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#creating-your-device-profile","text":"A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it. Follow these steps to create a device profile for the simple random number generating device service. Explore the files in the cmd/device-simple/res/profiles folder. Note the example Simple-Driver.yaml device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources in the file represent properties of a device (properties like SwitchButton, X, Y and Z rotation). A pre-created device profile for the random number device is provided in this documentation. Download random-generator-device.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/profiles folder. Open the random-generator-device.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber . Note the association of a type to the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an INT32. In real-world IoT situations, this deviceResource list could be extensive. Rather than a single deviceResource, you might find this section filled with many deviceResources and each deviceResource associated with a different type.","title":"Creating your Device Profile"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#creating-your-device","text":"A device service accepts pre-defined devices to be added to EdgeX during device service startup. Follow these steps to create a pre-defined device for the simple random number generating device service.
Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.toml that is already in this folder. Open the file with your favorite editor and explore its contents. Note how DeviceList in the file represents an actual device with its properties (properties like Name, ProfileName, AutoEvents). A pre-created device for the random number device is provided in this documentation. Download random-generator-device.toml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices folder. Open the random-generator-device.toml file in a text editor. In this example, the device described has a ProfileName: RandNum-Device . In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile.","title":"Creating your Device"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#validating-your-device","text":"Go device services provide the /api/v2/validate/device API to validate a device's ProtocolProperties. This feature allows device services whose protocols have strict rules to validate their devices before adding them into EdgeX. The Go SDK provides the DeviceValidator interface: // DeviceValidator is a low-level device-specific interface implemented // by device services that validate device's protocol properties. type DeviceValidator interface { // ValidateDevice triggers device's protocol properties validation, returns error // if validation failed and the incoming device will not be added into EdgeX. ValidateDevice(device models.Device) error } When a device service implements the DeviceValidator interface, the ValidateDevice function is called whenever a device is added or updated, validating the incoming device's ProtocolProperties and rejecting the request if validation fails.","title":"Validating your Device"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#configuring-your-device-service","text":"Now update the configuration for the new device service.
This documentation provides a new configuration.toml file. This configuration file: changes the port the service operates on so as not to conflict with other device services Download configuration.toml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res folder (overwrite the existing configuration file). Change the host address of the device service to your system's IP address. Warning In the configuration.toml, change the host address (around line 14) to the IP address of the system host. This allows core metadata to call back to your new device service when a new device is created. Because the rest of EdgeX, including core metadata, will be running in Docker, the IP address of the host system on the Docker network must be provided to allow metadata in Docker to call out from Docker to the new device service running on your host system.","title":"Configuring your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#custom-structured-configuration","text":"EdgeX 2.0 New for EdgeX 2.0 Go Device Services can now define their own custom structured configuration section in the configuration.toml file. Any additional sections in the TOML are ignored by the SDK when it parses the file for the SDK-defined sections. This feature allows a Device Service to define and watch its own structured section in the service's TOML configuration file. The SDK provides the following APIs to enable structured custom configuration: LoadCustomConfig(config UpdatableConfig, sectionName string) error Loads the service's custom configuration from the local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration the first time the service is started, if the service is using the Configuration Provider. The UpdateFromRaw interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider.
ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error Starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider, the UpdateWritableFromRaw interface will be called on the custom configuration to apply the updates and then signal that the changes occurred via changedCallback. See the Device MQTT Service for an example of using the new Structured Custom Configuration capability. See here for defining the structured custom configuration See here for the custom section in the configuration.toml file See here for loading, validating and watching the configuration","title":"Custom Structured Configuration"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#retrieving-secrets","text":"The Go Device SDK provides the SecretProvider.GetSecret() API to retrieve the Device Service's secrets. See the Device MQTT Service for an example of using the SecretProvider.GetSecret() API. Note that this code implements a retry loop allowing time for the secret(s) to be pushed into the service's SecretStore via the /secret endpoint. See the Storing Secrets section for more details.","title":"Retrieving Secrets"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#rebuild-your-device-service","text":"Just as you did in the Build your Device Service step above, build the device-simple service, which creates the executable program that is your device service. In a terminal window, make sure you are in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command: cd ~/edgexfoundry/device-simple make build If there are no errors, your service is created and put in the ~/edgexfoundry/device-simple/cmd/device-simple folder.
Look for the device-simple executable in the folder.","title":"Rebuild your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#run-your-device-service","text":"Allow the newly created device service, which was formed out of the Device Service Go SDK, to create sensor-mimicking data that it then sends to EdgeX: Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call (we're using non-security EdgeX in this example): docker-compose -f docker-compose-no-secty.yml up -d In a terminal window, change directories to the device-simple's cmd/device-simple folder and run the new device-simple service. cd ~/edgexfoundry/device-simple/cmd/device-simple ./device-simple This starts the service and immediately displays log entries in the terminal. Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX: http://localhost:59880/api/v2/event/device/name/RandNum-Device01 This request asks core data to provide the events associated to the RandNum-Device-01.","title":"Run your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK/","text":"Device Service SDK The EdgeX device service software development kits (SDKs) help developers create new device connectors for EdgeX. An SDK provides the common scaffolding that each device service needs. This allows developers to create new device/sensor connectors more quickly. The EdgeX community already provides many device services. However, there is no way the community can provide for every protocol and every sensor. Even if the EdgeX community provided a device service for every protocol, your use case, sensor, or security infrastructure might require customization. Thus, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity. EdgeX provides two SDKs to help developers create new device services. 
Most of EdgeX is written in Go and C. Thus, there's a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, the community may offer alternate language SDKs. The SDKs are libraries that get incorporated into a new microservice. They make writing a new device service much easier. By importing the SDK library into your new device service project, developers are left to focus on the code that is specific to the communications with the device via the protocol of the device. The code in the SDK handles the other details, such as: - initialization of the device service - getting the service configured - sending sensor data to core data - managing communications with core metadata - and much more. The code in the SDK also helps to ensure your device service adheres to the rules and standards of EdgeX. For example, it makes sure the service registers with the EdgeX registry service when it starts. Use the GoLang SDK Use the C SDK","title":"Device Service SDK"},{"location":"getting-started/Ch-GettingStartedSDK/#device-service-sdk","text":"The EdgeX device service software development kits (SDKs) help developers create new device connectors for EdgeX. An SDK provides the common scaffolding that each device service needs. This allows developers to create new device/sensor connectors more quickly. The EdgeX community already provides many device services. However, there is no way the community can provide for every protocol and every sensor. Even if the EdgeX community provided a device service for every protocol, your use case, sensor, or security infrastructure might require customization. Thus, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity. EdgeX provides two SDKs to help developers create new device services. Most of EdgeX is written in Go and C.
Thus, there's a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, the community may offer alternate language SDKs. The SDKs are libraries that get incorporated into a new microservice. They make writing a new device service much easier. By importing the SDK library into your new device service project, developers are left to focus on the code that is specific to the communications with the device via the protocol of the device. The code in the SDK handles the other details, such as: - initialization of the device service - getting the service configured - sending sensor data to core data - managing communications with core metadata - and much more. The code in the SDK also helps to ensure your device service adheres to the rules and standards of EdgeX. For example, it makes sure the service registers with the EdgeX registry service when it starts. Use the GoLang SDK Use the C SDK","title":"Device Service SDK"},{"location":"getting-started/Ch-GettingStartedSnapUsers/","text":"Getting Started using Snaps Introduction Snaps are application packages that are easy to install and update while being secure, cross\u2010platform and self-contained. Snaps can be installed on any Linux distribution with snap support . Snap packages of EdgeX services are published on the Snap Store . The list of all EdgeX snaps is available below . EdgeX Snaps The following snaps are maintained by the EdgeX working groups: Platform snap: edgexfoundry : the main platform snap containing all reference core services along with several other security, supporting, application, and device services. Development tools: edgex-ui edgex-cli Application services: edgex-app-service-configurable Device services: edgex-device-camera edgex-device-modbus edgex-device-mqtt edgex-device-rest edgex-device-snmp edgex-device-grove Other EdgeX snaps do exist on the public Snap Store ( search by keyword ) or private stores under brand accounts.
Installing the edgexfoundry snap This is the main platform snap which contains all reference core services along with several other security, supporting, application, and device services. The Snap Store allows access to multiple versions of the EdgeX Foundry snap using channels . If not specified, snaps are installed from the default latest/stable channel. You can see the current snap channels available for your machine's architecture by running the command: snap info edgexfoundry Install a specific version of the snap by setting the --channel flag. For example, to install the Jakarta (2.1) release: sudo snap install edgexfoundry --channel = 2 .1 To install the latest beta: sudo snap install edgexfoundry --channel = latest/beta # or using the shorthand sudo snap install edgexfoundry --beta Replace beta with edge to get the latest nightly build! Upon installation, the following internal EdgeX services are automatically started: consul vault redis kong postgres core-data core-command core-metadata security-services (see Security Services section below) The following services are disabled by default: app-service-configurable (required for eKuiper) device-virtual kuiper support-notifications support-scheduler sys-mgmt-agent Any disabled services can be enabled and started up using snap set : sudo snap set edgexfoundry support-notifications = on To turn a service off (thereby disabling and immediately stopping it) set the service to off: sudo snap set edgexfoundry support-notifications = off All services are installed on the system as systemd units which, if enabled, will automatically start running when the system boots or reboots. Configuring individual services This snap supports configuration overrides via snap configure hooks which generate service-specific .env files which are used to provide a custom environment to the service, overriding the default configuration provided by the service's configuration.toml file. 
If a configuration override is made after a service has already started, then the service must be restarted via command-line (e.g. snap restart edgexfoundry. ), or snapd's REST API . If the overrides are provided via the snap configuration defaults capability of a gadget snap, the overrides will be picked up when the services are first started. The following syntax is used to specify service-specific configuration overrides for the edgexfoundry snap: env... For instance, to set up an override of core data's port use: sudo snap set edgexfoundry env.core-data.service.port = 2112 And restart the service: sudo snap restart edgexfoundry.core-data Note At this time changes to configuration values in the [Writable] section are not supported. For details on the mapping of configuration options to Config options, please refer to Service Environment Configuration Overrides . Viewing logs To view the logs for all services in an EdgeX snap use the snap logs command: sudo snap logs edgexfoundry Individual service logs may be viewed by specifying the service name: sudo snap logs edgexfoundry.consul Or by using the systemd unit name and journalctl : journalctl -u snap.edgexfoundry.consul These techniques can be used with any snap, including application snaps and device service snaps. Security services Currently, the EdgeX snap has security (Secret Store and API Gateway) enabled by default. The security services constitute the following components: kong-daemon (API Gateway a.k.a. Reverse Proxy) postgres (kong's database) vault (Secret Store) Oneshot services perform the necessary security setup and then stop; when listed using snap services , they show up as enabled/inactive : security-proxy-setup (kong setup) security-secretstore-setup (vault setup) security-bootstrapper-redis (secure redis setup) security-consul-bootstrapper (secure consul setup) Vault is known within EdgeX as the Secret Store, while Kong+PostgreSQL are used to provide the EdgeX API Gateway. 
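The override key syntax described above (env.<service>.<toml-path>, with the configuration path lower-cased) can be sketched as a small helper. This is a hypothetical illustration only; `snap_override` is not part of the snap tooling:

```python
# Hypothetical helper (not part of the snap tooling): build the
# "snap set" command for an edgexfoundry configuration override using
# the documented env.<service>.<key> syntax.
def snap_override(service: str, toml_path: str, value) -> str:
    key = f"env.{service}.{toml_path.lower()}"
    return f"sudo snap set edgexfoundry {key}={value}"

# Reproduces the core data port override from the text.
print(snap_override("core-data", "service.port", 2112))
# sudo snap set edgexfoundry env.core-data.service.port=2112
```

The generated command matches the `snap set` example shown above; remember to restart the affected service afterwards.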
For more details please refer to the snap's Secret Store and API Gateway documentation.","title":"Getting Started using Snaps"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#getting-started-using-snaps","text":"","title":"Getting Started using Snaps"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#introduction","text":"Snaps are application packages that are easy to install and update while being secure, cross\u2010platform and self-contained. Snaps can be installed on any Linux distribution with snap support . Snap packages of EdgeX services are published on the Snap Store . The list of all EdgeX snaps is available below .","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#edgex-snaps","text":"The following snaps are maintained by the EdgeX working groups: Platform snap: edgexfoundry : the main platform snap containing all reference core services along with several other security, supporting, application, and device services. Development tools: edgex-ui edgex-cli Application services: edgex-app-service-configurable Device services: edgex-device-camera edgex-device-modbus edgex-device-mqtt edgex-device-rest edgex-device-snmp edgex-device-grove Other EdgeX snaps do exist on the public Snap Store ( search by keyword ) or private stores under brand accounts.","title":"EdgeX Snaps"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#installing-the-edgexfoundry-snap","text":"This is the main platform snap which contains all reference core services along with several other security, supporting, application, and device services. The Snap Store allows access to multiple versions of the EdgeX Foundry snap using channels . If not specified, snaps are installed from the default latest/stable channel. You can see the current snap channels available for your machine's architecture by running the command: snap info edgexfoundry Install a specific version of the snap by setting the --channel flag. 
For example, to install the Jakarta (2.1) release: sudo snap install edgexfoundry --channel = 2 .1 To install the latest beta: sudo snap install edgexfoundry --channel = latest/beta # or using the shorthand sudo snap install edgexfoundry --beta Replace beta with edge to get the latest nightly build! Upon installation, the following internal EdgeX services are automatically started: consul vault redis kong postgres core-data core-command core-metadata security-services (see Security Services section below) The following services are disabled by default: app-service-configurable (required for eKuiper) device-virtual kuiper support-notifications support-scheduler sys-mgmt-agent Any disabled services can be enabled and started up using snap set : sudo snap set edgexfoundry support-notifications = on To turn a service off (thereby disabling and immediately stopping it) set the service to off: sudo snap set edgexfoundry support-notifications = off All services are installed on the system as systemd units which, if enabled, will automatically start running when the system boots or reboots.","title":"Installing the edgexfoundry snap"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#configuring-individual-services","text":"This snap supports configuration overrides via snap configure hooks which generate service-specific .env files which are used to provide a custom environment to the service, overriding the default configuration provided by the service's configuration.toml file. If a configuration override is made after a service has already started, then the service must be restarted via command-line (e.g. snap restart edgexfoundry. ), or snapd's REST API . If the overrides are provided via the snap configuration defaults capability of a gadget snap, the overrides will be picked up when the services are first started. The following syntax is used to specify service-specific configuration overrides for the edgexfoundry snap: env... 
For instance, to set up an override of core data's port use: sudo snap set edgexfoundry env.core-data.service.port = 2112 And restart the service: sudo snap restart edgexfoundry.core-data Note At this time changes to configuration values in the [Writable] section are not supported. For details on the mapping of configuration options to Config options, please refer to Service Environment Configuration Overrides .","title":"Configuring individual services"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#viewing-logs","text":"To view the logs for all services in an EdgeX snap use the snap logs command: sudo snap logs edgexfoundry Individual service logs may be viewed by specifying the service name: sudo snap logs edgexfoundry.consul Or by using the systemd unit name and journalctl : journalctl -u snap.edgexfoundry.consul These techniques can be used with any snap, including application snaps and device service snaps.","title":"Viewing logs"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#security-services","text":"Currently, the EdgeX snap has security (Secret Store and API Gateway) enabled by default. The security services constitute the following components: kong-daemon (API Gateway a.k.a. Reverse Proxy) postgres (kong's database) vault (Secret Store) Oneshot services perform the necessary security setup and then stop; when listed using snap services , they show up as enabled/inactive : security-proxy-setup (kong setup) security-secretstore-setup (vault setup) security-bootstrapper-redis (secure redis setup) security-consul-bootstrapper (secure consul setup) Vault is known within EdgeX as the Secret Store, while Kong+PostgreSQL are used to provide the EdgeX API Gateway. For more details please refer to the snap's Secret Store and API Gateway documentation.","title":"Security services"},{"location":"getting-started/Ch-GettingStartedUsers/","text":"Getting Started as a User This section provides instructions for Users to get EdgeX up and running. 
If you are a Developer, you should read Getting Started as a Developer . EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. You can download EdgeX micro service source code and build your own micro services. However, if you do not have a need to change or add to EdgeX, then you do not need to download source code. Instead, you can download and run the pre-built EdgeX micro service artifacts. The EdgeX community builds and creates Docker images as well as Snap packages with each release. The community also provides the latest unstable builds (prior to releases). Please continue by referring to: Getting Started using Docker Getting Started using Snaps","title":"Getting Started as a User"},{"location":"getting-started/Ch-GettingStartedUsers/#getting-started-as-a-user","text":"This section provides instructions for Users to get EdgeX up and running. If you are a Developer, you should read Getting Started as a Developer . EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. You can download EdgeX micro service source code and build your own micro services. However, if you do not have a need to change or add to EdgeX, then you do not need to download source code. Instead, you can download and run the pre-built EdgeX micro service artifacts. The EdgeX community builds and creates Docker images as well as Snap packages with each release. The community also provides the latest unstable builds (prior to releases). Please continue by referring to: Getting Started using Docker Getting Started using Snaps","title":"Getting Started as a User"},{"location":"getting-started/Ch-GettingStartedUsersNexus/","text":"Getting Docker Images from EdgeX Nexus Repository Released EdgeX Docker container images are available from Docker Hub . Please refer to the Getting Started using Docker for instructions related to stable releases. 
In some cases, it may be necessary to get your EdgeX container images from the Nexus repository. The Linux Foundation manages the Nexus repository for the project. Warning Containers used from Nexus are considered \"work in progress\". There is no guarantee that these containers will function properly on their own or with other containers from the current release. Nexus contains the EdgeX project staging and development container images. In other words, Nexus contains work-in-progress or pre-release images. These pre-release/work-in-progress Docker images are built nightly and made available at the following Nexus location: nexus3.edgexfoundry.org:10004 Rationale To Use Nexus Images Reasons you might want to use container images from Nexus include: The container is not available from Docker Hub (or Docker Hub is down temporarily) You need the latest development container image (the work in progress) You are working in a Windows or non-Linux environment and you are unable to build a container without some issues. A set of Docker Compose files has been created to allow you to get and use the latest EdgeX service images from Nexus. Find these Nexus \"Nightly Build\" Compose files in the main branch of the edgex-compose repository in GitHub. The EdgeX development team provides these Docker Compose files. As with the EdgeX release Compose files, you will find several different Docker Compose files that allow you to get the type of EdgeX instance setup based on: your hardware (x86 or ARM) your desire to have security services on or off your desire to run with the EdgeX GUI included Warning The \"Nightly Build\" images are provided as-is and may not always function properly or with other EdgeX services. Use with caution and typically only if you are a developer/contributor to EdgeX. These images represent the latest development work and may not have been thoroughly tested or integrated. 
Using Nexus Images The operations to pull the images and run the Nexus Repository containers are the same as when using EdgeX images from Docker Hub (see Getting Started using Docker ). To get container images from the Nexus Repository, in a command terminal, change directories to the location of your downloaded Nexus Docker Compose yaml. Rename the file to docker-compose.yml. Then run the following command in the terminal to pull (fetch) and then start the EdgeX Nexus-image containers. docker-compose up -d Using a Single Nexus Image In some cases, you may only need to use a single image from Nexus while other EdgeX services are created from the Docker Hub images. In this case, you can simply replace the image location for the selected image in your original Docker Compose file. The address of Nexus is nexus3.edgexfoundry.org at port 10004 . So, if you wished to use the EdgeX core data image from Nexus, you would replace the name and location of the core data image edgexfoundry/core-data:2.0.0 with nexus3.edgexfoundry.org:10004/core-data:latest in the Compose file. Note The example above replaces the Ireland core data service from Docker Hub with the latest core data image in Nexus.","title":"Getting Docker Images from EdgeX Nexus Repository"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#getting-docker-images-from-edgex-nexus-repository","text":"Released EdgeX Docker container images are available from Docker Hub . Please refer to the Getting Started using Docker for instructions related to stable releases. In some cases, it may be necessary to get your EdgeX container images from the Nexus repository. The Linux Foundation manages the Nexus repository for the project. Warning Containers used from Nexus are considered \"work in progress\". There is no guarantee that these containers will function properly or function properly with other containers from the current release. Nexus contains the EdgeX project staging and development container images. 
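The single-image substitution described above amounts to a one-line rewrite of the Compose file. The helper below is a hypothetical sketch, assuming a plain string replacement is sufficient for your file:

```python
# Sketch (assumption: a plain textual replacement is enough for your
# compose file): swap one Docker Hub image reference for its Nexus
# nightly equivalent at nexus3.edgexfoundry.org:10004.
NEXUS = "nexus3.edgexfoundry.org:10004"

def use_nexus_image(compose_text: str, service: str, hub_ref: str) -> str:
    return compose_text.replace(hub_ref, f"{NEXUS}/{service}:latest")

line = "image: edgexfoundry/core-data:2.0.0"
print(use_nexus_image(line, "core-data", "edgexfoundry/core-data:2.0.0"))
# image: nexus3.edgexfoundry.org:10004/core-data:latest
```

This reproduces the core data example from the text; for bulk edits a YAML-aware tool would be safer than string replacement.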
In other words, Nexus contains work-in-progress or pre-release images. These pre-release/work-in-progress Docker images are built nightly and made available at the following Nexus location: nexus3.edgexfoundry.org:10004","title":"Getting Docker Images from EdgeX Nexus Repository"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#rationale-to-use-nexus-images","text":"Reasons you might want to use container images from Nexus include: The container is not available from Docker Hub (or Docker Hub is down temporarily) You need the latest development container image (the work in progress) You are working in a Windows or non-Linux environment and you are unable to build a container without some issues. A set of Docker Compose files has been created to allow you to get and use the latest EdgeX service images from Nexus. Find these Nexus \"Nightly Build\" Compose files in the main branch of the edgex-compose repository in GitHub. The EdgeX development team provides these Docker Compose files. As with the EdgeX release Compose files, you will find several different Docker Compose files that allow you to get the type of EdgeX instance setup based on: your hardware (x86 or ARM) your desire to have security services on or off your desire to run with the EdgeX GUI included Warning The \"Nightly Build\" images are provided as-is and may not always function properly or with other EdgeX services. Use with caution and typically only if you are a developer/contributor to EdgeX. These images represent the latest development work and may not have been thoroughly tested or integrated.","title":"Rationale To Use Nexus Images"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#using-nexus-images","text":"The operations to pull the images and run the Nexus Repository containers are the same as when using EdgeX images from Docker Hub (see Getting Started using Docker ). 
To get container images from the Nexus Repository, in a command terminal, change directories to the location of your downloaded Nexus Docker Compose yaml. Rename the file to docker-compose.yml. Then run the following command in the terminal to pull (fetch) and then start the EdgeX Nexus-image containers. docker-compose up -d","title":"Using Nexus Images"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#using-a-single-nexus-image","text":"In some cases, you may only need to use a single image from Nexus while other EdgeX services are created from the Docker Hub images. In this case, you can simply replace the image location for the selected image in your original Docker Compose file. The address of Nexus is nexus3.edgexfoundry.org at port 10004 . So, if you wished to use the EdgeX core data image from Nexus, you would replace the name and location of the core data image edgexfoundry/core-data:2.0.0 with nexus3.edgexfoundry.org:10004/core-data:latest in the Compose file. Note The example above replaces the Ireland core data service from Docker Hub with the latest core data image in Nexus.","title":"Using a Single Nexus Image"},{"location":"getting-started/quick-start/","text":"Quick Start This guide will get EdgeX up and running on your machine in as little as 5 minutes using Docker containers. We will skip over lengthy descriptions for now. The goal here is to get you a working IoT Edge stack, from device to cloud, as simply as possible. When you need more detailed instructions or a breakdown of some of the commands you see in this quick start, see either the Getting Started as a User or Getting Started as a Developer guides. Setup The fastest way to start running EdgeX is by using our pre-built Docker images. To use them you'll need to install the following: Docker https://docs.docker.com/install/ Docker Compose https://docs.docker.com/compose/install/ Running EdgeX Info Jakarta (v 2.1) is the latest version of EdgeX and used by example in this guide. 
Once you have Docker and Docker Compose installed, you need to: download / save the latest docker-compose file issue command to download and run the EdgeX Foundry Docker images from Docker Hub This can be accomplished with a single command as shown below (please note the tabs for x86 vs ARM architectures). x86 curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/jakarta/docker-compose-no-secty.yml -o docker-compose.yml; docker-compose up -d ARM curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/Jakarta/docker-compose-no-secty-arm64.yml -o docker-compose.yml; docker-compose up -d Verify that the EdgeX containers have started: docker-compose ps If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above. Connected Devices EdgeX Foundry provides a Virtual device service which is useful for testing and development. It simulates a number of devices , each randomly generating data of various types and within configurable parameters. For example, the Random-Integer-Device will generate random integers. The Virtual Device (also known as Device Virtual) service is already a service pulled and running as part of the default EdgeX configuration. You can verify that Virtual Device readings are already being sent by querying the EdgeX core data service for the event records sent for Random-Integer-Device: curl http://localhost:59880/api/v2/event/device/name/Random-Integer-Device Verify the virtual device service is operating correctly by requesting the last event records received by core data for the Random-Integer-Device. Note By default, the maximum number of events returned will be 20 (the default limit). You can pass a limit parameter to get more or less event records. curl http://localhost:59880/api/v2/event/device/name/Random-Integer-Device?limit=50 Controlling the Device Reading data from devices is only part of what EdgeX is capable of. 
You can also use it to control your devices - this is termed 'actuating' the device. When a device registers with the EdgeX services, it provides a Device Profile that describes both the data readings available from that device, and also the commands that control it. When our Virtual Device service registered the device Random-Integer-Device , it used a profile to also define commands that allow you to tell the service not to generate random integers, but to always return a value you set. You won't call commands on devices directly; instead, you use the EdgeX Foundry Command Service to do that. The first step is to check what commands are available to call by asking the Command service about your device: curl http://localhost:59882/api/v2/device/name/Random-Integer-Device This will return a lot of JSON, because there are a number of commands you can call on this device, but the commands we're going to use in this guide are Int16 (the command to get the current integer 16 value) and WriteInt16Value (the command to disable the generation of the random integer 16 number and specify the integer value to return). 
Look for the Int16 and WriteInt16Value commands like those shown in the JSON as below: { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"deviceCoreCommand\" : { \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"coreCommands\" : [ { \"name\" : \"WriteInt16Value\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Integer-Device/WriteInt16Value\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Int16\" , \"valueType\" : \"Int16\" }, { \"resourceName\" : \"EnableRandomization_Int16\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"Int16\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Integer-Device/Int16\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Int16\" , \"valueType\" : \"Int16\" } ] } ... ] } } You'll notice that the commands have get or set (or both) options. A get call will return a random number (integer 16), and is what is being called automatically to send data into the rest of EdgeX (specifically core data). You can also call get manually using the URL provided (with no additional parameters needed): curl http://localhost:59882/api/v2/device/name/Random-Integer-Device/Int16 Warning Notice that localhost replaces edgex-core-command here. That's because the EdgeX Foundry services are running in Docker. Docker recognizes the internal hostname edgex-core-command , but when calling the service from outside of Docker, you have to use localhost to reach it. 
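The command discovery response above can also be walked programmatically. A minimal sketch follows; the sample JSON is trimmed to the two commands used in this guide, and the loop is an illustrative assumption, not an EdgeX client API:

```python
import json

# Sample trimmed from the core command service response shown above;
# the "parameters" arrays are omitted for brevity.
response = json.loads("""
{
  "apiVersion": "v2",
  "statusCode": 200,
  "deviceCoreCommand": {
    "deviceName": "Random-Integer-Device",
    "coreCommands": [
      {"name": "WriteInt16Value", "set": true,
       "path": "/api/v2/device/name/Random-Integer-Device/WriteInt16Value",
       "url": "http://edgex-core-command:59882"},
      {"name": "Int16", "get": true, "set": true,
       "path": "/api/v2/device/name/Random-Integer-Device/Int16",
       "url": "http://edgex-core-command:59882"}
    ]
  }
}
""")

for cmd in response["deviceCoreCommand"]["coreCommands"]:
    ops = [op for op in ("get", "set") if cmd.get(op)]
    # When calling from outside Docker, substitute localhost for the
    # internal hostname edgex-core-command.
    url = cmd["url"].replace("edgex-core-command", "localhost") + cmd["path"]
    print(cmd["name"], ops, url)
```

This yields, for each command, its name, the operations it supports, and a URL callable from the host.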
This command will return a JSON result that looks like this: { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"event\" : { \"apiVersion\" : \"v2\" , \"id\" : \"6d829637-730c-4b70-9208-dc179070003f\" , \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"sourceName\" : \"Int16\" , \"origin\" : 1625605672073875500 , \"readings\" : [ { \"id\" : \"545b7add-683b-4745-84f1-d859f3d839e0\" , \"origin\" : 1625605672073875500 , \"deviceName\" : \"Random-Integer-Device\" , \"resourceName\" : \"Int16\" , \"profileName\" : \"Random-Integer-Device\" , \"valueType\" : \"Int16\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"value\" : \"-8146\" } ] } } A call to GET of the Random-Integer-Device's Int16 operation through the command service results in the next random value produced by the device in JSON format. The default range for this reading is -32,768 to 32,767. In the example above, a value of -8146 was returned as the reading value. With the service set up to randomly return values, the value returned will be different each time the Int16 command is sent. However, we can use the WriteInt16Value command to disable random values from being returned and instead specify a value to return. Use the curl command below to call the set command to disable random values and return the value 42 each time. curl -X PUT -d '{\"Int16\":\"42\", \"EnableRandomization_Int16\":\"false\"}' http://localhost:59882/api/v2/device/name/Random-Integer-Device/WriteInt16Value Warning Again, also notice that localhost replaces edgex-core-command . If successful, the service will confirm your setting of the value to be returned with a 200 status code. A call to the device's SET command through the command service will return the API version and a status code (200 for success). Now every time we call get on the Int16 command, the returned value will be 42 . 
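The event document shown above can be unpacked in a few lines. A sketch, assuming the readings-array shape in the example; note that reading values arrive as strings, so an Int16 reading must be converted explicitly:

```python
import json

# The event document from the curl call above, trimmed to the fields
# used here.
event_doc = json.loads("""
{
  "apiVersion": "v2",
  "statusCode": 200,
  "event": {
    "deviceName": "Random-Integer-Device",
    "sourceName": "Int16",
    "readings": [
      {"resourceName": "Int16", "valueType": "Int16", "value": "-8146"}
    ]
  }
}
""")

# Reading values are transported as strings, so numeric types must be
# converted explicitly.
values = {r["resourceName"]: int(r["value"])
          for r in event_doc["event"]["readings"]}
print(values)  # {'Int16': -8146}
```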
A call to GET of the Random-Integer-Device's Int16 operation after setting the Int16 value to 42 and disabling randomization will always return a value of 42. Exporting Data EdgeX provides exporters (called application services) for a variety of cloud services and applications. To keep this guide simple, we're going to use the community-provided 'application service configurable' to send the EdgeX data to a public MQTT broker hosted by HiveMQ. You can then watch for the EdgeX event data via the HiveMQ-provided MQTT browser client. First add the following application service to your docker-compose.yml file right after the 'app-service-rules' service (the first service in the file). Spacing is important in YAML, so make sure to copy and paste it correctly. app-service-mqtt : container_name : edgex-app-mqtt depends_on : - consul - data environment : CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis EDGEX_PROFILE : mqtt-export EDGEX_SECURITY_SECRET_STORE : \"false\" MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul SERVICE_HOST : edgex-app-mqtt TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS : tcp://broker.mqttdashboard.com:1883 WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_TOPIC : EdgeXEvents hostname : edgex-app-mqtt image : edgexfoundry/app-service-configurable:2.0.0 networks : edgex-network : {} ports : - 127.0.0.1:59702:59702/tcp read_only : true security_opt : - no-new-privileges:true user : 2002:2001 Note This adds the application service configurable to your EdgeX system. 
The application service configurable allows you to configure (versus program) new exports - in this case exporting the EdgeX sensor data to the HiveMQ broker at tcp://broker.mqttdashboard.com:1883 . You will be publishing to the EdgeXEvents topic. For convenience, see documentation on the EdgeX Compose Builder to create custom Docker Compose files. Save the compose file and then execute another compose up command to have Docker Compose pull and start the configurable application service. docker-compose up -d You can connect to this broker with any MQTT client to watch the sent data. HiveMQ provides a web-based client that you can use. Use a browser to go to the client's URL. Once there, hit the Connect button to connect to the HiveMQ public broker. Using the HiveMQ provided client tool, connect to the same public HiveMQ broker your configurable application service is sending EdgeX data to. Then, use the Subscriptions area to subscribe to the \"EdgeXEvents\" topic. You must subscribe to the same topic - EdgeXEvents - to see the EdgeX data sent by the configurable application service. You will begin seeing your random number readings appear in the Messages area on the screen. Once subscribed, the EdgeX event data will begin to appear in the Messages area on the browser screen. Next Steps Congratulations! You now have a full EdgeX deployment reading data from a (virtual) device and publishing it to an MQTT broker in the cloud, and you were able to control your device through commands into EdgeX. It's time to continue your journey by reading the Introduction to EdgeX Foundry, what it is and how it's built. From there you can take the Walkthrough to learn how the micro services work together to control devices and read data from them as you just did.","title":"Quick Start"},{"location":"getting-started/quick-start/#quick-start","text":"This guide will get EdgeX up and running on your machine in as little as 5 minutes using Docker containers. 
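The environment overrides in the Compose fragment above follow a simple convention: each segment of the configuration path is upper-cased and the segments are joined with underscores. A hypothetical sketch (`env_override` is not an EdgeX API, and it assumes no key segment itself contains an underscore, which holds for the overrides used here):

```python
# Hypothetical helper illustrating the override naming convention; it
# is not an EdgeX API. Each configuration path segment is upper-cased
# and joined with underscores.
def env_override(*path: str) -> str:
    return "_".join(segment.upper() for segment in path)

key = env_override("Writable", "Pipeline", "Functions", "MQTTExport",
                   "Parameters", "BrokerAddress")
print(key)
# WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS
```

This reproduces the broker-address override key used in the Compose fragment above.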
We will skip over lengthy descriptions for now. The goal here is to get you a working IoT Edge stack, from device to cloud, as simply as possible. When you need more detailed instructions or a breakdown of some of the commands you see in this quick start, see either the Getting Started as a User or Getting Started as a Developer guides.","title":"Quick Start"},{"location":"getting-started/quick-start/#setup","text":"The fastest way to start running EdgeX is by using our pre-built Docker images. To use them you'll need to install the following: Docker https://docs.docker.com/install/ Docker Compose https://docs.docker.com/compose/install/","title":"Setup"},{"location":"getting-started/quick-start/#running-edgex","text":"Info Jakarta (v 2.1) is the latest version of EdgeX and used by example in this guide. Once you have Docker and Docker Compose installed, you need to: download / save the latest docker-compose file issue command to download and run the EdgeX Foundry Docker images from Docker Hub This can be accomplished with a single command as shown below (please note the tabs for x86 vs ARM architectures). x86 curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/jakarta/docker-compose-no-secty.yml -o docker-compose.yml; docker-compose up -d ARM curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/Jakarta/docker-compose-no-secty-arm64.yml -o docker-compose.yml; docker-compose up -d Verify that the EdgeX containers have started: docker-compose ps If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above.","title":"Running EdgeX"},{"location":"getting-started/quick-start/#connected-devices","text":"EdgeX Foundry provides a Virtual device service which is useful for testing and development. It simulates a number of devices , each randomly generating data of various types and within configurable parameters. 
For example, the Random-Integer-Device will generate random integers. The Virtual Device service (also known as Device Virtual) is already pulled and running as part of the default EdgeX configuration. You can verify that Virtual Device readings are already being sent by querying the EdgeX core data service for the event records sent for Random-Integer-Device: curl http://localhost:59880/api/v2/event/device/name/Random-Integer-Device Verify the virtual device service is operating correctly by requesting the last event records received by core data for the Random-Integer-Device. Note By default, the maximum number of events returned will be 20 (the default limit). You can pass a limit parameter to get more or fewer event records. curl http://localhost:59880/api/v2/event/device/name/Random-Integer-Device?limit=50","title":"Connected Devices"},{"location":"getting-started/quick-start/#controlling-the-device","text":"Reading data from devices is only part of what EdgeX is capable of. You can also use it to control your devices - this is termed 'actuating' the device. When a device registers with the EdgeX services, it provides a Device Profile that describes both the data readings available from that device, and also the commands that control it. When our Virtual Device service registered the device Random-Integer-Device , it used a profile to also define commands that allow you to tell the service not to generate random integers, but to always return a value you set. You won't call commands on devices directly; instead, you use the EdgeX Foundry Command Service to do that.
The first step is to check what commands are available to call by asking the Command service about your device: curl http://localhost:59882/api/v2/device/name/Random-Integer-Device This will return a lot of JSON, because there are a number of commands you can call on this device, but the commands we're going to use in this guide are Int16 (the command to get the current integer 16 value) and WriteInt16Value (the command to disable the generation of the random integer 16 number and specify the integer value to return). Look for the Int16 and WriteInt16Value commands, as shown in the JSON below: { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"deviceCoreCommand\" : { \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"coreCommands\" : [ { \"name\" : \"WriteInt16Value\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Integer-Device/WriteInt16Value\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Int16\" , \"valueType\" : \"Int16\" }, { \"resourceName\" : \"EnableRandomization_Int16\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"Int16\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Integer-Device/Int16\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Int16\" , \"valueType\" : \"Int16\" } ] } ... ] } } You'll notice that the commands have get or set (or both) options. A get call will return a random number (integer 16), and is what is being called automatically to send data into the rest of EdgeX (specifically core data). You can also call get manually using the URL provided (with no additional parameters needed): curl http://localhost:59882/api/v2/device/name/Random-Integer-Device/Int16 Warning Notice that localhost replaces edgex-core-command here. That's because the EdgeX Foundry services are running in Docker.
Docker recognizes the internal hostname edgex-core-command , but when calling the service from outside of Docker, you have to use localhost to reach it. This command will return a JSON result that looks like this: { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"event\" : { \"apiVersion\" : \"v2\" , \"id\" : \"6d829637-730c-4b70-9208-dc179070003f\" , \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"sourceName\" : \"Int16\" , \"origin\" : 1625605672073875500 , \"readings\" : [ { \"id\" : \"545b7add-683b-4745-84f1-d859f3d839e0\" , \"origin\" : 1625605672073875500 , \"deviceName\" : \"Random-Integer-Device\" , \"resourceName\" : \"Int16\" , \"profileName\" : \"Random-Integer-Device\" , \"valueType\" : \"Int16\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"value\" : \"-8146\" } ] } } A call to GET of the Random-Integer-Device's Int16 operation through the command service results in the next random value produced by the device in JSON format. The default range for this reading is -32,768 to 32,767. In the example above, a value of -8146 was returned as the reading value. With the service set up to randomly return values, the value returned will be different each time the Int16 command is sent. However, we can use the WriteInt16Value command to disable random values from being returned and instead specify a value to return. Use the curl command below to call the set command to disable random values and return the value 42 each time. curl -X PUT -d '{\"Int16\":\"42\", \"EnableRandomization_Int16\":\"false\"}' http://localhost:59882/api/v2/device/name/Random-Integer-Device/WriteInt16Value Warning Again, notice that localhost replaces edgex-core-command . If successful, the service will confirm your setting of the value to be returned with a 200 status code. A call to the device's SET command through the command service will return the API version and a status code (200 for success).
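When scripting against these endpoints it is often handy to pull out just the reading value rather than the whole event. A minimal sketch using jq (assumed to be installed), applied here to an abridged copy of the response shown above; with a live stack you could pipe the output of the Int16 GET curl command straight into the same filter.

```shell
# Abridged copy of the command service response shown above, saved into a
# shell variable for illustration (assumption: with a running stack you would
# pipe `curl -s http://localhost:59882/api/v2/device/name/Random-Integer-Device/Int16` instead)
response='{"apiVersion":"v2","statusCode":200,"event":{"readings":[{"resourceName":"Int16","valueType":"Int16","value":"-8146"}]}}'

# Extract just the reading value from the event payload
echo "$response" | jq -r '.event.readings[0].value'
# prints -8146
```

The same `.event.readings[0].value` filter works on any single-reading event returned by the command service.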
Now every time we call get on the Int16 command, the returned value will be 42 . A call to GET of the Random-Integer-Device's Int16 operation after setting the Int16 value to 42 and disabling randomization will always return a value of 42.","title":"Controlling the Device"},{"location":"getting-started/quick-start/#exporting-data","text":"EdgeX provides exporters (called application services) for a variety of cloud services and applications. To keep this guide simple, we're going to use the community provided 'application service configurable' to send the EdgeX data to a public MQTT broker hosted by HiveMQ. You can then watch for the EdgeX event data via the HiveMQ-provided MQTT browser client. First add the following application service to your docker-compose.yml file right after the 'app-service-rules' service (the first service in the file). Spacing is important in YAML, so make sure to copy and paste it correctly. app-service-mqtt : container_name : edgex-app-mqtt depends_on : - consul - data environment : CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis EDGEX_PROFILE : mqtt-export EDGEX_SECURITY_SECRET_STORE : \"false\" MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul SERVICE_HOST : edgex-app-mqtt TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS : tcp://broker.mqttdashboard.com:1883 WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_TOPIC : EdgeXEvents hostname : edgex-app-mqtt image : edgexfoundry/app-service-configurable:2.0.0 networks : edgex-network : {} ports : - 127.0.0.1:59702:59702/tcp read_only : true security_opt : - no-new-privileges:true user : 2002:2001 Note
This adds the application service configurable to your EdgeX system. The application service configurable allows you to configure (versus program) new exports - in this case exporting the EdgeX sensor data to the HiveMQ broker at tcp://broker.mqttdashboard.com:1883 . You will be publishing to the EdgeXEvents topic. For convenience, see documentation on the EdgeX Compose Builder to create custom Docker Compose files. Save the compose file and then execute another compose up command to have Docker Compose pull and start the configurable application service. docker-compose up -d You can connect to this broker with any MQTT client to watch the sent data. HiveMQ provides a web-based client that you can use. Use a browser to go to the client's URL. Once there, hit the Connect button to connect to the HiveMQ public broker. Using the HiveMQ provided client tool, connect to the same public HiveMQ broker your configurable application service is sending EdgeX data to. Then, use the Subscriptions area to subscribe to the \"EdgeXEvents\" topic. You must subscribe to the same topic - EdgeXEvents - to see the EdgeX data sent by the configurable application service. You will begin seeing your random number readings appear in the Messages area on the screen. Once subscribed, the EdgeX event data will begin to appear in the Messages area on the browser screen.","title":"Exporting Data"},{"location":"getting-started/quick-start/#next-steps","text":"Congratulations! You now have a full EdgeX deployment reading data from a (virtual) device and publishing it to an MQTT broker in the cloud, and you were able to control your device through commands into EdgeX. It's time to continue your journey by reading the Introduction to EdgeX Foundry, what it is and how it's built. 
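As an alternative to the browser client used in the Exporting Data steps above, a command line MQTT client can watch the same EdgeXEvents topic. This is a sketch assuming the mosquitto clients package is installed (package name varies by distribution); any other MQTT client subscribing to the same broker and topic works equally well.

```shell
# Subscribe to the EdgeXEvents topic on the public HiveMQ broker and print a
# single message, waiting at most 15 seconds (assumes mosquitto_sub is installed)
if command -v mosquitto_sub >/dev/null 2>&1; then
  mosquitto_sub -h broker.mqttdashboard.com -p 1883 -t EdgeXEvents -C 1 -W 15 || true
else
  echo "mosquitto_sub not found - install the mosquitto clients package to try this"
fi
```

With the configurable application service exporting, each message printed is one EdgeX event in JSON form, like the event payloads shown earlier in this guide.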
From there you can take the Walkthrough to learn how the micro services work together to control devices and read data from them as you just did.","title":"Next Steps"},{"location":"getting-started/tools/Ch-CommandLineInterface/","text":"Command Line Interface (CLI) What is EdgeX CLI? EdgeX CLI is a command-line interface tool for developers, used for interacting with EdgeX Foundry microservices. Installing EdgeX CLI The client can be installed using a snap sudo snap install edgex-cli You can also download the appropriate binary for your operating system from GitHub . If you want to build EdgeX CLI from source, do the following: git clone http://github.com/edgexfoundry/edgex-cli.git cd edgex-cli make tidy make build ./bin/edgex-cli For more information, see the EdgeX CLI README . Features EdgeX CLI provides access to most of the core and support APIs. The commands map directly to the REST API structure. Running edgex-cli with no arguments shows a list of the available commands and information for each of them, including the name of the service implementing the command. Use the -h or --help flag to get more information about each command. 
$ edgex-cli EdgeX-CLI Usage: edgex-cli [command] Available Commands: command Read, write and list commands [Core Command] config Return the current configuration of all EdgeX core/support microservices device Add, remove, get, list and modify devices [Core Metadata] deviceprofile Add, remove, get and list device profiles [Core Metadata] deviceservice Add, remove, get, list and modify device services [Core Metadata] event Add, remove and list events help Help about any command interval Add, get and list intervals [Support Scheduler] intervalaction Get, list, update and remove interval actions [Support Scheduler] metrics Output the CPU/memory usage stats for all EdgeX core/support microservices notification Add, remove and list notifications [Support Notifications] ping Ping (health check) all EdgeX core/support microservices provisionwatcher Add, remove, get, list and modify provison watchers [Core Metadata] reading Count and list readings subscription Add, remove and list subscriptions [Support Notificationss] transmission Remove and list transmissions [Support Notifications] version Output the current version of EdgeX CLI and EdgeX microservices Flags: -h, --help help for edgex-cli Use \"edgex-cli [command] --help\" for more information about a command. Commands implemented by all microservices The ping , config , metrics and version work with more than one microservice. 
By default these commands will return values from all core and support services: $ edgex-cli metrics Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-metadata 13 1878936 38262 9445 47707 75318280 5967608 core-data 13 1716256 40200 8997 49197 75580424 5949504 core-command 13 1737288 31367 8582 39949 75318280 5380584 support-scheduler 10 2612296 20754 20224 40978 74728456 4146800 support-notifications 10 2714480 21199 20678 41877 74728456 4258640 To only return information for one service, specify the service to use: -c, --command use core-command service endpoint -d, --data use core-data service endpoint -m, --metadata use core-metadata service endpoint -n, --notifications use support-notifications service endpoint -s, --scheduler use support-scheduler service endpoint Example: $ edgex-cli metrics -d Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-data 14 1917712 870037 12258 882295 75580424 64148880 $ edgex-cli metrics -c Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-command 13 1618424 90890 8328 99218 75580424 22779448 $ edgex-cli metrics --metadata Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-metadata 12 1704256 39606 8870 48476 75318280 6139912 The -j/--json flag can be used with most of edgex-go commands to return the JSON output: $ edgex-cli metrics --metadata --json {\"apiVersion\":\"v2\",\"metrics\":{\"memAlloc\":1974544,\"memFrees\":39625,\"memLiveObjects\":9780,\"memMallocs\":49405,\"memSys\":75318280,\"memTotalAlloc\":6410200,\"cpuBusyAvg\":13}} This could then be formatted and filtered using jq : $ edgex-cli metrics --metadata --json | jq '.' 
{ \"apiVersion\": \"v2\", \"metrics\": { \"memAlloc\": 1684176, \"memFrees\": 41142, \"memLiveObjects\": 8679, \"memMallocs\": 49821, \"memSys\": 75318280, \"memTotalAlloc\": 6530824, \"cpuBusyAvg\": 12 } } Core-command service edgex-cli command list Return a list of all supported device commands, optionally filtered by device name. Example: $ edgex-cli command list Name Device Name Profile Name Methods URL BoolArray Random-Boolean-Device Random-Boolean-Device Get, Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/BoolArray WriteBoolValue Random-Boolean-Device Random-Boolean-Device Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue WriteBoolArrayValue Random-Boolean-Device Random-Boolean-Device Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolArrayValue edgex-cli command read Issue a read command to the specified device. Example: $ edgex-cli command read -c Int16 -d Random-Integer-Device -j | jq '.' { \"apiVersion\": \"v2\", \"statusCode\": 200, \"event\": { \"apiVersion\": \"v2\", \"id\": \"e19f417e-3130-485f-8212-64b593b899f9\", \"deviceName\": \"Random-Integer-Device\", \"profileName\": \"Random-Integer-Device\", \"sourceName\": \"Int16\", \"origin\": 1641484109458647300, \"readings\": [ { \"id\": \"dc1f212d-148a-457c-ab13-48aa0fa58dd1\", \"origin\": 1641484109458647300, \"deviceName\": \"Random-Integer-Device\", \"resourceName\": \"Int16\", \"profileName\": \"Random-Integer-Device\", \"valueType\": \"Int16\", \"binaryValue\": null, \"mediaType\": \"\", \"value\": \"587\" } ] } } edgex-cli command write Issue a write command to the specified device. 
Example using in-line request body: $ edgex-cli command write -d Random-Integer-Device -c Int8 -b \"{\\\"Int8\\\": \\\"99\\\"}\" $ edgex-cli command read -d Random-Integer-Device -c Int8 apiVersion: v2,statusCode: 200 Command Name Device Name Profile Name Value Type Value Int8 Random-Integer-Device Random-Integer-Device Int8 99 Example using a file containing the request: $ echo \"{ \\\"Int8\\\":\\\"88\\\" }\" > file.txt $ edgex-cli command write -d Random-Integer-Device -c Int8 -f file.txt apiVersion: v2,statusCode: 200 $ edgex-cli command read -d Random-Integer-Device -c Int8 Command Name Device Name Profile Name Value Type Value Int8 Random-Integer-Device Random-Integer-Device Int8 88 Core-metadata service edgex-cli deviceservice list List device services $ edgex-cli deviceservice list edgex-cli deviceservice add Add a device service $ edgex-cli deviceservice add -n TestDeviceService -b \"http://localhost:51234\" edgex-cli deviceservice name Shows information about a device service. Most edgex-cli commands support the -v/--verbose and -j/--json flags: $ edgex-cli deviceservice name -n TestDeviceService Name BaseAddress Description TestDeviceService http://localhost:51234 $ edgex-cli deviceservice name -n TestDeviceService -v Name BaseAddress Description AdminState Id Labels LastConnected LastReported Modified TestDeviceService http://localhost:51234 UNLOCKED 7f29ad45-65dc-46c0-a928-00147d328032 [] 0 0 10 Jan 22 17:26 GMT $ edgex-cli deviceservice name -n TestDeviceService -j | jq '.' 
{ \"apiVersion\": \"v2\", \"statusCode\": 200, \"service\": { \"created\": 1641835585465, \"modified\": 1641835585465, \"id\": \"7f29ad45-65dc-46c0-a928-00147d328032\", \"name\": \"TestDeviceService\", \"baseAddress\": \"http://localhost:51234\", \"adminState\": \"UNLOCKED\" } } edgex-cli deviceservice rm Remove a device service $ edgex-cli deviceservice rm -n TestDeviceService edgex-cli deviceservice update Update the device service, getting the ID using jq, and confirm that the labels were added $ edgex-cli deviceservice add -n TestDeviceService -b \"http://localhost:51234\" {{{v2} c2600ad2-6489-4c3f-9207-5bdffdb8d68f 201} 844473b1-551d-4545-9143-28cfdf68a539} $ ID=`edgex-cli deviceservice name -n TestDeviceService -j | jq -r '.service.id'` $ edgex-cli deviceservice update -n TestDeviceService -i $ID --labels \"label1,label2\" {{v2} 9f4a4758-48a1-43ce-a232-828f442c2e34 200} $ edgex-cli deviceservice name -n TestDeviceService -v Name BaseAddress Description AdminState Id Labels LastConnected LastReported Modified TestDeviceService http://localhost:51234 UNLOCKED 844473b1-551d-4545-9143-28cfdf68a539 [label1 label2] 0 0 28 Jan 22 12:00 GMT edgex-cli deviceprofile list List device profiles $ edgex-cli deviceprofile list edgex-cli deviceprofile add Add a device profile $ edgex-cli deviceprofile add -n TestProfile -r \"[{\\\"name\\\": \\\"SwitchButton\\\",\\\"description\\\": \\\"Switch On/Off.\\\",\\\"properties\\\": {\\\"valueType\\\": \\\"String\\\",\\\"readWrite\\\": \\\"RW\\\",\\\"defaultValue\\\": \\\"On\\\",\\\"units\\\": \\\"On/Off\\\" } }]\" -c \"[{\\\"name\\\": \\\"Switch\\\",\\\"readWrite\\\": \\\"RW\\\",\\\"resourceOperations\\\": [{\\\"deviceResource\\\": \\\"SwitchButton\\\",\\\"DefaultValue\\\": \\\"false\\\" }]} ]\" {{{v2} 65d083cc-b876-4744-af65-59a00c63fc25 201} 4c0af6b0-4e83-4f3c-a574-dcea5f42d3f0} edgex-cli deviceprofile name Show information about a specified device profile $ edgex-cli deviceprofile name -n TestProfile Name Description Manufacturer
Model Name TestProfile TestProfile edgex-cli deviceprofile rm Remove a device profile $ edgex-cli deviceprofile rm -n TestProfile edgex-cli device list List current devices $ edgex-cli device list Name Description ServiceName ProfileName Labels AutoEvents Random-Float-Device Example of Device Virtual device-virtual Random-Float-Device [device-virtual-example] [{30s false Float32} {30s false Float64}] Random-UnsignedInteger-Device Example of Device Virtual device-virtual Random-UnsignedInteger-Device [device-virtual-example] [{20s false Uint8} {20s false Uint16} {20s false Uint32} {20s false Uint64}] Random-Boolean-Device Example of Device Virtual device-virtual Random-Boolean-Device [device-virtual-example] [{10s false Bool}] TestDevice TestDeviceService TestProfile [] [] Random-Binary-Device Example of Device Virtual device-virtual Random-Binary-Device [device-virtual-example] [] Random-Integer-Device Example of Device Virtual device-virtual Random-Integer-Device [device-virtual-example] [{15s false Int8} {15s false Int16} {15s false Int32} {15s false Int64}] edgex-cli device add Add a new device. 
This needs a device service and device profile to be created first $ edgex-cli device add -n TestDevice -p TestProfile -s TestDeviceService --protocols \"{\\\"modbus-tcp\\\":{\\\"Address\\\": \\\"localhost\\\",\\\"Port\\\": \\\"1234\\\" }}\" {{{v2} e912aa16-af4a-491d-993b-b0aeb8cd9c67 201} ae0e8b95-52fc-4778-892d-ae7e1127ed39} edgex-cli device name Show information about a specified named device $ edgex-cli device name -n TestDevice Name Description ServiceName ProfileName Labels AutoEvents TestDevice TestDeviceService TestProfile [] [] edgex-cli device rm Remove a device edgex-cli device rm -n TestDevice edgex-cli device list edgex-cli device add -n TestDevice -p TestProfile -s TestDeviceService --protocols \"{\\\"modbus-tcp\\\":{\\\"Address\\\": \\\"localhost\\\",\\\"Port\\\": \\\"1234\\\" }}\" edgex-cli device list edgex-cli device update Update a device This example gets the ID of a device, updates it using that ID and then displays device information to confirm that the labels were added $ ID=`edgex-cli device name -n TestDevice -j | jq -r '.device.id'` $ edgex-cli device update -n TestDevice -i $ID --labels \"label1,label2\" {{v2} 73427492-1158-45b2-9a7c-491a474cecce 200} $ edgex-cli device name -n TestDevice Name Description ServiceName ProfileName Labels AutoEvents TestDevice TestDeviceService TestProfile [label1 label2] [] edgex-cli provisionwatcher add Add a new provision watcher $ edgex-cli provisionwatcher add -n TestWatcher --identifiers \"{\\\"address\\\":\\\"localhost\\\",\\\"port\\\":\\\"1234\\\"}\" -p TestProfile -s TestDeviceService {{{v2} 3f05f6e0-9d9b-4d96-96df-f394cc2ad6f4 201} ee76f4d8-46d4-454c-a4da-8ad9e06d8d7e} edgex-cli provisionwatcher list List provision watchers $ edgex-cli provisionwatcher list Name ServiceName ProfileName Labels Identifiers TestWatcher TestDeviceService TestProfile [] map[address:localhost port:1234] edgex-cli provisionwatcher name Show information about a specific named provision watcher $ edgex-cli provisionwatcher 
name -n TestWatcher Name ServiceName ProfileName Labels Identifiers TestWatcher TestDeviceService TestProfile [] map[address:localhost port:1234] edgex-cli provisionwatcher rm Remove a provision watcher $ edgex-cli provisionwatcher rm -n TestWatcher $ edgex-cli provisionwatcher list No provision watchers available edgex-cli provisionwatcher update Update a provision watcher This example gets the ID of a provision watcher, updates it using that ID and then displays information about it to confirm that the labels were added $ edgex-cli provisionwatcher add -n TestWatcher2 --identifiers \"{\\\"address\\\":\\\"localhost\\\",\\\"port\\\":\\\"1234\\\"}\" -p TestProfile -s TestDeviceService {{{v2} fb7b8bcf-8f58-477b-929e-8dac53cddc81 201} 7aadb7df-1ff1-4b3b-8986-b97e0ef53116} $ ID=`edgex-cli provisionwatcher name -n TestWatcher2 -j | jq -r '.provisionWatcher.id'` $ edgex-cli provisionwatcher update -n TestWatcher2 -i $ID --labels \"label1,label2\" {{v2} af1e70bf-4705-47f4-9046-c7b789799405 200} $ edgex-cli provisionwatcher name -n TestWatcher2 Name ServiceName ProfileName Labels Identifiers TestWatcher2 TestDeviceService TestProfile [label1 label2] map[address:localhost port:1234] Core-data service edgex-cli event add Create an event with a specified number of random readings $ edgex-cli event add -d Random-Integer-Device -p Random-Integer-Device -r 1 -s Int16 -t int16 Added event 75f06078-e8da-4671-8938-ab12ebb2c244 $ edgex-cli event list -v Origin Device Profile Source Id Versionable Readings 10 Jan 22 15:38 GMT Random-Integer-Device Random-Integer-Device Int16 75f06078-e8da-4671-8938-ab12ebb2c244 {v2} [{974a70fe-71ef-4a47-a008-c89f0e4e3bb6 1641829092129391876 Random-Integer-Device Int16 Random-Integer-Device Int16 {[] } {13342}}] edgex-cli event count Count the number of events in core data, optionally filtering by device name $ edgex-cli event count -d Random-Integer-Device Total Random-Integer-Device events: 54 edgex-cli event list List all events, optionally 
specifying a limit and offset $ edgex-cli event list To see two readings only, skipping the first 100 readings: $ edgex-cli reading list --limit 2 --offset 100 Origin Device ProfileName Value ValueType 28 Jan 22 12:55 GMT Random-Integer-Device Random-Integer-Device 22502 Int16 28 Jan 22 12:55 GMT Random-Integer-Device Random-Integer-Device 1878517239016780388 Int64 edgex-cli event rm Remove events, specifying either device name or maximum event age in milliseconds - edgex-cli event rm --device {devicename} removes all events for the specified device - edgex-cli event rm --age {ms} removes all events generated in the last {ms} milliseconds $ edgex-cli event rm -a 30000 $ edgex-cli event count Total events: 0 edgex-cli reading count Count the number of readings in core data, optionally filtering by device name $ edgex-cli reading count Total readings: 235 edgex-cli reading list List all readings, optionally specifying a limit and offset $ edgex-cli reading list Support-scheduler service edgex-cli interval add Add an interval $ edgex-cli interval add -n \"hourly\" -i \"1h\" {{{v2} c7c51f21-dab5-4307-a4c9-bc5d5f2194d9 201} 98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18} edgex-cli interval name Return an interval by name $ edgex-cli interval name -n \"hourly\" Name Interval Start End hourly 1h edgex-cli interval list List all intervals $ edgex-cli interval list -j | jq '.' 
{ \"apiVersion\": \"v2\", \"statusCode\": 200, \"intervals\": [ { \"created\": 1641830955058, \"modified\": 1641830955058, \"id\": \"98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18\", \"name\": \"hourly\", \"interval\": \"1h\" }, { \"created\": 1641830953884, \"modified\": 1641830953884, \"id\": \"507a2a9a-82eb-41ea-afa8-79a9b0033665\", \"name\": \"midnight\", \"start\": \"20180101T000000\", \"interval\": \"24h\" } ] } edgex-cli interval update Update an interval, specifying either ID or name $ edgex-cli interval update -n \"hourly\" -i \"1m\" {{v2} 08239cc4-d4d7-4ea2-9915-d91b9557c742 200} $ edgex-cli interval name -n \"hourly\" -v Id Name Interval Start End 98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18 hourly 1m edgex-cli interval rm Delete a named interval and associated interval actions $ edgex-cli interval rm -n \"hourly\" edgex-cli intervalaction add Add an interval action $ edgex-cli intervalaction add -n \"name01\" -i \"midnight\" -a \"{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"192.168.0.102\\\", \\\"port\\\": 8080, \\\"httpMethod\\\": \\\"GET\\\"}\" edgex-cli intervalaction name Return an interval action by name $ edgex-cli intervalaction name -n \"name01\" Name Interval Address Content ContentType name01 midnight {REST 192.168.0.102 8080 { GET} { 0 0 false false 0} {[]}} edgex-cli intervalaction list List all interval actions $ edgex-cli intervalaction list Name Interval Address Content ContentType name01 midnight {REST 192.168.0.102 8080 { GET} { 0 0 false false 0} {[]}} scrub-aged-events midnight {REST localhost 59880 {/api/v2/event/age/604800000000000 DELETE} { 0 0 false false 0} {[]}} edgex-cli intervalaction update Update an interval action, specifying either ID or name $ edgex-cli intervalaction update -n \"name01\" --admin-state \"LOCKED\" {{v2} afc7b08c-5dc6-4923-9786-30bfebc8a8b6 200} $ edgex-cli intervalaction name -n \"name01\" -j | jq '.action.adminState' \"LOCKED\" edgex-cli intervalaction rm Delete an interval action by name $ edgex-cli intervalaction rm -n 
\"name01\" Support-notifications service edgex-cli notification add Add a notification to be sent $ edgex-cli notification add -s \"sender01\" -c \"content\" --category \"category04\" --labels \"l3\" {{{v2} 13938e01-a560-47d8-bb50-060effdbe490 201} 6a1138c2-b58e-4696-afa7-2074e95165eb} edgex-cli notification list List notifications associated with a given label, category or time range $ edgex-cli notification list -c \"category04\" Category Content Description Labels Sender Severity Status category04 content [l3] sender01 NORMAL PROCESSED $ edgex-cli notification list --start \"01 jan 20 00:00 GMT\" --end \"01 dec 24 00:00 GMT\" Category Content Description Labels Sender Severity Status category04 content [l3] sender01 NORMAL PROCESSED edgex-cli notification rm Delete a notification and all of its associated transmissions $ ID=`edgex-cli notification list -c \"category04\" -v -j | jq -r '.notifications[0].id'` $ echo $ID 6a1138c2-b58e-4696-afa7-2074e95165eb $ edgex-cli notification rm -i $ID $ edgex-cli notification list -c \"category04\" No notifications available edgex-cli notification cleanup Delete all notifications and corresponding transmissions $ edgex-cli notification cleanup $ edgex-cli notification list --start \"01 jan 20 00:00 GMT\" --end \"01 dec 24 00:00 GMT\" No notifications available edgex-cli subscription add Add a new subscription $ edgex-cli subscription add -n \"name01\" --receiver \"receiver01\" -c \"[{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 7770, \\\"httpMethod\\\": \\\"POST\\\"}]\" --labels \"l1,l2,l3\" {{{v2} 2bbfdac0-d2e1-4f08-8344-392b8e8ddc5e 201} 1ec08af0-5767-4505-82f7-581fada6006b} $ edgex-cli subscription add -n \"name02\" --receiver \"receiver01\" -c \"[{\\\"type\\\": \\\"EMAIL\\\", \\\"recipients\\\": [\\\"123@gmail.com\\\"]}]\" --labels \"l1,l2,l3\" {{{v2} f6b417ca-740c-4dee-bc1e-c721c0de4051 201} 156fc2b9-de60-423b-9bff-5312d8452c48} edgex-cli subscription name Return a subscription by its 
unique name $ edgex-cli subscription name -n \"name01\" Name Description Channels Receiver Categories Labels name01 [{REST localhost 7770 { POST} { 0 0 false false 0} {[]}}] receiver01 [] [l1 l2 l3] edgex-cli subscription list List all subscriptions, optionally filtered by a given category, label or receiver $ edgex-cli subscription list --label \"l1\" Name Description Channels Receiver Categories Labels name02 [{EMAIL 0 { } { 0 0 false false 0} {[123@gmail.com]}}] receiver01 [] [l1 l2 l3] name01 [{REST localhost 7770 { POST} { 0 0 false false 0} {[]}}] receiver01 [] [l1 l2 l3] edgex-cli subscription rm Delete the named subscription $ edgex-cli subscription rm -n \"name01\" edgex-cli transmission list To create a transmission, first create a subscription and notifications: $ edgex-cli subscription add -n \"Test-Subscription\" --description \"Test data for subscription\" --categories \"health-check\" --labels \"simple\" --receiver \"tafuser\" --resend-limit 0 --admin-state \"UNLOCKED\" -c \"[{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 7770, \\\"httpMethod\\\": \\\"POST\\\"}]\" {{{v2} f281ec1a-876e-4a29-a14d-195b66d0506c 201} 3b489d23-b0c7-4791-b839-d9a578ebccb9} $ edgex-cli notification add -d \"Test data for notification 1\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} 8df79c7c-03fb-4626-b6e8-bf2d616fa327 201} 0be98b91-daf9-46e2-bcca-39f009d93866} $ edgex-cli notification add -d \"Test data for notification 2\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} ec0b2444-c8b0-45d0-bbd6-847dd007c2fd 201} a7c65d7d-0f9c-47e1-82c2-c8098c47c016} $ edgex-cli notification add -d \"Test data for notification 3\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" 
{{{v2} 45af7f94-c99e-4fb1-a632-fab5ff475be4 201} f982fc97-f53f-4154-bfce-3ef8666c3911} Then list the transmissions: $ edgex-cli transmission list SubscriptionName ResendCount Status Test-Subscription 0 FAILED Test-Subscription 0 FAILED Test-Subscription 0 FAILED edgex-cli transmission id Return a transmission by ID $ ID=`edgex-cli transmission list -j | jq -r '.transmissions[0].id'` $ edgex-cli transmission id -i $ID SubscriptionName ResendCount Status Test-Subscription 0 FAILED edgex-cli transmission rm Delete processed transmissions older than the specified age (in milliseconds) $ edgex-cli transmission rm -a 100","title":"Command Line Interface (CLI)"},{"location":"getting-started/tools/Ch-CommandLineInterface/#command-line-interface-cli","text":"","title":"Command Line Interface (CLI)"},{"location":"getting-started/tools/Ch-CommandLineInterface/#what-is-edgex-cli","text":"EdgeX CLI is a command-line interface tool for developers, used for interacting with EdgeX Foundry microservices.","title":"What is EdgeX CLI?"},{"location":"getting-started/tools/Ch-CommandLineInterface/#installing-edgex-cli","text":"The client can be installed using a snap sudo snap install edgex-cli You can also download the appropriate binary for your operating system from GitHub . If you want to build EdgeX CLI from source, do the following: git clone http://github.com/edgexfoundry/edgex-cli.git cd edgex-cli make tidy make build ./bin/edgex-cli For more information, see the EdgeX CLI README .","title":"Installing EdgeX CLI"},{"location":"getting-started/tools/Ch-CommandLineInterface/#features","text":"EdgeX CLI provides access to most of the core and support APIs. The commands map directly to the REST API structure. Running edgex-cli with no arguments shows a list of the available commands and information for each of them, including the name of the service implementing the command. Use the -h or --help flag to get more information about each command.
$ edgex-cli EdgeX-CLI Usage: edgex-cli [command] Available Commands: command Read, write and list commands [Core Command] config Return the current configuration of all EdgeX core/support microservices device Add, remove, get, list and modify devices [Core Metadata] deviceprofile Add, remove, get and list device profiles [Core Metadata] deviceservice Add, remove, get, list and modify device services [Core Metadata] event Add, remove and list events help Help about any command interval Add, get and list intervals [Support Scheduler] intervalaction Get, list, update and remove interval actions [Support Scheduler] metrics Output the CPU/memory usage stats for all EdgeX core/support microservices notification Add, remove and list notifications [Support Notifications] ping Ping (health check) all EdgeX core/support microservices provisionwatcher Add, remove, get, list and modify provision watchers [Core Metadata] reading Count and list readings subscription Add, remove and list subscriptions [Support Notifications] transmission Remove and list transmissions [Support Notifications] version Output the current version of EdgeX CLI and EdgeX microservices Flags: -h, --help help for edgex-cli Use \"edgex-cli [command] --help\" for more information about a command.","title":"Features"},{"location":"getting-started/tools/Ch-CommandLineInterface/#commands-implemented-by-all-microservices","text":"The ping , config , metrics and version commands work with more than one microservice. 
By default these commands will return values from all core and support services: $ edgex-cli metrics Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-metadata 13 1878936 38262 9445 47707 75318280 5967608 core-data 13 1716256 40200 8997 49197 75580424 5949504 core-command 13 1737288 31367 8582 39949 75318280 5380584 support-scheduler 10 2612296 20754 20224 40978 74728456 4146800 support-notifications 10 2714480 21199 20678 41877 74728456 4258640 To only return information for one service, specify the service to use: -c, --command use core-command service endpoint -d, --data use core-data service endpoint -m, --metadata use core-metadata service endpoint -n, --notifications use support-notifications service endpoint -s, --scheduler use support-scheduler service endpoint Example: $ edgex-cli metrics -d Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-data 14 1917712 870037 12258 882295 75580424 64148880 $ edgex-cli metrics -c Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-command 13 1618424 90890 8328 99218 75580424 22779448 $ edgex-cli metrics --metadata Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-metadata 12 1704256 39606 8870 48476 75318280 6139912 The -j/--json flag can be used with most of the edgex-go commands to return the JSON output: $ edgex-cli metrics --metadata --json {\"apiVersion\":\"v2\",\"metrics\":{\"memAlloc\":1974544,\"memFrees\":39625,\"memLiveObjects\":9780,\"memMallocs\":49405,\"memSys\":75318280,\"memTotalAlloc\":6410200,\"cpuBusyAvg\":13}} This could then be formatted and filtered using jq : $ edgex-cli metrics --metadata --json | jq '.' 
{ \"apiVersion\": \"v2\", \"metrics\": { \"memAlloc\": 1684176, \"memFrees\": 41142, \"memLiveObjects\": 8679, \"memMallocs\": 49821, \"memSys\": 75318280, \"memTotalAlloc\": 6530824, \"cpuBusyAvg\": 12 } }","title":"Commands implemented by all microservices"},{"location":"getting-started/tools/Ch-CommandLineInterface/#core-command-service","text":"","title":"Core-command service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-command-list","text":"Return a list of all supported device commands, optionally filtered by device name. Example: $ edgex-cli command list Name Device Name Profile Name Methods URL BoolArray Random-Boolean-Device Random-Boolean-Device Get, Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/BoolArray WriteBoolValue Random-Boolean-Device Random-Boolean-Device Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue WriteBoolArrayValue Random-Boolean-Device Random-Boolean-Device Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolArrayValue","title":"edgex-cli command list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-command-read","text":"Issue a read command to the specified device. Example: $ edgex-cli command read -c Int16 -d Random-Integer-Device -j | jq '.' 
{ \"apiVersion\": \"v2\", \"statusCode\": 200, \"event\": { \"apiVersion\": \"v2\", \"id\": \"e19f417e-3130-485f-8212-64b593b899f9\", \"deviceName\": \"Random-Integer-Device\", \"profileName\": \"Random-Integer-Device\", \"sourceName\": \"Int16\", \"origin\": 1641484109458647300, \"readings\": [ { \"id\": \"dc1f212d-148a-457c-ab13-48aa0fa58dd1\", \"origin\": 1641484109458647300, \"deviceName\": \"Random-Integer-Device\", \"resourceName\": \"Int16\", \"profileName\": \"Random-Integer-Device\", \"valueType\": \"Int16\", \"binaryValue\": null, \"mediaType\": \"\", \"value\": \"587\" } ] } }","title":"edgex-cli command read"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-command-write","text":"Issue a write command to the specified device. Example using in-line request body: $ edgex-cli command write -d Random-Integer-Device -c Int8 -b \"{\\\"Int8\\\": \\\"99\\\"}\" $ edgex-cli command read -d Random-Integer-Device -c Int8 apiVersion: v2,statusCode: 200 Command Name Device Name Profile Name Value Type Value Int8 Random-Integer-Device Random-Integer-Device Int8 99 Example using a file containing the request: $ echo \"{ \\\"Int8\\\":\\\"88\\\" }\" > file.txt $ edgex-cli command write -d Random-Integer-Device -c Int8 -f file.txt apiVersion: v2,statusCode: 200 $ edgex-cli command read -d Random-Integer-Device -c Int8 Command Name Device Name Profile Name Value Type Value Int8 Random-Integer-Device Random-Integer-Device Int8 88","title":"edgex-cli command write"},{"location":"getting-started/tools/Ch-CommandLineInterface/#core-metadata-service","text":"","title":"Core-metadata service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-list","text":"List device services $ edgex-cli deviceservice list","title":"edgex-cli deviceservice list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-add","text":"Add a device service $ edgex-cli deviceservice add -n TestDeviceService -b 
\"http://localhost:51234\"","title":"edgex-cli deviceservice add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-name","text":"Shows information about a device service. Most edgex-cli commands support the -v/--verbose and -j/--json flags: $ edgex-cli deviceservice name -n TestDeviceService Name BaseAddress Description TestDeviceService http://localhost:51234 $ edgex-cli deviceservice name -n TestDeviceService -v Name BaseAddress Description AdminState Id Labels LastConnected LastReported Modified TestDeviceService http://localhost:51234 UNLOCKED 7f29ad45-65dc-46c0-a928-00147d328032 [] 0 0 10 Jan 22 17:26 GMT $ edgex-cli deviceservice name -n TestDeviceService -j | jq '.' { \"apiVersion\": \"v2\", \"statusCode\": 200, \"service\": { \"created\": 1641835585465, \"modified\": 1641835585465, \"id\": \"7f29ad45-65dc-46c0-a928-00147d328032\", \"name\": \"TestDeviceService\", \"baseAddress\": \"http://localhost:51234\", \"adminState\": \"UNLOCKED\" } }","title":"edgex-cli deviceservice name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-rm","text":"Remove a device service $ edgex-cli deviceservice rm -n TestDeviceService","title":"edgex-cli deviceservice rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-update","text":"Update the device service, getting the ID using jq and confirm that the labels were added $ edgex-cli deviceservice add -n TestDeviceService -b \"http://localhost:51234\" {{{v2} c2600ad2-6489-4c3f-9207-5bdffdb8d68f 201} 844473b1-551d-4545-9143-28cfdf68a539} $ ID=`edgex-cli deviceservice name -n TestDeviceService -j | jq -r '.service.id'` $ edgex-cli deviceservice update -n TestDeviceService -i $ID --labels \"label1,label2\" {{v2} 9f4a4758-48a1-43ce-a232-828f442c2e34 200} $ edgex-cli deviceservice name -n TestDeviceService -v Name BaseAddress Description AdminState Id Labels LastConnected LastReported Modified TestDeviceService 
http://localhost:51234 UNLOCKED 844473b1-551d-4545-9143-28cfdf68a539 [label1 label2] 0 0 28 Jan 22 12:00 GMT","title":"edgex-cli deviceservice update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceprofile-list","text":"List device profiles $ edgex-cli deviceprofile list","title":"edgex-cli deviceprofile list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceprofile-add","text":"Add a device profile $ edgex-cli deviceprofile add -n TestProfile -r \"[{\\\"name\\\": \\\"SwitchButton\\\",\\\"description\\\": \\\"Switch On/Off.\\\",\\\"properties\\\": {\\\"valueType\\\": \\\"String\\\",\\\"readWrite\\\": \\\"RW\\\",\\\"defaultValue\\\": \\\"On\\\",\\\"units\\\": \\\"On/Off\\\" } }]\" -c \"[{\\\"name\\\": \\\"Switch\\\",\\\"readWrite\\\": \\\"RW\\\",\\\"resourceOperations\\\": [{\\\"deviceResource\\\": \\\"SwitchButton\\\",\\\"DefaultValue\\\": \\\"false\\\" }]} ]\" {{{v2} 65d083cc-b876-4744-af65-59a00c63fc25 201} 4c0af6b0-4e83-4f3c-a574-dcea5f42d3f0}","title":"edgex-cli deviceprofile add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceprofile-name","text":"Show information about a specified device profile $ edgex-cli deviceprofile name -n TestProfile Name Description Manufacturer Model Name TestProfile TestProfile","title":"edgex-cli deviceprofile name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceprofile-rm","text":"Remove a device profile $ edgex-cli deviceprofile rm -n TestProfile","title":"edgex-cli deviceprofile rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-list","text":"List current devices $ edgex-cli device list Name Description ServiceName ProfileName Labels AutoEvents Random-Float-Device Example of Device Virtual device-virtual Random-Float-Device [device-virtual-example] [{30s false Float32} {30s false Float64}] Random-UnsignedInteger-Device Example of Device Virtual device-virtual 
Random-UnsignedInteger-Device [device-virtual-example] [{20s false Uint8} {20s false Uint16} {20s false Uint32} {20s false Uint64}] Random-Boolean-Device Example of Device Virtual device-virtual Random-Boolean-Device [device-virtual-example] [{10s false Bool}] TestDevice TestDeviceService TestProfile [] [] Random-Binary-Device Example of Device Virtual device-virtual Random-Binary-Device [device-virtual-example] [] Random-Integer-Device Example of Device Virtual device-virtual Random-Integer-Device [device-virtual-example] [{15s false Int8} {15s false Int16} {15s false Int32} {15s false Int64}]","title":"edgex-cli device list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-add","text":"Add a new device. This needs a device service and device profile to be created first $ edgex-cli device add -n TestDevice -p TestProfile -s TestDeviceService --protocols \"{\\\"modbus-tcp\\\":{\\\"Address\\\": \\\"localhost\\\",\\\"Port\\\": \\\"1234\\\" }}\" {{{v2} e912aa16-af4a-491d-993b-b0aeb8cd9c67 201} ae0e8b95-52fc-4778-892d-ae7e1127ed39}","title":"edgex-cli device add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-name","text":"Show information about a specified named device $ edgex-cli device name -n TestDevice Name Description ServiceName ProfileName Labels AutoEvents TestDevice TestDeviceService TestProfile [] []","title":"edgex-cli device name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-rm","text":"Remove a device edgex-cli device rm -n TestDevice edgex-cli device list edgex-cli device add -n TestDevice -p TestProfile -s TestDeviceService --protocols \"{\\\"modbus-tcp\\\":{\\\"Address\\\": \\\"localhost\\\",\\\"Port\\\": \\\"1234\\\" }}\" edgex-cli device list","title":"edgex-cli device rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-update","text":"Update a device This example gets the ID of a device, updates it using that ID and then 
displays device information to confirm that the labels were added $ ID=`edgex-cli device name -n TestDevice -j | jq -r '.device.id'` $ edgex-cli device update -n TestDevice -i $ID --labels \"label1,label2\" {{v2} 73427492-1158-45b2-9a7c-491a474cecce 200} $ edgex-cli device name -n TestDevice Name Description ServiceName ProfileName Labels AutoEvents TestDevice TestDeviceService TestProfile [label1 label2] []","title":"edgex-cli device update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-add","text":"Add a new provision watcher $ edgex-cli provisionwatcher add -n TestWatcher --identifiers \"{\\\"address\\\":\\\"localhost\\\",\\\"port\\\":\\\"1234\\\"}\" -p TestProfile -s TestDeviceService {{{v2} 3f05f6e0-9d9b-4d96-96df-f394cc2ad6f4 201} ee76f4d8-46d4-454c-a4da-8ad9e06d8d7e}","title":"edgex-cli provisionwatcher add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-list","text":"List provision watchers $ edgex-cli provisionwatcher list Name ServiceName ProfileName Labels Identifiers TestWatcher TestDeviceService TestProfile [] map[address:localhost port:1234]","title":"edgex-cli provisionwatcher list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-name","text":"Show information about a specific named provision watcher $ edgex-cli provisionwatcher name -n TestWatcher Name ServiceName ProfileName Labels Identifiers TestWatcher TestDeviceService TestProfile [] map[address:localhost port:1234]","title":"edgex-cli provisionwatcher name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-rm","text":"Remove a provision watcher $ edgex-cli provisionwatcher rm -n TestWatcher $ edgex-cli provisionwatcher list No provision watchers available","title":"edgex-cli provisionwatcher rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-update","text":"Update a provision watcher This 
example gets the ID of a provision watcher, updates it using that ID and then displays information about it to confirm that the labels were added $ edgex-cli provisionwatcher add -n TestWatcher2 --identifiers \"{\\\"address\\\":\\\"localhost\\\",\\\"port\\\":\\\"1234\\\"}\" -p TestProfile -s TestDeviceService {{{v2} fb7b8bcf-8f58-477b-929e-8dac53cddc81 201} 7aadb7df-1ff1-4b3b-8986-b97e0ef53116} $ ID=`edgex-cli provisionwatcher name -n TestWatcher2 -j | jq -r '.provisionWatcher.id'` $ edgex-cli provisionwatcher update -n TestWatcher2 -i $ID --labels \"label1,label2\" {{v2} af1e70bf-4705-47f4-9046-c7b789799405 200} $ edgex-cli provisionwatcher name -n TestWatcher2 Name ServiceName ProfileName Labels Identifiers TestWatcher2 TestDeviceService TestProfile [label1 label2] map[address:localhost port:1234]","title":"edgex-cli provisionwatcher update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#core-data-service","text":"","title":"Core-data service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-event-add","text":"Create an event with a specified number of random readings $ edgex-cli event add -d Random-Integer-Device -p Random-Integer-Device -r 1 -s Int16 -t int16 Added event 75f06078-e8da-4671-8938-ab12ebb2c244 $ edgex-cli event list -v Origin Device Profile Source Id Versionable Readings 10 Jan 22 15:38 GMT Random-Integer-Device Random-Integer-Device Int16 75f06078-e8da-4671-8938-ab12ebb2c244 {v2} [{974a70fe-71ef-4a47-a008-c89f0e4e3bb6 1641829092129391876 Random-Integer-Device Int16 Random-Integer-Device Int16 {[] } {13342}}]","title":"edgex-cli event add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-event-count","text":"Count the number of events in core data, optionally filtering by device name $ edgex-cli event count -d Random-Integer-Device Total Random-Integer-Device events: 54","title":"edgex-cli event 
count"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-event-list","text":"List all events, optionally specifying a limit and offset $ edgex-cli event list To see two readings only, skipping the first 100 readings: $ edgex-cli reading list --limit 2 --offset 100 Origin Device ProfileName Value ValueType 28 Jan 22 12:55 GMT Random-Integer-Device Random-Integer-Device 22502 Int16 28 Jan 22 12:55 GMT Random-Integer-Device Random-Integer-Device 1878517239016780388 Int64","title":"edgex-cli event list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-event-rm","text":"Remove events, specifying either device name or maximum event age in milliseconds - edgex-cli event rm --device {devicename} removes all events for the specified device - edgex-cli event rm --age {ms} removes all events older than {ms} milliseconds $ edgex-cli event rm -a 30000 $ edgex-cli event count Total events: 0","title":"edgex-cli event rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-reading-count","text":"Count the number of readings in core data, optionally filtering by device name $ edgex-cli reading count Total readings: 235","title":"edgex-cli reading count"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-reading-list","text":"List all readings, optionally specifying a limit and offset $ edgex-cli reading list","title":"edgex-cli reading list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#support-scheduler-service","text":"","title":"Support-scheduler service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-add","text":"Add an interval $ edgex-cli interval add -n \"hourly\" -i \"1h\" {{{v2} c7c51f21-dab5-4307-a4c9-bc5d5f2194d9 201} 98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18}","title":"edgex-cli interval add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-name","text":"Return an interval by name $ edgex-cli interval 
name -n \"hourly\" Name Interval Start End hourly 1h","title":"edgex-cli interval name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-list","text":"List all intervals $ edgex-cli interval list -j | jq '.' { \"apiVersion\": \"v2\", \"statusCode\": 200, \"intervals\": [ { \"created\": 1641830955058, \"modified\": 1641830955058, \"id\": \"98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18\", \"name\": \"hourly\", \"interval\": \"1h\" }, { \"created\": 1641830953884, \"modified\": 1641830953884, \"id\": \"507a2a9a-82eb-41ea-afa8-79a9b0033665\", \"name\": \"midnight\", \"start\": \"20180101T000000\", \"interval\": \"24h\" } ] }","title":"edgex-cli interval list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-update","text":"Update an interval, specifying either ID or name $ edgex-cli interval update -n \"hourly\" -i \"1m\" {{v2} 08239cc4-d4d7-4ea2-9915-d91b9557c742 200} $ edgex-cli interval name -n \"hourly\" -v Id Name Interval Start End 98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18 hourly 1m","title":"edgex-cli interval update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-rm","text":"Delete a named interval and associated interval actions $ edgex-cli interval rm -n \"hourly\"","title":"edgex-cli interval rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-add","text":"Add an interval action $ edgex-cli intervalaction add -n \"name01\" -i \"midnight\" -a \"{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"192.168.0.102\\\", \\\"port\\\": 8080, \\\"httpMethod\\\": \\\"GET\\\"}\"","title":"edgex-cli intervalaction add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-name","text":"Return an interval action by name $ edgex-cli intervalaction name -n \"name01\" Name Interval Address Content ContentType name01 midnight {REST 192.168.0.102 8080 { GET} { 0 0 false false 0} {[]}}","title":"edgex-cli intervalaction 
name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-list","text":"List all interval actions $ edgex-cli intervalaction list Name Interval Address Content ContentType name01 midnight {REST 192.168.0.102 8080 { GET} { 0 0 false false 0} {[]}} scrub-aged-events midnight {REST localhost 59880 {/api/v2/event/age/604800000000000 DELETE} { 0 0 false false 0} {[]}}","title":"edgex-cli intervalaction list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-update","text":"Update an interval action, specifying either ID or name $ edgex-cli intervalaction update -n \"name01\" --admin-state \"LOCKED\" {{v2} afc7b08c-5dc6-4923-9786-30bfebc8a8b6 200} $ edgex-cli intervalaction name -n \"name01\" -j | jq '.action.adminState' \"LOCKED\"","title":"edgex-cli intervalaction update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-rm","text":"Delete an interval action by name $ edgex-cli intervalaction rm -n \"name01\"","title":"edgex-cli intervalaction rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#support-notifications-service","text":"","title":"Support-notifications service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-notification-add","text":"Add a notification to be sent $ edgex-cli notification add -s \"sender01\" -c \"content\" --category \"category04\" --labels \"l3\" {{{v2} 13938e01-a560-47d8-bb50-060effdbe490 201} 6a1138c2-b58e-4696-afa7-2074e95165eb}","title":"edgex-cli notification add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-notification-list","text":"List notifications associated with a given label, category or time range $ edgex-cli notification list -c \"category04\" Category Content Description Labels Sender Severity Status category04 content [l3] sender01 NORMAL PROCESSED $ edgex-cli notification list --start \"01 jan 20 00:00 GMT\" --end \"01 dec 24 00:00 GMT\" Category Content 
Description Labels Sender Severity Status category04 content [l3] sender01 NORMAL PROCESSED","title":"edgex-cli notification list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-notification-rm","text":"Delete a notification and all of its associated transmissions $ ID=`edgex-cli notification list -c \"category04\" -v -j | jq -r '.notifications[0].id'` $ echo $ID 6a1138c2-b58e-4696-afa7-2074e95165eb $ edgex-cli notification rm -i $ID $ edgex-cli notification list -c \"category04\" No notifications available","title":"edgex-cli notification rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-notification-cleanup","text":"Delete all notifications and corresponding transmissions $ edgex-cli notification cleanup $ edgex-cli notification list --start \"01 jan 20 00:00 GMT\" --end \"01 dec 24 00:00 GMT\" No notifications available","title":"edgex-cli notification cleanup"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-subscription-add","text":"Add a new subscription $ edgex-cli subscription add -n \"name01\" --receiver \"receiver01\" -c \"[{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 7770, \\\"httpMethod\\\": \\\"POST\\\"}]\" --labels \"l1,l2,l3\" {{{v2} 2bbfdac0-d2e1-4f08-8344-392b8e8ddc5e 201} 1ec08af0-5767-4505-82f7-581fada6006b} $ edgex-cli subscription add -n \"name02\" --receiver \"receiver01\" -c \"[{\\\"type\\\": \\\"EMAIL\\\", \\\"recipients\\\": [\\\"123@gmail.com\\\"]}]\" --labels \"l1,l2,l3\" {{{v2} f6b417ca-740c-4dee-bc1e-c721c0de4051 201} 156fc2b9-de60-423b-9bff-5312d8452c48}","title":"edgex-cli subscription add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-subscription-name","text":"Return a subscription by its unique name $ edgex-cli subscription name -n \"name01\" Name Description Channels Receiver Categories Labels name01 [{REST localhost 7770 { POST} { 0 0 false false 0} {[]}}] receiver01 [] [l1 l2 l3]","title":"edgex-cli 
subscription name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-subscription-list","text":"List all subscriptions, optionally filtered by a given category, label or receiver $ edgex-cli subscription list --label \"l1\" Name Description Channels Receiver Categories Labels name02 [{EMAIL 0 { } { 0 0 false false 0} {[123@gmail.com]}}] receiver01 [] [l1 l2 l3] name01 [{REST localhost 7770 { POST} { 0 0 false false 0} {[]}}] receiver01 [] [l1 l2 l3]","title":"edgex-cli subscription list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-subscription-rm","text":"Delete the named subscription $ edgex-cli subscription rm -n \"name01\"","title":"edgex-cli subscription rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-transmission-list","text":"To create a transmission, first create a subscription and notifications: $ edgex-cli subscription add -n \"Test-Subscription\" --description \"Test data for subscription\" --categories \"health-check\" --labels \"simple\" --receiver \"tafuser\" --resend-limit 0 --admin-state \"UNLOCKED\" -c \"[{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 7770, \\\"httpMethod\\\": \\\"POST\\\"}]\" {{{v2} f281ec1a-876e-4a29-a14d-195b66d0506c 201} 3b489d23-b0c7-4791-b839-d9a578ebccb9} $ edgex-cli notification add -d \"Test data for notification 1\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} 8df79c7c-03fb-4626-b6e8-bf2d616fa327 201} 0be98b91-daf9-46e2-bcca-39f009d93866} $ edgex-cli notification add -d \"Test data for notification 2\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} ec0b2444-c8b0-45d0-bbd6-847dd007c2fd 201} a7c65d7d-0f9c-47e1-82c2-c8098c47c016} $ edgex-cli notification add -d \"Test data for notification 3\" --category \"health-check\" 
--labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} 45af7f94-c99e-4fb1-a632-fab5ff475be4 201} f982fc97-f53f-4154-bfce-3ef8666c3911} Then list the transmissions: $ edgex-cli transmission list SubscriptionName ResendCount Status Test-Subscription 0 FAILED Test-Subscription 0 FAILED Test-Subscription 0 FAILED","title":"edgex-cli transmission list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-transmission-id","text":"Return a transmission by ID $ ID=`edgex-cli transmission list -j | jq -r '.transmissions[0].id'` $ edgex-cli transmission id -i $ID SubscriptionName ResendCount Status Test-Subscription 0 FAILED","title":"edgex-cli transmission id"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-transmission-rm","text":"Delete processed transmissions older than the specificed age (in milliseconds) $ edgex-cli transmission rm -a 100","title":"edgex-cli transmission rm"},{"location":"getting-started/tools/Ch-GUI/","text":"Graphical User Interface (GUI) EdgeX's graphical user interface (GUI) is provided for demonstration and development use to manage and monitor a single instance of EdgeX Foundry. Setup You can quickly run the GUI in a Docker container or as a Snap. You can also download, build and run the GUI natively on your host. Docker Compose The EdgeX GUI is now incorporated into all the secure and non-sure Docker Compose files provided by the project. Locate and download the Docker Compose file that best suits your needs from https://github.com/edgexfoundry/edgex-compose. For example, in the Jakarta branch of edgex-compose the *-with-app-sample* compose files include the Sample App Service allowing the configurable pipeline to be manipulated from the UI. See the four Docker Compose files that include the Sample App Service circled below. Note The GUI can now be used in secure mode as well as non-secure mode. 
See the Getting Started using Docker guide for help on how to find, download and use a Docker Compose file to run EdgeX - in this case with the Sample App Service. Secure mode with API Gateway token When first running the UI in secure mode, you will be prompted to enter a token. Follow the How to get access token? link to view the documentation on how to get an API Gateway access token. Once you enter the token, the UI will have access to the EdgeX services via the API Gateway. Note The UI is no longer restricted to access from localhost . It can now be accessed from any IP address that can access the host system. This is allowed because the UI is secured via API Gateway token when running in secure mode. Snaps Installing EdgeX UI as a snap The latest stable version of the snap can be installed using: $ sudo snap install edgex-ui A specific release of the snap can be installed from a dedicated channel. For example, to install the 2.1 (Jakarta) release: $ sudo snap install edgex-ui --channel=2.1 The latest development version of the edgex-ui snap can be installed using: $ sudo snap install edgex-ui --edge Generate token for entering UI secure mode A JWT access token is required to access the UI securely through the API Gateway. To do so: Generate a public/private keypair $ openssl ecparam -genkey -name prime256v1 -noout -out private.pem $ openssl ec -in private.pem -pubout -out public.pem Configure user and public-key $ sudo snap set edgexfoundry env.security-proxy.user=user01,USER_ID,ES256 $ sudo snap set edgexfoundry env.security-proxy.public-key=\"$(cat public.pem)\" Generate a token $ edgexfoundry.secrets-config proxy jwt --algorithm ES256 \\ --private_key private.pem --id USER_ID --expiration=1h This output is the JWT token for UI login in secure mode. Please keep the token in a safe place for future re-use as the same token cannot be regenerated or recovered from EdgeX's secrets-config CLI. The token is required each time you reopen the web page. 
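The token printed by secrets-config is a standard three-part JWT (header.payload.signature, each part base64url-encoded), so its claims can be inspected with nothing more than coreutils, for example to check when the token expires before logging in. A minimal sketch, using an illustrative stand-in token rather than a real one:

```shell
# Build an illustrative token from a sample payload; a real token from
# secrets-config has the same header.payload.signature shape.
payload_json='{"id":"USER_ID","exp":1700000000}'
TOKEN="header.$(printf '%s' "$payload_json" | base64 | tr '+/' '-_' | tr -d '=\n').signature"

# Extract the middle (claims) section and map base64url back to base64
claims_b64=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore any padding that the base64url encoding strips
case $(( ${#claims_b64} % 4 )) in 2) claims_b64="$claims_b64==" ;; 3) claims_b64="$claims_b64=" ;; esac
claims=$(printf '%s' "$claims_b64" | base64 -d)
echo "$claims"
```

Substitute your real token for TOKEN; the decoded exp claim is a Unix timestamp, so a token generated with --expiration=1h stops working an hour after it was issued.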
Using the edgex-ui snap Open your browser at http://localhost:4000 Please log in to EdgeX with the JWT token we generated above. For more details please refer to edgex-ui Snap Native If you are running EdgeX natively (outside of Docker Compose or a Snap), you will find instructions on how to build and run the GUI on your platform in the GUI repository README General GUI Address Once the GUI is up and running, simply visit port 4000 on the GUI's host machine (ex: http://localhost:4000) to enter the GUI Dashboard (see below). The GUI does not require any login. Menu Bar The left side of the Dashboard holds a menu bar that allows you access to the GUI functionality. The \"hamburger\" icon on the menu bar allows you to shrink the menu bar to icons only or expand it to show both icons and labels. Mobile Device Ready The EdgeX GUI can be used/displayed on a mobile device via the mobile device's browser if the GUI address is accessible to the device. The display may be skewed in order to fit the device screen. For example, the Dashboard menu will often change to icons over the expanded labeled menu bar when shown on a mobile device. Capability The GUI allows you to manage (add, remove, update) most of the EdgeX objects, including devices, device profiles, device services, rules, schedules, notifications, app services, etc. start, stop or restart the EdgeX services explore the memory, CPU and network traffic usage of EdgeX services monitor the data stream (the events and readings) collected by sensors and devices explore the configuration of an EdgeX service Dashboard The Dashboard page (the main page of the GUI) presents you with a set of clickable \"tiles\" that provide a quick view of the status of your EdgeX instance. That is, it provides some quick data points about the EdgeX instance and what the GUI is tracking. 
Specifically, the tiles in the Dashboard show you: the number of device services EdgeX is aware of and their status (locked vs unlocked) the number of devices being managed by EdgeX (through the associated device services) the number of device profiles registered with core metadata the number of schedules (or intervals) EdgeX is managing the number of notifications EdgeX has seen the number of events and readings generated by device services and passing through core data the number of EdgeX micro services currently being monitored through the system management service If the GUI has difficulty getting the information it needs to display a tile in the Dashboard, a popup indicating the issue will be displayed over the screen. In the example below, the support scheduling service was down and the GUI Dashboard was unable to access the scheduler service. In this way, the Dashboard provides a quick and easy way to see whether the EdgeX instance is nominal or has underlying issues. You can click on each of the tiles in the Dashboard. Doing so provides more details about each. More precisely, clicking on a tile takes you to another part of the GUI where the details of that item can be found. For example, clicking on the Device Profiles tile takes you to the Metadata page and the Device Profile tab (covered below) System The EdgeX platform is composed of a set of micro services. The system management service (and associated executors) tracks each micro service's status (up or down), metrics of the running service (memory, CPU, network traffic), and configuration influencing the operation of the service. The system management service also provides the ability (through APIs) to start, stop and restart a service. Service information and the ability to call on the start, stop, restart APIs is surfaced through the System page. Warning The system management services are deprecated in EdgeX as of Ireland. 
Their full replacement has not been identified, but adopters should be aware that the service will be replaced in a future release. Please note that the System List display provides access to a static list of EdgeX services. As device services and application services (among other services) may be added or removed based on use case needs (often requiring new custom south and north side services), the GUI is not made aware of these and therefore will not display details on these services. Metrics From the System Service List, you can click on the Metric icon for any service to see the memory, CPU and network traffic telemetry for that service. The refresh rate can be adjusted on the display to have the GUI poll the system management service more or less frequently. Info The metrics are provided via an associated executor feeding the system management agent telemetry data. In the case of Docker, a Docker executor is capturing standard Docker stats and relaying them to the system management agent, which in turn makes these available through its APIs to the GUI. Config The configuration of each service is made available by clicking on the Config icon for any service from the System Service List. The configuration is displayed in JSON form and is read only. If running Consul, use the Consul Web UI to make changes to the configuration. Operation From the System Service List, you can request to stop, start or restart any of the listed services with the operation buttons in the far right column. Warning There is no confirmation popup or warning on these requests. When you push a stop, start, restart button, the request is immediately made to the system management service for that operation. The state of the service will change when these operations are invoked. When a service is stopped, the metric and config information for the service will be unavailable. 
After starting (or restarting) a service, you may need to hit the Refresh button on the page to get the state and metric/config icons to change. Metadata The Metadata page (available from the Metadata menu option) provides three tabs for viewing and managing the basic elements of metadata: device services, device profiles and devices. Device Service Tab The Device Service tab displays the device services known to EdgeX (as device services registered in core metadata). Device services cannot be added or removed through the GUI, but information about the existing device services (e.g., port, admin state) and several actions on the existing device services can be accomplished on this tab. First note that for each device service listed, the number of associated devices is depicted. If you click on the Associated Devices button, it will take you to the Device tab to get more information about or work with any of the associated devices. The Settings button on each device service allows you to change the description or the admin state of the device service. Alert Please note that you must hit the Save button after making any changes to the Device Service Settings. If you don't and move away from the page, your changes will be lost. Device Tab The Device Tab on the Metadata page offers you details about all the sensors/devices known to your EdgeX instance. Buttons at the top of the tab allow you to add, remove or edit a device (or a collection of devices when deleting, using the selector checkbox in the device list). On the row of each device listed, links take you to the appropriate tabs to see the associated device profile or device service for the device. Icons on the row of each device listed cause editable areas to expand at the bottom of the tab to execute a device command or see/modify the device's AutoEvents. 
The command execution display allows you to select the specific device resource or device command (from the Command Name List), and execute or try either a GET or SET command (depending on what the associated device profile for the device says is allowed). The response will be displayed in the ResponseRaw area after the try button is pushed. Add Device Wizard The Add button on the Device List tab will take you to the Add Device Wizard. This utility will assist you, entry screen by entry screen, in getting a new device set up in EdgeX. Specifically, it has you (in order): select the device service to which the new device will be associated select the device profile on which the new device will be templated enter general characteristics for the device (name, description, labels, etc.) and set its operating and admin states optionally set up auto events for scheduled data collection enter specific protocol properties for the device (based on known templates the GUI has at its disposal such as REST, MQTT, Modbus, etc.) Once all the information in the Add Device Wizard screens is entered, the Submit button at the end of the wizard causes your new device to be created in core metadata with all appropriate associations. Device Profile Tab The Device Profile Tab on the Metadata page displays the device profiles known to EdgeX and allows you to add new profiles or edit/remove existing profiles. The AssociatedDevice button on each row of the Device Profile List will take you to the Device tab and show you the list of devices currently associated to the device profile. Warning When deleting a profile, the system will pop up an error if devices are still associated to the profile. Data Center (Seeing Event/Reading Data) From the Data Center option on the GUI's menu bar you can see the stream of Event/Readings coming from the device services into core data. The event/reading data will be displayed in JSON form. 
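Under the hood, the Data Center view is a periodic REST poll of core data rather than a push stream. The idea can be sketched as below, with the HTTP call injected so the logic can be exercised without a running EdgeX instance; the `/api/v2/event/all` path matches the EdgeX v2 core-data API, but the port, response shape, and stub data here are illustrative assumptions:

```python
import json
from typing import Callable, List


def poll_events(fetch: Callable[[str], str],
                base_url: str = "http://localhost:59880",
                limit: int = 10) -> List[dict]:
    """Fetch the most recent events from core data with a single REST poll.

    `fetch` performs the HTTP GET and returns the response body, so a real
    client (e.g. urllib.request) or a test stub can be plugged in.
    """
    body = fetch(f"{base_url}/api/v2/event/all?offset=0&limit={limit}")
    return json.loads(body).get("events", [])


# Stub standing in for core data; a real caller would issue an actual GET.
def fake_fetch(url: str) -> str:
    return json.dumps({"apiVersion": "v2", "statusCode": 200,
                       "events": [{"deviceName": "thermo-1",
                                   "readings": [{"resourceName": "temp",
                                                 "value": "21.5"}]}]})


events = poll_events(fake_fetch)
print(events[0]["deviceName"])  # thermo-1
```

In the GUI this poll fires every few seconds; a real loop would sleep between calls, and an empty `events` list simply means core data has nothing new (or persistence is off).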
There are two tabs on the Data Stream page, both with Start and Pause buttons: Event (which allows incoming events to be displayed and the display will include the event's associated readings) Reading (allows incoming readings to be displayed, which will only show the reading and not its associated owning event) Hit the Start button on either tab to see the event or reading data displayed in the stream pane (events are shown in the example below). Push the Pause button to stop the display of event or reading data. Warning In actuality, the event and reading data is pulled from core data via REST call every three (3) seconds - so it is not a live stream display but a poll of data. Furthermore, if EdgeX is set up to have device services send data directly to application services via message bus and core data is not running, or if core data is configured to have persistence turned off, there will be no data in core data to pull and so there will be no events or readings to see. Scheduler (Interval/Interval List) Interval and Interval Actions, which help define task management schedules in EdgeX, are managed via the Scheduler page by selecting Scheduler off the menu bar. Again, as with many of the EdgeX GUI pages, there are two tabs on the Scheduler page: Interval List to display, add, edit and delete Intervals Interval Action List to display, add, edit and delete Interval Actions, which must be associated to an Interval Interval List When updating or adding an Interval, you must provide a name and an interval duration string, which takes an unsigned integer plus a unit of measure that must be one of \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\" representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. Optionally provide start/end dates and an indication that the interval runs only once (and thereby ignores the interval duration). Interval Action List Interval Actions define what happens when the Interval kicks off. 
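The Interval duration format described above (an unsigned integer plus one of the ns/us/ms/s/m/h unit suffixes) can be validated with a small parser. This is an illustrative sketch, not the scheduler service's actual implementation, and it deliberately rejects compound values like "1h30m":

```python
import re

# Unit suffixes from the Interval duration spec, mapped to nanoseconds.
_UNITS = {"ns": 1, "us": 1_000, "µs": 1_000, "ms": 1_000_000,
          "s": 1_000_000_000, "m": 60_000_000_000, "h": 3_600_000_000_000}


def parse_interval(duration: str) -> int:
    """Return the duration in nanoseconds, or raise ValueError if malformed."""
    match = re.fullmatch(r"(\d+)(ns|us|µs|ms|s|m|h)", duration)
    if not match:
        raise ValueError(f"invalid interval duration: {duration!r}")
    value, unit = match.groups()
    return int(value) * _UNITS[unit]


print(parse_interval("10s"))   # 10000000000
print(parse_interval("5m"))    # 300000000000
```

A bare number with no unit (e.g. "10") is rejected, which mirrors the GUI requiring the unit to be one of the listed suffixes.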
Interval Actions can define REST, MQTT or Email actions that take place when an Interval timer hits. The GUI provides the means to edit or create any of these actions. Note that an Interval Action must be associated to an already defined Interval. Notifications Notifications are messages from EdgeX to external systems about something that has happened in EdgeX - for example, that a new device has been created. Currently, notifications can be sent by email or REST call. The Notification Center page, available from the Notifications menu option, allows you to see new (not processed), processed or escalated (notifications that have failed to be sent within their resend limit) notifications. By default, the new notifications are displayed, but if you click on the Advanced >> link on the page (see below), you can select which type of notifications to display. The Subscriptions tab on the Notification Center page allows you to add, update or remove subscriptions to notifications. Subscribers are registered receivers of notifications - either via email or REST. When adding (or editing) a subscription, you must provide a name, category, label, receiver, and either an email address or REST endpoint. A template is provided to specify either the email or REST endpoint configuration data needed for the subscription. RuleEngine The Rule Engine page, from the RuleEngine menu option, provides the means to define streams and rules for the integrated eKuiper rules engine. Via the Stream tab, streams are defined by JSON. All that is really required is a stream name (EdgeXStream in the example below). The Rules tab allows eKuiper rules to be added, removed or updated/edited as well as started, stopped or restarted. When adding or editing a rule, you must provide a name, the rule SQL and action. 
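When a rule is submitted, the GUI ultimately hands eKuiper a JSON definition carrying exactly those three pieces: an id (the name), the SQL, and a list of actions. A sketch of assembling such a body; the rule id, SQL, and the use of eKuiper's built-in log sink here are illustrative choices, and MQTT or REST sinks would take extra parameters in the same slot:

```python
import json


def make_rule(rule_id: str, sql: str, actions: list) -> str:
    """Build the JSON body for an eKuiper-style rule definition."""
    if not rule_id or not sql or not actions:
        raise ValueError("a rule needs a name, SQL, and at least one action")
    return json.dumps({"id": rule_id, "sql": sql, "actions": actions})


# Illustrative rule: log any temperature over 30 seen on the EdgeXStream stream.
body = make_rule(
    "highTemp",
    "SELECT temperature FROM EdgeXStream WHERE temperature > 30",
    [{"log": {}}],  # log sink; an mqtt or rest sink would carry its own config
)
print(json.loads(body)["id"])  # highTemp
```

If the stream named in the SQL (EdgeXStream here) has not been defined on the Stream tab first, the GUI will reject the rule on submit, matching the validation in `make_rule` that every field must be present.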
The action can be one of the following (some requiring extra parameters): send the result to a REST HTTP server (allowing an EdgeX command to be called) send the result to an MQTT broker send the result to the EdgeX message bus send the result to a log file See the eKuiper documentation for more information on how to define rules. Alert Once a rule is created, it is started by default. Return to the Rules tab on the RulesEngine page to stop a new rule. When creating or editing a rule, if the stream referenced in the rule is not already defined, the GUI will present an error when trying to submit the rule. AppService In the AppService page, you can configure existing configurable application services. The list of available configurable app services is determined by the UI automatically (based on a query for available app services from the registry service). Configurable When the application service is a configurable app service and is known to the GUI, the Configurable button on the App Service List allows you to change the triggers, functions, secrets and other configuration associated to the configurable app service. There are four tabs in the Configurable Setting editor: Trigger which defines how the configurable app service begins execution Pipeline Functions defining which functions are part of the configurable app service pipeline and in which order they should be executed Insecure Secrets - setting up secrets used by the configurable app service when running in non-secure mode (meaning Vault is not used to provide the secrets) Store and Forward which enables and configures the batch store and forward export capability Note When the Trigger is changed, the service must be restarted for the change to take effect. Why Demo and Developer Use Only The GUI is meant as a developer tool or to be used in EdgeX demonstration situations. It is not yet designed for production settings. There are several reasons for this restriction. 
The GUI is not designed to assist you in managing multiple EdgeX instances running in a deployment, as would be typical in a production setting. It cannot be dynamically pointed to any running instance of EdgeX on multiple hosts. The GUI knows about a single instance of EdgeX running (by default, the instance that is on the same host as the GUI). The GUI provides no access controls. All functionality is open to anyone that can access the GUI URL. The GUI does not have the Kong token to negotiate through the API Gateway when the GUI is running outside of the Docker network - where the other EdgeX services are running. This would mean that the GUI would not be able to access any of the EdgeX service instance APIs. The EdgeX community is exploring efforts to make the GUI available in secure mode in a future release. 
Advanced Topics The following items discuss topics that are a bit beyond the basic use cases of the Application Functions SDK when interacting with EdgeX. 
Configurable Functions Pipeline This SDK provides the capability to define the functions pipeline via configuration rather than code by using the app-service-configurable application service. See the App Service Configurable section for more details. Custom REST Endpoints It is not uncommon to require your own custom REST endpoints when building an Application Service. Rather than spin up your own webserver inside of your app (alongside the already existing running webserver), we've exposed a method that allows you to add your own routes to the existing webserver. A few routes are reserved and cannot be used: /api/v2/version /api/v2/ping /api/v2/metrics /api/v2/config /api/v2/trigger /api/v2/secret To add your own route, use the AddRoute() API provided on the ApplicationService interface. Example - Add Custom REST route myHandler := func ( writer http . ResponseWriter , req * http . Request ) { service := req . Context (). Value ( interfaces . AppServiceContextKey ).( interfaces . ApplicationService ) service . LoggingClient (). Info ( \"TEST\" ) writer . Header (). Set ( \"Content-Type\" , \"text/plain\" ) writer . WriteHeader ( 200 ) writer . Write ([] byte ( \"hello\" )) } service := pkg . NewAppService ( serviceKey ) service . AddRoute ( \"/myroute\" , myHandler , \"GET\" ) Under the hood, this simply adds the provided route, handler, and method to the gorilla mux.Router used in the SDK. For more information on gorilla mux you can check out the github repo here . You can access the interfaces.ApplicationService API for resources such as the logging client by pulling it from the context as shown above -- this is useful for when your routes might not be defined in your main.go where you have access to the interfaces.ApplicationService instance. Target Type The target type is the object type of the incoming data that is sent to the first function in the function pipeline. 
By default this is an EdgeX dtos.Event since typical usage is receiving Events from the EdgeX MessageBus. There are scenarios where the incoming data is not an EdgeX Event . One example scenario is when two application services are chained via the EdgeX MessageBus. The output of the first service is inference data from analyzing the original Event data, which is published back to the EdgeX MessageBus. The second service needs to be able to let the SDK know the target type of the input data it is expecting. For usages where the incoming data is not events , the TargetType of the expected incoming data can be set when the ApplicationService instance is created using the NewAppServiceWithTargetType() factory function. Example - Set and use custom Target Type type Person struct { FirstName string `json:\"first_name\"` LastName string `json:\"last_name\"` } service := pkg . NewAppServiceWithTargetType ( serviceKey , & Person {}) TargetType must be set to a pointer to an instance of your target type such as &Person{} . The first function in your function pipeline will be passed an instance of your target type, not a pointer to it. In the example above, the first function in the pipeline would start something like: func MyPersonFunction ( ctx interfaces . AppFunctionContext , data interface {}) ( bool , interface {}) { ctx . LoggingClient (). Debug ( \"MyPersonFunction executing\" ) if data == nil { return false , errors . New ( \"no data received to MyPersonFunction\" ) } person , ok := data .( Person ) if ! ok { return false , errors . New ( \"MyPersonFunction type received is not a Person\" ) } // .... The SDK supports un-marshaling JSON or CBOR encoded data into an instance of the target type. If your incoming data is not JSON or CBOR encoded, you then need to set the TargetType to &[]byte . If the target type is set to &[]byte the incoming data will not be un-marshaled. 
The content type, if set, will be set on the interfaces.AppFunctionContext and can be accessed via the InputContentType() API. Your first function will be responsible for decoding the data or not. Command Line Options See the Common Command Line Options for the set of command line options common to all EdgeX services. The following command line options are specific to Application Services. Skip Version Check -s/--skipVersionCheck Indicates the service should skip the Core Service's version compatibility check. Service Key -sk/--serviceKey Sets the service key that is used with Registry, Configuration Provider and security services. The default service key is set by the application service. If the name provided contains the placeholder text , this text will be replaced with the name of the profile used. If profile is not set, the text is simply removed Can be overridden with EDGEX_SERVICE_KEY environment variable. Environment Variables See the Common Environment Variables section for the list of environment variables common to all EdgeX Services. The remaining items in this section are specific to Application Services. EDGEX_SERVICE_KEY This environment variable overrides the -sk/--serviceKey command-line option and the default set by the application service. Note If the name provided contains the text , this text will be replaced with the name of the profile used. Example - Service Key EDGEX_SERVICE_KEY: app--mycloud profile: http-export then service key will be app-http-export-mycloud EdgeX 2.0 The deprecated lowercase edgex_service environment variables have been removed in EdgeX 2.0 Custom Configuration Applications can specify custom configuration in the TOML file in two ways. Application Settings The first simple way is to add items to the ApplicationSetting section. This is a map of string key/value pairs, i.e. map[string]string . Use it for simple string values or comma-separated lists of string values. 
The ApplicationService API provides the following access APIs for this configuration section: ApplicationSettings() map[string]string Returns the whole list of application settings GetAppSetting(setting string) (string, error) Returns a single entry from the map whose key matches the passed-in setting value GetAppSettingStrings(setting string) ([]string, error) Returns a list of strings for the entry whose key matches the passed-in setting value. The Entry is assumed to be a comma separated list of strings. Structure Custom Configuration EdgeX 2.0 Structure Custom Configuration is new for EdgeX 2.0 The second is the more complex Structured Custom Configuration which allows the Application Service to define and watch its own structured section in the service's TOML configuration file. The ApplicationService API provides the following APIs to enable structured custom configuration: LoadCustomConfig(config UpdatableConfig, sectionName string) error Loads the service's custom configuration from local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration the first time the service is started, if the service is using the Configuration Provider. The UpdateFromRaw interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider. ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error Starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider the UpdateWritableFromRaw interface will be called on the custom configuration to apply the updates and then signal that the changes occurred via changedCallback. See the Application Service Template for an example of using the new Structured Custom Configuration capability. 
See here for defining the structured custom configuration See here for loading, validating and watching the configuration Store and Forward The Store and Forward capability allows for export functions to persist data on failure and for the export of the data to be retried at a later time. Note The order in which data is exported via this retry mechanism is not guaranteed to be the same order in which the data was initially received from Core Data Configuration Writable.StoreAndForward allows enabling, setting the interval between retries and the max number of retries. If running with Configuration Provider, these settings can be changed on the fly via Consul without having to restart the service. Example - Store and Forward configuration [Writable.StoreAndForward] Enabled = false RetryInterval = \"5m\" MaxRetryCount = 10 Note RetryInterval should be at least 1 second (eg. '1s') or greater. If a value less than 1 second is specified, 1 second will be used. Endless retries will occur when MaxRetryCount is set to 0. If MaxRetryCount is set to less than 0, a default of 1 retry will be used. Database configuration section describes which database type to use and the information required to connect to the database. This section is required if Store and Forward is enabled. It is optional if not using Redis for the EdgeX MessageBus which is now the default. Example - Database configuration [Database] Type = \"redisdb\" Host = \"localhost\" Port = 6379 Timeout = \"30s\" EdgeX 2.0 Support for Mongo DB has been removed in EdgeX 2.0 How it works When an export function encounters an error sending data it can call SetRetryData(payload []byte) on the AppFunctionContext . This will store the data for later retry. If the Application Service is stopped and then restarted while stored data hasn't been successfully exported, the export retry will resume once the service is up and running again. 
Note It is important that export functions return an error and stop pipeline execution after the call to SetRetryData . See HTTPPost function in SDK as an example When the RetryInterval expires, the function pipeline will be re-executed starting with the export function that saved the data. The saved data will be passed to the export function which can then attempt to resend the data. Note The export function will receive the data as it was stored, so it is important that any transformation of the data occur in functions prior to the export function. The export function should only export the data that it receives. One of three outcomes can occur after the export retry has completed. Export retry was successful In this case, the stored data is removed from the database and the execution of the pipeline functions after the export function, if any, continues. Export retry fails and retry count has not been exceeded In this case, the stored data is updated in the database with the incremented retry count Export retry fails and retry count has been exceeded In this case, the stored data is removed from the database and never retried again. Note Changing Writable.Pipeline.ExecutionOrder will invalidate all currently stored data and result in it all being removed from the database on the next retry. This is because the position of the export function can no longer be guaranteed and there is no way to ensure it is properly executed on the retry. Secrets Configuration All instances of App Services running in secure mode require a SecretStore to be configured. With the use of Redis Pub/Sub as the default EdgeX MessageBus all App Services need the redisdb known secret added to their SecretStore so they can connect to the Secure EdgeX MessageBus. See the Secure MessageBus documentation for more details. 
Example - SecretStore configuration [SecretStore] Type = \"vault\" Host = \"localhost\" Port = 8200 Path = \"app-sample/\" Protocol = \"http\" RootCaCertPath = \"\" ServerName = \"\" TokenFile = \"/tmp/edgex/secrets/app-sample/secrets-token.json\" [SecretStore.Authentication] AuthType = \"X-Vault-Token\" EdgeX 2.0 For Edgex 2.0 all Application Service Secret Stores are exclusive so the explicit [SecretStoreExclusive] configuration has been removed. Storing Secrets Secure Mode When running an application service in secure mode, secrets can be stored in the SecretStore by making an HTTP POST call to the /api/v2/secret API route in the application service. The secret data POSTed is stored and retrieved from the SecretStore based on values in the [SecretStore] section of the configuration file. Once a secret is stored, only the service that added the secret will be able to retrieve it. For secret retrieval see Getting Secrets section below. Example - JSON message body { \"path\" : \"MyPath\" , \"secretData\" : [ { \"key\" : \"MySecretKey\" , \"value\" : \"MySecretValue\" } ] } Note Path specifies the type or location of the secret in the SecretStore. It is appended to the base path from the [SecretStore] configuration. Insecure Mode When running in insecure mode, the secrets are stored and retrieved from the Writable.InsecureSecrets section of the service's configuration toml file. Insecure secrets and their paths can be configured as below. Example - InsecureSecrets Configuration [ Writable . 
InsecureSecrets ] [Writable.InsecureSecrets.AWS] Path = \"aws\" [Writable.InsecureSecrets.AWS.Secrets] username = \"aws-user\" password = \"aws-pw\" [Writable.InsecureSecrets.DB] Path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\" Getting Secrets Application Services can retrieve their secrets from their SecretStore using the interfaces.ApplicationService.GetSecret() API or from the interfaces.AppFunctionContext.GetSecret() API When in secure mode, the secrets are retrieved from the SecretStore based on the [SecretStore] configuration values. When running in insecure mode, the secrets are retrieved from the [Writable.InsecureSecrets] configuration. Background Publishing Application Services using the MessageBus trigger can request a background publisher using the AddBackgroundPublisher API in the SDK. This method takes an int representing the background channel's capacity as the only parameter and returns a reference to a BackgroundPublisher. This reference can then be used by background processes to publish to the configured MessageBus output. A custom topic can be provided to use instead of the configured message bus output as well. EdgeX 2.0 For EdgeX 2.0 the background publish operation takes a full AppContext instead of just the parameters used to create a message envelope. This allows the background publisher to leverage context-based topic formatting functionality as the trigger output. Example - Background Publisher func runJob ( service interfaces . ApplicationService , done chan struct {}){ ticker := time . NewTicker ( 1 * time . Minute ) //initialize background publisher with a channel capacity of 10 and a custom topic publisher , err := service . AddBackgroundPublisherWithTopic ( 10 , \"custom-topic\" ) if err != nil { // do something } go func ( pub interfaces . BackgroundPublisher ) { for { select { case <- ticker . C : message := myDataService . GetMessage () payload , err := json . 
Marshal ( message ) if err != nil { //do something } ctx := service . BuildContext ( uuid . NewString (), common . ContentTypeJSON ) // modify context as needed err = pub . Publish ( payload , ctx ) if err != nil { //do something } case <- done : ticker . Stop () return } } }( publisher ) } func main () { service := pkg . NewAppService ( serviceKey ) done := make ( chan struct {}) defer close ( done ) //pass publisher to your background job runJob ( service , done ) service . SetFunctionsPipeline ( All , My , Functions , ) service . MakeItRun () os . Exit ( 0 ) } Stopping the Service Application Services will listen for SIGTERM / SIGINT signals from the OS and stop the function pipeline in response. The pipeline can also be exited programmatically by calling sdk.MakeItStop() on the running ApplicationService instance. This can be useful for cases where you want to stop a service in response to a runtime condition, e.g. receiving a \"poison pill\" message through its trigger. Received Topic EdgeX 2.0 Received Topic is new for EdgeX 2.0 When messages are received via the EdgeX MessageBus or External MQTT triggers, the topic that the data was received on is seeded into the new Context Storage on the AppFunctionContext with the key receivedtopic . This makes the Received Topic available to all functions in the pipeline. The SDK provides the interfaces.RECEIVEDTOPIC constant for this key. See the Context Storage section for more details on extracting values. Pipeline Per Topics EdgeX 2.1 Pipeline Per Topics is new for EdgeX 2.1 The Pipeline Per Topics feature allows for multiple function pipelines to be defined. Each will execute only when one of the specified pipeline topics matches the received topic. The pipeline topics can have wildcards ( # ) allowing the topic to match a variety of received topics. Each pipeline has its own set of functions (transforms) that are executed on the received message. 
If the # wildcard is used by itself for a pipeline topic, it will match all received topics and the specified functions pipeline will execute on every message received. Note The Pipeline Per Topics feature is targeted for EdgeX MessageBus and External MQTT triggers, but can be used with Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank , so the pipeline's topics must contain a single topic set to the # wildcard so that all messages received are processed by the pipeline. Example pipeline topics with wildcards \"#\" - Matches all messages received \"edgex/events/#\" - Matches all messages received with the base topic `edgex/events/` \"edgex/events/core/#\" - Matches all messages received just from Core Data \"edgex/events/device/#\" - Matches all messages received just from Device services \"edgex/events/#/my-profile/#\" - Matches all messages received from Core Data or Device services for `my-profile` \"edgex/events/#/#/my-device/#\" - Matches all messages received from Core Data or Device services for `my-device` \"edgex/events/#/#/#/my-source\" - Matches all messages received from Core Data or Device services for `my-source` Refer to the Filter By Topics section for details on the structure of the received topic. All pipeline function capabilities such as Store and Forward, Batching, etc. can be used with one or more of the multiple function pipelines. Store and Forward uses the Pipeline's ID to find and restart the pipeline on retries. Example - Adding multiple function pipelines This example adds two pipelines. One to process data from the Random-Float-Device device and one to process data from the Int32 and Int64 sources. sample := functions . NewSample () err = service . AddFunctionsPipelineForTopics ( \"Floats-Pipeline\" , [] string { \"edgex/events/#/#/Random-Float-Device/#\" }, transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , sample . LogEventDetails , sample . 
ConvertEventToXML , sample . OutputXML ) if err != nil { ... return - 1 } err = service . AddFunctionsPipelineForTopics ( \"Int32-Pipeline\" , [] string { \"edgex/events/#/#/#/Int32\" , \"edgex/events/#/#/#/Int64\" }, transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , sample . LogEventDetails , sample . ConvertEventToXML , sample . OutputXML ) if err != nil { ... return - 1 }","title":"Advanced Topics"},{"location":"microservices/application/AdvancedTopics/#advanced-topics","text":"The following items discuss topics that are a bit beyond the basic use cases of the Application Functions SDK when interacting with EdgeX.","title":"Advanced Topics"},{"location":"microservices/application/AdvancedTopics/#configurable-functions-pipeline","text":"This SDK provides the capability to define the functions pipeline via configuration rather than code by using the app-service-configurable application service. See the App Service Configurable section for more details.","title":"Configurable Functions Pipeline"},{"location":"microservices/application/AdvancedTopics/#custom-rest-endpoints","text":"It is not uncommon to require your own custom REST endpoints when building an Application Service. Rather than spin up your own webserver inside of your app (alongside the already existing running webserver), we've exposed a method that allows you to add your own routes to the existing webserver. A few routes are reserved and cannot be used: /api/v2/version /api/v2/ping /api/v2/metrics /api/v2/config /api/v2/trigger /api/v2/secret To add your own route, use the AddRoute() API provided on the ApplicationService interface. Example - Add Custom REST route myHandler := func ( writer http . ResponseWriter , req * http . Request ) { service := req . Context (). Value ( interfaces . AppServiceContextKey ).( interfaces . ApplicationService ) service . LoggingClient (). Info ( \"TEST\" ) writer . Header (). Set ( \"Content-Type\" , \"text/plain\" ) writer . WriteHeader ( 200 ) writer . Write ([] byte ( \"hello\" )) } service := pkg . NewAppService ( serviceKey ) service . AddRoute ( \"/myroute\" , myHandler , \"GET\" ) Under the hood, this simply adds the provided route, handler, and method to the gorilla mux.Router used in the SDK. For more information on gorilla mux you can check out the github repo here . You can access the interfaces.ApplicationService API for resources such as the logging client by pulling it from the context as shown above -- this is useful for when your routes might not be defined in your main.go where you have access to the interfaces.ApplicationService instance.","title":"Custom REST Endpoints"},{"location":"microservices/application/AdvancedTopics/#target-type","text":"The target type is the object type of the incoming data that is sent to the first function in the function pipeline. By default this is an EdgeX dtos.Event since typical usage is receiving Events from the EdgeX MessageBus. There are scenarios where the incoming data is not an EdgeX Event . One example scenario is when two application services are chained via the EdgeX MessageBus. The output of the first service is inference data from analyzing the original Event data, which is published back to the EdgeX MessageBus. The second service needs to be able to let the SDK know the target type of the input data it is expecting. For usages where the incoming data is not events , the TargetType of the expected incoming data can be set when the ApplicationService instance is created using the NewAppServiceWithTargetType() factory function. Example - Set and use custom Target Type type Person struct { FirstName string `json:\"first_name\"` LastName string `json:\"last_name\"` } service := pkg . NewAppServiceWithTargetType ( serviceKey , & Person {}) TargetType must be set to a pointer to an instance of your target type such as &Person{} . 
The first function in your function pipeline will be passed an instance of your target type, not a pointer to it. In the example above, the first function in the pipeline would start something like: func MyPersonFunction ( ctx interfaces . AppFunctionContext , data interface {}) ( bool , interface {}) { ctx . LoggingClient (). Debug ( \"MyPersonFunction executing\" ) if data == nil { return false , errors . New ( \"no data received to MyPersonFunction\" ) } person , ok := data .( Person ) if ! ok { return false , errors . New ( \"MyPersonFunction type received is not a Person\" ) } // .... The SDK supports un-marshaling JSON or CBOR encoded data into an instance of the target type. If your incoming data is not JSON or CBOR encoded, you then need to set the TargetType to &[]byte . If the target type is set to &[]byte the incoming data will not be un-marshaled. The content type, if set, will be set on the interfaces.AppFunctionContext and can be accessed via the InputContentType() API. Your first function will be responsible for decoding the data or not.","title":"Target Type"},{"location":"microservices/application/AdvancedTopics/#command-line-options","text":"See the Common Command Line Options for the set of command line options common to all EdgeX services. The following command line options are specific to Application Services.","title":"Command Line Options"},{"location":"microservices/application/AdvancedTopics/#skip-version-check","text":"-s/--skipVersionCheck Indicates the service should skip the Core Service's version compatibility check.","title":"Skip Version Check"},{"location":"microservices/application/AdvancedTopics/#service-key","text":"-sk/--serviceKey Sets the service key that is used with Registry, Configuration Provider and security services. The default service key is set by the application service. If the name provided contains the placeholder text , this text will be replaced with the name of the profile used. 
If profile is not set, the text is simply removed Can be overridden with EDGEX_SERVICE_KEY environment variable.","title":"Service Key"},{"location":"microservices/application/AdvancedTopics/#environment-variables","text":"See the Common Environment Variables section for the list of environment variables common to all EdgeX Services. The remaining items in this section are specific to Application Services.","title":"Environment Variables"},{"location":"microservices/application/AdvancedTopics/#edgex_service_key","text":"This environment variable overrides the -sk/--serviceKey command-line option and the default set by the application service. Note If the name provided contains the text , this text will be replaced with the name of the profile used. Example - Service Key EDGEX_SERVICE_KEY: app--mycloud profile: http-export then service key will be app-http-export-mycloud EdgeX 2.0 The deprecated lowercase edgex_service environment variables have been removed in EdgeX 2.0","title":"EDGEX_SERVICE_KEY"},{"location":"microservices/application/AdvancedTopics/#custom-configuration","text":"Applications can specify custom configuration in the TOML file in two ways.","title":"Custom Configuration"},{"location":"microservices/application/AdvancedTopics/#application-settings","text":"The first simple way is to add items to the ApplicationSetting section. This is a map of string key/value pairs, i.e. map[string]string . Use it for simple string values or comma-separated lists of string values. The ApplicationService API provides the following access APIs for this configuration section: ApplicationSettings() map[string]string Returns the whole list of application settings GetAppSetting(setting string) (string, error) Returns a single entry from the map whose key matches the passed-in setting value GetAppSettingStrings(setting string) ([]string, error) Returns a list of strings for the entry whose key matches the passed-in setting value. 
The Entry is assumed to be a comma separated list of strings.","title":"Application Settings"},{"location":"microservices/application/AdvancedTopics/#structure-custom-configuration","text":"EdgeX 2.0 Structure Custom Configuration is new for EdgeX 2.0 The second is the more complex Structured Custom Configuration which allows the Application Service to define and watch its own structured section in the service's TOML configuration file. The ApplicationService API provides the following APIs to enable structured custom configuration: LoadCustomConfig(config UpdatableConfig, sectionName string) error Loads the service's custom configuration from local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration the first time the service is started, if the service is using the Configuration Provider. The UpdateFromRaw interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider. ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error Starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider the UpdateWritableFromRaw interface will be called on the custom configuration to apply the updates and then signal that the changes occurred via changedCallback. See the Application Service Template for an example of using the new Structured Custom Configuration capability. See here for defining the structured custom configuration See here for loading, validating and watching the configuration","title":"Structure Custom Configuration"},{"location":"microservices/application/AdvancedTopics/#store-and-forward","text":"The Store and Forward capability allows for export functions to persist data on failure and for the export of the data to be retried at a later time. 
Note The order in which data is exported via this retry mechanism is not guaranteed to be the same order in which the data was initially received from Core Data","title":"Store and Forward"},{"location":"microservices/application/AdvancedTopics/#configuration","text":"Writable.StoreAndForward allows enabling, setting the interval between retries and the max number of retries. If running with Configuration Provider, these settings can be changed on the fly via Consul without having to restart the service. Example - Store and Forward configuration [Writable.StoreAndForward] Enabled = false RetryInterval = \"5m\" MaxRetryCount = 10 Note RetryInterval should be at least 1 second (eg. '1s') or greater. If a value less than 1 second is specified, 1 second will be used. Endless retries will occur when MaxRetryCount is set to 0. If MaxRetryCount is set to less than 0, a default of 1 retry will be used. Database configuration section describes which database type to use and the information required to connect to the database. This section is required if Store and Forward is enabled. It is optional if not using Redis for the EdgeX MessageBus which is now the default. Example - Database configuration [Database] Type = \"redisdb\" Host = \"localhost\" Port = 6379 Timeout = \"30s\" EdgeX 2.0 Support for Mongo DB has been removed in EdgeX 2.0","title":"Configuration"},{"location":"microservices/application/AdvancedTopics/#how-it-works","text":"When an export function encounters an error sending data it can call SetRetryData(payload []byte) on the AppFunctionContext . This will store the data for later retry. If the Application Service is stopped and then restarted while stored data hasn't been successfully exported, the export retry will resume once the service is up and running again. Note It is important that export functions return an error and stop pipeline execution after the call to SetRetryData . 
See the HTTPPost function in the SDK as an example. When the RetryInterval expires, the function pipeline will be re-executed starting with the export function that saved the data. The saved data will be passed to the export function, which can then attempt to resend the data. Note The export function will receive the data as it was stored, so it is important that any transformation of the data occur in functions prior to the export function. The export function should only export the data that it receives. One of three outcomes can occur after the export retry has completed. Export retry was successful In this case, the stored data is removed from the database and the execution of the pipeline functions after the export function, if any, continues. Export retry fails and retry count has not been exceeded In this case, the stored data is updated in the database with the incremented retry count. Export retry fails and retry count has been exceeded In this case, the stored data is removed from the database and never retried again. Note Changing Writable.Pipeline.ExecutionOrder will invalidate all currently stored data and result in it all being removed from the database on the next retry. This is because the position of the export function can no longer be guaranteed and there is no way to ensure it is properly executed on the retry.","title":"How it works"},{"location":"microservices/application/AdvancedTopics/#secrets","text":"","title":"Secrets"},{"location":"microservices/application/AdvancedTopics/#configuration_1","text":"All instances of App Services running in secure mode require a SecretStore to be configured. With the use of Redis Pub/Sub as the default EdgeX MessageBus, all App Services need the redisdb known secret added to their SecretStore so they can connect to the Secure EdgeX MessageBus. See the Secure MessageBus documentation for more details.
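The three retry outcomes described above can be modeled with a small sketch. This is a toy stand-in for the SDK's stored-object handling (the storedObject type and the exact retry-count boundary are assumptions for illustration; MaxRetryCount of 0 means endless retries, as documented):

```go
package main

import "fmt"

// storedObject is a simplified stand-in for the SDK's StoredObject record.
type storedObject struct {
	payload    []byte
	retryCount int
}

// processRetry applies the three documented outcomes: success removes the
// record, a failure within the limit increments the retry count, and
// exhausting the limit removes the record for good. A zero maxRetryCount
// means retry endlessly.
func processRetry(store map[string]storedObject, id string, exportOK bool, maxRetryCount int) string {
	obj := store[id]
	switch {
	case exportOK:
		delete(store, id) // outcome 1: exported, stored data removed
		return "exported"
	case maxRetryCount == 0 || obj.retryCount+1 < maxRetryCount:
		obj.retryCount++ // outcome 2: failed, kept with incremented count
		store[id] = obj
		return "retry later"
	default:
		delete(store, id) // outcome 3: retries exhausted, never retried again
		return "gave up"
	}
}

func main() {
	store := map[string]storedObject{"a": {payload: []byte("data")}}
	fmt.Println(processRetry(store, "a", false, 2)) // retry later
	fmt.Println(processRetry(store, "a", false, 2)) // gave up
}
```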
Example - SecretStore configuration [SecretStore] Type = \"vault\" Host = \"localhost\" Port = 8200 Path = \"app-sample/\" Protocol = \"http\" RootCaCertPath = \"\" ServerName = \"\" TokenFile = \"/tmp/edgex/secrets/app-sample/secrets-token.json\" [SecretStore.Authentication] AuthType = \"X-Vault-Token\" EdgeX 2.0 For EdgeX 2.0 all Application Service Secret Stores are exclusive, so the explicit [SecretStoreExclusive] configuration has been removed.","title":"Configuration"},{"location":"microservices/application/AdvancedTopics/#storing-secrets","text":"","title":"Storing Secrets"},{"location":"microservices/application/AdvancedTopics/#secure-mode","text":"When running an application service in secure mode, secrets can be stored in the SecretStore by making an HTTP POST call to the /api/v2/secret API route in the application service. The secret data POSTed is stored and retrieved from the SecretStore based on values in the [SecretStore] section of the configuration file. Once a secret is stored, only the service that added the secret will be able to retrieve it. For secret retrieval, see the Getting Secrets section below. Example - JSON message body { \"path\" : \"MyPath\" , \"secretData\" : [ { \"key\" : \"MySecretKey\" , \"value\" : \"MySecretValue\" } ] } Note Path specifies the type or location of the secret in the SecretStore. It is appended to the base path from the [SecretStore] configuration.","title":"Secure Mode"},{"location":"microservices/application/AdvancedTopics/#insecure-mode","text":"When running in insecure mode, the secrets are stored and retrieved from the Writable.InsecureSecrets section of the service's configuration toml file. Insecure secrets and their paths can be configured as below. Example - InsecureSecrets Configuration [ Writable .
InsecureSecrets ] [Writable.InsecureSecrets.AWS] Path = \"aws\" [Writable.InsecureSecrets.AWS.Secrets] username = \"aws-user\" password = \"aws-pw\" [Writable.InsecureSecrets.DB] Path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\"","title":"Insecure Mode"},{"location":"microservices/application/AdvancedTopics/#getting-secrets","text":"Application Services can retrieve their secrets from their SecretStore using the interfaces.ApplicationService.GetSecret() API or from the interfaces.AppFunctionContext.GetSecret() API When in secure mode, the secrets are retrieved from the SecretStore based on the [SecretStore] configuration values. When running in insecure mode, the secrets are retrieved from the [Writable.InsecureSecrets] configuration.","title":"Getting Secrets"},{"location":"microservices/application/AdvancedTopics/#background-publishing","text":"Application Services using the MessageBus trigger can request a background publisher using the AddBackgroundPublisher API in the SDK. This method takes an int representing the background channel's capacity as the only parameter and returns a reference to a BackgroundPublisher. This reference can then be used by background processes to publish to the configured MessageBus output. A custom topic can be provided to use instead of the configured message bus output as well. Edgex 2.0 For EdgeX 2.0 the background publish operation takes a full AppContext instead of just the parameters used to create a message envelope. This allows the background publisher to leverage context-based topic formatting functionality as the trigger output. Example - Background Publisher func runJob ( service interfaces . ApplicationService , done chan struct {}){ ticker := time . NewTicker ( 1 * time . Minute ) //initialize background publisher with a channel capacity of 10 and a custom topic publisher , err := service . 
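Insecure-mode retrieval as described above amounts to a lookup in the Writable.InsecureSecrets map by path, returning either the requested keys or all of them. This toy getSecret mirrors the GetSecret(path, keys...) contract; it is a sketch of the documented behavior, not the SDK's code:

```go
package main

import "fmt"

// getSecret mimics insecure-mode secret retrieval: secrets live in a map
// keyed by path (as in [Writable.InsecureSecrets]); requesting no keys
// returns everything stored at that path.
func getSecret(insecureSecrets map[string]map[string]string, path string, keys ...string) (map[string]string, error) {
	all, found := insecureSecrets[path]
	if !found {
		return nil, fmt.Errorf("no secrets found at path %q", path)
	}
	if len(keys) == 0 {
		return all, nil // no keys requested: return all secrets at the path
	}
	result := make(map[string]string, len(keys))
	for _, k := range keys {
		v, ok := all[k]
		if !ok {
			return nil, fmt.Errorf("secret key %q not found at path %q", k, path)
		}
		result[k] = v
	}
	return result, nil
}

func main() {
	// Mirrors the [Writable.InsecureSecrets.AWS] example above.
	secrets := map[string]map[string]string{
		"aws": {"username": "aws-user", "password": "aws-pw"},
	}
	creds, err := getSecret(secrets, "aws", "username")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(creds["username"]) // aws-user
}
```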
AddBackgroundPublisherWithTopic ( 10 , \"custom-topic\" ) if err != nil { // do something } go func ( pub interfaces . BackgroundPublisher ) { for { select { case <- ticker . C : msg := myDataService . GetMessage () payload , err := json . Marshal ( msg ) if err != nil { //do something } ctx := service . BuildContext ( uuid . NewString (), common . ContentTypeJSON ) // modify context as needed err = pub . Publish ( payload , ctx ) if err != nil { //do something } case <- done : ticker . Stop () return } } }( publisher ) } func main () { service := pkg . NewAppService ( serviceKey ) done := make ( chan struct {}) defer close ( done ) //pass publisher to your background job runJob ( service , done ) service . SetFunctionsPipeline ( All , My , Functions , ) service . MakeItRun () os . Exit ( 0 ) }","title":"Background Publishing"},{"location":"microservices/application/AdvancedTopics/#stopping-the-service","text":"Application Services will listen for SIGTERM / SIGINT signals from the OS and stop the function pipeline in response. The pipeline can also be exited programmatically by calling sdk.MakeItStop() on the running ApplicationService instance. This can be useful for cases where you want to stop a service in response to a runtime condition, e.g. receiving a \"poison pill\" message through its trigger.","title":"Stopping the Service"},{"location":"microservices/application/AdvancedTopics/#received-topic","text":"EdgeX 2.0 Received Topic is new for EdgeX 2.0 When messages are received via the EdgeX MessageBus or External MQTT triggers, the topic that the data was received on is seeded into the new Context Storage on the AppFunctionContext with the key receivedtopic . This makes the Received Topic available to all functions in the pipeline. The SDK provides the interfaces.RECEIVEDTOPIC constant for this key.
See the Context Storage section for more details on extracting values.","title":"Received Topic"},{"location":"microservices/application/AdvancedTopics/#pipeline-per-topics","text":"EdgeX 2.1 Pipeline Per Topics is new for EdgeX 2.1 The Pipeline Per Topics feature allows for multiple function pipelines to be defined. Each will execute only when one of the specified pipeline topics matches the received topic. The pipeline topics can have wildcards ( # ) allowing the topic to match a variety of received topics. Each pipeline has its own set of functions (transforms) that are executed on the received message. If the # wildcard is used by itself for a pipeline topic, it will match all received topics and the specified function pipeline will execute on every message received. Note The Pipeline Per Topics feature is targeted for EdgeX MessageBus and External MQTT triggers, but can be used with Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank , so the pipeline's topics must contain a single topic set to the # wildcard so that all messages received are processed by the pipeline. Example pipeline topics with wildcards \"#\" - Matches all messages received \"edgex/events/#\" - Matches all messages received with the base topic `edgex/events/` \"edgex/events/core/#\" - Matches all messages received just from Core Data \"edgex/events/device/#\" - Matches all messages received just from Device services \"edgex/events/#/my-profile/#\" - Matches all messages received from Core Data or Device services for `my-profile` \"edgex/events/#/#/my-device/#\" - Matches all messages received from Core Data or Device services for `my-device` \"edgex/events/#/#/#/my-source\" - Matches all messages received from Core Data or Device services for `my-source` Refer to the Filter By Topics section for details on the structure of the received topic. All pipeline function capabilities such as Store and Forward, Batching, etc.
can be used with one or more of the multiple function pipelines. Store and Forward uses the Pipeline's ID to find and restart the pipeline on retries. Example - Adding multiple function pipelines This example adds two pipelines. One to process data from the Random-Float-Device device and one to process data from the Int32 and Int64 sources. sample := functions . NewSample () err = service . AddFunctionsPipelineForTopics ( \"Floats-Pipeline\" , [] string { \"edgex/events/#/#/Random-Float-Device/#\" }, transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , sample . LogEventDetails , sample . ConvertEventToXML , sample . OutputXML ) if err != nil { ... return - 1 } err = app . service . AddFunctionsPipelineForTopics ( \"Int32-Pipeline\" , [] string { \"edgex/events/#/#/#/Int32\" , \"edgex/events/#/#/#/Int64\" }, transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , sample . LogEventDetails , sample . ConvertEventToXML , sample . OutputXML ) if err != nil { ... return - 1 }","title":"Pipeline Per Topics"},{"location":"microservices/application/AppFunctionContextAPI/","text":"App Function Context API The context parameter passed to each function/transform provides operations and data associated with each execution of the pipeline. EdgeX 2.0 For EdgeX 2.0 the AppFunctionContext API replaces the direct access to the appcontext.Context struct. Let's take a look at its API: type AppFunctionContext interface { CorrelationID () string InputContentType () string SetResponseData ( data [] byte ) ResponseData () [] byte SetResponseContentType ( string ) ResponseContentType () string SetRetryData ( data [] byte ) GetSecret ( path string , keys ... string ) ( map [ string ] string , error ) SecretsLastUpdated () time . Time LoggingClient () logger . LoggingClient EventClient () interfaces . EventClient CommandClient () interfaces . CommandClient NotificationClient () interfaces . NotificationClient SubscriptionClient () interfaces .
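One plausible reading of the wildcard semantics in the Pipeline Per Topics examples is that an interior # matches exactly one topic level while a trailing # matches everything that remains. The matcher below sketches that interpretation; it is an illustration of the rules as described, not the SDK's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// topicMatches reports whether a received topic matches a pipeline topic:
// an interior "#" matches exactly one level, and a trailing "#" matches
// all remaining levels (so "#" by itself matches everything).
func topicMatches(pipelineTopic, receivedTopic string) bool {
	pattern := strings.Split(pipelineTopic, "/")
	levels := strings.Split(receivedTopic, "/")
	for i, p := range pattern {
		if p == "#" && i == len(pattern)-1 {
			return true // trailing wildcard: the rest always matches
		}
		if i >= len(levels) {
			return false // received topic has too few levels
		}
		if p != "#" && p != levels[i] {
			return false // literal level mismatch
		}
	}
	return len(pattern) == len(levels)
}

func main() {
	received := "edgex/events/device/my-profile/my-device/Int32"
	fmt.Println(topicMatches("#", received))                            // true
	fmt.Println(topicMatches("edgex/events/#/#/my-device/#", received)) // true
	fmt.Println(topicMatches("edgex/events/core/#", received))          // false
}
```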
SubscriptionClient DeviceServiceClient () interfaces . DeviceServiceClient DeviceProfileClient () interfaces . DeviceProfileClient DeviceClient () interfaces . DeviceClient PushToCore ( event dtos . Event ) ( common . BaseWithIdResponse , error ) GetDeviceResource ( profileName string , resourceName string ) ( dtos . DeviceResource , error ) AddValue ( key string , value string ) RemoveValue ( key string ) GetValue ( key string ) ( string , bool ) GetAllValues () map [ string ] string ApplyValues ( format string ) ( string , error ) PipelineId () string } Response Data SetResponseData SetResponseData(data []byte) This API sets the response data that will be returned to the trigger when pipeline execution is complete. ResponseData ResponseData() This API returns the data that will be returned to the trigger when pipeline execution is complete. SetResponseContentType SetResponseContentType(string) This API sets the content type that will be returned to the trigger when pipeline execution is complete. ResponseContentType ResponseContentType() This API returns the content type that will be returned to the trigger when pipeline execution is complete. Clients LoggingClient LoggingClient() logger.LoggingClient Returns a LoggingClient to leverage logging libraries/service utilized throughout the EdgeX framework. The SDK has initialized everything so it can be used to log Trace , Debug , Warn , Info , and Error messages as appropriate. Example - LoggingClient ctx . LoggingClient (). Info ( \"Hello World\" ) c . LoggingClient (). Errorf ( \"Some error occurred: %w\" , err ) EventClient EventClient() interfaces.EventClient Returns an EventClient to leverage Core Data's Event API. See interface definition for more details. This client is useful for querying events and is used by the PushToCore convenience API described below. Note if Core Data is not specified in the Clients configuration, this will return nil. 
CommandClient CommandClient() interfaces.CommandClient Returns a CommandClient to leverage Core Command's Command API. See interface definition for more details. Useful for sending commands to devices. Note if Core Command is not specified in the Clients configuration, this will return nil. NotificationClient NotificationClient() interfaces.NotificationClient Returns a NotificationClient to leverage Support Notifications' Notifications API. See interface definition for more details. Useful for sending notifications. Note if Support Notifications is not specified in the Clients configuration, this will return nil. SubscriptionClient SubscriptionClient() interfaces.SubscriptionClient Returns a SubscriptionClient to leverage Support Notifications' Subscription API. See interface definition for more details. Useful for creating notification subscriptions. Note if Support Notifications is not specified in the Clients configuration, this will return nil. DeviceServiceClient DeviceServiceClient() interfaces.DeviceServiceClient Returns a DeviceServiceClient to leverage Core Metadata's DeviceService API. See interface definition for more details. Useful for querying information about Device Services. Note if Core Metadata is not specified in the Clients configuration, this will return nil. DeviceProfileClient DeviceProfileClient() interfaces.DeviceProfileClient Returns a DeviceProfileClient to leverage Core Metadata's DeviceProfile API. See interface definition for more details. Useful for querying information about Device Profiles and is used by the GetDeviceResource helper function below. Note if Core Metadata is not specified in the Clients configuration, this will return nil. DeviceClient DeviceClient() interfaces.DeviceClient Returns a DeviceClient to leverage Core Metadata's Device API. See interface definition for more details. Useful for querying information about Devices. Note if Core Metadata is not specified in the Clients configuration, this will return nil. 
Note about Clients Each of the clients above is only initialized if the Clients section of the configuration contains an entry for the service associated with the Client API. If it isn't in the configuration, the client will be nil . Your code must check for nil to avoid a panic in case it is missing from the configuration. Only add the clients to your configuration that your Application Service will actually be using. All application services need Core-Data for the version compatibility check done on start-up. The following is an example Clients section of a configuration.toml with all supported clients specified: Example - Client Configuration Section [Clients] [Clients.core-data] Protocol = 'http' Host = 'localhost' Port = 59880 [Clients.core-metadata] Protocol = 'http' Host = 'localhost' Port = 59881 [Clients.core-command] Protocol = 'http' Host = 'localhost' Port = 59882 [Clients.support-notifications] Protocol = 'http' Host = 'localhost' Port = 59860 Context Storage The context API exposes a map-like interface that can be used to store custom data specific to a given pipeline execution. This data is persisted for retry if needed. Currently only strings are supported, and keys are treated as case-insensitive. The following values are seeded into the Context Storage when an Event is received: Profile Name (key to retrieve value is interfaces.PROFILENAME ) Device Name (key to retrieve value is interfaces.DEVICENAME ) Source Name (key to retrieve value is interfaces.SOURCENAME ) Received Topic (key to retrieve value is interfaces.RECEIVEDTOPIC ) Note Received Topic is only available when the message was received from the EdgeX MessageBus or External MQTT triggers.
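The map-like, case-insensitive Context Storage described above can be sketched as a small type. This toy stands in for the real AppFunctionContext methods (string values only, keys compared case-insensitively, as documented):

```go
package main

import (
	"fmt"
	"strings"
)

// contextStorage sketches the Context Storage API: a string-to-string map
// whose keys are treated as case-insensitive.
type contextStorage struct{ values map[string]string }

func newContextStorage() *contextStorage {
	return &contextStorage{values: map[string]string{}}
}

func (c *contextStorage) AddValue(key, value string) { c.values[strings.ToLower(key)] = value }
func (c *contextStorage) RemoveValue(key string)     { delete(c.values, strings.ToLower(key)) }
func (c *contextStorage) GetValue(key string) (string, bool) {
	v, ok := c.values[strings.ToLower(key)]
	return v, ok
}

func main() {
	ctx := newContextStorage()
	// The trigger would seed this; "receivedtopic" matches the documented key.
	ctx.AddValue("receivedtopic", "edgex/events/core/my-profile/my-device/Int32")

	topic, found := ctx.GetValue("ReceivedTopic") // case-insensitive lookup
	fmt.Println(found, topic)
}
```

In a real pipeline function the same lookup would be `ctx.GetValue(interfaces.RECEIVEDTOPIC)` on the AppFunctionContext.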
Storage can be accessed using the following methods: AddValue AddValue(key string, value string) This API stores a value for access within a pipeline execution RemoveValue RemoveValue(key string) This API deletes a value stored in the context at the given key GetValue GetValue(key string) (string, bool) This API attempts to retrieve a value stored in the context at the given key GetAllValues GetAllValues() map[string]string This API returns a read-only copy of all data stored in the context ApplyValues ApplyValues(format string) (string, error) This API will replace placeholders of the form {context-key-name} with the value found in the context at context-key-name . Note that key matching is case insensitive. An error will be returned if any placeholders in the provided string do NOT have a corresponding entry in the context storage map. Secrets GetSecret GetSecret(path string, keys ...string) This API is used to retrieve secrets from the secret store. path specifies the type or location of the secrets to retrieve. If specified, it is appended to the base path from the exclusive secret store configuration. keys specifies the list of secrets to be retrieved. If no keys are provided then all the keys associated with the specified path will be returned. SecretsLastUpdated SecretsLastUpdated() This API returns the timestamp for when the secrets in the SecretStore were last updated. Useful when a connection to an external source needs to be redone when the credentials have been updated. Miscellaneous CorrelationID() CorrelationID() This API returns the ID used to track the EdgeX event through the entire EdgeX framework. PipelineId PipelineId() string This API returns the ID of the pipeline currently executing. Useful when logging messages from pipeline functions so the messages contain the ID of the pipeline that executed the pipeline function. InputContentType() InputContentType() This API returns the content type of the data that initiated the pipeline execution.
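The ApplyValues contract described above (replace {context-key-name} placeholders case-insensitively, error on any placeholder with no matching entry) can be sketched as follows. This is an illustrative implementation of the documented behavior, not the SDK's code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var placeholderRe = regexp.MustCompile(`\{([^{}]+)\}`)

// applyValues replaces {context-key-name} placeholders with values from the
// context map; key matching is case-insensitive, and any placeholder without
// a corresponding entry is an error.
func applyValues(contextValues map[string]string, format string) (string, error) {
	// Index the values by lower-cased key for case-insensitive lookup.
	lower := make(map[string]string, len(contextValues))
	for k, v := range contextValues {
		lower[strings.ToLower(k)] = v
	}
	var missing []string
	result := placeholderRe.ReplaceAllStringFunc(format, func(ph string) string {
		key := strings.ToLower(ph[1 : len(ph)-1]) // strip the braces
		v, ok := lower[key]
		if !ok {
			missing = append(missing, ph)
			return ph
		}
		return v
	})
	if len(missing) > 0 {
		return "", fmt.Errorf("no context entries for placeholders: %s", strings.Join(missing, ", "))
	}
	return result, nil
}

func main() {
	// Keys as the trigger might seed them (profile and device names).
	values := map[string]string{"profilename": "my-profile", "devicename": "my-device"}
	out, err := applyValues(values, "edgex/export/{ProfileName}/{DeviceName}")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(out) // edgex/export/my-profile/my-device
}
```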
Only useful when the TargetType for the pipeline is []byte, otherwise the data will be the type specified by TargetType. GetDeviceResource() GetDeviceResource(profileName string, resourceName string) (dtos.DeviceResource, error) This API retrieves the DeviceResource for the given profile / resource name. Results are cached to minimize HTTP traffic to core-metadata. PushToCore() PushToCore(event dtos.Event) This API is used to push data to EdgeX Core Data so that it can be shared with other applications that are subscribed to the message bus that core-data publishes to. This function will return the new EdgeX Event with the ID populated, along with any error encountered. Note that CorrelationId will not be available. Note If validation is turned on in CoreServices then your deviceName and readingName must exist in the CoreMetadata and be properly registered in EdgeX. Warning Be aware that without a filter in your pipeline, it is possible to create an infinite loop when the Message Bus trigger is used. Choose your device-name and reading name appropriately. SetRetryData() SetRetryData(data []byte) This method can be used to store data for later retry. This is useful when creating a custom export function that needs to retry on failure. The payload data will be stored for later retry based on Store and Forward configuration. When the retry is triggered, the function pipeline will be re-executed starting with the function that called this API. That function will be passed the stored data, so it is important that all transformations occur in functions prior to the export function. The Context will also be restored to the state when the function called this API. See Store and Forward for more details. 
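The caching that GetDeviceResource performs (serving repeat lookups for the same profile/resource pair from memory instead of core-metadata) can be illustrated with a toy cache. The types and the fetch function here are stand-ins, not the SDK's internals:

```go
package main

import "fmt"

// deviceResource is a stand-in for dtos.DeviceResource.
type deviceResource struct{ Name string }

// resourceCache sketches the caching behavior: the first lookup for a
// profile/resource pair calls the fetch function (which would hit
// core-metadata over HTTP), later lookups are served from memory.
type resourceCache struct {
	cache map[string]deviceResource
	fetch func(profile, resource string) (deviceResource, error)
	calls int // number of fetches actually performed
}

func (r *resourceCache) GetDeviceResource(profile, resource string) (deviceResource, error) {
	key := profile + "/" + resource
	if dr, ok := r.cache[key]; ok {
		return dr, nil // cache hit: no HTTP traffic
	}
	r.calls++
	dr, err := r.fetch(profile, resource)
	if err != nil {
		return deviceResource{}, err
	}
	r.cache[key] = dr
	return dr, nil
}

func main() {
	rc := &resourceCache{
		cache: map[string]deviceResource{},
		fetch: func(p, res string) (deviceResource, error) { return deviceResource{Name: res}, nil },
	}
	rc.GetDeviceResource("my-profile", "Int32")
	rc.GetDeviceResource("my-profile", "Int32") // served from cache
	fmt.Println(rc.calls)                       // 1
}
```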
Note Store and Forward must be enabled when calling this API, otherwise the data is ignored.","title":"App Function Context API"},{"location":"microservices/application/AppFunctionContextAPI/#app-function-context-api","text":"The context parameter passed to each function/transform provides operations and data associated with each execution of the pipeline. EdgeX 2.0 For EdgeX 2.0 the AppFunctionContext API replaces the direct access to the appcontext.Context struct. Let's take a look at its API: type AppFunctionContext interface { CorrelationID () string InputContentType () string SetResponseData ( data [] byte ) ResponseData () [] byte SetResponseContentType ( string ) ResponseContentType () string SetRetryData ( data [] byte ) GetSecret ( path string , keys ... string ) ( map [ string ] string , error ) SecretsLastUpdated () time . Time LoggingClient () logger . LoggingClient EventClient () interfaces . EventClient CommandClient () interfaces . CommandClient NotificationClient () interfaces . NotificationClient SubscriptionClient () interfaces . SubscriptionClient DeviceServiceClient () interfaces . DeviceServiceClient DeviceProfileClient () interfaces . DeviceProfileClient DeviceClient () interfaces . DeviceClient PushToCore ( event dtos . Event ) ( common . BaseWithIdResponse , error ) GetDeviceResource ( profileName string , resourceName string ) ( dtos .
DeviceResource , error ) AddValue ( key string , value string ) RemoveValue ( key string ) GetValue ( key string ) ( string , bool ) GetAllValues () map [ string ] string ApplyValues ( format string ) ( string , error ) PipelineId () string }","title":"App Function Context API"},{"location":"microservices/application/AppFunctionContextAPI/#response-data","text":"","title":"Response Data"},{"location":"microservices/application/AppFunctionContextAPI/#setresponsedata","text":"SetResponseData(data []byte) This API sets the response data that will be returned to the trigger when pipeline execution is complete.","title":"SetResponseData"},{"location":"microservices/application/AppFunctionContextAPI/#responsedata","text":"ResponseData() This API returns the data that will be returned to the trigger when pipeline execution is complete.","title":"ResponseData"},{"location":"microservices/application/AppFunctionContextAPI/#setresponsecontenttype","text":"SetResponseContentType(string) This API sets the content type that will be returned to the trigger when pipeline execution is complete.","title":"SetResponseContentType"},{"location":"microservices/application/AppFunctionContextAPI/#responsecontenttype","text":"ResponseContentType() This API returns the content type that will be returned to the trigger when pipeline execution is complete.","title":"ResponseContentType"},{"location":"microservices/application/AppFunctionContextAPI/#clients","text":"","title":"Clients"},{"location":"microservices/application/AppFunctionContextAPI/#loggingclient","text":"LoggingClient() logger.LoggingClient Returns a LoggingClient to leverage logging libraries/service utilized throughout the EdgeX framework. The SDK has initialized everything so it can be used to log Trace , Debug , Warn , Info , and Error messages as appropriate. Example - LoggingClient ctx . LoggingClient (). Info ( \"Hello World\" ) c . LoggingClient (). 
Errorf ( \"Some error occurred: %w\" , err )","title":"LoggingClient"},{"location":"microservices/application/AppFunctionContextAPI/#eventclient","text":"EventClient() interfaces.EventClient Returns an EventClient to leverage Core Data's Event API. See interface definition for more details. This client is useful for querying events and is used by the PushToCore convenience API described below. Note if Core Data is not specified in the Clients configuration, this will return nil.","title":"EventClient"},{"location":"microservices/application/AppFunctionContextAPI/#commandclient","text":"CommandClient() interfaces.CommandClient Returns a CommandClient to leverage Core Command's Command API. See interface definition for more details. Useful for sending commands to devices. Note if Core Command is not specified in the Clients configuration, this will return nil.","title":"CommandClient"},{"location":"microservices/application/AppFunctionContextAPI/#notificationclient","text":"NotificationClient() interfaces.NotificationClient Returns a NotificationClient to leverage Support Notifications' Notifications API. See interface definition for more details. Useful for sending notifications. Note if Support Notifications is not specified in the Clients configuration, this will return nil.","title":"NotificationClient"},{"location":"microservices/application/AppFunctionContextAPI/#subscriptionclient","text":"SubscriptionClient() interfaces.SubscriptionClient Returns a SubscriptionClient to leverage Support Notifications' Subscription API. See interface definition for more details. Useful for creating notification subscriptions. Note if Support Notifications is not specified in the Clients configuration, this will return nil.","title":"SubscriptionClient"},{"location":"microservices/application/AppFunctionContextAPI/#deviceserviceclient","text":"DeviceServiceClient() interfaces.DeviceServiceClient Returns a DeviceServiceClient to leverage Core Metadata's DeviceService API. 
See interface definition for more details. Useful for querying information about Device Services. Note if Core Metadata is not specified in the Clients configuration, this will return nil.","title":"DeviceServiceClient"},{"location":"microservices/application/AppFunctionContextAPI/#deviceprofileclient","text":"DeviceProfileClient() interfaces.DeviceProfileClient Returns a DeviceProfileClient to leverage Core Metadata's DeviceProfile API. See interface definition for more details. Useful for querying information about Device Profiles and is used by the GetDeviceResource helper function below. Note if Core Metadata is not specified in the Clients configuration, this will return nil.","title":"DeviceProfileClient"},{"location":"microservices/application/AppFunctionContextAPI/#deviceclient","text":"DeviceClient() interfaces.DeviceClient Returns a DeviceClient to leverage Core Metadata's Device API. See interface definition for more details. Useful for querying information about Devices. Note if Core Metadata is not specified in the Clients configuration, this will return nil.","title":"DeviceClient"},{"location":"microservices/application/AppFunctionContextAPI/#note-about-clients","text":"Each of the clients above is only initialized if the Clients section of the configuration contains an entry for the service associated with the Client API. If it isn't in the configuration the client will be nil . Your code must check for nil to avoid panic in case it is missing from the configuration. Only add the clients to your configuration that your Application Service will actually be using. All application services need Core-Data for version compatibility check done on start-up. 
The following is an example Clients section of a configuration.toml with all supported clients specified: Example - Client Configuration Section [Clients] [Clients.core-data] Protocol = 'http' Host = 'localhost' Port = 59880 [Clients.core-metadata] Protocol = 'http' Host = 'localhost' Port = 59881 [Clients.core-command] Protocol = 'http' Host = 'localhost' Port = 59882 [Clients.support-notifications] Protocol = 'http' Host = 'localhost' Port = 59860","title":"Note about Clients"},{"location":"microservices/application/AppFunctionContextAPI/#context-storage","text":"The context API exposes a map-like interface that can be used to store custom data specific to a given pipeline execution. This data is persisted for retry if needed. Currently only strings are supported, and keys are treated as case-insensitive. The following values are seeded into the Context Storage when an Event is received: Profile Name (key to retrieve value is interfaces.PROFILENAME ) Device Name (key to retrieve value is interfaces.DEVICENAME ) Source Name (key to retrieve value is interfaces.SOURCENAME ) Received Topic (key to retrieve value is interfaces.RECEIVEDTOPIC ) Note Received Topic is only available when the message was received from the EdgeX MessageBus or External MQTT triggers.
Storage can be accessed using the following methods:","title":"Context Storage"},{"location":"microservices/application/AppFunctionContextAPI/#addvalue","text":"AddValue(key string, value string) This API stores a value for access within a pipeline execution","title":"AddValue"},{"location":"microservices/application/AppFunctionContextAPI/#removevalue","text":"RemoveValue(key string) This API deletes a value stored in the context at the given key","title":"RemoveValue"},{"location":"microservices/application/AppFunctionContextAPI/#getvalue","text":"GetValue(key string) (string, bool) This API attempts to retrieve a value stored in the context at the given key","title":"GetValue"},{"location":"microservices/application/AppFunctionContextAPI/#getallvalues","text":"GetAllValues() map[string]string This API returns a read-only copy of all data stored in the context","title":"GetAllValues"},{"location":"microservices/application/AppFunctionContextAPI/#applyvalues","text":"ApplyValues(format string) (string, error) This API will replace placeholders of the form {context-key-name} with the value found in the context at context-key-name . Note that key matching is case insensitive. An error will be returned if any placeholders in the provided string do NOT have a corresponding entry in the context storage map.","title":"ApplyValues"},{"location":"microservices/application/AppFunctionContextAPI/#secrets","text":"","title":"Secrets"},{"location":"microservices/application/AppFunctionContextAPI/#getsecret","text":"GetSecret(path string, keys ...string) This API is used to retrieve secrets from the secret store. path specifies the type or location of the secrets to retrieve. If specified, it is appended to the base path from the exclusive secret store configuration. keys specifies the list of secrets to be retrieved. 
If no keys are provided then all the keys associated with the specified path will be returned.","title":"GetSecret"},{"location":"microservices/application/AppFunctionContextAPI/#secretslastupdated","text":"SecretsLastUpdated() This API returns the timestamp for when the secrets in the SecretStore were last updated. Useful when a connection to an external source needs to be redone when the credentials have been updated.","title":"SecretsLastUpdated"},{"location":"microservices/application/AppFunctionContextAPI/#miscellaneous","text":"","title":"Miscellaneous"},{"location":"microservices/application/AppFunctionContextAPI/#correlationid","text":"CorrelationID() This API returns the ID used to track the EdgeX event through the entire EdgeX framework.","title":"CorrelationID()"},{"location":"microservices/application/AppFunctionContextAPI/#pipelineid","text":"PipelineId() string This API returns the ID of the pipeline currently executing. Useful when logging messages from pipeline functions so the messages contain the ID of the pipeline that executed the pipeline function.","title":"PipelineId"},{"location":"microservices/application/AppFunctionContextAPI/#inputcontenttype","text":"InputContentType() This API returns the content type of the data that initiated the pipeline execution. Only useful when the TargetType for the pipeline is []byte, otherwise the data will be the type specified by TargetType.","title":"InputContentType()"},{"location":"microservices/application/AppFunctionContextAPI/#getdeviceresource","text":"GetDeviceResource(profileName string, resourceName string) (dtos.DeviceResource, error) This API retrieves the DeviceResource for the given profile / resource name.
Results are cached to minimize HTTP traffic to core-metadata.","title":"GetDeviceResource()"},{"location":"microservices/application/AppFunctionContextAPI/#pushtocore","text":"PushToCore(event dtos.Event) This API is used to push data to EdgeX Core Data so that it can be shared with other applications that are subscribed to the message bus that core-data publishes to. This function will return the new EdgeX Event with the ID populated, along with any error encountered. Note that CorrelationId will not be available. Note If validation is turned on in CoreServices then your deviceName and readingName must exist in the CoreMetadata and be properly registered in EdgeX. Warning Be aware that without a filter in your pipeline, it is possible to create an infinite loop when the Message Bus trigger is used. Choose your device-name and reading name appropriately.","title":"PushToCore()"},{"location":"microservices/application/AppFunctionContextAPI/#setretrydata","text":"SetRetryData(data []byte) This method can be used to store data for later retry. This is useful when creating a custom export function that needs to retry on failure. The payload data will be stored for later retry based on the Store and Forward configuration. When the retry is triggered, the function pipeline will be re-executed starting with the function that called this API. That function will be passed the stored data, so it is important that all transformations occur in functions prior to the export function. The Context will also be restored to the state when the function called this API. See Store and Forward for more details. Note Store and Forward must be enabled when calling this API, otherwise the data is ignored.","title":"SetRetryData()"},{"location":"microservices/application/AppServiceConfigurable/","text":"App Service Configurable Getting Started App-Service-Configurable is provided as an easy way to get started with processing data flowing through EdgeX. 
This service leverages the App Functions SDK and provides a way for developers to use configuration instead of having to compile standalone services to utilize built-in functions in the SDK. Please refer to the Available Configurable Pipeline Functions section below for a full list of built-in functions that can be used in the configurable pipeline. To get started with App Service Configurable, you'll want to start by determining which functions are required in your pipeline. Using a simple example, let's assume you wish to use the following functions from the SDK: FilterByDeviceName - to filter events for a specific device. Transform - to transform the data to XML HTTPExport - to send the data to an HTTP endpoint that takes our XML data Once the functions have been identified, we'll go ahead and build out the configuration in the configuration.toml file under the [Writable.Pipeline] section. Example - Writable.Pipeline [Writable] LogLevel = \"DEBUG\" [Writable.Pipeline] ExecutionOrder = \"FilterByDeviceName, Transform, HTTPExport\" [Writable.Pipeline.Functions] [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] FilterValues = \"Random-Float-Device, Random-Integer-Device\" [Writable.Pipeline.Functions.Transform] [Writable.Pipeline.Functions.Transform.Parameters] Type = \"xml\" [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api.net/edgexdata\" The first line of note is ExecutionOrder = \"FilterByDeviceName, Transform, HTTPExport\" . This specifies the order in which to execute your functions. Each function specified here must also be placed in the [Writable.Pipeline.Functions] section. Next, each function and its required information is listed. 
Each function typically has associated Parameters that must be configured to properly execute the function as designated by [Writable.Pipeline.Functions.{FunctionName}.Parameters] . The parameters required for each function are described in the Available Configurable Pipeline Functions section below. Note By default, the configuration provided is set to use EdgexMessageBus as a trigger. This means you must have EdgeX running with devices sending data in order to trigger the pipeline. You can also change the trigger to be HTTP. For more details on triggers, view the Triggers documentation located in the Triggers section. That's it! Now we can run/deploy this service and the functions pipeline will process the data with functions we've defined. Pipeline Per Topics EdgeX 2.1 Pipeline Per Topics is new for EdgeX 2.1 The above pipeline configuration in the Getting Started section is the preferred way if your use case only requires a single functions pipeline. For use cases that require multiple functions pipelines in order to process the data differently based on the profile , device or source for the Event, there is the Pipeline Per Topics feature. This feature allows multiple pipelines to be configured in the [Writable.Pipeline.PerTopicPipelines] section. This section is a map of pipelines. The map key must be unique, but isn't otherwise used, so it can be any value. Each pipeline is defined by the following configuration settings: Id - This is the unique ID given to each pipeline Topics - Comma separated list of topics that control when the pipeline is executed. See Pipeline Per Topics for details on using wildcards in the topic. ExecutionOrder - This is the list of functions, in order, that the pipeline will execute. Same as ExecutionOrder in the above example in the Getting Started section Example - Writable.Pipeline.PerTopicPipelines In this example Events from the device Random-Float-Device are transformed to JSON and then HTTP exported. 
At the same time, Events for the source Int8 are transformed to XML and then HTTP exported to the same endpoint. Note the custom naming for TransformJson and TransformXml . This is taking advantage of the Multiple Instances of a Function described below. [Writable] LogLevel = \"DEBUG\" [Writable.Pipeline] [Writable.Pipeline.PerTopicPipelines] [Writable.Pipeline.PerTopicPipelines.float] Id = \"float-pipeline\" Topics = \"edgex/events/device/#/Random-Float-Device/#, edgex/events/device/#/Random-Integer-Device/#\" ExecutionOrder = \"TransformJson, HTTPExport\" [Writable.Pipeline.PerTopicPipelines.int8] Id = \"int8-pipeline\" Topics = \"edgex/events/device/#/#/Int8\" ExecutionOrder = \"TransformXml, HTTPExport\" [Writable.Pipeline.Functions] [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] FilterValues = \"Random-Float-Device, Random-Integer-Device\" [Writable.Pipeline.Functions.TransformJson] [Writable.Pipeline.Functions.TransformJson.Parameters] Type = \"json\" [Writable.Pipeline.Functions.TransformXml] [Writable.Pipeline.Functions.TransformXml.Parameters] Type = \"xml\" [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api.net/edgexdata\" Note The Pipeline Per Topics feature is targeted for EdgeX MessageBus and External MQTT triggers, but can be used with Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank , so the pipeline's topics must contain a single topic set to the # wildcard so that all messages received are processed by the pipeline. Environment Variable Overrides For Docker EdgeX services no longer have docker specific profiles. They now rely on environment variable overrides in the docker compose files for the docker specific differences. 
Example - Environment settings required in the compose files for App Service Configurable EDGEX_PROFILE : [ target profile ] SERVICE_HOST : [ services network host name ] EDGEX_SECURITY_SECRET_STORE : \"false\" # only need to disable as default is true CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis Example - Docker compose entry for App Service Configurable in no-secure compose file app-service-rules : container_name : edgex-app-rules-engine depends_on : - consul - data environment : CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis EDGEX_PROFILE : rules-engine EDGEX_SECURITY_SECRET_STORE : \"false\" MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul SERVICE_HOST : edgex-app-rules-engine TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis hostname : edgex-app-rules-engine image : edgexfoundry/app-service-configurable:2.0.0 networks : edgex-network : {} ports : - 127.0.0.1:59701:59701/tcp read_only : true security_opt : - no-new-privileges:true user : 2002:2001 Note App Service Configurable is designed to be run multiple times each with different profiles. This is why in the above example the name edgex-app-rules-engine is used for the instance running the rules-engine profile. 
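The override names above follow a simple convention: take the TOML configuration path, uppercase it, and replace the '.' path separators and any '-' characters with '_' (so Clients.core-data.Host becomes CLIENTS_CORE_DATA_HOST). As a rough standalone sketch of that naming rule inferred from the pairs shown in this section (the helper name toEnvOverride is our own illustration, not part of the SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// toEnvOverride converts a TOML configuration path to the environment
// variable override name used in the compose files: uppercase, with
// '.' path separators and '-' characters replaced by '_'.
func toEnvOverride(configPath string) string {
	r := strings.NewReplacer(".", "_", "-", "_")
	return strings.ToUpper(r.Replace(configPath))
}

func main() {
	fmt.Println(toEnvOverride("Writable.Pipeline.ExecutionOrder")) // WRITABLE_PIPELINE_EXECUTIONORDER
	fmt.Println(toEnvOverride("Clients.core-data.Host"))           // CLIENTS_CORE_DATA_HOST
	fmt.Println(toEnvOverride("Service.Host"))                     // SERVICE_HOST
}
```

Working backwards with the same rule is how you find the TOML setting a compose override targets.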
Deploying Multiple Instances using profiles App Service Configurable was designed to be deployed as multiple instances for different purposes. Since the function pipeline is specified in the configuration.toml file, we can use this as a way to run each instance with a different function pipeline. App Service Configurable does not have the standard default configuration at /res/configuration.toml . This default configuration has been moved to the sample profile. This forces you to specify the profile for the configuration you would like to run. The profile is specified using the -p/--profile=[profilename] command line option or the EDGEX_PROFILE=[profilename] environment variable override. The profile name selected is used in the service key ( app-[profile name] ) to make each instance unique, e.g. app-sample when specifying sample as the profile. EdgeX 2.0 The default service key for App Service Configurable instances has changed in EdgeX 2.0 from AppService-[profile name] to app-[profile name] Note If you need to run multiple instances with the same profile, e.g. http-export , but configured differently, you will need to override the service key with a custom name for one or more of the services. This is done with the -sk/--serviceKey command-line option or the EDGEX_SERVICE_KEY environment variable. See the Command-line Options and Environment Overrides sections for more detail. The following profiles and their purposes are provided with App Service Configurable. rules-engine - Profile used to push Event messages to the Rules Engine via the Redis Pub/Sub Message Bus. This is used in the default docker compose files for the app-rules-engine service. One can optionally add the Filter function via environment overrides WRITABLE_PIPELINE_EXECUTIONORDER: \"FilterByDeviceName, HTTPExport\" WRITABLE_PIPELINE_FUNCTIONS_FILTERBYDEVICENAME_PARAMETERS_DEVICENAMES: \"[comma separated list]\" There are many optional functions and parameters provided in this profile. 
See the complete profile for more details http-export - Starter profile used for exporting data via HTTP. Requires further configuration which can easily be accomplished using environment variable overrides Required: WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: [Your URL] There are many more optional functions and parameters provided in this profile. See the complete profile for more details. mqtt-export - Starter profile used for exporting data via MQTT. Requires further configuration which can easily be accomplished using environment variable overrides Required: WRITABLE_PIPELINE_FUNCTIONS_MQTTSEND_ADDRESSABLE_ADDRESS: [Your Broker Address] There are many optional functions and parameters provided in this profile. See the complete profile for more details push-to-core - Example profile demonstrating how to use the PushToCore function. Provided as an example that can be copied and modified to create a new custom profile. See the complete profile for more details Requires further configuration which can easily be accomplished using environment variable overrides Required: WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_PROFILENAME: [Your Event's profile name] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_DEVICENAME: [Your Event's device name] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_SOURCENAME: [Your Event's source name] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_RESOURCENAME: [Your Event reading's resource name] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_VALUETYPE: [Your Event reading's value type] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_MEDIATYPE: [Your Event binary reading's media type] Required only when ValueType is Binary sample - Sample profile with all available functions declared and a sample pipeline. Provided as a sample that can be copied and modified to create new custom profiles. 
See the complete profile for more details functional-tests - Profile used for the TAF functional testing Note Functions can be declared in a profile but not used in the pipeline ExecutionOrder allowing them to be added to the pipeline ExecutionOrder later at runtime if needed. What if my input data isn't an EdgeX Event ? The default TargetType for data flowing into the functions pipeline is an EdgeX Event DTO. There are cases when this incoming data might not be an EdgeX Event DTO. In these cases the Pipeline can be configured using UseTargetTypeOfByteArray=true to set the TargetType to be a byte array/slice, i.e. []byte . The first function in the pipeline must then be one that can handle the []byte data. The compression , encryption and export functions are examples of pipeline functions that will take input data that is []byte . Example - Configure the functions pipeline to compress , encrypt and then export the []byte data via HTTP [Writable] LogLevel = \"DEBUG\" [Writable.Pipeline] UseTargetTypeOfByteArray = true ExecutionOrder = \"Compress, Encrypt, HTTPExport\" [Writable.Pipeline.Functions.Compress] [Writable.Pipeline.Functions.Compress.Parameters] Algorithm = \"gzip\" [Writable.Pipeline.Functions.Encrypt] [Writable.Pipeline.Functions.Encrypt.Parameters] Algorithm = \"aes\" Key = \"aquqweoruqwpeoruqwpoeruqwpoierupqoweiurpoqwiuerpqowieurqpowieurpoqiweuroipwqure\" InitVector = \"123456789012345678901234567890\" [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" Url = \"http://my.api.net/edgexdata\" MimeType = \"application/text\" If along with this pipeline configuration, you also configured the Trigger to be the http trigger, you could then send any data to the app-service-configurable's /api/v2/trigger endpoint and have it compressed, encrypted and sent to your configured URL above. 
Example - HTTP Trigger configuration [Trigger] Type = \"http\" Multiple Instances of a Function EdgeX 2.0 New for EdgeX 2.0 Now multiple instances of the same configurable pipeline function can be specified, configured differently and used together in the functions pipeline. Previously the function names specified in the [Writable.Pipeline.Functions] section had to match a built-in configurable pipeline function name exactly. Now the names specified only need to start with a built-in configurable pipeline function name. See the HttpExport section below for an example. Available Configurable Pipeline Functions Below are the functions that are available to use in the configurable functions pipeline ( [Writable.Pipeline] ) section of the configuration. The function names below can be added to the Writable.Pipeline.ExecutionOrder setting (comma separated list) and must also be present or added to the [Writable.Pipeline.Functions] section as [Writable.Pipeline.Functions.{FunctionName}] . The functions will also have the [Writable.Pipeline.Functions.{FunctionName}.Parameters] section where the function's parameters are configured. Please refer to the Getting Started section above for an example. Note The Parameters section for each function is a key/value map of string values. So even though the parameter is referred to as an Integer or Boolean, it has to be specified as a valid string representation, e.g. \"20\" or \"true\". Please refer to the function's detailed documentation by clicking the function name below. AddTags Parameters tags - String containing comma separated list of tag key/value pairs. The tag key/value pairs are colon separated Example [Writable.Pipeline.Functions.AddTags] [Writable.Pipeline.Functions.AddTags.Parameters] tags = \"GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603\" Batch Parameters Mode - The batch mode to use. 
can be 'bycount', 'bytime' or 'bytimecount' BatchThreshold - Number of items to batch before sending batched items to the next function in the pipeline. Used with 'bycount' and 'bytimecount' modes TimeInterval - Amount of time to batch before sending batched items to the next function in the pipeline. Used with 'bytime' and 'bytimecount' modes IsEventData - If true, specifies that the data being batched is Events and to un-marshal the batched data to []Event prior to returning the batched data. By default the batched data returned is [][]byte Example [Writable.Pipeline.Functions.Batch] [Writable.Pipeline.Functions.Batch.Parameters] Mode = \"bytimecount\" # can be \"bycount\", \"bytime\" or \"bytimecount\" BatchThreshold = \"30\" TimeInterval = \"60s\" IsEventData = \"true\" EdgeX 2.0 For EdgeX 2.0 the BatchByCount , BatchByTime , and BatchByTimeCount configurable pipeline functions have been replaced by the single Batch configurable pipeline function with additional Mode parameter. EdgeX 2.1 The IsEventData setting is new for EdgeX 2.1 Compress Parameters Algorithm - Compression algorithm to use. Can be 'gzip' or 'zlib' Example [Writable.Pipeline.Functions.Compress] [Writable.Pipeline.Functions.Compress.Parameters] Algorithm = \"gzip\" EdgeX 2.0 For EdgeX 2.0 the CompressWithGZIP and CompressWithZLIB configurable pipeline functions have been replaced by the single Compress configurable pipeline function with additional Algorithm parameter. Encrypt Parameters Algorithm - AES (deprecated) or AES256 Key - (optional, deprecated) Encryption key used for the encryption. Required if not using Secret Store for the encryption key data InitVector - (deprecated) Initialization vector used for the encryption. SecretPath - (required for AES256) Path in the Secret Store where the encryption key is located. Required if Key not specified. SecretName - (required for AES256) Name of the secret for the encryption key in the Secret Store . Required if Key not specified. 
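The SDK performs the cipher work internally; purely as a standalone illustration of symmetric encryption driven by a Key and InitVector (this sketch uses plain AES-CBC with PKCS#7 padding, which is not necessarily the SDK's exact scheme, and the key/IV values are made up for the example):

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"fmt"
)

// pkcs7Pad pads data out to a multiple of blockSize.
func pkcs7Pad(data []byte, blockSize int) []byte {
	n := blockSize - len(data)%blockSize
	return append(data, bytes.Repeat([]byte{byte(n)}, n)...)
}

// pkcs7Unpad strips the padding added by pkcs7Pad.
func pkcs7Unpad(data []byte) []byte {
	n := int(data[len(data)-1])
	return data[:len(data)-n]
}

// encrypt runs AES-CBC over the padded plaintext with the given key and IV.
func encrypt(key, iv, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	padded := pkcs7Pad(plaintext, aes.BlockSize)
	out := make([]byte, len(padded))
	cipher.NewCBCEncrypter(block, iv).CryptBlocks(out, padded)
	return out, nil
}

// decrypt reverses encrypt.
func decrypt(key, iv, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	out := make([]byte, len(ciphertext))
	cipher.NewCBCDecrypter(block, iv).CryptBlocks(out, ciphertext)
	return pkcs7Unpad(out), nil
}

func main() {
	key := []byte("0123456789abcdef0123456789abcdef") // 32 bytes selects AES-256
	iv := []byte("0123456789abcdef")                  // IV must equal the 16-byte block size
	ct, _ := encrypt(key, iv, []byte("event payload"))
	pt, _ := decrypt(key, iv, ct)
	fmt.Printf("round trip ok: %v\n", string(pt) == "event payload") // prints: round trip ok: true
}
```

The key length selects the AES variant (16/24/32 bytes for AES-128/192/256), which is one reason the configuration distinguishes AES from AES256.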
Example # Encrypt with key specified in configuration [Writable.Pipeline.Functions.Encrypt] [Writable.Pipeline.Functions.Encrypt.Parameters] Algorithm = \"aes\" Key = \"aquqweoruqwpeoruqwpoeruqwpoierupqoweiurpoqwiuerpqowieurqpowieurpoqiweuroipwqure\" InitVector = \"123456789012345678901234567890\" # Encrypt with key pulled from Secret Store [Writable.Pipeline.Functions.Encrypt] [Writable.Pipeline.Functions.Encrypt.Parameters] Algorithm = \"aes\" InitVector = \"123456789012345678901234567890\" SecretPath = \"aes\" SecretName = \"key\" EdgeX 2.0 For EdgeX 2.0 the EncryptWithAES configurable pipeline function has been replaced by the Encrypt configurable pipeline function with additional Algorithm parameter. In addition the ability to pull the encryption key from the Secret Store has been added. FilterByDeviceName Parameters DeviceNames - Comma separated list of device names for filtering FilterOut - Boolean indicating if the data matching the device names should be filtered out or filtered for. Example [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] DeviceNames = \"Random-Float-Device,Random-Integer-Device\" FilterOut = \"false\" FilterByProfileName Parameters ProfileNames - Comma separated list of profile names for filtering FilterOut - Boolean indicating if the data matching the profile names should be filtered out or filtered for. Example [Writable.Pipeline.Functions.FilterByProfileName] [Writable.Pipeline.Functions.FilterByProfileName.Parameters] ProfileNames = \"Random-Float-Device, Random-Integer-Device\" FilterOut = \"false\" EdgeX 2.0 The FilterByProfileName configurable pipeline function is new for EdgeX 2.0 FilterByResourceName Parameters ResourceNames - Comma separated list of reading resource names for filtering FilterOut - Boolean indicating if the readings matching the resource names should be filtered out or filtered for. 
Example [Writable.Pipeline.Functions.FilterByResourceName] [Writable.Pipeline.Functions.FilterByResourceName.Parameters] ResourceNames = \"Int8, Int64\" FilterOut = \"true\" EdgeX 2.0 For EdgeX 2.0 the FilterByValueDescriptor configurable pipeline function has been renamed to FilterByResourceName and parameter names adjusted. FilterBySourceName Parameters SourceNames - Comma separated list of source names for filtering. Source name is either the device command name or the resource name that created the Event FilterOut - Boolean indicating if the data matching the source names should be filtered out or filtered for. Example [Writable.Pipeline.Functions.FilterBySourceName] [Writable.Pipeline.Functions.FilterBySourceName.Parameters] SourceNames = \"Bool, BoolArray\" FilterOut = \"false\" EdgeX 2.0 The FilterBySourceName configurable pipeline function is new for EdgeX 2.0 HTTPExport Parameters Method - HTTP Method to use. Can be post or put Url - HTTP endpoint to POST/PUT the data. MimeType - Optional mime type for the data. Defaults to application/json if not set. PersistOnError - Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to \"true\". ContinueOnSendError - For chained multi destination exports, if true, continues after a send error so the next export function executes. ReturnInputData - For chained multi destination exports, if true, passes the input data to the next export function. HeaderName - (Optional) Name of the header key to add to the HTTP header SecretPath - (Optional) Path of the secret in the Secret Store where the header value is stored. SecretName - (Optional) Name of the secret for the header value in the Secret Store . 
Example # Simple HTTP Export [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api.net/edgexdata\" # HTTP Export with secret header data pulled from the Secret Store [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api.net/edgexdata\" HeaderName = \"MyApiKey\" SecretPath = \"http\" SecretName = \"apikey\" # HTTP Export to multiple destinations [Writable.Pipeline] ExecutionOrder = \"HTTPExport1, HTTPExport2\" [Writable.Pipeline.Functions.HTTPExport1] [Writable.Pipeline.Functions.HTTPExport1.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api1.net/edgexdata2\" ContinueOnSendError = \"true\" ReturnInputData = \"true\" [Writable.Pipeline.Functions.HTTPExport2] [Writable.Pipeline.Functions.HTTPExport2.Parameters] Method = \"put\" MimeType = \"application/xml\" Url = \"http://my.api2.net/edgexdata2\" EdgeX 2.0 For EdgeX 2.0 the HTTPPost , HTTPPostJSON , HTTPPostXML , HTTPPut , HTTPPutJSON , and HTTPPutXML configurable pipeline functions have been replaced by the single HTTPExport function with additional Method parameter. ContinueOnSendError and ReturnInputData parameters have been added to support multi destination exports. In addition the HeaderName and SecretName parameters have replaced the SecretHeaderName parameter. EdgeX 2.0 The capability to chain HTTP Export functions to export to multiple destinations is new for EdgeX 2.0. EdgeX 2.0 Multiple instances (configured differently) of the same configurable pipeline function are new for EdgeX 2.0. The function names in the Writable.Pipeline.Functions section now only need to start with a built-in configurable pipeline function name, rather than be an exact match. 
JSONLogic Parameters Rule - The JSON formatted rule that will be executed on the data by JSONLogic Example [Writable.Pipeline.Functions.JSONLogic] [Writable.Pipeline.Functions.JSONLogic.Parameters] Rule = \"{ \\\"and\\\" : [{\\\"<\\\" : [{ \\\"var\\\" : \\\"temp\\\" }, 110 ]}, {\\\"==\\\" : [{ \\\"var\\\" : \\\"sensor.type\\\" }, \\\"temperature\\\" ]} ] }\" MQTTExport Parameters BrokerAddress - URL specifying the address of the MQTT Broker Topic - Topic to publish the data to ClientId - Id to use when connecting to the MQTT Broker Qos - MQTT Quality of Service (QOS) setting to use (0, 1 or 2). Please refer here for more details on QOS values AutoReconnect - Boolean specifying if reconnect should be automatic if connection to MQTT broker is lost Retain - Boolean specifying if the MQTT Broker should save the last message published as the \u201cLast Good Message\u201d on that topic. SkipVerify - Boolean indicating if the certificate verification should be skipped. PersistOnError - Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to \"true\". AuthMode - Mode of authentication to use when connecting to the MQTT Broker none - No authentication required usernamepassword - Use username and password authentication. The Secret Store (Vault or InsecureSecrets ) must contain the username and password secrets. clientcert - Use Client Certificate authentication. The Secret Store (Vault or InsecureSecrets ) must contain the clientkey and clientcert secrets. cacert - Use CA Certificate authentication. The Secret Store (Vault or InsecureSecrets ) must contain the cacert secret. SecretPath - Path in the secret store where authentication secrets are stored. Note AuthMode=cacert is only needed when client authentication (e.g. usernamepassword ) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert. 
Example # Simple MQTT Export [Writable.Pipeline.Functions.MQTTExport] [Writable.Pipeline.Functions.MQTTExport.Parameters] BrokerAddress = \"tcps://localhost:8883\" Topic = \"mytopic\" ClientId = \"myclientid\" # MQTT Export with auth credentials pulled from the Secret Store [Writable.Pipeline.Functions.MQTTExport] [Writable.Pipeline.Functions.MQTTExport.Parameters] BrokerAddress = \"tcps://my-broker-host.com:8883\" Topic = \"mytopic\" ClientId = \"myclientid\" Qos = \"2\" AutoReconnect = \"true\" Retain = \"true\" SkipVerify = \"false\" PersistOnError = \"true\" AuthMode = \"usernamepassword\" SecretPath = \"mqtt\" EdgeX 2.0 For EdgeX 2.0 the MQTTSecretSend configurable pipeline function has been renamed to MQTTExport and the deprecated MQTTSend configurable pipeline function has been removed PushToCore Parameters ProfileName - Profile name to use for the new Event DeviceName - Device name to use for the new Event ResourceName - Resource name to use for the new Event's SourceName and Reading's ResourceName ValueType - Value type to use for the new Event Reading MediaType - Media type to use for the new Event's binary Reading. Required when the value type is Binary Example [Writable.Pipeline.Functions.PushToCore] [Writable.Pipeline.Functions.PushToCore.Parameters] ProfileName = \"MyProfile\" DeviceName = \"MyDevice\" ResourceName = \"SomeResource\" ValueType = \"String\" EdgeX 2.0 For EdgeX 2.0 the ProfileName , ValueType and MediaType parameters are new and the ReadingName parameter has been renamed to ResourceName . SetResponseData Parameters ResponseContentType - Used to specify content-type header for response - optional Example [Writable.Pipeline.Functions.SetResponseData] [Writable.Pipeline.Functions.SetResponseData.Parameters] ResponseContentType = \"application/json\" EdgeX 2.0 For EdgeX 2.0 the SetOutputData configurable pipeline function has been renamed to SetResponseData . Transform Parameters Type - Type of transformation to perform. 
Can be 'xml' or 'json' Example [Writable.Pipeline.Functions.Transform] [Writable.Pipeline.Functions.Transform.Parameters] Type = \"xml\" EdgeX 2.0 For EdgeX 2.0 the TransformToJSON and TransformToXML configurable pipeline functions have been replaced by the single Transform configurable pipeline function with additional Type parameter.","title":"App Service Configurable"},{"location":"microservices/application/AppServiceConfigurable/#app-service-configurable","text":"","title":"App Service Configurable"},{"location":"microservices/application/AppServiceConfigurable/#getting-started","text":"App-Service-Configurable is provided as an easy way to get started with processing data flowing through EdgeX. This service leverages the App Functions SDK and provides a way for developers to use configuration instead of having to compile standalone services to utilize built-in functions in the SDK. Please refer to the Available Configurable Pipeline Functions section below for a full list of built-in functions that can be used in the configurable pipeline. To get started with App Service Configurable, you'll want to start by determining which functions are required in your pipeline. Using a simple example, let's assume you wish to use the following functions from the SDK: FilterByDeviceName - to filter events for a specific device. Transform - to transform the data to XML HTTPExport - to send the data to an HTTP endpoint that takes our XML data Once the functions have been identified, we'll go ahead and build out the configuration in the configuration.toml file under the [Writable.Pipeline] section. 
Example - Writable.Pipeline [Writable] LogLevel = \"DEBUG\" [Writable.Pipeline] ExecutionOrder = \"FilterByDeviceName, Transform, HTTPExport\" [Writable.Pipeline.Functions] [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] FilterValues = \"Random-Float-Device, Random-Integer-Device\" [Writable.Pipeline.Functions.Transform] [Writable.Pipeline.Functions.Transform.Parameters] Type = \"xml\" [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api.net/edgexdata\" The first line of note is ExecutionOrder = \"FilterByDeviceName, Transform, HTTPExport\" . This specifies the order in which to execute your functions. Each function specified here must also be placed in the [Writable.Pipeline.Functions] section. Next, each function and its required information is listed. Each function typically has associated Parameters that must be configured to properly execute the function as designated by [Writable.Pipeline.Functions.{FunctionName}.Parameters] . The parameters required for each function are described in the Available Configurable Pipeline Functions section below. Note By default, the configuration provided is set to use EdgexMessageBus as a trigger. This means you must have EdgeX running with devices sending data in order to trigger the pipeline. You can also change the trigger to be HTTP. For more details on triggers, view the Triggers documentation located in the Triggers section. That's it! 
Now we can run/deploy this service and the functions pipeline will process the data with functions we've defined.","title":"Getting Started"},{"location":"microservices/application/AppServiceConfigurable/#pipeline-per-topics","text":"EdgeX 2.1 Pipeline Per Topics is new for EdgeX 2.1 The above pipeline configuration in the Getting Started section is the preferred way if your use case only requires a single functions pipeline. For use cases that require multiple functions pipelines in order to process the data differently based on the profile , device or source for the Event, there is the Pipeline Per Topics feature. This feature allows multiple pipelines to be configured in the [Writable.Pipeline.PerTopicPipelines] section. This section is a map of pipelines. The map key must be unique, but isn't otherwise used, so it can be any value. Each pipeline is defined by the following configuration settings: Id - This is the unique ID given to each pipeline Topics - Comma separated list of topics that control when the pipeline is executed. See Pipeline Per Topics for details on using wildcards in the topic. ExecutionOrder - This is the list of functions, in order, that the pipeline will execute. Same as ExecutionOrder in the above example in the Getting Started section Example - Writable.Pipeline.PerTopicPipelines In this example Events from the device Random-Float-Device are transformed to JSON and then HTTP exported. At the same time, Events for the source Int8 are transformed to XML and then HTTP exported to the same endpoint. Note the custom naming for TransformJson and TransformXml . This is taking advantage of the Multiple Instances of a Function described below. 
[Writable] LogLevel = \"DEBUG\" [Writable.Pipeline] [Writable.Pipeline.PerTopicPipelines] [Writable.Pipeline.PerTopicPipelines.float] Id = \"float-pipeline\" Topics = \"edgex/events/device/#/Random-Float-Device/#, edgex/events/device/#/Random-Integer-Device/#\" ExecutionOrder = \"TransformJson, HTTPExport\" [Writable.Pipeline.PerTopicPipelines.int8] Id = \"int8-pipeline\" Topics = \"edgex/events/device/#/#/Int8\" ExecutionOrder = \"TransformXml, HTTPExport\" [Writable.Pipeline.Functions] [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] FilterValues = \"Random-Float-Device, Random-Integer-Device\" [Writable.Pipeline.Functions.TransformJson] [Writable.Pipeline.Functions.TransformJson.Parameters] Type = \"json\" [Writable.Pipeline.Functions.TransformXml] [Writable.Pipeline.Functions.TransformXml.Parameters] Type = \"xml\" [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api.net/edgexdata\" Note The Pipeline Per Topics feature is targeted for EdgeX MessageBus and External MQTT triggers, but can be used with Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank , so the pipeline's topics must contain a single topic set to the # wildcard so that all messages received are processed by the pipeline.","title":"Pipeline Per Topics"},{"location":"microservices/application/AppServiceConfigurable/#environment-variable-overrides-for-docker","text":"EdgeX services no longer have docker specific profiles. They now rely on environment variable overrides in the docker compose files for the docker specific differences. 
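As an illustrative sketch (the two settings shown are examples drawn from the profiles described below, not a complete service configuration), an override name is formed by upper-casing each TOML path segment and joining the segments with underscores:

```yaml
# Hypothetical compose-file fragment. Each override name is the TOML key path
# upper-cased and joined with underscores, e.g.:
#   [Writable.Pipeline] ExecutionOrder  ->  WRITABLE_PIPELINE_EXECUTIONORDER
environment:
  WRITABLE_PIPELINE_EXECUTIONORDER: "FilterByDeviceName, HTTPExport"
  WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: "http://my.api.net/edgexdata"
```
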
Example - Environment settings required in the compose files for App Service Configurable EDGEX_PROFILE : [ target profile ] SERVICE_HOST : [ services network host name ] EDGEX_SECURITY_SECRET_STORE : \"false\" # only need to disable as default is true CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis Example - Docker compose entry for App Service Configurable in no-secure compose file app-service-rules : container_name : edgex-app-rules-engine depends_on : - consul - data environment : CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis EDGEX_PROFILE : rules-engine EDGEX_SECURITY_SECRET_STORE : \"false\" MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul SERVICE_HOST : edgex-app-rules-engine TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis hostname : edgex-app-rules-engine image : edgexfoundry/app-service-configurable:2.0.0 networks : edgex-network : {} ports : - 127.0.0.1:59701:59701/tcp read_only : true security_opt : - no-new-privileges:true user : 2002:2001 Note App Service Configurable is designed to be run multiple times each with different profiles. 
This is why in the above example the name edgex-app-rules-engine is used for the instance running the rules-engine profile.","title":"Environment Variable Overrides For Docker"},{"location":"microservices/application/AppServiceConfigurable/#deploying-multiple-instances-using-profiles","text":"App Service Configurable was designed to be deployed as multiple instances for different purposes. Since the function pipeline is specified in the configuration.toml file, we can use this as a way to run each instance with a different function pipeline. App Service Configurable does not have the standard default configuration at /res/configuration.toml . This default configuration has been moved to the sample profile. This forces you to specify the profile for the configuration you would like to run. The profile is specified using the -p/--profile=[profilename] command line option or the EDGEX_PROFILE=[profilename] environment variable override. The profile name selected is used in the service key ( app-[profile name] ) to make each instance unique, e.g. app-sample when specifying sample as the profile. EdgeX 2.0 Default service key for App Service Configurable instances has changed in EdgeX 2.0 from AppService-[profile name] to app-[profile name] Note If you need to run multiple instances with the same profile, e.g. http-export , but configured differently, you will need to override the service key with a custom name for one or more of the services. This is done with the -sk/--serviceKey command-line option or the EDGEX_SERVICE_KEY environment variable. See the Command-line Options and Environment Overrides sections for more detail. The following profiles and their purposes are provided with App Service Configurable. rules-engine - Profile used to push Event messages to the Rules Engine via the Redis Pub/Sub Message Bus. 
This is used in the default docker compose files for the app-rules-engine service. One can optionally add the Filter function via environment overrides WRITABLE_PIPELINE_EXECUTIONORDER: \"FilterByDeviceName, HTTPExport\" WRITABLE_PIPELINE_FUNCTIONS_FILTERBYDEVICENAME_PARAMETERS_DEVICENAMES: \"[comma separated list]\" There are many optional functions and parameters provided in this profile. See the complete profile for more details. http-export - Starter profile used for exporting data via HTTP. Requires further configuration which can easily be accomplished using environment variable overrides Required: WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: [Your URL] There are many more optional functions and parameters provided in this profile. See the complete profile for more details. mqtt-export - Starter profile used for exporting data via MQTT. Requires further configuration which can easily be accomplished using environment variable overrides Required: WRITABLE_PIPELINE_FUNCTIONS_MQTTSEND_ADDRESSABLE_ADDRESS: [Your Broker Address] There are many optional functions and parameters provided in this profile. See the complete profile for more details. push-to-core - Example profile demonstrating how to use the PushToCore function. Provided as an example that can be copied and modified to create a new custom profile. 
See the complete profile for more details. Requires further configuration which can easily be accomplished using environment variable overrides Required: WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_PROFILENAME: [Your Event's profile name] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_DEVICENAME: [Your Event's device name] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_SOURCENAME: [Your Event's source name] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_RESOURCENAME: [Your Event reading's resource name] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_VALUETYPE: [Your Event reading's value type] WRITABLE_PIPELINE_FUNCTIONS_PUSHTOCORE_MEDIATYPE: [Your Event binary reading's media type] Required only when ValueType is Binary sample - Sample profile with all available functions declared and a sample pipeline. Provided as a sample that can be copied and modified to create new custom profiles. See the complete profile for more details. functional-tests - Profile used for the TAF functional testing. Note Functions can be declared in a profile but not used in the pipeline ExecutionOrder , allowing them to be added to the pipeline ExecutionOrder later at runtime if needed.","title":"Deploying Multiple Instances using profiles"},{"location":"microservices/application/AppServiceConfigurable/#what-if-my-input-data-isnt-an-edgex-event","text":"The default TargetType for data flowing into the functions pipeline is an EdgeX Event DTO. There are cases when this incoming data might not be an EdgeX Event DTO. In these cases the Pipeline can be configured using UseTargetTypeOfByteArray=true to set the TargetType to be a byte array/slice, i.e. []byte . The first function in the pipeline must then be one that can handle the []byte data. The compression , encryption and export functions are examples of pipeline functions that will take input data that is []byte . 
Example - Configure the functions pipeline to compress , encrypt and then export the []byte data via HTTP [Writable] LogLevel = \"DEBUG\" [Writable.Pipeline] UseTargetTypeOfByteArray = true ExecutionOrder = \"Compress, Encrypt, HTTPExport\" [Writable.Pipeline.Functions.Compress] [Writable.Pipeline.Functions.Compress.Parameters] Algorithm = \"gzip\" [Writable.Pipeline.Functions.Encrypt] [Writable.Pipeline.Functions.Encrypt.Parameters] Algorithm = \"aes\" Key = \"aquqweoruqwpeoruqwpoeruqwpoierupqoweiurpoqwiuerpqowieurqpowieurpoqiweuroipwqure\" InitVector = \"123456789012345678901234567890\" [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" Url = \"http://my.api.net/edgexdata\" MimeType = \"application/text\" If along with this pipeline configuration, you also configured the Trigger to be the http trigger, you could then send any data to the app-service-configurable's /api/v2/trigger endpoint and have it compressed, encrypted and sent to your configured URL above. Example - HTTP Trigger configuration [Trigger] Type = \"http\"","title":"What if my input data isn't an EdgeX Event ?"},{"location":"microservices/application/AppServiceConfigurable/#multiple-instances-of-a-function","text":"EdgeX 2.0 New for EdgeX 2.0 Now multiple instances of the same configurable pipeline function can be specified, configured differently and used together in the functions pipeline. Previously the function names specified in the [Writable.Pipeline.Functions] section had to match a built-in configurable pipeline function name exactly. Now the names specified only need to start with a built-in configurable pipeline function name. 
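As a minimal sketch of this naming rule (the instance names here are illustrative), both of the following declarations resolve to the built-in Transform function because their names start with it, yet each carries its own configuration:

```toml
# Two instances of the built-in Transform function, configured differently.
# Each name starts with "Transform", so both resolve to that built-in function.
[Writable.Pipeline.Functions.TransformJson]
  [Writable.Pipeline.Functions.TransformJson.Parameters]
  Type = "json"

[Writable.Pipeline.Functions.TransformXml]
  [Writable.Pipeline.Functions.TransformXml.Parameters]
  Type = "xml"
```
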
See the HttpExport section below for an example.","title":"Multiple Instances of a Function"},{"location":"microservices/application/AppServiceConfigurable/#available-configurable-pipeline-functions","text":"Below are the functions that are available to use in the configurable functions pipeline ( [Writable.Pipeline] ) section of the configuration. The function names below can be added to the Writable.Pipeline.ExecutionOrder setting (comma separated list) and must also be present or added to the [Writable.Pipeline.Functions] section as [Writable.Pipeline.Functions.{FunctionName}] . The functions will also have the [Writable.Pipeline.Functions.{FunctionName}.Parameters] section where the function's parameters are configured. Please refer to the Getting Started section above for an example. Note The Parameters section for each function is a key/value map of string values. So even though the parameter is referred to as an Integer or Boolean, it has to be specified as a valid string representation, e.g. \"20\" or \"true\". Please refer to the function's detailed documentation by clicking the function name below.","title":"Available Configurable Pipeline Functions"},{"location":"microservices/application/AppServiceConfigurable/#addtags","text":"Parameters tags - String containing comma separated list of tag key/value pairs. The tag key/value pairs are colon separated Example [Writable.Pipeline.Functions.AddTags] [Writable.Pipeline.Functions.AddTags.Parameters] tags = \"GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603\"","title":"AddTags"},{"location":"microservices/application/AppServiceConfigurable/#batch","text":"Parameters Mode - The batch mode to use. Can be 'bycount', 'bytime' or 'bytimecount' BatchThreshold - Number of items to batch before sending batched items to the next function in the pipeline. 
Used with 'bycount' and 'bytimecount' modes TimeInterval - Amount of time to batch before sending batched items to the next function in the pipeline. Used with 'bytime' and 'bytimecount' modes IsEventData - If true, specifies that the data being batched is Events and to un-marshal the batched data to []Event prior to returning the batched data. By default the batched data returned is [][]byte Example [Writable.Pipeline.Functions.Batch] [Writable.Pipeline.Functions.Batch.Parameters] Mode = \"bytimecount\" # can be \"bycount\", \"bytime\" or \"bytimecount\" BatchThreshold = \"30\" TimeInterval = \"60s\" IsEventData = \"true\" EdgeX 2.0 For EdgeX 2.0 the BatchByCount , BatchByTime , and BatchByTimeCount configurable pipeline functions have been replaced by a single Batch configurable pipeline function with an additional Mode parameter. EdgeX 2.1 The IsEventData setting is new for EdgeX 2.1","title":"Batch"},{"location":"microservices/application/AppServiceConfigurable/#compress","text":"Parameters Algorithm - Compression algorithm to use. Can be 'gzip' or 'zlib' Example [Writable.Pipeline.Functions.Compress] [Writable.Pipeline.Functions.Compress.Parameters] Algorithm = \"gzip\" EdgeX 2.0 For EdgeX 2.0 the CompressWithGZIP and CompressWithZLIB configurable pipeline functions have been replaced by the single Compress configurable pipeline function with an additional Algorithm parameter.","title":"Compress"},{"location":"microservices/application/AppServiceConfigurable/#encrypt","text":"Parameters Algorithm - AES (deprecated) or AES256 Key - (optional, deprecated) Encryption key used for the encryption. Required if not using Secret Store for the encryption key data InitVector - (deprecated) Initialization vector used for the encryption. SecretPath - (required for AES256) Path in the Secret Store where the encryption key is located. Required if Key not specified. SecretName - (required for AES256) Name of the secret for the encryption key in the Secret Store . 
Required if Key not specified. Example # Encrypt with key specified in configuration [Writable.Pipeline.Functions.Encrypt] [Writable.Pipeline.Functions.Encrypt.Parameters] Algorithm = \"aes\" Key = \"aquqweoruqwpeoruqwpoeruqwpoierupqoweiurpoqwiuerpqowieurqpowieurpoqiweuroipwqure\" InitVector = \"123456789012345678901234567890\" # Encrypt with key pulled from Secret Store [Writable.Pipeline.Functions.Encrypt] [Writable.Pipeline.Functions.Encrypt.Parameters] Algorithm = \"aes\" InitVector = \"123456789012345678901234567890\" SecretPath = \"aes\" SecretName = \"key\" EdgeX 2.0 For EdgeX 2.0 the EncryptWithAES configurable pipeline function has been replaced by the Encrypt configurable pipeline function with an additional Algorithm parameter. In addition the ability to pull the encryption key from the Secret Store has been added.","title":"Encrypt"},{"location":"microservices/application/AppServiceConfigurable/#filterbydevicename","text":"Parameters DeviceNames - Comma separated list of device names for filtering FilterOut - Boolean indicating if the data matching the device names should be filtered out or filtered for. Example [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] DeviceNames = \"Random-Float-Device,Random-Integer-Device\" FilterOut = \"false\"","title":"FilterByDeviceName"},{"location":"microservices/application/AppServiceConfigurable/#filterbyprofilename","text":"Parameters ProfileNames - Comma separated list of profile names for filtering FilterOut - Boolean indicating if the data matching the profile names should be filtered out or filtered for. 
Example [Writable.Pipeline.Functions.FilterByProfileName] [Writable.Pipeline.Functions.FilterByProfileName.Parameters] ProfileNames = \"Random-Float-Device, Random-Integer-Device\" FilterOut = \"false\" EdgeX 2.0 The FilterByProfileName configurable pipeline function is new for EdgeX 2.0","title":"FilterByProfileName"},{"location":"microservices/application/AppServiceConfigurable/#filterbyresourcename","text":"Parameters ResourceNames - Comma separated list of reading resource names for filtering FilterOut - Boolean indicating if the readings matching the resource names should be filtered out or filtered for. Example [Writable.Pipeline.Functions.FilterByResourceName] [Writable.Pipeline.Functions.FilterByResourceName.Parameters] ResourceNames = \"Int8, Int64\" FilterOut = \"true\" EdgeX 2.0 For EdgeX 2.0 the FilterByValueDescriptor configurable pipeline function has been renamed to FilterByResourceName and parameter names adjusted.","title":"FilterByResourceName"},{"location":"microservices/application/AppServiceConfigurable/#filterbysourcename","text":"Parameters SourceNames - Comma separated list of source names for filtering. Source name is either the device command name or the resource name that created the Event FilterOut - Boolean indicating if the data matching the source names should be filtered out or filtered for. Example [Writable.Pipeline.Functions.FilterBySourceName] [Writable.Pipeline.Functions.FilterBySourceName.Parameters] SourceNames = \"Bool, BoolArray\" FilterOut = \"false\" EdgeX 2.0 The FilterBySourceName configurable pipeline function is new for EdgeX 2.0","title":"FilterBySourceName"},{"location":"microservices/application/AppServiceConfigurable/#httpexport","text":"Parameters Method - HTTP Method to use. Can be post or put Url - HTTP endpoint to POST/PUT the data. MimeType - Optional mime type for the data. Defaults to application/json if not set. PersistOnError - Indicates to persist the data if the POST fails. 
Store and Forward must also be enabled if this is set to \"true\". ContinueOnSendError - For chained multi destination exports, if true, continues after a send error so the next export function executes. ReturnInputData - For chained multi destination exports if true, passes the input data to the next export function. HeaderName - (Optional) Name of the header key to add to the HTTP header SecretPath - (Optional) Path of the secret in the Secret Store where the header value is stored. SecretName - (Optional) Name of the secret for the header value in the Secret Store . Example # Simple HTTP Export [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api.net/edgexdata\" # HTTP Export with secret header data pulled from the Secret Store [Writable.Pipeline.Functions.HTTPExport] [Writable.Pipeline.Functions.HTTPExport.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api.net/edgexdata\" HeaderName = \"MyApiKey\" SecretPath = \"http\" SecretName = \"apikey\" # Http Export to multiple destinations [Writable.Pipeline] ExecutionOrder = \"HTTPExport1, HTTPExport2\" [Writable.Pipeline.Functions.HTTPExport1] [Writable.Pipeline.Functions.HTTPExport1.Parameters] Method = \"post\" MimeType = \"application/xml\" Url = \"http://my.api1.net/edgexdata2\" ContinueOnSendError = \"true\" ReturnInputData = \"true\" [Writable.Pipeline.Functions.HTTPExport2] [Writable.Pipeline.Functions.HTTPExport2.Parameters] Method = \"put\" MimeType = \"application/xml\" Url = \"http://my.api2.net/edgexdata2\" EdgeX 2.0 For EdgeX 2.0 the HTTPPost , HTTPPostJSON , HTTPPostXML , HTTPPut , HTTPPutJSON , and HTTPPutXML configurable pipeline functions have been replaced by the single HTTPExport function with an additional Method parameter. ContinueOnSendError and ReturnInputData parameters have been added to support multi destination exports. 
In addition the HeaderName and SecretName parameters have replaced the SecretHeaderName parameter. EdgeX 2.0 The capability to chain Http Export functions to export to multiple destinations is new for EdgeX 2.0. EdgeX 2.0 Multiple instances (configured differently) of the same configurable pipeline function are new for EdgeX 2.0. The function names in the Writable.Pipeline.Functions section now only need to start with a built-in configurable pipeline function name, rather than be an exact match.","title":"HTTPExport"},{"location":"microservices/application/AppServiceConfigurable/#jsonlogic","text":"Parameters Rule - The JSON formatted rule that will be executed on the data by JSONLogic Example [Writable.Pipeline.Functions.JSONLogic] [Writable.Pipeline.Functions.JSONLogic.Parameters] Rule = \"{ \\\"and\\\" : [{\\\"<\\\" : [{ \\\"var\\\" : \\\"temp\\\" }, 110 ]}, {\\\"==\\\" : [{ \\\"var\\\" : \\\"sensor.type\\\" }, \\\"temperature\\\" ]} ] }\"","title":"JSONLogic"},{"location":"microservices/application/AppServiceConfigurable/#mqttexport","text":"Parameters BrokerAddress - URL specifying the address of the MQTT Broker Topic - Topic to publish the data to ClientId - Id to use when connecting to the MQTT Broker Qos - MQTT Quality of Service (QOS) setting to use (0, 1 or 2). Please refer here for more details on QOS values AutoReconnect - Boolean specifying if reconnect should be automatic if connection to MQTT broker is lost Retain - Boolean specifying if the MQTT Broker should save the last message published as the \u201cLast Good Message\u201d on that topic. SkipVerify - Boolean indicating if the certificate verification should be skipped. PersistOnError - Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to \"true\". AuthMode - Mode of authentication to use when connecting to the MQTT Broker none - No authentication required usernamepassword - Use username and password authentication. 
The Secret Store (Vault or InsecureSecrets ) must contain the username and password secrets. clientcert - Use Client Certificate authentication. The Secret Store (Vault or InsecureSecrets ) must contain the clientkey and clientcert secrets. cacert - Use CA Certificate authentication. The Secret Store (Vault or InsecureSecrets ) must contain the cacert secret. SecretPath - Path in the secret store where authentication secrets are stored. Note AuthMode=cacert is only needed when client authentication (e.g. usernamepassword ) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert. Example # Simple MQTT Export [Writable.Pipeline.Functions.MQTTExport] [Writable.Pipeline.Functions.MQTTExport.Parameters] BrokerAddress = \"tcps://localhost:8883\" Topic = \"mytopic\" ClientId = \"myclientid\" # MQTT Export with auth credentials pulled from the Secret Store [Writable.Pipeline.Functions.MQTTExport] [Writable.Pipeline.Functions.MQTTExport.Parameters] BrokerAddress = \"tcps://my-broker-host.com:8883\" Topic = \"mytopic\" ClientId = \"myclientid\" Qos = \"2\" AutoReconnect = \"true\" Retain = \"true\" SkipVerify = \"false\" PersistOnError = \"true\" AuthMode = \"usernamepassword\" SecretPath = \"mqtt\" EdgeX 2.0 For EdgeX 2.0 the MQTTSecretSend configurable pipeline function has been renamed to MQTTExport and the deprecated MQTTSend configurable pipeline function has been removed","title":"MQTTExport"},{"location":"microservices/application/AppServiceConfigurable/#pushtocore","text":"Parameters ProfileName - Profile name to use for the new Event DeviceName - Device name to use for the new Event ResourceName - Resource name to use for the new Event's SourceName and Reading's ResourceName ValueType - Value type to use for the new Event Reading MediaType - Media type to use for the new Event Reading. 
Required when the value type is Binary Example [Writable.Pipeline.Functions.PushToCore] [Writable.Pipeline.Functions.PushToCore.Parameters] ProfileName = \"MyProfile\" DeviceName = \"MyDevice\" ResourceName = \"SomeResource\" ValueType = \"String\" EdgeX 2.0 For EdgeX 2.0 the ProfileName , ValueType and MediaType parameters are new and the ReadingName parameter has been renamed to ResourceName .","title":"PushToCore"},{"location":"microservices/application/AppServiceConfigurable/#setresponsedata","text":"Parameters ResponseContentType - Used to specify content-type header for response - optional Example [Writable.Pipeline.Functions.SetResponseData] [Writable.Pipeline.Functions.SetResponseData.Parameters] ResponseContentType = \"application/json\" EdgeX 2.0 For EdgeX 2.0 the SetOutputData configurable pipeline function has been renamed to SetResponseData .","title":"SetResponseData"},{"location":"microservices/application/AppServiceConfigurable/#transform","text":"Parameters Type - Type of transformation to perform. Can be 'xml' or 'json' Example [Writable.Pipeline.Functions.Transform] [Writable.Pipeline.Functions.Transform.Parameters] Type = \"xml\" EdgeX 2.0 For EdgeX 2.0 the TransformToJSON and TransformToXML configurable pipeline functions have been replaced by the single Transform configurable pipeline function with an additional Type parameter.","title":"Transform"},{"location":"microservices/application/ApplicationFunctionsSDK/","text":"App Functions SDK Introduction Welcome to the App Functions SDK for EdgeX. This SDK is meant to provide all the plumbing necessary for developers to get started in processing/transforming/exporting data out of EdgeX. If you're new to the SDK - check out the Getting Started guide. 
If you're already familiar - check out the various sections about the SDK: Section Description Application Service API Provides a list of all available APIs on the interface used to build Application Services App Function Context API Provides a list of all available APIs on the context interface that is available inside of a pipeline function Pipeline Function Error Handling Describes how to properly handle pipeline execution failures Built-In Pipeline Functions Provides a list of the available pipeline functions/transforms in the SDK Advanced Topics Learn about other ways to leverage the SDK beyond basic use cases The App Functions SDK implements a small REST API which can be seen Here .","title":"App Functions SDK Introduction"},{"location":"microservices/application/ApplicationFunctionsSDK/#app-functions-sdk-introduction","text":"Welcome to the App Functions SDK for EdgeX. This SDK is meant to provide all the plumbing necessary for developers to get started in processing/transforming/exporting data out of EdgeX. If you're new to the SDK - check out the Getting Started guide. 
If you're already familiar - check out the various sections about the SDK: Section Description Application Service API Provides a list of all available APIs on the interface used to build Application Services App Function Context API Provides a list of all available APIs on the context interface that is available inside of a pipeline function Pipeline Function Error Handling Describes how to properly handle pipeline execution failures Built-In Pipeline Functions Provides a list of the available pipeline functions/transforms in the SDK Advanced Topics Learn about other ways to leverage the SDK beyond basic use cases The App Functions SDK implements a small REST API which can be seen Here .","title":"App Functions SDK Introduction"},{"location":"microservices/application/ApplicationServiceAPI/","text":"Application Service API The ApplicationService API is the central API for creating an EdgeX Application Service. EdgeX 2.0 For EdgeX 2.0 the ApplicationService API and factory functions replace direct access to the AppFunctionsSDK struct. The new ApplicationService API is as follows: type AppFunction = func ( appCxt AppFunctionContext , data interface {}) ( bool , interface {}) type FunctionPipeline struct { Id string Transforms [] AppFunction Topic string Hash string } type ApplicationService interface { ApplicationSettings () map [ string ] string GetAppSetting ( setting string ) ( string , error ) GetAppSettingStrings ( setting string ) ([] string , error ) LoadCustomConfig ( config UpdatableConfig , sectionName string ) error ListenForCustomConfigChanges ( configToWatch interface {}, sectionName string , changedCallback func ( interface {})) error SetFunctionsPipeline ( transforms ... AppFunction ) error *** DEPRECATED *** SetDefaultFunctionsPipeline ( transforms ... AppFunction ) error AddFunctionsPipelineByTopics ( id string , topics [] string , transforms ... 
AppFunction ) error LoadConfigurablePipeline () ([] AppFunction , error ) *** DEPRECATED by LoadConfigurableFunctionPipelines *** LoadConfigurableFunctionPipelines () ( map [ string ] FunctionPipeline , error ) MakeItRun () error MakeItStop () GetSecret ( path string , keys ... string ) ( map [ string ] string , error ) StoreSecret ( path string , secretData map [ string ] string ) error LoggingClient () logger . LoggingClient EventClient () interfaces . EventClient CommandClient () interfaces . CommandClient NotificationClient () interfaces . NotificationClient SubscriptionClient () interfaces . SubscriptionClient DeviceServiceClient () interfaces . DeviceServiceClient DeviceProfileClient () interfaces . DeviceProfileClient DeviceClient () interfaces . DeviceClient RegistryClient () registry . Client AddBackgroundPublisher ( capacity int ) ( BackgroundPublisher , error ) AddBackgroundPublisherWithTopic ( capacity int , topic string ) ( BackgroundPublisher , error ) BuildContext ( correlationId string , contentType string ) AppFunctionContext AddRoute ( route string , handler func ( http . ResponseWriter , * http . Request ), methods ... string ) error RequestTimeout () time . Duration RegisterCustomTriggerFactory ( name string , factory func ( TriggerConfig ) ( Trigger , error )) error } Factory Functions The App Functions SDK provides two factory functions for creating an ApplicationService NewAppService NewAppService(serviceKey string) (interfaces.ApplicationService, bool) This factory function returns an interfaces.ApplicationService using the default Target Type of dtos.Event and initializes the service. The second bool return parameter will be true if successfully initialized, otherwise it will be false when error(s) occurred during initialization. All error(s) are logged so the caller just needs to call os.Exit(-1) if false is returned. Example - NewAppService const serviceKey = \"app-myservice\" ... service , ok := pkg . NewAppService ( serviceKey ) if ! 
ok { os . Exit ( - 1 ) } NewAppServiceWithTargetType NewAppServiceWithTargetType(serviceKey string, targetType interface{}) (interfaces.ApplicationService, bool) This factory function returns an interfaces.ApplicationService using the passed in Target Type and initializes the service. The second bool return parameter will be true if successfully initialized, otherwise it will be false when error(s) occurred during initialization. All error(s) are logged so the caller just needs to call os.Exit(-1) if false is returned. See the Target Type advanced topic for more details. Example - NewAppServiceWithTargetType const serviceKey = \"app-myservice\" ... service , ok := pkg . NewAppServiceWithTargetType ( serviceKey , & [] byte {}) if ! ok { os . Exit ( - 1 ) } Custom Configuration APIs The following ApplicationService APIs allow your service to access its custom configuration from the TOML file and/or Configuration Provider. See the Custom Configuration advanced topic for more details. ApplicationSettings ApplicationSettings() map[string]string This API returns the complete key/value map of custom settings Example - ApplicationSettings [ApplicationSettings] Greeting = \"Hello World\" appSettings := service . ApplicationSettings () greeting := appSettings [ \"Greeting\" ] service . LoggingClient . Info ( greeting ) GetAppSetting GetAppSetting(setting string) (string, error) This API is a convenience API that returns a single setting from the [ApplicationSettings] section of the service configuration. An error is returned if the specified setting is not found. Example - GetAppSetting [ApplicationSettings] Greeting = \"Hello World\" greeting , err := service . GetAppSetting ( \"Greeting\" ) if err != nil { ... } service . LoggingClient . Info ( greeting ) GetAppSettingStrings GetAppSettingStrings(setting string) ([]string, error) This API is a convenience API that parses the string value for the specified custom application setting as a comma separated list. 
It returns the list of strings. An error is returned if the specified setting is not found. Example - GetAppSettingStrings [ApplicationSettings] Greetings = "Hello World, Welcome World, Hi World" greetings, err := service.GetAppSettingStrings("Greetings") if err != nil { ... } for _, greeting := range greetings { service.LoggingClient().Info(greeting) } LoadCustomConfig LoadCustomConfig(config UpdatableConfig, sectionName string) error This API loads the service's Structured Custom Configuration from the local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration if the service is using the Configuration Provider. The UpdateFromRaw API (UpdatableConfig interface) will be called on the custom configuration when the configuration is loaded from the Configuration Provider. The custom config must implement the UpdatableConfig interface. Example - LoadCustomConfig [AppCustom] # Can be any name you choose ResourceNames = "Boolean, Int32, Uint32, Float32, Binary" SomeValue = 123 [AppCustom.SomeService] Host = "localhost" Port = 9080 Protocol = "http" type ServiceConfig struct { AppCustom AppCustomConfig } type AppCustomConfig struct { ResourceNames string SomeValue int SomeService HostInfo } func (c *ServiceConfig) UpdateFromRaw(rawConfig interface{}) bool { configuration, ok := rawConfig.(*ServiceConfig) if !ok { return false //errors.New("unable to cast raw config to type 'ServiceConfig'") } *c = *configuration return true } ... serviceConfig := &ServiceConfig{} err := service.LoadCustomConfig(serviceConfig, "AppCustom") if err != nil { ...
} See the App Service Template for a complete example of using Structured Custom Configuration. ListenForCustomConfigChanges ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error This API starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider, the provided changedCallback function is called with the updated section of configuration. The service must then implement the code to copy the updates into its copy of the configuration and respond to the updates if needed. Example - ListenForCustomConfigChanges [AppCustom] # Can be any name you choose ResourceNames = "Boolean, Int32, Uint32, Float32, Binary" SomeValue = 123 [AppCustom.SomeService] Host = "localhost" Port = 9080 Protocol = "http" ... err := service.ListenForCustomConfigChanges(&serviceConfig.AppCustom, "AppCustom", ProcessConfigUpdates) if err != nil { logger.Errorf("unable to watch custom writable configuration: %s", err.Error()) } ... func (app *myApp) ProcessConfigUpdates(rawWritableConfig interface{}) { updated, ok := rawWritableConfig.(*config.AppCustomConfig) if !ok { ... return } previous := app.serviceConfig.AppCustom app.serviceConfig.AppCustom = *updated if reflect.DeepEqual(previous, *updated) { logger.Info("No changes detected") return } if previous.SomeValue != updated.SomeValue { logger.Infof("AppCustom.SomeValue changed to: %d", updated.SomeValue) } if previous.ResourceNames != updated.ResourceNames { logger.Infof("AppCustom.ResourceNames changed to: %s", updated.ResourceNames) } if !reflect.DeepEqual(previous.SomeService, updated.SomeService) { logger.Infof("AppCustom.SomeService changed to: %v", updated.SomeService) } } See the App Service Template for a complete example of using Structured Custom Configuration. Function Pipeline APIs The following ApplicationService APIs allow your service to set the Functions Pipeline and to start and stop it. AppFunction type AppFunction = func(appCxt AppFunctionContext, data interface{}) (bool, interface{}) This type defines the signature that all pipeline functions must implement. FunctionPipeline This type defines the struct that contains the metadata for a functions pipeline instance. type FunctionPipeline struct { Id string Transforms []AppFunction Topic string Hash string } SetFunctionsPipeline SetFunctionsPipeline(transforms ...AppFunction) error This API has been deprecated (replaced by SetDefaultFunctionsPipeline) and will be removed in a future release. It functions the same as SetDefaultFunctionsPipeline. SetDefaultFunctionsPipeline SetDefaultFunctionsPipeline(transforms ...AppFunction) error This API sets the default functions pipeline with the specified list of Application Functions. This pipeline is executed for all messages received from the configured trigger. Note that the functions are executed in the order provided in the list. An error is returned if the list is empty. Example - SetDefaultFunctionsPipeline sample := functions.NewSample() err = service.SetDefaultFunctionsPipeline( transforms.NewFilterFor(deviceNames).FilterByDeviceName, sample.LogEventDetails, sample.ConvertEventToXML, sample.OutputXML) if err != nil { app.lc.Errorf("SetDefaultFunctionsPipeline returned error: %s", err.Error()) return -1 } AddFunctionsPipelineForTopics AddFunctionsPipelineForTopics(id string, topics []string, transforms ...AppFunction) error This API adds a functions pipeline with the specified unique ID and list of functions (transforms) to be executed when the received topic matches one of the specified pipeline topics. See the Pipeline Per Topic section for more details.
Example - AddFunctionsPipelineForTopics sample := functions.NewSample() err = service.AddFunctionsPipelineForTopics("Floats-Pipeline", []string{"edgex/events/#/#/Random-Float-Device/#"}, transforms.NewFilterFor(deviceNames).FilterByDeviceName, sample.LogEventDetails, sample.ConvertEventToXML, sample.OutputXML) if err != nil { ... return -1 } LoadConfigurablePipeline LoadConfigurablePipeline() ([]AppFunction, error) This API loads the default function pipeline from configuration. An error is returned if the configuration is not valid, i.e. missing required function parameters, invalid function name, etc. Warning This API is Deprecated, has been replaced by LoadConfigurableFunctionPipelines below and will be removed in a future release. LoadConfigurableFunctionPipelines LoadConfigurableFunctionPipelines() (map[string]FunctionPipeline, error) This API loads the function pipelines (default and per topic) from configuration. An error is returned if the configuration is not valid, i.e. missing required function parameters, invalid function name, etc. Note This API is only useful if the pipeline is always defined in configuration, as is the case with App Service Configurable. Example - LoadConfigurableFunctionPipelines configuredPipelines, err := service.LoadConfigurableFunctionPipelines() if err != nil { ... os.Exit(-1) } ... for _, pipeline := range configuredPipelines { switch pipeline.Id { case interfaces.DefaultPipelineId: if err = service.SetDefaultFunctionsPipeline(pipeline.Transforms...); err != nil { ... os.Exit(-1) } default: if err = service.AddFunctionsPipelineForTopics(pipeline.Id, []string{pipeline.Topic}, pipeline.Transforms...); err != nil { ... os.Exit(-1) } } } MakeItRun MakeItRun() error This API starts the configured trigger to allow the Functions Pipeline to execute when the trigger receives data. The internal webserver is also started.
This is a long-running API which does not return until the service is stopped or MakeItStop() is called. An error is returned if the trigger cannot be created or initialized, or if the internal webserver encounters an error. Example - MakeItRun if err := service.MakeItRun(); err != nil { logger.Errorf("MakeItRun returned error: %s", err.Error()) os.Exit(-1) } // Do any required cleanup here, if needed os.Exit(0) MakeItStop MakeItStop() This API stops the configured trigger so that the functions pipeline no longer executes. The internal webserver continues to accept requests. See the Stopping the Service advanced topic for more details. Example - MakeItStop service.MakeItStop() ... Secrets APIs The following ApplicationService APIs allow your service to retrieve and store secrets from/to the service's SecretStore. See the Secrets advanced topic for more details about using secrets. GetSecret GetSecret(path string, keys ...string) (map[string]string, error) This API returns the secret data from the secret store (secure or insecure) for the specified path. An error is returned if the path is not found or any of the keys (if specified) are not found. Omit keys if all secret data for the specified path is required. Example - GetSecret secretData, err := service.GetSecret("mqtt") if err != nil { ... } username := secretData["user"] password := secretData["password"] ... StoreSecret StoreSecret(path string, secretData map[string]string) error This API stores the specified secret data into the secret store (secure mode only) for the specified path. An error is returned if: the specified secret data is empty; the secure secret store is not in use, i.e. not valid with InsecureSecrets configuration; the secure secret provider is not properly initialized; or there are connection issues with the Secret Store service. Note Typically Application Services only need to retrieve secrets via the code. The /secret REST API is used to seed secrets into the service's SecretStore.
Example - StoreSecret secretData := generateMqttCredentials() err := service.StoreSecret("mqtt", secretData) if err != nil { ... } ... Client APIs The following ApplicationService APIs allow your service to access the various EdgeX clients and their APIs. LoggingClient LoggingClient() logger.LoggingClient This API returns the LoggingClient instance which the service uses to log messages. See the LoggingClient interface for more details. Example - LoggingClient service.LoggingClient().Info("Hello World") service.LoggingClient().Errorf("Some error occurred: %s", err.Error()) RegistryClient RegistryClient() registry.Client This API returns the Registry Client. Note the registry must be enabled, otherwise this will return nil. See the Registry Client interface for more details. Useful if the service needs to add additional health checks or needs to get the endpoint of another registered service. EventClient EventClient() interfaces.EventClient This API returns the Event Client. Note if Core Data is not specified in the Clients configuration, this will return nil. See the Event Client interface for more details. Useful for adding, deleting or querying Events. CommandClient CommandClient() interfaces.CommandClient This API returns the Command Client. Note if Support Command is not specified in the Clients configuration, this will return nil. See the Command Client interface for more details. Useful for issuing commands to devices. NotificationClient NotificationClient() interfaces.NotificationClient This API returns the Notification Client. Note if Support Notifications is not specified in the Clients configuration, this will return nil. See the Notification Client interface for more details. Useful for sending notifications. SubscriptionClient SubscriptionClient() interfaces.SubscriptionClient This API returns the Subscription Client. Note if Support Notifications is not specified in the Clients configuration, this will return nil.
See the Subscription Client interface for more details. Useful for creating notification subscriptions. DeviceServiceClient DeviceServiceClient() interfaces.DeviceServiceClient This API returns the Device Service Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Service Client interface for more details. Useful for querying information about a Device Service. DeviceProfileClient DeviceProfileClient() interfaces.DeviceProfileClient This API returns the Device Profile Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Profile Client interface for more details. Useful for querying information about a Device Profile such as Device Resource details. DeviceClient DeviceClient() interfaces.DeviceClient This API returns the Device Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Client interface for more details. Useful for querying the list of devices for a specific Device Service or Device Profile. Background Publisher APIs The following ApplicationService APIs allow Application Services to have background publishers. See the Background Publishing advanced topic for more details and an example. AddBackgroundPublisher AddBackgroundPublisher(capacity int) (BackgroundPublisher, error) This API adds and returns a BackgroundPublisher which is used to publish asynchronously to the EdgeX MessageBus. Not valid for use with the HTTP or External MQTT triggers. AddBackgroundPublisherWithTopic AddBackgroundPublisherWithTopic(capacity int, topic string) (BackgroundPublisher, error) This API adds and returns a BackgroundPublisher which is used to publish asynchronously to the EdgeX MessageBus on the specified topic. Not valid for use with the HTTP or External MQTT triggers.
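Putting the two APIs above together, a hedged usage sketch follows. It assumes the v2 BackgroundPublisher interface whose Publish method takes the payload and an AppFunctionContext created with BuildContext; the "custom-export" topic, the myData value and the lc logger are illustrative only, not names from this page.

```go
// Hedged sketch: "custom-export", myData and lc are hypothetical stand-ins.
publisher, err := service.AddBackgroundPublisherWithTopic(10, "custom-export")
if err != nil {
	lc.Errorf("unable to add background publisher: %s", err.Error())
	os.Exit(-1)
}

go func() {
	payload, err := json.Marshal(myData)
	if err != nil {
		lc.Errorf("failed to marshal payload: %s", err.Error())
		return
	}
	// BuildContext supplies the AppFunctionContext that Publish expects.
	ctx := service.BuildContext(uuid.NewString(), common.ContentTypeJSON)
	if err := publisher.Publish(payload, ctx); err != nil {
		lc.Errorf("background publish failed: %s", err.Error())
	}
}()
```

The capacity argument (10 here) sizes the internal buffer, so Publish can absorb short bursts without blocking the producing goroutine.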
BuildContext BuildContext(correlationId string, contentType string) AppFunctionContext This API allows external callers that may need a context (e.g. background publishers) to easily create one. Other APIs AddRoute AddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error This API adds a custom REST route to the application service's internal webserver. A reference to the ApplicationService is added to the context that is passed to the handler, which can be retrieved using the AppService key. See the Custom REST Endpoints advanced topic for more details and an example. RequestTimeout RequestTimeout() time.Duration This API returns the parsed value of the Service.RequestTimeout configuration setting. The setting is parsed on start-up so that any error is caught then. Example - RequestTimeout [Service] : RequestTimeout = "60s" : timeout := service.RequestTimeout() RegisterCustomTriggerFactory RegisterCustomTriggerFactory(name string, factory func(TriggerConfig) (Trigger, error)) error This API registers a trigger factory for a custom trigger to be used. See the Custom Triggers section for more details and example.
The new ApplicationService API is as follows: type AppFunction = func ( appCxt AppFunctionContext , data interface {}) ( bool , interface {}) type FunctionPipeline struct { Id string Transforms [] AppFunction Topic string Hash string } type ApplicationService interface { ApplicationSettings () map [ string ] string GetAppSetting ( setting string ) ( string , error ) GetAppSettingStrings ( setting string ) ([] string , error ) LoadCustomConfig ( config UpdatableConfig , sectionName string ) error ListenForCustomConfigChanges ( configToWatch interface {}, sectionName string , changedCallback func ( interface {})) error SetFunctionsPipeline ( transforms ... AppFunction ) error *** DEPRECATED *** SetDefaultFunctionsPipeline ( transforms ... AppFunction ) error AddFunctionsPipelineByTopics ( id string , topics [] string , transforms ... AppFunction ) error LoadConfigurablePipeline () ([] AppFunction , error ) *** DEPRECATED by LoadConfigurableFunctionPipelines *** LoadConfigurableFunctionPipelines () ( map [ string ] FunctionPipeline , error ) MakeItRun () error MakeItStop () GetSecret ( path string , keys ... string ) ( map [ string ] string , error ) StoreSecret ( path string , secretData map [ string ] string ) error LoggingClient () logger . LoggingClient EventClient () interfaces . EventClient CommandClient () interfaces . CommandClient NotificationClient () interfaces . NotificationClient SubscriptionClient () interfaces . SubscriptionClient DeviceServiceClient () interfaces . DeviceServiceClient DeviceProfileClient () interfaces . DeviceProfileClient DeviceClient () interfaces . DeviceClient RegistryClient () registry . Client AddBackgroundPublisher ( capacity int ) ( BackgroundPublisher , error ) AddBackgroundPublisherWithTopic ( capacity int , topic string ) ( BackgroundPublisher , error ) BuildContext ( correlationId string , contentType string ) AppFunctionContext AddRoute ( route string , handler func ( http . ResponseWriter , * http . Request ), methods ... 
string ) error RequestTimeout () time . Duration RegisterCustomTriggerFactory ( name string , factory func ( TriggerConfig ) ( Trigger , error )) error }","title":"Application Service API"},{"location":"microservices/application/ApplicationServiceAPI/#factory-functions","text":"The App Functions SDK provides two factory functions for creating an ApplicationService","title":"Factory Functions"},{"location":"microservices/application/ApplicationServiceAPI/#newappservice","text":"NewAppService(serviceKey string) (interfaces.ApplicationService, bool) This factory function returns an interfaces.ApplicationService using the default Target Type of dtos.Event and initializes the service. The second bool return parameter will be true if successfully initialized, otherwise it will be false when error(s) occurred during initialization. All error(s) are logged so the caller just needs to call os.Exit(-1) if false is returned. Example - NewAppService const serviceKey = \"app-myservice\" ... service , ok := pkg . NewAppService ( serviceKey ) if ! ok { os . Exit ( - 1 ) }","title":"NewAppService"},{"location":"microservices/application/ApplicationServiceAPI/#newappservicewithtargettype","text":"NewAppServiceWithTargetType(serviceKey string, targetType interface{}) (interfaces.ApplicationService, bool) This factory function returns an interfaces.ApplicationService using the passed in Target Type and initializes the service. The second bool return parameter will be true if successfully initialized, otherwise it will be false when error(s) occurred during initialization. All error(s) are logged so the caller just needs to call os.Exit(-1) if false is returned. See the Target Type advanced topic for more details. Example - NewAppServiceWithTargetType const serviceKey = \"app-myservice\" ... service , ok := pkg . NewAppServiceWithTargetType ( serviceKey , & [] byte {}) if ! ok { os . 
Exit ( - 1 ) }","title":"NewAppServiceWithTargetType"},{"location":"microservices/application/ApplicationServiceAPI/#custom-configuration-apis","text":"The following ApplicationService APIs allow your service to access their custom configuration from the TOML file and/or Configuration Provider. See the Custom Configuration advanced topic for more details.","title":"Custom Configuration APIs"},{"location":"microservices/application/ApplicationServiceAPI/#applicationsettings","text":"ApplicationSettings() map[string]string This API returns the complete key/value map of custom settings Example - ApplicationSettings [ApplicationSettings] Greeting = \"Hello World\" appSettings := service . ApplicationSettings () greeting := appSettings [ \"Greeting\" ] service . LoggingClient . Info ( greeting )","title":"ApplicationSettings"},{"location":"microservices/application/ApplicationServiceAPI/#getappsetting","text":"GetAppSetting(setting string) (string, error) This API is a convenience API that returns a single setting from the [ApplicationSetting] section of the service configuration. An error is returned if the specified setting is not found. Example - GetAppSetting [ApplicationSettings] Greeting = \"Hello World\" greeting , err := service . GetAppSetting [ \"Greeting\" ] if err != nil { ... } service . LoggingClient . Info ( greeting )","title":"GetAppSetting"},{"location":"microservices/application/ApplicationServiceAPI/#getappsettingstrings","text":"GetAppSettingStrings(setting string) ([]string, error) This API is a convenience API that parses the string value for the specified custom application setting as a comma separated list. It returns the list of strings. An error is returned if the specified setting is not found. Example - GetAppSettingStrings [ApplicationSettings] Greetings = \"Hello World, Welcome World, Hi World\" greetings , err := service . GetAppSettingStrings [ \"Greetings\" ] if err != nil { ... } for _ , greeting := range greetings { service . 
LoggingClient . Info ( greeting ) }","title":"GetAppSettingStrings"},{"location":"microservices/application/ApplicationServiceAPI/#loadcustomconfig","text":"LoadCustomConfig(config UpdatableConfig, sectionName string) error This API loads the service's Structured Custom Configuration from local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration if service is using the Configuration Provider. The UpdateFromRaw API ( UpdatableConfig interface) will be called on the custom configuration when the configuration is loaded from the Configuration Provider. The custom config must implement the UpdatableConfig interface. Example - LoadCustomConfig [ AppCustom ] # Can be any name you choose ResourceNames = \"Boolean, Int32, Uint32, Float32, Binary\" SomeValue = 123 [AppCustom.SomeService] Host = \"localhost\" Port = 9080 Protocol = \"http\" type ServiceConfig struct { AppCustom AppCustomConfig } type AppCustomConfig struct { ResourceNames string SomeValue int SomeService HostInfo } func ( c * ServiceConfig ) UpdateFromRaw ( rawConfig interface {}) bool { configuration , ok := rawConfig .( * ServiceConfig ) if ! ok { return false //errors.New(\"unable to cast raw config to type 'ServiceConfig'\") } * c = * configuration return true } ... serviceConfig := & ServiceConfig {} err := service . LoadCustomConfig ( serviceConfig , \"AppCustom\" ) if err != nil { ... } See the App Service Template for a complete example of using Structured Custom Configuration","title":"LoadCustomConfig"},{"location":"microservices/application/ApplicationServiceAPI/#listenforcustomconfigchanges","text":"ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error This API starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. 
When changes are received from the Configuration Provider the provided changedCallback function is called with the updated section of configuration. The service must then implement the code to copy the updates into it's copy of the configuration and respond to the updates if needed. Example - ListenForCustomConfigChanges [ AppCustom ] # Can be any name you choose ResourceNames = \"Boolean, Int32, Uint32, Float32, Binary\" SomeValue = 123 [AppCustom.SomeService] Host = \"localhost\" Port = 9080 Protocol = \"http\" ... err := service . ListenForCustomConfigChanges ( & serviceConfig . AppCustom , \"AppCustom\" , ProcessConfigUpdates ) if err != nil { logger . Errorf ( \"unable to watch custom writable configuration: %s\" , err . Error ()) } ... func ( app * myApp ) ProcessConfigUpdates ( rawWritableConfig interface {}) { updated , ok := rawWritableConfig .( * config . AppCustomConfig ) if ! ok { ... return } previous := app . serviceConfig . AppCustom app . serviceConfig . AppCustom = * updated if reflect . DeepEqual ( previous , updated ) { logger . Info ( \"No changes detected\" ) return } if previous . SomeValue != updated . SomeValue { logger . Infof ( \"AppCustom.SomeValue changed to: %d\" , updated . SomeValue ) } if previous . ResourceNames != updated . ResourceNames { logger . Infof ( \"AppCustom.ResourceNames changed to: %s\" , updated . ResourceNames ) } if ! reflect . DeepEqual ( previous . SomeService , updated . SomeService ) { logger . Infof ( \"AppCustom.SomeService changed to: %v\" , updated . 
SomeService ) } } See the App Service Template for a complete example of using Structured Custom Configuration","title":"ListenForCustomConfigChanges"},{"location":"microservices/application/ApplicationServiceAPI/#function-pipeline-apis","text":"The following ApplicationService APIs allow your service to set the Functions Pipeline and start and stop the Functions Pipeline.","title":"Function Pipeline APIs"},{"location":"microservices/application/ApplicationServiceAPI/#appfunction","text":"type AppFunction = func(appCxt AppFunctionContext, data interface{}) (bool, interface{}) This type defines the signature that all pipeline functions must implement.","title":"AppFunction"},{"location":"microservices/application/ApplicationServiceAPI/#functionpipeline","text":"This type defines the struct that contains the metadata for a functions pipeline instance. type FunctionPipeline struct { Id string Transforms [] AppFunction Topic string Hash string }","title":"FunctionPipeline"},{"location":"microservices/application/ApplicationServiceAPI/#setfunctionspipeline","text":"SetFunctionsPipeline(transforms ...AppFunction) error This API has been deprecated (Replaced by SetDefaultFunctionsPipeline) and will be removed in a future release. Functions the same as SetDefaultFunctionsPipeline.","title":"SetFunctionsPipeline"},{"location":"microservices/application/ApplicationServiceAPI/#setdefaultfunctionspipeline","text":"SetDefaultFunctionsPipeline(transforms ...AppFunction) error This API sets the default functions pipeline with the specified list of Application Functions. This pipeline is executed for all messages received from the configured trigger. Note that the functions are executed in the order provided in the list. An error is returned if the list is empty. Example - SetDefaultFunctionsPipeline sample := functions . NewSample () err = service . SetDefaultFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , sample . LogEventDetails , sample . 
ConvertEventToXML , sample . OutputXML ) if err != nil { app . lc . Errorf ( \"SetDefaultFunctionsPipeline returned error: %s\" , err . Error ()) return - 1 }","title":"SetDefaultFunctionsPipeline"},{"location":"microservices/application/ApplicationServiceAPI/#addfunctionspipelinefortopics","text":"AddFunctionsPipelineForTopics(id string, topics []string, transforms ...AppFunction) error This API adds a functions pipeline with the specified unique ID and list of functions (transforms) to be executed when the received topic matches one of the specified pipeline topics. See the Pipeline Per Topic section for more details. Example - AddFunctionsPipelineForTopics sample := functions . NewSample () err = service . AddFunctionsPipelineForTopic ( \"Floats-Pipeline\" , [] string { \"edgex/events/#/#/Random-Float-Device/#\" }, transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , sample . LogEventDetails , sample . ConvertEventToXML , sample . OutputXML ) if err != nil { ... return - 1 }","title":"AddFunctionsPipelineForTopics"},{"location":"microservices/application/ApplicationServiceAPI/#loadconfigurablepipeline","text":"LoadConfigurablePipeline() ([]AppFunction, error) This API loads the default function pipeline from configuration. An error is returned if the configuration is not valid, i.e. missing required function parameters, invalid function name, etc. Warning This API is Deprecated , has been replaced by LoadConfigurableFunctionPipelines below and will be removed in a future release.","title":"LoadConfigurablePipeline"},{"location":"microservices/application/ApplicationServiceAPI/#loadconfigurablefunctionpipelines","text":"LoadConfigurableFunctionPipelines() (map[string]FunctionPipeline, error) This API loads the function pipelines (default and per topic) from configuration. An error is returned if the configuration is not valid, i.e. missing required function parameters, invalid function name, etc. 
Note This API is only useful if pipeline is always defined in configuration as is with App Service Configurable. Example - LoadConfigurableFunctionPipelines configuredPipelines , err := service . LoadConfigurableFunctionPipelines () if err != nil { ... os . Exit ( - 1 ) } ... for _ , pipeline := range configuredPipelines { switch pipeline . Id { case interfaces . DefaultPipelineId : if err = service . SetFunctionsPipeline ( pipeline . Transforms ... ); err != nil { ... os . Exit ( - 1 ) } default : if err = service . AddFunctionsPipelineForTopic ( pipeline . Id , pipeline . Topic , pipeline . Transforms ... ); err != nil { ... os . Exit ( - 1 ) } } }","title":"LoadConfigurableFunctionPipelines"},{"location":"microservices/application/ApplicationServiceAPI/#makeitrun","text":"MakeItRun() error This API starts the configured trigger to allow the Functions Pipeline to execute when the trigger receives data. The internal webserver is also started. This is a long running API which does not return until the service is stopped or MakeItStop() is called. An error is returned if the trigger can not be create or initialized or if the internal webserver encounters an error. Example - MakeItRun if err := service . MakeItRun (); err != nil { logger . Errorf ( \"MakeItRun returned error: %s\" , err . Error ()) os . exit ( - 1 ) } // Do any required cleanup here, if needed os . exit ( 0 )","title":"MakeItRun"},{"location":"microservices/application/ApplicationServiceAPI/#makeitstop","text":"MakeItStop() This API stops the configured trigger so that the functions pipeline no longer executes. The internal webserver continues to accept requests. See Stopping the Service advanced topic for more details Example - MakeItStop service . MakeItStop () ...","title":"MakeItStop"},{"location":"microservices/application/ApplicationServiceAPI/#secrets-apis","text":"The following ApplicationService APIs allow your service retrieve and store secrets from/to the service's SecretStore. 
See the Secrets advanced topic for more details about using secrets.","title":"Secrets APIs"},{"location":"microservices/application/ApplicationServiceAPI/#getsecret","text":"GetSecret(path string, keys ...string) (map[string]string, error) This API returns the secret data from the secret store (secure or insecure) for the specified path. An error is returned if the path is not found or any of the keys (if specified) are not found. Omit keys if all secret data for the specified path is required. Example - GetSecret secretData , err := service . GetSecret ( \"mqtt\" ) if err != nil { ... } username := secretData [ \"user\" ] password := secretData [ \"password\" ] ...","title":"GetSecret"},{"location":"microservices/application/ApplicationServiceAPI/#storesecret","text":"StoreSecret(path string, secretData map[string]string) error This API stores the specified secret data into the secret store (secure mode only) for the specified path An error is returned if: Specified secret data is empty Not using the secure secret store, i.e. not valid with InsecureSecrets configuration Secure secret provider is not properly initialized Connection issues with Secret Store service. Note Typically Application Services only needs to retrieve secrets via the code. The /secret REST API is used to seed secrets into the service's SecretStore. Example - StoreSecret secretData := generateMqttCredentials () err := service . StoreSecret ( \"mqtt\" , secretData ) if err != nil { ... } ...","title":"StoreSecret"},{"location":"microservices/application/ApplicationServiceAPI/#client-apis","text":"The following ApplicationService APIs allow your service access the various EdgeX clients and their APIs.","title":"Client APIs"},{"location":"microservices/application/ApplicationServiceAPI/#loggingclient","text":"LoggingClient() logger.LoggingClient This API returns the LoggingClient instance which the service uses to log messages. See the LoggingClient interface for more details. 
Example - LoggingClient service . LoggingClient (). Info ( \"Hello World\" ) service . LoggingClient (). Errorf ( \"Some error occurred: %s\" , err )","title":"LoggingClient"},{"location":"microservices/application/ApplicationServiceAPI/#registryclient","text":"RegistryClient() registry.Client This API returns the Registry Client. Note the registry must be enabled, otherwise this will return nil. See the Registry Client interface for more details. Useful if the service needs to add additional health checks or to get the endpoint of another registered service.","title":"RegistryClient"},{"location":"microservices/application/ApplicationServiceAPI/#eventclient","text":"EventClient() interfaces.EventClient This API returns the Event Client. Note if Core Data is not specified in the Clients configuration, this will return nil. See the Event Client interface for more details. Useful for adding, deleting or querying Events.","title":"EventClient"},{"location":"microservices/application/ApplicationServiceAPI/#commandclient","text":"CommandClient() interfaces.CommandClient This API returns the Command Client. Note if Support Command is not specified in the Clients configuration, this will return nil. See the Command Client interface for more details. Useful for issuing commands to devices.","title":"CommandClient"},{"location":"microservices/application/ApplicationServiceAPI/#notificationclient","text":"NotificationClient() interfaces.NotificationClient This API returns the Notification Client. Note if Support Notifications is not specified in the Clients configuration, this will return nil. See the Notification Client interface for more details. Useful for sending notifications.","title":"NotificationClient"},{"location":"microservices/application/ApplicationServiceAPI/#subscriptionclient","text":"SubscriptionClient() interfaces.SubscriptionClient This API returns the Subscription client.
Note if Support Notifications is not specified in the Clients configuration, this will return nil. See the Subscription Client interface for more details. Useful for creating notification subscriptions.","title":"SubscriptionClient"},{"location":"microservices/application/ApplicationServiceAPI/#deviceserviceclient","text":"DeviceServiceClient() interfaces.DeviceServiceClient This API returns the Device Service Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Service Client interface for more details. Useful for querying information about a Device Service.","title":"DeviceServiceClient"},{"location":"microservices/application/ApplicationServiceAPI/#deviceprofileclient","text":"DeviceProfileClient() interfaces.DeviceProfileClient This API returns the Device Profile Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Profile Client interface for more details. Useful for querying information about a Device Profile such as Device Resource details.","title":"DeviceProfileClient"},{"location":"microservices/application/ApplicationServiceAPI/#deviceclient","text":"DeviceClient() interfaces.DeviceClient This API returns the Device Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Client interface for more details. Useful for querying list of devices for a specific Device Service or Device Profile.","title":"DeviceClient"},{"location":"microservices/application/ApplicationServiceAPI/#background-publisher-apis","text":"The following ApplicationService APIs allow Application Services to have background publishers. 
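Because each of the Client APIs above returns nil when the corresponding service is not listed in the Clients configuration, code should guard before use. A minimal pure-Go sketch of that guard pattern follows; `commandClient` and `issueCommand` are hypothetical stand-ins, not SDK types:

```go
package main

import "fmt"

// commandClient is a hypothetical stand-in for interfaces.CommandClient;
// the real client comes from service.CommandClient() and is nil when
// Support Command is not listed in the Clients configuration.
type commandClient interface{}

// issueCommand guards against a nil client before use, as the Client
// APIs above require. The error message is illustrative only.
func issueCommand(client commandClient) error {
	if client == nil {
		return fmt.Errorf("Command client not available: is Support Command configured in Clients?")
	}
	// ... use the client here ...
	return nil
}

func main() {
	fmt.Println(issueCommand(nil))
}
```

The same nil-guard applies to the Event, Notification, Subscription, Device Service, Device Profile and Device clients.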
See the Background Publishing advanced topic for more details and example.","title":"Background Publisher APIs"},{"location":"microservices/application/ApplicationServiceAPI/#addbackgroundpublisher","text":"AddBackgroundPublisher(capacity int) (BackgroundPublisher, error) This API adds and returns a BackgroundPublisher which is used to publish asynchronously to the EdgeX MessageBus. Not valid for use with the HTTP or External MQTT triggers.","title":"AddBackgroundPublisher"},{"location":"microservices/application/ApplicationServiceAPI/#addbackgroundpublisherwithtopic","text":"AddBackgroundPublisherWithTopic(capacity int, topic string) (BackgroundPublisher, error) This API adds and returns a BackgroundPublisher which is used to publish asynchronously to the EdgeX MessageBus on the specified topic. Not valid for use with the HTTP or External MQTT triggers.","title":"AddBackgroundPublisherWithTopic"},{"location":"microservices/application/ApplicationServiceAPI/#buildcontext","text":"BuildContext(correlationId string, contentType string) AppFunctionContext This API allows external callers that may need a context (e.g. background publishers) to easily create one.","title":"BuildContext"},{"location":"microservices/application/ApplicationServiceAPI/#other-apis","text":"","title":"Other APIs"},{"location":"microservices/application/ApplicationServiceAPI/#addroute","text":"AddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error This API adds a custom REST route to the application service's internal webserver. A reference to the ApplicationService is added to the context that is passed to the handler, which can be retrieved using the AppService key.
See Custom REST Endpoints advanced topic for more details and example.","title":"AddRoute"},{"location":"microservices/application/ApplicationServiceAPI/#requesttimeout","text":"RequestTimeout() time.Duration This API returns the parsed value for the Service.RequestTimeout configuration setting. The setting is parsed on start-up so that any error is caught then. Example - RequestTimeout [Service] : RequestTimeout = \"60s\" : timeout := service . RequestTimeout ()","title":"RequestTimeout()"},{"location":"microservices/application/ApplicationServiceAPI/#registercustomtriggerfactory","text":"RegisterCustomTriggerFactory(name string, factory func(TriggerConfig) (Trigger, error)) error This API registers a trigger factory for a custom trigger to be used. See the Custom Triggers section for more details and example.","title":"RegisterCustomTriggerFactory"},{"location":"microservices/application/ApplicationServices/","text":"Application Services Application Services are a means to get data from EdgeX Foundry to be processed at the edge and/or sent to external systems (be it analytics package, enterprise or on-prem application, cloud systems like Azure IoT, AWS IoT, or Google IoT Core, etc.). Application Services provide the means for data to be prepared (transformed, enriched, filtered, etc.) and groomed (formatted, compressed, encrypted, etc.) before being sent to an endpoint of choice or published back for other Application Services to consume. The export endpoints supported out of the box today include HTTP and MQTT endpoints, but custom endpoints can be implemented alongside the existing functionality. Application Services are based on the idea of a \"Functions Pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event/reading messages) in the order that you've specified. Triggers seed the first function in the pipeline with the data received by the Application Service.
A trigger is something like a message landing in a watched message queue. The most commonly used Trigger is the MessageBus Trigger. See the Triggers section for more details An Applications Functions Software Development Kit (or App Functions SDK ) is available to help create Application Services. Currently the only SDK supported language is Golang, with the intention that community developed and supported SDKs may come in the future for other languages. The SDK is available as a Golang module to remain operating system (OS) agnostic and to comply with the latest EdgeX guidelines on dependency management. Any application built on top of the Application Functions SDK is considered an App Service. This SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a pipeline. Standard Functions As mentioned, an Application Service is a function pipeline. The SDK provides some standard functions that can be used in a functions pipeline. In the future, additional functions will be provided \"standard\" or in other words provided with the SDK. Additionally, developers can implement their own custom functions and add those to their Application Service functions pipeline. One of the most common use cases for working with data that comes from the MessageBus is to filter data down to what is relevant for a given application and to format it. To help facilitate this, six primary functions are included in the SDK. The first is the FilterByProfileName function which will remove events that do or do not match the configured ProfileNames and execution of the pipeline will cease if no event remains after filtering. The second is the FilterByDeviceName function which will remove events that do or do not match the configured DeviceNames and execution of the pipeline will cease if no event remains after filtering. 
The third is the FilterBySourceName function which will remove events that do or do not match the configured SourceNames and execution of the pipeline will cease if no event remains after filtering. A SourceName is the name of the source (command or resource) that the Event was created from. The fourth is the FilterByResourceName which exhibits the same behavior as FilterByDeviceName except filtering the event's Readings on ResourceName instead of DeviceName . Execution of the pipeline will cease if no readings remain after filtering. The fifth and sixth provided functions in the SDK transform the data received to either XML or JSON by calling XMLTransform or JSONTransform . EdgeX 2.0 The FilterByProfileName and FilterBySourceName pipeline functions are new in EdgeX 2.0 with the addition of the ProfileName and SourceName on the V2 Event DTO. FilterByResourceName replaces the FilterByValueDescriptor pipeline function in EdgeX 2.0 with the change of Name to ResourceName on the V2 Reading DTO. This function serves the same purpose of filtering Event Readings. Typically, after filtering and transforming the data as needed, exporting is the last step in a pipeline to ship the data where it needs to go. There are three primary functions included in the SDK to help facilitate this. The first two are the HTTPPost/HTTPPut functions that will POST/PUT the provided data to a specified endpoint, and the third is the MQTTSecretSend() function that will publish the provided data to an MQTT Broker as specified in the configuration. See the Built-in Functions section for the full list of SDK-supplied functions. Note The App SDK provides much more functionality than just filtering, formatting and exporting. The above simple example is provided to demonstrate how the functions pipeline works.
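The keep-or-remove decision shared by the filter functions described above can be re-implemented in a few lines of plain Go. This is an illustrative sketch of the For/Out semantics only, not the SDK's code; `keep` is a hypothetical helper name:

```go
package main

import "fmt"

// keep sketches the For/Out decision used by the SDK's filter functions
// (illustrative only): with filterOut=false a value must be in the list
// to pass; with filterOut=true it must not be. An empty filter list
// lets everything through, matching the documented behavior.
func keep(value string, filterValues []string, filterOut bool) bool {
	if len(filterValues) == 0 {
		return true
	}
	found := false
	for _, v := range filterValues {
		if v == value {
			found = true
			break
		}
	}
	return found != filterOut
}

func main() {
	names := []string{"Device1", "Device2"}
	fmt.Println(keep("Device1", names, false)) // kept by a For filter
	fmt.Println(keep("Device1", names, true))  // removed by an Out filter
	fmt.Println(keep("Device3", names, false)) // removed: not in the For list
}
```

The SDK applies the same decision per Event for the name filters and per Reading for FilterByResourceName, ceasing pipeline execution when nothing remains.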
There are three primary triggers that have been included in the SDK that initiate the start of the function pipeline. First is the HTTP Trigger via a POST to the endpoint /api/v2/trigger with the EdgeX Event data as the body. Second is the EdgeX MessageBus Trigger with connection details as specified in the configuration and the third is the External MQTT Trigger with connection details as specified in the configuration. See the Triggers section for the full list of available Triggers. Finally, data may be sent back to the Trigger response by calling .SetResponseData() on the context. If the trigger is HTTP, then it will be an HTTP Response. If the trigger is EdgeX MessageBus, then it will be published to the configured host and publish topic. If the trigger is External MQTT, then it will be published to the configured publish topic.","title":"Introduction"},{"location":"microservices/application/ApplicationServices/#application-services","text":"Application Services are a means to get data from EdgeX Foundry to be processed at the edge and/or sent to external systems (be it analytics package, enterprise or on-prem application, cloud systems like Azure IoT, AWS IoT, or Google IoT Core, etc.). Application Services provide the means for data to be prepared (transformed, enriched, filtered, etc.) and groomed (formatted, compressed, encrypted, etc.) before being sent to an endpoint of choice or published back for other Application Services to consume. The export endpoints supported out of the box today include HTTP and MQTT endpoints, but custom endpoints can be implemented alongside the existing functionality. Application Services are based on the idea of a \"Functions Pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event/reading messages) in the order that you've specified. Triggers seed the first function in the pipeline with the data received by the Application Service.
A trigger is something like a message landing in a watched message queue. The most commonly used Trigger is the MessageBus Trigger. See the Triggers section for more details An Applications Functions Software Development Kit (or App Functions SDK ) is available to help create Application Services. Currently the only SDK supported language is Golang, with the intention that community developed and supported SDKs may come in the future for other languages. The SDK is available as a Golang module to remain operating system (OS) agnostic and to comply with the latest EdgeX guidelines on dependency management. Any application built on top of the Application Functions SDK is considered an App Service. This SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a pipeline.","title":"Application Services"},{"location":"microservices/application/ApplicationServices/#standard-functions","text":"As mentioned, an Application Service is a function pipeline. The SDK provides some standard functions that can be used in a functions pipeline. In the future, additional functions will be provided \"standard\" or in other words provided with the SDK. Additionally, developers can implement their own custom functions and add those to their Application Service functions pipeline. One of the most common use cases for working with data that comes from the MessageBus is to filter data down to what is relevant for a given application and to format it. To help facilitate this, six primary functions are included in the SDK. The first is the FilterByProfileName function which will remove events that do or do not match the configured ProfileNames and execution of the pipeline will cease if no event remains after filtering. 
The second is the FilterByDeviceName function which will remove events that do or do not match the configured DeviceNames and execution of the pipeline will cease if no event remains after filtering. The third is the FilterBySourceName function which will remove events that do or do not match the configured SourceNames and execution of the pipeline will cease if no event remains after filtering. A SourceName is the name of the source (command or resource) that the Event was created from. The fourth is the FilterByResourceName which exhibits the same behavior as FilterByDeviceName except filtering the event's Readings on ResourceName instead of DeviceName . Execution of the pipeline will cease if no readings remain after filtering. The fifth and sixth provided functions in the SDK transform the data received to either XML or JSON by calling XMLTransform or JSONTransform . EdgeX 2.0 The FilterByProfileName and FilterBySourceName pipeline functions are new in EdgeX 2.0 with the addition of the ProfileName and SourceName on the V2 Event DTO. FilterByResourceName replaces the FilterByValueDescriptor pipeline function in EdgeX 2.0 with the change of Name to ResourceName on the V2 Reading DTO. This function serves the same purpose of filtering Event Readings. Typically, after filtering and transforming the data as needed, exporting is the last step in a pipeline to ship the data where it needs to go. There are three primary functions included in the SDK to help facilitate this. The first two are the HTTPPost/HTTPPut functions that will POST/PUT the provided data to a specified endpoint, and the third is the MQTTSecretSend() function that will publish the provided data to an MQTT Broker as specified in the configuration. See the Built-in Functions section for the full list of SDK-supplied functions. Note The App SDK provides much more functionality than just filtering, formatting and exporting. The above simple example is provided to demonstrate how the functions pipeline works.
With the ability to write your custom pipeline functions, your custom application services can do whatever your use case demands. There are three primary triggers that have been included in the SDK that initiate the start of the function pipeline. First is the HTTP Trigger via a POST to the endpoint /api/v2/trigger with the EdgeX Event data as the body. Second is the EdgeX MessageBus Trigger with connection details as specified in the configuration and the third is the External MQTT Trigger with connection details as specified in the configuration. See the Triggers section for the full list of available Triggers. Finally, data may be sent back to the Trigger response by calling .SetResponseData() on the context. If the trigger is HTTP, then it will be an HTTP Response. If the trigger is EdgeX MessageBus, then it will be published to the configured host and publish topic. If the trigger is External MQTT, then it will be published to the configured publish topic.","title":"Standard Functions"},{"location":"microservices/application/BuiltIn/","text":"Built-In Pipeline Functions All pipeline functions define a type and a factory function which is used to initialize an instance of the type with the required options. The instances returned by these factory functions give access to their appropriate pipeline function pointers when setting up the function pipeline. Example NewFilterFor ([] string { \"Device1\" , \"Device2\" }). FilterByDeviceName Batching Included in the SDK is an in-memory batch function that will hold on to your data before continuing the pipeline. There are three functions provided for batching, each with its own strategy. Factory Method Description NewBatchByTime(timeInterval string) This function returns a BatchConfig instance with time being the strategy that is used for determining when to release the batched data and continue the pipeline. timeInterval is the duration to wait (i.e. 10s ). The time begins after the first piece of data is received.
If no data has been received, no data will be sent forward. // Example: NewBatchByTime ( \"10s\" ). Batch NewBatchByCount(batchThreshold int) This function returns a BatchConfig instance with count being the strategy that is used for determining when to release the batched data and continue the pipeline. batchThreshold is how many events to hold on to (i.e. 25 ). The count begins after the first piece of data is received and once the threshold is met, the batched data will continue forward and the counter will be reset. // Example: NewBatchByCount ( 10 ). Batch NewBatchByTimeAndCount(timeInterval string, batchThreshold int) This function returns a BatchConfig instance with a combination of both time and count being the strategy that is used for determining when to release the batched data and continue the pipeline. Whichever occurs first will trigger the data to continue and be reset. // Example: NewBatchByTimeAndCount ( \"30s\" , 10 ). Batch Batch Batch - This pipeline function will apply the selected strategy in your pipeline. By default the batched data returned by this function is [][]byte . This is because this function doesn't need to know the type of the individual items batched. It simply marshals the items to JSON if the data isn't already a []byte . Edgex 2.1 New for EdgeX 2.1 is the IsEventData flag on the BatchConfig instance. The IsEventData flag, when true, lets this function know that the data being batched is Events and to un-marshal the data into a []Event prior to returning the batched data. Batch with IsEventData flag set to true. batch := NewBatchByTimeAndCount(\"30s\", 10) batch.IsEventData = true ... batch.Batch Warning Keep memory usage in mind as you determine the thresholds for both time and count. The larger they are, the more memory is required, which could lead to performance issues. Compression There are two compression types included in the SDK that can be added to your pipeline. These transforms return a []byte .
Factory Method Description NewCompression() This factory function returns a Compression instance that is used to access the compression functions. GZIP CompressWithGZIP - This pipeline function receives either a string , []byte , or json.Marshaler type, GZIP compresses the data and converts the result to a base64-encoded string, which is returned as a []byte to the pipeline. Example NewCompression (). CompressWithGZIP ZLIB CompressWithZLIB - This pipeline function receives either a string , []byte , or json.Marshaler type, ZLIB compresses the data and converts the result to a base64-encoded string, which is returned as a []byte to the pipeline. Example NewCompression (). CompressWithZLIB Conversion There are two conversions included in the SDK that can be added to your pipeline. These transforms return a string . Factory Method Description NewConversion() This factory function returns a Conversion instance that is used to access the conversion functions. JSON TransformToJSON - This pipeline function receives a dtos.Event type, converts it to JSON format and returns the JSON string to the pipeline. Example NewConversion (). TransformToJSON XML TransformToXML - This pipeline function receives a dtos.Event type, converts it to XML format and returns the XML string to the pipeline. Example NewConversion (). TransformToXML Core Data There is one Core Data function that enables interactions with the Core Data REST API. Factory Method Description NewCoreDataSimpleReading(profileName string, deviceName string, resourceName string, valueType string) This factory function returns a CoreData instance configured to push a Simple reading. The CoreData instance returned is used to access core data functions. NewCoreDataBinaryReading(profileName string, deviceName string, resourceName string, mediaType string) This factory function returns a CoreData instance configured to push a Binary reading. The CoreData instance returned is used to access core data functions.
NewCoreDataObejctReading(profileName string, deviceName string, resourceName string) This factory function returns a CoreData instance configured to push an Object reading. The CoreData instance returned is used to access core data functions. EdgeX 2.0 For EdgeX 2.0 the NewCoreData factory function has been replaced with the NewCoreDataSimpleReading and NewCoreDataBinaryReading functions. EdgeX 2.1 The NewCoreDataObejctReading factory method is new for EdgeX 2.1. Push to Core Data PushToCoreData - This pipeline function provides the capability to push a new Event/Reading to Core Data. The data passed into this function from the pipeline is wrapped in an EdgeX Event with the Event and Reading metadata specified from the factory function options. The function returns the new EdgeX Event with ID populated. Example NewCoreDataSimpleReading ( \"my-profile\" , \"my-device\" , \"my-resource\" , \"string\" ). PushToCoreData Data Protection There are two transforms included in the SDK that can be added to your pipeline for data protection. Encryption (Deprecated) EdgeX 2.1 This is deprecated in EdgeX 2.1 - it is recommended to use the new AESProtection transform. Please see this security advisory for more detail. Factory Method Description NewEncryption(key string, initializationVector string) This function returns an Encryption instance initialized with the passed in key and initialization vector . This Encryption instance is used to access the following encryption function that will use the specified key and initialization vector . NewEncryptionWithSecrets(secretPath string, secretName string, initializationVector string) This function returns an Encryption instance initialized with the passed in secret path , secret name and initialization vector . This Encryption instance is used to access the following encryption function that will use the encryption key from the Secret Store and the passed in initialization vector .
It uses the passed in secret path and secret name to pull the encryption key from the Secret Store. EdgeX 2.0 New for EdgeX 2.0 is the ability to pull the encryption key from the Secret Store. The encryption key must be seeded into the Secret Store using the /api/v2/secret endpoint on the running instance of the Application Service prior to the Encryption function executing. See App Functions SDK swagger for more details on this endpoint. EncryptWithAES - This pipeline function receives either a string , []byte , or json.Marshaller type and encrypts it using AES encryption and returns a []byte to the pipeline. Example NewEncryption ( \"key\" , \"initializationVector\" ). EncryptWithAES or NewEncryptionWithSecrets ( \"aes\" , \"aes-key\" , \"initializationVector\" ). EncryptWithAES Note The algorithm used with app-service-configurable configuration to access this transform is AES AESProtection Edgex 2.1 This transform provides AES 256 encryption with a random initialization vector and authentication using a SHA 512 hash. It can only be configured using secrets. Factory Method Description NewAESProtection(secretPath string, secretName string) This function returns an Encryption instance initialized with the passed in secretPath and secretName . It requires a 64-byte key from secrets, which is split in half, the first half used for encryption, the second for generating the signature. Encrypt : This pipeline function receives either a string , []byte , or json.Marshaller type and encrypts it using AES256 encryption, signs it with a SHA512 hash and returns a []byte to the pipeline of the following form: initialization vector ciphertext signing hash 16 bytes variable bytes 32 bytes Example transforms . NewAESProtection ( secretPath , secretName ). Encrypt ( ctx , data ) Note The Algorithm used with app-service-configurable configuration to access this transform is AES256 Export There are two export functions included in the SDK that can be added to your pipeline.
HTTP Export EdgeX 2.0 For EdgeX 2.0 the signature of the NewHTTPSenderWithSecretHeader factory function has changed. See below for details. Factory Method Description NewHTTPSender(url string, mimeType string, persistOnError bool) This factory function returns an HTTPSender instance initialized with the passed in url, mime type and persistOnError values. NewHTTPSenderWithSecretHeader(url string, mimeType string, persistOnError bool, headerName string, secretPath string, secretName string) This factory function returns an HTTPSender instance similar to the above function; however, it will set up the HTTPSender to add a header to the HTTP request using the headerName for the field name and the secretPath and secretName to pull the header field value from the Secret Store. NewHTTPSenderWithOptions(options HTTPSenderOptions) This factory function returns an HTTPSender using the passed in options to configure it. EdgeX 2.0 New in EdgeX 2.0 is the ability to chain multiple instances of the HTTP exports to accomplish exporting to multiple destinations. The new NewHTTPSenderWithOptions factory function was added to allow for configuring all the options, including the new ContinueOnSendError and ReturnInputData options that enable this chaining. // HTTPSenderOptions contains all options available to the sender type HTTPSenderOptions struct { // URL of destination URL string // MimeType to send to destination MimeType string // PersistOnError enables use of store & forward loop if true PersistOnError bool // HTTPHeaderName to use for passing configured secret HTTPHeaderName string // SecretPath to search for configured secret SecretPath string // SecretName for configured secret SecretName string // URLFormatter specifies custom formatting behavior to be applied to configured URL. // If nothing specified, default behavior is to attempt to replace placeholders in the // form '{some-context-key}' with the values found in the context storage.
URLFormatter StringValuesFormatter // ContinueOnSendError allows execution of subsequent chained senders after errors if true ContinueOnSendError bool // ReturnInputData enables chaining multiple HTTP senders if true ReturnInputData bool } HTTP POST HTTPPost - This pipeline function receives either a string , []byte , or json.Marshaler type from the previous function in the pipeline and posts it to the configured endpoint and returns the HTTP response. If no previous function exists, then the event that triggered the pipeline, marshaled to JSON, will be used. If the post fails and persistOnError=true and Store and Forward is enabled, the data will be stored for later retry. See Store and Forward for more details. If ReturnInputData=true the function will return the data that it received instead of the HTTP response. This allows the following function in the pipeline to be another HTTP Export which receives the same data but is configured to send to a different endpoint. When chaining multiple HTTP Exports, you need to decide how to handle errors. Do you want to stop execution of the pipeline, or continue so that the next HTTP Export function can attempt to export to its endpoint? This is where ContinueOnSendError comes in. If set to true , the error is logged and the function returns the received data for the next function to use. ContinueOnSendError=true can only be used when ReturnInputData=true and cannot be used when PersistOnError=true .
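The chaining behavior described above, where a sender returns its input data and continues past send errors, can be illustrated with the standard library alone. This is a hedged sketch, not the SDK's HTTPSender implementation; `postAndContinue` is a hypothetical helper mimicking ReturnInputData=true with ContinueOnSendError=true:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
)

var received int32

// postAndContinue mimics a chained HTTP export: it POSTs data to url
// and, like ContinueOnSendError=true with ReturnInputData=true, returns
// the input data for the next sender even when the send fails.
func postAndContinue(url string, data []byte) []byte {
	resp, err := http.Post(url, "application/json", bytes.NewReader(data))
	if err != nil {
		fmt.Println("send error (continuing):", err)
		return data
	}
	resp.Body.Close()
	return data
}

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		atomic.AddInt32(&received, 1)
	})
	endpointA := httptest.NewServer(handler)
	endpointB := httptest.NewServer(handler)
	defer endpointA.Close()
	defer endpointB.Close()

	payload := []byte(`{"deviceName":"Device1"}`)
	// Two chained senders: the second receives the first sender's input data.
	payload = postAndContinue(endpointA.URL, payload)
	payload = postAndContinue(endpointB.URL, payload)

	fmt.Println("endpoints reached:", atomic.LoadInt32(&received))
}
```

In an actual pipeline the equivalent is two HTTP export functions built with NewHTTPSenderWithOptions, the first configured with ReturnInputData=true and ContinueOnSendError=true.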
Example POST NewHTTPSender(\"https://myendpoint.com\",\"application/json\",false).HTTPPost PUT NewHTTPSender(\"https://myendpoint.com\",\"application/json\",false).HTTPPut POST with secure header NewHTTPSenderWithSecretHeader(\"https://myendpoint.com\",\"application/json\",false,\"Authentication\",\"/jwt\",\"AuthToken\").HTTPPost PUT with secure header NewHTTPSenderWithSecretHeader(\"https://myendpoint.com\",\"application/json\",false,\"Authentication\",\"/jwt\",\"AuthToken\").HTTPPPut HTTP PUT HTTPPut - This pipeline function operates the same as HTTPPost but uses the PUT method rather than POST . URL Formatting EdgeX 2.0 URL Formatting is new in EdgeX 2.0 The configured URL is dynamically formatted prior to the POST/PUT request. The default formatter (used if URLFormatter is nil) simply replaces any placeholder text, {key-name} , in the configured URL with matching values from the new Context Storage . An error will occur if a specified placeholder does not exist in the Context Storage . See the Context Storage documentation for more details on seeded values and storing your own values. The URLFormatter option allows you to override the default formatter with your own custom URL formatting scheme. Example Export the Events to different endpoints base on their device name Url=\"http://myhost.com/edgex-events/{devicename}\" MQTT Export EdgeX 2.0 New for EdgeX 2.0 is the the new NewMQTTSecretSenderWithTopicFormatter factory function. The deprecated NewMQTTSender factory function has been removed. Factory Method Description NewMQTTSecretSender(mqttConfig MQTTSecretConfig, persistOnError bool) This factory function returns a MQTTSecretSender instance initialized with the options specified in the MQTTSecretConfig and persistOnError . 
NewMQTTSecretSenderWithTopicFormatter(mqttConfig MQTTSecretConfig, persistOnError bool, topicFormatter StringValuesFormatter) This factory function returns an MQTTSecretSender instance initialized with the options specified in the MQTTSecretConfig , persistOnError and topicFormatter . See Topic Formatting below for more details. EdgeX 2.0 New in EdgeX 2.0 the KeepAlive and ConnectTimeout MQTTSecretConfig settings have been added. type MQTTSecretConfig struct { // BrokerAddress should be set to the complete broker address i.e. mqtts://mosquitto:8883/mybroker BrokerAddress string // ClientId to connect to the broker with. ClientId string // The name of the path in secret provider to retrieve your secrets SecretPath string // AutoReconnect indicates whether or not to retry connection if disconnected AutoReconnect bool // KeepAlive is the interval duration between client sending keepalive ping to broker KeepAlive string // ConnectTimeout is the duration for timing out on connecting to the broker ConnectTimeout string // Topic that you wish to publish to Topic string // QoS for MQTT Connection QoS byte // Retain setting for MQTT Connection Retain bool // SkipCertVerify SkipCertVerify bool // AuthMode indicates what to use when connecting to the broker. // Options are \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". // If a CA Cert exists in the SecretPath then it will be used for // all modes except \"none\". AuthMode string } Secrets in the Secret Store may be located at any path; however, they must have some or all of the following keys at the specified SecretPath . username - username to connect to the broker password - password used to connect to the broker clientkey - client private key in PEM format clientcert - client cert in PEM format cacert - ca cert in PEM format The AuthMode setting you choose depends on what secret values above are used. For example, if \"none\" is specified as the auth mode, all keys will be ignored.
Similarly, if AuthMode is set to "clientcert", username and password will be ignored.

Topic Formatting
EdgeX 2.0: Topic formatting is new in EdgeX 2.0.
The configured Topic is dynamically formatted prior to publishing. The default formatter (used if topicFormatter is nil) simply replaces any placeholder text, {key-name}, in the configured Topic with matching values from the new Context Storage. An error occurs if a specified placeholder does not exist in the Context Storage. See the Context Storage documentation for more details on seeded values and storing your own values. The topicFormatter option allows you to override the default formatter with your own custom topic formatting scheme.

Filtering
There are four basic types of filtering included in the SDK to add to your pipeline. There is also an option to filter out specific items. These provided filter functions return a type of dtos.Event. If filtering results in no remaining data, the pipeline execution for that pass is terminated. If no values are provided for filtering, then data flows through unfiltered.
NewFilterFor([]string filterValues) - This factory function returns a Filter instance initialized with the passed in filter values with FilterOut set to false. This Filter instance is used to access the following filter functions that will operate using the specified filter values.
NewFilterOut([]string filterValues) - This factory function returns a Filter instance initialized with the passed in filter values with FilterOut set to true. This Filter instance is used to access the following filter functions that will operate using the specified filter values.

EdgeX 2.0: For EdgeX 2.0 the NewFilter factory function has been renamed to NewFilterFor and the new NewFilterOut factory function has been added.

type Filter struct {
    // Holds the values to be filtered
    FilterValues []string
    // Determines if items in FilterValues should be filtered out.
    // If set to true, all items found in the filter will be removed.
    // If set to false, all items found in the filter will be returned.
    // If FilterValues is empty then all items will be returned.
    FilterOut bool
}

EdgeX 2.0: New for EdgeX 2.0 are the FilterByProfileName and FilterBySourceName pipeline functions. The FilterByValueDescriptor pipeline function has been renamed to FilterByResourceName.

By Profile Name
FilterByProfileName - This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified profile names.
Example: NewFilterFor([]string{"Profile1", "Profile2"}).FilterByProfileName

By Device Name
FilterByDeviceName - This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified device names.
Example: NewFilterFor([]string{"Device1", "Device2"}).FilterByDeviceName

By Source Name
FilterBySourceName - This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified source names. Source name is either the resource name or command name responsible for the Event creation.
Example: NewFilterFor([]string{"Source1", "Source2"}).FilterBySourceName

By Resource Name
FilterByResourceName - This pipeline function will filter the Event's reading data down to Readings that either have (For) or don't have (Out) the specified resource names. If the result of filtering is zero Readings remaining, the function terminates pipeline execution.
Example: NewFilterFor([]string{"Resource1", "Resource2"}).FilterByResourceName

JSON Logic
NewJSONLogic(rule string) - This factory function returns a JSONLogic instance initialized with the passed in JSON rule. The rule passed in should be a JSON string conforming to the specification here: http://jsonlogic.com/operations.html.
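Because the rule is passed as a JSON string, escaped double quotes pile up quickly in Go source; a raw string literal avoids the escaping entirely. A minimal, self-contained sketch (json.Valid here is a plain standard-library sanity check, not part of the SDK):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validRule reports whether the rule string parses as JSON; catching a
// malformed rule early is easier than debugging it at pipeline runtime.
func validRule(rule string) bool {
	return json.Valid([]byte(rule))
}

func main() {
	// A raw string literal (backticks) avoids escaping every double quote.
	rule := `{ "in": [ { "var": "device" }, ["Random-Integer-Device", "Random-Float-Device"] ] }`
	fmt.Println(validRule(rule)) // prints "true"
}
```

The validated string can then be passed to NewJSONLogic(rule) as shown in the Evaluate example.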
Evaluate
Evaluate - This is the pipeline function that will be used in the pipeline to apply the JSON rule to data coming in on the pipeline. If the condition of your rule is met, then the pipeline will continue and the data will continue to flow to the next function in the pipeline. If the condition of your rule is NOT met, then pipeline execution stops.
Example: NewJSONLogic("{ \"in\" : [{ \"var\" : \"device\" }, [\"Random-Integer-Device\",\"Random-Float-Device\"] ] }").Evaluate
Note: Only operations that return true or false are supported. See http://jsonlogic.com/operations.html for the complete list of operations, paying attention to return values. Any operator that returns manipulated data is currently not supported. For more advanced scenarios check out LF Edge eKuiper.
Tip: Leverage http://jsonlogic.com/play.html to get your rule right before implementing it in code. JSON can be a bit tricky to get right in code with all the escaped double quotes.

Response Data
There is one response data function included in the SDK that can be added to your pipeline.
NewResponseData() - This factory function returns a ResponseData instance that is used to access the pipeline function below.

Content Type
ResponseContentType - This property is used to set the content-type of the response.
Example:
responseData := NewResponseData()
responseData.ResponseContentType = "application/json"

Set Response Data
SetResponseData - This pipeline function receives either a string, []byte, or json.Marshaler type from the previous function in the pipeline and sets it as the response data that the pipeline returns to the configured trigger. If configured to use the EdgeXMessageBus trigger, the data will be published back to the EdgeX MessageBus as determined by the configuration. Similarly, if configured to use the ExternalMQTT trigger, the data will be published back to the external MQTT Broker as determined by the configuration.
If configured to use the HTTP trigger, the data is returned as the HTTP response.
Note: Calling SetResponseData() and SetResponseContentType() from the Context API in a custom function can be used in place of adding this function to your pipeline.

Tags
There is one Tags transform included in the SDK that can be added to your pipeline.
NewGenericTags(tags map[string]interface{}) Tags - This factory function returns a Tags instance initialized with the passed in collection of generic tag key/value pairs. This Tags instance is used to access the following Tags function that will use the specified collection of tag key/value pairs. This allows for generic complex types for the Tag values.
NewTags(tags map[string]string) Tags - This factory function returns a Tags instance initialized with the passed in collection of tag key/value pairs. This Tags instance is used to access the following Tags function that will use the specified collection of tag key/value pairs. This factory function has been deprecated; use NewGenericTags instead.

EdgeX 2.1: The Tags property on Events in EdgeX 2.1 has changed from map[string]string to map[string]interface{}. The new NewGenericTags() factory function takes this new definition and replaces the deprecated NewTags() factory function.

Add Tags
AddTags - This pipeline function receives an EdgeX Event type and adds the collection of specified tags to the Event's Tags collection.
Example:
var myTags = map[string]interface{}{
    "MyValue":   123,
    "GatewayId": "HoustonStore000123",
    "Coordinates": map[string]float32{
        "Latitude":  29.630771,
        "Longitude": -95.377603,
    },
}
NewGenericTags(myTags).AddTags

Built-In Pipeline Functions
All pipeline functions define a type and a factory function which is used to initialize an instance of the type with the required options.
The instances returned by these factory functions give access to their appropriate pipeline function pointers when setting up the function pipeline.
Example: NewFilterFor([]string{"Device1", "Device2"}).FilterByDeviceName

Batching
Included in the SDK is an in-memory batch function that will hold on to your data before continuing the pipeline. There are three functions provided for batching, each with its own strategy.
NewBatchByTime(timeInterval string) - This function returns a BatchConfig instance with time being the strategy that is used for determining when to release the batched data and continue the pipeline. timeInterval is the duration to wait (i.e. 10s). The time begins after the first piece of data is received. If no data has been received, no data will be sent forward. Example: NewBatchByTime("10s").Batch
NewBatchByCount(batchThreshold int) - This function returns a BatchConfig instance with count being the strategy that is used for determining when to release the batched data and continue the pipeline. batchThreshold is how many events to hold on to (i.e. 25). The count begins after the first piece of data is received and once the threshold is met, the batched data will continue forward and the counter will be reset. Example: NewBatchByCount(10).Batch
NewBatchByTimeAndCount(timeInterval string, batchThreshold int) - This function returns a BatchConfig instance with a combination of both time and count being the strategy that is used for determining when to release the batched data and continue the pipeline. Whichever occurs first will trigger the data to continue and be reset. Example: NewBatchByTimeAndCount("30s", 10).Batch

Batch
Batch - This pipeline function will apply the selected strategy in your pipeline. By default the batched data returned by this function is [][]byte.
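For illustration, the count strategy and the default [][]byte batch shape can be sketched in plain Go; the countBatcher type below is invented for the sketch and is not the SDK's implementation:

```go
package main

import "fmt"

// countBatcher mimics the NewBatchByCount strategy in spirit: it holds
// marshaled items as []byte and releases the whole [][]byte batch once
// the threshold is met. Illustrative only; not SDK code.
type countBatcher struct {
	threshold int
	held      [][]byte
}

// add buffers one item; it returns (batch, true) when the threshold is
// reached, resetting the internal buffer, and (nil, false) otherwise.
func (b *countBatcher) add(item []byte) ([][]byte, bool) {
	b.held = append(b.held, item)
	if len(b.held) >= b.threshold {
		batch := b.held
		b.held = nil
		return batch, true
	}
	return nil, false
}

func main() {
	b := &countBatcher{threshold: 3}
	for i := 1; i <= 3; i++ {
		if batch, ready := b.add([]byte(fmt.Sprintf("event-%d", i))); ready {
			fmt.Printf("released batch of %d items\n", len(batch))
		}
	}
}
```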
This is because this function doesn't need to know the type of the individual items batched. It simply marshals the items to JSON if the data isn't already a []byte.
EdgeX 2.1: New for EdgeX 2.1 is the IsEventData flag on the BatchConfig instance. The IsEventData flag, when true, lets this function know that the data being batched is Events and to un-marshal the data to a []Event prior to returning the batched data.
Batch with the IsEventData flag set to true:
batch := NewBatchByTimeAndCount("30s", 10)
batch.IsEventData = true
...
batch.Batch
Warning: Keep memory usage in mind as you determine the thresholds for both time and count. The larger they are, the more memory is required, which could lead to performance issues.

Compression
There are two compression types included in the SDK that can be added to your pipeline. These transforms return a []byte.
NewCompression() - This factory function returns a Compression instance that is used to access the compression functions.

GZIP
CompressWithGZIP - This pipeline function receives either a string, []byte, or json.Marshaler type, GZIP compresses the data, converts the result to a base64 encoded string, and returns it as a []byte to the pipeline.
Example: NewCompression().CompressWithGZIP

ZLIB
CompressWithZLIB - This pipeline function receives either a string, []byte, or json.Marshaler type, ZLIB compresses the data, converts the result to a base64 encoded string, and returns it as a []byte to the pipeline.
Example: NewCompression().CompressWithZLIB

Conversion
There are two conversions included in the SDK that can be added to your pipeline. These transforms return a string.
NewConversion() - This factory function returns a Conversion instance that is used to access the conversion functions.

JSON
TransformToJSON - This pipeline function receives a dtos.Event type, converts it to JSON format and returns the JSON string to the pipeline.
Example: NewConversion().TransformToJSON

XML
TransformToXML - This pipeline function receives a dtos.Event type, converts it to XML format and returns the XML string to the pipeline.
Example: NewConversion().TransformToXML

Core Data
There is one Core Data function that enables interactions with the Core Data REST API.
NewCoreDataSimpleReading(profileName string, deviceName string, resourceName string, valueType string) - This factory function returns a CoreData instance configured to push a Simple reading. The CoreData instance returned is used to access core data functions.
NewCoreDataBinaryReading(profileName string, deviceName string, resourceName string, mediaType string) - This factory function returns a CoreData instance configured to push a Binary reading. The CoreData instance returned is used to access core data functions.
NewCoreDataObejctReading(profileName string, deviceName string, resourceName string) - This factory function returns a CoreData instance configured to push an Object reading. The CoreData instance returned is used to access core data functions.
EdgeX 2.0: For EdgeX 2.0 the NewCoreData factory function has been replaced with the NewCoreDataSimpleReading and NewCoreDataBinaryReading functions.
EdgeX 2.1: The NewCoreDataObejctReading factory method is new for EdgeX 2.1.

Push to Core Data
PushToCoreData - This pipeline function provides the capability to push a new Event/Reading to Core Data. The data passed into this function from the pipeline is wrapped in an EdgeX Event with the Event and Reading metadata specified from the factory function options. The function returns the new EdgeX Event with the ID populated.
Example: NewCoreDataSimpleReading("my-profile", "my-device", "my-resource", "string").PushToCoreData

Data Protection
There are two transforms included in the SDK that can be added to your pipeline for data protection.

Encryption (Deprecated)
EdgeX 2.1: This is deprecated in EdgeX 2.1 - it is recommended to use the new AESProtection transform. Please see this security advisory for more detail.
NewEncryption(key string, initializationVector string) - This function returns an Encryption instance initialized with the passed in key and initialization vector. This Encryption instance is used to access the following encryption function that will use the specified key and initialization vector.
NewEncryptionWithSecrets(secretPath string, secretName string, initializationVector string) - This function returns an Encryption instance initialized with the passed in secret path, secret name and initialization vector. This Encryption instance is used to access the following encryption function that will use the encryption key from the Secret Store and the passed in initialization vector.
It uses the passed in secret path and secret name to pull the encryption key from the Secret Store.
EdgeX 2.0: New for EdgeX 2.0 is the ability to pull the encryption key from the Secret Store. The encryption key must be seeded into the Secret Store using the /api/v2/secret endpoint on the running instance of the Application Service prior to the Encryption function executing. See the App Functions SDK swagger for more details on this endpoint.
EncryptWithAES - This pipeline function receives either a string, []byte, or json.Marshaler type, encrypts it using AES encryption and returns a []byte to the pipeline.
Example: NewEncryption("key", "initializationVector").EncryptWithAES
or NewEncryptionWithSecrets("aes", "aes-key", "initializationVector").EncryptWithAES
Note: The algorithm used with app-service-configurable configuration to access this transform is AES.

AESProtection
EdgeX 2.1: This transform provides AES 256 encryption with a random initialization vector and authentication using a SHA 512 hash. It can only be configured using secrets.
NewAESProtection(secretPath string, secretName string) - This function returns an Encryption instance initialized with the passed in secretPath and secretName. It requires a 64-byte key from secrets, which is split in half: the first half is used for encryption, the second for generating the signature.
Encrypt - This pipeline function receives either a string, []byte, or json.Marshaler type, encrypts it using AES256 encryption, signs it with a SHA512 hash and returns a []byte to the pipeline in the following form:
initialization vector (16 bytes) | ciphertext (variable bytes) | signing hash (32 bytes)
Example: transforms.NewAESProtection(secretPath, secretName).
Encrypt(ctx, data)
Note: The algorithm used with app-service-configurable configuration to access this transform is AES256.

Export
There are two export functions included in the SDK that can be added to your pipeline.

HTTP Export
EdgeX 2.0: For EdgeX 2.0 the signature of the NewHTTPSenderWithSecretHeader factory function has changed. See below for details.
NewHTTPSender(url string, mimeType string, persistOnError bool) - This factory function returns an HTTPSender instance initialized with the passed in url, mime type and persistOnError values.
NewHTTPSenderWithSecretHeader(url string, mimeType string, persistOnError bool, headerName string, secretPath string, secretName string) - This factory function returns an HTTPSender instance similar to the above function, however it will set up the HTTPSender to add a header to the HTTP request, using the headerName for the field name and the secretPath and secretName to pull the header field value from the Secret Store.
NewHTTPSenderWithOptions(options HTTPSenderOptions) - This factory function returns an HTTPSender using the passed in options to configure it.

EdgeX 2.0: New in EdgeX 2.0 is the ability to chain multiple instances of the HTTP export to accomplish exporting to multiple destinations. The new NewHTTPSenderWithOptions factory function was added to allow for configuring all the options, including the new ContinueOnSendError and ReturnInputData options that enable this chaining.
// HTTPSenderOptions contains all options available to the sender
type HTTPSenderOptions struct {
    // URL of the destination
    URL string
    // MimeType to send to the destination
    MimeType string
    // PersistOnError enables use of the store & forward loop if true
    PersistOnError bool
    // HTTPHeaderName to use for passing the configured secret
    HTTPHeaderName string
    // SecretPath to search for the configured secret
    SecretPath string
    // SecretName for the configured secret
    SecretName string
    // URLFormatter specifies custom formatting behavior to be applied to the configured URL.
    // If nothing is specified, the default behavior is to attempt to replace placeholders in the
    // form '{some-context-key}' with the values found in the context storage.
    URLFormatter StringValuesFormatter
    // ContinueOnSendError allows execution of subsequent chained senders after errors if true
    ContinueOnSendError bool
    // ReturnInputData enables chaining multiple HTTP senders if true
    ReturnInputData bool
}

HTTP POST
HTTPPost - This pipeline function receives either a string, []byte, or json.Marshaler type from the previous function in the pipeline and posts it to the configured endpoint, returning the HTTP response. If no previous function exists, then the event that triggered the pipeline, marshaled to JSON, will be used. If the post fails and persistOnError=true and Store and Forward is enabled, the data will be stored for later retry. See Store and Forward for more details. If ReturnInputData=true, the function will return the data that it received instead of the HTTP response. This allows the following function in the pipeline to be another HTTP Export which receives the same data but is configured to send to a different endpoint. When chaining multiple HTTP exports you need to decide how to handle errors: do you want to stop execution of the pipeline, or continue so that the next HTTP Export function can attempt to export to its endpoint?
This is where ContinueOnSendError comes in. If set to true the error is logged and the function returns the received data for the next function to use. ContinueOnSendError=true can only be used when ReturnInputData=true and cannot be use when PersistOnError=true . Example POST NewHTTPSender(\"https://myendpoint.com\",\"application/json\",false).HTTPPost PUT NewHTTPSender(\"https://myendpoint.com\",\"application/json\",false).HTTPPut POST with secure header NewHTTPSenderWithSecretHeader(\"https://myendpoint.com\",\"application/json\",false,\"Authentication\",\"/jwt\",\"AuthToken\").HTTPPost PUT with secure header NewHTTPSenderWithSecretHeader(\"https://myendpoint.com\",\"application/json\",false,\"Authentication\",\"/jwt\",\"AuthToken\").HTTPPPut","title":"HTTP POST"},{"location":"microservices/application/BuiltIn/#http-put","text":"HTTPPut - This pipeline function operates the same as HTTPPost but uses the PUT method rather than POST .","title":"HTTP PUT"},{"location":"microservices/application/BuiltIn/#url-formatting","text":"EdgeX 2.0 URL Formatting is new in EdgeX 2.0 The configured URL is dynamically formatted prior to the POST/PUT request. The default formatter (used if URLFormatter is nil) simply replaces any placeholder text, {key-name} , in the configured URL with matching values from the new Context Storage . An error will occur if a specified placeholder does not exist in the Context Storage . See the Context Storage documentation for more details on seeded values and storing your own values. The URLFormatter option allows you to override the default formatter with your own custom URL formatting scheme. Example Export the Events to different endpoints base on their device name Url=\"http://myhost.com/edgex-events/{devicename}\"","title":"URL Formatting"},{"location":"microservices/application/BuiltIn/#mqtt-export","text":"EdgeX 2.0 New for EdgeX 2.0 is the the new NewMQTTSecretSenderWithTopicFormatter factory function. 
The deprecated NewMQTTSender factory function has been removed. Factory Method Description NewMQTTSecretSender(mqttConfig MQTTSecretConfig, persistOnError bool) This factory function returns a MQTTSecretSender instance initialized with the options specified in the MQTTSecretConfig and persistOnError . NewMQTTSecretSenderWithTopicFormatter(mqttConfig MQTTSecretConfig, persistOnError bool, topicFormatter StringValuesFormatter) This factory function returns a MQTTSecretSender instance initialized with the options specified in the MQTTSecretConfig , persistOnError and topicFormatter . See Topic Formatting below for more details. EdgeX 2.0 New in EdgeX 2.0 the KeepAlive and ConnectTimeout MQTTSecretConfig settings have been added. type MQTTSecretConfig struct { // BrokerAddress should be set to the complete broker address i.e. mqtts://mosquitto:8883/mybroker BrokerAddress string // ClientId to connect with the broker with. ClientId string // The name of the path in secret provider to retrieve your secrets SecretPath string // AutoReconnect indicated whether or not to retry connection if disconnected AutoReconnect bool // KeepAlive is the interval duration between client sending keepalive ping to broker KeepAlive string // ConnectTimeout is the duration for timing out on connecting to the broker ConnectTimeout string // Topic that you wish to publish to Topic string // QoS for MQTT Connection QoS byte // Retain setting for MQTT Connection Retain bool // SkipCertVerify SkipCertVerify bool // AuthMode indicates what to use when connecting to the broker. // Options are \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". // If a CA Cert exists in the SecretPath then it will be used for // all modes except \"none\". AuthMode string } Secrets in the Secret Store may be located at any path however they must have some or all the follow keys at the specified SecretPath . 
username - username to connect to the broker password - password used to connect to the broker clientkey - client private key in PEM format clientcert - client cert in PEM format cacert - ca cert in PEM format The AuthMode setting you choose depends on what secret values above are used. For example, if \"none\" is specified as auth mode all keys will be ignored. Similarly, if AuthMode is set to \"clientcert\" username and password will be ignored.","title":"MQTT Export"},{"location":"microservices/application/BuiltIn/#topic-formatting","text":"EdgeX 2.0 Topic Formatting is new in EdgeX 2.0 The configured Topic is dynamically formatted prior to publishing . The default formatter (used if topicFormatter is nil) simply replaces any placeholder text, {key-name} , in the configured Topic with matching values from the new Context Storage . An error will occur if a specified placeholder does not exist in the Context Storage . See the Context Storage documentation for more details on seeded values and storing your own values. The topicFormatter option allows you to override the default formatter with your own custom topic formatting scheme.","title":"Topic Formatting"},{"location":"microservices/application/BuiltIn/#filtering","text":"There are four basic types of filtering included in the SDK to add to your pipeline. There is also an option to Filter Out specific items. These provided filter functions return a type of dtos.Event . If filtering results in no remaining data, the pipeline execution for that pass is terminated. If no values are provided for filtering, then data flows through unfiltered. Factory Method Description NewFilterFor([]string filterValues) This factory function returns a Filter instance initialized with the passed in filter values with FilterOut set to false . This Filter instance is used to access the following filter functions that will operate using the specified filter values. 
NewFilterOut([]string filterValues) This factory function returns a Filter instance initialized with the passed in filter values with FilterOut set to true . This Filter instance is used to access the following filter functions that will operate using the specified filter values. EdgeX 2.0 For EdgeX 2.0 the NewFilter factory function has been renamed to NewFilterFor and the new NewFilterOut factory function has been added. type Filter struct { // Holds the values to be filtered FilterValues [] string // Determines if items in FilterValues should be filtered out. If set to true all items found in the filter will be removed. If set to false all items found in the filter will be returned. If FilterValues is empty then all items will be returned. FilterOut bool } EdgeX 2.0 New for EdgeX 2.0 are the FilterByProfileName and FilterBySourceName pipeline functions. The FilterByValueDescriptor pipeline function has been renamed to FilterByResourceName","title":"Filtering"},{"location":"microservices/application/BuiltIn/#by-profile-name","text":"FilterByProfileName - This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified profiles names. Example NewFilterFor ([] { \"Profile1\" , \"Profile2\" }). FilterByProfileName","title":"By Profile Name"},{"location":"microservices/application/BuiltIn/#by-device-name","text":"FilterByDeviceName - This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified device names. Example NewFilterFor ([] { \"Device1\" , \"Device2\" }). FilterByDeviceName","title":"By Device Name"},{"location":"microservices/application/BuiltIn/#by-source-name","text":"FilterBySourceName - This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified source names. Source name is either the resource name or command name responsible for the Event creation. 
Example NewFilterFor ([] { \"Source1\" , \"Source2\" }). FilterBySourceName","title":"By Source Name"},{"location":"microservices/application/BuiltIn/#by-resource-name","text":"FilterByResourceName - This pipeline function will filter the Event's reading data down to Readings that either have (For) or don't have (Out) the specified resource names. If the result of filtering is zero Readings remaining, the function terminates pipeline execution. Example NewFilterFor ([] { \"Resource1\" , \"Resource2\" }). FilterByResourceName","title":"By Resource Name"},{"location":"microservices/application/BuiltIn/#json-logic","text":"Factory Method Description NewJSONLogic(rule string) This factory function returns a JSONLogic instance initialized with the passed in JSON rule. The rule passed in should be a JSON string conforming to the specification here: http://jsonlogic.com/operations.html.","title":"JSON Logic"},{"location":"microservices/application/BuiltIn/#evaluate","text":"Evaluate - This is the pipeline function that will be used in the pipeline to apply the JSON rule to data coming in on the pipeline. If the condition of your rule is met, then the pipeline will continue and the data will continue to flow to the next function in the pipeline. If the condition of your rule is NOT met, then pipeline execution stops. Example NewJSONLogic ( \"{ \\\"in\\\" : [{ \\\"var\\\" : \\\"device\\\" }, [\\\"Random-Integer-Device\\\",\\\"Random-Float-Device\\\"] ] }\" ). Evaluate Note Only operations that return true or false are supported. See http://jsonlogic.com/operations.html# for the complete list of operations paying attention to return values. Any operator that returns manipulated data is currently not supported. For more advanced scenarios checkout LF Edge eKuiper . Tip Leverage http://jsonlogic.com/play.html to get your rule right before implementing in code. 
JSON can be a bit tricky to get right in code with all the escaped double quotes.","title":"Evaluate"},{"location":"microservices/application/BuiltIn/#response-data","text":"There is one response data function included in the SDK that can be added to your pipeline. Factory Method Description NewResponseData() This factory function returns a ResponseData instance that is used to access the following pipeline function below.","title":"Response Data"},{"location":"microservices/application/BuiltIn/#content-type","text":"ResponseContentType - This property is used to set the content-type of the response. Example responseData := NewResponseData () responseData . ResponseContentType = \"application/json\"","title":"Content Type"},{"location":"microservices/application/BuiltIn/#set-response-data","text":"SetResponseData - This pipeline function receives either a string , []byte , or json.Marshaler type from the previous function in the pipeline and sets it as the response data that the pipeline returns to the configured trigger. If configured to use the EdgeXMessageBus trigger, the data will be published back to the EdgeX MessageBus as determined by the configuration. Similar, if configured to use the ExternalMQTT trigger, the data will be published back to the external MQTT Broker as determined by the configuration. If configured to use HTTP trigger the data is returned as the HTTP response. Note Calling SetResponseData() and SetResponseContentType() from the Context API in a custom function can be used in place of adding this function to your pipeline.","title":"Set Response Data"},{"location":"microservices/application/BuiltIn/#tags","text":"There is one Tags transform included in the SDK that can be added to your pipeline. Factory Method Description NewGenericTags(tags map[string]interface{} ) Tags This factory function returns a Tags instance initialized with the passed in collection of generic tag key/value pairs. 
This Tags instance is used to access the following Tags function that will use the specified collection of tag key/value pairs. This allows for generic complex types for the Tag values. NewTags(tags map[string]string ) Tags This factory function returns a Tags instance initialized with the passed in collection of tag key/value pairs. This Tags instance is used to access the following Tags function that will use the specified collection of tag key/value pairs. This factory function has been deprecated. Use NewGenericTags instead . EdgeX 2.1 The Tags property on Events in Edgex 2.1 has changed from map[string]string to map[string]interface{} . The new NewGenericTags() factory function takes this new definition and replaces the deprecated NewTags() factory function.","title":"Tags"},{"location":"microservices/application/BuiltIn/#add-tags","text":"AddTags - This pipeline function receives an Edgex Event type and adds the collection of specified tags to the Event's Tags collection. Example var myTags = map [ string ] interface {}{ \"MyValue\" : 123 , \"GatewayId\" : \"HoustonStore000123\" , \"Coordinates\" : map [ string ] float32 { \"Latitude\" : 29.630771 , \"Longitude\" : -95.377603 , }, } NewGenericTags ( myTags ). AddTags","title":"Add Tags"},{"location":"microservices/application/ErrorHandling/","text":"Pipeline Function Error Handling Each transform returns a true or false as part of the return signature. This is called the continuePipeline flag and indicates whether the SDK should continue calling successive transforms in the pipeline. return false, nil will stop the pipeline and stop processing the event. This is useful, for example, when filtering on values and nothing matches the criteria you've filtered on. return false, error , will stop the pipeline as well and the SDK will log the error you have returned. return true, nil tells the SDK to continue, and will call the next function in the pipeline with your result. 
The SDK will return control back to main when receiving a SIGTERM/SIGINT event to allow for custom clean up.","title":"Pipeline Function Error Handling"},{"location":"microservices/application/ErrorHandling/#pipeline-function-error-handling","text":"Each transform returns a true or false as part of the return signature. This is called the continuePipeline flag and indicates whether the SDK should continue calling successive transforms in the pipeline. return false, nil will stop the pipeline and stop processing the event. This is useful, for example, when filtering on values and nothing matches the criteria you've filtered on. return false, error , will stop the pipeline as well and the SDK will log the error you have returned. return true, nil tells the SDK to continue, and will call the next function in the pipeline with your result. The SDK will return control back to main when receiving a SIGTERM/SIGINT event to allow for custom clean up.","title":"Pipeline Function Error Handling"},{"location":"microservices/application/GeneralAppServiceConfig/","text":"Application Service Configuration Similar to other EdgeX services, configuration is first determined by the configuration.toml file in the /res folder. Once loaded, any environment overrides are applied. If -cp is passed to the application on startup, the SDK will leverage the specific configuration provider (i.e Consul) to push the configuration into the provider and monitor Writable configuration from there. You will find the configuration under the edgex/appservices/2.0/ key in the provider (i.e Consul). On restart the service will pull the configuration from the provider and apply any environment overrides. This section describes the configuration elements that are unique to Application Services. Please first refer to the general Configuration documentation for configuration properties common across all EdgeX services. 
Note * indicates the configuration value can be changed on the fly if using a configuration provider (like Consul). ** indicates the configuration value can be changed but the service must be restarted. Writable The tabs below provide additional entries in the Writable section which are applicable to Application Services. Writable StoreAndForward The section configures the Store and Forward capability. Please refer to Store and Forward documentation for more details. Configuration Default Value Enabled false* Indicates whether the Store and Forward capability is enabled or disabled RetryInterval \"5m\"* Indicates the duration of time to wait before retrying to forward failed data MaxRetryCount 10* Indicates the maximum number of retries for failed data. The failed data is removed after the maximum retries has been exceeded. A value of 0 indicates endless retries. Writable Pipeline The section configures the Configurable Function Pipeline which is used only by App Service Configurable. Please refer to App Service Configurable - Getting Started section for more details. Writable InsecureSecrets This section defines Insecure Secrets that are used when running in non-secure mode, i.e. when Vault isn't available. This is a dynamic map of configuration, so it can be empty if no secrets are used, or it can have as many or as few user defined secrets as needed. It simulates a Secret Store in non-secure mode. Below are a few examples that are needed if using the indicated capabilities. Configuration Default Value Description DB --- This section defines a block of insecure secrets for database credentials when Redis is used for the MessageBus and/or when Store and Forward is enabled and running in non-secure mode. This section is not required if Store and Forward is not enabled and not using Redis for the MessageBus . path redisdb* Indicates the location in the simulated Secret Store where the DB secret resides. 
DB Secrets --- This section is the collection of DB secret data username blank* Indicates the value for the username when connecting to the database. When running in non-secure mode it is blank . password blank* Indicates the value for the password when connecting to the database. When running in non-secure mode it is blank . http --- This section defines a block of insecure secrets for HTTP Export, i.e HTTPPost function path http* Indicates the location in the simulated Secret Store where the HTTP secret resides. http Secrets --- This section is the collection of HTTP secret data. See Http Export documentation for more details on use of secret data. headervalue undefined* This indicates the name of the secret value to use as the value in the HTTP header. mqtt --- This section defines a block of insecure secrets for MQTT export, i.e. MQTTSecretSend function. path mqtt* Indicates the location in the simulated Secret Store where the MQTT secret resides. mqtt Secrets --- This section is the collection of MQTT secret data. See Mqtt Export documentation for more details on use of secret data. username blank* Indicates the value for the username when connecting to the MQTT broker using usernamepassword authentication mode. Must be configured to the value the MQTT broker is expecting. password blank* Indicates the value for the password when connecting to the MQTT broker using usernamepassword authentication mode. Must be configured to the value the MQTT broker is expecting. cacert blank* Indicates the value (contents) for the CA Certificate when connecting to the MQTT broker using cacert authentication mode. Must be configured to the value the MQTT broker is expecting. clientcert blank* Indicates the value (contents) for the Client Certificate when connecting to the MQTT broker using clientcert authentication mode. Must be configured to the value the MQTT broker is expecting. 
clientkey blank* Indicates the value (contents) for the Client Key when connecting to the MQTT broker using clientcert authentication mode. Must be configured to the value the MQTT broker is expecting. Not Writable The tabs below provide additional configuration which are applicable to Application Services that require the service to be restarted after value(s) are changed. HttpServer EdgeX 2.0 New for EdgeX 2.0. These settings were previously in the Service configuration section specific to Application Services. Now the Service configuration is the same for all EdgeX services. See the general Configuration documentation for more details on the common Service configuration. This section contains the configuration for the internal Webserver. Only needed if configuring the Webserver for HTTPS Configuration Default Value Description Protocol http** Indicates the protocol for the webserver to use SecretName blank** Indicates the name of the secret in the Secret Store where the HTTPS secret data resides HTTPSCertName blank** Indicates the key name in the HTTPS secret data that contains the certificate data to use for HTTPS HTTPSKeyName blank** Indicates the key name in the HTTPS secret data that contains the key data to use for HTTPS Database This section contains the connection information. It is required when using redis for the MessageBus (which is the default) and/or when the Store and Forward capability is enabled. Note that it has a slightly different format than the database section used in the core services configuration. Configuration Default Value Description Type redisdb** Indicates the type of database used. redisdb is the only valid type. 
Host localhost** Indicates the hostname for the database Port 6379** Indicates the port number for the database Timeout \"30s\"** Indicates the connection timeout for the database Clients This section defines the connection information for the EdgeX Clients and is the same as that used by all EdgeX services, just which clients are needed differs. Please refer to the Note about Clients section for more details. Trigger This section defines the Trigger for incoming data. See the Triggers documentation for more details on the inner working of triggers. EdgeX 2.0 For EdgeX 2.0 the Binding section has been renamed to Trigger . Configuration Default Value Description Type edgex-messagebus** Indicates the Trigger binding type. Valid values are edgex-messagebus , external-mqtt , http , or Trigger EdgeXMessageBus This section defines the message bus connection information. Only used for edgex-messagebus binding type EdgeX 2.0 For EdgeX 2.0 the MessageBus section has been renamed to EdgexMessageBus and moved under the Trigger section. The SubscribeTopic setting has changed to SubscribeTopics and moved under the SubscribeHost section of EdgexMessageBus . The PublishTopic has been moved under the PublishHost section of EdgexMessageBus . Configuration Default Value Description Type redis** Indicates the type of MessageBus being used. Valid types are redis , mqtt , or zero SubscribeHost ... This section defines the connection information for subscribing/publishing to the MessageBus Host localhost** Indicates the hostname for subscribing to the MessageBus Port 6379** Indicates the port number for subscribing to the MessageBus Protocol redis** Indicates the protocol for subscribing to the MessageBus SubscribeTopics edgex/events/#** MessageBus topic(s) to subscribe to. This is a comma separated list of topics. Supports filtering by subscribe topics. See EdgeXMessageBus Trigger for more details. PublishHost ... 
This section defines the connection information for publishing to the MessageBus Host localhost** Indicates the hostname for publishing to the Message Bus Port 6379** Indicates the port number for publishing to the Message Bus Protocol redis** Indicates the protocol for publishing to the Message Bus PublishTopic blank** Indicates the topic in which to publish the function pipeline response data, if any. Supports dynamic topic placeholders. See EdgeXMessageBus Trigger for more details. Optional ... This section is used for optional configuration specific to the MessageBus type used. Please refer to go-mod-messaging for more details Trigger ExternalMqtt This section defines the external MQTT Broker connection information. Only used for external-mqtt trigger binding type EdgeX 2.0 For EdgeX 2.0 the MqttBroker section has been renamed to ExternalMqtt and moved under the Trigger section. The ExternalMqtt section now has its own SubscribeTopics and PublishTopic settings. Note external-mqtt is not the default Trigger type, so there are no default values for ExternalMqtt settings beyond those that the Go compiler gives to the empty struct. Some of those default values are not valid and must be specified, i.e. Authmode Configuration Default Value Description Url blank** Fully qualified URL to connect to the MQTT broker, i.e. tcp://localhost:1883 SubscribeTopics blank** MQTT topic(s) to subscribe to. This is a comma separated list of topics PublishTopic blank** MQTT topic to publish the function pipeline response data, if any. Supports dynamic topic placeholders. See ExternalMqtt Trigger for more details. ClientId blank** ClientId to connect to the broker with ConnectTimeout blank** Time duration indicating how long to wait before timing out broker connection, i.e \"30s\" AutoReconnect false** Indicates whether or not to retry connection if disconnected KeepAlive 0** Seconds between client ping when no active data flowing to avoid client being disconnected. 
Must be greater than 2 QOS 0** Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) Retain false** Retain setting for MQTT Connection SkipCertVerify false** Indicates if the certificate verification should be skipped SecretPath blank** Name of the path in secret provider to retrieve your secrets. Must be non-blank. AuthMode blank** Indicates what to use when connecting to the broker. Must be one of \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". If a CA Cert exists in the SecretPath then it will be used for all modes except \"none\". Note Authmode=cacert is only needed when client authentication (e.g. usernamepassword ) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert. Application Settings [ApplicationSettings] - Is used for custom application settings and is accessed via the ApplicationSettings() API. The ApplicationSettings API returns a map[string] string containing the contents of the ApplicationSettings section of the configuration.toml file. [ApplicationSettings] ApplicationName = \"My Application Service\" Custom Structured Configuration EdgeX 2.0 New for EdgeX 2.0 Custom Application Services can now define their own custom structured configuration section in the configuration.toml file. Any additional sections in the TOML are ignored by the SDK when it parses the file for the SDK defined sections. See the Custom Configuration section of the SDK documentation for more details.","title":"Application Service Configuration"},{"location":"microservices/application/GeneralAppServiceConfig/#application-service-configuration","text":"Similar to other EdgeX services, configuration is first determined by the configuration.toml file in the /res folder. Once loaded, any environment overrides are applied. 
If -cp is passed to the application on startup, the SDK will leverage the specific configuration provider (i.e Consul) to push the configuration into the provider and monitor Writable configuration from there. You will find the configuration under the edgex/appservices/2.0/ key in the provider (i.e Consul). On restart the service will pull the configuration from the provider and apply any environment overrides. This section describes the configuration elements that are unique to Application Services. Please first refer to the general Configuration documentation for configuration properties common across all EdgeX services. Note * indicates the configuration value can be changed on the fly if using a configuration provider (like Consul). ** indicates the configuration value can be changed but the service must be restarted.","title":"Application Service Configuration"},{"location":"microservices/application/GeneralAppServiceConfig/#writable","text":"The tabs below provide additional entries in the Writable section which are applicable to Application Services. Writable StoreAndForward The section configures the Store and Forward capability. Please refer to Store and Forward documentation for more details. Configuration Default Value Enabled false* Indicates whether the Store and Forward capability is enabled or disabled RetryInterval \"5m\"* Indicates the duration of time to wait before retrying to forward failed data MaxRetryCount 10* Indicates the maximum number of retries for failed data. The failed data is removed after the maximum retries has been exceeded. A value of 0 indicates endless retries. Writable Pipeline The section configures the Configurable Function Pipeline which is used only by App Service Configurable. Please refer to App Service Configurable - Getting Started section for more details. Writable InsecureSecrets This section defines Insecure Secrets that are used when running in non-secure mode, i.e. when Vault isn't available. 
This is a dynamic map of configuration, so it can be empty if no secrets are used, or it can have as many or as few user defined secrets as needed. It simulates a Secret Store in non-secure mode. Below are a few examples that are needed if using the indicated capabilities. Configuration Default Value Description DB --- This section defines a block of insecure secrets for database credentials when Redis is used for the MessageBus and/or when Store and Forward is enabled and running in non-secure mode. This section is not required if Store and Forward is not enabled and not using Redis for the MessageBus . path redisdb* Indicates the location in the simulated Secret Store where the DB secret resides. DB Secrets --- This section is the collection of DB secret data username blank* Indicates the value for the username when connecting to the database. When running in non-secure mode it is blank . password blank* Indicates the value for the password when connecting to the database. When running in non-secure mode it is blank . http --- This section defines a block of insecure secrets for HTTP Export, i.e HTTPPost function path http* Indicates the location in the simulated Secret Store where the HTTP secret resides. http Secrets --- This section is the collection of HTTP secret data. See Http Export documentation for more details on use of secret data. headervalue undefined* This indicates the name of the secret value to use as the value in the HTTP header. mqtt --- This section defines a block of insecure secrets for MQTT export, i.e. MQTTSecretSend function. path mqtt* Indicates the location in the simulated Secret Store where the MQTT secret resides. mqtt Secrets --- This section is the collection of MQTT secret data. See Mqtt Export documentation for more details on use of secret data. username blank* Indicates the value for the username when connecting to the MQTT broker using usernamepassword authentication mode. Must be configured to the value the MQTT broker is expecting. 
password blank* Indicates the value for the password when connecting to the MQTT broker using usernamepassword authentication mode. Must be configured to the value the MQTT broker is expecting. cacert blank* Indicates the value (contents) for the CA Certificate when connecting to the MQTT broker using cacert authentication mode. Must be configured to the value the MQTT broker is expecting. clientcert blank* Indicates the value (contents) for the Client Certificate when connecting to the MQTT broker using clientcert authentication mode. Must be configured to the value the MQTT broker is expecting. clientkey blank* Indicates the value (contents) for the Client Key when connecting to the MQTT broker using clientcert authentication mode. Must be configured to the value the MQTT broker is expecting.","title":"Writable"},{"location":"microservices/application/GeneralAppServiceConfig/#not-writable","text":"The tabs below provide additional configuration which are applicable to Application Services that require the service to be restarted after value(s) are changed. HttpServer EdgeX 2.0 New for EdgeX 2.0. These settings were previously in the Service configuration section specific to Application Services. Now the Service configuration is the same for all EdgeX services. See the general Configuration documentation for more details on the common Service configuration. This section contains the configuration for the internal Webserver. 
Only needed if configuring the Webserver for HTTPS Configuration Default Value Description Protocol http** Indicates the protocol for the webserver to use SecretName blank** Indicates the name of the secret in the Secret Store where the HTTPS secret data resides HTTPSCertName blank** Indicates the key name in the HTTPS secret data that contains the certificate data to use for HTTPS HTTPSKeyName blank** Indicates the key name in the HTTPS secret data that contains the key data to use for HTTPS Database This section contains the connection information. It is required when using redis for the MessageBus (which is the default) and/or when the Store and Forward capability is enabled. Note that it has a slightly different format than the database section used in the core services configuration. Configuration Default Value Description Type redisdb** Indicates the type of database used. redisdb is the only valid type. Host localhost** Indicates the hostname for the database Port 6379** Indicates the port number for the database Timeout \"30s\"** Indicates the connection timeout for the database Clients This section defines the connection information for the EdgeX Clients and is the same as that used by all EdgeX services, just which clients are needed differs. Please refer to the Note about Clients section for more details. Trigger This section defines the Trigger for incoming data. See the Triggers documentation for more details on the inner working of triggers. EdgeX 2.0 For EdgeX 2.0 the Binding section has been renamed to Trigger . Configuration Default Value Description Type edgex-messagebus** Indicates the Trigger binding type. Valid values are edgex-messagebus , external-mqtt , http , or Trigger EdgeXMessageBus This section defines the message bus connection information. Only used for edgex-messagebus binding type EdgeX 2.0 For EdgeX 2.0 the MessageBus section has been renamed to EdgexMessageBus and moved under the Trigger section. 
The SubscribeTopic setting has changed to SubscribeTopics and moved under the SubscribeHost section of EdgexMessageBus . The PublishTopic has been moved under the PublishHost section of EdgexMessageBus . Configuration Default Value Description Type redis** Indicates the type of MessageBus being used. Valid types are redis , mqtt , or zero SubscribeHost ... This section defines the connection information for subscribing/publishing to the MessageBus Host localhost** Indicates the hostname for subscribing to the MessageBus Port 6379** Indicates the port number for subscribing to the MessageBus Protocol redis** Indicates the protocol for subscribing to the MessageBus SubscribeTopics edgex/events/#** MessageBus topic(s) to subscribe to. This is a comma separated list of topics. Supports filtering by subscribe topics. See EdgeXMessageBus Trigger for more details. PublishHost ... This section defines the connection information for publishing to the MessageBus Host localhost** Indicates the hostname for publishing to the Message Bus Port 6379** Indicates the port number for publishing to the Message Bus Protocol redis** Indicates the protocol for publishing to the Message Bus PublishTopic blank** Indicates the topic in which to publish the function pipeline response data, if any. Supports dynamic topic placeholders. See EdgeXMessageBus Trigger for more details. Optional ... This section is used for optional configuration specific to the MessageBus type used. Please refer to go-mod-messaging for more details Trigger ExternalMqtt This section defines the external MQTT Broker connection information. Only used for external-mqtt trigger binding type EdgeX 2.0 For EdgeX 2.0 the MqttBroker section has been renamed to ExternalMqtt and moved under the Trigger section. The ExternalMqtt section now has its own SubscribeTopics and PublishTopic settings. 
Note external-mqtt is not the default Trigger type, so there are no default values for ExternalMqtt settings beyond those that the Go compiler gives to the empty struct. Some of those default values are not valid and must be specified, i.e. Authmode Configuration Default Value Description Url blank** Fully qualified URL to connect to the MQTT broker, i.e. tcp://localhost:1883 SubscribeTopics blank** MQTT topic(s) to subscribe to. This is a comma separated list of topics PublishTopic blank** MQTT topic to publish the function pipeline response data, if any. Supports dynamic topic placeholders. See ExternalMqtt Trigger for more details. ClientId blank** ClientId to connect to the broker with ConnectTimeout blank** Time duration indicating how long to wait before timing out broker connection, i.e \"30s\" AutoReconnect false** Indicates whether or not to retry connection if disconnected KeepAlive 0** Seconds between client ping when no active data flowing to avoid client being disconnected. Must be greater than 2 QOS 0** Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) Retain false** Retain setting for MQTT Connection SkipCertVerify false** Indicates if the certificate verification should be skipped SecretPath blank** Name of the path in secret provider to retrieve your secrets. Must be non-blank. AuthMode blank** Indicates what to use when connecting to the broker. Must be one of \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". If a CA Cert exists in the SecretPath then it will be used for all modes except \"none\". Note Authmode=cacert is only needed when client authentication (e.g. usernamepassword ) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert. Application Settings [ApplicationSettings] - Is used for custom application settings and is accessed via the ApplicationSettings() API. 
The ApplicationSettings API returns a map[string] string containing the contents of the ApplicationSettings section of the configuration.toml file. [ApplicationSettings] ApplicationName = \"My Application Service\" Custom Structured Configuration EdgeX 2.0 New for EdgeX 2.0 Custom Application Services can now define their own custom structured configuration section in the configuration.toml file. Any additional sections in the TOML are ignored by the SDK when it parses the file for the SDK defined sections. See the Custom Configuration section of the SDK documentation for more details.","title":"Not Writable"},{"location":"microservices/application/GettingStarted/","text":"Getting Started with Application Services Types of Application Services There are two flavors of Application Services: configurable and custom . This section will describe how and when each flavor should be used. Configurable The App Functions SDK has a full suite of built-in features that are accessible via configuration when using the App Service Configurable service. This service is built using the App Functions SDK and uses configuration profiles to define separate distinct instances of the service. The service comes with a few built in profiles for common use cases, but custom profiles can also be used. If your use case needs can be met with the built-in functionality then the App Service Configurable service is right for you. See the App Service Configurable section for more details. Custom Custom Application Services are needed when use case needs cannot be met with just the built-in functionality. This is when you must develop your own custom Application Service using the App Functions SDK . Typically this is triggered by the use case needing a custom Pipeline Function . See the App Functions SDK section for all the details on the features your custom Application Service can take advantage of. 
Template To help accelerate the creation of your custom Application Service the App Functions SDK contains a template for new custom Application Services. This template has TODO's in the code and a README that walk you through the creation of your new custom Application Service. See the template README for more details. Triggers Triggers are common to both Configurable and Custom Application Services. They are the next logical area to get familiar with. See the Triggers section for more details. Configuration Finally, service configuration is very important to understand for both Configurable and Custom Application Services. The service configuration documentation is broken into two parts. First is the configuration that is common to all EdgeX services and the second is the configuration that is specific to Application Services. See the Common Configuration and Application Service Configuration sections for more details.","title":"Getting Started"},{"location":"microservices/application/GettingStarted/#getting-started-with-application-services","text":"","title":"Getting Started with Application Services"},{"location":"microservices/application/GettingStarted/#types-of-application-services","text":"There are two flavors of Application Services: configurable and custom . This section will describe how and when each flavor should be used.","title":"Types of Application Services"},{"location":"microservices/application/GettingStarted/#configurable","text":"The App Functions SDK has a full suite of built-in features that are accessible via configuration when using the App Service Configurable service. This service is built using the App Functions SDK and uses configuration profiles to define separate distinct instances of the service. The service comes with a few built in profiles for common use cases, but custom profiles can also be used. 
If your use case needs can be met with the built-in functionality then the App Service Configurable service is right for you. See the App Service Configurable section for more details.","title":"Configurable"},{"location":"microservices/application/GettingStarted/#custom","text":"Custom Application Services are needed when use case needs cannot be met with just the built-in functionality. This is when you must develop your own custom Application Service using the App Functions SDK . Typically this is triggered by the use case needing a custom Pipeline Function . See the App Functions SDK section for all the details on the features your custom Application Service can take advantage of.","title":"Custom"},{"location":"microservices/application/GettingStarted/#template","text":"To help accelerate the creation of your custom Application Service the App Functions SDK contains a template for new custom Application Services. This template has TODO's in the code and a README that walk you through the creation of your new custom Application Service. See the template README for more details.","title":"Template"},{"location":"microservices/application/GettingStarted/#triggers","text":"Triggers are common to both Configurable and Custom Application Services. They are the next logical area to get familiar with. See the Triggers section for more details.","title":"Triggers"},{"location":"microservices/application/GettingStarted/#configuration","text":"Finally, service configuration is very important to understand for both Configurable and Custom Application Services. The service configuration documentation is broken into two parts. First is the configuration that is common to all EdgeX services and the second is the configuration that is specific to Application Services. 
See the Common Configuration and Application Service Configuration sections for more details.","title":"Configuration"},{"location":"microservices/application/Triggers/","text":"Application Service Triggers Introduction Triggers determine how the App Functions Pipeline begins execution. The trigger is determined by the [Trigger] configuration section in the configuration.toml file. Edgex 2.0 For Edgex 2.0 the [Binding] configuration section has been renamed to [Trigger] . The [MessageBus] section has been renamed to EdgexMessageBus and moved under the [Trigger] section. The [MqttBroker] section has been renamed to ExternalMqtt and moved under the [Trigger] section. There are 4 types of Triggers supported in the App Functions SDK which are discussed in this document EdgeX Message Bus - Default Trigger for most use cases as this is how the App Services receive Events from EdgeX Core Data and/or Devices Services External MQTT - Useful when receiving commands from an external/Cloud MQTT broker. HTTP - Useful during development and testing of custom functions. Custom - Allows custom Application Services to implement their own Custom Trigger EdgeX MessageBus Trigger An EdgeX MessageBus trigger will execute the pipeline every time data is received from the configured Edgex MessageBus SubscribeTopics . The EdgeX MessageBus is the central message bus internal to EdgeX and has a specific message envelope that wraps all data published to this message bus. There currently are three implementations of the EdgeX MessageBus available to be used. These are Redis Pub/Sub (default), MQTT and ZeroMQ (ZMQ). The implementation type is selected via the [Trigger.EdgexMessageBus] configuration described below. Type Configuration Edgex 2.0 For EdgeX 2.0 the SubscribeTopic has been renamed to SubscribeTopics and moved under the EdgexMessageBus SubscribeHost section. The PublishTopic has also been moved under the EdgexMessageBus PublishHost section. 
Also the legacy type of messagebus has been removed. Here's an example: [Trigger] Type = \"edgex-messagebus\" The Type= is set to the edgex-messagebus trigger type. The Context function ctx.SetResponseData([]byte outputData) stores the data to send back to the EdgeX MessageBus on the topic specified by the PublishHost PublishTopic= setting. MessageBus Connection Configuration The other piece of configuration required is the connection settings: [Trigger.EdgexMessageBus] Type = \"redis\" # message bus type (i.e \"redis`, `mqtt` or `zero` for ZeroMQ) [Trigger.EdgexMessageBus.SubscribeHost] Host = \"localhost\" Port = 6379 Protocol = \"redis\" SubscribeTopics = \"edgex/events/#\" [Trigger.EdgexMessageBus.PublishHost] Host = \"localhost\" Port = 6379 Protocol = \"redis\" PublishTopic = \"\" # optional if publishing response back to the MessageBus Edgex 2.0 For Edgex 2.0 the PublishTopic can now have placeholders. See the Publish Topic Placeholders section below for more details As stated above there are three EdgeX MessageBus implementations you can choose from. These type values are as follows: redis - for Redis Pub/Sub (Requires Redis running and Core Data and/or Device Services configured to use Redis Pub/Sub) mqtt - for MQTT (Requires a MQTT Broker running and Core Data and/or Device Services configured to use MQTT) zero - for ZeroMQ (No Broker/Service required. Core Data must be configured to use Zero and Device Services configured to use REST to Core Data) Edgex 2.0 For Edgex 2.0 Redis is now the default EdgeX MessageBus implementation used. Also, the Redis implementation changed from Redis streams to Redis Pub/Sub , thus the type value changed from redisstreams to redis Important When using ZMQ for the message bus, the Publish Host MUST be different for each publisher since they will bind to the specific port. 5563 for example cannot be used to publish since EdgeX Core Data has bound to that port. 
Similarly, you cannot have two separate instances of the app functions SDK running and publishing to the same port. This is why, once Device Services started publishing to the EdgeX MessageBus, the default was changed to Redis Pub/Sub Note When using MQTT for the message bus, there is additional configuration required for specifying the MQTT specific options. Example Using MQTT Here is an example EdgexMessageBus configuration when using MQTT as the message bus: [Trigger.EdgexMessageBus] Type = \"mqtt\" [Trigger.EdgexMessageBus.SubscribeHost] Host = \"localhost\" Port = 1883 Protocol = \"tcp\" SubscribeTopics = \"edgex/events/#\" [Trigger.EdgexMessageBus.PublishHost] Host = \"localhost\" Port = 1883 Protocol = \"tcp\" PublishTopic = \"\" # optional if publishing response back to the MessageBus [Trigger.EdgexMessageBus.Optional] # MQTT Specific options ClientId = \"new-app-service\" Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"30\" # Seconds SkipCertVerify = \"false\" authmode = \"none\" # change to \"usernamepassword\", \"clientcert\", or \"cacert\" for secure MQTT messagebus. secretname = \"mqtt-bus\" EdgeX 2.0 New for EdgeX 2.0 is the Secure MessageBus when using the Redis Pub/Sub implementation. See the Secure MessageBus documentation for more details. EdgeX 2.0 Also new for EdgeX 2.0 is that the MQTT MessageBus implementation now supports retrieving secrets from the Secret Store for a secure MQTT connection, but there is not yet any facility to generate the credentials on first startup and distribute them to all services, as is done with Redis Pub/sub . 
Filter By Topics EdgeX 2.0 New for EdgeX 2.0 App services now have the capability to filter by EdgeX MessageBus topics rather than using Filter functions in the functions pipeline. Filtering by topic is more efficient since the App Service never receives the unwanted data off the MessageBus. Core Data and/or Device Services now publish to multi-level topics that include the profilename , devicename and sourcename . Sources are the commandname or resourcename that generated the Event. The publish topics now look like this: # From Core Data edgex/events/core/<profile-name>/<device-name>/<source-name> # From Device Services edgex/events/device/<profile-name>/<device-name>/<source-name> This, combined with the App Services' capability to have multiple subscriptions, allows for multiple filters by subscription. The SubscribeTopics setting takes a comma separated list of subscribe topics. Here are a few examples of how to configure the SubscribeTopics setting under the Trigger.EdgexMessageBus.SubscribeHost section to filter by subscriptions using the profile , device and source names from the SNMP Device Service file here : Filter for all Events SubscribeTopics = \"edgex/events/#\" Filter for Events only from a single class of devices (device profile defines a class of device) SubscribeTopics = \"edgex/events/#/trendnet/#\" Filter for Events only from a single actual device SubscribeTopics = \"edgex/events/#/#/trendnet01/#\" Filter for Events from two specific actual devices SubscribeTopics = \"edgex/events/#/#/trendnet01/#, edgex/events/#/#/trendnet02/#\" Filter for Events from two specific sources. SubscribeTopics = \"edgex/events/#/#/#/Uptime, edgex/events/#/#/#/MacAddress\" Note The above examples are for when Redis is used as the EdgeX MessageBus implementation, which is now the default. The Redis implementation uses the # wildcard character for both multi-level and single-level wildcards. 
In the first example (multi-level) the # is used at the end in the location for where Core Data's and Device Service's publish topics differ. This location will be core when coming from Core Data or device when coming from a Device Service. The additional use of # within the topic, not at the end, (single-level) allows for any Profile , Device or Source when specifying one of the others. Note For the MQTT implementation of the EdgeX MessageBus, the # is also used for the multi-level wildcard, but the single-level wildcard is the + character. So the first and last examples above would be as follows for when using the MQTT implementation SubscribeTopics = \"edgex/events/#\" SubscribeTopics = \"edgex/events/+/trendnet/#\" SubscribeTopics = \"edgex/events/+/+/trendnet01/#\" SubscribeTopics = \"edgex/events/+/+/trendnet01/#, edgex/events/+/+/trendnet02/#\" SubscribeTopics = \"edgex/events/+/+/+/Uptime, edgex/events/+/+/+/MacAddress\" External MQTT Trigger An External MQTT trigger will execute the pipeline every time data is received from an external MQTT broker on the configured SubscribeTopics . Note The data received from the external MQTT broker is not wrapped with any metadata known to EdgeX. The data is handled as JSON or CBOR. The data is assumed to be JSON unless the first byte in the data is not a { or a [ , in which case it is then assumed to be CBOR. Note The data received, encoded as JSON or CBOR, must match the TargetType defined by your application service. The default TargetType is an Edgex Event . See TargetType for more details. 
Type Configuration Here's an example: [Trigger] Type = \"external-mqtt\" [Trigger.externalmqtt] Url = \"tls://test.mosquitto.org:8884\" SubscribeTopics = \"edgex/#\" ClientId = \"app-external-mqtt-trigger\" Qos = 0 KeepAlive = 10 Retained = false AutoReconnect = true ConnectTimeout = \"30s\" SkipCertVerify = true AuthMode = \"clientcert\" SecretPath = \"external-mqtt\" Edgex 2.0 For EdgeX 2.0 the SubscribeTopic has been renamed to SubscribeTopics and moved under the ExternalMqtt section. The PublishTopic has also been moved under the ExternalMqtt section. The Type= is set to external-mqtt . To receive data from the external MQTT Broker you must set your SubscribeTopics= to the appropriate topic(s) that the external publisher is using. You may also designate a PublishTopic= if you wish to publish data back to the external MQTT Broker. The Context function ctx.SetResponseData([]byte outputData) stores the data to send back to the external MQTT Broker on the topic specified by the PublishTopic= setting. Edgex 2.2 Prior to EdgeX 2.2, if AuthMode was set to usernamepassword , clientcert , or cacert and the App Service ran in secure mode, the required credentials had to be stored in the Secret Store via the Vault CLI, REST API, or Web UI before starting the App Service. Otherwise the App Service would fail to initialize the External MQTT Trigger and then shut down because the required credentials did not exist in the Secret Store at the time the service started. Today, you can start the App Service and store the required credentials using the App Service API afterwards. If the credentials found in the Secret Store cannot satisfy the App Service, it will try to fetch the credentials again once the secret creation API is called.
External MQTT Broker Configuration The other piece of configuration required is the MQTT Broker connection settings: [Trigger.ExternalMqtt] Url = \"tcp://localhost:1883\" # fully qualified URL to connect to the MQTT broker SubscribeTopics = \"SomeTopics\" PublishTopic = \"\" # optional if publishing response back to the External MQTT Broker ClientId = \"AppService\" ConnectTimeout = \"5s\" # 5 seconds AutoReconnect = true KeepAlive = 10 # Seconds (must be 2 or greater) QoS = 0 # Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) Retain = true SkipCertVerify = false SecretPath = \"mqtt-trigger\" AuthMode = \"none\" # Options are \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". Edgex 2.0 For Edgex 2.0 the PublishTopic can have placeholders. See the Publish Topic Placeholders section below for more details HTTP Trigger Designating an HTTP trigger will allow the pipeline to be triggered by a RESTful POST call to http://[host]:[port]/api/v2/trigger/ . Type Configuration Here's an example: [Trigger] Type = \"http\" The Type= is set to http . This will enable listening on the api/v2/trigger/ endpoint. No other configuration is required. The Context function ctx.SetResponseData([]byte outputData) stores the data to send back as the response to the requestor that originally triggered the HTTP Request. Note The HTTP trigger uses the content-type from the HTTP Header to determine if the data is JSON or CBOR encoded and the optional X-Correlation-ID to set the correlation ID for the request. Note The data received, encoded as JSON or CBOR, must match the TargetType defined by your application service. The default TargetType is an Edgex Event . See TargetType for more details. Custom Triggers Edgex 2.0 New for EdgeX 2.0 It is also possible to define your own trigger and register a factory function for it with the SDK. 
You can then configure the trigger by registering a factory function to build it along with a name to use in the config file. These triggers can be registered with: service . RegisterCustomTriggerFactory ( \"my-trigger-name\" , myFactoryFunc ) Note You can NOT override trigger names built into the SDK ( \"edgex-messagebus\", \"external-mqtt\", or \"http\") for a custom trigger. The trigger factory function is bound to an instance of a trigger configuration struct that is provided by the SDK: type TriggerConfig struct { Logger logger . LoggingClient ContextBuilder TriggerContextBuilder // Deprecated: use MessageReceived MessageProcessor TriggerMessageProcessor MessageReceived TriggerMessageHandler ConfigLoader TriggerConfigLoader } This type carries a pointer to the internal edgex logger, along with four functions: ContextBuilder builds an interfaces.AppFunctionContext from a message envelope you construct. MessageProcessor (DEPRECATED) exposes a function that sends your message envelope and context built above into the default function pipeline. MessageReceived exposes a function that sends your message envelope and context to any pipelines configured in the EdgeX service. It also takes a function that will be run to process the response for each successful pipeline. Note The context passed in to Received will be cloned for each pipeline configured to run. If a nil context is passed a new one will be initialized from the message. ConfigLoader exposes a function that loads your custom config struct. By default this is done from the primary EdgeX configuration pipeline, and only loads root-level elements. If you need to override these functions it can be done in the factory function registered with the service. The custom trigger constructed here will then need to implement the trigger interface so that the SDK can invoke it: type Trigger interface { Initialize ( wg * sync . WaitGroup , ctx context . Context , background <- chan BackgroundMessage ) ( bootstrap . 
Deferred , error ) } type BackgroundMessage interface { Message () types . MessageEnvelope Topic () string } This leaves a lot of flexibility for how you want the trigger to behave (for example you could write a trigger to watch for file changes, or run on a timer). Below is a sample implementation of a trigger that reads lines from os.Stdin and passes the captured strings through the edgex function pipeline. In this case the target type for the service is set to &[]byte{} . type stdinTrigger struct { tc appsdk . TriggerConfig } func ( t * stdinTrigger ) Initialize ( wg * sync . WaitGroup , ctx context . Context , _ <- chan interfaces . BackgroundMessage ) ( bootstrap . Deferred , error ) { msgs := make ( chan [] byte ) receiveMessage := true responseHandler := func ( ctx AppFunctionContext , pipeline * FunctionPipeline ) { // do stuff } go func () { fmt . Print ( \"> \" ) rdr := bufio . NewReader ( os . Stdin ) for receiveMessage { s , err := rdr . ReadString ( '\\n' ) s = strings . TrimRight ( s , \"\\n\" ) if err != nil { t . tc . Logger . Error ( err . Error ()) continue } msgs <- [] byte ( s ) } }() go func () { for receiveMessage { select { case <- ctx . Done (): receiveMessage = false case m := <- msgs : go func () { env := types . MessageEnvelope { Payload : m , } ctx := t . tc . ContextBuilder ( env ) err := t . tc . MessageReceived ( ctx , env , responseHandler ) if err != nil { t . tc . Logger . Error ( err . Error ()) } }() } } }() return func () { receiveMessage = false }, nil } This trigger can then be registered by calling: appService . RegisterCustomTriggerFactory ( \"custom-stdin\" , func ( config appsdk . TriggerConfig ) ( appsdk . Trigger , error ) { return & stdinTrigger { tc : config , }, nil }) Type Configuration Here's an example: [Trigger] Type = \"custom-stdin\" Now the custom trigger is configured to be used rather than one of the built-in triggers. 
A complete working example can be found here Publish Topic Placeholders Edgex 2.0 New for EdgeX 2.0 Both the EdgeX MessageBus and the External MQTT triggers support the new Publish Topic Placeholders capability. The configured PublishTopic for either of these triggers can contain placeholders for runtime replacements. The placeholders are replaced with values from the new Context Storage whose keys match the placeholder names. Function pipelines can add values to the Context Storage which can then be used as replacement values in the publish topic. If an EdgeX Event is received by the configured trigger the Event's profilename , devicename and sourcename as well as the received topic will be seeded into the Context Storage . See the Context Storage documentation for more details. The Publish Topic Placeholders format is a simple {key-name} that can appear anywhere in the topic multiple times. An error will occur if a specified placeholder does not exist in the Context Storage . Example PublishTopic = \"data/{profilename}/{devicename}/{custom}\" Received Topic Edgex 2.0 New for EdgeX 2.0 The topic the data was received on for EdgeX MessageBus and the External MQTT triggers is now stored in the new Context Storage with the key receivedtopic . This makes it available to pipeline functions via the Context Storage .","title":"Triggers"},{"location":"microservices/application/Triggers/#application-service-triggers","text":"","title":"Application Service Triggers"},{"location":"microservices/application/Triggers/#introduction","text":"Triggers determine how the App Functions Pipeline begins execution. The trigger is determined by the [Trigger] configuration section in the configuration.toml file. Edgex 2.0 For Edgex 2.0 the [Binding] configuration section has been renamed to [Trigger] . The [MessageBus] section has been renamed to EdgexMessageBus and moved under the [Trigger] section. The [MqttBroker] section has been renamed to ExternalMqtt and moved under the [Trigger] section. 
There are 4 types of Triggers supported in the App Functions SDK which are discussed in this document: EdgeX Message Bus - Default Trigger for most use cases as this is how the App Services receive Events from EdgeX Core Data and/or Device Services External MQTT - Useful when receiving commands from an external/Cloud MQTT broker. HTTP - Useful during development and testing of custom functions. Custom - Allows custom Application Services to implement their own Custom Trigger","title":"Introduction"},{"location":"microservices/application/Triggers/#edgex-messagebus-trigger","text":"An EdgeX MessageBus trigger will execute the pipeline every time data is received from the configured Edgex MessageBus SubscribeTopics . The EdgeX MessageBus is the central message bus internal to EdgeX and has a specific message envelope that wraps all data published to this message bus. There currently are three implementations of the EdgeX MessageBus available to be used. These are Redis Pub/Sub (default), MQTT and ZeroMQ (ZMQ). The implementation type is selected via the [Trigger.EdgexMessageBus] configuration described below.","title":"EdgeX MessageBus Trigger"},{"location":"microservices/application/Triggers/#type-configuration","text":"Edgex 2.0 For EdgeX 2.0 the SubscribeTopic has been renamed to SubscribeTopics and moved under the EdgexMessageBus SubscribeHost section. The PublishTopic has also been moved under the EdgexMessageBus PublishHost section. Also the legacy type of messagebus has been removed. Here's an example: [Trigger] Type = \"edgex-messagebus\" The Type= is set to the edgex-messagebus trigger type. 
The Context function ctx.SetResponseData([]byte outputData) stores the data to send back to the EdgeX MessageBus on the topic specified by the PublishHost PublishTopic= setting.","title":"Type Configuration"},{"location":"microservices/application/Triggers/#messagebus-connection-configuration","text":"The other piece of configuration required is the connection settings: [Trigger.EdgexMessageBus] Type = \"redis\" # message bus type (i.e \"redis`, `mqtt` or `zero` for ZeroMQ) [Trigger.EdgexMessageBus.SubscribeHost] Host = \"localhost\" Port = 6379 Protocol = \"redis\" SubscribeTopics = \"edgex/events/#\" [Trigger.EdgexMessageBus.PublishHost] Host = \"localhost\" Port = 6379 Protocol = \"redis\" PublishTopic = \"\" # optional if publishing response back to the MessageBus Edgex 2.0 For Edgex 2.0 the PublishTopic can now have placeholders. See the Publish Topic Placeholders section below for more details As stated above there are three EdgeX MessageBus implementations you can choose from. These type values are as follows: redis - for Redis Pub/Sub (Requires Redis running and Core Data and/or Device Services configured to use Redis Pub/Sub) mqtt - for MQTT (Requires a MQTT Broker running and Core Data and/or Device Services configured to use MQTT) zero - for ZeroMQ (No Broker/Service required. Core Data must be configured to use Zero and Device Services configured to use REST to Core Data) Edgex 2.0 For Edgex 2.0 Redis is now the default EdgeX MessageBus implementation used. Also, the Redis implementation changed from Redis streams to Redis Pub/Sub , thus the type value changed from redisstreams to redis Important When using ZMQ for the message bus, the Publish Host MUST be different for each publisher since they will bind to the specific port. 5563 for example cannot be used to publish since EdgeX Core Data has bound to that port. Similarly, you cannot have two separate instances of the app functions SDK running and publishing to the same port. 
This is why, once Device Services started publishing to the EdgeX MessageBus, the default was changed to Redis Pub/Sub Note When using MQTT for the message bus, there is additional configuration required for specifying the MQTT specific options.","title":"MessageBus Connection Configuration"},{"location":"microservices/application/Triggers/#example-using-mqtt","text":"Here is an example EdgexMessageBus configuration when using MQTT as the message bus: [Trigger.EdgexMessageBus] Type = \"mqtt\" [Trigger.EdgexMessageBus.SubscribeHost] Host = \"localhost\" Port = 1883 Protocol = \"tcp\" SubscribeTopics = \"edgex/events/#\" [Trigger.EdgexMessageBus.PublishHost] Host = \"localhost\" Port = 1883 Protocol = \"tcp\" PublishTopic = \"\" # optional if publishing response back to the MessageBus [Trigger.EdgexMessageBus.Optional] # MQTT Specific options ClientId = \"new-app-service\" Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"30\" # Seconds SkipCertVerify = \"false\" authmode = \"none\" # change to \"usernamepassword\", \"clientcert\", or \"cacert\" for secure MQTT messagebus. secretname = \"mqtt-bus\" EdgeX 2.0 New for EdgeX 2.0 is the Secure MessageBus when using the Redis Pub/Sub implementation. See the Secure MessageBus documentation for more details. EdgeX 2.0 Also new for EdgeX 2.0 is that the MQTT MessageBus implementation now supports retrieving secrets from the Secret Store for a secure MQTT connection, but there is not yet any facility to generate the credentials on first startup and distribute them to all services, as is done with Redis Pub/sub . 
This MQTT credentials generation and distribution is a future enhancement for EdgeX security services.","title":"Example Using MQTT"},{"location":"microservices/application/Triggers/#filter-by-topics","text":"EdgeX 2.0 New for EdgeX 2.0 App services now have the capability to filter by EdgeX MessageBus topics rather than using Filter functions in the functions pipeline. Filtering by topic is more efficient since the App Service never receives the unwanted data off the MessageBus. Core Data and/or Device Services now publish to multi-level topics that include the profilename , devicename and sourcename . Sources are the commandname or resourcename that generated the Event. The publish topics now look like this: # From Core Data edgex/events/core/<profile-name>/<device-name>/<source-name> # From Device Services edgex/events/device/<profile-name>/<device-name>/<source-name> This, combined with the App Services' capability to have multiple subscriptions, allows for multiple filters by subscription. The SubscribeTopics setting takes a comma separated list of subscribe topics. Here are a few examples of how to configure the SubscribeTopics setting under the Trigger.EdgexMessageBus.SubscribeHost section to filter by subscriptions using the profile , device and source names from the SNMP Device Service file here : Filter for all Events SubscribeTopics = \"edgex/events/#\" Filter for Events only from a single class of devices (device profile defines a class of device) SubscribeTopics = \"edgex/events/#/trendnet/#\" Filter for Events only from a single actual device SubscribeTopics = \"edgex/events/#/#/trendnet01/#\" Filter for Events from two specific actual devices SubscribeTopics = \"edgex/events/#/#/trendnet01/#, edgex/events/#/#/trendnet02/#\" Filter for Events from two specific sources. SubscribeTopics = \"edgex/events/#/#/#/Uptime, edgex/events/#/#/#/MacAddress\" Note The above examples are for when Redis is used as the EdgeX MessageBus implementation, which is now the default. The Redis implementation uses the # wildcard character for both multi-level and single-level wildcards. 
The implementation actually converts all # 's to * 's. The * is the actual wildcard character used by Redis Pub/Sub. In the first example (multi-level) the # is used at the end in the location where Core Data's and Device Service's publish topics differ. This location will be core when coming from Core Data or device when coming from a Device Service. The additional use of # within the topic, not at the end (single-level), allows for any Profile , Device or Source when specifying one of the others. Note For the MQTT implementation of the EdgeX MessageBus, the # is also used for the multi-level wildcard, but the single-level wildcard is the + character. So the examples above would be as follows when using the MQTT implementation: SubscribeTopics = \"edgex/events/#\" SubscribeTopics = \"edgex/events/+/trendnet/#\" SubscribeTopics = \"edgex/events/+/+/trendnet01/#\" SubscribeTopics = \"edgex/events/+/+/trendnet01/#, edgex/events/+/+/trendnet02/#\" SubscribeTopics = \"edgex/events/+/+/+/Uptime, edgex/events/+/+/+/MacAddress\"","title":"Filter By Topics"},{"location":"microservices/application/Triggers/#external-mqtt-trigger","text":"An External MQTT trigger will execute the pipeline every time data is received from an external MQTT broker on the configured SubscribeTopics . Note The data received from the external MQTT broker is not wrapped with any metadata known to EdgeX. The data is handled as JSON or CBOR. The data is assumed to be JSON unless the first byte in the data is not a { or a [ , in which case it is then assumed to be CBOR. Note The data received, encoded as JSON or CBOR, must match the TargetType defined by your application service. The default TargetType is an Edgex Event . 
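The JSON-versus-CBOR rule above (assume JSON unless the first byte is neither { nor [) is easy to picture as code. A minimal illustrative sketch; the helper name is hypothetical, not the SDK's implementation:

```go
package main

import "fmt"

// detectEncoding mirrors the rule described above for payloads arriving
// from an external MQTT broker: a payload is assumed to be JSON unless
// its first byte is neither '{' nor '[', in which case it is treated as CBOR.
func detectEncoding(payload []byte) string {
	if len(payload) > 0 && (payload[0] == '{' || payload[0] == '[') {
		return "application/json"
	}
	return "application/cbor"
}

func main() {
	fmt.Println(detectEncoding([]byte(`{"deviceName":"trendnet01"}`)))
	// 0xA1 begins a CBOR map, so this payload is treated as CBOR.
	fmt.Println(detectEncoding([]byte{0xA1, 0x63, 0x66, 0x6F, 0x6F, 0x01}))
}
```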
See TargetType for more details.","title":"External MQTT Trigger"},{"location":"microservices/application/Triggers/#type-configuration_1","text":"Here's an example: [Trigger] Type = \"external-mqtt\" [Trigger.externalmqtt] Url = \"tls://test.mosquitto.org:8884\" SubscribeTopics = \"edgex/#\" ClientId = \"app-external-mqtt-trigger\" Qos = 0 KeepAlive = 10 Retained = false AutoReconnect = true ConnectTimeout = \"30s\" SkipCertVerify = true AuthMode = \"clientcert\" SecretPath = \"external-mqtt\" Edgex 2.0 For EdgeX 2.0 the SubscribeTopic has been renamed to SubscribeTopics and moved under the ExternalMqtt section. The PublishTopic has also been moved under the ExternalMqtt section. The Type= is set to external-mqtt . To receive data from the external MQTT Broker you must set your SubscribeTopics= to the appropriate topic(s) that the external publisher is using. You may also designate a PublishTopic= if you wish to publish data back to the external MQTT Broker. The Context function ctx.SetResponseData([]byte outputData) stores the data to send back to the external MQTT Broker on the topic specified by the PublishTopic= setting. Edgex 2.2 Prior to EdgeX 2.2, if AuthMode was set to usernamepassword , clientcert , or cacert and the App Service ran in secure mode, the required credentials had to be stored in the Secret Store via the Vault CLI, REST API, or Web UI before starting the App Service. Otherwise the App Service would fail to initialize the External MQTT Trigger and then shut down because the required credentials did not exist in the Secret Store at the time the service started. Today, you can start the App Service and store the required credentials using the App Service API afterwards. 
If the credentials found in the Secret Store cannot satisfy the App Service, it will try to fetch the credentials again once the secret creation API is called.","title":"Type Configuration"},{"location":"microservices/application/Triggers/#external-mqtt-broker-configuration","text":"The other piece of configuration required is the MQTT Broker connection settings: [Trigger.ExternalMqtt] Url = \"tcp://localhost:1883\" # fully qualified URL to connect to the MQTT broker SubscribeTopics = \"SomeTopics\" PublishTopic = \"\" # optional if publishing response back to the External MQTT Broker ClientId = \"AppService\" ConnectTimeout = \"5s\" # 5 seconds AutoReconnect = true KeepAlive = 10 # Seconds (must be 2 or greater) QoS = 0 # Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) Retain = true SkipCertVerify = false SecretPath = \"mqtt-trigger\" AuthMode = \"none\" # Options are \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". Edgex 2.0 For Edgex 2.0 the PublishTopic can have placeholders. See the Publish Topic Placeholders section below for more details","title":"External MQTT Broker Configuration"},{"location":"microservices/application/Triggers/#http-trigger","text":"Designating an HTTP trigger will allow the pipeline to be triggered by a RESTful POST call to http://[host]:[port]/api/v2/trigger/ .","title":"HTTP Trigger"},{"location":"microservices/application/Triggers/#type-configuration_2","text":"Here's an example: [Trigger] Type = \"http\" The Type= is set to http . This will enable listening on the api/v2/trigger/ endpoint. No other configuration is required. The Context function ctx.SetResponseData([]byte outputData) stores the data to send back as the response to the requestor that originally triggered the HTTP Request. Note The HTTP trigger uses the content-type from the HTTP Header to determine if the data is JSON or CBOR encoded and the optional X-Correlation-ID to set the correlation ID for the request. 
Note The data received, encoded as JSON or CBOR, must match the TargetType defined by your application service. The default TargetType is an Edgex Event . See TargetType for more details.","title":"Type Configuration"},{"location":"microservices/application/Triggers/#custom-triggers","text":"Edgex 2.0 New for EdgeX 2.0 It is also possible to define your own trigger and register a factory function for it with the SDK. You can then configure the trigger by registering a factory function to build it along with a name to use in the config file. These triggers can be registered with: service . RegisterCustomTriggerFactory ( \"my-trigger-name\" , myFactoryFunc ) Note You can NOT override trigger names built into the SDK ( \"edgex-messagebus\", \"external-mqtt\", or \"http\") for a custom trigger. The trigger factory function is bound to an instance of a trigger configuration struct that is provided by the SDK: type TriggerConfig struct { Logger logger . LoggingClient ContextBuilder TriggerContextBuilder // Deprecated: use MessageReceived MessageProcessor TriggerMessageProcessor MessageReceived TriggerMessageHandler ConfigLoader TriggerConfigLoader } This type carries a pointer to the internal edgex logger, along with four functions: ContextBuilder builds an interfaces.AppFunctionContext from a message envelope you construct. MessageProcessor (DEPRECATED) exposes a function that sends your message envelope and context built above into the default function pipeline. MessageReceived exposes a function that sends your message envelope and context to any pipelines configured in the EdgeX service. It also takes a function that will be run to process the response for each successful pipeline. Note The context passed in to Received will be cloned for each pipeline configured to run. If a nil context is passed a new one will be initialized from the message. ConfigLoader exposes a function that loads your custom config struct. 
By default this is done from the primary EdgeX configuration pipeline, and only loads root-level elements. If you need to override these functions it can be done in the factory function registered with the service. The custom trigger constructed here will then need to implement the trigger interface so that the SDK can invoke it: type Trigger interface { Initialize ( wg * sync . WaitGroup , ctx context . Context , background <- chan BackgroundMessage ) ( bootstrap . Deferred , error ) } type BackgroundMessage interface { Message () types . MessageEnvelope Topic () string } This leaves a lot of flexibility for how you want the trigger to behave (for example you could write a trigger to watch for file changes, or run on a timer). Below is a sample implementation of a trigger that reads lines from os.Stdin and passes the captured strings through the edgex function pipeline. In this case the target type for the service is set to &[]byte{} . type stdinTrigger struct { tc appsdk . TriggerConfig } func ( t * stdinTrigger ) Initialize ( wg * sync . WaitGroup , ctx context . Context , _ <- chan interfaces . BackgroundMessage ) ( bootstrap . Deferred , error ) { msgs := make ( chan [] byte ) receiveMessage := true responseHandler := func ( ctx AppFunctionContext , pipeline * FunctionPipeline ) { // do stuff } go func () { fmt . Print ( \"> \" ) rdr := bufio . NewReader ( os . Stdin ) for receiveMessage { s , err := rdr . ReadString ( '\\n' ) s = strings . TrimRight ( s , \"\\n\" ) if err != nil { t . tc . Logger . Error ( err . Error ()) continue } msgs <- [] byte ( s ) } }() go func () { for receiveMessage { select { case <- ctx . Done (): receiveMessage = false case m := <- msgs : go func () { env := types . MessageEnvelope { Payload : m , } ctx := t . tc . ContextBuilder ( env ) err := t . tc . MessageReceived ( ctx , env , responseHandler ) if err != nil { t . tc . Logger . Error ( err . 
Error ()) } }() } } }() return cancel , nil } This trigger can then be registered by calling: appService . RegisterCustomTriggerFactory ( \"custom-stdin\" , func ( config appsdk . TriggerConfig ) ( appsdk . Trigger , error ) { return & stdinTrigger { tc : config , }, nil })","title":"Custom Triggers"},{"location":"microservices/application/Triggers/#type-configuration_3","text":"Here's an example: [Trigger] Type = \"custom-stdin\" Now the custom trigger is configured to be used rather than one of the built-in triggers. A complete working example can be found here","title":"Type Configuration"},{"location":"microservices/application/Triggers/#publish-topic-placeholders","text":"Edgex 2.0 New for EdgeX 2.0 Both the EdgeX MessageBus and the External MQTT triggers support the new Publish Topic Placeholders capability. The configured PublishTopic for either of these triggers can contain placeholders for runtime replacements. The placeholders are replaced with values from the new Context Storage whose keys match the placeholder names. Function pipelines can add values to the Context Storage which can then be used as replacement values in the publish topic. If an EdgeX Event is received by the configured trigger, the Event's profilename , devicename and sourcename will be seeded into the Context Storage . See the Context Storage documentation for more details. The Publish Topic Placeholders format is a simple {<key-name>} that can appear anywhere in the topic multiple times. 
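To illustrate the replacement mechanics, here is a small self-contained sketch. It is not the SDK's actual implementation: `resolveTopic` and its map argument are stand-ins for the trigger's internal substitution logic and the Context Storage, showing the documented behavior (placeholders replaced by matching keys; a missing key is an error).

```go
package main

import (
	"fmt"
	"regexp"
)

// resolveTopic mimics, in simplified form, how a publish topic's
// {placeholder} tokens are substituted with values from the Context
// Storage. A placeholder with no matching key yields an error, as the
// documentation describes.
func resolveTopic(topic string, contextStorage map[string]string) (string, error) {
	re := regexp.MustCompile(`\{([^}]+)\}`)
	var missing string
	out := re.ReplaceAllStringFunc(topic, func(m string) string {
		key := m[1 : len(m)-1] // strip the surrounding braces
		v, ok := contextStorage[key]
		if !ok {
			missing = key
			return m
		}
		return v
	})
	if missing != "" {
		return "", fmt.Errorf("placeholder %q not found in Context Storage", missing)
	}
	return out, nil
}

func main() {
	storage := map[string]string{
		"profilename": "thermostat",
		"devicename":  "device-1",
		"custom":      "floor2",
	}
	topic, err := resolveTopic("data/{profilename}/{devicename}/{custom}", storage)
	fmt.Println(topic, err) // → data/thermostat/device-1/floor2 <nil>
}
```

In the real SDK, a pipeline function would add the `custom` value via the context's `AddValue` before the trigger publishes the response.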
An error will occur if a specified placeholder does not exist in the Context Storage .","title":"Publish Topic Placeholders"},{"location":"microservices/application/Triggers/#example","text":"PublishTopic = \"data/{profilename}/{devicename}/{custom}\"","title":"Example"},{"location":"microservices/application/Triggers/#received-topic","text":"Edgex 2.0 New for EdgeX 2.0 The topic the data was received on for EdgeX MessageBus and the External MQTT triggers is now stored in the new Context Storage with the key receivedtopic . This makes it available to pipeline functions via the Context Storage .","title":"Received Topic"},{"location":"microservices/application/V2Migration/","text":"V2 Migration Guide EdgeX 2.0 For the EdgeX 2.0 (Ireland) release there are many backward breaking changes. These changes require custom Application Services and custom profiles (app-service-configurable) to be migrated. This section outlines the necessary steps for this migration. Custom Application Services Configuration The migration of any Application Service's configuration starts with migrating configuration common to all EdgeX services. See the V2 Migration of Common Configuration section for details. The remainder of this section focuses on configuration specific to Application Services. SecretStoreExclusive The SecretStoreExclusive section has been removed in EdgeX 2.0. With EdgeX 2.0 all SecretStores are exclusive, so the existing SecretStore section is all that is required. Services requiring known secrets such as redisdb must inform the Security SecretStore Setup service (via environment variables) that the application service requires the secret added to its SecretStore. See the Configuring Add-on Services section for more details. Clients The client used for the version validation check has changed to being from Core Metadata, rather than Core Data. 
This is because Core Data is now optional when persistence isn't required since all Device Services publish directly to the EdgeX MessageBus. The configuration for Core Metadata is the only Clients entry required; all others (see below) are optional based on use case needs. Note The port numbers for all EdgeX services have changed which must be reflected in the Clients configuration. Please see the Default Service Ports section for a complete list of the new port assignments. Example - Core Metadata client configuration [Clients] [Clients.core-metadata] Protocol = \"http\" Host = \"localhost\" Port = 59881 Example - All available clients configured with new port numbers [Clients] # Used for version check on start-up # Also used for DeviceService, DeviceProfile and Device clients [Clients.core-metadata] Protocol = \"http\" Host = \"localhost\" Port = 59881 # Used for Event client which is used by PushToCoreData function [Clients.core-data] Protocol = \"http\" Host = \"localhost\" Port = 59880 # Used for Command client [Clients.core-command] Protocol = \"http\" Host = \"localhost\" Port = 59882 # Used for Notification and Subscription clients [Clients.support-notifications] Protocol = \"http\" Host = \"localhost\" Port = 59860 Trigger The Trigger section (previously named Binding ) has been restructured with EdgexMessageBus (previously named MessageBus ) and ExternalMqtt (previously named MqttBroker ) moved under it. The SubscribeTopics (previously named SubscribeTopic ) has been moved under the EdgexMessageBus.SubscribeHost and ExternalMqtt sections. The PublishTopic has been moved under the EdgexMessageBus.PublishHost and ExternalMqtt sections. EdgeX MessageBus If your Application Service is using the EdgeX MessageBus trigger, you can then simply copy the complete Trigger configuration from the example below and tweak it as needed. 
Example - EdgeX MessageBus trigger configuration [Trigger] Type = \"edgex-messagebus\" [Trigger.EdgexMessageBus] Type = \"redis\" [Trigger.EdgexMessageBus.SubscribeHost] Host = \"localhost\" Port = 6379 Protocol = \"redis\" SubscribeTopics = \"edgex/events/#\" [Trigger.EdgexMessageBus.PublishHost] Host = \"localhost\" Port = 6379 Protocol = \"redis\" PublishTopic = \"example\" [Trigger.EdgexMessageBus.Optional] AuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure). SecretName = \"redisdb\" From the above example you can see the improved structure and the following changes: Default EdgexMessageBus type has changed from ZeroMQ to Redis . Type value for Redis has changed from redistreams to redis . This is because the implementation no longer uses Redis Streams. It now uses Redis Pub/Sub. SubscribeTopics is now plural since it now accepts a comma separated list of topics. The default value uses a multi-level topic with a wild card. This is because Core Data and Device Services now publish to multi-level topics which have edgex/events as their base. This allows Application Services to filter by topic rather than receive the data and then filter it out via a pipeline filter function. See the Filter By Topics section for more details. The EdgeX MessageBus using Redis is a Secure MessageBus, thus the addition of the AuthMode and SecretName settings which allow the credentials to be pulled from the service's SecretStore. See the Secure MessageBus section for more details. External MQTT If your Application Service is using the External MQTT trigger do the following: Move your existing MqttBroker configuration under the Trigger section (renaming it to ExternalMqtt ) Move your SubscribeTopic (renaming it to SubscribeTopics ) under the ExternalMqtt section. Move your PublishTopic under the ExternalMqtt section. 
Example - External MQTT trigger configuration [Trigger] Type = \"external-mqtt\" [Trigger.ExternalMqtt] Url = \"tcp://broker.hivemq.com:1883\" SubscribeTopics = \"edgex-trigger\" PublishTopic = \"edgex-trigger-response\" ClientId = \"app-my-service\" ConnectTimeout = \"30s\" AutoReconnect = false KeepAlive = 60 QoS = 0 Retain = false SkipCertVerify = false SecretPath = \"\" AuthMode = \"none\" HTTP The HTTP trigger configuration has not changed beyond the renaming of Binding to Trigger . Example - HTTP trigger configuration [Trigger] Type = \"http\" Code Dependencies You first need to update the go.mod file to specify go 1.16 and the V2 versions of the App Functions SDK and any EdgeX go-mods directly used by your service. Note the extra /v2 for the modules. Example go.mod for V2 module < your service > go 1.16 require ( github . com / edgexfoundry / app - functions - sdk - go / v2 v2 .0.0 github . com / edgexfoundry / go - mod - core - contracts / v2 v2 .0.0 ) Once that is complete then the import statements for these dependencies must be updated to include the /v2 in the path. Example import statements for V2 import ( ... \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces\" \"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos\" ) New APIs The next changes you will encounter in your code are that the AppFunctionsSDK and Context structs have been abstracted into the new ApplicationService and AppFunctionContext APIs. See the Application Service API and App Function Context API sections for complete details on these new APIs. The following sections cover migrating your code for these new APIs. main() The following changes to your main() function will be necessary. Create and Initialize Your main() will change to use a factory function to create and initialize the Application Service instance, rather than creating an instance of AppFunctionsSDK and calling Initialize() Example - Create Application Service instance const serviceKey = \"app-myservice\" ... 
service , ok := pkg . NewAppService ( serviceKey ) if ! ok { os . Exit ( - 1 ) } Example - Create Application Service instance with Target Type specified const serviceKey = \"app-myservice\" ... service , ok := pkg . NewAppServiceWithTargetType ( serviceKey , & [] byte {}) if ! ok { os . Exit ( - 1 ) } Since the factory function logs all errors, all you need to do is exit if it returns false . Logging Client The Logging client is now accessible from the service.LoggingClient() API. New extended Logging Client API The Logging Client API now has formatted versions of all the logging APIs, which are Infof , Debugf , Tracef , Warnf and Errorf . If your code uses fmt.Sprintf to format your log messages then it can now be simplified by using these new APIs. Application Settings The access functions for retrieving the service's custom Application Settings ( ApplicationSettings , GetAppSettingStrings , and GetAppSetting ) have not changed. An improved capability to have structured custom configuration has been added. See the Structure Custom Configuration section for more details. Functions Pipeline Setting the Functions Pipeline has not changed, but the names of some built-in functions have changed and new ones have been added. See the Built-In Pipeline Functions section for more details. Example - Setting Functions Pipeline if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML , transforms . NewHTTPSender ( exportUrl , \"application/xml\" , false ). HTTPPost , ); err != nil { lc . Errorf ( \"SetFunctionsPipeline returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } MakeItRun The MakeItRun API has not changed. Example - Call to MakeItRun err = service . MakeItRun () if err != nil { lc . Errorf ( \"MakeItRun returned error: %s\" , err . Error ()) os . 
Exit ( - 1 ) } Custom Pipeline Functions Pipeline Function signature The major change to custom Pipeline Functions for EdgeX 2.0 is the new function signature which drives all the other changes. Example - New Pipeline Function signature type AppFunction = func ( ctx AppFunctionContext , data interface {}) ( bool , interface {}) This function signature passes in an instance of the new AppFunctionContext API for the context and now has only a single data instance for the function to operate on. Return Values The definitions for the Pipeline Function return values have not changed. Data The data passed in is either a single instance for the function to process or nil. Now you no longer need to check the length of the incoming data. Example if data == nil { return false , errors . New ( \"No Data Received\" ) } Logging Client The Logging client is now accessible from the ctx.LoggingClient() API. Clients The available clients have changed with a few additions and ValueDescriptorClient has been removed. See the Context Clients section for a complete list of available clients. ResponseData The SetResponseData and ResponseData APIs replace the previous Complete function and direct access to the OutputData field. ResponseContentType The SetResponseContentType and ResponseContentType APIs replace the previous direct access to the ResponseContentType field. RetryData The SetRetryData API replaces the SetRetryData function and direct access to the RetryData field. MarkAsPushed The MarkAsPushed capability has been removed. PushToCore The PushToCore API replaces the PushToCoreData function. The API signature has changed. See the PushToCore section for more details. New Capabilities Some new capabilities have been added to the new AppFunctionContext API. See the App Function Context API section for complete details. App Service Configurable Profiles Custom profiles used with App Service Configurable are configuration files. 
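As a sketch of the migrated shape, the following self-contained example shows a custom pipeline function using the new EdgeX 2.0 signature. The AppFunctionContext interface and demoContext here are minimal local stand-ins for the SDK's much larger interfaces.AppFunctionContext (so the example compiles on its own), and PrintToConsole is a hypothetical function name.

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for the SDK's interfaces.AppFunctionContext; the real
// interface exposes many more methods (LoggingClient, clients, etc.).
type AppFunctionContext interface {
	SetResponseData(data []byte)
}

type demoContext struct{ response []byte }

func (c *demoContext) SetResponseData(d []byte) { c.response = d }

// A custom pipeline function with the EdgeX 2.0 signature:
// func(ctx AppFunctionContext, data interface{}) (bool, interface{})
func PrintToConsole(ctx AppFunctionContext, data interface{}) (bool, interface{}) {
	if data == nil {
		// data is a single instance or nil; no length check needed
		return false, errors.New("No Data Received")
	}
	payload := fmt.Sprintf("%v", data)
	ctx.SetResponseData([]byte(payload))
	return true, data // continue the pipeline with the same data
}

func main() {
	ctx := &demoContext{}
	ok, _ := PrintToConsole(ctx, "hello")
	fmt.Println(ok, string(ctx.response)) // → true hello
}
```

Returning `(false, error)` stops the pipeline and surfaces the error, while `(true, data)` passes the (possibly transformed) data to the next function, matching the documented return-value semantics.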
These follow the same migration above for custom Application Service configuration , except for the Configurable Functions Pipeline items. The following are the changes for the Configurable Functions Pipeline: FilterByValueDescriptor changed to FilterByResourceName . See the FilterByResourceName section for details. TransformToXML and TransformToJSON have been collapsed into Transform with additional parameters. See the Transform section for more details. CompressWithGZIP and CompressWithZLIB have been collapsed into Compress with additional parameters. See the Compress section for more details. EncryptWithAES has been changed to Encrypt with additional parameters. See the Encrypt section for more details. BatchByCount , BatchByTime and BatchByTimeAndCount have been collapsed into Batch with additional parameters. See the Batch section for more details. SetOutputData has been renamed to SetResponseData . See the SetResponseData section for more details. PushToCore parameters have changed. See the PushToCore section for more details. HTTPPost , HTTPPostJSON , HTTPPostXML , HTTPPut , HTTPPutJSON and HTTPPutXML have been collapsed into HTTPExport with additional parameters. See the HTTPExport section for more details. MQTTSecretSend has been renamed to MQTTExport with additional parameters. See the MQTTExport section for more details. MarkAsPushed has been removed. The mark as push capability has been removed from Core Data, which this depended on. MQTTSend has been removed. This has been replaced by MQTTExport . See the MQTTExport section for more details. FilterByProfileName and FilterBySourceName have been added. See the FilterByProfileName and FilterBySourceName sections for more details. Ability to define multiple instances of the same Configurable Pipeline Function has been added. 
See the Multiple Instances of Function section for more details.","title":"V2 Migration Guide"},{"location":"microservices/application/V2Migration/#v2-migration-guide","text":"EdgeX 2.0 For the EdgeX 2.0 (Ireland) release there are many backward breaking changes. These changes require custom Application Services and custom profiles (app-service-configurable) to be migrated. This section outlines the necessary steps for this migration.","title":"V2 Migration Guide"},{"location":"microservices/application/V2Migration/#custom-application-services","text":"","title":"Custom Application Services"},{"location":"microservices/application/V2Migration/#configuration","text":"The migration of any Application Service's configuration starts with migrating configuration common to all EdgeX services. See the V2 Migration of Common Configuration section for details. The remainder of this section focuses on configuration specific to Application Services.","title":"Configuration"},{"location":"microservices/application/V2Migration/#secretstoreexclusive","text":"The SecretStoreExclusive section has been removed in EdgeX 2.0. With EdgeX 2.0 all SecretStores are exclusive, so the existing SecretStore section is all that is required. Services requiring known secrets such as redisdb must inform the Security SecretStore Setup service (via environment variables) that the application service requires the secret added to its SecretStore. See the Configuring Add-on Services section for more details.","title":"SecretStoreExclusive"},{"location":"microservices/application/V2Migration/#clients","text":"The client used for the version validation check has changed to being from Core Metadata, rather than Core Data. This is because Core Data is now optional when persistence isn't required since all Device Services publish directly to the EdgeX MessageBus. The configuration for Core Metadata is the only Clients entry required, all other (see below) are optional based on use case needs. 
Note The port numbers for all EdgeX services have changed which must be reflected in the Clients configuration. Please see the Default Service Ports section for complete list of the new port assignments. Example - Core Metadata client configuration [Clients] [Clients.core-metadata] Protocol = \"http\" Host = \"localhost\" Port = 59881 Example - All available clients configured with new port numbers [Clients] # Used for version check on start-up # Also used for DeviceService, DeviceProfile and Device clients [Clients.core-metadata] Protocol = \"http\" Host = \"localhost\" Port = 59881 # Used for Event client which is used by PushToCoreData function [Clients.core-data] Protocol = \"http\" Host = \"localhost\" Port = 59880 # Used for Command client [Clients.core-command] Protocol = \"http\" Host = \"localhost\" Port = 59882 # Used for Notification and Subscription clients [Clients.support-notifications] Protocol = \"http\" Host = \"localhost\" Port = 59860","title":"Clients"},{"location":"microservices/application/V2Migration/#trigger","text":"The Trigger section (previously named Binding ) has been restructured with EdgexMessageBus (previously named MessageBus ) and ExternalMqtt (previously named MqttBroker ) moved under it. The SubscribeTopics (previously named SubscribeTopic ) has been moved under the EdgexMessageBus.SubscribeHost and ExternalMqtt sections. The PublishTopic has been moved under the EdgexMessageBus.PublishHost and ExternalMqtt sections.","title":"Trigger"},{"location":"microservices/application/V2Migration/#edgex-messagebus","text":"If your Application Service is using the EdgeX MessageBus trigger, you can then simply copy the complete Trigger configuration from the example below and tweak it as needed. 
Example - EdgeX MessageBus trigger configuration [Trigger] Type = \"edgex-messagebus\" [Trigger.EdgexMessageBus] Type = \"redis\" [Trigger.EdgexMessageBus.SubscribeHost] Host = \"localhost\" Port = 6379 Protocol = \"redis\" SubscribeTopics = \"edgex/events/#\" [Trigger.EdgexMessageBus.PublishHost] Host = \"localhost\" Port = 6379 Protocol = \"redis\" PublishTopic = \"example\" [Trigger.EdgexMessageBus.Optional] AuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure). SecretName = \"redisdb\" From the above example you can see the improved structure and the following changes: Default EdgexMessageBus type has changed from ZeroMQ to Redis . Type value for Redis has changed from redistreams to redis . This is because the implementation no longer uses Redis Streams. It now uses Redis Pub/Sub. SubscribeTopics is now plural since it now accepts a comma separated list of topics. The default value uses a multi-level topic with a wild card. This is because Core Data and Device Services now publish to multi-level topics which have edgex/events as their base. This allows Application Services to filter by topic rather than receive the data and then filter it out via a pipeline filter function. See the Filter By Topics section for more details. The EdgeX MessageBus using Redis is a Secure MessageBus, thus the addition of the AuthMode and SecretName settings which allow the credentials to be pulled from the service's SecretStore. See the Secure MessageBus section for more details.","title":"EdgeX MessageBus"},{"location":"microservices/application/V2Migration/#external-mqtt","text":"If your Application Service is using the External MQTT trigger do the following: Move your existing MqttBroker configuration under the Trigger section (renaming it to ExternalMqtt ) Move your SubscribeTopic (renaming it to SubscribeTopics ) under the ExternalMqtt section. Move your PublishTopic under the ExternalMqtt section. 
Example - External MQTT trigger configuration [Trigger] Type = \"external-mqtt\" [Trigger.ExternalMqtt] Url = \"tcp://broker.hivemq.com:1883\" SubscribeTopics = \"edgex-trigger\" PublishTopic = \"edgex-trigger-response\" ClientId = \"app-my-service\" ConnectTimeout = \"30s\" AutoReconnect = false KeepAlive = 60 QoS = 0 Retain = false SkipCertVerify = false SecretPath = \"\" AuthMode = \"none\"","title":"External MQTT"},{"location":"microservices/application/V2Migration/#http","text":"The HTTP trigger configuration has not changed beyond the renaming of Binding to Trigger . Example - HTTP trigger configuration [Trigger] Type = \"http\"","title":"HTTP"},{"location":"microservices/application/V2Migration/#code","text":"","title":"Code"},{"location":"microservices/application/V2Migration/#dependencies","text":"You first need to update the go.mod file to specify go 1.16 and the V2 versions of the App Functions SDK and any EdgeX go-mods directly used by your service. Note the extra /v2 for the modules. Example go.mod for V2 module < your service > go 1.16 require ( github . com / edgexfoundry / app - functions - sdk - go / v2 v2 .0.0 github . com / edgexfoundry / go - mod - core - contracts / v2 v2 .0.0 ) Once that is complete then the import statements for these dependencies must be updated to include the /v2 in the path. Example import statements for V2 import ( ... \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces\" \"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos\" )","title":"Dependencies"},{"location":"microservices/application/V2Migration/#new-apis","text":"Next changes you will encounter in your code are that the AppFunctionsSDK and Context structs have been abstracted into the new ApplicationService and AppFunctionContext APIs. See the Application Service API and App Function Context API sections for complete details on these new APIs. 
The following sections cover migrating your code for these new APIs.","title":"New APIs"},{"location":"microservices/application/V2Migration/#main","text":"The following changes to your main() function will be necessary.","title":"main()"},{"location":"microservices/application/V2Migration/#create-and-initialize","text":"Your main() will change to use a factory function to create and initialize the Application Service instance, rather than creating an instance of AppFunctionsSDK and calling Initialize() Example - Create Application Service instance const serviceKey = \"app-myservice\" ... service , ok := pkg . NewAppService ( serviceKey ) if ! ok { os . Exit ( - 1 ) } Example - Create Application Service instance with Target Type specified const serviceKey = \"app-myservice\" ... service , ok := pkg . NewAppServiceWithTargetType ( serviceKey , & [] byte {}) if ! ok { os . Exit ( - 1 ) } Since the factory function logs all errors, all you need to do is exit if it returns false .","title":"Create and Initialize"},{"location":"microservices/application/V2Migration/#logging-client","text":"The Logging client is now accessible from the service.LoggingClient() API. New extended Logging Client API The Logging Client API now has formatted versions of all the logging APIs, which are Infof , Debugf , Tracef , Warnf and Errorf . If your code uses fmt.Sprintf to format your log messages then it can now be simplified by using these new APIs.","title":"Logging Client"},{"location":"microservices/application/V2Migration/#application-settings","text":"The access functions for retrieving the service's custom Application Settings ( ApplicationSettings , GetAppSettingStrings , and GetAppSetting ) have not changed. An improved capability to have structured custom configuration has been added. 
See the Structure Custom Configuration section for more details.","title":"Application Settings"},{"location":"microservices/application/V2Migration/#functions-pipeline","text":"Setting the Functions Pipeline has not changed, but the names of some built-in functions have changed and new ones have been added. See the Built-In Pipeline Functions section for more details. Example - Setting Functions Pipeline if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML , transforms . NewHTTPSender ( exportUrl , \"application/xml\" , false ). HTTPPost , ); err != nil { lc . Errorf ( \"SetFunctionsPipeline returned error: %s\" , err . Error ()) os . Exit ( - 1 ) }","title":"Functions Pipeline"},{"location":"microservices/application/V2Migration/#makeitrun","text":"The MakeItRun API has not changed. Example - Call to MakeItRun err = service . MakeItRun () if err != nil { lc . Errorf ( \"MakeItRun returned error: %s\" , err . Error ()) os . Exit ( - 1 ) }","title":"MakeItRun"},{"location":"microservices/application/V2Migration/#custom-pipeline-functions","text":"","title":"Custom Pipeline Functions"},{"location":"microservices/application/V2Migration/#pipeline-function-signature","text":"The major change to custom Pipeline Functions for EdgeX 2.0 is the new function signature which drives all the other changes. 
Example - New Pipeline Function signature type AppFunction = func ( ctx AppFunctionContext , data interface {}) ( bool , interface {}) This function signature passes in an instance of the new AppFunctionContext API for the context and now has only a single data instance for the function to operate on.","title":"Pipeline Function signature"},{"location":"microservices/application/V2Migration/#return-values","text":"The definitions for the Pipeline Function return values have not changed.","title":"Return Values"},{"location":"microservices/application/V2Migration/#data","text":"The data passed in is either a single instance for the function to process or nil. Now you no longer need to check the length of the incoming data. Example if data == nil { return false , errors . New ( \"No Data Received\" ) }","title":"Data"},{"location":"microservices/application/V2Migration/#logging-client_1","text":"The Logging client is now accessible from the ctx.LoggingClient() API.","title":"Logging Client"},{"location":"microservices/application/V2Migration/#clients_1","text":"The available clients have changed with a few additions and ValueDescriptorClient has been removed. 
See the Context Clients section for complete list of available clients.","title":"Clients"},{"location":"microservices/application/V2Migration/#responsedata","text":"The SetResponseData and ResponseData APIs replace the previous Complete function and direct access to the OutputData field.","title":"ResponseData"},{"location":"microservices/application/V2Migration/#responsecontenttype","text":"The SetResponseContentType and ResponseContentType APIs replace the previous direct access to the ResponseContentType field.","title":"ResponseContentType"},{"location":"microservices/application/V2Migration/#retrydata","text":"The SetRetryData API replaces the SetRetryData function and direct access to the RetryData field.","title":"RetryData"},{"location":"microservices/application/V2Migration/#markaspushed","text":"The MarkAsPushed capability has been removed","title":"MarkAsPushed"},{"location":"microservices/application/V2Migration/#pushtocore","text":"The PushToCore API replaces the PushToCoreData function. The API signature has changed. See the PushToCore section for more details.","title":"PushToCore"},{"location":"microservices/application/V2Migration/#new-capabilities","text":"Some new capabilities have been added to the new AppFunctionContext API. See the App Function Context API section for complete details.","title":"New Capabilities"},{"location":"microservices/application/V2Migration/#app-service-configurable-profiles","text":"Custom profiles used with App Service Configurable are configuration files. These follow the same migration above for custom Application Service configuration , except for the Configurable Functions Pipeline items. The following are the changes for the Configurable Functions Pipeline: FilterByValueDescriptor changed to FilterByResourceName . See the FilterByResourceName section for details. TransformToXML and TransformToJSON have been collapsed into Transform with additional parameters. See the Transform section for more details. 
CompressWithGZIP and CompressWithZLIB have been collapsed into Compress with additional parameters. See the Compress section for more details. EncryptWithAES has been changed to Encrypt with additional parameters. See the Encrypt section for more details. BatchByCount , BatchByTime and BatchByTimeAndCount have been collapsed into Batch with additional parameters. See the Batch section for more details. SetOutputData has been renamed to SetResponseData . See the SetResponseData section for more details. PushToCore parameters have changed. See the PushToCore section for more details. HTTPPost , HTTPPostJSON , HTTPPostXML , HTTPPut , HTTPPutJSON and HTTPPutXML have been collapsed into HTTPExport with additional parameters. See the HTTPExport section for more details. MQTTSecretSend has been renamed to MQTTExport with additional parameters. See the MQTTExport section for more details. MarkAsPushed has been removed. The mark as push capability has been removed from Core Data, which this depended on. MQTTSend has been removed. This has been replaced by MQTTExport . See the MQTTExport section for more details. FilterByProfileName and FilterBySourceName have been added. See the FilterByProfileName and FilterBySourceName sections for more details. Ability to define multiple instances of the same Configurable Pipeline Function has been added. See the Multiple Instances of Function section for more details.","title":"App Service Configurable Profiles"},{"location":"microservices/configuration/CommonCommandLineOptions/","text":"Common Command Line Options This section describes the command line options that are common to all EdgeX services. Some services have additional command line options which are documented in the specific sections for those services. ConfDir -c/--confdir Specify local configuration directory. Default is ./res Can be overridden with EDGEX_CONF_DIR environment variable. File -f/--file Indicates the name of the local configuration file. 
Default is configuration.toml Can be overridden with EDGEX_CONFIG_FILE environment variable. Config Provider -cp/ --configProvider Indicates to use Configuration Provider service at specified URL. URL Format: {type}.{protocol}://{host}:{port} ex: consul.http://localhost:8500 Can be overridden with EDGEX_CONFIGURATION_PROVIDER environment variable. Profile -p/--profile Indicates configuration profile other than default. Default is no profile name resulting in using ./res/configuration.toml if -f and -c are not used. Can be overridden with EDGEX_PROFILE environment variable. Registry -r/ --registry Indicates service should use the Registry. Connection information is pulled from the [Registry] configuration section. Can be overridden with EDGEX_USE_REGISTRY environment variable. Overwrite -o/--overwrite Overwrite configuration in provider with local configuration. Use with caution This will clobber existing settings in provider, problematic if those settings were edited by hand intentionally. Typically only used during development. Help -h/--help Show the help message","title":"Common Command Line Options"},{"location":"microservices/configuration/CommonCommandLineOptions/#common-command-line-options","text":"This section describes the command line options that are common to all EdgeX services. Some services have additional command line options which are documented in the specific sections for those services.","title":"Common Command Line Options"},{"location":"microservices/configuration/CommonCommandLineOptions/#confdir","text":"-c/--confdir Specify local configuration directory. Default is ./res Can be overridden with EDGEX_CONF_DIR environment variable.","title":"ConfDir"},{"location":"microservices/configuration/CommonCommandLineOptions/#file","text":"-f/--file Indicates the name of the local configuration file. 
Default is configuration.toml Can be overridden with EDGEX_CONFIG_FILE environment variable.","title":"File"},{"location":"microservices/configuration/CommonCommandLineOptions/#config-provider","text":"-cp/ --configProvider Indicates to use Configuration Provider service at specified URL. URL Format: {type}.{protocol}://{host}:{port} ex: consul.http://localhost:8500 Can be overridden with EDGEX_CONFIGURATION_PROVIDER environment variable.","title":"Config Provider"},{"location":"microservices/configuration/CommonCommandLineOptions/#profile","text":"-p/--profile Indicates configuration profile other than default. Default is no profile name resulting in using ./res/configuration.toml if -f and -c are not used. Can be overridden with EDGEX_PROFILE environment variable.","title":"Profile"},{"location":"microservices/configuration/CommonCommandLineOptions/#registry","text":"-r/ --registry Indicates service should use the Registry. Connection information is pulled from the [Registry] configuration section. Can be overridden with EDGEX_USE_REGISTRY environment variable.","title":"Registry"},{"location":"microservices/configuration/CommonCommandLineOptions/#overwrite","text":"-o/--overwrite Overwrite configuration in provider with local configuration. Use with caution This will clobber existing settings in the provider, which is problematic if those settings were edited by hand intentionally. Typically only used during development.","title":"Overwrite"},{"location":"microservices/configuration/CommonCommandLineOptions/#help","text":"-h/--help Show the help message","title":"Help"},{"location":"microservices/configuration/CommonConfiguration/","text":"Common Configuration The tables in each of the tabs below document configuration properties that are common to all services in the EdgeX Foundry platform. Service-specific properties can be found on the respective documentation page for each service. 
Configuration Properties Edgex 2.0 For EdgeX 2.0 the Logging and Startup sections have been removed. Startup has been replaced with the EDGEX_STARTUP_DURATION (default is 60 secs) and EDGEX_STARTUP_INTERVAL (default is 1 sec) environment variables. Writable Property Default Value Description entries in the Writable section of the configuration can be changed on the fly while the service is running if the service is running with the -cp/--configProvider flag LogLevel INFO log entry severity level . Log entries not of the default level or higher are ignored. InsecureSecrets --- This section is a map of secrets which simulates the SecretStore for accessing secrets when running in non-secure mode. All services have a default entry for Redis DB credentials called redisdb Edgex 2.0 For EdgeX 2.0 the Writable.InsecureSecrets configuration section is new. Service Property Default Value Description HealthCheckInterval 10s The interval in seconds at which the service registry (Consul) will conduct a health check of this service. Host localhost Micro service host name Port --- Micro service port number (specific for each service) ServerBindAddr '' (empty string) The interface on which the service's REST server should listen. By default the server is to listen on the interface to which the Host option resolves (leaving it blank). A value of 0.0.0.0 means listen on all available interfaces. App & Device services do not implement this setting StartupMsg --- Message logged when service completes bootstrap start-up MaxResultCount 1024* Read data limit per invocation. *Default value is for core/support services. Application and Device services do not implement this setting. MaxRequestSize 0 Defines the maximum size of http request body in bytes. 0 represents default to system max. Not all services actually implement this setting. Those that do not implement it have a comment stating this fact. 
RequestTimeout 5s Specifies a timeout duration for handling requests Edgex 2.0 For EdgeX 2.0 Protocol and BootTimeout have been removed. CheckInterval and Timeout have been renamed to HealthCheckInterval and RequestTimeout respectively. MaxRequestSize was added for all services. Databases.Primary Property Default Value Description configuration settings that govern database connectivity and the type of database to use. While not all services require DB connectivity, most do and so this has been included in the common configuration docs. Host localhost DB host name Port 6379 DB port number Name ---- Database or document store name (Specific to the service) Timeout 5000 DB connection timeout Type redisdb DB type. Redis is the only supported DB Edgex 2.0 For EdgeX 2.0 mongodb has been removed as a supported DB. The credentials username and password have been removed and are now in the Writable.InsecureSecrets.DB section. Registry Property Default Value Description this configuration only takes effect when connecting to the registry for configuration info Host localhost Registry host name Port 8500 Registry port number Type consul Registry implementation type Clients.[service-key] Property Default Value Description Each service has its own collection of Clients that it uses Protocol http The protocol to use when building a URI to locate the service endpoint Host localhost The host name or IP address where the service is hosted Port 598xx The port exposed by the target service Edgex 2.0 For EdgeX 2.0 the map keys have changed to be the service's service-key, i.e. Metadata changed to core-metadata SecretStore Property Default Value Description these config values are used when security is enabled and SecretStore service access is required for obtaining secrets, such as database credentials Type vault The type of the SecretStore service to use. Currently only vault is supported. 
Host localhost The host name or IP address associated with the SecretStore service Port 8200 The configured port on which the SecretStore service is listening Path / The service-specific path where the secrets are kept. This path will differ according to the given service. Protocol http The protocol to be used when communicating with the SecretStore service RootCaCertPath blank Default is to not use HTTPS ServerName blank Not needed for HTTP TokenFile /tmp/edgex/secrets/ /secrets-token.json Fully-qualified path to the location of the service's SecretStore access token. This path will differ according to the given service. SecretsFile blank Fully-qualified path to the location of the service's JSON secrets file that contains secrets to seed at start-up. See the Seeding Service Secrets section for more details on seeding a service's secrets. DisableScrubSecretsFile false Controls if the secrets file is scrubbed (secret data removed) and rewritten after importing the secrets. Authentication AuthType X-Vault-Token A header used to indicate how the given service will authenticate with the SecretStore service Edgex 2.0 For EdgeX 2.0 the Protocol default has changed to HTTP which no longer requires RootCaCertPath and ServerName to be set. Path has been reduced to the sub-path for the service since the base path is fixed. TokenFile default value has changed and requires the service-key be used in the path. Service.CORSConfiguration Property Default Value Description The settings controlling CORS http headers EnableCORS false Enable or disable CORS support. CORSAllowCredentials false The value of Access-Control-Allow-Credentials http header. It appears only if the value is true . CORSAllowedOrigin \"https://localhost\" The value of Access-Control-Allow-Origin http header. CORSAllowedMethods \"GET, POST, PUT, PATCH, DELETE\" The value of Access-Control-Allow-Methods http header. 
CORSAllowedHeaders \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\" The value of Access-Control-Allow-Headers http header. CORSExposeHeaders \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\" The value of Access-Control-Expose-Headers http header. CORSMaxAge 3600 The value of Access-Control-Max-Age http header. Edgex 2.1 New for EdgeX 2.1 is the ability to enable CORS access to EdgeX microservices through configuration. To understand more details about these HTTP headers, please refer to MDN Web Docs , and refer to CORS enabling to learn more. Writable vs Readable Settings Within a given service's configuration, there are keys whose values can be edited and change the behavior of the service while it is running versus those that are effectively read-only. These writable settings are grouped under a given service key. For example, the top-level groupings for edgex-core-data are: /edgex/core/2.0/edgex-core-data/Writable /edgex/core/2.0/edgex-core-data/Service /edgex/core/2.0/edgex-core-data/Clients /edgex/core/2.0/edgex-core-data/Databases /edgex/core/2.0/edgex-core-data/MessageQueue /edgex/core/2.0/edgex-core-data/Registry /edgex/core/2.0/edgex-core-data/SecretStore Any configuration settings found in a service's Writable section may be changed and affect a service's behavior without a restart. Any modifications to the other settings (read-only configuration) would require a restart.","title":"Common Configuration"},{"location":"microservices/configuration/CommonConfiguration/#common-configuration","text":"The tables in each of the tabs below document configuration properties that are common to all services in the EdgeX Foundry platform. 
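As an illustration of the writable settings described above, a Writable key such as LogLevel can be changed at runtime through Consul's KV HTTP API. The sketch below only composes the key path; the host, port, and service key are assumptions for a typical non-secure local deployment, not a definitive recipe:

```shell
# Sketch: build the Consul KV path for a writable setting of core-data,
# assuming a non-secure Consul at localhost:8500 (adjust for your deployment).
CONSUL_HOST="localhost:8500"
KEY="edgex/core/2.0/edgex-core-data/Writable/LogLevel"
URL="http://${CONSUL_HOST}/v1/kv/${KEY}"
echo "$URL"
# With Consul running, the value could then be updated on the fly,
# and the service would pick it up without a restart:
#   curl -X PUT -d 'DEBUG' "$URL"
```

Changing a key outside the Writable section this way would still require a service restart to take effect.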
Service-specific properties can be found on the respective documentation page for each service.","title":"Common Configuration"},{"location":"microservices/configuration/CommonConfiguration/#configuration-properties","text":"Edgex 2.0 For EdgeX 2.0 the Logging and Startup sections have been removed. Startup has been replaced with the EDGEX_STARTUP_DURATION (default is 60 secs) and EDGEX_STARTUP_INTERVAL (default is 1 sec) environment variables. Writable Property Default Value Description entries in the Writable section of the configuration can be changed on the fly while the service is running if the service is running with the -cp/--configProvider flag LogLevel INFO log entry severity level . Log entries not of the default level or higher are ignored. InsecureSecrets --- This section is a map of secrets which simulates the SecretStore for accessing secrets when running in non-secure mode. All services have a default entry for Redis DB credentials called redisdb Edgex 2.0 For EdgeX 2.0 the Writable.InsecureSecrets configuration section is new. Service Property Default Value Description HealthCheckInterval 10s The interval in seconds at which the service registry (Consul) will conduct a health check of this service. Host localhost Micro service host name Port --- Micro service port number (specific for each service) ServerBindAddr '' (empty string) The interface on which the service's REST server should listen. By default the server is to listen on the interface to which the Host option resolves (leaving it blank). A value of 0.0.0.0 means listen on all available interfaces. App & Device services do not implement this setting StartupMsg --- Message logged when service completes bootstrap start-up MaxResultCount 1024* Read data limit per invocation. *Default value is for core/support services. Application and Device services do not implement this setting. MaxRequestSize 0 Defines the maximum size of http request body in bytes. 
0 represents default to system max. Not all services actually implement this setting. Those that do not implement it have a comment stating this fact. RequestTimeout 5s Specifies a timeout duration for handling requests Edgex 2.0 For EdgeX 2.0 Protocol and BootTimeout have been removed. CheckInterval and Timeout have been renamed to HealthCheckInterval and RequestTimeout respectively. MaxRequestSize was added for all services. Databases.Primary Property Default Value Description configuration settings that govern database connectivity and the type of database to use. While not all services require DB connectivity, most do and so this has been included in the common configuration docs. Host localhost DB host name Port 6379 DB port number Name ---- Database or document store name (Specific to the service) Timeout 5000 DB connection timeout Type redisdb DB type. Redis is the only supported DB Edgex 2.0 For EdgeX 2.0 mongodb has been removed as a supported DB. The credentials username and password have been removed and are now in the Writable.InsecureSecrets.DB section. Registry Property Default Value Description this configuration only takes effect when connecting to the registry for configuration info Host localhost Registry host name Port 8500 Registry port number Type consul Registry implementation type Clients.[service-key] Property Default Value Description Each service has its own collection of Clients that it uses Protocol http The protocol to use when building a URI to locate the service endpoint Host localhost The host name or IP address where the service is hosted Port 598xx The port exposed by the target service Edgex 2.0 For EdgeX 2.0 the map keys have changed to be the service's service-key, i.e. Metadata changed to core-metadata SecretStore Property Default Value Description these config values are used when security is enabled and SecretStore service access is required for obtaining secrets, such as database credentials Type vault The type of the SecretStore service to use. 
Currently only vault is supported. Host localhost The host name or IP address associated with the SecretStore service Port 8200 The configured port on which the SecretStore service is listening Path / The service-specific path where the secrets are kept. This path will differ according to the given service. Protocol http The protocol to be used when communicating with the SecretStore service RootCaCertPath blank Default is to not use HTTPS ServerName blank Not needed for HTTP TokenFile /tmp/edgex/secrets/ /secrets-token.json Fully-qualified path to the location of the service's SecretStore access token. This path will differ according to the given service. SecretsFile blank Fully-qualified path to the location of the service's JSON secrets file that contains secrets to seed at start-up. See the Seeding Service Secrets section for more details on seeding a service's secrets. DisableScrubSecretsFile false Controls if the secrets file is scrubbed (secret data removed) and rewritten after importing the secrets. Authentication AuthType X-Vault-Token A header used to indicate how the given service will authenticate with the SecretStore service Edgex 2.0 For EdgeX 2.0 the Protocol default has changed to HTTP which no longer requires RootCaCertPath and ServerName to be set. Path has been reduced to the sub-path for the service since the base path is fixed. TokenFile default value has changed and requires the service-key be used in the path. Service.CORSConfiguration Property Default Value Description The settings controlling CORS http headers EnableCORS false Enable or disable CORS support. CORSAllowCredentials false The value of Access-Control-Allow-Credentials http header. It appears only if the value is true . CORSAllowedOrigin \"https://localhost\" The value of Access-Control-Allow-Origin http header. CORSAllowedMethods \"GET, POST, PUT, PATCH, DELETE\" The value of Access-Control-Allow-Methods http header. 
CORSAllowedHeaders \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\" The value of Access-Control-Allow-Headers http header. CORSExposeHeaders \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\" The value of Access-Control-Expose-Headers http header. CORSMaxAge 3600 The value of Access-Control-Max-Age http header. Edgex 2.1 New for EdgeX 2.1 is the ability to enable CORS access to EdgeX microservices through configuration. To understand more details about these HTTP headers, please refer to MDN Web Docs , and refer to CORS enabling to learn more.","title":"Configuration Properties"},{"location":"microservices/configuration/CommonConfiguration/#writable-vs-readable-settings","text":"Within a given service's configuration, there are keys whose values can be edited and change the behavior of the service while it is running versus those that are effectively read-only. These writable settings are grouped under a given service key. For example, the top-level groupings for edgex-core-data are: /edgex/core/2.0/edgex-core-data/Writable /edgex/core/2.0/edgex-core-data/Service /edgex/core/2.0/edgex-core-data/Clients /edgex/core/2.0/edgex-core-data/Databases /edgex/core/2.0/edgex-core-data/MessageQueue /edgex/core/2.0/edgex-core-data/Registry /edgex/core/2.0/edgex-core-data/SecretStore Any configuration settings found in a service's Writable section may be changed and affect a service's behavior without a restart. Any modifications to the other settings (read-only configuration) would require a restart.","title":"Writable vs Readable Settings"},{"location":"microservices/configuration/CommonEnvironmentVariables/","text":"Common Environment Variables There are two types of environment variables used by all EdgeX services. They are standard and overrides . 
The only difference is that the overrides apply to command-line options and service configuration settings whereas standard do not have any corresponding command-line option or configuration setting. Standard Environment Variables This section describes the standard environment variables common to all EdgeX services. Some services may have additional standard environment variables which are documented in those service specific sections. EDGEX_SECURITY_SECRET_STORE This environment variable indicates whether the service is expected to initialize the secure SecretStore which allows the service to access secrets from Vault. Defaults to true if not set or not set to false . When set to true the EdgeX security services must be running. If running EdgeX in non-secure mode you then want this explicitly set to false . Example - Using docker-compose to disable secure SecretStore environment : EDGEX_SECURITY_SECRET_STORE : \"false\" EdgeX 2.0 For EdgeX 2.0 when running in secure mode Consul is secured, which requires all services to have this environment variable be true so they can request their Consul access token from Vault. See the Secure Consul section for more details. EDGEX_STARTUP_DURATION This environment variable sets the total duration in seconds allowed for the services to complete the bootstrap start-up. Default is 60 seconds. Example - Using docker-compose to set start-up duration to 120 seconds environment : EDGEX_STARTUP_DURATION : \"120\" EdgeX 2.0 For EdgeX 2.0 the deprecated lower case version startup_duration has been removed EDGEX_STARTUP_INTERVAL This environment variable sets the retry interval in seconds for the services retrying a failed action during the bootstrap start-up. Default is 1 second. 
Example - Using docker-compose to set start-up interval to 3 seconds environment : EDGEX_STARTUP_INTERVAL : \"3\" EdgeX 2.0 For EdgeX 2.0 the deprecated lower case version startup_interval has been removed Environment Overrides There are two types of environment overrides which are command-line and configuration . Important Environment variable overrides have precedence over all command-line, local configuration and remote configuration. i.e. configuration setting changed in Consul will be overridden after the service loads the configuration from Consul if that setting has an environment override. Command-line Overrides EDGEX_CONF_DIR This environment variable overrides the -c/--confdir command-line option . Note All EdgeX service Docker images have this option set to /res . Example - Using docker-compose to override the configuration folder name environment : EDGEX_CONF_DIR : \"/my-config\" EDGEX_CONFIG_FILE This environment variable overrides the -f/--file command-line option . Example - Using docker-compose to override the configuration file name used environment : EDGEX_CONFIG_FILE : \"my-config.toml\" EDGEX_CONFIGURATION_PROVIDER This environment variable overrides the -cp/--configProvider command-line option . Note All EdgeX service Docker images have this option set to -cp=consul.http://edgex-core-consul:8500 . Example - Using docker-compose to override with different port number environment : EDGEX_CONFIGURATION_PROVIDER : \"consul.http://edgex-consul:9500\" EDGEX_PROFILE This environment variable overrides the -p/--profile command-line option . When non-empty, the value is used in the path to the configuration file. i.e. /res/my-profile/configuration.toml. This is useful when running multiple instances of a service such as App Service Configurable. Example - Using docker-compose to override the profile to use app-service-rules : image : edgexfoundry/docker-app-service-configurable:2.0.0 environment : EDGEX_PROFILE : \"rules-engine\" ... 
This sets the profile so that the App Service Configurable uses the rules-engine configuration profile which resides at /res/rules-engine/configuration.toml EdgeX 2.0 For EdgeX 2.0 the deprecated lower case version edgex_profile has been removed EDGEX_USE_REGISTRY This environment variable overrides the -r/--registry command-line option . Note All EdgeX service Docker images have this option set to --registry . Example - Using docker-compose to override use of the Registry environment : EDGEX_USE_REGISTRY : \"false\" EdgeX 2.0 For EdgeX 2.0 the deprecated lower case version edgex_registry has been removed Configuration Overrides Any configuration setting from a service's configuration.toml file can be overridden by environment variables. The environment variable names have the following format: < TOML-SECTION-NAME > _ < TOML-KEY-NAME > < TOML-SECTION-NAME > _ < TOML-SUB-SECTION-NAME > _ < TOML-KEY-NAME > EdgeX 2.0 With EdgeX 2.0 the use of CamelCase environment variable names is no longer supported. Instead the variable names must be all uppercase as in the example below. Also the use of a dash - in the TOML-NAME is converted to an underscore _ in the environment variable name. Example - Environment Overrides of Configuration ``` toml TOML : [ Writable ] LogLevel = \"INFO\" ENVVAR : WRITABLE_LOGLEVEL = DEBUG TOML : [ Clients ] [Clients.core-data] Host = \"localhost\" ENVVAR : CLIENTS_CORE_DATA_HOST = edgex-core-data ``` Notable Configuration Overrides This section describes environment variable overrides that have special utility, such as enabling a debug capability or facilitating code development. KONG_SSL_CIPHER_SUITE (edgex-kong service) This variable controls the TLS cipher suite and protocols supported by the EdgeX API Gateway as implemented by Kong. This variable, if unspecified, selects the \"intermediate\" cipher suite which supports TLSv1.2, TLSv1.3, and relatively modern TLS ciphers. 
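The override naming rules above (join the TOML section and key names with underscores, uppercase everything, and convert any dash to an underscore) can be sketched as a small helper. This is a hypothetical utility for illustration only, not part of EdgeX:

```shell
# Sketch: derive the environment override name for a TOML section/key,
# per the documented rules: join parts with '_', convert '-' to '_',
# and uppercase the result.
env_override_name() {
  local joined
  joined=$(printf '%s_' "$@")   # join all parts with underscores
  joined=${joined%_}            # drop the trailing underscore
  joined=${joined//-/_}         # dash -> underscore (e.g. core-data)
  printf '%s\n' "$joined" | tr '[:lower:]' '[:upper:]'
}

env_override_name Writable LogLevel        # WRITABLE_LOGLEVEL
env_override_name Clients core-data Host   # CLIENTS_CORE_DATA_HOST
```

This reproduces the two documented examples: [Writable] LogLevel becomes WRITABLE_LOGLEVEL and [Clients.core-data] Host becomes CLIENTS_CORE_DATA_HOST.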
The EdgeX framework by default overrides this value to \"modern\" , which currently enables only TLSv1.3 and a fixed cipher suite. The \"modern\" cipher suite is known to be incompatible with older web browsers, but since the target use of the API gateway is to support API clients, not browsers, this behavior was deemed acceptable by the EdgeX Security Working Group on September 8, 2021. TOKENFILEPROVIDER_DEFAULTTOKENTTL (security-secretstore-setup service) This variable controls the TTL of the default secretstore tokens that are created for EdgeX microservices. This variable defaults to 1h (one hour) if unspecified. It is often useful when developing a new microservice to set this value to a higher value, such as 12h . This higher value will allow the secret store token to remain valid long enough for a developer to get a new microservice working and into a state where it can renew its own token. (All secret store tokens in EdgeX expire if not renewed periodically.)","title":"Common Environment Variables"},{"location":"microservices/configuration/CommonEnvironmentVariables/#common-environment-variables","text":"There are two types of environment variables used by all EdgeX services. They are standard and overrides . The only difference is that the overrides apply to command-line options and service configuration settings where as standard do not have any corresponding command-line option or configuration setting.","title":"Common Environment Variables"},{"location":"microservices/configuration/CommonEnvironmentVariables/#standard-environment-variables","text":"This section describes the standard environment variables common to all EdgeX services. 
Some services may have additional standard environment variables which are documented in those service specific sections.","title":"Standard Environment Variables"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_security_secret_store","text":"This environment variable indicates whether the service is expected to initialize the secure SecretStore which allows the service to access secrets from Vault. Defaults to true if not set or not set to false . When set to true the EdgeX security services must be running. If running EdgeX in non-secure mode you then want this explicitly set to false . Example - Using docker-compose to disable secure SecretStore environment : EDGEX_SECURITY_SECRET_STORE : \"false\" EdgeX 2.0 For EdgeX 2.0 when running in secure mode Consul is secured, which requires all services to have this environment variable be true so they can request their Consul access token from Vault. See the Secure Consul section for more details.","title":"EDGEX_SECURITY_SECRET_STORE"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_startup_duration","text":"This environment variable sets the total duration in seconds allowed for the services to complete the bootstrap start-up. Default is 60 seconds. Example - Using docker-compose to set start-up duration to 120 seconds environment : EDGEX_STARTUP_DURATION : \"120\" EdgeX 2.0 For EdgeX 2.0 the deprecated lower case version startup_duration has been removed","title":"EDGEX_STARTUP_DURATION"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_startup_interval","text":"This environment variable sets the retry interval in seconds for the services retrying a failed action during the bootstrap start-up. Default is 1 second. 
Example - Using docker-compose to set start-up interval to 3 seconds environment : EDGEX_STARTUP_INTERVAL : \"3\" EdgeX 2.0 For EdgeX 2.0 the deprecated lower case version startup_interval has been removed","title":"EDGEX_STARTUP_INTERVAL"},{"location":"microservices/configuration/CommonEnvironmentVariables/#environment-overrides","text":"There are two types of environment overrides which are command-line and configuration . Important Environment variable overrides have precedence over all command-line, local configuration and remote configuration. i.e. configuration setting changed in Consul will be overridden after the service loads the configuration from Consul if that setting has an environment override.","title":"Environment Overrides"},{"location":"microservices/configuration/CommonEnvironmentVariables/#command-line-overrides","text":"","title":"Command-line Overrides"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_conf_dir","text":"This environment variable overrides the -c/--confdir command-line option . Note All EdgeX service Docker images have this option set to /res . Example - Using docker-compose to override the configuration folder name environment : EDGEX_CONF_DIR : \"/my-config\"","title":"EDGEX_CONF_DIR"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_config_file","text":"This environment variable overrides the -f/--file command-line option . Example - Using docker-compose to override the configuration file name used environment : EDGEX_CONFIG_FILE : \"my-config.toml\"","title":"EDGEX_CONFIG_FILE"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_configuration_provider","text":"This environment variable overrides the -cp/--configProvider command-line option . Note All EdgeX service Docker images have this option set to -cp=consul.http://edgex-core-consul:8500 . 
Example - Using docker-compose to override with different port number environment : EDGEX_CONFIGURATION_PROVIDER : \"consul.http://edgex-consul:9500\"","title":"EDGEX_CONFIGURATION_PROVIDER"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_profile","text":"This environment variable overrides the -p/--profile command-line option . When non-empty, the value is used in the path to the configuration file. i.e. /res/my-profile/configuration.toml. This is useful when running multiple instances of a service such as App Service Configurable. Example - Using docker-compose to override the profile to use app-service-rules : image : edgexfoundry/docker-app-service-configurable:2.0.0 environment : EDGEX_PROFILE : \"rules-engine\" ... This sets the profile so that the App Service Configurable uses the rules-engine configuration profile which resides at /res/rules-engine/configuration.toml EdgeX 2.0 For EdgeX 2.0 the deprecated lower case version edgex_profile has been removed","title":"EDGEX_PROFILE"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_use_registry","text":"This environment variable overrides the -r/--registry command-line option . Note All EdgeX service Docker images have this option set to --registry . Example - Using docker-compose to override use of the Registry environment : EDGEX_USE_REGISTRY : \"false\" EdgeX 2.0 For EdgeX 2.0 the deprecated lower case version edgex_registry has been removed","title":"EDGEX_USE_REGISTRY"},{"location":"microservices/configuration/CommonEnvironmentVariables/#configuration-overrides","text":"Any configuration setting from a service's configuration.toml file can be overridden by environment variables. The environment variable names have the following format: < TOML-SECTION-NAME > _ < TOML-KEY-NAME > < TOML-SECTION-NAME > _ < TOML-SUB-SECTION-NAME > _ < TOML-KEY-NAME > EdgeX 2.0 With EdgeX 2.0 the use of CamelCase environment variable names is no longer supported. 
Instead the variable names must be all uppercase as in the example below. Also the use of a dash - in the TOML-NAME is converted to an underscore _ in the environment variable name. Example - Environment Overrides of Configuration ``` toml TOML : [ Writable ] LogLevel = \"INFO\" ENVVAR : WRITABLE_LOGLEVEL = DEBUG TOML : [ Clients ] [Clients.core-data] Host = \"localhost\" ENVVAR : CLIENTS_CORE_DATA_HOST = edgex-core-data ```","title":"Configuration Overrides"},{"location":"microservices/configuration/CommonEnvironmentVariables/#notable-configuration-overrides","text":"This section describes environment variable overrides that have special utility, such as enabling a debug capability or facilitating code development.","title":"Notable Configuration Overrides"},{"location":"microservices/configuration/CommonEnvironmentVariables/#kong_ssl_cipher_suite-edgex-kong-service","text":"This variable controls the TLS cipher suite and protocols supported by the EdgeX API Gateway as implemented by Kong. This variable, if unspecified, selects the \"intermediate\" cipher suite which supports TLSv1.2, TLSv1.3, and relatively modern TLS ciphers. The EdgeX framework by default overrides this value to \"modern\" , which currently enables only TLSv1.3 and a fixed cipher suite. The \"modern\" cipher suite is known to be incompatible with older web browsers, but since the target use of the API gateway is to support API clients, not browsers, this behavior was deemed acceptable by the EdgeX Security Working Group on September 8, 2021.","title":"KONG_SSL_CIPHER_SUITE (edgex-kong service)"},{"location":"microservices/configuration/CommonEnvironmentVariables/#tokenfileprovider_defaulttokenttl-security-secretstore-setup-service","text":"This variable controls the TTL of the default secretstore tokens that are created for EdgeX microservices. This variable defaults to 1h (one hour) if unspecified. 
It is often useful when developing a new microservice to set this value to a higher value, such as 12h . This higher value will allow the secret store token to remain valid long enough for a developer to get a new microservice working and into a state where it can renew its own token. (All secret store tokens in EdgeX expire if not renewed periodically.)","title":"TOKENFILEPROVIDER_DEFAULTTOKENTTL (security-secretstore-setup service)"},{"location":"microservices/configuration/ConfigurationAndRegistry/","text":"Configuration and Registry Providers Introduction The EdgeX registry and configuration service provides other EdgeX Foundry micro services with information about associated services within EdgeX Foundry (such as location and status) and configuration properties (i.e. - a repository of initialization and operating values). Today, EdgeX Foundry uses Consul by Hashicorp as its reference implementation configuration and registry providers. However, abstractions are in place so that these functions could be provided by an alternate implementation. In fact, registration and configuration could be provided by different services under the covers. For more, see the Configuration Provider and Registry Provider sections in this page. Configuration Please refer to the EdgeX Foundry architectural decision record for details (and design decisions) behind the configuration in EdgeX. Local Configuration Because EdgeX Foundry may be deployed and run in several different ways, it is important to understand how configuration is loaded and from where it is sourced. Referring to the cmd directory within the edgex-go repository , each service has its own folder. Inside each service folder there is a res directory (short for \"resource\"). There you will find the configuration files in TOML format that define each service's configuration. A service may support several different configuration profiles, as App Service Configurable does. 
In this case, the configuration file located directly in the res directory should be considered the default configuration profile. Sub-directories will contain configurations appropriate to the respective profile. As of the Geneva release, EdgeX recommends using environment variable overrides instead of creating profiles to override some subset of config values. App Service Configurable is an exception to this as this is how it defines unique instances using the same executable. If you choose to use profiles as described above, the config profile can be indicated using one of the following command line flags: --profile / -p Taking the Core Data and App Service Configurable services as examples: ./core-data starts the service using the default profile found locally ./app-service-configurable --profile=rules-engine starts the service using the rules-engine profile found locally Note Again, utilizing environment variables for configuration overrides is the recommended path. Config profiles, for the most part, are not used. Seeding Configuration When utilizing the centralized configuration management for the EdgeX Foundry micro services, it is necessary to seed the required configuration before starting the services. Each service has the built-in capability to perform this seeding operation. A service will use its local configuration file to initialize the structure and relevant values, and then overlay any environment variable override values as specified. The end result will be seeded into the configuration provider if such is being used. In order for a service to seed/load the configuration to/from the configuration provider, use one of the following flags: --configProvider / -cp Again, taking the core-data service as an example: ./core-data -cp=consul.http://localhost:8500 will start the service using configuration values found in the provider or seed them if they do not exist. 
Note Environment overrides are also applied after the configuration is loaded from the configuration provider. Configuration Structure Configuration information is organized into a hierarchical structure allowing for a logical grouping of services, as well as versioning, beneath an \"edgex\" namespace at root level of the configuration tree. The root namespace separates EdgeX Foundry-related configuration information from other applications that may be using the same configuration provider. Below the root, sub-nodes facilitate grouping of device services, core/support/security services, app services, etc. As an example, the top-level nodes shown when one views the configuration registry might be as follows: edgex (root namespace) core (core/support/security services) devices (device services) appservices ( application services ) Versioning Incorporating versioning into the configuration hierarchy looks like this. edgex (root namespace) core (core/support/security services) 2.0 core-command core-data core-metadata support-notifications support-scheduler sys-mgmt-agent 3.0 devices (device services) 2.0 device-mqtt device-virtual device-modbus 3.0 appservices (application services) 2.0 app-rules-engine 3.0 EdgeX 2.0 For EdgeX 2.0 the version number in the path is now 2.0 and the service keys are now used for the service names. The versions shown correspond to major versions of the given services. For all minor/patch versions associated with a major version, the respective service keys live under the major version in configuration (such as 2.0). Changes to the configuration structure that may be required during the associated minor version development cycles can only be additive. That is, key names will not be removed or changed once set in a major version. Furthermore, sections of the configuration tree cannot be moved from one place to another. In this way, backward compatibility for the lifetime of the major version is maintained. 
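The versioned hierarchy above maps naturally onto slash-separated keys beneath the edgex root namespace in the configuration provider. As a hedged illustration (the helper name `config_key` and the exact key layout are assumptions for this sketch, not the EdgeX implementation):

```python
def config_key(group: str, major_version: str, service_key: str, *path: str) -> str:
    # Hypothetical helper, for illustration only: compose a configuration key
    # beneath the 'edgex' root namespace following the hierarchy described
    # above (root namespace, service grouping, major version, service key).
    return '/'.join(('edgex', group, major_version, service_key) + path)

print(config_key('appservices', '2.0', 'app-rules-engine', 'Writable', 'LogLevel'))
# edgex/appservices/2.0/app-rules-engine/Writable/LogLevel
```

Because minor/patch versions share the major-version node, keys like this remain stable (and user edits remain in place) across minor upgrades.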
An advantage of grouping all minor/patch versions under a major version involves end-user configuration changes that need to be persisted during an upgrade. A service on startup will not overwrite existing configuration when it runs unless explicitly told to do so via the --overwrite / -o command line flag. Therefore, if a user leaves their configuration provider running during an EdgeX Foundry upgrade, any customization will be left in place. Environment variable overrides such as those supplied in the docker-compose for a given release will always override existing content in the configuration provider. Configuration Provider You can supply and manage configuration in a centralized manner by utilizing the -cp/--configProvider flag when starting a service. If the flag is provided and points to an application such as HashiCorp's Consul , the service will bootstrap its configuration into the provider, if it doesn't exist. If configuration does already exist, it will load the content from the given location applying any environment variable overrides of which the service is aware. Integration with the configuration provider is handled through the go-mod-configuration module referenced by all services. Registry Provider The registry refers to any platform you may use for service discovery. For the EdgeX Foundry reference implementation, the default provider for this responsibility is Consul. Integration with the registry is handled through the go-mod-registry module referenced by all services. Introduction to Registry The objective of the registry is to enable micro services to find and to communicate with each other. When each micro service starts up, it registers itself with the registry, and the registry continues checking its availability periodically via a specified health check endpoint. 
When one micro service needs to connect to another one, it connects to the registry to retrieve the available host name and port number of the target micro service and then invokes the target micro service. The following figure shows the basic flow. Consul is the default registry implementation and provides native features for service registration, service discovery, and health checking. Please refer to the Consul official web site for more information: https://www.consul.io Physically, the \"registry\" and \"configuration\" management services are combined and running on the same Consul server node. Web User Interface A web user interface is also provided by Consul. Users can view the available service list and their health status through the web user interface. The web user interface is available at the /ui path on the same port as the HTTP API. By default this is http://localhost:8500/ui . For more detail, please see: https://www.consul.io/intro/getting-started/ui.html Running on Docker For ease of installation and updates, the microservices of EdgeX Foundry are published as Docker images on Docker Hub, along with compose files that allow you to run EdgeX and dependent services such as Consul. These compose files can be found here in the edgex-compose repository . See the Getting Started using Docker for more details. Once the EdgeX stack is running in Docker, verify Consul is running by going to http://localhost:8500/ui in your browser. Running on Local Machine To run Consul on the local machine, follow these steps: Download the binary from Consul official website: https://www.consul.io/downloads.html . Please choose the correct binary file according to the operating system. Set up the environment variable. Please refer to https://www.consul.io/intro/getting-started/install.html . 
Execute the following command: consul agent -data-dir \\$ { DATA_FOLDER } -ui -advertise 127 .0.0.1 -server -bootstrap-expect 1 # ${DATA_FOLDER} could be any folder to put the data files of Consul and it needs the read/write permission. Verify the result: http://localhost:8500/ui","title":"Configuration and Registry Providers"},{"location":"microservices/configuration/ConfigurationAndRegistry/#configuration-and-registry-providers","text":"","title":"Configuration and Registry Providers"},{"location":"microservices/configuration/ConfigurationAndRegistry/#introduction","text":"The EdgeX registry and configuration service provides other EdgeX Foundry micro services with information about associated services within EdgeX Foundry (such as location and status) and configuration properties (i.e. - a repository of initialization and operating values). Today, EdgeX Foundry uses Consul by Hashicorp as its reference implementation configuration and registry providers. However, abstractions are in place so that these functions could be provided by an alternate implementation. In fact, registration and configuration could be provided by different services under the covers. For more, see the Configuration Provider and Registry Provider sections in this page.","title":"Introduction"},{"location":"microservices/configuration/ConfigurationAndRegistry/#configuration","text":"Please refer to the EdgeX Foundry architectural decision record for details (and design decisions) behind the configuration in EdgeX.","title":"Configuration"},{"location":"microservices/configuration/ConfigurationAndRegistry/#local-configuration","text":"Because EdgeX Foundry may be deployed and run in several different ways, it is important to understand how configuration is loaded and from where it is sourced. Referring to the cmd directory within the edgex-go repository , each service has its own folder. Inside each service folder there is a res directory (short for \"resource\"). 
There you will find the configuration files in TOML format that define each service's configuration. A service may support several different configuration profiles, as App Service Configurable does. In this case, the configuration file located directly in the res directory should be considered the default configuration profile. Sub-directories will contain configurations appropriate to the respective profile. As of the Geneva release, EdgeX recommends using environment variable overrides instead of creating profiles to override some subset of config values. App Service Configurable is an exception to this as this is how it defines unique instances using the same executable. If you choose to use profiles as described above, the config profile can be indicated using one of the following command line flags: --profile / -p Taking the Core Data and App Service Configurable services as examples: ./core-data starts the service using the default profile found locally ./app-service-configurable --profile=rules-engine starts the service using the rules-engine profile found locally Note Again, utilizing environment variables for configuration overrides is the recommended path. Config profiles, for the most part, are not used.","title":"Local Configuration"},{"location":"microservices/configuration/ConfigurationAndRegistry/#seeding-configuration","text":"When utilizing the centralized configuration management for the EdgeX Foundry micro services, it is necessary to seed the required configuration before starting the services. Each service has the built-in capability to perform this seeding operation. A service will use its local configuration file to initialize the structure and relevant values, and then overlay any environment variable override values as specified. The end result will be seeded into the configuration provider if such is being used. 
In order for a service to seed/load the configuration to/from the configuration provider, use one of the following flags: --configProvider / -cp Again, taking the core-data service as an example: ./core-data -cp=consul.http://localhost:8500 will start the service using configuration values found in the provider or seed them if they do not exist. Note Environment overrides are also applied after the configuration is loaded from the configuration provider.","title":"Seeding Configuration"},{"location":"microservices/configuration/ConfigurationAndRegistry/#configuration-structure","text":"Configuration information is organized into a hierarchical structure allowing for a logical grouping of services, as well as versioning, beneath an \"edgex\" namespace at root level of the configuration tree. The root namespace separates EdgeX Foundry-related configuration information from other applications that may be using the same configuration provider. Below the root, sub-nodes facilitate grouping of device services, core/support/security services, app services, etc. As an example, the top-level nodes shown when one views the configuration registry might be as follows: edgex (root namespace) core (core/support/security services) devices (device services) appservices ( application services )","title":"Configuration Structure"},{"location":"microservices/configuration/ConfigurationAndRegistry/#versioning","text":"Incorporating versioning into the configuration hierarchy looks like this. edgex (root namespace) core (core/support/security services) 2.0 core-command core-data core-metadata support-notifications support-scheduler sys-mgmt-agent 3.0 devices (device services) 2.0 device-mqtt device-virtual device-modbus 3.0 appservices (application services) 2.0 app-rules-engine 3.0 EdgeX 2.0 For EdgeX 2.0 the version number in the path is now 2.0 and the service keys are now used for the service names. The versions shown correspond to major versions of the given services. 
For all minor/patch versions associated with a major version, the respective service keys live under the major version in configuration (such as 2.0). Changes to the configuration structure that may be required during the associated minor version development cycles can only be additive. That is, key names will not be removed or changed once set in a major version. Furthermore, sections of the configuration tree cannot be moved from one place to another. In this way, backward compatibility for the lifetime of the major version is maintained. An advantage of grouping all minor/patch versions under a major version involves end-user configuration changes that need to be persisted during an upgrade. A service on startup will not overwrite existing configuration when it runs unless explicitly told to do so via the --overwrite / -o command line flag. Therefore, if a user leaves their configuration provider running during an EdgeX Foundry upgrade, any customization will be left in place. Environment variable overrides such as those supplied in the docker-compose for a given release will always override existing content in the configuration provider.","title":"Versioning"},{"location":"microservices/configuration/ConfigurationAndRegistry/#configuration-provider","text":"You can supply and manage configuration in a centralized manner by utilizing the -cp/--configProvider flag when starting a service. If the flag is provided and points to an application such as HashiCorp's Consul , the service will bootstrap its configuration into the provider, if it doesn't exist. If configuration does already exist, it will load the content from the given location applying any environment variable overrides of which the service is aware. 
Integration with the configuration provider is handled through the go-mod-configuration module referenced by all services.","title":"Configuration Provider"},{"location":"microservices/configuration/ConfigurationAndRegistry/#registry-provider","text":"The registry refers to any platform you may use for service discovery. For the EdgeX Foundry reference implementation, the default provider for this responsibility is Consul. Integration with the registry is handled through the go-mod-registry module referenced by all services.","title":"Registry Provider"},{"location":"microservices/configuration/ConfigurationAndRegistry/#introduction-to-registry","text":"The objective of the registry is to enable micro services to find and to communicate with each other. When each micro service starts up, it registers itself with the registry, and the registry continues checking its availability periodically via a specified health check endpoint. When one micro service needs to connect to another one, it connects to the registry to retrieve the available host name and port number of the target micro service and then invokes the target micro service. The following figure shows the basic flow. Consul is the default registry implementation and provides native features for service registration, service discovery, and health checking. Please refer to the Consul official web site for more information: https://www.consul.io Physically, the \"registry\" and \"configuration\" management services are combined and running on the same Consul server node.","title":"Introduction to Registry"},{"location":"microservices/configuration/ConfigurationAndRegistry/#web-user-interface","text":"A web user interface is also provided by Consul. Users can view the available service list and their health status through the web user interface. The web user interface is available at the /ui path on the same port as the HTTP API. By default this is http://localhost:8500/ui . 
For more detail, please see: https://www.consul.io/intro/getting-started/ui.html","title":"Web User Interface"},{"location":"microservices/configuration/ConfigurationAndRegistry/#running-on-docker","text":"For ease of installation and updates, the microservices of EdgeX Foundry are published as Docker images on Docker Hub, along with compose files that allow you to run EdgeX and dependent services such as Consul. These compose files can be found here in the edgex-compose repository . See the Getting Started using Docker for more details. Once the EdgeX stack is running in Docker, verify Consul is running by going to http://localhost:8500/ui in your browser.","title":"Running on Docker"},{"location":"microservices/configuration/ConfigurationAndRegistry/#running-on-local-machine","text":"To run Consul on the local machine, follow these steps: Download the binary from Consul official website: https://www.consul.io/downloads.html . Please choose the correct binary file according to the operating system. Set up the environment variable. Please refer to https://www.consul.io/intro/getting-started/install.html . Execute the following command: consul agent -data-dir \\$ { DATA_FOLDER } -ui -advertise 127 .0.0.1 -server -bootstrap-expect 1 # ${DATA_FOLDER} could be any folder to put the data files of Consul and it needs the read/write permission. Verify the result: http://localhost:8500/ui","title":"Running on Local Machine"},{"location":"microservices/configuration/V2MigrationCommonConfig/","text":"V2 Migration of Common Configuration EdgeX 2.0 For EdgeX 2.0 there have been many breaking changes made to the configuration for all services. This section describes how to migrate the configuration sections that are common to all services. This information only applies if you have existing 1.x configuration that you have modified and need to migrate, rather than use the new V2 version of the configuration and modify it as needed. 
Writable The Writable section has the new InsecureSecrets sub-section. All services need the following added so they can access the Database and/or MessageBus : [Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\" Logging Remove the [Logging] section. Service The service section is now common to all EdgeX services. The migration to this new version slightly differs for each class of service, i.e. Core/Support, Device or Application Service. The sub-sections below describe the migration for each class. Core/Support For the Core/Support services the following changes are required: Remove BootTimeout Remove Protocol Rename CheckInterval to HealthCheckInterval Rename Timeout to RequestTimeout and change value to be duration string. i.e 5000 changes to 5s Add MaxRequestSize with value of 0 Port value changes to be in proper range for new port assignments. See Port Assignments (TBD) section for more details Device For Device service the changes are the same as Core/Support above plus the following: Remove ConnectRetries Move EnableAsyncReadings to be under the [Device] section Move AsyncBufferSize to be under the [Device] section Move labels to be under the [Device] section Application For Application services the changes are the same as Core/Support above plus the following: Remove ReadMaxLimit Remove ClientMonitor Add ServerBindAddr = \"\" # if blank, uses default Go behavior https://golang.org/pkg/net/#Listen Add MaxResultCount and set value to 0 Databases Remove the Username and Password settings Registry No changes Clients The map key names have changed to use the service key for each of the target services. 
Each client entry must be changed to use the appropriate service key as follows: CoreData => core-data Metadata => core-metadata Command => core-command Notifications => support-notifications Scheduler => support-scheduler Remove the [Clients.Logging] section SecretStore All services now require the [SecretStore] section. For those that did not have it previously, add the following, replacing with the service's actual service key: [SecretStore] Type = 'vault' Protocol = 'http' Host = 'localhost' Port = 8200 Path = '/' TokenFile = '/tmp/edgex/secrets//secrets-token.json' RootCaCertPath = '' ServerName = '' [SecretStore.Authentication] AuthType = 'X-Vault-Token' For those services that previously had the [SecretStore] section, make the following changes replacing with the service's actual service key: Add the Type = 'vault' setting Remove AdditionalRetryAttempts Remove RetryWaitPeriod Change Protocol value to be 'http' Change Path value to be '/' Change TokenFile value to be '/tmp/edgex/secrets//secrets-token.json' Change RootCaCertPath value to be empty, i.e '' Change ServerName value to be empty, i.e ''","title":"V2 Migration of Common Configuration"},{"location":"microservices/configuration/V2MigrationCommonConfig/#v2-migration-of-common-configuration","text":"EdgeX 2.0 For EdgeX 2.0 there have been many breaking changes made to the configuration for all services. This section describes how to migrate the configuration sections that are common to all services. This information only applies if you have existing 1.x configuration that you have modified and need to migrate, rather than use the new V2 version of the configuration and modify it as needed.","title":"V2 Migration of Common Configuration"},{"location":"microservices/configuration/V2MigrationCommonConfig/#writable","text":"The Writable section has the new InsecureSecrets sub-section. 
All services need the following added so they can access the Database and/or MessageBus : [Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\"","title":"Writable"},{"location":"microservices/configuration/V2MigrationCommonConfig/#logging","text":"Remove the [Logging] section.","title":"Logging"},{"location":"microservices/configuration/V2MigrationCommonConfig/#service","text":"The service section is now common to all EdgeX services. The migration to this new version slightly differs for each class of service, i.e. Core/Support, Device or Application Service. The sub-sections below describe the migration for each class.","title":"Service"},{"location":"microservices/configuration/V2MigrationCommonConfig/#coresupport","text":"For the Core/Support services the following changes are required: Remove BootTimeout Remove Protocol Rename CheckInterval to HealthCheckInterval Rename Timeout to RequestTimeout and change value to be duration string. i.e 5000 changes to 5s Add MaxRequestSize with value of 0 Port value changes to be in proper range for new port assignments. 
See Port Assignments (TBD) section for more details","title":"Core/Support"},{"location":"microservices/configuration/V2MigrationCommonConfig/#device","text":"For Device service the changes are the same as Core/Support above plus the following: Remove ConnectRetries Move EnableAsyncReadings to be under the [Device] section Move AsyncBufferSize to be under the [Device] section Move labels to be under the [Device] section","title":"Device"},{"location":"microservices/configuration/V2MigrationCommonConfig/#application","text":"For Application services the changes are the same as Core/Support above plus the following: Remove ReadMaxLimit Remove ClientMonitor Add ServerBindAddr = \"\" # if blank, uses default Go behavior https://golang.org/pkg/net/#Listen Add MaxResultCount and set value to 0","title":"Application"},{"location":"microservices/configuration/V2MigrationCommonConfig/#databases","text":"Remove the Username and Password settings","title":"Databases"},{"location":"microservices/configuration/V2MigrationCommonConfig/#registry","text":"No changes","title":"Registry"},{"location":"microservices/configuration/V2MigrationCommonConfig/#clients","text":"The map key names have changed to use the service key for each of the target services. Each client entry must be changed to use the appropriate service key as follows: CoreData => core-data Metadata => core-metadata Command => core-command Notifications => support-notifications Scheduler => support-scheduler Remove the [Clients.Logging] section","title":"Clients"},{"location":"microservices/configuration/V2MigrationCommonConfig/#secretstore","text":"All services now require the [SecretStore] section. 
For those that did not have it previously, add the following, replacing with the service's actual service key: [SecretStore] Type = 'vault' Protocol = 'http' Host = 'localhost' Port = 8200 Path = '/' TokenFile = '/tmp/edgex/secrets//secrets-token.json' RootCaCertPath = '' ServerName = '' [SecretStore.Authentication] AuthType = 'X-Vault-Token' For those services that previously had the [SecretStore] section, make the following changes replacing with the service's actual service key: Add the Type = 'vault' setting Remove AdditionalRetryAttempts Remove RetryWaitPeriod Change Protocol value to be 'http' Change Path value to be '/' Change TokenFile value to be '/tmp/edgex/secrets//secrets-token.json' Change RootCaCertPath value to be empty, i.e '' Change ServerName value to be empty, i.e ''","title":"SecretStore"},{"location":"microservices/core/Ch-CoreServices/","text":"Core Services Core services provide the intermediary between the north and south sides of EdgeX. As the name of these services implies, they are \u201ccore\u201d to EdgeX functionality. Core services is where the innate knowledge of \u201cthings\u201d connected, sensor data collected, and EdgeX configuration resides. Core consists of the following micro services: Core data : a persistence repository and associated management service for data collected from south side objects. Command : a service that facilitates and controls actuation requests from the north side to the south side. Metadata : a repository and associated management service of metadata about the objects that are connected to EdgeX Foundry. Metadata provides the capability to provision new devices and pair them with their owning device services. Registry and Configuration : provides other EdgeX Foundry micro services with information about associated services within the system and micro services configuration properties (i.e. 
- a repository of initialization values).","title":"Core Services"},{"location":"microservices/core/Ch-CoreServices/#core-services","text":"Core services provide the intermediary between the north and south sides of EdgeX. As the name of these services implies, they are \u201ccore\u201d to EdgeX functionality. Core services is where the innate knowledge of \u201cthings\u201d connected, sensor data collected, and EdgeX configuration resides. Core consists of the following micro services: Core data : a persistence repository and associated management service for data collected from south side objects. Command : a service that facilitates and controls actuation requests from the north side to the south side. Metadata : a repository and associated management service of metadata about the objects that are connected to EdgeX Foundry. Metadata provides the capability to provision new devices and pair them with their owning device services. Registry and Configuration : provides other EdgeX Foundry micro services with information about associated services within the system and micro services configuration properties (i.e. - a repository of initialization values).","title":"Core Services"},{"location":"microservices/core/command/Ch-Command/","text":"Command Introduction The command micro service (often called the command and control micro service) enables the issuance of commands or actions to devices on behalf of: other micro services within EdgeX Foundry (for example, an edge analytics or rules engine micro service) other applications that may exist on the same system with EdgeX Foundry (for example, a management agent that needs to shutoff a sensor) To any external system that needs to command those devices (for example, a cloud-based application that determined the need to modify the settings on a collection of devices) The command micro service exposes the commands in a common, normalized way to simplify communications with the devices. 
There are two types of commands that can be sent to a device. A GET command requests data from the device. This is often used to request the latest sensor reading from the device. SET commands request to take action or actuate the device or to set some configuration on the device. In most cases, GET commands are simple requests for the latest sensor reading from the device. Therefore, the request is often parameter-less (requiring no parameters or body in the request). SET commands require a request body where the body provides a key/value pair array of values used as parameters in the request (i.e. {\"additionalProp1\": \"string\", \"additionalProp2\": \"string\"} ). EdgeX 2.1 v2.1 supports a new value type, Object , to present the structural value instead of encoding it as string for both SET and GET commands, for example, the SET command parameter might be {\"Location\": {\"latitude\": 39.67872546666667, \"longitude\": -104.97710646666667}} . The command micro service gets its knowledge about the devices from the metadata service. The command service always relays commands (GET or SET) to the devices through the device service. The command service never communicates directly to a device. Therefore, the command micro service is a proxy service for command or action requests from the north side of EdgeX (such as analytic or application services) to the protocol-specific device service and associated device. While not currently part of its duties, the command service could provide a layer of protection around devices. Additional security could be added that would not allow unwarranted interaction with the devices (via device service). The command service could also regulate the number of requests on a device so they do not overwhelm the device - perhaps even caching responses so as to avoid waking a device unless necessary. 
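The Object-typed SET parameter described above is just a nested JSON document in the request body. The following is only a sketch of building that body from the Location example in this section; the actual endpoint path and HTTP request are left to the Core Command API Reference.

```python
import json

# Sketch only (not an official client): serialize a SET command parameter
# that uses the EdgeX 2.1 Object value type, i.e. a nested structure rather
# than a string-encoded value. The resource name 'Location' comes from the
# example in this section.
set_params = {
    'Location': {
        'latitude': 39.67872546666667,
        'longitude': -104.97710646666667,
    }
}
body = json.dumps(set_params)

# The body round-trips as structured data, which is the point of the
# Object value type.
assert json.loads(body)['Location']['latitude'] == 39.67872546666667
```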
Data Model EdgeX 2.0 While the general concepts of core command's GET/PUT requests are the same, the core command request/response models have changed significantly in EdgeX 2.0. Consult the API documentation for details. Data Dictionary DeviceProfile Property Description Id uniquely identifies the device, a UUID for example Description Name Name for identifying a device Manufacturer Manufacturer of the device Model Model of the device Labels Labels used to search for groups of profiles DeviceResources deviceResource collection DeviceCommands collection of deviceCommands DeviceCoreCommand Property Description DeviceName reference to a device by name ProfileName reference to a device profile by name CoreCommands array of core commands CoreCommand Property Description Name Get bool indicating a get command Set bool indicating a set command Path Url Parameters array of core command parameters CoreCommandParameters Property Description ResourceName ValueType High Level Interaction Diagrams The two following High Level Diagrams show: Issue a PUT command Get a list of devices and the available commands Command PUT Request Request for Devices and Available Commands Configuration Properties Please refer to the general Common Configuration documentation for configuration properties common to all services. Core Command no longer has any additional settings. V2 Configuration Migration Guide Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service . 
API Reference Core Command API Reference","title":"Command"},{"location":"microservices/core/command/Ch-Command/#command","text":"","title":"Command"},{"location":"microservices/core/command/Ch-Command/#introduction","text":"The command micro service (often called the command and control micro service) enables the issuance of commands or actions to devices on behalf of: other micro services within EdgeX Foundry (for example, an edge analytics or rules engine micro service) other applications that may exist on the same system with EdgeX Foundry (for example, a management agent that needs to shutoff a sensor) To any external system that needs to command those devices (for example, a cloud-based application that determined the need to modify the settings on a collection of devices) The command micro service exposes the commands in a common, normalized way to simplify communications with the devices. There are two types of commands that can be sent to a device. A GET command requests data from the device. This is often used to request the latest sensor reading from the device. SET commands request to take action or actuate the device or to set some configuration on the device. In most cases, GET commands are simple requests for the latest sensor reading from the device. Therefore, the request is often parameter-less (requiring no parameters or body in the request). SET commands require a request body where the body provides a key/value pair array of values used as parameters in the request (i.e. {\"additionalProp1\": \"string\", \"additionalProp2\": \"string\"} ). EdgeX 2.1 v2.1 supports a new value type, Object , to present the structural value instead of encoding it as a string for both SET and GET commands, for example, the SET command parameter might be {\"Location\": {\"latitude\": 39.67872546666667, \"longitude\": -104.97710646666667}} . The command micro service gets its knowledge about the devices from the metadata service. 
The command service always relays commands (GET or SET) to the devices through the device service. The command service never communicates directly with a device. Therefore, the command micro service is a proxy service for command or action requests from the north side of EdgeX (such as analytic or application services) to the protocol-specific device service and associated device. While not currently part of its duties, the command service could provide a layer of protection around devices. Additional security could be added that would not allow unwarranted interaction with the devices (via device service). The command service could also regulate the number of requests to a device so they do not overwhelm the device - perhaps even caching responses so as to avoid waking a device unless necessary.","title":"Introduction"},{"location":"microservices/core/command/Ch-Command/#data-model","text":"EdgeX 2.0 While the general concepts of core command's GET/PUT requests are the same, the core command request/response models have changed significantly in EdgeX 2.0. 
Consult the API documentation for details.","title":"Data Model"},{"location":"microservices/core/command/Ch-Command/#data-dictionary","text":"DeviceProfile Property Description Id uniquely identifies the device, a UUID for example Description Name Name for identifying a device Manufacturer Manufacturer of the device Model Model of the device Labels Labels used to search for groups of profiles DeviceResources deviceResource collection DeviceCommands collection of deviceCommands DeviceCoreCommand Property Description DeviceName reference to a device by name ProfileName reference to a device profile by name CoreCommands array of core commands CoreCommand Property Description Name Get bool indicating a get command Set bool indicating a set command Path Url Parameters array of core command parameters CoreCommandParameters Property Description ResourceName ValueType","title":"Data Dictionary"},{"location":"microservices/core/command/Ch-Command/#high-level-interaction-diagrams","text":"The two following High Level Diagrams show: Issue a PUT command Get a list of devices and the available commands Command PUT Request Request for Devices and Available Commands","title":"High Level Interaction Diagrams"},{"location":"microservices/core/command/Ch-Command/#configuration-properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services. 
Core Command no longer has any additional settings.","title":"Configuration Properties"},{"location":"microservices/core/command/Ch-Command/#v2-configuration-migration-guide","text":"Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service .","title":"V2 Configuration Migration Guide"},{"location":"microservices/core/command/Ch-Command/#api-reference","text":"Core Command API Reference","title":"API Reference"},{"location":"microservices/core/data/Ch-CoreData/","text":"Core Data Introduction The core data micro service provides centralized persistence for data collected by devices . Device services that collect sensor data call on the core data service to store the sensor data on the edge system (such as in a gateway ) until the data gets moved \"north\" and then exported to Enterprise and cloud systems. Core data persists the data in a local database. Redis is used by default, but a database abstraction layer allows for other databases to be used. Other services and systems, both within EdgeX Foundry and outside of EdgeX Foundry, access the sensor data through the core data service. Core data could also provide a degree of security and protection of the data collected while the data is at the edge. EdgeX 2.0 As of EdgeX 2.0 (Ireland), core data is completely optional. Device services can send data via message bus directly to application services. If local persistence is not needed, the service can be removed. If persistence is needed, sensor data can be sent via message bus to core data (the new default means to communicate with core data) or can be sent via REST to core data (the legacy way to send data to core data). See below for more details. Sensor data can be sent to core data via two different means: Services (like device services) and other systems can put sensor data on a message bus topic and core data can be configured to subscribe to that topic. 
This is the default means of getting data to core data. Any service (like an application service or rules engine service) or 3rd party system could also subscribe to the same topic. If the sensor data does not need to be persisted locally, core data does not have to subscribe to the message bus topic - making core data completely optional. By default, the message bus is implemented using Redis Pub/Sub. MQTT can be used as an alternate message bus implementation. Services and systems can call on the core data REST API to send data to core data and have the data put in local storage. Prior to EdgeX 2.0, this was the default and only means to send data to core data. Today, it is an alternate means to send data to core data. When data is sent via REST to core data, core data re-publishes the data on to message bus so that other services can subscribe to it. Core data moves data to the application service (and edge analytics ) via Redis Pub/Sub by default. MQTT or ZeroMQ can alternately be used. Use of MQTT requires the installation of a broker such as ActiveMQ. A messaging infrastructure abstraction is in place that allows for other message bus (e.g., AMQP) implementations to be created and used. Core Data \"Streaming\" By default, core data persists all data sent to it by services and other systems. However, when the data is too sensitive to keep at the edge, or there is no use for the data at the edge by other local services (e.g., by an analytics micro service), the data can be \"streamed\" through core data without persisting it. A configuration change to core data (Writable.PersistData=false) has core data send data to the application services without persisting the data. This option has the advantage of reducing latency through this layer and storage needs at the network edge. But the cost is having no historical data to use for analytics that need to look back in time to make a decision. 
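As a sketch (assuming the standard EdgeX TOML configuration layout, which is not shown on this page), the streaming behavior above comes down to overriding a single writable setting in core data's configuration:

```toml
[Writable]
# When false, core data forwards events to the message bus and application
# services without writing them to its database ("streaming" mode).
PersistData = false
```

Because this lives in the Writable section, it can also be changed at runtime without restarting the service, as noted in the configuration tables below.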
Note When persistence is turned off via the PersistData flag, it is off for all devices. At this time, you cannot specify which device data is persisted and which device data is not. Application services do allow filtering of device data before it is exported or sent to another service like the rules engine, but this is not based on whether the data is persisted or not. EdgeX 2.0 As mentioned, as of EdgeX 2.0 (Ireland), core data is completely optional. Therefore, if persistence is not needed, and if sensor data is sent from device services directly to application services via message bus, core data can be removed. In addition to reducing resource utilization (memory and CPU for core data), it also removes latency of throughput as the core data layer can be completely bypassed. However, if device services are still using REST to send data into the system, core data is the central receiving endpoint and must remain in place; even if persistence is turned off. Events and Readings Data collected from sensors is marshalled into EdgeX event and reading objects (delivered as JSON objects or a binary object encoded as CBOR to core data). An event represents a collection of one or more sensor readings. Some sensors or devices are only providing a single value \u2013 a single reading - at a time. Other sensors spew multiple values whenever they are read. An event must have at least one reading. Events are associated to a sensor or device \u2013 the \u201cthing\u201d that sensed the environment and produced the readings. Readings represent a sensing on the part of a device or sensor. Readings only exist as part of (are owned by) an event. Readings are essentially a simple key/value pair of what was sensed (the key - called a ResourceName ) and the value sensed (the value). A reading may include other bits of information to provide more context (for example, the data type of the value) for the users of that data. 
Consumers of the reading data could include things like user interfaces, data visualization systems and analytics tools. In the diagram below, an example event/reading collection is depicted. The event coming from the \u201cmotor123\u201d device has two readings (or sensed values). The first reading indicates that the motor123 device reported the pressure of the motor was 1300 (the unit of measure might be something like PSI). EdgeX 2.0 In EdgeX 2.0, Value Descriptors have been removed. The ResourceName in a reading provides an indication of the data read. The other properties that were in the Value Descriptor (min, max, default value, unit of measure, etc.) can all be obtained from the Resource (in core metadata's resource properties associated to each Resource which are associated to a device profile) by ResourceName. The ValueType property is also provided in the Reading so that the data type of the value is immediately available without having to do a lookup in core metadata. The value type property (shown as type above) on the reading lets the consumer of the information know that the value is an integer, base 64. The second reading indicates that the motor123 device also reported the temperature of the motor was 120 at the same time it reported the pressure (perhaps in degrees Fahrenheit). Data Model The following diagram shows the Data Model for core data. Device services send Event objects containing a collection of Readings to core data when a device captures a sensor reading. EdgeX 2.1 v2.1 supports a new value type, Object , to present the structural reading value instead of encoding it as a string. Similar to the BinaryValue , there is a new field ObjectValue in the Reading. If the ValueType is Object , the read value will be put into the ObjectValue field in JSON object data type. EdgeX 2.0 Note that ValueDescriptor has been removed from this model as Value Descriptors have been removed in EdgeX 2 (see note above for more details). 
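The motor123 example above can be written out as an event payload. The following is an illustrative sketch only: the field names follow the Event/Reading data dictionary below, while the profile name, source name, origin timestamp, and UUIDs are invented for the example:

```json
{
  "id": "d5471d59-2810-419a-8744-18eb8fa03465",
  "deviceName": "motor123",
  "profileName": "motor-profile",
  "sourceName": "motorStatus",
  "origin": 1602168089665565200,
  "readings": [
    {
      "id": "7003cacc-0e00-4676-977c-4e58b9612abc",
      "deviceName": "motor123",
      "profileName": "motor-profile",
      "origin": 1602168089665565200,
      "resourceName": "pressure",
      "value": "1300",
      "valueType": "Int64"
    },
    {
      "id": "bf66a1c8-7b8f-4a7d-8bc8-3dce48e0d1a2",
      "deviceName": "motor123",
      "profileName": "motor-profile",
      "origin": 1602168089665565200,
      "resourceName": "temperature",
      "value": "120",
      "valueType": "Int64"
    }
  ]
}
```

Each reading carries its own ResourceName/Value pair plus the ValueType, so a consumer can interpret the values without a metadata lookup.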
Data Dictionary Event Property Description Event represents a single measurable event read from a device. Event has a one-to-many relationship with Reading. ID Uniquely identifies an event, for example a UUID. DeviceName DeviceName identifies the source of the event; the device's name. ProfileName Identifies the name of the device profile associated with the device and corresponding resources collected in the readings of the event. SourceName Name of the source request from the device profile (ResourceName or Command) associated to the reading. Origin A timestamp indicating when the original event/reading took place. Most of the time, this indicates when the device service collected/created the event. Tags An arbitrary set of labels or additional information associated with the event. It can be used, for example, to add location information (like GPS coordinates) to the event. Readings A collection (one to many) of associated readings of a given event. Reading Property Description ID Uniquely identifies a reading, for example a UUID. DeviceName DeviceName identifies the source of the reading; the device's name. ProfileName Identifies the name of the device profile associated with the device and corresponding resource collected in the reading. Origin A timestamp indicating when the original event/reading took place. Most of the time, this indicates when the device service collected/created the event. ResourceName ResourceName-Value provide the key/value pair of what was sensed by a device. ResourceName specifies what was the value collected. ResourceName should match a device resource name in the device profile. Value The sensor data value ValueType The type of the sensor data - from a list of allowed value types that includes Bool, String, Uint8, Int8, ... BinaryValue Byte array of sensor data when the data captured is not structured; for example an image is captured. 
This information is not persisted in the Database and is expected to be empty when retrieving a Reading for the ValueType of Binary. MediaType Indicating the type of binary data when collected. ObjectValue Complex value of sensor data when the data captured is structured; for example a BACnet date object: \"date\":{ \"year\":2021, \"month\":8, \"day\":26, \"wday\":4 } . This is expected to be empty when the Reading for the ValueType is not Object . High Level Interaction Diagrams The two following High Level Interaction Diagrams show: How new sensor readings are collected by a device and added as event/readings to core data and the associated persistence store How a client (inside or outside of EdgeX) can query for events (in this case by device name) Core Data Add Sensor Readings Core Data Request Event / Reading for a Device Configuration Properties Please refer to the general Common Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. Writable Property Default Value Description Writable properties can be set and will dynamically take effect without service restart PersistData true When true, core data persists all sensor data sent to it in its associated database Databases/Databases.Primary Property Default Value Description Name 'coredata' Document store or database name MessageQueue Property Default Value Description Entries in the MessageQueue section of the configuration allow for publication of events to a message bus Protocol redis Indicates the connectivity protocol to use for the bus. Host localhost Indicates the host of the messaging broker, if applicable. Port 6379 Indicates the port to use when publishing a message. Type redis Indicates the type of messaging library to use. Currently this is Redis by default. Refer to the go-mod-messaging module for more information. 
AuthMode usernamepassword Auth Mode to connect to EdgeX MessageBus. SecretName redisdb Name of the secret in the Secret Store to find the MessageBus credentials. PublishTopicPrefix edgex/events/core Indicates the base topic to which messages should be published. / / will be added to this Publish Topic prefix SubscribeEnabled true Indicates whether to subscribe to the EdgeX MessageBus or not. SubscribeTopic edgex/events/device/# Topic to use when subscribing to the EdgeX MessageBus MessageQueue.Optional Property Default Value Description Configuration and connection parameters for use with MQTT message bus - in place of Redis ClientId 'core-data' Client ID used to put messages on the bus Qos '0' Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive '10' Period of time in seconds to keep the connection alive when no messages are flowing (must be 2 or greater) Retained false Whether to retain messages AutoReconnect true Whether to reconnect to the message bus on connection loss ConnectTimeout 5 Message bus connection timeout in seconds SkipCertVerify false TLS configuration - Only used if Cert/Key file or Cert/Key PEMblock are specified V2 Configuration Migration Guide Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service . 
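Collected into TOML form (layout assumed from EdgeX configuration conventions; the values are the defaults from the tables above), the MessageQueue section looks roughly like:

```toml
[MessageQueue]
Protocol = "redis"
Host = "localhost"
Port = 6379
Type = "redis"
AuthMode = "usernamepassword"
SecretName = "redisdb"
PublishTopicPrefix = "edgex/events/core"   # further topic levels are appended to this prefix
SubscribeEnabled = true
SubscribeTopic = "edgex/events/device/#"
  [MessageQueue.Optional]
  # Only used when MQTT is selected in place of Redis
  ClientId = "core-data"
  Qos = "0"
  KeepAlive = "10"
  Retained = "false"
  AutoReconnect = "true"
  ConnectTimeout = "5"
  SkipCertVerify = "false"
```

The Optional block is ignored for the default Redis Pub/Sub bus and only takes effect with an MQTT message bus implementation.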
Writable The following settings have been removed from the Writable section DeviceUpdateLastConnected MetaDataCheck ServiceUpdateLastConnected ValidateCheck ChecksumAlgo MessageQueue The following MessageQueue setting values have changed: Host - Override value for docker is now edgex-redis Protocol = \"redis\" Port = 6379 Type = 'redis' The following setting has been removed from the MessageQueue section Topic The following new settings have been added to MessageQueue section PublishTopicPrefix = 'edgex/events/core' SubscribeTopic = 'edgex/events/device/#' AuthMode = 'usernamepassword' SecretName = 'redisdb' SubscribeEnabled = true MessageQueue.Optional The following settings have been removed from MessageQueue.Optional section for when using MQTT for the MessageBus. Secure MessageBus using MQTT is not yet supported; these credentials will be retrieved from the Secret Store in a future release. Username Password API Reference Core Data API Reference","title":"Core Data"},{"location":"microservices/core/data/Ch-CoreData/#core-data","text":"","title":"Core Data"},{"location":"microservices/core/data/Ch-CoreData/#introduction","text":"The core data micro service provides centralized persistence for data collected by devices . Device services that collect sensor data call on the core data service to store the sensor data on the edge system (such as in a gateway ) until the data gets moved \"north\" and then exported to Enterprise and cloud systems. Core data persists the data in a local database. Redis is used by default, but a database abstraction layer allows for other databases to be used. Other services and systems, both within EdgeX Foundry and outside of EdgeX Foundry, access the sensor data through the core data service. Core data could also provide a degree of security and protection of the data collected while the data is at the edge. EdgeX 2.0 As of EdgeX 2.0 (Ireland), core data is completely optional. 
Device services can send data via message bus directly to application services. If local persistence is not needed, the service can be removed. If persistence is needed, sensor data can be sent via message bus to core data (the new default means to communicate with core data) or can be sent via REST to core data (the legacy way to send data to core data). See below for more details. Sensor data can be sent to core data via two different means: Services (like device services) and other systems can put sensor data on a message bus topic and core data can be configured to subscribe to that topic. This is the default means of getting data to core data. Any service (like an application service or rules engine service) or 3rd party system could also subscribe to the same topic. If the sensor data does not need to be persisted locally, core data does not have to subscribe to the message bus topic - making core data completely optional. By default, the message bus is implemented using Redis Pub/Sub. MQTT can be used as an alternate message bus implementation. Services and systems can call on the core data REST API to send data to core data and have the data put in local storage. Prior to EdgeX 2.0, this was the default and only means to send data to core data. Today, it is an alternate means to send data to core data. When data is sent via REST to core data, core data re-publishes the data on to message bus so that other services can subscribe to it. Core data moves data to the application service (and edge analytics ) via Redis Pub/Sub by default. MQTT or ZeroMQ can alternately be used. Use of MQTT requires the installation of a broker such as ActiveMQ. A messaging infrastructure abstraction is in place that allows for other message bus (e.g., AMQP) implementations to be created and used.","title":"Introduction"},{"location":"microservices/core/data/Ch-CoreData/#core-data-streaming","text":"By default, core data persists all data sent to it by services and other systems. 
However, when the data is too sensitive to keep at the edge, or there is no use for the data at the edge by other local services (e.g., by an analytics micro service), the data can be \"streamed\" through core data without persisting it. A configuration change to core data (Writable.PersistData=false) has core data send data to the application services without persisting the data. This option has the advantage of reducing latency through this layer and storage needs at the network edge. But the cost is having no historical data to use for analytics that need to look back in time to make a decision. Note When persistence is turned off via the PersistData flag, it is off for all devices. At this time, you cannot specify which device data is persisted and which device data is not. Application services do allow filtering of device data before it is exported or sent to another service like the rules engine, but this is not based on whether the data is persisted or not. EdgeX 2.0 As mentioned, as of EdgeX 2.0 (Ireland), core data is completely optional. Therefore, if persistence is not needed, and if sensor data is sent from device services directly to application services via message bus, core data can be removed. In addition to reducing resource utilization (memory and CPU for core data), it also removes latency of throughput as the core data layer can be completely bypassed. However, if device services are still using REST to send data into the system, core data is the central receiving endpoint and must remain in place; even if persistence is turned off.","title":"Core Data \"Streaming\""},{"location":"microservices/core/data/Ch-CoreData/#events-and-readings","text":"Data collected from sensors is marshalled into EdgeX event and reading objects (delivered as JSON objects or a binary object encoded as CBOR to core data). An event represents a collection of one or more sensor readings. 
Some sensors or devices are only providing a single value \u2013 a single reading - at a time. Other sensors spew multiple values whenever they are read. An event must have at least one reading. Events are associated to a sensor or device \u2013 the \u201cthing\u201d that sensed the environment and produced the readings. Readings represent a sensing on the part of a device or sensor. Readings only exist as part of (are owned by) an event. Readings are essentially a simple key/value pair of what was sensed (the key - called a ResourceName ) and the value sensed (the value). A reading may include other bits of information to provide more context (for example, the data type of the value) for the users of that data. Consumers of the reading data could include things like user interfaces, data visualization systems and analytics tools. In the diagram below, an example event/reading collection is depicted. The event coming from the \u201cmotor123\u201d device has two readings (or sensed values). The first reading indicates that the motor123 device reported the pressure of the motor was 1300 (the unit of measure might be something like PSI). EdgeX 2.0 In EdgeX 2.0, Value Descriptors have been removed. The ResourceName in a reading provides an indication of the data read. The other properties that were in the Value Descriptor (min, max, default value, unit of measure, etc.) can all be obtained from the Resource (in core metadata's resource properties associated to each Resource which are associated to a device profile) by ResourceName. The ValueType property is also provided in the Reading so that the data type of the value is immediately available without having to do a lookup in core metadata. The value type property (shown as type above) on the reading lets the consumer of the information know that the value is an integer, base 64. 
The second reading indicates that the motor123 device also reported the temperature of the motor was 120 at the same time it reported the pressure (perhaps in degrees Fahrenheit).","title":"Events and Readings"},{"location":"microservices/core/data/Ch-CoreData/#data-model","text":"The following diagram shows the Data Model for core data. Device services send Event objects containing a collection of Readings to core data when a device captures a sensor reading. EdgeX 2.1 v2.1 supports a new value type, Object , to present the structural reading value instead of encoding it as a string. Similar to the BinaryValue , there is a new field ObjectValue in the Reading. If the ValueType is Object , the read value will be put into the ObjectValue field in JSON object data type. EdgeX 2.0 Note that ValueDescriptor has been removed from this model as Value Descriptors have been removed in EdgeX 2 (see note above for more details).","title":"Data Model"},{"location":"microservices/core/data/Ch-CoreData/#data-dictionary","text":"Event Property Description Event represents a single measurable event read from a device. Event has a one-to-many relationship with Reading. ID Uniquely identifies an event, for example a UUID. DeviceName DeviceName identifies the source of the event; the device's name. ProfileName Identifies the name of the device profile associated with the device and corresponding resources collected in the readings of the event. SourceName Name of the source request from the device profile (ResourceName or Command) associated to the reading. Origin A timestamp indicating when the original event/reading took place. Most of the time, this indicates when the device service collected/created the event. Tags An arbitrary set of labels or additional information associated with the event. It can be used, for example, to add location information (like GPS coordinates) to the event. Readings A collection (one to many) of associated readings of a given event. 
Reading Property Description ID Uniquely identifies a reading, for example a UUID. DeviceName DeviceName identifies the source of the reading; the device's name. ProfileName Identifies the name of the device profile associated with the device and corresponding resource collected in the reading. Origin A timestamp indicating when the original event/reading took place. Most of the time, this indicates when the device service collected/created the event. ResourceName ResourceName-Value provide the key/value pair of what was sensed by a device. ResourceName specifies what was the value collected. ResourceName should match a device resource name in the device profile. Value The sensor data value ValueType The type of the sensor data - from a list of allowed value types that includes Bool, String, Uint8, Int8, ... BinaryValue Byte array of sensor data when the data captured is not structured; for example an image is captured. This information is not persisted in the Database and is expected to be empty when retrieving a Reading for the ValueType of Binary. MediaType Indicating the type of binary data when collected. ObjectValue Complex value of sensor data when the data captured is structured; for example a BACnet date object: \"date\":{ \"year\":2021, \"month\":8, \"day\":26, \"wday\":4 } . 
This is expected to be empty when the Reading for the ValueType is not Object .","title":"Data Dictionary"},{"location":"microservices/core/data/Ch-CoreData/#high-level-interaction-diagrams","text":"The two following High Level Interaction Diagrams show: How new sensor readings are collected by a device and added as event/readings to core data and the associated persistence store How a client (inside or outside of EdgeX) can query for events (in this case by device name) Core Data Add Sensor Readings Core Data Request Event / Reading for a Device","title":"High Level Interaction Diagrams"},{"location":"microservices/core/data/Ch-CoreData/#configuration-properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. Writable Property Default Value Description Writable properties can be set and will dynamically take effect without service restart PersistData true When true, core data persists all sensor data sent to it in its associated database Databases/Databases.Primary Property Default Value Description Name 'coredata' Document store or database name MessageQueue Property Default Value Description Entries in the MessageQueue section of the configuration allow for publication of events to a message bus Protocol redis Indicates the connectivity protocol to use for the bus. Host localhost Indicates the host of the messaging broker, if applicable. Port 6379 Indicates the port to use when publishing a message. Type redis Indicates the type of messaging library to use. Currently this is Redis by default. Refer to the go-mod-messaging module for more information. AuthMode usernamepassword Auth Mode to connect to EdgeX MessageBus. SecretName redisdb Name of the secret in the Secret Store to find the MessageBus credentials. 
PublishTopicPrefix edgex/events/core Indicates the base topic to which messages should be published. / / will be added to this Publish Topic prefix SubscribeEnabled true Indicates whether to subscribe to the EdgeX MessageBus or not. SubscribeTopic edgex/events/device/# Topic to use when subscribing to the EdgeX MessageBus MessageQueue.Optional Property Default Value Description Configuration and connection parameters for use with MQTT message bus - in place of Redis ClientId 'core-data' Client ID used to put messages on the bus Qos '0' Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive '10' Period of time in seconds to keep the connection alive when no messages are flowing (must be 2 or greater) Retained false Whether to retain messages AutoReconnect true Whether to reconnect to the message bus on connection loss ConnectTimeout 5 Message bus connection timeout in seconds SkipCertVerify false TLS configuration - Only used if Cert/Key file or Cert/Key PEMblock are specified","title":"Configuration Properties"},{"location":"microservices/core/data/Ch-CoreData/#v2-configuration-migration-guide","text":"Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service .","title":"V2 Configuration Migration Guide"},{"location":"microservices/core/data/Ch-CoreData/#writable","text":"The following settings have been removed from the Writable section DeviceUpdateLastConnected MetaDataCheck ServiceUpdateLastConnected ValidateCheck ChecksumAlgo","title":"Writable"},{"location":"microservices/core/data/Ch-CoreData/#messagequeue","text":"The following MessageQueue setting values have changed: Host - Override value for docker is now edgex-redis Protocol = \"redis\" Port = 6379 Type = 'redis' The following setting has been removed from the MessageQueue section Topic The following new settings have been added to MessageQueue section PublishTopicPrefix = 'edgex/events/core' 
SubscribeTopic = 'edgex/events/device/#' AuthMode = 'usernamepassword' SecretName = 'redisdb' PublishTopicPrefix = 'edgex/events/core' SubscribeEnabled = true","title":"MessageQueue"},{"location":"microservices/core/data/Ch-CoreData/#messagequeueoptional","text":"The following settings have been removed from MessageQueue.Optional section for when using MQTT for the MessageBus. Secure MessageBus using MQTT is not yet supported; credentials will be retrieved from the Secret Store in a future release. Username Password","title":"MessageQueue.Optional"},{"location":"microservices/core/data/Ch-CoreData/#api-reference","text":"Core Data API Reference","title":"API Reference"},{"location":"microservices/core/database/Ch-Redis/","text":"Redis Database EdgeX Foundry's reference implementation database (for sensor data, metadata and all things that need to be persisted in a database) is Redis. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory. Memory Utilization Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see the list below) and those strategies have continued to evolve . When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual). http://antirez.com/news/92 https://redislabs.com/blog/redis-ram-ramifications-part-i/ https://redis.io/topics/memory-optimization http://antirez.com/news/128 On-disk Persistence Redis supports a number of different levels of on-disk persistence. 
By default, snapshots of the data are persisted every 60 seconds or after 1000 keys have changed. Beyond increasing the frequency of snapshots, append only files that log every database write are also supported. See https://redis.io/topics/persistence for a detailed discussion on how to balance the options. Redis supports setting a memory usage limit and a policy on what to do if memory cannot be allocated for a write. See the MEMORY MANAGEMENT section of https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf for the configuration options. Since EdgeX and Redis do not currently communicate on data evictions, you will need to use the EdgeX scheduler to control memory usage rather than a Redis eviction policy.","title":"Redis Database"},{"location":"microservices/core/database/Ch-Redis/#redis-database","text":"EdgeX Foundry's reference implementation database (for sensor data, metadata and all things that need to be persisted in a database) is Redis. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory.","title":"Redis Database"},{"location":"microservices/core/database/Ch-Redis/#memory-utilization","text":"Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see the list below) and those strategies have continued to evolve . When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual). 
http://antirez.com/news/92 https://redislabs.com/blog/redis-ram-ramifications-part-i/ https://redis.io/topics/memory-optimization http://antirez.com/news/128","title":"Memory Utilization"},{"location":"microservices/core/database/Ch-Redis/#on-disk-persistence","text":"Redis supports a number of different levels of on-disk persistence. By default, snapshots of the data are persisted every 60 seconds or after 1000 keys have changed. Beyond increasing the frequency of snapshots, append only files that log every database write are also supported. See https://redis.io/topics/persistence for a detailed discussion on how to balance the options. Redis supports setting a memory usage limit and a policy on what to do if memory cannot be allocated for a write. See the MEMORY MANAGEMENT section of https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf for the configuration options. Since EdgeX and Redis do not currently communicate on data evictions, you will need to use the EdgeX scheduler to control memory usage rather than a Redis eviction policy.","title":"On-disk Persistence"},{"location":"microservices/core/metadata/Ch-Metadata/","text":"Metadata Introduction The core metadata micro service holds the knowledge about the devices and sensors, and how to communicate with them, that is used by the other services, such as core data, core command, and so forth. 
Specifically, metadata has the following abilities: Manages information about the devices connected to, and operated by, EdgeX Foundry Knows the type, and organization of data reported by the devices Knows how to command the devices Although metadata has the knowledge, it does not do the following activities: It is not responsible for actual data collection from devices, which is performed by device services and core data It is not responsible for issuing commands to the devices, which is performed by core command and device services Data Models To understand metadata, it's important to understand the EdgeX data objects it manages. Metadata stores its knowledge in a local persistence database. Redis is used by default, but a database abstraction layer allows for other databases to be used. Device Profile Device profiles define general characteristics about devices, the data they provide, and how to command them. Think of a device profile as a template of a type or classification of device. For example, a device profile for BACnet thermostats provides general characteristics for the types of data a BACnet thermostat sends, such as current temperature and humidity level. It also defines which types of commands or actions EdgeX can send to the BACnet thermostat. Examples might include actions that set the cooling or heating point. Device profiles are typically specified in a YAML file and uploaded to EdgeX. More details are provided below. EdgeX 2.0 The device profile was greatly simplified in EdgeX 2.0 (Ireland). There are now just two sections of the document (deviceResources and deviceCommands) versus the three (deviceResources, deviceCommands and coreCommands) of EdgeX 1.x profiles. Device resources and device commands are made available through the core command service when the isHidden property on either is set to false. This makes a core command section no longer necessary in EdgeX 2. 
However, this does mean that EdgeX 2 profiles are not backward compatible and EdgeX 1.x profiles must be migrated. See Device Service V2 Migration Guide for complete details. Device Profile Details Metadata device profile object model General Properties A device profile has a number of high level properties to give the profile context and identification. Its name field is required and must be unique in an EdgeX deployment. Other fields are optional - they are not used by device services but may be populated for informational purposes: Description Manufacturer Model Labels Here is an example general information section for a sample KMC 9001 BACnet thermostat device profile provided with the BACnet device service (you can find the profile in Github) . Only the name is required in this section of the device profile. The name of the device profile must be unique in any EdgeX deployment. The manufacturer, model and labels are all optional bits of information that allow better queries of the device profiles in the system. name : \"BAC-9001\" manufacturer : \"KMC\" model : \"BAC-9001\" labels : - \"B-AAC\" description : \"KMC BAC-9001 BACnet thermostat\" Labels provide a way to tag, organize or categorize the various profiles. They serve no real purpose inside of EdgeX. Device Resources A device resource (in the deviceResources section of the YAML file) specifies a sensor value within a device that may be read from or written to either individually or as part of a device command (see below). Think of a device resource as a specific value that can be obtained from the underlying device or a value that can be set to the underlying device. In a thermostat, a device resource may be a temperature or humidity (values sensed from the devices) or cooling point or heating point (values that can be set/actuated to allow the thermostat to determine when associated heat/cooling systems are turned on or off). 
A device resource has a name for identification and a description for informational purposes. The properties section of a device resource has also been greatly simplified. See details below. Back to the BACnet example, here are two device resources. One will be used to get (read) the current temperature and the other to set (write or actuate) the active cooling set point. The device resource name must be provided and it must also be unique in any EdgeX deployment. name : Temperature description : \"Get the current temperature\" isHidden : false name : ActiveCoolingSetpoint description : \"The active cooling set point\" isHidden : false EdgeX 2.0 isHidden is new in EdgeX 2.0. While made explicit in this example, it is false by default when not specified. isHidden indicates whether to expose the device resource to the core command service. The device service allows access to the device resources via REST endpoint. Values specified in the device resources section of the device profile can be accessed through the following URL patterns: http:// : /api/v2/device/name/ / Attributes The attributes associated to a device resource are the specific parameters required by the device service to access the particular value. In other words, attributes are \u201cinward facing\u201d and are used by the device service to determine how to speak to the device to either read or write (get or set) some of its values. Attributes are detailed protocol and/or device specific information that informs the device service how to communicate with the device to get (or set) values of interest. Returning to the BACnet device profile example, below are the complete device resource sections for Temperature and ActiveCoolingSetPoint \u2013 inclusive of the attributes \u2013 for the example device. 
- name : Temperature description : \"Get the current temperature\" isHidden : false attributes : { type : \"analogValue\" , instance : \"1\" , property : \"presentValue\" , index : \"none\" } - name : ActiveCoolingSetpoint description : \"The active cooling set point\" isHidden : false attributes : { type : \"analogValue\" , instance : \"3\" , property : \"presentValue\" , index : \"none\" } Properties The properties of a device resource describe the value obtained or set on the device. The properties can optionally inform the device service of some simple processing to be performed on the value. Again, using the BACnet profile as an example, here are the properties associated to the thermostat's temperature device resource. name : Temperature description : \"Get the current temperature\" attributes : { type : \"analogValue\" , instance : \"1\" , property : \"presentValue\" , index : \"none\" } properties : valueType : \"Float32\" readWrite : \"R\" units : \"Degrees Fahrenheit\" The 'valueType' property of properties gives more detail about the value collected or set. In this case giving the details of the temperature value to be set. The value provides details such as the type of the data collected or set, whether the value can be read, written or both. The following fields are available in the value property: valueType - Required. The data type of the value. Supported types are Bool, Int8 - Int64, Uint8 - Uint64, Float32, Float64, String, Binary, Object and arrays of the primitive types (ints, floats, bool). Arrays are specified as e.g. Float32Array, BoolArray etc. readWrite - R, RW, or W indicating whether the value is readable or writable. units - gives more detail about the unit of measure associated with the value. In this case, the temperature unit of measure is in degrees Fahrenheit. min - minimum allowed value max - maximum allowed value defaultValue - a value used for PUT requests which do not specify one. 
base - a value to be raised to the power of the raw reading before it is returned. scale - a factor by which to multiply a reading before it is returned. offset - a value to be added to a reading before it is returned. mask - a binary mask which will be applied to an integer reading. shift - a number of bits by which an integer reading will be shifted right. The processing defined by base, scale, offset, mask and shift is applied in that order. This is done within the SDK. A reverse transformation is applied by the SDK to incoming data on set operations (NB mask transforms on set are NYI) Device Commands Device commands (in the deviceCommands section of the YAML file) define access to reads and writes for multiple simultaneous device resources. Device commands are optional. Each named device command should contain a number of get and/or set resource operations, describing the read or write respectively. Device commands may be useful when readings are logically related, for example with a 3-axis accelerometer it is helpful to read all axes (X, Y and Z) together. A device command consists of the following properties: name - the name of the command readWrite - R, RW, or W indicating whether the operation is readable or writable. isHidden - indicates whether to expose the device command to the core command service (optional and false by default) resourceOperations - the list of device resource operations included in the command. Each resourceOperation will specify: the deviceResource - the name of the device resource defaultValue - optional, a value to return when the operation does not provide one parameter - optional, a value that will be used if a PUT request does not specify one. mappings - optional, allows readings of String type to be re-mapped. The device commands can also be accessed through a device service\u2019s REST API in a similar manner as described for device resources. 
http:// : /api/v2/device/name/ / If a device command and device resource have the same name, it will be the device command which is available. Core Commands EdgeX 2.0 Core commands have been removed in EdgeX 2. Use isHidden with a value of false to surface device resources and device commands to the command service. Device resources or device commands that are not hidden are seen and available via the EdgeX core command service. Other services (such as the rules engine) or external clients of EdgeX, should make requests of device services through the core command service, and when they do, they are calling on the device service\u2019s unhidden device commands or device resources. Direct access to the device commands or device resources of a device service is frowned upon. Commands, made available through the EdgeX command service, allow the EdgeX adopter to add additional security or controls on who/what/when things are triggered and called on an actual device. Device Data about actual devices is another type of information that the metadata micro service stores and manages. Each device managed by EdgeX Foundry registers with metadata (via its owning device service). Each device must have a unique name associated to it. Metadata stores information about a device (such as its address) against the name in its database. Each device is also associated to a device profile. This association enables metadata to apply knowledge provided by the device profile to each device. For example, a thermostat profile would say that it reports temperature values in Celsius. Associating a particular thermostat (the thermostat in the lobby for example) to the thermostat profile allows metadata to know that the lobby thermostat reports temperature value in Celsius. Device Service Metadata also stores and manages information about the device services. Device services serve as EdgeX's interfaces to the actual devices and sensors. 
Device services are other micro services that communicate with devices via the protocol of that device. For example, a Modbus device service facilitates communications among all types of Modbus devices. Examples of Modbus devices include motor controllers, proximity sensors, thermostats, and power meters. Device services simplify communications with the device for the rest of EdgeX. When a device service starts, it registers itself with metadata. When EdgeX provisions a new device, the device gets associated to its owning device service. That association is also stored in metadata. Metadata Device, Device Service and Device Profile Model Metadata's Device Profile, Device and Device Service object model and the association between them Provision Watcher Device services may contain logic to automatically provision new devices. This can be done statically or dynamically. In static device configuration (also known as static provisioning) the device service connects to and establishes a new device that it manages in EdgeX (specifically metadata) from configuration the device service is provided. For example, a device service may be provided with the specific IP address and additional device details for a device (or devices) that it is to onboard at startup. In static provisioning, it is assumed that the device will be there and that it will be available at the address or place specified through configuration. The devices and the connection information for those devices are known at the point that the device service starts. In dynamic discovery (also known as automatic provisioning), a device service is given some general information about where to look and general parameters for a device (or devices). For example, the device service may be given a range of BLE address space and told to look for devices of a certain nature in this range. However, the device service does not know that the device is physically there \u2013 and the device may not be there at start up. 
It must continually scan during its operations (typically on some sort of schedule) for new devices within the guides of the location and device parameters provided by configuration. Not all device services support dynamic discovery. If it does support dynamic discovery, the configuration about what and where to look (in other words, where to scan) for new devices is specified by a provision watcher. A provision watcher is specific configuration information provided to a device service (usually at startup) that gets stored in metadata. In addition to providing details about what devices to look for during a scan, a provision watcher may also contain \u201cblocking\u201d indicators, which define parameters about devices that are not to be automatically provisioned. This allows the scope of a device scan to be narrowed or specific devices to be avoided. Metadata's provision watcher object model Data Dictionary BaseAddress Property Description The metadata base structure for common information needed to make a request to an EdgeX Foundry target. Type REST or MQTT Host Target's address string - such as an IP address Port Port for the target address RESTAddress Property Description Structure extending BaseAddress, used to make a request of EdgeX Foundry targets via REST. Path URI path beyond the host and port HTTPMethod Method for connecting (i.e. POST) MQTTPubAddress Property Description Structure extending BaseAddress, used to make a request of EdgeX Foundry targets via MQTT. 
Publisher Publisher name User User id for authentication Password Password of the user for authentication Topic Topic for message bus QoS Quality of service level for message publishing; value 0, 1, or 2 KeepAlive Maximum time interval in seconds with no comms before closing Retained Flag to have the broker store the last rec'd message for future subscribers AutoReconnect Indication to reconnect on failed connection ConnectTimeout Maximum time interval the client will wait for the connection to the MQTT server to be established AutoEvent Property Description AutoEvent supports auto-generated events sourced from a device service Interval How often the specific resource needs to be polled. OnChange indicates whether the device service will generate an event only SourceName the name of the resource in the device profile which describes the event to generate Device Property Description The object that contains information about the state, position, reachability, and methods of interfacing with a Device; represents a registered device participating within the EdgeX Foundry ecosystem Id uniquely identifies the device, a UUID for example Description Name Name for identifying a device AdminState Admin state (locked/unlocked) OperatingState Protocols A map of supported protocols for the given device LastConnected Time (milliseconds) that the device last provided any feedback or responded to any request LastReported Labels Other labels applied to the device to help with searching Location Device service specific location (interface{} is an empty interface so it can be anything) ServiceName Associated Device Service - One per device ProfileName AutoEvents A list of auto-generated events coming from the device DeviceProfile Property Description represents the attributes and operational capabilities of a device. It is a template for which there can be multiple matching devices within a given system. 
Id uniquely identifies the device, a UUID for example Description Name Name for identifying a device Manufacturer Manufacturer of the device Model Model of the device Labels Labels used to search for groups of profiles DeviceResources deviceResource collection DeviceCommands collection of deviceCommand DeviceResource Property Description The atomic description of a particular protocol level interface for a class of Devices; represents a value on a device that can be read or written Description Name Tag Properties list of associated properties Attributes list of associated attributes DeviceService Property Description represents a service that is responsible for proxying connectivity between a set of devices and the EdgeX Foundry core services; the current state and reachability information for a registered device service Id uniquely identifies the device service, a UUID for example Name LastConnected LastReported Time (milliseconds) that the device service reported data to the core microservice Labels BaseAddress address (MQTT topic, HTTP address, serial bus, etc.) for reaching the service AdminState ResourceProperties Property Description The transformation and constraint properties for a device resource. ValueType Type of the value ReadWrite Read/Write Permissions set for this property Minimum Minimum value that can be get/set from this property Maximum Maximum value that can be get/set from this property DefaultValue Default value set to this property if no argument is passed Mask Mask to be applied prior to get/set of property Shift Shift to be applied after masking, prior to get/set of property Scale Multiplicative factor to be applied after shifting, prior to get/set of property Offset Additive factor to be applied after multiplying, prior to get/set of property Base Base for property to be applied to, leave 0 for no power operation (i.e. base ^ property: 2 ^ 10) Assertion Required value of the property, set for checking error state. 
Failing an assertion condition will mark the device with an error state MediaType ProvisionWatcher Property Description The metadata used by a Service for automatically provisioning matching Devices. Id Name unique name and identifier of the provision watcher Labels Identifiers set of key-value pairs that identify property (MAC, HTTP,...) and value to watch for (00-05-1B-A1-99-99, 10.0.0.1,...) BlockingIdentifiers set of key-value pairs that identify devices which will not be added despite matching on Identifiers ProfileName Name of the device profile that should be applied to the devices available at the identifier addresses ServiceName Name of the device service that new devices will be associated to AdminState administrative state for new devices - either unlocked or locked AutoEvents Associated auto events to this watcher High Level Interaction Diagrams Sequence diagrams for some of the more critical or complex events regarding metadata. These High Level Interaction Diagrams show: Adding a new device profile (Step 1 to provisioning a new device) via metadata Adding a new device via metadata (Step 2 to provisioning a new device) EdgeX Foundry device service startup (and its interactions with metadata) Add a New Device Profile (Step 1 to provisioning a new device) Add a New Device (Step 2 to provisioning a new device) What happens on a device service startup? Configuration Properties Please refer to the general Common Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. 
Databases/Databases.Primary Property Default Value Description Properties used by the service to access the database Name 'metadata' Document store or database name Notifications Property Default Value Description Configuration to post device changes through the notification service PostDeviceChanges false Whether to send out notification when a device has been added, changed, or removed Content 'Metadata notice: ' Start of the notification message when sending notification messages on device change Sender 'core-metadata' Sender of any notification messages sent on device change Description 'Metadata change notice' Message description of any notification messages sent on device change Label 'metadata' Label to put on messages for any notification messages sent on device change V2 Configuration Migration Guide Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service . Writable The EnableValueDescriptorManagement setting has been removed API Reference Core Metadata API Reference","title":"Metadata"},{"location":"microservices/core/metadata/Ch-Metadata/#metadata","text":"","title":"Metadata"},{"location":"microservices/core/metadata/Ch-Metadata/#introduction","text":"The core metadata micro service holds the knowledge about the devices and sensors, and how to communicate with them, that is used by the other services, such as core data, core command, and so forth. 
Specifically, metadata has the following abilities: Manages information about the devices connected to, and operated by, EdgeX Foundry Knows the type, and organization of data reported by the devices Knows how to command the devices Although metadata has the knowledge, it does not do the following activities: It is not responsible for actual data collection from devices, which is performed by device services and core data It is not responsible for issuing commands to the devices, which is performed by core command and device services","title":"Introduction"},{"location":"microservices/core/metadata/Ch-Metadata/#data-models","text":"To understand metadata, it's important to understand the EdgeX data objects it manages. Metadata stores its knowledge in a local persistence database. Redis is used by default, but a database abstraction layer allows for other databases to be used.","title":"Data Models"},{"location":"microservices/core/metadata/Ch-Metadata/#device-profile","text":"Device profiles define general characteristics about devices, the data they provide, and how to command them. Think of a device profile as a template of a type or classification of device. For example, a device profile for BACnet thermostats provides general characteristics for the types of data a BACnet thermostat sends, such as current temperature and humidity level. It also defines which types of commands or actions EdgeX can send to the BACnet thermostat. Examples might include actions that set the cooling or heating point. Device profiles are typically specified in a YAML file and uploaded to EdgeX. More details are provided below. EdgeX 2.0 The device profile was greatly simplified in EdgeX 2.0 (Ireland). There are now just two sections of the document (deviceResources and deviceCommands) versus the three (deviceResources, deviceCommands and coreCommands) of EdgeX 1.x profiles. 
Device resources and device commands are made available through the core command service when the isHidden property on either is set to false. This makes a core command section no longer necessary in EdgeX 2. However, this does mean that EdgeX 2 profiles are not backward compatible and EdgeX 1.x profiles must be migrated. See Device Service V2 Migration Guide for complete details.","title":"Device Profile"},{"location":"microservices/core/metadata/Ch-Metadata/#device-profile-details","text":"Metadata device profile object model General Properties A device profile has a number of high level properties to give the profile context and identification. Its name field is required and must be unique in an EdgeX deployment. Other fields are optional - they are not used by device services but may be populated for informational purposes: Description Manufacturer Model Labels Here is an example general information section for a sample KMC 9001 BACnet thermostat device profile provided with the BACnet device service (you can find the profile in Github) . Only the name is required in this section of the device profile. The name of the device profile must be unique in any EdgeX deployment. The manufacturer, model and labels are all optional bits of information that allow better queries of the device profiles in the system. name : \"BAC-9001\" manufacturer : \"KMC\" model : \"BAC-9001\" labels : - \"B-AAC\" description : \"KMC BAC-9001 BACnet thermostat\" Labels provide a way to tag, organize or categorize the various profiles. They serve no real purpose inside of EdgeX. Device Resources A device resource (in the deviceResources section of the YAML file) specifies a sensor value within a device that may be read from or written to either individually or as part of a device command (see below). Think of a device resource as a specific value that can be obtained from the underlying device or a value that can be set to the underlying device. 
In a thermostat, a device resource may be a temperature or humidity (values sensed from the devices) or cooling point or heating point (values that can be set/actuated to allow the thermostat to determine when associated heat/cooling systems are turned on or off). A device resource has a name for identification and a description for informational purposes. The properties section of a device resource has also been greatly simplified. See details below. Back to the BACnet example, here are two device resources. One will be used to get (read) the current temperature and the other to set (write or actuate) the active cooling set point. The device resource name must be provided and it must also be unique in any EdgeX deployment. name : Temperature description : \"Get the current temperature\" isHidden : false name : ActiveCoolingSetpoint description : \"The active cooling set point\" isHidden : false EdgeX 2.0 isHidden is new in EdgeX 2.0. While made explicit in this example, it is false by default when not specified. isHidden indicates whether to expose the device resource to the core command service. The device service allows access to the device resources via REST endpoint. Values specified in the device resources section of the device profile can be accessed through the following URL patterns: http:// : /api/v2/device/name/ / Attributes The attributes associated to a device resource are the specific parameters required by the device service to access the particular value. In other words, attributes are \u201cinward facing\u201d and are used by the device service to determine how to speak to the device to either read or write (get or set) some of its values. Attributes are detailed protocol and/or device specific information that informs the device service how to communicate with the device to get (or set) values of interest. 
Returning to the BACnet device profile example, below are the complete device resource sections for Temperature and ActiveCoolingSetPoint \u2013 inclusive of the attributes \u2013 for the example device. - name : Temperature description : \"Get the current temperature\" isHidden : false attributes : { type : \"analogValue\" , instance : \"1\" , property : \"presentValue\" , index : \"none\" } - name : ActiveCoolingSetpoint description : \"The active cooling set point\" isHidden : false attributes : { type : \"analogValue\" , instance : \"3\" , property : \"presentValue\" , index : \"none\" } Properties The properties of a device resource describe the value obtained or set on the device. The properties can optionally inform the device service of some simple processing to be performed on the value. Again, using the BACnet profile as an example, here are the properties associated to the thermostat's temperature device resource. name : Temperature description : \"Get the current temperature\" attributes : { type : \"analogValue\" , instance : \"1\" , property : \"presentValue\" , index : \"none\" } properties : valueType : \"Float32\" readWrite : \"R\" units : \"Degrees Fahrenheit\" The 'valueType' property of properties gives more detail about the value collected or set. In this case giving the details of the temperature value to be set. The value provides details such as the type of the data collected or set, whether the value can be read, written or both. The following fields are available in the value property: valueType - Required. The data type of the value. Supported types are Bool, Int8 - Int64, Uint8 - Uint64, Float32, Float64, String, Binary, Object and arrays of the primitive types (ints, floats, bool). Arrays are specified as eg. Float32Array, BoolArray etc. readWrite - R, RW, or W indicating whether the value is readable or writable. units - gives more detail about the unit of measure associated with the value. 
In this case, the temperature unit of measure is in degrees Fahrenheit. min - minimum allowed value max - maximum allowed value defaultValue - a value used for PUT requests which do not specify one. base - a value to be raised to the power of the raw reading before it is returned. scale - a factor by which to multiply a reading before it is returned. offset - a value to be added to a reading before it is returned. mask - a binary mask which will be applied to an integer reading. shift - a number of bits by which an integer reading will be shifted right. The processing defined by base, scale, offset, mask and shift is applied in that order. This is done within the SDK. A reverse transformation is applied by the SDK to incoming data on set operations (note: mask transforms on set operations are not yet implemented). Device Commands Device commands (in the deviceCommands section of the YAML file) define access to reads and writes for multiple simultaneous device resources. Device commands are optional. Each named device command should contain a number of get and/or set resource operations, describing the read or write respectively. Device commands may be useful when readings are logically related, for example with a 3-axis accelerometer it is helpful to read all axes (X, Y and Z) together. A device command consists of the following properties: name - the name of the command readWrite - R, RW, or W indicating whether the operation is readable or writable. isHidden - indicates whether to expose the device command to the core command service (optional and false by default) resourceOperations - the list of device resource operations included in the command. Each resourceOperation will specify: the deviceResource - the name of the device resource defaultValue - optional, a value to return when the operation does not provide one parameter - optional, a value that will be used if a PUT request does not specify one. mappings - optional, allows readings of String type to be re-mapped. 
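The base, scale, offset, mask and shift processing described above can be sketched in code. This is an illustrative reimplementation of the documented order for a floating point reading (mask and shift apply only to integer readings and are omitted); the function name is hypothetical and this is not the SDK's actual code.

```go
package main

import (
	"fmt"
	"math"
)

// transformReading applies the documented get-transform order to a raw
// floating point reading: base, then scale, then offset. A sketch only,
// not the actual device SDK implementation.
func transformReading(raw, base, scale, offset float64) float64 {
	v := raw
	if base != 0 {
		v = math.Pow(base, v) // base raised to the power of the raw reading
	}
	v *= scale // multiply by scale
	v += offset // add offset
	return v
}

func main() {
	// Hypothetical example: a raw Celsius reading published in Fahrenheit
	// via scale 1.8 and offset 32 (no base transform).
	fmt.Println(transformReading(20, 0, 1.8, 32)) // prints 68
}
```

For a set operation the SDK applies the reverse transformation, so a caller writing 68 degrees Fahrenheit to such a resource would have 20 written to the device.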
The device commands can also be accessed through a device service\u2019s REST API in a similar manner as described for device resources. http://<device-service>:<port>/api/v2/device/name/<device-name>/<device-command> If a device command and device resource have the same name, it will be the device command which is available. Core Commands EdgeX 2.0 Core commands have been removed in EdgeX 2. Use isHidden with a value of false to expose device resources and device commands to the command service. Device resources or device commands that are not hidden are seen and available via the EdgeX core command service. Other services (such as the rules engine) or external clients of EdgeX should make requests of device services through the core command service, and when they do, they are calling on the device service\u2019s unhidden device commands or device resources. Direct access to the device commands or device resources of a device service is frowned upon. Commands, made available through the EdgeX command service, allow the EdgeX adopter to add additional security or controls on who/what/when things are triggered and called on an actual device.","title":"Device Profile Details"},{"location":"microservices/core/metadata/Ch-Metadata/#device","text":"Data about actual devices is another type of information that the metadata micro service stores and manages. Each device managed by EdgeX Foundry registers with metadata (via its owning device service). Each device must have a unique name associated to it. Metadata stores information about a device (such as its address) against the name in its database. Each device is also associated to a device profile. This association enables metadata to apply knowledge provided by the device profile to each device. For example, a thermostat profile would say that it reports temperature values in Celsius. 
Associating a particular thermostat (the thermostat in the lobby for example) to the thermostat profile allows metadata to know that the lobby thermostat reports temperature values in Celsius.","title":"Device"},{"location":"microservices/core/metadata/Ch-Metadata/#device-service","text":"Metadata also stores and manages information about the device services. Device services serve as EdgeX's interfaces to the actual devices and sensors. Device services are other micro services that communicate with devices via the protocol of that device. For example, a Modbus device service facilitates communications among all types of Modbus devices. Examples of Modbus devices include motor controllers, proximity sensors, thermostats, and power meters. Device services simplify communications with the device for the rest of EdgeX. When a device service starts, it registers itself with metadata. When EdgeX provisions a new device, the device gets associated to its owning device service. That association is also stored in metadata. Metadata Device, Device Service and Device Profile Model Metadata's Device Profile, Device and Device Service object model and the association between them","title":"Device Service"},{"location":"microservices/core/metadata/Ch-Metadata/#provision-watcher","text":"Device services may contain logic to automatically provision new devices. This can be done statically or dynamically. In static device configuration (also known as static provisioning) the device service connects to and establishes a new device that it manages in EdgeX (specifically metadata) from configuration the device service is provided. For example, a device service may be provided with the specific IP address and additional device details for a device (or devices) that it is to onboard at startup. In static provisioning, it is assumed that the device will be there and that it will be available at the address or place specified through configuration. 
The devices and the connection information for those devices are known at the point that the device service starts. In dynamic discovery (also known as automatic provisioning), a device service is given some general information about where to look and general parameters for a device (or devices). For example, the device service may be given a range of BLE address space and told to look for devices of a certain nature in this range. However, the device service does not know that the device is physically there \u2013 and the device may not be there at start up. It must continually scan during its operations (typically on some sort of schedule) for new devices within the guides of the location and device parameters provided by configuration. Not all device services support dynamic discovery. If it does support dynamic discovery, the configuration about what and where to look (in other words, where to scan) for new devices is specified by a provision watcher. A provision watcher is specific configuration information provided to a device service (usually at startup) that gets stored in metadata. In addition to providing details about what devices to look for during a scan, a provision watcher may also contain \u201cblocking\u201d indicators, which define parameters about devices that are not to be automatically provisioned. This allows the scope of a device scan to be narrowed or specific devices to be avoided. Metadata's provision watcher object model","title":"Provision Watcher"},{"location":"microservices/core/metadata/Ch-Metadata/#data-dictionary","text":"BaseAddress Property Description The metadata base structure for common information needed to make a request to an EdgeX Foundry target. Type REST or MQTT Host Target's address string - such as an IP address Port Port for the target address RESTAddress Property Description Structure extending BaseAddress, used to make a request of EdgeX Foundry targets via REST. 
Path URI path beyond the host and port HTTPMethod Method for connecting (i.e. POST) MQTTPubAddress Property Description Structure extending BaseAddress, used to make a request of EdgeX Foundry targets via MQTT. Publisher Publisher name User User id for authentication Password Password of the user for authentication Topic Topic for message bus QoS Quality of service level for message publishing; value 0, 1, or 2 KeepAlive Maximum time interval in seconds with no comms before closing Retained Flag to have the broker store the last rec'd message for future subscribers AutoReconnect Indication to reconnect on failed connection ConnectTimeout Maximum time interval the client will wait for the connection to the MQTT server to be established AutoEvent Property Description AutoEvent supports auto-generated events sourced from a device service Interval How often the specific resource needs to be polled. OnChange indicates whether the device service will generate an event only when the value of the resource changes SourceName the name of the resource in the device profile which describes the event to generate Device Property Description The object that contains information about the state, position, reachability, and methods of interfacing with a Device; represents a registered device participating within the EdgeX Foundry ecosystem Id uniquely identifies the device, a UUID for example Description Name Name for identifying a device AdminState Admin state (locked/unlocked) OperatingState Protocols A map of supported protocols for the given device LastConnected Time (milliseconds) that the device last provided any feedback or responded to any request LastReported Labels Other labels applied to the device to help with searching Location Device service specific location (interface{} is an empty interface so it can be anything) ServiceName Associated Device Service - One per device ProfileName AutoEvents A list of auto-generated events coming from the device DeviceProfile Property Description represents the attributes 
and operational capabilities of a device. It is a template for which there can be multiple matching devices within a given system. Id uniquely identifies the device, a UUID for example Description Name Name for identifying a device Manufacturer Manufacturer of the device Model Model of the device Labels Labels used to search for groups of profiles DeviceResources deviceResource collection DeviceCommands collection of deviceCommands DeviceResource Property Description The atomic description of a particular protocol level interface for a class of Devices; represents a value on a device that can be read or written Description Name Tag Properties list of associated properties Attributes list of associated attributes DeviceService Property Description represents a service that is responsible for proxying connectivity between a set of devices and the EdgeX Foundry core services; the current state and reachability information for a registered device service Id uniquely identifies the device service, a UUID for example Name LastConnected LastReported Time (milliseconds) that the device service reported data to the core microservice Labels BaseAddress address (MQTT topic, HTTP address, serial bus, etc.) for reaching the service AdminState ResourceProperties Property Description The transformation and constraint properties for a device resource. 
ValueType Type of the value ReadWrite Read/Write Permissions set for this property Minimum Minimum value that can be get/set from this property Maximum Maximum value that can be get/set from this property DefaultValue Default value set to this property if no argument is passed Mask Mask to be applied prior to get/set of property Shift Shift to be applied after masking, prior to get/set of property Scale Multiplicative factor to be applied after shifting, prior to get/set of property Offset Additive factor to be applied after multiplying, prior to get/set of property Base Base for property to be applied to, leave 0 for no power operation (i.e. base ^ property: 2 ^ 10) Assertion Required value of the property, set for checking error state. Failing an assertion condition will mark the device with an error state MediaType ProvisionWatcher Property Description The metadata used by a Service for automatically provisioning matching Devices. Id Name unique name and identifier of the provision watcher Labels Identifiers set of key value pairs that identify property (MAC, HTTP,...) and value to watch for (00-05-1B-A1-99-99, 10.0.0.1,...) BlockingIdentifiers set of key-values pairs that identify devices which will not be added despite matching on Identifiers ProfileName Name of the device profile that should be applied to the devices available at the identifier addresses ServiceName Name of the device service that new devices will be associated to AdminState administrative state for new devices - either unlocked or locked AutoEvents Associated auto events to this watcher","title":"Data Dictionary"},{"location":"microservices/core/metadata/Ch-Metadata/#high-level-interaction-diagrams","text":"Sequence diagrams for some of the more critical or complex events regarding metadata. 
These High Level Interaction Diagrams show: Adding a new device profile (Step 1 to provisioning a new device) via metadata Adding a new device via metadata (Step 2 to provisioning a new device) EdgeX Foundry device service startup (and its interactions with metadata) Add a New Device Profile (Step 1 to provisioning a new device) Add a New Device (Step 2 to provisioning a new device) What happens on a device service startup?","title":"High Level Interaction Diagrams"},{"location":"microservices/core/metadata/Ch-Metadata/#configuration-properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. Databases/Databases.Primary Property Default Value Description Properties used by the service to access the database Name 'metadata' Document store or database name Notifications Property Default Value Description Configuration to post device changes through the notification service PostDeviceChanges false Whether to send out notification when a device has been added, changed, or removed Content 'Meatadata notice: ' Start of the notification message when sending notification messages on device change Sender 'core-metadata' Sender of any notification messages sent on device change Description 'Metadata change notice' Message description of any notification messages sent on device change Label 'metadata' Label to put on messages for any notification messages sent on device change","title":"Configuration Properties"},{"location":"microservices/core/metadata/Ch-Metadata/#v2-configuration-migration-guide","text":"Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service .","title":"V2 Configuration Migration Guide"},{"location":"microservices/core/metadata/Ch-Metadata/#writable","text":"The EnableValueDescriptorManagement setting has been 
removed","title":"Writable"},{"location":"microservices/core/metadata/Ch-Metadata/#api-reference","text":"Core Metadata API Reference","title":"API Reference"},{"location":"microservices/device/Ch-DeviceServiceList/","text":"Device Service Support The following table lists the EdgeX device services and protocols they support. Device Service Repository Protocol Releases Versions Status Comments device-camera-go ONVIF Delhi-Jakarta 0.7 - 2.x Active Not a full ONVIF implementation, but a good starter device-rest-go REST Edinburgh-Jakarta 1.0 - 2.x Active provides one-way communications only. Allows posting of binary and JSON data via REST. Events are single reading only. device-rfid-llrp-go LLRP Hanoi 1.3 Active Communications with RFID readers via LLRP. Work ongoing to update to Ireland, 2.x device-snmp-go SNMP Edinburgh-Jakarta 1.0 and 2.x Active Basic implementation of SNMP protocol. Async callbacks and traps not currently supported. device-virtual-go Edinburgh - Jakarta 1.0 and 2.x Active Simulates sensor readings of type binary, Boolean, float, integer and unsigned integer device-mqtt-go MQTT Fuji \u2013 Jakarta 1.1 and 2.x Active Two way communications via multiple MQTT topics device-modbus-go Modbus Delhi \u2013 Jakarta 0.7 - 2.x Active Supports Modbus over TCP or RTU device-gpio GPIO Hanoi \u2013 Jakarta 1.3 and 2.x Active Linux only; uses sysfs ABI device-grove-c Edinburgh \u2013 Jakarta 1.0 and 2.x Active Connects the Grove sensor on Grove Raspberry Pi using libmraa library; Linux and ARM only device-bacnet-c BACnet Edinburgh \u2013 Hanoi 1.0 and 2.x Active Currently being updated for Ireland and Jakarta. Supports BACnet via ethernet (IP) or serial (MSTP). Uses the Steve Karg BACnet stack device-coap-c CoAP Hanoi - Ireland 1.3 and 2.x Inactive This service is in the process of being redeveloped and expanded for Jakarta \u2013 and will support Thread as a subset of functionality. 
Currently supports CoAP-based REST and is one way communications (read-only) device-uart UART 2.x in Development Linux only; for connecting serial UART devices to EdgeX Device / Sensor List The following table lists known sensors or devices that have been successfully connected to EdgeX. Note If you have physically connected a sensor or device to EdgeX and can add to this list, please submit an issue in https://github.com/edgexfoundry/edgex-docs so that we can update the list. Provide as many details as possible about the device. Device Model Device Service connectivity Version Reference Comet Temperature Probe T0310 device-modbus-go Hanoi https://www.cometsystem.com/products/t0310-temperature-transmitter-with-rs232-output/reg-t0310 DSD TECH USB to TTL Adapter Built-in FTDI FT232RL IC SH-U09C2 device-uart development http://www.dsdtech-global.com/2017/07/dsd-tech-usb-to-ttl-serial-converter.html GPIO Soil Moisture Sensor unknown device-gpio Hanoi https://learn.sparkfun.com/tutorials/soil-moisture-sensor-hookup-guide/all Patlite Signal Tower NHL-FB2 device-snmp-go Ireland https://www.patlite.com/ Trendnet Network Switch TPE-082WS device-snmp-go Hanoi https://www.trendnet.com/products/managed-switch/10-Port-Gigabit-Web-Smart-PoEplus-Switch-TPE-082WS","title":"Device Service Support"},{"location":"microservices/device/Ch-DeviceServiceList/#device-service-support","text":"The following table lists the EdgeX device services and protocols they support. Device Service Repository Protocol Releases Versions Status Comments device-camera-go ONVIF Delhi-Jakarta 0.7 - 2.x Active Not a full ONVIF implementation, but a good starter device-rest-go REST Edinburgh-Jakarta 1.0 - 2.x Active provides one-way communications only. Allows posting of binary and JSON data via REST. Events are single reading only. device-rfid-llrp-go LLRP Hanoi 1.3 Active Communications with RFID readers via LLRP. 
Work ongoing to update to Ireland, 2.x device-snmp-go SNMP Edinburgh-Jakarta 1.0 and 2.x Active Basic implementation of SNMP protocol. Async callbacks and traps not currently supported. device-virtual-go Edinburgh - Jakarta 1.0 and 2.x Active Simulates sensor readings of type binary, Boolean, float, integer and unsigned integer device-mqtt-go MQTT Fuji \u2013 Jakarta 1.1 and 2.x Active Two way communications via multiple MQTT topics device-modbus-go Modbus Delhi \u2013 Jakarta 0.7 - 2.x Active Supports Modbus over TCP or RTU device-gpio GPIO Hanoi \u2013 Jakarta 1.3 and 2.x Active Linux only; uses sysfs ABI device-grove-c Edinburgh \u2013 Jakarta 1.0 and 2.x Active Connects the Grove sensor on Grove Raspberry Pi using libmraa library; Linux and ARM only device-bacnet-c BACnet Edinburgh \u2013 Hanoi 1.0 and 2.x Active Currently being updated for Ireland and Jakarta. Supports BACnet via ethernet (IP) or serial (MSTP). Uses the Steve Karg BACnet stack device-coap-c CoAP Hanoi - Ireland 1.3 and 2.x Inactive This service is in the process of being redeveloped and expanded for Jakarta \u2013 and will support Thread as a subset of functionality. Currently supports CoAP-based REST and is one way communications (read-only) device-uart UART 2.x in Development Linux only; for connecting serial UART devices to EdgeX","title":"Device Service Support"},{"location":"microservices/device/Ch-DeviceServiceList/#device-sensor-list","text":"The following table lists known sensors or devices that have been successfully connected to EdgeX. Note If you have physically connected a sensor or device to EdgeX and can add to this list, please submit an issue in https://github.com/edgexfoundry/edgex-docs so that we can update the list. Provide as many details as possible about the device. 
Device Model Device Service connectivity Version Reference Comet Temperature Probe T0310 device-modbus-go Hanoi https://www.cometsystem.com/products/t0310-temperature-transmitter-with-rs232-output/reg-t0310 DSD TECH USB to TTL Adapter Built-in FTDI FT232RL IC SH-U09C2 device-uart development http://www.dsdtech-global.com/2017/07/dsd-tech-usb-to-ttl-serial-converter.html GPIO Soil Moisture Sensor unknown device-gpio Hanoi https://learn.sparkfun.com/tutorials/soil-moisture-sensor-hookup-guide/all Patlite Signal Tower NHL-FB2 device-snmp-go Ireland https://www.patlite.com/ Trendnet Network Switch TPE-082WS device-snmp-go Hanoi https://www.trendnet.com/products/managed-switch/10-Port-Gigabit-Web-Smart-PoEplus-Switch-TPE-082WS","title":"Device / Sensor List"},{"location":"microservices/device/Ch-DeviceServices/","text":"Device Services Microservices Introduction The Device Services Layer interacts with Device Services. Device services are the edge connectors interacting with the devices that include, but are not limited to: appliances in your home, alarm systems, HVAC equipment, lighting, machines in any industry, irrigation systems, drones, traffic signals, automated transportation, and so forth. EdgeX device services translate information coming from devices via hundreds of protocols and thousands of formats and bring them into EdgeX. In other terms, device services ingest sensor data provided by \u201cthings\u201d. When it ingests the sensor data, the device service converts the data produced and communicated by the \u201cthing\u201d into a common EdgeX Foundry data structure, and sends that converted data into the core services layer, and to other micro services in other layers of EdgeX Foundry. Device services also receive and handle any request for actuation back to the device. Device services take a general command from EdgeX to perform some sort of action and it translates that into a protocol specific request and forwards the request to the desired device. 
Device services serve as the main means EdgeX interacts with sensors/devices. So, in addition to getting sensor data and actuating devices, device services also: Get status updates from devices/sensors Transform data before sending sensor data to EdgeX Change configuration Discover devices Device services may service one or a number of devices at one time. A device that a device service manages could be something other than a simple, single, physical device. The device could be an edge/IoT gateway (and all of that gateway's devices), a device manager, a sensor hub, a web service available over HTTP, or a software sensor that acts as a device, or collection of devices, to EdgeX Foundry. The device service communicates with the devices through protocols native to each device object. EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, BLE, etc. EdgeX also provides the means to create new device services through device service software development kits (SDKs) when you encounter a new protocol and need EdgeX to communicate with a new device. Device Service Abstraction A device service is really just a software abstraction around a device and any associated firmware, software and protocol stack. It allows the rest of EdgeX (and users of EdgeX) to talk to a device via the abstraction API so that all devices look the same from the perspective of how you communicate with them. Under the covers, the implementation of the device service has some common elements, but can also vary greatly depending on the underlying device, protocol, and associated software. A device service provides the abstraction between the rest of EdgeX and the physical device. In other terms, the device service \u201cwraps\u201d the protocol communication code, device driver/firmware and actual device. Each device service in EdgeX is an independent micro service. Device services are typically created using a device service SDK . 
The SDK is really just a library that provides common scaffolding code and convenience methods that are needed by all device services. While not required, the EdgeX community uses the SDKs as the basis for all device services the community provides. The SDKs make it easier to create device services by allowing a developer to focus on device specific communications, features, etc. versus having to code a lot of EdgeX service boilerplate code. Using the SDKs also helps to ensure the device services adhere to rules required of the device services. Unless you need to create a new device service or modify an existing device service, you may not ever have to go under the covers, so to speak, to understand how a device service works. However, having some general understanding of what a device service does and how it does it can be helpful in customization, setting configuration and diagnosing problems. Device Service Functionality All device services must perform the following tasks: Register with core metadata \u2013 thereby letting all of EdgeX know that it is running and stands ready to manage devices. In the case of an existing device service, the device service will update its metadata registration and get any new information. Get its configuration settings from EdgeX\u2019s configuration service (or local configuration file if the configuration service is not being used). Register itself as a running EdgeX micro service with the EdgeX registry service (when running) \u2013 thereby allowing other EdgeX services to communicate with it. On-board and manage physical devices it knows how to communicate with. This process is called provisioning of the device(s). In some cases, the device service may have the means to automatically detect and provision the devices. For example, a BLE device service may automatically scan a BLE address space, detect a new BLE device in its range, and then provision that device to EdgeX and the associated BLE device service. 
Update and inform EdgeX on the operating state of the device (does it appear the device is still running and able to communicate). Monitor for configuration changes and apply new configuration where applicable. Note, in some cases configuration changes cannot be dynamically applied (example: change the operating port of the device service). Get sensor data (i.e. ingest sensor data) and pass that data to the core data micro service via REST. Receive and react to REST based actuation commands. As you can imagine, many of these tasks (like registering with core metadata) are generic and the same for all device services and thereby provided by the SDK. Other tasks (like getting sensor data from the underlying device) are quite specific to the underlying device. In these cases, the device service SDK provides empty functions for performing the work, but the developer would need to fill in the function code as it relates to the specific device, the communication protocol, device driver, etc. Device Service Functional Requirements Requirements for the device service are provided in this documentation. These requirements are being used to define what functionality needs to be offered via any Device Service SDK to produce the device service scaffolding code. They may also help the reader further understand the duties and role of a device service. Device Profile EdgeX comes with a number of existing device services for communicating with devices that speak many IoT protocols \u2013 such as Modbus, BACnet, BLE, etc. While these devices services know how to speak to devices that communicate by the associated protocol, the device service doesn\u2019t know the specifics of all devices that speak that protocol. For example, there are thousands of Modbus devices in the world. It is a common industrial protocol used in a variety of devices. 
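The split of responsibilities described above, where the SDK supplies generic scaffolding and the developer fills in the device-specific communication code, can be sketched as follows. The interface name, method signature and simulated driver here are hypothetical illustrations, not the actual EdgeX SDK API.

```go
package main

import "fmt"

// ProtocolDriver is an illustrative stand-in for the kind of interface a
// device service SDK leaves for the developer to implement; the name and
// signature are hypothetical, not the real EdgeX SDK API.
type ProtocolDriver interface {
	// HandleRead fetches the named resource's value from a device using
	// the device's native protocol (Modbus, BACnet, BLE, ...).
	HandleRead(deviceName, resourceName string) (float64, error)
}

// simulatedDriver fills in the device-specific part with a canned value,
// where a real driver would contain protocol communication code.
type simulatedDriver struct{}

func (d simulatedDriver) HandleRead(deviceName, resourceName string) (float64, error) {
	return 72.5, nil
}

func main() {
	// The SDK scaffolding would invoke the driver like this when servicing
	// a read request or an auto event.
	var driver ProtocolDriver = simulatedDriver{}
	v, err := driver.HandleRead("BAC-9001", "Temperature")
	fmt.Println(v, err)
}
```

The SDK's generic tasks (registration, configuration, REST handling) sit above this interface, so the developer only implements the protocol-specific methods.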
Some Modbus devices measure temperature and humidity and provide thermostatic control over building HVAC systems, while other Modbus devices are used in automation control of flare gas meters in the oil and gas industry. This diversity of devices means that the Modbus device service could never know how to communicate with each Modbus device directly. The device service just knows the Modbus protocol generically and must be informed of how to communicate with each individual device based on what that device knows and communicates. Using an analogy, you may speak a language or two. Just because you speak English doesn\u2019t mean you know everything about all English-speaking people. For example, just because someone speaks English, you would not know if they could solve a calculus problem for you or sing your favorite song. Device profiles describe a specific device to a device service. Each device managed by a device service has an associated device profile, which defines that device in terms of the data it reports and operations that it supports. General characteristics about the type of device, the data the device provides, and how to command the device are all provided in a device profile. A device profile is described in YAML which is a human-readable data serialization language (similar to a markup language like XML). See the page on device profiles to learn more about how they provide the detail EdgeX device services need to communicate with a device. Info Device profiles, while normally provided to EdgeX in a YAML file, can also be specified to EdgeX in JSON. See the metadata API for upload via JSON versus upload YAML file . Device Discovery and Provision Watchers Device Services may contain logic to automatically provision new devices. This can be done statically or dynamically . 
Static Provisioning

In static device configuration (also known as static provisioning), the device service connects to and establishes a new device that it manages in EdgeX (specifically metadata) from configuration the device service is provided. For example, a device service may be provided with the specific IP address and additional device details for a device (or devices) that it is to onboard at startup. In static provisioning, it is assumed that the device will be there and that it will be available at the address or place specified through configuration. The devices, and the connection information for those devices, are known at the point that the device service starts.

Dynamic Provisioning

In dynamic discovery (also known as automatic provisioning), a device service is given some general information about where to look and general parameters for a device (or devices). For example, the device service may be given a range of BLE address space and told to look for devices of a certain nature in this range. However, the device service does not know that the device is physically there, and the device may not be there at start up. It must continually scan during its operations (typically on some sort of schedule) for new devices within the bounds of the location and device parameters provided by configuration. Not all device services support dynamic discovery. If a device service does support dynamic discovery, the configuration about what and where to look (in other words, where to scan) for new devices is specified by a provision watcher. A provision watcher is created via a call to the core metadata provision watcher API (and is stored in the metadata database). In addition to providing details about what devices to look for during a scan, a provision watcher may also contain "blocking" indicators, which define parameters about devices that are not to be automatically provisioned.
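For illustration, the body of a provision watcher created through the core metadata API might look roughly like the following. The names, address patterns, and exact field spelling here are assumptions made for this sketch; consult the core metadata API reference for the real schema.

```json
{
  "apiVersion": "v2",
  "provisionWatcher": {
    "name": "example-ble-watcher",
    "serviceName": "device-ble",
    "profileName": "BLE-Sensor-Profile",
    "adminState": "UNLOCKED",
    "identifiers": { "address": "00-05-1B.*" },
    "blockingIdentifiers": { "address": ["00-05-1B-FF-FF-FF"] }
  }
}
```

The identifiers describe what a scan should match, while the blockingIdentifiers express the "blocking" indicators: devices matching them are excluded from automatic provisioning.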
This allows the scope of a device scan to be narrowed, or specific devices to be avoided.

Admin State

The adminState is either LOCKED or UNLOCKED for each device. This is an administrative condition applied to the device. This state is periodically set by an administrator of the system, perhaps for system maintenance or upgrade of the sensor. When LOCKED, requests to the device via the device service are stopped, and an indication that the device is locked (HTTP 423 status code) is returned to the caller.

Sensor Reading Schedule

Data collected from devices by a device service is marshalled into EdgeX event and reading objects (delivered as JSON objects in service REST calls). This is one of the primary responsibilities of a device service. Typically, a configurable schedule, called an auto event schedule, determines when a device service sends data to core data via core data's REST API (future EdgeX implementations may afford alternate means to send the data to core data or to send sensor data to other services).

Test and Demonstration Device Services

Among the many available device services provided by EdgeX, there are two device services that are typically used for demonstration, education and testing purposes only. The random device service (device-random-go) is a very simple device service used to provide device service authors a bare-bones example, inclusive of a device profile. It can also be used to create random integer data (either 8, 16, or 32 bit, signed or unsigned) to simulate integer readings when developing or testing other EdgeX micro services. It was created from the Go-based device service SDK. The virtual device service (device-virtual-go) is also used for demonstration, education and testing. It is a more complex simulator in that it allows any type of data to be generated on a scheduled basis, and it uses an embedded SQL database (ql) to provide simulated data.
Manipulating the data in the embedded database allows the service to mimic almost any type of sensing device. More information on the virtual device service is available in this documentation.

Running multiple instances

Device services support one additional command-line argument, --instance or -i. This allows for running multiple instances of a device service in an EdgeX deployment by giving them different names. For example, running device-modbus -i 1 results in a service named device-modbus_1; i.e., the parameter given to the instance argument is added as a suffix to the device service name. The same effect may be obtained by setting the EDGEX_INSTANCE environment variable.

Publish to MessageBus

EdgeX 2.0: New in EdgeX 2.0, device services now have the capability to publish Events directly to the EdgeX MessageBus, rather than POSTing the Events to Core Data via REST. This capability is controlled by the Device.UseMessageBus configuration property (see below), which is set to true by default. Core Data is configured by default to subscribe to the EdgeX MessageBus to receive and persist the Events. Application services, as in EdgeX 1.x, subscribe to the EdgeX MessageBus to receive and process the Events.

Configuration Properties

Please refer to the general Common Configuration documentation for configuration properties common to all services.
Device properties (these determine how the device service communicates with a device):
- DataTransform (default: true): Controls whether transformations are applied to numeric readings.
- MaxCmdOps (default: 128): Maximum number of resources in a device command (hence, readings in an event).
- MaxCmdResultLen (default: 256): Maximum JSON string length for command results.
- ProfilesDir (default: ''): If set, directory containing profile definition files to upload to core-metadata.
- DevicesDir (default: ''): If set, directory containing device definition files to upload to core-metadata.
- UpdateLastConnected (default: false): If true, update the LastConnected attribute of a device whenever it is successfully accessed.
- UseMessageBus (default: true): Controls whether events are published via the MessageBus or sent to core-data via REST.
- Discovery/Enabled (default: true): Controls whether device discovery is enabled.
- Discovery/Interval (default: 0): Interval between automatic discovery runs. Zero means do not run discovery automatically.

MessageQueue properties (entries in the MessageQueue section of the configuration allow for publication of events to a message bus):
- Protocol (default: redis): Indicates the connectivity protocol to use for the bus.
- Host (default: localhost): Indicates the host of the messaging broker, if applicable.
- Port (default: 6379): Indicates the port to use when publishing a message.
- Type (default: redis): Indicates the type of messaging library to use. Currently this is Redis by default. Refer to the go-mod-messaging module for more information.
- AuthMode (default: usernamepassword): Auth mode used to connect to the EdgeX MessageBus.
- SecretName (default: redisdb): Name of the secret in the Secret Store holding the MessageBus credentials.
- PublishTopicPrefix (default: edgex/events/device): Indicates the base topic to which messages should be published.
Additional topic levels are appended to this publish topic prefix.

MessageQueue.Optional properties (configuration and connection parameters for use with an MQTT message bus, in place of Redis):
- ClientId (default: [service-key]): Client ID used to put messages on the bus.
- Qos (default: '0'): Quality of Service; values are 0 (at most once), 1 (at least once) or 2 (exactly once).
- KeepAlive (default: '10'): Period of time in seconds to keep the connection alive when no messages are flowing (must be 2 or greater).
- Retained (default: false): Whether to retain messages.
- AutoReconnect (default: true): Whether to reconnect to the message bus on connection loss.
- ConnectTimeout (default: 5): Message bus connection timeout in seconds.
- SkipCertVerify (default: false): TLS configuration; only used if a Cert/Key file or Cert/Key PEM block is specified.

Custom Configuration

Device services can have custom configuration in one of two ways. See the sections below for details.

Driver

The [Driver] section is used for simple custom settings and is accessed via the SDK's DriverConfigs() API. The DriverConfigs API returns a map[string]string containing the contents of the [Driver] section of the configuration.toml file.

[Driver]
MySetting = "My Value"

Custom Structured Configuration

For Go Device Services, see Go Custom Structured Configuration for more details. For C Device Services, see C Custom Structured Configuration for more details.

Secrets

EdgeX 2.0: New in EdgeX 2.0, Device Services now have the capability to store and retrieve secure secrets. Note that currently this only applies to Go-based Device Services. The C SDK does not yet have support for secrets; this is planned for the Jakarta 2.1 release.

Configuration

All instances of Device Services running in secure mode require a SecretStore to be created for the service by the Security Services. See Configuring Add-on Service for details on configuring a SecretStore to be created for the Device Service.
With the use of Redis Pub/Sub as the default EdgeX MessageBus, all Device Services need the redisdb known secret added to their SecretStore so they can connect to the secure EdgeX MessageBus. See the Secure MessageBus documentation for more details. Each Device Service also has detailed configuration to enable connection to its exclusive SecretStore.

Example - SecretStore configuration for Device MQTT

[SecretStore]
Type = "vault"
Host = "localhost"
Port = 8200
Path = "device-mqtt/"
Protocol = "http"
RootCaCertPath = ""
ServerName = ""
TokenFile = "/tmp/edgex/secrets/device-mqtt/secrets-token.json"
  [SecretStore.Authentication]
  AuthType = "X-Vault-Token"

Storing Secrets

Secure Mode

When running a Device Service in secure mode, secrets can be stored in the SecretStore by making an HTTP POST call to the /api/v2/secret API route on the Device Service. The secret data POSTed is stored to the SecretStore based on values in the [SecretStore] section of the configuration. Once a secret is stored, only the service that added the secret will be able to retrieve it. See the Secret API Reference for more details and examples.

Insecure Mode

When running in insecure mode, secrets are stored in, and retrieved from, the Writable.InsecureSecrets section of the service's configuration.toml file. Insecure secrets and their paths can be configured as below.

Example - InsecureSecrets Configuration

[Writable.InsecureSecrets]
  [Writable.InsecureSecrets.DB]
  path = "redisdb"
    [Writable.InsecureSecrets.DB.Secrets]
    username = ""
    password = ""
  [Writable.InsecureSecrets.MQTT]
  path = "credentials"
    [Writable.InsecureSecrets.MQTT.Secrets]
    username = "mqtt-user"
    password = "mqtt-password"

Retrieving Secrets

Device Services retrieve secrets from their SecretStore using the SDK API. See Retrieving Secrets for more details using the Go SDK.
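To make the retrieval flow concrete, here is a small, self-contained Go sketch. The getSecret function below is a hypothetical stand-in for the SDK's secret retrieval API (the real call and its signature live in the SDK; see Retrieving Secrets), and the in-memory store mirrors the insecure-mode example with a credentials path holding MQTT username/password values.

```go
package main

import "fmt"

// getSecret is a hypothetical stand-in for the Go SDK's secret retrieval
// API. It looks up the requested keys under the given secret path and
// returns them as a map, or an error if the path or a key is missing.
func getSecret(path string, keys ...string) (map[string]string, error) {
	// Simulated SecretStore contents for this sketch only.
	store := map[string]map[string]string{
		"credentials": {"username": "mqtt-user", "password": "mqtt-password"},
	}
	secrets, ok := store[path]
	if !ok {
		return nil, fmt.Errorf("no secrets at path %q", path)
	}
	result := make(map[string]string, len(keys))
	for _, k := range keys {
		v, found := secrets[k]
		if !found {
			return nil, fmt.Errorf("key %q not found at path %q", k, path)
		}
		result[k] = v
	}
	return result, nil
}

func main() {
	creds, err := getSecret("credentials", "username", "password")
	if err != nil {
		panic(err)
	}
	fmt.Println(creds["username"]) // prints: mqtt-user
}
```

In a real service, the service would request secrets by path and key in the same way, and only secrets stored by that service are visible to it.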
API Reference

Device Service SDK API Reference

Introduction

The Device Services Layer interacts with Device Services. Device services are the edge connectors interacting with the devices that include, but are not limited to: appliances in your home, alarm systems, HVAC equipment, lighting, machines in any industry, irrigation systems, drones, traffic signals, automated transportation, and so forth. EdgeX device services translate information coming from devices via hundreds of protocols and thousands of formats and bring it into EdgeX. In other terms, device services ingest sensor data provided by "things". When it ingests the sensor data, the device service converts the data produced and communicated by the "thing" into a common EdgeX Foundry data structure, and sends that converted data into the core services layer and to other micro services in other layers of EdgeX Foundry. Device services also receive and handle any request for actuation back to the device. Device services take a general command from EdgeX to perform some sort of action, translate it into a protocol-specific request, and forward the request to the desired device. Device services serve as the main means by which EdgeX interacts with sensors/devices. So, in addition to getting sensor data and actuating devices, device services also:
- Get status updates from devices/sensors
- Transform data before sending sensor data to EdgeX
- Change configuration
- Discover devices

Device services may service one or a number of devices at one time. A device that a device service manages could be something other than a simple, single, physical device.
The device could be an edge/IoT gateway (and all of that gateway's devices), a device manager, a sensor hub, a web service available over HTTP, or a software sensor that acts as a device, or collection of devices, to EdgeX Foundry. The device service communicates with the devices through protocols native to each device object. EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, BLE, etc. EdgeX also provides the means to create new device services through device service software development kits (SDKs) when you encounter a new protocol and need EdgeX to communicate with a new device.

Device Service Abstraction

A device service is really just a software abstraction around a device and any associated firmware, software and protocol stack. It allows the rest of EdgeX (and users of EdgeX) to talk to a device via the abstraction API so that all devices look the same from the perspective of how you communicate with them. Under the covers, the implementation of the device service has some common elements, but can also vary greatly depending on the underlying device, protocol, and associated software. A device service provides the abstraction between the rest of EdgeX and the physical device. In other terms, the device service "wraps" the protocol communication code, device driver/firmware and actual device. Each device service in EdgeX is an independent micro service. Device services are typically created using a device service SDK. The SDK is really just a library that provides common scaffolding code and convenience methods that are needed by all device services. While not required, the EdgeX community uses the SDKs as the basis for all the device services the community provides. The SDKs make it easier to create a device service by allowing a developer to focus on device-specific communications, features, etc.
versus having to code a lot of EdgeX service boilerplate. Using the SDKs also helps to ensure the device services adhere to the rules required of device services. Unless you need to create a new device service or modify an existing device service, you may not ever have to go under the covers, so to speak, to understand how a device service works. However, having some general understanding of what a device service does, and how it does it, can be helpful in customization, setting configuration and diagnosing problems.

Device Service Functionality

All device services must perform the following tasks:
- Register with core metadata, thereby letting all of EdgeX know that it is running and stands ready to manage devices. In the case of an existing device service, the device service will update its metadata registration and get any new information.
- Get its configuration settings from EdgeX's configuration service (or a local configuration file if the configuration service is not being used).
- Register itself as an EdgeX running micro service with the EdgeX registry service (when running), thereby allowing other EdgeX services to communicate with it.
- On-board and manage the physical devices it knows how to communicate with. This process is called provisioning of the device(s). In some cases, the device service may have the means to automatically detect and provision the devices. For example, a BLE device service may automatically scan a BLE address space, detect a new BLE device in its range, and then provision that device to EdgeX and the associated BLE device service.
- Update and inform EdgeX on the operating state of the device (does it appear the device is still running and able to communicate?).
- Monitor for configuration changes and apply new configuration where applicable.
V2 Migration Guide

EdgeX 2.0: For the EdgeX 2.0 (Ireland) release there are many backward-breaking changes. These changes require custom Device Services and custom device profiles to be migrated. This section outlines the necessary steps for this migration.

Custom Device Services

Configuration

The migration of any Device Service's configuration starts with migrating configuration common to all EdgeX services. See the V2 Migration of Common Configuration section for details. The remainder of this section focuses on configuration specific to Device Services.

Device
- Remove ImitCmd, ImitCmdArgs, RemoveCmd and RemoveCmdArgs.
- Add UseMessageBus to determine whether events should be published to the MessageBus or sent by REST call.
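As a minimal sketch of that Device-section change, only the new flag is shown here; your service's other [Device] settings are omitted:

```toml
[Device]
  # New in V2: publish Events to the EdgeX MessageBus (the default) instead
  # of POSTing them to Core Data via REST. Set to false to keep REST delivery.
  UseMessageBus = true
```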
For C-based Device Services (eg, BACnet, Grove, CoAP): UpdateLastConnected , MaxCmdOps , DataTransform , Discovery and MaxCmdResultLen are dynamic settings - move these to [Writable.Device] Add DevicesDir and ProfilesDir as an indication of where to load the device profiles and pre-defined devices. The convention is to put them under the /res folder: Example configuration [ Device ] DevicesDir = \"./res/devices\" ProfilesDir = \"./res/profiles\" ... Example Project Structure +- res | +- devices | +- device1.toml | +- device2.toml | +- profiles | +- profile1.yml | +- profile2.yml | +- configuration.toml | +- ... +- main.go +- device-service-binary MessageQueue Device Service is capable of pushing Events to the Message Bus instead of sending them via REST calls. A MessageQueue section is added in configuration to specify the details. MessageQueue Example [MessageQueue] Protocol = \"redis\" Host = \"localhost\" Port = 6379 Type = \"redis\" AuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure). SecretName = \"redisdb\" PublishTopicPrefix = \"edgex/events/device\" # /// will be added to this Publish Topic prefix [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"device-simple\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified See the Device Service MessageQueue section for details. Code (Golang) Dependencies You first need to update the go.mod file to specify go 1.16 and the V2 versions of the Device SDK and any EdgeX go-mods directly used by your service. Note the extra /v2 for the modules. 
Example go.mod for V2 module < your service > go 1.16 require ( github . com / edgexfoundry / device - sdk - go / v2 v2 .0.0 github . com / edgexfoundry / go - mod - core - contracts / v2 v2 .0.0 ... ) Once that is complete then the import statements for these dependencies must be updated to include the /v2 in the path. Example import statements for V2 import ( ... \"github.com/edgexfoundry/device-sdk-go/v2/pkg/models\" \"github.com/edgexfoundry/go-mod-core-contracts/v2/common\" ) CommandValue CommandValue is redesigned to be simpler and more straightforward. A single Value with interface{} type is able to accommodate reading values of any supported type. As a result, you might notice the original API to create CommandValue is no longer working. In V2 all those API functions for creating CommandValues of different types are refactored into a generic function: Create CommandValue with string Type cv , err := models . NewCommandValue ( deviceResourceName , v2 . ValueTypeString , \"foobar\" ) if err != nil { ... } cv . Origin = time . Now (). UnixNano () cv . Tags [ \"foo\" ] = \"bar\" The 3rd argument in the function must be castable to the Type defined in the 2nd argument, otherwise there will be an error. See Data formats for supported data types in EdgeX. Device Service also supports Event Tagging ; the tags on the CommandValue will be copied to the Event. Code (C) Dependencies The CSDK now has additional dependencies on the Redis client library (hiredis, hiredis-dev) and Paho MQTT (paho-mqtt-c-dev) Attribute and Protocols processing Four new callback functions are defined and implementations of them are required. Their purpose is to take the parsing of attributes and protocols out of the get/put handlers so that it is not done for every single request. The device service implementation should define a structure to hold the attributes of a resource in a form suitable for use with whatever access library is being used to communicate with the devices. 
A function should then be written which allocates and populates this structure, given a set of resource attributes held in a string map. Another function should be written which frees an instance of the structure and any associated elements. A similar pair of functions should be written to process ProtocolProperties to address a device. devsdk_address_t xxx_create_address (void *impl, const devsdk_protocols *protocols, iot_data_t **exception); void xxx_free_address (void *impl, devsdk_address_t address); devsdk_resource_attr_t xxx_create_resource_attr (void *impl, const iot_data_t *attributes, iot_data_t **exception); void xxx_free_resource_attr (void *impl, devsdk_resource_attr_t attr); In the event of an attribute or protocol set being invalid, the create function should return NULL and allocate a string value into the exception parameter indicating the nature of the problem - this will be logged by the SDK. Get and Put handlers The devname and protocols parameters are replaced by an object of type devsdk_device_t ; this contains name ( char * ) and address ( devsdk_address_t - see above) fields The resource name , type and attributes (the latter now represented as devsdk_resource_attr_t ) in a devsdk_commandrequest are now held in a devsdk_resource_t structure qparams is renamed to options and is now an iot_data_t map (string/string) options is also added to the put handler parameters Callback function list The callback list structure has been made opaque. An instance of it to pass into the devsdk_service_new function is created by calling devsdk_callbacks_init . This takes as parameters the mandatory callback functions (init, get/set handlers, stop, create/free addr and create/free resource attr). 
Services which implement optional callbacks should set these using the relevant population functions: * devsdk_callbacks_set_discovery * devsdk_callbacks_set_reconfiguration * devsdk_callbacks_set_listeners * devsdk_callbacks_set_autoevent_handlers Misc edgex_free_device() now takes the devsdk_service_t as its first parameter Reflecting changes in the device profile (see below), the edgex_deviceresource struct now contains an edgex_propertyvalue directly, rather than via an edgex_profileproperty . The edgex_propertyvalue contains a new field char *units which replaces the old edgex_units structure. Device Profiles See Device Profile Reference for details; the SDK now allows both YAML and JSON formats. Device Resource The properties field is simplified in device resource: units becomes a single string field and is optional. Float32 and Float64 types are both represented only in eNotation. Base64 encoding is removed so there is no floatEncoding field anymore. V1: deviceResources : - name : \"Xrotation\" description : \"X axis rotation rate\" properties : value : { type : \"Int32\" , readWrite : \"RW\" } units : { type : \"string\" , readWrite : \"R\" , defaultValue : \"degrees/sec\" } V2: deviceResources : - name : \"Xrotation\" description : \"X axis rotation rate\" properties : valueType : \"Int32\" readWrite : \"RW\" Device Command The get and set ResourceOperation fields are replaced with a single readWrite field to eliminate the duplicate definition. 
V1: deviceCommands : - name : \"Rotation\" get : - { operation : \"get\" , deviceResource : \"Xrotation\" } - { operation : \"get\" , deviceResource : \"Yrotation\" } - { operation : \"get\" , deviceResource : \"Zrotation\" } set : - { operation : \"set\" , deviceResource : \"Xrotation\" , parameter : \"0\" } - { operation : \"set\" , deviceResource : \"Yrotation\" , parameter : \"0\" } - { operation : \"set\" , deviceResource : \"Zrotation\" , parameter : \"0\" } V2: deviceCommands : - name : \"Rotation\" isHidden : false readWrite : \"RW\" resourceOperations : - { deviceResource : \"Xrotation\" , defaultValue : \"0\" } - { deviceResource : \"Yrotation\" , defaultValue : \"0\" } - { deviceResource : \"Zrotation\" , defaultValue : \"0\" } Core Command The coreCommands section is removed in V2. We use the isHidden field in both deviceResource and deviceCommand to indicate whether it is exposed to the Command Service or not. isHidden defaults to false so all deviceResources and deviceCommands are able to be called via the Command Service REST API. Set isHidden to true if you don't want to expose them. Devices State In V2 the values of a device's operating state are changed from ENABLED / DISABLED to UP / DOWN . The additional state value UNKNOWN is added for future use. Pre-defined Devices In V2 pre-defined devices are in their own file; the SDK allows both TOML and JSON formats. 
Pre-defined devices [[DeviceList]] Name = \"Simple-Device01\" ProfileName = \"Simple-Device\" Description = \"Example of Simple Device\" Labels = [ \"industrial\" ] [DeviceList.Protocols] [DeviceList.Protocols.other] Address = \"simple01\" Port = \"300\" [[DeviceList.AutoEvents]] Interval = \"10s\" OnChange = false SourceName = \"Switch\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Image\" Notice that we renamed some fields: Profile is renamed to ProfileName Frequency is renamed to Interval Resource is renamed to SourceName Device MQTT The Device MQTT service specific [Driver] and [DeviceList.Protocols.mqtt] sections have changed for V2. The MQTT Broker connection configuration has been consolidated to just one MQTT Client and now supports SecretStore for the authentication credentials. Driver => MQTTBrokerInfo The [Driver] section has been replaced with the new [MQTTBrokerInfo] structured custom configuration section. The settings under [MQTTBrokerInfo.Writable] can be dynamically updated from Consul without needing to restart the service. 
Example - V1 Driver configuration # Driver configs [Driver] IncomingSchema = 'tcp' IncomingHost = '0.0.0.0' IncomingPort = '1883' IncomingUser = 'admin' IncomingPassword = 'public' IncomingQos = '0' IncomingKeepAlive = '3600' IncomingClientId = 'IncomingDataSubscriber' IncomingTopic = 'DataTopic' ResponseSchema = 'tcp' ResponseHost = '0.0.0.0' ResponsePort = '1883' ResponseUser = 'admin' ResponsePassword = 'public' ResponseQos = '0' ResponseKeepAlive = '3600' ResponseClientId = 'CommandResponseSubscriber' ResponseTopic = 'ResponseTopic' ConnEstablishingRetry = '10' ConnRetryWaitTime = '5' Example - V2 MQTTBrokerInfo configuration section [MQTTBrokerInfo] Schema = \"tcp\" Host = \"0.0.0.0\" Port = 1883 Qos = 0 KeepAlive = 3600 ClientId = \"device-mqtt\" CredentialsRetryTime = 120 # Seconds CredentialsRetryWait = 1 # Seconds ConnEstablishingRetry = 10 ConnRetryWaitTime = 5 # AuthMode is the MQTT broker authentication mechanism. # Currently, \"none\" and \"usernamepassword\" are the only AuthModes # supported by this service, and the secret keys are \"username\" and \"password\". AuthMode = \"none\" CredentialsPath = \"credentials\" IncomingTopic = \"DataTopic\" responseTopic = \"ResponseTopic\" [MQTTBrokerInfo.Writable] # ResponseFetchInterval specifies the retry # interval(milliseconds) to fetch the command response from the MQTT broker ResponseFetchInterval = 500 DeviceList.Protocols.mqtt Now that there is a single MQTT Broker connection, the configuration in [DeviceList.Protocols.mqtt] for each device has been greatly simplified to just the CommandTopic the device is subscribed to. Note that this topic needs to be a unique topic for each device defined. 
Example - V1 DeviceList.Protocols.mqtt device configuration section [DeviceList.Protocols] [DeviceList.Protocols.mqtt] Schema = 'tcp' Host = '0.0.0.0' Port = '1883' ClientId = 'CommandPublisher' User = 'admin' Password = 'public' Topic = 'CommandTopic' Example - V2 DeviceList.Protocols.mqtt device configuration section [DeviceList.Protocols] [DeviceList.Protocols.mqtt] CommandTopic = 'CommandTopic' SecretStore Secure See the Secret API reference for injecting authentication credentials into a Device Service's secure SecretStore. Example - Authentication credentials injected via Device MQTT's Secret endpoint curl -X POST http://localhost:59982/api/v2/secret -H 'Content-Type: application/json' -d '{ \"apiVersion\": \"v2\", \"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\", \"path\": \"credentials\", \"secretData\": [ { \"key\": \"username\", \"value\": \"mqtt-user\" }, { \"key\": \"password\", \"value\": \"mqtt-password\" } ]}' Note The service has to be running for this endpoint to be available. The following [MQTTBrokerInfo] settings from above allow a window of time to inject the credentials. CredentialsRetryTime = 120 # Seconds CredentialsRetryWait = 1 # Seconds Non Secure For non-secure mode the authentication credentials need to be added to the [InsecureSecrets] configuration section. Example - Authentication credentials in Device MQTT's [InsecureSecrets] configuration section [Writable.InsecureSecrets] [Writable.InsecureSecrets.MQTT] path = \"credentials\" [Writable.InsecureSecrets.MQTT.Secrets] username = \"mqtt-user\" password = \"mqtt-password\" Device Camera The Device Camera service specific [Driver] and [DeviceList.Protocols.HTTP] sections have changed for V2 due to the addition of the SecretStore capability and per camera credentials. The plain text camera credentials have been replaced with settings describing where to pull them from the SecretStore for each camera device specified. 
Driver Example V1 Driver configuration section [Driver] User = 'service' Password = 'Password!1' # Assign AuthMethod to 'digest' | 'basic' | 'none' # AuthMethod specifies the authentication method used when # requesting still images from the URL returned by the ONVIF # \"GetSnapshotURI\" command. All ONVIF requests will be # carried out using digest auth. AuthMethod = 'basic' Example V2 Driver configuration section [Driver] CredentialsRetryTime = '120' # Seconds CredentialsRetryWait = '1' # Seconds DeviceList.Protocols.HTTP Example V1 DeviceList.Protocols.HTTP device configuration section [DeviceList.Protocols] [DeviceList.Protocols.HTTP] Address = '192.168.2.105' Example V2 DeviceList.Protocols.HTTP device configuration section [DeviceList.Protocols] [DeviceList.Protocols.HTTP] Address = '192.168.2.105' # Assign AuthMethod to 'digest' | 'usernamepassword' | 'none' # AuthMethod specifies the authentication method used when # requesting still images from the URL returned by the ONVIF # \"GetSnapshotURI\" command. All ONVIF requests will be # carried out using digest auth. AuthMethod = 'usernamepassword' CredentialsPath = 'credentials001' SecretStore Secure See the Secret API reference for injecting authentication credentials into a Device Service's secure SecretStore. An entry is required for each camera that is configured with AuthMethod = 'usernamepassword' Example - Authentication credentials injected via Device Camera's Secret endpoint curl -X POST http://localhost:59985/api/v2/secret -H 'Content-Type: application/json' -d '{ \"apiVersion\": \"v2\", \"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\", \"path\": \"credentials001\", \"secretData\": [ { \"key\": \"username\", \"value\": \"camera-user\" }, { \"key\": \"password\", \"value\": \"camera-password\" } ]}' Note The service has to be running for this endpoint to be available. The following [Driver] settings from above allow a window of time to inject the credentials. 
CredentialsRetryTime = 120 # Seconds CredentialsRetryWait = 1 # Seconds Non Secure For non-secure mode the authentication credentials need to be added to the [InsecureSecrets] configuration section. An entry is required for each camera that is configured with AuthMethod = 'usernamepassword' Example - Authentication credentials in Device Camera's [InsecureSecrets] configuration section [Writable.InsecureSecrets.Camera001] path = \"credentials001\" [Writable.InsecureSecrets.Camera001.Secrets] username = \"camera-user\" password = \"camera-password\"","title":"V2 Migration Guide"},{"location":"microservices/device/V2Migration/#v2-migration-guide","text":"EdgeX 2.0 For the EdgeX 2.0 (Ireland) release there are many backward-breaking changes. These changes require custom Device Services and custom device profiles to be migrated. This section outlines the necessary steps for this migration.","title":"V2 Migration Guide"},{"location":"microservices/device/V2Migration/#custom-device-services","text":"","title":"Custom Device Services"},{"location":"microservices/device/V2Migration/#configuration","text":"The migration of any Device Service's configuration starts with migrating configuration common to all EdgeX services. See the V2 Migration of Common Configuration section for details. The remainder of this section focuses on configuration specific to Device Services.","title":"Configuration"},{"location":"microservices/device/V2Migration/#device","text":"Remove InitCmd , InitCmdArgs , RemoveCmd and RemoveCmdArgs Add UseMessageBus to determine whether events should be published to the MessageBus or sent via REST call. For C-based Device Services (eg, BACnet, Grove, CoAP): UpdateLastConnected , MaxCmdOps , DataTransform , Discovery and MaxCmdResultLen are dynamic settings - move these to [Writable.Device] Add DevicesDir and ProfilesDir as an indication of where to load the device profiles and pre-defined devices. 
The convention is to put them under the /res folder: Example configuration [ Device ] DevicesDir = \"./res/devices\" ProfilesDir = \"./res/profiles\" ... Example Project Structure +- res | +- devices | +- device1.toml | +- device2.toml | +- profiles | +- profile1.yml | +- profile2.yml | +- configuration.toml | +- ... +- main.go +- device-service-binary","title":"Device"},{"location":"microservices/device/V2Migration/#messagequeue","text":"Device Service is capable of pushing Events to the Message Bus instead of sending them via REST calls. A MessageQueue section is added in configuration to specify the details. MessageQueue Example [MessageQueue] Protocol = \"redis\" Host = \"localhost\" Port = 6379 Type = \"redis\" AuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure). SecretName = \"redisdb\" PublishTopicPrefix = \"edgex/events/device\" # /// will be added to this Publish Topic prefix [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"device-simple\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified See the Device Service MessageQueue section for details.","title":"MessageQueue"},{"location":"microservices/device/V2Migration/#code-golang","text":"","title":"Code (Golang)"},{"location":"microservices/device/V2Migration/#dependencies","text":"You first need to update the go.mod file to specify go 1.16 and the V2 versions of the Device SDK and any EdgeX go-mods directly used by your service. Note the extra /v2 for the modules. Example go.mod for V2 module < your service > go 1.16 require ( github . 
com / edgexfoundry / device - sdk - go / v2 v2 .0.0 github . com / edgexfoundry / go - mod - core - contracts / v2 v2 .0.0 ... ) Once that is complete then the import statements for these dependencies must be updated to include the /v2 in the path. Example import statements for V2 import ( ... \"github.com/edgexfoundry/device-sdk-go/v2/pkg/models\" \"github.com/edgexfoundry/go-mod-core-contracts/v2/common\" )","title":"Dependencies"},{"location":"microservices/device/V2Migration/#commandvalue","text":"CommandValue is redesigned to be simpler and more straightforward. A single Value with interface{} type is able to accommodate reading values of any supported type. As a result, you might notice the original API to create CommandValue is no longer working. In V2 all those API functions for creating CommandValues of different types are refactored into a generic function: Create CommandValue with string Type cv , err := models . NewCommandValue ( deviceResourceName , v2 . ValueTypeString , \"foobar\" ) if err != nil { ... } cv . Origin = time . Now (). UnixNano () cv . Tags [ \"foo\" ] = \"bar\" The 3rd argument in the function must be castable to the Type defined in the 2nd argument, otherwise there will be an error. See Data formats for supported data types in EdgeX. Device Service also supports Event Tagging ; the tags on the CommandValue will be copied to the Event.","title":"CommandValue"},{"location":"microservices/device/V2Migration/#code-c","text":"","title":"Code (C)"},{"location":"microservices/device/V2Migration/#dependencies_1","text":"The CSDK now has additional dependencies on the Redis client library (hiredis, hiredis-dev) and Paho MQTT (paho-mqtt-c-dev)","title":"Dependencies"},{"location":"microservices/device/V2Migration/#attribute-and-protocols-processing","text":"Four new callback functions are defined and implementations of them are required. 
Their purpose is to take the parsing of attributes and protocols out of the get/put handlers so that it is not done for every single request. The device service implementation should define a structure to hold the attributes of a resource in a form suitable for use with whatever access library is being used to communicate with the devices. A function should then be written which allocates and populates this structure, given a set of resource attributes held in a string map. Another function should be written which frees an instance of the structure and any associated elements. A similar pair of functions should be written to process ProtocolProperties to address a device. devsdk_address_t xxx_create_address (void *impl, const devsdk_protocols *protocols, iot_data_t **exception); void xxx_free_address (void *impl, devsdk_address_t address); devsdk_resource_attr_t xxx_create_resource_attr (void *impl, const iot_data_t *attributes, iot_data_t **exception); void xxx_free_resource_attr (void *impl, devsdk_resource_attr_t attr); In the event of an attribute or protocol set being invalid, the create function should return NULL and allocate a string value into the exception parameter indicating the nature of the problem - this will be logged by the SDK.","title":"Attribute and Protocols processing"},{"location":"microservices/device/V2Migration/#get-and-put-handlers","text":"The devname and protocols parameters are replaced by an object of type devsdk_device_t ; this contains name ( char * ) and address ( devsdk_address_t - see above) fields The resource name , type and attributes (the latter now represented as devsdk_resource_attr_t ) in a devsdk_commandrequest are now held in a devsdk_resource_t structure qparams is renamed to options and is now an iot_data_t map (string/string) options is also added to the put handler parameters","title":"Get and Put handlers"},{"location":"microservices/device/V2Migration/#callback-function-list","text":"The callback list structure has 
been made opaque. An instance of it to pass into the devsdk_service_new function is created by calling devsdk_callbacks_init . This takes as parameters the mandatory callback functions (init, get/set handlers, stop, create/free addr and create/free resource attr). Services which implement optional callbacks should set these using the relevant population functions: * devsdk_callbacks_set_discovery * devsdk_callbacks_set_reconfiguration * devsdk_callbacks_set_listeners * devsdk_callbacks_set_autoevent_handlers","title":"Callback function list"},{"location":"microservices/device/V2Migration/#misc","text":"edgex_free_device() now takes the devsdk_service_t as its first parameter Reflecting changes in the device profile (see below), the edgex_deviceresource struct now contains an edgex_propertyvalue directly, rather than via an edgex_profileproperty . The edgex_propertyvalue contains a new field char *units which replaces the old edgex_units structure.","title":"Misc"},{"location":"microservices/device/V2Migration/#device-profiles","text":"See Device Profile Reference for details; the SDK now allows both YAML and JSON formats.","title":"Device Profiles"},{"location":"microservices/device/V2Migration/#device-resource","text":"The properties field is simplified in device resource: units becomes a single string field and is optional. Float32 and Float64 types are both represented only in eNotation. 
Base64 encoding is removed so there is no floatEncoding field anymore. V1: deviceResources : - name : \"Xrotation\" description : \"X axis rotation rate\" properties : value : { type : \"Int32\" , readWrite : \"RW\" } units : { type : \"string\" , readWrite : \"R\" , defaultValue : \"degrees/sec\" } V2: deviceResources : - name : \"Xrotation\" description : \"X axis rotation rate\" properties : valueType : \"Int32\" readWrite : \"RW\"","title":"Device Resource"},{"location":"microservices/device/V2Migration/#device-command","text":"The get and set ResourceOperation fields are replaced with a single readWrite field to eliminate the duplicate definition. V1: deviceCommands : - name : \"Rotation\" get : - { operation : \"get\" , deviceResource : \"Xrotation\" } - { operation : \"get\" , deviceResource : \"Yrotation\" } - { operation : \"get\" , deviceResource : \"Zrotation\" } set : - { operation : \"set\" , deviceResource : \"Xrotation\" , parameter : \"0\" } - { operation : \"set\" , deviceResource : \"Yrotation\" , parameter : \"0\" } - { operation : \"set\" , deviceResource : \"Zrotation\" , parameter : \"0\" } V2: deviceCommands : - name : \"Rotation\" isHidden : false readWrite : \"RW\" resourceOperations : - { deviceResource : \"Xrotation\" , defaultValue : \"0\" } - { deviceResource : \"Yrotation\" , defaultValue : \"0\" } - { deviceResource : \"Zrotation\" , defaultValue : \"0\" }","title":"Device Command"},{"location":"microservices/device/V2Migration/#core-command","text":"The coreCommands section is removed in V2. We use the isHidden field in both deviceResource and deviceCommand to indicate whether it is exposed to the Command Service or not. isHidden defaults to false so all deviceResources and deviceCommands are able to be called via the Command Service REST API. 
Set isHidden to true if you don't want to expose them.","title":"Core Command"},{"location":"microservices/device/V2Migration/#devices","text":"","title":"Devices"},{"location":"microservices/device/V2Migration/#state","text":"In V2 the values of a device's operating state are changed from ENABLED / DISABLED to UP / DOWN . The additional state value UNKNOWN is added for future use.","title":"State"},{"location":"microservices/device/V2Migration/#pre-defined-devices","text":"In V2 pre-defined devices are in their own file; the SDK allows both TOML and JSON formats. Pre-defined devices [[DeviceList]] Name = \"Simple-Device01\" ProfileName = \"Simple-Device\" Description = \"Example of Simple Device\" Labels = [ \"industrial\" ] [DeviceList.Protocols] [DeviceList.Protocols.other] Address = \"simple01\" Port = \"300\" [[DeviceList.AutoEvents]] Interval = \"10s\" OnChange = false SourceName = \"Switch\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Image\" Notice that we renamed some fields: Profile is renamed to ProfileName Frequency is renamed to Interval Resource is renamed to SourceName","title":"Pre-defined Devices"},{"location":"microservices/device/V2Migration/#device-mqtt","text":"The Device MQTT service specific [Driver] and [DeviceList.Protocols.mqtt] sections have changed for V2. The MQTT Broker connection configuration has been consolidated to just one MQTT Client and now supports SecretStore for the authentication credentials.","title":"Device MQTT"},{"location":"microservices/device/V2Migration/#driver-mqttbrokerinfo","text":"The [Driver] section has been replaced with the new [MQTTBrokerInfo] structured custom configuration section. The settings under [MQTTBrokerInfo.Writable] can be dynamically updated from Consul without needing to restart the service. 
Example - V1 Driver configuration # Driver configs [Driver] IncomingSchema = 'tcp' IncomingHost = '0.0.0.0' IncomingPort = '1883' IncomingUser = 'admin' IncomingPassword = 'public' IncomingQos = '0' IncomingKeepAlive = '3600' IncomingClientId = 'IncomingDataSubscriber' IncomingTopic = 'DataTopic' ResponseSchema = 'tcp' ResponseHost = '0.0.0.0' ResponsePort = '1883' ResponseUser = 'admin' ResponsePassword = 'public' ResponseQos = '0' ResponseKeepAlive = '3600' ResponseClientId = 'CommandResponseSubscriber' ResponseTopic = 'ResponseTopic' ConnEstablishingRetry = '10' ConnRetryWaitTime = '5' Example - V2 MQTTBrokerInfo configuration section [MQTTBrokerInfo] Schema = \"tcp\" Host = \"0.0.0.0\" Port = 1883 Qos = 0 KeepAlive = 3600 ClientId = \"device-mqtt\" CredentialsRetryTime = 120 # Seconds CredentialsRetryWait = 1 # Seconds ConnEstablishingRetry = 10 ConnRetryWaitTime = 5 # AuthMode is the MQTT broker authentication mechanism. # Currently, \"none\" and \"usernamepassword\" are the only AuthModes # supported by this service, and the secret keys are \"username\" and \"password\". AuthMode = \"none\" CredentialsPath = \"credentials\" IncomingTopic = \"DataTopic\" responseTopic = \"ResponseTopic\" [MQTTBrokerInfo.Writable] # ResponseFetchInterval specifies the retry # interval(milliseconds) to fetch the command response from the MQTT broker ResponseFetchInterval = 500","title":"Driver => MQTTBrokerInfo"},{"location":"microservices/device/V2Migration/#devicelistprotocolsmqtt","text":"Now that there is a single MQTT Broker connection, the configuration in [DeviceList.Protocols.mqtt] for each device has been greatly simplified to just the CommandTopic the device is subscribed to. Note that this topic needs to be a unique topic for each device defined. 
Example - V1 DeviceList.Protocols.mqtt device configuration section [DeviceList.Protocols] [DeviceList.Protocols.mqtt] Schema = 'tcp' Host = '0.0.0.0' Port = '1883' ClientId = 'CommandPublisher' User = 'admin' Password = 'public' Topic = 'CommandTopic' Example - V2 DeviceList.Protocols.mqtt device configuration section [DeviceList.Protocols] [DeviceList.Protocols.mqtt] CommandTopic = 'CommandTopic'","title":"DeviceList.Protocols.mqtt"},{"location":"microservices/device/V2Migration/#secretstore","text":"","title":"SecretStore"},{"location":"microservices/device/V2Migration/#secure","text":"See the Secret API reference for injecting authentication credentials into a Device Service's secure SecretStore. Example - Authentication credentials injected via Device MQTT's Secret endpoint curl -X POST http://localhost:59982/api/v2/secret -H 'Content-Type: application/json' -d '{ \"apiVersion\": \"v2\", \"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\", \"path\": \"credentials\", \"secretData\": [ { \"key\": \"username\", \"value\": \"mqtt-user\" }, { \"key\": \"password\", \"value\": \"mqtt-password\" } ]}' Note The service has to be running for this endpoint to be available. The following [MQTTBrokerInfo] settings from above allow a window of time to inject the credentials. CredentialsRetryTime = 120 # Seconds CredentialsRetryWait = 1 # Seconds","title":"Secure"},{"location":"microservices/device/V2Migration/#non-secure","text":"For non-secure mode the authentication credentials need to be added to the [InsecureSecrets] configuration section. 
Example - Authentication credentials in Device MQTT's [InsecureSecrets] configuration section [Writable.InsecureSecrets] [Writable.InsecureSecrets.MQTT] path = \"credentials\" [Writable.InsecureSecrets.MQTT.Secrets] username = \"mqtt-user\" password = \"mqtt-password\"","title":"Non Secure"},{"location":"microservices/device/V2Migration/#device-camera","text":"The Device Camera service specific [Driver] and [DeviceList.Protocols.HTTP] sections have changed for V2 due to the addition of the SecretStore capability and per camera credentials. The plain text camera credentials have been replaced with settings describing where to pull them from the SecretStore for each camera device specified.","title":"Device Camera"},{"location":"microservices/device/V2Migration/#driver","text":"Example V1 Driver configuration section [Driver] User = 'service' Password = 'Password!1' # Assign AuthMethod to 'digest' | 'basic' | 'none' # AuthMethod specifies the authentication method used when # requesting still images from the URL returned by the ONVIF # \"GetSnapshotURI\" command. All ONVIF requests will be # carried out using digest auth. AuthMethod = 'basic' Example V2 Driver configuration section [Driver] CredentialsRetryTime = '120' # Seconds CredentialsRetryWait = '1' # Seconds","title":"Driver"},{"location":"microservices/device/V2Migration/#devicelistprotocolshttp","text":"Example V1 DeviceList.Protocols.HTTP device configuration section [DeviceList.Protocols] [DeviceList.Protocols.HTTP] Address = '192.168.2.105' Example V2 DeviceList.Protocols.HTTP device configuration section [DeviceList.Protocols] [DeviceList.Protocols.HTTP] Address = '192.168.2.105' # Assign AuthMethod to 'digest' | 'usernamepassword' | 'none' # AuthMethod specifies the authentication method used when # requesting still images from the URL returned by the ONVIF # \"GetSnapshotURI\" command. All ONVIF requests will be # carried out using digest auth. 
AuthMethod = 'usernamepassword' CredentialsPath = 'credentials001'","title":"DeviceList.Protocols.HTTP"},{"location":"microservices/device/V2Migration/#secretstore_1","text":"","title":"SecretStore"},{"location":"microservices/device/V2Migration/#secure_1","text":"See the Secret API reference for injecting authentication credentials into a Device Service's secure SecretStore. An entry is required for each camera that is configured with AuthMethod = 'usernamepassword' Example - Authentication credentials injected via Device Camera's Secret endpoint curl -X POST http://localhost:59985/api/v2/secret -H 'Content-Type: application/json' -d '{ \"apiVersion\": \"v2\", \"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\", \"path\": \"credentials001\", \"secretData\": [ { \"key\": \"username\", \"value\": \"camera-user\" }, { \"key\": \"password\", \"value\": \"camera-password\" } ]}' Note The service has to be running for this endpoint to be available. The following [Driver] settings from above allow a window of time to inject the credentials. CredentialsRetryTime = 120 # Seconds CredentialsRetryWait = 1 # Seconds","title":"Secure"},{"location":"microservices/device/V2Migration/#non-secure_1","text":"For non-secure mode the authentication credentials need to be added to the [InsecureSecrets] configuration section. An entry is required for each camera that is configured with AuthMethod = 'usernamepassword' Example - Authentication credentials in Device Camera's [InsecureSecrets] configuration section [Writable.InsecureSecrets.Camera001] path = \"credentials001\" [Writable.InsecureSecrets.Camera001.Secrets] username = \"camera-user\" password = \"camera-password\"","title":"Non Secure"},{"location":"microservices/device/profile/Ch-DeviceProfile/","text":"Device Profile The device profile describes a type of device within the EdgeX system. 
Each device managed by a device service has an association with a device profile, which defines that device type in terms of the operations which it supports. For a full list of device profile fields and their required values see the device profile reference . For a detailed look at the device profile model and all its properties, see the metadata device profile data model . Identification The profile contains various identification fields. The Name field is required and must be unique in an EdgeX deployment. Other fields are optional - they are not used by device services but may be populated for informational purposes: Description Manufacturer Model Labels DeviceResources A deviceResource specifies a sensor value within a device that may be read from or written to either individually or as part of a deviceCommand. It has a name for identification and a description for informational purposes. The device service allows access to deviceResources via its device REST endpoint. The Attributes in a deviceResource are the device-service-specific parameters required to access the particular value. Each device service implementation will have its own set of named values that are required here, for example a BACnet device service may need an Object Identifier and a Property Identifier whereas a Bluetooth device service could use a UUID to identify a value. The Properties of a deviceResource describe the value and optionally request some simple processing to be performed on it. The following fields are available: valueType - Required. The data type of the value. Supported types are Bool , Int8 - Int64 , Uint8 - Uint64 , Float32 , Float64 , String , Binary , Object and arrays of the primitive types (ints, floats, bool). Arrays are specified as eg. Float32Array , BoolArray etc. readWrite - R , RW , or W indicating whether the value is readable or writable. units - indicate the units of the value, eg Amperes, degrees C, etc. 
minimum - the minimum value allowed for a SET command; an out-of-range value will result in an error. maximum - the maximum value allowed for a SET command; an out-of-range value will result in an error. defaultValue - a value used for SET commands which do not specify one. assertion - a string value to which a reading (after processing) is compared. If the reading is not the same as the assertion value, the device's operating state will be set to disabled. This can be useful for health checks. base - a value to be raised to the power of the raw reading before it is returned. scale - a factor by which to multiply a reading before it is returned. offset - a value to be added to a reading before it is returned. mask - a binary mask which will be applied to an integer reading. shift - a number of bits by which an integer reading will be shifted right. The processing defined by base, scale, offset, mask and shift is applied in that order. This is done within the SDK. A reverse transformation is applied by the SDK to incoming data on set operations (NB mask transforms on set are NYI) DeviceCommands DeviceCommands define access to reads and writes for multiple simultaneous device resources. Each named deviceCommand should contain a number of resourceOperations . DeviceCommands may be useful when readings are logically related, for example with a 3-axis accelerometer it is helpful to read all axes together. A resourceOperation consists of the following properties: deviceResource - the name of the deviceResource to access. defaultValue - optional, a value that will be used if a SET command does not specify one. mappings - optional, allows readings of String type to be re-mapped. The device service allows access to deviceCommands via the same device REST endpoint as is used to access deviceResources. EdgeX 2.0 For the EdgeX 2.0 (Ireland) release coreCommands section is removed and both deviceResources and deviceCommands are available via the Core Command Service by default.
Set isHidden field to true under deviceResource or deviceCommand to disable the outward-facing API.","title":"Device Profile"},{"location":"microservices/device/profile/Ch-DeviceProfile/#device-profile","text":"The device profile describes a type of device within the EdgeX system. Each device managed by a device service has an association with a device profile, which defines that device type in terms of the operations which it supports. For a full list of device profile fields and their required values see the device profile reference . For a detailed look at the device profile model and all its properties, see the metadata device profile data model .","title":"Device Profile"},{"location":"microservices/device/profile/Ch-DeviceProfile/#identification","text":"The profile contains various identification fields. The Name field is required and must be unique in an EdgeX deployment. Other fields are optional - they are not used by device services but may be populated for informational purposes: Description Manufacturer Model Labels","title":"Identification"},{"location":"microservices/device/profile/Ch-DeviceProfile/#deviceresources","text":"A deviceResource specifies a sensor value within a device that may be read from or written to either individually or as part of a deviceCommand. It has a name for identification and a description for informational purposes. The device service allows access to deviceResources via its device REST endpoint. The Attributes in a deviceResource are the device-service-specific parameters required to access the particular value. Each device service implementation will have its own set of named values that are required here, for example a BACnet device service may need an Object Identifier and a Property Identifier whereas a Bluetooth device service could use a UUID to identify a value. The Properties of a deviceResource describe the value and optionally request some simple processing to be performed on it. 
The following fields are available: valueType - Required. The data type of the value. Supported types are Bool , Int8 - Int64 , Uint8 - Uint64 , Float32 , Float64 , String , Binary , Object and arrays of the primitive types (ints, floats, bool). Arrays are specified as eg. Float32Array , BoolArray etc. readWrite - R , RW , or W indicating whether the value is readable or writable. units - indicate the units of the value, eg Amperes, degrees C, etc. minimum - the minimum value allowed for a SET command; an out-of-range value will result in an error. maximum - the maximum value allowed for a SET command; an out-of-range value will result in an error. defaultValue - a value used for SET commands which do not specify one. assertion - a string value to which a reading (after processing) is compared. If the reading is not the same as the assertion value, the device's operating state will be set to disabled. This can be useful for health checks. base - a value to be raised to the power of the raw reading before it is returned. scale - a factor by which to multiply a reading before it is returned. offset - a value to be added to a reading before it is returned. mask - a binary mask which will be applied to an integer reading. shift - a number of bits by which an integer reading will be shifted right. The processing defined by base, scale, offset, mask and shift is applied in that order. This is done within the SDK. A reverse transformation is applied by the SDK to incoming data on set operations (NB mask transforms on set are NYI)","title":"DeviceResources"},{"location":"microservices/device/profile/Ch-DeviceProfile/#devicecommands","text":"DeviceCommands define access to reads and writes for multiple simultaneous device resources. Each named deviceCommand should contain a number of resourceOperations . DeviceCommands may be useful when readings are logically related, for example with a 3-axis accelerometer it is helpful to read all axes together.
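As a sketch of such a grouping (the resource and command names here are illustrative, not taken from a shipped profile), a 3-axis accelerometer profile could declare:

```
deviceCommands:
  - name: Acceleration
    isHidden: false
    readWrite: R
    resourceOperations:
      - { deviceResource: Xaxis }
      - { deviceResource: Yaxis }
      - { deviceResource: Zaxis }
```

A single GET of Acceleration then returns one event containing readings for all three deviceResources together.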
A resourceOperation consists of the following properties: deviceResource - the name of the deviceResource to access. defaultValue - optional, a value that will be used if a SET command does not specify one. mappings - optional, allows readings of String type to be re-mapped. The device service allows access to deviceCommands via the same device REST endpoint as is used to access deviceResources. EdgeX 2.0 For the EdgeX 2.0 (Ireland) release coreCommands section is removed and both deviceResources and deviceCommands are available via the Core Command Service by default. Set isHidden field to true under deviceResource or deviceCommand to disable the outward-facing API.","title":"DeviceCommands"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/","text":"Device Profile Reference This chapter details the structure of a Device Profile and allowable values for its fields. Device Profile Field Name Type Required? Notes name String Y Must be unique in the EdgeX deployment. Only allow unreserved characters as defined in https://tools.ietf.org/html/rfc3986#section-2.3. description String N manufacturer String N model String N labels Array of String N deviceResources Array of DeviceResource Y deviceCommands Array of DeviceCommand N DeviceResource Field Name Type Required? Notes name String Y Must be unique in the EdgeX deployment. Only allow unreserved characters as defined in https://tools.ietf.org/html/rfc3986#section-2.3. description String N isHidden Bool N Expose the DeviceResource to Command Service or not, default false tag String N attributes String-Interface Map N Each Device Service should define required and optional keys properties ResourceProperties Y ResourceProperties Field Name Type Required? 
Notes valueType Enum Y Uint8 , Uint16 , Uint32 , Uint64 , Int8 , Int16 , Int32 , Int64 , Float32 , Float64 , Bool , String , Binary , Object , Uint8Array , Uint16Array , Uint32Array , Uint64Array , Int8Array , Int16Array , Int32Array , Int64Array , Float32Array , Float64Array , BoolArray readWrite Enum Y R , W , RW units String N Developer is open to define units of value minimum String N Error if SET command value out of minimum range maximum String N Error if SET command value out of maximum range defaultValue String N If present, should be compatible with the Type field mask String N Only valid where Type is one of the unsigned integer types shift String N Only valid where Type is one of the unsigned integer types scale String N Only valid where Type is one of the integer or float types offset String N Only valid where Type is one of the integer or float types base String N Only valid where Type is one of the integer or float types assertion String N String value to which the reading is compared mediaType String N Only required when valueType is Binary DeviceCommand Field Name Type Required? Notes name String Y Must be unique in this profile. A DeviceCommand with a single DeviceResource is redundant unless renaming and/or restricting R/W access. For example DeviceResource is RW, but DeviceCommand is read-only. Only allow unreserved characters as defined in https://tools.ietf.org/html/rfc3986#section-2.3. isHidden Bool N Expose the DeviceCommand to Command Service or not, default false readWrite Enum Y R , W , RW resourceOperations Array of ResourceOperation Y ResourceOperation Field Name Type Required? 
Notes deviceResource String Y Must name a DeviceResource in this profile defaultValue String N If present, should be compatible with the Type field of the named DeviceResource mappings String-String Map N Map the GET resourceOperation value to another string value","title":"Device Profile Reference"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#device-profile-reference","text":"This chapter details the structure of a Device Profile and allowable values for its fields.","title":"Device Profile Reference"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#device-profile","text":"Field Name Type Required? Notes name String Y Must be unique in the EdgeX deployment. Only allow unreserved characters as defined in https://tools.ietf.org/html/rfc3986#section-2.3. description String N manufacturer String N model String N labels Array of String N deviceResources Array of DeviceResource Y deviceCommands Array of DeviceCommand N","title":"Device Profile"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#deviceresource","text":"Field Name Type Required? Notes name String Y Must be unique in the EdgeX deployment. Only allow unreserved characters as defined in https://tools.ietf.org/html/rfc3986#section-2.3. description String N isHidden Bool N Expose the DeviceResource to Command Service or not, default false tag String N attributes String-Interface Map N Each Device Service should define required and optional keys properties ResourceProperties Y","title":"DeviceResource"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#resourceproperties","text":"Field Name Type Required? 
Notes valueType Enum Y Uint8 , Uint16 , Uint32 , Uint64 , Int8 , Int16 , Int32 , Int64 , Float32 , Float64 , Bool , String , Binary , Object , Uint8Array , Uint16Array , Uint32Array , Uint64Array , Int8Array , Int16Array , Int32Array , Int64Array , Float32Array , Float64Array , BoolArray readWrite Enum Y R , W , RW units String N Developer is open to define units of value minimum String N Error if SET command value out of minimum range maximum String N Error if SET command value out of maximum range defaultValue String N If present, should be compatible with the Type field mask String N Only valid where Type is one of the unsigned integer types shift String N Only valid where Type is one of the unsigned integer types scale String N Only valid where Type is one of the integer or float types offset String N Only valid where Type is one of the integer or float types base String N Only valid where Type is one of the integer or float types assertion String N String value to which the reading is compared mediaType String N Only required when valueType is Binary","title":"ResourceProperties"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#devicecommand","text":"Field Name Type Required? Notes name String Y Must be unique in this profile. A DeviceCommand with a single DeviceResource is redundant unless renaming and/or restricting R/W access. For example DeviceResource is RW, but DeviceCommand is read-only. Only allow unreserved characters as defined in https://tools.ietf.org/html/rfc3986#section-2.3. isHidden Bool N Expose the DeviceCommand to Command Service or not, default false readWrite Enum Y R , W , RW resourceOperations Array of ResourceOperation Y","title":"DeviceCommand"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#resourceoperation","text":"Field Name Type Required? 
Notes deviceResource String Y Must name a DeviceResource in this profile defaultValue String N If present, should be compatible with the Type field of the named DeviceResource mappings String-String Map N Map the GET resourceOperation value to another string value","title":"ResourceOperation"},{"location":"microservices/device/sdk/Ch-DeviceSDK/","text":"Device Services SDK Introduction to the SDKs EdgeX provides two software development kits (SDKs) to help developers create new device services. While the EdgeX community and the larger EdgeX ecosystem provide a number of open source and commercially available device services for use with EdgeX, there is no way that every protocol and every sensor can be accommodated and connected to EdgeX with a pre-existing device service. Even if all the device service connectivity were provided, your use case, sensor or security infrastructure may require customization. Therefore, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity. EdgeX is mostly written in Go and C. There is a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, alternate language SDKs may be provided by the community or made available by the larger ecosystem. The SDKs are really libraries to be incorporated into a new micro service. They make writing a new device service much easier. By importing the SDK library of choice into your new device service project, you can focus on the details associated with getting and manipulating sensor data from your device via the specific protocol of your device. Other details, such as initialization of the device service, getting the service configured, sending sensor data to core data, managing communications with core metadata, and much more are handled by the code in the SDK library. 
The code in the SDK also helps to ensure your device service adheres to rules and standards of EdgeX \u2013 such as making sure the service registers with the EdgeX registry service when it starts up. The EdgeX Foundry Device Service Software Development Kit (SDK) takes the developer through the step-by-step process to create an EdgeX Foundry device service micro service. Then set up the SDK and execute the code to generate the device service scaffolding to get you started using EdgeX. The Device Service SDK supports: Synchronous read and write operations Asynchronous device data collection Initialization and deconstruction of Driver Interface Initialization and destruction of Device Connection Framework for automated Provisioning Mechanism Support for multiple classes of Devices with Profiles Support for sets of actions triggered by a command Cached responses to queries Writing a Device Service Writing a new Device Service in Go Writing a new Device Service in C","title":"Device Services SDK"},{"location":"microservices/device/sdk/Ch-DeviceSDK/#device-services-sdk","text":"","title":"Device Services SDK"},{"location":"microservices/device/sdk/Ch-DeviceSDK/#introduction-to-the-sdks","text":"EdgeX provides two software development kits (SDKs) to help developers create new device services. While the EdgeX community and the larger EdgeX ecosystem provide a number of open source and commercially available device services for use with EdgeX, there is no way that every protocol and every sensor can be accommodated and connected to EdgeX with a pre-existing device service. Even if all the device service connectivity were provided, your use case, sensor or security infrastructure may require customization. Therefore, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity. EdgeX is mostly written in Go and C. There is a device service SDK written in both Go and C to support the more popular languages used in EdgeX today.
In the future, alternate language SDKs may be provided by the community or made available by the larger ecosystem. The SDKs are really libraries to be incorporated into a new micro service. They make writing a new device service much easier. By importing the SDK library of choice into your new device service project, you can focus on the details associated with getting and manipulating sensor data from your device via the specific protocol of your device. Other details, such as initialization of the device service, getting the service configured, sending sensor data to core data, managing communications with core metadata, and much more are handled by the code in the SDK library. The code in the SDK also helps to ensure your device service adheres to rules and standards of EdgeX \u2013 such as making sure the service registers with the EdgeX registry service when it starts up. The EdgeX Foundry Device Service Software Development Kit (SDK) takes the developer through the step-by-step process to create an EdgeX Foundry device service micro service. Then set up the SDK and execute the code to generate the device service scaffolding to get you started using EdgeX.
The Device Service SDK supports: Synchronous read and write operations Asynchronous device data collection Initialization and deconstruction of Driver Interface Initialization and destruction of Device Connection Framework for automated Provisioning Mechanism Support for multiple classes of Devices with Profiles Support for sets of actions triggered by a command Cached responses to queries","title":"Introduction to the SDKs"},{"location":"microservices/device/sdk/Ch-DeviceSDK/#writing-a-device-service","text":"Writing a new Device Service in Go Writing a new Device Service in C","title":"Writing a Device Service"},{"location":"microservices/device/virtual/Ch-VirtualDevice/","text":"Virtual Device Introduction The virtual device service simulates different kinds of devices to generate events and readings to the core data micro service, and users send commands and get responses through the command and control micro service. These features of the virtual device services are useful when executing functional or performance tests without having any real devices. The virtual device service, built in Go and based on the device service Go SDK, can simulate sensors by generating data of the following data types: Bool, BoolArray Int8, Int16, Int32, Int64, Int8Array, Int16Array, Int32Array, Int64Array Uint8, Uint16, Uint32, Uint64, Uint8Array, Uint16Array, Uint32Array, Uint64Array Float32, Float64, Float32Array, Float64Array Binary By default, the virtual device service is included and configured to run with all EdgeX Docker Compose files. This allows users to have a complete EdgeX system up and running - with simulated data from the virtual device service - in minutes. 
Using the Virtual Device Service The virtual device service contains 5 pre-defined devices as random value generators: Random-Boolean-Device Random-Integer-Device Random-UnsignedInteger-Device Random-Float-Device Random-Binary-Device These devices are created by the virtual device service in core metadata when the service first initializes. These devices are defined by device profiles that ship with the virtual device service. Each virtual device causes the generation of one to many values of the type specified by the device name. For example, Random-Integer-Device generates integer values: Int8, Int16, Int32 and Int64. As with all devices, the deviceResources in the associated device profile of the device define what values are produced by the device service. In the case of Random-Integer-Device, the Int8, Int16, Int32 and Int64 values are defined as deviceResources (see the device profile ). Additionally, there is an accompanying deviceResource for each of the generated value deviceResource. Each deviceResource has an associated EnableRandomization_X deviceResource. In the case of the integer deviceResources above, there are the associated EnableRandomization_IntX deviceResources (see the device profile ). The EnableRandomization deviceResources are boolean values, and when set to true, the associated simulated sensor value is generated by the device service. When the EnableRandomization_IntX value is set to false, then the associated simulated sensor value is fixed. Info The Enable_Randomization attribute of the resource is automatically set to false when you use a PUT command to set a specified generated value. Further, the minimum and maximum values of a generated value deviceResource can be specified in the device profile. Below, Int8 is set to be between -100 and 100.
deviceResources : - name : \"Int8\" isHidden : false description : \"Generate random int8 value\" properties : valueType : \"Int8\" readWrite : \"RW\" minimum : \"-100\" maximum : \"100\" defaultValue : \"0\" For the binary deviceResources, values are generated by the function rand.Read(p []byte) in the Go math/rand package. The []byte size is fixed to MaxBinaryBytes/1000. Core Command and the Virtual Device Service Use the following core command service APIs to execute commands against the virtual device service for the specified devices. Both GET and PUT commands can be issued with these APIs. GET commands request the next generated value, while PUT commands allow you to disable randomization (EnableRandomization) and set the fixed values to be returned by the device. http://[host]:59882/api/v2/device/name/Random-Boolean-Device http://[host]:59882/api/v2/device/name/Random-Integer-Device http://[host]:59882/api/v2/device/name/Random-UnsignedInteger-Device http://[host]:59882/api/v2/device/name/Random-Float-Device http://[host]:59882/api/v2/device/name/Random-Binary-Device Note Port 59882 is the default port for the core command service. Configuration Properties Please refer to the general Common Configuration documentation for configuration properties common to all services. For each device, the virtual device service will contain a DeviceList with associated Protocols and AutoEvents as shown by the example below.
DeviceList Property Example Value Description properties used in defining the static provisioning of each of the virtual devices Name 'Random-Integer-Device' name of the virtual device ProfileName 'Random-Integer-Device' device profile that defines the resources and commands of the virtual device Description 'Example of Device Virtual' description of the virtual device Labels ['device-virtual-example'] labels array used for searching for virtual devices DeviceList/DeviceList.Protocols/DeviceList.Protocols.other Property Example Value Description Address 'device-virtual-int-01' address for the virtual device Protocol '300' DeviceList/DeviceList.AutoEvents Property Default Value Description properties used to define how often an event/reading is scheduled for collection to send to core data from the virtual device Interval '15s' every 15 seconds OnChange false collect data regardless of change SourceName 'Int8' deviceResource to collect - in this case the Int8 resource API Reference Device Service - SDK- API Reference","title":"Virtual Device"},{"location":"microservices/device/virtual/Ch-VirtualDevice/#virtual-device","text":"","title":"Virtual Device"},{"location":"microservices/device/virtual/Ch-VirtualDevice/#introduction","text":"The virtual device service simulates different kinds of devices to generate events and readings to the core data micro service, and users send commands and get responses through the command and control micro service. These features of the virtual device services are useful when executing functional or performance tests without having any real devices.
The virtual device service, built in Go and based on the device service Go SDK, can simulate sensors by generating data of the following data types: Bool, BoolArray Int8, Int16, Int32, Int64, Int8Array, Int16Array, Int32Array, Int64Array Uint8, Uint16, Uint32, Uint64, Uint8Array, Uint16Array, Uint32Array, Uint64Array Float32, Float64, Float32Array, Float64Array Binary By default, the virtual device service is included and configured to run with all EdgeX Docker Compose files. This allows users to have a complete EdgeX system up and running - with simulated data from the virtual device service - in minutes.","title":"Introduction"},{"location":"microservices/device/virtual/Ch-VirtualDevice/#using-the-virtual-device-service","text":"The virtual device service contains 5 pre-defined devices as random value generators: Random-Boolean-Device Random-Integer-Device Random-UnsignedInteger-Device Random-Float-Device Random-Binary-Device These devices are created by the virtual device service in core metadata when the service first initializes. These devices are defined by device profiles that ship with the virtual device service. Each virtual device causes the generation of one to many values of the type specified by the device name. For example, Random-Integer-Device generates integer values: Int8, Int16, Int32 and Int64. As with all devices, the deviceResources in the associated device profile of the device define what values are produced by the device service. In the case of Random-Integer-Device, the Int8, Int16, Int32 and Int64 values are defined as deviceResources (see the device profile ). Additionally, there is an accompanying deviceResource for each of the generated value deviceResource. Each deviceResource has an associated EnableRandomization_X deviceResource. In the case of the integer deviceResources above, there are the associated EnableRandomization_IntX deviceResources (see the device profile ).
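For example, a PUT issued through core command can fix a generated value at a constant (shown here in the same style as the other curl examples in these docs; the host, the value 42, and the exact endpoint path are illustrative assumptions, not confirmed by this page): curl -X PUT http://localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 -H 'Content-Type: application/json' -d '{ \"Int8\": \"42\" }' Per the Info note on this page, such a PUT automatically sets the resource's Enable_Randomization to false, so subsequent GETs return the fixed value 42 until randomization is re-enabled.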
The EnableRandomization deviceResources are boolean values, and when set to true, the associated simulated sensor value is generated by the device service. When the EnableRandomization_IntX value is set to false, then the associated simulated sensor value is fixed. Info The Enable_Randomization attribute of the resource is automatically set to false when you use a PUT command to set a specified generated value. Further, the minimum and maximum values of a generated value deviceResource can be specified in the device profile. Below, Int8 is set to be between -100 and 100. deviceResources : - name : \"Int8\" isHidden : false description : \"Generate random int8 value\" properties : valueType : \"Int8\" readWrite : \"RW\" minimum : \"-100\" maximum : \"100\" defaultValue : \"0\" For the binary deviceResources, values are generated by the function rand.Read(p []byte) in the Go math/rand package. The []byte size is fixed to MaxBinaryBytes/1000.","title":"Using the Virtual Device Service"},{"location":"microservices/device/virtual/Ch-VirtualDevice/#core-command-and-the-virtual-device-service","text":"Use the following core command service APIs to execute commands against the virtual device service for the specified devices. Both GET and PUT commands can be issued with these APIs. GET commands request the next generated value, while PUT commands allow you to disable randomization (EnableRandomization) and set the fixed values to be returned by the device.
http://[host]:59882/api/v2/device/name/Random-Boolean-Device http://[host]:59882/api/v2/device/name/Random-Integer-Device http://[host]:59882/api/v2/device/name/Random-UnsignedInteger-Device http://[host]:59882/api/v2/device/name/Random-Float-Device http://[host]:59882/api/v2/device/name/Random-Binary-Device Note Port 59882 is the default port for the core command service.","title":"Core Command and the Virtual Device Service"},{"location":"microservices/device/virtual/Ch-VirtualDevice/#configuration-properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services. For each device, the virtual device service will contain a DeviceList with associated Protocols and AutoEvents as shown by the example below. DeviceList Property Example Value Description properties used in defining the static provisioning of each of the virtual devices Name 'Random-Integer-Device' name of the virtual device ProfileName 'Random-Integer-Device' device profile that defines the resources and commands of the virtual device Description 'Example of Device Virtual' description of the virtual device Labels ['device-virtual-example'] labels array used for searching for virtual devices DeviceList/DeviceList.Protocols/DeviceList.Protocols.other Property Example Value Description Address 'device-virtual-int-01' address for the virtual device Protocol '300' DeviceList/DeviceList.AutoEvents Property Default Value Description properties used to define how often an event/reading is scheduled for collection to send to core data from the virtual device Interval '15s' every 15 seconds OnChange false collect data regardless of change SourceName 'Int8' deviceResource to collect - in this case the Int8 resource","title":"Configuration Properties"},{"location":"microservices/device/virtual/Ch-VirtualDevice/#api-reference","text":"Device Service - SDK- API Reference","title":"API Reference"},{"location":"microservices/general/","text":"Cross
Cutting Concerns Event Tagging In an edge solution, it is likely that several instances of EdgeX are all sending edge data into a central location (enterprise system, cloud provider, etc.) In these circumstances, it will be critical to associate the data with its origin. That origin could be specified by the GPS location of the sensor, the name or identification of the sensor, the name or identification of some edge gateway that originally collected the data, or many other means. EdgeX provides the means to \u201ctag\u201d the event data from any point in the system. The Event object has a Tags property which is a key/value pair map that allows any service that creates or otherwise handles events to add custom information to the Event in order to help identify its origin or otherwise label it before it is sent to the north side . For example, a device service could populate the Tags property with latitude and longitude key/value pairs of the physical location of the sensor when the Event is created to send sensed information to Core Data. Application Service Configurable When the Event gets to the Application Service Configurable , for example, the service has an optional function (defined by Writable.Pipeline.Functions.AddTags in configuration) that will add additional key/value pairs to the Event Tags . The key and value for the additional tag are provided in configuration (as shown by the example below). Multiple tags can be provided, separated by commas. [Writable.Pipeline.Functions.AddTags] [Writable.Pipeline.Functions.AddTags.Parameters] tags = \"GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603\" Custom Application Service In the case of a custom application service , an AddTags function can be used to add a collection of specified tags to the Event's Tags collection (see Built in Transforms/Functions ) If the Event already has Tags when it arrives at the application service, then the configured tags will be added to the Tags map. 
If the configured tags have the same key as an existing key in the Tags map, then the configured key/value will override what is already in the Event Tags map.","title":"Cross Cutting Concerns"},{"location":"microservices/general/#cross-cutting-concerns","text":"","title":"Cross Cutting Concerns"},{"location":"microservices/general/#event-tagging","text":"In an edge solution, it is likely that several instances of EdgeX are all sending edge data into a central location (enterprise system, cloud provider, etc.) In these circumstances, it will be critical to associate the data with its origin. That origin could be specified by the GPS location of the sensor, the name or identification of the sensor, the name or identification of some edge gateway that originally collected the data, or many other means. EdgeX provides the means to \u201ctag\u201d the event data from any point in the system. The Event object has a Tags property which is a key/value pair map that allows any service that creates or otherwise handles events to add custom information to the Event in order to help identify its origin or otherwise label it before it is sent to the north side . For example, a device service could populate the Tags property with latitude and longitude key/value pairs of the physical location of the sensor when the Event is created to send sensed information to Core Data.","title":"Event Tagging"},{"location":"microservices/general/#application-service-configurable","text":"When the Event gets to the Application Service Configurable , for example, the service has an optional function (defined by Writable.Pipeline.Functions.AddTags in configuration) that will add additional key/value pairs to the Event Tags . The key and value for the additional tag are provided in configuration (as shown by the example below). Multiple tags can be provided, separated by commas. 
[Writable.Pipeline.Functions.AddTags] [Writable.Pipeline.Functions.AddTags.Parameters] tags = \"GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603\"","title":"Application Service Configurable"},{"location":"microservices/general/#custom-application-service","text":"In the case of a custom application service , an AddTags function can be used to add a collection of specified tags to the Event's Tags collection (see Built in Transforms/Functions ) If the Event already has Tags when it arrives at the application service, then the configured tags will be added to the Tags map. If the configured tags have the same key as an existing key in the Tags map, then the configured key/value will override what is already in the Event Tags map.","title":"Custom Application Service"},{"location":"microservices/support/Ch-SupportingServices/","text":"Supporting Services Microservices The supporting services encompass a wide range of micro services to include edge analytics (also known as local analytics). Micro services in the supporting services layer perform normal software application duties such as scheduling and notifications/alerting . These services often need some amount of core services to function. In all cases, consider supporting services optional. Leave these services out of an EdgeX deployment depending on use case needs and system resources. Supporting services include: Rules Engine : the reference implementation edge analytics service that performs if-then conditional actuation at the edge based on sensor data collected by the EdgeX instance. Replace or augment this service with use case specific analytics capability. Scheduler : an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration specified time, the service will call on any EdgeX service API URL via REST to trigger an operation. 
For example, at appointed times, the scheduler service calls on core data APIs to expunge old sensed events already exported out of EdgeX. Alerts and Notifications : provides EdgeX services with a central facility to send out an alert or notification. These are notices sent to another system or to a person monitoring the EdgeX instance (internal service communications are often handled more directly).","title":"Supporting Services Microservices"},{"location":"microservices/support/Ch-SupportingServices/#supporting-services-microservices","text":"The supporting services encompass a wide range of micro services to include edge analytics (also known as local analytics). Micro services in the supporting services layer perform normal software application duties such as scheduling and notifications/alerting . These services often need some amount of core services to function. In all cases, consider supporting services optional. Leave these services out of an EdgeX deployment depending on use case needs and system resources. Supporting services include: Rules Engine : the reference implementation edge analytics service that performs if-then conditional actuation at the edge based on sensor data collected by the EdgeX instance. Replace or augment this service with use case specific analytics capability. Scheduler : an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration specified time, the service will call on any EdgeX service API URL via REST to trigger an operation. For example, at appointed times, the scheduler service calls on core data APIs to expunge old sensed events already exported out of EdgeX. Alerts and Notifications : provides EdgeX services with a central facility to send out an alert or notification. 
These are notices sent to another system or to a person monitoring the EdgeX instance (internal service communications are often handled more directly).","title":"Supporting Services Microservices"},{"location":"microservices/support/eKuiper/Ch-eKuiper/","text":"eKuiper Rules Engine Overview LF Edge eKuiper is the EdgeX reference rules engine (or edge analytics ) implementation. What is LF Edge eKuiper? LF Edge eKuiper is a lightweight open source software (Apache 2.0 open source license agreement) package for IoT edge analytics and stream processing implemented in Go, which can run on various resource constrained edge devices. Users can realize fast data processing on the edge and write rules in SQL. The eKuiper rules engine is based on three components: Source , SQL and Sink . Source: Source of stream data, such as data from an MQTT server. For EdgeX, the data source is an EdgeX message bus, which can be implemented by ZeroMQ or MQTT; SQL: SQL is where the specified business logic is processed. eKuiper provides SQL statements to extract, filter, and transform data; Sink: Used to send the analysis result to a specific target, such as sending the analysis results to EdgeX's Command service, or an MQTT broker in the cloud; The relationship among Source, SQL and Sink in eKuiper is shown below. eKuiper runs very efficiently on resource constrained edge devices. For common IoT data processing, the throughput can reach 12k per second. Readers can refer to here to get more performance benchmark data for eKuiper. eKuiper rules engine of EdgeX An extension mechanism allows eKuiper to be customized to analyze and process data from different data sources. By default for the EdgeX configuration, eKuiper analyzes data coming from the EdgeX message bus . EdgeX provides an abstract message bus interface, and implements the ZeroMQ and MQTT protocols respectively to support information exchange between different micro-services. 
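As a rough illustration of the Source/SQL/Sink model described above, an eKuiper rule pairs a SQL statement with one or more sink actions. This sketch is illustrative only: the stream name demo, the Int8 field, and the threshold are assumptions, not part of the default EdgeX setup; see the eKuiper tutorials referenced below for the exact EdgeX stream configuration.

```json
{
  "sql": "SELECT Int8 FROM demo WHERE Int8 > 30",
  "actions": [
    { "log": {} }
  ]
}
```

Here the message bus acts as the Source feeding the demo stream, the SQL statement filters the readings, and the log action stands in for a Sink such as the EdgeX message bus sink or a REST call to the Command service.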
The integration of eKuiper and EdgeX mainly includes the following: Extend an EdgeX message bus source to support receiving data from the EdgeX message bus. By default, eKuiper listens on port 5566 on which the Application Service publishes messages. After the data from the Core Data Service is processed by the Application Service, it will flow into the eKuiper rules engine for processing. Read the data type definition from Core Contract Service, convert EdgeX data to eKuiper data type, and process it according to the rules specified by the user. eKuiper supports sending analysis results to different Sinks: The users can choose to send the analysis results to Command Service to control the equipment; The analysis results can be sent to the EdgeX message bus sink for further processing by other micro-services. Learn more EdgeX 2.0 Note: \"Configure the data flow\" tutorial in the list below is a new tutorial specific to EdgeX 2 and eKuiper 1.2 or later release. EdgeX eKuiper Rules Engine Tutorial : A 10-minute quick start tutorial, readers can refer to this article to start trying out the rules engine. Configure the data flow from EdgeX to eKuiper : a demonstration of how to set up the various data flows from EdgeX to eKuiper. Learn how to configure the source to adopt any kind of data flow. Control the device with the EdgeX eKuiper rules engine : This article describes how to use the eKuiper rule engine in EdgeX to control the device based on the analysis results. Read EdgeX Source to get more detailed information and type conversions. How to use the meta function to extract more information sent in the EdgeX message bus? When the device service sends data to the bus, some additional information is also sent, such as creation time and id. If you want to use this information in SQL statements, please refer to this article. EdgeX Message Bus Sink : The document describes how to use the EdgeX message bus sink. 
If you'd like to have your analysis result consumed by other EdgeX services, you can send analysis data with EdgeX data format through this sink, and other EdgeX services can subscribe to the new message bus exposed by the eKuiper sink. Info The eKuiper tutorials and documentation are available in both English and Chinese . For more information on the LF Edge eKuiper project, please refer to the following resources. eKuiper Github Code library eKuiper Reference","title":"eKuiper Rules Engine"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#ekuiper-rules-engine","text":"","title":"eKuiper Rules Engine"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#overview","text":"LF Edge eKuiper is the EdgeX reference rules engine (or edge analytics ) implementation.","title":"Overview"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#what-is-lf-edge-ekuiper","text":"LF Edge eKuiper is a lightweight open source software (Apache 2.0 open source license agreement) package for IoT edge analytics and stream processing implemented in Go, which can run on various resource constrained edge devices. Users can realize fast data processing on the edge and write rules in SQL. The eKuiper rules engine is based on three components: Source , SQL and Sink . Source: Source of stream data, such as data from an MQTT server. For EdgeX, the data source is an EdgeX message bus, which can be implemented by ZeroMQ or MQTT; SQL: SQL is where the specified business logic is processed. eKuiper provides SQL statements to extract, filter, and transform data; Sink: Used to send the analysis result to a specific target, such as sending the analysis results to EdgeX's Command service, or an MQTT broker in the cloud; The relationship among Source, SQL and Sink in eKuiper is shown below. eKuiper runs very efficiently on resource constrained edge devices. For common IoT data processing, the throughput can reach 12k per second. 
Readers can refer to here to get more performance benchmark data for eKuiper.","title":"What is LF Edge eKuiper?"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#ekuiper-rules-engine-of-edgex","text":"An extension mechanism allows eKuiper to be customized to analyze and process data from different data sources. By default for the EdgeX configuration, eKuiper analyzes data coming from the EdgeX message bus . EdgeX provides an abstract message bus interface, and implements the ZeroMQ and MQTT protocols respectively to support information exchange between different micro-services. The integration of eKuiper and EdgeX mainly includes the following: Extend an EdgeX message bus source to support receiving data from the EdgeX message bus. By default, eKuiper listens on port 5566 on which the Application Service publishes messages. After the data from the Core Data Service is processed by the Application Service, it will flow into the eKuiper rules engine for processing. Read the data type definition from Core Contract Service, convert EdgeX data to eKuiper data type, and process it according to the rules specified by the user. eKuiper supports sending analysis results to different Sinks: The users can choose to send the analysis results to Command Service to control the equipment; The analysis results can be sent to the EdgeX message bus sink for further processing by other micro-services.","title":"eKuiper rules engine of EdgeX"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#learn-more","text":"EdgeX 2.0 Note: \"Configure the data flow\" tutorial in the list below is a new tutorial specific to EdgeX 2 and eKuiper 1.2 or later release. EdgeX eKuiper Rules Engine Tutorial : A 10-minute quick start tutorial, readers can refer to this article to start trying out the rules engine. Configure the data flow from EdgeX to eKuiper : a demonstration of how to set up the various data flows from EdgeX to eKuiper. 
Learn how to configure the source to adopt any kind of data flow. Control the device with the EdgeX eKuiper rules engine : This article describes how to use the eKuiper rule engine in EdgeX to control the device based on the analysis results. Read EdgeX Source to get more detailed information and type conversions. How to use the meta function to extract more information sent in the EdgeX message bus? When the device service sends data to the bus, some additional information is also sent, such as creation time and id. If you want to use this information in SQL statements, please refer to this article. EdgeX Message Bus Sink : The document describes how to use the EdgeX message bus sink. If you'd like to have your analysis result consumed by other EdgeX services, you can send analysis data with EdgeX data format through this sink, and other EdgeX services can subscribe to the new message bus exposed by the eKuiper sink. Info The eKuiper tutorials and documentation are available in both English and Chinese . For more information on the LF Edge eKuiper project, please refer to the following resources. eKuiper Github Code library eKuiper Reference","title":"Learn more"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/","text":"Alerts & Notifications Introduction When another system or a person needs to know that something occurred in EdgeX, the alerts and notifications microservice sends that notification. Examples of alerts and notifications that other services could broadcast include the provisioning of a new device, sensor data detected outside of certain parameters (usually detected by a device service or rules engine) or system or service malfunctions (usually detected by system management services). Terminology Notifications are informative, whereas Alerts are typically of a more important, critical, or urgent nature, possibly requiring immediate action. This diagram shows the high-level architecture of the notifications service. 
On the left side, the APIs are provided for other microservices, on-box applications, and off-box applications to use. The APIs could be in REST, AMQP, MQTT, or any standard application protocols. Warning Currently in EdgeX Foundry, only the RESTful interface is provided. On the right side, the notifications receiver could be a person or an application system in the cloud or in a server room. By invoking the Subscription RESTful interface to subscribe to specific types of notifications, the receiver obtains the appropriate notifications through defined receiving channels when events occur. The receiving channels include SMS message, e-mail, REST callback, AMQP, MQTT, and so on. Warning Currently in EdgeX Foundry, e-mail and REST callback channels are provided. When the notifications service receives notifications from any interface, the notifications are passed to the Notifications Handler internally. The Notifications Handler persists the received notifications first, and passes them to the Distribution Coordinator. When the Distribution Coordinator receives a notification, it first queries the Subscription database to get receivers who need this notification and their receiving channel information. According to the channel information, the Distribution Coordinator passes this notification to the corresponding channel senders. Then, the channel senders send out the notifications to the subscribed receivers. Workflow Normal/Minor Notifications When a client requests a notification to be sent with \"NORMAL\" or \"MINOR\" status, the notification is immediately sent to its receivers via the Distribution Coordinator, and the status is updated to \"PROCESSED\". Critical Notifications Notifications with \"CRITICAL\" status are also sent immediately. When encountering an error while sending a critical notification, an individual resend task is scheduled, and each transmission record persists. 
After exceeding the configurable limit (resend limit), the service escalates the notification and creates a new notification to notify particular receivers of the escalation subscription (name = \"ESCALATION\") of the failure. Edgex 2.0 For EdgeX 2.0, all notifications are processed immediately. The resend feature is only provided for critical notifications. The resendLimit and resendInterval properties can be defined in each subscription. If the properties are not provided, use the default values in the configuration properties. Data Model The latest developed data model will be updated in the Swagger API document . This diagram is drawn by diagrams.net with the source file EdgeX_SupportingServicesNotificationsModel.xml Data Dictionary Subscription Property Description The object used to describe the receiver and the recipient channels ID Uniquely identifies a subscription, for example a UUID Name Uniquely identifies a subscription Receiver The name of the party interested in the notification Description Human readable description explaining the subscription intent Categories Link the subscription to one or more categories of notification. Labels An array of associated means to label or tag for categorization or identification Channels An array of channel objects indicating the destination for the notification ResendLimit The retry limit for attempts to send notifications ResendInterval The interval in ISO 8601 format of resending the notification AdminState An enumeration string indicating the subscription is locked or unlocked Channel Property Description The object used to describe the notification end point. 
Channel supports transmissions and notifications with fields for delivery via email or REST Type Object of ChannelType - indicates whether the channel facilitates email or REST MailAddress EmailAddress object for an array of string email addresses RESTAddress RESTAddress object for a REST API destination endpoint Notification Property Description The object used to describe the message and sender content of a notification. ID Uniquely identifies a notification, for example a UUID Sender A string indicating the notification message sender Category A string categorizing the notification Severity An enumeration string indicating the severity of the notification - as either normal or critical Content The message sent to the receivers Description Human readable description explaining the reason for the notification or alert Status An enumeration string indicating the status of the notification as new, processed or escalated Labels Array of associated means to label or tag a notification for better search and filtering ContentType String indicating the type of content in the notification message Transmission Property Description The object used to group Notifications ID Uniquely identifies a transmission, for example a UUID Created A timestamp indicating when the notification was created NotificationId The notification id to be sent SubscriptionName The name of the subscription interested in the notification Channel A channel object indicating the destination for the notification Status An enumeration string indicating whether the transmission failed, was sent, was resending, was acknowledged, or was escalated ResendCount Number indicating the number of resent attempts Records An array of TransmissionRecords TransmissionRecord Property Description Information about the status and response of a notification sent to a receiver Status An enumeration string indicating whether the transmission failed, was sent, was acknowledged, or escalated Response The response string from the 
receiver Sent A timestamp indicating when the notification was sent Configuration Properties Please refer to the general Common Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. Edgex 2.0 For EdgeX 2.0, the SMTP username and password can be set in the Writable.InsecureSecrets.SMTP.Secrets as an insecure secret, or be stored in the Smtp.SecretPath for security. Writable Property Default Value Description Writable properties can be set and will dynamically take effect without service restart ResendLimit 2 Sets the retry limit for attempts to send notifications. CRITICAL notifications are sent to the escalation subscriber when resend limit is exceeded. ResendInterval '5s' Sets the retry interval for attempts to send notifications. Writable.InsecureSecrets.SMTP.Secrets username username@mail.example.com The email to send alerts and notifications Writable.InsecureSecrets.SMTP.Secrets password The email password Databases/Databases.Primary Property Default Value Description Properties used by the service to access the database Name 'notifications' Document store or database name Smtp Property Default Value Description Config to connect to applicable SMTP (email) service. All the properties with prefix \"smtp\" are for mail server configuration. Configure the mail server appropriately to send alerts and notifications. The correct values depend on which mail server is used. Smtp Host smtp.gmail.com SMTP service host name Smtp Port 587 SMTP service port number Smtp EnableSelfSignedCert false Indicates whether a self-signed cert can be used for secure connectivity. 
Smtp SecretPath smtp Specify the secret path to store the credentials (username and password) for connecting to the SMTP server via the /secret API, or set Writable SMTP username and password for insecure secrets Smtp Sender jdoe@gmail.com SMTP service sender/username Smtp Subject EdgeX Notification SMTP notification message subject Gmail Configuration Example Before using Gmail to send alerts and notifications, configure the sign-in security settings through one of the following two methods: Enable 2-Step Verification and use an App Password (Recommended). An App password is a 16-digit passcode that gives an app or device permission to access your Google Account. For more detail about this topic, please refer to this Google official document: https://support.google.com/accounts/answer/185833 . Allow less secure apps: If the 2-Step Verification is not enabled, you may need to allow less secure apps to access the Gmail account. Please see the instruction from this Google official document on this topic: https://support.google.com/accounts/answer/6010255 . Then, use the following settings for the mail server properties: Smtp Port=25 Smtp Host=smtp.gmail.com Smtp Sender= ${ Gmail account } Smtp Password= ${ Gmail password or App password } Yahoo Mail Configuration Example Similar to Gmail, configure the sign-in security settings for Yahoo through one of the following two methods: Enable 2-Step Verification and use an App Password (Recommended). Please see this Yahoo official document for more detail: https://help.yahoo.com/kb/SLN15241.html . Allow apps that use less secure sign in. Please see this Yahoo official document for more detail on this topic: https://help.yahoo.com/kb/SLN27791.html . 
Then, use the following settings for the mail server properties: Smtp Port=25 Smtp Host=smtp.mail.yahoo.com Smtp Sender= ${ Yahoo account } Smtp Password= ${ Yahoo password or App password } V2 Configuration Migration Guide Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service . Writable The Writable.InsecureSecrets.SMTP section has been added. Example Writable.InsecureSecrets.SMTP section [Writable.InsecureSecrets.SMTP] path = \"smtp\" [Writable.InsecureSecrets.SMTP.Secrets] username = \"username@mail.example.com\" password = \"\" API Reference Support Notifications API Reference","title":"Alerts & Notifications"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#alerts-notifications","text":"","title":"Alerts & Notifications"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#introduction","text":"When another system or a person needs to know that something occurred in EdgeX, the alerts and notifications microservice sends that notification. Examples of alerts and notifications that other services could broadcast include the provisioning of a new device, sensor data detected outside of certain parameters (usually detected by a device service or rules engine) or system or service malfunctions (usually detected by system management services).","title":"Introduction"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#terminology","text":"Notifications are informative, whereas Alerts are typically of a more important, critical, or urgent nature, possibly requiring immediate action. This diagram shows the high-level architecture of the notifications service. On the left side, the APIs are provided for other microservices, on-box applications, and off-box applications to use. The APIs could be in REST, AMQP, MQTT, or any standard application protocols. Warning Currently in EdgeX Foundry, only the RESTful interface is provided. 
On the right side, the notifications receiver could be a person or an application system in the cloud or in a server room. By invoking the Subscription RESTful interface to subscribe to specific types of notifications, the receiver obtains the appropriate notifications through defined receiving channels when events occur. The receiving channels include SMS message, e-mail, REST callback, AMQP, MQTT, and so on. Warning Currently in EdgeX Foundry, e-mail and REST callback channels are provided. When the notifications service receives notifications from any interface, the notifications are passed to the Notifications Handler internally. The Notifications Handler persists the received notifications first, and passes them to the Distribution Coordinator. When the Distribution Coordinator receives a notification, it first queries the Subscription database to get receivers who need this notification and their receiving channel information. According to the channel information, the Distribution Coordinator passes this notification to the corresponding channel senders. Then, the channel senders send out the notifications to the subscribed receivers.","title":"Terminology"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#workflow","text":"","title":"Workflow"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#normalminor-notifications","text":"When a client requests a notification to be sent with \"NORMAL\" or \"MINOR\" status, the notification is immediately sent to its receivers via the Distribution Coordinator, and the status is updated to \"PROCESSED\".","title":"Normal/Minor Notifications"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#critical-notifications","text":"Notifications with \"CRITICAL\" status are also sent immediately. When encountering an error while sending a critical notification, an individual resend task is scheduled, and each transmission record persists. 
After exceeding the configurable limit (resend limit), the service escalates the notification and creates a new notification to notify particular receivers of the escalation subscription (name = \"ESCALATION\") of the failure. Edgex 2.0 For EdgeX 2.0, all notifications are processed immediately. The resend feature is only provided for critical notifications. The resendLimit and resendInterval properties can be defined in each subscription. If the properties are not provided, use the default values in the configuration properties.","title":"Critical Notifications"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#data-model","text":"The latest developed data model will be updated in the Swagger API document . This diagram is drawn by diagrams.net with the source file EdgeX_SupportingServicesNotificationsModel.xml","title":"Data Model"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#data-dictionary","text":"Subscription Property Description The object used to describe the receiver and the recipient channels ID Uniquely identifies a subscription, for example a UUID Name Uniquely identifies a subscription Receiver The name of the party interested in the notification Description Human readable description explaining the subscription intent Categories Link the subscription to one or more categories of notification. Labels An array of associated means to label or tag for categorization or identification Channels An array of channel objects indicating the destination for the notification ResendLimit The retry limit for attempts to send notifications ResendInterval The interval in ISO 8601 format of resending the notification AdminState An enumeration string indicating the subscription is locked or unlocked Channel Property Description The object used to describe the notification end point. 
Channel supports transmissions and notifications with fields for delivery via email or REST Type Object of ChannelType - indicates whether the channel facilitates email or REST MailAddress EmailAddress object for an array of string email addresses RESTAddress RESTAddress object for a REST API destination endpoint Notification Property Description The object used to describe the message and sender content of a notification. ID Uniquely identifies a notification, for example a UUID Sender A string indicating the notification message sender Category A string categorizing the notification Severity An enumeration string indicating the severity of the notification - as either normal or critical Content The message sent to the receivers Description Human readable description explaining the reason for the notification or alert Status An enumeration string indicating the status of the notification as new, processed or escalated Labels Array of associated means to label or tag a notification for better search and filtering ContentType String indicating the type of content in the notification message Transmission Property Description The object used to group Notifications ID Uniquely identifies a transmission, for example a UUID Created A timestamp indicating when the notification was created NotificationId The notification id to be sent SubscriptionName The name of the subscription interested in the notification Channel A channel object indicating the destination for the notification Status An enumeration string indicating whether the transmission failed, was sent, was resending, was acknowledged, or was escalated ResendCount Number indicating the number of resend attempts Records An array of TransmissionRecords TransmissionRecord Property Description Information about the status and response of a notification sent to a receiver Status An enumeration string indicating whether the transmission failed, was sent, was acknowledged, or escalated Response The response string from the 
receiver Sent A timestamp indicating when the notification was sent","title":"Data Dictionary"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#configuration-properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. Edgex 2.0 For EdgeX 2.0, the SMTP username and password can be set in the Writable.InsecureSecrets.SMTP.Secrets as an insecure secret, or be stored in the Smtp.SecretPath for security. Writable Property Default Value Description Writable properties can be set and will dynamically take effect without service restart ResendLimit 2 Sets the retry limit for attempts to send notifications. CRITICAL notifications are sent to the escalation subscriber when resend limit is exceeded. ResendInterval '5s' Sets the retry interval for attempts to send notifications. Writable.InsecureSecrets.SMTP.Secrets username username@mail.example.com The email to send alerts and notifications Writable.InsecureSecrets.SMTP.Secrets password The email password Databases/Databases.Primary Property Default Value Description Properties used by the service to access the database Name 'notifications' Document store or database name Smtp Property Default Value Description Config to connect to applicable SMTP (email) service. All the properties with prefix \"smtp\" are for mail server configuration. Configure the mail server appropriately to send alerts and notifications. The correct values depend on which mail server is used. Smtp Host smtp.gmail.com SMTP service host name Smtp Port 587 SMTP service port number Smtp EnableSelfSignedCert false Indicates whether a self-signed cert can be used for secure connectivity. 
Smtp SecretPath smtp Specify the secret path to store the credentials (username and password) for connecting to the SMTP server via the /secret API, or set the Writable SMTP username and password for insecure secrets Smtp Sender jdoe@gmail.com SMTP service sender/username Smtp Subject EdgeX Notification SMTP notification message subject","title":"Configuration Properties"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#gmail-configuration-example","text":"Before using Gmail to send alerts and notifications, configure the sign-in security settings through one of the following two methods: Enable 2-Step Verification and use an App Password (Recommended). An App password is a 16-digit passcode that gives an app or device permission to access your Google Account. For more detail about this topic, please refer to this Google official document: https://support.google.com/accounts/answer/185833 . Allow less secure apps: If the 2-Step Verification is not enabled, you may need to allow less secure apps to access the Gmail account. Please see the instruction from this Google official document on this topic: https://support.google.com/accounts/answer/6010255 . Then, use the following settings for the mail server properties: Smtp Port=25 Smtp Host=smtp.gmail.com Smtp Sender= ${ Gmail account } Smtp Password= ${ Gmail password or App password }","title":"Gmail Configuration Example"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#yahoo-mail-configuration-example","text":"Similar to Gmail, configure the sign-in security settings for Yahoo through one of the following two methods: Enable 2-Step Verification and use an App Password (Recommended). Please see this Yahoo official document for more detail: https://help.yahoo.com/kb/SLN15241.html . Allow apps that use less secure sign in. Please see this Yahoo official document for more detail on this topic: https://help.yahoo.com/kb/SLN27791.html . 
Then, use the following settings for the mail server properties: Smtp Port=25 Smtp Host=smtp.mail.yahoo.com Smtp Sender= ${ Yahoo account } Smtp Password= ${ Yahoo password or App password }","title":"Yahoo Mail Configuration Example"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#v2-configuration-migration-guide","text":"Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service .","title":"V2 Configuration Migration Guide"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#writable","text":"The Writable.InsecureSecrets.SMTP section has been added. Example Writable.InsecureSecrets.SMTP section [Writable.InsecureSecrets.SMTP] path = \"smtp\" [Writable.InsecureSecrets.SMTP.Secrets] username = \"username@mail.example.com\" password = \"\"","title":"Writable"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#api-reference","text":"Support Notifications API Reference","title":"API Reference"},{"location":"microservices/support/scheduler/Ch-Scheduler/","text":"Scheduler Introduction The support scheduler microservice provides an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration-specified time (called an interval ), the service calls on any EdgeX service API URL via REST to trigger an operation (called an interval action ). For example, the scheduler service periodically calls on core data APIs to clean up old sensed events that have been successfully exported out of EdgeX. Edgex 2.0 For EdgeX 2.0 the REST API provided by the Support Scheduler has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters. 
Default Interval Actions Scheduled interval actions configured by default with the reference implementation of the service include: Clean up of Core-data events/readings that have been persisted for an extended period. In order to prevent the edge node from running out of space, these old events/readings are removed. This is the \"ScrubAged\" operation. Scheduler parameters around this operation determine how often and where to call into Core-data to invoke this operation to expunge old data. NOTE The removal of stale records occurs on a configurable schedule. By default, the default action above is invoked once a day at midnight. Scheduler Persistence Support scheduler uses a data store to persist the Interval(s) and IntervalAction(s). Persistence is accomplished by the Scheduler DB located in your currently configured database for EdgeX. Info Redis DB is used by default to persist all scheduler service information to include intervals and interval actions. ISO 8601 Standard The times and frequencies defined in the scheduler service's intervals are specified using the international date/time standard - ISO 8601 . So, for example, the start of an interval would be represented in YYYYMMDD'T'HHmmss format. 20180101T000000 represents January 1, 2018 at midnight. Frequencies are represented with ISO 8601 durations. Data Model The latest developed data model will be updated in the Swagger API document . This diagram is drawn by diagrams.net , and the source file is here . 
Data Dictionary Intervals Property Description An object defining a specific \"period\" in time Id Uniquely identifies an interval, for example a UUID Created A timestamp indicating when the interval was created in the database Modified A timestamp indicating when the interval was last modified Name the name of the given interval - unique for the EdgeX instance Start The start time of the given interval in ISO 8601 format End The end time of the given interval in ISO 8601 format Interval How often the specific resource needs to be polled. It is represented as a duration string. The format of this field is an unsigned integer followed by a unit which may be \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\" representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. E.g., \"100ms\", \"24h\" IntervalActions Property Description The action triggered by the service when the associated interval occurs Id Uniquely identifies an interval action, for example a UUID Created A timestamp indicating when the interval action was created in the database Modified A timestamp indicating when the interval action was last modified Name the name of the interval action Interval associated interval that defines when the action occurs AdminState interval action state - either LOCKED or UNLOCKED Protocol Indicates which protocol should be used. Only http is used today Host The host targeted by the action when it activates Port The port on the targeted host Method Indicates which HTTP verb should be used for the REST endpoint. (Only used when type is REST) Path The HTTP path at the targeted host for fulfillment of the action. (Only used when type is REST) Target The service target which is to receive the REST call - example core-data See Interval and IntervalAction for more information. 
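The Interval duration format described above is Go's standard duration-string syntax, so values can be validated with time.ParseDuration. A small sketch (the validateInterval name is mine, not an EdgeX API):

```go
package main

import (
	"fmt"
	"time"
)

// validateInterval checks that an Interval value such as "100ms" or "24h"
// parses as a duration string in the unit syntax described above.
func validateInterval(s string) (time.Duration, error) {
	return time.ParseDuration(s)
}

func main() {
	for _, s := range []string{"100ms", "24h"} {
		d, err := validateInterval(s)
		if err != nil {
			panic(err)
		}
		fmt.Println(s, "parses to", d)
	}
}
```

A malformed value such as "24hours" would return a parse error rather than a duration.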
High Level Interaction Diagrams Scheduler interval actions to expunge old and exported (pushed) records from Core Data Configuration Properties Please refer to the general Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. ScheduleIntervalTime Property Default Value Description ScheduleIntervalTime 500 the time, in milliseconds, to trigger any applicable interval actions Intervals/Intervals.Midnight Property Default Value Description Default intervals for use with default interval actions Name midnight Name of the every day at midnight interval Start 20180101T000000 Indicates the start time for the midnight interval, which is midnight, Jan 1, 2018; since this is in the past, the start time is effectively now Interval 24h defines a frequency of every 24 hours IntervalActions.IntervalActions.ScrubAged Property Default Value Description Configuration of the core data clean old events operation which is to kick off every midnight Name scrub-aged-events name of the interval action Host localhost run the request on core data assumed to be on the localhost Port 59880 run the request against the default core data port Protocol http Make a RESTful request to core data Method DELETE Make a RESTful delete operation request to core data Target core-data target core data Path /api/v2/event/age/604800000000000 request core data's remove old events API with a parameter of 7 days Interval midnight run the operation every midnight as specified by the configuration defined interval V2 Configuration Migration Guide Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service . 
API Reference Support Scheduler API Reference","title":"Scheduler"},{"location":"microservices/support/scheduler/Ch-Scheduler/#scheduler","text":"","title":"Scheduler"},{"location":"microservices/support/scheduler/Ch-Scheduler/#introduction","text":"The support scheduler microservice provides an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration-specified time (called an interval ), the service calls on any EdgeX service API URL via REST to trigger an operation (called an interval action ). For example, the scheduler service periodically calls on core data APIs to clean up old sensed events that have been successfully exported out of EdgeX. Edgex 2.0 For EdgeX 2.0 the REST API provided by the Support Scheduler has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Introduction"},{"location":"microservices/support/scheduler/Ch-Scheduler/#default-interval-actions","text":"Scheduled interval actions configured by default with the reference implementation of the service include: Clean up of Core-data events/readings that have been persisted for an extended period. In order to prevent the edge node from running out of space, these old events/readings are removed. This is the \"ScrubAged\" operation. Scheduler parameters around this operation determine how often and where to call into Core-data to invoke this operation to expunge old data. NOTE The removal of stale records occurs on a configurable schedule. By default, the default action above is invoked once a day at midnight.","title":"Default Interval Actions"},{"location":"microservices/support/scheduler/Ch-Scheduler/#scheduler-persistence","text":"Support scheduler uses a data store to persist the Interval(s) and IntervalAction(s). 
Persistence is accomplished by the Scheduler DB located in your currently configured database for EdgeX. Info Redis DB is used by default to persist all scheduler service information to include intervals and interval actions.","title":"Scheduler Persistence"},{"location":"microservices/support/scheduler/Ch-Scheduler/#iso-8601-standard","text":"The times and frequencies defined in the scheduler service's intervals are specified using the international date/time standard - ISO 8601 . So, for example, the start of an interval would be represented in YYYYMMDD'T'HHmmss format. 20180101T000000 represents January 1, 2018 at midnight. Frequencies are represented with ISO 8601 durations.","title":"ISO 8601 Standard"},{"location":"microservices/support/scheduler/Ch-Scheduler/#data-model","text":"The latest developed data model will be updated in the Swagger API document . This diagram is drawn by diagrams.net , and the source file is here .","title":"Data Model"},{"location":"microservices/support/scheduler/Ch-Scheduler/#data-dictionary","text":"Intervals Property Description An object defining a specific \"period\" in time Id Uniquely identifies an interval, for example a UUID Created A timestamp indicating when the interval was created in the database Modified A timestamp indicating when the interval was last modified Name the name of the given interval - unique for the EdgeX instance Start The start time of the given interval in ISO 8601 format End The end time of the given interval in ISO 8601 format Interval How often the specific resource needs to be polled. It is represented as a duration string. The format of this field is an unsigned integer followed by a unit which may be \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\" representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. 
E.g., \"100ms\", \"24h\" IntervalActions Property Description The action triggered by the service when the associated interval occurs Id Uniquely identifies an interval action, for example a UUID Created A timestamp indicating when the interval action was created in the database Modified A timestamp indicating when the interval action was last modified Name the name of the interval action Interval associated interval that defines when the action occurs AdminState interval action state - either LOCKED or UNLOCKED Protocol Indicates which protocol should be used. Only http is used today Host The host targeted by the action when it activates Port The port on the targeted host Method Indicates which HTTP verb should be used for the REST endpoint. (Only used when type is REST) Path The HTTP path at the targeted host for fulfillment of the action. (Only used when type is REST) Target The service target which is to receive the REST call - example core-data See Interval and IntervalAction for more information.","title":"Data Dictionary"},{"location":"microservices/support/scheduler/Ch-Scheduler/#high-level-interaction-diagrams","text":"Scheduler interval actions to expunge old and exported (pushed) records from Core Data","title":"High Level Interaction Diagrams"},{"location":"microservices/support/scheduler/Ch-Scheduler/#configuration-properties","text":"Please refer to the general Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. 
ScheduleIntervalTime Property Default Value Description ScheduleIntervalTime 500 the time, in milliseconds, to trigger any applicable interval actions Intervals/Intervals.Midnight Property Default Value Description Default intervals for use with default interval actions Name midnight Name of the every day at midnight interval Start 20180101T000000 Indicates the start time for the midnight interval, which is midnight, Jan 1, 2018; since this is in the past, the start time is effectively now Interval 24h defines a frequency of every 24 hours IntervalActions.IntervalActions.ScrubAged Property Default Value Description Configuration of the core data clean old events operation which is to kick off every midnight Name scrub-aged-events name of the interval action Host localhost run the request on core data assumed to be on the localhost Port 59880 run the request against the default core data port Protocol http Make a RESTful request to core data Method DELETE Make a RESTful delete operation request to core data Target core-data target core data Path /api/v2/event/age/604800000000000 request core data's remove old events API with a parameter of 7 days Interval midnight run the operation every midnight as specified by the configuration defined interval","title":"Configuration Properties"},{"location":"microservices/support/scheduler/Ch-Scheduler/#v2-configuration-migration-guide","text":"Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service .","title":"V2 Configuration Migration Guide"},{"location":"microservices/support/scheduler/Ch-Scheduler/#api-reference","text":"Support Scheduler API Reference","title":"API Reference"},{"location":"microservices/system-management/Ch_SystemManagement/","text":"System Management Micro Services Warning EdgeX System Management services are deprecated with the Ireland release. 
The service will not be immediately removed (in Ireland or even Jakarta), but adopters should note that it has been tagged for eventual replacement. The reasons for this include: Deployment and orchestration systems (Docker Compose, Kubernetes, etc.) provide for the ability to start, stop, and restart the EdgeX services (making EdgeX system management capability redundant or not-aligned with the current deployment/orchestration tool/strategy). Native start, stop and restart of services is highly dependent on the operating system and/or deployment mechanism. EdgeX is only providing the Docker Linux \"executor\" for these today - which was redundant to the capability in Docker Compose. The reference implementation was insufficient to assist in other native environments. Configuration information is available from Consul (the configuration service) or the service directly. System Management was not being used to provide this information or could be out of sync with the configuration service. Metrics information provided by System Management is dependent on the underlying deployment means (e.g., Docker). The metrics telemetry has information about the memory and CPU used by the service, but this data is readily available from the operating system tools or Docker environment (if containerized). The telemetry really needed by adopters is EdgeX specific telemetry that outside-application tools/systems cannot provide (e.g., the number of events being created by a device service). Because System Management was not made aware of the addition/removal of services (without a reset of its configuration and a restart of the service), its ability to perform any action with all services (for example stopping all services) was dependent on its static list of services configuration being kept up to date. In a future release (unnamed and unscheduled at this time), EdgeX will offer a better EdgeX facility to collect EdgeX specific metrics telemetry. 
EdgeX facilitation/support for deployment/orchestration tools will continue to grow (to include integration with LF Edge projects like OpenHorizon or Baetyl) to support service start/stop/restart and allow these tools to better track generic container metrics (memory/CPU). EdgeX configuration service (however implemented) will be the single source of truth regarding service configuration. If there is a documented use case for the existing system management features not covered by other capability in the future, a new system management service may be provided, addressing those needs in a platform-independent fashion. System Management facilities provide the central point of contact for external management systems to start/stop/restart EdgeX services, get the configuration for a service, the status/health of a service, or get metrics on the EdgeX services (such as memory usage) so that the EdgeX services can be monitored. Facilitating Larger Management Systems EdgeX is an edge platform. It typically runs as close to the physical sensor/device world as it can in order to provide the fastest and most efficient collection and reaction to the data that it can. In a larger solution deployment, there could be several instances of EdgeX each managing and controlling a subset of the \u201cthings\u201d in the overall deployment. In a very big deployment, a larger management system will want to manage the edge systems and resources of the overall deployment. Just as there is a management system to control all the nodes and infrastructure within a cloud data center, and across cloud data centers, so too there will likely be management systems that will manage and control all the nodes (from edge to cloud) and infrastructure of a complete fog or IoT deployment. EdgeX system management is not the larger control management system. Instead, EdgeX system management capability is meant to facilitate the larger control management systems. 
When a management system wants to start or stop the entire deployment, EdgeX system management capability is there to receive the command and start or stop the EdgeX platform and associated infrastructure of the EdgeX instance that it is aware of. Likewise, when the larger central management system needs service metrics or configuration from EdgeX, it can call on the EdgeX system management services to provide the information it needs (thereby avoiding communications with each individual service). Use is Optional There are many control management systems today. Each of these systems operates differently. Some solutions may not require the EdgeX management components. For example, if your edge platform is large enough to support the use of something like Kubernetes or Swarm to deploy, orchestrate and manage your containerized edge applications, you may not require the system management services provided with EdgeX Foundry. Therefore, use of the system management services is considered optional. System Management Services There are two services that provide the EdgeX system management capability. System Management Agent : the micro service that other systems or services communicate with and make their management request (to start/stop/restart, get the configuration, get the status/health, or get metrics of the EdgeX service). It communicates with the EdgeX micro services or executor (see below) to satisfy the requests. System Management Executor : the executable that performs the start, stop and restart of the services as well as metrics gathering from the EdgeX services. 
While EdgeX provides a single reference implementation of the system management executor today (one for Docker environments), there may be many implementations of the executor in the future.","title":"System Management Micro Services"},{"location":"microservices/system-management/Ch_SystemManagement/#system-management-micro-services","text":"Warning EdgeX System Management services are deprecated with the Ireland release. The service will not be immediately removed (in Ireland or even Jakarta), but adopters should note that it has been tagged for eventual replacement. The reasons for this include: Deployment and orchestration systems (Docker Compose, Kubernetes, etc.) provide for the ability to start, stop, and restart the EdgeX services (making EdgeX system management capability redundant or not-aligned with the current deployment/orchestration tool/strategy). Native start, stop and restart of services is highly dependent on the operating system and/or deployment mechanism. EdgeX is only providing the Docker Linux \"executor\" for these today - which was redundant to the capability in Docker Compose. The reference implementation was insufficient to assist in other native environments. Configuration information is available from Consul (the configuration service) or the service directly. System Management was not being used to provide this information or could be out of sync with the configuration service. Metrics information provided by System Management is dependent on the underlying deployment means (e.g., Docker). The metrics telemetry has information about the memory and CPU used by the service, but this data is readily available from the operating system tools or Docker environment (if containerized). The telemetry really needed by adopters is EdgeX specific telemetry that outside-application tools/systems cannot provide (e.g., the number of events being created by a device service). 
Because System Management was not made aware of the addition/removal of services (without a reset of its configuration and a restart of the service), its ability to perform any action with all services (for example stopping all services) was dependent on its static list of services configuration being kept up to date. In a future release (unnamed and unscheduled at this time), EdgeX will offer a better EdgeX facility to collect EdgeX specific metrics telemetry. EdgeX facilitation/support for deployment/orchestration tools will continue to grow (to include integration with LF Edge projects like OpenHorizon or Baetyl) to support service start/stop/restart and allow these tools to better track generic container metrics (memory/CPU). EdgeX configuration service (however implemented) will be the single source of truth regarding service configuration. If there is a documented use case for the existing system management features not covered by other capability in the future, a new system management service may be provided, addressing those needs in a platform-independent fashion. System Management facilities provide the central point of contact for external management systems to start/stop/restart EdgeX services, get the configuration for a service, the status/health of a service, or get metrics on the EdgeX services (such as memory usage) so that the EdgeX services can be monitored.","title":"System Management Micro Services"},{"location":"microservices/system-management/Ch_SystemManagement/#facilitating-larger-management-systems","text":"EdgeX is an edge platform. It typically runs as close to the physical sensor/device world as it can in order to provide the fastest and most efficient collection and reaction to the data that it can. In a larger solution deployment, there could be several instances of EdgeX each managing and controlling a subset of the \u201cthings\u201d in the overall deployment. 
In a very big deployment, a larger management system will want to manage the edge systems and resources of the overall deployment. Just as there is a management system to control all the nodes and infrastructure within a cloud data center, and across cloud data centers, so too there will likely be management systems that will manage and control all the nodes (from edge to cloud) and infrastructure of a complete fog or IoT deployment. EdgeX system management is not the larger control management system. Instead, EdgeX system management capability is meant to facilitate the larger control management systems. When a management system wants to start or stop the entire deployment, EdgeX system management capability is there to receive the command and start or stop the EdgeX platform and associated infrastructure of the EdgeX instance that it is aware of. Likewise, when the larger central management system needs service metrics or configuration from EdgeX, it can call on the EdgeX system management services to provide the information it needs (thereby avoiding communications with each individual service).","title":"Facilitating Larger Management Systems"},{"location":"microservices/system-management/Ch_SystemManagement/#use-is-optional","text":"There are many control management systems today. Each of these systems operates differently. Some solutions may not require the EdgeX management components. For example, if your edge platform is large enough to support the use of something like Kubernetes or Swarm to deploy, orchestrate and manage your containerized edge applications, you may not require the system management services provided with EdgeX Foundry. Therefore, use of the system management services is considered optional.","title":"Use is Optional"},{"location":"microservices/system-management/Ch_SystemManagement/#system-management-services","text":"There are two services that provide the EdgeX system management capability. 
System Management Agent : the micro service that other systems or services communicate with and make their management request (to start/stop/restart, get the configuration, get the status/health, or get metrics of the EdgeX service). It communicates with the EdgeX micro services or executor (see below) to satisfy the requests. System Management Executor : the executable that performs the start, stop and restart of the services as well as metrics gathering from the EdgeX services. While EdgeX provides a single reference implementation of the system management executor today (one for Docker environments), there may be many implementations of the executor in the future.","title":"System Management Services"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/","text":"System Management Agent (SMA) Warning The System Management services (inclusive of the Agent) are deprecated with the Ireland (EdgeX 2.0) release. See the notes on the System Management Microservice page. Use this functionality with caution. Introduction The SMA serves as the connection point of management control for an EdgeX instance. Management Architecture The SMA serves as the proxy for management requests. Some management requests (metrics requests and operations to start, stop and restart services) are routed to an executor for execution. Other requests (for service configuration) are routed to the services for a response. Configuration information is only available by asking each service for its current configuration. Metrics and operations (tasks to start, stop, restart) typically need to be performed by some other software that can perform the task best under the platform / deployment environment. When running EdgeX in a Docker Engine, Docker can provide service metrics like memory and CPU usage to the requestor. If EdgeX services were running non-containerized in a Linux environment, the request may be best performed by some Linux shell script or by systemd. 
An executor encapsulates the implementation for the metrics gathering and start, stop, restart operations. That implementation of the executor can vary based on OS, platform environment, etc. EdgeX defines the system management executor interface and a reference implementation which utilizes Docker (for situations when EdgeX is run in Docker) to respond to metrics and start, stop, and restart operations. Examples of API Calls EdgeX 2.0 For EdgeX 2.0 the SMA API URIs, request body and request response all have considerable changes. To get an appreciation for some SMA API calls in action, it is instructive to look at what responses the SMA provides to the caller, for the respective calls. The tabs below provide the API path and corresponding response for each of the system management capabilities. Info Consult the API Swagger documentation for status codes and message information returned by the SMA in error situations. Metrics of a service Example request: /api/v2/system/metrics?services=core-command,core-data Corresponding response, in JSON format: [ { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-command\" , \"metrics\" : { \"cpuUsedPercent\" : 0.01 , \"memoryUsed\" : 7524581 , \"raw\" : { \"block_io\" : \"7.18MB / 0B\" , \"cpu_perc\" : \"0.01%\" , \"mem_perc\" : \"0.05%\" , \"mem_usage\" : \"7.176MiB / 15.57GiB\" , \"net_io\" : \"192kB / 95.4kB\" , \"pids\" : \"13\" } } }, { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" , \"metrics\" : { \"cpuUsedPercent\" : 0.01 , \"memoryUsed\" : 9142534 , \"raw\" : { \"block_io\" : \"10.8MB / 0B\" , \"cpu_perc\" : \"0.01%\" , \"mem_perc\" : \"0.05%\" , \"mem_usage\" : \"8.719MiB / 15.57GiB\" , \"net_io\" : \"1.24MB / 1.49MB\" , \"pids\" : \"13\" } } } ] Configuration of a service Example request: /api/v2/system/config?services=core-command,core-data Corresponding response, in JSON format: [ { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : 
\"core-command\" , \"config\" : { \"apiVersion\" : \"v2\" , \"config\" : { \"Clients\" : { \"core-metadata\" : { \"Host\" : \"edgex-core-metadata\" , \"Port\" : 59881 , \"Protocol\" : \"http\" } }, \"Databases\" : { \"Primary\" : { \"Host\" : \"edgex-redis\" , \"Name\" : \"metadata\" , \"Port\" : 6379 , \"Timeout\" : 5000 , \"Type\" : \"redisdb\" } }, \"Registry\" : { \"Host\" : \"edgex-core-consul\" , \"Port\" : 8500 , \"Type\" : \"consul\" }, \"SecretStore\" : { \"Authentication\" : { \"AuthToken\" : \"\" , \"AuthType\" : \"X-Vault-Token\" }, \"Host\" : \"localhost\" , \"Namespace\" : \"\" , \"Path\" : \"core-command/\" , \"Port\" : 8200 , \"Protocol\" : \"http\" , \"RootCaCertPath\" : \"\" , \"ServerName\" : \"\" , \"TokenFile\" : \"/tmp/edgex/secrets/core-command/secrets-token.json\" , \"Type\" : \"vault\" }, \"Service\" : { \"HealthCheckInterval\" : \"10s\" , \"Host\" : \"edgex-core-command\" , \"MaxRequestSize\" : 0 , \"MaxResultCount\" : 50000 , \"Port\" : 59882 , \"RequestTimeout\" : \"45s\" , \"ServerBindAddr\" : \"\" , \"StartupMsg\" : \"This is the Core Command Microservice\" }, \"Writable\" : { \"InsecureSecrets\" : { \"DB\" : { \"Path\" : \"redisdb\" , \"Secrets\" : { \"password\" : \"\" , \"username\" : \"\" } } }, \"LogLevel\" : \"INFO\" } } } }, { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" , \"config\" : { \"apiVersion\" : \"v2\" , \"config\" : { \"Clients\" : { \"core-metadata\" : { \"Host\" : \"edgex-core-metadata\" , \"Port\" : 59881 , \"Protocol\" : \"http\" } }, \"Databases\" : { \"Primary\" : { \"Host\" : \"edgex-redis\" , \"Name\" : \"coredata\" , \"Port\" : 6379 , \"Timeout\" : 5000 , \"Type\" : \"redisdb\" } }, \"MessageQueue\" : { \"AuthMode\" : \"usernamepassword\" , \"Host\" : \"edgex-redis\" , \"Optional\" : { \"AutoReconnect\" : \"true\" , \"ClientId\" : \"core-data\" , \"ConnectTimeout\" : \"5\" , \"KeepAlive\" : \"10\" , \"Password\" : \"\" , \"Qos\" : \"0\" , \"Retained\" : \"false\" , 
\"SkipCertVerify\" : \"false\" , \"Username\" : \"\" }, \"Port\" : 6379 , \"Protocol\" : \"redis\" , \"PublishTopicPrefix\" : \"edgex/events/core\" , \"SecretName\" : \"redisdb\" , \"SubscribeEnabled\" : true , \"SubscribeTopic\" : \"edgex/events/device/#\" , \"Type\" : \"redis\" }, \"Registry\" : { \"Host\" : \"edgex-core-consul\" , \"Port\" : 8500 , \"Type\" : \"consul\" }, \"SecretStore\" : { \"Authentication\" : { \"AuthToken\" : \"\" , \"AuthType\" : \"X-Vault-Token\" }, \"Host\" : \"localhost\" , \"Namespace\" : \"\" , \"Path\" : \"core-data/\" , \"Port\" : 8200 , \"Protocol\" : \"http\" , \"RootCaCertPath\" : \"\" , \"ServerName\" : \"\" , \"TokenFile\" : \"/tmp/edgex/secrets/core-data/secrets-token.json\" , \"Type\" : \"vault\" }, \"Service\" : { \"HealthCheckInterval\" : \"10s\" , \"Host\" : \"edgex-core-data\" , \"MaxRequestSize\" : 0 , \"MaxResultCount\" : 50000 , \"Port\" : 59880 , \"RequestTimeout\" : \"5s\" , \"ServerBindAddr\" : \"\" , \"StartupMsg\" : \"This is the Core Data Microservice\" }, \"Writable\" : { \"InsecureSecrets\" : { \"DB\" : { \"Path\" : \"redisdb\" , \"Secrets\" : { \"password\" : \"\" , \"username\" : \"\" } } }, \"LogLevel\" : \"INFO\" , \"PersistData\" : true } } } } ] Start a service Example request: /api/v2/system/operation Example (POST) body accompanying the \"start\" request: [{ \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"apiVersion\" : \"v2\" , \"action\" : \"start\" , \"serviceName\" : \"core-data\" }] Corresponding response, in JSON format, on success: [{ \"apiVersion\" : \"v2\" , \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" }] Stop a service Example request: /api/v2/system/operation Example (POST) body accompanying the \"stop\" request: [{ \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"apiVersion\" : \"v2\" , \"action\" : \"stop\" , \"serviceName\" : \"core-data\" }] Corresponding response, in JSON format, on success: [{ 
\"apiVersion\" : \"v2\" , \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" }] Restart a service Example request: /api/v2/system/operation Example (POST) body accompanying the \"restart\" request: [{ \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"apiVersion\" : \"v2\" , \"action\" : \"restart\" , \"serviceName\" : \"core-data\" }] Corresponding response, in JSON format, on success: [{ \"apiVersion\" : \"v2\" , \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" }] Health check on a service Example request: /api/v2/system/health?services=device-virtual,core-data Corresponding response, in JSON format: [ { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"device-virtual\" }, { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" } ] Configuration Properties Please refer to the general Common Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. General Property Default Value Description general system management configuration properties ExecutorPath '../sys-mgmt-executor/sys-mgmt-executor' path to the executor to use for system management requests other than configuration MetricsMechanism 'direct-service' either direct-service or executor to advise the SMA where to go for service metrics information V2 Configuration Migration Guide Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service . 
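The start/stop/restart operation requests shown in the examples above are plain JSON arrays; a minimal sketch of building one programmatically (a hypothetical helper, not part of EdgeX) might look like:

```python
import json
import uuid

def make_operation_request(action, service_names):
    """Build the JSON array body the SMA accepts at /api/v2/system/operation.
    Illustrative only; field names mirror the documented examples."""
    return json.dumps([
        {
            "requestId": str(uuid.uuid4()),
            "apiVersion": "v2",
            "action": action,          # "start", "stop" or "restart"
            "serviceName": name,
        }
        for name in service_names
    ])

body = make_operation_request("restart", ["core-data"])
print(body)
```

The SMA echoes the requestId back in its response, which lets a caller correlate results when several operations are batched in one request.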
Writable The ResendLimit setting has been removed from the Writable section Service The FormatSpecifier setting has been removed from the Service section API Reference System Management API Reference","title":"System Management Agent (SMA)"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#system-management-agent-sma","text":"Warning The System Management services (inclusive of the Agent) are deprecated with the Ireland (EdgeX 2.0) release. See the notes on the System Management Microservice page. Use this functionality with caution.","title":"System Management Agent (SMA)"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#introduction","text":"The SMA serves as the connection point of management control for an EdgeX instance.","title":"Introduction"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#management-architecture","text":"The SMA serves as the proxy for management requests. Some management requests (metrics requests and operations to start, stop and restart services) are routed to an executor for execution. Other requests (for service configuration) are routed to the services for a response. Configuration information is only available by asking each service for its current configuration. Metrics and operations (tasks to start, stop, restart) typically need to be performed by some other software that can perform the task best under the platform / deployment environment. When running EdgeX in a Docker Engine, Docker can provide service metrics like memory and CPU usage to the requestor. If EdgeX services were running non-containerized in a Linux environment, the request may be best performed by some Linux shell script or by sysd. An executor encapsulates the implementation for the metrics gathering and start, stop, restart operations. That implementation of the executor can vary based on OS, platform environment, etc. 
EdgeX defines the system management executor interface and a reference implementation which utilizes Docker (for situations when EdgeX is run in Docker) to respond to metrics and start, stop, and restart operations.","title":"Management Architecture"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#examples-of-api-calls","text":"EdgeX 2.0 For EdgeX 2.0 the SMA API URIs, request body and request response all have considerable changes. To get an appreciation for some SMA API calls in action, it is instructive to look at what responses the SMA provides to the caller, for the respective calls. The tabs below provide the API path and corresponding response for each of the system management capabilities. Info Consult the API Swagger documentation for status codes and message information returned by the SMA in error situations. Metrics of a service Example request: /api/v2/system/metrics?services=core-command,core-data Corresponding response, in JSON format: [ { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-command\" , \"metrics\" : { \"cpuUsedPercent\" : 0.01 , \"memoryUsed\" : 7524581 , \"raw\" : { \"block_io\" : \"7.18MB / 0B\" , \"cpu_perc\" : \"0.01%\" , \"mem_perc\" : \"0.05%\" , \"mem_usage\" : \"7.176MiB / 15.57GiB\" , \"net_io\" : \"192kB / 95.4kB\" , \"pids\" : \"13\" } } }, { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" , \"metrics\" : { \"cpuUsedPercent\" : 0.01 , \"memoryUsed\" : 9142534 , \"raw\" : { \"block_io\" : \"10.8MB / 0B\" , \"cpu_perc\" : \"0.01%\" , \"mem_perc\" : \"0.05%\" , \"mem_usage\" : \"8.719MiB / 15.57GiB\" , \"net_io\" : \"1.24MB / 1.49MB\" , \"pids\" : \"13\" } } } ] Configuration of a service Example request: /api/v2/system/config?services=core-command,core-data Corresponding response, in JSON format: [ { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-command\" , \"config\" : { \"apiVersion\" : \"v2\" , \"config\" : { 
\"Clients\" : { \"core-metadata\" : { \"Host\" : \"edgex-core-metadata\" , \"Port\" : 59881 , \"Protocol\" : \"http\" } }, \"Databases\" : { \"Primary\" : { \"Host\" : \"edgex-redis\" , \"Name\" : \"metadata\" , \"Port\" : 6379 , \"Timeout\" : 5000 , \"Type\" : \"redisdb\" } }, \"Registry\" : { \"Host\" : \"edgex-core-consul\" , \"Port\" : 8500 , \"Type\" : \"consul\" }, \"SecretStore\" : { \"Authentication\" : { \"AuthToken\" : \"\" , \"AuthType\" : \"X-Vault-Token\" }, \"Host\" : \"localhost\" , \"Namespace\" : \"\" , \"Path\" : \"core-command/\" , \"Port\" : 8200 , \"Protocol\" : \"http\" , \"RootCaCertPath\" : \"\" , \"ServerName\" : \"\" , \"TokenFile\" : \"/tmp/edgex/secrets/core-command/secrets-token.json\" , \"Type\" : \"vault\" }, \"Service\" : { \"HealthCheckInterval\" : \"10s\" , \"Host\" : \"edgex-core-command\" , \"MaxRequestSize\" : 0 , \"MaxResultCount\" : 50000 , \"Port\" : 59882 , \"RequestTimeout\" : \"45s\" , \"ServerBindAddr\" : \"\" , \"StartupMsg\" : \"This is the Core Command Microservice\" }, \"Writable\" : { \"InsecureSecrets\" : { \"DB\" : { \"Path\" : \"redisdb\" , \"Secrets\" : { \"password\" : \"\" , \"username\" : \"\" } } }, \"LogLevel\" : \"INFO\" } } } }, { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" , \"config\" : { \"apiVersion\" : \"v2\" , \"config\" : { \"Clients\" : { \"core-metadata\" : { \"Host\" : \"edgex-core-metadata\" , \"Port\" : 59881 , \"Protocol\" : \"http\" } }, \"Databases\" : { \"Primary\" : { \"Host\" : \"edgex-redis\" , \"Name\" : \"coredata\" , \"Port\" : 6379 , \"Timeout\" : 5000 , \"Type\" : \"redisdb\" } }, \"MessageQueue\" : { \"AuthMode\" : \"usernamepassword\" , \"Host\" : \"edgex-redis\" , \"Optional\" : { \"AutoReconnect\" : \"true\" , \"ClientId\" : \"core-data\" , \"ConnectTimeout\" : \"5\" , \"KeepAlive\" : \"10\" , \"Password\" : \"\" , \"Qos\" : \"0\" , \"Retained\" : \"false\" , \"SkipCertVerify\" : \"false\" , \"Username\" : \"\" }, \"Port\" : 6379 , 
\"Protocol\" : \"redis\" , \"PublishTopicPrefix\" : \"edgex/events/core\" , \"SecretName\" : \"redisdb\" , \"SubscribeEnabled\" : true , \"SubscribeTopic\" : \"edgex/events/device/#\" , \"Type\" : \"redis\" }, \"Registry\" : { \"Host\" : \"edgex-core-consul\" , \"Port\" : 8500 , \"Type\" : \"consul\" }, \"SecretStore\" : { \"Authentication\" : { \"AuthToken\" : \"\" , \"AuthType\" : \"X-Vault-Token\" }, \"Host\" : \"localhost\" , \"Namespace\" : \"\" , \"Path\" : \"core-data/\" , \"Port\" : 8200 , \"Protocol\" : \"http\" , \"RootCaCertPath\" : \"\" , \"ServerName\" : \"\" , \"TokenFile\" : \"/tmp/edgex/secrets/core-data/secrets-token.json\" , \"Type\" : \"vault\" }, \"Service\" : { \"HealthCheckInterval\" : \"10s\" , \"Host\" : \"edgex-core-data\" , \"MaxRequestSize\" : 0 , \"MaxResultCount\" : 50000 , \"Port\" : 59880 , \"RequestTimeout\" : \"5s\" , \"ServerBindAddr\" : \"\" , \"StartupMsg\" : \"This is the Core Data Microservice\" }, \"Writable\" : { \"InsecureSecrets\" : { \"DB\" : { \"Path\" : \"redisdb\" , \"Secrets\" : { \"password\" : \"\" , \"username\" : \"\" } } }, \"LogLevel\" : \"INFO\" , \"PersistData\" : true } } } } ] Start a service Example request: /api/v2/system/operation Example (POST) body accompanying the \"start\" request: [{ \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"apiVersion\" : \"v2\" , \"action\" : \"start\" , \"serviceName\" : \"core-data\" }] Corresponding response, in JSON format, on success: [{ \"apiVersion\" : \"v2\" , \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" }] Stop a service Example request: /api/v2/system/operation Example (POST) body accompanying the \"stop\" request: [{ \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"apiVersion\" : \"v2\" , \"action\" : \"stop\" , \"serviceName\" : \"core-data\" }] Corresponding response, in JSON format, on success: [{ \"apiVersion\" : \"v2\" , \"requestId\" : 
\"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" }] Restart a service Example request: /api/v2/system/operation Example (POST) body accompanying the \"restart\" request: [{ \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"apiVersion\" : \"v2\" , \"action\" : \"restart\" , \"serviceName\" : \"core-data\" }] Corresponding response, in JSON format, on success: [{ \"apiVersion\" : \"v2\" , \"requestId\" : \"e6e8a2f4-eb14-4649-9e2b-175247911369\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" }] Health check on a service Example request: /api/v2/system/health?services=device-virtual,core-data Corresponding response, in JSON format: [ { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"device-virtual\" }, { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"serviceName\" : \"core-data\" } ]","title":"Examples of API Calls"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#configuration-properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services. Below are only the additional settings and sections that are not common to all EdgeX Services. 
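A caller consuming the metrics response shown above can index the normalized fields by service name; a minimal sketch using an abbreviated copy of the documented sample data:

```python
import json

# Abbreviated sample of the documented /api/v2/system/metrics response
# (the "raw" Docker fields are omitted here for brevity).
sample = json.loads("""
[
  {"apiVersion": "v2", "statusCode": 200, "serviceName": "core-command",
   "metrics": {"cpuUsedPercent": 0.01, "memoryUsed": 7524581}},
  {"apiVersion": "v2", "statusCode": 200, "serviceName": "core-data",
   "metrics": {"cpuUsedPercent": 0.01, "memoryUsed": 9142534}}
]
""")

# Index the per-service metrics, skipping any entries that reported an error.
usage = {
    entry["serviceName"]: entry["metrics"]
    for entry in sample
    if entry["statusCode"] == 200
}
print(usage["core-data"]["memoryUsed"])  # bytes, as in the documented sample
```

The per-entry statusCode matters because the SMA reports success or failure per requested service rather than for the batch as a whole.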
General Property Default Value Description general system management configuration properties ExecutorPath '../sys-mgmt-executor/sys-mgmt-executor' path to the executor to use for system management requests other than configuration MetricsMechanism 'direct-service' either direct-service or executor to advise the SMA where to go for service metrics information","title":"Configuration Properties"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#v2-configuration-migration-guide","text":"Refer to the Common Configuration Migration Guide for details on migrating the common configuration sections such as Service .","title":"V2 Configuration Migration Guide"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#writable","text":"The ResendLimit setting has been removed from the Writable section","title":"Writable"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#service","text":"The FormatSpecifier setting has been removed from the Service section","title":"Service"},{"location":"microservices/system-management/agent/Ch_SysMgmtAgent/#api-reference","text":"System Management API Reference","title":"API Reference"},{"location":"microservices/system-management/executor/Ch_SysMgmtExecutor/","text":"System Management Executor (SME) Warning The System Management services (inclusive of the Executor) are deprecated with the Ireland (EdgeX 2.0) release. See the notes on the System Management Microservice page. Use this functionality with caution. Introduction The executable applications that the system management agent (SMA) micro service calls on for some management requests are referred to as the \u201cexecutors\u201d. In particular, executors take care of service operations (start, stop, and restart functionality) as well as providing service metrics (CPU and memory usage). 
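The executor contract described above amounts to mapping a (service, operation) pair onto the platform's own tooling; a purely illustrative sketch (the reference executor is actually written in Go and shells out to Docker):

```python
# Illustrative sketch of the executor contract: the SMA hands the executor a
# service name and an operation, and the executor translates that into a
# platform-specific command (here, hypothetically, a docker CLI invocation).
SUPPORTED_OPERATIONS = {"start", "stop", "restart"}

def build_command(service_name, operation):
    if operation not in SUPPORTED_OPERATIONS:
        raise ValueError(f"unsupported operation: {operation}")
    # e.g. ["docker", "restart", "edgex-core-data"]
    return ["docker", operation, service_name]

print(build_command("edgex-core-data", "restart"))
```

An executor for a different environment would keep the same interface but swap the command construction, e.g. for systemd units or Kubernetes deployments.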
How the executor performs its duties is left for the implementer and is generally dictated by the available operating system, platform environment (existence and use of Docker for example) and associated programming language resources. EdgeX provides the executor interface and a reference implementation executor for use in Docker container runtime environments. The executor design allows use of other orchestrating software (example: Kubernetes or Swarm), scripts, OS specific technology (snaps or sysd), etc. to be used without having to build anything into the SMA \u2013 allowing for a more scalable solution in EdgeX but still allowing use of all sorts of implementation technology outside of EdgeX. The SMA will be informed of what executor to use for metrics and start/stop/restart functionality through a configuration option \u2013 ExecutorPath. The ExecutorPath will specify the location (which may be platform dependent) and executable to be called. Reference Implementation Executor Docker Executor When using the reference implementation Docker executor for metrics collection and start, stop and restart functions, the executor will make command line calls to the Docker Engine. However, this is not as straightforward as one would think. Complexity comes from the fact that the SMA (and associated executor) is itself containerized in this type of environment and so a call from within a normal container to Docker would fail as Docker is not installed inside of that container. Even if Docker were part of the SMA\u2019s container, a call to Docker to start (or stop or restart) the other services would be internal to the SMA\u2019s container. This would not be helpful since it would try to start the EdgeX services inside of the SMA\u2019s container and not on the Docker Engine where all the EdgeX containers exist. 
The solution to this issue is that the SMA must run inside of a special container \u2013 a Docker-in-Docker container - and that container must share a volume with the Docker Engine. This effectively exposes the Docker calls out to the Docker Engine running on the base platform, thereby allowing the SMA (and its executor) to effect calls to the original EdgeX services running on the same Docker Engine as the SMA. Info Metrics collection is accomplished by making calls to docker stats . Docker Executor Internals Again, the makeup of the executors is at the implementer\u2019s discretion. In the Docker executor reference implementation, code for calling Docker to execute start, stop and restart commands is exemplified in command.go while metrics collection (using docker stats ) is exemplified in metrics.go . Other executors for other environments can use these function templates to perform service operations and metrics collection by a variety of means. The reference implementation Docker executor is deployed inside of the SMA container. Therefore, there are no exposed APIs. The SMA makes a direct call to the executor executable inside the container.","title":"System Management Executor (SME)"},{"location":"microservices/system-management/executor/Ch_SysMgmtExecutor/#system-management-executor-sme","text":"Warning The System Management services (inclusive of the Executor) are deprecated with the Ireland (EdgeX 2.0) release. See the notes on the System Management Microservice page. Use this functionality with caution.","title":"System Management Executor (SME)"},{"location":"microservices/system-management/executor/Ch_SysMgmtExecutor/#introduction","text":"The executable applications that the system management agent (SMA) micro service calls on for some management requests are referred to as the \u201cexecutors\u201d. 
In particular, executors take care of service operations (start, stop, and restart functionality) as well as providing service metrics (CPU and memory usage). How the executor performs its duties is left for the implementer and is generally dictated by the available operating system, platform environment (existence and use of Docker for example) and associated programming language resources. EdgeX provides the executor interface and a reference implementation executor for use in Docker container runtime environments. The executor design allows use of other orchestrating software (example: Kubernetes or Swarm), scripts, OS specific technology (snaps or sysd), etc. to be used without having to build anything into the SMA \u2013 allowing for a more scalable solution in EdgeX but still allowing use of all sorts of implementation technology outside of EdgeX. The SMA will be informed of what executor to use for metrics and start/stop/restart functionality through a configuration option \u2013 ExecutorPath. The ExecutorPath will specify the location (which may be platform dependent) and executable to be called.","title":"Introduction"},{"location":"microservices/system-management/executor/Ch_SysMgmtExecutor/#reference-implementation-executor","text":"","title":"Reference Implementation Executor"},{"location":"microservices/system-management/executor/Ch_SysMgmtExecutor/#docker-executor","text":"When using the reference implementation Docker executor for metrics collection and start, stop and restart functions, the executor will make command line calls to the Docker Engine. However, this is not as straightforward as one would think. Complexity comes from the fact that the SMA (and associated executor) is itself containerized in this type of environment and so a call from within a normal container to Docker would fail as Docker is not installed inside of that container. 
Even if Docker were part of the SMA\u2019s container, a call to Docker to start (or stop or restart) the other services would be internal to the SMA\u2019s container. This would not be helpful since it would try to start the EdgeX services inside of the SMA\u2019s container and not on the Docker Engine where all the EdgeX containers exist. The solution to this issue is that the SMA must run inside of a special container \u2013 a Docker-in-Docker container - and that container must share a volume with the Docker Engine. This effectively exposes the Docker calls out to the Docker Engine running on the base platform, thereby allowing the SMA (and its executor) to effect calls to the original EdgeX services running on the same Docker Engine as the SMA. Info Metrics collection is accomplished by making calls to docker stats .","title":"Docker Executor"},{"location":"microservices/system-management/executor/Ch_SysMgmtExecutor/#docker-executor-internals","text":"Again, the makeup of the executors is at the implementer\u2019s discretion. In the Docker executor reference implementation, code for calling Docker to execute start, stop and restart commands is exemplified in command.go while metrics collection (using docker stats ) is exemplified in metrics.go . Other executors for other environments can use these function templates to perform service operations and metrics collection by a variety of means. The reference implementation Docker executor is deployed inside of the SMA container. Therefore, there are no exposed APIs. The SMA makes a direct call to the executor executable inside the container.","title":"Docker Executor Internals"},{"location":"security/Ch-APIGateway/","text":"API Gateway Introduction The security API gateway is the single point of entry for all EdgeX REST traffic. It is the barrier between external clients and the EdgeX microservices preventing unauthorized access to EdgeX REST APIs. 
The API gateway accepts client requests, verifies the identity of the clients, redirects the requests to the corresponding microservice and relays the results back to the client. Internally, no authentication is required for one EdgeX microservice to call another. The API Gateway provides two HTTP REST management interfaces. The first (insecure) interface is exposed only to localhost : in snaps, this means it is exposed to any local process. In Docker, this insecure interface is bound to the Docker container, and is not reachable from outside of the container. The second (secure) interface is exposed outside of the cluster on an administrative URL sub-path, /admin , and requires a specifically-crafted JWT to access. The management interface offers the means to configure API routing, as well as client authentication and access control. This configuration is stored in an embedded database. KONG ( https://konghq.com/ ) is the product underlying the API gateway. The EdgeX community has added code to initialize the KONG environment, set up service routes for EdgeX microservices, and add various authentication/authorization mechanisms including JWT authentication and ACL. Start the API Gateway The API gateway is started by default when using the secure version of the Docker Compose files found at https://github.com/edgexfoundry/edgex-compose/tree/ireland . The command to start EdgeX inclusive of API gateway related services is: git clone -b ireland https://github.com/edgexfoundry/edgex-compose make run or git clone -b ireland https://github.com/edgexfoundry/edgex-compose make run arm64 The API gateway is not started if EdgeX is started with security features disabled by appending no-secty to the previous commands. This disables all EdgeX security features, not just the API gateway. The API Gateway is provided by the kong service. The proxy-setup service is a one-shot service that configures the proxy and then terminates. 
The proxy-setup docker image also contains the secrets-config utility to assist in common configuration tasks. Configuring API Gateway Using a bring-your-own external TLS certificate for API gateway The API gateway will generate a default self-signed TLS certificate that is used for external communication. Since this certificate is not trusted by client software, it is commonplace to replace this auto-generated certificate with one generated from a known certificate authority, such as an enterprise PKI, or a commercial certificate authority. The process for obtaining a certificate is out-of-scope for this document. For purposes of the example, the X.509 PEM-encoded certificate is assumed to be called cert.pem and the unencrypted PEM-encoded private key is called key.pem . Do not use an encrypted private key as the API gateway will hang on startup in order to prompt for a password. Also, for purposes of the example, the external DNS name of the API gateway is assumed to be edge001.example.com . The API gateway requires clients to support Server Name Indication (SNI) and to connect to the API gateway using a DNS host name. The API gateway uses the host name supplied by the client to determine which certificate to present to the client. The API gateway will continue to serve the default (untrusted) certificate if clients connect via IP address or do not provide SNI at all. Run the following command to install a custom certificate using the assumptions above: docker-compose -p edgex -f docker-compose.yml run --rm -v `pwd`:/host:ro --entrypoint /edgex/secrets-config edgex-proxy proxy tls --incert /host/cert.pem --inkey /host/key.pem --snis edge001.example.com --admin_api_jwt /tmp/edgex/secrets/security-proxy-setup/kong-admin-jwt The utility will always add the internal host names, \"localhost\" and \"kong\" to the specified SNI list. 
The following command can verify the certificate installation was successful. echo \"GET /\" | openssl s_client -showcerts -servername edge001.example.com -connect 127.0.0.1:8443 Configuration of JWT Authentication for API Gateway When using JWT Authentication, the [KongAuth] section needs to be specified in the configuration file as shown below. This is the default. [KongAuth] Name = \"jwt\" EdgeX 2.0 The \"oauth2\" authentication method has been removed in EdgeX 2.0 as JWT-based authentication is resistant to brute-force attacks and does not require storage of a secret in the Kong database. Configuration of Adding Microservices Routes for API Gateway The pre-existing Kong routes are configured and initialized statically through the configuration TOML file specified in the security-proxy-setup application. This is not sufficient for additional microservices such as application services. Thus, new proxy Kong routes can now be added via the environment variable, ADD_PROXY_ROUTE , of service edgex-proxy in the docker-compose file. Here is an example: edgex-proxy : ... environment : ... ADD_PROXY_ROUTE : \"myApp.http://my-app:56789\" ... ... my-app : ... container_name : myApp hostname : myApp ... The value of ADD_PROXY_ROUTE takes a comma-separated list of one or more paired service names and URLs for which to create proxy Kong routes. Each pair is specified as RoutePrefix.TargetRouteURL , where RoutePrefix is the name of the service for which to create the proxy Kong route (case insensitive); when running from docker-compose, it is the docker network hostname of the service that wants the proxy Kong route, for example myApp in this case. TargetRouteURL is the fully qualified URL for the target service, like http://myapp:56789 , as it is known on the network on which the API gateway is running. 
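Mechanically, each pair splits on the first dot into the route prefix and the target URL (later dots belong to the URL); an illustrative parser, not EdgeX code:

```python
def parse_add_proxy_route(value):
    """Parse an ADD_PROXY_ROUTE value such as
    "myApp.http://my-app:56789" (optionally comma-separated for several
    routes) into (RoutePrefix, TargetRouteURL) pairs. Illustrative only."""
    routes = []
    for pair in value.split(","):
        # partition splits on the FIRST dot only, so dots inside the
        # target URL (host names, IP addresses) are preserved.
        prefix, _, url = pair.strip().partition(".")
        routes.append((prefix, url))
    return routes

print(parse_add_proxy_route("myApp.http://my-app:56789"))
```

Splitting on the first dot is what makes the prefix/URL pairing unambiguous even though URLs themselves contain dots.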
For Docker, the hostname should match the hostname specified in the docker-compose file. So as an example, for a single service, the value of ADD_PROXY_ROUTE would be: \" myApp.http://myapp:56789 \". Once ADD_PROXY_ROUTE is configured and composed-up successfully, the app's REST API can then be accessed via Kong at https://localhost:8443/myApp/api/v2/... in the same way you would access the EdgeX service. You will also need an access token obtained using the documentation below. Using API Gateway Resource Mapping between EdgeX Microservice and API gateway If the EdgeX API gateway is not in use, a client can access and use any REST API provided by the EdgeX microservices by sending an HTTP request to the service endpoint. E.g., a client can consume the ping endpoint of the Core Data microservice with a curl command like this: curl http://<host>:59880/api/v2/ping Once the API gateway is started and initialized successfully, and all the common ports for EdgeX microservices are blocked by disabling the exposed external ports of the EdgeX microservices through updating the docker compose file, the EdgeX microservice will be behind the gateway. At this time both the microservice host/IP address ( <host> in the example) as well as the service port (59880 in the example) are not available to external access. EdgeX uses the gateway as a single entry point for all the REST APIs. With the API gateway in place, the curl command to ping the endpoint of the same Core Data service, as shown above, needs to change to: curl https://<host>:8443/core-data/api/v2/ping Comparing these two curl commands you may notice several differences. http is switched to https as we enable SSL/TLS for secure communication. This applies to any client side request. (If the certificate is not trusted, the -k option to curl may also be required.) 
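The URL rewrite illustrated by the two curl commands can be sketched as a small helper (hypothetical code; 8443 is the gateway's default TLS port per this document):

```python
# Partial map of gateway URL path prefixes to the direct service ports they
# replace, taken from the routing table in this document.
GATEWAY_ROUTES = {
    "core-data": 59880,
    "core-metadata": 59881,
    "core-command": 59882,
    "support-notifications": 59860,
    "support-scheduler": 59861,
}

def gateway_url(gateway_host, route_prefix, endpoint):
    """Rewrite a direct service endpoint into its API-gateway form:
    the host becomes the gateway, the port becomes 8443, and the route
    prefix identifies the target service. Illustrative helper only."""
    return f"https://{gateway_host}:8443/{route_prefix}{endpoint}"

# Direct form (blocked once the gateway is in place) would have been
# http://<host>:59880/api/v2/ping for core-data.
print(gateway_url("edge001.example.com", "core-data", "/api/v2/ping"))
```

The gateway uses the path prefix, not a port, to pick the back-end service, which is why a single TLS port can front every microservice.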
The EdgeX microservice IP address where the request is sent changed to the host/IP address of the API gateway service (recall the API gateway becomes the single entry point for all the EdgeX microservices). The API gateway will eventually relay the request to the Core Data service if the client is authorized. Note that for Kong to serve the proper TLS certificate, a DNS host name must be used, as SNI does not support IP addresses. This is standards-compliant behavior, not a limitation of Kong. The port of the request is switched from 59880 to 8443, which is the default SSL/TLS port for the API gateway (versus the microservice port). This applies to any client-side request. The /core-data/ path in the URL is used to identify which EdgeX microservice the request is routed to. As each EdgeX microservice has a dedicated service port open that accepts incoming requests, there is a mapping table kept by the API gateway that maps paths to microservice ports. A partial listing of the map between ports and URL paths is shown in the table below. Microservice Host Name Port number Partial URL edgex-core-data 59880 core-data edgex-core-metadata 59881 core-metadata edgex-core-command 59882 core-command edgex-support-notifications 59860 support-notifications edgex-support-scheduler 59861 support-scheduler edgex-kuiper 59720 rules-engine device-virtual 59900 device-virtual Creating Access Token for API Gateway Authentication The API gateway is configured to require authentication prior to passing a request to a back-end microservice. It is necessary to create an API gateway user in order to satisfy the authentication requirement. Gateway users are created using the proxy subcommand of the secrets-config utility. JWT authentication JWT authentication is based on a public/private keypair, where the public key is registered with the API gateway, and the private key is kept secret.
This method does not require exposing any secret to the API gateway and allows JWTs to be generated offline. Before using the JWT authentication method, it is necessary to create a public/private keypair. This example uses ECDSA keys, but RSA keys can be used as well. openssl ecparam -name prime256v1 -genkey -noout -out ec256.key openssl ec -out ec256.pub < ec256.key Next, generate and save a unique ID that will be used in any issued JWTs to look up the public key to be used for validation. We also need the JWT used to authenticate to Kong; this JWT was written to the host-based secrets area when the framework was started. (Note the backticks used to capture the uuidgen output.) ID=`uuidgen` KONGJWT=`sudo cat /tmp/edgex/secrets/security-proxy-setup/kong-admin-jwt` Register a user for that key: docker-compose -p edgex -f docker-compose.yml run --rm -v `pwd`:/host:ro -u \"$UID\" --entrypoint \"/edgex/secrets-config\" proxy-setup -- proxy adduser --token-type jwt --id \"$ID\" --algorithm ES256 --public_key /host/ec256.pub --user _SOME_USERNAME_ --jwt \"$KONGJWT\" Lastly, generate a valid JWT. Any JWT library should work, but secrets-config provides a convenient utility: docker-compose -p edgex -f docker-compose.yml run --rm -v `pwd`:/host:ro -u \"$UID\" --entrypoint \"/edgex/secrets-config\" proxy-setup -- proxy jwt --id \"$ID\" --algorithm ES256 --private_key /host/ec256.key The command will output a long alphanumeric sequence of the form header.payload.signature: three base64url-encoded segments separated by dots. The access token is used in the Authorization header of the request (see details below).
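As a structural illustration of that format, the sketch below (plain Python, not part of secrets-config) builds and decodes the two JSON segments of a token; in a real token the third segment is an ES256 signature over the first two. The registered ID appears in the payload; by default Kong's JWT plugin looks the credential up via the iss claim, and the ID value shown here is a made-up placeholder.

```python
import base64
import json

def b64url(data):
    """Base64url-encode a JSON object without padding, as JWTs do."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_segment(seg):
    """Decode one base64url JWT segment back into a dict."""
    pad = "=" * (-len(seg) % 4)
    return json.loads(base64.urlsafe_b64decode(seg + pad))

# A token shaped like the output of `secrets-config proxy jwt`:
# header.payload.signature; the signature is omitted here since we
# are only inspecting structure.
header = {"alg": "ES256", "typ": "JWT"}
payload = {"iss": "example-key-id"}  # placeholder for the uuidgen ID
token = "%s.%s.%s" % (b64url(header), b64url(payload), "signature")

parts = token.split(".")
print(decode_segment(parts[0]))  # {'alg': 'ES256', 'typ': 'JWT'}
print(decode_segment(parts[1]))  # {'iss': 'example-key-id'}
```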
To de-authorize or delete the user: docker-compose -p edgex -f docker-compose.yml run --rm -u \"$UID\" --entrypoint \"/edgex/secrets-config\" proxy-setup -- proxy deluser --user _SOME_USERNAME_ --jwt \"$KONGJWT\" Using API Gateway to Proxy Existing EdgeX Microservices Once the resource mapping and an access token for the API gateway are in place, a client can use the access token to use the protected EdgeX REST API resources behind the API gateway. Again, without the API gateway in place, here is the sample request to hit the ping endpoint of the EdgeX Core Data microservice using curl: curl http://<host>:59880/api/v2/ping With the security service and JWT authentication enabled, the command changes to: curl -k --resolve kong:8443:127.0.0.1 -H 'Authorization: Bearer <jwt>' https://kong:8443/core-data/api/v2/ping (Note that the --resolve option above forces \"kong\" to resolve to 127.0.0.1. This is just for illustrative purposes to force SNI. In practice, the TLS certificate would be registered under the external host name.) In summary, the differences between the two commands are listed below: --resolve tells curl to resolve https://kong:8443 to the loopback address. This causes curl to use the hostname kong as the SNI, but connect to the specified IP address to make the connection. -k tells curl to ignore certificate errors. This is for demonstration purposes. In production, a known certificate that the client trusts should be installed on the proxy and this parameter omitted. -H \"host: edgex\" is used to indicate that the request is for the EdgeX domain, as the API gateway could be used to take requests for different domains. Use the https versus http protocol identifier for SSL/TLS secure communication.
The service port 8443 is the default TLS service port of the API gateway. Use the URL path \"core-data\" to indicate which EdgeX microservice the request is routed to. Use the header -H \"Authorization: Bearer <jwt>\" to specify the access token associated with the client that was generated when the client was added. Adjust Kong worker processes to optimize the performance The number of Kong worker processes impacts memory consumption and API gateway performance. To reduce memory consumption, the default value in EdgeX Foundry is one instead of auto (the original default value). This setting is defined in the environment variable section of the docker-compose file. KONG_NGINX_WORKER_PROCESSES: '1' Users can adjust this value to meet their requirements, or remove this environment variable to have it adjusted automatically. Read the references for more details about this setting: https://docs.konghq.com/gateway-oss/2.5.x/configuration/#nginx_worker_processes http://nginx.org/en/docs/ngx_core_module.html#worker_processes Introduction The security API gateway is the single point of entry for all EdgeX REST traffic. It is the barrier between external clients and the EdgeX microservices, preventing unauthorized access to EdgeX REST APIs. The API gateway accepts client requests, verifies the identity of the clients, redirects the requests to the corresponding microservice, and relays the results back to the client. Internally, no authentication is required for one EdgeX microservice to call another. The API gateway provides two HTTP REST management interfaces. The first (insecure) interface is exposed only to localhost: in snaps, this means it is exposed to any local process. In Docker, this insecure interface is bound to the Docker container, and is not reachable from outside of the container.
The second (secure) interface is exposed outside of the cluster on an administrative URL sub-path, /admin, and requires a specifically-crafted JWT to access. The management interface offers the means to configure API routing, as well as client authentication and access control. This configuration is stored in an embedded database. Kong ( https://konghq.com/ ) is the product underlying the API gateway. The EdgeX community has added code to initialize the Kong environment, set up service routes for EdgeX microservices, and add various authentication/authorization mechanisms, including JWT authentication and ACL. Start the API Gateway The API gateway is started by default when using the secure version of the Docker Compose files found at https://github.com/edgexfoundry/edgex-compose/tree/ireland . The command to start EdgeX inclusive of API gateway related services is: git clone -b ireland https://github.com/edgexfoundry/edgex-compose make run or git clone -b ireland https://github.com/edgexfoundry/edgex-compose make run arm64 The API gateway is not started if EdgeX is started with security features disabled by appending no-secty to the previous commands. This disables all EdgeX security features, not just the API gateway. The API gateway is provided by the kong service. The proxy-setup service is a one-shot service that configures the proxy and then terminates. The proxy-setup Docker image also contains the secrets-config utility to assist in common configuration tasks. Configuring API Gateway Using a bring-your-own external TLS certificate for API gateway The API gateway will generate a default self-signed TLS certificate that is used for external communication.
Since this certificate is not trusted by client software, it is commonplace to replace this auto-generated certificate with one generated from a known certificate authority, such as an enterprise PKI or a commercial certificate authority. The process for obtaining a certificate is out of scope for this document. For the purposes of the example, the X.509 PEM-encoded certificate is assumed to be called cert.pem and the unencrypted PEM-encoded private key is called key.pem . Do not use an encrypted private key, as the API gateway will hang on startup in order to prompt for a password. Also for the purposes of the example, the external DNS name of the API gateway is assumed to be edge001.example.com . The API gateway requires clients to support Server Name Indication (SNI) and to connect to the API gateway using a DNS host name. The API gateway uses the host name supplied by the client to determine which certificate to present to the client. The API gateway will continue to serve the default (untrusted) certificate if clients connect via IP address or do not provide SNI at all. Run the following command to install a custom certificate using the assumptions above: docker-compose -p edgex -f docker-compose.yml run --rm -v `pwd`:/host:ro --entrypoint /edgex/secrets-config edgex-proxy proxy tls --incert /host/cert.pem --inkey /host/key.pem --snis edge001.example.com --admin_api_jwt /tmp/edgex/secrets/security-proxy-setup/kong-admin-jwt The utility will always add the internal host names \"localhost\" and \"kong\" to the specified SNI list. The following command can verify that the certificate installation was successful: echo \"GET /\" | openssl s_client -showcerts -servername edge001.example.com -connect 127.0.0.1:8443 Adding EdgeX API Gateway Users Remotely EdgeX 2.1 Want to know what's new in EdgeX 2.1 (Jakarta)? If you are already familiar with EdgeX, look for the EdgeX 2.1 emoji (Edgey, the EdgeX mascot) throughout the documentation, like the one on this page. These sections give you a summary of what's new in each area of the documentation. Starting in EdgeX Ireland, the API gateway administrative interface is exposed by the /admin sub-URL of the gateway. Using this interface, and a special admin-only JWT, it is possible to remotely add gateway users. Support for this method in secrets-config was added in EdgeX Jakarta.
Pre-requisite: Obtain a Kong Admin JWT When EdgeX starts, the security-secretstore-setup utility creates a special administrative JWT and writes a Kong configuration file to trust it. The reasons why this is done are explained in detail in https://github.com/edgexfoundry/edgex-go/blob/main/internal/security/secretstore/init.go For security reasons, the created JWT is transient in nature: the private key used to create it is destroyed after the JWT is generated, and a new JWT using a new key is created each time the EdgeX framework is started. This prevents exfiltration of a private key that could be used to permanently compromise the security of a given EdgeX host. If long-term access to the API gateway admin API is desired, it is left as an exercise to the reader to seed the Kong database with an administrative public key whose private key is not known to the EdgeX framework and will persist across reboots. This could be done, for example, by creating a custom EdgeX microservice that has access to kong-admin-jwt and uses it to seed another user in the admin group. Alternatively, one could override kong-admin-config.template.yml to include an additional user and key. It is advisable to make such a key unique to the machine (best) or unique to the deployment (second best). It is inadvisable to code such a key into source code such that it would be shared across deployments. For now, let us make a copy of the kong-admin-jwt : sudo cp /tmp/edgex/secrets/security-proxy-setup/kong-admin-jwt . sudo chmod 400 kong-admin-jwt sudo chown \"${USER}:${USER}\" kong-admin-jwt Create ID and Credential for the Gateway User For the new user, create a unique ID and a public/private keypair to authenticate the user.
test -f gateway.id || uuidgen > gateway.id test -f gateway.key || openssl ecparam -name prime256v1 -genkey -noout -out gateway.key 2> /dev/null test -f gateway.pub || openssl ec -in gateway.key -pubout -out gateway.pub 2> /dev/null Retain these files, gateway.id , gateway.key , and gateway.pub , to create a JWT to access the proxy later. The gateway.id file contains a unique value, in this case a GUID, that the gateway uses to look up the public key needed to validate the JWT. Create a proxy user and credential First, let us extract the secrets-config utility from an existing EdgeX container. The utility can also be built from source to the same effect. CORE_EDGEX_VERSION=2.0.0 # Update to the version for the Jakarta release DEV= PROXY_SETUP_CONTAINER=\"edgexfoundry/security-proxy-setup:${CORE_EDGEX_VERSION}${DEV}\" docker run --rm --entrypoint /bin/cat \"${PROXY_SETUP_CONTAINER}\" /edgex/secrets-config > secrets-config chmod +x secrets-config test -d res || mkdir res docker run --rm --entrypoint /bin/cat \"${PROXY_SETUP_CONTAINER}\" /edgex/res/configuration.toml > res/configuration.toml Then, let us add a user to the gateway. Note: Currently one must use the string \"gateway\" as the group. ID=`cat gateway.id` ADMIN_JWT=`cat kong-admin-jwt` GW_USER=gateway GW_GROUP=gateway export KONGURL_SERVER= ./secrets-config proxy adduser --token-type jwt --id ${ID} --algorithm ES256 --public_key gateway.pub --user \"${GW_USER}\" --group \"${GW_GROUP}\" --jwt \"${ADMIN_JWT}\" Creating JWTs to access the gateway The secrets-config utility has a helper method to create a JWT from the ID and private key. By default, the resulting JWT is valid for only one hour. This can be changed with the --expiration flag if needed.
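The one-hour default corresponds to an exp claim set 3600 seconds after the token's issuance time. A minimal sketch of the check a verifier performs on that claim (generic JWT-claim logic, not the gateway's code; the timestamps are made up):

```python
import time

DEFAULT_EXPIRATION_SECONDS = 3600  # matches the documented one-hour default

def is_token_expired(claims, now=None):
    """Return True if the decoded JWT payload's exp time has passed.

    `exp` is expressed as seconds since the Unix epoch.
    """
    now = time.time() if now is None else now
    return claims.get("exp", 0) <= now

issued_at = 1_700_000_000  # hypothetical issuance time
claims = {"iat": issued_at, "exp": issued_at + DEFAULT_EXPIRATION_SECONDS}

print(is_token_expired(claims, now=issued_at + 10))    # False: 10s after issue
print(is_token_expired(claims, now=issued_at + 7200))  # True: two hours later
```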
ID=`cat gateway.id` USER_JWT=`./secrets-config proxy jwt --algorithm ES256 --id ${ID} --private_key gateway.key` Use the resulting JWT to call an EdgeX API method through the gateway: curl -k -H \"Authorization: Bearer ${USER_JWT}\" \"https://localhost:8443/core-data/api/v2/ping\" Output: {\"apiVersion\":\"v2\",\"timestamp\":\"Fri Sep 3 00:33:58 UTC 2021\"} CORS settings EdgeX 2.1 New for EdgeX 2.1 is the ability to enable CORS access to EdgeX microservices through configuration. The EdgeX microservices provide REST APIs, and those services might be called from a GUI through a browser.
Browsers prevent service calls from a different origin, making it impossible to host a management GUI on one domain that manages an EdgeX device on a different domain. Thus, EdgeX supports Cross-Origin Resource Sharing (CORS) since the Jakarta release (v2.1), and this feature can be controlled through configuration. CORS is disabled by default. Here is a good reference to understand CORS . Note The C Device SDK doesn't support CORS, and enabling CORS in Device Services is not recommended, because browsers should not access Device Services directly. Enabling CORS There are two different ways to enable CORS depending on whether EdgeX is deployed in the security-enabled configuration. In the non-security configuration, EdgeX microservices are directly exposed on host ports. EdgeX microservices receive client requests directly in this configuration, and thus the EdgeX microservices themselves must respond to CORS requests. In the security-enabled configuration, EdgeX microservices are exposed behind an API gateway that will receive CORS requests first. Only authenticated calls will be forwarded to the EdgeX microservice, but CORS pre-flight requests are always unauthenticated. CORS can be enabled at the API gateway in a security-enabled configuration, and at the individual microservice level in the non-security configuration. However, implementers should choose one or the other, not both. Enabling CORS for Individual Microservices Configure CORS in the Service.CORSConfiguration configuration section for each microservice to be exposed via CORS. These settings can also be set via Service_CORSConfiguration_* environment variables. Please refer to the Common Configuration page to learn the details. Enabling CORS for API Gateway Configure CORS in the CORSConfiguration configuration section for the security-proxy-setup microservice. These settings can also be set via CORSConfiguration_* environment variables.
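A pre-flight request is an OPTIONS call that carries Origin and Access-Control-Request-Method headers, which is why it arrives unauthenticated. A minimal sketch of the detection logic (illustrative only, not EdgeX source code):

```python
def is_cors_preflight(method, headers):
    """Detect a CORS pre-flight request per the Fetch specification:
    an OPTIONS request carrying both the Origin and the
    Access-Control-Request-Method headers.
    """
    lowered = {k.lower() for k in headers}
    return (
        method.upper() == "OPTIONS"
        and "origin" in lowered
        and "access-control-request-method" in lowered
    )

print(is_cors_preflight("OPTIONS", {"Origin": "https://gui.example.com",
                                    "Access-Control-Request-Method": "GET"}))  # True
print(is_cors_preflight("GET", {"Origin": "https://gui.example.com"}))         # False
```

A gateway or microservice using such a check would answer the pre-flight with the configured CORS headers before any authentication is applied.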
Note The settings under the CORSConfiguration configuration section are the same as those under the Service.CORSConfiguration so please refer to the Common Configuration page to learn the details. Note The name of the configuration sections and environment variable overrides are intentionally different than the API gateway section, in alignment with the guidance that CORS should be enabled at the microservice level or the API gateway level, but not both.","title":"CORS settings"},{"location":"security/Ch-CORS-Settings/#cors-settings","text":"EdgeX 2.1 New for EdgeX 2.1 is the ability to enable CORS access to EdgeX microservices through configuration. The EdgeX microservices provide REST APIs and those services might be called from a GUI through a browser. Browsers prevent service calls from a different origin, making it impossible to host a management GUI on one domain that manages an EdgeX device on a different domain. Thus, EdgeX supports Cross-Origin Resource Sharing (CORS) since Jakarta release (v2.1), and this feature can be controlled by the configurations. The default behavior of CORS is disabled. Here is a good reference to understand CORS . Note C Device SDK doesn't support CORS, and enabling CORS in Device Services is not recommended because browsers should not access Device Services directly.","title":"CORS settings"},{"location":"security/Ch-CORS-Settings/#enabling-cors","text":"There are two different ways to enable CORS depending on whether EdgeX is deployed in the security-enabled configuration. In the non-security configuration, EdgeX microservices are directly exposed on host ports. EdgeX microservices receive client requests directly in this configuration, and thus, the EdgeX microservices themselves must respond to CORS requests. In the security-enabled configuration, EdgeX microservices are exposed behind an API gateway that will receive CORS requests first. 
Only authenticated calls will be forwarded to the EdgeX microservice, but CORS pre-flight requests are always unauthenticated. CORS can be enabled at the API gateway in a security-enabled configuration, and at the individual microservice level in the non-security configuration. However, implementers should choose one or the other, not both.","title":"Enabling CORS"},{"location":"security/Ch-CORS-Settings/#enabling-cors-for-individual-microservices","text":"Configure CORS in the Service.CORSConfiguration configuration section for each microservice to be exposed via CORS. They can also be set via Service_CORSConfiguration_* environment variables. Please refer to the Common Configuration page to learn the details.","title":"Enabling CORS for Individual Microservices"},{"location":"security/Ch-CORS-Settings/#enabling-cors-for-api-gateway","text":"Configure CORS in the CORSConfiguration configuration section for the security-proxy-setup microservice. They can also be set via CORSConfiguration_* environment variables. Note The settings under the CORSConfiguration configuration section are the same as those under Service.CORSConfiguration , so please refer to the Common Configuration page to learn the details. Note The name of the configuration sections and environment variable overrides are intentionally different from the API gateway section, in alignment with the guidance that CORS should be enabled at the microservice level or the API gateway level, but not both.","title":"Enabling CORS for API Gateway"},{"location":"security/Ch-Configuring-Add-On-Services/","text":"Configuring Add-on Service In the current EdgeX security services, we set up and configure all security-related properties and environments for the existing default services like core-data , core-metadata , device-virtual , and so on. The settings and service environment variables are pre-wired and ready to run in secure mode without any update or modification to the Docker-compose files.
However, some pre-built add-on services, such as some device services (e.g. device-camera , device-modbus ) and some application services (e.g. app-http-export , app-mqtt-export ), are not pre-wired by default. Also, if you are adding your own custom application service, there is no pre-wiring for it, and some configuration effort is needed to make it run in secure mode. EdgeX provides a way for a user to add and configure those add-on services into the EdgeX Docker software stack running in secure mode. This can be done via Docker-compose files with a few additional environment variables and some modification of the micro-service's Dockerfile. From the edgex-compose repository, the compose-builder utility provides some ways to deal with those add-on services, for example through add-security.yml via make targets, to generate a docker-compose file for running them in secure mode. For more details, please refer to the README documentation of compose-builder . The same guidelines can also be applied to custom device and application services, i.e. non-EdgeX built services. One of the major security features in the EdgeX Ireland release is utilizing the security-bootstrapper service to ensure the right starting sequence so that all services have their needed security dependencies when they start up. Currently EdgeX uses Vault as the default implementation for the secret store and Consul as the configuration and/or registry server if the user chooses to use it. Some default services are pre-configured to have Secret Stores created by default, such as the EdgeX core/support services, device-virtual, device-rest, and app-rules-engine. For running additional add-on services (e.g. device-camera , app-http-export ) in secure mode, their Secret Stores are not generated by default, but they can be generated through the configuration steps shown below.
In the following scenario, we assume the EdgeX services are running in Docker environments, and thus the examples are given in terms of Docker-compose. The same steps and concepts should apply, with little difference, in a snap-based environment. If users want to configure and set up an add-on service, e.g. device-camera , they can achieve this by following the steps outlined below: Make add-on services security-bootstrapper compatible To use the Docker entrypoint scripts for the gating mechanism from security-bootstrapper , the Dockerfile of device-camera should use a base Docker image with shell scripting capability, such as an alpine -based image, and should install dumb-init (see details in Why you need an init system ) via the apk add --update command. Dockerfile example using an alpine-based image and adding dumb-init : ...... FROM alpine:3.12 # dumb-init needed for injected secure bootstrapping entrypoint script when run in secure mode. RUN apk add --update --no-cache dumb-init ...... The service itself should then add /edgex-init/ready_to_run_wait_install.sh as its entrypoint script, in gating fashion, and add the related Docker volumes for edgex-init and for the Secret Store token, which will be outlined in the next section. A good example of this is app-service-rules : ... app-service-rules : entrypoint : [ \"/edgex-init/ready_to_run_wait_install.sh\" ] command : \"/app-service-configurable ${DEFAULT_EDGEX_RUN_CMD_PARMS}\" volumes : - edgex-init:/edgex-init:ro,z - /tmp/edgex/secrets/app-rules-engine:/tmp/edgex/secrets/app-rules-engine:ro,z depends_on : - security-bootstrapper ... Note that we also add a command directive override in the above example: because we override Docker's entrypoint script in the original Dockerfile, Docker ignores the original command when the entrypoint script is overridden.
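Applying the same gating pattern to device-camera might look like the fragment below. This is a sketch only: the command arguments are illustrative assumptions, not taken from the actual device-camera image, while the entrypoint, volumes, and depends_on mirror the app-service-rules example above.

```yaml
# Hypothetical compose fragment: gating device-camera on security-bootstrapper.
# The command flags are assumptions for illustration.
device-camera:
  entrypoint: ["/edgex-init/ready_to_run_wait_install.sh"]
  command: "/device-camera --cp=consul.http://edgex-core-consul:8500 --registry"
  volumes:
    - edgex-init:/edgex-init:ro,z
    - /tmp/edgex/secrets/device-camera:/tmp/edgex/secrets/device-camera:ro,z
  depends_on:
    - security-bootstrapper
```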
Configure the service's Secret Store to use Make sure the TOML configuration file of add-on service like device-camera contains the proper [SecretStore] section. Example: [SecretStore] Type = \"vault\" Host = \"localhost\" Port = 8200 Path = \"device-camera/\" Protocol = \"http\" RootCaCertPath = \"\" ServerName = \"\" TokenFile = \"/tmp/edgex/secrets/device-camera/secrets-token.json\" [SecretStore.Authentication] AuthType = \"X-Vault-Token\" Note that the service key device-camera must be used for the Path and in the TokenFile path to keep it consistent and easier to maintain. And then add the add-on service's service key to EdgeX service secretstore-setup 's ADD_SECRETSTORE_TOKENS environment variable in the environment section of docker-compose as the example shown below: ... secretstore-setup : container_name : edgex-secretstore-setup depends_on : - security-bootstrapper - vault environment : ADD_SECRETSTORE_TOKENS : 'device-camera' ... With that, secretstore-setup then will generate Secret Store token from Vault and store it in the TokenFile path specified in the TOML configuration file like the above example. Also note that the value of ADD_SECRETSTORE_TOKENS can take more than one service in a form of comma separated list like \" device-camera , device-modbus \" if needed. (Optional) Configure known secrets for add-on services The ADD_KNOWN_SECRETS environment variable on secretstore-setup allows for known secrets to be added to an add-on service's Secret Store . For the Ireland release, the only known secret is the Redis DB credentials identified by the name redisdb . Any add-on service needing access to the Redis DB such as App Service HTTP Export with Store and Forward enabled will need the Redis DB credentials put in its Secret Store . 
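The token that secretstore-setup writes to the TokenFile path can be read back by the service at startup. As a sketch, assuming the file is a Vault-style auth response (the JSON shape below is a fabricated sample for illustration, not a real token), the client token can be extracted with sed:

```shell
# Create a sample token file mimicking the assumed shape of secrets-token.json.
# A real file is written by secretstore-setup; this one is fabricated.
mkdir -p /tmp/edgex-demo
cat > /tmp/edgex-demo/secrets-token.json <<'EOF'
{"auth":{"client_token":"s.sampletoken","renewable":true,"lease_duration":3600}}
EOF
# Pull the client token out of the JSON without external dependencies.
TOKEN=$(sed -n 's/.*"client_token":"\([^"]*\)".*/\1/p' /tmp/edgex-demo/secrets-token.json)
echo "$TOKEN"
```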
Also, since the Redis DB service is now used for the MessageBus implementation, all services that connect to the MessageBus also need the Redis DB credentials. Note that the steps needed for connecting add-on services to the Secure MessageBus are: Utilizing the security-bootstrapper to ensure proper startup sequence Creating the Secret Store for the add-on service Adding the redisdb 's known secret to the add-on service's Secret Store If the add-on service is not connecting to the bus or the Redis database, then this last step can be skipped. As an example, for the device-virtual service to use the Redis message bus in secure mode, we need to tell secretstore-setup to add the redisdb known secret to the Secret Store for device-virtual . This can be done by adding redisdb[device-virtual] to the environment variable ADD_KNOWN_SECRETS in the secretstore-setup service's environment section, in which redisdb is the name of the known secret and device-virtual is the service key of the add-on service. ... secretstore-setup : container_name : edgex-secretstore-setup depends_on : - security-bootstrapper - vault environment : ADD_SECRETSTORE_TOKENS : 'device-camera, my-service' ADD_KNOWN_SECRETS : redisdb[app-rules-engine],redisdb[device-rest],redisdb[device-virtual] ... In the above docker-compose section of secretstore-setup , we specify the known secret of redisdb to add/copy the Redis database credentials to the Secret Store for the app-rules-engine , device-rest , and device-virtual services.
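The same mapping can also be written in a more compact form, grouping all the service keys in one comma-separated list inside a single redisdb[...] entry:

```yaml
# Equivalent compact form of the ADD_KNOWN_SECRETS value shown above.
secretstore-setup:
  environment:
    ADD_KNOWN_SECRETS: redisdb[app-rules-engine, device-rest, device-virtual]
```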
(Optional) Configure the ACL role of configuration/registry to use if the service depends on it This is a new step coming from securing Consul security features as part of EdgeX Ireland release. If the add-on service uses Consul as the configuration and/or registry service, then we also need to configure the environment variable ADD_REGISTRY_ACL_ROLES to tell security-bootstrapper to generate an ACL role for Consul to associate with its token. An example of configuring ACL roles of the registry Consul for the add-on services device-modbus and app-http-export is shown as follows: ... consul : container_name : edgex-core-consul depends_on : - security-bootstrapper - vault entrypoint : - /edgex-init/consul_wait_install.sh environment : ADD_REGISTRY_ACL_ROLES : app-http-export,device-modbus ... The configuration of Edgex service consul 's environment variable ADD_REGISTRY_ACL_ROLES tells the security-bootstrapper to set up Consul ACL role so that the ACL token is generated, hence the permission is granted for that service with the access to Consul in secure mode. Without this step the add-on service will get status Forbidden (HTTP status code = 403) error when the service is depending on Consul and attempting to access Consul for configuration or service registry. (Optional) Configure the API gateway access route for add-on service If it is desirable to let user or other application services outside EdgeX's Docker network access the endpoint of the add-on service, then we can configure and add it via proxy-setup service's ADD_PROXY_ROUTE environment variable. proxy-setup adds those services listed in that environment variable into the API gateway (also known as Kong) route so that the endpoint can be accessible using Kong's proxy endpoint. One example of adding API gateway access routes for both device-camera and device-modbus is given as follows: ... edgex-proxy : ... environment : ... 
ADD_PROXY_ROUTE : \"device-camera.http://edgex-device-camera:59985, device-modbus.http://edgex-device-modbus:59901\" ... ... where, in the comma-separated list, the first part of the configured value, device-camera , is the service key, and the URL is the service's hostname with its docker network port number ( 59985 for device-camera ). The same idea applies to device-modbus and its values. With that setup, we can then access the endpoints of device-camera from Kong's host like https://:8443/device-camera/{device-name}/name , assuming the caller can resolve the Kong host via DNS. For more details on the introduction to the API gateway and how it works, please see the APIGateway documentation page .","title":"Configuring Add-on Service"},{"location":"security/Ch-Configuring-Add-On-Services/#configuring-add-on-service","text":"In the current EdgeX security services, we set up and configure all security-related properties and environments for the existing default services like core-data , core-metadata , device-virtual , and so on. The settings and service environment variables are pre-wired and ready to run in secure mode without any update or modification to the Docker-compose files. However, some pre-built add-on services, such as some device services (e.g. device-camera , device-modbus ) and some application services (e.g. app-http-export , app-mqtt-export ), are not pre-wired by default. Also, if you are adding your own custom application service, there is no pre-wiring for it, and some configuration effort is needed to make it run in secure mode. EdgeX provides a way for a user to add and configure those add-on services into the EdgeX Docker software stack running in secure mode. This can be done via Docker-compose files with a few additional environment variables and some modification of the micro-service's Dockerfile.
From the edgex-compose repository, the compose-builder utility provides some ways to deal with those add-on services, for example through add-security.yml via make targets, to generate a docker-compose file for running them in secure mode. For more details, please refer to the README documentation of compose-builder . The same guidelines can also be applied to custom device and application services, i.e. non-EdgeX built services. One of the major security features in the EdgeX Ireland release is utilizing the security-bootstrapper service to ensure the right starting sequence so that all services have their needed security dependencies when they start up. Currently EdgeX uses Vault as the default implementation for the secret store and Consul as the configuration and/or registry server if the user chooses to use it. Some default services are pre-configured to have Secret Stores created by default, such as the EdgeX core/support services, device-virtual, device-rest, and app-rules-engine. For running additional add-on services (e.g. device-camera , app-http-export ) in secure mode, their Secret Stores are not generated by default, but they can be generated through the configuration steps shown below. In the following scenario, we assume the EdgeX services are running in Docker environments, and thus the examples are given in terms of Docker-compose. The same steps and concepts should apply, with little difference, in a snap-based environment. If users want to configure and set up an add-on service, e.g.
device-camera , they can achieve this by following the steps that are outlined below:","title":"Configuring Add-on Service"},{"location":"security/Ch-Configuring-Add-On-Services/#make-add-on-services-security-bootstrapper-compatible","text":"To use the Docker entrypoint scripts for gating mechanism from security-bootstrapper , the Dockerfile of device-camera should inherit shell scripting capability like alpine -based as the base Docker image and should install dumb-init (see details in Why you need an init system ) via apk add --update command. Dockerfile example using alpine-base image and add dumb-init : ...... FROM alpine:3.12 # dumb-init needed for injected secure bootstrapping entrypoint script when run in secure mode. RUN apk add --update --no-cache dumb-init ...... and then for the service itself should add /edgex-init/ready_to_run_wait_install.sh as the entrypoint script for the service in gating fashion and add related Docker volumes for edgex-init and for Secret Store token, which will be outlined in the next section. A good example of this will be like app-service-rules : ... app-service-rules : entrypoint : [ \"/edgex-init/ready_to_run_wait_install.sh\" ] command : \"/app-service-configurable ${DEFAULT_EDGEX_RUN_CMD_PARMS}\" volumes : - edgex-init:/edgex-init:ro,z - /tmp/edgex/secrets/app-rules-engine:/tmp/edgex/secrets/app-rules-engine:ro,z depends_on : - security-bootstrapper ... Note that we also add command directive override in the above example because we override Docker's entrypoint script in the original Dockerfile and Docker ignores the original command when the entrypoint script is overridden. 
In this case, we also override the command for app-service-rules service with arguments to execute.","title":"Make add-on services security-bootstrapper compatible"},{"location":"security/Ch-Configuring-Add-On-Services/#configure-the-services-secret-store-to-use","text":"Make sure the TOML configuration file of add-on service like device-camera contains the proper [SecretStore] section. Example: [SecretStore] Type = \"vault\" Host = \"localhost\" Port = 8200 Path = \"device-camera/\" Protocol = \"http\" RootCaCertPath = \"\" ServerName = \"\" TokenFile = \"/tmp/edgex/secrets/device-camera/secrets-token.json\" [SecretStore.Authentication] AuthType = \"X-Vault-Token\" Note that the service key device-camera must be used for the Path and in the TokenFile path to keep it consistent and easier to maintain. And then add the add-on service's service key to EdgeX service secretstore-setup 's ADD_SECRETSTORE_TOKENS environment variable in the environment section of docker-compose as the example shown below: ... secretstore-setup : container_name : edgex-secretstore-setup depends_on : - security-bootstrapper - vault environment : ADD_SECRETSTORE_TOKENS : 'device-camera' ... With that, secretstore-setup then will generate Secret Store token from Vault and store it in the TokenFile path specified in the TOML configuration file like the above example. Also note that the value of ADD_SECRETSTORE_TOKENS can take more than one service in a form of comma separated list like \" device-camera , device-modbus \" if needed.","title":"Configure the service's Secret Store to use"},{"location":"security/Ch-Configuring-Add-On-Services/#optional-configure-known-secrets-for-add-on-services","text":"The ADD_KNOWN_SECRETS environment variable on secretstore-setup allows for known secrets to be added to an add-on service's Secret Store . For the Ireland release, the only known secret is the Redis DB credentials identified by the name redisdb . 
Any add-on service needing access to the Redis DB, such as App Service HTTP Export with Store and Forward enabled, will need the Redis DB credentials put in its Secret Store . Also, since the Redis DB service is now used for the MessageBus implementation, all services that connect to the MessageBus also need the Redis DB credentials. Note that the steps needed for connecting add-on services to the Secure MessageBus are: Utilizing the security-bootstrapper to ensure proper startup sequence Creating the Secret Store for the add-on service Adding the redisdb 's known secret to the add-on service's Secret Store If the add-on service is not connecting to the bus or the Redis database, then this last step can be skipped. As an example, for the device-virtual service to use the Redis message bus in secure mode, we need to tell secretstore-setup to add the redisdb known secret to the Secret Store for device-virtual . This can be done by adding redisdb[device-virtual] to the environment variable ADD_KNOWN_SECRETS in the secretstore-setup service's environment section, in which redisdb is the name of the known secret and device-virtual is the service key of the add-on service. ... secretstore-setup : container_name : edgex-secretstore-setup depends_on : - security-bootstrapper - vault environment : ADD_SECRETSTORE_TOKENS : 'device-camera, my-service' ADD_KNOWN_SECRETS : redisdb[app-rules-engine],redisdb[device-rest],redisdb[device-virtual] ... In the above docker-compose section of secretstore-setup , we specify the known secret of redisdb to add/copy the Redis database credentials to the Secret Store for the app-rules-engine , device-rest , and device-virtual services.
We can also use the alternative or simpler form of ADD_KNOWN_SECRETS environment variable's value like ADD_KNOWN_SECRETS : redisdb[app-rules-engine, device-rest, device-virtual] in which all add-on services are put together in a comma separated list associated with the known secret redisdb .","title":"(Optional) Configure known secrets for add-on services"},{"location":"security/Ch-Configuring-Add-On-Services/#optional-configure-the-acl-role-of-configurationregistry-to-use-if-the-service-depends-on-it","text":"This is a new step coming from securing Consul security features as part of EdgeX Ireland release. If the add-on service uses Consul as the configuration and/or registry service, then we also need to configure the environment variable ADD_REGISTRY_ACL_ROLES to tell security-bootstrapper to generate an ACL role for Consul to associate with its token. An example of configuring ACL roles of the registry Consul for the add-on services device-modbus and app-http-export is shown as follows: ... consul : container_name : edgex-core-consul depends_on : - security-bootstrapper - vault entrypoint : - /edgex-init/consul_wait_install.sh environment : ADD_REGISTRY_ACL_ROLES : app-http-export,device-modbus ... The configuration of Edgex service consul 's environment variable ADD_REGISTRY_ACL_ROLES tells the security-bootstrapper to set up Consul ACL role so that the ACL token is generated, hence the permission is granted for that service with the access to Consul in secure mode. 
Without this step the add-on service will get status Forbidden (HTTP status code = 403) error when the service is depending on Consul and attempting to access Consul for configuration or service registry.","title":"(Optional) Configure the ACL role of configuration/registry to use if the service depends on it"},{"location":"security/Ch-Configuring-Add-On-Services/#optional-configure-the-api-gateway-access-route-for-add-on-service","text":"If it is desirable to let user or other application services outside EdgeX's Docker network access the endpoint of the add-on service, then we can configure and add it via proxy-setup service's ADD_PROXY_ROUTE environment variable. proxy-setup adds those services listed in that environment variable into the API gateway (also known as Kong) route so that the endpoint can be accessible using Kong's proxy endpoint. One example of adding API gateway access routes for both device-camera and device-modbus is given as follows: ... edgex-proxy : ... environment : ... ADD_PROXY_ROUTE : \"device-camera.http://edgex-device-camera:59985, device-modbus.http://edgex-device-modbus:59901\" ... ... where in the comma separated list, the first part of configured value device-camera is the service key and the URL format is the service's hostname with its docker network port number 59985 for device-camera . The same idea applies to device-modbus with its values. With that setup, we can then access the endpoints of device-camera from Kong's host like https://:8443/device-camera/{device-name}/name assuming the caller can resolve from DNS server. 
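Given the route naming above, the external URL for a routed add-on service is derived from the Kong host, the TLS port 8443, and the service key. A small sketch (the host and endpoint path here are illustrative assumptions):

```shell
# Build the Kong proxy URL for a routed add-on service.
KONG_HOST=localhost          # assumption: Kong is reachable on localhost
SERVICE_KEY=device-camera    # the first part of the ADD_PROXY_ROUTE entry
ENDPOINT=api/v2/ping         # illustrative endpoint path
PROXY_URL="https://${KONG_HOST}:8443/${SERVICE_KEY}/${ENDPOINT}"
echo "$PROXY_URL"
```

The call itself then follows the usual gateway pattern, passing the user's JWT in the Authorization header with curl.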
For more details on the introduction to the API gateway and how it works, please see the APIGateway documentation page .","title":"(Optional) Configure the API gateway access route for add-on service"},{"location":"security/Ch-Docker-Swarm-SecuringDeviceServices/","text":"Security for EdgeX Stack This page shows how to secure communication between core EdgeX services and various device services by utilizing docker swarm to create an encrypted overlay network between two hosts. We are showcasing two interesting concepts here. 1) Securing the traffic between core and device services 2) Setting up an EdgeX Stack cross-platform using docker swarm Docker Swarm Overlay Network Docker's overlay network driver is a software abstraction on top of physical networking hardware to link multiple nodes together in a distributed network. This allows nodes/containers running on the network to communicate securely, if encryption is enabled. Overlay network encryption is not supported on Windows. We created two docker swarm nodes for this example: a manager node and a worker node. The manager node is running all of the core EdgeX services and the worker node runs the device services. Using the docker daemon's overlay network abstraction and enabling security, we can have secure communication between these nodes. Reference implementation example The reference implementation example can be found in this repository: Reference example device-service docker-swarm overlay network Setup remote running Virtual Machine In this example setup, similar to the SSH example , vagrant is used on top of VirtualBox to set up the secondary/remote VM. Download vagrant from the Hashicorp website or, if you're on Ubuntu, via sudo apt install virtualbox and sudo apt install vagrant . We have a simple vagrant file used for this tutorial here . This vagrant file sets the hostname for the new VM and installs docker.
Getting the VM running Launch the worker node or VM if it is not yet running: This will create the VM; select your network interface and let the prompt continue. Once the prompt finishes, ignore the VM's popup window; we will be logging in via SSH in the next step. vagrant up ssh into the worker node from your host's terminal: vagrant ssh This will give you a terminal prompt in the worker node where you will run the sudo docker swarm join command in a few steps. Connecting the swarm nodes With the VM up and running, we need to connect the two nodes using docker swarm. The following command initializes a docker swarm and is to be run on the host machine: sudo docker swarm init --advertise-addr The previous command will output a token; use this token in the following join command. This joins the worker node to the cluster, to be run on your vagrant VM (worker-node): sudo docker swarm join --token :2377 Next, I will walk through the changes we made to the docker-stack.yml file to convert the edgex compose file into a docker swarm stack file. Setting up the docker-stack-edgex.yml file All of the following changes are already done in the examples repo. I will just outline the necessary changes to go from a compose file to a stack file. First, remove the 'restart' command from the compose file; 'restart' is not a valid command in docker swarm. Next, we define constraints that the edgex core services must not run on the worker node; add this section of yml to the 'docker-stack-edgex.yml'. We will do the inverse for the device-service to ensure it does run on the worker node, thus ensuring it uses the overlay network to communicate with the other services. Note that this is already done in the example directory. deploy : placement : constraints : - node.hostname != worker-node Here is the inverse of the previous yml block. This gets added to the device services in the stack file.
deploy : placement : constraints : - node.hostname == worker-node These work because we set the 'hostname = worker-node' in the Vagrantfile. Adding Host Mounted Volumes In docker swarm, bind mount volumes need to be explicitly defined in the stack yml file. Below are the first three bind mount volume definitions; these directories must be created on the host before the stack file can be run. Note that this is only an example. In a production deployment you would want to use a network filesystem or create the shared volumes between containers. secrets-volume : driver : local driver_opts : o : bind type : none device : /tmp/edgex/secrets/ secrets-ca-volume : driver : local driver_opts : o : bind type : none device : /tmp/edgex/secrets/ca/ edgex-consul : driver : local driver_opts : o : bind type : none device : /tmp/edgex/secrets/edgex-consul/ ... The full docker-stack file is included here: docker-stack-edgex.yml file Other changes in docker-stack-edgex.yml file Another change we had to make in the docker-stack-edgex.yml file is to disable IPC_LOCK because the cap_add flag in vault's configuration is not supported in docker swarm. To do this we add SKIP_SETCAP=true and disable_mlock = \"true\" to vault in the stack file.
vault : image : vault:1.3.1 hostname : edgex-vault networks : edgex-network : aliases : - edgex-vault ports : - target : 8200 published : 8200 protocol : tcp mode : host # cap_add not allowed in docker swarm, this is a security issue and I don't recommend disabling this in production # cap_add: # - \"IPC_LOCK\" tmpfs : - /vault/config entrypoint : [ \"/vault/init/start_vault.sh\" ] environment : - VAULT_ADDR=https://edgex-vault:8200 - VAULT_CONFIG_DIR=/vault/config - VAULT_UI=true - SKIP_SETCAP=true - | VAULT_LOCAL_CONFIG= listener \"tcp\" { address = \"edgex-vault:8200\" tls_disable = \"0\" cluster_address = \"edgex-vault:8201\" tls_min_version = \"tls12\" tls_client_ca_file =\"/tmp/edgex/secrets/edgex-vault/ca.pem\" tls_cert_file =\"/tmp/edgex/secrets/edgex-vault/server.crt\" tls_key_file = \"/tmp/edgex/secrets/edgex-vault/server.key\" tls_prefer_server_cipher_suites = \"true\" } backend \"consul\" { path = \"vault/\" address = \"edgex-core-consul:8500\" scheme = \"http\" redirect_addr = \"https://edgex-vault:8200\" cluster_addr = \"https://edgex-vault:8201\" } default_lease_ttl = \"168h\" max_lease_ttl = \"720h\" disable_mlock = \"true\" volumes : - vault-file:/vault/file:z - vault-logs:/vault/logs:z - vault-init:/vault/init:ro,z - edgex-vault:/tmp/edgex/secrets/edgex-vault:ro,z depends_on : - consul - security-secrets-setup deploy : endpoint_mode : dnsrr placement : constraints : - node.hostname != worker-node Another change we had to make is to set the restart policy for one-shot initialization containers like kong-migrations and edgex-proxy. Simply add this section of yaml to the services you'd like to only run once, and they won't be restarted unless a failure condition happens. restart_policy: condition: on-failure The next and final change in the stack yml file is to ensure the EdgeX services are binding to the correct host.
Since Geneva we do this by adding a common variable Service_ServerBindAddr: \"0.0.0.0\" to ensure that the service will bind to any host and not be limited to the hostname. Running the docker stack file With all of these changes in place we are ready to run the stack file. We included a script to run the stack file and create the volumes needed in the stack file. This script simply creates the volume directories and runs the docker stack deploy ... command. sudo ./run.sh Once the stack is up you can run the following command to view the running services: sudo docker stack services edgex-overlay Confirming results To ensure the device service is running on the worker node you can run the docker stack ps edgex-overlay command. Now check that you see the device service running on the worker-node while all of the other services are running on your host. We have encryption enabled, but how do we confirm that the overlay network is encrypting our data? We can use tcpdump with a protocol filter for ESP (Encapsulating Security Payload) traffic on the worker node; this allows us to sniff and ensure the traffic is coming over the expected encrypted protocol. Adding a -A flag would also highlight that the data is not in the HTTP protocol format. sudo tcpdump -p esp Tearing everything down To remove the stack run the command: sudo ./down.sh This will remove the volumes and the stack. To remove the swarm itself run: on the worker node docker swarm leave and on the host machine docker swarm leave --force . To remove the vagrant VM run vagrant destroy on the host.","title":"Security for EdgeX Stack"},{"location":"security/Ch-Docker-Swarm-SecuringDeviceServices/#security-for-edgex-stack","text":"This page shows how to secure communication between core EdgeX services and various device services by utilizing docker swarm to create an encrypted overlay network between two hosts. We are showcasing two interesting concepts here.
1) Securing the traffic between core and device services 2) Setting up an EdgeX Stack cross platform using docker swarm\",\"title\":\"Security for EdgeX Stack\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#docker-swarm-overlay-network\",\"text\":\"Docker's overlay network driver is a software abstraction on top of physical networking hardware to link multiple nodes together in a distributed network. This allows nodes/containers running on the network to communicate securely, if encryption is enabled. Overlay network encryption is not supported on Windows. We created two docker swarm nodes for this example: a manager node and a worker node. The manager node is running all of the core EdgeX services and the worker node runs the device services. Using the docker daemon's overlay network abstraction and enabling security we can have secure communication between these nodes.\",\"title\":\"Docker Swarm Overlay Network\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#reference-implementation-example\",\"text\":\"The reference implementation example can be found in this repository: Reference example device-service docker-swarm overlay network\",\"title\":\"Reference implementation example\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#setup-remote-running-virtual-machine\",\"text\":\"In this example setup, similar to the SSH example , vagrant is used on top of VirtualBox to set up the secondary/remote VM. Download vagrant from the Hashicorp website, or if you're on Ubuntu, via sudo apt install virtualbox and sudo apt install vagrant . We have a simple vagrant file used for this tutorial here This vagrant file sets the hostname for the new VM and installs docker.\",\"title\":\"Setup remote running Virtual Machine\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#getting-the-vm-running\",\"text\":\"Launch the worker node or VM if it is not yet running: This will create the VM; select your network interface and let the prompt continue. 
Once the prompt finishes, ignore the VM's popup window; we will be logging in via SSH in the next step. vagrant up ssh into the worker node from your host's terminal: vagrant ssh This will give you a terminal prompt in the worker node where you will run the sudo docker swarm join command in a few steps.\",\"title\":\"Getting the VM running\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#connecting-the-swarm-nodes\",\"text\":\"With the VM up and running we need to connect the two nodes using docker swarm. The following command initializes a docker swarm and is to be run on the host machine: sudo docker swarm init --advertise-addr The previous command will output a token; use this token in the following join command. This joins the worker node to the cluster, and is to be run on your vagrant VM (worker-node): sudo docker swarm join --token :2377 Next, I will walk through the changes we made to the docker-stack.yml file to convert the edgex compose file into a docker swarm stack file.\",\"title\":\"Connecting the swarm nodes\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#setting-up-the-docker-stack-edgexyml-file\",\"text\":\"All of the following changes are already done in the examples repo. I will just outline the necessary changes to go from a compose file to a stack file. First, remove the 'restart' command from the compose file; 'restart' is not a valid command in docker swarm. Next, we define constraints so that the edgex core services must not run on the worker node; add this section of yml to the 'docker-stack-edgex.yml'. We will do the inverse for the device-service to ensure it does run on the worker node, thus ensuring it uses the overlay network to communicate with the other services. Note that this is already done in the example directory. deploy : placement : constraints : - node.hostname != worker-node Here is the inverse of the previous yml block. This gets added to the device services in the stack file. 
deploy : placement : constraints : - node.hostname == worker-node These work because we set the 'hostname = worker-node' in the Vagrantfile.\",\"title\":\"Setting up the docker-stack-edgex.yml file\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#adding-host-mounted-volumes\",\"text\":\"In docker swarm, bind mount volumes need to be explicitly defined in the stack yml file. Below are the first three bind mount volume definitions; these directories must be created on the host before the stack file can be run. Note that this is only an example. In a production deployment you would want to use a network filesystem or create the shared volumes between containers. secrets-volume : driver : local driver_opts : o : bind type : none device : /tmp/edgex/secrets/ secrets-ca-volume : driver : local driver_opts : o : bind type : none device : /tmp/edgex/secrets/ca/ edgex-consul : driver : local driver_opts : o : bind type : none device : /tmp/edgex/secrets/edgex-consul/ ... The full docker-stack file is included here: docker-stack-edgex.yml file\",\"title\":\"Adding Host Mounted Volumes\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#other-changes-in-docker-stack-edgexyml-file\",\"text\":\"Another change we had to make in the docker-stack-edgex.yml file is to disable IPC_LOCK because the cap_add flag in vault's configuration is not supported in docker swarm. To do this we add SKIP_SETCAP=true and disable_mlock = \"true\" to vault in the stack file. 
vault : image : vault:1.3.1 hostname : edgex-vault networks : edgex-network : aliases : - edgex-vault ports : - target : 8200 published : 8200 protocol : tcp mode : host # cap_add not allowed in docker swarm, this is a security issue and I don't recommend disabling this in production # cap_add: # - \"IPC_LOCK\" tmpfs : - /vault/config entrypoint : [ \"/vault/init/start_vault.sh\" ] environment : - VAULT_ADDR=https://edgex-vault:8200 - VAULT_CONFIG_DIR=/vault/config - VAULT_UI=true - SKIP_SETCAP=true - | VAULT_LOCAL_CONFIG= listener \"tcp\" { address = \"edgex-vault:8200\" tls_disable = \"0\" cluster_address = \"edgex-vault:8201\" tls_min_version = \"tls12\" tls_client_ca_file =\"/tmp/edgex/secrets/edgex-vault/ca.pem\" tls_cert_file =\"/tmp/edgex/secrets/edgex-vault/server.crt\" tls_key_file = \"/tmp/edgex/secrets/edgex-vault/server.key\" tls_prefer_server_cipher_suites = \"true\" } backend \"consul\" { path = \"vault/\" address = \"edgex-core-consul:8500\" scheme = \"http\" redirect_addr = \"https://edgex-vault:8200\" cluster_addr = \"https://edgex-vault:8201\" } default_lease_ttl = \"168h\" max_lease_ttl = \"720h\" disable_mlock = \"true\" volumes : - vault-file:/vault/file:z - vault-logs:/vault/logs:z - vault-init:/vault/init:ro,z - edgex-vault:/tmp/edgex/secrets/edgex-vault:ro,z depends_on : - consul - security-secrets-setup deploy : endpoint_mode : dnsrr placement : constraints : - node.hostname != worker-node Another change we had to make is to set the restart policy for one-shot initialization containers like kong-migrations and edgex-proxy. Simply add this section of yaml to the services you'd like to only run once and they won't be restarted unless a failure condition happens. restart_policy: condition: on-failure The next and final change in the stack yml file is to ensure the EdgeX services are binding to the correct host. 
Since Geneva we do this by adding a common variable Service_ServerBindAddr: \"0.0.0.0\" to ensure that the service will bind to any host and not be limited to the hostname.\",\"title\":\"Other changes in docker-stack-edgex.yml file\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#running-the-docker-stack-file\",\"text\":\"With all of these changes in place we are ready to run the stack file. We included a script to run the stack file and create the volumes needed in the stack file. This script simply creates the volume directories and runs the docker stack deploy ... command. sudo ./run.sh Once the stack is up you can run the following command to view the running services: sudo docker stack services edgex-overlay\",\"title\":\"Running the docker stack file\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#confirming-results\",\"text\":\"To ensure the device service is running on the worker node you can run the docker stack ps edgex-overlay command. Now check that you see the device service running on the worker-node while all of the other services are running on your host. We have encryption enabled, but how do we confirm that the overlay network is encrypting our data? We can use tcpdump with a protocol filter for ESP (Encapsulating Security Payload) traffic on the worker node; this allows us to sniff the traffic and ensure it is coming over the expected encrypted protocol. Adding a -A flag would also highlight that the data is not in the HTTP protocol format. sudo tcpdump -p esp\",\"title\":\"Confirming results\"},{\"location\":\"security/Ch-Docker-Swarm-SecuringDeviceServices/#tearing-everything-down\",\"text\":\"To remove the stack run the command: sudo ./down.sh This will remove the volumes and the stack. To remove the swarm itself, run docker swarm leave on the worker node and docker swarm leave --force on the host machine. 
To remove the vagrant VM run vagrant destroy on the host.\",\"title\":\"Tearing everything down\"},{\"location\":\"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/\",\"text\":\"Security for EdgeX Stack This page describes one of many options to secure the EdgeX software stack when running remote device services like device-virtual, device-rest, device-mqtt, and so on, via secure two-way SSH tunneling. Basic SSH-Tunneling In this option to secure the EdgeX software stack, SSH tunneling is utilized. The basic idea is to create a secure SSH connection between a local machine and a remote machine through which some micro-services or applications can be relayed. In this particular example, the local machine, as the primary host, is running the whole set of EdgeX core services, including core services and security services, but without any device service. The device services are running in the remote machine. The communication is secure because the SSH port forwarding connection is encrypted by default. The SSH communication is established by introducing some extra SSH-related services: 1) device-ssh-proxy : this is the service with the ssh client opening up the SSH communication between the local machine and the remote one 2) device-ssh-remote : this is actually the SSH server or daemon service together with the device services running on the remote machine The high-level diagram is shown as follows: \"Top level diagram for SSH tunneling for device services\" In the local machine, the SSH tunneling handshake is initiated by the device-ssh-proxy service to the remote running device services. The dependencies that the remote device services need are reverse-tunneled back from the local machine. 
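The two tunnel directions just described can be sketched as a pair of ssh invocations. This is a minimal illustration only: the host address, ports, and the single -R mapping are hypothetical placeholders rather than values from the example repository, and the commands are built as strings so they can be inspected instead of executed:

```shell
# Hypothetical endpoint values, for illustration only.
TUNNEL_HOST=192.168.33.10   # remote VM running the sshd container (placeholder)
TUNNEL_SSH_PORT=2223        # host port mapped to the sshd container's port 22

# Forward direction (-L): impersonate the remote device service on the local side.
FORWARD="ssh -N -L *:59900:edgex-device-virtual:59900 -p ${TUNNEL_SSH_PORT} ${TUNNEL_HOST}"

# Reverse direction (-R): make one local core service reachable from the remote side.
REVERSE="ssh -N -R 0.0.0.0:59880:edgex-core-data:59880 -p ${TUNNEL_SSH_PORT} ${TUNNEL_HOST}"

echo "$FORWARD"
echo "$REVERSE"
```

In the actual example these two directions are combined into a single ssh invocation with several -R mappings, as shown in the command excerpts later on this page.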
Reference implementation example The whole reference implementation example can be found in this repository: https://github.com/edgexfoundry/edgex-examples/tree/main/security/remote_devices/ssh-tunneling Setup remote running Virtual Machine In the example setup, vagrant is used on top of VirtualBox to set up the secondary/remote VM. The standard ssh port 22 is mapped to 2222 for vagrant ssh itself, and port 2223 on the VM network is also forwarded to port 2223 on the host machine. This port 2223 is used for the ssh daemon Docker container that will be introduced later on below. Once you have downloaded vagrant from the Hashicorp website, a typical first-time vagrant setup can be done via the command ./vagrant init , which will generate the Vagrant configuration file. The Vagrantfile can be found in the aforementioned GitHub repository. SSH Tunneling: Setup the SSH server on the remote machine For an example of how to run an SSH server in Docker, check out https://docs.docker.com/engine/examples/running_ssh_service/ for detailed instructions. Running sshd in Docker is a container anti-pattern, as one can enter a container for remote administration using docker exec . In this use case, however, we are not using sshd for remote administration, but instead to set up a network tunnel. The generate-keys.sh helper script generates an RSA keypair, and copies the authorized_keys file into the remote/sshd-remote folder. The sample's Dockerfile will then build this key into the remote sshd container image and use it for authentication. SSH Tunneling: Local Port Forwarding In this use case, we want to impersonate a device service that is running on a remote machine. We use local port forwarding to receive inbound requests on the device service's port, and ask that the traffic be forwarded through the ssh tunnel to a remote host and a remote port. The -L flag of the ssh command is important here. 
ssh -N \\ -o StrictHostKeyChecking=no \\ -o UserKnownHostsFile=/dev/null \\ -L *:$SERVICE_PORT:$SERVICE_HOST:$SERVICE_PORT \\ -p $TUNNEL_SSH_PORT \\ $TUNNEL_HOST where environment variables are: TUNNEL_HOST is the remote host name or IP address that the SSH daemon or server is running on; TUNNEL_SSH_PORT is the port number used for the SSH tunnel communication between the local machine and the remote machine SERVICE_PORT is the port number on the local or primary machine to be forwarded to the remote machine; without loss of generality, the port number on the remote machine is the same as the local one SERVICE_HOST is the service host name or IP address of the Docker containers that are running on the remote machine; SSH Reverse Tunneling: Remote Port Forwarding This step shows the reverse direction of SSH tunneling: from the remote machine back to the local machine. The reverse SSH tunneling is also needed because the device services depend on the core services like core-data , core-metadata , Redis (for message queuing), Vault (for the secret store), and Consul (for registry and configuration). These core services are running on the local machine and should be reverse-tunneled back from the remote machine. Essentially, the sshd container will impersonate these services on the remote side. This can be achieved by using the -R flag of the ssh command. 
ssh -N \\ -o StrictHostKeyChecking=no \\ -o UserKnownHostsFile=/dev/null \\ -R 0.0.0.0:$SECRETSTORE_PORT:$SECRETSTORE_HOST:$SECRETSTORE_PORT \\ -R 0.0.0.0:6379:$MESSAGEQUEUE_HOST:6379 \\ -R 0.0.0.0:8500:$REGISTRY_HOST:8500 \\ -R 0.0.0.0:5563:$CLIENTS_CORE_DATA_HOST:5563 \\ -R 0.0.0.0:59880:$CLIENTS_CORE_DATA_HOST:59880 \\ -R 0.0.0.0:59881:$CLIENTS_CORE_METADATA_HOST:59881 \\ -p $TUNNEL_SSH_PORT \\ $TUNNEL_HOST where environment variables are: TUNNEL_HOST is the remote host name or IP address that the SSH daemon or server is running on; In the reverse tunneling, the service host names of dependent services are used, like edgex-core-data , for example. Security: EdgeX Secret Store Token One last detail that needs to be taken care of is to copy the EdgeX secret store token to the remote machine. This is needed in order for the remote service to get access to the EdgeX secret store as well as the registry and configuration provider. This is done by copying the tokens over SSH to the remote machine prior to initiating the port-forwarding described above. scp -p \\ -o StrictHostKeyChecking=no \\ -o UserKnownHostsFile=/dev/null \\ -P $TUNNEL_SSH_PORT \\ /tmp/edgex/secrets/device-virtual/secrets-token.json $TUNNEL_HOST:/tmp/edgex/secrets/device-virtual/secrets-token.json ssh \\ -o StrictHostKeyChecking=no \\ -o UserKnownHostsFile=/dev/null \\ -p $TUNNEL_SSH_PORT \\ $TUNNEL_HOST -- \\ chown -Rh 2002:2001 /tmp/edgex/secrets/device-virtual Put it all together Remote host If you don't have a remote host already, and have Vagrant and VirtualBox installed, you can use the Vagrant CLI to start a VM: Launch the remote machine or VM if it is not yet running: ~/vm/vagrant up and ssh into the remote machine via ~/vm/vagrant ssh Make sure the edgex-examples repository is checked out to both the local and remote machines. In the local machine, run the generate-keys.sh helper script to generate an id_rsa and id_rsa.pub to the current directory. 
Copy these files to the same relative location on the remote machine as well so that both machines have access to the same keypair, and run generate-keys.sh on the remote machine as well. The keypair won't be overwritten, but an authorized_keys file for the remote side will be generated and copied to the appropriate location. On the remote machine, change directories into the remote folder and bring up the example stack: $ cd security/remote_devices/ssh-tunneling/remote $ docker-compose -f docker-compose.yml up --build -d This command will build the remote sshd container, with the public key embedded, and start up the device-virtual service. The device-virtual service will sit in a crash/retry loop until the ssh tunnel is initiated from the local side. It is interesting to note how the remote sshd impersonates several different hosts that actually exist on the local side. This is where reverse tunneling comes into play. sshd-remote : image : edgex-sshd-remote:latest build : context : sshd-remote container_name : edgex-sshd-remote hostname : edgex-sshd-remote ports : - \"2223:22\" read_only : true restart : always security_opt : - no-new-privileges:true networks : edgex-network : aliases : - edgex-core-consul - edgex-core-data - edgex-core-metadata - edgex-redis - edgex-vault tmpfs : - /run volumes : - /tmp/edgex/secrets/device-virtual:/tmp/edgex/secrets/device-virtual On the local machine, change directories into the local folder and bring up the example stack: $ cd security/remote_devices/ssh-tunneling/local $ docker-compose -f docker-compose.yml up --build -d The docker-compose.yml is a modified version of the original docker-compose.original with the following modifications: The original device-virtual service is commented out A device-ssh-proxy service is started in its place. This new service appears as edgex-device-virtual on the local network. Its job is to initiate the remote tunnel and forward network traffic in both directions. 
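A proxy of this shape might assemble its ssh arguments from environment variables roughly as follows. This is only a sketch: the variable names mirror the command excerpts shown elsewhere on this page, the defaults are placeholders, and only one of the several -R mappings is included; the finished command is echoed rather than executed:

```shell
# Placeholder defaults; a real deployment would set these in docker-compose.yml.
TUNNEL_HOST=${TUNNEL_HOST:-192.168.33.10}
TUNNEL_SSH_PORT=${TUNNEL_SSH_PORT:-2223}
SERVICE_HOST=${SERVICE_HOST:-edgex-device-virtual}
SERVICE_PORT=${SERVICE_PORT:-59900}

ARGS="-N -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
# Local forwarding: receive requests for the device service on the local side.
ARGS="$ARGS -L *:${SERVICE_PORT}:${SERVICE_HOST}:${SERVICE_PORT}"
# Remote forwarding: reverse-tunnel one core service (others omitted for brevity).
ARGS="$ARGS -R 0.0.0.0:59880:edgex-core-data:59880"
ARGS="$ARGS -p ${TUNNEL_SSH_PORT} ${TUNNEL_HOST}"
echo "ssh $ARGS"
```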
You will need to modify TUNNEL_HOST in the docker-compose.yaml to be the IP address of the remote host. Test with the device-virtual APIs Mainly run curl or postman directly from the local machine to the device-virtual APIs to verify the remote device virtual service can be accessible from the local host machine via two-way SSH tunneling. This can be checked from the console of the local machine: the ping response of calling edgex-device-virtual's ping action: jim@jim-NUC7i5DNHE:~/go/src/github.com/edgexfoundry/developer-scripts/releases/geneva/compose-files$ curl http://localhost:59900/api/v2/ping 1 .2.0-dev.13j or see the configuration of it via curl command: jim@jim-NUC7i5DNHE:~/go/src/github.com/edgexfoundry/developer-scripts/releases/geneva/compose-files$ curl http://localhost:59900/api/v2/config { \"Writable\" :{ \"LogLevel\" : \"INFO\" }, \"Clients\" :{ \"Data\" :{ \"Host\" : \"localhost\" , \"Port\" : 48080 , \"Protocol\" : \"http\" }, \"Logging\" :{ \"Host\" : \"localhost\" , \"Port\" : 48061 , \"Protocol\" : \"http\" }, \"Metadata\" :{ \"Host\" : \"edgex-core-metadata\" , \"Port\" : 48081 , \"Protocol\" : \"http\" }}, \"Logging\" :{ \"EnableRemote\" : false , \"File\" : \"\" }, \"Registry\" :{ \"Host\" : \"edgex-core-consul\" , \"Port\" : 8500 , \"Type\" : \"consul\" }, \"Service\" :{ \"BootTimeout\" : 30000 , \"CheckInterval\" : \"10s\" , \"ClientMonitor\" : 15000 , \"Host\" : \"edgex-device-virtual\" , \"Port\" : 59900 , \"Protocol\" : \"http\" , \"StartupMsg\" : \"device virtual started\" , \"MaxResultCount\" : 0 , \"Timeout\" : 5000 , \"ConnectRetries\" : 10 , \"Labels\" :[], \"EnableAsyncReadings\" : true , \"AsyncBufferSize\" : 16 }, \"Device\" :{ \"DataTransform\" : true , \"InitCmd\" : \"\" , \"InitCmdArgs\" : \"\" , \"MaxCmdOps\" : 128 , \"MaxCmdValueLen\" : 256 , \"RemoveCmd\" : \"\" , \"RemoveCmdArgs\" : \"\" , \"ProfilesDir\" : \"./res\" , \"UpdateLastConnected\" : false , \"Discovery\" :{ \"Enabled\" : false , \"Interval\" : \"\" }}, 
\"DeviceList\" :[{ \"Name\" : \"Random-Boolean-Device\" , \"Profile\" : \"Random-Boolean-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-bool-01\" , \"Port\" : \"300\" }}, \"AutoEvents\" :[{ \"frequency\" : \"10s\" , \"resource\" : \"Bool\" }]},{ \"Name\" : \"Random-Integer-Device\" , \"Profile\" : \"Random-Integer-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-int-01\" , \"Protocol\" : \"300\" }}, \"AutoEvents\" :[{ \"frequency\" : \"15s\" , \"resource\" : \"Int8\" },{ \"frequency\" : \"15s\" , \"resource\" : \"Int16\" },{ \"frequency\" : \"15s\" , \"resource\" : \"Int32\" },{ \"frequency\" : \"15s\" , \"resource\" : \"Int64\" }]},{ \"Name\" : \"Random-UnsignedInteger-Device\" , \"Profile\" : \"Random-UnsignedInteger-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-uint-01\" , \"Protocol\" : \"300\" }}, \"AutoEvents\" :[{ \"frequency\" : \"20s\" , \"resource\" : \"Uint8\" },{ \"frequency\" : \"20s\" , \"resource\" : \"Uint16\" },{ \"frequency\" : \"20s\" , \"resource\" : \"Uint32\" },{ \"frequency\" : \"20s\" , \"resource\" : \"Uint64\" }]},{ \"Name\" : \"Random-Float-Device\" , \"Profile\" : \"Random-Float-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-float-01\" , \"Protocol\" : \"300\" }}, \"AutoEvents\" :[{ \"frequency\" : \"30s\" , \"resource\" : \"Float32\" },{ \"frequency\" : \"30s\" , \"resource\" : \"Float64\" }]},{ \"Name\" : \"Random-Binary-Device\" , \"Profile\" : \"Random-Binary-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], 
\"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-bool-01\" , \"Port\" : \"300\" }}, \"AutoEvents\" : null }], \"Driver\" :{}} One can also monitor the docker log messages of core-data on the local machine too see if it publishes the events to the bus: $ docker logs -f edgex-core-data level = INFO ts = 2020 -06-10T00:49:26.579819548Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:26.579909649Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 4dc57d03-178e-49f5-a799-67813db9d85b \" level = INFO ts = 2020 -06-10T00:49:27.107028244Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:27.107128916Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 2a0fd8fa-bb16-4d1a-ba1b-c5e70e1a1cec \" level = INFO ts = 2020 -06-10T00:49:27.376915392Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:27.377084206Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 76d288e2-a2e8-4ed4-9265-986661b71bbe \" level = INFO ts = 2020 -06-10T00:49:27.718042678Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:27.718125128Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: f5412a38-0346-4bd3-b9da-69498e4edb9a \" level = INFO ts = 2020 -06-10T00:49:30.49407257Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:30.494162219Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. 
Topic: events, Correlation-id: da54fcc9-4771-4e0f-9eff-e0d2067eac7e \" level = INFO ts = 2020 -06-10T00:49:31.204976003Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:31.205211102Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 08574f61-6ea3-49cf-a776-028876de7957 \" level = INFO ts = 2020 -06-10T00:49:31.778242016Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:31.778342992Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: f1630f13-6fa7-45a6-b6f6-6bbde159b414 \" level = INFO ts = 2020 -06-10T00:49:34.747901983Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:34.748045382Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: cf14c573-60b9-43cd-b95b-2c6ffe26ba20 \" level = INFO ts = 2020 -06-10T00:49:34.944758331Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:34.9449585Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 292b9ca7-a640-4ac8-8650-866b7c4a6d15 \" level = INFO ts = 2020 -06-10T00:49:37.421202715Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:37.421367863Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. 
Topic: events, Correlation-id: bb7a34b1-c65f-4820-91a3-162903ac1e7a \" level = INFO ts = 2020 -06-10T00:49:42.290660694Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:42.290756356Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 8fff92c0-ef69-4758-bf8a-3492fb48cef2 \" level = INFO ts = 2020 -06-10T00:49:42.559019764Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:42.559105855Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 12947a42-4669-4bff-8720-d0e9fbeef343 \" level = INFO ts = 2020 -06-10T00:49:44.922764379Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:44.922848184Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. 
Topic: events, Correlation-id: 3c07ce76-203a-4bf5-ab89-b99a1fbbb266 \" and also do the docker log messages of device-virtual container on the remote: vagrant@ubuntu-bionic:~/geneva$ docker logs -f edgex-device-virtual level = INFO ts = 2020 -06-10T00:51:52.602154238Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 3d86a699-c089-412d-94f3-af6cd9093f28 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:53.358352349Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 9612b186-98cb-4dc5-887a-195ce7300978 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:57.649085447Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = da682ffb-9120-4286-9f33-aa0a9f2c0489 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:57.86899148Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = afc1fccf-de8a-46ce-9849-82c5e4e5837e msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:59.543754189Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 80ac32a0-3a9a-4b07-bf3f-b26ec159dc40 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:59.688746606Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 21501030 -c07c-4ac4-a2d2-1243782cb4b8 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:59.853069376Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 3b2927db-e689-4fad-8d53-af6fe20239f8 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:00.055657757Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = a7698f2d-a115-4b46-af5f-3b8bf77e6ea4 msg = \"SendEvent: Pushed 
event to core data\" level = INFO ts = 2020 -06-10T00:52:04.460557145Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 602efd03-8e9d-441b-9a7d-45dbcb6b416f msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:07.696983268Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 88190186 -6f93-4c6a-a1f6-d6a20a6e79e4 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:08.040474761Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 73c60159-f50c-480b-90da-ebe310fa2f6e msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:08.2091048Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 2d799509-dc1d-4075-b193-1e5da24cfa77 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:12.751717832Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 7611a188-23f4-44d0-bd12-f6574535be8d msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:13.553351482Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = a32067c8-adae-4778-b72d-0d8d7d11220f msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:15.20395683Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 41df0427-5998-4d1e-9c26-1f727912638b msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:15.686970839Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = c6a8bb2d-22ab-4932-bdd0-138f12f843b6 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:18.177810023Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = a49d663b-1676-4ecf-ba52-76e9ad7c501d 
msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:19.600220653Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = b6d2c2d1-5d5c-4f7a-9dd2-2067e732f018 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:19.990751025Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 1db5dde3-bb6b-4600-abbb-d01b3042c329 msg = \"SendEvent: Pushed event to core data\" Test to get random integer value of the remote device-virtual random integer device from the local machine using curl command like this: jim@jim-NUC7i5DNHE:~/go/src/github.com/edgexfoundry/device-virtual-go$ curl -k http://localhost:59900/api/v2/device/name/Random-Integer-Device/Int8 { \"device\" : \"Random-Integer-Device\" , \"origin\" :1592432603445490720, \"readings\" : [{ \"origin\" :1592432603404127336, \"device\" : \"Random-Integer-Device\" , \"name\" : \"Int8\" , \"value\" : \"11\" , \"valueType\" : \"Int8\" }] , \"EncodedEvent\" :null }","title":"Security for EdgeX Stack"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#security-for-edgex-stack","text":"This page describes one of many options to secure the EdgeX software stack with running remote device services like device-virtual, device-rest, device-mqtt, and so on, via secure two-way SSH-tunnelings.","title":"Security for EdgeX Stack"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#basic-ssh-tunneling","text":"In this option to secure the EdgeX software stack, SSH tunneling is utilized. The basic idea is to create a secure SSH connection between a local machine and a remote machine in which some micro-services or applications can be relayed. In this particular example, the local machine as the primary host is running the whole EdgeX core services including core services and security services but without any device service. The device services are running in the remote machine. 
The communication is secure because the SSH port-forwarding connection is encrypted by default. The SSH communication is established by introducing some extra SSH-related services: 1) device-ssh-proxy : this is the service with the ssh client that opens up the SSH communication between the local machine and the remote one 2) device-ssh-remote : this is the SSH server or daemon service, together with the device services, running on the remote machine The high-level diagram is shown as follows: \"Top level diagram for SSH tunneling for device services\" On the local machine, the SSH tunneling handshake is initiated by the device-ssh-proxy service toward the remotely running device services. The dependencies that the remote device services need are reverse-tunneled back from the local machine.","title":"Basic SSH-Tunneling"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#reference-implementation-example","text":"The whole reference implementation example can be found in this repository: https://github.com/edgexfoundry/edgex-examples/tree/main/security/remote_devices/ssh-tunneling","title":"Reference implementation example"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#setup-remote-running-virtual-machine","text":"In the example setup, vagrant is used on top of VirtualBox to set up the secondary/remote VM. The standard ssh port 22 is mapped to port 2222 for vagrant ssh itself, and port 2223 on the VM network is also forwarded to port 2223 on the host machine. This port 2223 is used for the ssh daemon Docker container that will be introduced later below. Once you have downloaded vagrant from the HashiCorp website, a typical first-time vagrant setup can be done via the command ./vagrant init , which will generate the Vagrant configuration file.
The Vagrantfile can be found in the aforementioned GitHub repository.","title":"Setup remote running Virtual Machine"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#ssh-tunneling-setup-the-ssh-server-on-the-remote-machine","text":"For an example of how to run an SSH server in Docker, check out https://docs.docker.com/engine/examples/running_ssh_service/ for detailed instructions. Running sshd in Docker is a container anti-pattern, as one can enter a container for remote administration using docker exec . In this use case, however, we are not using sshd for remote administration, but instead to set up a network tunnel. The generate-keys.sh helper script generates an RSA keypair, and copies the authorized_keys file into the remote/sshd-remote folder. The sample's Dockerfile will then build this key into the remote sshd container image and use it for authentication.","title":"SSH Tunneling: Setup the SSH server on the remote machine"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#ssh-tunneling-local-port-forwarding","text":"In this use case, we want to impersonate a device service that is running on a remote machine. We use local port forwarding to receive inbound requests on the device service's port, and ask that the traffic be forwarded through the ssh tunnel to a remote host and a remote port. The -L flag of the ssh command is important here.
ssh -N \\ -o StrictHostKeyChecking=no \\ -o UserKnownHostsFile=/dev/null \\ -L *:$SERVICE_PORT:$SERVICE_HOST:$SERVICE_PORT \\ -p $TUNNEL_SSH_PORT \\ $TUNNEL_HOST where the environment variables are: TUNNEL_HOST is the remote host name or IP address that the SSH daemon or server is running on; TUNNEL_SSH_PORT is the port number to be used for the SSH tunnel communication between the local machine and the remote machine SERVICE_PORT is the port number on the local or primary machine to be forwarded to the remote machine; without loss of generality, the port number on the remote machine is the same as the local one SERVICE_HOST is the service host name or IP address of the Docker containers that are running on the remote machine;","title":"SSH Tunneling: Local Port Forwarding"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#ssh-reverse-tunneling-remote-port-forwarding","text":"This step shows the reverse direction of SSH tunneling: from the remote machine back to the local machine. The reverse SSH tunneling is also needed because the device services depend on core services like core-data , core-metadata , Redis (for message queuing), Vault (for the secret store), and Consul (for registry and configuration). These core services run on the local machine and should be reverse-tunneled back from the remote machine. Essentially, the sshd container will impersonate these services on the remote side. This can be achieved by using the -R flag of the ssh command.
ssh -N \\ -o StrictHostKeyChecking=no \\ -o UserKnownHostsFile=/dev/null \\ -R 0.0.0.0:$SECRETSTORE_PORT:$SECRETSTORE_HOST:$SECRETSTORE_PORT \\ -R 0.0.0.0:6379:$MESSAGEQUEUE_HOST:6379 \\ -R 0.0.0.0:8500:$REGISTRY_HOST:8500 \\ -R 0.0.0.0:5563:$CLIENTS_CORE_DATA_HOST:5563 \\ -R 0.0.0.0:59880:$CLIENTS_CORE_DATA_HOST:59880 \\ -R 0.0.0.0:59881:$CLIENTS_CORE_METADATA_HOST:59881 \\ -p $TUNNEL_SSH_PORT \\ $TUNNEL_HOST where the environment variables are: TUNNEL_HOST is the remote host name or IP address that the SSH daemon or server is running on; In the reverse tunneling, the service host names of the dependent services are used, like edgex-core-data , for example.","title":"SSH Reverse Tunneling: Remote Port Forwarding"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#security-edgex-secret-store-token","text":"One last detail that needs to be taken care of is to copy the EdgeX secret store token to the remote machine. This is needed in order for the remote service to get access to the EdgeX secret store as well as the registry and configuration provider. This is done by copying the tokens over SSH to the remote machine prior to initiating the port-forwarding described above.
scp -p \\ -o StrictHostKeyChecking=no \\ -o UserKnownHostsFile=/dev/null \\ -P $TUNNEL_SSH_PORT \\ /tmp/edgex/secrets/device-virtual/secrets-token.json $TUNNEL_HOST:/tmp/edgex/secrets/device-virtual/secrets-token.json ssh \\ -o StrictHostKeyChecking=no \\ -o UserKnownHostsFile=/dev/null \\ -p $TUNNEL_SSH_PORT \\ $TUNNEL_HOST -- \\ chown -Rh 2002:2001 /tmp/edgex/secrets/device-virtual","title":"Security: EdgeX Secret Store Token"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#put-it-all-together","text":"","title":"Put it all together"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#remote-host","text":"If you don't have a remote host already, and have Vagrant and VirtualBox installed, you can use the Vagrant CLI to start a VM: Launch the remote machine or VM if it is not yet running: ~/vm/vagrant up and ssh into the remote machine via ~/vm/vagrant ssh Make sure the edgex-examples repository is checked out on both the local and remote machines. On the local machine, run the generate-keys.sh helper script to generate an id_rsa and id_rsa.pub in the current directory. Copy these files to the same relative location on the remote machine as well so that both machines have access to the same keypair, and run generate-keys.sh on the remote machine as well. The keypair won't be overwritten, but an authorized_keys file for the remote side will be generated and copied to the appropriate location. On the remote machine, change directories into the remote folder and bring up the example stack: $ cd security/remote_devices/ssh-tunneling/remote $ docker-compose -f docker-compose.yml up --build -d This command will build the remote sshd container, with the public key embedded, and start up the device-virtual service. The device-virtual service will sit in a crash/retry loop until the ssh tunnel is initiated from the local side.
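Before wiring the forward tunnel into the device-ssh-proxy container, it can help to sanity-check the variable substitutions. The sketch below only echoes the expanded command instead of running it; the host, port, and service values are hypothetical placeholders taken to match the example's defaults, and 0.0.0.0 is used in place of * to avoid shell globbing:

```shell
# Hypothetical values for illustration; take the real ones from the
# example compose files and your own network
TUNNEL_HOST=192.168.33.10
TUNNEL_SSH_PORT=2223
SERVICE_HOST=edgex-device-virtual
SERVICE_PORT=59900

# Echo the expanded local-forwarding command rather than executing it,
# so the substitutions can be inspected first
echo ssh -N -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  -L 0.0.0.0:$SERVICE_PORT:$SERVICE_HOST:$SERVICE_PORT \
  -p $TUNNEL_SSH_PORT $TUNNEL_HOST
```

Dropping the echo runs the real tunnel; the same pattern applies to the -R reverse-forwarding variant.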
It is interesting to note how the remote sshd impersonates several different hosts that actually exist on the local side. This is where reverse tunneling comes into play. sshd-remote : image : edgex-sshd-remote:latest build : context : sshd-remote container_name : edgex-sshd-remote hostname : edgex-sshd-remote ports : - \"2223:22\" read_only : true restart : always security_opt : - no-new-privileges:true networks : edgex-network : aliases : - edgex-core-consul - edgex-core-data - edgex-core-metadata - edgex-redis - edgex-vault tmpfs : - /run volumes : - /tmp/edgex/secrets/device-virtual:/tmp/edgex/secrets/device-virtual On the local machine, change directories into the local folder and bring up the example stack: $ cd security/remote_devices/ssh-tunneling/local $ docker-compose -f docker-compose.yml up --build -d The docker-compose.yml is a modified version of the original docker-compose.original with the following modifications: The original device-virtual service is commented out A device-ssh-proxy service is started in its place. This new service appears as edgex-device-virtual on the local network. Its job is to initiate the remote tunnel and forward network traffic in both directions. You will need to modify TUNNEL_HOST in the docker-compose.yml to be the IP address of the remote host.","title":"Remote host"},{"location":"security/Ch-SSH-Tunneling-HowToSecureDeviceServices/#test-with-the-device-virtual-apis","text":"Run curl or Postman directly from the local machine against the device-virtual APIs to verify that the remote device-virtual service is accessible from the local host machine via two-way SSH tunneling.
This can be checked from the console of the local machine: the ping response of calling edgex-device-virtual's ping action: jim@jim-NUC7i5DNHE:~/go/src/github.com/edgexfoundry/developer-scripts/releases/geneva/compose-files$ curl http://localhost:59900/api/v2/ping 1 .2.0-dev.13j or see the configuration of it via curl command: jim@jim-NUC7i5DNHE:~/go/src/github.com/edgexfoundry/developer-scripts/releases/geneva/compose-files$ curl http://localhost:59900/api/v2/config { \"Writable\" :{ \"LogLevel\" : \"INFO\" }, \"Clients\" :{ \"Data\" :{ \"Host\" : \"localhost\" , \"Port\" : 48080 , \"Protocol\" : \"http\" }, \"Logging\" :{ \"Host\" : \"localhost\" , \"Port\" : 48061 , \"Protocol\" : \"http\" }, \"Metadata\" :{ \"Host\" : \"edgex-core-metadata\" , \"Port\" : 48081 , \"Protocol\" : \"http\" }}, \"Logging\" :{ \"EnableRemote\" : false , \"File\" : \"\" }, \"Registry\" :{ \"Host\" : \"edgex-core-consul\" , \"Port\" : 8500 , \"Type\" : \"consul\" }, \"Service\" :{ \"BootTimeout\" : 30000 , \"CheckInterval\" : \"10s\" , \"ClientMonitor\" : 15000 , \"Host\" : \"edgex-device-virtual\" , \"Port\" : 59900 , \"Protocol\" : \"http\" , \"StartupMsg\" : \"device virtual started\" , \"MaxResultCount\" : 0 , \"Timeout\" : 5000 , \"ConnectRetries\" : 10 , \"Labels\" :[], \"EnableAsyncReadings\" : true , \"AsyncBufferSize\" : 16 }, \"Device\" :{ \"DataTransform\" : true , \"InitCmd\" : \"\" , \"InitCmdArgs\" : \"\" , \"MaxCmdOps\" : 128 , \"MaxCmdValueLen\" : 256 , \"RemoveCmd\" : \"\" , \"RemoveCmdArgs\" : \"\" , \"ProfilesDir\" : \"./res\" , \"UpdateLastConnected\" : false , \"Discovery\" :{ \"Enabled\" : false , \"Interval\" : \"\" }}, \"DeviceList\" :[{ \"Name\" : \"Random-Boolean-Device\" , \"Profile\" : \"Random-Boolean-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-bool-01\" , \"Port\" : \"300\" }}, \"AutoEvents\" :[{ \"frequency\" : \"10s\" , \"resource\" : 
\"Bool\" }]},{ \"Name\" : \"Random-Integer-Device\" , \"Profile\" : \"Random-Integer-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-int-01\" , \"Protocol\" : \"300\" }}, \"AutoEvents\" :[{ \"frequency\" : \"15s\" , \"resource\" : \"Int8\" },{ \"frequency\" : \"15s\" , \"resource\" : \"Int16\" },{ \"frequency\" : \"15s\" , \"resource\" : \"Int32\" },{ \"frequency\" : \"15s\" , \"resource\" : \"Int64\" }]},{ \"Name\" : \"Random-UnsignedInteger-Device\" , \"Profile\" : \"Random-UnsignedInteger-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-uint-01\" , \"Protocol\" : \"300\" }}, \"AutoEvents\" :[{ \"frequency\" : \"20s\" , \"resource\" : \"Uint8\" },{ \"frequency\" : \"20s\" , \"resource\" : \"Uint16\" },{ \"frequency\" : \"20s\" , \"resource\" : \"Uint32\" },{ \"frequency\" : \"20s\" , \"resource\" : \"Uint64\" }]},{ \"Name\" : \"Random-Float-Device\" , \"Profile\" : \"Random-Float-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-float-01\" , \"Protocol\" : \"300\" }}, \"AutoEvents\" :[{ \"frequency\" : \"30s\" , \"resource\" : \"Float32\" },{ \"frequency\" : \"30s\" , \"resource\" : \"Float64\" }]},{ \"Name\" : \"Random-Binary-Device\" , \"Profile\" : \"Random-Binary-Device\" , \"Description\" : \"Example of Device Virtual\" , \"Labels\" :[ \"device-virtual-example\" ], \"Protocols\" :{ \"other\" :{ \"Address\" : \"device-virtual-bool-01\" , \"Port\" : \"300\" }}, \"AutoEvents\" : null }], \"Driver\" :{}} One can also monitor the docker log messages of core-data on the local machine too see if it publishes the events to the bus: $ docker logs -f edgex-core-data level = INFO ts = 2020 -06-10T00:49:26.579819548Z app = 
edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:26.579909649Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 4dc57d03-178e-49f5-a799-67813db9d85b \" level = INFO ts = 2020 -06-10T00:49:27.107028244Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:27.107128916Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 2a0fd8fa-bb16-4d1a-ba1b-c5e70e1a1cec \" level = INFO ts = 2020 -06-10T00:49:27.376915392Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:27.377084206Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 76d288e2-a2e8-4ed4-9265-986661b71bbe \" level = INFO ts = 2020 -06-10T00:49:27.718042678Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:27.718125128Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: f5412a38-0346-4bd3-b9da-69498e4edb9a \" level = INFO ts = 2020 -06-10T00:49:30.49407257Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:30.494162219Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: da54fcc9-4771-4e0f-9eff-e0d2067eac7e \" level = INFO ts = 2020 -06-10T00:49:31.204976003Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:31.205211102Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. 
Topic: events, Correlation-id: 08574f61-6ea3-49cf-a776-028876de7957 \" level = INFO ts = 2020 -06-10T00:49:31.778242016Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:31.778342992Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: f1630f13-6fa7-45a6-b6f6-6bbde159b414 \" level = INFO ts = 2020 -06-10T00:49:34.747901983Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:34.748045382Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: cf14c573-60b9-43cd-b95b-2c6ffe26ba20 \" level = INFO ts = 2020 -06-10T00:49:34.944758331Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:34.9449585Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 292b9ca7-a640-4ac8-8650-866b7c4a6d15 \" level = INFO ts = 2020 -06-10T00:49:37.421202715Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:37.421367863Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: bb7a34b1-c65f-4820-91a3-162903ac1e7a \" level = INFO ts = 2020 -06-10T00:49:42.290660694Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:42.290756356Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. 
Topic: events, Correlation-id: 8fff92c0-ef69-4758-bf8a-3492fb48cef2 \" level = INFO ts = 2020 -06-10T00:49:42.559019764Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:42.559105855Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 12947a42-4669-4bff-8720-d0e9fbeef343 \" level = INFO ts = 2020 -06-10T00:49:44.922764379Z app = edgex-core-data source = event.go:284 msg = \"Putting event on message queue\" level = INFO ts = 2020 -06-10T00:49:44.922848184Z app = edgex-core-data source = event.go:302 msg = \"Event Published on message queue. Topic: events, Correlation-id: 3c07ce76-203a-4bf5-ab89-b99a1fbbb266 \" and also do the docker log messages of device-virtual container on the remote: vagrant@ubuntu-bionic:~/geneva$ docker logs -f edgex-device-virtual level = INFO ts = 2020 -06-10T00:51:52.602154238Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 3d86a699-c089-412d-94f3-af6cd9093f28 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:53.358352349Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 9612b186-98cb-4dc5-887a-195ce7300978 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:57.649085447Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = da682ffb-9120-4286-9f33-aa0a9f2c0489 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:57.86899148Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = afc1fccf-de8a-46ce-9849-82c5e4e5837e msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:59.543754189Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 80ac32a0-3a9a-4b07-bf3f-b26ec159dc40 msg = \"SendEvent: 
Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:59.688746606Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 21501030 -c07c-4ac4-a2d2-1243782cb4b8 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:51:59.853069376Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 3b2927db-e689-4fad-8d53-af6fe20239f8 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:00.055657757Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = a7698f2d-a115-4b46-af5f-3b8bf77e6ea4 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:04.460557145Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 602efd03-8e9d-441b-9a7d-45dbcb6b416f msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:07.696983268Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 88190186 -6f93-4c6a-a1f6-d6a20a6e79e4 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:08.040474761Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 73c60159-f50c-480b-90da-ebe310fa2f6e msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:08.2091048Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 2d799509-dc1d-4075-b193-1e5da24cfa77 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:12.751717832Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 7611a188-23f4-44d0-bd12-f6574535be8d msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:13.553351482Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 
a32067c8-adae-4778-b72d-0d8d7d11220f msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:15.20395683Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 41df0427-5998-4d1e-9c26-1f727912638b msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:15.686970839Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = c6a8bb2d-22ab-4932-bdd0-138f12f843b6 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:18.177810023Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = a49d663b-1676-4ecf-ba52-76e9ad7c501d msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:19.600220653Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = b6d2c2d1-5d5c-4f7a-9dd2-2067e732f018 msg = \"SendEvent: Pushed event to core data\" level = INFO ts = 2020 -06-10T00:52:19.990751025Z app = device-virtual source = utils.go:94 Content-Type = application/json correlation-id = 1db5dde3-bb6b-4600-abbb-d01b3042c329 msg = \"SendEvent: Pushed event to core data\" Test to get random integer value of the remote device-virtual random integer device from the local machine using curl command like this: jim@jim-NUC7i5DNHE:~/go/src/github.com/edgexfoundry/device-virtual-go$ curl -k http://localhost:59900/api/v2/device/name/Random-Integer-Device/Int8 { \"device\" : \"Random-Integer-Device\" , \"origin\" :1592432603445490720, \"readings\" : [{ \"origin\" :1592432603404127336, \"device\" : \"Random-Integer-Device\" , \"name\" : \"Int8\" , \"value\" : \"11\" , \"valueType\" : \"Int8\" }] , \"EncodedEvent\" :null }","title":"Test with the device-virtual APIs"},{"location":"security/Ch-SecretStore/","text":"Secret Store Introduction There are all kinds of secrets used within EdgeX Foundry micro services, such as tokens, passwords, certificates etc. 
The secret store serves as the central repository to keep these secrets. The developers of other EdgeX Foundry micro services utilize the secret store to create, store and retrieve secrets relevant to their corresponding micro services. Currently the EdgeX Foundry secret store is implemented with Vault , a HashiCorp open source software product. Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, database credentials, service credentials, or certificates. Vault provides a unified interface to any secret, while providing tight access control and multiple authentication mechanisms (token, LDAP, etc.). Additionally, Vault supports pluggable \"secrets engines\". EdgeX uses the Consul secrets engine to allow Vault to issue Consul access tokens to EdgeX microservices. In EdgeX, Vault's storage backend is the host file system. Start the Secret Store The EdgeX secret store is started by default when using the secure version of the Docker Compose scripts found at https://github.com/edgexfoundry/edgex-compose/tree/ireland . The command to start EdgeX with the secret store enabled is: git clone -b ireland https://github.com/edgexfoundry/edgex-compose make run or git clone -b ireland https://github.com/edgexfoundry/edgex-compose make run arm64 The EdgeX secret store is not started if EdgeX is started with security features disabled by appending no-secty to the previous commands. This disables all EdgeX security features, not just the API gateway. Documentation on how the EdgeX secret store is sequenced with respect to all of the other EdgeX services is covered in the Secure Bootstrapping of EdgeX Architecture Decision Record (ADR) . Using the Secret Store Preferred Approach The preferred approach for interacting with the EdgeX secret store is to use the SecretClient interface in go-mod-secrets .
Each EdgeX microservice has access to a StoreSecrets() method that allows setting of per-microservice secrets, and a GetSecrets() method to read them back. If manual \"super-user\" access to the EdgeX secret store is required, it is necessary to obtain a privileged access token, called the Vault root token. Obtaining the Vault Root Token For security reasons (the Vault production hardening guide recommends revocation of the root token), the Vault root token is revoked by default. EdgeX automatically manages the secrets required by the framework, and provides a programmatic interface for individual microservices to interact with their partition of the secret store. If global access to the secret store is required, it is necessary to obtain a copy of the Vault root token using the below recommended procedure. Note that following this procedure directly contradicts the Vault production hardening guide . Since the root token cannot be un-revoked, the framework must be started for the first time with root token revocation disabled. Shut down the entire framework and remove the Docker persistent volumes using make clean in edgex-compose or docker volume prune after stopping all the containers. Optionally remove /tmp/edgex as well to clean the shared secrets volume. Edit docker-compose.yml and add an environment variable override for SECRETSTORE_REVOKEROOTTOKENS secretstore-setup : environment : SECRETSTORE_REVOKEROOTTOKENS : \"false\" Start EdgeX using make run or some other mechanism. Reveal the contents of the resp-init.json file stored in a Docker volume. docker run --rm -ti -v edgex_vault-config:/vault/config:ro alpine:latest cat /vault/config/assets/resp-init.json Extract the root_token field value from the resulting JSON output. As an alternative to overriding SECRETSTORE_REVOKEROOTTOKENS from the beginning, it is possible to regenerate the root token from the Vault unseal keys in resp-init.json using Vault's documented procedure .
The EdgeX framework executes this process internally whenever it requires root token capability. Note that a token created in this manner will again be revoked the next time EdgeX is restarted if SECRETSTORE_REVOKEROOTTOKENS remains set to its default value: all root tokens are revoked every time the framework is started if SECRETSTORE_REVOKEROOTTOKENS is true . Using the Vault CLI Execute a shell session in the running Vault container: docker exec -it edgex-vault sh -l Login to Vault using Vault CLI and the gathered Root Token: edgex-vault:/# vault login s.ULr5bcjwy8S0I5g3h4xZ5uWa Success! You are now authenticated. The token information displayed below is already stored in the token helper. You do NOT need to run \"vault login\" again. Future Vault requests will automatically use this token. Key Value --- ----- token s.ULr5bcjwy8S0I5g3h4xZ5uWa token_accessor Kv5FUhT2XgN2lLu8XbVxJI0o token_duration \u221e token_renewable false token_policies [\"root\"] identity_policies [] policies [\"root\"] Perform an introspection lookup on the current token login. This proves the token works and is valid. edgex-vault:/# vault token lookup Key Value --- ----- accessor Kv5FUhT2XgN2lLu8XbVxJI0o creation_time 1623371879 creation_ttl 0s display_name root entity_id n/a expire_time explicit_max_ttl 0s id s.ULr5bcjwy8S0I5g3h4xZ5uWa meta num_uses 0 orphan true path auth/token/root policies [root] ttl 0s type service !!! Note: The Root Token is the only token that has no expiration enforcement rules (Time to Live TTL counter). 
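The interactive login above can also be driven non-interactively from the host. The sketch below only echoes the one-off command it would run; the token value is a hypothetical placeholder, and the edgex-vault container name follows the example compose file:

```shell
# Placeholder token; substitute the real root token gathered earlier
VAULT_TOKEN=s.example123

# Echo the non-interactive lookup command instead of executing it,
# to check the expansion before touching the running container
echo docker exec -e VAULT_TOKEN=$VAULT_TOKEN edgex-vault vault token lookup
```

Removing the echo performs the same token introspection as the interactive session, without opening a shell in the container.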
As an example, let's poke around and spy on the Redis database password: edgex-vault:/# vault list secret Keys ---- edgex/ edgex-vault:/# vault list secret/edgex Keys ---- app-rules-engine/ core-command/ core-data/ core-metadata/ device-rest/ device-virtual/ security-bootstrapper-redis/ support-notifications/ support-scheduler/ edgex-vault:/# vault list secret/edgex/core-data Keys ---- redisdb edgex-vault:/# vault read secret/edgex/core-data/redisdb Key Value --- ----- refresh_interval 168h password 9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3 username redis5 With the root token, it is possible to modify any Vault setting. See the Vault manual for available commands. Use the Vault REST API Vault also supports a REST API with functionality equivalent to the command line interface: The equivalent of the vault read secret/edgex/core-data/redisdb command looks like the following using the REST API: Displaying (GET) the redis credentials from Core Data's secret store: curl -s -H 'X-Vault-Token: s.ULr5bcjwy8S0I5g3h4xZ5uWa' http://localhost:8200/v1/secret/edgex/core-data/redisdb | python -m json.tool { \"request_id\": \"9d28ffe0-6b25-c0a8-e395-9fbc633f20cc\", \"lease_id\": \"\", \"renewable\": false, \"lease_duration\": 604800, \"data\": { \"password\": \"9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3\", \"username\": \"redis5\" }, \"wrap_info\": null, \"warnings\": null, \"auth\": null } See HashiCorp Vault API documentation for further details on syntax and usage ( https://www.vaultproject.io/api/ ). Using the Vault Web UI The Vault Web UI is not exposed via the API gateway. It must therefore be accessed via localhost or a network tunnel of some kind. Open a browser session on http://localhost:8200 and sign-in with the Root Token. 
Upper left corner of the current Vault UI session, the sign-out menu displaying the current token name: Select the Vault secret backend: Navigate the API Gateway (Kong) service X.509 TLS materials path (edgex/pki/tls/edgex-kong): The Vault UI also allows entering Vault CLI commands (see the 1st alternative above) using an embedded console: See also Some of the commands used in implementing security services have man-style documentation: security-file-token-provider - Generate Vault tokens for EdgeX services secrets-config - Utility for secrets management. secrets-config-proxy - \"proxy\" subcommand for managing proxy secrets.","title":"Secret Store"},{"location":"security/Ch-SecretStore/#secret-store","text":"","title":"Secret Store"},{"location":"security/Ch-SecretStore/#introduction","text":"There are all kinds of secrets used within EdgeX Foundry micro services, such as tokens, passwords, certificates etc. The secret store serves as the central repository to keep these secrets. The developers of other EdgeX Foundry micro services utilize the secret store to create, store and retrieve secrets relevant to their corresponding micro services. Currently the EdgeX Foundry secret store is implemented with Vault , a HashiCorp open source software product. Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, database credentials, service credentials, or certificates. Vault provides a unified interface to any secret, while providing tight access control and multiple authentication mechanisms (token, LDAP, etc.). Additionally, Vault supports pluggable \"secrets engines\". EdgeX uses the Consul secrets engine to allow Vault to issue Consul access tokens to EdgeX microservices.
In EdgeX, Vault's storage backend is the host file system.","title":"Introduction"},{"location":"security/Ch-SecretStore/#start-the-secret-store","text":"The EdgeX secret store is started by default when using the secure version of the Docker Compose scripts found at https://github.com/edgexfoundry/edgex-compose/tree/ireland . The command to start EdgeX with the secret store enabled is: git clone -b ireland https://github.com/edgexfoundry/edgex-compose make run or git clone -b ireland https://github.com/edgexfoundry/edgex-compose make run arm64 The EdgeX secret store is not started if EdgeX is started with security features disabled by appending no-secty to the previous commands. This disables all EdgeX security features, not just the API gateway. Documentation on how the EdgeX security store is sequenced with respect to all of the other EdgeX services is covered in the Secure Bootstrapping of EdgeX Architecture Decision Record (ADR) .","title":"Start the Secret Store"},{"location":"security/Ch-SecretStore/#using-the-secret-store","text":"","title":"Using the Secret Store"},{"location":"security/Ch-SecretStore/#preferred-approach","text":"The preferred approach for interacting with the EdgeX secret store is to use the SecretClient interface in go-mod-secrets . Each EdgeX microservice has access to a StoreSecrets() method that allows setting of per-microservice secrets, and a GetSecrets() method to read them back. If manual \"super-user\" access to the EdgeX secret store is required, it is necessary to obtain a privileged access token, called the Vault root token.","title":"Preferred Approach"},{"location":"security/Ch-SecretStore/#obtaining-the-vault-root-token","text":"For security reasons (the Vault production hardening guide recommends revocation of the root token), the Vault root token is revoked by default.
EdgeX automatically manages the secrets required by the framework, and provides a programmatic interface for individual microservices to interact with their partition of the secret store. If global access to the secret store is required, it is necessary to obtain a copy of the Vault root token using the recommended procedure below. Note that following this procedure directly contradicts the Vault production hardening guide . Since the root token cannot be un-revoked, the framework must be started for the first time with root token revocation disabled. Shut down the entire framework and remove the Docker persistent volumes using make clean in edgex-compose or docker volume prune after stopping all the containers. Optionally remove /tmp/edgex as well to clean the shared secrets volume. Edit docker-compose.yml and add an environment variable override for SECRETSTORE_REVOKEROOTTOKENS secretstore-setup : environment : SECRETSTORE_REVOKEROOTTOKENS : \"false\" Start EdgeX using make run or some other mechanism. Reveal the contents of the resp-init.json file stored in a Docker volume. docker run --rm -ti -v edgex_vault-config:/vault/config:ro alpine:latest cat /vault/config/assets/resp-init.json Extract the root_token field value from the resulting JSON output. As an alternative to overriding SECRETSTORE_REVOKEROOTTOKENS from the beginning, it is possible to regenerate the root token from the Vault unseal keys in resp-init.json using Vault's documented procedure . The EdgeX framework executes this process internally whenever it requires root token capability.
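The extraction step above (pulling the root_token field out of the revealed resp-init.json content) can be sketched in Python; the sample JSON below is only shaped like Vault's init response, and the token value is fake:

```python
import json

def extract_root_token(resp_init_json: str) -> str:
    """Pull the root_token field out of Vault's resp-init.json content."""
    data = json.loads(resp_init_json)
    return data["root_token"]

# Hypothetical sample shaped like Vault's init response (values are fake).
sample = '{"keys_base64": ["a2V5MQ=="], "root_token": "s.EXAMPLEtoken123"}'
print(extract_root_token(sample))  # s.EXAMPLEtoken123
```

In practice the JSON would come from the `docker run … cat /vault/config/assets/resp-init.json` command shown above; a tool like `jq -r .root_token` accomplishes the same thing on the command line.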
Note that a token created in this manner will again be revoked the next time EdgeX is restarted if SECRETSTORE_REVOKEROOTTOKENS remains set to its default value: all root tokens are revoked every time the framework is started if SECRETSTORE_REVOKEROOTTOKENS is true .","title":"Obtaining the Vault Root Token"},{"location":"security/Ch-SecretStore/#using-the-vault-cli","text":"Execute a shell session in the running Vault container: docker exec -it edgex-vault sh -l Login to Vault using Vault CLI and the gathered Root Token: edgex-vault:/# vault login s.ULr5bcjwy8S0I5g3h4xZ5uWa Success! You are now authenticated. The token information displayed below is already stored in the token helper. You do NOT need to run \"vault login\" again. Future Vault requests will automatically use this token. Key Value --- ----- token s.ULr5bcjwy8S0I5g3h4xZ5uWa token_accessor Kv5FUhT2XgN2lLu8XbVxJI0o token_duration \u221e token_renewable false token_policies [\"root\"] identity_policies [] policies [\"root\"] Perform an introspection lookup on the current token login. This proves the token works and is valid. edgex-vault:/# vault token lookup Key Value --- ----- accessor Kv5FUhT2XgN2lLu8XbVxJI0o creation_time 1623371879 creation_ttl 0s display_name root entity_id n/a expire_time explicit_max_ttl 0s id s.ULr5bcjwy8S0I5g3h4xZ5uWa meta num_uses 0 orphan true path auth/token/root policies [root] ttl 0s type service !!! Note: The Root Token is the only token that has no expiration enforcement rules (Time to Live TTL counter). 
As an example, let's poke around and spy on the Redis database password: edgex-vault:/# vault list secret Keys ---- edgex/ edgex-vault:/# vault list secret/edgex Keys ---- app-rules-engine/ core-command/ core-data/ core-metadata/ device-rest/ device-virtual/ security-bootstrapper-redis/ support-notifications/ support-scheduler/ edgex-vault:/# vault list secret/edgex/core-data Keys ---- redisdb edgex-vault:/# vault read secret/edgex/core-data/redisdb Key Value --- ----- refresh_interval 168h password 9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3 username redis5 With the root token, it is possible to modify any Vault setting. See the Vault manual for available commands.","title":"Using the Vault CLI"},{"location":"security/Ch-SecretStore/#use-the-vault-rest-api","text":"Vault also supports a REST API with functionality equivalent to the command line interface: The equivalent of the vault read secret/edgex/core-data/redisdb command looks like the following using the REST API: Displaying (GET) the redis credentials from Core Data's secret store: curl -s -H 'X-Vault-Token: s.ULr5bcjwy8S0I5g3h4xZ5uWa' http://localhost:8200/v1/secret/edgex/core-data/redisdb | python -m json.tool { \"request_id\": \"9d28ffe0-6b25-c0a8-e395-9fbc633f20cc\", \"lease_id\": \"\", \"renewable\": false, \"lease_duration\": 604800, \"data\": { \"password\": \"9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3\", \"username\": \"redis5\" }, \"wrap_info\": null, \"warnings\": null, \"auth\": null } See HashiCorp Vault API documentation for further details on syntax and usage ( https://www.vaultproject.io/api/ ).","title":"Use the Vault REST API"},{"location":"security/Ch-SecretStore/#using-the-vault-web-ui","text":"The Vault Web UI is not exposed via the API gateway. It must therefore be accessed via localhost or a network tunnel of some kind. Open a browser session on http://localhost:8200 and sign-in with the Root Token. 
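The JSON body returned by the REST read above can also be consumed programmatically. A minimal Python sketch using only the standard library (the sample body is trimmed from the documented curl output; the credentials are the documentation's example values, not real secrets):

```python
import json

def parse_vault_secret(body: str) -> dict:
    """Extract the secret key/value pairs from a Vault KV v1 read response body."""
    return json.loads(body)["data"]

# Sample response body trimmed from the curl example in the documentation.
body = '''{
  "request_id": "9d28ffe0-6b25-c0a8-e395-9fbc633f20cc",
  "lease_duration": 604800,
  "data": {"password": "9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3",
           "username": "redis5"}
}'''
creds = parse_vault_secret(body)
print(creds["username"])  # redis5
```

The actual HTTP request would carry the same X-Vault-Token header shown in the curl example; only the response parsing is sketched here.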
Upper left corner of the current Vault UI session, the sign-out menu displaying the current token name: Select the Vault secret backend: Navigate the API Gateway (Kong) service X.509 TLS materials path (edgex/pki/tls/edgex-kong): The Vault UI also allows entering Vault CLI commands (see above 1st alternative ) using an embedded console:","title":"Using the Vault Web UI"},{"location":"security/Ch-SecretStore/#see-also","text":"Some of the commands used in implementing security services have man-style documentation: security-file-token-provider - Generate Vault tokens for EdgeX services secrets-config - Utility for secrets management. secrets-config-proxy - \"proxy\" subcommand for managing proxy secrets.","title":"See also"},{"location":"security/Ch-Secure-Consul/","text":"Secure Consul EdgeX 2.0 Secure Consul is new in EdgeX 2.0 Introduction In the current EdgeX architecture, Consul is pre-wired as the default agent service for Service Configuration , Service Registry , and Service Health Check purposes. Prior to EdgeX's Ireland release, the communication to Consul used plain HTTP calls without any access control (ACL) token header and thus was insecure. With the Ireland release, that situation is now improved by adding the required ACL token header X-Consul-Token to all HTTP calls. Moreover, Consul itself is now bootstrapped and started with its ACL system enabled and thus provides better authentication and authorization security features for services. In other words, with the required Consul ACL token for accessing Consul, assets inside Consul like EdgeX's configuration items in the Key-Value (KV) store are now better protected.
In this documentation, we will highlight some major features incorporated into the EdgeX framework for Securing Consul , including how the Consul token is generated via the integration of the secret store management system Vault with Consul via Vault's Consul Secrets Engine APIs. A brief overview is also given on how the Consul token is governed by Vault using the Consul ACL policy associated with a Vault role for that token. Finally, EdgeX provides an easy way to get the Consul token from edgex-compose 's compose-builder utility for a better developer experience. Consul access token with Vault integration To avoid maintaining another token generation system, we utilize Vault's Consul Secrets Engine APIs, governed by Vault itself, and integrated with Consul. The Consul service itself provides an ACL system, enabled via Consul's configuration settings like: acl = { enabled = true default_policy = \"deny\" enable_token_persistence = true } and this is set as part of the EdgeX security-bootstrapper service's process. Note that the default ACL policy is set to \"deny\" so that anything not listed in the ACL list is denied access by default. The flag enable_token_persistence is related to the persistence of Consul's agent token and is set to true so as to re-use the same agent token when the EdgeX system restarts again. During the process of Consul bootstrapping, the first main step of security-bootstrapper for Consul is to bootstrap Consul's ACL system with Consul's API endpoint /acl/bootstrap . Once Consul's ACL is successfully bootstrapped, security-bootstrapper stores Consul's ACL bootstrap token in the pre-configured folder under /tmp/edgex/secrets/consul-acl-token . As part of the security-bootstrapper process for Consul, the Consul service's agent token is also set via Consul's sub-command: consul acl set-agent-token agent or Consul's HTTP API endpoint /agent/token/ using Consul's ACL bootstrap token for authentication.
This agent token provides the identity for the Consul service itself and access control for any agent-based API calls from clients, and thus provides better security. The security-bootstrapper service also uses Consul's bootstrap token to generate Vault roles based on the Consul Secrets Engine API /consul/role/ for all internal default EdgeX services and add-on services via the environment variable ADD_REGISTRY_ACL_ROLES . Please see more details and some examples in the Configuring Add-on Service documentation section for how to configure add-on services' ACL roles. security-bootstrapper then automatically associates Consul's ACL policy rules with the provided ACL role so that the Consul token is created with those ACL rules, and access controls are hence enforced by Consul when the service is communicating with it. Note that the Consul token is generated via Vault's /consul/creds/ API with Vault's secretstore token and hence the generated Consul token inherits the time-restriction nature of the Vault system itself. Thus the Consul token will be revoked by Vault if the Vault token used to generate it expires or is revoked. Currently in EdgeX we utilize the auto-renewal feature of Vault tokens implemented in go-mod-secrets to keep the Consul token alive so that it does not expire. How to get Consul ACL token Consul's access token can be obtained from the compose-builder of the edgex-compose repository via the command make get-consul-acl-token . One example of this will be like: $ make get-consul-acl-token ef4a0580-d200-32bf-17ba-ba78e3a546e7 This output token is Consul's ACL bootstrap token and thus one can use it to log in and access Consul service's features from Consul's GUI on http://localhost:8500/ui.
From the upper right-hand corner of Consul's GUI or the \"Log in\" button in the center, one can log in with the obtained Consul token in order to access Consul's GUI features: If the end user wants to access Consul from the command line: since Consul now runs in ACL-enabled mode by default, any API call to Consul's endpoints requires the access token, which must be passed in the X-Consul-Token header of HTTP calls. One example using the curl command with a Consul access token to update the local Consul KV store is given as follows: curl -v -H \"X-Consul-Token:8775c1db-9340-d07b-ac95-bc6a1fa5fe57\" -X PUT --data 'TestKey=\"My key values\"' \\ http://localhost:8500/v1/kv/my-test-key where the Consul access token is passed in the X-Consul-Token header, assuming it has write permission for accessing and updating data in Consul's KV store.","title":"Secure Consul"},{"location":"security/Ch-Secure-Consul/#secure-consul","text":"EdgeX 2.0 Secure Consul is new in EdgeX 2.0","title":"Secure Consul"},{"location":"security/Ch-Secure-Consul/#introduction","text":"In the current EdgeX architecture, Consul is pre-wired as the default agent service for Service Configuration , Service Registry , and Service Health Check purposes. Prior to EdgeX's Ireland release, the communication to Consul used plain HTTP calls without any access control (ACL) token header and thus was insecure. With the Ireland release, that situation is now improved by adding the required ACL token header X-Consul-Token to all HTTP calls. Moreover, Consul itself is now bootstrapped and started with its ACL system enabled and thus provides better authentication and authorization security features for services. In other words, with the required Consul ACL token for accessing Consul, assets inside Consul like EdgeX's configuration items in the Key-Value (KV) store are now better protected.
In this documentation, we will highlight some major features incorporated into the EdgeX framework for Securing Consul , including how the Consul token is generated via the integration of the secret store management system Vault with Consul via Vault's Consul Secrets Engine APIs. A brief overview is also given on how the Consul token is governed by Vault using the Consul ACL policy associated with a Vault role for that token. Finally, EdgeX provides an easy way to get the Consul token from edgex-compose 's compose-builder utility for a better developer experience.","title":"Introduction"},{"location":"security/Ch-Secure-Consul/#consul-access-token-with-vault-integration","text":"To avoid maintaining another token generation system, we utilize Vault's Consul Secrets Engine APIs, governed by Vault itself, and integrated with Consul. The Consul service itself provides an ACL system, enabled via Consul's configuration settings like: acl = { enabled = true default_policy = \"deny\" enable_token_persistence = true } and this is set as part of the EdgeX security-bootstrapper service's process. Note that the default ACL policy is set to \"deny\" so that anything not listed in the ACL list is denied access by default. The flag enable_token_persistence is related to the persistence of Consul's agent token and is set to true so as to re-use the same agent token when the EdgeX system restarts again. During the process of Consul bootstrapping, the first main step of security-bootstrapper for Consul is to bootstrap Consul's ACL system with Consul's API endpoint /acl/bootstrap . Once Consul's ACL is successfully bootstrapped, security-bootstrapper stores Consul's ACL bootstrap token in the pre-configured folder under /tmp/edgex/secrets/consul-acl-token .
As part of the security-bootstrapper process for Consul, the Consul service's agent token is also set via Consul's sub-command: consul acl set-agent-token agent or Consul's HTTP API endpoint /agent/token/ using Consul's ACL bootstrap token for authentication. This agent token provides the identity for the Consul service itself and access control for any agent-based API calls from clients, and thus provides better security. The security-bootstrapper service also uses Consul's bootstrap token to generate Vault roles based on the Consul Secrets Engine API /consul/role/ for all internal default EdgeX services and add-on services via the environment variable ADD_REGISTRY_ACL_ROLES . Please see more details and some examples in the Configuring Add-on Service documentation section for how to configure add-on services' ACL roles. security-bootstrapper then automatically associates Consul's ACL policy rules with the provided ACL role so that the Consul token is created with those ACL rules, and access controls are hence enforced by Consul when the service is communicating with it. Note that the Consul token is generated via Vault's /consul/creds/ API with Vault's secretstore token and hence the generated Consul token inherits the time-restriction nature of the Vault system itself. Thus the Consul token will be revoked by Vault if the Vault token used to generate it expires or is revoked. Currently in EdgeX we utilize the auto-renewal feature of Vault tokens implemented in go-mod-secrets to keep the Consul token alive so that it does not expire.","title":"Consul access token with Vault integration"},{"location":"security/Ch-Secure-Consul/#how-to-get-consul-acl-token","text":"Consul's access token can be obtained from the compose-builder of the edgex-compose repository via the command make get-consul-acl-token .
One example of this will be like: $ make get-consul-acl-token ef4a0580-d200-32bf-17ba-ba78e3a546e7 This output token is Consul's ACL bootstrap token and thus one can use it to log in and access Consul service's features from Consul's GUI on http://localhost:8500/ui. From the upper right-hand corner of Consul's GUI or the \"Log in\" button in the center, one can log in with the obtained Consul token in order to access Consul's GUI features: If the end user wants to access Consul from the command line: since Consul now runs in ACL-enabled mode by default, any API call to Consul's endpoints requires the access token, which must be passed in the X-Consul-Token header of HTTP calls. One example using the curl command with a Consul access token to update the local Consul KV store is given as follows: curl -v -H \"X-Consul-Token:8775c1db-9340-d07b-ac95-bc6a1fa5fe57\" -X PUT --data 'TestKey=\"My key values\"' \\ http://localhost:8500/v1/kv/my-test-key where the Consul access token is passed in the X-Consul-Token header, assuming it has write permission for accessing and updating data in Consul's KV store.","title":"How to get Consul ACL token"},{"location":"security/Ch-Secure-MessageBus/","text":"Secure MessageBus EdgeX 2.0 Starting with the Ireland release (2.0.0) the default MessageBus implementation used is Redis Pub/Sub , which replaced the Redis Streams implementation. Redis Pub/Sub utilizes the existing Redis database service so that no additional broker service is required. When running in secure mode the Redis database service is secured with a username/password. This in turn creates a Secure MessageBus . All the default services (Core Data, App Service Rules, Device Virtual, eKuiper, etc.) that utilize the MessageBus are configured out of the box to connect securely.
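When a value written with the curl example above is read back (a GET on /v1/kv/my-test-key with the same X-Consul-Token header), Consul returns the stored value base64-encoded inside a JSON array. A small Python sketch of decoding such a response (the response body below is constructed for illustration, not captured from a live Consul instance):

```python
import base64
import json

def decode_consul_kv(body: str) -> dict:
    """Map each key in a Consul KV read response to its decoded value."""
    return {
        entry["Key"]: base64.b64decode(entry["Value"]).decode()
        for entry in json.loads(body)
    }

# Illustrative response for the key written in the curl example above.
body = json.dumps([{
    "Key": "my-test-key",
    "Value": base64.b64encode(b'TestKey="My key values"').decode(),
}])
print(decode_consul_kv(body))  # {'my-test-key': 'TestKey="My key values"'}
```

The base64 wrapping is Consul's documented KV API behavior; it allows arbitrary binary values to travel safely inside JSON.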
Additional add-on services that require Secure MessageBus access (App and/or Device services) need to follow the steps outlined in the Configuring Add-On Services for Security section. Note Secure MQTT MessageBus capability does not exist . This will be a future enhancement.","title":"Secure MessageBus"},{"location":"security/Ch-Secure-MessageBus/#secure-messagebus","text":"EdgeX 2.0 Starting with the Ireland release (2.0.0) the default MessageBus implementation used is Redis Pub/Sub , which replaced the Redis Streams implementation. Redis Pub/Sub utilizes the existing Redis database service so that no additional broker service is required. When running in secure mode the Redis database service is secured with a username/password. This in turn creates a Secure MessageBus . All the default services (Core Data, App Service Rules, Device Virtual, eKuiper, etc.) that utilize the MessageBus are configured out of the box to connect securely. Additional add-on services that require Secure MessageBus access (App and/or Device services) need to follow the steps outlined in the Configuring Add-On Services for Security section. Note Secure MQTT MessageBus capability does not exist . This will be a future enhancement.","title":"Secure MessageBus"},{"location":"security/Ch-Security/","text":"Security Security elements, both inside and outside of EdgeX Foundry, protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Based on the fact that EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules. With security services enabled, the administrator of EdgeX is able to initialize the security components, set up the running environment for security services, manage user access control, and create JWTs (JSON Web Tokens) for resource access by other EdgeX business services.
There are two major EdgeX security components. The first is a secret store, which is used to provide a safe place to keep the EdgeX secrets. The second is an API gateway, which is used as a reverse proxy to restrict access to EdgeX REST resources and perform access-control-related work. In summary, the current features are as below: Secret creation, store and retrieve (password, cert, access key etc.) API gateway for other existing EdgeX microservice REST APIs User account creation with optional OAuth2 or JWT authentication User account with arbitrary Access Control List groups (ACL)","title":"Security"},{"location":"security/Ch-Security/#security","text":"Security elements, both inside and outside of EdgeX Foundry, protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Based on the fact that EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules. With security services enabled, the administrator of EdgeX is able to initialize the security components, set up the running environment for security services, manage user access control, and create JWTs (JSON Web Tokens) for resource access by other EdgeX business services. There are two major EdgeX security components. The first is a secret store, which is used to provide a safe place to keep the EdgeX secrets. The second is an API gateway, which is used as a reverse proxy to restrict access to EdgeX REST resources and perform access-control-related work. In summary, the current features are as below: Secret creation, store and retrieve (password, cert, access key etc.)
API gateway for other existing EdgeX microservice REST APIs User account creation with optional OAuth2 or JWT authentication User account with arbitrary Access Control List groups (ACL)","title":"Security"},{"location":"security/Ch-SecurityIssues/","text":"Reporting Security Issues This page describes how to report EdgeX Foundry security issues and how they are handled. Security Announcements Join the edgexfoundry-announce group at: https://groups.google.com/d/forum/edgexfoundry-announce for emails about security and major API announcements. Vulnerability Reporting The EdgeX Foundry Open Source Community is grateful for all security reports made by users and security researchers. All reports are thoroughly investigated by a set of community volunteers. To make a report, please email the private list: security-issues@edgexfoundry.org , providing as much detail as possible. Use the security issue template: security_issue_template . At this time we do not yet offer an encrypted bug reporting option. When to Report a Vulnerability? You think you discovered a potential security vulnerability in EdgeX Foundry You are unsure how a vulnerability affects EdgeX Foundry You think you discovered a vulnerability in another project that EdgeX Foundry depends upon (e.g. Docker, MongoDB, Redis, etc.) When NOT to Report a Vulnerability? You need help tuning EdgeX Foundry components for security You need help applying security related updates Your issue is not security related Security Vulnerability Response Each report is acknowledged and analyzed by the Security Issue Review (SIR) team within one week. Any vulnerability information shared with SIR stays private, and is shared with sub-projects as necessary to get the issue fixed. As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated.
In the case of security issues related to 3rd-party dependencies (code or libraries not managed and maintained by the EdgeX community), while the issue report triggers the same response workflow, the EdgeX community will defer to the owning community for fixes. On receipt of a security issue report, SIR: Discusses the issue privately to understand it Uses the Common Vulnerability Scoring System to grade the issue Determines the sub-projects and developers to involve Develops a fix In conjunction with the product group determines when to release the fix Communicates the fix Uploads a Common Vulnerabilities and Exposures (CVE) style report of the issue and associated threat The issue reporter will be kept in the loop as appropriate. Note that a critical or high severity issue can delay a scheduled release to incorporate a fix or mitigation. Public Disclosure Timing A public disclosure date is negotiated by the EdgeX Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible AFTER a mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure may be immediate (especially publicly known issues) to a few weeks.
The EdgeX Foundry Product Security Committee holds the final say when setting a disclosure date.","title":"Reporting Security Issues"},{"location":"security/Ch-SecurityIssues/#reporting-security-issues","text":"This page describes how to report EdgeX Foundry security issues and how they are handled.","title":"Reporting Security Issues"},{"location":"security/Ch-SecurityIssues/#security-announcements","text":"Join the edgexfoundry-announce group at: https://groups.google.com/d/forum/edgexfoundry-announce for emails about security and major API announcements.","title":"Security Announcements"},{"location":"security/Ch-SecurityIssues/#vulnerability-reporting","text":"The EdgeX Foundry Open Source Community is grateful for all security reports made by users and security researchers. All reports are thoroughly investigated by a set of community volunteers. To make a report, please email the private list: security-issues@edgexfoundry.org , providing as much detail as possible. Use the security issue template: security_issue_template . At this time we do not yet offer an encrypted bug reporting option.","title":"Vulnerability Reporting"},{"location":"security/Ch-SecurityIssues/#when-to-report-a-vulnerability","text":"You think you discovered a potential security vulnerability in EdgeX Foundry You are unsure how a vulnerability affects EdgeX Foundry You think you discovered a vulnerability in another project that EdgeX Foundry depends upon (e.g. Docker, MongoDB, Redis, etc.)","title":"When to Report a Vulnerability?"},{"location":"security/Ch-SecurityIssues/#when-not-to-report-a-vulnerability","text":"You need help tuning EdgeX Foundry components for security You need help applying security related updates Your issue is not security related","title":"When NOT to Report a Vulnerability?"},{"location":"security/Ch-SecurityIssues/#security-vulnerability-response","text":"Each report is acknowledged and analyzed by the Security Issue Review (SIR) team within one week.
Any vulnerability information shared with SIR stays private, and is shared with sub-projects as necessary to get the issue fixed. As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated. In the case of security issues related to 3rd-party dependencies (code or libraries not managed and maintained by the EdgeX community), while the issue report triggers the same response workflow, the EdgeX community will defer to the owning community for fixes. On receipt of a security issue report, SIR: Discusses the issue privately to understand it Uses the Common Vulnerability Scoring System to grade the issue Determines the sub-projects and developers to involve Develops a fix In conjunction with the product group determines when to release the fix Communicates the fix Uploads a Common Vulnerabilities and Exposures (CVE) style report of the issue and associated threat The issue reporter will be kept in the loop as appropriate. Note that a critical or high severity issue can delay a scheduled release to incorporate a fix or mitigation.","title":"Security Vulnerability Response"},{"location":"security/Ch-SecurityIssues/#public-disclosure-timing","text":"A public disclosure date is negotiated by the EdgeX Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible AFTER a mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure may be immediate (especially publicly known issues) to a few weeks. The EdgeX Foundry Product Security Committee holds the final say when setting a disclosure date.","title":"Public Disclosure Timing"},{"location":"security/SeedingServiceSecrets/","text":"Seeding Service Secrets EdgeX 2.1 New for EdgeX 2.1 is the ability to seed service specific secrets during the service's start-up.
All EdgeX services now have the capability to specify a JSON file that contains the service's secrets which are seeded into the service's SecretStore during service start-up. This allows the secrets to be present in the service's SecretStore when the service needs to use them. Note The service must already have a SecretStore configured. This is done by default for the Core/Support services. See the Configure the service's Secret Store section for details for add-on App and Device services Secrets File The new SecretsFile setting on the SecretStore configuration allows the service to specify the fully-qualified path to the location of the service's secrets file. Normally this setting is left blank when a service has no secrets to be seeded. Example - Setting SecretsFile in TOML [SecretStore] Type = \"vault\" ... SecretsFile = \"/tmp/my-service/secrets.json\" DisableScrubSecretsFile = false ... This setting can also be overridden with the SECRETSTORE_SECRETSFILE environment variable. When EdgeX is deployed using Docker/docker-compose the setting can be overridden in the docker-compose file and the file can be volume mounted into the service's container. Example - Setting SecretsFile via environment override environment : SECRETSTORE_SECRETSFILE : \"/tmp/my-service/secrets.json\" ... volumes : - /tmp/my-service/secrets.json:/tmp/my-service/secrets.json During service start-up, after SecretStore initialization, the service's secrets JSON file is read, validated, and the secrets stored into the service's SecretStore . The file is then scrubbed of the secret data, i.e. rewritten without the sensitive secret data that was successfully stored.
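The scrubbing behavior described above (rewriting the file with imported set to true and secretData emptied) can be sketched as a simple JSON transformation. This is an illustrative re-implementation for understanding, not the framework's actual code:

```python
import json

def scrub_secrets_file(contents: str) -> str:
    """Rewrite a seeded secrets file so the secrets no longer appear on disk."""
    doc = json.loads(contents)
    for secret in doc["secrets"]:
        secret["imported"] = True   # marks the secret as already seeded
        secret["secretData"] = []   # drops the sensitive key/value pairs
    return json.dumps(doc, indent=2)

# Single-entry version of the documented "Initial service secrets JSON" example.
original = json.dumps({"secrets": [{
    "path": "credentials001",
    "imported": False,
    "secretData": [{"key": "username", "value": "my-user-1"},
                   {"key": "password", "value": "password-001"}],
}]})
print(scrub_secrets_file(original))
```

The output matches the shape of the documented "Re-written service secrets JSON" example: the path survives, imported flips to true, and secretData is emptied.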
See Disable Scrubbing section below for details on disabling the scrubbing of the secret data Example - Initial service secrets JSON { \"secrets\" : [ { \"path\" : \"credentials001\" , \"imported\" : false , \"secretData\" : [ { \"key\" : \"username\" , \"value\" : \"my-user-1\" }, { \"key\" : \"password\" , \"value\" : \"password-001\" } ] }, { \"path\" : \"credentials002\" , \"imported\" : false , \"secretData\" : [ { \"key\" : \"username\" , \"value\" : \"my-user-2\" }, { \"key\" : \"password\" , \"value\" : \"password-002\" } ] } ] } Example - Re-written service secrets JSON after seeding complete { \"secrets\" : [ { \"path\" : \"credentials001\" , \"imported\" : true , \"secretData\" : [] }, { \"path\" : \"credentials002\" , \"imported\" : true , \"secretData\" : [] } ] } The secrets marked with imported=true are ignored the next time the service starts up since they are already in the service's SecretStore . If the Secret Store service's persistence is cleared, the original version of the service's secrets file will need to be provided for the next time the service starts up. Note The secrets file must have write permissions for the file to be scrubbed of the secret data. If not, the service will fail to start up with an error re-writing the file. Disable Scrubbing Scrubbing of the secret data can be disabled by setting SecretStore.DisableScrubSecretsFile to true . This can be done in the configuration.toml file or by using the SECRETSTORE_DISABLESCRUBSECRETSFILE environment variable override. Example - Set DisableScrubSecretsFile in TOML [SecretStore] Type = \"vault\" ... SecretsFile = \"/tmp/my-service/secrets.json\" DisableScrubSecretsFile = true ... 
Example - Set DisableScrubSecretsFile via environment variable environment : SECRETSTORE_DISABLESCRUBSECRETSFILE : \"true\"","title":"Seeding Service Secrets"},{"location":"security/SeedingServiceSecrets/#seeding-service-secrets","text":"EdgeX 2.1 New for EdgeX 2.1 is the ability to seed service specific secrets during the service's start-up. All EdgeX services now have the capability to specify a JSON file that contains the service's secrets which are seeded into the service's SecretStore during service start-up. This allows the secrets to be present in the service's SecretStore when the service needs to use them. Note The service must already have a SecretStore configured. This is done by default for the Core/Support services. See Configure the service's Secret Store section for details for add-on App and Device services","title":"Seeding Service Secrets"},{"location":"security/SeedingServiceSecrets/#secrets-file","text":"The new SecretsFile setting on the SecretStore configuration allows the service to specify the fully-qualified path to the location of the service's secrets file. Normally this setting is left blank when a service has no secrets to be seeded. Example - Setting SecretsFile in TOML [SecretStore] Type = \"vault\" ... SecretsFile = \"/tmp/my-service/secrets.json\" DisableScrubSecretsFile = false ... This setting can also be overridden with the SECRETSTORE_SECRETSFILE environment variable. When EdgeX is deployed using Docker/docker-compose the setting can be overridden in the docker-compose file and the file can be volume mounted into the service's container. Example - Setting SecretsFile via environment override environment : SECRETSTORE_SECRETSFILE : \"/tmp/my-service/secrets.json\" ... volumes : - /tmp/my-service/secrets.json:/tmp/my-service/secrets.json During service start-up, after SecretStore initialization, the service's secrets JSON file is read, validated, and the secrets stored into the service's SecretStore . 
The file is then scrubbed of the secret data, i.e. rewritten without the sensitive secret data that was successfully stored. See Disable Scrubbing section below for details on disabling the scrubbing of the secret data Example - Initial service secrets JSON { \"secrets\" : [ { \"path\" : \"credentials001\" , \"imported\" : false , \"secretData\" : [ { \"key\" : \"username\" , \"value\" : \"my-user-1\" }, { \"key\" : \"password\" , \"value\" : \"password-001\" } ] }, { \"path\" : \"credentials002\" , \"imported\" : false , \"secretData\" : [ { \"key\" : \"username\" , \"value\" : \"my-user-2\" }, { \"key\" : \"password\" , \"value\" : \"password-002\" } ] } ] } Example - Re-written service secrets JSON after seeding complete { \"secrets\" : [ { \"path\" : \"credentials001\" , \"imported\" : true , \"secretData\" : [] }, { \"path\" : \"credentials002\" , \"imported\" : true , \"secretData\" : [] } ] } The secrets marked with imported=true are ignored the next time the service starts up since they are already in the service's SecretStore . If the Secret Store service's persistence is cleared, the original version of the service's secrets file will need to be provided for the next time the service starts up. Note The secrets file must have write permissions for the file to be scrubbed of the secret data. If not, the service will fail to start up with an error re-writing the file.","title":"Secrets File"},{"location":"security/SeedingServiceSecrets/#disable-scrubbing","text":"Scrubbing of the secret data can be disabled by setting SecretStore.DisableScrubSecretsFile to true . This can be done in the configuration.toml file or by using the SECRETSTORE_DISABLESCRUBSECRETSFILE environment variable override. Example - Set DisableScrubSecretsFile in TOML [SecretStore] Type = \"vault\" ... SecretsFile = \"/tmp/my-service/secrets.json\" DisableScrubSecretsFile = true ... 
Example - Set DisableScrubSecretsFile via environment variable environment : SECRETSTORE_DISABLESCRUBSECRETSFILE : \"true\"","title":"Disable Scrubbing"},{"location":"security/secrets-config-proxy/","text":"% secrets-config-proxy(1) User Manuals secrets-config-proxy(1) NAME secrets-config-proxy \u2013 Configure EdgeX API gateway service SYNOPSIS secrets-config proxy SUBCOMMAND [OPTIONS] DESCRIPTION Configures the EdgeX API gateway service. This command is used to configure the TLS certificate for external connections, create authentication tokens for inbound proxy access, and other related utility functions. Proxy configuration commands (listed below) require access to the secret store master key in order to generate temporary secret store access credentials. OPTIONS --confdir /path/to/directory/with/configuration.toml (optional) Points to directory containing a configuration.toml file. SUBCOMMANDS tls Configure inbound TLS certificate. This command will provision the TLS secrets into the secret store and re-deploy them to Kong. Requires additional arguments: --incert /path/to/certchain (required) Path to TLS leaf certificate (PEM-encoded x.509) (the file extension is arbitrary). If intermediate certificates are required to chain to a certificate authority, these should also be included. The root certificate authority should not be included. --inkey /path/to/private_key (required) Path to TLS private key (PEM-encoded). --snis comma_separated_list_for_server_names (optional) A comma-separated list of extra server DNS names in addition to the built-in server name indications. The built-in names are \"localhost,kong\". These names will be associated with the user-provided certificate for Kong's TLS to use. Based on the specification RFC4366 : \"Currently, the only server names supported are DNS hostnames\", so the IP address-based input is not allowed. adduser Create an API gateway user using the specified token type. 
Requires additional arguments: --token-type jwt (required) Create a user using the JWT authentication plugin. This value must match the configured authentication plugin ( KongAuth.Name in security-proxy-setup's configuration.toml ). --user username (required) Username of the user to add. --group group (optional) Group to which the user belongs, defaults to \"admin\". This should be the group associated with the route ACL ( KongAuth.WhiteList in security-proxy-setup's configuration.toml ). (Note that secrets-config shares the same configuration as security-proxy-setup as they both configure the EdgeX API gateway.) The following options are used when token-type == \"jwt\": --algorithm RS256 | ES256 (required for JWT method) Algorithm used for signing the JWT. (See RFC 7518 for a list of signing algorithms.) --public_key /path/to/public_key (required for JWT tokens) Public key (in PEM format) used to validate the JWT. (Not an x.509 certificate.) This key is assumed to have been pre-created using some external mechanism such as a TPM, HSM, openssl, or other method. --id key (optional) Optional user-specified \"key\" used for linkage with an incoming JWT via Kong's config.key_claim_name setting (defaults to \"iss\" field). See Kong documentation for JWT plugin for an example of how this parameter is used. Upon completion, for token-type == \"jwt\", the command outputs the autogenerated key for the --id option above. This value must be used during later construction of the JWT. deluser Delete an API gateway user. Requires additional arguments: --user username (required) Username of the user to delete. jwt Utility function to create a JWT proxy authentication token from a supplied secret. This command does not require secret store access, but the values supplied must match those presented to the adduser command earlier. Requires additional arguments: --algorithm RS256 | ES256 (required) Algorithm used for signing the JWT. 
(See RFC 7518 for a list of signing algorithms.) --id key (required) The \"key\" field from the \"adduser\" command. (This will be either the --id argument passed in, or the automatically generated identifier.) (This is not actually a cryptographic key, but a unique identifier such as would be used in a database.) --private_key /path/to/private.key (required) Private key used to sign the JWT (PEM-encoded) with a key type corresponding to the above-supplied algorithm. --exp duration (optional) Duration of generated JWT expressed as a golang-parseable duration value. Use \"never\" to omit an expiration field in the JWT. Defaults to \"1h\" (one hour) if unspecified. The generated JWT will be the encoded representation of: { \"typ\": \"JWT\", \"alg\": \"RS256 | ES256\" } { \"iss\": \" key \", \"exp\": (calculated expiration time) } (signature) CONFIGURATION ENVIRONMENT IKM_HOOK Enables decryption of an encrypted secret store master key by pointing at an executable that returns an encryption seed that is formatted as a hex-encoded (typically 32-byte) string to its stdout. This optional feature, if enabled, requires pointing at the same executable that was used by security-secretstore-setup to provision and unlock the EdgeX secret store. SEE ALSO secrets-config(1) EdgeX Foundry Last change: 2020","title":"Secrets config proxy"},{"location":"security/secrets-config-proxy/#name","text":"secrets-config-proxy \u2013 Configure EdgeX API gateway service","title":"NAME"},{"location":"security/secrets-config-proxy/#synopsis","text":"secrets-config proxy SUBCOMMAND [OPTIONS]","title":"SYNOPSIS"},{"location":"security/secrets-config-proxy/#description","text":"Configures the EdgeX API gateway service. This command is used to configure the TLS certificate for external connections, create authentication tokens for inbound proxy access, and other related utility functions. 
Proxy configuration commands (listed below) require access to the secret store master key in order to generate temporary secret store access credentials.","title":"DESCRIPTION"},{"location":"security/secrets-config-proxy/#options","text":"--confdir /path/to/directory/with/configuration.toml (optional) Points to directory containing a configuration.toml file.","title":"OPTIONS"},{"location":"security/secrets-config-proxy/#subcommands","text":"tls Configure inbound TLS certificate. This command will provision the TLS secrets into the secret store and re-deploy them to Kong. Requires additional arguments: --incert /path/to/certchain (required) Path to TLS leaf certificate (PEM-encoded x.509) (the file extension is arbitrary). If intermediate certificates are required to chain to a certificate authority, these should also be included. The root certificate authority should not be included. --inkey /path/to/private_key (required) Path to TLS private key (PEM-encoded). --snis comma_separated_list_for_server_names (optional) A comma-separated list of extra server DNS names in addition to the built-in server name indications. The built-in names are \"localhost,kong\". These names will be associated with the user-provided certificate for Kong's TLS to use. Based on the specification RFC4366 : \"Currently, the only server names supported are DNS hostnames\", so the IP address-based input is not allowed. adduser Create an API gateway user using the specified token type. Requires additional arguments: --token-type jwt (required) Create a user using the JWT authentication plugin. This value must match the configured authentication plugin ( KongAuth.Name in security-proxy-setup's configuration.toml ). --user username (required) Username of the user to add. --group group (optional) Group to which the user belongs, defaults to \"admin\". This should be the group associated with the route ACL ( KongAuth.WhiteList in security-proxy-setup's configuration.toml ). 
(Note that secrets-config shares the same configuration as security-proxy-setup as they both configure the EdgeX API gateway.) The following options are used when token-type == \"jwt\": --algorithm RS256 | ES256 (required for JWT method) Algorithm used for signing the JWT. (See RFC 7518 for a list of signing algorithms.) --public_key /path/to/public_key (required for JWT tokens) Public key (in PEM format) used to validate the JWT. (Not an x.509 certificate.) This key is assumed to have been pre-created using some external mechanism such as a TPM, HSM, openssl, or other method. --id key (optional) Optional user-specified \"key\" used for linkage with an incoming JWT via Kong's config.key_claim_name setting (defaults to \"iss\" field). See Kong documentation for JWT plugin for an example of how this parameter is used. Upon completion, for token-type == \"jwt\", the command outputs the autogenerated key for the --id option above. This value must be used during later construction of the JWT. deluser Delete an API gateway user. Requires additional arguments: --user username (required) Username of the user to delete. jwt Utility function to create a JWT proxy authentication token from a supplied secret. This command does not require secret store access, but the values supplied must match those presented to the adduser command earlier. Requires additional arguments: --algorithm RS256 | ES256 (required) Algorithm used for signing the JWT. (See RFC 7518 for a list of signing algorithms.) --id key (required) The \"key\" field from the \"adduser\" command. (This will be either the --id argument passed in, or the automatically generated identifier.) (This is not actually a cryptographic key, but a unique identifier such as would be used in a database.) --private_key /path/to/private.key (required) Private key used to sign the JWT (PEM-encoded) with a key type corresponding to the above-supplied algorithm. 
--exp duration (optional) Duration of generated JWT expressed as a golang-parseable duration value. Use \"never\" to omit an expiration field in the JWT. Defaults to \"1h\" (one hour) if unspecified. The generated JWT will be the encoded representation of: { \"typ\": \"JWT\", \"alg\": \"RS256 | ES256\" } { \"iss\": \" key \", \"exp\": (calculated expiration time) } (signature)","title":"SUBCOMMANDS"},{"location":"security/secrets-config-proxy/#configuration","text":"","title":"CONFIGURATION"},{"location":"security/secrets-config-proxy/#environment","text":"IKM_HOOK Enables decryption of an encrypted secret store master key by pointing at an executable that returns an encryption seed that is formatted as a hex-encoded (typically 32-byte) string to its stdout. This optional feature, if enabled, requires pointing at the same executable that was used by security-secretstore-setup to provision and unlock the EdgeX secret store.","title":"ENVIRONMENT"},{"location":"security/secrets-config-proxy/#see-also","text":"secrets-config(1) EdgeX Foundry Last change: 2020","title":"SEE ALSO"},{"location":"security/secrets-config/","text":"% edgex-secrets-config(1) User Manuals edgex-secrets-config(1) NAME edgex-secrets-config \u2013 Perform post-installation EdgeX secrets configuration SYNOPSIS edgex-secrets-config [OPTIONS] COMMAND [ARG...] DESCRIPTION edgex-secrets-config performs post-installation EdgeX secrets configuration. edgex-secrets-config takes a command that specifies which module is being configured, and module-specific arguments thereafter. COMMANDS help Return a list of available commands. Use edgex-secrets-config help (command) for an overview of available subcommands. proxy Configure secrets related to the EdgeX reverse proxy. Use edgex-secrets-config help proxy for an overview of available subcommands. 
SEE ALSO edgex-secrets-config-proxy(1) EdgeX Foundry Last change: 2021","title":"Secrets config"},{"location":"security/secrets-config/#name","text":"edgex-secrets-config \u2013 Perform post-installation EdgeX secrets configuration","title":"NAME"},{"location":"security/secrets-config/#synopsis","text":"edgex-secrets-config [OPTIONS] COMMAND [ARG...]","title":"SYNOPSIS"},{"location":"security/secrets-config/#description","text":"edgex-secrets-config performs post-installation EdgeX secrets configuration. edgex-secrets-config takes a command that specifies which module is being configured, and module-specific arguments thereafter.","title":"DESCRIPTION"},{"location":"security/secrets-config/#commands","text":"help Return a list of available commands. Use edgex-secrets-config help (command) for an overview of available subcommands. proxy Configure secrets related to the EdgeX reverse proxy. Use edgex-secrets-config help proxy for an overview of available subcommands.","title":"COMMANDS"},{"location":"security/secrets-config/#see-also","text":"edgex-secrets-config-proxy(1) EdgeX Foundry Last change: 2021","title":"SEE ALSO"},{"location":"security/security-file-token-provider.1/","text":"NAME security-file-token-provider -- Generate Vault tokens for EdgeX services SYNOPSIS security-file-token-provider [-h] [-c|--confdir <dir>] [-p|--profile <name>] DESCRIPTION security-file-token-provider generates per-service Vault tokens for EdgeX services so that they can make authenticated connections to Vault to retrieve application secrets. security-file-token-provider implements a generic secret seeding mechanism based on pre-created files and is designed for maximum portability. security-file-token-provider takes a configuration file that specifies the services for which tokens shall be generated and the Vault access policy that shall be applied to those tokens. 
security-file-token-provider assumes that there is some underlying protection mechanism that will be used to prevent EdgeX services from reading each other's tokens. OPTIONS -h, --help : Display help text -c, --confdir <dir> : Look in this directory for configuration.toml instead. -p, --profile <name> : Indicate a configuration profile other than the default FILES configuration.toml This file specifies the TCP/IP location of the Vault service and parameters used for Vault token generation. [SecretService] Scheme = \"https\" Server = \"localhost\" Port = 8200 [TokenFileProvider] PrivilegedTokenPath = \"/run/edgex/secrets/security-file-token-provider/secrets-token.json\" ConfigFile = \"token-config.json\" OutputDir = \"/run/edgex/secrets/\" OutputFilename = \"secrets-token.json\" secrets-token.json This file contains a token used to authenticate to Vault. The filename is customizable via OutputFilename . { \"auth\": { \"client_token\": \"s.wOrq9dO9kzOcuvB06CMviJhZ\" } } token-config.json This configuration file tells security-file-token-provider which tokens to generate. In order to avoid a directory full of .hcl files, this configuration file uses the JSON serialization of HCL, documented at https://github.com/hashicorp/hcl/blob/master/README.md . Note that all paths are keys under the \"path\" object. { \"service-name\": { \"edgex_use_defaults\": true, \"custom_policy\": { \"path\": { \"secret/non/standard/location/*\": { \"capabilities\": [ \"list\", \"read\" ] } } }, \"custom_token_parameters\": { } } } When edgex_use_defaults is true (the default), the following is added to the policy specification for the auto-generated policy. The auto-generated policy is named edgex-secrets-XYZ where XYZ is service-name from the JSON key above. Thus, the final policy created for the token will be the union of the policy below (if using the default policy) plus the custom_policy defined above. 
{ \"path\": { \"secret/edgex/service-name/*\": { \"capabilities\": [ \"create\", \"update\", \"delete\", \"list\", \"read\" ] } } } When edgex-use-default is true (the default), the following is inserted (if not overridden) to the token parameters for the generated token. (See https://www.vaultproject.io/api/auth/token/index.html#create-token .) \"display_name\": token-service-name \"no_parent\": true \"policies\": [ \"edgex-service-service-name\" ] Note that display_name is set by vault to be \"token-\" + the specified display name. This is hard-coded in Vault from versions 0.6 to 1.2.3 and cannot be changed. Additionally, a meta property, edgex-service-name is set to service-name . The edgex-service-name property may be used by clients to infer the location in the secret store where service-specific secrets are held. \"meta\": { \"edgex-service-name\": service-name } {OutputDir}/{service-name}/{OutputFilename} For example: /run/edgex/secrets/edgex-security-proxy-setup/secrets-token.json For each \"service-name\" in {ConfigFile} , a matching directory is created under {OutputDir} and the corresponding Vault token is stored as {OutputFilename} . This file contains the authorization token generated to allow the indicated EdgeX service to retrieve its secrets. PREREQUISITES PrivilegedTokenPath points to a non-expired Vault token that the security-file-token-provider will use to install policies and create per-service tokens. It will create policies with the naming convention \"edgex-service-service-name\" where service-name comes from JSON keys in the configuration file and the Vault policy will be configured to allow creation and modification of policies using this naming convention. This token must have the following policy ( edgex-privileged-token-creator ) configured. 
path \"auth/token/create\" { capabilities = [\"create\", \"update\", \"sudo\"] } path \"auth/token/create-orphan\" { capabilities = [\"create\", \"update\", \"sudo\"] } path \"auth/token/create/*\" { capabilities = [\"create\", \"update\", \"sudo\"] } path \"sys/policies/acl/edgex-service-*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\" ] } path \"sys/policies/acl\" { capabilities = [\"list\"] } AUTHOR EdgeX Foundry \\< info@edgexfoundry.org >","title":"NAME"},{"location":"security/security-file-token-provider.1/#name","text":"security-file-token-provider -- Generate Vault tokens for EdgeX services","title":"NAME"},{"location":"security/security-file-token-provider.1/#synopsis","text":"security-file-token-provider [-h--confdir \\] [-p|--profile \\]","title":"SYNOPSIS"},{"location":"security/security-file-token-provider.1/#description","text":"security-file-token-provider generates per-service Vault tokens for EdgeX services so that they can make authenticated connections to Vault to retrieve application secrets. security-file-token-provider implements a generic secret seeding mechanism based on pre-created files and is designed for maximum portability. security-file-token-provider takes a configuration file that specifies the services for which tokens shall be generated and the Vault access policy that shall be applied to those tokens. security-file-token-provider assumes that there is some underlying protection mechanism that will be used to prevent EdgeX services from reading each other's tokens.","title":"DESCRIPTION"},{"location":"security/security-file-token-provider.1/#options","text":"-h, --help : Display help text -c, --confdir \\ : Look in this directory for configuration.toml instead. 
-p, --profile <name> : Indicate a configuration profile other than the default","title":"OPTIONS"},{"location":"security/security-file-token-provider.1/#files","text":"","title":"FILES"},{"location":"security/security-file-token-provider.1/#configurationtoml","text":"This file specifies the TCP/IP location of the Vault service and parameters used for Vault token generation. [SecretService] Scheme = \"https\" Server = \"localhost\" Port = 8200 [TokenFileProvider] PrivilegedTokenPath = \"/run/edgex/secrets/security-file-token-provider/secrets-token.json\" ConfigFile = \"token-config.json\" OutputDir = \"/run/edgex/secrets/\" OutputFilename = \"secrets-token.json\"","title":"configuration.toml"},{"location":"security/security-file-token-provider.1/#secrets-tokenjson","text":"This file contains a token used to authenticate to Vault. The filename is customizable via OutputFilename . { \"auth\": { \"client_token\": \"s.wOrq9dO9kzOcuvB06CMviJhZ\" } }","title":"secrets-token.json"},{"location":"security/security-file-token-provider.1/#token-configjson","text":"This configuration file tells security-file-token-provider which tokens to generate. In order to avoid a directory full of .hcl files, this configuration file uses the JSON serialization of HCL, documented at https://github.com/hashicorp/hcl/blob/master/README.md . Note that all paths are keys under the \"path\" object. { \"service-name\": { \"edgex_use_defaults\": true, \"custom_policy\": { \"path\": { \"secret/non/standard/location/*\": { \"capabilities\": [ \"list\", \"read\" ] } } }, \"custom_token_parameters\": { } } } When edgex_use_defaults is true (the default), the following is added to the policy specification for the auto-generated policy. The auto-generated policy is named edgex-secrets-XYZ where XYZ is service-name from the JSON key above. Thus, the final policy created for the token will be the union of the policy below (if using the default policy) plus the custom_policy defined above. 
{ \"path\": { \"secret/edgex/service-name/*\": { \"capabilities\": [ \"create\", \"update\", \"delete\", \"list\", \"read\" ] } } } When edgex-use-default is true (the default), the following is inserted (if not overridden) to the token parameters for the generated token. (See https://www.vaultproject.io/api/auth/token/index.html#create-token .) \"display_name\": token-service-name \"no_parent\": true \"policies\": [ \"edgex-service-service-name\" ] Note that display_name is set by vault to be \"token-\" + the specified display name. This is hard-coded in Vault from versions 0.6 to 1.2.3 and cannot be changed. Additionally, a meta property, edgex-service-name is set to service-name . The edgex-service-name property may be used by clients to infer the location in the secret store where service-specific secrets are held. \"meta\": { \"edgex-service-name\": service-name }","title":"token-config.json"},{"location":"security/security-file-token-provider.1/#outputdirservice-nameoutputfilename","text":"For example: /run/edgex/secrets/edgex-security-proxy-setup/secrets-token.json For each \"service-name\" in {ConfigFile} , a matching directory is created under {OutputDir} and the corresponding Vault token is stored as {OutputFilename} . This file contains the authorization token generated to allow the indicated EdgeX service to retrieve its secrets.","title":"{OutputDir}/{service-name}/{OutputFilename}"},{"location":"security/security-file-token-provider.1/#prerequisites","text":"PrivilegedTokenPath points to a non-expired Vault token that the security-file-token-provider will use to install policies and create per-service tokens. It will create policies with the naming convention \"edgex-service-service-name\" where service-name comes from JSON keys in the configuration file and the Vault policy will be configured to allow creation and modification of policies using this naming convention. 
This token must have the following policy ( edgex-privileged-token-creator ) configured. path \"auth/token/create\" { capabilities = [\"create\", \"update\", \"sudo\"] } path \"auth/token/create-orphan\" { capabilities = [\"create\", \"update\", \"sudo\"] } path \"auth/token/create/*\" { capabilities = [\"create\", \"update\", \"sudo\"] } path \"sys/policies/acl/edgex-service-*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\" ] } path \"sys/policies/acl\" { capabilities = [\"list\"] }","title":"PREREQUISITES"},{"location":"security/security-file-token-provider.1/#author","text":"EdgeX Foundry <info@edgexfoundry.org>","title":"AUTHOR"},{"location":"threat-models/secret-store/","text":"EdgeX Foundry Secret Management Threat Model Table of Contents Background High Level Design Threat Model Vault Master Key Encryption","title":"EdgeX Foundry Secret Management Threat Model"},{"location":"threat-models/secret-store/#edgex-foundry-secret-management-threat-model","text":"","title":"EdgeX Foundry Secret Management Threat Model"},{"location":"threat-models/secret-store/#table-of-contents","text":"Background High Level Design Threat Model Vault Master Key Encryption","title":"Table of Contents"},{"location":"threat-models/secret-store/background/","text":"Background The secret management components comprise a very small portion of the EdgeX framework. Many components of an actual system are out-of-scope including the underlying hardware platform, the operating system on which the framework is running, the applications that are using it, and even the existence of workload isolation technologies, although the reference code does support deployment as Docker containers or Snaps. The goal of the EdgeX secret store is to provide general-purpose secret management to EdgeX core services and applications. 
Motivation The EdgeX Foundry security roadmap is published on the Security WG Wiki: https://wiki.edgexfoundry.org/display/FA/Security+Working+Group https://wiki.edgexfoundry.org/download/attachments/329467/EdgeX%20Security%20Architecture%20Roadmap.pptx?version=1&modificationDate=1536753478000&api=v2 The security roadmap establishes the requirement for a secret storage engine at the edge, and furthermore that hardware secure storage should be supported: Initial EdgeX secrets (needed to start Vault/Kong) will be encrypted on the file system using a secure storage abstraction layer \u2013 allowing other implementations to store these in hardware stores (based on hardware root of trust systems) https://www.edgexfoundry.org/blog/2018/11/15/edgex-foundry-releases-delhi-and-plans-for-edinburgh/ https://wiki.edgexfoundry.org/display/FA/Edinburgh+Release The current state of secret storage is described in the Hardware Secure Storage Draft . The AS-IS architecture resembles the following diagram: As the diagram notes, the critical secrets for securing the entire on-device infrastructure sit unencrypted on bulk storage media. While the depiction that the Vault contents are encrypted is true, the key needed to decrypt it is in plaintext nearby. The Hardware Secure Storage Draft proposes the following future state: This future state proposes a security service that can encrypt the currently unencrypted data items. A number of problems must be resolved to make this future state a reality: Initialization order of containers: containers must block until their prerequisites have been satisfied. It is not sufficient to have only start-ordering, as initialization can take a variable amount of time, and the initialization tasks of a previous step are not necessarily completed before the next step is initiated. Allowing for variability in the hardware encryption component. 
A simple bulk encryption/decryption interface does not allow for interesting scenarios based on local attestation, for example. Distribution of Vault tokens to services. General Requirements for Vault on the Edge When using Vault at the edge, there are a number of general problems that must be solved as illustrated in the below diagram: Working top to bottom and left to right: Vault requires TLS to protect secrets in transit. This introduces a requirement to establish an on-device PKI, and the consequent need to prevent compromise of TLS private keys and unauthorized issuance of TLS certificates. It is difficult to dynamically trust a new certificate authority as the trusted list of certificate authorities is often set at build time not runtime. An alternative is to trust a particular CA at build time, and to pre-populate the PKI during device provisioning. Vault requires a master encryption key to encrypt its database. This master key is generated when the vault is initialized and must be resupplied when Vault is restarted to \"unlock\" the vault. The implementation must ensure the confidentiality, integrity, and availability of the Vault master key. Normally the vault is manually unsealed using a human process. In IoT scenarios, the vault must be unsealed automatically, which presents additional challenges. Services need to talk to Vault to retrieve their secrets. Thus, the service location mechanism that clients use to establish that connection must be trustworthy / non-spoofable. One option is to hard-code \"localhost\" or use DNS provided by container orchestration software. The problem is significantly harder if using an outsourced service locator, like the Consul service locator, as the trust in Consul then needs to be established. There is a general bootstrapping problem for the services themselves: clients need a Vault token to authenticate to Vault. 
The confidentiality, integrity, and availability of this token needs to be protected, and the token somehow needs to be distributed to the service. If the client tries to pull the token from somewhere, there must be a preexisting mechanism to authenticate the request. Alternatively, the token could be pushed to the service before it is started: environment variables or files are common approaches. Lastly, there could be an agent that sends the token to a service after it starts, such as by an HTTP API. (Reference: Cubbyhole authentication principles .) In addition, the previously mentioned PKI problem applies here. The Vault storage itself must be protected against integrity and availability threats. Confidentiality is provided through the Vault master key. The secret management design for EdgeX can be said to be finished when there is a sufficiently secure solution to the above challenges for the supported execution models. Next Steps for EdgeX All parts of the system must collaborate in order to ensure a robust secret management design. What is needed is a systematic approach to secret management that will close the gaps between the AS-IS and TO-BE future state. This systematic approach is based on a formal threat model with the aim that the system will meet some critical security objectives. The threat model is built against a proposed design and validates the security architecture of the design. Through threat modeling, we can identify assets, adversaries, threats, and mitigations against those threats. We can then make a prioritized implementation plan to address those threats. More importantly, for someone adopting EdgeX, the documented threat model outlines the threats that the framework has been designed to protect against and by omission, the threats that it has not.","title":"Background"},{"location":"threat-models/secret-store/background/#background","text":"The secret management components comprise a very small portion of the EdgeX framework. 
Many components of an actual system are out-of-scope including the underlying hardware platform, the operating system on which the framework is running, the applications that are using it, and even the existence of workload isolation technologies, although the reference code does support deployment as Docker containers or Snaps. The goal of the EdgeX secret store is to provide general-purpose secret management to EdgeX core services and applications.","title":"Background"},{"location":"threat-models/secret-store/background/#motivation","text":"The EdgeX Foundry security roadmap is published on the Security WG Wiki: https://wiki.edgexfoundry.org/display/FA/Security+Working+Group https://wiki.edgexfoundry.org/download/attachments/329467/EdgeX%20Security%20Architecture%20Roadmap.pptx?version=1&modificationDate=1536753478000&api=v2 The security roadmap establishes the requirement for a secret storage engine at the edge, and, furthermore, that hardware secure storage should be supported: Initial EdgeX secrets (needed to start Vault/Kong) will be encrypted on the file system using a secure storage abstraction layer \u2013 allowing other implementations to store these in hardware stores (based on hardware root of trust systems) https://www.edgexfoundry.org/blog/2018/11/15/edgex-foundry-releases-delhi-and-plans-for-edinburgh/ https://wiki.edgexfoundry.org/display/FA/Edinburgh+Release The current state of secret storage is described in the Hardware Secure Storage Draft . The AS-IS architecture resembles the following diagram: As the diagram notes, the critical secrets for securing the entire on-device infrastructure sit unencrypted on bulk storage media. While the depiction that the Vault contents are encrypted is true, the key needed to decrypt it is in plaintext nearby. The Hardware Secure Storage Draft proposes the following future state: This future state proposes a security service that can encrypt the currently unencrypted data items. 
A number of problems must be resolved to make this future state a reality: Initialization order of containers: containers must block until their prerequisites have been satisfied. It is not sufficient to have only start-ordering, as initialization can take a variable amount of time, and the initialization tasks of a previous step are not necessarily completed before the next step is initiated. Allowing for variability in the hardware encryption component. A simple bulk encryption/decryption interface does not allow for interesting scenarios based on local attestation, for example. Distribution of Vault tokens to services.","title":"Motivation"},{"location":"threat-models/secret-store/background/#general-requirements-for-vault-on-the-edge","text":"When using Vault at the edge, there are a number of general problems that must be solved as illustrated in the below diagram: Working top to bottom and left to right: Vault requires TLS to protect secrets in transit. This introduces a requirement to establish an on-device PKI, and the consequent need to prevent compromise of TLS private keys and unauthorized issuance of TLS certificates. It is difficult to dynamically trust a new certificate authority as the trusted list of certificate authorities is often set at build time not runtime. An alternative is to trust a particular CA at build time, and to pre-populate the PKI during device provisioning. Vault requires a master encryption key to encrypt its database. This master key is generated when the vault is initialized and must be resupplied when Vault is restarted to \"unlock\" the vault. The implementation must ensure the confidentiality, integrity, and availability of the Vault master key. Normally the vault is manually unsealed using a human process. In IoT scenarios, the vault must be unsealed automatically, which presents additional challenges. Services need to talk to Vault to retrieve their secrets. 
Thus, the service location mechanism that clients use to establish that connection must be trustworthy / non-spoofable. One option is to hard-code \"localhost\" or use DNS provided by container orchestration software. The problem is significantly harder if using an outsourced service locator, like the Consul service locator, as the trust in Consul then needs to be established. There is a general bootstrapping problem for the services themselves: clients need a Vault token to authenticate to Vault. The confidentiality, integrity, and availability of this token needs to be protected, and the token somehow needs to be distributed to the service. If the client tries to pull the token from somewhere, there must be a preexisting mechanism to authenticate the request. Alternatively, the token could be pushed to the service before it is started: environment variables or files are common approaches. Lastly, there could be an agent that sends the token to a service after it starts, such as by an HTTP API. (Reference: Cubbyhole authentication principles .) In addition, the previously mentioned PKI problem applies here. The Vault storage itself must be protected against integrity and availability threats. Confidentiality is provided through the Vault master key. The secret management design for EdgeX can be said to be finished when there is a sufficiently secure solution to the above challenges for the supported execution models.","title":"General Requirements for Vault on the Edge"},{"location":"threat-models/secret-store/background/#next-steps-for-edgex","text":"All parts of the system must collaborate in order to ensure a robust secret management design. What is needed is a systematic approach to secret management that will close the gaps between the AS-IS and TO-BE future state. This systematic approach is based on a formal threat model with the aim that the system will meet some critical security objectives. 
The threat model is built against a proposed design and validates the security architecture of the design. Through threat modeling, we can identify assets, adversaries, threats, and mitigations against those threats. We can then make a prioritized implementation plan to address those threats. More importantly, for someone adopting EdgeX, the documented threat model outlines the threats that the framework has been designed to protect against and by omission, the threats that it has not.","title":"Next Steps for EdgeX"},{"location":"threat-models/secret-store/high_level_design/","text":"Detailed Design This document gets into the design details of the proposed secret management architecture, starting with a design overview and going into greater detail for each subsystem. Design Overview In context of the stated future goal to support hardware-based secret storage, it is important to note that in a Vault-based design, not every secret is actually wrapped by a hardware-backed key. Instead, the secrets in Vault are wrapped by a single master key, and the encryption and decryption of secrets are done in a user-level process in software . The Vault master key is then wrapped by one or more additional keys, ultimately to a root key that is hardware-based using some authorization mechanism. In a PKCS#11 hardware token, authorization is typically a PIN. In a TPM, authorization is typically a set of PCR values and an optional password. The idea is that the Vault master key is eventually protected by some uncopyable unique secret attached to physical hardware. The hardware may or may not have non-volatile tamper-resistant storage. Non-volatile storage is useful for integrity protection as well as in pre-OS scenarios. An example of the former would be to store a hash value for HTTP Public Key Pinning (HPKP) in a manner that makes it difficult for an attacker to pin a different key. 
An example of the latter would be storing a LUKS disk encryption key that can decrypt a root file system when normal file system storage is not yet available. If non-volatile storage is available, it is often available only in very limited quantity. An obvious consequence of the above design is that at some point along the line, the Vault master key or a wrapping key is observably exposed to user-mode software. In fact, the number two recommendation for Vault hardening is \"single tenancy\" which is further explained, in priority order, as (a) giving Vault its own physical machine, (b) giving Vault its own virtual machine, or (c) giving Vault its own container. The general solution to the exposure of the Vault master key or a wrapping key is to use a Trusted Execution Environment (TEE) to limit observability. There is currently no platform- and architecture-independent TEE solution. High-level design Figure 1: High-level design. The secrets to be protected are the application secrets (P-1) . The application secrets are protected with a per-service Vault service token (S-1) . The Vault service token is delivered by a \"token server\" running in the security service to a pre-agreed rendezvous location, where mandatory access control, namespaces, or file system permissions constrain path accessibility. Vault access tokens are simply 128-bit random handles that are renewed at the Vault server. They can be shared across multiple instances of a load-balanced service, and unlike a JWT there is no need to periodically re-issue them if they have not expired. The token server has its own non-root token-issuing token (S-3) that is created by the security service with the root token after it has initialized or unlocked the vault but before the root token is revoked. (S-4) Because of the sensitive nature of this token, it is co-located in the security service, and revoked immediately after use. 
The actual application secrets are stored in the Vault encrypted data store (S-6) that is logically stored in Consul's data store (S-7) . The vault data store is encrypted with a master key (S-5) that is held in Vault memory and forgotten across Vault restarts. The master key must be resupplied whenever Vault is restarted. The security service encrypts the master key using AES-256-GCM where the key (S-13) is derived using an RFC5869 key derivation function (KDF). The input key material for the KDF originates from a vendor-defined plugin that interfaces with a hardware security mechanism such as a TPM, PKCS11-compatible HSM, trusted execution environments (TEE), or enclave. An encrypted Vault master key is what is ultimately saved to storage. Confidentiality of the secret management APIs is established using server-side TLS. The PKI initialization component is responsible for generating a root certificate authority (S-8) , one or more intermediate certificate authorities (S-9) , and several leaf certificates (S-10) needed for initialization of the core services. The PKI can be generated afresh every boot, or installed during initial provisioning and cached. PKI initialization is covered next. PKI Initialization Figure 2: PKI initialization. PKI initialization must happen before any other component in the secret management architecture is started because Vault requires a PKI to be in place to protect its HTTP API. Creation of a PKI is a multi-stage operation and care must be taken to ensure that critical secrets, such as the CA private keys, are not written to a location where they can be recovered, such as bulk storage devices. The PKI can be created on-device at every boot, at device provisioning time, or created off-device and imported. Caching of the PKI is optional if the PKI is created afresh every boot, but required otherwise. 
If the implementation allows, the private keys for certificate authorities should be destroyed after PKI generation to prevent unauthorized issuance of new leaf certificates, except where the certificate authority is stored in Vault and controlled with an appropriate policy. Following creation of the PKI, or retrieving it from cache, the PKI initialization is responsible for distributing keying material to pre-agreed per-service drop locations where service configuration files expect to find them. PKI initialization is not instantaneous. Even if PKI initialization is started first, dependent services may also be started before PKI initialization is completed. It is necessary to implement init-blocking code in dependent services that delays service startup until PKI assets have been delivered to the service. Most dependent services do not support encrypted TLS private keys. File access controls offered by the underlying execution environment are their only protection. A potential future enhancement might be to re-use the key derivation strategy used earlier to generate additional keys to encrypt the cached PKI keying material at rest. (Update: ADR 0015, adopted after this threat model was written, stipulates that TLS will not be used for single-node deployments of EdgeX.) Vault initialization and unsealing flow Figure 3: Vault initialization and unsealing flow When the security service starts, the first thing that it does is check to see if a hardware security hook has been defined. The presence of a hardware security hook is indicated by an environment variable, IKM_HOOK, that points to an executable program. The security service will run the program and look for a hex-encoded key on its standard output. If a key is found, it will be used as the input key material for the HMAC key derivation function, otherwise, hardware security will not be used. The input key material is combined with a random salt that is also saved to disk for later retrieval. 
The salt ensures that unique encryption keys will be used each time EdgeX is installed on a platform, even if the underlying input key material does not change. The salt also defends against weak input key material. Initialization flow Next, the security service will determine if Vault has been initialized. In the case that Vault is uninitialized, Vault's initialization API will be invoked, which results in a set of keys that can be used to reconstruct a Vault master key. When hardware security is enabled, the input key material and salt are fed into the key derivation function to generate a unique AES-256-GCM encryption key for each key shard. The encrypted keys along with nonces will be persisted to disk. AES-GCM protects against padding oracle attacks, but is sensitive to re-use of the salt value. This weakness is addressed both by using a unique encryption key for each shard, as well as the expectation that encryption is performed exactly once: when Vault is initialized. The Vault response is saved to disk directly in the case that hardware security is not enabled. Unseal flow If Vault is found to be in an initialized and sealed state, the Vault master key shards are retrieved from disk. If they are encrypted, they will be decrypted by reversing the process performed during initialization. The key shards are then fed back to Vault until the Vault is unsealed and operational. Token-issuing flow Figure 7: Token-issuing flow. Client side Every service that wants to query Vault must link to a secrets module either directly (go-mod-secrets) or indirectly (go-mod-bootstrap) or implement their own Vault interface. The module must take as input a path to a file that contains a Vault access token specific to that service. There is currently no secrets module for the C SDK. 
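The per-shard encryption step described above can be sketched as follows, assuming RFC 5869 HKDF-SHA256 for key derivation and AES-256-GCM for sealing each key shard under its own derived key. The function names and the info-string format are invented for illustration and do not mirror the actual security-secretstore-setup code.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// hkdfSHA256 implements RFC 5869 extract-and-expand for output lengths up to
// one hash block, which is enough for a single AES-256 key (32 bytes).
func hkdfSHA256(ikm, salt, info []byte) []byte {
	ext := hmac.New(sha256.New, salt)
	ext.Write(ikm)
	prk := ext.Sum(nil) // extract step: PRK = HMAC-SHA256(salt, IKM)
	exp := hmac.New(sha256.New, prk)
	exp.Write(info)
	exp.Write([]byte{1}) // first (and only) expand block
	return exp.Sum(nil)  // 32 bytes = AES-256 key
}

// sealShard encrypts one master-key shard under a key derived uniquely for
// that shard index, returning nonce||ciphertext.
func sealShard(ikm, salt, shard []byte, index int) ([]byte, error) {
	key := hkdfSHA256(ikm, salt, []byte(fmt.Sprintf("vault-shard-%d", index)))
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, shard, nil), nil
}

// openShard reverses sealShard given the same input key material and salt,
// as done during the unseal flow.
func openShard(ikm, salt, sealed []byte, index int) ([]byte, error) {
	key := hkdfSHA256(ikm, salt, []byte(fmt.Sprintf("vault-shard-%d", index)))
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	ikm := []byte("input-key-material-from-ikm-hook")
	salt := []byte("random-salt-persisted-to-disk")
	sealed, _ := sealShard(ikm, salt, []byte("shard-0"), 0)
	plain, _ := openShard(ikm, salt, sealed, 0)
	fmt.Println(string(plain)) // prints "shard-0"
}
```

Deriving a distinct key per shard index is what lets the design tolerate GCM's sensitivity to nonce/key reuse: each key is used for exactly one encryption at initialization time.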
Clients must be prepared to handle a number of error conditions while attempting to access the secret store: There may be race conditions between the security service issuing new tokens and the service consuming an old token. The supplied token may be expired (tokens will expire if not renewed periodically) Vault may not be accessible (it is a networked service, after all) The client may be waiting for a secret that has not yet been provisioned into the secret store. Judicious use of retry loops should be sufficient to handle most of the above issues. Server side On the server side, the Vault master key will be used to generate a fresh \"root token\". The root token will generate a special \"token-issuing token\" that will generate tokens for the EdgeX microservices. The root token will then be revoked, and a \"token provider\" process with access to the token-issuing token will be launched in the background. EdgeX will provide a single reference implementation for the token provider: * security-file-token-provider: This token provider will consume a list of services that require tokens, along with a set of customizable parameters. At startup, the service tokens are created in bulk and delivered via the host file system on a per-service basis. The token-issuing token will be revoked upon termination of the token provider. Token revocation Vault tokens are persistent. Although they will automatically expire if they are not renewed, inadvertent disclosure of a token would be difficult to detect. This condition could allow an attacker to maintain an unauthorized connection to Vault indefinitely. Since tokens do expire if not renewed, it is necessary to generate fresh tokens on startup. 
Therefore, part of the startup process is the revocation of all previously issued Vault tokens, as a mitigation against token disclosure as well as garbage collection of obsolete tokens.","title":"Detailed Design"},{"location":"threat-models/secret-store/high_level_design/#detailed-design","text":"This document gets into the design details of the proposed secret management architecture, starting with a design overview and going into greater detail for each subsystem.","title":"Detailed Design"},{"location":"threat-models/secret-store/high_level_design/#design-overview","text":"In context of the stated future goal to support hardware-based secret storage, it is important to note that in a Vault-based design, not every secret is actually wrapped by a hardware-backed key. Instead, the secrets in Vault are wrapped by a single master key, and the encryption and decryption of secrets are done in a user-level process in software . The Vault master key is then wrapped by one or more additional keys, ultimately to a root key that is hardware-based using some authorization mechanism. In a PKCS#11 hardware token, authorization is typically a PIN. In a TPM, authorization is typically a set of PCR values and an optional password. The idea is that the Vault master key is eventually protected by some uncopyable unique secret attached to physical hardware. The hardware may or may not have non-volatile tamper-resistant storage. Non-volatile storage is useful for integrity protection as well as in pre-OS scenarios. An example of the former would be to store a hash value for HTTP Public Key Pinning (HPKP) in a manner that makes it difficult for an attacker to pin a different key. An example of the latter would be storing a LUKS disk encryption key that can decrypt a root file system when normal file system storage is not yet available. If non-volatile storage is available, it is often available only in very limited quantity. 
An obvious consequence of the above design is that at some point along the line, the Vault master key or a wrapping key is observably exposed to user-mode software. In fact, the number two recommendation for Vault hardening is \"single tenancy\" which is further explained, in priority order, as (a) giving Vault its own physical machine, (b) giving Vault its own virtual machine, or (c) giving Vault its own container. The general solution to the exposure of the Vault master key or a wrapping key is to use a Trusted Execution Environment (TEE) to limit observability. There is currently no platform- and architecture-independent TEE solution.","title":"Design Overview"},{"location":"threat-models/secret-store/high_level_design/#high-level-design","text":"Figure 1: High-level design. The secrets to be protected are the application secrets (P-1) . The application secrets are protected with a per-service Vault service token (S-1) . The Vault service token is delivered by a \"token server\" running in the security service to a pre-agreed rendezvous location, where mandatory access control, namespaces, or file system permissions constrain path accessibility. Vault access tokens are simply 128-bit random handles that are renewed at the Vault server. They can be shared across multiple instances of a load-balanced service, and unlike a JWT there is no need to periodically re-issue them if they have not expired. The token server has its own non-root token-issuing token (S-3) that is created by the security service with the root token after it has initialized or unlocked the vault but before the root token is revoked. (S-4) Because of the sensitive nature of this token, it is co-located in the security service, and revoked immediately after use. The actual application secrets are stored in the Vault encrypted data store (S-6) that is logically stored in Consul's data store (S-7) . 
The vault data store is encrypted with a master key (S-5) that is held in Vault memory and forgotten across Vault restarts. The master key must be resupplied whenever Vault is restarted. The security service encrypts the master key using AES-256-GCM where the key (S-13) is derived using an RFC5869 key derivation function (KDF). The input key material for the KDF originates from a vendor-defined plugin that interfaces with a hardware security mechanism such as a TPM, PKCS11-compatible HSM, trusted execution environments (TEE), or enclave. An encrypted Vault master key is what is ultimately saved to storage. Confidentiality of the secret management APIs is established using server-side TLS. The PKI initialization component is responsible for generating a root certificate authority (S-8) , one or more intermediate certificate authorities (S-9) , and several leaf certificates (S-10) needed for initialization of the core services. The PKI can be generated afresh every boot, or installed during initial provisioning and cached. PKI initialization is covered next.","title":"High-level design"},{"location":"threat-models/secret-store/high_level_design/#pki-initialization","text":"Figure 2: PKI initialization. PKI initialization must happen before any other component in the secret management architecture is started because Vault requires a PKI to be in place to protect its HTTP API. Creation of a PKI is a multi-stage operation and care must be taken to ensure that critical secrets, such as the CA private keys, are not written to a location where they can be recovered, such as bulk storage devices. The PKI can be created on-device at every boot, at device provisioning time, or created off-device and imported. Caching of the PKI is optional if the PKI is created afresh every boot, but required otherwise. 
If the implementation allows, the private keys for certificate authorities should be destroyed after PKI generation to prevent unauthorized issuance of new leaf certificates, except where the certificate authority is stored in Vault and controlled with an appropriate policy. Following creation of the PKI, or retrieving it from cache, the PKI initialization is responsible for distributing keying material to pre-agreed per-service drop locations where service configuration files expect to find them. PKI initialization is not instantaneous. Even if PKI initialization is started first, dependent services may also be started before PKI initialization is completed. It is necessary to implement init-blocking code in dependent services that delays service startup until PKI assets have been delivered to the service. Most dependent services do not support encrypted TLS private keys. File access controls offered by the underlying execution environment are their only protection. A potential future enhancement might be to re-use the key derivation strategy used earlier to generate additional keys to encrypt the cached PKI keying material at rest. (Update: ADR 0015, adopted after this threat model was written, stipulates that TLS will not be used for single-node deployments of EdgeX.)","title":"PKI Initialization"},{"location":"threat-models/secret-store/high_level_design/#vault-initialization-and-unsealing-flow","text":"Figure 3: Vault initialization and unsealing flow When the security service starts, the first thing that it does is check to see if a hardware security hook has been defined. The presence of a hardware security hook is indicated by an environment variable, IKM_HOOK, that points to an executable program. The security service will run the program and look for a hex-encoded key on its standard output. If a key is found, it will be used as the input key material for the HMAC key derivation function, otherwise, hardware security will not be used. 
The input key material is combined with a random salt that is also saved to disk for later retrieval. The salt ensures that unique encryption keys will be used each time EdgeX is installed on a platform, even if the underlying input key material does not change. The salt also defends against weak input key material.","title":"Vault initialization and unsealing flow"},{"location":"threat-models/secret-store/high_level_design/#initialization-flow","text":"Next, the security service will determine if Vault has been initialized. In the case that Vault is uninitialized, Vault's initialization API will be invoked, which results in a set of keys that can be used to reconstruct a Vault master key. When hardware security is enabled, the input key material and salt are fed into the key derivation function to generate a unique AES-256-GCM encryption key for each key shard. The encrypted keys along with nonces will be persisted to disk. AES-GCM protects against padding oracle attacks, but is sensitive to re-use of the salt value. This weakness is addressed both by using a unique encryption key for each shard, as well as the expectation that encryption is performed exactly once: when Vault is initialized. The Vault response is saved to disk directly in the case that hardware security is not enabled.","title":"Initialization flow"},{"location":"threat-models/secret-store/high_level_design/#unseal-flow","text":"If Vault is found to be in an initialized and sealed state, the Vault master key shards are retrieved from disk. If they are encrypted, they will be decrypted by reversing the process performed during initialization. 
The key shards are then fed back to Vault until the Vault is unsealed and operational.","title":"Unseal flow"},{"location":"threat-models/secret-store/high_level_design/#token-issuing-flow","text":"Figure 7: Token-issuing flow.","title":"Token-issuing flow"},{"location":"threat-models/secret-store/high_level_design/#client-side","text":"Every service that wants to query Vault must link to a secrets module either directly (go-mod-secrets) or indirectly (go-mod-bootstrap) or implement their own Vault interface. The module must take as input a path to a file that contains a Vault access token specific to that service. There is currently no secrets module for the C SDK. Clients must be prepared to handle a number of error conditions while attempting to access the secret store: There may be race conditions between the security service issuing new tokens and the service consuming an old token. The supplied token may be expired (tokens will expire if not renewed periodically) Vault may not be accessible (it is a networked service, after all) The client may be waiting for a secret that has not yet been provisioned into the secret store. Judicious use of retry loops should be sufficient to handle most of the above issues.","title":"Client side"},{"location":"threat-models/secret-store/high_level_design/#server-side","text":"On the server side, the Vault master key will be used to generate a fresh \"root token\". The root token will generate a special \"token-issuing token\" that will generate tokens for the EdgeX microservices. The root token will then be revoked, and a \"token provider\" process with access to the token-issuing token will be launched in the background. EdgeX will provide a single reference implementation for the token provider: * security-file-token-provider: This token provider will consume a list of services that require tokens, along with a set of customizable parameters. 
At startup, the service tokens are created in bulk and delivered via the host file system on a per-service basis. The token-issuing token will be revoked upon termination of the token provider.","title":"Server side"},{"location":"threat-models/secret-store/high_level_design/#token-revocation","text":"Vault tokens are persistent. Although they will automatically expire if they are not renewed, inadvertent disclosure of a token would be difficult to detect. This condition could allow an attacker to maintain an unauthorized connection to Vault indefinitely. Since tokens do expire if not renewed, it is necessary to generate fresh tokens on startup. Therefore, part of the startup process is the revocation of all previously issued Vault tokens, as a mitigation against token disclosure as well as garbage collection of obsolete tokens.","title":"Token revocation"},{"location":"threat-models/secret-store/threat_model/","text":"Threat Model Historical Context This threat model was written in the EdgeX Fuji timeframe. Significant changes have occurred to EdgeX since that time. This document serves as a historical record of motivation for security changes that occurred in the Fuji, Geneva, Hanoi, and Ireland releases of EdgeX. This threat model also covers ONLY THE EDGEX SECRET STORE and not the EdgeX project as a whole. Assumptions The EdgeX Framework is an API-based software framework that strives to be platform and architecture-independent. The threat model considers only the following two deployment scenarios: A containerized implementation based on Docker. A confined implementation based on Snaps. The threat model presented in this document analyzes the secret management subsystem of EdgeX, and has considerations for both of the above runtime environments, both of which implement protections beyond a stock user/process runtime environment. In generic terms, the secret management threat model assumes: Services do not have unfettered access to the host file system. 
Services are protected from each other and communicate only through defined IPC mechanisms. The service location mechanism is trustworthy/non-spoofable. Services do not run with privilege except where noted. There are no unauthorized privileged administrators operating on the device (a privileged administrator can bypass all access controls). The framework may be deployed on a device with inbound and outbound Internet connectivity. This is a pessimistic assumption to introduce an anonymous network adversary. The framework may be deployed on a device with limited physical security. This is a pessimistic assumption to introduce simple hardware attacks such as disk cloning. Any particular implementation of EdgeX should perform its own threat modeling activity as part of securing the implementation, and may use this document to supplement analysis of the secret management subsystem of EdgeX. Recommended Hardening Physical security and hardening of the underlying platform is out-of-scope for implementation by the EdgeX reference code. But since the privileged administrator can bypass all access controls, such hardening is nevertheless recommended: the threat model assumes that there are no unauthorized privileged administrators. One should look to industry standard hardening guides, such as CIS Benchmarks, for hardening the operating system and container runtimes. Additionally, typical EdgeX base platforms are likely to support the following types of hardening out-of-the-box(1), and these should be enabled where possible. Verified/secure boot with a hardware root of trust. This refers to a trust chain that starts at power-on, verifying the system firmware, boot loaders, drivers, and the core components of the operating system. Verified boot helps to ensure that an attacker cannot obtain a privileged administrator role during the boot process. File system integrity (e.g. dm-verity) and/or full disk encryption (e.g. LUKS). 
Verified/secure boot typically does not apply to user-mode processes started after the kernel has booted. File system integrity checking and/or encryption is an easy way to reduce exposure to off-line tampering such as resetting the administrator password or installing a back door. The EdgeX secret store provides hooks for utilizing hardware secure storage to ensure that secrets stored on the device can only be decrypted on that device. Implementations should use hardware security features where a suitable plug-in is available. For maximum benefit, hardware security should be combined with verified/secure boot, file system protection, and other software-level hardening. Lastly, due consideration should be given to the security of the software supply chain: it is important to ensure that code deployed to a device is what is expected and free of known vulnerabilities. This implies an ability to update a device in the field to ensure that it remains free of known vulnerabilities. Footnotes: (1) Most Linux distributions support verified/secure boot. Microsoft Windows enables verified/secure boot by default, and can automatically use TPM hardware if full disk encryption is enabled and will fail to decrypt if verified/secure boot is disabled. Protections afforded by modeled runtime environments The threat model considers Docker-based and Snap-based deployments. Each of these deployment environments offers sandboxing protections that go beyond a standard Unix user and process model. As mentioned earlier, the threat model assumes that the sandboxing protections: Prevent one service from accessing the protected files of the host or another service. Prevent one service from inspecting the protected memory of another service or processes on the host. Restrict interprocess communication (IPC) mechanisms to a defined set. Allow for private scratch spaces, preferably on a RAMdisk. 
In the Linux environment, most of these protections are based on a combination of two technologies: Linux namespaces and mandatory access control (MAC) based on Linux Security Module (LSM). Docker-based runtimes All services running within a single container are assumed to be within the same trust boundary. Docker-based runtimes are expected to provide the following properties: General protections The root user in a container is subject to namespace constraints and a restricted set of capabilities. File system protections Containers by default have no visibility to the host's file system and run with their own root file system that is supplied with the container. The container's file system can be augmented with docker volumes and bind mounts to the host file system to allow specific data sharing scenarios. Containers can be started with tmpfs volumes that are local to that container instance. By default, all files in a container are remapped to an overlay file system stored as files under /var/lib/docker where they are observable on the host and stored persistently. The root file system of a container can be mounted read-only. For writable root file systems, each container gets a fresh copy of the root file system. Content that must be persisted across container restarts must be stored in Docker volumes. Docker volumes can be shared across multiple containers; however, the default \"local\" driver can only do such sharing when the containers are co-located on the same host. Interprocess communication protections Docker containers do not share the host's network interface by default; networking is instead based on virtual ethernet adapters and bridges. Network connectivity is strictly controlled via the docker-compose definition. There are networking differences when running Docker on Windows or MacOS machines, due to the use of a hidden Linux virtual machine to actually run Docker. 
There are few, if any, IPC restrictions between processes running in the same container due to the lack of mandatory access controls. Each service must run in its own container to ensure maximum service isolation. Snap-based runtimes All services running within a single snap are assumed to be within the same trust boundary. However, even in a snap, due to the use of mandatory access control, there are stronger-than-normal process isolation policies in place, as documented below. General protections The root user in a snap is subject to namespace constraints and MAC rules enforced by Linux Security Modules (LSMs) configured as part of the snap. File system protections Snaps run inside their own mount namespace, which is a confined view of the host's file system where access to most paths is restricted. This includes sysfs and procfs. Note: File system paths inside of the snap are homomorphic with the host's view of the file system - any files written in the snap are visible on the host. All of the files in the snap are read-only with the exception of the below noted paths. The contents of the snap itself are mounted read-only from a squashfs file system. Snaps can write small temporary files to a tmpfs pointed to by $XDG_RUNTIME_DIR, which is a user-private, user-writable directory that is also per-snap. Snaps can write persistent data local to the snap to the $SNAP_DATA folder. Snaps do not have the CAP_SYS_ADMIN capability needed to use mount(2). Content interface snaps can be used to allow one snap to share code or data with another snap. Interprocess communication protections Snaps can send signals only to processes running inside of the snap. Snaps share the host's network interface rather than having a virtual network interface card. Snaps may have multiple processes running in them and they are allowed to communicate with each other. Snaps may connect to IP sockets opened by processes running outside of the snap. 
Snaps are not allowed to access /proc/mem or to ptrace(2) other processes. High-level Security Objectives Security Objectives The security objectives call out the security goals of the architecture/design. They are: Ensure confidentiality, integrity, and availability of application secrets. Reduce plain text exposure of sensitive data. Design-in hooks for hardware secure storage. Assets Primary Assets Primary assets are the assets at the level of the conceptual data model of the system and primarily represent \"real-world\" things. AssetId Name Description Attack Points P-1 Application secrets The things we are trying to protect In use, in transit, in storage Secondary Assets Secondary assets are assets used to support or protect the primary assets and are usually implementation details versus being part of the conceptual data model. AssetId Name Description Attack Points S-1 Vault service token Vault service tokens are issued per-service and used by services to authenticate to vault and retrieve per-service application secrets. In-flight via API, at rest S-3 Vault token-issuing-token Used by the token issuing service to create vault service tokens for other services. (Called out separately from S-1 due to its high privilege.) In-flight via API, at rest S-4 Vault root token A special token created at Vault initialization time that has all capabilities and never expires. In-flight via API, at rest S-5 Vault master key A root secret that encrypts all of Vault's other secrets. In-flight via API, at rest, in-use. S-6 Vault data store A data store encrypted with the Vault master key that contains the contents of the vault. In storage S-7 Consul data store Back-end storage engine for vault data store. In storage S-8 CA key Private keys for on-device PKI certificate authority. In use, in transit, in storage S-9 Issuing CA key Private keys for on-device PKI issuing authorities. 
In use, in transit, in storage S-10 Leaf TLS key Private keys for TLS server authentication for on-device services (e.g. Vault service, Consul service) In use, in transit, in storage S-13 IKM Initial keying material as input to HMAC KDF In use, in transit, in storage Note that asset S-9 (issuing CA key) is not currently implemented: in all current EdgeX releases, all TLS leaf certificates are derived from the root CA. Attack Surfaces This table lists components in the system architecture that have assets of potential value to an attacker and how a potential attacker may attempt to gain access to those components. System Element Compromise Type Assets Exposed Attack Method Consul API IA Vault data store, service location data/registry, settings Data modification, DoS against API Vault API CIA All application secrets, all vault tokens Data channel snooping or data modification, DoS against API Host file system CIA PKI private keys, Vault tokens, Vault master key, Vault store, Consul store Snooping or data modification, deletion of critical files PKI initialization agent CI Private keys for on-device PKI Snooping generation of assets or forcing predictable PKI Vault initialization agent CI Vault master key, Vault root token, token-issuing-token, encryption key for Vault master key Snooping generation of assets or tampering with assets Token server API CIA Token issuing token, service tokens Data channel snooping, tampering with asset policies, or forcing service down Process memory CIA Most assets excluding hardware and storage media Read or modify process memory through /proc or related IPC mechanisms Adversaries The adversary model is use-case specific, but for the sake of discussion assume the following simplistic list: Persona Motivation Starting Access Skill / Effort Thief (Larceny) Quick cash by reselling stolen components. None Low Remote hacker Financial gain by harvesting resellable information or performing ransomware attacks via exploitable vulnerabilities. 
Network Medium Malicious administrator Out of scope. Cannot defend against attacks originating at the level of system software. N/A N/A Malicious non-privileged service Escalation of privilege and data exfiltration. Malicious services include software supply chain attackers. User mode access Medium Industrial espionage / Malicious developer Financial gain or harm by obtaining access to back-end systems and/or competitive data. Unknown High The malicious administrator is out of scope: the threat model assumes that there are no unauthorized privileged administrators on the device. This must be ensured through hardening of the underlying platform, which is out of scope. Malicious non-privileged services are a concern. This can occur through a wide variety of software supply chain attacks, as well as implementation bugs that permit a service to exhibit unintended functionality. The industrial espionage or malicious developer adversary deserves some explanation. Whereas the remote hacker adversary is primarily motivated by a one-time attack, the industrial espionage attacker seeks to maintain a persistent foothold or to insert back-doors into an entire fleet of devices. Making each device unique (e.g. device-unique secrets) helps to mitigate against break-once-run-everywhere (BORE) attacks. Threat Matrix The threat matrix indicates what assets are at risk for the various attack surfaces in the system. Consul API Vault API Host FS PKI agent Vault agent Token svc /proc /mem Application secrets *a *p Vault service token *bd *b *bd *p Token-issuing-token *e *e *e *e *p Vault root token *f *f *f *p Vault master key *g *g *g *p Vault DS *hi Consul DS *j *j PKI CA *m *k *p PKI intermediate *m *l *p PKI leaf *m *m *p IKM *q *p Threats and Mitigations Format: (identifier) Threat name Mitigation 1 Mitigation 2 et cetera (a1) Loss of confidentiality of application secrets in-flight by MITM attack against the Vault API. 
DNS name resolution is assumed trustworthy (hard-coded localhost, or Docker-supplied DNS). Vault API is protected by TLS verified against a CA certificate. Vault TLS private key is protected by host file system (SECRETSLOC). Unmitigated: Service location information is trustworthy. (a2) Loss of confidentiality of application secrets by querying Vault API. Vault API is protected by TLS verified against a CA certificate. Application secrets are protected by Vault service token. Each service has a unique token with restricted visibility. (b1) Loss of confidentiality of Vault service token in-flight by MITM attack against the Vault API. Vault service token is protected by host file system (SECRETSLOC). Vault service token has limited lifespan and must be periodically renewed. Vault API is protected by TLS verified against a CA certificate. (b2) Loss of confidentiality of Vault service token in-flight by MITM attack against the token provider. The file-based token provider does not expose an API. The file-based token provider configuration information comes from a trusted source (configuration file bundled with the service). (b3) Loss of confidentiality of Vault service token at-rest by file system inspection/monitoring. Container/Snap protections prevent services from reading other services' tokens off of disk. Revoke previously generated tokens on every reboot. (d1) Loss of availability of Vault service token via intentional Vault service crash. Service tokens are created as persistent orphans (survive Vault restarts). Services needing long-lived Vault access can renew their own token. Unmitigated: Automatic restart and re-unsealing of Vault daemon. (d2) Loss of availability of Vault service token via intentional token provider crash. File-based token provider is a one-shot service. (e1) Loss of confidentiality of token-issuing-token in-flight by MITM attack against the Vault API. See mitigations for threat (b1) above. 
(e2) Loss of confidentiality of token-issuing-token at-rest by file system inspection/monitoring. Container/Snap provided file system protections. Token-issuing token is stored in a private tmpfs area in execution environments that support it. Token-issuing token is passed via a private channel inside the security service. Token-issuing token for file-based token provider is revoked after use. (e3) Loss of availability of token-issuing token via intentional service crash. Not applicable: file-based token provider is a single-shot process. (f1) Loss of confidentiality of Vault root token in-flight by MITM attack against the Vault API. See mitigations for threat (a1) above. (f2) Loss of confidentiality of Vault root token by other means. The root token is never persisted to disk and revoked immediately after performing necessary setup during vault initialization (the root token can be regenerated on-demand with the master key). (g1) Loss of confidentiality of Vault master key in-flight by MITM attack against the Vault API. See mitigations for threat (a1) above. (g2) Loss of confidentiality of Vault master key at-rest by file system inspection/monitoring. Container/Snap provided file system protections. Vault master key is encrypted with AES-256-GCM using a HMAC-KDF derived-key with KDF input coming from a configurable source. Threat model recommends use of hardware secure storage for the input key material. (g3) Loss of availability of Vault master key by malicious deletion. Container/Snap provided file system protections. Hardware-based solutions are out of scope for the reference design, but may offer additional protections. (h) Loss of confidentiality of Vault data store at-rest by file system inspection/monitoring. Vault data store is encrypted using the Vault master key before being stored. (i) Loss of availability of Vault data store due to intentional service crash of Consul. Vault data store is implemented on top of Consul, which is a fault-tolerant-capable data store. 
In Docker-based environments, Consul can be configured to automatically restart on failure. (j1) Loss of confidentiality of Consul data store at-rest by file system inspection/monitoring. Consul data store is assumed to be non-confidential and thus there is no threat. Vault data is encrypted prior to being passed to Consul for storage. (j2) Loss of integrity or availability of Consul data store at-rest by file system tampering or malicious deletion. Container/Snap provided file system protections. (j3) Loss of availability of Consul data store at runtime due to intentional service crash. In Docker-based environments, Consul can be configured to automatically restart on failure. Threat may be further mitigated by running Consul in High Availability mode (not done in reference implementation). (k1) Loss of confidentiality of PKI CA at-rest by file system inspection/monitoring. Container/Snap provided file system protections. Secure deletion of CA private key after PKI generation. (k2) Loss of integrity of PKI CA by malicious replacement. Container/Snap provided file system protections. (k3) Loss of availability of PKI CA (public certificate) by malicious deletion. Container/Snap provided file system protections. (l1) Loss of confidentiality of PKI intermediate at-rest by file system inspection/monitoring. Container/Snap provided file system protections. Secure deletion of CA intermediate private key after PKI generation. (l2) Loss of integrity of PKI intermediate by malicious replacement. Identical to threat (k2): CA would have to be maliciously replaced as well. (l3) Loss of availability of PKI intermediate (public certificate) by malicious deletion. Container/Snap provided file system protections. (m1) Loss of confidentiality of PKI leaf at-rest by file system inspection/monitoring. Container/Snap provided file system protections. Note that server TLS private keys must be delivered to services unencrypted due to limitations of dependent services. 
(m2) Loss of integrity of PKI leaf by malicious replacement. Identical to threat (k2/l2): CA or intermediate would have to be maliciously replaced as well. (m3) Loss of availability of PKI leaf by malicious deletion. Container/Snap provided file system protections. (p) Disclosure, tampering, or deletion of secrets through /proc/mem or ptrace() by malicious or compromised microservice. Container/Snap provided memory protections. (q) Loss of confidentiality of input key material (IKM). IKM is secured by vendor-defined hardware mechanism. IKM is passed to the key derivation function via IPC pipe (stdout).","title":"Threat Model"},{"location":"threat-models/secret-store/threat_model/#threat-model","text":"","title":"Threat Model"},{"location":"threat-models/secret-store/threat_model/#historical-context","text":"This threat model was written in the EdgeX Fuji timeframe. Significant changes have occurred to EdgeX since that time. This document serves as a historical record of motivation for security changes that occurred in the Fuji, Geneva, Hanoi, and Ireland releases of EdgeX. This threat model also covers ONLY THE EDGEX SECRET STORE and not the EdgeX project as a whole.","title":"Historical Context"},{"location":"threat-models/secret-store/threat_model/#assumptions","text":"The EdgeX Framework is an API-based software framework that strives to be platform and architecture-independent. The threat model considers only the following two deployment scenarios: A containerized implementation based on Docker. A confined implementation based on Snaps. The threat model presented in this document analyzes the secret management subsystem of EdgeX, and has considerations for both of the above runtime environments, both of which implement protections beyond a stock user/process runtime environment. In generic terms, the secret management threat model assumes: Services do not have unfettered access to the host file system. 
Services are protected from each other and communicate only through defined IPC mechanisms. The service location mechanism is trustworthy/non-spoofable. Services do not run with privilege except where noted. There are no unauthorized privileged administrators operating on the device (a privileged administrator can bypass all access controls). The framework may be deployed on a device with inbound and outbound Internet connectivity. This is a pessimistic assumption to introduce an anonymous network adversary. The framework may be deployed on a device with limited physical security. This is a pessimistic assumption to introduce simple hardware attacks such as disk cloning. Any particular implementation of EdgeX should perform its own threat modeling activity as part of securing the implementation, and may use this document to supplement analysis of the secret management subsystem of EdgeX.","title":"Assumptions"},{"location":"threat-models/secret-store/threat_model/#recommended-hardening","text":"Physical security and hardening of the underlying platform is out-of-scope for implementation by the EdgeX reference code. But since the privileged administrator can bypass all access controls, such hardening is nevertheless recommended: the threat model assumes that there are no unauthorized privileged administrators. One should look to industry standard hardening guides, such as CIS Benchmarks, for hardening the operating system and container runtimes. Additionally, typical EdgeX base platforms are likely to support the following types of hardening out-of-the-box(1), and these should be enabled where possible. Verified/secure boot with a hardware root of trust. This refers to a trust chain that starts at power-on, verifying the system firmware, boot loaders, drivers, and the core components of the operating system. Verified boot helps to ensure that an attacker cannot obtain a privileged administrator role during the boot process. File system integrity (e.g. 
dm-verity) and/or full disk encryption (e.g. LUKS). Verified/secure boot typically does not apply to user-mode processes started after the kernel has booted. File system integrity checking and/or encryption is an easy way to reduce exposure to off-line tampering such as resetting the administrator password or installing a back door. The EdgeX secret store provides hooks for utilizing hardware secure storage to ensure that secrets stored on the device can only be decrypted on that device. Implementations should use hardware security features where a suitable plug-in is available. For maximum benefit, hardware security should be combined with verified/secure boot, file system protection, and other software-level hardening. Lastly, due consideration should be given to the security of the software supply chain: it is important to ensure that code deployed to a device is what is expected and free of known vulnerabilities. This implies an ability to update a device in the field to ensure that it remains free of known vulnerabilities. Footnotes: (1) Most Linux distributions support verified/secure boot. Microsoft Windows enables verified/secure boot by default, and can automatically use TPM hardware if full disk encryption is enabled and will fail to decrypt if verified/secure boot is disabled.","title":"Recommended Hardening"},{"location":"threat-models/secret-store/threat_model/#protections-afforded-by-modeled-runtime-environments","text":"The threat model considers Docker-based and Snap-based deployments. Each of these deployment environments offers sandboxing protections that go beyond a standard Unix user and process model. As mentioned earlier, the threat model assumes that the sandboxing protections: Prevent one service from accessing the protected files of the host or another service. Prevent one service from inspecting the protected memory of another service or processes on the host. Restrict interprocess communication (IPC) mechanisms to a defined set. 
Allow for private scratch spaces, preferably on a RAMdisk. In the Linux environment, most of these protections are based on a combination of two technologies: Linux namespaces and mandatory access control (MAC) based on Linux Security Module (LSM).","title":"Protections afforded by modeled runtime environments"},{"location":"threat-models/secret-store/threat_model/#docker-based-runtimes","text":"All services running within a single container are assumed to be within the same trust boundary. Docker-based runtimes are expected to provide the following properties:","title":"Docker-based runtimes"},{"location":"threat-models/secret-store/threat_model/#general-protections","text":"The root user in a container is subject to namespace constraints and a restricted set of capabilities.","title":"General protections"},{"location":"threat-models/secret-store/threat_model/#file-system-protections","text":"Containers by default have no visibility to the host's file system and run with their own root file system that is supplied with the container. The container's file system can be augmented with docker volumes and bind mounts to the host file system to allow specific data sharing scenarios. Containers can be started with tmpfs volumes that are local to that container instance. By default, all files in a container are remapped to an overlay file system stored as files under /var/lib/docker where they are observable on the host and stored persistently. The root file system of a container can be mounted read-only. For writable root file systems, each container gets a fresh copy of the root file system. Content that must be persisted across container restarts must be stored in Docker volumes. 
Docker volumes can be shared across multiple containers; however, the default \"local\" driver can only do such sharing when the containers are co-located on the same host.","title":"File system protections"},{"location":"threat-models/secret-store/threat_model/#interprocess-communication-protections","text":"Docker containers do not share the host's network interface by default; networking is instead based on virtual ethernet adapters and bridges. Network connectivity is strictly controlled via the docker-compose definition. There are networking differences when running Docker on Windows or MacOS machines, due to the use of a hidden Linux virtual machine to actually run Docker. There are few, if any, IPC restrictions between processes running in the same container due to the lack of mandatory access controls. Each service must run in its own container to ensure maximum service isolation.","title":"Interprocess communication protections"},{"location":"threat-models/secret-store/threat_model/#snap-based-runtimes","text":"All services running within a single snap are assumed to be within the same trust boundary. However, even in a snap, due to the use of mandatory access control, there are stronger-than-normal process isolation policies in place, as documented below.","title":"Snap-based runtimes"},{"location":"threat-models/secret-store/threat_model/#general-protections_1","text":"The root user in a snap is subject to namespace constraints and MAC rules enforced by Linux Security Modules (LSMs) configured as part of the snap.","title":"General protections"},{"location":"threat-models/secret-store/threat_model/#file-system-protections_1","text":"Snaps run inside their own mount namespace, which is a confined view of the host's file system where access to most paths is restricted. This includes sysfs and procfs. Note: File system paths inside of the snap are homomorphic with the host's view of the file system - any files written in the snap are visible on the host. 
All of the files in the snap are read-only with the exception of the below noted paths. The contents of the snap itself are mounted read-only from a squashfs file system. Snaps can write small temporary files to a tmpfs pointed to by $XDG_RUNTIME_DIR, which is a user-private, user-writable directory that is also per-snap. Snaps can write persistent data local to the snap to the $SNAP_DATA folder. Snaps do not have the CAP_SYS_ADMIN capability needed to use mount(2). Content interface snaps can be used to allow one snap to share code or data with another snap.","title":"File system protections"},{"location":"threat-models/secret-store/threat_model/#interprocess-communication-protections_1","text":"Snaps can send signals only to processes running inside of the snap. Snaps share the host's network interface rather than having a virtual network interface card. Snaps may have multiple processes running in them and they are allowed to communicate with each other. Snaps may connect to IP sockets opened by processes running outside of the snap. Snaps are not allowed to access /proc/mem or to ptrace(2) other processes.","title":"Interprocess communication protections"},{"location":"threat-models/secret-store/threat_model/#high-level-security-objectives","text":"","title":"High-level Security Objectives"},{"location":"threat-models/secret-store/threat_model/#security-objectives","text":"The security objectives call out the security goals of the architecture/design. They are: Ensure confidentiality, integrity, and availability of application secrets. Reduce plain text exposure of sensitive data. Design-in hooks for hardware secure storage.","title":"Security Objectives"},{"location":"threat-models/secret-store/threat_model/#assets","text":"","title":"Assets"},{"location":"threat-models/secret-store/threat_model/#primary-assets","text":"Primary assets are the assets at the level of the conceptual data model of the system and primarily represent \"real-world\" things. 
AssetId Name Description Attack Points P-1 Application secrets The things we are trying to protect In use, in transit, in storage","title":"Primary Assets"},{"location":"threat-models/secret-store/threat_model/#secondary-assets","text":"Secondary assets are assets used to support or protect the primary assets and are usually implementation details rather than part of the conceptual data model. AssetId Name Description Attack Points S-1 Vault service token Vault service tokens are issued per-service and used by services to authenticate to vault and retrieve per-service application secrets. In-flight via API, at rest S-3 Vault token-issuing-token Used by the token issuing service to create vault service tokens for other services. (Called out separately from S-1 due to its high privilege.) In-flight via API, at rest S-4 Vault root token A special token created at Vault initialization time that has all capabilities and never expires. In-flight via API, at rest S-5 Vault master key A root secret that encrypts all of Vault's other secrets. In-flight via API, at rest, in-use. S-6 Vault data store A data store encrypted with the Vault master key that contains the contents of the vault. In storage S-7 Consul data store Back-end storage engine for vault data store. In storage S-8 CA key Private keys for on-device PKI certificate authority. In use, in transit, in storage S-9 Issuing CA key Private keys for on-device PKI issuing authorities. In use, in transit, in storage S-10 Leaf TLS key Private keys for TLS server authentication for on-device services (e.g. 
Vault service, Consul service) In use, in transit, in storage S-13 IKM Initial keying material as input to HMAC KDF In use, in transit, in storage Note that asset S-9 (issuing CA key) is not currently implemented: in all current EdgeX releases all TLS leaf certificates are derived from the root CA.","title":"Secondary Assets"},{"location":"threat-models/secret-store/threat_model/#attack-surfaces","text":"This table lists components in the system architecture that have assets of potential value to an attacker and how a potential attacker may attempt to gain access to those components. System Element Compromise Type Assets Exposed Attack Method Consul API IA Vault data store, service location data/registry, settings Data modification, DoS against API Vault API CIA All application secrets, all vault tokens Data channel snooping or data modification, DoS against API Host file system CIA PKI private keys, Vault tokens, Vault master key, Vault store, Consul store Snooping or data modification, deletion of critical files PKI initialization agent CI Private keys for on-device PKI Snooping generation of assets or forcing predictable PKI Vault initialization agent CI Vault master key, Vault root token, token-issuing-token, encryption key for Vault master key Snooping generation of assets or tampering with assets Token server API CIA Token issuing token, service tokens Data channel snooping, tampering with asset policies, or forcing service down Process memory CIA Most assets excluding hardware and storage media Read or modify process memory through /proc or related IPC mechanisms","title":"Attack Surfaces"},{"location":"threat-models/secret-store/threat_model/#adversaries","text":"The adversary model is use-case specific, but for the sake of discussion assume the following simplistic list: Persona Motivation Starting Access Skill / Effort Thief (Larceny) Quick cash by reselling stolen components. 
None Low Remote hacker Financial gain by harvesting resellable information or performing ransomware attacks via exploitable vulnerabilities. Network Medium Malicious administrator Out of scope. Cannot defend against attacks originating at the level of system software. N/A N/A Malicious non-privileged service Escalation of privilege and data exfiltration. Malicious services include software supply chain attackers. User mode access Medium Industrial espionage / Malicious developer Financial gain or harm by obtaining access to back-end systems and/or competitive data. Unknown High The malicious administrator is out of scope: the threat model assumes that there are no unauthorized privileged administrators on the device. This must be ensured through hardening of the underlying platform, which is out of scope. Malicious non-privileged services are a concern. This can occur through a wide variety of software supply chain attacks, as well as implementation bugs that permit a service to exhibit unintended functionality. The industrial espionage or malicious developer adversary deserves some explanation. Whereas the remote hacker adversary is primarily motivated by a one-time attack, the industrial espionage attacker seeks to maintain a persistent foothold or to insert back-doors into an entire fleet of devices. Making each device unique (e.g. device-unique secrets) helps to mitigate against break-once-run-everywhere (BORE) attacks.","title":"Adversaries"},{"location":"threat-models/secret-store/threat_model/#threat-matrix","text":"The threat matrix indicates what assets are at risk for the various attack surfaces in the system. 
Consul API Vault API Host FS PKI agent Vault agent Token svc /proc /mem Application secrets *a *p Vault service token *bd *b *bd *p Token-issuing-token *e *e *e *e *p Vault root token *f *f *f *p Vault master key *g *g *g *p Vault DS *hi Consul DS *j *j PKI CA *m *k *p PKI intermediate *m *l *p PKI leaf *m *m *p IKM *q *p","title":"Threat Matrix"},{"location":"threat-models/secret-store/threat_model/#threats-and-mitigations","text":"Format: (identifier) Threat name Mitigation 1 Mitigation 2 et cetera","title":"Threats and Mitigations"},{"location":"threat-models/secret-store/threat_model/#a1-loss-of-confidentiality-of-application-secrets-in-flight-by-mitm-attack-against-the-vault-api","text":"DNS name resolution is assumed trustworthy (hard-coded localhost, or Docker-supplied DNS). Vault API is protected by TLS verified against a CA certificate. Vault TLS private key is protected by host file system ( SECRETSLOC ). Unmitigated: Service location information is trustworthy.","title":"(a1) Loss of confidentiality of application secrets in-flight by MITM attack against the Vault API."},{"location":"threat-models/secret-store/threat_model/#a2-loss-of-confidentiality-of-application-secrets-by-querying-vault-api","text":"Vault API is protected by TLS verified against a CA certificate. Application secrets are protected by Vault service token. Each service has a unique token with restricted visibility.","title":"(a2) Loss of confidentiality of application secrets by querying Vault API."},{"location":"threat-models/secret-store/threat_model/#b1-loss-of-confidentiality-of-vault-service-token-in-flight-by-mitm-attack-against-the-vault-api","text":"Vault service token is protected by host file system ( SECRETSLOC ). Vault service token has limited lifespan and must be periodically renewed. 
Vault API is protected by TLS verified against a CA certificate.","title":"(b1) Loss of confidentiality of Vault service token in-flight by MITM attack against the Vault API."},{"location":"threat-models/secret-store/threat_model/#b2-loss-of-confidentiality-of-vault-service-token-in-flight-by-mitm-attack-against-the-token-provider","text":"The file-based token provider does not expose an API. The file-based token provider configuration information comes from a trusted source (configuration file bundled with the service).","title":"(b2) Loss of confidentiality of Vault service token in-flight by MITM attack against the token provider."},{"location":"threat-models/secret-store/threat_model/#b3-loss-of-confidentiality-of-vault-service-token-at-rest-by-file-system-inspectionmonitoring","text":"Container/Snap protections prevent services from reading other services' tokens off of disk. Revoke previously generated tokens on every reboot.","title":"(b3) Loss of confidentiality of Vault service token at-rest by file system inspection/monitoring."},{"location":"threat-models/secret-store/threat_model/#d1-loss-of-availability-of-vault-service-token-token-via-intentional-vault-service-crash","text":"Service tokens are created as persistent orphans (survive Vault restarts). Services needing long-lived Vault access can renew their own token. 
Unmitigated: Automatic restart and re-unsealing of Vault daemon.","title":"(d1) Loss of availability of Vault service token via intentional Vault service crash."},{"location":"threat-models/secret-store/threat_model/#d2-loss-of-availability-of-vault-service-token-token-via-intentional-token-provider-crash","text":"File-based token provider is a one-shot service.","title":"(d2) Loss of availability of Vault service token via intentional token provider crash."},{"location":"threat-models/secret-store/threat_model/#e1-loss-of-confidentiality-of-token-issuing-token-in-flight-by-mitm-attack-against-the-vault-api","text":"See mitigations for threat (b1) above.","title":"(e1) Loss of confidentiality of token-issuing-token in-flight by MITM attack against the Vault API."},{"location":"threat-models/secret-store/threat_model/#e2-loss-of-confidentiality-of-token-issuing-token-at-rest-by-file-system-inspectionmonitoring","text":"Container/Snap provided file system protections. Token-issuing token is stored in a private tmpfs area in execution environments that support it. Token-issuing token is passed via private channel inside of security service. 
Token-issuing token for file-based token provider is revoked after use.","title":"(e2) Loss of confidentiality of token-issuing-token at-rest by file system inspection/monitoring."},{"location":"threat-models/secret-store/threat_model/#e3-loss-of-availability-of-token-issuing-token-via-intentional-service-crash","text":"Not applicable: file-based token provider is a single-shot process","title":"(e3) Loss of availability of token-issuing token via intentional service crash."},{"location":"threat-models/secret-store/threat_model/#f1-loss-of-confidentiality-of-vault-root-token-in-flight-by-mitm-attack-against-the-vault-api","text":"See mitigations for threat (a1) above.","title":"(f1) Loss of confidentiality of Vault root token in-flight by MITM attack against the Vault API."},{"location":"threat-models/secret-store/threat_model/#f2-loss-of-confidentiality-of-vault-root-token-by-other-means","text":"The root token is never persisted to disk and revoked immediately after performing necessary setup during vault initialization (the root token can be regenerated on-demand with the master key).","title":"(f2) Loss of confidentiality of Vault root token by other means."},{"location":"threat-models/secret-store/threat_model/#g1-loss-of-confidentiality-of-vault-master-key-in-flight-by-mitm-attack-against-the-vault-api","text":"See mitigations for threat (a1) above.","title":"(g1) Loss of confidentiality of Vault master key in-flight by MITM attack against the Vault API."},{"location":"threat-models/secret-store/threat_model/#g2-loss-of-confidentiality-of-vault-master-key-at-rest-by-file-system-inspectionmonitoring","text":"Container/Snap provided file system protections. Vault master key is encrypted with AES-256-GCM using a HMAC-KDF derived-key with KDF input coming from a configurable source. 
Threat model recommends use of hardware secure storage for the input key material.","title":"(g2) Loss of confidentiality of Vault master key at-rest by file system inspection/monitoring."},{"location":"threat-models/secret-store/threat_model/#g3-loss-of-availability-of-vault-master-key-by-malicious-deletion","text":"Container/Snap provided file system protections. Hardware-based solutions are out of scope for the reference design, but may offer additional protections.","title":"(g3) Loss of availability of Vault master key by malicious deletion."},{"location":"threat-models/secret-store/threat_model/#h-lost-of-confidentiality-of-vault-data-store-at-rest-by-file-system-inspectionmonitoring","text":"Vault data store is encrypted using Vault master key before being stored.","title":"(h) Loss of confidentiality of Vault data store at-rest by file system inspection/monitoring."},{"location":"threat-models/secret-store/threat_model/#i-lost-of-availability-of-vault-data-store-due-to-intentional-service-crash-of-consul","text":"Vault data store is implemented on top of Consul, which is a fault-tolerant-capable data store. In Docker-based environments, Consul can be configured to automatically restart on failure.","title":"(i) Loss of availability of Vault data store due to intentional service crash of Consul."},{"location":"threat-models/secret-store/threat_model/#j1-loss-of-confidentiality-of-consul-data-store-at-rest-by-file-system-inspectionmonitoring","text":"Consul data store is assumed to be non-confidential and thus there is no threat. 
Vault data is encrypted prior to being passed to Consul for storage.","title":"(j1) Loss of confidentiality of Consul data store at-rest by file system inspection/monitoring."},{"location":"threat-models/secret-store/threat_model/#j2-loss-of-integrity-or-availability-of-consul-data-store-at-rest-by-file-system-tampering-or-malicious-deletion","text":"Container/Snap provided file system protections.","title":"(j2) Loss of integrity or availability of Consul data store at-rest by file system tampering or malicious deletion."},{"location":"threat-models/secret-store/threat_model/#j3-loss-of-availability-of-consul-data-store-at-runtime-due-to-intentional-service-crash","text":"In Docker-based environments, Consul can be configured to automatically restart on failure. Threat may be further mitigated by running Consul in High Availability mode (not done in reference implementation).","title":"(j3) Loss of availability of Consul data store at runtime due to intentional service crash."},{"location":"threat-models/secret-store/threat_model/#k1-loss-of-confidentiality-of-pki-ca-at-rest-by-file-system-inspectionmonitoring","text":"Container/Snap provided file system protections. 
Secure deletion of CA private key after PKI generation.","title":"(k1) Loss of confidentiality of PKI CA at-rest by file system inspection/monitoring."},{"location":"threat-models/secret-store/threat_model/#k2-loss-of-integrity-of-pki-ca-by-malicious-replacement","text":"Container/Snap provided file system protections.","title":"(k2) Loss of integrity of PKI CA by malicious replacement."},{"location":"threat-models/secret-store/threat_model/#k3-loss-of-availability-of-pki-ca-public-certificate-by-malicious-deletion","text":"Container/Snap provided file system protections.","title":"(k3) Loss of availability of PKI CA (public certificate) by malicious deletion."},{"location":"threat-models/secret-store/threat_model/#l1-loss-of-confidentiality-of-pki-intermediate-at-rest-by-file-system-inspectionmonitoring","text":"Container/Snap provided file system protections. Secure deletion of CA intermediate private key after PKI generation.","title":"(l1) Loss of confidentiality of PKI intermediate at-rest by file system inspection/monitoring."},{"location":"threat-models/secret-store/threat_model/#l2-loss-of-integrity-of-pki-intermediate-by-malicious-replacement","text":"Identical to threat (k3): CA would have to be maliciously replaced as well.","title":"(l2) Loss of integrity of PKI intermediate by malicious replacement."},{"location":"threat-models/secret-store/threat_model/#l3-loss-of-availability-of-pki-intermediate-public-certificate-by-malicious-deletion","text":"Container/Snap provided file system protections.","title":"(l3) Loss of availability of PKI intermediate (public certificate) by malicious deletion."},{"location":"threat-models/secret-store/threat_model/#m1-loss-of-confidentiality-of-pki-leaf-at-rest-by-file-system-inspectionmonitoring","text":"Container/Snap provided file system protections. 
Note that server TLS private keys must be delivered to services unencrypted due to limitations of dependent services.","title":"(m1) Loss of confidentiality of PKI leaf at-rest by file system inspection/monitoring."},{"location":"threat-models/secret-store/threat_model/#m2-loss-of-integrity-of-pki-leaf-by-malicious-replacement","text":"Identical to threat (k3/l3): CA or intermediate would have to be maliciously replaced as well.","title":"(m2) Loss of integrity of PKI leaf by malicious replacement."},{"location":"threat-models/secret-store/threat_model/#m3-loss-of-availability-of-pki-leaf-by-malicious-deletion","text":"Container/Snap provided file system protections.","title":"(m3) Loss of availability of PKI leaf by malicious deletion."},{"location":"threat-models/secret-store/threat_model/#p-disclosure-tampering-or-deletion-of-secrets-through-procmem-or-ptrace-by-malicous-or-compromised-microservice","text":"Container/Snap provided memory protections.","title":"(p) Disclosure, tampering, or deletion of secrets through /proc/mem or ptrace() by malicious or compromised microservice"},{"location":"threat-models/secret-store/threat_model/#q-lost-of-confidentiality-of-input-key-material-ikm","text":"IKM is secured by vendor-defined hardware-mechanism. IKM is passed to key derivation function via IPC pipe (stdout).","title":"(q) Loss of confidentiality of input key material (IKM)"},{"location":"threat-models/secret-store/vault_master_key_encryption/","text":"Vault Master Key Encryption Feature Introduction The EdgeX secret store threat model calls out a particular aspect of the Vault-based secret store architecture upon which the whole EdgeX secret store depends: the Vault master key. Because plaintext storage of the Vault master key at rest would be a known security weakness , the high level design calls for the Vault master key to be encrypted on storage. One way of doing this would be to simply encrypt the whole drive upon which the Vault master key is stored. 
This is a good solution: it would encrypt not only the Vault master key, but also other parts of the system to harden them against offline tampering and information disclosure risks. This solution also has drawbacks: whole volume encryption may slow down boot times and have a runtime performance impact on constrained devices without hardware-accelerated crypto. The Vault Master Key Encryption feature of EdgeX enables a system designer to specifically target encryption of the Vault master key, and enables a variety of flexible use cases that are not tied to volume encryption such as key escrow (where a key is stored on another machine on the network), smart cards or USB HSMs (where a key is stored in a dongle or chip card), or TPM (security hardware found on many PC-class motherboards). Internal design As stated in the high level design, an RFC-5869 key derivation function (KDF) is used to produce a set of wrapping keys that are used by the vault-worker process to encrypt the Vault master key. An RFC-5869 KDF requires three inputs. A change to any input results in a different output key: Input keying material (IKM). It need not be (but should be) cryptographically strong, and is the \"secret\" part of the KDF. A salt. A non-secret random number that adds to the strength of the KDF. An \"info\" argument. The info argument allows multiple keys to be generated from the same IKM and salt. This allows the same KDF to generate multiple keys each used for a different purpose. For instance, the same KDF can be used to generate an encryption key to protect the PKI at-rest. The Vault Master Key Encryption feature consumes the IKM from a Unix-style pipe. The IKM is provided by a vendor-defined mechanism, and is intended to be tied into security hardware on the device, be device-unique, and explicitly not stored in the file system. 
To further strengthen the solution, an implementation could choose to engineer a solution whereby the IKM is only released a configurable number of times per boot, so that malware that runs on the system post-boot cannot retrieve it. IKM HOOK The Vault Master Key Encryption feature is embedded into the EdgeX security-secretstore-setup utility. It is enabled by setting an environment variable, IKM_HOOK , containing the path to an executable that implements the IKM interface, described below, when the security-secretstore-setup executable is run in early boot to initialize or unseal the EdgeX secret store. When this feature is enabled, the Vault master key is encrypted at rest, and cannot be recovered unless the same IKM is provided as when the secretstore was initialized. IKM interface NAME ikm - Return input key material for a hash-based KDF. SYNOPSIS ikm DESCRIPTION ikm outputs initial keying material to stdout as a lowercase hex string to be used for the default EdgeX software implementation of an RFC-5869 KDF. The ikm can output any number of octets. Typically, the KDF will pad the ikm if it is shorter than hashlen, and hash the ikm if it is longer than hashlen. Thus, if ikm returns variable-length output it is advantageous to ensure that the output is always greater than hashlen, where hashlen depends on the hash function used by the KDF. EXAMPLE ikm 64acd82883269a5e46b8b0426d5a18e2b006f7d79041a68a4efa5339f25aba80 Sample implementations This section lists example implementations of the EdgeX Hardware Security Hook. Tutorial: Configuring EdgeX Hardware Security Hooks to use a TPM on Intel\u00ae Developer Zone There is a tutorial published on Intel\u00ae Developer Zone that uses TPM hardware through a device driver interface to encrypt the Vault master key shares. The sample uses TPM-based local attestation to attest the system state prior to releasing the IKM. 
The sample is based on the tpm2-software project in GitHub and is specifically designed to run as a statically-linked executable that could be injected into a Docker container. Although not a complete solution, it is an illustrative sample that demonstrates in concrete terms how to use the TSS C API to access TPM functionality.","title":"Vault Master Key Encryption Feature"},{"location":"threat-models/secret-store/vault_master_key_encryption/#vault-master-key-encryption-feature","text":"","title":"Vault Master Key Encryption Feature"},{"location":"threat-models/secret-store/vault_master_key_encryption/#introduction","text":"The EdgeX secret store threat model calls out a particular aspect of the Vault-based secret store architecture upon which the whole EdgeX secret store depends: the Vault master key. Because plaintext storage of the Vault master key at rest would be a known security weakness , the high level design calls for the Vault master key to be encrypted on storage. One way of doing this would be to simply encrypt the whole drive upon which the Vault master key is stored. This is a good solution: it would encrypt not only the Vault master key, but also other parts of the system to harden them against offline tampering and information disclosure risks. This solution also has drawbacks: whole volume encryption may slow down boot times and have a runtime performance impact on constrained devices without hardware-accelerated crypto. 
The Vault Master Key Encryption feature of EdgeX enables a system designer to specifically target encryption of the Vault master key, and enables a variety of flexible use cases that are not tied to volume encryption such as key escrow (where a key is stored on another machine on the network), smart cards or USB HSMs (where a key is stored in a dongle or chip card), or TPM (security hardware found on many PC-class motherboards).","title":"Introduction"},{"location":"threat-models/secret-store/vault_master_key_encryption/#internal-design","text":"As stated in the high level design, an RFC-5869 key derivation function (KDF) is used to produce a set of wrapping keys that are used by the vault-worker process to encrypt the Vault master key. An RFC-5869 KDF requires three inputs. A change to any input results in a different output key: Input keying material (IKM). It need not be (but should be) cryptographically strong, and is the \"secret\" part of the KDF. A salt. A non-secret random number that adds to the strength of the KDF. An \"info\" argument. The info argument allows multiple keys to be generated from the same IKM and salt. This allows the same KDF to generate multiple keys each used for a different purpose. For instance, the same KDF can be used to generate an encryption key to protect the PKI at-rest. The Vault Master Key Encryption feature consumes the IKM from a Unix-style pipe. The IKM is provided by a vendor-defined mechanism, and is intended to be tied into security hardware on the device, be device-unique, and explicitly not stored in the file system. 
To further strengthen the solution, an implementation could choose to engineer a solution whereby the IKM is only released a configurable number of times per boot, so that malware that runs on the system post-boot cannot retrieve it.","title":"Internal design"},{"location":"threat-models/secret-store/vault_master_key_encryption/#ikm-hook","text":"The Vault Master Key Encryption feature is embedded into the EdgeX security-secretstore-setup utility. It is enabled by setting an environment variable, IKM_HOOK , containing the path to an executable that implements the IKM interface, described below, when the security-secretstore-setup executable is run in early boot to initialize or unseal the EdgeX secret store. When this feature is enabled, the Vault master key is encrypted at rest, and cannot be recovered unless the same IKM is provided as when the secretstore was initialized.","title":"IKM HOOK"},{"location":"threat-models/secret-store/vault_master_key_encryption/#ikm-interface","text":"","title":"IKM interface"},{"location":"threat-models/secret-store/vault_master_key_encryption/#name","text":"ikm - Return input key material for a hash-based KDF.","title":"NAME"},{"location":"threat-models/secret-store/vault_master_key_encryption/#synopsis","text":"ikm","title":"SYNOPSIS"},{"location":"threat-models/secret-store/vault_master_key_encryption/#description","text":"ikm outputs initial keying material to stdout as a lowercase hex string to be used for the default EdgeX software implementation of an RFC-5869 KDF. The ikm can output any number of octets. Typically, the KDF will pad the ikm if it is shorter than hashlen, and hash the ikm if it is longer than hashlen. 
Thus, if ikm returns variable-length output it is advantageous to ensure that the output is always greater than hashlen, where hashlen depends on the hash function used by the KDF.","title":"DESCRIPTION"},{"location":"threat-models/secret-store/vault_master_key_encryption/#example","text":"ikm 64acd82883269a5e46b8b0426d5a18e2b006f7d79041a68a4efa5339f25aba80","title":"EXAMPLE"},{"location":"threat-models/secret-store/vault_master_key_encryption/#sample-implementations","text":"This section lists example implementations of the EdgeX Hardware Security Hook.","title":"Sample implementations"},{"location":"threat-models/secret-store/vault_master_key_encryption/#tutorial-configuring-edgex-hardware-security-hooks-to-use-a-tpm-on-intel-developer-zone","text":"There is a tutorial published on Intel\u00ae Developer Zone that uses TPM hardware through a device driver interface to encrypt the Vault master key shares. The sample uses TPM-based local attestation to attest the system state prior to releasing the IKM. The sample is based on the tpm2-software project in GitHub and is specifically designed to run as a statically-linked executable that could be injected into a Docker container. Although not a complete solution, it is an illustrative sample that demonstrates in concrete terms how to use the TSS C API to access TPM functionality.","title":"Tutorial: Configuring EdgeX Hardware Security Hooks to use a TPM on Intel\u00ae Developer Zone"},{"location":"walk-through/Ch-Walkthrough/","text":"EdgeX Demonstration API Walk Through EdgeX 2.0 This walkthrough has been updated to use the Ireland Release / EdgeX 2.0. 
Changes to this tutorial include: - Remove the creation and reference to Addressables and Value Descriptors - Use of the new V2 Device Profile structure - Cleanup of the command service and use of the device name as part of the command - and more In order to better appreciate the EdgeX Foundry micro services (what they do and how they work), how they inter-operate with each other, and some of the more important API calls that each micro service has to offer, this demonstration API walk through shows how a device service and device are established in EdgeX, how data flows through the various services, and how data is then shipped out of EdgeX to the cloud or enterprise system. Through this demonstration, you will play the part of various EdgeX micro services by manually making REST calls in a way that mimics EdgeX system behavior. After exploring this demonstration, and hopefully exercising the APIs yourself, you should have a much better understanding of how EdgeX Foundry works. To be clear, this walkthrough is not the way you set up all your device services, devices, etc. In this walkthrough, you manually call EdgeX APIs to perform the work that a device service would do to get a new device set up and to send data to/through EdgeX. In other words, you are simulating the work a device service does automatically by manually executing EdgeX APIs. You will also exercise APIs to see the results of the work accomplished by the device service and all of EdgeX. Next>","title":"EdgeX Demonstration API Walk Through"},{"location":"walk-through/Ch-Walkthrough/#edgex-demonstration-api-walk-through","text":"EdgeX 2.0 This walkthrough has been updated to use the Ireland Release / EdgeX 2.0. 
Changes to this tutorial include: - Remove the creation and reference to Addressables and Value Descriptors - Use of the new V2 Device Profile structure - Cleanup of the command service and use of the device name as part of the command - and more In order to better appreciate the EdgeX Foundry micro services (what they do and how they work), how they inter-operate with each other, and some of the more important API calls that each micro service has to offer, this demonstration API walk through shows how a device service and device are established in EdgeX, how data flows through the various services, and how data is then shipped out of EdgeX to the cloud or enterprise system. Through this demonstration, you will play the part of various EdgeX micro services by manually making REST calls in a way that mimics EdgeX system behavior. After exploring this demonstration, and hopefully exercising the APIs yourself, you should have a much better understanding of how EdgeX Foundry works. To be clear, this walkthrough is not the way you set up all your device services, devices, etc. In this walkthrough, you manually call EdgeX APIs to perform the work that a device service would do to get a new device set up and to send data to/through EdgeX. In other words, you are simulating the work a device service does automatically by manually executing EdgeX APIs. You will also exercise APIs to see the results of the work accomplished by the device service and all of EdgeX. Next>","title":"EdgeX Demonstration API Walk Through"},{"location":"walk-through/Ch-WalkthroughCommands/","text":"Calling commands Recall that the device profile (the camera-monitor-profile in this walkthrough) included a number of commands to get/set (read or write) information from any device of that type. Also recall that the device (the countcamera1 in this walkthrough) was associated to the device profile (again, the camera-monitor-profile ) when the device was provisioned. 
See core command API for more details. With the setup complete, you can ask the core command micro service for the list of commands associated to the device (the countcamera1 ). The command micro service exposes the commands in a common, normalized way that enables simplified communications with the devices for other micro services within EdgeX Foundry (for example, an edge analytics or rules engine micro service) other applications that may exist on the same host with EdgeX Foundry (for example, a management agent that needs to shut off a sensor) any external system that needs to command those devices (for example, a cloud-based application that determined the need to modify the settings on a collection of devices) Walkthrough - Commands Use either the Postman or Curl tab below to walk through getting the list of commands. Postman Make a GET request to http://localhost:59882/api/v2/device/name/countcamera1 . Note Please note the change in port for the command request above. We are no longer calling on core metadata in this part of the walkthrough. The command micro service is at port 59882 by default. Curl Make a curl GET request as shown below. curl -X GET localhost:59882/api/v2/device/name/countcamera1 | json_pp Note Please note the change in port for the command request above. We are no longer calling on core metadata in this part of the walkthrough. The command micro service is at port 59882 by default. Explore all of the URLs returned as part of this response! These are the URLs that clients (internal or external to EdgeX) can call to trigger the various get/set (read and write) offerings on the Device. However, do take note that the host for the URLs is edgex-core-command . This is the name of the host for core command inside Docker. To exercise the URL outside of Docker, you would have to use the name of the system host ( localhost if executing on the same box). 
Check the Events While we're at it, check that no data has yet been shipped to core data from the camera device. Since the device service and device in this demonstration are wholly manually driven by you, no sensor data should yet have been collected. You can test this theory by asking for the count of events in core data. Walkthrough - Events Use either the Postman or Curl tab below to walkthrough getting the list of events. Postman Make a GET request to http://localhost:59880/api/v2/event/count/device/name/countcamera1 . Curl Make a curl GET request as shown below. curl -X GET localhost:59880/api/v2/event/count/device/name/countcamera1 The response returned should indicate no events for the camera in core data. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"Count\" : 0 } Execute a Command While there is no real device or device service in this walkthrough, EdgeX doesn't know that. Therefore, with all the configuration and setup you have performed, you can ask EdgeX to set the scan depth or set the snapshot duration to the camera, and EdgeX will dutifully try to perform the task. Of course, since no device service or device exists, as expected, EdgeX will ultimately respond with an error. However, through the log files, you can see that a command made of the core command micro service attempts to call on the appropriate command of the fictitious device service that manages our fictitious camera. For example's sake, let's launch a command to set the scan depth of countcamera1 (the name of the single human/dog counting camera device in EdgeX right now). The first task to launch a request to set the scan depth is to get the URL for the command to set or write a new scan depth on the device. Return to the results of the request to get a list of the commands by the device name above. Locate and copy the URL and path for the set depth command. 
Below is a picture containing a slice of the JSON returned by the GET request above and the desired set command URL highlighted - yours will vary based on IDs. Walkthrough - Actuation Command Use either the Postman or Curl tab below to walkthrough actuating the device. Postman Make a PUT request to http://localhost:59882/api/v2/device/name/countcamera1/ScanDepth with the following body. { \"depth\" : \"9\" } Warning Notice that the URL above is a combination of both the command URL and path you found from your command list. Curl Make a curl PUT request as shown below. curl -X PUT -d '{\"depth\":\"9\"}' localhost:59882/api/v2/device/name/countcamera1/ScanDepth Warning Notice that the URL above is a combination of both the command URL and path you found from your command list. Check Command Service Log Again, because no device service (or device) actually exists, core command will respond with a Failed to send a http request error. However, checking the logging output will prove that the core command micro service did receive the request and attempted to call on the non-existent device service (at the address provided for the device service - defined earlier in this walkthrough) to issue the actuating command. To see the core command service log, issue the following Docker command: docker logs edgex-core-command The last lines of the log entries should highlight the attempt to contact the non-existent device. level=ERROR ts=2021-09-16T20:50:09.965368572Z app=core-command source=http.go:47 X-Correlation-ID=49cc97f5-1e84-4a46-9eb5-543ae8bd5284 msg=\"failed to send a http request -> Put \\\"camera-device-service:59990/api/v2/device/name/countcamera1/ScanDepth?\\\": unsupported protocol scheme \\\"camera-device-service\\\"\" ... 
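The actuation PUT above is a composition of pieces the walkthrough has already established: the command service base URL, the device name, and the command name. A sketch making that composition explicit (the variable names are illustrative only; the values are this walkthrough's defaults):

```shell
# Compose the actuation URL: command service base + device name + command name.
base="http://localhost:59882/api/v2/device/name"   # core command's default port
device="countcamera1"
command="ScanDepth"
body='{"depth":"9"}'

put_url="${base}/${device}/${command}"
echo "$put_url"
# prints http://localhost:59882/api/v2/device/name/countcamera1/ScanDepth

# Issue the actuation request (expected to fail in this walkthrough, since no
# device service actually exists):
# curl -X PUT -d "$body" "$put_url"
```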
","title":"Calling commands"},{"location":"walk-through/Ch-WalkthroughCommands/#calling-commands","text":"Recall that the device profile (the camera-monitor-profile in this walkthrough) included a number of commands to get/set (read or write) information from any device of that type. Also recall that the device (the countcamera1 in this walkthrough) was associated to the device profile (again, the camera-monitor-profile ) when the device was provisioned. See core command API for more details. With the setup complete, you can ask the core command micro service for the list of commands associated to the device (the countcamera1 ). The command micro service exposes the commands in a common, normalized way that enables simplified communications with the devices for other micro services within EdgeX Foundry (for example, an edge analytics or rules engine micro service) other applications that may exist on the same host with EdgeX Foundry (for example, a management agent that needs to shut off a sensor) any external system that needs to command those devices (for example, a cloud-based application that determined the need to modify the settings on a collection of devices)","title":"Calling commands"},{"location":"walk-through/Ch-WalkthroughCommands/#walkthrough-commands","text":"Use either the Postman or Curl tab below to walkthrough getting the list of commands. Postman Make a GET request to http://localhost:59882/api/v2/device/name/countcamera1 . Note Please note the change in port for the command request above. We are no longer calling on core metadata in this part of the walkthrough. The command micro service is at port 59882 by default. Curl Make a curl GET request as shown below. curl -X GET localhost:59882/api/v2/device/name/countcamera1 | json_pp Note Please note the change in port for the command request above. We are no longer calling on core metadata in this part of the walkthrough. The command micro service is at port 59882 by default. 
Explore all of the URLs returned as part of this response! These are the URLs that clients (internal or external to EdgeX) can call to trigger the various get/set (read and write) offerings on the Device. However, do take note that the host for the URLs is edgex-core-command . This is the name of the host for core command inside Docker. To exercise the URL outside of Docker, you would have to use the name of the system host ( localhost if executing on the same box).","title":"Walkthrough - Commands"},{"location":"walk-through/Ch-WalkthroughCommands/#check-the-events","text":"While we're at it, check that no data has yet been shipped to core data from the camera device. Since the device service and device in this demonstration are wholly manually driven by you, no sensor data should yet have been collected. You can test this theory by asking for the count of events in core data.","title":"Check the Events"},{"location":"walk-through/Ch-WalkthroughCommands/#walkthrough-events","text":"Use either the Postman or Curl tab below to walkthrough getting the list of events. Postman Make a GET request to http://localhost:59880/api/v2/event/count/device/name/countcamera1 . Curl Make a curl GET request as shown below. curl -X GET localhost:59880/api/v2/event/count/device/name/countcamera1 The response returned should indicate no events for the camera in core data. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"Count\" : 0 }","title":"Walkthrough - Events"},{"location":"walk-through/Ch-WalkthroughCommands/#execute-a-command","text":"While there is no real device or device service in this walkthrough, EdgeX doesn't know that. Therefore, with all the configuration and setup you have performed, you can ask EdgeX to set the scan depth or set the snapshot duration to the camera, and EdgeX will dutifully try to perform the task. Of course, since no device service or device exists, as expected, EdgeX will ultimately respond with an error. 
However, through the log files, you can see that a command made of the core command micro service attempts to call on the appropriate command of the fictitious device service that manages our fictitious camera. For example's sake, let's launch a command to set the scan depth of countcamera1 (the name of the single human/dog counting camera device in EdgeX right now). The first task to launch a request to set the scan depth is to get the URL for the command to set or write a new scan depth on the device. Return to the results of the request to get a list of the commands by the device name above. Locate and copy the URL and path for the set depth command. Below is a picture containing a slice of the JSON returned by the GET request above and the desired set command URL highlighted - yours will vary based on IDs.","title":"Execute a Command"},{"location":"walk-through/Ch-WalkthroughCommands/#walkthrough-actuation-command","text":"Use either the Postman or Curl tab below to walkthrough actuating the device. Postman Make a PUT request to http://localhost:59882/api/v2/device/name/countcamera1/ScanDepth with the following body. { \"depth\" : \"9\" } Warning Notice that the URL above is a combination of both the command URL and path you found from your command list. Curl Make a curl PUT request as shown below. curl -X PUT -d '{\"depth\":\"9\"}' localhost:59882/api/v2/device/name/countcamera1/ScanDepth Warning Notice that the URL above is a combination of both the command URL and path you found from your command list.","title":"Walkthrough - Actuation Command"},{"location":"walk-through/Ch-WalkthroughCommands/#check-command-service-log","text":"Again, because no device service (or device) actually exists, core command will respond with a Failed to send a http request error. 
However, checking the logging output will prove that the core command micro service did receive the request and attempted to call on the non-existent device service (at the address provided for the device service - defined earlier in this walkthrough) to issue the actuating command. To see the core command service log, issue the following Docker command: docker logs edgex-core-command The last lines of the log entries should highlight the attempt to contact the non-existent device. level=ERROR ts=2021-09-16T20:50:09.965368572Z app=core-command source=http.go:47 X-Correlation-ID=49cc97f5-1e84-4a46-9eb5-543ae8bd5284 msg=\"failed to send a http request -> Put \\\"camera-device-service:59990/api/v2/device/name/countcamera1/ScanDepth?\\\": unsupported protocol scheme \\\"camera-device-service\\\"\" ... ","title":"Check Command Service Log"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/","text":"Defining your device A device profile can be thought of as a template or as a type or classification of device. General characteristics about the type of device, the data these devices provide, and how to command them is all provided in a device profile. Other pages within this document set provide more details about a device profile and its purpose (see core metadata to start). It is typical that as part of the reference information setup sequence, the device service provides the device profiles for the types of devices it manages. Device Profile See core metadata API for more details. Our fictitious device service will manage only the human/dog counting camera, so it only needs to make one POST request to create the monitoring camera device profile. Since device profiles are often represented in YAML, you make a multi-part form-data POST with the device profile file (find the example profile here) to create the Camera Monitor profile. If you explore the sample profile , you will see that the profile begins with some general information. 
name : \"camera-monitor-profile\" manufacturer : \"IOTech\" model : \"Cam12345\" labels : - \"camera\" description : \"Human and canine camera monitor profile\" Each profile has a unique name along with a description, manufacturer, model and collection of labels to assist in queries for particular profiles. These are relatively straightforward attributes of a profile. EdgeX 2.0 As of Ireland/V2, device profile names may only contain unreserved characters which are ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_~ Resources and Commands The device profile defines how to communicate with any device that abides by the profile. In particular, it defines the deviceResources and deviceCommands used to send requests to the device (via the device service). See the Device Profile documentation for more background on each of these. Understanding Device Resources The device profile describes the elements of data that can be obtained from the device or sensor and how to change a setting on a device or sensor. The data that can be obtained or the setting that can be changed are called resources or more precisely they are referred to as device resources in EdgeX. Learn more about deviceResources in the Device Profile documentation . In this walkthrough example, there are two pieces of data we want to be able to get or read from the camera: dog and human counts. Therefore, both are represented as device resources in the device profile. Additionally, we want to be able to set two settings on the camera: the scan depth and snapshot duration. These are also represented as device resources in the device profile. 
deviceResources : - name : \"HumanCount\" isHidden : false #is hidden is false by default so this is just making it explicit for purpose of the walkthrough demonstration description : \"Number of people on camera\" properties : valueType : \"Int16\" readWrite : \"R\" #designates that this property can only be read and not set defaultValue : \"0\" - name : \"CanineCount\" isHidden : false description : \"Number of dogs on camera\" properties : valueType : \"Int16\" readWrite : \"R\" #designates that this property can only be read and not set defaultValue : \"0\" - name : \"ScanDepth\" isHidden : false description : \"Get/set the scan depth\" properties : valueType : \"Int16\" readWrite : \"RW\" #designates that this property can be read or set defaultValue : \"0\" - name : \"SnapshotDuration\" isHidden : false description : \"Get the snapshot duration\" properties : valueType : \"Int16\" readWrite : \"RW\" #designates that this property can be read or set defaultValue : \"0\" Understanding Device Commands Commands, or more precisely device commands, specify access to reads and writes for multiple simultaneous device resources. In other words, device commands allow you to ask for multiple pieces of data from a sensor at one time (or set multiple settings at one time). In this example, we can request both human and dog counts in one request by establishing a device command that specifies the request for both. Get more details on deviceCommands in the Device Profile documentation . deviceCommands : - name : \"Counts\" readWrite : \"R\" isHidden : false resourceOperations : - { deviceResource : \"HumanCount\" } - { deviceResource : \"CanineCount\" } EdgeX 2.0 As of the Ireland release, device commands are automatically created by EdgeX for any device resources that are not specified as hidden (that is where isHidden is set to false or is simply left off the device resource) in the profile. 
Therefore, you would not define a device command to provide access to a single device resource unless you need to restrict the read/write access to that device resource. Walkthrough - Device Profile Use either the Postman or Curl tab below to walkthrough uploading the device profile. Download the Device Profile Click on the link below to download and save the device profile (YAML) to your system. EdgeX_CameraMonitorProfile.yml Note Device profiles are stored in core metadata. Therefore, note that the calls in the walkthrough are to the metadata service, which defaults to port 59881. Upload the Device Profile to EdgeX Postman Make a POST request to http://localhost:59881/api/v2/deviceprofile/uploadfile . The request should not include any additional headers (leave the defaults). In the Body, make sure \"form-data\" is selected and set the Key to file and then select the device profile file where you saved it (as shown below). If your API call is successful, you will get a generated id for your new DeviceProfile in the response area. Curl Make a curl POST request as shown below. curl -X POST -F 'file=@/path/to/your/profile/here/EdgeX_CameraMonitorProfile.yml' http://localhost:59881/api/v2/deviceprofile/uploadfile If your API call is successful, you will get a generated id for your new DeviceProfile in the response area. Warning Note that the file location in the curl command above needs to be replaced with your actual file location path. Also, if you do not save the device profile file to EdgeX_CameraMonitorProfile.yml , then you will need to replace the file name as well. Test the GET API If you make a GET call to the http://localhost:59881/api/v2/deviceprofile/all URL (with Postman or curl) you will get a listing (in JSON) of all the device profiles (and all of their associated deviceResources and deviceCommands ) currently defined in your instance of EdgeX, including the one you just added. 
","title":"Defining your device"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#defining-your-device","text":"A device profile can be thought of as a template or as a type or classification of device. General characteristics about the type of device, the data these devices provide, and how to command them is all provided in a device profile. Other pages within this document set provide more details about a device profile and its purpose (see core metadata to start). It is typical that as part of the reference information setup sequence, the device service provides the device profiles for the types of devices it manages.","title":"Defining your device"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#device-profile","text":"See core metadata API for more details. Our fictitious device service will manage only the human/dog counting camera, so it only needs to make one POST request to create the monitoring camera device profile. Since device profiles are often represented in YAML, you make a multi-part form-data POST with the device profile file (find the example profile here) to create the Camera Monitor profile. If you explore the sample profile , you will see that the profile begins with some general information. name : \"camera-monitor-profile\" manufacturer : \"IOTech\" model : \"Cam12345\" labels : - \"camera\" description : \"Human and canine camera monitor profile\" Each profile has a unique name along with a description, manufacturer, model and collection of labels to assist in queries for particular profiles. These are relatively straightforward attributes of a profile. EdgeX 2.0 As of Ireland/V2, device profile names may only contain unreserved characters which are ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_~","title":"Device Profile"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#resources-and-commands","text":"The device profile defines how to communicate with any device that abides by the profile. 
In particular, it defines the deviceResources and deviceCommands used to send requests to the device (via the device service). See the Device Profile documentation for more background on each of these.","title":"Resources and Commands"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#understanding-device-resources","text":"The device profile describes the elements of data that can be obtained from the device or sensor and how to change a setting on a device or sensor. The data that can be obtained or the setting that can be changed are called resources or more precisely they are referred to as device resources in EdgeX. Learn more about deviceResources in the Device Profile documentation . In this walkthrough example, there are two pieces of data we want to be able to get or read from the camera: dog and human counts. Therefore, both are represented as device resources in the device profile. Additionally, we want to be able to set two settings on the camera: the scan depth and snapshot duration. These are also represented as device resources in the device profile. 
deviceResources : - name : \"HumanCount\" isHidden : false #is hidden is false by default so this is just making it explicit for purpose of the walkthrough demonstration description : \"Number of people on camera\" properties : valueType : \"Int16\" readWrite : \"R\" #designates that this property can only be read and not set defaultValue : \"0\" - name : \"CanineCount\" isHidden : false description : \"Number of dogs on camera\" properties : valueType : \"Int16\" readWrite : \"R\" #designates that this property can only be read and not set defaultValue : \"0\" - name : \"ScanDepth\" isHidden : false description : \"Get/set the scan depth\" properties : valueType : \"Int16\" readWrite : \"RW\" #designates that this property can be read or set defaultValue : \"0\" - name : \"SnapshotDuration\" isHidden : false description : \"Get the snapshot duration\" properties : valueType : \"Int16\" readWrite : \"RW\" #designates that this property can be read or set defaultValue : \"0\"","title":"Understanding Device Resources"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#understanding-device-commands","text":"Commands, or more precisely device commands, specify access to reads and writes for multiple simultaneous device resources. In other words, device commands allow you to ask for multiple pieces of data from a sensor at one time (or set multiple settings at one time). In this example, we can request both human and dog counts in one request by establishing a device command that specifies the request for both. Get more details on deviceCommands in the Device Profile documentation . 
deviceCommands : - name : \"Counts\" readWrite : \"R\" isHidden : false resourceOperations : - { deviceResource : \"HumanCount\" } - { deviceResource : \"CanineCount\" } EdgeX 2.0 As of the Ireland release, device commands are automatically created by EdgeX for any device resources that are not specified as hidden (that is where isHidden is set to false or is simply left off the device resource) in the profile. Therefore, you would not define a device command to provide access to a single device resource unless you need to restrict the read/write access to that device resource.","title":"Understanding Device Commands"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#walkthrough-device-profile","text":"Use either the Postman or Curl tab below to walkthrough uploading the device profile.","title":"Walkthrough - Device Profile"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#download-the-device-profile","text":"Click on the link below to download and save the device profile (YAML) to your system. EdgeX_CameraMonitorProfile.yml Note Device profiles are stored in core metadata. Therefore, note that the calls in the walkthrough are to the metadata service, which defaults to port 59881.","title":"Download the Device Profile"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#upload-the-device-profile-to-edgex","text":"Postman Make a POST request to http://localhost:59881/api/v2/deviceprofile/uploadfile . The request should not include any additional headers (leave the defaults). In the Body, make sure \"form-data\" is selected and set the Key to file and then select the device profile file where you saved it (as shown below). If your API call is successful, you will get a generated id for your new DeviceProfile in the response area. Curl Make a curl POST request as shown below. 
curl -X POST -F 'file=@/path/to/your/profile/here/EdgeX_CameraMonitorProfile.yml' http://localhost:59881/api/v2/deviceprofile/uploadfile If your API call is successful, you will get a generated id for your new DeviceProfile in the response area. Warning Note that the file location in the curl command above needs to be replaced with your actual file location path. Also, if you do not save the device profile file to EdgeX_CameraMonitorProfile.yml , then you will need to replace the file name as well.","title":"Upload the Device Profile to EdgeX"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#test-the-get-api","text":"If you make a GET call to the http://localhost:59881/api/v2/deviceprofile/all URL (with Postman or curl) you will get a listing (in JSON) of all the device profiles (and all of their associated deviceResources and deviceCommands ) currently defined in your instance of EdgeX, including the one you just added. ","title":"Test the GET API"},{"location":"walk-through/Ch-WalkthroughDeviceService/","text":"Register your device service Our next task in this walkthrough is to have the device service register or define itself in EdgeX. That is, it can proclaim to EdgeX that \"I have arrived and am functional.\" Register with Core Configuration and Registration Part of that registration process of the device service, indeed any EdgeX micro service, is to register itself with the core configuration & registration . In this process, the micro service provides its location to the Config/Reg micro service and picks up any new/latest configuration information from this central service. Since there is no real device service in this walkthrough demonstration, this part of the inter-micro service exchange is not explored here. Device Service See core metadata API for more details. At this point in your walkthrough, the device service must create a representative instance of itself in core metadata. 
It is in this registration that the device service is given an address that allows core command or any EdgeX service to communicate with it. The name of the device service must be unique across all of EdgeX. When registering a device service, the initial admin state can be provided. The administrative state (aka admin state) provides control of the device service by humans or other systems. It can be set to LOCKED or UNLOCKED . When a device service is set to LOCKED , it is not supposed to respond to any command requests nor send data from the devices. See Admin State documentation for more details. EdgeX 2.0 As of Ireland/V2, device service names may only contain unreserved characters which are ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_~ Walkthrough - Device Service Use either the Postman or Curl tab below to walkthrough creating the DeviceService . Postman Make a POST request to http://localhost:59881/api/v2/deviceservice with the following body: BODY : [ { \"apiVersion\" : \"v2\" , \"service\" : { \"name\" : \"camera-control-device-service\" , \"description\" : \"Manage human and dog counting cameras\" , \"adminState\" : \"UNLOCKED\" , \"labels\" : [ \"camera\" , \"counter\" ], \"baseAddress\" : \"camera-device-service:59990\" } } ] Be sure that you are POSTing raw data, not form-encoded data. If your API call is successful, you will get a generated ID for your new DeviceService in the response area. Curl Make a curl POST request as shown below. curl -X 'POST' 'http://localhost:59881/api/v2/deviceservice' -d '[{\"apiVersion\": \"v2\",\"service\": {\"name\": \"camera-control-device-service\",\"description\": \"Manage human and dog counting cameras\", \"adminState\": \"UNLOCKED\", \"labels\": [\"camera\",\"counter\"], \"baseAddress\": \"camera-device-service:59990\"}}]' If your API call is successful, you will get a generated ID for your new DeviceService . 
Test the GET API If you make a GET call to the http://localhost:59881/api/v2/deviceservice/all URL (with Postman or curl) you will get a listing (in JSON) of all the device services currently defined in your instance of EdgeX, including the one you just added. ","title":"Register your device service"},{"location":"walk-through/Ch-WalkthroughDeviceService/#register-your-device-service","text":"Our next task in this walkthrough is to have the device service register or define itself in EdgeX. That is, it can proclaim to EdgeX that \"I have arrived and am functional.\"","title":"Register your device service"},{"location":"walk-through/Ch-WalkthroughDeviceService/#register-with-core-configuration-and-registration","text":"Part of that registration process of the device service, indeed any EdgeX micro service, is to register itself with the core configuration & registration . In this process, the micro service provides its location to the Config/Reg micro service and picks up any new/latest configuration information from this central service. Since there is no real device service in this walkthrough demonstration, this part of the inter-micro service exchange is not explored here.","title":"Register with Core Configuration and Registration"},{"location":"walk-through/Ch-WalkthroughDeviceService/#device-service","text":"See core metadata API for more details. At this point in your walkthrough, the device service must create a representative instance of itself in core metadata. It is in this registration that the device service is given an address that allows core command or any EdgeX service to communicate with it. The name of the device service must be unique across all of EdgeX. When registering a device service, the initial admin state can be provided. The administrative state (aka admin state) provides control of the device service by humans or other systems. It can be set to LOCKED or UNLOCKED . 
When a device service is set to LOCKED , it is not supposed to respond to any command requests nor send data from the devices. See Admin State documentation for more details. EdgeX 2.0 As of Ireland/V2, device service names may only contain unreserved characters which are ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_~","title":"Device Service"},{"location":"walk-through/Ch-WalkthroughDeviceService/#walkthrough-device-service","text":"Use either the Postman or Curl tab below to walkthrough creating the DeviceService . Postman Make a POST request to http://localhost:59881/api/v2/deviceservice with the following body: BODY : [ { \"apiVersion\" : \"v2\" , \"service\" : { \"name\" : \"camera-control-device-service\" , \"description\" : \"Manage human and dog counting cameras\" , \"adminState\" : \"UNLOCKED\" , \"labels\" : [ \"camera\" , \"counter\" ], \"baseAddress\" : \"camera-device-service:59990\" } } ] Be sure that you are POSTing raw data, not form-encoded data. If your API call is successful, you will get a generated ID for your new DeviceService in the response area. Curl Make a curl POST request as shown below. curl -X 'POST' 'http://localhost:59881/api/v2/deviceservice' -d '[{\"apiVersion\": \"v2\",\"service\": {\"name\": \"camera-control-device-service\",\"description\": \"Manage human and dog counting cameras\", \"adminState\": \"UNLOCKED\", \"labels\": [\"camera\",\"counter\"], \"baseAddress\": \"camera-device-service:59990\"}}]' If your API call is successful, you will get a generated ID for your new DeviceService .","title":"Walkthrough - Device Service"},{"location":"walk-through/Ch-WalkthroughDeviceService/#test-the-get-api","text":"If you make a GET call to the http://localhost:59881/api/v2/deviceservice/all URL (with Postman or curl) you will get a listing (in JSON) of all the device services currently defined in your instance of EdgeX, including the one you just added. 
","title":"Test the GET API"},{"location":"walk-through/Ch-WalkthroughExporting/","text":"Exporting your device data Great, so the data sent by the camera device makes its way to core data. How can that data be sent to an enterprise system or the Cloud? How can that data be used by an edge analytics system (like a rules engine) to actuate on a device? Getting data to the rules engine By default, data is already passed from the core data service to application services (app services) via Redis Pub/Sub messaging. Alternately, the data can be supplied between the two via MQTT. A preconfigured application service is provided with the EdgeX default Docker Compose files that gets this data and routes it to the eKuiper rules engine . The application service is called app-service-rules (see below). More specifically, it is an app service configurable . app-service-rules : container_name : edgex-app-rules-engine depends_on : - consul - data environment : CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis EDGEX_PROFILE : rules-engine EDGEX_SECURITY_SECRET_STORE : \"false\" MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul SERVICE_HOST : edgex-app-rules-engine TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis hostname : edgex-app-rules-engine image : edgexfoundry/app-service-configurable:2.0.1 networks : edgex-network : {} ports : - 127.0.0.1:59701:59701/tcp read_only : true security_opt : - no-new-privileges:true user : 2002:2001 Seeing the data export The log level of any EdgeX micro service is set to INFO by default. 
If you tune the log level of the app-service-rules micro service to DEBUG , you can see Event s pass through the app service on the way to the rules engine. Set the log level To set the log level of any service, open the Consul UI in a browser by visiting http://[host]:8500 . When the Consul UI opens, click on the Key/Value tab on the top of the screen. On the Key/Value display page, click on edgex > appservices > 2.0 > app-rules-engine > Writable > LogLevel . In the Value entry field that presents itself, replace INFO with DEBUG and hit the Save button. View the service log The log level change will be picked up by the application service. In a terminal window, execute the Docker command below to view the service log. docker logs -f edgex-app-rules-engine Now push another event/reading into core data as you did earlier (see Send Event ). You should see each new event/reading created being acknowledged by the app service. With the right application service and rules engine configuration, the event/reading data is published to the rules engine topic where it can then be picked up and used by the rules engine service to trigger commands just as you did manually in this walkthrough. Exporting data to anywhere You can create an additional application service to get the data to another application or service, REST endpoint, MQTT topic, cloud provider, and more. See the Getting Started guide on exporting data for more information on how to use another app service configurable to get EdgeX data to any client. Building your own solutions Congratulations, you've made it all the way through the Walkthrough tutorial! appservices > 2.0 > app-rules-engine > Writable > LogLevel . In the Value entry field that presents itself, replace INFO with DEBUG and hit the Save button.","title":"Set the log level"},{"location":"walk-through/Ch-WalkthroughExporting/#view-the-service-log","text":"The log level change will be picked up by the application service. 
In a terminal window, execute the Docker command below to view the service log. docker logs -f edgex-app-rules-engine Now push another event/reading into core data as you did earlier (see Send Event ). You should see each new event/reading created being acknowledged by the app service. With the right application service and rules engine configuration, the event/reading data is published to the rules engine topic where it can then be picked up and used by the rules engine service to trigger commands just as you did manually in this walkthrough.","title":"View the service log"},{"location":"walk-through/Ch-WalkthroughExporting/#exporting-data-to-anywhere","text":"You can create an additional application service to get the data to another application or service, REST endpoint, MQTT topic, cloud provider, and more. See the Getting Started guide on exporting data for more information on how to use another app service configurable to get EdgeX data to any client.","title":"Exporting data to anywhere"},{"location":"walk-through/Ch-WalkthroughExporting/#building-your-own-solutions","text":"Congratulations, you've made it all the way through the Walkthrough tutorial! ","title":"Provision a device"},{"location":"walk-through/Ch-WalkthroughProvision/#provision-a-device","text":"In the last act of setup, a device service often discovers and provisions devices (either statically or dynamically ) that it is going to manage on behalf of EdgeX. Note the word \"often\" in the last sentence. Not all device services will discover new devices or provision them right away. Depending on the type of device and how the devices communicate, it is up to the device service to determine how/when to provision a device. 
In some cases, the provisioning may be triggered by a human request of the device service once everything is in place and once the human can provide the information the device service needs to physically connect to the device.","title":"Provision a device"},{"location":"walk-through/Ch-WalkthroughProvision/#device","text":"See core metadata API for more details. For the sake of this demonstration, the call to core metadata will provision the human/dog counting monitor camera as if the device service discovered it (by some unknown means) and provisioned the device as part of some startup process. To create a Device , it must be associated to a DeviceProfile , a DeviceService , and contain one or more Protocols that define how and where to communicate with the device (possibly providing its address). When creating a device, you specify both the admin state (just as you did for a device service) and an operating state. The operating state (aka op state) provides an indication on the part of EdgeX about the internal operating status of the device. The operating state is not set externally (as by another system or human), it is a signal from within EdgeX (and potentially the device service itself) about the condition of the device. The operating state of the device may be either UP or DOWN (it may also be UNKNOWN if the state cannot be determined). When the operating state of the device is DOWN , it is either experiencing some difficulty or going through some process (for example an upgrade) which does not allow it to function in its normal capacity.","title":"Device"},{"location":"walk-through/Ch-WalkthroughProvision/#walkthrough-device","text":"Use either the Postman or Curl tab below to walkthrough creating the Device . 
Postman Make a POST request to http://localhost:59881/api/v2/device with the following body: [ { \"apiVersion\" : \"v2\" , \"device\" : { \"name\" : \"countcamera1\" , \"description\" : \"human and dog counting camera #1\" , \"adminState\" : \"UNLOCKED\" , \"operatingState\" : \"UP\" , \"labels\" : [ \"camera\" , \"counter\" ], \"location\" : \"{lat:45.45,long:47.80}\" , \"serviceName\" : \"camera-control-device-service\" , \"profileName\" : \"camera-monitor-profile\" , \"protocols\" : { \"camera-protocol\" : { \"camera-address\" : \"localhost\" , \"port\" : \"1234\" , \"unitID\" : \"1\" } }, \"notify\" : false } } ] Be sure that you are POSTing raw data, not form-encoded data. If your API call is successful, you will get a generated ID for your new Device in the response area. Note The camera-monitor-profile was created by the device profile uploaded in a previous walkthrough step. The camera-control-device-service was created in the last walkthrough step. These names must match the previously created EdgeX objects in order to successfully provision your device. EdgeX 2.0 As of Ireland/V2, device names may only contain unreserved characters which are ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_~ Curl Make a curl POST request as shown below. curl -X 'POST' 'http://localhost:59881/api/v2/device' -d '[{\"apiVersion\": \"v2\", \"device\": {\"name\": \"countcamera1\",\"description\": \"human and dog counting camera #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"camera\",\"counter\"],\"location\": \"{lat:45.45,long:47.80}\",\"serviceName\": \"camera-control-device-service\",\"profileName\": \"camera-monitor-profile\",\"protocols\": {\"camera-protocol\": {\"camera-address\": \"localhost\",\"port\": \"1234\",\"unitID\": \"1\"}},\"notify\": false}}]' If your API call is successful, you will get a generated ID (a UUID) for your new Device . 
Note The camera-monitor-profile was created by the device profile uploaded in a previous walkthrough step. The camera-control-device-service was created in the last walkthrough step. These names must match the previously created EdgeX objects in order to successfully provision your device. EdgeX 2.0 As of Ireland/V2, device names may only contain unreserved characters which are ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_~","title":"Walkthrough - Device"},{"location":"walk-through/Ch-WalkthroughProvision/#test-the-get-api","text":"Ensure the monitor camera is among the devices known to core metadata. If you make a GET call to the http://localhost:59881/api/v2/device/all URL (with Postman or curl) you will get a listing (in JSON) of all the devices currently defined in your instance of EdgeX that should include the one you just added. There are many additional APIs on core metadata to retrieve a DeviceProfile , Device , DeviceService , etc. As an example, here is one to find all devices associated to a given DeviceProfile . curl -X GET http://localhost:59881/api/v2/device/profile/name/camera-monitor-profile | json_pp ","title":"Test the GET API"},{"location":"walk-through/Ch-WalkthroughReading/","text":"Sending events and reading data In the real world, the human/dog counting camera would start to take pictures, count beings, and send that data to EdgeX. To simulate this activity in this section of the walkthrough, you will make core data API calls as if you were the camera's device and device service. That is, you will report human and dog counts to core data in the form of event/reading objects. Send an Event/Reading See core data API for more details. Data is submitted to core data as an Event object. An event is a collection of sensor readings from a device (associated to a device by its name) at a particular point in time. 
A Reading object in an Event object is a particular value sensed by the device and associated to a Device Resource (by name) to provide context to the reading. So, the human/dog counting camera might determine that there are 5 people and 3 dogs in the space it is monitoring. In the EdgeX vernacular, the device service upon receiving these sensed values from the camera device would create an Event with two Reading s - one Reading would contain the key/value pair of HumanCount:5 and the other Reading would contain the key/value pair of CanineCount:3. The device service, on creating the Event and associated Reading objects would transmit this information to core data via REST call. Walkthrough - Send Event Use either the Postman or Curl tab below to walkthrough sending an Event with Reading s to core data. Postman Make a POST request to http://localhost:59880/api/v2/event/camera-monitor-profile/countcamera1/HumanCount with the body below. { \"apiVersion\" : \"v2\" , \"event\" : { \"apiVersion\" : \"v2\" , \"deviceName\" : \"countcamera1\" , \"profileName\" : \"camera-monitor-profile\" , \"sourceName\" : \"HumanCount\" , \"id\" : \"d5471d59-2810-419a-8744-18eb8fa03465\" , \"origin\" : 1602168089665565200 , \"readings\" : [ { \"id\" : \"7003cacc-0e00-4676-977c-4e58b9612abd\" , \"origin\" : 1602168089665565200 , \"deviceName\" : \"countcamera1\" , \"resourceName\" : \"HumanCount\" , \"profileName\" : \"camera-monitor-profile\" , \"valueType\" : \"Int16\" , \"value\" : \"5\" }, { \"id\" : \"7003cacc-0e00-4676-977c-4e58b9612abe\" , \"origin\" : 1602168089665565200 , \"deviceName\" : \"countcamera1\" , \"resourceName\" : \"CanineCount\" , \"profileName\" : \"camera-monitor-profile\" , \"valueType\" : \"Int16\" , \"value\" : \"3\" } ] } } If your API call is successful, you will get a generated ID for your new Event as shown in the image below. 
Note Notice that the POST request URL contains the device profile name, the device name and the device resource (or device command) associated with the device that is providing the event. Curl Make a curl POST request as shown below. curl -X POST -d '{\"apiVersion\": \"v2\",\"event\": {\"apiVersion\": \"v2\",\"deviceName\": \"countcamera1\",\"profileName\": \"camera-monitor-profile\",\"sourceName\": \"HumanCount\",\"id\":\"d5471d59-2810-419a-8744-18eb8fa03464\",\"origin\": 1602168089665565200,\"readings\": [{\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abc\",\"origin\": 1602168089665565200,\"deviceName\": \"countcamera1\",\"resourceName\": \"HumanCount\",\"profileName\": \"camera-monitor-profile\",\"valueType\": \"Int16\",\"value\": \"5\"},{\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abf\",\"origin\":1602168089665565200,\"deviceName\": \"countcamera1\",\"resourceName\": \"CanineCount\",\"profileName\": \"camera-monitor-profile\",\"valueType\": \"Int16\",\"value\": \"3\"}]}}' localhost:59880/api/v2/event/camera-monitor-profile/countcamera1/HumanCount Note Notice that the POST request URL contains the device profile name, the device name and the device resource (or device command) associated with the device that is providing the event. Origin Timestamp The device service will supply an origin property in the Event and Reading object to suggest the time (in Epoch timestamp/milliseconds format) at which the data was sensed/collected. Note Smart devices will often timestamp sensor data and this timestamp can be used as the origin timestamp. In cases where the sensor/device is unable to provide a timestamp (\"dumb\" or brownfield sensors), it is the device service that creates a timestamp for the sensor data that is applied as the origin timestamp for the device. Exploring Events/Readings Now that an Event and associated Readings have been sent to core data, you can use the core data API to explore the data that is now stored in the database. 
Recall from a previous walkthrough step , you checked that no data was yet stored in core data. Make a similar call to see that event records have now been sent into core data. Walkthrough - Query Events/Readings Use either the Postman or Curl tab below to walkthrough getting the list of events. Postman Make a GET request to retrieve the Event s associated to the countcamera1 device: http://localhost:59880/api/v2/event/device/name/countcamera1 . Make a GET request to retrieve the Reading s associated to the countcamera1 device: http://localhost:59880/api/v2/reading/device/name/countcamera1 . Curl Make curl GET requests to retrieve 10 of the last Event s associated to the countcamera1 device and to retrieve 10 of the human count readings associated to countcamera1 curl -X GET localhost:59880/api/v2/event/device/name/countcamera1 | json_pp curl -X GET localhost:59880/api/v2/reading/device/name/countcamera1 | json_pp There are many additional APIs on core data to retrieve Event and Reading data. As an example, here is one to find all events inside of a start and end time range. curl -X GET localhost:59880/api/v2/event/start/1602168089665560000/end/1602168089665570000 | json_pp ","title":"Sending events and reading data"},{"location":"walk-through/Ch-WalkthroughReading/#sending-events-and-reading-data","text":"In the real world, the human/dog counting camera would start to take pictures, count beings, and send that data to EdgeX. To simulate this activity in this section of the walkthrough, you will make core data API calls as if you were the camera's device and device service. That is, you will report human and dog counts to core data in the form of event/reading objects.","title":"Sending events and reading data"},{"location":"walk-through/Ch-WalkthroughReading/#send-an-eventreading","text":"See core data API for more details. Data is submitted to core data as an Event object. 
An event is a collection of sensor readings from a device (associated to a device by its name) at a particular point in time. A Reading object in an Event object is a particular value sensed by the device and associated to a Device Resource (by name) to provide context to the reading. So, the human/dog counting camera might determine that there are 5 people and 3 dogs in the space it is monitoring. In the EdgeX vernacular, the device service upon receiving these sensed values from the camera device would create an Event with two Reading s - one Reading would contain the key/value pair of HumanCount:5 and the other Reading would contain the key/value pair of CanineCount:3. The device service, on creating the Event and associated Reading objects would transmit this information to core data via REST call.","title":"Send an Event/Reading"},{"location":"walk-through/Ch-WalkthroughReading/#walkthrough-send-event","text":"Use either the Postman or Curl tab below to walkthrough sending an Event with Reading s to core data. Postman Make a POST request to http://localhost:59880/api/v2/event/camera-monitor-profile/countcamera1/HumanCount with the body below. 
{ \"apiVersion\" : \"v2\" , \"event\" : { \"apiVersion\" : \"v2\" , \"deviceName\" : \"countcamera1\" , \"profileName\" : \"camera-monitor-profile\" , \"sourceName\" : \"HumanCount\" , \"id\" : \"d5471d59-2810-419a-8744-18eb8fa03465\" , \"origin\" : 1602168089665565200 , \"readings\" : [ { \"id\" : \"7003cacc-0e00-4676-977c-4e58b9612abd\" , \"origin\" : 1602168089665565200 , \"deviceName\" : \"countcamera1\" , \"resourceName\" : \"HumanCount\" , \"profileName\" : \"camera-monitor-profile\" , \"valueType\" : \"Int16\" , \"value\" : \"5\" }, { \"id\" : \"7003cacc-0e00-4676-977c-4e58b9612abe\" , \"origin\" : 1602168089665565200 , \"deviceName\" : \"countcamera1\" , \"resourceName\" : \"CanineCount\" , \"profileName\" : \"camera-monitor-profile\" , \"valueType\" : \"Int16\" , \"value\" : \"3\" } ] } } If your API call is successful, you will get a generated ID for your new Event as shown in the image below. Note Notice that the POST request URL contains the device profile name, the device name and the device resource (or device command) associated with the device that is providing the event. Curl Make a curl POST request as shown below. 
curl -X POST -d '{\"apiVersion\": \"v2\",\"event\": {\"apiVersion\": \"v2\",\"deviceName\": \"countcamera1\",\"profileName\": \"camera-monitor-profile\",\"sourceName\": \"HumanCount\",\"id\":\"d5471d59-2810-419a-8744-18eb8fa03464\",\"origin\": 1602168089665565200,\"readings\": [{\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abc\",\"origin\": 1602168089665565200,\"deviceName\": \"countcamera1\",\"resourceName\": \"HumanCount\",\"profileName\": \"camera-monitor-profile\",\"valueType\": \"Int16\",\"value\": \"5\"},{\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abf\",\"origin\":1602168089665565200,\"deviceName\": \"countcamera1\",\"resourceName\": \"CanineCount\",\"profileName\": \"camera-monitor-profile\",\"valueType\": \"Int16\",\"value\": \"3\"}]}}' localhost:59880/api/v2/event/camera-monitor-profile/countcamera1/HumanCount Note Notice that the POST request URL contains the device profile name, the device name and the device resource (or device command) associated with the device that is providing the event.","title":"Walkthrough - Send Event"},{"location":"walk-through/Ch-WalkthroughReading/#origin-timestamp","text":"The device service will supply an origin property in the Event and Reading object to suggest the time (in Epoch timestamp/milliseconds format) at which the data was sensed/collected. Note Smart devices will often timestamp sensor data and this timestamp can be used as the origin timestamp. In cases where the sensor/device is unable to provide a timestamp (\"dumb\" or brownfield sensors), it is the device service that creates a timestamp for the sensor data that is applied as the origin timestamp for the device.","title":"Origin Timestamp"},{"location":"walk-through/Ch-WalkthroughReading/#exploring-eventsreadings","text":"Now that an Event and associated Readings have been sent to core data, you can use the core data API to explore the data that is now stored in the database. 
Recall from a previous walkthrough step , you checked that no data was yet stored in core data. Make a similar call to see that event records have now been sent into core data.","title":"Exploring Events/Readings"},{"location":"walk-through/Ch-WalkthroughReading/#walkthrough-query-eventsreadings","text":"Use either the Postman or Curl tab below to walkthrough getting the list of events. Postman Make a GET request to retrieve the Event s associated to the countcamera1 device: http://localhost:59880/api/v2/event/device/name/countcamera1 . Make a GET request to retrieve the Reading s associated to the countcamera1 device: http://localhost:59880/api/v2/reading/device/name/countcamera1 . Curl Make curl GET requests to retrieve 10 of the last Event s associated to the countcamera1 device and to retrieve 10 of the human count readings associated to countcamera1 curl -X GET localhost:59880/api/v2/event/device/name/countcamera1 | json_pp curl -X GET localhost:59880/api/v2/reading/device/name/countcamera1 | json_pp There are many additional APIs on core data to retrieve Event and Reading data. As an example, here is one to find all events inside of a start and end time range. curl -X GET localhost:59880/api/v2/event/start/1602168089665560000/end/1602168089665570000 | json_pp ","title":"Walkthrough - Query Events/Readings"},{"location":"walk-through/Ch-WalkthroughSetup/","text":"Set up your environment Install Docker, Docker Compose & EdgeX Foundry To explore EdgeX and walk through its APIs and how it works, you will need: Docker Docker Compose EdgeX Foundry (the base set of containers) If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry. If you have the tools and EdgeX already installed and running, you can proceed to the Walkthrough Use Case . 
Install Postman (optional) You can follow this walkthrough making HTTP calls from the command-line with a tool like curl , but it's easier if you use a graphical user interface tool designed for exercising REST APIs. For that we like to use Postman . You can download the native Postman app for your operating system. Note Example curl commands will be provided with the walk through so that you can run this walkthrough without Postman. Alert It is assumed that for the purposes of this walk through demonstration all API micro services are running on localhost . If this is not the case, substitute your hostname for localhost. Any POST call has the CONTENT-TYPE=application/JSON header associated to it unless explicitly stated otherwise. ","title":"Set up your environment"},{"location":"walk-through/Ch-WalkthroughSetup/#setup-up-your-environment","text":"","title":"Set up your environment"},{"location":"walk-through/Ch-WalkthroughSetup/#install-docker-docker-compose-edgex-foundry","text":"To explore EdgeX and walk through its APIs and how it works, you will need: Docker Docker Compose EdgeX Foundry (the base set of containers) If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry. If you have the tools and EdgeX already installed and running, you can proceed to the Walkthrough Use Case .","title":"Install Docker, Docker Compose & EdgeX Foundry"},{"location":"walk-through/Ch-WalkthroughSetup/#install-postman-optional","text":"You can follow this walkthrough making HTTP calls from the command-line with a tool like curl , but it's easier if you use a graphical user interface tool designed for exercising REST APIs. For that we like to use Postman . You can download the native Postman app for your operating system. Note Example curl commands will be provided with the walk through so that you can run this walkthrough without Postman. 
Alert It is assumed that for the purposes of this walk through demonstration all API micro services are running on localhost . If this is not the case, substitute your hostname for localhost. Any POST call has the CONTENT-TYPE=application/JSON header associated to it unless explicitly stated otherwise. ","title":"Install Postman (optional)"},{"location":"walk-through/Ch-WalkthroughUseCase/","text":"Example Use Case In order to explore EdgeX, its services and APIs and to generally understand how it works, it helps to see EdgeX in the context of a real use case. While you exercise the APIs under a hypothetical situation in order to demonstrate how EdgeX works, the use case is very much a valid example of how EdgeX can be used to collect data from devices and actuate control of the sensed environment it monitors. People (and animal) counting camera technology as highlighted in this walk through does exist and has been connected to EdgeX before. Object Counting Camera Suppose you had a new device that you wanted to connect to EdgeX. The device was a camera that took a picture and then had an on-board chip that analyzed the picture and reported the number of humans and canines (dogs) it saw. How often the camera takes a picture and reports its findings can be configured. In fact, the camera device could be sent two actuation commands - that is sent two requests for which it must respond and do something. You could send a request to set its time, in seconds, between picture snapshots (and then calculating the number of humans and dogs it finds in that resulting image). You could also request it to set the scan depth, in feet, of the camera - that is set how far out the camera looks. The farther out it looks, the less accurate the count of humans and dogs becomes, so this is something the manufacturer wants to allow the user to set based on use case needs. EdgeX Device Representation In EdgeX, the camera must be represented by a Device . 
Each Device is managed by a device service . The device service communicates with the underlying hardware - in this case the camera - in the protocol of choice for that Device . The device service collects the data from the devices it manages and passes that data into the rest of EdgeX. EdgeX 2.0 As of the Ireland release, a device service will, by default, publish data into a message bus which can be subscribed to by core data and/or application services. You'll learn more about these later in this walkthrough. Alternately, a device service can send data directly to core data. In this case, the device service would be collecting the count of humans and dogs that the camera sees. The device service also serves to translate the request for actuation from EdgeX and the rest of the world into protocol requests that the physical device would understand. So in this example, the device service would take requests to set the duration between snapshots and to set the scan depth and translate those requests into protocol commands that the camera understood. Exactly how this camera physically connects to the host machine running EdgeX and how the device service works under the covers to communicate with the camera Device is immaterial for the point of this demonstration. ","title":"Example Use Case"},{"location":"walk-through/Ch-WalkthroughUseCase/#example-use-case","text":"In order to explore EdgeX, its services and APIs and to generally understand how it works, it helps to see EdgeX in the context of a real use case. While you exercise the APIs under a hypothetical situation in order to demonstrate how EdgeX works, the use case is very much a valid example of how EdgeX can be used to collect data from devices and actuate control of the sensed environment it monitors. 
People (and animal) counting camera technology as highlighted in this walk through does exist and has been connected to EdgeX before.","title":"Example Use Case"},{"location":"walk-through/Ch-WalkthroughUseCase/#object-counting-camera","text":"Suppose you had a new device that you wanted to connect to EdgeX. The device was a camera that took a picture and then had an on-board chip that analyzed the picture and reported the number of humans and canines (dogs) it saw. How often the camera takes a picture and reports its findings can be configured. In fact, the camera device could be sent two actuation commands - that is sent two requests for which it must respond and do something. You could send a request to set its time, in seconds, between picture snapshots (and then calculating the number of humans and dogs it finds in that resulting image). You could also request it to set the scan depth, in feet, of the camera - that is set how far out the camera looks. The farther out it looks, the less accurate the count of humans and dogs becomes, so this is something the manufacturer wants to allow the user to set based on use case needs.","title":"Object Counting Camera"},{"location":"walk-through/Ch-WalkthroughUseCase/#edgex-device-representation","text":"In EdgeX, the camera must be represented by a Device . Each Device is managed by a device service . The device service communicates with the underlying hardware - in this case the camera - in the protocol of choice for that Device . The device service collects the data from the devices it manages and passes that data into the rest of EdgeX. EdgeX 2.0 As of the Ireland release, a device service will, by default, publish data into a message bus which can be subscribed to by core data and/or application services. You'll learn more about these later in this walkthrough. Alternately, a device service can send data directly to core data. 
In this case, the device service would be collecting the count of humans and dogs that the camera sees. The device service also serves to translate the request for actuation from EdgeX and the rest of the world into protocol requests that the physical device would understand. So in this example, the device service would take requests to set the duration between snapshots and to set the scan depth and translate those requests into protocol commands that the camera understood. Exactly how this camera physically connects to the host machine running EdgeX and how the device service works under the covers to communicate with the camera Device is immaterial for the point of this demonstration. ","title":"EdgeX Device Representation"}]}
\ No newline at end of file
+{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Introduction EdgeX 2.x Want to know what's new in EdgeX 2.x releases (Ireland/Jakarta/etc)? If you are already familiar with EdgeX, look for the EdgeX 2.x emoji ( Edgey - the EdgeX mascot) throughout the documentation - like the one on this page outlining what's new in the latest 2.x releases. These sections will give you a summary of what's new in each area of the documentation. EdgeX Foundry is an open source, vendor neutral, flexible, interoperable, software platform at the edge of the network, that interacts with the physical world of devices , sensors, actuators, and other IoT objects. In simple terms, EdgeX is edge middleware - serving between physical sensing and actuating \"things\" and our information technology (IT) systems. The EdgeX platform enables and encourages the rapidly growing community of IoT solution providers to work together in an ecosystem of interoperable components to reduce uncertainty, accelerate time to market, and facilitate scale. By bringing this much-needed interoperability, EdgeX makes it easier to monitor physical world items, send instructions to them, collect data from them, move the data across the fog up to the cloud where it may be stored, aggregated, analyzed, and turned into information, actuated, and acted upon. So EdgeX enables data to travel northwards towards the cloud or enterprise and back to devices, sensors, and actuators. The initiative is aligned around a common goal: the simplification and standardization of the foundation for tiered edge computing architectures in the IoT market while still enabling the ecosystem to provide significant value-added differentiation. 
If you don't need further description and want to immediately use EdgeX Foundry use this link: Getting Started Guide EdgeX Foundry Use Cases Originally built to support industrial IoT needs, EdgeX today is used in a variety of use cases to include: Building automation \u2013 helping to manage shared workspace facilities Oil/gas \u2013 closed loop control of a gas supply valve Retail \u2013 multi-sensor reconciliation for loss prevention at the point of sale Water treatment \u2013 monitor and control chemical dosing Consumer IoT \u2013 the open source HomeEdge project is using elements of EdgeX as part of its smart home platform EdgeX Foundry Architectural Tenets EdgeX Foundry was conceived with the following tenets guiding the overall architecture: EdgeX Foundry must be platform agnostic with regard to Hardware (x86, ARM) Operating system (Linux, Windows, MacOS, ...) Distribution (allowing for the distribution of functionality through micro services at the edge, on a gateway, in the fog, on cloud, etc.) Deployment/orchestration (Docker, Snaps, K8s, roll-your-own, ... ) Protocols ( north or south side protocols) EdgeX Foundry must be extremely flexible Any part of the platform may be upgraded, replaced or augmented by other micro services or software components Allow services to scale up and down based on device capability and use case EdgeX Foundry should provide \" reference implementation \" services but encourages best of breed solutions EdgeX Foundry must provide for store and forward capability To support disconnected/remote edge systems To deal with intermittent connectivity EdgeX Foundry must support and facilitate \"intelligence\" moving closer to the edge in order to address Actuation latency concerns Bandwidth and storage concerns Operating remotely concerns EdgeX Foundry must support brown and green device/sensor field deployments EdgeX Foundry must be secure and easily managed Deployments EdgeX was originally built by Dell to run on its IoT gateways . 
While EdgeX can and does run on gateways, its platform agnostic nature and micro service architecture enables tiered distributed deployments. In other words, a single instance of EdgeX\u2019s micro services can be distributed across several host platforms. The host platform for one or many EdgeX micro services is called a node. This allows EdgeX to leverage compute, storage, and network resources wherever they live on the edge. Its loosely-coupled architecture enables distribution across nodes to enable tiered edge computing. For example, thing communicating services could run on a programmable logic controller (PLC), a gateway, or be embedded in smarter sensors while other EdgeX services are deployed on networked servers or even in the cloud. The scope of a deployment could therefore include embedded sensors, controllers, edge gateways, servers and cloud systems. EdgeX micro services can be deployed across an array of compute nodes to maximize resources while at the same time positioning more processing intelligence closer to the physical edge. The number and the function of particular micro services deployed on a given node depends on the use case and capability of the hardware and infrastructure. Apache 2 License EdgeX is distributed under Apache 2 License backed by the Apache Foundation. Apache 2 licensing is very friendly (\u201cpermissive\u201d) to open and commercial interests. It allows users to use the software for any purpose. It allows users to distribute, modify or even fork the code base without seeking permission from the founding project. It allows users to change or extend the code base without having to contribute back to the founding project. It even allows users to build commercial products without concerns for profit sharing or royalties to go back to the Linux Foundation or open source project organization. EdgeX Foundry Service Layers EdgeX Foundry is a collection of open source micro services. 
These micro services are organized into 4 service layers, and 2 underlying augmenting system services. The Service Layers traverse from the edge of the physical realm (from the Device Services Layer), to the edge of the information realm (that of the Application Services Layer), with the Core and Supporting Services Layers at the center. The 4 Service Layers of EdgeX Foundry are as follows: Core Services Layer Supporting Services Layer Application Services Layer Device Services Layer The 2 underlying System Services of EdgeX Foundry are as follows: Security System Management Core Services Layer Core services provide the intermediary between the north and south sides of EdgeX. As the name of these services implies, they are \u201ccore\u201d to EdgeX functionality. Core services is where most of the innate knowledge of what \u201cthings\u201d are connected, what data is flowing through, and how EdgeX is configured resides in an EdgeX instance. Core consists of the following micro services: Core data: a persistence repository and associated management service for data collected from south side objects. Command: a service that facilitates and controls actuation requests from the north side to the south side. Metadata: a repository and associated management service of metadata about the objects that are connected to EdgeX Foundry. Metadata provides the capability to provision new devices and pair them with their owning device services. Registry and Configuration: provides other EdgeX Foundry micro services with information about associated services within EdgeX Foundry and micro services configuration properties (i.e. - a repository of initialization values). Core services provide intermediary communications between the things and the IT systems. Supporting Services Layer The supporting services encompass a wide range of micro services to include edge analytics (also known as local analytics). 
Normal software application duties such as scheduler, and data clean up (also known as scrubbing in EdgeX) are performed by micro services in the supporting services layer. These services often require some amount of core services in order to function. In all cases, supporting services can be considered optional \u2013 that is, they can be left out of an EdgeX deployment depending on use case needs and system resources. Supporting services include: Rules Engine: the reference implementation edge analytics service that performs if-then conditional actuation at the edge based on sensor data collected by the EdgeX instance. This service may be replaced or augmented by use case specific analytics capability. Scheduler: an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration specified time, the service will call on any EdgeX service API URL via REST to trigger an operation. For example, the scheduler service periodically calls on core data APIs to clean up old sensed events that have been successfully exported out of EdgeX. Alerts and Notifications: provides EdgeX services with a central facility to send out an alert or notification. These are notices sent to another system or to a person monitoring the EdgeX instance (internal service communications are often handled more directly). Application Services Layer Application services are the means to extract, process/transform and send sensed data from EdgeX to an endpoint or process of your choice. EdgeX today offers application service examples to send data to many of the major cloud providers (Amazon IoT Hub, Google IoT Core, Azure IoT Hub, IBM Watson IoT\u2026), to MQTT(s) topics, and HTTP(s) REST endpoints. Application services are based on the idea of a \"functions pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event messages) in the order specified. The first function in a pipeline is a trigger. 
A trigger begins the functions pipeline execution. A trigger, for example, is something like a message landing in a message queue. Each function then acts on the message. Common functions include filtering, transformation (i.e. to XML or JSON), compression, and encryption functions. The function pipeline ends when the message has gone through all the functions and is sent to a sink. Putting the resulting message into an MQTT topic to be sent to Azure or AWS is an example of a sink completing an application service. Device Services Layer Device services connect \u201cthings\u201d \u2013 that is sensors and devices \u2013 into the rest of EdgeX. Device services are the edge connectors interacting with the \"things\" that include, but are not limited to: alarm systems, heating and air conditioning systems in homes and office buildings, lights, machines in any industry, irrigation systems, drones, currently automated transit such as some rail systems, currently automated factories, and appliances in your home. In the future, this may include driverless cars and trucks, traffic signals, fully automated fast food facilities, fully automated self-serve grocery stores, devices taking medical readings from patients, etc. Device services may service one or a number of things or devices (sensor, actuator, etc.) at one time. A device that a device service manages could be something other than a simple, single, physical device. The device could be another gateway (and all of that gateway's devices), a device manager, a device aggregator that acts as a device, or collection of devices, to EdgeX Foundry. The device service communicates with the devices, sensors, actuators, and other IoT objects through protocols native to each device object. 
The device service converts the data produced and communicated by the IoT object into a common EdgeX Foundry data structure, and sends that converted data into the core services layer, and to other micro services in other layers of EdgeX Foundry. EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, MQTT, etc. System Services Layer Security Infrastructure Security elements of EdgeX Foundry protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Based on the fact that EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules. There are two major EdgeX security components. A security store, which is used to provide a safe place to keep the EdgeX secrets. Examples of EdgeX secrets are the database access passwords used by the other services and tokens to connect to cloud systems. An API gateway serves as the reverse proxy to restrict access to EdgeX REST resources and perform access control related work. System Management System Management facilities provide the central point of contact for external management systems to start/stop/restart EdgeX services, get the status/health of a service, or get metrics on the EdgeX services (such as memory usage) so that the EdgeX services can be monitored. Software Development Kits (SDKs) Two types of SDKs are provided by EdgeX to assist in creating north and south side services \u2013 specifically to create application services and device services. SDKs for both the north and south side services make connecting new things or new cloud/enterprise systems easier by providing developers all the scaffolding code that takes care of the basic operations of the service. 
Thereby allowing developers to focus on specifics of their connectivity to the south or north side object without worrying about all the raw plumbing of a micro service. SDKs are language specific; meaning an SDK is written to create services in a particular programming language. Today, EdgeX offers the following SDKs: Golang Device Service SDK C Device Service SDK Golang Application Functions SDK How EdgeX Works Sensor Data Collection EdgeX\u2019s primary job is to collect data from sensors and devices and make that data available to north side applications and systems. Data is collected from a sensor by a device service that speaks the protocol of that device. Example: a Modbus device service would communicate in Modbus to get a pressure reading from a Modbus pump. The device service translates the sensor data into an EdgeX event object. The device service can then either: put the event object on a message bus (which may be implemented via Redis Streams or MQTT). Subscribers to the event message on the message bus can be application services or core data or both (see step 1.1 below). send the event object to the core data service via REST communications (see step 1.2). When core data receives the event (either via message bus or REST), it persists the sensor data in the local edge database. EdgeX uses Redis as our persistence store. There is an abstraction in place to allow you to use another database (which has allowed other databases to be used in the past). Persistence is not required and can be turned off. Data is persisted in EdgeX at the edge for two basic reasons: Edge nodes are not always connected. During periods of disconnected operations, the sensor data must be saved so that it can be transmitted northbound when connectivity is restored. This is referred to as store and forward capability. In some cases, analytics of sensor data needs to look back in history in order to understand the trend and to make the right decision based on that history. 
If a sensor reports that it is 72\u00b0 F right now, you might want to know what the temperature was ten minutes ago before you make a decision to adjust a heating or cooling system. If the temperature was 85\u00b0 F, you may decide that adjustments to lower the room temperature you made ten minutes ago were sufficient to cool the room. It is the context of historical data that is important to local analytic decisions. When core data receives event objects from the device service via REST, it will put sensor data events on a message topic destined for application services. Redis Pub/Sub is used as the messaging infrastructure by default (step 2). MQTT or ZMQ can also be used as the messaging infrastructure between core data and the application services. The application service transforms the data as needed and pushes the data to an endpoint. It can also filter, enrich, compress, encrypt or perform other functions on the event before sending it to the endpoint (step 3). The endpoint could be an HTTP/S endpoint, an MQTT topic, a cloud system (cloud topic), etc. Edge Analytics and Actuation In edge computing, simply collecting sensor data is only part of the job of an edge platform like EdgeX. Another important job of an edge platform is to be able to: Analyze the incoming sensor data locally Act quickly on that analysis Edge or local analytics is the processing that performs an assessment of the sensor data collected at the edge (\u201clocally\u201d) and triggers actuations or actions based on what it sees. Why edge analytics ? Local analytics are important for two reasons: Some decisions cannot afford to wait for sensor collected data to be fed back to an enterprise or cloud system and have a response returned. Additionally, some edge systems are not always connected to the enterprise or cloud \u2013 they have intermittent periods of connectivity. Local analytics allows systems to operate independently, at least for some stretches of time. 
For example: a shipping container\u2019s cooling system must be able to make decisions locally without the benefit of Internet connectivity for long periods of time when the ship is at sea. Local analytics also allow a system to act quickly in a low-latency fashion when critical to system operations. As an extreme case, imagine that your car\u2019s airbag fired on the basis of data being sent to the cloud and analyzed for collisions. Your car has local analytics to prevent such a potentially slow and error prone delivery of the safety actuation in your automobile. EdgeX is built to act locally on data it collects from the edge. In other words, events are processed by local analytics and can be used to trigger action back down on a sensor/device. Just as application services prepare data for consumption by north side cloud systems or applications, application services can process and get EdgeX events (and the sensor data they contain) to any analytics package (see step 4). By default, EdgeX ships with a simple rules engine (the default EdgeX rules engine is eKuiper \u2013 an open source rules engine and now a sister project in LF Edge). Your own analytics package (or ML agent) could replace or augment the local rules engine. The analytic package can explore the sensor event data and make a decision to trigger actuation of a device. For example, it could check that the pressure reading of an engine is greater than 60 PSI. When such a rule is determined to be true, the analytic package calls on the core command service to trigger some action, like \u201copen a valve\u201d on some controllable device (see step 5). The core command service gets the actuation request and determines which device it needs to act on with the request; it then calls on the owning device service to do the actuation (see step 6). Core command allows developers to put additional security measures or checks in place before actuating. 
The device service receives the request for actuation, translates that into a protocol specific request and forwards the request to the desired device (see step 7). Project Release Cadence Typically, EdgeX releases twice a year: once in the spring and once in the fall. Bug fix releases may occur more often. Each EdgeX release has a code name. The code name follows an alphabetic pattern similar to Android (code names sequentially follow the alphabet). Each release is named after some geographical location in the world. The honor of naming an EdgeX release is given to a community member deemed to have contributed significantly to the project. A release also has a version number. The release version follows semantic versioning to indicate the release is major or minor in scope. Major releases typically contain significant new features and functionality and are not always backward compatible with prior releases. Minor releases are backward compatible and usually contain bug fixes and fewer new features. See the project Wiki for more information on releases, versions and patches . Release Schedule Version Barcelona Oct 2017 0.5.0 California Jun 2018 0.6.0 Delhi Oct 2018 0.7.0 Edinburgh Jul 2019 1.0.0 Fuji Nov 2019 1.1.0 Geneva May 2020 1.2.0 Hanoi Nov 2020 1.3.0 Ireland Spring 2021 2.0.0 Jakarta Fall 2021 2.1.0 Kamakura Spring 2022 TBD Levski Fall 2022 TBD Note : minor releases of the Device Services and Application Services (along with their associated SDKs) can be released independently. Graphical User Interface, the command line interface (CLI) and other tools can be released independently. EdgeX community members convene in a meeting right at the time of a release to plan the next release and roadmap future releases. See the Project Wiki for more detailed information on releases and roadmap . EdgeX 2.0 The Ireland Release The Ireland release, available June 2021, is the second major version of EdgeX. 
Highlights of the 2.0 release include: A new and improved set of service APIs, which eliminate a lot of technical debt and setting EdgeX up for new features in the future (such as allowing for more message based communications) Direct device service to application service communications via message bus (bypassing core data if desired or allowing it to be a secondary subscriber) Simplified device profiles Improved security New, improved and more comprehensive graphical user interface (for development and demonstration purposes) New device services for CoAP, GPIO, and LLRP (RFID protocol) An LLRP inventory application service Improved application service capability and functions (to include new filter functions) Cleaner/simpler Docker image naming and facilities to create custom Docker Compose files EdgeX 2.0 provides adopters with a platform that Has an improved API that addresses edge application needs of today and tomorrow Is more efficient and lighter (depending on use case) Is more reliable and offers better quality of service (less REST, more messaging and incorporating a number of bug fixes) Has eliminated a lot of technical debt accumulated over 4 years EdgeX History and Naming EdgeX Foundry began as a project chartered by Dell IoT Marketing and developed by the Dell Client Office of the CTO as an incubation project called Project Fuse in July 2015. It was initially created to run as the IoT software application on Dell\u2019s introductory line of IoT gateways. Dell entered the project into open source through the Linux Foundation on April 24, 2017. EdgeX was formally announced and demonstrated at Hanover Messe 2017. Hanover Messe is one of the world's largest industrial trade fairs. At the fair, the Linux Foundation also announced the association of 50 founding member organizations \u2013 the EdgeX ecosystem \u2013 to help further the project and the goals of creating a universal edge platform. 
The name \u2018foundry\u2019 was used to draw parallels to Cloud Foundry . EdgeX Foundry is meant to be a foundry for solutions at the edge just like Cloud Foundry is a foundry for solutions in the cloud. Cloud Foundry was originated by VMWare (Dell Technologies is a major shareholder of VMWare - recall that Dell Technologies was the original creator of EdgeX). The \u2018X\u2019 in EdgeX represents the transformational aspects of the platform and allows the project name to be trademarked and to be used in efforts such as certification and certification marks. The EdgeX Foundry Logo represents the nature of its role as transformation engine between the physical OT world and the digital IT world. The EdgeX community selected the octopus as the mascot or \u201cspirit animal\u201d of the project at its inception. Its eight arms and the suckers on the arms represent the sensors. The sensors bring the data into the octopus. Actually, the octopus has nine brains in a way. It has millions of neurons running down each arm; functioning as mini-brains in each of those arms. The arms of the octopus serve as \u201clocal analytics\u201d like that offered by EdgeX. The mascot is affectionately called \u201cEdgey\u201d by the community.","title":"Introduction"},{"location":"#introduction","text":"EdgeX 2.x Want to know what's new in EdgeX 2.x releases (Ireland/Jakarta/etc)? If you are already familiar with EdgeX, look for the EdgeX 2.x emoji ( Edgey - the EdgeX mascot) throughout the documentation - like the one on this page outlining what's new in the latest 2.x releases. These sections will give you a summary of what's new in each area of the documentation. EdgeX Foundry is an open source, vendor neutral, flexible, interoperable, software platform at the edge of the network, that interacts with the physical world of devices , sensors, actuators, and other IoT objects. 
In simple terms, EdgeX is edge middleware - serving between physical sensing and actuating \"things\" and our information technology (IT) systems. The EdgeX platform enables and encourages the rapidly growing community of IoT solution providers to work together in an ecosystem of interoperable components to reduce uncertainty, accelerate time to market, and facilitate scale. By bringing this much-needed interoperability, EdgeX makes it easier to monitor physical world items, send instructions to them, collect data from them, move the data across the fog up to the cloud where it may be stored, aggregated, analyzed, and turned into information, actuated, and acted upon. So EdgeX enables data to travel northwards towards the cloud or enterprise and back to devices, sensors, and actuators. The initiative is aligned around a common goal: the simplification and standardization of the foundation for tiered edge computing architectures in the IoT market while still enabling the ecosystem to provide significant value-added differentiation. 
If you don't need further description and want to immediately use EdgeX Foundry use this link: Getting Started Guide","title":"Introduction"},{"location":"#edgex-foundry-use-cases","text":"Originally built to support industrial IoT needs, EdgeX today is used in a variety of use cases to include: Building automation \u2013 helping to manage shared workspace facilities Oil/gas \u2013 closed loop control of a gas supply valve Retail \u2013 multi-sensor reconciliation for loss prevention at the point of sale Water treatment \u2013 monitor and control chemical dosing Consumer IoT \u2013 the open source HomeEdge project is using elements of EdgeX as part of its smart home platform","title":"EdgeX Foundry Use Cases"},{"location":"#edgex-foundry-architectural-tenets","text":"EdgeX Foundry was conceived with the following tenets guiding the overall architecture: EdgeX Foundry must be platform agnostic with regard to Hardware (x86, ARM) Operating system (Linux, Windows, MacOS, ...) Distribution (allowing for the distribution of functionality through micro services at the edge, on a gateway, in the fog, on cloud, etc.) Deployment/orchestration (Docker, Snaps, K8s, roll-your-own, ... 
) Protocols ( north or south side protocols) EdgeX Foundry must be extremely flexible Any part of the platform may be upgraded, replaced or augmented by other micro services or software components Allow services to scale up and down based on device capability and use case EdgeX Foundry should provide \" reference implementation \" services but encourages best of breed solutions EdgeX Foundry must provide for store and forward capability To support disconnected/remote edge systems To deal with intermittent connectivity EdgeX Foundry must support and facilitate \"intelligence\" moving closer to the edge in order to address Actuation latency concerns Bandwidth and storage concerns Operating remotely concerns EdgeX Foundry must support brown and green device/sensor field deployments EdgeX Foundry must be secure and easily managed","title":"EdgeX Foundry Architectural Tenets"},{"location":"#deployments","text":"EdgeX was originally built by Dell to run on its IoT gateways . While EdgeX can and does run on gateways, its platform agnostic nature and micro service architecture enables tiered distributed deployments. In other words, a single instance of EdgeX\u2019s micro services can be distributed across several host platforms. The host platform for one or many EdgeX micro services is called a node. This allows EdgeX to leverage compute, storage, and network resources wherever they live on the edge. Its loosely-coupled architecture enables distribution across nodes to enable tiered edge computing. For example, thing communicating services could run on a programmable logic controller (PLC), a gateway, or be embedded in smarter sensors while other EdgeX services are deployed on networked servers or even in the cloud. The scope of a deployment could therefore include embedded sensors, controllers, edge gateways, servers and cloud systems. 
EdgeX micro services can be deployed across an array of compute nodes to maximize resources while at the same time positioning more processing intelligence closer to the physical edge. The number and the function of particular micro services deployed on a given node depends on the use case and capability of the hardware and infrastructure.","title":"Deployments"},{"location":"#apache-2-license","text":"EdgeX is distributed under Apache 2 License backed by the Apache Foundation. Apache 2 licensing is very friendly (\u201cpermissive\u201d) to open and commercial interests. It allows users to use the software for any purpose. It allows users to distribute, modify or even fork the code base without seeking permission from the founding project. It allows users to change or extend the code base without having to contribute back to the founding project. It even allows users to build commercial products without concerns for profit sharing or royalties to go back to the Linux Foundation or open source project organization.","title":"Apache 2 License"},{"location":"#edgex-foundry-service-layers","text":"EdgeX Foundry is a collection of open source micro services. These micro services are organized into 4 service layers, and 2 underlying augmenting system services. The Service Layers traverse from the edge of the physical realm (from the Device Services Layer), to the edge of the information realm (that of the Application Services Layer), with the Core and Supporting Services Layers at the center. The 4 Service Layers of EdgeX Foundry are as follows: Core Services Layer Supporting Services Layer Application Services Layer Device Services Layer The 2 underlying System Services of EdgeX Foundry are as follows: Security System Management","title":"EdgeX Foundry Service Layers"},{"location":"#core-services-layer","text":"Core services provide the intermediary between the north and south sides of EdgeX. 
As the name of these services implies, they are \u201ccore\u201d to EdgeX functionality. Core services is where most of the innate knowledge of what \u201cthings\u201d are connected, what data is flowing through, and how EdgeX is configured resides in an EdgeX instance. Core consists of the following micro services: Core data: a persistence repository and associated management service for data collected from south side objects. Command: a service that facilitates and controls actuation requests from the north side to the south side. Metadata: a repository and associated management service of metadata about the objects that are connected to EdgeX Foundry. Metadata provides the capability to provision new devices and pair them with their owning device services. Registry and Configuration: provides other EdgeX Foundry micro services with information about associated services within EdgeX Foundry and micro services configuration properties (i.e. - a repository of initialization values). Core services provide intermediary communications between the things and the IT systems.","title":"Core Services Layer"},{"location":"#supporting-services-layer","text":"The supporting services encompass a wide range of micro services to include edge analytics (also known as local analytics). Normal software application duties such as scheduler, and data clean up (also known as scrubbing in EdgeX) are performed by micro services in the supporting services layer. These services often require some amount of core services in order to function. In all cases, supporting services can be considered optional \u2013 that is, they can be left out of an EdgeX deployment depending on use case needs and system resources. Supporting services include: Rules Engine: the reference implementation edge analytics service that performs if-then conditional actuation at the edge based on sensor data collected by the EdgeX instance. 
This service may be replaced or augmented by use case specific analytics capability. Scheduler: an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration specified time, the service will call on any EdgeX service API URL via REST to trigger an operation. For example, the scheduler service periodically calls on core data APIs to clean up old sensed events that have been successfully exported out of EdgeX. Alerts and Notifications: provides EdgeX services with a central facility to send out an alert or notification. These are notices sent to another system or to a person monitoring the EdgeX instance (internal service communications are often handled more directly).","title":"Supporting Services Layer"},{"location":"#application-services-layer","text":"Application services are the means to extract, process/transform and send sensed data from EdgeX to an endpoint or process of your choice. EdgeX today offers application service examples to send data to many of the major cloud providers (Amazon IoT Hub, Google IoT Core, Azure IoT Hub, IBM Watson IoT\u2026), to MQTT(s) topics, and HTTP(s) REST endpoints. Application services are based on the idea of a \"functions pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event messages) in the order specified. The first function in a pipeline is a trigger. A trigger begins the functions pipeline execution. A trigger, for example, is something like a message landing in a message queue. Each function then acts on the message. Common functions include filtering, transformation (i.e. to XML or JSON), compression, and encryption functions. The function pipeline ends when the message has gone through all the functions and is sent to a sink. 
Putting the resulting message into an MQTT topic to be sent to Azure or AWS is an example of a sink completing an application service.","title":"Application Services Layer"},{"location":"#device-services-layer","text":"Device services connect \u201cthings\u201d \u2013 that is sensors and devices \u2013 into the rest of EdgeX. Device services are the edge connectors interacting with the \"things\" that include, but are not limited to: alarm systems, heating and air conditioning systems in homes and office buildings, lights, machines in any industry, irrigation systems, drones, currently automated transit such as some rail systems, currently automated factories, and appliances in your home. In the future, this may include driverless cars and trucks, traffic signals, fully automated fast food facilities, fully automated self-serve grocery stores, devices taking medical readings from patients, etc. Device services may service one or a number of things or devices (sensor, actuator, etc.) at one time. A device that a device service manages could be something other than a simple, single, physical device. The device could be another gateway (and all of that gateway's devices), a device manager, a device aggregator that acts as a device, or collection of devices, to EdgeX Foundry. The device service communicates with the devices, sensors, actuators, and other IoT objects through protocols native to each device object. The device service converts the data produced and communicated by the IoT object into a common EdgeX Foundry data structure, and sends that converted data into the core services layer, and to other micro services in other layers of EdgeX Foundry. 
EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, MQTT, etc.","title":"Device Services Layer"},{"location":"#system-services-layer","text":"Security Infrastructure Security elements of EdgeX Foundry protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Based on the fact that EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules. There are two major EdgeX security components. A security store, which is used to provide a safe place to keep the EdgeX secrets. Examples of EdgeX secrets are the database access passwords used by the other services and tokens to connect to cloud systems. An API gateway serves as the reverse proxy to restrict access to EdgeX REST resources and perform access-control-related work. System Management System Management facilities provide the central point of contact for external management systems to start/stop/restart EdgeX services, get the status/health of a service, or get metrics on the EdgeX services (such as memory usage) so that the EdgeX services can be monitored.","title":"System Services Layer"},{"location":"#software-development-kits-sdks","text":"Two types of SDKs are provided by EdgeX to assist in creating north and south side services \u2013 specifically to create application services and device services. SDKs for both the north and south side services make connecting new things or new cloud/enterprise systems easier by providing developers all the scaffolding code that takes care of the basic operations of the service, thereby allowing developers to focus on specifics of their connectivity to the south or north side object without worrying about all the raw plumbing of a micro service. 
SDKs are language specific, meaning an SDK is written to create services in a particular programming language. Today, EdgeX offers the following SDKs: Golang Device Service SDK C Device Service SDK Golang Application Functions SDK","title":"Software Development Kits (SDKs)"},{"location":"#how-edgex-works","text":"","title":"How EdgeX Works"},{"location":"#sensor-data-collection","text":"EdgeX\u2019s primary job is to collect data from sensors and devices and make that data available to north side applications and systems. Data is collected from a sensor by a device service that speaks the protocol of that device. Example: a Modbus device service would communicate in Modbus to get a pressure reading from a Modbus pump. The device service translates the sensor data into an EdgeX event object. The device service can then either: put the event object on a message bus (which may be implemented via Redis Streams or MQTT). Subscribers to the event message on the message bus can be application services or core data or both (see step 1.1 below). send the event object to the core data service via REST communications (see step 1.2). When core data receives the event (either via message bus or REST), it persists the sensor data in the local edge database. EdgeX uses Redis as its persistence store. There is an abstraction in place to allow you to use another database (which has allowed other databases to be used in the past). Persistence is not required and can be turned off. Data is persisted in EdgeX at the edge for two basic reasons: Edge nodes are not always connected. During periods of disconnected operations, the sensor data must be saved so that it can be transmitted northbound when connectivity is restored. This is referred to as store and forward capability. In some cases, analytics of sensor data needs to look back in history in order to understand the trend and to make the right decision based on that history. 
If a sensor reports that it is 72\u00b0 F right now, you might want to know what the temperature was ten minutes ago before you make a decision to adjust a heating or cooling system. If the temperature was 85\u00b0 F, you may decide that adjustments to lower the room temperature you made ten minutes ago were sufficient to cool the room. It is the context of historical data that is important to local analytic decisions. When core data receives event objects from the device service via REST, it will put sensor data events on a message topic destined for application services. Redis Pub/Sub is used as the messaging infrastructure by default (step 2). MQTT or ZMQ can also be used as the messaging infrastructure between core data and the application services. The application service transforms the data as needed and pushes the data to an endpoint. It can also filter, enrich, compress, encrypt or perform other functions on the event before sending it to the endpoint (step 3). The endpoint could be an HTTP/S endpoint, an MQTT topic, a cloud system (cloud topic), etc.","title":"Sensor Data Collection"},{"location":"#edge-analytics-and-actuation","text":"In edge computing, simply collecting sensor data is only part of the job of an edge platform like EdgeX. Another important job of an edge platform is to be able to: Analyze the incoming sensor data locally Act quickly on that analysis Edge or local analytics is the processing that performs an assessment of the sensor data collected at the edge (\u201clocally\u201d) and triggers actuations or actions based on what it sees. Why edge analytics? Local analytics are important for two reasons: Some decisions cannot afford to wait for sensor collected data to be fed back to an enterprise or cloud system and have a response returned. Additionally, some edge systems are not always connected to the enterprise or cloud \u2013 they have intermittent periods of connectivity. 
Local analytics allows systems to operate independently, at least for some stretches of time. For example: a shipping container\u2019s cooling system must be able to make decisions locally without the benefit of Internet connectivity for long periods of time when the ship is at sea. Local analytics also allow a system to act quickly in a low-latency fashion when critical to system operations. As an extreme case, imagine that your car\u2019s airbag fired on the basis of data being sent to the cloud and analyzed for collisions. Your car has local analytics to prevent such a potentially slow and error-prone delivery of the safety actuation in your automobile. EdgeX is built to act locally on data it collects from the edge. In other words, events are processed by local analytics and can be used to trigger action back down on a sensor/device. Just as application services prepare data for consumption by north side cloud systems or applications, application services can process and get EdgeX events (and the sensor data they contain) to any analytics package (see step 4). By default, EdgeX ships with a simple rules engine (the default EdgeX rules engine is eKuiper \u2013 an open source rules engine and now a sister project in LF Edge). Your own analytics package (or ML agent) could replace or augment the local rules engine. The analytic package can explore the sensor event data and make a decision to trigger actuation of a device. For example, it could check that the pressure reading of an engine is greater than 60 PSI. When such a rule is determined to be true, the analytic package calls on the core command service to trigger some action, like \u201copen a valve\u201d on some controllable device (see step 5). The core command service gets the actuation request and determines which device it needs to act on with the request; it then calls on the owning device service to do the actuation (see step 6). 
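The pressure-rule example above can be sketched in Go. This is a toy stand-in for a local rules engine (eKuiper in EdgeX), not its actual API: `evaluate`, `Actuation`, the device name, and the `open-valve` command are all hypothetical, and in the real system the firing rule would result in a REST call to core command rather than a returned struct.

```go
package main

import "fmt"

// Actuation is a hypothetical representation of the request the rules
// engine would ask the core command service to execute.
type Actuation struct {
	DeviceName string
	Command    string
}

// evaluate applies the rule from the text: if the pressure reading is
// greater than 60 PSI, trigger an "open valve" actuation.
func evaluate(deviceName string, pressurePSI float64) (Actuation, bool) {
	if pressurePSI > 60 {
		return Actuation{DeviceName: deviceName, Command: "open-valve"}, true
	}
	return Actuation{}, false
}

func main() {
	if act, fired := evaluate("engine-01", 62.5); fired {
		// In EdgeX this would be a call to core command (step 5), which then
		// forwards the request to the owning device service (step 6).
		fmt.Printf("actuate %s: %s\n", act.DeviceName, act.Command)
	}
}
```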
Core command allows developers to put additional security measures or checks in place before actuating. The device service receives the request for actuation, translates that into a protocol specific request and forwards the request to the desired device (see step 7).","title":"Edge Analytics and Actuation"},{"location":"#project-release-cadence","text":"Typically, EdgeX releases twice a year; once in the spring and once in the fall. Bug fix releases may occur more often. Each EdgeX release has a code name. The code name follows an alphabetic pattern similar to Android (code names sequentially follow the alphabet). The code name of each release is named after some geographical location in the world. The honor of naming an EdgeX release is given to a community member deemed to have contributed significantly to the project. A release also has a version number. The release version follows semantic versioning to indicate the release is major or minor in scope. Major releases typically contain significant new features and functionality and are not always backward compatible with prior releases. Minor releases are backward compatible and usually contain bug fixes and fewer new features. See the project Wiki for more information on releases, versions and patches . Release Schedule Version Barcelona Oct 2017 0.5.0 California Jun 2018 0.6.0 Delhi Oct 2018 0.7.0 Edinburgh Jul 2019 1.0.0 Fuji Nov 2019 1.1.0 Geneva May 2020 1.2.0 Hanoi November 2020 1.3.0 Ireland Spring 2021 2.0.0 Jakarta Fall 2021 2.1.0 Kamakura Spring 2022 TBD Levski Fall 2022 TBD Note : minor releases of the Device Services and Application Services (along with their associated SDKs) can be released independently. Graphical User Interface, the command line interface (CLI) and other tools can be released independently. EdgeX community members convene in a meeting right at the time of a release to plan the next release and roadmap future releases. 
See the Project Wiki for more detailed information on releases and roadmap . EdgeX 2.0","title":"Project Release Cadence"},{"location":"#the-ireland-release","text":"The Ireland release, available June 2021, is the second major version of EdgeX. Highlights of the 2.0 release include: A new and improved set of service APIs, which eliminate a lot of technical debt and set EdgeX up for new features in the future (such as allowing for more message based communications) Direct device service to application service communications via message bus (bypassing core data if desired or allowing it to be a secondary subscriber) Simplified device profiles Improved security New, improved and more comprehensive graphical user interface (for development and demonstration purposes) New device services for CoAP, GPIO, and LLRP (RFID protocol) An LLRP inventory application service Improved application service capability and functions (to include new filter functions) Cleaner/simpler Docker image naming and facilities to create custom Docker Compose files EdgeX 2.0 provides adopters with a platform that Has an improved API that addresses edge application needs of today and tomorrow Is more efficient and lighter (depending on use case) Is more reliable and offers better quality of service (less REST, more messaging and incorporating a number of bug fixes) Has eliminated a lot of technical debt accumulated over 4 years","title":"The Ireland Release"},{"location":"#edgex-history-and-naming","text":"EdgeX Foundry began as a project chartered by Dell IoT Marketing and developed by the Dell Client Office of the CTO as an incubation project called Project Fuse in July 2015. It was initially created to run as the IoT software application on Dell\u2019s introductory line of IoT gateways. Dell entered the project into open source through the Linux Foundation on April 24, 2017. EdgeX was formally announced and demonstrated at Hanover Messe 2017. 
Hanover Messe is one of the world's largest industrial trade fairs. At the fair, the Linux Foundation also announced the association of 50 founding member organizations \u2013 the EdgeX ecosystem \u2013 to help further the project and the goals of creating a universal edge platform. The name \u2018foundry\u2019 was used to draw parallels to Cloud Foundry . EdgeX Foundry is meant to be a foundry for solutions at the edge just like Cloud Foundry is a foundry for solutions in the cloud. Cloud Foundry was originated by VMWare (Dell Technologies is a major shareholder of VMWare - recall that Dell Technologies was the original creator of EdgeX). The \u2018X\u2019 in EdgeX represents the transformational aspects of the platform and allows the project name to be trademarked and to be used in efforts such as certification and certification marks. The EdgeX Foundry Logo represents the nature of its role as transformation engine between the physical OT world and the digital IT world. The EdgeX community selected the octopus as the mascot or \u201cspirit animal\u201d of the project at its inception. Its eight arms and the suckers on the arms represent the sensors. The sensors bring the data into the octopus. Actually, the octopus has nine brains in a way. It has millions of neurons running down each arm; functioning as mini-brains in each of those arms. The arms of the octopus serve as \u201clocal analytics\u201d like that offered by EdgeX. The mascot is affectionately called \u201cEdgey\u201d by the community.","title":"EdgeX History and Naming"},{"location":"V2TopLevelMigration/","text":"V2 Migration Guide EdgeX 2.0 Many backward breaking changes occurred in the EdgeX 2.0 (Ireland) release which may require some migration depending on your use case. This section describes how to migrate from V1 to V2 at a high level and refers the reader to the appropriate detail documents. 
The areas to consider for migrating are: Custom Compose File Database Custom Configuration Custom Device Service Custom Device Profile Custom Pre-Defined Device Custom Applications Service Security eKuiper Rules Custom Compose File The compose files for V2 have many changes from their V1 counterparts. If you have customized a V1 compose file to add additional services and/or add or modify configuration overrides, it is highly recommended that you start with the appropriate V2 compose file and re-add your customizations. It is very likely that the sections for your additional services will need to be migrated to have the proper environment overrides. The best approach is to use the V2 service section that most closely matches your service as a template. The latest V2 compose files can be found here: https://github.com/edgexfoundry/edgex-compose/tree/ireland Compose Builder If the add-on service(s) in your custom compose file are EdgeX released device or app services, it is highly recommended that you use the Compose Builder to generate your custom compose file. The latest V2 Compose Builder can be found here: https://github.com/edgexfoundry/edgex-compose/tree/ireland/compose-builder#readme Database There currently is no migration path for the data stored in the database. The V2 data collections are stored separately from the V1 data collections in the Redis database. Redis is now the only supported database, i.e. support for Mongo has been removed. Note Since the V1 data and V2 data are stored separately, one could create a migration tool and upstream it to the EdgeX community. Warning If the database is not cleared before starting the V2 services, the old V1 data will still reside in the database, taking up memory. It is recommended that you first wipe the database clean before starting V2 Services. That is unless you create a DB migration tool, in which case you will not want to clear the V1 data until it has been migrated. 
See Clearing Redis Database section below for details on how to clear the Redis database. The following sections describe what you need to be aware of for the different services that create data in the database. Core Data The Event/Reading data stored by Core Data is considered transient and of little value once it has become old. The V2 versions of these data collections will be empty until new Events/Readings are received from V2 Device Services. The V1 ValueDescriptors have been removed in V2. Core Metadata Most of the data stored by Core Metadata will be recreated when the V2 versions of the Device Services start-up. The statically declared devices will automatically be created and device discovery will find and add existing devices. Any device profiles, devices, provision watchers created manually via the V1 REST APIs will have to be recreated using the V2 REST API. Any manually-applied AdministrativeState settings will also need to be re-applied. Support Notifications Any Subscriptions created via the V1 REST API will have to be recreated using the V2 REST API. The Notification and Transmission collections will be empty until new notifications are sent using EdgeX 2.0 Support Scheduler The statically declared Interval and IntervalAction will be created automatically. Any Interval and/or IntervalAction created via the V1 REST API will have to be recreated using the V2 REST API. If you have created a custom configuration with additional statically declared Intervals and IntervalActions, see the TOML File section under Custom Configuration below. Application Services Application services use the database only when the Store and Forward capability is enabled. If you do not use this capability you can skip this section. This data collection only has data when that data could not be exported. It is recommended not to upgrade to V2 while the Store and Forward data collection is not empty, unless you are certain the data is no longer needed. 
You can determine if the Store and Forward data collection is empty by setting the Application Service's log level to DEBUG and looking for the following message which is logged every RetryInterval : msg=\" 0 stored data items found for retrying\" Clearing Redis Database Docker When running EdgeX in Docker the simplest way to clear the database is to remove the db-data volume after stopping the V1 EdgeX services. docker-compose -f <compose-file> down docker volume rm $(docker volume ls -q | grep db-data) Now when the V2 EdgeX services are started the database will be cleared of the old v1 data. Snaps Because there are no tools to migrate EdgeX configuration and database, it's not possible to update the edgexfoundry snap from a V1 version to a V2 version. You must remove the V1 snap first, and then install a V2 version of the snap (available from the 2.0 track in the Snap Store). This will result in starting fresh with EdgeX V2 and all V1 data removed. Local If you are running EdgeX locally, i.e. not in Docker or snaps and in non-secure mode you can use the Redis CLI to clear the database. The CLI would have been installed when you installed Redis locally. Run the following command to clear the database: redis-cli FLUSHDB This will not work if running EdgeX V1 in secure mode since you will not have the randomly generated Redis password unless you created an Admin password when you installed Redis. Custom Configuration Consul If you have customized any EdgeX service's configuration (core, support, device, etc.) via Consul, those customizations will need to be re-applied to those services' configuration in Consul once the V2 versions have started and pushed their configuration into Consul. The V2 services now use 2.0 in the Consul path rather than 1.0 . See the TOML File section below for details on migrating configuration for each of the EdgeX services. 
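The Consul path change described above (1.0 becomes 2.0 in the KV path) is mechanical, as a small Go sketch shows. `v2ConsulPath` is a hypothetical helper, and the path shapes are illustrative; the real migration is done by the services themselves pushing fresh configuration into Consul.

```go
package main

import (
	"fmt"
	"strings"
)

// v2ConsulPath rewrites the version segment of an EdgeX Consul KV path
// from 1.0 to 2.0 (first occurrence only).
func v2ConsulPath(v1Path string) string {
	return strings.Replace(v1Path, "/1.0/", "/2.0/", 1)
}

func main() {
	fmt.Println(v2ConsulPath("edgex/core/1.0/core-data/"))
	// edgex/core/2.0/core-data/
}
```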
Example Consul path for V2 .../kv/edgex/core/2.0/core-data/ The same applies for custom device and application service once they have been migrated following the guides referenced in the Custom Device Service and Custom Applications Service sections below. Warning If the Consul data is not cleared prior to running the V2 services, the V1 configuration will remain, taking up memory. The configuration data in Consul can be cleared by deleting the .../kv/edgex/ node with the curl command below prior to starting EdgeX 2.0. Consul is secured in EdgeX 2.0 secure mode, so running the command below will require an access token if it is not done beforehand. curl --request DELETE http://localhost:8500/v1/kv/edgex?recurse=true TOML File If you have custom configuration TOML files for any EdgeX service (core, support, device, etc.) that configuration will need to be migrated. See V2 Migration of Common Configuration for the details on migrating configuration common to all EdgeX services. The following are where you can find the configuration migration specifics for the individual core/support services: Core Data Core Metadata Core Command Support Notifications Support Scheduler System Management Agent (DEPRECATED) Application Services Device Services (common) Device MQTT Device Camera Custom Environment Overrides If you have custom environment overrides for configuration impacted by the V2 changes you will also need to migrate your overrides to use the new name or value depending on what has changed. Refer to the links above and/or below for details for migration of common and/or the service specific configuration to determine if your overrides require migrating. Custom Device Service If you have custom Device Services they will need to be migrated to the V2 version of the Device SDK. See Device Service V2 Migration Guide for complete details. 
Custom Device Profile If you have custom V1 Device Profile(s) for one of the EdgeX Device Services they will need to be migrated to the V2 version of Device Profiles. See Device Service V2 Migration Guide for complete details. Custom Pre-Defined Device If you have custom V1 Pre-Defined Device(s) for one of the EdgeX Device Services they will need to be migrated to the V2 version of Pre-Defined Devices. See Device Service V2 Migration Guide for complete details. Custom Applications Service If you have custom Application Services they will need to be migrated to the V2 version of the App Functions SDK. See Application Services V2 Migration Guide for complete details. Security Settings If you have an add-on service running in secure mode you will need to set additional security service environment variables in EdgeX V2. See Configuring Add-on Service for more details. API Gateway configuration The API gateway has different tools to set TLS and acquire access tokens. See Configuring API Gateway section for complete details. Secure Consul Consul is now secured when running EdgeX 2.0 in secured mode. See Secure Consul section for complete details. Secured API Gateway Admin Port The API Gateway Admin port is now secured when running EdgeX 2.0 in secured mode. See API Gateway Admin Port (TBD) section for complete details. eKuiper Rules If you have rules defined in the eKuiper rules engine that utilize the meta() directive, you will need to migrate your rule(s) to use the new V2 meta names. The following are the meta names that have been changed, added or removed. 
device => deviceName name => resourceName profileName ( new ) pushed ( removed ) created ( removed - use origin) modified ( removed - use origin) floatEncoding ( removed ) Example V1 to V2 rule migration V1 Rule: { \"id\": \"ruleInt64\", \"sql\": \"SELECT Int64 FROM demo WHERE meta(device) = \\\"Random-Integer-Device\\\" \", \"actions\": [ { \"mqtt\": { \"server\": \"tcp://edgex-mqtt-broker:1883\", \"topic\": \"result\", \"clientId\": \"demo_001\" } } ] } V2 Rule: { \"id\": \"ruleInt64\", \"sql\": \"SELECT Int64 FROM demo WHERE meta(deviceName) = \\\"Random-Integer-Device\\\" \", \"actions\": [ { \"mqtt\": { \"server\": \"tcp://edgex-mqtt-broker:1883\", \"topic\": \"result\", \"clientId\": \"demo_001\" } } ] }","title":"V2 Migration Guide"},{"location":"V2TopLevelMigration/#v2-migration-guide","text":"EdgeX 2.0 Many backward breaking changes occurred in the EdgeX 2.0 (Ireland) release which may require some migration depending on your use case. This section describes how to migrate from V1 to V2 at a high level and refers the reader to the appropriate detail documents. The areas to consider for migrating are: Custom Compose File Database Custom Configuration Custom Device Service Custom Device Profile Custom Pre-Defined Device Custom Applications Service Security eKuiper Rules","title":"V2 Migration Guide"},{"location":"V2TopLevelMigration/#custom-compose-file","text":"The compose files for V2 have many changes from their V1 counterparts. If you have customized a V1 compose file to add additional services and/or add or modify configuration overrides, it is highly recommended that you start with the appropriate V2 compose file and re-add your customizations. It is very likely that the sections for your additional services will need to be migrated to have the proper environment overrides. The best approach is to use the V2 service section that most closely matches your service as a template. 
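The straightforward meta() renames listed earlier (device to deviceName, name to resourceName) can be applied mechanically to a rule's SQL, as a small Go sketch shows. `migrateMetaNames` is a hypothetical helper, not part of any EdgeX or eKuiper tool; rules using the removed names (pushed, created, modified, floatEncoding) still need manual review.

```go
package main

import (
	"fmt"
	"strings"
)

// migrateMetaNames applies the V1 -> V2 eKuiper meta() renames to a rule's
// SQL text. Only the simple renames are handled here.
func migrateMetaNames(sql string) string {
	r := strings.NewReplacer(
		"meta(device)", "meta(deviceName)",
		"meta(name)", "meta(resourceName)",
	)
	return r.Replace(sql)
}

func main() {
	v1 := `SELECT Int64 FROM demo WHERE meta(device) = "Random-Integer-Device"`
	fmt.Println(migrateMetaNames(v1))
	// SELECT Int64 FROM demo WHERE meta(deviceName) = "Random-Integer-Device"
}
```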
The latest V2 compose files can be found here: https://github.com/edgexfoundry/edgex-compose/tree/ireland","title":"Custom Compose File"},{"location":"V2TopLevelMigration/#compose-builder","text":"If the add-on service(s) in your custom compose file are EdgeX released device or app services, it is highly recommended that you use the Compose Builder to generate your custom compose file. The latest V2 Compose Builder can be found here: https://github.com/edgexfoundry/edgex-compose/tree/ireland/compose-builder#readme","title":"Compose Builder"},{"location":"V2TopLevelMigration/#database","text":"There currently is no migration path for the data stored in the database. The V2 data collections are stored separately from the V1 data collections in the Redis database. Redis is now the only supported database, i.e. support for Mongo has been removed. Note Since the V1 data and V2 data are stored separately, one could create a migration tool and upstream it to the EdgeX community. Warning If the database is not cleared before starting the V2 services, the old V1 data will still reside in the database, taking up memory. It is recommended that you first wipe the database clean before starting V2 Services. That is unless you create a DB migration tool, in which case you will not want to clear the V1 data until it has been migrated. See Clearing Redis Database section below for details on how to clear the Redis database. The following sections describe what you need to be aware of for the different services that create data in the database.","title":"Database"},{"location":"V2TopLevelMigration/#core-data","text":"The Event/Reading data stored by Core Data is considered transient and of little value once it has become old. The V2 versions of these data collections will be empty until new Events/Readings are received from V2 Device Services. 
The V1 ValueDescriptors have been removed in V2.","title":"Core Data"},{"location":"V2TopLevelMigration/#core-metadata","text":"Most of the data stored by Core Metadata will be recreated when the V2 versions of the Device Services start-up. The statically declared devices will automatically be created and device discovery will find and add existing devices. Any device profiles, devices, provision watchers created manually via the V1 REST APIs will have to be recreated using the V2 REST API. Any manually-applied AdministrativeState settings will also need to be re-applied.","title":"Core Metadata"},{"location":"V2TopLevelMigration/#support-notifications","text":"Any Subscriptions created via the V1 REST API will have to be recreated using the V2 REST API. The Notification and Transmission collections will be empty until new notifications are sent using EdgeX 2.0","title":"Support Notifications"},{"location":"V2TopLevelMigration/#support-scheduler","text":"The statically declared Interval and IntervalAction will be created automatically. Any Interval and/or IntervalAction created via the V1 REST API will have to be recreated using the V2 REST API. If you have created a custom configuration with additional statically declared Intervals and IntervalActions, see the TOML File section under Custom Configuration below.","title":"Support Scheduler"},{"location":"V2TopLevelMigration/#application-services","text":"Application services use the database only when the Store and Forward capability is enabled. If you do not use this capability you can skip this section. This data collection only has data when that data could not be exported. It is recommended not to upgrade to V2 while the Store and Forward data collection is not empty, unless you are certain the data is no longer needed. 
You can determine if the Store and Forward data collection is empty by setting the Application Service's log level to DEBUG and looking for the following message which is logged every RetryInterval : msg=\" 0 stored data items found for retrying\"","title":"Application Services"},{"location":"V2TopLevelMigration/#clearing-redis-database","text":"","title":"Clearing Redis Database"},{"location":"V2TopLevelMigration/#docker","text":"When running EdgeX in Docker the simplest way to clear the database is to remove the db-data volume after stopping the V1 EdgeX services. docker-compose -f <compose-file> down docker volume rm $(docker volume ls -q | grep db-data) Now when the V2 EdgeX services are started the database will be cleared of the old v1 data.","title":"Docker"},{"location":"V2TopLevelMigration/#snaps","text":"Because there are no tools to migrate EdgeX configuration and database, it's not possible to update the edgexfoundry snap from a V1 version to a V2 version. You must remove the V1 snap first, and then install a V2 version of the snap (available from the 2.0 track in the Snap Store). This will result in starting fresh with EdgeX V2 and all V1 data removed.","title":"Snaps"},{"location":"V2TopLevelMigration/#local","text":"If you are running EdgeX locally, i.e. not in Docker or snaps and in non-secure mode you can use the Redis CLI to clear the database. The CLI would have been installed when you installed Redis locally. Run the following command to clear the database: redis-cli FLUSHDB This will not work if running EdgeX V1 in secure mode since you will not have the randomly generated Redis password unless you created an Admin password when you installed Redis.","title":"Local"},{"location":"V2TopLevelMigration/#custom-configuration","text":"","title":"Custom Configuration"},{"location":"V2TopLevelMigration/#consul","text":"If you have customized any EdgeX service's configuration (core, support, device, etc.) 
via Consul, those customizations will need to be re-applied to those services' configuration in Consul once the V2 versions have started and pushed their configuration into Consul. The V2 services now use 2.0 in the Consul path rather than 1.0 . See the TOML File section below for details on migrating configuration for each of the EdgeX services. Example Consul path for V2 .../kv/edgex/core/2.0/core-data/ The same applies for custom device and application service once they have been migrated following the guides referenced in the Custom Device Service and Custom Applications Service sections below. Warning If the Consul data is not cleared prior to running the V2 services, the V1 configuration will remain, taking up memory. The configuration data in Consul can be cleared by deleting the .../kv/edgex/ node with the curl command below prior to starting EdgeX 2.0. Consul is secured in EdgeX 2.0 secure mode, so running the command below will require an access token if it is not done beforehand. curl --request DELETE http://localhost:8500/v1/kv/edgex?recurse=true","title":"Consul"},{"location":"V2TopLevelMigration/#toml-file","text":"If you have custom configuration TOML files for any EdgeX service (core, support, device, etc.) that configuration will need to be migrated. See V2 Migration of Common Configuration for the details on migrating configuration common to all EdgeX services. The following are where you can find the configuration migration specifics for the individual core/support services: Core Data Core Metadata Core Command Support Notifications Support Scheduler System Management Agent (DEPRECATED) Application Services Device Services (common) Device MQTT Device Camera","title":"TOML File"},{"location":"V2TopLevelMigration/#custom-environment-overrides","text":"If you have custom environment overrides for configuration impacted by the V2 changes you will also need to migrate your overrides to use the new name or value depending on what has changed. 
Refer to the links above and/or below for details on migrating the common and/or service-specific configuration to determine if your overrides require migrating.","title":"Custom Environment Overrides"},{"location":"V2TopLevelMigration/#custom-device-service","text":"If you have custom Device Services they will need to be migrated to the V2 version of the Device SDK. See Device Service V2 Migration Guide for complete details.","title":"Custom Device Service"},{"location":"V2TopLevelMigration/#custom-device-profile","text":"If you have custom V1 Device Profile(s) for one of the EdgeX Device Services they will need to be migrated to the V2 version of Device Profiles. See Device Service V2 Migration Guide for complete details.","title":"Custom Device Profile"},{"location":"V2TopLevelMigration/#custom-pre-defined-device","text":"If you have custom V1 Pre-Defined Device(s) for one of the EdgeX Device Services they will need to be migrated to the V2 version of Pre-Defined Devices. See Device Service V2 Migration Guide for complete details.","title":"Custom Pre-Defined Device"},{"location":"V2TopLevelMigration/#custom-applications-service","text":"If you have custom Application Services they will need to be migrated to the V2 version of the App Functions SDK. See Application Services V2 Migration Guide for complete details.","title":"Custom Applications Service"},{"location":"V2TopLevelMigration/#security","text":"","title":"Security"},{"location":"V2TopLevelMigration/#settings","text":"If you have an add-on service running in secure mode you will need to set additional security service environment variables in EdgeX V2. See Configuring Add-on Service for more details.","title":"Settings"},{"location":"V2TopLevelMigration/#api-gateway-configuration","text":"The API gateway has different tools to set TLS and acquire access tokens. 
See Configuring API Gateway section for complete details.","title":"API Gateway configuration"},{"location":"V2TopLevelMigration/#secure-consul","text":"Consul is now secured when running EdgeX 2.0 in secured mode. See Secure Consul section for complete details.","title":"Secure Consul"},{"location":"V2TopLevelMigration/#secured-api-gateway-admin-port","text":"The API Gateway Admin port is now secured when running EdgeX 2.0 in secured mode. See API Gateway Admin Port (TBD) section for complete details.","title":"Secured API Gateway Admin Port"},{"location":"V2TopLevelMigration/#ekuiper-rules","text":"If you have rules defined in the eKuiper rules engine that utilize the meta() directive, you will need to migrate your rule(s) to use the new V2 meta names. The following are the meta names that have been changed, added, or removed. device => deviceName name => resourceName profileName ( new ) pushed ( removed ) created ( removed - use origin) modified ( removed - use origin) floatEncoding ( removed ) Example V1 to V2 rule migration V1 Rule: { \"id\": \"ruleInt64\", \"sql\": \"SELECT Int64 FROM demo WHERE meta(device) = \\\"Random-Integer-Device\\\" \", \"actions\": [ { \"mqtt\": { \"server\": \"tcp://edgex-mqtt-broker:1883\", \"topic\": \"result\", \"clientId\": \"demo_001\" } } ] } V2 Rule: { \"id\": \"ruleInt64\", \"sql\": \"SELECT Int64 FROM demo WHERE meta(deviceName) = \\\"Random-Integer-Device\\\" \", \"actions\": [ { \"mqtt\": { \"server\": \"tcp://edgex-mqtt-broker:1883\", \"topic\": \"result\", \"clientId\": \"demo_001\" } } ] }","title":"eKuiper Rules"},{"location":"api/Ch-APIIntroduction/","text":"Introduction Each of the EdgeX services (core, supporting, management, device and application) implements a RESTful API. This section provides details about each service's API. You will see there is a common set of APIs that all services implement, which are: Version Metrics Config Ping Each EdgeX Service's RESTful API is documented via Swagger. 
A link is provided to the swagger document in the service specific documentation. Also included in this API Reference are a couple of 3rd party services (Configuration/Registry and Rules Engine). These services do not implement the above common APIs and do not have swagger documentation. Links are provided to their appropriate documentation. See the left side navigation for a complete list of services to access their API Reference. EdgeX 2.0 For EdgeX 2.0 all the EdgeX services use new DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all or /label/{label}, provide offset and limit query parameters.","title":"Introduction"},{"location":"api/Ch-APIIntroduction/#introduction","text":"Each of the EdgeX services (core, supporting, management, device and application) implements a RESTful API. This section provides details about each service's API. You will see there is a common set of APIs that all services implement, which are: Version Metrics Config Ping Each EdgeX Service's RESTful API is documented via Swagger. A link is provided to the swagger document in the service specific documentation. Also included in this API Reference are a couple of 3rd party services (Configuration/Registry and Rules Engine). These services do not implement the above common APIs and do not have swagger documentation. Links are provided to their appropriate documentation. See the left side navigation for a complete list of services to access their API Reference. EdgeX 2.0 For EdgeX 2.0 all the EdgeX services use new DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. 
All query APIs (GET) which return multiple objects, such as /all or /label/{label}, provide offset and limit query parameters.","title":"Introduction"},{"location":"api/applications/Ch-APIAppFunctionsSDK/","text":"Application Services The App Functions SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a functions pipeline. This functions pipeline processes messages received by the configured trigger. See Application Functions SDK for more details on this SDK. The App Functions SDK provides a RESTful API that all Application Services inherit from the SDK. Application Service SDK V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the App Functions SDK has changed to use DTOs (Data Transfer Objects) for all responses and for POST requests. One exception is the /api/v2/trigger endpoint that is enabled when the Trigger is configured to be http . This endpoint accepts any data POSTed to it.","title":"Application Services"},{"location":"api/applications/Ch-APIAppFunctionsSDK/#application-services","text":"The App Functions SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a functions pipeline. This functions pipeline processes messages received by the configured trigger. See Application Functions SDK for more details on this SDK. The App Functions SDK provides a RESTful API that all Application Services inherit from the SDK. Application Service SDK V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the App Functions SDK has changed to use DTOs (Data Transfer Objects) for all responses and for POST requests. One exception is the /api/v2/trigger endpoint that is enabled when the Trigger is configured to be http . 
This endpoint accepts any data POSTed to it.","title":"Application Services"},{"location":"api/applications/Ch-APIRulesEngine/","text":"Rules Engine EdgeX Foundry Rules Engine Microservice receives data from the instance of App Service Configurable running the rules-engine profile (aka app-rules-engine) via the EdgeX MessageBus. EdgeX uses eKuiper for the rules engine, which is a separate LF Edge project. See the eKuiper README for more details on this rules engine. eKuiper's RESTful API documentation","title":"Rules Engine"},{"location":"api/applications/Ch-APIRulesEngine/#rules-engine","text":"EdgeX Foundry Rules Engine Microservice receives data from the instance of App Service Configurable running the rules-engine profile (aka app-rules-engine) via the EdgeX MessageBus. EdgeX uses eKuiper for the rules engine, which is a separate LF Edge project. See the eKuiper README for more details on this rules engine. eKuiper's RESTful API documentation","title":"Rules Engine"},{"location":"api/core/Ch-APICoreCommand/","text":"Core Command EdgeX Foundry's Command microservice is a conduit for other services to trigger action on devices and sensors through their managing Device Services. See Core Command for more details about this service. The service provides an API to get the list of commands that can be issued for all devices or a single device. Commands are divided into two groups for each device: GET commands are issued to a device or sensor to get a current value for a particular attribute on the device, such as the current temperature provided by a thermostat sensor, or the on/off status of a light. SET commands are issued to a device or sensor to change the current state or status of a device or one of its attributes, such as setting the speed in RPMs of a motor, or setting the brightness of a dimmer light. 
Core Command V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Command has changed to use DTOs (Data Transfer Objects) for all responses and for all PUT requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Command"},{"location":"api/core/Ch-APICoreCommand/#core-command","text":"EdgeX Foundry's Command microservice is a conduit for other services to trigger action on devices and sensors through their managing Device Services. See Core Command for more details about this service. The service provides an API to get the list of commands that can be issued for all devices or a single device. Commands are divided into two groups for each device: GET commands are issued to a device or sensor to get a current value for a particular attribute on the device, such as the current temperature provided by a thermostat sensor, or the on/off status of a light. SET commands are issued to a device or sensor to change the current state or status of a device or one of its attributes, such as setting the speed in RPMs of a motor, or setting the brightness of a dimmer light. Core Command V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Command has changed to use DTOs (Data Transfer Objects) for all responses and for all PUT requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Command"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/","text":"Configuration and Registry EdgeX uses the 3rd party Consul microservice as the implementations for Configuration and Registry. The RESTful APIs are provided by Consul directly, and several communities supply Consul client libraries for different programming languages, including Go (official), Python, Java, PHP, Scala, Erlang/OTP, Ruby, Node.js, and C#. 
EdgeX 2.0 New for EdgeX 2.0 is Secure Consul when running EdgeX in secure mode. See the Secure Consul section for more details. For the client libraries of different languages, please refer to the list on this page: https://www.consul.io/downloads_tools.html Configuration Management For the current API documentation, please refer to the official Consul web site: https://www.consul.io/intro/getting-started/kv.html https://www.consul.io/docs/agent/http/kv.html Service Registry For the current API documentation, please refer to the official Consul web site: https://www.consul.io/intro/getting-started/services.html https://www.consul.io/docs/agent/http/catalog.html https://www.consul.io/docs/agent/http/agent.html https://www.consul.io/docs/agent/checks.html https://www.consul.io/docs/agent/http/health.html Service Registration While each microservice is starting up, it will connect to Consul to register its endpoint information, including microservice ID, address, port number, and health checking method. After that, other microservices can locate its URL from Consul, and Consul has the ability to monitor its health status. The RESTful API of registration is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_service_register Service Deregistration Before microservices shut down, they have to deregister themselves from Consul. The RESTful API of deregistration is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_service_deregister Service Discovery The Service Discovery feature allows client microservices to query the endpoint information of a particular microservice by its microservice ID or list all available services registered in Consul. 
The RESTful API of querying a service by microservice ID is described on the following Consul page: https://www.consul.io/docs/agent/http/catalog.html#catalog_service The RESTful API of listing all available services is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_services Health Checking Health checking is a critical feature that prevents using services that are unhealthy. Consul provides a variety of methods to check the health of services, including Script + Interval, HTTP + Interval, TCP + Interval, Time to Live (TTL), and Docker + Interval. The detailed introduction and examples of each checking method are described on the following Consul page: https://www.consul.io/docs/agent/checks.html The health checks should be established during service registration. Please see the Service Registration paragraph on this page. Consul UI Consul has a UI which allows you to view the health of registered services and view/edit services' individual configuration. Learn more about the UI on the following Consul page: https://learn.hashicorp.com/tutorials/consul/get-started-explore-the-ui EdgeX 2.0 Please note that as of EdgeX 2.0, Consul can be secured. When EdgeX is running in secure mode with secure Consul , you must provide Consul's access token to get to the UI referenced above. See How to get Consul ACL token for details.","title":"Configuration and Registry"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#configuration-and-registry","text":"EdgeX uses the 3rd party Consul microservice as the implementations for Configuration and Registry. The RESTful APIs are provided by Consul directly, and several communities supply Consul client libraries for different programming languages, including Go (official), Python, Java, PHP, Scala, Erlang/OTP, Ruby, Node.js, and C#. EdgeX 2.0 New for EdgeX 2.0 is Secure Consul when running EdgeX in secure mode. See the Secure Consul section for more details. 
For the client libraries of different languages, please refer to the list on this page: https://www.consul.io/downloads_tools.html","title":"Configuration and Registry"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#configuration-management","text":"For the current API documentation, please refer to the official Consul web site: https://www.consul.io/intro/getting-started/kv.html https://www.consul.io/docs/agent/http/kv.html","title":"Configuration Management"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#service-registry","text":"For the current API documentation, please refer to the official Consul web site: https://www.consul.io/intro/getting-started/services.html https://www.consul.io/docs/agent/http/catalog.html https://www.consul.io/docs/agent/http/agent.html https://www.consul.io/docs/agent/checks.html https://www.consul.io/docs/agent/http/health.html Service Registration While each microservice is starting up, it will connect to Consul to register its endpoint information, including microservice ID, address, port number, and health checking method. After that, other microservices can locate its URL from Consul, and Consul has the ability to monitor its health status. The RESTful API of registration is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_service_register Service Deregistration Before microservices shut down, they have to deregister themselves from Consul. The RESTful API of deregistration is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_service_deregister Service Discovery The Service Discovery feature allows client microservices to query the endpoint information of a particular microservice by its microservice ID or list all available services registered in Consul. 
The RESTful API of querying a service by microservice ID is described on the following Consul page: https://www.consul.io/docs/agent/http/catalog.html#catalog_service The RESTful API of listing all available services is described on the following Consul page: https://www.consul.io/docs/agent/http/agent.html#agent_services Health Checking Health checking is a critical feature that prevents using services that are unhealthy. Consul provides a variety of methods to check the health of services, including Script + Interval, HTTP + Interval, TCP + Interval, Time to Live (TTL), and Docker + Interval. The detailed introduction and examples of each checking method are described on the following Consul page: https://www.consul.io/docs/agent/checks.html The health checks should be established during service registration. Please see the Service Registration paragraph on this page.","title":"Service Registry"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#consul-ui","text":"Consul has a UI which allows you to view the health of registered services and view/edit services' individual configuration. Learn more about the UI on the following Consul page: https://learn.hashicorp.com/tutorials/consul/get-started-explore-the-ui EdgeX 2.0 Please note that as of EdgeX 2.0, Consul can be secured. When EdgeX is running in secure mode with secure Consul , you must provide Consul's access token to get to the UI referenced above. See How to get Consul ACL token for details.","title":"Consul UI"},{"location":"api/core/Ch-APICoreData/","text":"Core Data EdgeX Foundry Core Data microservice includes the Events/Readings database collected from devices/sensors and APIs to expose this database to other services. Its APIs provide access to add, query and delete Events/Readings. See Core Data for more details about this service. 
Core Data V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Data has changed to use DTOs (Data Transfer Objects) for all responses and for all POST requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Data"},{"location":"api/core/Ch-APICoreData/#core-data","text":"EdgeX Foundry Core Data microservice includes the Events/Readings database collected from devices/sensors and APIs to expose this database to other services. Its APIs provide access to add, query and delete Events/Readings. See Core Data for more details about this service. Core Data V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Data has changed to use DTOs (Data Transfer Objects) for all responses and for all POST requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Data"},{"location":"api/core/Ch-APICoreMetadata/","text":"Core Metadata The Core Metadata microservice includes the device/sensor metadata database and APIs to expose this database to other services. In particular, the device provisioning service deposits and manages device metadata through this service's API. See Core Metadata for more details about this service. Core Metadata V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Metadata has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Metadata"},{"location":"api/core/Ch-APICoreMetadata/#core-metadata","text":"The Core Metadata microservice includes the device/sensor metadata database and APIs to expose this database to other services. 
In particular, the device provisioning service deposits and manages device metadata through this service's API. See Core Metadata for more details about this service. Core Metadata V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Core Metadata has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Core Metadata"},{"location":"api/devices/Ch-APIDeviceSDK/","text":"Device Services The EdgeX Foundry Device Service Software Development Kit (SDK) takes the Developer through the step-by-step process to create an EdgeX Foundry Device Service microservice. See Device Service SDK for more details on this SDK. The Device Service SDK provides a RESTful API that all Device Services inherit from the SDK. Device SDK V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Device Service SDK has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT requests.","title":"Device Services"},{"location":"api/devices/Ch-APIDeviceSDK/#device-services","text":"The EdgeX Foundry Device Service Software Development Kit (SDK) takes the Developer through the step-by-step process to create an EdgeX Foundry Device Service microservice. See Device Service SDK for more details on this SDK. The Device Service SDK provides a RESTful API that all Device Services inherit from the SDK. Device SDK V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Device Service SDK has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT requests.","title":"Device Services"},{"location":"api/management/Ch-APISystemManagement/","text":"System Management Agent EdgeX 2.0 System Management Agent has been deprecated for EdgeX 2.0. 
While it is still available, it may be removed in a future release and no further development is planned for it. The EdgeX System Management Agent (SMA) microservice exposes the EdgeX management service API to 3rd party systems. In other words, the Agent serves as a proxy for system management service API calls into each microservice. See System Management Agent for more details about this service. System Management V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the System Management Agent has changed to use DTOs (Data Transfer Objects) for all responses and for all POST requests.","title":"System Management Agent"},{"location":"api/management/Ch-APISystemManagement/#system-management-agent","text":"EdgeX 2.0 System Management Agent has been deprecated for EdgeX 2.0. While it is still available, it may be removed in a future release and no further development is planned for it. The EdgeX System Management Agent (SMA) microservice exposes the EdgeX management service API to 3rd party systems. In other words, the Agent serves as a proxy for system management service API calls into each microservice. See System Management Agent for more details about this service. System Management V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the System Management Agent has changed to use DTOs (Data Transfer Objects) for all responses and for all POST requests.","title":"System Management Agent"},{"location":"api/support/Ch-APISupportNotifications/","text":"Support Notifications When a person or a system needs to be informed of something discovered on the node by another microservice on the node, EdgeX Foundry's Support Notifications microservice delivers that information. 
Examples of Alerts and Notifications that other services might need to broadcast include sensor data detected outside of certain parameters, usually detected by a Rules Engine service, or a system or service malfunction usually detected by system management services. See Support Notifications for more details about this service. Support Notifications V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Support Notifications has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Support Notifications"},{"location":"api/support/Ch-APISupportNotifications/#support-notifications","text":"When a person or a system needs to be informed of something discovered on the node by another microservice on the node, EdgeX Foundry's Support Notifications microservice delivers that information. Examples of Alerts and Notifications that other services might need to broadcast include sensor data detected outside of certain parameters, usually detected by a Rules Engine service, or a system or service malfunction usually detected by system management services. See Support Notifications for more details about this service. Support Notifications V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Support Notifications has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Support Notifications"},{"location":"api/support/Ch-APISupportScheduler/","text":"Support Scheduler EdgeX Foundry's Support Scheduler microservice is used to schedule actions to occur on specific intervals. See Support Scheduler for more details about this service. 
Support Scheduler V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Support Scheduler has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Support Scheduler"},{"location":"api/support/Ch-APISupportScheduler/#support-scheduler","text":"EdgeX Foundry's Support Scheduler microservice is used to schedule actions to occur on specific intervals. See Support Scheduler for more details about this service. Support Scheduler V2 API Swagger Documentation EdgeX 2.0 For EdgeX 2.0 the REST API provided by the Support Scheduler has changed to use DTOs (Data Transfer Objects) for all responses and for all POST/PUT/PATCH requests. All query APIs (GET) which return multiple objects, such as /all, provide offset and limit query parameters.","title":"Support Scheduler"},{"location":"design/","text":"Architecture Decision Records Folder This folder contains EdgeX Foundry decision records (ADR) and legacy design / requirement documents. /design /adr (architecture decision records) /legacy-design (legacy design documents) /legacy-requirements (legacy requirement documents) At the root of the ADR folder (/design/adr) are decisions that are relevant to multiple parts of the project (aka cross cutting concerns ). Sub folders under the ADR folder contain decisions relevant to the specific area of the project and are essentially set up along working group lines (security, core, application, etc.). Naming and Formatting ADR documents are requested to follow the RFC (request for comments) naming standard. Specifically, authors should name their documents with a sequentially increasing integer (or serial number) and then the architectural design topic: (sequence number - topic). Example: 0001-SeparateConfigurationInterface. The sequence is a global sequence for all EdgeX ADRs. 
Per RFC and Michael Nygard suggestions, the makeup of the ADR document should generally include: Title Status (proposed, accepted, rejected, deprecated, superseded, etc.) Context and Proposed Design Decision Consequences/considerations References Document history is maintained via GitHub history. Ownership EdgeX WG chairmen own the sub folders and included documents associated with their work group. The EdgeX TSC chair/vice chair are responsible for the root level, cross cutting concern documents. Review and Approval ADRs shall be submitted as PRs to the appropriate edgex-docs folder based on the Architecture Decision Records Folder section above. The status of the PR (inside the document) shall be listed as proposed during this period. The PRs shall be left open (not merged) so that comments against the PR can be collected during the proposal period. The PRs can be approved and merged only after a formal vote of approval is conducted by the TSC. On approval of the ADR by the TSC, the status of the ADR should be changed to accepted . If the ADR is not approved by the TSC, the status in the document should be changed to rejected and the PR closed. Legacy A separate folder (/design/legacy-design) is used for legacy design/architecture decisions. A separate folder (/design/legacy-requirements) is used for legacy requirements documents. WG chairmen take the responsibility for posting legacy material into the applicable folders. Table of Contents A README with a table of contents for current documents is located here . Legacy Design and Requirements have their own Table of Contents as well and are located in their respective directories at /legacy-design and /legacy-requirements . 
Document authors are asked to keep the TOC updated with each new document entry.","title":"Architecture Decision Records Folder"},{"location":"design/#architecture-decision-records-folder","text":"This folder contains EdgeX Foundry decision records (ADR) and legacy design / requirement documents. /design /adr (architecture decision records) /legacy-design (legacy design documents) /legacy-requirements (legacy requirement documents) At the root of the ADR folder (/design/adr) are decisions that are relevant to multiple parts of the project (aka cross cutting concerns ). Sub folders under the ADR folder contain decisions relevant to the specific area of the project and are essentially set up along working group lines (security, core, application, etc.).","title":"Architecture Decision Records Folder"},{"location":"design/#naming-and-formatting","text":"ADR documents are requested to follow the RFC (request for comments) naming standard. Specifically, authors should name their documents with a sequentially increasing integer (or serial number) and then the architectural design topic: (sequence number - topic). Example: 0001-SeparateConfigurationInterface. The sequence is a global sequence for all EdgeX ADRs. Per RFC and Michael Nygard suggestions, the makeup of the ADR document should generally include: Title Status (proposed, accepted, rejected, deprecated, superseded, etc.) Context and Proposed Design Decision Consequences/considerations References Document history is maintained via GitHub history.","title":"Naming and Formatting"},{"location":"design/#ownership","text":"EdgeX WG chairmen own the sub folders and included documents associated with their work group. The EdgeX TSC chair/vice chair are responsible for the root level, cross cutting concern documents.","title":"Ownership"},{"location":"design/#review-and-approval","text":"ADRs shall be submitted as PRs to the appropriate edgex-docs folder based on the Architecture Decision Records Folder section above. 
The status of the PR (inside the document) shall be listed as proposed during this period. The PRs shall be left open (not merged) so that comments against the PR can be collected during the proposal period. The PRs can be approved and merged only after a formal vote of approval is conducted by the TSC. On approval of the ADR by the TSC, the status of the ADR should be changed to accepted . If the ADR is not approved by the TSC, the status in the document should be changed to rejected and the PR closed.","title":"Review and Approval"},{"location":"design/#legacy","text":"A separate folder (/design/legacy-design) is used for legacy design/architecture decisions. A separate folder (/design/legacy-requirements) is used for legacy requirements documents. WG chairmen take the responsibility for posting legacy material into the applicable folders.","title":"Legacy"},{"location":"design/#table-of-contents","text":"A README with a table of contents for current documents is located here . Legacy Design and Requirements have their own Table of Contents as well and are located in their respective directories at /legacy-design and /legacy-requirements . 
Document authors are asked to keep the TOC updated with each new document entry.","title":"Table of Contents"},{"location":"design/TOC/","text":"ADR Table of Contents Name/Link Short Description 0001 Registry Refactor Separate out Registry and Configuration APIs 0002 Array Datatypes Allow Arrays to be held in Readings 0003 V2 API Principles Principles and Goals of V2 API Design 0004 Feature Flags Feature Flag Implementation 0005 Service Self Config Init Service Self Config Init & Config Seed Removal 0006 Metrics Collection Collection of service telemetry data 0007 Release Automation Overview of Release Automation Flow for EdgeX 0008 Secret Distribution Creation and Distribution of Secrets 0009 Secure Bootstrapping Secure Bootstrapping of EdgeX 0011 Device Service REST API The REST API for Device Services in EdgeX v2.x 0012 Device Service Filters Device Service event/reading filters 0013 Device Service Events via Message Bus Device Services send Events via Message Bus 0014 Secret Provider for All Secret Provider for All EdgeX Services 0015 Encryption between microservices Details conditions under which TLS is or is not used 0016 Container Image Guidelines Documents best practices for security of docker images 0017 Securing access to Consul Access control and authorization strategy for Consul 0018 Service Registry Service registry usage for EdgeX services 0019 EdgeX-CLI V2 EdgeX-CLI V2 Implementation 0020 Delay start services (SPIFFE/SPIRE) Secret store tokens for delayed start services 0021 Device Profile Changes Rules on device profile modifications","title":"ADR Table of Contents"},{"location":"design/TOC/#adr-table-of-contents","text":"Name/Link Short Description 0001 Registry Refactor Separate out Registry and Configuration APIs 0002 Array Datatypes Allow Arrays to be held in Readings 0003 V2 API Principles Principles and Goals of V2 API Design 0004 Feature Flags Feature Flag Implementation 0005 Service Self Config Init Service Self Config Init & Config Seed 
Removal 0006 Metrics Collection Collection of service telemetry data 0007 Release Automation Overview of Release Automation Flow for EdgeX 0008 Secret Distribution Creation and Distribution of Secrets 0009 Secure Bootstrapping Secure Bootstrapping of EdgeX 0011 Device Service REST API The REST API for Device Services in EdgeX v2.x 0012 Device Service Filters Device Service event/reading filters 0013 Device Service Events via Message Bus Device Services send Events via Message Bus 0014 Secret Provider for All Secret Provider for All EdgeX Services 0015 Encryption between microservices Details conditions under which TLS is or is not used 0016 Container Image Guidelines Documents best practices for security of docker images 0017 Securing access to Consul Access control and authorization strategy for Consul 0018 Service Registry Service registry usage for EdgeX services 0019 EdgeX-CLI V2 EdgeX-CLI V2 Implementation 0020 Delay start services (SPIFFE/SPIRE) Secret store tokens for delayed start services 0021 Device Profile Changes Rules on device profile modifications","title":"ADR Table of Contents"},{"location":"design/adr/0001-Registy-Refactor/","text":"Registry Refactoring Design Status Context Proposed Design Decision Consequences References Status Approved Context Currently the Registry Client in go-mod-registry module provides Service Configuration and Service Registration functionality. The goal of this design is to refactor the go-mod-registry module for separation of concerns. The Service Registry functionality will stay in the go-mod-registry module and the Service Configuration functionality will be separated out into a new go-mod-configuration module. This allows for implementations for different providers for each, another aspect of separation of concerns. Proposed Design Provider Connection information An aspect of using the current Registry Client is \" Where do the services get the Registry Provider connection information? 
\" Currently all services either pull this connection information from the local configuration file or from the edgex_registry environment variable. Device Services also have the option to specify this connection information on the command line. With the refactoring for separation of concerns, this issue changes to \" Where do the services get the Configuration Provider connection information? \" There have been concerns voiced by some in the EdgeX community that storing this Configuration Provider connection information in the configuration which ultimately is provided by that provider is not the right design. This design proposes that all services will use the command line option approach with the ability to override with an environment variable. The Configuration Provider information will not be stored in each service's local configuration file. The edgex_registry environment variable will be deprecated. The Registry Provider connection information will continue to be stored in each service's configuration either locally or from the Configuration Provider same as all other EdgeX Client and Database connection information. Command line option changes The new -cp/-configProvider command line option will be added to each service which will have a value specified using the format {type}.{protocol}://{host}:{port} e.g consul.http://localhost:8500 . This new command line option will be overridden by the edgex_configuration_provider environment variable when it is set. This environment variable's value has the same format as the command line option value. If no value is provided to the -cp/-configProvider option, i.e. just -cp , and no environment variable override is specified, the default value of consul.http://localhost:8500 will be used. if -cp/-configProvider not used and no environment variable override is specified the local configuration file is used, as is it now. All services will log the Configuration Provider connection information that is used. 
The existing -r/-registry command line option will be retained as a Boolean flag to indicate use of the Registry. Bootstrap Changes All services in the edgex-go mono repo use the new common bootstrap functionality. The plan is to move this code to a go module for the Device Service and App Functions SDKs to also use. The current bootstrap modules pkg/bootstrap/configuration/registry.go and pkg/bootstrap/container/registry.go will be refactored to use the new Configuration Client and be renamed appropriately. New bootstrap modules will be created for using the revised version of Registry Client . The current use of useRegistry and registryClient for service configuration will be changed to appropriate names for using the new Configuration Client . The current use of useRegistry and registryClient for service registration will be retained for service registration. A call to the new Unregister() API will be added to shutdown code for all services. Config-Seed Changes The config-seed service will have similar changes for specifying the Configuration Provider connection information since it doesn't use the common bootstrap package. Beyond that it will have minor changes for switching to using the Configuration Client interface, which will just be imports and appropriate name refactoring. Config Endpoint Changes Since the Configuration Provider connection information will no longer be in the service's configuration struct, the config endpoint processing will be modified to add the Configuration Provider connection information to the resulting JSON created from the service's configuration. Client Interfaces changes Current Registry Client The following is the current Registry Client Interface type Client interface { Register () error HasConfiguration () ( bool , error ) PutConfigurationToml ( configuration * toml . 
Tree , overwrite bool ) error PutConfiguration ( configStruct interface {}, overwrite bool ) error GetConfiguration ( configStruct interface {}) ( interface {}, error ) WatchForChanges ( updateChannel chan <- interface {}, errorChannel chan <- error , configuration interface {}, waitKey string ) IsAlive () bool ConfigurationValueExists ( name string ) ( bool , error ) GetConfigurationValue ( name string ) ([] byte , error ) PutConfigurationValue ( name string , value [] byte ) error GetServiceEndpoint ( serviceId string ) ( types . ServiceEndpoint , error ) IsServiceAvailable ( serviceId string ) error } New Configuration Client The following is the new Configuration Client Interface which contains the Service Configuration specific portion from the above current Registry Client . type Client interface { HasConfiguration () ( bool , error ) PutConfigurationFromToml ( configuration * toml . Tree , overwrite bool ) error PutConfiguration ( configStruct interface {}, overwrite bool ) error GetConfiguration ( configStruct interface {}) ( interface {}, error ) WatchForChanges ( updateChannel chan <- interface {}, errorChannel chan <- error , configuration interface {}, waitKey string ) IsAlive () bool ConfigurationValueExists ( name string ) ( bool , error ) GetConfigurationValue ( name string ) ([] byte , error ) PutConfigurationValue ( name string , value [] byte ) error } Revised Registry Client The following is the revised Registry Client Interface, which contains the Service Registry specific portion from the above current Registry Client . The UnRegister() API has been added per issue #20 type Client interface { Register () error UnRegister () error IsAlive () bool GetServiceEndpoint ( serviceId string ) ( types . 
ServiceEndpoint , error ) IsServiceAvailable ( serviceId string ) error } Client Configuration Structs Current Registry Client Config The following is the current struct used to configure the current Registry Client type Config struct { Protocol string Host string Port int Type string Stem string ServiceKey string ServiceHost string ServicePort int ServiceProtocol string CheckRoute string CheckInterval string } New Configuration Client Config The following is the new struct that will be used to configure the new Configuration Client from the command line option or environment variable values. The Service Registry portion has been removed from the above existing Registry Client Config type Config struct { Protocol string Host string Port int Type string BasePath string ServiceKey string } New Registry Client Config The following is the revised struct that will be used to configure the new Registry Client from the information in the service's configuration. This is mostly unchanged from the existing Registry Client Config , except that the Stem for configuration has been removed type Config struct { Protocol string Host string Port int Type string ServiceKey string ServiceHost string ServicePort int ServiceProtocol string CheckRoute string CheckInterval string } Provider Implementations The current Consul implementation of the Registry Client will be split up into implementations for the new Configuration Client in the new go-mod-configuration module and the revised Registry Client in the existing go-mod-registry module. Decision It was decided to move forward with the above design. After the initial ADR was approved, it was decided to retain the -r/--registry command-line flag and not add the Enabled field in the Registry provider configuration. Consequences Once the refactoring of go-mod-registry and go-mod-configuration are complete, they will need to be integrated into the new go-mod-bootstrap. Part of this integration will be the Command line option changes above. 
At this point the edgex-go services will be integrated with the new Registry and Configuration providers. The App Services SDK and Device Services SDK will then need to integrate go-mod-bootstrap to take advantage of these new providers. References Registry Abstraction - Decouple EdgeX services from Consul (Previous design)","title":"Registry Refactoring Design"},{"location":"design/adr/0001-Registy-Refactor/#registry-refactoring-design","text":"Status Context Proposed Design Decision Consequences References","title":"Registry Refactoring Design"},{"location":"design/adr/0001-Registy-Refactor/#status","text":"Approved","title":"Status"},{"location":"design/adr/0001-Registy-Refactor/#context","text":"Currently the Registry Client in go-mod-registry module provides Service Configuration and Service Registration functionality. The goal of this design is to refactor the go-mod-registry module for separation of concerns. The Service Registry functionality will stay in the go-mod-registry module and the Service Configuration functionality will be separated out into a new go-mod-configuration module. This allows for implementations for different providers for each, another aspect of separation of concerns.","title":"Context"},{"location":"design/adr/0001-Registy-Refactor/#proposed-design","text":"","title":"Proposed Design"},{"location":"design/adr/0001-Registy-Refactor/#provider-connection-information","text":"An aspect of using the current Registry Client is \" Where do the services get the Registry Provider connection information? \" Currently all services either pull this connection information from the local configuration file or from the edgex_registry environment variable. Device Services also have the option to specify this connection information on the command line. With the refactoring for separation of concerns, this issue changes to \" Where do the services get the Configuration Provider connection information? 
\" There have been concerns voiced by some in the EdgeX community that storing this Configuration Provider connection information in the configuration which ultimately is provided by that provider is not the right design. This design proposes that all services will use the command line option approach with the ability to override with an environment variable. The Configuration Provider information will not be stored in each service's local configuration file. The edgex_registry environment variable will be deprecated. The Registry Provider connection information will continue to be stored in each service's configuration either locally or from the Configuration Provider same as all other EdgeX Client and Database connection information.","title":"Provider Connection information"},{"location":"design/adr/0001-Registy-Refactor/#command-line-option-changes","text":"The new -cp/-configProvider command line option will be added to each service which will have a value specified using the format {type}.{protocol}://{host}:{port} e.g consul.http://localhost:8500 . This new command line option will be overridden by the edgex_configuration_provider environment variable when it is set. This environment variable's value has the same format as the command line option value. If no value is provided to the -cp/-configProvider option, i.e. just -cp , and no environment variable override is specified, the default value of consul.http://localhost:8500 will be used. if -cp/-configProvider not used and no environment variable override is specified the local configuration file is used, as is it now. All services will log the Configuration Provider connection information that is used. 
The existing -r/-registry command line option will be retained as a Boolean flag to indicate use of the Registry.","title":"Command line option changes"},{"location":"design/adr/0001-Registy-Refactor/#bootstrap-changes","text":"All services in the edgex-go mono repo use the new common bootstrap functionality. The plan is to move this code to a go module for the Device Service and App Functions SDKs to also use. The current bootstrap modules pkg/bootstrap/configuration/registry.go and pkg/bootstrap/container/registry.go will be refactored to use the new Configuration Client and be renamed appropriately. New bootstrap modules will be created for using the revised version of Registry Client . The current use of useRegistry and registryClient for service configuration will be changed to appropriate names for using the new Configuration Client . The current use of useRegistry and registryClient for service registration will be retained for service registration. A call to the new Unregister() API will be added to shutdown code for all services.","title":"Bootstrap Changes"},{"location":"design/adr/0001-Registy-Refactor/#config-seed-changes","text":"The config-seed service will have similar changes for specifying the Configuration Provider connection information since it doesn't use the common bootstrap package. 
Beyond that it will have minor changes for switching to using the Configuration Client interface, which will just be imports and appropriate name refactoring.","title":"Config-Seed Changes"},{"location":"design/adr/0001-Registy-Refactor/#config-endpoint-changes","text":"Since the Configuration Provider connection information will no longer be in the service's configuration struct, the config endpoint processing will be modified to add the Configuration Provider connection information to the resulting JSON created from the service's configuration.","title":"Config Endpoint Changes"},{"location":"design/adr/0001-Registy-Refactor/#client-interfaces-changes","text":"","title":"Client Interfaces changes"},{"location":"design/adr/0001-Registy-Refactor/#current-registry-client","text":"The following is the current Registry Client Interface type Client interface { Register () error HasConfiguration () ( bool , error ) PutConfigurationToml ( configuration * toml . Tree , overwrite bool ) error PutConfiguration ( configStruct interface {}, overwrite bool ) error GetConfiguration ( configStruct interface {}) ( interface {}, error ) WatchForChanges ( updateChannel chan <- interface {}, errorChannel chan <- error , configuration interface {}, waitKey string ) IsAlive () bool ConfigurationValueExists ( name string ) ( bool , error ) GetConfigurationValue ( name string ) ([] byte , error ) PutConfigurationValue ( name string , value [] byte ) error GetServiceEndpoint ( serviceId string ) ( types . ServiceEndpoint , error ) IsServiceAvailable ( serviceId string ) error }","title":"Current Registry Client"},{"location":"design/adr/0001-Registy-Refactor/#new-configuration-client","text":"The following is the new Configuration Client Interface which contains the Service Configuration specific portion from the above current Registry Client . type Client interface { HasConfiguration () ( bool , error ) PutConfigurationFromToml ( configuration * toml . 
Tree , overwrite bool ) error PutConfiguration ( configStruct interface {}, overwrite bool ) error GetConfiguration ( configStruct interface {}) ( interface {}, error ) WatchForChanges ( updateChannel chan <- interface {}, errorChannel chan <- error , configuration interface {}, waitKey string ) IsAlive () bool ConfigurationValueExists ( name string ) ( bool , error ) GetConfigurationValue ( name string ) ([] byte , error ) PutConfigurationValue ( name string , value [] byte ) error }","title":"New Configuration Client"},{"location":"design/adr/0001-Registy-Refactor/#revised-registry-client","text":"The following is the revised Registry Client Interface, which contains the Service Registry specific portion from the above current Registry Client . The UnRegister() API has been added per issue #20 type Client interface { Register () error UnRegister () error IsAlive () bool GetServiceEndpoint ( serviceId string ) ( types . ServiceEndpoint , error ) IsServiceAvailable ( serviceId string ) error }","title":"Revised Registry Client"},{"location":"design/adr/0001-Registy-Refactor/#client-configuration-structs","text":"","title":"Client Configuration Structs"},{"location":"design/adr/0001-Registy-Refactor/#current-registry-client-config","text":"The following is the current struct used to configure the current Registry Client type Config struct { Protocol string Host string Port int Type string Stem string ServiceKey string ServiceHost string ServicePort int ServiceProtocol string CheckRoute string CheckInterval string }","title":"Current Registry Client Config"},{"location":"design/adr/0001-Registy-Refactor/#new-configuration-client-config","text":"The following is the new struct that will be used to configure the new Configuration Client from the command line option or environment variable values. 
The Service Registry portion has been removed from the above existing Registry Client Config type Config struct { Protocol string Host string Port int Type string BasePath string ServiceKey string }","title":"New Configuration Client Config"},{"location":"design/adr/0001-Registy-Refactor/#new-registry-client-config","text":"The following is the revised struct that will be used to configure the new Registry Client from the information in the service's configuration. This is mostly unchanged from the existing Registry Client Config , except that the Stem for configuration has been removed type Config struct { Protocol string Host string Port int Type string ServiceKey string ServiceHost string ServicePort int ServiceProtocol string CheckRoute string CheckInterval string }","title":"New Registry Client Config"},{"location":"design/adr/0001-Registy-Refactor/#provider-implementations","text":"The current Consul implementation of the Registry Client will be split up into implementations for the new Configuration Client in the new go-mod-configuration module and the revised Registry Client in the existing go-mod-registry module.","title":"Provider Implementations"},{"location":"design/adr/0001-Registy-Refactor/#decision","text":"It was decided to move forward with the above design. After the initial ADR was approved, it was decided to retain the -r/--registry command-line flag and not add the Enabled field in the Registry provider configuration.","title":"Decision"},{"location":"design/adr/0001-Registy-Refactor/#consequences","text":"Once the refactoring of go-mod-registry and go-mod-configuration are complete, they will need to be integrated into the new go-mod-bootstrap. Part of this integration will be the Command line option changes above. At this point the edgex-go services will be integrated with the new Registry and Configuration providers. 
The App Services SDK and Device Services SDK will then need to integrate go-mod-bootstrap to take advantage of these new providers.","title":"Consequences"},{"location":"design/adr/0001-Registy-Refactor/#references","text":"Registry Abstraction - Decouple EdgeX services from Consul (Previous design)","title":"References"},{"location":"design/adr/0004-Feature-Flags/","text":"Feature Flag Proposal Status Accepted Context Out of the proposal for releasing on time, the community suggested that we take a closer look at feature-flags. Feature-flags are typically intended for users of an application to turn on or off new or unused features. This gives users more control to adopt a feature-set at their own pace \u2013 i.e. disabling store and forward in App Functions SDK without breaking backward compatibility. It can also be used to indicate to developers the features that are more often used than others and can provide valuable feedback to enhance and continue a given feature. To gain that insight into the use of any given feature, we would require not only instrumentation of the code but a central location in the cloud (i.e. a TIG stack) for the telemetry to be ingested and in turn reported in order to provide the feedback to the developers. This becomes infeasible primarily because of the cloud infrastructure costs, privacy concerns, and other unforeseen legal reasons for sending \u201cUsage Metrics\u201d of an EdgeX installation back to a central entity such as the Linux Foundation, among many others. Without the valuable feedback loop, feature-flags don\u2019t provide much value on their own and they certainly don\u2019t assist in increasing velocity to help us deliver on time. Putting aside one of the major value propositions listed above, feasibility of a feature flag \u201cmodule\u201d was still evaluated. The simplest approach would be to leverage configuration following a certain format such as FF_[NewFeatureName]=true/false. This is similar to what is done today. 
Turning on/off security is an example, turning on/off the registry is another. Expanding this further with a module could offer standardization of controlling a given feature such as featurepkg.Register(\u201cMyNewFeature\u201d) or featurepkg.IsOn(\u201cMyNewFeature\u201d) . However, this really is just adding complexity on top of the underlying configuration that is already implemented. If we were to consider doing something like this, it lends itself to central management of features within the EdgeX framework\u2014either its own service or possibly added as part of the SMA. This could help address concerns around feature dependencies and compatibility. Feature A on Service X requires Feature B and Feature C on Service Y. Continuing down this path starts to beget a fairly large impact on EdgeX for value that cannot be fully realized. Decision The community should NOT pursue a full-fledged feature flag implementation either homegrown or off-the-shelf. However, it should be encouraged to develop features with a holistic perspective and consider leveraging configuration options to turn them on/off. In other words, a feature that compiles and works under common scenarios, and that doesn\u2019t impact any other functionality even if it isn\u2019t fully tested with edge cases, should be encouraged. Consequences Allows more focus on the many more competing priorities for this release. Minimal impact to development cycles and release schedule","title":"Feature Flag Proposal"},{"location":"design/adr/0004-Feature-Flags/#feature-flag-proposal","text":"","title":"Feature Flag Proposal"},{"location":"design/adr/0004-Feature-Flags/#status","text":"Accepted","title":"Status"},{"location":"design/adr/0004-Feature-Flags/#context","text":"Out of the proposal for releasing on time, the community suggested that we take a closer look at feature-flags. Feature-flags are typically intended for users of an application to turn on or off new or unused features. 
This gives users more control to adopt a feature-set at their own pace \u2013 i.e. disabling store and forward in App Functions SDK without breaking backward compatibility. It can also be used to indicate to developers the features that are more often used than others and can provide valuable feedback to enhance and continue a given feature. To gain that insight into the use of any given feature, we would require not only instrumentation of the code but a central location in the cloud (i.e. a TIG stack) for the telemetry to be ingested and in turn reported in order to provide the feedback to the developers. This becomes infeasible primarily because of the cloud infrastructure costs, privacy concerns, and other unforeseen legal reasons for sending \u201cUsage Metrics\u201d of an EdgeX installation back to a central entity such as the Linux Foundation, among many others. Without the valuable feedback loop, feature-flags don\u2019t provide much value on their own and they certainly don\u2019t assist in increasing velocity to help us deliver on time. Putting aside one of the major value propositions listed above, feasibility of a feature flag \u201cmodule\u201d was still evaluated. The simplest approach would be to leverage configuration following a certain format such as FF_[NewFeatureName]=true/false. This is similar to what is done today. Turning on/off security is an example, turning on/off the registry is another. Expanding this further with a module could offer standardization of controlling a given feature such as featurepkg.Register(\u201cMyNewFeature\u201d) or featurepkg.IsOn(\u201cMyNewFeature\u201d) . However, this really is just adding complexity on top of the underlying configuration that is already implemented. If we were to consider doing something like this, it lends itself to central management of features within the EdgeX framework\u2014either its own service or possibly added as part of the SMA. 
This could help address concerns around feature dependencies and compatibility. Feature A on Service X requires Feature B and Feature C on Service Y. Continuing down this path starts to beget a fairly large impact on EdgeX for value that cannot be fully realized.","title":"Context"},{"location":"design/adr/0004-Feature-Flags/#decision","text":"The community should NOT pursue a full-fledged feature flag implementation either homegrown or off-the-shelf. However, it should be encouraged to develop features with a holistic perspective and consider leveraging configuration options to turn them on/off. In other words, a feature that compiles and works under common scenarios, and that doesn\u2019t impact any other functionality even if it isn\u2019t fully tested with edge cases, should be encouraged.","title":"Decision"},{"location":"design/adr/0004-Feature-Flags/#consequences","text":"Allows more focus on the many more competing priorities for this release. Minimal impact to development cycles and release schedule","title":"Consequences"},{"location":"design/adr/0005-Service-Self-Config/","text":"Service Self Config Init & Config Seed Removal Status approved - TSC vote on 3/25/20 for Geneva release NOTE: this ADR does not address high availability considerations and concerns. EdgeX, in general, has a number of unanswered questions with regard to HA architecture and this design adds to those considerations. Context Since its debut, EdgeX has had a configuration seed service (config-seed) that, on start of EdgeX, deposits configuration for all the services into Consul (our configuration/registry service). For development purposes, or on resource constrained platforms, EdgeX can be run without Consul with services simply reading configuration from the filesystem. 
While this process has nominally worked for several releases of EdgeX, there have always been some issues with this extra initialization process (config-seed), not least of which are: - race conditions on the part of the services, as they bootstrap, coming up before the config-seed completes its deposit of configuration into Consul - how to deal with \"overrides\" such as environmental variable provided configuration overrides, as the override is often specific to a service but has to be in place for config-seed in order to take effect - need for an additional service that is only there for init and then dies (confusing to users) NOTE - for historical purposes, it should be noted that config-seed only writes configuration into the configuration/registry service (Consul) once on the first start of EdgeX. On subsequent starts of EdgeX, config-seed checks to see if it has already populated the configuration/registry service and will not rewrite configuration again (unless the --overwrite flag is used). The design/architectural proposal, therefore, is: - removal of the config-seed service (removing cmd/config-seed from the edgex-go repository) - have each EdgeX micro service \"self seed\" - that is, seed Consul with its own required configuration on bootstrap of the service. Details of that bootstrapping process are below. Command Line Options All EdgeX services support a common set of command-line options, some combination of which are required on startup for a service to interact with the rest of EdgeX. Command line options are not set by any configuration. Command line options include: --configProvider or -cp (the configuration provider location URL - prefixed with consul. 
- for example: -cp=consul.http://localhost:8500 ) --overwrite or -o (overwrite the configuration in the configuration provider) --file or -f (the configuration filename - configuration.toml is used by default if the configuration filename is not provided) --profile or -p (the name of a sub directory in the configuration directory in which a profile-specific configuration file is found. This has no default. If not specified, the configuration file is read from the configuration directory) --confdir or -c (the directory where the configuration file is found - ./res is used by default if the confdir is not specified, where \".\" is the convention on Linux/Unix/MacOS which means current directory) --registry or -r (string indicating use of the registry) The distinction of command line options versus configuration will be important later in this ADR. Two command line options (-o for overwrite and -r for registry) are not overridable by environmental variables. NOTES: The --overwrite command line option should be used sparingly and with expert knowledge of EdgeX; in particular knowledge of how it operates and where/how it gets its configuration on restarts, etc. Ordinarily, --overwrite is provided as a means to support development needs. Use of --overwrite permanently in production environments is highly discouraged. Configuration Initialization Each service has (or shall have if not providing it already) a local configuration file. The service may use the local configuration file on initialization of the service (aka bootstrap of the service) depending on command line options and environmental variables (see below) provided at startup. Using a configuration provider When the configuration provider is specified, the service will call on the configuration provider (Consul) and check if the top-level (root) namespace for the service exists. 
If configuration at the top-level (root) namespace exists, it indicates that the service has already populated its configuration into the configuration provider in a prior startup. If the service finds the top-level (root) namespace already populated with configuration information, it will then read that configuration information from the configuration provider under the namespace for that service (and ignore what is in the local configuration file). If the service finds the top-level (root) namespace is not populated with configuration information, it will read its local configuration file and populate the configuration provider (under the namespace for the service) with configuration read from the local configuration file.

A configuration provider can be specified with a command line argument (-cp / --configProvider) or an environment variable (EDGEX_CONFIGURATION_PROVIDER, which overrides the command line argument). NOTE: environment variables are typically uppercase, but there have been inconsistencies in environment variable casing (example: edgex_registry). This should be considered and made consistent in a future major release.

Using the local configuration file

When a configuration provider isn't specified, the service just uses the configuration in its local configuration file. That is, the service uses the configuration in the file associated with the profile, config filename, and config file directory command line options or environment variables. In this case, the service does not contact the configuration service (Consul) for any configuration information.

NOTE: As the services now self seed and deployment-specific changes can be made via environment overrides, it will no longer be necessary to have a Docker profile configuration file in each of the service directories (example: https://github.com/edgexfoundry/edgex-go/blob/master/cmd/core-data/res/docker/configuration.toml). See Consequences below.
It will still be possible for users to use the profile mechanism to specify a Docker configuration, but it will no longer be required, nor the recommended approach, for providing Docker container specific configuration.

Overrides

Environment variables used to override configuration always take precedence, whether configuration is being sourced locally or read from the config provider/Consul. Note - this means that a configuration value being overridden by an environment variable will always be the source of truth, even if the same configuration is changed directly in Consul. The name of the environment variable must match the path names in Consul.

NOTES:
- Environment variable overrides remove the need to change the "docker" profile in the res/docker/configuration.toml files - allowing removal of 50% of the existing configuration.toml files.
- The override rules in EdgeX between environment variables and command line options may be counterintuitive compared to other systems. There appears to be no standard practice; indeed, web searching "Reddit & Starting Fights Env Variables vs Command Line Args" will lay out the prevailing differences.
- Environment variables used for configuration overrides are named by prepending the configuration element with the configuration section, inclusive of sub-path, where the sub-path's "."s are replaced with underscores. These configuration environment variable overrides must be specified using camel case. Here are two examples:
  - Registry_Host for [Registry] Host = 'localhost'
  - Clients_CoreData_Host for [Clients] [Clients.CoreData] Host = 'localhost'
- Going forward, environment variables that override command line options should be all uppercase.

All overridden values get logged (indicating which configuration value or op param was overridden and the new value).
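The naming convention above is mechanical enough to sketch in a few lines of Go. The helper below is hypothetical (overrideKey is not a real go-mod-bootstrap function); it simply joins the configuration section path and the element name with underscores, replacing any "."s in the sub-path, as the two examples in the text illustrate.

```go
package main

import (
	"fmt"
	"strings"
)

// overrideKey derives the environment variable name that would override a
// configuration element, per the convention described above: the section
// path (with "."s replaced by underscores) is prepended to the element
// name, preserving the camel case of the TOML keys.
// Hypothetical helper for illustration only - not the actual EdgeX code.
func overrideKey(sectionPath, element string) string {
	if sectionPath == "" {
		return element
	}
	return strings.ReplaceAll(sectionPath, ".", "_") + "_" + element
}

func main() {
	fmt.Println(overrideKey("Registry", "Host"))         // Registry_Host
	fmt.Println(overrideKey("Clients.CoreData", "Host")) // Clients_CoreData_Host
}
```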
Decision

These features have been implemented (with some minor changes to be done) for consideration here: https://github.com/edgexfoundry/go-mod-bootstrap/compare/master...lenny-intel:SelfSeed2. This code branch will be removed once this ADR is approved and implemented on master. The implementation for self-seeding services and environment overrides is already implemented (for Fuji) per this document in the application services and device services (and instituted in the SDKs of each).

Backward compatibility

Several aspects of this ADR contain backward compatibility issues for the device service and application service SDKs. Therefore, for the upcoming minor release, the following guidelines and expectations are added to provide for backward compatibility.

--registry= for Device SDKs: As earlier versions of the device service SDKs accepted a URI for --registry, if specified on the command line, use the given URI as the address of the configuration provider. If both --configProvider and --registry specify URIs, then the service should log an error and exit.

--registry (no '=') and w/o --configProvider for both SDKs: If a configProvider URI isn't specified, but --registry (w/out a URI) is specified, then the service will use the Registry provider information from its local configuration file for both configuration and registry providers.

Env Var: edgex_registry= for all services (currently has been removed): Add it back and use the value as if it were EDGEX_CONFIGURATION_PROVIDER, and enable use of the registry with the same settings in the URL. Default to http as it is in Fuji.

Consequences
- Docker compose files will need to be changed to remove config seed.
- The main Snap will need to be changed to remove config seed.
- Config seed code (currently in the edgex-go repo) is to be removed.
- Any service-specific environment overrides currently on config seed need to be moved to the specific service(s).
- The Docker configuration files and directory (example: https://github.com/edgexfoundry/edgex-go/blob/master/cmd/core-data/res/docker/configuration.toml) that are used to populate the config seed for Docker containers can be eliminated from all the services. In cmd/security-secretstore-setup, there is only a docker configuration.toml; this file will be moved rather than deleted.
- Documentation will need to reflect removal of config seed and the "self seeding" process.
- Removes any potential issue with past race conditions (as experienced with the Edinburgh release), as each service is now responsible for its own configuration. There are still high availability concerns that need to be considered and are not covered in this ADR at this time.
- Removes some confusion on the part of users as to why a service (config-seed) starts and immediately exits.
- Minimal impact to development cycles and release schedule.
- Configuration endpoints in all services need to ensure the environment variables are reflected in the configuration data returned (this is a system management impact).
- Docker files will need to be modified to remove setting profile=docker.
- Docker compose files will need to be changed to add environment overrides for removal of docker profiles. These should go in the global environment section of the compose files for those overrides that apply to all services.
Example:

# all common shared environment variables defined here:
x-common-env-variables: &common-variables
  EDGEX_SECURITY_SECRET_STORE: "false"
  EDGEX_CONFIGURATION_PROVIDER: consul.http://edgex-core-consul:8500
  Clients_CoreData_Host: edgex-core-data
  Clients_Logging_Host: edgex-support-logging
  Logging_EnableRemote: "true"

Status (Service Self Config Init & Config Seed Removal ADR): approved - TSC vote on 3/25/20 for the Geneva release. NOTE: this ADR does not address high availability considerations and concerns. EdgeX, in general, has a number of unanswered questions with regard to HA architecture, and this design adds to those considerations. For context: since its debut, EdgeX has had a configuration seed service (config-seed) that, on start of EdgeX, deposits configuration for all the services into Consul (our configuration/registry service). For development purposes, or on resource constrained platforms, EdgeX can be run without Consul, with services simply reading configuration from the filesystem.

EdgeX Metrics Collection

Status

Approved. Original proposal 10/24/2020. Approved by the TSC on 3/2/22.

Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or a specific service. Examples of metrics include:
- the number of EdgeX Events sent from core data to an application service
- the number of requests on a service API
- the average time it takes to process a message through an application service
- the number of errors logged by a service

Control plane events (CPE) are defined as events that occur within an EdgeX instance. Examples of CPE include:
- a device was provisioned (added to core metadata)
- a service was stopped
- service configuration has changed

CPE should not be confused with core data Events. Core data Events represent a collection (one or more) of sensor/device readings - the sensing of some measured state of the physical world (temperature, vibration, etc.). CPE represents the detection of some happening inside of the EdgeX software. This ADR outlines metrics (or telemetry) collection and handling.

Note: This ADR initially incorporated metrics collection and control plane event processing. The EdgeX architects felt the scope of the design was too large to cover under one ADR. Control plane event processing will be covered under a separate ADR in the future.

Context

System Management services (SMA and executors) currently provide a limited set of "metrics" to requesting clients (3rd party applications and systems external to EdgeX).
Namely, they provide requesting clients with service CPU and memory usage - metrics about the resource utilization of the service (the executable) itself, as opposed to metrics about what is happening inside of the service. Arguably, the current system management metrics could be provided by the container engine and orchestration tools (for example, by the Docker engine) or by the underlying OS tooling.

Info: The SMA has been deprecated (since the Ireland release) and will be removed in a future, yet-to-be-named, release.

Going forward, users of EdgeX will want to have more insights - that is, more metrics telemetry - on what is happening directly in the services and the tasks that they are performing. In other words, users of EdgeX will want more telemetry on service activities, to include:
- sensor data collection (how much, how fast, etc.)
- command requests handled (how many, to which devices, etc.)
- sensor data transformation as it is done in application services (how fast, what is filtered, etc.)
- sensor data export (how much is sent, how many exports have failed, etc.)
- API requests (how often, how quickly, how many successful versus failed attempts, etc.)
- bootstrapping time (time to come up and be available to other services)
- activity processing time (amount of time it takes to perform a particular service function - such as responding to a command request)

Definitions

Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or a specific service.
Examples of metrics include:
- the number of EdgeX Events sent from core data to an application service via message bus (or via device service to application service in Ireland and beyond)
- the number of requests on a service API
- the average time it takes to process a message through an application service
- the number of errors logged by a service

The collection and dissemination of metric data will require internal service-level instrumentation (relevant to that service) to capture and send data about relevant EdgeX operations. EdgeX does not currently offer any service instrumentation.

Metric Use

As a first step in the implementation of metrics data, EdgeX will make metric data available to other subscribing 3rd party applications and systems, but will not necessarily consume or use this information itself. In the future, EdgeX may consume its own metric data. For example, EdgeX may, in the future, use a metric on the number of EdgeX events being sent to core data (or app services) as the means to throttle back device data collection.

In the future, EdgeX application services may optionally subscribe to a service's metrics message bus (by attaching to the appropriate message pipe for that service), thus allowing additional filtering, transformation, and endpoint control of metric data from that service. At the point where this feature is supported, consideration would need to be made as to whether all events (sensor reading messages and metric messages) go through the same application services.

At this time, EdgeX will not persist the metric data (except as it may be retained as part of a message bus subsystem, such as in an MQTT broker). Consumers of metric data are responsible for persisting the data if needed, but this is external to EdgeX. Persistence of metric information may be considered in the future based on requirements and adopter demand for such a feature.
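Since EdgeX offers no service instrumentation yet, it may help to see what minimal service-side instrumentation could look like. The following is a toy, self-contained Go sketch of a metrics cache with a named counter and a timing accumulator; it deliberately avoids assuming any particular metrics library, and every name in it (metricsCache, Inc, Time, Count) is illustrative, not EdgeX API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// metricsCache is a toy, thread-safe cache of counters and cumulative
// durations, standing in for real service instrumentation.
// Illustrative sketch only - not the EdgeX implementation.
type metricsCache struct {
	mu       sync.Mutex
	counters map[string]int64
	timings  map[string]time.Duration
}

func newMetricsCache() *metricsCache {
	return &metricsCache{
		counters: make(map[string]int64),
		timings:  make(map[string]time.Duration),
	}
}

// Inc increments a named counter (e.g. "api-requests").
func (m *metricsCache) Inc(name string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.counters[name]++
}

// Time runs fn and accumulates its duration under the given metric name
// (e.g. "command-request-duration").
func (m *metricsCache) Time(name string, fn func()) {
	start := time.Now()
	fn()
	m.mu.Lock()
	defer m.mu.Unlock()
	m.timings[name] += time.Since(start)
}

// Count returns the current value of a counter.
func (m *metricsCache) Count(name string) int64 {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.counters[name]
}

func main() {
	cache := newMetricsCache()
	cache.Inc("api-requests")
	cache.Inc("api-requests")
	cache.Time("handle-command", func() { /* do work */ })
	fmt.Println(cache.Count("api-requests")) // 2
}
```

A real implementation would hang such a cache off the service's bootstrap container and have API handlers and pipeline functions call into it.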
In general, EdgeX metrics are meant to provide internal services and external applications and systems better information about what is happening "inside" EdgeX services and the associated devices with which they communicate.

Requirements
- Services will push specified metrics collected for that service to a specified (by configuration) message endpoint (as supported by the EdgeX message bus implementation; currently either the Redis Pub/Sub or MQTT implementations are supported).
- Each service will have configuration that specifies a message endpoint for the service metrics. The metrics message topic communications may be secured or unsecured (just as application services provide the means to export to secured or unsecured message pipes today). The configuration will be placed in the Writable area. When a user wishes to change the configuration dynamically (such as turning a metric on/off), Consul's UI can be used to change it.
- Services will have configuration which indicates what metrics are available from the service.
- Services will have configuration which allows EdgeX system managers to select which metrics are on or off - in other words, providing configuration that determines which metrics are collected and reported by default. When a metric is turned off (the default setting), the service does not report the metric. When a metric is turned on, the service collects and sends the metric to the designated message topic.
- Metrics collection must be pushed to the designated message topic on some appointed schedule. The schedule would be designated by configuration, in a way similar to auto events in device services. For the initial implementation, there will be just one scheduled time when all metrics will be collected and pushed to the designated message topic. In the future, there may be a desire to set up a separate schedule for each metric, but this was deemed too complex for the initial implementation.
Info: Initially, it was proposed that metrics be associated with a "level", allowing metrics to be turned on or off by level (like the levels associated with log messages in logging). The level of metrics data seems arbitrary at this time and is considered too complex for the initial implementation. This may be reconsidered in a future release based on new requirements/use cases. It was also proposed to categorize or label metrics - essentially allowing grouping of various metrics. This would allow groups of metrics to be turned on or off, and allow metrics to be organized per group when reporting. At this time, this feature is also considered beyond the scope of the initial implementation, to be reconsidered in a future release based on requirements/use case needs. It was also proposed that each service offer a REST API to provide metrics collection information (such as which metrics are being collected) and the ability to turn collection on or off dynamically. This is deemed out of scope for the first implementation and may be brought back if there are use case requirements/demand for it.

Requested Metrics

The following is a list of example metrics requested by the EdgeX community and adopters for various service areas. Again, metrics would generally be collected and pushed to the message topic at some configured interval (example: 1/5/15 minutes or another defined interval). This is just a sample of metrics thought relevant by each work group; it may not reflect the metrics supported by the implementation. The exact metrics collected by each service will be determined by the service implementers (or SDK implementers in the case of the app functions and device service SDKs).

General

The following metrics apply to all (or most) services.
- Service uptime (time since last service boot)
- Cumulative number of API requests succeeded / failed / invalid (2xx vs 5xx vs 4xx)
- Average response time (in milliseconds or appropriate unit of measure) on APIs
- Average and max request size

Core/Supporting
- Latency (measure of time) an event takes to get through core data
- Latency (measure of time) a command request takes to get to a device service
- Indication of health - that events are being processed during a configurable period
- Number of events in persistence
- Number of readings in persistence
- Number of validation failures (validation of device identification)
- Number of notification transactions
- Number of notifications handled
- Number of failed notification transmissions
- Number of notifications in retry status

Application Services
- Processing time for a pipeline; latency (measure of time) an event takes to get through an application service pipeline
- DB access times
- How often export is failing and data is being sent to the DB to be retried at a later time
- The current store and forward queue size
- How much data (size in KBs or MBs) of packaged sensor data is being sent to an endpoint (or volume)
- Number of invalid messages that triggered the pipeline
- Number of events processed

Device Services
- Number of devices managed by this DS
- Device requests (which may be more informative than reading counts and rates)

Note: It is envisioned that there may be additional specific metrics for each device service. For example, the ONVIF camera device service may report the number of times camera tampering was detected.

Security

Security metrics may be more difficult to ascertain as they are cross-service metrics. Given the nature of this design (on a per-service basis), global security metrics may be out of scope, or security metrics collection would have to be copied into each service (leading to lots of duplicate code for now).
Also, true threat detection based on metrics may be a feature best provided by 3rd parties based on particular threats and security profile needs.
- Number of API requests denied due to wrong access token (Kong) per service and within a given time
- Number of secrets accessed per service name
- Count of any accesses and failures to the data persistence layer
- Count of service start and restart attempts

Design Proposal

Collect and Push Architecture

Metric data will be collected and cached by each service. At designated times (kicked off by a configurable schedule), the service will collect telemetry data from the cache and push it to a designated message bus topic.

Metrics Messaging

Cached metric data, at the designated time, will be marshaled into a message and pushed to the pre-configured message bus topic. Each metric message consists of several key/value pairs:
- a required name (the name of the metric), such as service-uptime
- a required value, which is the telemetry value collected, such as 120 as the number of hours the service has been up
- a required timestamp, which is the time (in Epoch timestamp/milliseconds format) at which the data was collected (similar in nature to the origin of sensed data)
- an optional collection (array) of tags. The tags are sets of key/value pairs of strings that provide amplifying information about the telemetry. Tags may include:
  - originating service name
  - unit of measure associated with the telemetry value
  - value type of the value
  - additional values when the metric is more than just one value (example: when using a histogram, it would include min, max, mean and sum values)

The metric name must be unique for that service. Because some metrics are reported from multiple services (such as service uptime), the name is not required to be unique across all services. All information (keys, values, tags, etc.) is in string format and placed in a JSON array within the message body.
Here are some example representations: Example metric message body with a single value { \"name\" : \"service-up\" , \"value\" : \"120\" , \"timestamp\" : \"1602168089665570000\" , \"tags\" :{ \"service\" : \"coredata\" , \"uom\" : \"days\" , \"type\" : \"int64\" }} Example metric message body with multiple values { \"name\" : \"api-requests\" , \"value\" : \"24\" , \"timestamp\" : \"1602168089665570001\" , \"tags\" :{ \"service\" : \"coredata\" , \"uom\" : \"count\" , \"type\" : \"int64\" , \"mean\" : \"0.0665\" , \"rate1\" : \"0.111\" , \"rate5\" : \"0.150\" , \"rate15\" : \"0.111\" }} Info The key or metric name must be unique when using go-metrics as it requires the metric name to be unique per the registry. Metrics are considered immutable. Configuration Configuration, not unlike that provided in core data or any device service, will specify the message bus type and locations where the metrics messages should be sent. In fact, the message bus configuration will use (or reuse if the service is already using the message bus) the common message bus configuration as defined below. 
Common configuration for each service for message queue configuration (inclusive of metrics): [ MessageQueue ] Protocol = 'redis' ## or 'tcp' Host = 'localhost' Port = 5573 Type = 'redis' ## or 'mqtt' PublishTopicPrefix = \"edgex/events/core\" # standard and existing core or device topic for publishing [ MessageQueue.Optional ] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"device-virtual\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified Additional configuration must be provided in each service to provide metrics / telemetry specific configuration. This area of the configuration will likely be different for each type of service. Additional metrics collection configuration to be provided include: Trigger the collection of telemetry from the metrics cache and sending it into the appointed message bus. Define which metrics are available and which are turned off and on . All are false by default. The list of metrics can and likely will be different per service. The keys in this list are the metric name. True and false are used for on and off values. Specify the metrics topic prefix where metrics data will be published to (ex: providing the prefix /edgex/telemetry/topic name where the service and metric name [service-name]/[metric-name] will be appended per metric (allowing subscribers to filter by service or metric name) These metrics configuration options will be defined in the Writable area of configuration.toml so as to allow for dynamic changes to the configuration (when using Consul). 
Specifically, the [Writable].[Writable.Telemetry] area will dictate metrics collection configuration like this: [[ Writable ]] [[ Writable.Telemetry ]] Interval = \"30s\" PublishTopicPrefix = \"edgex/telemetry\" # // will be added to this Publish Topic prefix #available metrics listed here. All metrics should be listed off (or false) by default service-up = false api-requests = false Info It was discussed that in future EdgeX releases, services may want separate message bus connections. For example one for sensor data and one for metrics telemetry data. This would allow the QoS and other settings of the message bus connection to be different. This would allow sensor data collection, for example, to be messaged with a higher QoS than that of metrics. As an alternate approach, we could modify go-mod-messaging to allow setting QoS per topic (and thereby avoid multiple connections). For the initial release of this feature, the service will use the same connection (and therefore configuration) for metrics telemetry as well as sensor data. Library Support Each service will now need go-mod-messaging support (for GoLang services and the equivalent for C services). Each service would determine when and what metrics to collect and push to the message bus, but will use a common library chosen for each EdgeX language supported (Go or C currently) Use of go-metrics (a GoLang library to publish application metrics) would allow EdgeX to utilize (versus construct) a library utilized by over 7 thousand projects. It provides the means to capture various types of metrics in a registry (a sophisticated map). The metrics can then be published ( reported ) to a number of well known systems such as InfluxDB, Graphite, DataDog, and Syslog. go-metrics is a Go library made from original Java package https://github.com/dropwizard/metrics. A similar package would need to be selected (or created) for C. 
Per the Core WG meeting of 2/24/22 - it is important to provide an implementation that is the same in Go or C. The adopter of EdgeX should not see a difference in whether the metrics/telemetry is collected by a C or Go service. Configuration of metrics in a C or Go service should have the same structure. The C-based metrics collection mechanism in C services (specifically as provided for in our C device service SDK) may operate differently \"under the covers\" but its configuration and resulting metrics messages on the EdgeX message bus must be formatted/organized the same. Considerations in the use of go-metrics This is a Golang-only library. Using this library would not provide any package to use for the C services. If there are expectations for parity between the services, this may be more difficult to achieve given the features of go-metrics. go-metrics will still require the EdgeX team to develop a bootstrapping apparatus to take the metrics configuration and register each of the metrics defined in the configuration in go-metrics. go-metrics would also require the EdgeX team to develop the means to periodically extract the metrics data from the registry and ship it via message bus (something the current go-metrics library does not do). While go-metrics offers the ability for data to be reported to other systems, it would require EdgeX to expose these capabilities (possibly through APIs) if a user wanted to export to these subsystems in addition to the message bus. Per the Kamakura Planning Meeting, it was noted that go-metrics is already a dependency in our Go code due to its use by other 3rd party packages (see https://github.com/edgexfoundry/edgex-go/blob/4264632f3ddafb0cbc2089cffbea8c0719035c96/go.sum#L18). Community questions about go-metrics Per the Monthly Architect's meeting of 9/20/21: How does it manage the telemetry data (persistence, in memory, database, etc.)? 
In memory - in a \"registry\"; essentially a key/value store where the key is the metric name Does it offer a query API (in order to easily support the ADR suggested REST API)? Yes - metrics are stored in a \"Registry\" (MetricRegistry - essentially a map). Get (or GetAll) methods are provided to query for metrics What does the go-metrics package do so that its features can become requirements for the C side? About a dozen types of metrics collection (simple gauge or counter to more sophisticated structures like Histograms) - all stored in a registry (map). How is the data made available? Report out (export or publish) to various integrated packages (InfluxDB, Graphite, DataDog, Syslog, etc.). Nothing to MQTT or other base message service. This would have to be implemented from scratch. Can the metric/telemetry count be reset if needed? Does this happen whenever it posts to the message bus? How would this work for REST? Yes, you can unregister and re-register the metric. A REST API would have to be constructed to call this capability. As an alternative to go-metrics, there is another library called OpenCensus. This is a multi-language metrics library, including Go and C++. This library is more feature-rich. OpenCensus is also roughly 5x the size of the go-metrics library. Additional Open Questions Should consideration be given to allow metrics to be placed in different topics per name? If so, we will have to add to the topic name like we do for device name in device services. This is a future consideration. Should consideration be given to incorporate alternate protocols/standards for metric collection such as https://opentelemetry.io/ or https://github.com/statsd/? Go metrics is already a library pulled into all Go services. These packages may be used in C side implementations. Decision Per the Monthly Architect's meeting of 12/13/21 - it was decided to use go-metrics for Go services over creating our own library or using OpenCensus. 
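The registry behavior described in the Q&A above (a name-keyed in-memory map, Get to query, unregister/re-register to reset a metric) can be sketched in plain Go. This is a toy stand-in to illustrate the described semantics, not the go-metrics API itself:

```go
package main

import (
	"fmt"
	"sync"
)

// registry is a toy stand-in for the go-metrics MetricRegistry described above:
// a name-keyed map guarded by a lock. Not the real library API.
type registry struct {
	mu      sync.RWMutex
	metrics map[string]int64
}

func newRegistry() *registry { return &registry{metrics: map[string]int64{}} }

// Update sets the current value for a named metric.
func (r *registry) Update(name string, v int64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.metrics[name] = v
}

// Get queries a single metric by name, mirroring the Get/GetAll query methods.
func (r *registry) Get(name string) (int64, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	v, ok := r.metrics[name]
	return v, ok
}

// Unregister drops the metric; re-registering afterwards is the "reset"
// path discussed above for clearing a count after it has been published.
func (r *registry) Unregister(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.metrics, name)
}

func main() {
	r := newRegistry()
	r.Update("api-requests", 24)
	if v, ok := r.Get("api-requests"); ok {
		fmt.Println(v) // 24
	}
	r.Unregister("api-requests") // reset after pushing to the message bus
	_, ok := r.Get("api-requests")
	fmt.Println(ok) // false
}
```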
C services will either find/pick a package that provides similar functionality to go-metrics or implement internally something providing MVP capability. Use of go-metrics helps avoid too much service bloat since it is already in most Go services. Per the same Monthly Architect's meeting, it was decided to implement metrics in Go services first. Per the Monthly Architect's meeting of 1/24/22 - it was decided not to support a REST API on all services that would provide information on what metrics the service provides and the ability to turn them on/off. Instead, the decision was to use Writable configuration and allow Consul to be the means to change the configuration (dynamically). If an adopter chooses not to use Consul, then the configuration with regard to metrics collection, as with all configuration in this circumstance, would be static. If an external API need is requested in the future (such as from an external UI or tool), a REST API may be added. See older versions of this PR for ideas on implementation in this case. Per Core Working Group meeting of 2/24/22 (and in many other previous meetings on this ADR) - it was decided that the EdgeX approach should be one of push (via message bus/MQTT) vs. pull (REST API). Both approaches require each service to collect metric telemetry specific to that service. After collecting it, the service must either push it onto a message topic (as a message) or cache it (into memory or some storage mechanism depending on whether the storage needs to be durable or not) and allow for a REST API call that would cause the data to be pulled from that cache and provided in a response to the REST call. Given both mechanisms require the same collection process, the belief is that push is probably preferred today by adopters. In the future, if highly desired, a pull REST API could be added (along with a decision on how to cache the metrics telemetry for that pull). 
Per Core Working Group meeting of 2/24/22 - importantly, EdgeX is just making the metrics telemetry available on the internal EdgeX message bus. An adopter would need to create something to pull the data off this bus to use it in some way. As voiced by several on the call, it is important for the adopter to realize that today, \"we (EdgeX) are not providing the last mile in metrics data\". The adopter must provide that last mile which is to pick the data from the topic, make it available to their systems and do something with it. Per Core Working Group meeting of 2/24/22 (and in many other previous meetings on this ADR) - it was decided not to use Prometheus (or Prometheus library) as the means to provide for metrics. The reasons for this are many: Push vs pull is favored in the first implementation (see point above). Also see similar debate online for the pluses/minuses of each approach. EdgeX wants to make telemetry data available without dictating the specific mechanism for making the data more widely available. Specific debate centered on use of Prometheus as a popular collection library (to use inside of services to collect the data) as well as a monitoring system to watch/display the data. While Prometheus is a popular open source approach, it was felt that many organizations choose to use InfluxDB/Grafana, DataDog, AppDynamics, a cloud-provided mechanism, or their own home-grown solution to collect, analyse, visualize and otherwise use the telemetry. Therefore, rather than dictating the selection of the monitoring system, EdgeX would simply make the data available whereby an organization could choose their own monitoring system/tooling. It should be noted that the EdgeX approach merely makes the telemetry data available by message bus. A Prometheus approach would provide collection as well as a backend system to otherwise collect, analyse, display, etc. the data. 
Therefore, there is typically work to be done by the adopter to get the telemetry data from the proposed EdgeX message bus solution and do something with it. There are some reporters that come with go-metrics that allow for data to be taken directly from go-metrics and pushed to an intermediary for Prometheus and other monitoring/telemetry platforms as referenced above. These capabilities may not be very well supported and are beyond the scope of this EdgeX ADR. However, even without reporters, it was felt a relatively straightforward exercise (on the part of the adopter) to create an application that listens to the EdgeX metrics message bus and makes that data available via pull REST API for Prometheus if desired. The Prometheus client libraries would have to be added to each service which would bloat the services (although they are available for both Go and C). The benefit of using go-metrics is that it is already used by HashiCorp Consul (so already in the Go services). Implementation Details for Go The go-metrics package offers the following types of metrics collection: Gauge: holds a single integer (int64) value. Example use: Number of notifications in retry status Operations to update the gauge and get the gauge's value Example code: g := metrics.NewGauge() g.Update(42) // set the value to 42 g.Update(10) // now set the value to 10 fmt.Println(g.Value()) // print out the current value in the gauge = 10 Counter: holds an integer (int64) count. A counter could be implemented with a Gauge. Example use: the current store and forward queue size Operations to increment, decrement, clear and get the counter's count (or value) c := metrics.NewCounter() c.Inc(1) // add one to the current counter c.Inc(10) // add 10 to the current counter, making it 11 c.Dec(5) // decrement the counter by 5, making it 6 fmt.Println(c.Count()) // print out the current count of the counter = 6 Meter: measures the rate (int64) of events over time (at one, five and fifteen minute intervals). Example use: the number or rate of requests on a service API Operations: provide the total count of events as well as the mean and rate at 1, 5, and 15 minute rates m := metrics.NewMeter() m.Mark(1) // add one to the current meter value time.Sleep(15 * time.Second) // allow some time to go by m.Mark(1) // add one to the current meter value time.Sleep(15 * time.Second) // allow some time to go by m.Mark(1) // add one to the current meter value time.Sleep(15 * time.Second) // allow some time to go by m.Mark(1) // add one to the current meter value time.Sleep(15 * time.Second) // allow some time to go by fmt.Println(m.Count()) // prints 4 fmt.Println(m.Rate1()) // prints 0.11075889086811593 fmt.Println(m.Rate5()) // prints 0.1755318374350548 fmt.Println(m.Rate15()) // prints 0.19136522498856992 fmt.Println(m.RateMean()) // prints 0.06665062941438574 Histogram: measures the statistical distribution of values (int64 values) in a collection of values. Example use: response times on APIs Operations: update and get the min, max, count, percentile, sample, sum and variance from the collection h := metrics.NewHistogram(metrics.NewUniformSample(4)) h.Update(10) h.Update(20) h.Update(30) h.Update(40) fmt.Println(h.Max()) // prints 40 fmt.Println(h.Min()) // prints 10 fmt.Println(h.Mean()) // prints 25 fmt.Println(h.Count()) // prints 4 fmt.Println(h.Percentile(0.25)) // prints 12.5 fmt.Println(h.Variance()) // prints 125 fmt.Println(h.Sample()) // prints &{4 {0 0} 4 [10 20 30 40]} Timer: measures both the rate at which a particular piece of code is called and the distribution of its duration Example use: how often an app service function gets called and how long it takes to get through the function Operations: update and get min, max, count, rate1, rate5, rate15, mean, percentile, sum and variance from the collection t := metrics.NewTimer() t.Update(10) time.Sleep(15 * time.Second) t.Update(20) time.Sleep(15 * time.Second) t.Update(30) time.Sleep(15 * time.Second) t.Update(40) time.Sleep(15 * time.Second) fmt.Println(t.Max()) // prints 40 fmt.Println(t.Min()) // prints 10 fmt.Println(t.Mean()) // prints 25 fmt.Println(t.Count()) // prints 4 fmt.Println(t.Sum()) // prints 100 fmt.Println(t.Percentile(0.25)) // prints 12.5 fmt.Println(t.Variance()) // prints 125 fmt.Println(t.Rate1()) // prints 0.1116017821771607 fmt.Println(t.Rate5()) // prints 0.1755821073441404 fmt.Println(t.Rate15()) // prints 0.1913711954736821 fmt.Println(t.RateMean()) // prints 0.06665773963998162 Note The go-metrics package does offer some variants of these like the GaugeFloat64 to hold 64 bit floats. Consequences Should there be a global configuration option to turn all metrics off/on? EdgeX doesn't yet have global config so this will have to be by service. Given the potential that each service publishes metrics to the same message topic, 0MQ is not an implementation option unless each service uses a different 0MQ pipe (0MQ topics do not allow multiple publishers). Like the DS to App Services implementation, do we allow 0MQ to be used, but only if each service sends to a different 0MQ topic? Probably not. We need to avoid service bloat. EdgeX is not an enterprise system. How can we implement in a concise and economical way? 
Use of Go metrics helps on the Go side since this is already a module used by EdgeX modules (and brought in by default). Care and concern must be given to not cause too much bloat on the C side. SMA reports on service CPU, memory, configuration and provides the means to start/stop/restart the services. This is currently outside the scope of the new metric collection/monitoring. In the future, 3rd party mechanisms which offer the same capability as SMA may render all of SMA irrelevant. The existing notifications service serves to send a notification via alternate protocol outside of EdgeX. This communication service is provided as a generic communication instrument from any micro service and is independent of any type of data or concern. In the future, the notification service could be configured to be a subscriber of the metric messages and trigger appropriate external notification (via email, SMTP, etc.). Reference Possible standards for implementation Open Telemetry statsd go-metrics OpenCensus","title":"EdgeX Metrics Collection"},{"location":"design/adr/0006-Metrics-Collection/#edgex-metrics-collection","text":"","title":"EdgeX Metrics Collection"},{"location":"design/adr/0006-Metrics-Collection/#status","text":"Approved Original proposal 10/24/2020 Approved by the TSC on 3/2/22 Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or specific service. Examples of metrics include: the number of EdgeX Events sent from core data to an application service the number of requests on a service API the average time it takes to process a message through an application service The number of errors logged by a service Control plane events (CPE) are defined as events that occur within an EdgeX instance. Examples of CPE include: a device was provisioned (added to core metadata) a service was stopped service configuration has changed CPE should not be confused with core data Events. 
Core data Events represent a collection (one or more) of sensor/device readings. Core data Events represent sensing of some measured state of the physical world (temperature, vibration, etc.). CPE represents the detection of some happening inside of the EdgeX software. This ADR outlines metrics (or telemetry) collection and handling. Note This ADR initially incorporated metrics collection and control plane event processing. The EdgeX architects felt the scope of the design was too large to cover under one ADR. Control plane event processing will be covered under a separate ADR in the future.","title":"Status"},{"location":"design/adr/0006-Metrics-Collection/#context","text":"System Management services (SMA and executors) currently provide a limited set of \u201cmetrics\u201d to requesting clients (3rd party applications and systems external to EdgeX). Namely, it provides requesting clients with service CPU and memory usage; both metrics about the resource utilization of the service (the executable) itself versus metrics that are about what is happening inside of the service. Arguably, the current system management metrics can be provided by the container engine and orchestration tools (example: by Docker engine) or by the underlying OS tooling. Info The SMA has been deprecated (since Ireland release) and will be removed in a future, yet-to-be-named, release. Going forward, users of EdgeX will want to have more insights \u2013 that is more metrics telemetry \u2013 on what is happening directly in the services and the tasks that they are performing. In other words, users of EdgeX will want more telemetry on service activities to include: sensor data collection (how much, how fast, etc.) command requests handled (how many, to which devices, etc.) sensor data transformation as it is done in application services (how fast, what is filtered, etc.) sensor data export (how much is sent, how many exports have failed, etc. 
) API requests (how often, how quickly, how many success versus failed attempts, etc.) bootstrapping time (time to come up and be available to other services) activity processing time (amount of time it takes to perform a particular service function - such as respond to a command request)","title":"Context"},{"location":"design/adr/0006-Metrics-Collection/#definitions","text":"Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or specific service. Examples of metrics include: the number of EdgeX Events sent from core data to an application service via message bus (or via device service to application service in Ireland and beyond) the number of requests on a service API the average time it takes to process a message through an application service The number of errors logged by a service The collection and dissemination of metric data will require internal service level instrumentation (relevant to that service) to capture and send data about relevant EdgeX operations. EdgeX does not currently offer any service instrumentation.","title":"Definitions"},{"location":"design/adr/0006-Metrics-Collection/#metric-use","text":"As a first step in implementation of metrics data, EdgeX will make metric data available to other subscribing 3rd party applications and systems, but will not necessarily consume or use this information itself. In the future, EdgeX may consume its own metric data. For example, EdgeX may, in the future, use a metric on the number of EdgeX events being sent to core data (or app services) as the means to throttle back device data collection. In the future, EdgeX application services may optionally subscribe to a service's metrics messages bus (by attaching to the appropriate message pipe for that service). Thus allowing additional filtering, transformation, endpoint control of metric data from that service. 
At the point where this feature is supported, consideration would need to be made as to whether all events (sensor reading messages and metric messages) go through the same application services. At this time, EdgeX will not persist the metric data (except as it may be retained as part of a message bus subsystem such as in an MQTT broker). Consumers of metric data are responsible for persisting the data if needed, but this is external to EdgeX. Persistence of metric information may be considered in the future based on requirements and adopter demand for such a feature. In general, EdgeX metrics are meant to provide internal services and external applications and systems better information about what is happening \"inside\" EdgeX services and the associated devices with which it communicates.","title":"Metric Use"},{"location":"design/adr/0006-Metrics-Collection/#requirements","text":"Services will push specified metrics collected for that service to a specified (by configuration) message endpoint (as supported by the EdgeX message bus implementation; currently either Redis Pub/Sub or MQTT implementations are supported) Each service will have configuration that specifies a message endpoint for the service metrics. The metrics message topic communications may be secured or unsecured (just as application services provide the means to export to secured or unsecured message pipes today). The configuration will be placed in the Writable area. When a user wishes to change the configuration dynamically (such as turning on/off a metric), then Consul's UI can be used to change it. Services will have configuration which indicates what metrics are available from the service. Services will have configuration which allows EdgeX system managers to select which metrics are on or off - in other words providing configuration that determines what metrics are collected and reported by default. When a metric is turned off (the default setting) the service does not report the metric. 
When a metric is turned on the service collects and sends the metric to the designated message topic. Metrics collection must be pushed to the designated message topic on some appointed schedule. The schedule would be designated by configuration and done in a way similar to auto events in device services. For the initial implementation, there will be just one scheduled time when all metrics will be collected and pushed to the designated message topic. In the future, there may be a desire to set up a separate schedule for each metric, but this was deemed too complex for the initial implementation. Info Initially, it was proposed that metrics be associated with a \"level\" and allow metrics to be turned on or off by level (like levels associated to log messages in logging). The level of metrics data seems arbitrary at this time and considered too complex for initial implementation. This may be reconsidered in a future release and based on new requirements/use cases. It was also proposed to categorize or label metrics - essentially allowing grouping of various metrics. This would allow groups of metrics to be turned on or off, and allow metrics to be organized per the group when reporting. At this time, this feature is also considered beyond the scope of the initial implementation and to be reconsidered in a future release based on requirements/use case needs. It was also proposed that each service offer a REST API to provide metrics collection information (such as which metrics were being collected) and the ability to turn the collection on or off dynamically. This is deemed out of scope for the first implementation and may be brought back if there are use case requirements / demand for it.","title":"Requirements"},{"location":"design/adr/0006-Metrics-Collection/#requested-metrics","text":"The following is a list of example metrics requested by the EdgeX community and adopters for various service areas. 
Again, metrics would generally be collected and pushed to the message topic in some configured interval (example: 1/5/15 minutes or other defined interval). This is just a sample of metrics thought relevant by each work group. It may not reflect the metrics supported by the implementation. The exact metrics collected by each service will be determined by the service implementers (or SDK implementers in the case of the app functions and device service SDKs).","title":"Requested Metrics"},{"location":"design/adr/0006-Metrics-Collection/#general","text":"The following metrics apply to all (or most) services. Service uptime (time since last service boot) Cumulative number of API requests succeeded / failed / invalid (2xx vs 5xx vs 4xx) Avg response time (in milliseconds or appropriate unit of measure) on APIs Avg and Max request size","title":"General"},{"location":"design/adr/0006-Metrics-Collection/#coresupporting","text":"Latency (measure of time) an event takes to get through core data Latency (measure of time) a command request takes to get to a device service Indication of health \u2013 that events are being processed during a configurable period Number of events in persistence Number of readings in persistence Number of validation failures (validation of device identification) Number of notification transactions Number of notifications handled Number of failed notification transmissions Number of notifications in retry status","title":"Core/Supporting"},{"location":"design/adr/0006-Metrics-Collection/#application-services","text":"Processing time for a pipeline; latency (measure of time) an event takes to get through an application service pipeline DB access times How often are we failing export to be sent to db to be retried at a later time What is the current store and forward queue size How much data (size in KBs or MBs) of packaged sensor data is being sent to an endpoint (or volume) Number of invalid messages that triggered pipeline Number of events 
processed","title":"Application Services"},{"location":"design/adr/0006-Metrics-Collection/#device-services","text":"Number of devices managed by this DS Device Requests (which may be more informative than reading counts and rates) Note It is envisioned that there may be additional specific metrics for each device service. For example, the ONVIF camera device service may report number of times camera tampering was detected.","title":"Device Services"},{"location":"design/adr/0006-Metrics-Collection/#security","text":"Security metrics may be more difficult to ascertain as they are cross service metrics. Given the nature of this design (on a per service basis), global security metrics may be out of scope or security metrics collection has to be copied into each service (leading to lots of duplicate code for now). Also, true threat detection based on metrics may be a feature best provided by 3rd party based on particular threats and security profile needs. Number of API requests denied due to wrong access token (Kong) per service and within a given time Number of secrets accessed per service name Count of any accesses and failures to the data persistence layer Count of service start and restart attempts","title":"Security"},{"location":"design/adr/0006-Metrics-Collection/#design-proposal","text":"","title":"Design Proposal"},{"location":"design/adr/0006-Metrics-Collection/#collect-and-push-architecture","text":"Metric data will be collected and cached by each service. At designated times (kicked off by configurable schedule), the service will collect telemetry data from the cache and push it to a designated message bus topic.","title":"Collect and Push Architecture"},{"location":"design/adr/0006-Metrics-Collection/#metrics-messaging","text":"Cached metric data, at the designated time, will be marshaled into a message and pushed to the pre-configured message bus topic. 
Each metric message consists of several key/value pairs: - a required name (the name of the metric) such as service-uptime - a required value which is the telemetry value collected such as 120 as the number of hours the service has been up. - a required timestamp is the time (in Epoch timestamp/milliseconds format) at which the data was collected (similar in nature to the origin of sensed data). - an optional collection (array) of tags. The tags are sets of key/value pairs of strings that provide amplifying information about the telemetry. Tags may include: - originating service name - unit of measure associated with the telemetry value - value type of the value - additional values when the metric is more than just one value (example: when using a histogram, it would include min, max, mean and sum values) The metric name must be unique for that service. Because some metrics are reported from multiple services (such as service uptime), the name is not required to be unique across all services. All information (keys, values, tags, etc.) is in string format and placed in a JSON array within the message body. Here are some example representations: Example metric message body with a single value { \"name\" : \"service-up\" , \"value\" : \"120\" , \"timestamp\" : \"1602168089665570000\" , \"tags\" :{ \"service\" : \"coredata\" , \"uom\" : \"days\" , \"type\" : \"int64\" }} Example metric message body with multiple values { \"name\" : \"api-requests\" , \"value\" : \"24\" , \"timestamp\" : \"1602168089665570001\" , \"tags\" :{ \"service\" : \"coredata\" , \"uom\" : \"count\" , \"type\" : \"int64\" , \"mean\" : \"0.0665\" , \"rate1\" : \"0.111\" , \"rate5\" : \"0.150\" , \"rate15\" : \"0.111\" }} Info The key or metric name must be unique when using go-metrics as it requires the metric name to be unique per the registry. 
Metrics are considered immutable.","title":"Metrics Messaging"},{"location":"design/adr/0006-Metrics-Collection/#configuration","text":"Configuration, not unlike that provided in core data or any device service, will specify the message bus type and locations where the metrics messages should be sent. In fact, the message bus configuration will use (or reuse if the service is already using the message bus) the common message bus configuration as defined below. Common configuration for each service for message queue configuration (inclusive of metrics): [ MessageQueue ] Protocol = 'redis' ## or 'tcp' Host = 'localhost' Port = 5573 Type = 'redis' ## or 'mqtt' PublishTopicPrefix = \"edgex/events/core\" # standard and existing core or device topic for publishing [ MessageQueue.Optional ] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"device-virtual\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified Additional configuration must be provided in each service to provide metrics / telemetry specific configuration. This area of the configuration will likely be different for each type of service. Additional metrics collection configuration to be provided include: Trigger the collection of telemetry from the metrics cache and sending it into the appointed message bus. Define which metrics are available and which are turned off and on . All are false by default. The list of metrics can and likely will be different per service. The keys in this list are the metric name. True and false are used for on and off values. 
Specify the metrics topic prefix where metrics data will be published to (ex: providing the prefix edgex/telemetry, where the service and metric name [service-name]/[metric-name] will be appended per metric, allowing subscribers to filter by service or metric name). These metrics configuration options will be defined in the Writable area of configuration.toml so as to allow for dynamic changes to the configuration (when using Consul). Specifically, the [Writable].[Writable.Telemetry] area will dictate metrics collection configuration like this: [[ Writable ]] [[ Writable.Telemetry ]] Interval = \"30s\" PublishTopicPrefix = \"edgex/telemetry\" # [service-name]/[metric-name] will be added to this Publish Topic prefix #available metrics listed here. All metrics should be listed off (or false) by default service-up = false api-requests = false Info It was discussed that in future EdgeX releases, services may want separate message bus connections. For example, one for sensor data and one for metrics telemetry data. This would allow the QoS and other settings of the message bus connection to be different. This would allow sensor data collection, for example, to be messaged with a higher QoS than that of metrics. As an alternate approach, we could modify go-mod-messaging to allow setting QoS per topic (and thereby avoid multiple connections). For the initial release of this feature, the service will use the same connection (and therefore configuration) for metrics telemetry as well as sensor data.","title":"Configuration"},{"location":"design/adr/0006-Metrics-Collection/#library-support","text":"Each service will now need go-mod-messaging support (for GoLang services and the equivalent for C services). 
Each service would determine when and what metrics to collect and push to the message bus, but will use a common library chosen for each EdgeX language supported (Go or C currently). Use of go-metrics (a GoLang library to publish application metrics) would allow EdgeX to utilize (versus construct) a library utilized by over 7 thousand projects. It provides the means to capture various types of metrics in a registry (a sophisticated map). The metrics can then be published ( reported ) to a number of well known systems such as InfluxDB, Graphite, DataDog, and Syslog. go-metrics is a Go library made from the original Java package https://github.com/dropwizard/metrics. A similar package would need to be selected (or created) for C. Per the Core WG meeting of 2/24/22 - it is important to provide an implementation that is the same in Go or C. The adopter of EdgeX should not see a difference in whether the metrics/telemetry is collected by a C or Go service. Configuration of metrics in a C or Go service should have the same structure. The C based metrics collection mechanism in C services (specifically as provided for in our C device service SDK) may operate differently \"under the covers\" but its configuration and resulting metrics messages on the EdgeX message bus must be formatted/organized the same. Considerations in the use of go-metrics This is a Golang only library. Using this library would not provide any package to use for the C services. If there are expectations for parity between the services, this may be more difficult to achieve given the features of go-metrics. go-metrics will still require the EdgeX team to develop a bootstrapping apparatus to take the metrics configuration and register each of the metrics defined in the configuration in go-metrics. go-metrics would also require the EdgeX team to develop the means to periodically extract the metrics data from the registry and ship it via message bus (something the current go-metrics library does not do). 
While go-metrics offers the ability for data to be reported to other systems, it would require EdgeX to expose these capabilities (possibly through APIs) if a user wanted to export to these subsystems in addition to the message bus. Per the Kamakura Planning Meeting, it was noted that go-metrics is already a dependency in our Go code due to its use by other 3rd party packages (see https://github.com/edgexfoundry/edgex-go/blob/4264632f3ddafb0cbc2089cffbea8c0719035c96/go.sum#L18). Community questions about go-metrics Per the Monthly Architect's meeting of 9/20/21: How does it manage the telemetry data (persistence, in memory, database, etc.)? In memory - in a \"registry\"; essentially a key/value store where the key is the metric name Does it offer a query API (in order to easily support the ADR suggested REST API)? Yes - metrics are stored in a \"Registry\" (MetricRegistry - essentially a map). Get (or GetAll) methods provided to query for metrics What does the go-metrics package do so that its features can become requirements for the C side? About a dozen types of metrics collection (simple gauge or counter to more sophisticated structures like Histograms) - all stored in a registry (map). How is the data made available? Report out (export or publish) to various integrated packages (InfluxDB, Graphite, DataDog, Syslog, etc.). Nothing to MQTT or other base message service. This would have to be implemented from scratch. Can the metric/telemetry count be reset if needed? Does this happen whenever it posts to the message bus? How would this work for REST? Yes, you can unregister and re-register the metric. A REST API would have to be constructed to call this capability. As an alternative to go-metrics, there is another library called OpenCensus . This is a multi-language metrics library, including Go and C++. This library is more feature rich. 
OpenCensus is also roughly 5x the size of the go-metrics library.","title":"Library Support"},{"location":"design/adr/0006-Metrics-Collection/#additional-open-questions","text":"Should consideration be given to allow metrics to be placed in different topics per name? If so, we will have to add to the topic name like we do for device name in device services? A future consideration Should consideration be given to incorporate alternate protocols/standards for metric collection such as https://opentelemetry.io/ or https://github.com/statsd/? Go metrics is already a library pulled into all Go services. These packages may be used in C side implementations.","title":"Additional Open Questions"},{"location":"design/adr/0006-Metrics-Collection/#decision","text":"Per the Monthly Architect's meeting of 12/13/21 - it was decided to use go-metrics for Go services over creating our own library or using OpenCensus. C services will either find/pick a package that provides similar functionality to go-metrics or implement internally something providing MVP capability. Use of go-metrics helps avoid too much service bloat since it is already in most Go services. Per the same Monthly Architect's meeting, it was decided to implement metrics in Go services first. Per the Monthly Architect's meeting of 1/24/22 - it was decided not to support a REST API on all services that would provide information on what metrics the service provides and the ability to turn them on / off. Instead, the decision was to use Writable configuration and allow Consul to be the means to change the configuration (dynamically). If an adopter chooses not to use Consul, then the configuration with regard to metrics collection, as with all configuration in this circumstance, would be static. If an external API need is requested in the future (such as from an external UI or tool), a REST API may be added. See older versions of this PR for ideas on implementation in this case. 
Per Core Working Group meeting of 2/24/22 (and in many other previous meetings on this ADR) - it was decided that the EdgeX approach should be one of push (via message bus/MQTT) vs. pull (REST API). Both approaches require each service to collect metric telemetry specific to that service. After collecting it, the service must either push it onto a message topic (as a message) or cache it (into memory or some storage mechanism depending on whether the storage needs to be durable or not) and allow for a REST API call that would cause the data to be pulled from that cache and provided in a response to the REST call. Given both mechanisms require the same collection process, the belief is that push is probably preferred today by adopters. In the future, if highly desired, a pull REST API could be added (along with a decision on how to cache the metrics telemetry for that pull). Per Core Working Group meeting of 2/24/22 - importantly , EdgeX is just making the metrics telemetry available on the internal EdgeX message bus. An adopter would need to create something to pull the data off this bus to use it in some way. As voiced by several on the call, it is important for the adopter to realize that today, \"we (EdgeX) are not providing the last mile in metrics data\". The adopter must provide that last mile which is to pick the data from the topic, make it available to their systems and do something with it. Per Core Working Group meeting of 2/24/22 (and in many other previous meetings on this ADR) - it was decided not to use Prometheus (or Prometheus library) as the means to provide for metrics. The reasons for this are many: Push vs pull is favored in the first implementation (see point above). Also see similar debate online for the pluses/minuses of each approach. EdgeX wants to make telemetry data available without dictating the specific mechanism for making the data more widely available. 
Specific debate centered on use of Prometheus as a popular collection library (to use inside of services to collect the data) as well as a monitoring system to watch/display the data. While Prometheus is a popular open source approach, it was felt that many organizations choose to use InfluxDB/Grafana, DataDog, AppDynamics, a cloud provided mechanism, or their own home-grown solution to collect, analyse, visualize and otherwise use the telemetry. Therefore, rather than dictating the selection of the monitoring system, EdgeX would simply make the data available whereby an organization could choose their own monitoring system/tooling. It should be noted that the EdgeX approach merely makes the telemetry data available by message bus. A Prometheus approach would provide collection as well as a backend system to otherwise collect, analyse, display, etc. the data. Therefore, there is typically work to be done by the adopter to get the telemetry data from the proposed EdgeX message bus solution and do something with it. There are some reporters that come with go-metrics that allow for data to be taken directly from go-metrics and pushed to an intermediary for Prometheus and other monitoring/telemetry platforms as referenced above. These capabilities may not be very well supported and are beyond the scope of this EdgeX ADR. However, even without reporters , it was felt a relatively straightforward exercise (on the part of the adopter) to create an application that listens to the EdgeX metrics message bus and makes that data available via pull REST API for Prometheus if desired. The Prometheus client libraries would have to be added to each service, which would bloat the services (although they are available for both Go and C). 
The benefit of using go-metrics is that it is used already by Hashicorp Consul (so already in the Go services).","title":"Decision"},{"location":"design/adr/0006-Metrics-Collection/#implementation-details-for-go","text":"The go-metrics package offers the following types of metrics collection: Gauges: holds a single integer (int64) value. Example use: Number of notifications in retry status Operations to update the gauge and get the gauge's value Example code: g := metrics . NewGauge () g . Update ( 42 ) // set the value to 42 g . Update ( 10 ) // now set the value to 10 fmt . Println ( g . Value ()) // print out the current value in the gauge = 10 Counter: holds an integer (int64) count. A counter could be implemented with a Gauge. Example use: the current store and forward queue size Operations to increment, decrement, clear and get the counter's count (or value) c := metrics . NewCounter () c . Inc ( 1 ) // add one to the current counter c . Inc ( 10 ) // add 10 to the current counter, making it 11 c . Dec ( 5 ) // decrement the counter by 5, making it 6 fmt . Println ( c . Count ()) // print out the current count of the counter = 6 Meter: measures the rate (int64) of events over time (at one, five and fifteen minute intervals). Example use: the number or rate of requests on a service API Operations: provide the total count of events as well as the mean and rate at 1, 5, and 15 minute rates m := metrics . NewMeter () m . Mark ( 1 ) // add one to the current meter value time . Sleep ( 15 * time . Second ) // allow some time to go by m . Mark ( 1 ) // add one to the current meter value time . Sleep ( 15 * time . Second ) // allow some time to go by m . Mark ( 1 ) // add one to the current meter value time . Sleep ( 15 * time . Second ) // allow some time to go by m . Mark ( 1 ) // add one to the current meter value time . Sleep ( 15 * time . Second ) // allow some time to go by fmt . Println ( m . Count ()) // prints 4 fmt . Println ( m . 
Rate1 ()) // prints 0.11075889086811593 fmt . Println ( m . Rate5 ()) // prints 0.1755318374350548 fmt . Println ( m . Rate15 ()) // prints 0.19136522498856992 fmt . Println ( m . RateMean ()) //prints 0.06665062941438574 Histograms: measure the statistical distribution of values (int64 values) in a collection of values. Example use: response times on APIs Operations: update and get the min, max, count, percentile, sample, sum and variance from the collection h := metrics . NewHistogram ( metrics . NewUniformSample ( 4 )) h . Update ( 10 ) h . Update ( 20 ) h . Update ( 30 ) h . Update ( 40 ) fmt . Println (( h . Max ())) // prints 40 fmt . Println ( h . Min ()) // prints 10 fmt . Println ( h . Mean ()) // prints 25 fmt . Println ( h . Count ()) // prints 4 fmt . Println ( h . Percentile ( 0.25 )) //prints 12.5 fmt . Println ( h . Variance ()) //prints 125 fmt . Println ( h . Sample ()) //prints &{4 {0 0} 4 [10 20 30 40]} Timer: measures both the rate a particular piece of code is called and the distribution of its duration Example use: how often an app service function gets called and how long it takes get through the function Operations: update and get min, max, count, rate1, rate5, rate15, mean, percentile, sum and variance from the collection t := metrics . NewTimer () t . Update ( 10 ) time . Sleep ( 15 * time . Second ) t . Update ( 20 ) time . Sleep ( 15 * time . Second ) t . Update ( 30 ) time . Sleep ( 15 * time . Second ) t . Update ( 40 ) time . Sleep ( 15 * time . Second ) fmt . Println (( t . Max ())) // prints 40 fmt . Println ( t . Min ()) // prints 10 fmt . Println ( t . Mean ()) // prints 25 fmt . Println ( t . Count ()) // prints 4 fmt . Println ( t . Sum ()) // prints 100 fmt . Println ( t . Percentile ( 0.25 )) //prints 12.5 fmt . Println ( t . Variance ()) //prints 125 fmt . Println ( t . Rate1 ()) // prints 0.1116017821771607 fmt . Println ( t . Rate5 ()) // prints 0.1755821073441404 fmt . Println ( t . 
Rate15 ()) // prints 0.1913711954736821 fmt . Println ( t . RateMean ()) //prints 0.06665773963998162 Note The go-metrics package does offer some variants of these like the GaugeFloat64 to hold 64 bit floats.","title":"Implementation Details for Go"},{"location":"design/adr/0006-Metrics-Collection/#consequences","text":"Should there be a global configuration option to turn all metrics off/on? EdgeX doesn't yet have global config so this will have to be by service. Given the potential that each service publishes metrics to the same message topic, 0MQ is not an implementation option unless each service uses a different 0MQ pipe (0MQ topics do not allow multiple publishers). Like the DS to App Services implementation, do we allow 0MQ to be used, but only if each service sends to a different 0MQ topic? Probably not. We need to avoid service bloat. EdgeX is not an enterprise system. How can we implement in a concise and economical way? Use of go-metrics helps on the Go side since this is already a module used by EdgeX modules (and brought in by default). Care and concern must be given to not cause too much bloat on the C side. SMA reports on service CPU, memory, configuration and provides the means to start/stop/restart the services. This is currently outside the scope of the new metric collection/monitoring. In the future, 3rd party mechanisms which offer the same capability as SMA may render the SMA irrelevant. The existing notifications service serves to send a notification via alternate protocol outside of EdgeX. This communication service is provided as a generic communication instrument from any micro service and is independent of any type of data or concern. 
In the future, the notification service could be configured to be a subscriber of the metric messages and trigger appropriate external notification (via email, SMTP, etc.).","title":"Consequences"},{"location":"design/adr/0006-Metrics-Collection/#reference","text":"Possible standards for implementation Open Telemetry statsd go-metrics OpenCensus","title":"Reference"},{"location":"design/adr/0018-Service-Registry/","text":"Service Registry Status Context Existing Behavior Device Services Registry Client Interface Usage Core and Support Services Security Proxy Setup History Problem Statement Decision References Status Approved (by TSC vote on 3/25/21) Context An EdgeX system may be run with an optional service registry, the use of which (see the related ADR 0001-Registry-Refactor [1]) can be controlled on a per-service basis via the -r/-registry command line options. For the purposes of this ADR, a base assumption is that the registry has been enabled for all services. The default service registry used by EdgeX is Consul [2] from Hashicorp. Consul is also the default configuration provider for EdgeX. This ADR is meant to address the current usage of the registry by EdgeX services, and in particular whether the EdgeX services are using the registry to determine the location of peer services vs. using static per-service configuration. The reason this is being investigated is that there has been a proposal that EdgeX do away with the registry functionality, as the current implementation is not considered secure , due to the current configuration of Consul as used by the latest version of EdgeX (Hanoi/1.3.0). 
According to the original Service Name Design document (v6) [3] written during the California (0.6) release of EdgeX, all EdgeX Foundry microservices should be able to accomplish the following tasks: Register with the configuration/registration (referred to simply as \u201cthe registry\u201d for the rest of this document) provider (today Consul) Respond to availability requests Respond to shutdown requests by: Cleaning up resources in an orderly fashion Unregistering itself from the registry Get the address (host & port) of another EdgeX microservice by service name through the registry (when enabled) The purpose of this design is to ensure that services themselves advertise their location to the rest of the system by first self- registering. Most service registries (including Consul) implement some sort of health check mechanism. If a service is failing one or more health checks, the registry will stop reporting its availability when queried. Note - the design specifically excludes device services from this service lookup, as Core Metadata maintains a persistent store of DeviceService objects which provide service location for device services. Existing Behavior This section documents the existing behavior in the Hanoi (1.3.x) version of EdgeX. Device Services Device Virtual's behavior was first tested using the edgexfoundry snap (which is configured to always use the registry) by doing the following: $ sudo snap install edgexfoundry $ cp /var/snap/edgexfoundry/current/config/device-virtual/res/configuration.toml . I edited the file, removing the [Client.Data] section completely and copied the file back into place. Next I enabled device-virtual while monitoring the journal output. 
$ sudo cp configuration.toml /var/snap/edgexfoundry/current/config/device-virtual/res/ $ sudo snap set edgexfoundry device-virtual=on The following error was seen in the journal: level=INFO app=device-virtual source=httpserver.go:94 msg=\"Web server starting (0.0.0.0:49990)\" error: fatal error; Host setting for Core Data client not configured Next I followed the same steps, but instead of completely removing the client, I instead set the client ports to invalid values. In this case the service logged the following errors and exited: level=ERROR app=device-virtual source=service.go:149 msg=\"DeviceServicForName failed: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\" level=ERROR app=device-virtual source=init.go:45 msg=\"Couldn't register to metadata service: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\\n\" Note - in order to run this second test, the easiest way to do so is to remove and reinstall the snap vs. manually wiping out device-virtual's configuration in Consul. I could have also stopped the service, modified the configuration directly in Consul, and restarted the service. Registry Client Interface Usage Next the service's usage of the go-mod-registry Client interface was examined: type Client interface { // Registers the current service with Registry for discover and health check Register() error // Un-registers the current service with Registry for discover and health check Unregister() error // Simply checks if Registry is up and running at the configured URL IsAlive() bool // Gets the service endpoint information for the target ID from the Registry GetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error) // Checks with the Registry if the target service is available, i.e. 
registered and healthy IsServiceAvailable(serviceId string) (bool, error) } Summary If a device service is started with the registry flag set: Both Device SDKs register with the registry on startup, and unregister from the registry on normal shutdown. The Go SDK (device-sdk-go) queries the registry to check dependent service availability and health (via IsServiceAvailable ) on startup. Regardless of the registry setting, the Go SDK always sources the addresses of its dependent services from the Client* configuration stanzas. The C SDK queries the registry for the addresses of its dependent services. It pings the services directly to determine their availability and health. Core and Support Services The same approach was used for Core and Support services (i.e. reviewing the usage of go-mod-bootstrap's Client interface), and ironically, the SMA seems to be the only service in edgex-go that actually queries the registry for service location: ./internal/system/agent/getconfig/executor.go: ep, err := e.registryClient.GetServiceEndpoint(serviceName) ./internal/system/agent/direct/metrics.go: e, err := m.registryClient.GetServiceEndpoint(serviceName) In summary, other than the SMA's configuration and metrics logic, the Core and Support services behave in the same manner as device-sdk-go. Note - the SMA also has a longstanding issue #2486 where it continuously logs errors if one (or more) of the Support Services are not running. As described in the issue, this could be avoided if the SMA used the registry to determine if the services were actually available. See related issue #1662 ('Look at Driving \"Default Services List\" via Configuration'). Security Proxy Setup The security-proxy-setup service also relies on static service address configuration to configure the server routes for each of the services accessible through the API Gateway (aka Kong). 
Although it uses the same TOML-based client config keys as the other services, these configuration values are only ever read from the security-proxy-setup's local configuration.toml file, as the security services have never supported using our configuration provider (aka Consul). Note - Another point worth mentioning with respect to security services is that in the Geneva and Hanoi releases the service health checks registered by the services (and the associated IsServiceAvailable method) are used to orchestrate the ordered startup of the security services via a set of Consul scripts. This additional orchestration is only performed when EdgeX is deployed via docker, and is slated to be removed as part of the Ireland release. History After a bit of research reaching as far back as the California (0.6.1) release of EdgeX, I've managed to piece together why the current implementation works the way it does. This history focuses solely on the core and support services. The California release of EdgeX was released in June of 2018 and was the first to include services written using Go. This version of EdgeX as well as versions through the Fuji release all relied on a bootstrapping service called core-config-seed which was responsible for seeding the configuration of all of the core and support services into Consul prior to any of the services being started. This release actually preceded usage of TOML for configuration files, and instead just used a flat key/value format, with keys converted from legacy Java property names (e.g. meta.db.device.url ) to Camel[Pascal]/Case (e.g. MetaDeviceServiceURL). I chose the config key mentioned above on purpose: MetaDeviceURL = \"http://edgex-core-metadata:48081/api/v1/device\" Not only did this config key provide the address of core metadata, it also provided the path of a specific REST endpoint. In later releases of EdgeX, the address of the service and the specific endpoint paths were de-coupled. 
Instead of following the Service Name design (which was finalized two months earlier), the initial implementation followed the legacy Java implementation and initialized its service clients for each required REST endpoint (belonging to another EdgeX service) directly from the associated *URL config key read from Consul (if enabled) or directly from the configuration file. The shared client initialization code also created an Endpoint monitor goroutine and passed it a go channel used by the service to receive updates to the REST API endpoint URL. This monitor goroutine effectively polled Consul every 15s (this became configurable in later versions) for the client's service address and if a change was detected, would write the updated endpoint URL to the given channel, effectively ensuring that the service started using the new URL. It wasn't till late in the Geneva development cycle that I noticed log messages which made me aware of the fact that every one of our services was making a REST call to check the address of a service endpoint every 15s, for every REST endpoint it used! An issue was filed (https://github.com/edgexfoundry/edgex-go/issues/2594), and the client monitoring was removed as part of the Geneva 1.2.1 release. Problem Statement The fundamental problem with the existing implementations (as described above) is that there is too much duplication of configuration across services. For instance, Core Data's service port can easily be changed by passing the environment variable SERVICE_PORT to the service on startup. This overrides the configuration read from the configuration provider, and will cause Core Data to listen on the new port, however it has no impact on any services which use Core Data, as the client config for each is read from the configuration provider (excluding security-proxy-setup). This means in order to change a service port, environment variable overrides (e.g. 
CLIENTS_COREDATA_PORT) need to be set for every client service as well as security-proxy-setup (if required). Decision Update the core, support, and security-proxy-setup services to use go-mod-registry's Client.GetServiceEndpoint method (if started with the --registry option) to determine (a) if a service dependency is available and (b) use the returned address information to initialize client endpoints (or setup the correct route in the case of proxy-setup). The same changes also need to be applied to the App Functions SDK and Go Device SDK, with only minor changes required in the C Device SDK (see previous comments re: the current implementation). Note - this design only works if service registration occurs before the service initializes its clients. For instance, Core Data and Core Metadata both depend on the other, and thus if both defer service registration till after client initialization, neither will be able to successfully look up the address of the other service. Consequences One impact of this decision is that since the security-proxy-setup service currently runs before any of the core and support services are started, it would not be possible to implement this proposal without also modifying the service to use a lazy initialization of the API Gateway's routes. As such, the implementation of this ADR will require more design work with respect to security-proxy-setup. Some of the issues include: Splitting the configuration of the API Gateway from the service route initialization logic, either by making the service long-running or splitting route initialization into its own service. Handling registry and non-registry scenarios (i.e. add --registry command-line support to security-proxy-setup). Handling changes to service address information (i.e. dynamically update API Gateway routes if/when service addresses change). Finally the proxy-setup's configuration needs to be updated so that its Route entries use service-keys instead of arbitrary names (e.g. 
Route.core-data vs. Route.CoreData ). References [1] ADR 0001-Registry-Refactor [2] Consul [3] Service Name Design v6","title":"Service Registry"},{"location":"design/adr/0018-Service-Registry/#service-registry","text":"Status Context Existing Behavior Device Services Registry Client Interface Usage Core and Support Services Security Proxy Setup History Problem Statement Decision References","title":"Service Registry"},{"location":"design/adr/0018-Service-Registry/#status","text":"Approved (by TSC vote on 3/25/21)","title":"Status"},{"location":"design/adr/0018-Service-Registry/#context","text":"An EdgeX system may be run with an optional service registry, the use of which (see the related ADR 0001-Registry-Refactor [1]) can be controlled on a per-service basis via the -r/-registry command line options. For the purposes of this ADR, a base assumption is that the registry has been enabled for all services. The default service registry used by EdgeX is Consul [2] from Hashicorp. Consul is also the default configuration provider for EdgeX. This ADR is meant to address the current usage of the registry by EdgeX services, and in particular whether the EdgeX services are using the registry to determine the location of peer services vs. using static per-service configuration. The reason this is being investigated is that there has been a proposal that EdgeX do away with the registry functionality, as the current implementation is not considered secure , due to the current configuration of Consul as used by the latest version of EdgeX (Hanoi/1.3.0). 
According to the original Service Name Design document (v6) [3] written during the California (0.6) release of EdgeX, all EdgeX Foundry microservices should be able to accomplish the following tasks: Register with the configuration/registration (referred to simply as \u201cthe registry\u201d for the rest of this document) provider (today Consul) Respond to availability requests Respond to shutdown requests by: Cleaning up resources in an orderly fashion Unregistering itself from the registry Get the address (host & port) of another EdgeX microservice by service name through the registry (when enabled) The purpose of this design is to ensure that services themselves advertise their location to the rest of the system by first self-registering. Most service registries (including Consul) implement some sort of health check mechanism. If a service is failing one or more health checks, the registry will stop reporting its availability when queried. Note - the design specifically excludes device services from this service lookup, as Core Metadata maintains a persistent store of DeviceService objects which provide service location for device services.","title":"Context"},{"location":"design/adr/0018-Service-Registry/#existing-behavior","text":"This section documents the existing behavior in the Hanoi (1.3.x) version of EdgeX.","title":"Existing Behavior"},{"location":"design/adr/0018-Service-Registry/#device-services","text":"Device Virtual's behavior was first tested using the edgexfoundry snap (which is configured to always use the registry) by doing the following: $ sudo snap install edgexfoundry $ cp /var/snap/edgexfoundry/current/config/device-virtual/res/configuration.toml . I edited the file, removing the [Client.Data] section completely and copied the file back into place. Next I enabled device-virtual while monitoring the journal output. 
$ sudo cp configuration.toml /var/snap/edgexfoundry/current/config/device-virtual/res/ $ sudo snap set edgexfoundry device-virtual=on The following error was seen in the journal: level=INFO app=device-virtual source=httpserver.go:94 msg=\"Web server starting (0.0.0.0:49990)\" error: fatal error; Host setting for Core Data client not configured Next I followed the same steps, but instead of completely removing the client, I instead set the client ports to invalid values. In this case the service logged the following errors and exited: level=ERROR app=device-virtual source=service.go:149 msg=\"DeviceServicForName failed: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\" level=ERROR app=device-virtual source=init.go:45 msg=\"Couldn't register to metadata service: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\\n\" Note - in order to run this second test, the easiest way to do so is to remove and reinstall the snap vs. manually wiping out device-virtual's configuration in Consul. I could have also stopped the service, modified the configuration directly in Consul, and restarted the service.","title":"Device Services"},{"location":"design/adr/0018-Service-Registry/#registry-client-interface-usage","text":"Next the service's usage of the go-mod-registry Client interface was examined: type Client interface { // Registers the current service with Registry for discover and health check Register() error // Un-registers the current service with Registry for discover and health check Unregister() error // Simply checks if Registry is up and running at the configured URL IsAlive() bool // Gets the service endpoint information for the target ID from the Registry GetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error) // Checks with the Registry if the target service is available, i.e. 
registered and healthy IsServiceAvailable(serviceId string) (bool, error) }","title":"Registry Client Interface Usage"},{"location":"design/adr/0018-Service-Registry/#summary","text":"If a device service is started with the registry flag set: Both Device SDKs register with the registry on startup, and unregister from the registry on normal shutdown. The Go SDK (device-sdk-go) queries the registry to check dependent service availability and health (via IsServiceAvailable ) on startup. Regardless of the registry setting, the Go SDK always sources the addresses of its dependent services from the Client* configuration stanzas. The C SDK queries the registry for the addresses of its dependent services. It pings the services directly to determine their availability and health.","title":"Summary"},{"location":"design/adr/0018-Service-Registry/#core-and-support-services","text":"The same approach was used for Core and Support services (i.e. reviewing the usage of go-mod-bootstrap's Client interface), and ironically, the SMA seems to be the only service in edgex-go that actually queries the registry for service location: ./internal/system/agent/getconfig/executor.go: ep, err := e.registryClient.GetServiceEndpoint(serviceName) ./internal/system/agent/direct/metrics.go: e, err := m.registryClient.GetServiceEndpoint(serviceName) In summary, other than the SMA's configuration and metrics logic, the Core and Support services behave in the same manner as device-sdk-go. Note - the SMA also has a longstanding issue #2486 where it continuously logs errors if one (or more) of the Support Services are not running. As described in the issue, this could be avoided if the SMA used the registry to determine if the services were actually available. 
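The client-initialization flow this summary describes (check a dependency's availability, then fetch its endpoint from the registry) can be sketched in Go. The `ServiceEndpoint` struct and `RegistryClient` interface below are simplified local stand-ins for the go-mod-registry types, not the actual module API, and the in-memory `fakeRegistry` stands in for Consul:

```go
package main

import "fmt"

// ServiceEndpoint is a simplified stand-in for go-mod-registry's
// types.ServiceEndpoint (illustrative only).
type ServiceEndpoint struct {
	ServiceId string
	Host      string
	Port      int
}

// RegistryClient is the subset of the registry Client interface used below.
type RegistryClient interface {
	IsServiceAvailable(serviceId string) (bool, error)
	GetServiceEndpoint(serviceId string) (ServiceEndpoint, error)
}

// clientBaseURL checks that a dependency is registered and healthy, then
// builds its base URL from the registry-provided address information.
func clientBaseURL(rc RegistryClient, serviceKey string) (string, error) {
	ok, err := rc.IsServiceAvailable(serviceKey)
	if err != nil {
		return "", err
	}
	if !ok {
		return "", fmt.Errorf("service %s is not available", serviceKey)
	}
	ep, err := rc.GetServiceEndpoint(serviceKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("http://%s:%d", ep.Host, ep.Port), nil
}

// fakeRegistry is an in-memory stub standing in for Consul.
type fakeRegistry struct{ endpoints map[string]ServiceEndpoint }

func (f fakeRegistry) IsServiceAvailable(id string) (bool, error) {
	_, ok := f.endpoints[id]
	return ok, nil
}

func (f fakeRegistry) GetServiceEndpoint(id string) (ServiceEndpoint, error) {
	return f.endpoints[id], nil
}

func main() {
	reg := fakeRegistry{endpoints: map[string]ServiceEndpoint{
		"edgex-core-metadata": {ServiceId: "edgex-core-metadata", Host: "localhost", Port: 48081},
	}}
	url, err := clientBaseURL(reg, "edgex-core-metadata")
	fmt.Println(url, err)
}
```

Note how the availability check happens before the endpoint lookup, which is exactly why the registration-ordering caveat in the Decision matters: an unregistered dependency fails the first step.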
See related issue #1662 ('Look at Driving \"Default Services List\" via Configuration').","title":"Core and Support Services"},{"location":"design/adr/0018-Service-Registry/#security-proxy-setup","text":"The security-proxy-setup service also relies on static service address configuration to configure the server routes for each of the services accessible through the API Gateway (aka Kong). Although it uses the same TOML-based client config keys as the other services, these configuration values are only ever read from the security-proxy-setup's local configuration.toml file, as the security services have never supported using our configuration provider (aka Consul). Note - Another point worth mentioning with respect to security services is that in the Geneva and Hanoi releases the service health checks registered by the services (and the associated IsServiceAvailable method) are used to orchestrate the ordered startup of the security services via a set of Consul scripts. This additional orchestration is only performed when EdgeX is deployed via docker, and is slated to be removed as part of the Ireland release.","title":"Security Proxy Setup"},{"location":"design/adr/0018-Service-Registry/#history","text":"After a bit of research reaching as far back as the California (0.6.1) release of EdgeX, I've managed to piece together why the current implementation works the way it does. This history focuses solely on the core and support services. The California release of EdgeX was released in June of 2018 and was the first to include services written using Go. This version of EdgeX as well as versions through the Fuji release all relied on a bootstrapping service called core-config-seed which was responsible for seeding the configuration of all of the core and support services into Consul prior to any of the services being started. 
This release actually preceded usage of TOML for configuration files, and instead just used a flat key/value format, with keys converted from legacy Java property names (e.g. meta.db.device.url ) to Camel[Pascal]/Case (e.g. MetaDeviceServiceURL). I chose the config key mentioned above on purpose: MetaDeviceURL = \"http://edgex-core-metadata:48081/api/v1/device\" Not only did this config key provide the address of core metadata, it also provided the path of a specific REST endpoint. In later releases of EdgeX, the address of the service and the specific endpoint paths were de-coupled. Instead of following the Service Name design (which was finalized two months earlier), the initial implementation followed the legacy Java implementation and initialized its service clients for each required REST endpoint (belonging to another EdgeX service) directly from the associated *URL config key read from Consul (if enabled) or directly from the configuration file. The shared client initialization code also created an Endpoint monitor goroutine and passed it a go channel used by the service to receive updates to the REST API endpoint URL. This monitor goroutine effectively polled Consul every 15s (this became configurable in later versions) for the client's service address and if a change was detected, would write the updated endpoint URL to the given channel, effectively ensuring that the service started using the new URL. It wasn't till late in the Geneva development cycle that I noticed log messages which made me aware of the fact that every one of our services was making a REST call to check the address of a service endpoint every 15s, for every REST endpoint it used! 
An issue was filed (https://github.com/edgexfoundry/edgex-go/issues/2594), and the client monitoring was removed as part of the Geneva 1.2.1 release.","title":"History"},{"location":"design/adr/0018-Service-Registry/#problem-statement","text":"The fundamental problem with the existing implementations (as described above) is that there is too much duplication of configuration across services. For instance, Core Data's service port can easily be changed by passing the environment variable SERVICE_PORT to the service on startup. This overrides the configuration read from the configuration provider, and will cause Core Data to listen on the new port, however it has no impact on any services which use Core Data, as the client config for each is read from the configuration provider (excluding security-proxy-setup). This means in order to change a service port, environment variable overrides (e.g. CLIENTS_COREDATA_PORT) need to be set for every client service as well as security-proxy-setup (if required).","title":"Problem Statement"},{"location":"design/adr/0018-Service-Registry/#decision","text":"Update the core, support, and security-proxy-setup services to use go-mod-registry's Client.GetServiceEndpoint method (if started with the --registry option) to determine (a) if a service dependency is available and (b) use the returned address information to initialize client endpoints (or set up the correct route in the case of proxy-setup). The same changes also need to be applied to the App Functions SDK and Go Device SDK, with only minor changes required in the C Device SDK (see previous comments re: the current implementation). Note - this design only works if service registration occurs before the service initializes its clients. 
For instance, Core Data and Core Metadata both depend on the other, and thus if both defer service registration till after client initialization, neither will be able to successfully look up the address of the other service.","title":"Decision"},{"location":"design/adr/0018-Service-Registry/#consquences","text":"One impact of this decision is that since the security-proxy-setup service currently runs before any of the core and support services are started, it would not be possible to implement this proposal without also modifying the service to use a lazy initialization of the API Gateway's routes. As such, the implementation of this ADR will require more design work with respect to security-proxy-setup. Some of the issues include: Splitting the configuration of the API Gateway from the service route initialization logic, either by making the service long-running or splitting route initialization into its own service. Handling registry and non-registry scenarios (i.e. add --registry command-line support to security-proxy-setup). Handling changes to service address information (i.e. dynamically update API Gateway routes if/when service addresses change). Finally, the proxy-setup's configuration needs to be updated so that its Route entries use service-keys instead of arbitrary names (e.g. Route.core-data vs. Route.CoreData ).","title":"Consquences"},{"location":"design/adr/0018-Service-Registry/#references","text":"[1] ADR 0001-Registry-Refactor [2] Consul [3] Service Name Design v6","title":"References"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/","text":"Device Services Send Events via Message Bus Status Context Decision Which Message Bus implementations? 
Go Device SDK C Device SDK Core Data and Persistence V2 Event DTO Validation Message Envelope Application Services MessageBus Topics Configuration Device Services [MessageQueue] Core Data [MessageQueue] Application Services [MessageBus] [Binding] Secure Connections Consequences Status Approved Context Currently EdgeX Events are sent from Device Services via HTTP to Core Data, which then puts the Events on the MessageBus after optionally persisting them to the database. This ADR details how Device Services will send EdgeX Events to other services via the EdgeX MessageBus. Note: Though this design is centered on device services, it does have cross-cutting impacts with other EdgeX services and modules Note: This ADR is dependent on the Secret Provider for All to provide the secrets for secure Message Bus connections. Decision Which Message Bus implementations? Multiple Device Services may need to be publishing Events to the MessageBus concurrently. ZMQ will not be a valid option if multiple Device Services are configured to publish. This is because ZMQ only allows for a single publisher. ZMQ will still be valid if only one Device Service is publishing Events. The MQTT and Redis Streams are valid options to use when multiple Device Services are required, as they both support multiple publishers. These are the only other implementations currently available for Go services. The C-based device services do not yet have a MessageBus implementation. See the C Device SDK below for details. Note: Documentation will need to be clear when ZMQ can be used and when it can not be used. Go Device SDK The Go Device SDK will take advantage of the existing go-mod-messaging module to enable use of the EdgeX MessageBus. A new bootstrap handler will be created which initializes the MessageBus client based on configuration. See Configuration section below for details. 
The Go Device SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details. C Device SDK The C Device SDK will implement its own MessageBus abstraction similar to the one in go-mod-messaging . The first implementation type (MQTT or Redis Streams) is TBD. Using this abstraction allows for future implementations to be added when use cases warrant the additional implementations. As with the Go SDK, the C SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details. Core Data and Persistence With this design, Events will be sent directly to Application Services w/o going through Core Data and thus will not be persisted unless changes are made to Core Data. To allow Events to optionally continue to be persisted, Core Data will become an additional or secondary (and optional) subscriber for the Events from the MessageBus. The Events will be persisted when they are received. Core Data will also retain the ability to receive Events via HTTP, persist them and publish them to the MessageBus as is done today. This allows for the flexibility to have some device services configured to POST Events and some configured to publish Events while we transition the Device Services to all have the capability of publishing Events. In the future, once this new Publish approach has been proven, we may decide to remove POSTing Events to Core Data from the Device SDKs. The existing PersistData setting will be ignored by the code path subscribing to Events since the only reason to do this is to persist the Events. 
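The publish-vs-POST toggle described above can be sketched as follows. `MessageQueueConfig` and `sendEvent` are illustrative stand-ins (not the actual SDK types), with the two callbacks standing in for the real MessageBus and HTTP transports:

```go
package main

import "fmt"

// MessageQueueConfig holds just the fields of the proposed [MessageQueue]
// section needed for the publish-vs-POST decision (illustrative, not the
// actual SDK configuration type).
type MessageQueueConfig struct {
	Enabled            bool
	PublishTopicPrefix string
}

// sendEvent mimics the SDK behavior described above: publish the Event to
// the MessageBus when MessageQueue.Enabled is true, otherwise fall back to
// POSTing it to Core Data.
func sendEvent(cfg MessageQueueConfig, publish func(topic string) error, post func() error, topicSuffix string) error {
	if cfg.Enabled {
		return publish(cfg.PublishTopicPrefix + "/" + topicSuffix)
	}
	return post()
}

func main() {
	cfg := MessageQueueConfig{Enabled: true, PublishTopicPrefix: "edgex/events"}
	err := sendEvent(cfg,
		func(topic string) error { fmt.Println("published to", topic); return nil },
		func() error { fmt.Println("POSTed to Core Data"); return nil },
		"Random-Integer-Device/Random-Integer-Device1/Int16")
	if err != nil {
		fmt.Println("send failed:", err)
	}
}
```

Keeping the decision in one small function mirrors the ADR's intent: the transition period supports both paths, and removing the POST path later only deletes the fallback branch.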
There is a race condition for Marked As Pushed when Core Data is persisting Events received from the MessageBus. Core Data may not have finished persisting an Event before the Application Service has processed the Event and requested the Event be Marked As Pushed . It was decided to remove the Mark as Pushed capability and just rely on time-based scrubbing of old Events. V2 Event DTO As this development will be part of the Ireland release all Events published to the MessageBus will use the V2 Event DTO. This is already implemented in Core Data for the V2 AddEvent API. Validation Services receiving the Event DTO from the MessageBus will log validation errors and stop processing the Event. Message Envelope EdgeX Go Services currently use a custom Message Envelope for all data that is published to the MessageBus. This envelope wraps the data with metadata, which is ContentType (JSON or CBOR), Correlation-Id and the obsolete Checksum . The Checksum is used when the data is CBOR encoded to identify the Event in V1 API to mark it as pushed. This checksum is no longer needed as the V2 Event DTO requires the ID be set by the Device Services which will always be used in the V2 API to mark the Events as pushed. The Message Envelope will be updated to remove this property. The C SDK will recreate this Message Envelope. Application Services As part of the V2 API consumption work in Ireland the App Services SDK will be changed to expect to receive V2 Event DTOs rather than the V1 Event model. It will also be updated to no longer expect or use the Checksum currently on the Message Envelope. Note these changes must occur for the V2 consumption and are not directly tied to this effort. The App Service SDK will be enhanced for the secure MessageBus connection described below. See Secure Connections for details. MessageBus Topics Note: The change recommended here is not required for this design, but it provides a good opportunity to adopt it. 
Currently Core Data publishes Events to the simple events topic. All Application Services running receive every Event published, whether they want them or not. The Events can be filtered out using the FilterByDeviceName or FilterByResourceName pipeline functions, but the Application Services still receives every Event and processes all the Events to some extent. This could cause load issues in a deployment with many devices and a large volume of Events from various devices or a very verbose device that the Application Services is not interested in. Note: The current FilterByDeviceName is only good if the device name is known statically and the only instance of the device defined by the DeviceProfileName . What we really need is FilterByDeviceProfileName which allows multiple instances of a device to be filtered for, rather than a single instance as it is now. The V2 API will be adding DeviceProfileName to the Events, so in Ireland this filter will be possible. Pub/Sub systems have advanced topic schema, which we can take advantage of from Application Services to filter for just the Events the Application Service actually wants. Publishers of Events must add the DeviceProfileName , DeviceName and SourceName to the topic in the form edgex/events/<DeviceProfileName>/<DeviceName>/<SourceName> . The SourceName is the Resource or Command name used to create the Event. 
This allows Application Services to filter for just the Events from the device(s) it wants by only subscribing to those DeviceProfileNames or the specific DeviceNames or just the specific SourceNames. Example subscribe topics if above schema is used: edgex/events/# All Events Core Data will subscribe using this topic schema edgex/events/Random-Integer-Device/# Any Events from devices created from the Random-Integer-Device device profile edgex/events/Random-Integer-Device/Random-Integer-Device1 Only Events from the Random-Integer-Device1 Device edgex/events/Random-Integer-Device/#/Int16 Any Events with Readings from Int16 device resource from devices created from the Random-Integer-Device device profile. edgex/events/Modbus-Device/#/HVACValues Any Events with Readings from HVACValues device command from devices created from the Modbus-Device device profile. The MessageBus abstraction allows for multiple subscriptions, so an Application Service could specify to receive data from multiple specific device profiles or devices by creating multiple subscriptions. i.e. edgex/Events/Random-Integer-Device/# and edgex/Events/Random-Boolean-Device/# . Currently the App SDK only allows for a single subscription topic to be configured, but that could easily be expanded to handle a list of subscriptions. See Configuration section below for details. Core Data's existing publishing of Events would also need to be changed to use this new topic schema. One challenge with this is Core Data doesn't currently know the DeviceProfileName or DeviceName when it receives a CBOR encoded event. This is because it doesn't decode the Event until after it has published it to the MessageBus. Also, Core Data doesn't know of SourceName at all. The V2 API will be enhanced to change the AddEvent endpoint from /event to /event/{profile}/{device}/{source} so that DeviceProfileName , DeviceName , and SourceName are always known no matter how the request is encoded. 
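The topic schema and the wildcard subscriptions above can be sketched in Go. `matchesSubscription` is a deliberately simplified matcher that only handles a trailing `#` (real MQTT matching also supports `+` and mid-topic wildcards, which the mid-topic `#` examples above would need):

```go
package main

import (
	"fmt"
	"strings"
)

// publishTopic builds the Event publish topic from the configured
// PublishTopicPrefix plus the DeviceProfileName, DeviceName and SourceName,
// per the edgex/events/<profile>/<device>/<source> schema described above.
func publishTopic(prefix, profileName, deviceName, sourceName string) string {
	return fmt.Sprintf("%s/%s/%s/%s", prefix, profileName, deviceName, sourceName)
}

// matchesSubscription reports whether topic matches a subscribe topic whose
// final level may be the multi-level wildcard "#". This is a simplified
// illustration, not a broker's actual matching algorithm.
func matchesSubscription(sub, topic string) bool {
	if strings.HasSuffix(sub, "/#") {
		return strings.HasPrefix(topic, strings.TrimSuffix(sub, "#"))
	}
	return sub == topic
}

func main() {
	topic := publishTopic("edgex/events", "Random-Integer-Device", "Random-Integer-Device1", "Int16")
	fmt.Println(topic)
	fmt.Println(matchesSubscription("edgex/events/Random-Integer-Device/#", topic))
	fmt.Println(matchesSubscription("edgex/events/Random-Boolean-Device/#", topic))
}
```

With this schema, a subscriber interested only in the Random-Integer-Device profile never sees Random-Boolean-Device traffic, which is the load-reduction argument made above.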
This new topic approach will be enabled via each publisher's PublishTopic having the DeviceProfileName , DeviceName and SourceName added to the configured PublishTopicPrefix PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix See Configuration section below for details. Configuration Device Services All Device services will have the following additional configuration to allow connecting and publishing to the MessageBus. As described above in the MessageBus Topics section, the PublishTopic will include the DeviceProfileName and DeviceName . [MessageQueue] A MessageQueue section will be added, which is similar to that used in Core Data today, but with PublishTopicPrefix instead of Topic . To enable secure connections, the Username & Password have been replaced with ClientAuth & SecretPath . See Secure Connections section below for details. The added Enabled property controls whether the Device Service publishes to the MessageBus or POSTs to Core Data. [MessageQueue] Enabled = true Protocol = \"tcp\" Host = \"localhost\" Port = 1883 Type = \"mqtt\" PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none` Core Data Core Data will also require additional configuration to be able to subscribe to receive Events from the MessageBus. 
As described above in the MessageBus Topics section, the PublishTopicPrefix will have DeviceProfileName and DeviceName added to create the actual Publish Topic. [MessageQueue] The MessageQueue section will be changed so that the Topic property changes to PublishTopicPrefix and SubscribeEnabled and SubscribeTopic will be added. As with device services configuration, the Username & Password have been replaced with ClientAuth & SecretPath for secure connections. See Secure Connections section below for details. In addition, the Boolean SubscribeEnabled property will be used to control if the service subscribes to Events from the MessageBus or not. [MessageQueue] Protocol = \"tcp\" Host = \"localhost\" Port = 1883 Type = \"mqtt\" PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix SubscribeEnabled = true SubscribeTopic = \"edgex/events/#\" [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"edgex-core-data\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none` Application Services [MessageBus] Similar to above, the Application Services MessageBus configuration will change to allow for secure connection to the MessageBus. The Username & Password have been replaced with ClientAuth & SecretPath for secure connections. See Secure Connections section below for details. 
[MessageBus.Optional] # MQTT Specific options # Client Identifiers ClientId = \"\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none` [Binding] The Binding configuration section will require changes for the subscribe topics scheme described in the MessageBus Topics section above to filter for Events from specific device profiles or devices. SubscribeTopic will change from a string property containing a single topic to the SubscribeTopics string property containing a comma-separated list of topics. This allows for the flexibility for the property to be a single topic with the # wildcard so the Application Service receives all Events as it does today. Receive only Events from the Random-Integer-Device and Random-Boolean-Device profiles [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device\" Receive only Events from the Random-Integer-Device1 from the Random-Integer-Device profile [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/Random-Integer-Device/Random-Integer-Device1\" or receive all Events: [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/#\" Secure Connections As stated earlier, this ADR is dependent on the Secret Provider for All ADR to provide a common Secret Provider for all EdgeX Services to access their secrets. 
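Splitting the comma-separated SubscribeTopics value shown in the [Binding] examples above into individual subscriptions could look like the following sketch (a hypothetical helper, not actual App SDK code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSubscribeTopics splits the comma-separated SubscribeTopics setting
// into individual subscription topics, trimming surrounding whitespace so
// entries like "a, b" yield clean topic strings and empty entries are dropped.
func parseSubscribeTopics(value string) []string {
	var topics []string
	for _, t := range strings.Split(value, ",") {
		if t = strings.TrimSpace(t); t != "" {
			topics = append(topics, t)
		}
	}
	return topics
}

func main() {
	topics := parseSubscribeTopics("edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device")
	fmt.Println(topics)
}
```

Each resulting string would then become its own MessageBus subscription, which is how the single-topic SubscribeTopic property could be expanded to a list without changing the underlying abstraction.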
Once this is available, the MessageBus connection can be secured via the following configurable client authentication modes, which follow a similar implementation to the secure MQTT Export and secure MQTT Trigger used in Application Services. none - No authentication usernamepassword - Username & password authentication. clientcert - Client certificate and key for authentication. The secrets specified for the above options are pulled from the Secret Provider using the configured SecretPath . How the secrets are injected into the Secret Provider is out of scope for this ADR and covered in the Secret Provider for All ADR. Consequences If the C SDK doesn't support ZMQ or Redis Streams then there must be an MQTT Broker running when a C Device service is in use and configured to publish to the MessageBus. Since we've adopted the publish topic scheme with DeviceProfileName and DeviceName the V2 API must restrict the characters used in device names to those allowed in a topic. An issue for V2 API already exists for restricting the allowable characters to RFC 3986 , which will suffice. Newer ZMQ may allow for multiple publishers. Requires investigation and very likely rework of the ZMQ implementation in go-mod-messaging. No alternative has been found. Mark as Pushed V2 API will be removed from Core Data, Core Data Client and the App SDK Consider moving App Service Binding to Writable. (out of scope for this ADR)","title":"Device Services Send Events via Message Bus"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#device-services-send-events-via-message-bus","text":"Status Context Decision Which Message Bus implementations? 
Go Device SDK C Device SDK Core Data and Persistence V2 Event DTO Validation Message Envelope Application Services MessageBus Topics Configuration Device Services [MessageQueue] Core Data [MessageQueue] Application Services [MessageBus] [Binding] Secure Connections Consequences","title":"Device Services Send Events via Message Bus"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#status","text":"Approved","title":"Status"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#context","text":"Currently EdgeX Events are sent from Device Services via HTTP to Core Data, which then puts the Events on the MessageBus after optionally persisting them to the database. This ADR details how Device Services will send EdgeX Events to other services via the EdgeX MessageBus. Note: Though this design is centered on device services, it does have cross-cutting impacts with other EdgeX services and modules Note: This ADR is dependent on the Secret Provider for All to provide the secrets for secure Message Bus connections.","title":"Context"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#decision","text":"","title":"Decision"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#which-message-bus-implementations","text":"Multiple Device Services may need to be publishing Events to the MessageBus concurrently. ZMQ will not be a valid option if multiple Device Services are configured to publish. This is because ZMQ only allows for a single publisher. ZMQ will still be valid if only one Device Service is publishing Events. The MQTT and Redis Streams are valid options to use when multiple Device Services are required, as they both support multiple publishers. These are the only other implementations currently available for Go services. The C-based device services do not yet have a MessageBus implementation. See the C Device SDK below for details. 
Note: Documentation will need to be clear when ZMQ can be used and when it can not be used.","title":"Which Message Bus implementations?"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#go-device-sdk","text":"The Go Device SDK will take advantage of the existing go-mod-messaging module to enable use of the EdgeX MessageBus. A new bootstrap handler will be created which initializes the MessageBus client based on configuration. See Configuration section below for details. The Go Device SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details.","title":"Go Device SDK"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#c-device-sdk","text":"The C Device SDK will implement its own MessageBus abstraction similar to the one in go-mod-messaging . The first implementation type (MQTT or Redis Streams) is TBD. Using this abstraction allows for future implementations to be added when use cases warrant the additional implementations. As with the Go SDK, the C SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details.","title":"C Device SDK"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#core-data-and-persistence","text":"With this design, Events will be sent directly to Application Services w/o going through Core Data and thus will not be persisted unless changes are made to Core Data. To allow Events to optionally continue to be persisted, Core Data will become an additional or secondary (and optional) subscriber for the Events from the MessageBus. The Events will be persisted when they are received. 
Core Data will also retain the ability to receive Events via HTTP, persist them and publish them to the MessageBus as is done today. This allows for the flexibility to have some device services configured to POST Events and some configured to publish Events while we transition the Device Services to all have the capability to publish Events. In the future, once this new Publish approach has been proven, we may decide to remove POSTing Events to Core Data from the Device SDKs. The existing PersistData setting will be ignored by the code path subscribing to Events since the only reason to do this is to persist the Events. There is a race condition for Marked As Pushed when Core Data is persisting Events received from the MessageBus. Core Data may not have finished persisting an Event before the Application Service has processed the Event and requested the Event be Marked As Pushed . It was decided to remove the Mark as Pushed capability and just rely on time-based scrubbing of old Events.","title":"Core Data and Persistence"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#v2-event-dto","text":"As this development will be part of the Ireland release, all Events published to the MessageBus will use the V2 Event DTO. This is already implemented in Core Data for the V2 AddEvent API.","title":"V2 Event DTO"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#validation","text":"Services receiving the Event DTO from the MessageBus will log validation errors and stop processing the Event.","title":"Validation"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#message-envelope","text":"EdgeX Go Services currently use a custom Message Envelope for all data that is published to the MessageBus. This envelope wraps the data with metadata, which is ContentType (JSON or CBOR), Correlation-Id and the obsolete Checksum . The Checksum is used when the data is CBOR encoded to identify the Event in the V1 API to mark it as pushed. 
This checksum is no longer needed as the V2 Event DTO requires the ID be set by the Device Services, which will always be used in the V2 API to mark the Events as pushed. The Message Envelope will be updated to remove this property. The C SDK will recreate this Message Envelope.","title":"Message Envelope"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#application-services","text":"As part of the V2 API consumption work in Ireland the App Services SDK will be changed to expect to receive V2 Event DTOs rather than the V1 Event model. It will also be updated to no longer expect or use the Checksum currently on the Message Envelope. Note these changes must occur for the V2 consumption and are not directly tied to this effort. The App Service SDK will be enhanced for the secure MessageBus connection described below. See Secure Connections for details.","title":"Application Services"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagebus-topics","text":"Note: The change recommended here is not required for this design, but it provides a good opportunity to adopt it. Currently Core Data publishes Events to the simple events topic. All running Application Services receive every Event published, whether they want them or not. The Events can be filtered out using the FilterByDeviceName or FilterByResourceName pipeline functions, but the Application Services still receive every Event and process all the Events to some extent. This could cause load issues in a deployment with many devices and a large volume of Events from various devices, or a very verbose device that the Application Services are not interested in. Note: The current FilterByDeviceName is only good if the device name is known statically and there is only one instance of the device defined by the DeviceProfileName . What we really need is FilterByDeviceProfileName , which allows multiple instances of a device to be filtered for, rather than a single instance as it is now. 
The V2 API will be adding DeviceProfileName to the Events, so in Ireland this filter will be possible. Pub/Sub systems have advanced topic schemas, which we can take advantage of from Application Services to filter for just the Events the Application Service actually wants. Publishers of Events must add the DeviceProfileName , DeviceName and SourceName to the topic in the form edgex/events/<DeviceProfileName>/<DeviceName>/<SourceName> . The SourceName is the Resource or Command name used to create the Event. This allows Application Services to filter for just the Events from the device(s) it wants by only subscribing to those DeviceProfileNames, the specific DeviceNames or just the specific SourceNames. Example subscribe topics if the above schema is used: edgex/events/# All Events. Core Data will subscribe using this topic schema edgex/events/Random-Integer-Device/# Any Events from devices created from the Random-Integer-Device device profile edgex/events/Random-Integer-Device/Random-Integer-Device1 Only Events from the Random-Integer-Device1 Device edgex/events/Random-Integer-Device/#/Int16 Any Events with Readings from the Int16 device resource from devices created from the Random-Integer-Device device profile. edgex/events/Modbus-Device/#/HVACValues Any Events with Readings from the HVACValues device command from devices created from the Modbus-Device device profile. The MessageBus abstraction allows for multiple subscriptions, so an Application Service could specify to receive data from multiple specific device profiles or devices by creating multiple subscriptions, i.e. edgex/events/Random-Integer-Device/# and edgex/events/Random-Boolean-Device/# . Currently the App SDK only allows for a single subscription topic to be configured, but that could easily be expanded to handle a list of subscriptions. See the Configuration section below for details. Core Data's existing publishing of Events would also need to be changed to use this new topic schema. 
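The publish-topic construction and subscription filtering described above can be sketched with a small helper. This is a hypothetical illustration, not the go-mod-messaging implementation; it assumes standard MQTT-style wildcard semantics where `#` matches all remaining topic levels and `+` matches exactly one level:

```go
package main

import (
	"fmt"
	"strings"
)

// publishTopic builds the per-event topic from the configured
// PublishTopicPrefix ("edgex/events") plus profile, device and source.
func publishTopic(prefix, profile, device, source string) string {
	return fmt.Sprintf("%s/%s/%s/%s", prefix, profile, device, source)
}

// topicMatches reports whether an MQTT-style subscription filter
// ("+" = one level, "#" = all remaining levels) matches a concrete topic.
func topicMatches(filter, topic string) bool {
	f := strings.Split(filter, "/")
	t := strings.Split(topic, "/")
	for i, level := range f {
		if level == "#" {
			return true // matches everything from here on
		}
		if i >= len(t) || (level != "+" && level != t[i]) {
			return false
		}
	}
	return len(f) == len(t) // no wildcard left: lengths must agree
}

func main() {
	topic := publishTopic("edgex/events", "Random-Integer-Device", "Random-Integer-Device1", "Int16")
	fmt.Println(topic)
	fmt.Println(topicMatches("edgex/events/#", topic))                       // Core Data's subscription
	fmt.Println(topicMatches("edgex/events/Random-Integer-Device/#", topic)) // per-profile filter
	fmt.Println(topicMatches("edgex/events/Modbus-Device/#", topic))         // different profile
}
```

Under strict MQTT semantics a `#` is only valid as the final level, so filters like `edgex/events/Random-Integer-Device/#/Int16` from the list above would need broker-specific support; the sketch treats `#` as terminal.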
One challenge with this is Core Data doesn't currently know the DeviceProfileName or DeviceName when it receives a CBOR encoded event. This is because it doesn't decode the Event until after it has published it to the MessageBus. Also, Core Data doesn't know of SourceName at all. The V2 API will be enhanced to change the AddEvent endpoint from /event to /event/{profile}/{device}/{source} so that DeviceProfileName , DeviceName , and SourceName are always known no matter how the request is encoded. This new topic approach will be enabled via each publisher's PublishTopic having the DeviceProfileName , DeviceName and SourceName added to the configured PublishTopicPrefix PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix See the Configuration section below for details.","title":"MessageBus Topics"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#configuration","text":"","title":"Configuration"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#device-services","text":"All Device Services will have the following additional configuration to allow connecting and publishing to the MessageBus. As described above in the MessageBus Topics section, the PublishTopic will include the DeviceProfileName and DeviceName .","title":"Device Services"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagequeue","text":"A MessageQueue section will be added, which is similar to that used in Core Data today, but with PublishTopicPrefix instead of Topic . To enable secure connections, the Username & Password have been replaced with ClientAuth & SecretPath . See the Secure Connections section below for details. The added Enabled property controls whether the Device Service publishes to the MessageBus or POSTs to Core Data. 
[MessageQueue] Enabled = true Protocol = \"tcp\" Host = \"localhost\" Port = 1883 Type = \"mqtt\" PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`","title":"[MessageQueue]"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#core-data","text":"Core Data will also require additional configuration to be able to subscribe to receive Events from the MessageBus. As described above in the MessageBus Topics section, the PublishTopicPrefix will have DeviceProfileName and DeviceName added to create the actual Publish Topic.","title":"Core Data"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagequeue_1","text":"The MessageQueue section will be changed so that the Topic property changes to PublishTopicPrefix and SubscribeEnabled and SubscribeTopic will be added. As with the device services' configuration, the Username & Password have been replaced with ClientAuth & SecretPath for secure connections. See the Secure Connections section below for details. In addition, the Boolean SubscribeEnabled property will be used to control if the service subscribes to Events from the MessageBus or not. 
[MessageQueue] Protocol = \"tcp\" Host = \"localhost\" Port = 1883 Type = \"mqtt\" PublishTopicPrefix = \"edgex/events\" # /<DeviceProfileName>/<DeviceName>/<SourceName> will be added to this Publish Topic prefix SubscribeEnabled = true SubscribeTopic = \"edgex/events/#\" [MessageQueue.Optional] # Default MQTT Specific options that need to be here to enable environment variable overrides of them # Client Identifiers ClientId = \"edgex-core-data\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`","title":"[MessageQueue]"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#application-services_1","text":"","title":"Application Services"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagebus","text":"Similar to above, the Application Services MessageBus configuration will change to allow for secure connection to the MessageBus. The Username & Password have been replaced with ClientAuth & SecretPath for secure connections. See the Secure Connections section below for details. 
[MessageBus.Optional] # MQTT Specific options # Client Identifiers ClientId = \"\" # Connection information Qos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive = \"10\" # Seconds (must be 2 or greater) Retained = \"false\" AutoReconnect = \"true\" ConnectTimeout = \"5\" # Seconds SkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified ClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert` Secretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`","title":"[MessageBus]"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#binding","text":"The Binding configuration section will require changes for the subscribe topics scheme described in the MessageBus Topics section above to filter for Events from specific device profiles or devices. SubscribeTopic will change from a string property containing a single topic to the SubscribeTopics string property containing a comma-separated list of topics. This allows for the flexibility for the property to be a single topic with the # wildcard so the Application Service receives all Events as it does today. Receive only Events from the Random-Integer-Device and Random-Boolean-Device profiles [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device\" Receive only Events from the Random-Integer-Device1 device from the Random-Integer-Device profile [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/Random-Integer-Device/Random-Integer-Device1\" or receive all Events: [Binding] Type = \"messagebus\" SubscribeTopics = \"edgex/events/#\"","title":"[Binding]"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#secure-connections","text":"As stated earlier, this ADR is dependent on the Secret Provider for All ADR to provide a common Secret Provider for all EdgeX Services to access their secrets. 
Once this is available, the MessageBus connection can be secured via the following configurable client authentication modes, which follow a similar implementation to the secure MQTT Export and secure MQTT Trigger used in Application Services. none - No authentication usernamepassword - Username & password authentication. clientcert - Client certificate and key for authentication. The secrets specified for the above options are pulled from the Secret Provider using the configured SecretPath . How the secrets are injected into the Secret Provider is out of scope for this ADR and covered in the Secret Provider for All ADR.","title":"Secure Connections"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#consequences","text":"If the C SDK doesn't support ZMQ or Redis Streams then there must be an MQTT Broker running when a C Device service is in use and configured to publish to the MessageBus. Since we've adopted the publish topic scheme with DeviceProfileName and DeviceName the V2 API must restrict the characters used in device names to those allowed in a topic. An issue for the V2 API already exists for restricting the allowable characters to RFC 3986 , which will suffice. Newer ZMQ may allow for multiple publishers. This requires investigation and very likely rework of the ZMQ implementation in go-mod-messaging. No alternative has been found. The Mark as Pushed V2 API will be removed from Core Data, the Core Data Client and the App SDK. Consider moving the App Service Binding to Writable. (out of scope for this ADR)","title":"Consequences"},{"location":"design/adr/014-Secret-Provider-For-All/","text":"Secret Provider for All Status Context Existing Implementations What is a Secret? 
Service Exclusive vs Service Shared Secrets Known and Unknown Services Static Secrets and Runtime Secrets Interfaces and factory methods Bootstrap's current implementation Interfaces Factory and bootstrap handler methods App SDK's current implementation Interface Factory and bootstrap handler methods Secret Store for non-secure mode InsecureSecrets Configuration Decision Only Exclusive Secret Stores Abstraction Interface Implementation Factory Method and Bootstrap Handler Caching of Secrets Insecure Secrets Handling on-the-fly changes to InsecureSecrets Mocks Where will SecretProvider reside? Go Services C Device Service Consequences Status Approved Context This ADR defines the new SecretProvider abstraction that will be used by all EdgeX services, including Device Services. The Secret Provider is used by services to retrieve secrets from the Secret Store. The Secret Store, in secure mode, is currently Vault. In non-secure mode it is configuration in some form, i.e. DatabaseInfo configuration or InsecureSecrets configuration for Application Services. Existing Implementations The Secret Provider abstraction defined in this ADR is based on the Secret Provider abstraction implementations in the Application Functions SDK (App SDK) for Application Services and the one in go-mod-bootstrap (Bootstrap) used by the Core, Support & Security services in edgex-go. Device Services do not currently use secure secrets. The App SDK implementation was initially based on the Bootstrap implementation. 
The similarities and differences between these implementations are: Both wrap the SecretClient from go-mod-secrets Both initialize the SecretClient based on the SecretStore configuration(s) Both have factory methods, but they differ greatly Both implement the GetDatabaseCredentials API Bootstrap's uses split interfaces definitions ( CredentialsProvider & CertificateProvider ) while the App SDK's use a single interface ( SecretProvider ) for the abstraction Bootstrap's includes the bootstrap handler while the App SDK's has the bootstrap handler separated out Bootstrap's implements the GetCertificateKeyPair API, which the App SDK's does not App SDK's implements the following, which the Bootstrap's does not Initialize API (Bootstrap's initialization is done by the bootstrap handler) StoreSecrets API GetSecrets API InsecureSecretsUpdated API SecretsLastUpdated API Wraps a second SecretClient for the Application Service instance's exclusive secrets. Used by the StoreSecrets & GetSecrets APIs The standard SecretClient is considered the shared client for secrets that all Application Service instances share. It is only used by the GetDatabaseCredentials API Configuration based secret store for non-secure mode called InsecureSecrets Caching of secrets Needed so that secrets used by pipeline functions do not cause call out to Vault for every Event processed What is a Secret? A secret is a collection of key/value pairs stored in a SecretStore at specified path whose values are sensitive in nature. Redis database credentials are an example of a Secret which contains the username and password key/values stored at the redisdb path. Service Exclusive vs Service Shared Secrets Service Exclusive secrets are those that are exclusive to the instance of the running service. 
An example of exclusive secrets are the HTTP Auth tokens used by two running instances of app-service-configurable (http-export) which export different device Events to different endpoints with different Auth tokens in the HTTP headers. Service Exclusive secrets are seeded by POSTing the secrets to the /api/vX/secrets endpoint on the running instance of each Application Service. Service Shared secrets are those that all instances of a class of service, such as Application Services, share. Think of Core Data as its own class of service. An example of shared secrets are the database credentials for the single database instance for Store and Forward data that all Application Services may need to access. Another example is the database credentials for each instance of Core Data. It is shared, but only one instance of Core Data is currently ever run. Service Shared secrets are seeded by security-secretstore-setup using static configuration for static secrets for known services. Currently database credentials are the only shared secrets. In the future we may have Message Bus credentials as shared secrets, but these will be truly shared secrets for all services to securely connect to the Message Bus, not just shared between instances of a service. Application Services currently have the ability to configure SecretStores for Service Exclusive and/or Service Shared secrets depending on their needs. Known and Unknown Services Known Services are those identified in the static configuration by security-secretstore-setup . These currently are Core Data, Core Metadata, Support Notifications, Support Scheduler and Application Service (class). Unknown Services are those not known in the static configuration that become known when added to the Docker compose file or Snap. Application Service (instance) is an example of these services. A service exclusive SecretStore can be created for these services by adding the service's unique name , i.e. 
appservice-http-export, to the ADD_SECRETSTORE_TOKENS environment variable for security-secretstore-setup ADD_SECRETSTORE_TOKENS: \"appservice-http-export, appservice-mqtt-export\" This creates an exclusive secret store token for each service listed. The name provided for each service must be used in the service's SecretStore configuration and Docker volume mount (if applicable). Typically the configuration is set via environment overrides or is already in an existing configuration profile ( http-export profile for app-service-configurable). Example docker-compose file entries: environment : ... SecretStoreExclusive_Path : \"/v1/secret/edgex/appservice-http-export/\" TokenFile : \"/tmp/edgex/secrets/appservice-http-export/secrets-token.json\" volumes : ... - /tmp/edgex/secrets/appservice-http-export:/tmp/edgex/secrets/appservice-http-export:ro,z Static Secrets and Runtime Secrets Static Secrets are those identified by name in the static configuration whose values are randomly generated at seed time. These secrets are seeded on start-up of EdgeX. Database credentials are currently the only secrets of this type Runtime Secrets are those not known in the static configuration and that become known during run time. These secrets are seeded at run time via the Application Services /api/vX/secrets endpoint HTTP header authorization credentials for HTTP Export are types of these secrets Interfaces and factory methods Bootstrap's current implementation Interfaces type CredentialsProvider interface { GetDatabaseCredentials ( database config . Database ) ( config . Credentials , error ) } and type CertificateProvider interface { GetCertificateKeyPair ( path string ) ( config . CertKeyPair , error ) } Factory and bootstrap handler methods type SecretProvider struct { secretClient pkg . SecretClient } func NewSecret () * SecretProvider { return & SecretProvider {} } func ( s * SecretProvider ) BootstrapHandler ( ctx context . Context , _ * sync . 
WaitGroup , startupTimer startup . Timer , dic * di . Container ) bool { ... Initializes the SecretClient and adds it to the DIC for both interfaces . ... } App SDK's current implementation Interface type SecretProvider interface { Initialize ( _ context . Context ) bool StoreSecrets ( path string , secrets map [ string ] string ) error GetSecrets ( path string , _ ... string ) ( map [ string ] string , error ) GetDatabaseCredentials ( database db . DatabaseInfo ) ( common . Credentials , error ) InsecureSecretsUpdated () SecretsLastUpdated () time . Time } Factory and bootstrap handler methods type SecretProviderImpl struct { SharedSecretClient pkg . SecretClient ExclusiveSecretClient pkg . SecretClient secretsCache map [ string ] map [ string ] string // secret's path, key, value configuration * common . ConfigurationStruct cacheMuxtex * sync . Mutex loggingClient logger . LoggingClient //used to track when secrets have last been retrieved LastUpdated time . Time } func NewSecretProvider ( loggingClient logger . LoggingClient , configuration * common . ConfigurationStruct ) * SecretProviderImpl { sp := & SecretProviderImpl { secretsCache : make ( map [ string ] map [ string ] string ), cacheMuxtex : & sync . Mutex {}, configuration : configuration , loggingClient : loggingClient , LastUpdated : time . Now (), } return sp } type Secrets struct { } func NewSecrets () * Secrets { return & Secrets {} } func ( _ * Secrets ) BootstrapHandler ( ctx context . Context , _ * sync . WaitGroup , startupTimer startup . Timer , dic * di . Container ) bool { ... Creates NewSecretProvider , calls Initialize () and adds it to the DIC ... } Secret Store for non-secure mode Both Bootstrap's and the App SDK's implementations use the DatabaseInfo configuration for the GetDatabaseCredentials API in non-secure mode. The App SDK only uses it, for backward compatibility, if the database credentials are not found in the new InsecureSecrets configuration section. 
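The non-secure lookup order just described (check InsecureSecrets first, fall back to the legacy DatabaseInfo configuration) can be sketched as follows. This is a hypothetical illustration with simplified stand-in types, not the App SDK's actual code:

```go
package main

import "fmt"

// Simplified stand-ins for the configuration types discussed above.
type Credentials struct{ Username, Password string }

type DatabaseInfo struct{ Username, Password string }

type InsecureSecretsInfo struct {
	Path    string
	Secrets map[string]string
}

// getDatabaseCredentials looks in Writable.InsecureSecrets first and only
// falls back to the legacy DatabaseInfo settings when no entry is found.
func getDatabaseCredentials(insecure map[string]InsecureSecretsInfo, db DatabaseInfo) Credentials {
	for _, info := range insecure {
		if info.Path == "redisdb" {
			return Credentials{
				Username: info.Secrets["username"],
				Password: info.Secrets["password"],
			}
		}
	}
	// Backward-compatible fallback to the DatabaseInfo configuration.
	return Credentials{Username: db.Username, Password: db.Password}
}

func main() {
	insecure := map[string]InsecureSecretsInfo{
		"DB": {Path: "redisdb", Secrets: map[string]string{"username": "", "password": ""}},
	}
	creds := getDatabaseCredentials(insecure, DatabaseInfo{Username: "legacy", Password: "legacy"})
	fmt.Printf("%q\n", creds.Username) // blank in non-secure mode, per the note below
}
```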
For Ireland it was planned to only use the new InsecureSecrets configuration section in non-secure mode. Note: Redis credentials are blank in non-secure mode Core Data [Databases] [Databases.Primary] Host = \"localhost\" Name = \"coredata\" Username = \"\" Password = \"\" Port = 6379 Timeout = 5000 Type = \"redisdb\" Application Services [Database] Type = \"redisdb\" Host = \"localhost\" Port = 6379 Username = \"\" Password = \"\" Timeout = \"30s\" InsecureSecrets Configuration The App SDK defines a new Writable configuration section called InsecureSecrets . This structure mimics that of the secure SecretStore when the EDGEX_SECURITY_SECRET_STORE environment variable is set to false . Having the InsecureSecrets in the Writable section allows for the secrets to be updated without restarting the service. Some minor processing must occur when the InsecureSecrets section is updated. This is to call the InsecureSecretsUpdated API. This API simply sets the time the secrets were last updated. The SecretsLastUpdated API returns this timestamp so pipeline functions that use credentials for exporting know if their client needs to be recreated with new credentials, i.e. MQTT export. type WritableInfo struct { LogLevel string ... InsecureSecrets InsecureSecrets } type InsecureSecrets map [ string ] InsecureSecretsInfo type InsecureSecretsInfo struct { Path string Secrets map [ string ] string } [Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\" [Writable.InsecureSecrets.mqtt] path = \"mqtt\" [Writable.InsecureSecrets.mqtt.Secrets] username = \"\" password = \"\" cacert = \"\" clientcert = \"\" clientkey = \"\" Decision The new SecretProvider abstraction defined by this ADR is a combination of the two implementations described above in the Existing Implementations section. 
Only Exclusive Secret Stores To simplify the SecretProvider abstraction, we need to reduce to using only exclusive SecretStores . This allows all the APIs to deal with a single SecretClient , rather than the split-up way we currently have in Application Services. This requires that the current Application Service shared secrets (database credentials) must be copied into each Application Service's exclusive SecretStore when it is created. The challenge is how to seed static secrets for unknown services when they become known. As described in the Known and Unknown Services section above, services currently identify themselves for exclusive SecretStore creation via the ADD_SECRETSTORE_TOKENS environment variable on security-secretstore-setup. This environment variable simply takes a comma-separated list of service names. ADD_SECRETSTORE_TOKENS : \"<service-name>,<service-name>\" If we expanded this to add an optional list of static secret identifiers for each service, i.e. appservice/redisdb , the exclusive store could also be seeded with a copy of static shared secrets. In this case the Redis database credentials for the Application Services' shared database. The environment variable name will change to ADD_SECRETSTORE now that it is more than just tokens. ADD_SECRETSTORE : \"app-service-xyz[appservice/redisdb]\" Note: The secret identifier here is the short path to the secret in the existing appservice SecretStore . In the above example this expands to the full path of /secret/edgex/appservice/redisdb . The above example results in the Redis credentials being copied into app-service-xyz's SecretStore at /secret/edgex/app-service-xyz/redis . A similar approach could be taken for Message Bus credentials where a common SecretStore is created with the Message Bus credentials saved. The services request that the credentials be copied into their exclusive SecretStore using common/messagebus as the secret identifier. 
Full specification for the environment variable's value is a comma-separated list of service entries defined as: <service-name>[optional list of static secret IDs separated by ;],<service-name>[optional list of static secret IDs separated by ;],... Example with one service specifying IDs for static secrets and one without static secrets: ADD_SECRETSTORE : \"appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export\" When the ADD_SECRETSTORE environment variable is processed to create these SecretStores , it will copy the specified saved secrets from the initial SecretStore into the service's SecretStore . This all depends on the completion of database or other credential bootstrapping and the secrets having been stored prior to the environment variable being processed. security-secretstore-setup will need to be refactored to ensure this sequencing. Abstraction Interface The following will be the new SecretProvider abstraction interface used by all EdgeX services type SecretProvider interface { // Stores new secrets into the service's exclusive SecretStore at the specified path. StoreSecrets ( path string , secrets map [ string ] string ) error // Retrieves secrets from the service's exclusive SecretStore at the specified path. GetSecrets ( path string , _ ... string ) ( map [ string ] string , error ) // Sets the secrets lastupdated time to current time. SecretsUpdated () // Returns the secrets last updated time SecretsLastUpdated () time . Time } Note: The GetDatabaseCredentials and GetCertificateKeyPair APIs have been removed. These are no longer needed since insecure database credentials will no longer be stored in the DatabaseInfo configuration and certificate key pairs are secrets like any others. This allows these secrets to be retrieved via the GetSecrets API. Implementation Factory Method and Bootstrap Handler The factory method and bootstrap handler will follow that currently in the Bootstrap implementation with some tweaks. 
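The ADD_SECRETSTORE value format described above can be parsed as sketched below. This is a hypothetical helper for illustration only, not the actual security-secretstore-setup code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseAddSecretStore parses entries of the form
// "service[id1; id2], service2" into a service-name -> secret-ID map.
// A service without brackets gets an empty ID list (token only).
func parseAddSecretStore(value string) map[string][]string {
	result := make(map[string][]string)
	for _, entry := range strings.Split(value, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		name := entry
		var ids []string
		if open := strings.Index(entry, "["); open != -1 && strings.HasSuffix(entry, "]") {
			name = strings.TrimSpace(entry[:open])
			// Inner list uses ";" as the separator, per the spec above.
			for _, id := range strings.Split(entry[open+1:len(entry)-1], ";") {
				if id = strings.TrimSpace(id); id != "" {
					ids = append(ids, id)
				}
			}
		}
		result[name] = ids
	}
	return result
}

func main() {
	stores := parseAddSecretStore("appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export")
	fmt.Println(stores["appservice-xyz"])         // [appservice/redisdb common/messagebus]
	fmt.Println(stores["appservice-http-export"]) // []
}
```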
Rather than putting the two split interfaces into the DIC, it will put just the single interface instance into the DIC. See details in the Interfaces and factory methods section above under Existing Implementations . Caching of Secrets Secrets will be cached as they are currently in the Application Service implementation. Insecure Secrets Insecure Secrets will be handled as they are currently in the Application Service implementation. DatabaseInfo configuration will no longer be an option for storing the insecure database credentials. They will be stored in the InsecureSecrets configuration only. [Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\" Handling on-the-fly changes to InsecureSecrets All services will need to handle the special processing when InsecureSecrets are changed on-the-fly via Consul. Since this will now be a common configuration item in Writable it can be handled in go-mod-bootstrap along with existing log level processing. This special processing will be taken from the App SDK. Mocks A proper mock of the SecretProvider interface will be created with Mockery to be used in unit tests. The current mock in the App SDK is hand-written rather than generated with Mockery . Where will SecretProvider reside? Go Services The final decision to make is where will this new SecretProvider abstraction reside? Originally it was assumed that it would reside in go-mod-secrets , which seems logical. If we were to attempt this with the implementation including the bootstrap handler, go-mod-secrets would have a dependency on go-mod-bootstrap which will likely create a circular dependency. Refactoring the existing implementation in go-mod-bootstrap and having it reside there now seems to be the best choice. C Device Service The C Device SDK will implement the same SecretProvider abstraction, InsecureSecrets configuration and the underlying SecretStore client. 
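The proposed SecretProvider interface can be exercised with a minimal in-memory implementation. This is a hypothetical sketch useful for the kind of unit testing the Mocks section describes, not the planned go-mod-bootstrap code:

```go
package main

import (
	"fmt"
	"time"
)

// memSecretProvider is an in-memory sketch of the proposed abstraction.
type memSecretProvider struct {
	secrets     map[string]map[string]string // path -> key -> value
	lastUpdated time.Time
}

func newMemSecretProvider() *memSecretProvider {
	return &memSecretProvider{secrets: make(map[string]map[string]string)}
}

// StoreSecrets stores new secrets at the specified path.
func (p *memSecretProvider) StoreSecrets(path string, secrets map[string]string) error {
	if p.secrets[path] == nil {
		p.secrets[path] = make(map[string]string)
	}
	for k, v := range secrets {
		p.secrets[path][k] = v
	}
	p.SecretsUpdated()
	return nil
}

// GetSecrets retrieves secrets from the specified path; with no keys
// given, it returns everything stored at that path.
func (p *memSecretProvider) GetSecrets(path string, keys ...string) (map[string]string, error) {
	stored, ok := p.secrets[path]
	if !ok {
		return nil, fmt.Errorf("no secrets at path %s", path)
	}
	if len(keys) == 0 {
		return stored, nil
	}
	result := make(map[string]string, len(keys))
	for _, k := range keys {
		v, found := stored[k]
		if !found {
			return nil, fmt.Errorf("secret %s not found at path %s", k, path)
		}
		result[k] = v
	}
	return result, nil
}

// SecretsUpdated sets the secrets last-updated time to the current time.
func (p *memSecretProvider) SecretsUpdated() { p.lastUpdated = time.Now() }

// SecretsLastUpdated returns the secrets last-updated time.
func (p *memSecretProvider) SecretsLastUpdated() time.Time { return p.lastUpdated }

func main() {
	p := newMemSecretProvider()
	_ = p.StoreSecrets("redisdb", map[string]string{"username": "user", "password": "pass"})
	creds, _ := p.GetSecrets("redisdb", "username")
	fmt.Println(creds["username"])
}
```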
Consequences All services will have a Writable.InsecureSecrets section added to their configuration The InsecureSecrets definition will be moved from the App SDK to go-mod-bootstrap The Go Device SDK will add the SecretProvider to its bootstrapping The C Device SDK implementation could be a big lift A SecretStore configuration section will be added to all Device Services edgex-go services will be modified to use the single SecretProvider interface from the DIC in place of current usage of the GetDatabaseCredentials and GetCertificateKeyPair interfaces. Calls to GetDatabaseCredentials and GetCertificateKeyPair will be replaced with calls to the GetSecrets API and appropriate processing of the returned secrets will be added. The App SDK will be modified to use the GetSecrets API in place of the GetDatabaseCredentials API The App SDK will be modified to use the new SecretProvider bootstrap handler app-service-configurable's configuration profiles as well as all the Application Service examples' configurations will be updated to remove the SecretStoreExclusive configuration and just use the existing SecretStore configuration security-secretstore-setup will be enhanced as described in the Only Exclusive Secret Stores section above Adding new services that need static secrets added to their SecretStore requires stopping and restarting all the services. This is because security-secretstore-setup has completed but not stopped. If it is rerun without stopping the other services, their tokens and static secrets will have changed. The planned refactor of security-secretstore-setup will attempt to resolve this. Snaps do not yet support setting the environment variable for adding a SecretStore. It is planned for the Ireland release.","title":"Secret Provider for All"},{"location":"design/adr/014-Secret-Provider-For-All/#secret-provider-for-all","text":"Status Context Existing Implementations What is a Secret? 
Service Exclusive vs Service Shared Secrets Known and Unknown Services Static Secrets and Runtime Secrets Interfaces and factory methods Bootstrap's current implementation Interfaces Factory and bootstrap handler methods App SDK's current implementation Interface Factory and bootstrap handler methods Secret Store for non-secure mode InsecureSecrets Configuration Decision Only Exclusive Secret Stores Abstraction Interface Implementation Factory Method and Bootstrap Handler Caching of Secrets Insecure Secrets Handling on-the-fly changes to InsecureSecrets Mocks Where will SecretProvider reside? Go Services C Device Service Consequences","title":"Secret Provider for All"},{"location":"design/adr/014-Secret-Provider-For-All/#status","text":"Approved","title":"Status"},{"location":"design/adr/014-Secret-Provider-For-All/#context","text":"This ADR defines the new SecretProvider abstraction that will be used by all EdgeX services, including Device Services. The Secret Provider is used by services to retrieve secrets from the Secret Store. The Secret Store, in secure mode, is currently Vault. In non-secure mode it is configuration in some form, i.e. DatabaseInfo configuration or InsecureSecrets configuration for Application Services.","title":"Context"},{"location":"design/adr/014-Secret-Provider-For-All/#existing-implementations","text":"The Secret Provider abstraction defined in this ADR is based on the Secret Provider abstraction implementations in the Application Functions SDK (App SDK) for Application Services and the one in go-mod-bootstrap (Bootstrap) used by the Core, Support & Security services in edgex-go. Device Services do not currently use secure secrets. The App SDK implementation was initially based on the Bootstrap implementation. 
The similarities and differences between these implementations are: Both wrap the SecretClient from go-mod-secrets Both initialize the SecretClient based on the SecretStore configuration(s) Both have factory methods, but they differ greatly Both implement the GetDatabaseCredentials API Bootstrap's uses split interface definitions ( CredentialsProvider & CertificateProvider ) while the App SDK's uses a single interface ( SecretProvider ) for the abstraction Bootstrap's includes the bootstrap handler while the App SDK's has the bootstrap handler separated out Bootstrap's implements the GetCertificateKeyPair API, which the App SDK's does not App SDK's implements the following, which the Bootstrap's does not Initialize API (Bootstrap's initialization is done by the bootstrap handler) StoreSecrets API GetSecrets API InsecureSecretsUpdated API SecretsLastUpdated API Wraps a second SecretClient for the Application Service instance's exclusive secrets. Used by the StoreSecrets & GetSecrets APIs The standard SecretClient is considered the shared client for secrets that all Application Service instances share. It is only used by the GetDatabaseCredentials API Configuration based secret store for non-secure mode called InsecureSecrets Caching of secrets Needed so that secrets used by pipeline functions do not cause a call out to Vault for every Event processed","title":"Existing Implementations"},{"location":"design/adr/014-Secret-Provider-For-All/#what-is-a-secret","text":"A secret is a collection of key/value pairs stored in a SecretStore at a specified path whose values are sensitive in nature. Redis database credentials are an example of a Secret which contains the username and password key/values stored at the redisdb path.","title":"What is a Secret?"},{"location":"design/adr/014-Secret-Provider-For-All/#service-exclusive-vs-service-shared-secrets","text":"Service Exclusive secrets are those that are exclusive to the instance of the running service. 
An example of exclusive secrets are the HTTP Auth tokens used by two running instances of app-service-configurable (http-export) which export different device Events to different endpoints with different Auth tokens in the HTTP headers. Service Exclusive secrets are seeded by POSTing the secrets to the /api/vX/secrets endpoint on the running instance of each Application Service. Service Shared secrets are those that all instances of a class of service, such as Application Services, share. Think of Core Data as its own class of service. An example of shared secrets are the database credentials for the single database instance for Store and Forward data that all Application Services may need to access. Another example is the database credentials for each instance of Core Data. It is shared, but only one instance of Core Data is currently ever run. Service Shared secrets are seeded by security-secretstore-setup using static configuration for static secrets for known services. Currently database credentials are the only shared secrets. In the future we may have Message Bus credentials as shared secrets, but these will be truly shared secrets for all services to securely connect to the Message Bus, not just shared between instances of a service. Application Services currently have the ability to configure SecretStores for Service Exclusive and/or Service Shared secrets depending on their needs.","title":"Service Exclusive vs Service Shared Secrets"},{"location":"design/adr/014-Secret-Provider-For-All/#known-and-unknown-services","text":"Known Services are those identified in the static configuration by security-secretstore-setup. These currently are Core Data, Core Metadata, Support Notifications, Support Scheduler and Application Service (class) Unknown Services are those not known in the static configuration that become known when added to the Docker compose file or Snap. Application Service instances are examples of these services. 
Service exclusive SecretStore can be created for these services by adding the services' unique name , i.e. appservice-http-export, to the ADD_SECRETSTORE_TOKENS environment variable for security-secretstore-setup ADD_SECRETSTORE_TOKENS: \"appservice-http-export, appservice-mqtt-export\" This creates an exclusive secret store token for each service listed. The name provided for each service must be used in the service's SecretStore configuration and Docker volume mount (if applicable). Typically the configuration is set via environment overrides or is already in an existing configuration profile ( http-export profile for app-service-configurable). Example docker-compose file entries: environment : ... SecretStoreExclusive_Path : \"/v1/secret/edgex/appservice-http-export/\" TokenFile : \"/tmp/edgex/secrets/appservice-http-export/secrets-token.json\" volumes : ... - /tmp/edgex/secrets/appservice-http-export:/tmp/edgex/secrets/appservice-http-export:ro,z","title":"Known and Unknown Services"},{"location":"design/adr/014-Secret-Provider-For-All/#static-secrets-and-runtime-secrets","text":"Static Secrets are those identified by name in the static configuration whose values are randomly generated at seed time. These secrets are seeded on start-up of EdgeX. Database credentials are currently the only secrets of this type Runtime Secrets are those not known in the static configuration and that become known during run time. 
These secrets are seeded at run time via the Application Services /api/vX/secrets endpoint HTTP header authorization credentials for HTTP Export are types of these secrets","title":"Static Secrets and Runtime Secrets"},{"location":"design/adr/014-Secret-Provider-For-All/#interfaces-and-factory-methods","text":"","title":"Interfaces and factory methods"},{"location":"design/adr/014-Secret-Provider-For-All/#bootstraps-current-implementation","text":"","title":"Bootstrap's current implementation"},{"location":"design/adr/014-Secret-Provider-For-All/#interfaces","text":"type CredentialsProvider interface { GetDatabaseCredentials ( database config . Database ) ( config . Credentials , error ) } and type CertificateProvider interface { GetCertificateKeyPair ( path string ) ( config . CertKeyPair , error ) }","title":"Interfaces"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-and-bootstrap-handler-methods","text":"type SecretProvider struct { secretClient pkg . SecretClient } func NewSecret () * SecretProvider { return & SecretProvider {} } func ( s * SecretProvider ) BootstrapHandler ( ctx context . Context , _ * sync . WaitGroup , startupTimer startup . Timer , dic * di . Container ) bool { ... Initializes the SecretClient and adds it to the DIC for both interfaces . ... }","title":"Factory and bootstrap handler methods"},{"location":"design/adr/014-Secret-Provider-For-All/#app-sdks-current-implementation","text":"","title":"App SDK's current implementation"},{"location":"design/adr/014-Secret-Provider-For-All/#interface","text":"type SecretProvider interface { Initialize ( _ context . Context ) bool StoreSecrets ( path string , secrets map [ string ] string ) error GetSecrets ( path string , _ ... string ) ( map [ string ] string , error ) GetDatabaseCredentials ( database db . DatabaseInfo ) ( common . Credentials , error ) InsecureSecretsUpdated () SecretsLastUpdated () time . 
Time }","title":"Interface"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-and-bootstrap-handler-methods_1","text":"type SecretProviderImpl struct { SharedSecretClient pkg . SecretClient ExclusiveSecretClient pkg . SecretClient secretsCache map [ string ] map [ string ] string // secret's path, key, value configuration * common . ConfigurationStruct cacheMuxtex * sync . Mutex loggingClient logger . LoggingClient //used to track when secrets have last been retrieved LastUpdated time . Time } func NewSecretProvider ( loggingClient logger . LoggingClient , configuration * common . ConfigurationStruct ) * SecretProviderImpl { sp := & SecretProviderImpl { secretsCache : make ( map [ string ] map [ string ] string ), cacheMuxtex : & sync . Mutex {}, configuration : configuration , loggingClient : loggingClient , LastUpdated : time . Now (), } return sp } type Secrets struct { } func NewSecrets () * Secrets { return & Secrets {} } func ( _ * Secrets ) BootstrapHandler ( ctx context . Context , _ * sync . WaitGroup , startupTimer startup . Timer , dic * di . Container ) bool { ... Creates NewSecretProvider , calls Initialize () and adds it to the DIC ... }","title":"Factory and bootstrap handler methods"},{"location":"design/adr/014-Secret-Provider-For-All/#secret-store-for-non-secure-mode","text":"Both Bootstrap's and App SDK's implementations use the DatabaseInfo configuration for the GetDatabaseCredentials API in non-secure mode. The App SDK only uses it, for backward compatibility, if the database credentials are not found in the new InsecureSecrets configuration section. For Ireland it was planned to only use the new InsecureSecrets configuration section in non-secure mode. 
Note: Redis credentials are blank in non-secure mode Core Data [Databases] [Databases.Primary] Host = \"localhost\" Name = \"coredata\" Username = \"\" Password = \"\" Port = 6379 Timeout = 5000 Type = \"redisdb\" Application Services [Database] Type = \"redisdb\" Host = \"localhost\" Port = 6379 Username = \"\" Password = \"\" Timeout = \"30s\"","title":"Secret Store for non-secure mode"},{"location":"design/adr/014-Secret-Provider-For-All/#insecuresecrets-configuration","text":"The App SDK defines a new Writable configuration section called InsecureSecrets . This structure mimics that of the secure SecretStore when the EDGEX_SECURITY_SECRET_STORE environment variable is set to false . Having the InsecureSecrets in the Writable section allows for the secrets to be updated without restarting the service. Some minor processing must occur when the InsecureSecrets section is updated. This is to call the InsecureSecretsUpdated API. This API simply sets the time the secrets were last updated. The SecretsLastUpdated API returns this timestamp so pipeline functions that use credentials for exporting know if their client needs to be recreated with new credentials, i.e. MQTT export. type WritableInfo struct { LogLevel string ... 
InsecureSecrets InsecureSecrets } type InsecureSecrets map [ string ] InsecureSecretsInfo type InsecureSecretsInfo struct { Path string Secrets map [ string ] string } [Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\" [Writable.InsecureSecrets.mqtt] path = \"mqtt\" [Writable.InsecureSecrets.mqtt.Secrets] username = \"\" password = \"\" cacert = \"\" clientcert = \"\" clientkey = \"\"","title":"InsecureSecrets Configuration"},{"location":"design/adr/014-Secret-Provider-For-All/#decision","text":"The new SecretProvider abstraction defined by this ADR is a combination of the two implementations described above in the Existing Implementations section.","title":"Decision"},{"location":"design/adr/014-Secret-Provider-For-All/#only-exclusive-secret-stores","text":"To simplify the SecretProvider abstraction, we need to reduce to using only exclusive SecretStores . This allows all the APIs to deal with a single SecretClient , rather than the split up way we currently have in Application Services. This requires that the current Application Service shared secrets (database credentials) must be copied into each Application Service's exclusive SecretStore when it is created. The challenge is how to seed static secrets for unknown services once they become known. As described in the Known and Unknown Services section above, services currently identify themselves for exclusive SecretStore creation via the ADD_SECRETSTORE_TOKENS environment variable on security-secretstore-setup. This environment variable simply takes a comma separated list of service names. ADD_SECRETSTORE_TOKENS : \"<service-name1>,<service-name2>\" If we expanded this to add an optional list of static secret identifiers for each service, i.e. appservice/redisdb , the exclusive store could also be seeded with a copy of static shared secrets. In this case the Redis database credentials for the Application Services' shared database. 
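The proposed entry syntax (a service name followed by an optional bracketed, semicolon-separated list of static secret IDs) could be parsed along these lines. This is an illustrative sketch of the proposed format only, not the actual security-secretstore-setup implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// storeEntry is one service entry from the proposed ADD_SECRETSTORE value.
type storeEntry struct {
	Service   string
	SecretIDs []string
}

// parseAddSecretStore splits a value such as
//   "appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export"
// into per-service entries.
func parseAddSecretStore(value string) []storeEntry {
	var entries []storeEntry
	flush := func(raw string) {
		raw = strings.TrimSpace(raw)
		if raw == "" {
			return
		}
		e := storeEntry{Service: raw}
		// An optional [...] suffix lists static secret IDs to copy in.
		if open := strings.Index(raw, "["); open >= 0 && strings.HasSuffix(raw, "]") {
			e.Service = strings.TrimSpace(raw[:open])
			for _, id := range strings.Split(raw[open+1:len(raw)-1], ";") {
				if id = strings.TrimSpace(id); id != "" {
					e.SecretIDs = append(e.SecretIDs, id)
				}
			}
		}
		entries = append(entries, e)
	}
	// Split on commas that sit outside a bracketed ID list.
	depth, start := 0, 0
	for i, r := range value {
		switch r {
		case '[':
			depth++
		case ']':
			depth--
		case ',':
			if depth == 0 {
				flush(value[start:i])
				start = i + 1
			}
		}
	}
	flush(value[start:])
	return entries
}

func main() {
	v := "appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export"
	for _, e := range parseAddSecretStore(v) {
		fmt.Println(e.Service, e.SecretIDs)
	}
}
```

Each parsed entry would drive both the exclusive SecretStore/token creation and the copying of the listed shared secrets into that store.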
The environment variable name will change to ADD_SECRETSTORE now that it is more than just tokens. ADD_SECRETSTORE : \"app-service-xyz[appservice/redisdb]\" Note: The secret identifier here is the short path to the secret in the existing appservice SecretStore . In the above example this expands to the full path of /secret/edgex/appservice/redisdb The above example results in the Redis credentials being copied into app-service-xyz's SecretStore at /secret/edgex/app-service-xyz/redis . A similar approach could be taken for Message Bus credentials where a common SecretStore is created with the Message Bus credentials saved. The services request that the credentials be copied into their exclusive SecretStore using common/messagebus as the secret identifier. The full specification for the environment variable's value is a comma separated list of service entries defined as: <service-name1>[optional list of static secret IDs separated by ;],<service-name2>[optional list of static secret IDs separated by ;],... Example with one service specifying IDs for static secrets and one without static secrets ADD_SECRETSTORE : \"appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export\" When the ADD_SECRETSTORE environment variable is processed to create these SecretStores , it will copy the specified saved secrets from the initial SecretStore into the service's SecretStore . This all depends on the completion of database or other credential bootstrapping and the secrets having been stored prior to the environment variable being processed. security-secretstore-setup will need to be refactored to ensure this sequencing.","title":"Only Exclusive Secret Stores"},{"location":"design/adr/014-Secret-Provider-For-All/#abstraction-interface","text":"The following will be the new SecretProvider abstraction interface used by all EdgeX services type SecretProvider interface { // Stores new secrets into the service's exclusive SecretStore at the specified path. 
StoreSecrets ( path string , secrets map [ string ] string ) error // Retrieves secrets from the service's exclusive SecretStore at the specified path. GetSecrets ( path string , _ ... string ) ( map [ string ] string , error ) // Sets the secrets last updated time to the current time. SecretsUpdated () // Returns the secrets last updated time SecretsLastUpdated () time . Time } Note: The GetDatabaseCredentials and GetCertificateKeyPair APIs have been removed. These are no longer needed since insecure database credentials will no longer be stored in the DatabaseInfo configuration and certificate key pairs are secrets like any others. This allows these secrets to be retrieved via the GetSecrets API.","title":"Abstraction Interface"},{"location":"design/adr/014-Secret-Provider-For-All/#implementation","text":"","title":"Implementation"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-method-and-bootstrap-handler","text":"The factory method and bootstrap handler will follow that currently in the Bootstrap implementation with some tweaks. Rather than putting the two split interfaces into the DIC, it will put just the single interface instance into the DIC. See details in the Interfaces and factory methods section above under Existing Implementations .","title":"Factory Method and Bootstrap Handler"},{"location":"design/adr/014-Secret-Provider-For-All/#caching-of-secrets","text":"Secrets will be cached as they are currently in the Application Service implementation","title":"Caching of Secrets"},{"location":"design/adr/014-Secret-Provider-For-All/#insecure-secrets","text":"Insecure Secrets will be handled as they are currently in the Application Service implementation. DatabaseInfo configuration will no longer be an option for storing the insecure database credentials. They will be stored in the InsecureSecrets configuration only. 
[Writable.InsecureSecrets] [Writable.InsecureSecrets.DB] path = \"redisdb\" [Writable.InsecureSecrets.DB.Secrets] username = \"\" password = \"\"","title":"Insecure Secrets"},{"location":"design/adr/014-Secret-Provider-For-All/#handling-on-the-fly-changes-to-insecuresecrets","text":"All services will need to handle the special processing when InsecureSecrets are changed on-the-fly via Consul. Since this will now be a common configuration item in Writable it can be handled in go-mod-bootstrap along with existing log level processing. This special processing will be taken from App SDK.","title":"Handling on-the-fly changes to InsecureSecrets"},{"location":"design/adr/014-Secret-Provider-For-All/#mocks","text":"A proper mock of the SecretProvider interface will be created with Mockery to be used in unit tests. The current mock in the App SDK is hand written rather than generated with Mockery .","title":"Mocks"},{"location":"design/adr/014-Secret-Provider-For-All/#where-will-secretprovider-reside","text":"","title":"Where will SecretProvider reside?"},{"location":"design/adr/014-Secret-Provider-For-All/#go-services","text":"The final decision to make is where this new SecretProvider abstraction will reside. Originally it was assumed that it would reside in go-mod-secrets , which seems logical. If we were to attempt this with the implementation including the bootstrap handler, go-mod-secrets would have a dependency on go-mod-bootstrap which will likely create a circular dependency. 
Refactoring the existing implementation in go-mod-bootstrap and having it reside there now seems to be the best choice.","title":"Go Services"},{"location":"design/adr/014-Secret-Provider-For-All/#c-device-service","text":"The C Device SDK will implement the same SecretProvider abstraction, InsecureSecrets configuration and the underlying SecretStore client.","title":"C Device Service"},{"location":"design/adr/014-Secret-Provider-For-All/#consequences","text":"All services will have a Writable.InsecureSecrets section added to their configuration InsecureSecrets definition will be moved from App SDK to go-mod-bootstrap Go Device SDK will add the SecretProvider to its bootstrapping C Device SDK implementation could be a big lift SecretStore configuration section will be added to all Device Services edgex-go services will be modified to use the single SecretProvider interface from the DIC in place of current usage of the GetDatabaseCredentials and GetCertificateKeyPair interfaces. Calls to GetDatabaseCredentials and GetCertificateKeyPair will be replaced with calls to the GetSecrets API and appropriate processing of the returned secrets will be added. App SDK will be modified to use the GetSecrets API in place of the GetDatabaseCredentials API App SDK will be modified to use the new SecretProvider bootstrap handler app-service-configurable's configuration profiles as well as all the Application Service example configurations will be updated to remove the SecretStoreExclusive configuration and just use the existing SecretStore configuration security-secretstore-setup will be enhanced as described in the Only Exclusive Secret Stores section above Adding new services that need static secrets added to their SecretStore requires stopping and restarting all the services. This is because security-secretstore-setup has completed but not stopped. If it is rerun without stopping the other services, their tokens and static secrets will have changed. 
The planned refactor of security-secretstore-setup will attempt to resolve this. Snaps do not yet support setting the environment variable for adding a SecretStore. It is planned for the Ireland release.","title":"Consequences"},{"location":"design/adr/core/0003-V2-API-Principles/","text":"Geneva API Guiding Principles Status Accepted by EdgeX Foundry working groups as of Core Working Group meeting 16-Jan-2020 Note This ADR was written pre-Geneva with an assumption that the V2 APIs would be available in Geneva. In actuality, the full V2 APIs will be delivered in the Ireland release (Spring 2021) Context A redesign of the EdgeX Foundry API is proposed for the Geneva release. This is understood by the community to warrant a 2.0 release that will not be backward compatible. The goal is to rework the API using solid principles that will allow for extension over the course of several release cycles, avoiding the necessity of yet another major release version in a short period of time. Briefly, this effort grew from the acknowledgement that the current models used to facilitate requests and responses via the EdgeX Foundry API were legacy definitions that were once used as internal representations of state within the EdgeX services themselves. Thus if you want to add or update a device, you populate a full device model rather than a specific Add/UpdateDeviceRequest. Currently, your request model has the same definition, and thus validation constraints, as the response model because they are one and the same! It is desirable to separate and be specific about what is required for a given request, as well as its state validity, and the bare minimum that must be returned within a response. Following from that central need, other considerations have been used when designing this proposed API. These will be enumerated and briefly explained below. 1.) 
Transport-agnostic Define the request/response data transfer objects (DTO) in a manner whereby they can be used independent of transport. For example, although an OpenAPI doc is implicitly coupled to HTTP/REST, define the DTOs in such a way that they could also be used if the platform were to evolve to a pub/sub architecture. 2.) Support partial updates via PATCH Given a request to, for example, update a device the user should be able to update only some properties of the device. Previously this would require an endpoint for each individual property to be updated since the \"update device\" endpoint, facilitated by a PUT, would perform a complete replacement of the device's data. If you only wanted to update the LastConnected timestamp, then a separate endpoint for that property was required. We will leverage PATCH in order to update an entity and only those properties populated on the request will be considered. Properties that are missing or left blank will not be touched. 3.) Support multiple requests at once Endpoints for the addition or updating of data (POST/PATCH) should accept multiple requests at once. If it were desirable to add or update multiple devices with one request, for example, the API should facilitate this. 4.) Support multiple correlated responses at once Following from #3 above, each request sent to the endpoint must result in a corresponding response. In the case of HTTP/REST, this means if four requests are sent to a POST operation, the return payload will have four responses. Each response must expose a \"code\" property containing a numeric result for what occurred. These could be equivalent to HTTP status codes, for example. So while the overall call might succeed, one or more of the child requests may not have. It is up to the caller to examine each response and handle accordingly. In order to correlate each response to its original request, each request must be assigned its own ID (in GUID format). 
The caller can then tie a response to an individual request and handle the result accordingly, or otherwise track that a response to a given request was not received. 5.) Use of 207 HTTP Status (Multi-Result) In the case where an endpoint can support multiple responses, the returned HTTP code from a REST API will be 207 (Multi-status) 6.) Each service should provide a \"batch\" request endpoint In addition to use-case specific endpoints that you'd find in any REST API, each service should provide a \"batch\" endpoint that can take any kind of request. This is a generic endpoint that allows you to group requests of different types within a single call. For example, instead of having to call two endpoints to get two jobs done, you can call a single endpoint passing the specific requests and have them routed appropriately within the service. Also, when considering agnostic transport, the batch endpoint would allow for the definition and handling of \"GET\" equivalent DTOs which are now implicit in the format of a URL. 7.) GET endpoints returning a list of items must support pagination URL parameters must be supported for every GET endpoint to support pagination. These parameters should indicate the current page of results and the number of results on a page. Decision The community has accepted the reasoning for the new API and the design principles outlined above. The approach will be to gradually implement the V2 API side-by-side with the current V1 APIs. We believe it will take more than a single release cycle to implement the new specification. Releases that occur prior to the V2 API implementation completion will continue to be major versioned as 1.x. Subsequent to completion, releases will be major versioned as 2.x. Consequences Backward incompatibility with EdgeX Foundry's V1 API requires a major version increment (e.g. v2.x). Service-level testing (e.g. blackbox tests) needs to be rewritten. 
Specification-first development allows for different implementations of EdgeX services to be certified as \"EdgeX Compliant\" in reference to an objective standard. Transport-agnostic focus enables different architectural patterns (pub/sub versus REST) using the same data representation.","title":"Geneva API Guiding Principles"},{"location":"design/adr/core/0003-V2-API-Principles/#geneva-api-guiding-principles","text":"","title":"Geneva API Guiding Principles"},{"location":"design/adr/core/0003-V2-API-Principles/#status","text":"Accepted by EdgeX Foundry working groups as of Core Working Group meeting 16-Jan-2020 Note This ADR was written pre-Geneva with an assumption that the V2 APIs would be available in Geneva. In actuality, the full V2 APIs will be delivered in the Ireland release (Spring 2021)","title":"Status"},{"location":"design/adr/core/0003-V2-API-Principles/#context","text":"A redesign of the EdgeX Foundry API is proposed for the Geneva release. This is understood by the community to warrant a 2.0 release that will not be backward compatible. The goal is to rework the API using solid principles that will allow for extension over the course of several release cycles, avoiding the necessity of yet another major release version in a short period of time. Briefly, this effort grew from the acknowledgement that the current models used to facilitate requests and responses via the EdgeX Foundry API were legacy definitions that were once used as internal representations of state within the EdgeX services themselves. Thus if you want to add or update a device, you populate a full device model rather than a specific Add/UpdateDeviceRequest. Currently, your request model has the same definition, and thus validation constraints, as the response model because they are one and the same! It is desirable to separate and be specific about what is required for a given request, as well as its state validity, and the bare minimum that must be returned within a response. 
Following from that central need, other considerations have been used when designing this proposed API. These will be enumerated and briefly explained below. 1.) Transport-agnostic Define the request/response data transfer objects (DTO) in a manner whereby they can be used independent of transport. For example, although an OpenAPI doc is implicitly coupled to HTTP/REST, define the DTOs in such a way that they could also be used if the platform were to evolve to a pub/sub architecture. 2.) Support partial updates via PATCH Given a request to, for example, update a device the user should be able to update only some properties of the device. Previously this would require an endpoint for each individual property to be updated since the \"update device\" endpoint, facilitated by a PUT, would perform a complete replacement of the device's data. If you only wanted to update the LastConnected timestamp, then a separate endpoint for that property was required. We will leverage PATCH in order to update an entity and only those properties populated on the request will be considered. Properties that are missing or left blank will not be touched. 3.) Support multiple requests at once Endpoints for the addition or updating of data (POST/PATCH) should accept multiple requests at once. If it were desirable to add or update multiple devices with one request, for example, the API should facilitate this. 4.) Support multiple correlated responses at once Following from #3 above, each request sent to the endpoint must result in a corresponding response. In the case of HTTP/REST, this means if four requests are sent to a POST operation, the return payload will have four responses. Each response must expose a \"code\" property containing a numeric result for what occurred. These could be equivalent to HTTP status codes, for example. So while the overall call might succeed, one or more of the child requests may not have. 
It is up to the caller to examine each response and handle accordingly. In order to correlate each response to its original request, each request must be assigned its own ID (in GUID format). The caller can then tie a response to an individual request and handle the result accordingly, or otherwise track that a response to a given request was not received. 5.) Use of 207 HTTP Status (Multi-Result) In the case where an endpoint can support multiple responses, the returned HTTP code from a REST API will be 207 (Multi-status) 6.) Each service should provide a \"batch\" request endpoint In addition to use-case specific endpoints that you'd find in any REST API, each service should provide a \"batch\" endpoint that can take any kind of request. This is a generic endpoint that allows you to group requests of different types within a single call. For example, instead of having to call two endpoints to get two jobs done, you can call a single endpoint passing the specific requests and have them routed appropriately within the service. Also, when considering agnostic transport, the batch endpoint would allow for the definition and handling of \"GET\" equivalent DTOs which are now implicit in the format of a URL. 7.) GET endpoints returning a list of items must support pagination URL parameters must be supported for every GET endpoint to support pagination. These parameters should indicate the current page of results and the number of results on a page.","title":"Context"},{"location":"design/adr/core/0003-V2-API-Principles/#decision","text":"The community has accepted the reasoning for the new API and the design principles outlined above. The approach will be to gradually implement the V2 API side-by-side with the current V1 APIs. We believe it will take more than a single release cycle to implement the new specification. Releases that occur prior to the V2 API implementation completion will continue to be major versioned as 1.x. 
Subsequent to completion, releases will be major versioned as 2.x.","title":"Decision"},{"location":"design/adr/core/0003-V2-API-Principles/#consequences","text":"Backward incompatibility with EdgeX Foundry's V1 API requires a major version increment (e.g. v2.x). Service-level testing (e.g. blackbox tests) needs to be rewritten. Specification-first development allows for different implementations of EdgeX services to be certified as \"EdgeX Compliant\" in reference to an objective standard. Transport-agnostic focus enables different architectural patterns (pub/sub versus REST) using the same data representation.","title":"Consequences"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/","text":"EdgeX-CLI V2 Design Status Approved (by TSC vote on 10/6/21) Context This ADR presents a technical plan for creation of a 2.0 version of edgex-cli which supports the new V2 REST APIs developed as part of the Ireland release of EdgeX. Existing Behavior The latest version of edgex-cli (1.0.1) only supports the V1 REST APIs and thus cannot be used with V2 releases of EdgeX. As the edgex-cli was developed organically over time, the current implementation has a number of bugs, mostly involving a lack of consistent behavior, especially with respect to formatting of output. Other issues with the existing client include: lack of tab completion default output of commands is too verbose verbose output sometimes prevents use of jq static configuration file required (i.e. no registry support) project hierarchy not conforming to best practice guidelines History The original Hanoi V1 client was created by a team at VMware which is no longer participating in the project. Canonical will lead the development of the Ireland/Jakarta V2 client. Decision Use standardized command-line args/flags Argument/Flag Description -d , --debug show additional output for debugging purposes (e.g. REST URL, request JSON, \u2026). 
This command-line arg will replace -v, --verbose and will no longer trigger output of the response JSON (see -j, --json). -j , --json output the raw JSON response returned by the EdgeX REST API and nothing else. This output mode is used for script-based usage of the client. --version output the version of the client and, if available, the version of EdgeX installed on the system (using the version of the metadata service) Restructure the Go code hierarchy to follow the most recent recommended guidelines. For instance /cmd should just contain the main application for the project, not an implementation for each command - that should be in /internal/cmd Take full advantage of the features of the underlying command-line library, Cobra, such as tab-completion of commands. Allow overlap of command names across services by supporting an argument to specify the service to use: -m/--metadata, -c/--command, -n/--notification, -s/--scheduler or --data (which is the default). Examples: edgex-cli ping --data edgex-cli ping -m edgex-cli version -c Implement all required V2 endpoints for core services Core Command - edgex-cli command read | write | list Core Data - edgex-cli event add | count | list | rm | scrub** - edgex-cli reading count | list Metadata - edgex-cli device add | adminstate | list | operstate | rm | update - edgex-cli deviceprofile add | list | rm | update - edgex-cli deviceservice add | list | rm | update - edgex-cli provisionwatcher add | list | rm | update Support Notifications - edgex-cli notification add | list | rm - edgex-cli subscription add | list | rm Support Scheduler - edgex-cli interval add | list | rm | update Common endpoints in all services: - edgex-cli version - edgex-cli ping - edgex-cli metrics - edgex-cli status The commands will support arguments as appropriate. 
For instance: - event list using /event/all to return all events - event list --device {name} using /event/device/name/{name} to return the events sourced from the specified device. Currently, some commands default to always displaying GUIDs in objects when they're not really needed. Change this so that by default GUIDs aren't displayed, but add a flag which causes them to be displayed. scrub may not work with Redis being secured by default. That might also apply to the top-level db command (used to wipe the entire db). If so, then the commands will be disabled in secure mode, but permitted in non-secure mode. Have built-in defaults with port numbers for all core services and allow overrides, avoiding the need for a static configuration file or configuration provider. (Stretch) implement a -o/--output argument which could be used to customize the pretty-printed objects (i.e. non-JSON). (Stretch) Implement support for use of the client via the API Gateway, including being able to connect to a remote EdgeX instance. This might require updates in go-mod-core-contracts. References Command Line Interface Guidelines The Unix Programming Environment, Brian W. 
Kernighan and Rob Pike POSIX Utility Conventions Program Behavior for All Programs, GNU Coding Standards 12 Factor CLI Apps, Jeff Dickey CLI Style Guide, Heroku Standard Go Project Layout","title":"EdgeX-CLI V2 Design"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#edgex-cli-v2-design","text":"","title":"EdgeX-CLI V2 Design"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#status","text":"Approved (by TSC vote on 10/6/21)","title":"Status"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#context","text":"This ADR presents a technical plan for creation of a 2.0 version of edgex-cli which supports the new V2 REST APIs developed as part of the Ireland release of EdgeX.","title":"Context"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#existing-behavior","text":"The latest version of edgex-cli (1.0.1) only supports the V1 REST APIs and thus cannot be used with V2 releases of EdgeX. As the edgex-cli was developed organically over time, the current implementation has a number of bugs, mostly involving a lack of consistent behavior, especially with respect to formatting of output. Other issues with the existing client include: lack of tab completion default output of commands is too verbose verbose output sometimes prevents use of jq static configuration file required (i.e. no registry support) project hierarchy not conforming to best practice guidelines","title":"Existing Behavior"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#history","text":"The original Hanoi V1 client was created by a team at VMware which is no longer participating in the project. Canonical will lead the development of the Ireland/Jakarta V2 client.","title":"History"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#decision","text":"Use standardized command-line args/flags Argument/Flag Description -d , --debug show additional output for debugging purposes (e.g. REST URL, request JSON, \u2026). 
This command-line arg will replace -v, --verbose and will no longer trigger output of the response JSON (see -j, --json). -j , --json output the raw JSON response returned by the EdgeX REST API and nothing else. This output mode is used for script-based usage of the client. --version output the version of the client and, if available, the version of EdgeX installed on the system (using the version of the metadata service) Restructure the Go code hierarchy to follow the most recent recommended guidelines. For instance /cmd should just contain the main application for the project, not an implementation for each command - that should be in /internal/cmd Take full advantage of the features of the underlying command-line library, Cobra, such as tab-completion of commands. Allow overlap of command names across services by supporting an argument to specify the service to use: -m/--metadata, -c/--command, -n/--notification, -s/--scheduler or --data (which is the default). Examples: edgex-cli ping --data edgex-cli ping -m edgex-cli version -c Implement all required V2 endpoints for core services Core Command - edgex-cli command read | write | list Core Data - edgex-cli event add | count | list | rm | scrub** - edgex-cli reading count | list Metadata - edgex-cli device add | adminstate | list | operstate | rm | update - edgex-cli deviceprofile add | list | rm | update - edgex-cli deviceservice add | list | rm | update - edgex-cli provisionwatcher add | list | rm | update Support Notifications - edgex-cli notification add | list | rm - edgex-cli subscription add | list | rm Support Scheduler - edgex-cli interval add | list | rm | update Common endpoints in all services: - edgex-cli version - edgex-cli ping - edgex-cli metrics - edgex-cli status The commands will support arguments as appropriate. 
For instance: - event list using /event/all to return all events - event list --device {name} using /event/device/name/{name} to return the events sourced from the specified device. Currently, some commands default to always displaying GUIDs in objects when they're not really needed. Change this so that by default GUIDs aren't displayed, but add a flag which causes them to be displayed. scrub may not work with Redis being secured by default. That might also apply to the top-level db command (used to wipe the entire db). If so, then the commands will be disabled in secure mode, but permitted in non-secure mode. Have built-in defaults with port numbers for all core services and allow overrides, avoiding the need for a static configuration file or configuration provider. (Stretch) implement a -o/--output argument which could be used to customize the pretty-printed objects (i.e. non-JSON). (Stretch) Implement support for use of the client via the API Gateway, including being able to connect to a remote EdgeX instance. This might require updates in go-mod-core-contracts.","title":"Decision"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#references","text":"Command Line Interface Guidelines The Unix Programming Environment, Brian W. Kernighan and Rob Pike POSIX Utility Conventions Program Behavior for All Programs, GNU Coding Standards 12 Factor CLI Apps, Jeff Dickey CLI Style Guide, Heroku Standard Go Project Layout","title":"References"},{"location":"design/adr/core/0021-Device-Profile-Changes/","text":"Changes to Device Profiles Status Approved By TSC Vote on 2/14/22 Please see a prior PR on this topic that detailed much of the debate and context on this issue. For clarity and simplicity, that PR was closed in favor of this simpler ADR. 
Context While the device profile has always been the way to describe a device/sensor and template its communications to the rest of the EdgeX platform, over the course of EdgeX evolution there have been changes in what could change in a profile (often based on its associations to other EdgeX objects). This document is meant to address the issue of change surrounding device profiles in EdgeX going forward \u2013 specifically when can a device profile (or its sub-elements such as device resources) be added, modified or removed. Summary of Device Profile Rules These rules will be implemented in core metadata on device profile API calls. A device profile can be added anytime Device resources or device commands can be added to a device profile anytime Attributes can be added to a device profile anytime A device profile can be removed or modified when the device profile is not associated to a device or provision watcher this includes modifying any field (except identifiers like names and ids) this includes changes to the array of device resources, device commands this includes changes to attributes (of device resources) even when a device profile is associated to a device or provision watcher, fields of the device profile or device resource can be modified when the field change will not affect the behavior of the system. on profile, the following fields do not affect the behavior: description, manufacturer, model, labels. on device resource, the following fields do not affect the behavior: description and tag A device profile cannot be removed when it is associated to a device or provision watcher. A device profile can be removed or modified even when associated to an event or reading. However, configuration options (see New Configuration Settings below) are available to block the change or removal of a device profile for any reason. the rationale behind the new configuration settings was specifically to protect the event/reading association to device profiles. 
Events and readings are generally considered short lived (ephemeral) objects and already contain the necessary device profile information that is needed by the system during their short life without having to refer to and keep the device profile. But if an adopter wants to make sure the device profile is unmodified and still exists for any event/readings association (or for any reason), then the new config setting will block device profile changes or removals. see note below in Consequences that a new Units property must be added to the Reading object in order to support this rule and the need for all relevant profile data to be in the reading. Ancillary Rules associated to Device Profiles Name and ID fields (identifying fields) for device profiles, device resources, etc. cannot be modified and can never be null. A device profile can begin life \u201cempty\u201d - meaning that it has no device resources or device commands. New APIs The following APIs would be added to the metadata REST service in order to meet the design specified above. Add Profile General Property PATCH API (allow to modify profile's description, manufacturer, model and label fields) Add Profile Device Resource POST API Add Profile Device Resource PATCH API (allow to modify Description and IsHidden only) Add Profile Device Resource DELETE API (allow as described above) Add Profile Device Command POST API Add Profile Device Command PATCH API (allow as described above) Add Profile Device Command DELETE API (allow as described above) New Configuration Settings Some adopters may not view event/reading data as ephemeral or short lived. These adopters may choose not to allow device profiles to be modified or removed when associated to an event or reading. For this reason, two new configuration options, in the [Writable.ProfileChange] section, will be added to metadata configuration that are used to reject modifications or deletions. 
StrictDeviceProfileChanges (set to false by default) StrictDeviceProfileDeletes (set to false by default) When either of these config settings is set to true, metadata would accordingly reject changes to or removal of profiles (note: metadata will not check that there are actually events or readings - or any object - associated to the device profile when these are set to true. It simply rejects all modifications or deletions of device profiles with the assumption that there could be events, readings or other objects associated and which need to be preserved). Consequences/Considerations In order to allow device profiles to be updated or removed even when associated to an EdgeX event/reading, a new property needs to be added to the reading object. Readings will now contain a \u201cUnits\u201d (string) property. This property will indicate the units of measure for the Value in the Reading and will be populated based on the Units for the device resource. A new device service configuration property, ReadingUnits (set to true by default) will allow adopters to indicate they do not want units to be added to the readings (for cases where there is a concern about the number of readings and the extra data of adding units). The ReadingUnits configuration option will be added to the [Writable.Reading] section of device services (and addressed in the device service SDKs). This allows the event/reading to contain all relevant information from the device profile that is needed by the system during the course of the event/reading\u2019s life. This allows the device profile to be modified, or even removed, even when there are events/readings in the system that were created from information in the device profile. 
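Concretely, the two new settings would appear in the core metadata configuration along these lines (a sketch following the TOML layout used by EdgeX 2.x services; the section and key names come from the text above):

```toml
[Writable]
  [Writable.ProfileChange]
  # When true, reject ALL modifications to device profiles (default false)
  StrictDeviceProfileChanges = false
  # When true, reject ALL deletions of device profiles (default false)
  StrictDeviceProfileDeletes = false
```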
References Metadata API Device Service SDK Required Functionality","title":"Changes to Device Profiles"},{"location":"design/adr/core/0021-Device-Profile-Changes/#changes-to-device-profiles","text":"","title":"Changes to Device Profiles"},{"location":"design/adr/core/0021-Device-Profile-Changes/#status","text":"Approved By TSC Vote on 2/14/22 Please see a prior PR on this topic that detailed much of the debate and context on this issue. For clarity and simplicity, that PR was closed in favor of this simpler ADR.","title":"Status"},{"location":"design/adr/core/0021-Device-Profile-Changes/#context","text":"While the device profile has always been the way to describe a device/sensor and template its communications to the rest of the EdgeX platform, over the course of EdgeX evolution there have been changes in what could change in a profile (often based on its associations to other EdgeX objects). This document is meant to address the issue of change surrounding device profiles in EdgeX going forward \u2013 specifically when can a device profile (or its sub-elements such as device resources) be added, modified or removed.","title":"Context"},{"location":"design/adr/core/0021-Device-Profile-Changes/#summary-of-device-profile-rules","text":"These rules will be implemented in core metadata on device profile API calls. 
A device profile can be added anytime Device resources or device commands can be added to a device profile anytime Attributes can be added to a device profile anytime A device profile can be removed or modified when the device profile is not associated to a device or provision watcher this includes modifying any field (except identifiers like names and ids) this includes changes to the array of device resources, device commands this includes changes to attributes (of device resources) even when a device profile is associated to a device or provision watcher, fields of the device profile or device resource can be modified when the field change will not affect the behavior of the system. on profile, the following fields do not affect the behavior: description, manufacturer, model, labels. on device resource, the following fields do not affect the behavior: description and tag A device profile cannot be removed when it is associated to a device or provision watcher. A device profile can be removed or modified even when associated to an event or reading. However, configuration options (see New Configuration Settings below) are available to block the change or removal of a device profile for any reason. the rationale behind the new configuration settings was specifically to protect the event/reading association to device profiles. Events and readings are generally considered short lived (ephemeral) objects and already contain the necessary device profile information that is needed by the system during their short life without having to refer to and keep the device profile. But if an adopter wants to make sure the device profile is unmodified and still exists for any event/readings association (or for any reason), then the new config setting will block device profile changes or removals. 
see note below in Consequences that a new Units property must be added to the Reading object in order to support this rule and the need for all relevant profile data to be in the reading.","title":"Summary of Device Profile Rules"},{"location":"design/adr/core/0021-Device-Profile-Changes/#ancillary-rules-associated-to-device-profiles","text":"Name and ID fields (identifying fields) for device profiles, device resources, etc. cannot be modified and can never be null. A device profile can begin life \u201cempty\u201d - meaning that it has no device resources or device commands.","title":"Ancillary Rules associated to Device Profiles"},{"location":"design/adr/core/0021-Device-Profile-Changes/#new-apis","text":"The following APIs would be added to the metadata REST service in order to meet the design specified above. Add Profile General Property PATCH API (allow to modify profile's description, manufacturer, model and label fields) Add Profile Device Resource POST API Add Profile Device Resource PATCH API (allow to modify Description and IsHidden only) Add Profile Device Resource DELETE API (allow as described above) Add Profile Device Command POST API Add Profile Device Command PATCH API (allow as described above) Add Profile Device Command DELETE API (allow as described above)","title":"New APIs"},{"location":"design/adr/core/0021-Device-Profile-Changes/#new-configuration-settings","text":"Some adopters may not view event/reading data as ephemeral or short lived. These adopters may choose not to allow device profiles to be modified or removed when associated to an event or reading. For this reason, two new configuration options, in the [Writable.ProfileChange] section, will be added to metadata configuration that are used to reject modifications or deletions. 
StrictDeviceProfileChanges (set to false by default) StrictDeviceProfileDeletes (set to false by default) When either of these config settings is set to true, metadata would accordingly reject changes to or removal of profiles (note: metadata will not check that there are actually events or readings - or any object - associated to the device profile when these are set to true. It simply rejects all modifications or deletions of device profiles with the assumption that there could be events, readings or other objects associated and which need to be preserved).","title":"New Configuration Settings"},{"location":"design/adr/core/0021-Device-Profile-Changes/#consequencesconsiderations","text":"In order to allow device profiles to be updated or removed even when associated to an EdgeX event/reading, a new property needs to be added to the reading object. Readings will now contain a \u201cUnits\u201d (string) property. This property will indicate the units of measure for the Value in the Reading and will be populated based on the Units for the device resource. A new device service configuration property, ReadingUnits (set to true by default) will allow adopters to indicate they do not want units to be added to the readings (for cases where there is a concern about the number of readings and the extra data of adding units). The ReadingUnits configuration option will be added to the [Writable.Reading] section of device services (and addressed in the device service SDKs). This allows the event/reading to contain all relevant information from the device profile that is needed by the system during the course of the event/reading\u2019s life. 
This allows the device profile to be modified, or even removed, even when there are events/readings in the system that were created from information in the device profile.","title":"Consequences/Considerations"},{"location":"design/adr/core/0021-Device-Profile-Changes/#references","text":"Metadata API Device Service SDK Required Functionality","title":"References"},{"location":"design/adr/device-service/0002-Array-Datatypes/","text":"Array Datatypes Design Status Context Decision Consequences Status Approved Context The current data model does not directly provide for devices which provide array data. Small fixed-length arrays may be handled by defining multiple device resources - one for each element - and aggregating them via a resource command. Other array data may be passed using the Binary type. Neither of these approaches is ideal: the binary data is opaque and any service processing it would need specific knowledge to do so, and aggregation presents the device service implementation with a multiple-read request that could in many cases be better handled by a single request. This design adds arrays of primitives to the range of supported types in EdgeX. It comprises an extension of the DeviceProfile model, and an update to the definition of Reading. Decision DeviceProfile extension The permitted values of the Type field in PropertyValue are extended to include: \"BoolArray\", \"Uint8Array\", \"Uint16Array\", \"Uint32Array\", \"Uint64Array\", \"Int8Array\", \"Int16Array\", \"Int32Array\", \"Int64Array\", \"Float32Array\", \"Float64Array\" Readings In the API (v1 and v2), Reading.Value is a string representation of the data. If this is maintained, the representation for Array types will follow the JSON array syntax, ie [\"value1\", \"value2\", ...] Consequences Any service which processes Readings will need to be reworked to account for the new Reading type. 
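Since Reading.Value stays a string, a consumer of an array reading must parse the JSON array of element strings back into native values. A minimal Go sketch for an Int32Array value (the helper name is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// decodeInt32Array converts a Reading.Value string for an "Int32Array"
// resource -- a JSON array of strings such as ["1", "34", "-5"] -- back
// into native integers.
func decodeInt32Array(value string) ([]int32, error) {
	// The wire form is a JSON array whose elements are themselves strings.
	var elems []string
	if err := json.Unmarshal([]byte(value), &elems); err != nil {
		return nil, err
	}
	out := make([]int32, 0, len(elems))
	for _, e := range elems {
		n, err := strconv.ParseInt(e, 10, 32)
		if err != nil {
			return nil, err
		}
		out = append(out, int32(n))
	}
	return out, nil
}

func main() {
	vals, err := decodeInt32Array(`["1", "34", "-5"]`)
	fmt.Println(vals, err)
}
```

The same two-step decode (JSON array of strings, then per-element conversion) applies to the other array types, with the element parser swapped accordingly.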
Device Service considerations The API used for interfacing between device SDKs and device service implementations contains a local representation of reading values. This will need to be updated in line with the changes outlined here. For C, this will involve an extension of the existing union type. For Go, additional fields may be added to the CommandValue structure. Processing of numeric data in the device service, ie offset, scale etc. will not be applied to the values in an array.","title":"Array Datatypes Design"},{"location":"design/adr/device-service/0002-Array-Datatypes/#array-datatypes-design","text":"Status Context Decision Consequences","title":"Array Datatypes Design"},{"location":"design/adr/device-service/0002-Array-Datatypes/#status","text":"Approved","title":"Status"},{"location":"design/adr/device-service/0002-Array-Datatypes/#context","text":"The current data model does not directly provide for devices which provide array data. Small fixed-length arrays may be handled by defining multiple device resources - one for each element - and aggregating them via a resource command. Other array data may be passed using the Binary type. Neither of these approaches is ideal: the binary data is opaque and any service processing it would need specific knowledge to do so, and aggregation presents the device service implementation with a multiple-read request that could in many cases be better handled by a single request. This design adds arrays of primitives to the range of supported types in EdgeX. 
It comprises an extension of the DeviceProfile model, and an update to the definition of Reading.","title":"Context"},{"location":"design/adr/device-service/0002-Array-Datatypes/#decision","text":"","title":"Decision"},{"location":"design/adr/device-service/0002-Array-Datatypes/#deviceprofile-extension","text":"The permitted values of the Type field in PropertyValue are extended to include: \"BoolArray\", \"Uint8Array\", \"Uint16Array\", \"Uint32Array\", \"Uint64Array\", \"Int8Array\", \"Int16Array\", \"Int32Array\", \"Int64Array\", \"Float32Array\", \"Float64Array\"","title":"DeviceProfile extension"},{"location":"design/adr/device-service/0002-Array-Datatypes/#readings","text":"In the API (v1 and v2), Reading.Value is a string representation of the data. If this is maintained, the representation for Array types will follow the JSON array syntax, ie [\"value1\", \"value2\", ...]","title":"Readings"},{"location":"design/adr/device-service/0002-Array-Datatypes/#consequences","text":"Any service which processes Readings will need to be reworked to account for the new Reading type.","title":"Consequences"},{"location":"design/adr/device-service/0002-Array-Datatypes/#device-service-considerations","text":"The API used for interfacing between device SDKs and device service implementations contains a local representation of reading values. This will need to be updated in line with the changes outlined here. For C, this will involve an extension of the existing union type. For Go, additional fields may be added to the CommandValue structure. Processing of numeric data in the device service, ie offset, scale etc. will not be applied to the values in an array.","title":"Device Service considerations"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/","text":"Device Service REST API Status Approved Context This ADR details the REST API to be provided by Device Service implementations in EdgeX version 2.x. 
As such, it supersedes the equivalent sections of the earlier \"Device Service Functional Requirements\" document. These requirements should be implemented as far as possible within the Device Service SDKs, but they also apply to any Device Service implementation. Decision Common endpoints The DS should provide the REST endpoints that are expected of all EdgeX microservices, specifically: config metrics ping version Callback Endpoint Methods callback/device PUT and POST callback/device/name/{name} DELETE callback/profile PUT callback/watcher PUT and POST callback/watcher/name/{name} DELETE parameter meaning {name} the name of the device or watcher These endpoints are used by the Core Metadata service to inform the device service of metadata updates. Endpoints are defined for each of the objects of interest to a device service, ie Devices, Device Profiles and Provision Watchers. On receipt of calls to these endpoints the device service should update its internal state accordingly. Note that the device service does not need to be informed of the creation or deletion of device profiles, as these operations may only occur where no devices are associated with the profile. To avoid stale profile entries the device service should delete a profile from its cache when the last device using it is deleted. Object deletion When an object is deleted, the Metadata service makes a DELETE request to the relevant callback/{type}/name/{name} endpoint. Object creation and updates When an object is created or updated, the Metadata service makes a POST or PUT request respectively to the relevant callback/{type} endpoint. The payload of the request is the new or updated object, ie one of the Device, DeviceProfile or ProvisionWatcher DTOs. 
Device Endpoint Methods device/name/{name}/{command} GET and PUT parameter meaning {name} the name of the device {command} the command name The command specified must match a deviceCommand or deviceResource name in the device's profile body (for PUT ): An application/json SettingRequest, which is a set of key/value pairs where the keys are valid deviceResource names, and the values provide the command argument for that resource. Example: {\"AHU-TargetTemperature\": \"28.5\", \"AHU-TargetBand\": \"4.0\"} Return code Meaning 200 the command was successful 404 the specified device does not exist, or the command/resource is unknown 405 attempted write to a read-only resource 423 the specified device is locked (admin state) or disabled (operating state) 500 the device driver is unable to process the request response body : A successful GET operation will return a JSON-encoded EventResponse object, which contains one or more Readings. Example: {\"apiVersion\":\"v2\",\"deviceName\":\"Gyro\",\"origin\":1592405201763915855,\"readings\":[{\"deviceName\":\"Gyro\",\"name\":\"Xrotation\",\"value\":\"124\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Yrotation\",\"value\":\"-54\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Zrotation\",\"value\":\"122\",\"origin\":1592405201763915855,\"valueType\":\"int32\"}]} This endpoint is used for obtaining readings from a device, and for writing settings to a device. Data formats The values obtained when readings are taken, or used to make settings, are expressed as strings. 
Type EdgeX types Representation Boolean Bool \"true\" or \"false\" Integer Uint8-Uint64 , Int8-Int64 Numeric string, eg \"-132\" Float Float32 , Float64 Decimal with exponent, eg \"1.234e-5\" String String string Binary Bytes octet array Array BoolArray , Uint8Array-Uint64Array , Int8Array-Int64Array , Float32Array , Float64Array JSON Array, eg \"[\"1\", \"34\", \"-5\"]\" Notes: - The presence of a Binary reading will cause the entire Event to be encoded using CBOR rather than JSON - Arrays of String and Binary data are not supported Readings and Events A Reading represents a value obtained from a deviceResource. It contains the following fields Field name Description deviceName The name of the device profileName The name of the Profile describing the Device resourceName The name of the deviceResource origin A timestamp indicating when the reading was taken value The reading value valueType The type of the data Or for binary Readings, the following fields Field name Description deviceName The name of the device profileName The name of the Profile describing the Device resourceName The name of the deviceResource origin A timestamp indicating when the reading was taken binaryValue The reading value mediaType The MIME type of the data An Event represents the result of a GET command. If the command names a deviceResource, the Event will contain a single Reading. If the command names a deviceCommand, the Event will contain as many Readings as there are deviceResources listed in the deviceCommand. The fields of an Event are as follows: Field name Description deviceName The name of the Device from which the Readings are taken profileName The name of the Profile describing the Device origin The time at which the Event was created readings An array of Readings Query Parameters Calls to the device endpoints may include a Query String in the URL. This may be used to pass parameters relating to the request to the device service. 
Individual device services may define their own parameters to control specific behaviors. Parameters beginning with the prefix ds- are reserved to the Device SDKs and the following parameters are defined for GET requests: Parameter Valid Values Default Meaning ds-pushevent \"yes\" or \"no\" \"no\" If set to yes, a successful GET will result in an event being pushed to the EdgeX system ds-returnevent \"yes\" or \"no\" \"yes\" If set to no, there will be no Event returned in the http response Device States A Device in EdgeX has two states associated with it: the Administrative state and the Operational state. The Administrative state may be set to LOCKED (normally UNLOCKED ) to block access to the device for administrative reasons. The Operational state may be set to DOWN (normally UP ) to indicate that the device is not currently working. In either case access to the device via this endpoint will be denied and HTTP 423 (\"Locked\") will be returned. Data Transformations A number of simple data transformations may be defined in the deviceResource. The table below shows these transformations in the order in which they are applied to outgoing data, ie Readings. The transformations are inverted and applied in reverse order for incoming data. Transform Applicable reading types Effect mask Integers The reading is masked (bitwise-and operation) with the specified value. shift Integers The reading is bit-shifted by the specified value. Positive values indicate right-shift, negative for left. base Integers and Floats The reading is replaced by the specified value raised to the power of the reading. scale Integers and Floats The reading is multiplied by the specified value. offset Integers and Floats The reading is increased by the specified value. The operation of the mask transform on incoming data (a setting) is that the value to be set on the resource is the existing value bitwise-anded with the complement of the mask, bitwise-ored with the value specified in the request. 
ie, new-value = (current-value & !mask) | request-value The combination of mask and shift can therefore be used to access data contained in a subdivision of an octet. It is possible that following the application of the specified transformations, a value may exceed the range that may be represented by its type. Should this occur on a set operation, a suitable error should be logged and returned, along with the Bad Request http code 400. If it occurs as part of a get operation, the Reading's value should be set to the String \"overflow\" and its valueType to String . Assertions and Mappings Assertions are another attribute in a device resource's PropertyValue, which specifies a string against which the reading value is compared. If the comparison fails, then the http request returns a string of the form \"Assertion failed for device resource: <resource name>, with value: <value>\" . This also has the side-effect of setting the device operatingstate to DISABLED . A 500 status code is also returned. Note that the error response and status code should be returned regardless of the ds-returnevent setting. Assertions are also checked where an event is being generated due to an AutoEvent, or asynchronous readings are pushed. In these cases if the assertion is triggered, an error should be logged and the operating state should be set as above. Assertions are not checked for settings, only for readings. Mappings may be defined in a deviceCommand. These allow Readings of string type to be remapped. Mappings are applied after assertions are checked, and are the final transformation before Readings are created. Mappings are also applied, but in reverse, to settings ( PUT request data). lastConnected timestamp Each Device has as part of its metadata a timestamp named lastConnected , which indicates the most recent occasion when the device was successfully interacted with. 
The device service should update this timestamp every time a GET or PUT operation succeeds, unless it has been configured not to do so (eg for performance reasons). Discovery Endpoint Methods discovery POST A call to this endpoint triggers the device discovery process, if enabled. See Discovery Design for details. Consequences Changes from v1.x API The callback endpoint is split according to the type of object being updated Callbacks for new and updated objects take the object in the request body The device/all form is removed GET requests take parameters controlling what is to be done with resulting Events, and the default behavior does not send the Event to core-data References OpenAPI definition of v2 API : https://github.com/edgexfoundry/device-sdk-go/blob/master/openapi/v2/device-sdk.yaml Device Service Functional Requirements (Geneva) : https://wiki.edgexfoundry.org/download/attachments/329488/edgex-device-service-requirements-v11.pdf?version=1&modificationDate=1591621033000&api=v2","title":"Device Service REST API"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#device-service-rest-api","text":"","title":"Device Service REST API"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#status","text":"Approved","title":"Status"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#context","text":"This ADR details the REST API to be provided by Device Service implementations in EdgeX version 2.x. As such, it supersedes the equivalent sections of the earlier \"Device Service Functional Requirements\" document. 
These requirements should be implemented as far as possible within the Device Service SDKs, but they also apply to any Device Service implementation.","title":"Context"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#decision","text":"","title":"Decision"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#common-endpoints","text":"The DS should provide the REST endpoints that are expected of all EdgeX microservices, specifically: config metrics ping version","title":"Common endpoints"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#callback","text":"Endpoint Methods callback/device PUT and POST callback/device/name/{name} DELETE callback/profile PUT callback/watcher PUT and POST callback/watcher/name/{name} DELETE parameter meaning {name} the name of the device or watcher These endpoints are used by the Core Metadata service to inform the device service of metadata updates. Endpoints are defined for each of the objects of interest to a device service, ie Devices, Device Profiles and Provision Watchers. On receipt of calls to these endpoints the device service should update its internal state accordingly. Note that the device service does not need to be informed of the creation or deletion of device profiles, as these operations may only occur where no devices are associated with the profile. 
To avoid stale profile entries the device service should delete a profile from its cache when the last device using it is deleted.","title":"Callback"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#object-deletion","text":"When an object is deleted, the Metadata service makes a DELETE request to the relevant callback/{type}/name/{name} endpoint.","title":"Object deletion"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#object-creation-and-updates","text":"When an object is created or updated, the Metadata service makes a POST or PUT request respectively to the relevant callback/{type} endpoint. The payload of the request is the new or updated object, ie one of the Device, DeviceProfile or ProvisionWatcher DTOs.","title":"Object creation and updates"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#device","text":"Endpoint Methods device/name/{name}/{command} GET and PUT parameter meaning {name} the name of the device {command} the command name The command specified must match a deviceCommand or deviceResource name in the device's profile body (for PUT ): An application/json SettingRequest, which is a set of key/value pairs where the keys are valid deviceResource names, and the values provide the command argument for that resource. Example: {\"AHU-TargetTemperature\": \"28.5\", \"AHU-TargetBand\": \"4.0\"} Return code Meaning 200 the command was successful 404 the specified device does not exist, or the command/resource is unknown 405 attempted write to a read-only resource 423 the specified device is locked (admin state) or disabled (operating state) 500 the device driver is unable to process the request response body : A successful GET operation will return a JSON-encoded EventResponse object, which contains one or more Readings. 
Example: {\"apiVersion\":\"v2\",\"deviceName\":\"Gyro\",\"origin\":1592405201763915855,\"readings\":[{\"deviceName\":\"Gyro\",\"name\":\"Xrotation\",\"value\":\"124\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Yrotation\",\"value\":\"-54\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Zrotation\",\"value\":\"122\",\"origin\":1592405201763915855,\"valueType\":\"int32\"}]} This endpoint is used for obtaining readings from a device, and for writing settings to a device.","title":"Device"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#data-formats","text":"The values obtained when readings are taken, or used to make settings, are expressed as strings. Type EdgeX types Representation Boolean Bool \"true\" or \"false\" Integer Uint8-Uint64 , Int8-Int64 Numeric string, eg \"-132\" Float Float32 , Float64 Decimal with exponent, eg \"1.234e-5\" String String string Binary Bytes octet array Array BoolArray , Uint8Array-Uint64Array , Int8Array-Int64Array , Float32Array , Float64Array JSON Array, eg \"[\"1\", \"34\", \"-5\"]\" Notes: - The presence of a Binary reading will cause the entire Event to be encoded using CBOR rather than JSON - Arrays of String and Binary data are not supported","title":"Data formats"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#readings-and-events","text":"A Reading represents a value obtained from a deviceResource. 
It contains the following fields Field name Description deviceName The name of the device profileName The name of the Profile describing the Device resourceName The name of the deviceResource origin A timestamp indicating when the reading was taken value The reading value valueType The type of the data Or for binary Readings, the following fields Field name Description deviceName The name of the device profileName The name of the Profile describing the Device resourceName The name of the deviceResource origin A timestamp indicating when the reading was taken binaryValue The reading value mediaType The MIME type of the data An Event represents the result of a GET command. If the command names a deviceResource, the Event will contain a single Reading. If the command names a deviceCommand, the Event will contain as many Readings as there are deviceResources listed in the deviceCommand. The fields of an Event are as follows: Field name Description deviceName The name of the Device from which the Readings are taken profileName The name of the Profile describing the Device origin The time at which the Event was created readings An array of Readings","title":"Readings and Events"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#query-parameters","text":"Calls to the device endpoints may include a Query String in the URL. This may be used to pass parameters relating to the request to the device service. Individual device services may define their own parameters to control specific behaviors. 
Parameters beginning with the prefix ds- are reserved to the Device SDKs and the following parameters are defined for GET requests: Parameter Valid Values Default Meaning ds-pushevent \"yes\" or \"no\" \"no\" If set to yes, a successful GET will result in an event being pushed to the EdgeX system ds-returnevent \"yes\" or \"no\" \"yes\" If set to no, there will be no Event returned in the http response","title":"Query Parameters"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#device-states","text":"A Device in EdgeX has two states associated with it: the Administrative state and the Operational state. The Administrative state may be set to LOCKED (normally UNLOCKED ) to block access to the device for administrative reasons. The Operational state may be set to DOWN (normally UP ) to indicate that the device is not currently working. In either case access to the device via this endpoint will be denied and HTTP 423 (\"Locked\") will be returned.","title":"Device States"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#data-transformations","text":"A number of simple data transformations may be defined in the deviceResource. The table below shows these transformations in the order in which they are applied to outgoing data, ie Readings. The transformations are inverted and applied in reverse order for incoming data. Transform Applicable reading types Effect mask Integers The reading is masked (bitwise-and operation) with the specified value. shift Integers The reading is bit-shifted by the specified value. Positive values indicate right-shift, negative for left. base Integers and Floats The reading is replaced by the specified value raised to the power of the reading. scale Integers and Floats The reading is multiplied by the specified value. offset Integers and Floats The reading is increased by the specified value. 
The operation of the mask transform on incoming data (a setting) is that the value to be set on the resource is the existing value bitwise-anded with the complement of the mask, bitwise-ored with the value specified in the request. ie, new-value = (current-value & !mask) | request-value The combination of mask and shift can therefore be used to access data contained in a subdivision of an octet. It is possible that following the application of the specified transformations, a value may exceed the range that may be represented by its type. Should this occur on a set operation, a suitable error should be logged and returned, along with the Bad Request http code 400. If it occurs as part of a get operation, the Reading's value should be set to the String \"overflow\" and its valueType to String .","title":"Data Transformations"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#assertions-and-mappings","text":"Assertions are another attribute in a device resource's PropertyValue, which specifies a string against which the reading value is compared. If the comparison fails, then the http request returns a string of the form \"Assertion failed for device resource: <resource name>, with value: <value>\" . This also has the side-effect of setting the device operatingstate to DISABLED . A 500 status code is also returned. Note that the error response and status code should be returned regardless of the ds-returnevent setting. Assertions are also checked where an event is being generated due to an AutoEvent, or asynchronous readings are pushed. In these cases if the assertion is triggered, an error should be logged and the operating state should be set as above. Assertions are not checked for settings, only for readings. Mappings may be defined in a deviceCommand. These allow Readings of string type to be remapped. Mappings are applied after assertions are checked, and are the final transformation before Readings are created. 
Mappings are also applied, but in reverse, to settings ( PUT request data).","title":"Assertions and Mappings"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#lastconnected-timestamp","text":"Each Device has as part of its metadata a timestamp named lastConnected , this indicates the most recent occasion when the device was successfully interacted with. The device service should update this timestamp every time a GET or PUT operation succeeds, unless it has been configured not to do so (eg for performance reasons).","title":"lastConnected timestamp"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#discovery","text":"Endpoint Methods discovery POST A call to this endpoint triggers the device discovery process, if enabled. See Discovery Design for details.","title":"Discovery"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#consequences","text":"","title":"Consequences"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#changes-from-v1x-api","text":"The callback endpoint is split according to the type of object being updated Callbacks for new and updated objects take the object in the request body The device/all form is removed GET requests take parameters controlling what is to be done with resulting Events, and the default behavior does not send the Event to core-data","title":"Changes from v1.x API"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#references","text":"OpenAPI definition of v2 API : https://github.com/edgexfoundry/device-sdk-go/blob/master/openapi/v2/device-sdk.yaml Device Service Functional Requirements (Geneva) : https://wiki.edgexfoundry.org/download/attachments/329488/edgex-device-service-requirements-v11.pdf?version=1&modificationDate=1591621033000&api=v2","title":"References"},{"location":"design/adr/device-service/0012-DeviceService-Filters/","text":"Device Service Filters Status Approved (by TSC vote on 3/15/21) design (initially) for Hanoi - 
but now being considered for Ireland implementation TBD (desired feature targeted for Ireland or Jakarta) Context In EdgeX today, sensor/device data collected can be \"filtered\" by application services before being exported or sent to some north side application or system. Built-in application service functions (available through the app services SDK) allow EdgeX event/reading objects to be filtered by device name or by device ResourceName. That is, event/readings can be filtered by: which device sent the event/reading (as determined by the Event device property). the classification or origin (such as temperature or humidity) of data produced by the device as determined by the Reading's name property (which used to be the value descriptor and now refers to the device ResourceName). Two Levels of Device Service Filtering There are potentially two places where \"filtering\" in a device service could be useful. One (Sensor Data Filter) - after the device service has communicated with the sensor or device to get sensor values (but before the service creates Event/Reading objects and pushes those to core data). A sensor data filter would allow the device service to essentially ignore some of the raw sensed data. This would allow for some device service optimization in that the device service would not have to perform type transformations and creation of event/reading objects if the data can be eliminated at this early stage. This first level filtering would, if put in place, likely occur in the code associated with the read commands performed by the ProtocolDriver . Two (Reading Filter) - after the sensor data has been collected and read and put into Event/Reading objects, there is a desire to filter some of the Readings based on the Reading values or Reading name (which is the device ResourceName) or some combination of value and name. At this time, this design only addresses the need for the second filter (Reading Filter) . 
At the time of this writing, no applicable use case has yet to be defined to warrant the Sensor Data Filter. Reading Filters Reading filters will allow, not unlike application service filter functions today, Readings in an Event to be removed if: the value was outside or inside some range, or the value was greater than, less than or equal to some value based on the Reading value (numeric) of a Reading outside a specified range (min/max) described in the service configuration. Thus avoiding sending in outlier or jittery data Readings that could negatively affect analytics. Future scope: based on the Reading value (numeric) equal to or near (within some specified range) the last reading. This allows a device service to reduce sending in Event/Readings that do not represent any significant change. This differs from the already implemented onChangeOnly in that it is filtering Readings within a specified degree of change. Note: this feature would require caching of readings which has not fully been implemented in the SDK. The existing mechanism for autoevents provides a partial cache. Added for future reference, but this feature would not be accomplished in the initial implementation; requiring extra design work on caching to be implemented. the value was the same as, or not the same as, some specified value or values (for strings, boolean and other non-numeric values) the value matches a pattern (glob and/or regex) when the value is a string. the name (the device ResourceName) matched a particular value; in other words match temperature or humidity as example device resources. Unlike application services, there is not a need to filter on a device name (or identifier). Simply disable the device in the device service if all Event/Readings are to be stopped for the device. In the case that all Readings of an Event are filtered, it is assumed the entire Event is deemed to be worthless and not sent to core data by the device service. 
If only some Readings from an Event are filtered, the Event minus the filtered Readings would be sent to core data. The filter behaves the same whether the collection of Readings and Events is triggered by a scheduled collection of data from the underlying sensor/device or triggered by a command request (as from the command service). Therefore, the call for a command request still results in a successful status code and a return of no results (or partial results) if the filter causes all or some of the readings to be removed. Design / Architecture A new function interface shall be defined that, when implemented, performs a Reading Filter operation. A ReadingFilter function would take a parameter (an Event containing readings), check whether the Readings of the Event match on the filtering configuration (see below) and if they do then remove them from the Event . The ReadingFilter function would return the Event object (minus filtered Readings ) or nil if the Event held no more Readings . Pseudo code for the generic function is provided below. The results returned will include a boolean to indicate whether any Reading objects were removed from the Event (allowing the receiver to know if some were filtered from the original list). func (f Filter) ReadingFilter(lc logger.LoggingClient, event *models.Event) (*models.Event, bool, error) { // depending on impl; filtering for values in/out of a range, >, <, =, same, not same, from a particular name (device resource), etc. // The bool indicates whether any Readings were filtered from the Event. if len(event.Readings) > 0 { if len(filteredReadings) > 0 { return event, true, nil }; return event, false, nil }; return nil, true, nil } Based on current needs/use cases, implementations of the function interface could include the following filter functions: func (f Filter) FilterByValue(lc logger.LoggingClient, event *models.Event) (*models.Event, bool, error) {} func (f Filter) FilterByResourceNamesMatch(lc logger.LoggingClient, event *models.Event) (*models.Event, bool, error) {} Note The app functions SDK comes with FilterByDeviceName and FilterByResourceName functions today. The FilterByResourceName would behave similarly to FilterByResourceNameMatch. The Filter structure houses the configuration parameters with which the filter functions work and filter on. Note The app functions SDK uses a fairly simple Filter structure. type Filter struct { FilterValues []string FilterOut bool } Given the collection of filter operations (in range, out of range, equal or not equal), the following structure is proposed: type Filter struct { FilterValues []string TargetResourceName string FilterOp string // enum of in (in range inclusive), out (outside a range exclusive), eq (equal) or ne (not equal) } Example uses of the Filter structure to specify filtering: Filter { FilterValues: {\"10\", \"20\"}, TargetResourceName: \"Int64\", FilterOp: \"in\" } // filter for those Int64 readings with values between 10-20 inclusive Filter { FilterValues: {\"10\", \"20\"}, TargetResourceName: \"Int64\", FilterOp: \"out\" } // filter for those Int64 readings with values outside of 10-20. Filter { FilterValues: {\"8\", \"10\", \"12\"}, TargetResourceName: \"Int64\", FilterOp: \"eq\" } //filter for those Int64 readings with values of 8, 10, or 12. Filter { FilterValues: {\"8\", \"10\"}, TargetResourceName: \"Int64\", FilterOp: \"ne\" } //filter for those Int64 readings with values not equal to 8 or 10 Filter { FilterValues: {\"Int32\", \"Int64\"}, TargetResourceName: \"\", FilterOp: \"eq\" } //filter to be used with FilterByResourceNameMatch. Filter for resource names of Int32 or Int64. Filter { FilterValues: {\"Int32\"}, TargetResourceName: \"\", FilterOp: \"ne\" } //filter to be used with FilterByResourceNameMatch. Filter for resource names not equal to (excluding) Int32. A NewFilter function creates, initializes and returns a new instance of the filter based on the configuration provided. 
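The FilterOp semantics in the Filter examples above might be evaluated along these lines. This is a hypothetical sketch: the matches method, its numeric-only handling, and the lenient parsing are assumptions for illustration, not the SDK's implementation.

```go
package main

import (
	"fmt"
	"strconv"
)

// Filter mirrors the structure proposed in the ADR.
type Filter struct {
	FilterValues       []string
	TargetResourceName string
	FilterOp           string // "in", "out", "eq", "ne"
}

// matches reports whether a numeric reading value satisfies the filter,
// i.e. whether FilterByValue should remove the Reading.
func (f Filter) matches(value float64) bool {
	vals := make([]float64, len(f.FilterValues))
	for i, s := range f.FilterValues {
		vals[i], _ = strconv.ParseFloat(s, 64) // parse errors ignored in this sketch
	}
	switch f.FilterOp {
	case "in": // inside the inclusive range [vals[0], vals[1]]
		return value >= vals[0] && value <= vals[1]
	case "out": // outside the range, exclusive
		return value < vals[0] || value > vals[1]
	case "eq": // equal to any listed value
		for _, v := range vals {
			if value == v {
				return true
			}
		}
		return false
	case "ne": // not equal to any listed value
		for _, v := range vals {
			if value == v {
				return false
			}
		}
		return true
	}
	return false
}

func main() {
	f := Filter{FilterValues: []string{"10", "20"}, TargetResourceName: "Int64", FilterOp: "in"}
	fmt.Println(f.matches(15), f.matches(25)) // a value of 15 matches, 25 does not
}
```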
func NewReadingNameFilter(filterValues []string, targetResourceName string, filterOp string) Filter { return Filter{FilterValues: filterValues, TargetResourceName: targetResourceName, FilterOp: filterOp} } Sharing filter functions If one were to explore the filtering functions in the app functions SDK filter.go (both FilterByDeviceName and FilterByValueDescriptor ), the filters operate on the Event model object and return the same objects ( Event or nil). Ideally, since both app services and device services generally share the same interface model (from go-mod-core-contracts ), it is desirable to share the same filter functions between SDKs and associated services. Decisions on how to do this in Go - whether by shared module for example - are left as a future release design and implementation task - and as the need for common filter functions across device services and application services is identified in use cases. C needs are likely to be handled in the SDK directly. Additional Design Considerations As Device Services do not have the concept of a functions pipeline like application services do, consideration must be given as to how and where to: provide configuration to specify which filter functions to invoke create the filter invoke the filtering functions At this time, custom filters will not be supported as the custom filters would not be known by the SDK and therefore could not be specified in configuration. This is consistent with the app functions SDK and filtering. Function Inflection Point It is precisely after the conversion to Event/Reading objects (after the async readings are assembled into events) and before returning the result in the common.SendEvent (in utils.go) function that the device service should invoke the required filter functions. In the existing V1 implementation of the device-sdk-go, commands, async readings, and auto-events all call the function common.SendEvent() . Note: V2 implementation will require some re-evaluation of this inflection point. 
Where possible, the implementation should locate a single point of inflection. In the C SDK, it is likely that the filters will be called before conversion to Event/Reading objects - they will operate on commandresult objects (equivalent to CommandValues). The order in which functions are called is important when more than one filter is provided. The order that functions are called should be reflected in the order listed in the configuration of the filters. Events containing binary values (event.HasBinaryValue) will not be filtered. Future releases may include binary value filters. Setting Filter Function and Configuration When filter functions are shared (or appear to be doing the same type of work) between SDKs, the configuration of the similar filter functions should also look similar. The app functions SDK configuration model for filters should therefore be followed. While device services do not have pipelines, the inclusion and configuration of filters for device services should have a similar look (to provide symmetry with app services). The configuration has to provide the functions required and parameters to make the functions work - even though the association to a pipeline is not required. Below is the common app service configuration as it relates to filters: [Writable.Pipeline] ExecutionOrder = \"FilterByDeviceName, TransformToXML, SetOutputData\" [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] DeviceNames = \"Random-Float-Device,Random-Integer-Device\" FilterOut = \"false\" Suggested and hypothetical configuration for the device service reading filters should look something like the example below. 
[Writable.Filters] # filter readings where resource name equals Int32 ExecutionOrder = \"FilterByResourceNamesMatch, FilterByValue\" [Writable.Filters.Functions.FilterByResourceNamesMatch] [Writable.Filters.Functions.FilterByResourceNamesMatch.Parameters] FilterValues = \"Int32\" FilterOp = \"eq\" # filter readings where the resource name is Int64 and the values are between 10 and 20 [Writable.Filters.Functions.FilterByValue] [Writable.Filters.Functions.FilterByValue.Parameters] TargetResourceName = \"Int64\" FilterValues = [\"10\", \"20\"] FilterOp = \"in\" Decision To be determined Consequences This design does not take into account potential changes found with the V2 API. References","title":"Device Service Filters"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#device-service-filters","text":"","title":"Device Service Filters"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#status","text":"Approved (by TSC vote on 3/15/21) design (initially) for Hanoi - but now being considered for Ireland implementation TBD (desired feature targeted for Ireland or Jakarta)","title":"Status"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#context","text":"In EdgeX today, sensor/device data collected can be \"filtered\" by application services before being exported or sent to some north side application or system. Built-in application service functions (available through the app services SDK) allow EdgeX event/reading objects to be filtered by device name or by device ResourceName. That is, event/readings can be filtered by: which device sent the event/reading (as determined by the Event device property). 
the classification or origin (such as temperature or humidity) of data produced by the device as determined by the Reading's name property (which used to be the value descriptor and now refers to the device ResourceName).","title":"Context"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#two-levels-of-device-service-filtering","text":"There are potentially two places where \"filtering\" in a device service could be useful. One (Sensor Data Filter) - after the device service has communicated with the sensor or device to get sensor values (but before the service creates Event/Reading objects and pushes those to core data). A sensor data filter would allow the device service to essentially ignore some of the raw sensed data. This would allow for some device service optimization in that the device service would not have to perform type transformations and creation of event/reading objects if the data can be eliminated at this early stage. This first level filtering would, if put in place, likely occur in the code associated with the read commands performed by the ProtocolDriver . Two (Reading Filter) - after the sensor data has been collected and read and put into Event/Reading objects, there is a desire to filter some of the Readings based on the Reading values or Reading name (which is the device ResourceName) or some combination of value and name. At this time, this design only addresses the need for the second filter (Reading Filter) . 
At the time of this writing, no applicable use case has yet been defined to warrant the Sensor Data Filter.","title":"Two Levels of Device Service Filtering"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#reading-filters","text":"Reading filters will, like application service filter functions today, allow Readings in an Event to be removed if: the value was outside or inside some range, or the value was greater than, less than or equal to some value based on the Reading value (numeric) being outside a specified range (min/max) described in the service configuration. Thus avoiding sending in outlier or jittery data Readings that could negatively affect analytics. Future scope: based on the Reading value (numeric) equal to or near (within some specified range) the last reading. This allows a device service to reduce sending in Event/Readings that do not represent any significant change. This differs from the already implemented onChangeOnly in that it is filtering Readings within a specified degree of change. Note: this feature would require caching of readings which has not fully been implemented in the SDK. The existing mechanism for autoevents provides a partial cache. Added for future reference, but this feature would not be accomplished in the initial implementation; requiring extra design work on caching to be implemented. the value was the same as, or not the same as, some specified value or values (for strings, boolean and other non-numeric values) the value matches a pattern (glob and/or regex) when the value is a string. the name (the device ResourceName) matched a particular value; in other words match temperature or humidity as example device resources. Unlike application services, there is not a need to filter on a device name (or identifier). Simply disable the device in the device service if all Event/Readings are to be stopped for the device. 
In the case that all Readings of an Event are filtered, it is assumed the entire Event is deemed to be worthless and not sent to core data by the device service. If only some Readings from an Event are filtered, the Event minus the filtered Readings would be sent to core data. The filter behaves the same whether the collection of Readings and Events is triggered by a scheduled collection of data from the underlying sensor/device or triggered by a command request (as from the command service). Therefore, the call for a command request still results in a successful status code and a return of no results (or partial results) if the filter causes all or some of the readings to be removed.","title":"Reading Filters"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#design-architecture","text":"A new function interface shall be defined that, when implemented, performs a Reading Filter operation. A ReadingFilter function would take a parameter (an Event containing readings), check whether the Readings of the Event match the filtering configuration (see below) and if they do then remove them from the Event . The ReadingFilter function would return the Event object (minus filtered Readings ) or nil if the Event held no more Readings . Pseudo code for the generic function is provided below. The results returned will include a boolean to indicate whether any Reading objects were removed from the Event (allowing the receiver to know if some were filtered from the original list). func ( f Filter ) ReadingFilter ( lc logger . LoggingClient , event * models . Event ) ( * models . Event , error , boolean ) { // depending on impl; filtering for values in/out of a range, >, <, =, same, not same, from a particular name (device resource), etc. // The boolean will indicate whether any Readings were filtered from the Event. if ( len ( event . 
Readings ) > 0 ) if ( len ( filteredReadings ) > 0 ) return event , true else return event , false else return nil , true } Based on current needs/use cases, implementations of the function interface could include the following filter functions: func ( f Filter ) FilterByValue ( lc logger . LoggingClient , event * models . Event ) ( * models . Event , error , boolean ) {} func ( f Filter ) FilterByResourceNamesMatch ( lc logger . LoggingClient , event * models . Event ) ( * models . Event , error , boolean ) {} Note The app functions SDK comes with FilterByDeviceName and FilterByResourceName functions today. The FilterByResourceName would behave similarly to FilterByResourceNameMatch. The Filter structure houses the configuration parameters with which the filter functions work and filter on. Note The app functions SDK uses a fairly simple Filter structure. type Filter struct { FilterValues [] string FilterOut bool } Given the collection of filter operations (in range, out of range, equal or not equal), the following structure is proposed: type Filter struct { FilterValues [] string TargetResourceName string FilterOp string // enum of in (in range inclusive), out (outside a range exclusive), eq (equal) or ne (not equal) } Example uses of the Filter structure to specify filtering: Filter { FilterValues : { 10 , 20 }, TargetResourceName : \"Int64\" , FilterOp : \"in\" } // filter for those Int64 readings with values between 10-20 inclusive Filter { FilterValues : { 10 , 20 }, TargetResourceName : \"Int64\" , FilterOp : \"out\" } // filter for those Int64 readings with values outside of 10-20. Filter { FilterValues : { 8 , 10 , 12 }, TargetResourceName : \"Int64\" , FilterOp : \"eq\" } //filter for those Int64 readings with values of 8, 10, or 12. Filter { FilterValues : { 8 , 10 }, TargetResourceName : \"Int64\" , FilterOp : \"ne\" } //filter for those Int64 readings with values not equal to 8 or 10 Filter { FilterValues : { \"Int32\" , \"Int64\" }, TargetResourceName : nil , FilterOp : \"eq\" } //filter to be used with FilterByResourceNameMatch. 
Filter for resource names of Int32 or Int64. Filter { FilterValues : { \"Int32\" }, TargetResourceName : nil , FilterOp : \"ne\" } //filter to be used with FilterByResourceNameMatch. Filter for resource names not equal to (excluding) Int32. The NewReadingNameFilter function creates, initializes and returns a new instance of the filter based on the configuration provided. func NewReadingNameFilter ( filterValues [] string , filterOp string ) Filter { return Filter { FilterValues : filterValues , FilterOp : filterOp } }","title":"Design / Architecture"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#sharing-filter-functions","text":"If one were to explore the filtering functions in the app functions SDK filter.go (both FilterByDeviceName and FilterByValueDescriptor ), the filters operate on the Event model object and return the same objects ( Event or nil). Ideally, since both app services and device services generally share the same interface model (from go-mod-core-contracts ), it would be desirable to share the same filter functions between SDKs and associated services. Decisions on how to do this in Go - whether by shared module for example - are left as a future release design and implementation task - and as the need for common filter functions across device services and application services are identified in use cases. C needs are likely to be handled in the SDK directly.","title":"Sharing filter functions"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#additional-design-considerations","text":"As Device Services do not have the concept of a functions pipeline like application services do, consideration must be given as to how and where to: provide configuration to specify which filter functions to invoke create the filter invoke the filtering functions At this time, custom filters will not be supported as the custom filters would not be known by the SDK and therefore could not be specified in configuration. 
This is consistent with the app functions SDK and filtering.","title":"Additional Design Considerations"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#function-inflection-point","text":"It is precisely after the conversion to Event/Reading objects (after the async readings are assembled into events) and before returning that result in the common.SendEvent (in utils.go) function that the device service should invoke the required filter functions. In the existing V1 implementation of the device-sdk-go, commands, async readings, and auto-events all call the function common.SendEvent() . Note: V2 implementation will require some re-evaluation of this inflection point. Where possible, the implementation should locate a single point of inflection. In the C SDK, it is likely that the filters will be called before conversion to Event/Reading objects - they will operate on commandresult objects (equivalent to CommandValues). The order in which functions are called is important when more than one filter is provided. The order that functions are called should be reflected in the order listed in the configuration of the filters. Events containing binary values (event.HasBinaryValue) will not be filtered. Future releases may include binary value filters.","title":"Function Inflection Point"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#setting-filter-function-and-configuration","text":"When filter functions are shared (or appear to be doing the same type of work) between SDKs, the configuration of the similar filter functions should also look similar. The app functions SDK configuration model for filters should therefore be followed. While device services do not have pipelines, the inclusion and configuration of filters for device services should have a similar look (to provide symmetry with app services). 
The configuration has to provide the functions required and parameters to make the functions work - even though the association to a pipeline is not required. Below is the common app service configuration as it relates to filters: [Writable.Pipeline] ExecutionOrder = \"FilterByDeviceName, TransformToXML, SetOutputData\" [Writable.Pipeline.Functions.FilterByDeviceName] [Writable.Pipeline.Functions.FilterByDeviceName.Parameters] DeviceNames = \"Random-Float-Device,Random-Integer-Device\" FilterOut = \"false\" Suggested and hypothetical configuration for the device service reading filters should look something like the example below. [Writable.Filters] # filter readings where resource name equals Int32 ExecutionOrder = \"FilterByResourceNamesMatch, FilterByValue\" [Writable.Filter.Functions.FilterByResourceNamesMatch] [Writable.Filter.Functions.FilterByResourceNamesMatch.Parameters] FilterValues = \"Int32\" FilterOp = \"eq\" # filter readings where the resource name is Int64 and the values are between 10 and 20 [Writable.Filter.Functions.FilterByValue] [Writable.Filter.Functions.FilterByValue.Parameters] TargetResourceName = \"Int64\" FilterValues = { 10 , 20 } FilterOp = \"in\"","title":"Setting Filter Function and Configuration"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#decision","text":"To be determined","title":"Decision"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#consequences","text":"This design does not take into account potential changes found with the V2 API.","title":"Consequences"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#references","text":"","title":"References"},{"location":"design/adr/devops/0007-Release-Automation/","text":"Release Automation Status Approved by TSC 04/08/2020 Context EdgeX Foundry is a framework composed of microservices to ease development of IoT/Edge solutions. 
With the framework getting richer and the project growing, the number of artifacts to be released has increased. This proposal outlines a method for automating the release process for the base artifacts. Requirements Release Artifact Definition For the scope of the Hanoi release, artifact types are defined as: GitHub tags in the repositories. Docker images in our Nexus repository and Docker hub. *Snaps in the Snapcraft store. This list is likely to expand in future releases. *The building and publishing of snaps was removed from community scope in September 2020 and is managed outside the community by Canonical. General Requirements As the EdgeX Release Czar I gathered the following requirements for automating this part of the release. The release automation needs a manual trigger to be triggered by the EdgeX Release Czar or the Linux Foundation Release Engineers. The goal of this automation is to have a \"push button\" release mechanism to reduce human error in our release process. Release artifacts can come from one or more GitHub repositories at a time. GitHub repositories can have one or more release artifact types to release. GitHub repositories can have one or more artifacts of a specific type to release. (For example: The mono repository, edgex-go, has more than 20 docker images to release.) GitHub repositories may be released at different times. (For example: Application and Device service repositories can be released on a different day than the Core services in the mono repository.) Ability to track multiple release streams for the project. An audit trail history for releases. Location The code that will manage the release automation for EdgeX Foundry will live in a repository called cd-management . This repository will have a branch named release that will track the releases of artifacts off the main branch of the EdgeX Foundry repositories. 
Multiple Release Streams EdgeX Foundry has this idea of multiple release streams that basically coincide with different named branches in GitHub. For the majority of the main releases we will be targeting those off the main branch. In our cd-management repository we will have a release branch that will track the main branches of the EdgeX repositories. In the future we will mark a specific release for long term support (LTS). When this happens we will have to branch off main in the EdgeX repositories and create a separate release stream for the LTS. The suggestion at that point will be to branch off the release branch in cd-management as well and use this new release branch to track the LTS branches in the EdgeX repositories. Release Flow Go Modules, Device and Application SDKs During Development Go modules, Application and Device SDKs only release a GitHub tag as their release. Go modules, Application and Device SDKs are set up to automatically increment a developmental version tag on each merge to main . (IE: 1.0.0-dev.1 -> 1.0.0-dev.2) Release The release automation for Go Modules, Device and Application SDKs is used to set the final release version git tag. (IE: 1.0.0-dev.X -> 1.0.0) For each release, the Go Modules, Device and Application SDK repositories will be tagged with the release version. Core Services (Including Security and System Management services), Application Services, Device Services and Supporting Docker Images During Development For the Core Services, Application Services, Device Services and Supporting Docker Images we release Github tags and docker images. On every merge to the main branch we will do the following: increment a developmental version tag on GitHub, (IE: 1.0.0-dev.1 -> 1.0.0-dev.2), stage docker images in our Nexus repository (docker.staging). Release The release automation will need to do the following: Set version tag on GitHub. 
(IE: 1.0.0-dev.X -> 1.0.0) Promote docker images in our Nexus repository from docker.staging to docker.release and public Docker hub. Supporting Assets (e.g. edgex-cli) During Development For supporting release assets (e.g. edgex-cli) we release GitHub tags on every merge to the main branch. For every merge to main we will do the following: increment a developmental version tag on GitHub, (IE: 1.0.0-dev.1 -> 1.0.0-dev.2) and store the build artifacts in our Nexus repository. Release For EdgeX releases the release automation will set the final release version by creating a git tag (e.g. 1.0.0-dev.X -> 1.0.0) and produce a Github Release containing the binary assets targeted for release.","title":"Release Automation"},{"location":"design/adr/devops/0007-Release-Automation/#release-automation","text":"","title":"Release Automation"},{"location":"design/adr/devops/0007-Release-Automation/#status","text":"Approved by TSC 04/08/2020","title":"Status"},{"location":"design/adr/devops/0007-Release-Automation/#context","text":"EdgeX Foundry is a framework composed of microservices to ease development of IoT/Edge solutions. With the framework getting richer and the project growing, the number of artifacts to be released has increased. This proposal outlines a method for automating the release process for the base artifacts.","title":"Context"},{"location":"design/adr/devops/0007-Release-Automation/#requirements","text":"","title":"Requirements"},{"location":"design/adr/devops/0007-Release-Automation/#release-artifact-definition","text":"For the scope of the Hanoi release, artifact types are defined as: GitHub tags in the repositories. Docker images in our Nexus repository and Docker hub. *Snaps in the Snapcraft store. This list is likely to expand in future releases. 
*The building and publishing of snaps was removed from community scope in September 2020 and is managed outside the community by Canonical.","title":"Release Artifact Definition"},{"location":"design/adr/devops/0007-Release-Automation/#general-requirements","text":"As the EdgeX Release Czar I gathered the following requirements for automating this part of the release. The release automation needs a manual trigger to be triggered by the EdgeX Release Czar or the Linux Foundation Release Engineers. The goal of this automation is to have a \"push button\" release mechanism to reduce human error in our release process. Release artifacts can come from one or more GitHub repositories at a time. GitHub repositories can have one or more release artifact types to release. GitHub repositories can have one or more artifacts of a specific type to release. (For example: The mono repository, edgex-go, has more than 20 docker images to release.) GitHub repositories may be released at different times. (For example: Application and Device service repositories can be released on a different day than the Core services in the mono repository.) Ability to track multiple release streams for the project. An audit trail history for releases.","title":"General Requirements"},{"location":"design/adr/devops/0007-Release-Automation/#location","text":"The code that will manage the release automation for EdgeX Foundry will live in a repository called cd-management . This repository will have a branch named release that will track the releases of artifacts off the main branch of the EdgeX Foundry repositories.","title":"Location"},{"location":"design/adr/devops/0007-Release-Automation/#multiple-release-streams","text":"EdgeX Foundry has this idea of multiple release streams that basically coincide with different named branches in GitHub. For the majority of the main releases we will be targeting those off the main branch. 
In our cd-management repository we will have a release branch that will track the main branches of the EdgeX repositories. In the future we will mark a specific release for long term support (LTS). When this happens we will have to branch off main in the EdgeX repositories and create a separate release stream for the LTS. The suggestion at that point will be to branch off the release branch in cd-management as well and use this new release branch to track the LTS branches in the EdgeX repositories.","title":"Multiple Release Streams"},{"location":"design/adr/devops/0007-Release-Automation/#release-flow","text":"","title":"Release Flow"},{"location":"design/adr/devops/0007-Release-Automation/#go-modules-device-and-application-sdks","text":"","title":"Go Modules, Device and Application SDKs"},{"location":"design/adr/devops/0007-Release-Automation/#during-development","text":"Go modules, Application and Device SDKs only release a GitHub tag as their release. Go modules, Application and Device SDKs are set up to automatically increment a developmental version tag on each merge to main . (IE: 1.0.0-dev.1 -> 1.0.0-dev.2)","title":"During Development"},{"location":"design/adr/devops/0007-Release-Automation/#release","text":"The release automation for Go Modules, Device and Application SDKs is used to set the final release version git tag. 
(IE: 1.0.0-dev.X -> 1.0.0) For each release, the Go Modules, Device and Application SDK repositories will be tagged with the release version.","title":"Release"},{"location":"design/adr/devops/0007-Release-Automation/#core-services-including-security-and-system-management-services-application-services-device-services-and-supporting-docker-images","text":"","title":"Core Services (Including Security and System Management services), Application Services, Device Services and Supporting Docker Images"},{"location":"design/adr/devops/0007-Release-Automation/#during-development_1","text":"For the Core Services, Application Services, Device Services and Supporting Docker Images we release Github tags and docker images. On every merge to the main branch we will do the following: increment a developmental version tag on GitHub, (IE: 1.0.0-dev.1 -> 1.0.0-dev.2), stage docker images in our Nexus repository (docker.staging).","title":"During Development"},{"location":"design/adr/devops/0007-Release-Automation/#release_1","text":"The release automation will need to do the following: Set version tag on GitHub. (IE: 1.0.0-dev.X -> 1.0.0) Promote docker images in our Nexus repository from docker.staging to docker.release and public Docker hub.","title":"Release"},{"location":"design/adr/devops/0007-Release-Automation/#supporting-assets-eg-edgex-cli","text":"","title":"Supporting Assets (e.g. edgex-cli)"},{"location":"design/adr/devops/0007-Release-Automation/#during-development_2","text":"For supporting release assets (e.g. edgex-cli) we release GitHub tags on every merge to the main branch. 
For every merge to main we will do the following: increment a developmental version tag on GitHub, (IE: 1.0.0-dev.1 -> 1.0.0-dev.2) and store the build artifacts in our Nexus repository.","title":"During Development"},{"location":"design/adr/devops/0007-Release-Automation/#release_2","text":"For EdgeX releases the release automation will set the final release version by creating a git tag (e.g. 1.0.0-dev.X -> 1.0.0) and produce a Github Release containing the binary assets targeted for release.","title":"Release"},{"location":"design/adr/devops/0010-Release-Artifacts/","text":"Release Artifacts Status Approved Context During the Geneva release of EdgeX Foundry the DevOps WG transformed the CI/CD process with new Jenkins pipeline functionality. After this new functionality was added we also started adding release automation. This new automation is outlined in ADR 0007 Release Automation. However, in ADR 0007 Release Automation only two release artifact types are outlined. This document is meant to be a living document to try to outline all currently supported artifacts associated with an EdgeX Foundry release, and should be updated if/when this list changes. Release Artifact Types Docker Images Tied to Code Release? Yes Docker images are released for every named release of EdgeX Foundry. During development the community releases images to the docker.staging repository in Nexus . At the time of release we promote the last tested image from docker.staging to docker.release . In addition to that we will publish the docker image on DockerHub . Nexus Retention Policy docker.snapshots Retention Policy: 90 days since last download Contains: Docker images that are not expected to be released. This contains images to optimize the builds in the CI infrastructure. The definitions of these docker images can be found in the edgexfoundry/ci-build-images Github repository. 
Docker Tags Used: Version, Latest docker.staging Retention Policy: 180 days since last download Contains: Docker images built for potential release and testing purposes during development. Docker Tags Used: Version (ie: v1.x), Release Branch (master, fuji, etc), Latest docker.release Retention Policy: No automatic removal. Requires TSC approval to remove images from this repository. Contains: Officially released docker images for EdgeX. Docker Tags Used: Version (ie: v1.x), Latest Nexus Cleanup Policies Reference Docker Compose Files Tied to Code Release? Yes Docker compose files are released alongside the docker images for every release of EdgeX Foundry. During development the community maintains compose files in a folder named nightly-build . These compose files are meant to be used by our testing frameworks. At the time of release the community makes compose files for that release in a folder matching its name. (ie: geneva ) DockerHub Image Descriptions and Overviews Tied to Code Release? No After Docker images are published to DockerHub , automation should be run to update the image Overviews and Descriptions of the necessary images. This automation is located in the edgex-docker-hub-documentation branch of the cd-management repository. In preparation for the release the community makes changes to the Overview and Description metadata as appropriate. The Release Czar will coordinate the execution of the automation near the release time. Github Page: EdgeX Docs Tied to Code Release? No EdgeX Foundry releases a set of documentation for our project at http://docs.edgexfoundry.org . This page is a Github page that is managed by the edgexfoundry/edgex-docs Github repository. As a community we make our best effort to keep these docs up to date. On this page we are also versioning the docs with the semantic versions of the named releases. 
As a community we try to version our documentation site shortly after the official release date but documentation changes are addressed as we find them throughout the release cycle. GitHub Tags Tied to Code Release? Yes, for the final semantic version Github tags are used to track the releases of EdgeX Foundry. During development the tags are incremented automatically for each commit using a development suffix (ie: v1.1.1-dev.1 -> v1.1.1-dev.2 ). At the time of release we release a tag with the final semantic version (ie: v1.1.1 ). Snaps Tied to Code Release? Yes The building of snaps was removed from community scope in September 2020, but the snaps are still available on the snapcraft store . Canonical publishes daily arm64 and amd64 releases of the following snaps to latest/edge in the Snap Store. These builds take place on the Canonical Launchpad platform and use the latest code from the master branch of each EdgeX repository, versioned using the latest git tag. edgexfoundry edgex-app-service-configurable edgex-device-camera edgex-device-rest edgex-device-modbus edgex-device-mqtt edgex-device-grove edgex-cli (work-in-progress) Note - this list may expand over time. At code freeze the edgexfoundry snap revision in the edge channel is promoted to latest/beta and $TRACK/beta. Publishing to beta will trigger the Canonical checkbox automated tests, which include tests on a variety of hardware hosted by Canonical. When the project tags a release of any of the snaps listed above, the resulting snap revision is first promoted from the edge channel to latest/candidate and $TRACK/candidate. Canonical tests this revision, and if all looks good, releases to latest/stable and $TRACK/stable. Canonical may also publish updates to the EdgeX snaps after release to address high/critical bugs and CVEs (common vulnerabilities and exposures). Note - in the above descriptions, $TRACK corresponds to the named release tracks (e.g. fuji, geneva, hanoi, ...) 
which are created for every major/minor release of EdgeX Foundry. SwaggerHub API Docs Tied to Code Release? No In addition to our documentation site EdgeX Foundry also releases our API specifications on SwaggerHub. Testing Framework Tied to Code Release? Yes The EdgeX Foundry community has a set of tests we maintain to do regression testing; during development this framework tracks the master branch of the components of EdgeX. At the time of release we will update the testing frameworks to point at the released Github tags and add a version tag to the testing frameworks themselves. This creates a snapshot of the testing framework at the time of release for validation of the official release. GitHub Release Artifacts Tied to Code Release? Yes GitHub release functionality is utilized on some repositories to release binary artifacts/assets (e.g. zip/tar files). These are versioned with the semantic version and found on the repository's GitHub Release page under 'Assets'. Known Build Dependencies for EdgeX Foundry There are some internal build dependencies within the EdgeX Foundry organization. When building artifacts for validation or a release you will need to take into account the build dependencies to make sure you build them in the correct order. Application services have a dependency on the Application Functions SDK. Go Device services have a dependency on the Go Device SDK. C Device services have a dependency on the C Device SDK. Decision Consequences This document is meant to be a living document of all the release artifacts of EdgeX Foundry. With this ADR we would have a good understanding of what needs to be released and when. 
Without this document this information will remain tribal knowledge within the community.","title":"Release Artifacts"},{"location":"design/adr/devops/0010-Release-Artifacts/#release-artifacts","text":"","title":"Release Artifacts"},{"location":"design/adr/devops/0010-Release-Artifacts/#status","text":"Approved","title":"Status"},{"location":"design/adr/devops/0010-Release-Artifacts/#context","text":"During the Geneva release of EdgeX Foundry the DevOps WG transformed the CI/CD process with new Jenkins pipeline functionality. After this new functionality was added we also started adding release automation. This new automation is outlined in ADR 0007 Release Automation. However, in ADR 0007 Release Automation only two release artifact types are outlined. This document is meant to be a living document to try to outline all currently supported artifacts associated with an EdgeX Foundry release, and should be updated if/when this list changes.","title":"Context"},{"location":"design/adr/devops/0010-Release-Artifacts/#release-artifact-types","text":"","title":"Release Artifact Types"},{"location":"design/adr/devops/0010-Release-Artifacts/#docker-images","text":"Tied to Code Release? Yes Docker images are released for every named release of EdgeX Foundry. During development the community releases images to the docker.staging repository in Nexus . At the time of release we promote the last tested image from docker.staging to docker.release . In addition to that we will publish the docker image on DockerHub .","title":"Docker Images"},{"location":"design/adr/devops/0010-Release-Artifacts/#nexus-retention-policy","text":"","title":"Nexus Retention Policy"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockersnapshots","text":"Retention Policy: 90 days since last download Contains: Docker images that are not expected to be released. This contains images to optimize the builds in the CI infrastructure. 
The definitions of these docker images can be found in the edgexfoundry/ci-build-images Github repository. Docker Tags Used: Version, Latest","title":"docker.snapshots"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerstaging","text":"Retention Policy: 180 days since last download Contains: Docker images built for potential release and testing purposes during development. Docker Tags Used: Version (ie: v1.x), Release Branch (master, fuji, etc), Latest","title":"docker.staging"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerrelease","text":"Retention Policy: No automatic removal. Requires TSC approval to remove images from this repository. Contains: Officially released docker images for EdgeX. Docker Tags Used: Version (ie: v1.x), Latest Nexus Cleanup Policies Reference","title":"docker.release"},{"location":"design/adr/devops/0010-Release-Artifacts/#docker-compose-files","text":"Tied to Code Release? Yes Docker compose files are released alongside the docker images for every release of EdgeX Foundry. During development the community maintains compose files in a folder named nightly-build . These compose files are meant to be used by our testing frameworks. At the time of release the community makes compose files for that release in a folder matching its name. (ie: geneva )","title":"Docker Compose Files"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerhub-image-descriptions-and-overviews","text":"Tied to Code Release? No After Docker images are published to DockerHub , automation should be run to update the image Overviews and Descriptions of the necessary images. This automation is located in the edgex-docker-hub-documentation branch of the cd-management repository. In preparation for the release the community makes changes to the Overview and Description metadata as appropriate. 
The Release Czar will coordinate the execution of the automation near the release time.","title":"DockerHub Image Descriptions and Overviews"},{"location":"design/adr/devops/0010-Release-Artifacts/#github-page-edgex-docs","text":"Tied to Code Release? No EdgeX Foundry releases a set of documentation for our project at http://docs.edgexfoundry.org . This page is a Github page that is managed by the edgexfoundry/edgex-docs Github repository. As a community we make our best effort to keep these docs up to date. On this page we are also versioning the docs with the semantic versions of the named releases. As a community we try to version our documentation site shortly after the official release date but documentation changes are addressed as we find them throughout the release cycle.","title":"Github Page: EdgeX Docs"},{"location":"design/adr/devops/0010-Release-Artifacts/#github-tags","text":"Tied to Code Release? Yes, for the final semantic version Github tags are used to track the releases of EdgeX Foundry. During development the tags are incremented automatically for each commit using a development suffix (ie: v1.1.1-dev.1 -> v1.1.1-dev.2 ). At the time of release we release a tag with the final semantic version (ie: v1.1.1 ).","title":"GitHub Tags"},{"location":"design/adr/devops/0010-Release-Artifacts/#snaps","text":"Tied to Code Release? Yes The building of snaps was removed from community scope in September 2020, but snaps are still available on the snapcraft store . Canonical publishes daily arm64 and amd64 releases of the following snaps to latest/edge in the Snap Store. These builds take place on the Canonical Launchpad platform and use the latest code from the master branch of each EdgeX repository, versioned using the latest git tag. edgexfoundry edgex-app-service-configurable edgex-device-camera edgex-device-rest edgex-device-modbus edgex-device-mqtt edgex-device-grove edgex-cli (work-in-progress) Note - this list may expand over time. 
At code freeze the edgexfoundry snap revision in the edge channel is promoted to latest/beta and $TRACK/beta. Publishing to beta will trigger the Canonical checkbox automated tests, which include tests on a variety of hardware hosted by Canonical. When the project tags a release of any of the snaps listed above, the resulting snap revision is first promoted from the edge channel to latest/candidate and $TRACK/candidate. Canonical tests this revision, and if all looks good, releases to latest/stable and $TRACK/stable. Canonical may also publish updates to the EdgeX snaps after release to address high/critical bugs and CVEs (common vulnerabilities and exposures). Note - in the above descriptions, $TRACK corresponds to the named release tracks (e.g. fuji, geneva, hanoi, ...) which are created for every major/minor release of EdgeX Foundry.","title":"Snaps"},{"location":"design/adr/devops/0010-Release-Artifacts/#swaggerhub-api-docs","text":"Tied to Code Release? No In addition to our documentation site EdgeX Foundry also releases our API specifications on Swaggerhub.","title":"SwaggerHub API Docs"},{"location":"design/adr/devops/0010-Release-Artifacts/#testing-framework","text":"Tied to Code Release? Yes The EdgeX Foundry community has a set of tests we maintain to do regression testing. During development this framework tracks the master branch of the components of EdgeX. At the time of release we will update the testing frameworks to point at the released Github tags and add a version tag to the testing frameworks themselves. This creates a snapshot of the testing framework at the time of release for validation of the official release.","title":"Testing Framework"},{"location":"design/adr/devops/0010-Release-Artifacts/#github-release-artifacts","text":"Tied to Code Release? Yes GitHub release functionality is utilized on some repositories to release binary artifacts/assets (e.g. zip/tar files). 
These are versioned with the semantic version and found on the repository's GitHub Release page under 'Assets'.","title":"GitHub Release Artifacts"},{"location":"design/adr/devops/0010-Release-Artifacts/#known-build-dependencies-for-edgex-foundry","text":"There are some internal build dependencies within the EdgeX Foundry organization. When building artifacts for validation or a release you will need to take into account the build dependencies to make sure you build them in the correct order. Application services have a dependency on the Application Functions SDK. Go Device services have a dependency on the Go Device SDK. C Device services have a dependency on the C Device SDK.","title":"Known Build Dependencies for EdgeX Foundry"},{"location":"design/adr/devops/0010-Release-Artifacts/#decision","text":"","title":"Decision"},{"location":"design/adr/devops/0010-Release-Artifacts/#consequences","text":"This document is meant to be a living document of all the release artifacts of EdgeX Foundry. With this ADR we have a good understanding of what needs to be released and when it is released. Without this document this information will remain tribal knowledge within the community.","title":"Consequences"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/","text":"Creation and Distribution of Secrets Status Approved Context This ADR seeks to clarify and prioritize the secret handling approach taken by EdgeX. EdgeX microservices need a number of secrets to be created and distributed in order to create a functional, secure system. Among these secrets are: Privileged administrator passwords (such as a database superuser password) Service account passwords (e.g. non-privileged database accounts) PKI private keys There is a lack of consistency in how secrets are created and distributed to EdgeX microservices, and when developers need to add new components to the system, it is unclear what the preferred approach should be. 
This document assumes a threat model wherein the EdgeX services are sandboxed (such as in a snap or a container) and the host system is trusted, and all services running in a single snap share a trust boundary. Terms The following terms will be helpful for understanding the subsequent discussion: SECRETSLOC is a protected file system path where bootstrapping secrets are stored. While EdgeX implements a sophisticated secret handling mechanism, that mechanism itself requires secrets. For example, every microservice that talks to Vault must have its own unique secret to authenticate: Vault itself cannot be used to distribute these secrets. SECRETSLOC fulfills the role that the non-routable instance data IP address, 169.254.169.254, fulfills in the public cloud: delivery of bootstrapping secrets. As EdgeX does not have a hypervisor nor virtual machines for this purpose, a protected file system path is used instead. SECRETSLOC is implementation-dependent. A desirable feature of SECRETSLOC would be that data written here is kept in RAM and is not persisted to storage media. This property is not achievable in all circumstances. For Docker, a list of suggested paths--in preference order--is: /run/edgex/secrets (a tmpfs volume on a Linux host) /tmp/edgex/secrets (a temporary file area on Linux and MacOS hosts) A persistent docker volume (use when host bind mounts are not available) For snaps, a list of suggested paths--in preference order--is: * /run/snap.$SNAP_NAME/ (a tmpfs volume on a Linux host) * $SNAP_DATA/secrets (a snap-specific persistent data area) * TBD (a content interface that allows for sharing of secrets from the core snap) Current practices survey A survey of the existing EdgeX secrets reveals the following approaches. A designation of \"compliant\" means that the current implementation is aligned with the recommended practices documented in the next section. 
A designation of \"non-compliant\" means that the current implementation uses an implementation mechanism outside of the recommended practices documented in the next section. A \"non-compliant\" implementation is a candidate for refactoring to bring the implementation into conformance with the recommended practices. System-managed secrets PKI private keys Docker: PKI generated by standalone utility every cold start of the framework. Distribution via SECRETSLOC . (Compliant.) Snaps: PKI generated by standalone utility every cold start of the framework. Deployed to SECRETSLOC . (Compliant.) Secret store master password Docker: Distribution via persistent docker volume. (Non-compliant.) Snaps: Stored in $SNAP_DATA/config/security-secrets-setup/res . (Non-compliant.) Secret store per-service authentication tokens Docker: Distribution via SECRETSLOC generated every cold start of the framework. (Compliant.) Snaps: Distribution via SECRETSLOC , generated every cold start of the framework. (Compliant.) Postgres superuser password Docker: Hard-coded into docker-compose file, checked in to source control. (Non-compliant.) Snaps: Generated at snap install time via \"apg\" (\"automatic password generator\") tool, installed into Postgres, cached to $SNAP_DATA/config/postgres/kongpw (non-compliant), and passed to Kong via $KONG_PG_PASSWORD . MongoDB service account passwords Docker: Direct consumption from secret store. (Compliant.) Snaps: Direct consumption from secret store. (Compliant.) Redis authentication password Docker: Server--staged to secrets volume and injected via command line. (Non-compliant.) Clients--direct consumption from secret store. (Compliant.) Snaps: Server--staged to $SNAP_DATA/secrets/edgex-redis/redis5-password and injected via command line. (Non-compliant.) Clients--direct consumption from secret store. (Compliant.) Kong client authentication tokens Docker: System of reference is unencrypted Postgres database. (Non-compliant.) 
Snaps: System of reference is unencrypted Postgres database. (Non-compliant.) Note: in the current implementation, Consul is being operated as a public service. Consul will be a subject of a future \"bootstrapping ADR\" due to its role in service location. User-managed secrets User-managed secrets functionality is provided by app-functions-sdk-go . If security is enabled, secrets are retrieved from Vault. If security is disabled, secrets are retrieved from the configuration provider. If the configuration provider is not available, secrets are read from the underlying .toml . It is taken for granted in this ADR that secrets originating in the configuration provider or from .toml configuration files are not secret. The fallback mechanism is provided as a convenience to the developer, who would otherwise have to litter their code with \"if (isSecurityEnabled())\" logic leading to implementation inconsistencies. The central database credential is supplied by GetDatabaseCredentials() and returns the database credential assigned to app-service-configurable . If security is enabled, database credentials are retrieved using the standard flow. If security is disabled, secrets are retrieved from the configuration provider from a special section called [Writable.InsecureSecrets] . If not found there, the configuration provider is searched for credentials stored in the legacy [Databases.Primary] section using the Username and Password keys. Each user application has its own exclusive-use area of the secret store that is accessed via GetSecrets() . If security is enabled, secret requests are passed along to go-mod-secrets using an application-specific access token. If security is disabled, secret requests are made to the configuration provider from the [Writable.InsecureSecrets] section. There is no fallback configuration location. 
As user-managed secrets have no framework support for initialization, a special StoreSecrets() method is made available to the application for the application to initialize its own secrets. This method is only available in security-enabled mode. No changes to user-managed secrets are being proposed in this ADR. Decision Creation of secrets Management of hardware-bound secrets is platform-specific and out-of-scope for the EdgeX framework. EdgeX open source will contain only the necessary hooks to integrate platform-specific functionality. For software-managed secrets, the system of reference of secrets in EdgeX is the EdgeX secret store. The EdgeX secret store provides for encryption of secrets at rest. \"System of reference\" means that if a secret is replicated, the EdgeX secret store is the authoritative source of truth of the secret. Whenever possible, the EdgeX secret store should also be the record of origin of a secret as well. This means creating secrets inside of the EdgeX secret store is preferable to importing an externally-created secret into the secret store. This can often be done for framework-managed secrets, but is not possible for user-managed secrets. Choosing between alternative forms of secrets When given a choice between plain-text secrets and cryptographic keys, cryptographic keys should be preferred. An example situation would be the introduction of an MQTT message broker. A broker may support both TLS client authentication as well as username/password authentication. In such a situation, TLS client authentication would be preferred: The cryptographic key is typically longer in bits than a plain-text secret. A plain-text secret will require transport encryption in order to protect confidentiality of the secret, such as server-side TLS. Use of TLS client authentication typically eliminates the need for additional assets on the server side (such as a password database) to authenticate the client, by relying on digital signature instead. 
TLS client authentication should not be used unless there is a capability to revoke a compromised certificate, such as by replacing the certificate authority, or providing a certificate revocation list to the server. If certificate revocation is not supported, plain-text secrets (such as username/password) should be used instead, as they are typically easier to revoke. Distribution and consumption of secrets Prohibited practices Use of hard-coded secrets is an instance of CWE-798: Use of hard-coded credentials and is not allowed. A hard-coded secret is a secret that is the same across multiple EdgeX instances. Hard-coded secrets make devices susceptible to BORE (break-once-run-everywhere) attacks, where collections of machines can be compromised by a single replicated secret. Specific cases where this is likely to come up are: Secrets embedded in source control EdgeX is an open-source project. Any secret that is present in an EdgeX repository is public to the world, and therefore not a secret, by definition. Configuration files, such as .toml files, .json files, .yaml files (including docker-compose.yml ) are specific instances of this practice. Secrets embedded in binaries Binaries are usually not protected against confidentiality threats, and binaries can be easily reverse-engineered to find any secrets therein. Binaries include compiled executables as well as Docker images. Recommended practices Direct consumption from process-to-process interaction with secret store This approach is only possible for components that have native support for Hashicorp Vault . This includes any EdgeX service that links to go-mod-secrets. For example, if secretClient is an instance of the go-mod-secrets secret store client: secrets, err := secretClient.GetSecrets(\"myservice\", \"username\", \"password\") The above code will retrieve the username and password properties of the myservice secret. 
Dynamic injection of secret into process environment space Environment variables are part of a process' environment block and are mapped into a process' memory. In this scenario, an intermediary makes a connection to the secret store, fetches a secret, stores it into an environment variable, and then launches a target executable, thereby passing the secret in-memory to the target process. Existing examples of this functionality include vaultenv , envconsul , or env-aws-params . These tools authenticate to a remote network service, inject secrets into the process environment, and then exec a replacement process that inherits the secret-enriched environment block. There are a few potential risks with this approach: Environment blocks are passed to child processes by default. Environment-variable-sniffing malware (introduced by compromised 3rd party libraries) is a proven attack method. Dynamic injection of secret into container-scoped tmpfs volume An example of this approach is consul-template . This approach is useful when a secret is required to be in a configuration file and cannot be passed via an environment variable or directly consumed from a secret store. Distribution via SECRETSLOC This option is the most widely supported secret distribution mechanism by container orchestrators. EdgeX supports runtime environments such as standard Docker and snaps that have no built-in secret management features. Generic Docker does not have a built-in secrets mechanism. Manual configuration of a SECRETSLOC should utilize either a host file system path or a Docker volume. Snaps also do not have a built-in secrets mechanism. The options for SECRETSLOC are limited to designated snap-writable directories. For comparison: Docker Swarm: Swarm mode is not officially supported by the EdgeX project. Docker Swarm secrets are shared via the /run/secrets volume, which is a Linux tmpfs volume created on the host and shared with the container. 
For an example of Docker Swarm secrets, see the docker-compose secrets stanza . Secrets distributed in this manner become part of the RaftDB, and thus it becomes necessary to enable swarm autolock mode, which prevents the Raft database encryption key from being stored plaintext on disk. Swarm secrets have an additional limitation in that they are not mutable at runtime. Kubernetes: Kubernetes is not officially supported by the EdgeX project. Kubernetes also supports the secrets volume approach, though the secrets volume can be mounted anywhere in the container namespace. For an example of Kubernetes secrets volumes, see the Kubernetes secrets documentation . Secrets distributed in this manner become part of the etcd database, and thus it becomes necessary to specify a KMS provider for data encryption to prevent etcd from storing plaintext versions of secrets. Consequences As the existing implementation is not fully-compliant with this ADR, significant scope will be added to current and future EdgeX releases in order to bring the project into compliance. List of needed improvements: PKI private keys All: Move to using Vault as system of origin for the PKI instead of the standalone security-secrets-setup utility. All: Cache the PKI for Consul and Vault on persistent disk; rotate occasionally. All: Investigate hardware protection of cached Consul and Vault PKI secret keys. (Vault cannot unseal its own TLS certificate.) Special case: Bring-your-own external Kong certificate and key The Kong external certificate and key is already stored in Vault; however, additional metadata is needed to signal whether these are auto-generated or manually-installed. A manually-installed certificate and key would not be overwritten by the framework bringup logic. Installing a custom certificate and key can then be implemented by overwriting the system-generated ones and setting a flag indicating that they were manually-installed. 
Secret store master password All: Enable hooks for hardware protection of secret store master password. Secret store per-service authentication tokens No changes required. Postgres superuser password Generate at install time or on cold start of the framework. Cache in Vault and inject into Kong using environment variable injection. MongoDB service account passwords No changes required. Redis(v5) authentication password All: Implement process-to-process injection: start Redis unauthenticated, with a post-start hook to read the secret out of Vault and set the Redis password. (Short race condition between Redis starting, password being set, and dependent services starting.) No changes on client side. Redis(v6) passwords (v6 adds multiple user support) Interim solution: handle like MongoDB service account passwords. Future ADR to propose use of a Vault database secrets engine. No changes on client side (each service accesses its own credential) Kong authentication tokens All: Implement in-transit authentication with TLS-protected Postgres interface. (Subject to change if it is decided not to enable a Postgres backend out of the box.) Additional research needed as PostgreSQL does not support transparent data encryption. References ADR for secret creation and distribution CWE-798: Use of hard-coded credentials Docker Swarm secrets EdgeX go-mod-secrets Hashicorp Vault","title":"Creation and Distribution of Secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#creation-and-distribution-of-secrets","text":"","title":"Creation and Distribution of Secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#context","text":"This ADR seeks to clarify and prioritize the secret handling approach taken by EdgeX. 
EdgeX microservices need a number of secrets to be created and distributed in order to create a functional, secure system. Among these secrets are: Privileged administrator passwords (such as a database superuser password) Service account passwords (e.g. non-privileged database accounts) PKI private keys There is a lack of consistency in how secrets are created and distributed to EdgeX microservices, and when developers need to add new components to the system, it is unclear what the preferred approach should be. This document assumes a threat model wherein the EdgeX services are sandboxed (such as in a snap or a container) and the host system is trusted, and all services running in a single snap share a trust boundary.","title":"Context"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#terms","text":"The following terms will be helpful for understanding the subsequent discussion: SECRETSLOC is a protected file system path where bootstrapping secrets are stored. While EdgeX implements a sophisticated secret handling mechanism, that mechanism itself requires secrets. For example, every microservice that talks to Vault must have its own unique secret to authenticate: Vault itself cannot be used to distribute these secrets. SECRETSLOC fulfills the role that the non-routable instance data IP address, 169.254.169.254, fulfills in the public cloud: delivery of bootstrapping secrets. As EdgeX does not have a hypervisor nor virtual machines for this purpose, a protected file system path is used instead. SECRETSLOC is implementation-dependent. A desirable feature of SECRETSLOC would be that data written here is kept in RAM and is not persisted to storage media. This property is not achievable in all circumstances. 
For Docker, a list of suggested paths--in preference order--is: /run/edgex/secrets (a tmpfs volume on a Linux host) /tmp/edgex/secrets (a temporary file area on Linux and MacOS hosts) A persistent docker volume (use when host bind mounts are not available) For snaps, a list of suggested paths--in preference order--is: * /run/snap.$SNAP_NAME/ (a tmpfs volume on a Linux host) * $SNAP_DATA/secrets (a snap-specific persistent data area) * TBD (a content interface that allows for sharing of secrets from the core snap)","title":"Terms"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#current-practices-survey","text":"A survey of the existing EdgeX secrets reveals the following approaches. A designation of \"compliant\" means that the current implementation is aligned with the recommended practices documented in the next section. A designation of \"non-compliant\" means that the current implementation uses an implementation mechanism outside of the recommended practices documented in the next section. A \"non-compliant\" implementation is a candidate for refactoring to bring the implementation into conformance with the recommended practices.","title":"Current practices survey"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#system-managed-secrets","text":"PKI private keys Docker: PKI generated by standalone utility every cold start of the framework. Distribution via SECRETSLOC . (Compliant.) Snaps: PKI generated by standalone utility every cold start of the framework. Deployed to SECRETSLOC . (Compliant.) Secret store master password Docker: Distribution via persistent docker volume. (Non-compliant.) Snaps: Stored in $SNAP_DATA/config/security-secrets-setup/res . (Non-compliant.) Secret store per-service authentication tokens Docker: Distribution via SECRETSLOC generated every cold start of the framework. (Compliant.) Snaps: Distribution via SECRETSLOC , generated every cold start of the framework. (Compliant.) 
Postgres superuser password Docker: Hard-coded into docker-compose file, checked in to source control. (Non-compliant.) Snaps: Generated at snap install time via \"apg\" (\"automatic password generator\") tool, installed into Postgres, cached to $SNAP_DATA/config/postgres/kongpw (non-compliant), and passed to Kong via $KONG_PG_PASSWORD . MongoDB service account passwords Docker: Direct consumption from secret store. (Compliant.) Snaps: Direct consumption from secret store. (Compliant.) Redis authentication password Docker: Server--staged to secrets volume and injected via command line. (Non-compliant.) Clients--direct consumption from secret store. (Compliant.) Snaps: Server--staged to $SNAP_DATA/secrets/edgex-redis/redis5-password and injected via command line. (Non-compliant.) Clients--direct consumption from secret store. (Compliant.) Kong client authentication tokens Docker: System of reference is unencrypted Postgres database. (Non-compliant.) Snaps: System of reference is unencrypted Postgres database. (Non-compliant.) Note: in the current implementation, Consul is being operated as a public service. Consul will be a subject of a future \"bootstrapping ADR\" due to its role in service location.","title":"System-managed secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#user-managed-secrets","text":"User-managed secrets functionality is provided by app-functions-sdk-go . If security is enabled, secrets are retrieved from Vault. If security is disabled, secrets are retrieved from the configuration provider. If the configuration provider is not available, secrets are read from the underlying .toml . It is taken for granted in this ADR that secrets originating in the configuration provider or from .toml configuration files are not secret. 
The fallback mechanism is provided as a convenience to the developer, who would otherwise have to litter their code with \"if (isSecurityEnabled())\" logic leading to implementation inconsistencies. The central database credential is supplied by GetDatabaseCredentials() and returns the database credential assigned to app-service-configurable . If security is enabled, database credentials are retrieved using the standard flow. If security is disabled, secrets are retrieved from the configuration provider from a special section called [Writable.InsecureSecrets] . If not found there, the configuration provider is searched for credentials stored in the legacy [Databases.Primary] section using the Username and Password keys. Each user application has its own exclusive-use area of the secret store that is accessed via GetSecrets() . If security is enabled, secret requests are passed along to go-mod-secrets using an application-specific access token. If security is disabled, secret requests are made to the configuration provider from the [Writable.InsecureSecrets] section. There is no fallback configuration location. As user-managed secrets have no framework support for initialization, a special StoreSecrets() method is made available to the application for the application to initialize its own secrets. This method is only available in security-enabled mode. No changes to user-managed secrets are being proposed in this ADR.","title":"User-managed secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#decision","text":"","title":"Decision"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#creation-of-secrets","text":"Management of hardware-bound secrets is platform-specific and out-of-scope for the EdgeX framework. EdgeX open source will contain only the necessary hooks to integrate platform-specific functionality. For software-managed secrets, the system of reference of secrets in EdgeX is the EdgeX secret store. 
The EdgeX secret store provides for encryption of secrets at rest. \"System of reference\" means that if a secret is replicated, the EdgeX secret store is the authoritative source of truth of the secret. Whenever possible, the EdgeX secret store should also be the record of origin of a secret as well. This means creating secrets inside of the EdgeX secret store is preferable to importing an externally-created secret into the secret store. This can often be done for framework-managed secrets, but is not possible for user-managed secrets.","title":"Creation of secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#choosing-between-alternative-forms-of-secrets","text":"When given a choice between plain-text secrets and cryptographic keys, cryptographic keys should be preferred. An example situation would be the introduction of an MQTT message broker. A broker may support both TLS client authentication as well as username/password authentication. In such a situation, TLS client authentication would be preferred: The cryptographic key is typically longer in bits than a plain-text secret. A plain-text secret will require transport encryption in order to protect confidentiality of the secret, such as server-side TLS. Use of TLS client authentication typically eliminates the need for additional assets on the server side (such as a password database) to authenticate the client, by relying on digital signature instead. TLS client authentication should not be used unless there is a capability to revoke a compromised certificate, such as by replacing the certificate authority, or providing a certificate revocation list to the server. 
If certificate revocation is not supported, plain-text secrets (such as username/password) should be used instead, as they are typically easier to revoke.","title":"Choosing between alternative forms of secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#distribution-and-consumption-of-secrets","text":"","title":"Distribution and consumption of secrets"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#prohibited-practices","text":"Use of hard-coded secrets is an instance of CWE-798: Use of hard-coded credentials and is not allowed. A hard-coded secret is a secret that is the same across multiple EdgeX instances. Hard-coded secrets make devices susceptible to BORE (break-once-run-everywhere) attacks, where collections of machines can be compromised by a single replicated secret. Specific cases where this is likely to come up are: Secrets embedded in source control EdgeX is an open-source project. Any secret that is present in an EdgeX repository is public to the world, and therefore not a secret, by definition. Configuration files, such as .toml files, .json files, .yaml files (including docker-compose.yml ) are specific instances of this practice. Secrets embedded in binaries Binaries are usually not protected against confidentiality threats, and binaries can be easily reverse-engineered to find any secrets therein. Binaries include compiled executables as well as Docker images.","title":"Prohibited practices"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#recommended-practices","text":"Direct consumption from process-to-process interaction with secret store This approach is only possible for components that have native support for Hashicorp Vault . This includes any EdgeX service that links to go-mod-secrets. For example, if secretClient is an instance of the go-mod-secrets secret store client: secrets, err := secretClient.
GetSecrets ( \"myservice\" , \"username\" , \"password\" ) The above code will retrieve the username and password properties of the myservice secret. Dynamic injection of secret into process environment space Environment variables are part of a process' environment block and are mapped into a process' memory. In this scenario, an intermediary makes a connection to the secret store to fetch a secret, stores it into an environment variable, and then launches a target executable, thereby passing the secret in-memory to the target process. Existing examples of this functionality include vaultenv , envconsul , or env-aws-params . These tools authenticate to a remote network service, inject secrets into the process environment, and then exec a replacement process that inherits the secret-enriched environment block. There are a few potential risks with this approach: Environment blocks are passed to child processes by default. Environment-variable-sniffing malware (introduced by compromised 3rd party libraries) is a proven attack method. Dynamic injection of secret into container-scoped tmpfs volume An example of this approach is consul-template . This approach is useful when a secret is required to be in a configuration file and cannot be passed via an environment variable or directly consumed from a secret store. Distribution via SECRETSLOC This option is the most widely supported secret distribution mechanism by container orchestrators. EdgeX supports runtime environments such as standard Docker and snaps that have no built-in secret management features. Generic Docker does not have a built-in secrets mechanism. Manual configuration of a SECRETSLOC should utilize either a host file system path or a Docker volume. Snaps also do not have a built-in secrets mechanism. The options for SECRETSLOC are limited to designated snap-writable directories. For comparison: Docker Swarm: Swarm mode is not officially supported by the EdgeX project. 
Docker Swarm secrets are shared via the /run/secrets volume, which is a Linux tmpfs volume created on the host and shared with the container. For an example of Docker Swarm secrets, see the docker-compose secrets stanza . Secrets distributed in this manner become part of the RaftDB, and thus it becomes necessary to enable swarm autolock mode, which prevents the Raft database encryption key from being stored plaintext on disk. Swarm secrets have an additional limitation in that they are not mutable at runtime. Kubernetes: Kubernetes is not officially supported by the EdgeX project. Kubernetes also supports the secrets volume approach, though the secrets volume can be mounted anywhere in the container namespace. For an example of Kubernetes secrets volumes, see the Kubernetes secrets documentation . Secrets distributed in this manner become part of the etcd database, and thus it becomes necessary to specify a KMS provider for data encryption to prevent etcd from storing plaintext versions of secrets.","title":"Recommended practices"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#consequences","text":"As the existing implementation is not fully-compliant with this ADR, significant scope will be added to current and future EdgeX releases in order to bring the project into compliance. List of needed improvements: PKI private keys All: Move to using Vault as system of origin for the PKI instead of the standalone security-secrets-setup utility. All: Cache the PKI for Consul and Vault on persistent disk; rotate occasionally. All: Investigate hardware protection of cached Consul and Vault PKI secret keys. (Vault cannot unseal its own TLS certificate.) Special case: Bring-your-own external Kong certificate and key The Kong external certificate and key is already stored in Vault, however, additional metadata is needed to signal whether these are auto-generated or manually-installed. 
A manually-installed certificate and key would not be overwritten by the framework bringup logic. Installing a custom certificate and key can then be implemented by overwriting the system-generated ones and setting a flag indicating that they were manually-installed. Secret store master password All: Enable hooks for hardware protection of secret store master password. Secret store per-service authentication tokens No changes required. Postgres superuser password Generate at install time or on cold start of the framework. Cache in Vault and inject into Kong using environment variable injection. MongoDB service account passwords No changes required. Redis(v5) authentication password All: Implement process-to-process injection: start Redis unauthenticated, with a post-start hook to read the secret out of Vault and set the Redis password. (Short race condition between Redis starting, password being set, and dependent services starting.) No changes on client side. Redis(v6) passwords (v6 adds multiple user support) Interim solution: handle like MongoDB service account passwords. Future ADR to propose use of a Vault database secrets engine. No changes on client side (each service accesses its own credential) Kong authentication tokens All: Implement in-transit authentication with TLS-protected Postgres interface. (Subject to change if it is decided not to enable a Postgres backend out of the box.) 
Additional research needed as PostgreSQL does not support transparent data encryption.","title":"Consequences"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#references","text":"ADR for secret creation and distribution CWE-798: Use of hard-coded credentials Docker Swarm secrets EdgeX go-mod-secrets Hashicorp Vault","title":"References"},{"location":"design/adr/security/0009-Secure-Bootstrapping/","text":"Secure Bootstrapping of EdgeX Secure Bootstrapping of EdgeX Status Context History Decision Stage-gate mechanism Docker-specific service changes \"As-is\" startup flow \"To-be\" startup flow New Bootstrap/RTR container Consequences Benefits Drawbacks Alternatives Event-driven vs commanded staging System management agent (SMA) as the coordinator Create a mega-install container Manual secret provisioning References Status Approved Context Docker-compose, the tool used by EdgeX to manage its Docker-based stack, lags in its support for initialization logic. Docker-compose v2.x used to have a depends_on / condition directive that would test a service's HEALTHCHECK and block startup until the service was \"healthy\". Unfortunately, this feature was removed in 3.x docker-compose. (This feature is also unsupported in swarm mode.) Snaps have an explicit install phase and Kubernetes PODs have optional init containers. In other frameworks, initialization is allowed to run to completion prior to application components being started in production mode. This functionality does not exist in Docker nor docker-compose. The current lack of an initialization phase is a blocking issue for implementing microservice communication security, as critical EdgeX core components that are involved with microservice communication (specifically Consul) are being brought up in an insecure configuration. (Consul's insecure configuration will be addressed in a separate ADR .) 
Activities that are best done in the initialization phase include the following: Bootstrapping of cryptographic secrets needed by the application. Bootstrapping of database users and passwords. Installation of database schema needed for application logic to function. Initialization of authorization frameworks such as configuring RBAC or ACLs. Other one-time initialization activities. Workarounds when an installation phase is not present include: Perform initialization tasks manually, and manually seed secrets into static configuration files. Ship with known hard-coded secrets in static configuration files. Start in an insecure configuration and remain that way. Provision some secrets at runtime. EdgeX does not have a manual installation flow, and uses a combination of the last three approaches. The objective of this ADR is to define a framework for Docker-based initialization logic in EdgeX. This will enable the removal of certain hard-coded secrets in EdgeX and enable certain components (such as Consul) to be started in a secure configuration. These improvements are necessary pre-requisites to implementing microservice communication security. History In previous releases, container startup sequencing has primarily been driven by Consul service health checks backed by healthcheck endpoints of particular services or by sentinel files placed in the file system when certain initialization milestones are reached. The implementation has been plagued by several issues: Sentinel files are not cleaned up if the framework fails or is shut down. Invalid state left over from previous instantiations of the framework causes difficult-to-resolve race conditions. (Implementation of this ADR will try to remove as many as possible, focusing on those that are used to gate startup. 
Some use of sentinel files may still be required to indicate completion of initialization steps so that they are not re-done if there is no API-based mechanism to determine if such initialization has been completed.) Consul health checks are reported in a difficult-to-parse JSON structure, which has led to the creation of specialized tools that are insensitive to libc implementations used by different container images. Consul is being used not only for service health, but for service location and configuration as well . The requirement to synchronize framework startup for the purpose of securely initializing Consul means that a non-Consul mechanism must be used to stage-gate EdgeX initialization. This last point is the primary motivator of this ADR. Decision Stage-gate mechanism The stage-gate mechanism must work in the following environments: docker-compose in Linux on a single node/system docker-compose in Microsoft Windows on a single node/system docker-compose in Apple MacOS on a single node/system Startup sequencing will be driven by two primary mechanisms: Use of entrypoint scripts to: Block on stage-gate and service dependencies Perform first-boot initialization phase activities as noted in Context The bootstrap container will inject entrypoint scripts into the other containers in the case where EdgeX is directly consuming an upstream container. Docker will automatically retry restarting containers if its entrypoint script is missing. Use of open TCP sockets as semaphores to gate startup sequencing Use of TCP sockets for startup sequencing is commonly used in Docker environments. Due to its popularity, there are several existing tools for this, including wait-for-it , dockerize , and wait-for . The TCP mechanism is portable across platforms and will work in distributed multi-node scenarios. At least three new ports will be added to EdgeX for sequencing purposes: bootstrap port. This port will be opened once first-time initialization has been completed. 
tokens_ready port. This port signals that secret-store tokens have been provisioned and are valid. ready_to_run port. This port will be opened once stateful services have completed initialization and it is safe for the majority of EdgeX core services to start. The stateless EdgeX services should block on ready_to_run port. Docker-specific service changes \"As-is\" startup flow The following diagram shows the \"as-is\" startup flow. There are several components being removed via activity unrelated to this ADR. These proposed edits are shown to reduce clutter in the TO-BE diagram. * secrets-setup is being eliminated through a separate ADR to eliminate TLS for single-node usage. * kong-migrations is being combined with the kong service via an entrypoint script. * bootstrap-redis will be incorporated into the Redis entrypoint script to set the Redis password before Redis starts to fix the time delay before a Redis password is set. \"To-be\" startup flow The following diagram shows the \"to-be\" startup flow. Note that the bootstrap flows are always processed, but can be short-circuited. Another difference to note in the \"to-be\" diagram is that the Vault dependency on Consul is reversed in order to provide better security . New Bootstrap/RTR container The purpose of this new container is to: Inject entrypoint scripts into third-party containers (such as Vault, Redis, Consul, PostgreSQL, Kong) in order to perform first-time initialization and wait on service dependencies Raise the bootstrap semaphore Wait on dependent semaphores required to raise the ready_to_run semaphore (these are the stateful components such as databases, and blocking waiting for secret store tokens to be provisioned) Raise the ready_to_run semaphore Wait forever (in order to leave TCP sockets open) Consequences Benefits This ADR is expected to yield the following benefits after completion of the related engineering tasks: Standardization of the stage-gate mechanism. 
Standardized approach to component initialization in Docker. Reduced fragility in the framework startup flow. Vault no longer uses Consul as its data store (uses file system instead). Ability to use a stock Consul container instead of creating a custom one for EdgeX Elimination of several sentinel files used for Consul health checks /tmp/edgex/secrets/ca/.security-secrets-setup.complete /tmp/edgex/secrets/edgex-consul/.secretstore-setup-done Drawbacks Introduction of a new container into the startup flow (but other containers are eliminated or combined). Expanded scope and responsibility of entrypoint scripts, which must not only block component startup, but now must also configure a component for secure operation. Alternatives Event-driven vs commanded staging In this scenario, instead of a service waiting on a TCP-socket semaphore created by another service, services would open a socket and wait for a coordinator/controller to issue a \"go\" command. This solution was not chosen for several reasons: The code required to open a socket and wait for a command is much more complicated than the code required to check for an open socket. Many open source utilities exist to block on a socket opening; there are no such examples for the reverse. This solution would duplicate the information regarding which services need to run: once in the docker-compose file, and once as a configuration file to the coordinator/controller. System management agent (SMA) as the coordinator In this scenario, the system management agent is responsible for bringing up the EdgeX framework. Since the system management agent has access to the Docker socket, it has the ability to start services in a prescribed order, and as a management agent, has knowledge about the desired state of the framework. This solution was not chosen for several reasons: SMA is an optional EdgeX component--use in this way would make SMA a required core component. 
SMA, in order to authenticate and authorize remote management requests, requires access to persistent state and secrets. To make the same component responsible for initializing that state and secrets upon which it depends would make the design convoluted. Create a mega-install container This alternative would create a mega-install container that has locally installed versions of critical components needed for bootstrapping such as Vault, Consul, PostgreSQL, and others. A sequential script would start each component in turn, initializing each to run in a secure configuration, and then shut them all down again. The same stage-gate mechanism would be used to block startup of these same components, but Docker would start them in production configuration. Manual secret provisioning A typical cloud-based microservice architecture has a manual provisioning step. This step would include activities such as configuring Vault, installing a database schema, setting up database service account passwords, and seeding initial secrets such as PKI private keys that have been generated offline (possibly requiring several days of lead time). A cloud team may have weeks or months to prepare for this event, and it might take the greater part of a day. In contrast, EdgeX up to this point has been a \"turnkey\" middleware framework: it can be deployed with the same ease as an application, such as via a docker-compose file, or via a snap install. This means that most of the secret provisioning must be automated and the provisioning logic must be built into the framework in some way. The proposals presented in this ADR are compatible with continuance of this functionality. 
References ADR 0008 - Creation and Distribution of Secrets ADR 0015 - Encryption between microservices , Hashicorp Consul Hashicorp Vault Issue: ADR for securing access to Consul Issue: Service registry ADR","title":"Secure Bootstrapping of EdgeX"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#secure-bootstrapping-of-edgex","text":"Secure Bootstrapping of EdgeX Status Context History Decision Stage-gate mechanism Docker-specific service changes \"As-is\" startup flow \"To-be\" startup flow New Bootstrap/RTR container Consequences Benefits Drawbacks Alternatives Event-driven vs commanded staging System management agent (SMA) as the coordinator Create a mega-install container Manual secret provisioning References","title":"Secure Bootstrapping of EdgeX"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#context","text":"Docker-compose, the tool used by EdgeX to manage its Docker-based stack, lags in its support for initialization logic. Docker-compose v2.x used to have a depends_on / condition directive that would test a service's HEALTHCHECK and block startup until the service was \"healthy\". Unfortunately, this feature was removed in 3.x docker-compose. (This feature is also unsupported in swarm mode.) Snaps have an explicit install phase and Kubernetes PODs have optional init containers. In other frameworks, initialization is allowed to run to completion prior to application components being started in production mode. This functionality does not exist in Docker nor docker-compose. The current lack of an initialization phase is a blocking issue for implementing microservice communication security, as critical EdgeX core components that are involved with microservice communication (specifically Consul) are being brought up in an insecure configuration. (Consul's insecure configuration will be addressed in a separate ADR .) 
Activities that are best done in the initialization phase include the following: Bootstrapping of cryptographic secrets needed by the application. Bootstrapping of database users and passwords. Installation of database schema needed for application logic to function. Initialization of authorization frameworks such as configuring RBAC or ACLs. Other one-time initialization activities. Workarounds when an installation phase is not present include: Perform initialization tasks manually, and manually seed secrets into static configuration files. Ship with known hard-coded secrets in static configuration files. Start in an insecure configuration and remain that way. Provision some secrets at runtime. EdgeX does not have a manual installation flow, and uses a combination of the last three approaches. The objective of this ADR is to define a framework for Docker-based initialization logic in EdgeX. This will enable the removal of certain hard-coded secrets in EdgeX and enable certain components (such as Consul) to be started in a secure configuration. These improvements are necessary pre-requisites to implementing microservice communication security.","title":"Context"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#history","text":"In previous releases, container startup sequencing has primarily been driven by Consul service health checks backed by healthcheck endpoints of particular services or by sentinel files placed in the file system when certain initialization milestones are reached. The implementation has been plagued by several issues: Sentinel files are not cleaned up if the framework fails or is shut down. Invalid state left over from previous instantiations of the framework causes difficult-to-resolve race conditions. (Implementation of this ADR will try to remove as many as possible, focusing on those that are used to gate startup. 
Some use of sentinel files may still be required to indicate completion of initialization steps so that they are not re-done if there is no API-based mechanism to determine if such initialization has been completed.) Consul health checks are reported in a difficult-to-parse JSON structure, which has led to the creation of specialized tools that are insensitive to libc implementations used by different container images. Consul is being used not only for service health, but for service location and configuration as well . The requirement to synchronize framework startup for the purpose of securely initializing Consul means that a non-Consul mechanism must be used to stage-gate EdgeX initialization. This last point is the primary motivator of this ADR.","title":"History"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#decision","text":"","title":"Decision"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#stage-gate-mechanism","text":"The stage-gate mechanism must work in the following environments: docker-compose in Linux on a single node/system docker-compose in Microsoft Windows on a single node/system docker-compose in Apple MacOS on a single node/system Startup sequencing will be driven by two primary mechanisms: Use of entrypoint scripts to: Block on stage-gate and service dependencies Perform first-boot initialization phase activities as noted in Context The bootstrap container will inject entrypoint scripts into the other containers in the case where EdgeX is directly consuming an upstream container. Docker will automatically retry restarting containers if its entrypoint script is missing. Use of open TCP sockets as semaphores to gate startup sequencing Use of TCP sockets for startup sequencing is commonly used in Docker environments. Due to its popularity, there are several existing tools for this, including wait-for-it , dockerize , and wait-for . 
The TCP mechanism is portable across platforms and will work in distributed multi-node scenarios. At least three new ports will be added to EdgeX for sequencing purposes: bootstrap port. This port will be opened once first-time initialization has been completed. tokens_ready port. This port signals that secret-store tokens have been provisioned and are valid. ready_to_run port. This port will be opened once stateful services have completed initialization and it is safe for the majority of EdgeX core services to start. The stateless EdgeX services should block on ready_to_run port.","title":"Stage-gate mechanism"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#docker-specific-service-changes","text":"","title":"Docker-specific service changes"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#as-is-startup-flow","text":"The following diagram shows the \"as-is\" startup flow. There are several components being removed via activity unrelated to this ADR. These proposed edits are shown to reduce clutter in the TO-BE diagram. * secrets-setup is being eliminated through a separate ADR to eliminate TLS for single-node usage. * kong-migrations is being combined with the kong service via an entrypoint script. * bootstrap-redis will be incorporated into the Redis entrypoint script to set the Redis password before Redis starts to fix the time delay before a Redis password is set.","title":"\"As-is\" startup flow"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#to-be-startup-flow","text":"The following diagram shows the \"to-be\" startup flow. Note that the bootstrap flows are always processed, but can be short-circuited. 
Another difference to note in the \"to-be\" diagram is that the Vault dependency on Consul is reversed in order to provide better security .","title":"\"To-be\" startup flow"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#new-bootstraprtr-container","text":"The purpose of this new container is to: Inject entrypoint scripts into third-party containers (such as Vault, Redis, Consul, PostgreSQL, Kong) in order to perform first-time initialization and wait on service dependencies Raise the bootstrap semaphore Wait on dependent semaphores required to raise the ready_to_run semaphore (these are the stateful components such as databases, and blocking waiting for secret store tokens to be provisioned) Raise the ready_to_run semaphore Wait forever (in order to leave TCP sockets open)","title":"New Bootstrap/RTR container"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#consequences","text":"","title":"Consequences"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#benefits","text":"This ADR is expected to yield the following benefits after completion of the related engineering tasks: Standardization of the stage-gate mechanism. Standardized approach to component initialization in Docker. Reduced fragility in the framework startup flow. Vault no longer uses Consul as its data store (uses file system instead). Ability to use a stock Consul container instead of creating a custom one for EdgeX Elimination of several sentinel files used for Consul health checks /tmp/edgex/secrets/ca/.security-secrets-setup.complete /tmp/edgex/secrets/edgex-consul/.secretstore-setup-done","title":"Benefits"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#drawbacks","text":"Introduction of a new container into the startup flow (but other containers are eliminated or combined). 
Expanded scope and responsibility of entrypoint scripts, which must not only block component startup, but now must also configure a component for secure operation.","title":"Drawbacks"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#alternatives","text":"","title":"Alternatives"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#event-driven-vs-commanded-staging","text":"In this scenario, instead of a service waiting on a TCP-socket semaphore created by another service, services would open a socket and wait for a coordinator/controller to issue a \"go\" command. This solution was not chosen for several reasons: The code required to open a socket and wait for a command is much more complicated than the code required to check for an open socket. Many open source utilities exist to block on a socket opening; there are no such examples for the reverse. This solution would duplicate the information regarding which services need to run: once in the docker-compose file, and once as a configuration file to the coordinator/controller.","title":"Event-driven vs commanded staging"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#system-management-agent-sma-as-the-coordinator","text":"In this scenario, the system management agent is responsible for bringing up the EdgeX framework. Since the system management agent has access to the Docker socket, it has the ability to start services in a prescribed order, and as a management agent, has knowledge about the desired state of the framework. This solution was not chosen for several reasons: SMA is an optional EdgeX component--use in this way would make SMA a required core component. SMA, in order to authenticate and authorize remote management requests, requires access to persistent state and secrets. 
To make the same component responsible for initializing that state and secrets upon which it depends would make the design convoluted.","title":"System management agent (SMA) as the coordinator"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#create-a-mega-install-container","text":"This alternative would create a mega-install container that has locally installed versions of critical components needed for bootstrapping such as Vault, Consul, PostgreSQL, and others. A sequential script would start each component in turn, initializing each to run in a secure configuration, and then shut them all down again. The same stage-gate mechanism would be used to block startup of these same components, but Docker would start them in production configuration.","title":"Create a mega-install container"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#manual-secret-provisioning","text":"A typical cloud-based microservice architecture has a manual provisioning step. This step would include activities such as configuring Vault, installing a database schema, setting up database service account passwords, and seeding initial secrets such as PKI private keys that have been generated offline (possibly requiring several days of lead time). A cloud team may have weeks or months to prepare for this event, and it might take the greater part of a day. In contrast, EdgeX up to this point has been a \"turnkey\" middleware framework: it can be deployed with the same ease as an application, such as via a docker-compose file, or via a snap install. This means that most of the secret provisioning must be automated and the provisioning logic must be built into the framework in some way. 
The proposals presented in this ADR are compatible with continuance of this functionality.","title":"Manual secret provisioning"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#references","text":"ADR 0008 - Creation and Distribution of Secrets ADR 0015 - Encryption between microservices , Hashicorp Consul Hashicorp Vault Issue: ADR for securing access to Consul Issue: Service registry ADR","title":"References"},{"location":"design/adr/security/0015-in-cluster-tls/","text":"Use of encryption to secure in-cluster EdgeX communications Status Approved Context This ADR seeks to define the EdgeX direction on using encryption to secure \"in-cluster\" EdgeX communications, that is, internal microservice-to-microservice communication. This ADR will seek to clarify the EdgeX direction in several aspects with regard to: EdgeX services communicating within a single host EdgeX services communicating across multiple hosts Using encryption for confidentiality or integrity in communication Using encryption for authentication between microservices This ADR will be used to triage EdgeX feature requests in this space. Background Why encrypt? Why consider encryption in the first place? Simple. Encryption helps with the following problems: Client authentication of servers. The client knows that it is talking to the correct server. This is typically achieved using TLS server certificates that the client checks against a trusted root certificate authority. Since the client is not in charge of network routing, TLS server authentication provides a good assurance that the requests are being routed to the correct server. Server authentication of clients. The server knows the identity of the client that has connected to it. There are a variety of mechanisms to achieve this, such as usernames and passwords, tokens, claims, et cetera, but the mechanism under consideration by this ADR is TLS client authentication using TLS client certificates. 
Confidentiality of messages exchanged between services. Confidentiality is needed to protect authentication data flowing between communicating microservices as well as to protect the message payloads if they contain nonpublic data. TLS provides communication channel confidentiality. Integrity of messages exchanged between services. Integrity is needed to ensure that messages between communicating microservices are not maliciously altered, such as inserting or deleting data in the middle of the exchange. TLS provides communication channel integrity. A microservice architecture normally strives for all of the above protections. Besides TLS, there are other mechanisms that can be used to provide some of the above properties. For example, IPSec tunnels provide confidentiality, integrity, and authentication of the hosts (network-level protection). SSH tunnels provide confidentiality, integrity, and authentication of the tunnel endpoints (also network-level protection). TLS, however, is preferred, because it operates in-process at the application level and provides better point-to-point security. Why not encrypt? In the case of TLS communications, microservices depend on an asymmetric private key to prove their identity. To be of value, this private key must be kept secret. Applications typically depend on process-level isolation and/or file system protections for the private key. Moreover, interprocess communication using sockets is mediated by the operating system kernel. An attacker running at the privilege of the operating system has the ability to compromise TLS protections, such as by substituting a private key or certificate authority of their choice, accessing the unencrypted data in process memory, or intercepting the network communications that flow through the kernel. Therefore, within a single host, TLS protections may slow down an attacker, but are not likely to stop them. 
Additionally, use of TLS requires management of additional security assets in the form of TLS private keys. Microservice communication across hosts, however, is vulnerable to interception, and must be protected via some mechanism such as, but not limited to: IPSec or SSH tunnels, encrypted overlay networks, service mesh middlewares, or application-level TLS. Another reason to not encrypt is that TLS adds overhead to microservice communication in the form of additional network round-trips when opening connections and performing cryptographic public key and symmetric key operations. Decision At this time, EdgeX is primarily a single-node IoT application framework. Should this position change, this ADR should be revisited. Based on the single-node assumption: TLS will not be used for confidentiality and integrity of internal on-host microservice communication. TLS will be avoided as an authentication mechanism of peer microservices. Integrity and confidentiality of microservice communications crossing host boundaries is required to secure EdgeX, but are an EdgeX customer responsibility. EdgeX customers are welcome to add extra security to their own EdgeX deployments. Consequences This ADR if approved would close the following issues as will-not-fix. https://github.com/edgexfoundry/edgex-go/issues/1942 https://github.com/edgexfoundry/edgex-go/issues/1941 https://github.com/edgexfoundry/edgex-go/issues/2454 https://github.com/edgexfoundry/developer-scripts/issues/240 https://github.com/edgexfoundry/edgex-go/issues/2495 It would also close https://github.com/edgexfoundry/edgex-go/issues/1925 as there is no current need for TLS as a mutual authentication strategy. Alternatives Encrypted overlay networks Encrypted overlay networks provide varying protection based on the product used. Some can only encrypt data, such as an IPsec tunnel. Some can encrypt and provide for network microsegmentation, such as Docker Swarm networks with encryption enabled. 
Some can encrypt and enforce network policy such as restrictions on ingress traffic or restrictions on egress traffic. Service mesh middleware Service mesh middleware is an alternative that should be investigated if EdgeX decides to fully support a Kubernetes-based deployment using distributed Kubernetes pods. A service mesh typically achieves most of the security objectives of securing microservice communication by intercepting microservice communications and imposing a configuration-driven policy that typically includes confidentiality and integrity protection. These middlewares typically rely on the Kubernetes pod construct and are difficult to support for non-Kubernetes deployments. EdgeX public key infrastructure An EdgeX public key infrastructure that is natively supported by the architecture should be considered if EdgeX decides to support an out-of-box distributed deployment on non-Kubernetes platforms. Native support of TLS requires a significant amount of glue logic, and exceeds the available resources in the security working group to implement this strategy. The following text outlines a proposed strategy for supporting native TLS in the EdgeX framework: EdgeX will use Hashicorp Vault to secure the EdgeX PKI, through the use of the Vault PKI secrets engine. Vault will be configured with a root CA at initialization time, and a Vault-based sub-CA for dynamic generation of TLS leaf certificates. The root CA will be restricted to be used only by the Vault root token. EdgeX microservices that are based on third-party containers require special support unless they can talk natively to Vault for their secrets. Certain tools, such as those mentioned in the \"Creation and Distribution of Secrets\" ADR ( envconsul , consul-template , and others) can be used to facilitate third-party container integration. These services are: Consul : Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container. 
Vault : As Vault's database is encrypted, Vault cannot natively bootstrap its own TLS certificate. Requires TLS certificate to be injected into container and its location set in a configuration file. PostgreSQL : Requires TLS certificate to be injected into '$PGDATA' (default: /var/lib/postgresql/data ) which is where the writable database files are kept. Kong (admin) : Requires environment variable to be set to secure admin port with TLS, with a TLS certificate injected into the container. Kong (external) : Requires a bring-your-own (BYO) external certificate, or as a fallback, a default one should be generated using a configurable external hostname. (The Kong ACME plugin could possibly be used to automate this process.) Redis (v6) : Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container. Mosquitto : Requires TLS certificate set by configuration file, with a TLS certificate injected into the container. Additionally, every EdgeX microservice consumer will require access to the root CA for certificate verification purposes, and every EdgeX microservice server will need a TLS leaf certificate and private key. Note that Vault bootstrapping its own PKI is tricky and not natively supported by Vault. Expect that a non-trivial amount of effort will need to be put into starting Vault in non-secure mode to create the CA hierarchy and a TLS certificate for Vault itself, and then restarting Vault in a TLS-enabled configuration. Periodic certificate rotation is a non-trivial challenge as well. 
The Vault bootstrapping flow would look something like this: Bring up vault on localhost with TLS disabled (bootstrapping configuration) Initialize a blank Vault and immediately unseal it Encrypt the Vault keyshares and revoke the root token Generate a new root from the keyshares Generate an on-device root CA (see https://learn.hashicorp.com/vault/secrets-management/sm-pki-engine) Create an intermediate CA for TLS server authentication Sign the intermediate CA using the root CA Configure policy for intermediate CA Generate and store leaf certificates for Consul, Vault, PostgreSQL, Kong (admin), Kong (external), Redis (v6), Mosquitto Deploy the PKI to the respective services' secrets area Write the production Vault configuration (TLS-enabled) to a Docker volume There are no current plans for mutual auth TLS. Supporting mutual auth TLS would require creation of a separate PKI hierarchy for generation of TLS client certificates and glue logic to persist the certificates in the service's key-value secret store and provide them when connecting to other EdgeX services.","title":"Use of encryption to secure in-cluster EdgeX communications"},{"location":"design/adr/security/0015-in-cluster-tls/#use-of-encryption-to-secure-in-cluster-edgex-communications","text":"","title":"Use of encryption to secure in-cluster EdgeX communications"},{"location":"design/adr/security/0015-in-cluster-tls/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0015-in-cluster-tls/#context","text":"This ADR seeks to define the EdgeX direction on using encryption to secure \"in-cluster\" EdgeX communications, that is, internal microservice-to-microservice communication. 
This ADR will seek to clarify the EdgeX direction in several aspects with regard to: EdgeX services communicating within a single host EdgeX services communicating across multiple hosts Using encryption for confidentiality or integrity in communication Using encryption for authentication between microservices This ADR will be used to triage EdgeX feature requests in this space.","title":"Context"},{"location":"design/adr/security/0015-in-cluster-tls/#background","text":"","title":"Background"},{"location":"design/adr/security/0015-in-cluster-tls/#why-encrypt","text":"Why consider encryption in the first place? Simple. Encryption helps with the following problems: Client authentication of servers. The client knows that it is talking to the correct server. This is typically achieved using TLS server certificates that the client checks against a trusted root certificate authority. Since the client is not in charge of network routing, TLS server authentication provides a good assurance that the requests are being routed to the correct server. Server authentication of clients. The server knows the identity of the client that has connected to it. There are a variety of mechanisms to achieve this, such as usernames and passwords, tokens, claims, et cetera, but the mechanism under consideration by this ADR is TLS client authentication using TLS client certificates. Confidentiality of messages exchanged between services. Confidentiality is needed to protect authentication data flowing between communicating microservices as well as to protect the message payloads if they contain nonpublic data. TLS provides communication channel confidentiality. Integrity of messages exchanged between services. Integrity is needed to ensure that messages between communicating microservices are not maliciously altered, such as inserting or deleting data in the middle of the exchange. TLS provides communication channel integrity. 
A microservice architecture normally strives for all of the above protections. Besides TLS, there are other mechanisms that can be used to provide some of the above properties. For example, IPSec tunnels provide confidentiality, integrity, and authentication of the hosts (network-level protection). SSH tunnels provide confidentiality, integrity, and authentication of the tunnel endpoints (also network-level protection). TLS, however, is preferred, because it operates in-process at the application level and provides better point-to-point security.","title":"Why encrypt?"},{"location":"design/adr/security/0015-in-cluster-tls/#why-to-not-encrypt","text":"In the case of TLS communications, microservices depend on an asymmetric private key to prove their identity. To be of value, this private key must be kept secret. Applications typically depend on process-level isolation and/or file system protections for the private key. Moreover, interprocess communication using sockets is mediated by the operating system kernel. An attacker running at the privilege of the operating system has the ability to compromise TLS protections, such as by substituting a private key or certificate authority of their choice, accessing the unencrypted data in process memory, or intercepting the network communications that flow through the kernel. Therefore, within a single host, TLS protections may slow down an attacker, but are not likely to stop them. Additionally, use of TLS requires management of additional security assets in the form of TLS private keys. Microservice communication across hosts, however, is vulnerable to interception, and must be protected via some mechanism such as, but not limited to: IPSec or SSH tunnels, encrypted overlay networks, service mesh middlewares, or application-level TLS. 
Another reason to not encrypt is that TLS adds overhead to microservice communication in the form of additional network round-trips when opening connections and performing cryptographic public key and symmetric key operations.","title":"Why to not encrypt?"},{"location":"design/adr/security/0015-in-cluster-tls/#decision","text":"At this time, EdgeX is primarily a single-node IoT application framework. Should this position change, this ADR should be revisited. Based on the single-node assumption: TLS will not be used for confidentiality and integrity of internal on-host microservice communication. TLS will be avoided as an authentication mechanism of peer microservices. Integrity and confidentiality of microservice communications crossing host boundaries is required to secure EdgeX, but are an EdgeX customer responsibility. EdgeX customers are welcome to add extra security to their own EdgeX deployments.","title":"Decision"},{"location":"design/adr/security/0015-in-cluster-tls/#consequences","text":"This ADR if approved would close the following issues as will-not-fix. https://github.com/edgexfoundry/edgex-go/issues/1942 https://github.com/edgexfoundry/edgex-go/issues/1941 https://github.com/edgexfoundry/edgex-go/issues/2454 https://github.com/edgexfoundry/developer-scripts/issues/240 https://github.com/edgexfoundry/edgex-go/issues/2495 It would also close https://github.com/edgexfoundry/edgex-go/issues/1925 as there is no current need for TLS as a mutual authentication strategy.","title":"Consequences"},{"location":"design/adr/security/0015-in-cluster-tls/#alternatives","text":"","title":"Alternatives"},{"location":"design/adr/security/0015-in-cluster-tls/#encrypted-overlay-networks","text":"Encrypted overlay networks provide varying protection based on the product used. Some can only encrypt data, such as an IPsec tunnel. Some can encrypt and provide for network microsegmentation, such as Docker Swarm networks with encryption enabled. 
Some can encrypt and enforce network policy such as restrictions on ingress traffic or restrictions on egress traffic.","title":"Encrypted overlay networks"},{"location":"design/adr/security/0015-in-cluster-tls/#service-mesh-middleware","text":"Service mesh middleware is an alternative that should be investigated if EdgeX decides to fully support a Kubernetes-based deployment using distributed Kubernetes pods. A service mesh typically achieves most of the security objectives of securing microservice communication by intercepting microservice communications and imposing a configuration-driven policy that typically includes confidentiality and integrity protection. These middlewares typically rely on the Kubernetes pod construct and are difficult to support for non-Kubernetes deployments.","title":"Service mesh middleware"},{"location":"design/adr/security/0015-in-cluster-tls/#edgex-public-key-infrastructure","text":"An EdgeX public key infrastructure that is natively supported by the architecture should be considered if EdgeX decides to support an out-of-box distributed deployment on non-Kubernetes platforms. Native support of TLS requires a significant amount of glue logic, and exceeds the available resources in the security working group to implement this strategy. The following text outlines a proposed strategy for supporting native TLS in the EdgeX framework: EdgeX will use Hashicorp Vault to secure the EdgeX PKI, through the use of the Vault PKI secrets engine. Vault will be configured with a root CA at initialization time, and a Vault-based sub-CA for dynamic generation of TLS leaf certificates. The root CA will be restricted to be used only by the Vault root token. EdgeX microservices that are based on third-party containers require special support unless they can talk natively to Vault for their secrets. 
Certain tools, such as those mentioned in the \"Creation and Distribution of Secrets\" ADR ( envconsul , consul-template , and others) can be used to facilitate third-party container integration. These services are: Consul : Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container. Vault : As Vault's database is encrypted, Vault cannot natively bootstrap its own TLS certificate. Requires TLS certificate to be injected into container and its location set in a configuration file. PostgreSQL : Requires TLS certificate to be injected into '$PGDATA' (default: /var/lib/postgresql/data ) which is where the writable database files are kept. Kong (admin) : Requires environment variable to be set to secure admin port with TLS, with a TLS certificate injected into the container. Kong (external) : Requires a bring-your-own (BYO) external certificate, or as a fallback, a default one should be generated using a configurable external hostname. (The Kong ACME plugin could possibly be used to automate this process.) Redis (v6) : Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container. Mosquitto : Requires TLS certificate set by configuration file, with a TLS certificate injected into the container. Additionally, every EdgeX microservice consumer will require access to the root CA for certificate verification purposes, and every EdgeX microservice server will need a TLS leaf certificate and private key. Note that Vault bootstrapping its own PKI is tricky and not natively supported by Vault. Expect that a non-trivial amount of effort will need to be put into starting Vault in non-secure mode to create the CA hierarchy and a TLS certificate for Vault itself, and then restarting Vault in a TLS-enabled configuration. Periodic certificate rotation is a non-trivial challenge as well. 
The Vault bootstrapping flow would look something like this: Bring up vault on localhost with TLS disabled (bootstrapping configuration) Initialize a blank Vault and immediately unseal it Encrypt the Vault keyshares and revoke the root token Generate a new root from the keyshares Generate an on-device root CA (see https://learn.hashicorp.com/vault/secrets-management/sm-pki-engine) Create an intermediate CA for TLS server authentication Sign the intermediate CA using the root CA Configure policy for intermediate CA Generate and store leaf certificates for Consul, Vault, PostgreSQL, Kong (admin), Kong (external), Redis (v6), Mosquitto Deploy the PKI to the respective services' secrets area Write the production Vault configuration (TLS-enabled) to a Docker volume There are no current plans for mutual auth TLS. Supporting mutual auth TLS would require creation of a separate PKI hierarchy for generation of TLS client certificates and glue logic to persist the certificates in the service's key-value secret store and provide them when connecting to other EdgeX services.","title":"EdgeX public key infrastructure"},{"location":"design/adr/security/0016-docker-image-guidelines/","text":"Docker image guidelines Status Approved Context When deploying the EdgeX Docker containers some security measures are recommended to ensure the integrity of the software stack. Decision When deploying Docker images, the following flags should be set for heightened security. To avoid escalation of privileges each docker container should use the no-new-privileges option in their Docker compose file (example below). More details about this flag can be found here . This follows Rule #4 for Docker security found here . security_opt: - \"no-new-privileges:true\" NOTE: Alternatively an AppArmor security profile can be used to isolate the docker container. 
More details about apparmor profiles can be found here security_opt: [ \"apparmor:unconfined\" ] To further prevent privilege escalation attacks the user should be set for the docker container using the --user= or -u= option in their Docker compose file (example below). More details about this flag can be found here . This follows Rule #2 for Docker security found here . services: device-virtual: image: ${ REPOSITORY } /docker-device-virtual-go ${ ARCH } : ${ DEVICE_VIRTUAL_VERSION } user: $CONTAINER -PORT: $CONTAINER -PORT # user option using an unprivileged user ports: - \"127.0.0.1:49990:49990\" container_name: edgex-device-virtual hostname: edgex-device-virtual networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-virtual depends_on: - consul - data - metadata NOTE: exception Sometimes containers will require root access to perform their functions. For example, the System Management Agent requires root access to control other Docker containers. In this case you would allow it to run as the default root user. To prevent a faulty or compromised container from consuming excessive host resources, resource limits should be set for each container. More details about resource limits can be found here . This follows Rule #7 for Docker security found here . services: device-virtual: image: ${ REPOSITORY } /docker-device-virtual-go ${ ARCH } : ${ DEVICE_VIRTUAL_VERSION } user: 4000 :4000 # user option using an unprivileged user ports: - \"127.0.0.1:49990:49990\" container_name: edgex-device-virtual hostname: edgex-device-virtual networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-virtual depends_on: - consul - data - metadata deploy: # Deployment resource limits resources: limits: cpus: '0.001' memory: 50M reservations: cpus: '0.0001' memory: 20M To prevent attackers from writing data to the containers and modifying their files the --read_only flag should be set. 
More details about this flag can be found here . This follows Rule #8 for Docker security found here . device-rest: image: ${ REPOSITORY } /docker-device-rest-go ${ ARCH } : ${ DEVICE_REST_VERSION } ports: - \"127.0.0.1:49986:49986\" container_name: edgex-device-rest hostname: edgex-device-rest read_only: true # read_only option networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-rest depends_on: - data - command NOTE: exception If a container is required to have write permission to function, then this flag will not work. For example, the vault needs to run setcap in order to lock pages in memory. In this case the --read_only flag will not be used. NOTE: Volumes If writing persistent data is required then a volume can be used. A volume can be attached to the container in the following way device-rest: image: ${ REPOSITORY } /docker-device-rest-go ${ ARCH } : ${ DEVICE_REST_VERSION } ports: - \"127.0.0.1:49986:49986\" container_name: edgex-device-rest hostname: edgex-device-rest read_only: true # read_only option networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-rest depends_on: - data - command volumes: - consul-config:/consul/config:z NOTE: alternatives If writing non-persistent data is required (ex. a config file) then a temporary filesystem mount can be used to accomplish this goal while still enforcing --read_only . Mounting a tmpfs in Docker gives the container a temporary location in the host system's memory to modify files. This location will be removed once the container is stopped. 
More details about tmpfs can be found here . For additional docker security rules and guidelines please check the Docker security cheatsheet Consequences Create a more secure Docker environment References Docker-compose reference https://docs.docker.com/compose/compose-file OWASP Docker Recommendations https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html CIS Docker Benchmark https://workbench.cisecurity.org/files/2433/download/2786 (registration required)","title":"Docker image guidelines"},{"location":"design/adr/security/0016-docker-image-guidelines/#docker-image-guidelines","text":"","title":"Docker image guidelines"},{"location":"design/adr/security/0016-docker-image-guidelines/#status","text":"Approved","title":"Status"},{"location":"design/adr/security/0016-docker-image-guidelines/#context","text":"When deploying the EdgeX Docker containers some security measures are recommended to ensure the integrity of the software stack.","title":"Context"},{"location":"design/adr/security/0016-docker-image-guidelines/#decision","text":"When deploying Docker images, the following flags should be set for heightened security. To avoid escalation of privileges each docker container should use the no-new-privileges option in their Docker compose file (example below). More details about this flag can be found here . This follows Rule #4 for Docker security found here . security_opt: - \"no-new-privileges:true\" NOTE: Alternatively an AppArmor security profile can be used to isolate the docker container. More details about apparmor profiles can be found here security_opt: [ \"apparmor:unconfined\" ] To further prevent privilege escalation attacks the user should be set for the docker container using the --user= or -u= option in their Docker compose file (example below). More details about this flag can be found here . This follows Rule #2 for Docker security found here . 
services: device-virtual: image: ${ REPOSITORY } /docker-device-virtual-go ${ ARCH } : ${ DEVICE_VIRTUAL_VERSION } user: $CONTAINER -PORT: $CONTAINER -PORT # user option using an unprivileged user ports: - \"127.0.0.1:49990:49990\" container_name: edgex-device-virtual hostname: edgex-device-virtual networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-virtual depends_on: - consul - data - metadata NOTE: exception Sometimes containers will require root access to perform their functions. For example, the System Management Agent requires root access to control other Docker containers. In this case you would allow it to run as the default root user. To prevent a faulty or compromised container from consuming excessive host resources, resource limits should be set for each container. More details about resource limits can be found here . This follows Rule #7 for Docker security found here . services: device-virtual: image: ${ REPOSITORY } /docker-device-virtual-go ${ ARCH } : ${ DEVICE_VIRTUAL_VERSION } user: 4000 :4000 # user option using an unprivileged user ports: - \"127.0.0.1:49990:49990\" container_name: edgex-device-virtual hostname: edgex-device-virtual networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-virtual depends_on: - consul - data - metadata deploy: # Deployment resource limits resources: limits: cpus: '0.001' memory: 50M reservations: cpus: '0.0001' memory: 20M To prevent attackers from writing data to the containers and modifying their files the --read_only flag should be set. More details about this flag can be found here . This follows Rule #8 for Docker security found here . 
device-rest: image: ${ REPOSITORY } /docker-device-rest-go ${ ARCH } : ${ DEVICE_REST_VERSION } ports: - \"127.0.0.1:49986:49986\" container_name: edgex-device-rest hostname: edgex-device-rest read_only: true # read_only option networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-rest depends_on: - data - command NOTE: exception If a container is required to have write permission to function, then this flag will not work. For example, the vault needs to run setcap in order to lock pages in memory. In this case the --read_only flag will not be used. NOTE: Volumes If writing persistent data is required then a volume can be used. A volume can be attached to the container in the following way device-rest: image: ${ REPOSITORY } /docker-device-rest-go ${ ARCH } : ${ DEVICE_REST_VERSION } ports: - \"127.0.0.1:49986:49986\" container_name: edgex-device-rest hostname: edgex-device-rest read_only: true # read_only option networks: - edgex-network env_file: - common.env environment: SERVICE_HOST: edgex-device-rest depends_on: - data - command volumes: - consul-config:/consul/config:z NOTE: alternatives If writing non-persistent data is required (ex. a config file) then a temporary filesystem mount can be used to accomplish this goal while still enforcing --read_only . Mounting a tmpfs in Docker gives the container a temporary location in the host system's memory to modify files. This location will be removed once the container is stopped. 
More details about tmpfs can be found here . For additional docker security rules and guidelines please check the Docker security cheatsheet","title":"Decision"},{"location":"design/adr/security/0016-docker-image-guidelines/#consequences","text":"Create a more secure Docker environment","title":"Consequences"},{"location":"design/adr/security/0016-docker-image-guidelines/#references","text":"Docker-compose reference https://docs.docker.com/compose/compose-file OWASP Docker Recommendations https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html CIS Docker Benchmark https://workbench.cisecurity.org/files/2433/download/2786 (registration required)","title":"References"},{"location":"design/adr/security/0017-consul-security/","text":"Securing access to Consul Status Approved Context This ADR defines the motivation and approach used to secure access to the Consul component in the EdgeX architecture for security-enabled configurations only . Non-secure configurations continue to use Consul in anonymous read-write mode. As this Consul security feature requires Vault to function, if EDGEX_SECURITY_SECRET_STORE=false and Vault is not present, the legacy behavior (unauthenticated Consul access) will be preserved. Consul provides several services for the EdgeX architecture: Service registry (see ADR in references below) Service health monitoring Mutable configuration data Use of the services provided by Consul is optional on a service-by-service basis. Use of the registry is controlled by the -r or --registry flag provided to an EdgeX service. Use of mutable configuration data is controlled by the -cp or --configProvider flag provided to an EdgeX service. When Consul is enabled as a configuration provider, the configuration.toml is parsed into individual settings and seeded into the Consul key-value store on the first start of a service. 
Configuration reads and writes are then done to Consul if it is specified as the configuration provider, otherwise the static configuration.toml is used. Writes to the [Writable] section in Consul trigger per-service callbacks notifying the application of the changed data. Updates to non- [Writable] sections are parsed only once at startup and require a service restart to take effect. Since configuration data can affect the runtime behavior of services, compensating controls must be introduced in order to mitigate the risks introduced by moving configuration from a static file into an HTTP-accessible service with mutable state. The current practice is that Consul is exposed via unencrypted HTTP in anonymous read/write mode to all processes and EdgeX services running on the host machine. Decision Consul will be configured with access control list (ACL) functionality enabled, and each EdgeX service will utilize a Consul access token to authenticate to Consul. Consul access tokens will be requested from the Vault Consul secrets engine (to avoid introducing additional bootstrapping secrets). DNS will be disabled via configuration as it is not used in EdgeX. Consul Access Via API Gateway In security enabled EdgeX, the API gateway will be configured to proxy the Consul service over the /consul path, using the request-transformer plugin to add the global management token to incoming requests via the X-Consul-Token HTTP header. Thus, ability to access remote APIs also grants the ability to modify Consul's key-value store. At this time, service access via API gateway is all-or-nothing, but this does not preclude future fine-grained authorization at the API gateway layer to specific microservices, including Consul. 
Proxying of the Consul UI is problematic and there is no current solution, which would involve proper balancing of the externally-visible URL, the path-stripping effect (or not) of the proxy, Consul's ui_content_path , and UI authentication (the request-transformer does not work on the UI). Consequences Full implementation of this ADR will deny Consul access to all existing Consul clients. To limit the impacts of the change, deployment will take place in phases. Phase 1 is basic plumbing work and leaves Consul configured in a permissive mode and thus is not a breaking change. Phase 2 will affect the APIs of Go modules and will change the default policy to \"deny\", both of which are breaking changes. Phase 3 is a refinement of access control; presuming the existing services are \"well-behaved\", that is, they do not access configuration of other services, Phase 3 will not introduce any breaking changes on top of the Phase 2 breaking changes. Phase 1 (completed in Ireland release) Vault bootstrapper will install Vault Consul secrets engine. Secretstore-setup will create a Vault token for consul secrets engine configuration. Consul will be started with Consul ACLs enabled with persistent agent tokens and a default \"allow\" policy. Consul bootstrapper will create a bootstrap management token and use the provided Vault token to (re)configure the Consul secrets engine in Vault. Due to a quirk in Consul's ACL behavior that inverts the meaning of an ACL in default-allow mode, in phase 1 the Consul bootstrapper will create an agent token with the global-management policy and install it into the agent. During phase 2, it will be changed to a specific, limited, policy. (This change should not be visible to Consul API clients.) The bootstrap management token will also be stored persistently to be used by the API gateway for proxy authentication, and will also be needed for local access to Consul's web user interface. 
(Docker-only) Open a port to signal that Consul bootstrapping is completed. (Integrate with ready_to_run signal.) Phase 2 (completed in Ireland release) Consul bootstrapper will install a role in Vault that creates global-management tokens in Consul with no TTL. Registry and configuration client libraries will be modified to accept a Consul access token. go-mod-bootstrap will contain the necessary glue logic to request a service-specific Consul access token from Vault every time the service is started. Consul configuration will be changed to a default \"deny\" policy once all services have been changed to authenticated access mode. The agent tokens' policy will be changed to a specific agent policy instead of the global-management policy. Phase 3 (for Jakarta release) Introduce per-service roles and ACL policies that give each service access to its own subset of the Consul key-value store and to register in the service registry. Consul access tokens will be scoped to the needs of the particular service (ability to update that service's registry data, and access to that service's KV store). Create a separate management token (non-bootstrap) for API gateway proxy authentication and Consul UI access that is different from the bootstrap management token stored in Vault. This token will need to be requested outside of Vault in order for it to be non-expiring. Glue logic will ensure that expired Consul tokens are replaced with fresh ones (token freshness can be pre-checked by a request made to /acl/token/self ). Unintended consequences and mitigation (for Jakarta stabilization release) Consul token lifetime will be tied to the Vault token lifetime. Vault deliberately revokes any Consul tokens that it issues in order to ensure that they don't outlive the parent token's lifetime. If Consul is not fully initialized when token revocation is attempted, Vault will be unable to revoke these tokens. 
Mitigations: Consul will be started concurrently with Vault to give time for Consul to fully initialize. secretstore-setup will delay starting until Consul has completed leader election. secretstore-setup will be modified to less aggressively revoke tokens. Alternatives include revoke-and-orphan which should leave the Consul tokens intact if the secret store is restarted but may leave garbage tokens in the Consul database, or tidy-tokens which cleans up invalid entries in the token database, or simply leave Vault to its own devices and let Vault clean itself up. Testing will be performed and an appropriate mechanism selected. References ADR for secret creation and distribution ADR for secure bootstrapping ADR for service registry Hashicorp Vault","title":"Securing access to Consul"},{"location":"design/adr/security/0020-spiffe/","text":"Use SPIFFE/SPIRE for On-demand Secret Store Token Generation Status Approved via TSC vote on 2021-12-14 Context In security-enabled EdgeX, there is a component called security-secretstore-setup that seeds authentication tokens for Hashicorp Vault--EdgeX's secret store--into directories reserved for each EdgeX microservice. 
The implementation is provided by a sub-component, security-file-token-provider , that works off of a static configuration file ( token-config.json ) that configures known EdgeX services, and an environment variable that lists additional services that require tokens. The token provider creates a unique token for each service and attaches a custom policy to each token that limits token access in a manner that partitions the secret store's namespace. The current solution has some problematic aspects: These tokens have an initial TTL of one hour (1h) and become invalid if not used and renewed within that time period. It is not possible to delay the start of EdgeX services until a later time (that is, greater than the default token TTL), as they will not be able to connect to the EdgeX secret store to obtain required secrets. Transmission of the authentication token requires one or more shared file systems between the service and security-secretstore-setup . In the Docker implementation, this shared file system is constructed by bind-mounting a host-based directory to multiple containers. The snap implementation is similar, utilizing a content-interface between snaps. In a Kubernetes implementation limited to a single worker node, a CSI storage driver that provided RWO volumes would suffice. The current approach cannot support distributed services without an underlying distributed file system to distribute tokens, such as GlusterFS, running across the participating nodes. For Kubernetes, the requirement would be a remote shared file system persistent volume (RWX volume). Decision EdgeX will create a new service, security-spiffe-token-provider . This service will be a mutual-auth TLS service that exchanges a SPIFFE X.509 SVID for a secret store token. A SPIFFE identifier is a URI of the format spiffe://trust domain/workload identifier . For example: spiffe://edgexfoundry.org/service/core-data . 
A SPIFFE Verifiable Identity Document (SVID) is a cryptographically-signed version of a SPIFFE ID, typically an X.509 certificate with the SPIFFE ID encoded into the subjectAltName certificate extension, or a JSON web token (encoded into the sub claim). The EdgeX implementation will use a naming convention on the path component, such as the above, in order to be able to extract the requesting service from the SPIFFE ID. The SPIFFE token provider will take three parameters: An X.509 SVID used in mutual-auth TLS for the token provider and the service to cross-authenticate. The requested service key. If blank, the service key will default to the service name encoded in the SVID. If the service name follows the pattern device-(name) , then the service key must follow the format device-(name) or device-(name)-* . If the service name is app-service-configurable , then the service key must follow the format app-* . (This is an accommodation for the Unix workload attester not being able to distinguish workloads that are launched using the same executable binary. Custom app services that support multiple instances won't be supported unless they name the executable the same as the standard app service binary or modify this logic.) A list of \"known secret\" identifiers that will allow new services to request database passwords or other \"known secrets\" to be seeded into their service's partition in the secret store. The go-mod-secrets module will be modified to enable a new mode whereby a secret store token is obtained by: Obtaining an X.509 SVID by contacting a local SPIFFE agent's workload API on a local Unix domain socket. Connecting to the security-spiffe-token-provider service using the X.509 SVID to request a secret store token. The SPIFFE authentication mode will be an opt-in feature. The SPIFFE implementation will be user-replaceable; specifically, the workload API socket will be configurable, as well as the parsing of the SPIFFE ID. 
Reasons for doing so might include: changing the name of the trust domain in the SPIFFE ID, or moving the SPIFFE server out of the edge. This feature is estimated to be a \"large\" or \"extra large\" effort that could be implemented in a single release cycle. Technical Architecture The work flow is as follows: Create a root CA for the SPIFFE user to use for creation of sub-CA's. The SPIFFE server is started. The server creates a sub-CA for issuing new identities. The trust bundle (certificate authority) data is exported from the SPIFFE server and stored on a shared volume readable by other EdgeX microservices (i.e. the existing secrets volume used for sharing secret store tokens). A join token for the SPIFFE agent is created using token generate and shared to the EdgeX secrets volume. Workload entries are loaded into the SPIFFE server database, using the join-identity of the agent created in the previous step as the parent ID of the workload. The SPIFFE agent is started with the join token created in a previous step to add it to the cluster. Vault is started and security-secret-store-setup initializes it and creates an admin token for security-spiffe-token-provider to use. The security-spiffe-token-provider service is started. It obtains an SVID from the SPIFFE agent and uses it as a TLS server certificate. An EdgeX microservice starts and obtains another SVID from the SPIFFE agent and uses it as a TLS client certificate to contact the security-spiffe-token-provider service. The EdgeX microservice uses the trust bundle as a server CA to verify the TLS certificate of the remote service. security-spiffe-token-provider verifies the SVID using the trust bundle as client CA to verify the client, extracts the service key, and issues an appropriate Vault service token. The EdgeX microservice accesses Vault as usual. 
Workload Registration and Agent Sockets The server uses a workload registration Unix domain socket that allows authorization entries to be added to the authorization database. This socket is protected by Unix file system permissions to control who is allowed to add entries to the database. In this proposal, a subcommand will be added to the EdgeX secrets-config utility to simplify the process of registering new services that uses the registration socket above. The agent uses a workload attestation Unix domain socket that is open to the world. This socket is shared via a snap content-interface or via a shared host bind mount for Docker. There is one agent per node. Trust Bundle SVIDs must be traceable back to a known issuing authority (certificate authority) to determine their validity. In the proposed implementation, we will generate a CA on first boot and store it persistently. This root CA will be distributed as the trust bundle. The SPIFFE server will then generate a rotating sub-CA for issuing SVIDs, and the issued SVID will include both the leaf certificate and the intermediate certificate. This implementation differs from the default implementation, which uses a transient CA that is rotated periodically and that keeps a log of past CAs. The default implementation is not suitable because only the Kubernetes reference implementation of the SPIRE server has a notification hook that is invoked when the CA is rotated. CA rotation would just result in issuing of SVIDs that are not trusted by microservices that received only the initial CA. The SPIFFE implementation is replaceable. The user is free to replace this default implementation with potentially a cloud-based SPIFFE server and a cloud-based CA. Workload Authorization Workloads are authenticated by connecting to the spiffe-agent via a Unix domain socket, which is capable of identifying the process ID of the remote client. 
The process ID is fed into one of the following workload attesters, which gather additional metadata about the caller: The Unix workload attester gathers UID, GID, path, and SHA-256 hash of the executable. The Unix workload attester would be used for native services and snaps. The Docker workload attester gathers container labels that are added by docker-compose when the container is launched. The Docker workload attester would be used for Docker-based EdgeX deployments. An example label is docker:label:com.docker.compose.service:edgex-core-data where the service label is the key value in the services section of the docker-compose.yml . It is also possible to refer to labels built-in to the container image. The Kubernetes workload attester gathers a wealth of pod and container metadata. Once authenticated, the metadata is sent to the SPIFFE server to authorize the workload. Workloads are authorized via an authorization database connected to the SPIFFE server. Supported databases are SQLite (default), PostgreSQL, and MySQL. Due to startup ordering issues, SQLite will be used. (Disclaimer: SQLite, according to the Turtle book, is intended for development and test only. We will use SQLite anyway because Redis is not supported.) The only service that needs to be seeded to the database at this time is security-spiffe-token-provider . For example: spire-server entry create -parentID \" ${ local_agent_svid } \" -dns edgex-spiffe-token-provider -spiffeID \" ${ svid_service_base } /edgex-spiffe-token-provider\" -selector \"docker:label:com.docker.compose.service:edgex-spiffe-token-provider\" The above command associates a SPIFFE ID with a selector , in this case, a container label, and configures a DNS subjectAltName in the X.509 certificate for server-side TLS. A snap-based installation of EdgeX would use a unix:path or unix:sha256 selector instead. 
There are two extension mechanisms for authorizing additional workloads: Inject a config file or environment variable to authorize additional workloads. The container will parse and issue spire-server entry create commands for each additional service. Run the edgex-secrets-config utility (that will wrap the spire-server entry create command) for ad-hoc authorization of new services. The authorization database is persistent across reboots. Consequences This proposal will require addition of several new, optional, EdgeX microservices: security-spiffe-token-provider , running on the main node spiffe-agent , running on the main node and each remote node spiffe-server , running on the main node spiffe-config , a one-shot service running on the main node Note that like Vault, the recommended SPIFFE configuration is to run the SPIFFE server on a dedicated node. If this is a concern, bring your own SPIFFE implementation. Minor changes will be needed to security-secretstore-setup to preserve the token-creating-token used by security-file-token-provider so that it can be used by security-spiffe-token-provider . The startup flow of the framework will be adjusted as follows: Bootstrap service (original) spiffe-server spiffe-config (can be combined with spiffe-server ) spiffe-agent Vault service (original) Secret store setup service (original) security-spiffe-token-provider Consul (original) Postgres (original) There is no direct dependency between spiffe-server and any other microservice. security-spiffe-token-provider requires an SVID from spiffe-agent and a Vault admin token. None of these new services will be proxied via the API gateway. In the future, this mechanism may become the default secret store distribution mechanism, as it eliminates several secrets volumes used to share secrets between security-secretstore-setup and various EdgeX microservices. The EdgeX automation will only configure the SPIFFE agent on the main node. 
Additional nodes can be manually added by the operator by obtaining a join token from the main node and using it to bootstrap a remote node. SPIFFE/SPIRE has native support for Kubernetes and can distribute the trust bundle via a Kubernetes ConfigMap to more easily enable distributed scenarios, removing a major roadblock to usage of EdgeX in a Kubernetes environment. Footprint NOTE: This data is limited by the fact that the pre-built SPIRE reference binaries are compiled with CGO enabled. SPIRE Server 69 MB executable, dynamically linked 151 MB inside of a Debian-slim container 30 MB memory usage, as container SPIRE Agent 33 MB executable, dynamically linked 114 MB inside of a Debian-slim container 64 MB memory usage, as container SPIFFE-base Secret Store Token Provider The following is the minimum size: > 6 MB executable (likely much larger) > 29 MB memory usage, as container Limitations The following are known limitations with this proposal: The capabilities enabled by this solution would only be enabled on Linux platforms. SPIFFE/SPIRE Agent is not available for native Windows and pre-built binaries are only available for Linux. (It is unclear as to whether other *nix'es are supported.) The capabilities enabled by this solution would only be supported for Go-based services. The SPIFFE APIs are implemented in gRPC, which is only ported to C#, C++, Dart, Go, Java, Kotlin, Node, Objective-C, PHP, Python, and Ruby. Notably, the C language is not supported, and the only other EdgeX supported language is Go. The default TTL of an X.509 SVID is one hour. As such, all SVID consumers must be capable of auto-renewal of SVIDs on both the client and server side. Alternatives Overcoming lack of a supported GRPC-C library Leave C-SDK device services behind. In this option, C device services would be unable to participate in the delayed-start services architecture. Fork a grpc-c library. Forking a grpc-c library and rehabilitating it is one option. 
There is at least one grpc-c library that has been proven to work, but it requires additional features to make it compatible with the SPIRE workload agent. However, the project is extremely large and it is unlikely that EdgeX is big enough to carry the project. Available libraries include: https://github.com/lixiangyun/grpc-c This library is several years out-of-date, does not compile on current Linux distributions without some rework, and does not pass per-request metadata tags. Proved to work via manual patching. Not supportable. https://github.com/Juniper/grpc-c This library is several years out-of-date, and also does not compile on current Linux distributions without some rework. Uses hard-coded Unix domain socket paths. May support per-request metadata tags, but did not test. Not supportable. https://github.com/HewlettPackard/c-spiffe This library is yet untested. Rather than a gRPC library, this library implements the workload API client directly. Ultimately, this library also wraps the gRPC C++ library, and statically links to it. There is no benefit to the EdgeX project to use this library as we can call the underlying library directly. Hybrid device services. In this model, device services would always be written in Go, but in the case where linking to a C language library is required, CGO features would be used to invoke native C functions from golang. This option would commit the EdgeX project to a one-time investment to port the existing C device services to the new hybrid model. This option is the best choice if the long-term strategy is to end-of-life the C Device SDK. Bridge. In this model, the C++ implementation to invoke the SPIFFE/SPIRE workload API would be hidden behind a dynamic shared library with C linkage. This would require minimal change to the existing C SDK. However, the resulting binaries would have to be based on GLIBC vs MUSL in order to get dlopen() support. 
This will also limit the choice of container base images for containerized services. Modernize. In this model, the Device SDK would be rewritten either partially or in-full in C++. Under this model, the SPIFFE/SPIRE workload API could be accessed via a community-supported C++ GRPC SDK. There are many implementation options: A \"C++ compilation-switch\" where the C SDK could be compiled in C-mode or C++-mode with enhanced functionality. A C++ extension API. The original C SDK would remain as-is, but if compiling with __cplusplus defined, additional API methods would be exposed. The SDK could thus be composed of a mixture of .c files with C linkage and .cc files with C++ linkage. The linker would ultimately determine whether or not the C++ runtime library needed to be linked in. Native C++ device SDK with legacy C wrapper facade. Compile existing code in C++ mode, with optional C++ facade. Opt-in or Standard Feature If one of the following things were to happen, it would push this proposal \"over the edge\" from being an optional opt-in feature to a required standard feature for security: The \"on-demand\" method of obtaining a secret store token is the default method of obtaining a token for non-core EdgeX services. The \"on-demand\" method of obtaining a secret store token is the default method for all EdgeX services. SPIFFE SVID's become the implementation mechanism for microservice-level authentication. (Not in scope for this ADR.) Merge security-file-token-provider and security-spiffe-token-provider Keeping these as separate executables clearly separates the on-demand secret store tokens feature as an optional service. It is possible to combine the services, but there would need to be a configuration switch in order to enable the SPIFFE feature. It would also increase the base executable size to include the extra logic. 
Alternatives regarding SPIFFE CA Transient CA option The SPIFFE server can be configured with no \"upstream authority\" (certificate authority), and the server will periodically generate a new, transient CA, and keep a bounded history of previous CA's. A rotating trust bundle only practically works in a Kubernetes environment, since a configmap can be updated real-time. For everyone else, we need a static CA that can be pre-distributed to remote nodes. Thus, this solution was not chosen. Vault-based CA option The SPIFFE server can be configured to make requests to a Hashicorp Vault PKI secrets engine to generate intermediate CA certificates for signing SVID's. This is an option for future integrations, but is omitted from this proposal due to the jump in implementation complexity and the desire that the current proposal be on add-on feature. The current implementation allows the SPIFFE server and Vault to be started simultaneously. Using a Vault-based CA would require a complex interlocking sequence of steps. References Issue to create ADR for handling delayed-start services 0018 Service Registry ADR Service List ADR SPIFFE SPIFFE ID X.500 SVID JWT SVID Turtle book","title":"Use SPIFFE/SPIRE for On-demand Secret Store Token Generation"},{"location":"design/adr/security/0020-spiffe/#use-spiffespire-for-on-demand-secret-store-token-generation","text":"","title":"Use SPIFFE/SPIRE for On-demand Secret Store Token Generation"},{"location":"design/adr/security/0020-spiffe/#status","text":"Approved via TSC vote on 2021-12-14","title":"Status"},{"location":"design/adr/security/0020-spiffe/#context","text":"In security-enabled EdgeX, there is a component called security-secretstore-setup that seeds authentication tokens for Hashicorp Vault--EdgeX's secret store--into directories reserved for each EdgeX microservice. 
The implementation is provided by a sub-component, security-file-token-provider , that works off of a static configuration file ( token-config.json ) that configures known EdgeX services, and an environment variable that lists additional services that require tokens. The token provider creates a unique token for each service and attaches a custom policy to each token that limits token access in a manner that paritions the secret store's namespace. The current solution has some problematic aspects: These tokens have an initial TTL of one hour (1h) and become invalid if not used and renewed within that time period. It is not possible to delay the start of EdgeX services until a later time (that is, greater than the default token TTL), as they will not be able to connect to the EdgeX secret store to obtain required secrets. Transmission of the authentication token requires one or more shared file systems between the service and security-secretstore-setup . In the Docker implementation, this shared file system is constructed by bind-mounting a host-based directory to multiple containers. The snap implementation is similar, utilizing a content-interface between snaps. In a Kubernetes implementation limited to a single worker node, a CSI storage driver that provided RWO volumes would suffice. The current approach cannot support distributed services without an underlying distributed file system to distribute tokens, such as GlusterFS, running across the participating nodes. For Kubernetes, the requirement would be a remote shared file system persistent volume (RWX volume).","title":"Context"},{"location":"design/adr/security/0020-spiffe/#decision","text":"EdgeX will create a new service, security-spiffe-token-provider . This service will be a mutual-auth TLS service that exchanges a SPIFFE X.509 SVID for a secret store token. An SPIFFE identifier is a URI of the format spiffe://trust domain/workload identifier . For example: spiffe://edgexfoundry.org/service/core-data . 
A SPIFFE Verifiable Identity Document (SVID) is a cryptographically-signed version of a SPIFFE ID, typically an X.509 certificate with the SPIFFE ID encoded into the subjectAltName certificate extension, or a JSON web token (encoded into the sub claim). The EdgeX implementation will use a naming convention on the path component, such as the above, in order to be able to extract the requesting service from the SPIFFE ID. The SPIFFE token provider will take three parameters: An X.509 SVID used in mutual-auth TLS for the token provider and the service to cross-authenticate. The requested service key. If blank, the service key will default to the service name encoded in the SVID. If the service name follows the pattern device-(name) , then the service key must follow the format device-(name) or device-name-* . If the service name is app-service-configurable , then the service key must follow the format app-* . (This is an accommodation for the Unix workload attester not being able to distinguish workloads that are launched using the same executable binary. Custom app services that support multiple instances won't be supported unless they name the executable the same as the standard app service binary or modify this logic.) A list of \"known secret\" identifiers that will allow new services to request database passwords or other \"known secrets\" to be seeded into their service's partition in the secret store. The go-mod-secrets module will be modified to enable a new mode whereby a secret store token is obtained by: Obtaining an X.509 SVID by contacting a local SPIFFE agent's workload API on a local Unix domain socket. Connecting to the security-spiffe-token-provider service using the X.509 SVID to request a secret store token. The SPIFFE authentication mode will be an opt-in feature. The SPIFFE implementation will be user-replaceable; specifically, the workload API socket will be configurable, as well as the parsing of the SPIFFE ID. 
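The naming convention and service-key rules above (default to the service name encoded in the SVID; device-(name) services may request device-(name) or device-name-* ; app-service-configurable may request app-* ) can be sketched in Go. The function names and the exact regular expression are illustrative assumptions, not the actual go-mod-secrets or token-provider code:

```go
package main

import (
	"fmt"
	"net/url"
	"regexp"
	"strings"
)

// serviceKeyFromSVID extracts the service key from a SPIFFE ID of the
// form spiffe://<trust domain>/service/<service key>, per the naming
// convention described above. (Hypothetical helper, not the real code.)
func serviceKeyFromSVID(spiffeID string) (string, error) {
	u, err := url.Parse(spiffeID)
	if err != nil || u.Scheme != "spiffe" {
		return "", fmt.Errorf("not a SPIFFE ID: %s", spiffeID)
	}
	parts := strings.Split(strings.TrimPrefix(u.Path, "/"), "/")
	if len(parts) != 2 || parts[0] != "service" || parts[1] == "" {
		return "", fmt.Errorf("unexpected SPIFFE ID path: %s", u.Path)
	}
	return parts[1], nil
}

// allowedServiceKey checks a requested service key against the service
// name from the SVID, applying the device-* and app-service-configurable
// accommodations described above.
func allowedServiceKey(svidName, requested string) bool {
	if requested == "" || requested == svidName {
		return true // blank defaults to the SVID's service name
	}
	if strings.HasPrefix(svidName, "device-") {
		// device-(name) may also request device-(name)-*
		ok, _ := regexp.MatchString("^"+regexp.QuoteMeta(svidName)+"(-.+)?$", requested)
		return ok
	}
	if svidName == "app-service-configurable" {
		return strings.HasPrefix(requested, "app-")
	}
	return false
}

func main() {
	key, _ := serviceKeyFromSVID("spiffe://edgexfoundry.org/service/core-data")
	fmt.Println(key)                                                        // core-data
	fmt.Println(allowedServiceKey("device-virtual", "device-virtual-2"))    // true
	fmt.Println(allowedServiceKey("app-service-configurable", "app-rules")) // true
	fmt.Println(allowedServiceKey("core-data", "core-metadata"))            // false
}
```

This captures why the Unix workload attester accommodation exists: two instances of the same binary present the same attested path, so the pattern check on the requested key is the only thing distinguishing them.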
Reasons for doing so might include: changing the name of the trust domain in the SPIFFE ID, or moving the SPIFFE server out of the edge. This feature is estimated to be a \"large\" or \"extra large\" effort that could be implemented in a single release cycle.","title":"Decision"},{"location":"design/adr/security/0020-spiffe/#technical-architecture","text":"The workflow is as follows: Create a root CA for the SPIFFE user to use for creation of sub-CAs. The SPIFFE server is started. The server creates a sub-CA for issuing new identities. The trust bundle (certificate authority) data is exported from the SPIFFE server and stored on a shared volume readable by other EdgeX microservices (i.e. the existing secrets volume used for sharing secret store tokens). A join token for the SPIFFE agent is created using token generate and shared to the EdgeX secrets volume. Workload entries are loaded into the SPIFFE server database, using the join-identity of the agent created in the previous step as the parent ID of the workload. The SPIFFE agent is started with the join token created in a previous step to add it to the cluster. Vault is started and security-secret-store-setup initializes it and creates an admin token for security-spiffe-token-provider to use. The security-spiffe-token-provider service is started. It obtains an SVID from the SPIFFE agent and uses it as a TLS server certificate. An EdgeX microservice starts and obtains another SVID from the SPIFFE agent and uses it as a TLS client certificate to contact the security-spiffe-token-provider service. The EdgeX microservice uses the trust bundle as a server CA to verify the TLS certificate of the remote service. security-spiffe-token-provider verifies the SVID using the trust bundle as client CA to verify the client, extracts the service key, and issues an appropriate Vault service token. 
The EdgeX microservice accesses Vault as usual.","title":"Technical Architecture"},{"location":"design/adr/security/0020-spiffe/#workload-registration-and-agent-sockets","text":"The server uses a workload registration Unix domain socket that allows authorization entries to be added to the authorization database. This socket is protected by Unix file system permissions to control who is allowed to add entries to the database. In this proposal, a subcommand will be added to the EdgeX secrets-config utility to simplify the process of registering new services, using the registration socket above. The agent uses a workload attestation Unix domain socket that is open to the world. This socket is shared via a snap content-interface or via a shared host bind mount for Docker. There is one agent per node.","title":"Workload Registration and Agent Sockets"},{"location":"design/adr/security/0020-spiffe/#trust-bundle","text":"SVIDs must be traceable back to a known issuing authority (certificate authority) to determine their validity. In the proposed implementation, we will generate a CA on first boot and store it persistently. This root CA will be distributed as the trust bundle. The SPIFFE server will then generate a rotating sub-CA for issuing SVIDs, and the issued SVID will include both the leaf certificate and the intermediate certificate. This implementation differs from the default implementation, which uses a transient CA that is rotated periodically and that keeps a log of past CAs. The default implementation is not suitable because only the Kubernetes reference implementation of the SPIRE server has a notification hook that is invoked when the CA is rotated. CA rotation would just result in the issuing of SVIDs that are not trusted by microservices that received only the initial CA. The SPIFFE implementation is replaceable. 
The user is free to replace this default implementation with potentially a cloud-based SPIFFE server and a cloud-based CA.","title":"Trust Bundle"},{"location":"design/adr/security/0020-spiffe/#workload-authorization","text":"Workloads are authenticated by connecting to the spiffe-agent via a Unix domain socket, which is capable of identifying the process ID of the remote client. The process ID is fed into one of the following workload attesters, which gather additional metadata about the caller: The Unix workload attester gathers UID, GID, path, and SHA-256 hash of the executable. The Unix workload attester would be used for native services and snaps. The Docker workload attester gathers container labels that are added by docker-compose when the container is launched. The Docker workload attester would be used for Docker-based EdgeX deployments. An example label is docker:label:com.docker.compose.service:edgex-core-data where the service label is the key value in the services section of the docker-compose.yml . It is also possible to refer to labels built-in to the container image. The Kubernetes workload attester gathers a wealth of pod and container metadata. Once authenticated, the metadata is sent to the SPIFFE server to authorize the workload. Workloads are authorized via an authorization database connected to the SPIFFE server. Supported databases are SQLite (default), PostgreSQL, and MySQL. Due to startup ordering issues, SQLite will be used. (Disclaimer: SQLite, according to the Turtle book, is intended for development and test only. We will use SQLite anyway because Redis is not supported.) The only service that needs to be seeded to the database at this time is security-spiffe-token-provider . 
For example: spire-server entry create -parentID \" ${ local_agent_svid } \" -dns edgex-spiffe-token-provider -spiffeID \" ${ svid_service_base } /edgex-spiffe-token-provider\" -selector \"docker:label:com.docker.compose.service:edgex-spiffe-token-provider\" The above command associates a SPIFFE ID with a selector , in this case, a container label, and configures a DNS subjectAltName in the X.509 certificate for server-side TLS. A snap-based installation of EdgeX would use a unix:path or unix:sha256 selector instead. There are two extension mechanisms for authorizing additional workloads: Inject a config file or environment variable to authorize additional workloads. The container will parse and issue spire-server entry create commands for each additional service. Run the edgex-secrets-config utility (that will wrap the spire-server entry create command) for ad-hoc authorization of new services. The authorization database is persistent across reboots.","title":"Workload Authorization"},{"location":"design/adr/security/0020-spiffe/#consequences","text":"This proposal will require addition of several new, optional, EdgeX microservices: security-spiffe-token-provider , running on the main node spiffe-agent , running on the main node and each remote node spiffe-server , running on the main node spiffe-config , a one-shot service running on the main node Note that like Vault, the recommended SPIFFE configuration is to run the SPIFFE server on a dedicated node. If this is a concern, bring your own SPIFFE implementation. Minor changes will be needed to security-secretstore-setup to preserve the token-creating-token used by security-file-token-provider so that it can be used by security-spiffe-token-provider . 
The startup flow of the framework will be adjusted as follows: Bootstrap service (original) spiffe-server spiffe-config (can be combined with spiffe-server ) spiffe-agent Vault service (original) Secret store setup service (original) security-spiffe-token-provider Consul (original) Postgres (original) There is no direct dependency between spiffe-server and any other microservice. security-spiffe-token-provider requires an SVID from spiffe-agent and a Vault admin token. None of these new services will be proxied via the API gateway. In the future, this mechanism may become the default secret store distribution mechanism, as it eliminates several secrets volumes used to share secrets between security-secretstore-setup and various EdgeX microservices. The EdgeX automation will only configure the SPIFFE agent on the main node. Additional nodes can be manually added by the operator by obtaining a join token from the main node and using it to bootstrap a remote node. SPIFFE/SPIRE has native support for Kubernetes and can distribute the trust bundle via a Kubernetes ConfigMap to more easily enable distributed scenarios, removing a major roadblock to usage of EdgeX in a Kubernetes environment.","title":"Consequences"},{"location":"design/adr/security/0020-spiffe/#footprint","text":"NOTE: This data is limited by the fact that the pre-built SPIRE reference binaries are compiled with CGO enabled.","title":"Footprint"},{"location":"design/adr/security/0020-spiffe/#spire-server","text":"69 MB executable, dynamically linked 151 MB inside of a Debian-slim container 30 MB memory usage, as container","title":"SPIRE Server"},{"location":"design/adr/security/0020-spiffe/#spire-agent","text":"33 MB executable, dynamically linked 114 MB inside of a Debian-slim container 64 MB memory usage, as container","title":"SPIRE Agent"},{"location":"design/adr/security/0020-spiffe/#spiffe-base-secret-store-token-provider","text":"The following is the minimum size: > 6 MB executable (likely much 
larger) > 29 MB memory usage, as container","title":"SPIFFE-base Secret Store Token Provider"},{"location":"design/adr/security/0020-spiffe/#limitations","text":"The following are known limitations with this proposal: The capabilities enabled by this solution would only be enabled on Linux platforms. SPIFFE/SPIRE Agent is not available for native Windows and pre-built binaries are only available for Linux. (It is unclear as to whether other *nixes are supported.) The capabilities enabled by this solution would only be supported for Go-based services. The SPIFFE APIs are implemented in gRPC, which is only ported to C#, C++, Dart, Go, Java, Kotlin, Node, Objective-C, PHP, Python, and Ruby. Notably, the C language is not supported, and the only other EdgeX supported language is Go. The default TTL of an X.509 SVID is one hour. As such, all SVID consumers must be capable of auto-renewal of SVIDs on both the client and server side.","title":"Limitations"},{"location":"design/adr/security/0020-spiffe/#alternatives","text":"","title":"Alternatives"},{"location":"design/adr/security/0020-spiffe/#overcoming-lack-of-a-supported-grpc-c-library","text":"Leave C-SDK device services behind. In this option, C device services would be unable to participate in the delayed-start services architecture. Fork a grpc-c library. Forking a grpc-c library and rehabilitating it is one option. There is at least one grpc-c library that has been proven to work, but it requires additional features to make it compatible with the SPIRE workload agent. However, the project is extremely large and it is unlikely that EdgeX is big enough to carry the project. Available libraries include: https://github.com/lixiangyun/grpc-c This library is several years out-of-date, does not compile on current Linux distributions without some rework, and does not pass per-request metadata tags. Proved to work via manual patching. Not supportable. 
https://github.com/Juniper/grpc-c This library is several years out-of-date, also does not compile on current Linux distributions without some rework. Uses hard-coded Unix domain socket paths. May support per-request metadata tags, but did not test. Not supportable. https://github.com/HewlettPackard/c-spiffe This library is as yet untested. Rather than a gRPC library, this library implements the workload API client directly. Ultimately, this library also wraps the gRPC C++ library, and statically links to it. There is no benefit to the EdgeX project to use this library as we can call the underlying library directly. Hybrid device services. In this model, device services would always be written in Go, but in the case where linking to a C language library is required, CGO features would be used to invoke native C functions from golang. This option would commit the EdgeX project to a one-time investment to port the existing C device services to the new hybrid model. This option is the best choice if the long-term strategy is to end-of-life the C Device SDK. Bridge. In this model, the C++ implementation to invoke the SPIFFE/SPIRE workload API would be hidden behind a dynamic shared library with C linkage. This would require minimal change to the existing C SDK. However, the resulting binaries would have to be based on GLIBC vs MUSL in order to get dlopen() support. This will also limit the choice of container base images for containerized services. Modernize. In this model, the Device SDK would be rewritten either partially or in-full in C++. Under this model, the SPIFFE/SPIRE workload API could be accessed via a community-supported C++ GRPC SDK. There are many implementation options: A \"C++ compilation-switch\" where the C SDK could be compiled in C-mode or C++-mode with enhanced functionality. A C++ extension API. The original C SDK would remain as-is, but if compiling with __cplusplus defined, additional API methods would be exposed. 
The SDK could thus be composed of a mixture of .c files with C linkage and .cc files with C++ linkage. The linker would ultimately determine whether or not the C++ runtime library needed to be linked in. Native C++ device SDK with legacy C wrapper facade. Compile existing code in C++ mode, with optional C++ facade.","title":"Overcoming lack of a supported GRPC-C library"},{"location":"design/adr/security/0020-spiffe/#opt-in-or-standard-feature","text":"If one of the following things were to happen, it would push this proposal \"over the edge\" from being an optional opt-in feature to a required standard feature for security: The \"on-demand\" method of obtaining a secret store token is the default method of obtaining a token for non-core EdgeX services. The \"on-demand\" method of obtaining a secret store token is the default method for all EdgeX services. SPIFFE SVIDs become the implementation mechanism for microservice-level authentication. (Not in scope for this ADR.)","title":"Opt-in or Standard Feature"},{"location":"design/adr/security/0020-spiffe/#merge-security-file-token-provider-and-security-spiffe-token-provider","text":"Keeping these as separate executables clearly separates the on-demand secret store tokens feature as an optional service. It is possible to combine the services, but there would need to be a configuration switch in order to enable the SPIFFE feature. It would also increase the base executable size to include the extra logic.","title":"Merge security-file-token-provider and security-spiffe-token-provider"},{"location":"design/adr/security/0020-spiffe/#alternatives-regarding-spiffe-ca","text":"","title":"Alternatives regarding SPIFFE CA"},{"location":"design/adr/security/0020-spiffe/#transient-ca-option","text":"The SPIFFE server can be configured with no \"upstream authority\" (certificate authority), and the server will periodically generate a new, transient CA, and keep a bounded history of previous CAs. 
A rotating trust bundle only practically works in a Kubernetes environment, since a configmap can be updated in real time. For everyone else, we need a static CA that can be pre-distributed to remote nodes. Thus, this solution was not chosen.","title":"Transient CA option"},{"location":"design/adr/security/0020-spiffe/#vault-based-ca-option","text":"The SPIFFE server can be configured to make requests to a Hashicorp Vault PKI secrets engine to generate intermediate CA certificates for signing SVIDs. This is an option for future integrations, but is omitted from this proposal due to the jump in implementation complexity and the desire that the current proposal be an add-on feature. The current implementation allows the SPIFFE server and Vault to be started simultaneously. Using a Vault-based CA would require a complex interlocking sequence of steps.","title":"Vault-based CA option"},{"location":"design/adr/security/0020-spiffe/#references","text":"Issue to create ADR for handling delayed-start services 0018 Service Registry ADR Service List ADR SPIFFE SPIFFE ID X.509 SVID JWT SVID Turtle book","title":"References"},{"location":"design/legacy-design/","text":"Legacy Design Documents Name/Link Short Description Registry Abstraction Decouple EdgeX services from Consul device-service/Discovery Dynamically discover new devices","title":"Legacy Design Documents"},{"location":"design/legacy-design/#legacy-design-documents","text":"Name/Link Short Description Registry Abstraction Decouple EdgeX services from Consul device-service/Discovery Dynamically discover new devices","title":"Legacy Design Documents"},{"location":"design/legacy-design/device-service/discovery/","text":"Dynamic Device Discovery Overview Some device protocols allow for devices to be discovered automatically. A Device Service may include a capability for discovering devices and creating the corresponding Device objects within EdgeX. A framework for doing so will be implemented in the Device Service SDKs. 
The discovery process will operate as follows: Discovery is triggered either on an internal timer or by a call to a REST endpoint The SDK will call a function provided by the DS implementation to request a device scan The implementation calls back to the SDK with details of devices which it has found The SDK filters these devices against a set of acceptance criteria The SDK adds accepted devices in core-metadata. These are now available in the EdgeX system Triggering Discovery A boolean configuration value Device/Discovery/Enabled defaults to false. If this value is set to true, and the DS implementation supports discovery, discovery is enabled. The SDK will respond to POST requests on the /discovery endpoint. No content is required in the request. This call will return one of the following codes: 202: discovery has been triggered or is already running. The response should indicate which, and contain the correlation id that will be used by any resulting requests for device addition 423: the service is locked (admin state) or disabled (operating state) 500: unknown or unanticipated issues exist 501: discovery is not supported by this protocol implementation 503: discovery is disabled by configuration In each of the failure cases a meaningful error message should be returned. In the case where discovery is triggered, the discovery process will run in a new thread or goroutine, so that the REST call may return immediately. An integer configuration value Device/Discovery/Interval defaults to zero. If this value is set to a positive value, and discovery is enabled, the discovery process will be triggered at the specified interval (in seconds). Finding Devices When discovery is triggered, the SDK calls the implementation function provided by the Device Service. This should perform whatever protocol-specific procedure is necessary to find devices, and pass these devices into the SDK by calling the SDK's filtered device addition function. 
Note: The implementation should call back for every device found. The SDK is to take responsibility for filtering out devices which have already been added. The information required for a found device is as follows: An autogenerated device name The Protocol Properties of the device Optionally, a description string Optionally, a list of label strings The filtered device addition function will take as an argument a collection of structs containing the above data. An implementation may choose to make one call per discovered device, but implementors are encouraged to batch the devices if practical, as in future EdgeX versions it will be possible for the SDK to create all required new devices in a single call to core-metadata. Rationale: An alternative design would have the implementation function return the collection of discovered devices to the SDK. Using a callback mechanism instead has the following advantages: Allows for asynchronous operation. In this mode the DS implementation will initiate discovery and return immediately. For example discovery may be initiated by sending a broadcast packet. Devices will then send return packets indicating their existence. The thread handling inbound network traffic can, on receipt of such packets, call the filtered device addition function directly. Allows DS implementations where devices self-announce to call the filtered device addition function independent of the discovery process Filtered Device Addition The filter criteria for discovered devices are represented by Provision Watchers. 
A Provision Watcher contains the following fields: Identifiers : A set of name-value pairs against which a new device's ProtocolProperties are matched BlockingIdentifiers : A further set of name-value pairs which are also matched against a new device's ProtocolProperties Profile : The name of a DeviceProfile which should be assigned to new devices which pass this ProvisionWatcher AdminState : The initial Administrative State for new devices which pass this ProvisionWatcher A candidate new device passes a ProvisionWatcher if all of the Identifiers match, and none of the BlockingIdentifiers . For devices with multiple Device.Protocols , each Device.Protocol is considered separately. A pass (as described above) on any of the protocols results in the device being added. The values specified in Identifiers are regular expressions. Note: If a discovered Device is manually removed from EdgeX, it will be necessary to adjust the ProvisionWatcher via which it was added, either by making the Identifiers more specific or by adding BlockingIdentifiers , otherwise the Device will be re-added the next time Discovery is initiated. Note: ProvisionWatchers are stored in core-metadata. A facility for managing ProvisionWatchers is needed, e.g. edgex-cli could be extended","title":"Discovery"},{"location":"design/legacy-design/device-service/discovery/#dynamic-device-discovery","text":"","title":"Dynamic Device Discovery"},{"location":"design/legacy-design/device-service/discovery/#overview","text":"Some device protocols allow for devices to be discovered automatically. A Device Service may include a capability for discovering devices and creating the corresponding Device objects within EdgeX. A framework for doing so will be implemented in the Device Service SDKs. 
The discovery process will operate as follows: Discovery is triggered either on an internal timer or by a call to a REST endpoint The SDK will call a function provided by the DS implementation to request a device scan The implementation calls back to the SDK with details of devices which it has found The SDK filters these devices against a set of acceptance criteria The SDK adds accepted devices in core-metadata. These are now available in the EdgeX system","title":"Overview"},{"location":"design/legacy-design/device-service/discovery/#triggering-discovery","text":"A boolean configuration value Device/Discovery/Enabled defaults to false. If this value is set to true, and the DS implementation supports discovery, discovery is enabled. The SDK will respond to POST requests on the /discovery endpoint. No content is required in the request. This call will return one of the following codes: 202: discovery has been triggered or is already running. The response should indicate which, and contain the correlation id that will be used by any resulting requests for device addition 423: the service is locked (admin state) or disabled (operating state) 500: unknown or unanticipated issues exist 501: discovery is not supported by this protocol implementation 503: discovery is disabled by configuration In each of the failure cases a meaningful error message should be returned. In the case where discovery is triggered, the discovery process will run in a new thread or goroutine, so that the REST call may return immediately. An integer configuration value Device/Discovery/Interval defaults to zero. If this value is set to a positive value, and discovery is enabled, the discovery process will be triggered at the specified interval (in seconds).","title":"Triggering Discovery"},{"location":"design/legacy-design/device-service/discovery/#finding-devices","text":"When discovery is triggered, the SDK calls the implementation function provided by the Device Service. 
This should perform whatever protocol-specific procedure is necessary to find devices, and pass these devices into the SDK by calling the SDK's filtered device addition function. Note: The implementation should call back for every device found. The SDK is to take responsibility for filtering out devices which have already been added. The information required for a found device is as follows: An autogenerated device name The Protocol Properties of the device Optionally, a description string Optionally, a list of label strings The filtered device addition function will take as an argument a collection of structs containing the above data. An implementation may choose to make one call per discovered device, but implementors are encouraged to batch the devices if practical, as in future EdgeX versions it will be possible for the SDK to create all required new devices in a single call to core-metadata. Rationale: An alternative design would have the implementation function return the collection of discovered devices to the SDK. Using a callback mechanism instead has the following advantages: Allows for asynchronous operation. In this mode the DS implementation will initiate discovery and return immediately. For example discovery may be initiated by sending a broadcast packet. Devices will then send return packets indicating their existence. The thread handling inbound network traffic can, on receipt of such packets, call the filtered device addition function directly. Allows DS implementations where devices self-announce to call the filtered device addition function independent of the discovery process","title":"Finding Devices"},{"location":"design/legacy-design/device-service/discovery/#filtered-device-addition","text":"The filter criteria for discovered devices are represented by Provision Watchers. 
A Provision Watcher contains the following fields: Identifiers : A set of name-value pairs against which a new device's ProtocolProperties are matched BlockingIdentifiers : A further set of name-value pairs which are also matched against a new device's ProtocolProperties Profile : The name of a DeviceProfile which should be assigned to new devices which pass this ProvisionWatcher AdminState : The initial Administrative State for new devices which pass this ProvisionWatcher A candidate new device passes a ProvisionWatcher if all of the Identifiers match, and none of the BlockingIdentifiers . For devices with multiple Device.Protocols , each Device.Protocol is considered separately. A pass (as described above) on any of the protocols results in the device being added. The values specified in Identifiers are regular expressions. Note: If a discovered Device is manually removed from EdgeX, it will be necessary to adjust the ProvisionWatcher via which it was added, either by making the Identifiers more specific or by adding BlockingIdentifiers , otherwise the Device will be re-added the next time Discovery is initiated. Note: ProvisionWatchers are stored in core-metadata. A facility for managing ProvisionWatchers is needed, e.g. edgex-cli could be extended","title":"Filtered Device Addition"},{"location":"design/legacy-requirements/","text":"Legacy Requirements Name/Link Short Description Device Service Device Service SDK required functionality","title":"Legacy Requirements"},{"location":"design/legacy-requirements/#legacy-requirements","text":"Name/Link Short Description Device Service Device Service SDK required functionality","title":"Legacy Requirements"},{"location":"design/legacy-requirements/device-service/","text":"Device SDK Required Functionality Overview This document sets out the required functionality of a Device SDK other than the implementation of its REST API (see ADR 0011 ) and the Dynamic Discovery mechanism (see Discovery ). 
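The Provision Watcher pass/fail rule above (all Identifiers must match as regular expressions; any BlockingIdentifiers match rejects; each protocol is considered separately) can be sketched in Go. The struct and its matching semantics are a sketch of the described behavior, not the core-metadata implementation; in particular, BlockingIdentifiers are compared as exact values here since the text only states that Identifiers are regular expressions:

```go
package main

import (
	"fmt"
	"regexp"
)

// ProvisionWatcher mirrors the fields listed above.
type ProvisionWatcher struct {
	Identifiers         map[string]string // regular expressions; all must match
	BlockingIdentifiers map[string]string // exact values here (an assumption); any match rejects
	Profile             string            // DeviceProfile assigned to passing devices
	AdminState          string            // initial Administrative State
}

// matchesProtocol reports whether one protocol's properties pass the watcher.
func (pw ProvisionWatcher) matchesProtocol(props map[string]string) bool {
	for key, expr := range pw.Identifiers {
		v, ok := props[key]
		if !ok {
			return false
		}
		if matched, err := regexp.MatchString(expr, v); err != nil || !matched {
			return false
		}
	}
	for key, blocked := range pw.BlockingIdentifiers {
		if v, ok := props[key]; ok && v == blocked {
			return false
		}
	}
	return true
}

// Passes considers each protocol separately; any passing protocol admits the device.
func (pw ProvisionWatcher) Passes(protocols map[string]map[string]string) bool {
	for _, props := range protocols {
		if pw.matchesProtocol(props) {
			return true
		}
	}
	return false
}

func main() {
	pw := ProvisionWatcher{
		Identifiers:         map[string]string{"address": `^10\.0\.0\.[0-9]+$`},
		BlockingIdentifiers: map[string]string{"address": "10.0.0.1"},
		Profile:             "example-profile",
	}
	fmt.Println(pw.Passes(map[string]map[string]string{"tcp": {"address": "10.0.0.5"}})) // true
	fmt.Println(pw.Passes(map[string]map[string]string{"tcp": {"address": "10.0.0.1"}})) // false
}
```

This also illustrates the re-addition caveat in the note above: a removed device whose properties still satisfy the Identifiers will pass again on the next discovery run unless the watcher is tightened or a blocking entry is added.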
This functionality is categorised into three areas - actions required at startup, configuration options to be supported, and support for push-style event generation. Startup When the device service is started, in addition to any actions required to support functionality defined elsewhere, the SDK must: Manage the device service's registration in metadata Provide initialization information to the protocol-specific implementation Registration The core-metadata service maintains an extent of device service registrations so that it may route requests relating to particular devices to the correct device service. The SDK should create (on first run) or update its record appropriately. Device service registrations contain the following fields: Name - the name of the device service Description - an optional brief description of the service Labels - optional string labels BaseAddress - URL of the base of the service's REST API The default device service Name is to be hardcoded into every device service implementation. A suffix may be added to this name at runtime by means of commandline option or environment variable. Service names must be unique in a particular EdgeX instance; the suffix mechanism allows for running multiple instances of a given device service. The Description and Labels are configured in the [Service] section of the device service configuration. BaseAddress may be constructed using the [Service]/Host and [Service]/Port entries in the device service configuration. Initialization During startup the SDK must supply to the implementation that part of the service configuration which is specific to the implementation. This configuration is held in the Driver section of the configuration file or registry. The SDK must also supply a logging facility at this stage. This facility should by default emit logs locally (configurable to file or to stdout) but instead should use the optional logging service if the configuration element Logging/EnableRemote is set true . 
Note: the logging service is deprecated and support for it will be removed in EdgeX v2.0.

The implementation, on receipt of its configuration, should perform any necessary initialization of its own. It may return an error in the event of unrecoverable problems; this should cause the service startup itself to fail.

Configuration

Configuration should be supported by the SDK, in accordance with ADR 0005.

Commandline processing

The SDK should handle commandline processing on behalf of the device service. In addition to the common EdgeX service options, the --instance / -i flag should be supported. This specifies a suffix to append to the device service name.

Environment variables

The SDK should also handle environment variables. In addition to the common EdgeX variables, EDGEX_INSTANCE_NAME should, if set, override the --instance setting.

Configuration file and Registry

The SDK should use (or for non-Go implementations, re-implement) the standard mechanisms for obtaining configuration from a file or registry. The configuration parameters to be supported are:

Service section
- Host (String): The hostname to use when registering the service in core-metadata. As such it is used by other services to connect to the device service, and therefore must be resolvable by other services in the EdgeX deployment.
- Port (Int): Port on which to accept the device service's REST API. The assigned port for experimental / in-development device services is 49999.
- Timeout (Int): Time (in milliseconds) to wait between attempts to contact core-data and core-metadata when starting up.
- ConnectRetries (Int): Number of times to attempt to contact core-data and core-metadata when starting up.
- StartupMsg (String): Message to log on successful startup.
- CheckInterval (String): The checking interval to request if registering with Consul. Consul will ping the service at this interval to monitor its liveness.
- ServerBindAddr (String): The interface on which the service's REST server should listen. By default the server listens on the interface to which the Host option resolves. A value of 0.0.0.0 means listen on all available interfaces.

Clients section

Defines the endpoints for other microservices in an EdgeX system. Not required when using the Registry.

Data
- Host (String): Hostname on which to contact the core-data service.
- Port (Int): Port on which to contact the core-data service.

Metadata
- Host (String): Hostname on which to contact the core-metadata service.
- Port (Int): Port on which to contact the core-metadata service.

Device section
- DataTransform (Bool): For enabling/disabling transformations on data between the device and EdgeX. Defaults to true (enabled).
- Discovery/Enabled (Bool): For enabling/disabling device discovery. Defaults to true (enabled).
- Discovery/Interval (Int): Time between automatic discovery runs, in seconds. Defaults to zero (do not run discovery automatically).
- MaxCmdOps (Int): Defines the maximum number of resource operations that can be sent to the driver in a single command.
- MaxCmdResultLen (Int): Maximum string length for command results returned from the driver.
- UpdateLastConnected (Bool): If true, update the LastConnected attribute of a device whenever it is successfully accessed (read or write). Defaults to false.

Logging section
- LogLevel (String): Sets the logging level. Available settings in order of increasing severity are: TRACE, DEBUG, INFO, WARNING, ERROR.

Driver section

This section is for options specific to the protocol driver. Any configuration specified here will be passed to the driver implementation during initialization.

Push Events

The SDK should implement methods for generating Events other than on receipt of device GET requests. The AutoEvent mechanism provides for generating Events at fixed intervals. The asynchronous event queue enables the device service to generate events at arbitrary times, according to implementation-specific logic.
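As an illustrative sketch of the configuration sections described above, a device service's TOML configuration file might look like the following. The values here are examples chosen for this note, not SDK defaults, and the exact section layout may differ between SDK versions.

```toml
[Service]
Host = "edgex-device-example"   # must be resolvable by other EdgeX services
Port = 49999                    # assigned port for in-development device services
Timeout = 5000                  # milliseconds between startup connection attempts
ConnectRetries = 10
StartupMsg = "device example started"
CheckInterval = "10s"           # Consul health-check interval
ServerBindAddr = "0.0.0.0"      # listen on all available interfaces

[Clients]
  [Clients.Data]
  Host = "localhost"
  Port = 59880

  [Clients.Metadata]
  Host = "localhost"
  Port = 59881

[Device]
DataTransform = true
MaxCmdOps = 128
UpdateLastConnected = false

  [Device.Discovery]
  Enabled = true
  Interval = 0                  # zero disables automatic discovery runs

[Logging]
LogLevel = "INFO"

[Driver]
# Options here are protocol-specific and passed verbatim to the driver
# implementation during initialization; CustomSetting is a made-up example.
CustomSetting = "example"
```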
AutoEvents

Each device may have as part of its definition in Metadata a number of AutoEvents associated with it. An AutoEvent has the following fields:
- resource: the name of a deviceResource or deviceCommand indicating what to read.
- frequency: a string indicating the time to wait between reading events, expressed as an integer followed by units of ms, s, m or h.
- onchange: a boolean; if set to true, only generate new events if one or more of the contained readings has changed since the last event.

The device SDK should schedule device readings from the implementation according to these AutoEvent definitions. It should use the same logic as it would if the readings were being requested via REST.

Asynchronous Event Queue

The SDK should provide a mechanism whereby the implementation may submit device readings at any time without blocking. This may be done in a manner appropriate to the implementation language; for example, the Go SDK provides a channel on which readings may be pushed, and the C SDK provides a function which submits readings to a workqueue.
EdgeX Examples

In addition to the examples listed in this section of the documentation, you will find other examples in the EdgeX Examples Repository. The tabs below provide a listing (which may be partial, based on the latest updates) for reference.

Application Services

See App Service Examples for a listing of custom and configurable application service examples.

Deployment
- Kubernetes: Github - examples, deployment
- Raspberry Pi 4: Github - examples, raspberry-pi-4
- Cloud deployments: Github - examples, cloud deployment templates

Device Services
- Random Number Device Service (simulation): Github - examples, device-random
- Grove Device Service in C: Github - examples, device-grove-c

Security
- Docker Swarm, remote device service via overlay network: Github - Docker Swarm
- SSH Tunneling, remote device service via SSH tunneling: Github - SSH Tunneling

Warning: Not all the examples in the EdgeX Examples repository are available for all EdgeX releases.
Check the documentation for details.

App Service Examples

The following is a list of examples currently available that demonstrate various ways the Application Functions SDK or App Service Configurable can be used. All of the examples can be found in the edgex-examples repo. They focus on how to leverage the various built-in functions mentioned above, as well as how to write your own when the SDK does not provide what is needed.
- Simple Filter XML: Demonstrates filtering of Events by Device names and transforming data to XML
- Simple Filter XML HTTP: Same example as #1, but the result is published to an HTTP endpoint
- Simple Filter XML MQTT: Same example as #1, but the result is published to an MQTT Broker
- Simple CBOR Filter: Demonstrates filtering of Events by Resource names for an Event that is CBOR encoded containing a binary reading
- Advanced Filter Convert Publish: Demonstrates filtering of Events by Resource names, a custom function to convert the reading, and then publishing the modified Event back to the MessageBus under a different topic
- Advanced Target Type: Demonstrates use of a custom Target Type and use of the HTTP Trigger
- Cloud Export MQTT: Demonstrates a simple custom Cloud transform and exporting to a Cloud MQTT Broker
- Cloud Event Transform: Demonstrates custom transforms that convert Events/Readings to and from Cloud Events
- Send Command: Demonstrates sending commands to a Device via the Command Client
- Secrets: Demonstrates how to retrieve secrets from the service SecretStore
- Custom Trigger: Demonstrates how to create and use a custom trigger
- Fledge Export: Demonstrates custom conversion of Events/Readings to Fledge format and then exporting to a Fledge service REST endpoint
- Influxdb Export: Demonstrates custom conversion of Events/Readings to InfluxDB time-series format and then exporting to InfluxDB via MQTT
- Json Logic: Demonstrates using the built-in JSONLogic Evaluate pipeline function
- IBM Export Profile: Demonstrates a custom App Service Configurable profile for exporting to IBM Cloud
Command Devices with eKuiper Rules Engine

Overview

This document describes how to actuate a device with rules triggered by the eKuiper rules engine.
To make the example simple, the virtual device device-virtual is used as the actuated device. The eKuiper rules engine analyzes the data sent from the device-virtual services, and then sends a command to the virtual device based on a rule firing in eKuiper as a result of that analysis. It should be noted that an application service is used to route core data through the rules engine.

Use Case Scenarios

Rules will be created in eKuiper to watch for two circumstances:
- Monitor for events coming from the Random-UnsignedInteger-Device device (one of the default virtual device managed devices), and if a uint8 reading value larger than 20 is found in the event, then send a command to the Random-Boolean-Device device to start generating random numbers (specifically, set the random generation bool to true).
- Monitor for events coming from the Random-Integer-Device device (another of the default virtual device managed devices), and if the average of the int8 reading values (within 20 seconds) is larger than 0, then send a command to the Random-Boolean-Device device to stop generating random numbers (specifically, set the random generation bool to false).

These use case scenarios do not have any real business meaning, but easily demonstrate the features of EdgeX automatic actuation accomplished via the eKuiper rule engine.

Prerequisite Knowledge

This document will not cover basic operations of EdgeX or LF Edge eKuiper. Readers should have basic knowledge of:
- Getting and starting EdgeX. Refer to Quick Start for how to get and start EdgeX with the virtual device service.
- Running the eKuiper Rules Engine. Refer to the EdgeX eKuiper Rule Engine Tutorial to understand the basics of eKuiper and EdgeX.

Start eKuiper and Create an EdgeX Stream

Make sure you have read the EdgeX eKuiper Rule Engine Tutorial and successfully run eKuiper with EdgeX. First create a stream that can consume streaming data from the EdgeX application service (rules engine profile).
This step is not required if you already finished the EdgeX eKuiper Rule Engine Tutorial.

curl -X POST \
  http://$ekuiper_docker:59720/streams \
  -H 'Content-Type: application/json' \
  -d '{"sql": "create stream demo() WITH (FORMAT=\"JSON\", TYPE=\"edgex\")"}'

Get and Test the Command URL

Since both use case scenario rules will send commands to the Random-Boolean-Device virtual device, use the curl request below to get a list of available commands for this device.

curl http://127.0.0.1:59882/api/v2/device/name/Random-Boolean-Device | jq

It should print results like those below.

{
  "apiVersion": "v2",
  "statusCode": 200,
  "deviceCoreCommand": {
    "deviceName": "Random-Boolean-Device",
    "profileName": "Random-Boolean-Device",
    "coreCommands": [
      {
        "name": "WriteBoolValue",
        "set": true,
        "path": "/api/v2/device/name/Random-Boolean-Device/WriteBoolValue",
        "url": "http://edgex-core-command:59882",
        "parameters": [
          { "resourceName": "Bool", "valueType": "Bool" },
          { "resourceName": "EnableRandomization_Bool", "valueType": "Bool" }
        ]
      },
      {
        "name": "WriteBoolArrayValue",
        "set": true,
        "path": "/api/v2/device/name/Random-Boolean-Device/WriteBoolArrayValue",
        "url": "http://edgex-core-command:59882",
        "parameters": [
          { "resourceName": "BoolArray", "valueType": "BoolArray" },
          { "resourceName": "EnableRandomization_BoolArray", "valueType": "Bool" }
        ]
      },
      {
        "name": "Bool",
        "get": true,
        "set": true,
        "path": "/api/v2/device/name/Random-Boolean-Device/Bool",
        "url": "http://edgex-core-command:59882",
        "parameters": [
          { "resourceName": "Bool", "valueType": "Bool" }
        ]
      },
      {
        "name": "BoolArray",
        "get": true,
        "set": true,
        "path": "/api/v2/device/name/Random-Boolean-Device/BoolArray",
        "url": "http://edgex-core-command:59882",
        "parameters": [
          { "resourceName": "BoolArray", "valueType": "BoolArray" }
        ]
      }
    ]
  }
}

From this output, look for the URL associated with the PUT command (the first URL listed). This is the command eKuiper will use to call on the device. There are two parameters for this command:
- Bool: Sets the returned value when other services want to get device data. This parameter is used only when EnableRandomization_Bool is set to false.
- EnableRandomization_Bool: Enables/disables the randomized generation of bool values. If this value is set to true, then the first parameter is ignored.

You can test calling this command with its parameters using curl as shown below.

curl -X PUT \
  http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue \
  -H 'Content-Type: application/json' \
  -d '{"Bool":"true", "EnableRandomization_Bool": "true"}'

Create rules

Now that you have EdgeX and eKuiper running, the EdgeX stream defined, and you know the command to actuate Random-Boolean-Device, it is time to build the eKuiper rules.

The first rule

Again, the first rule is to monitor for events coming from the Random-UnsignedInteger-Device device (one of the default virtual device managed devices), and if a uint8 reading value larger than 20 is found in the event, then send the command to the Random-Boolean-Device device to start generating random numbers (specifically, set the random generation bool to true). Given the URL and parameters to the command, below is the curl command to declare the first rule in eKuiper.
curl -X POST \
  http://$ekuiper_server:59720/rules \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "rule1",
    "sql": "SELECT uint8 FROM demo WHERE uint8 > 20",
    "actions": [
      {
        "rest": {
          "url": "http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue",
          "method": "put",
          "dataTemplate": "{\"Bool\":\"true\", \"EnableRandomization_Bool\": \"true\"}",
          "sendSingle": true
        }
      },
      { "log": {} }
    ]
  }'

The second rule

The second rule is to monitor for events coming from the Random-Integer-Device device (another of the default virtual device managed devices), and if the average of the int8 reading values (within 20 seconds) is larger than 0, then send a command to the Random-Boolean-Device device to stop generating random numbers (specifically, set the random generation bool to false). Here is the curl request to set up the second rule in eKuiper. The same command URL is used, as the same device action (Random-Boolean-Device's PUT bool command) is being actuated, but with different parameters.

curl -X POST \
  http://$ekuiper_server:59720/rules \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "rule2",
    "sql": "SELECT avg(int8) AS avg_int8 FROM demo WHERE int8 != nil GROUP BY TUMBLINGWINDOW(ss, 20) HAVING avg(int8) > 0",
    "actions": [
      {
        "rest": {
          "url": "http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue",
          "method": "put",
          "dataTemplate": "{\"Bool\":\"false\", \"EnableRandomization_Bool\": \"false\"}",
          "sendSingle": true
        }
      },
      { "log": {} }
    ]
  }'

Watch the eKuiper Logs

Both rules are now created in eKuiper. eKuiper is busy analyzing the event data coming from the virtual devices, looking for readings that match the rules you created. You can watch the edgex-kuiper container logs for the rule triggering and command execution.
docker logs edgex-kuiper Explore the Results You can also explore the eKuiper analysis that caused the commands to be sent to the service. To see the the data from the analysis, use the SQL below to query eKuiper filtering data. SELECT int8 , \"true\" AS randomization FROM demo WHERE uint8 > 20 The output of the SQL should look similar to the results below. [{ \"int8\" : -75 , \"randomization\" : \"true\" }] Extended Reading Use these resouces to learn more about the features of LF Edge eKuiper. eKuiper Github code repository eKuiper reference guide","title":"Command Devices with eKuiper Rules Engine"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#command-devices-with-ekuiper-rules-engine","text":"","title":"Command Devices with eKuiper Rules Engine"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#overview","text":"This document describes how to actuate a device with rules trigger by the eKuiper rules engine. To make the example simple, the virtual device device-virtual is used as the actuated device. The eKuiper rules engine analyzes the data sent from device-virtual services, and then sends a command to virtual device based a rule firing in eKuiper based on that analysis. It should be noted that an application service is used to route core data through the rules engine.","title":"Overview"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#use-case-scenarios","text":"Rules will be created in eKuiper to watch for two circumstances: monitor for events coming from the Random-UnsignedInteger-Device device (one of the default virtual device managed devices), and if a uint8 reading value is found larger than 20 in the event, then send a command to Random-Boolean-Device device to start generating random numbers (specifically - set random generation bool to true). 
monitor for events coming from the Random-Integer-Device device (another of the default virtual device managed devices), and if the average for int8 reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device device to stop generating random numbers (specifically - set random generation bool to false). These use case scenarios do not have any real business meaning, but easily demonstrate the features of EdgeX automatic actuation accomplished via the eKuiper rule engine.","title":"Use Case Scenarios"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#prerequisite-knowledge","text":"This document will not cover basic operations of EdgeX or LF Edge eKuiper. Readers should have basic knowledge of: Get and start EdgeX. Refer to Quick Start for how to get and start EdgeX with the virtual device service. Run the eKuiper Rules Engine. Refer to EdgeX eKuiper Rule Engine Tutorial to understand the basics of eKuiper and EdgeX.","title":"Prerequisite Knowledge"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#start-ekuiper-and-create-an-edgex-stream","text":"Make sure you read the EdgeX eKuiper Rule Engine Tutorial and successfully run eKuiper with EdgeX. First create a stream that can consume streaming data from the EdgeX application service (rules engine profile). This step is not required if you already finished the EdgeX eKuiper Rule Engine Tutorial . curl -X POST \\ http:// $ekuiper_docker :59720/streams \\ -H 'Content-Type: application/json' \\ -d '{\"sql\": \"create stream demo() WITH (FORMAT=\\\"JSON\\\", TYPE=\\\"edgex\\\")\"}'","title":"Start eKuiper and Create an EdgeX Stream"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#get-and-test-the-command-url","text":"Since both use case scenario rules will send commands to the Random-Boolean-Device virtual device, use the curl request below to get a list of available commands for this device. 
curl http://127.0.0.1:59882/api/v2/device/name/Random-Boolean-Device | jq It should print results like those below. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"deviceCoreCommand\" : { \"deviceName\" : \"Random-Boolean-Device\" , \"profileName\" : \"Random-Boolean-Device\" , \"coreCommands\" : [ { \"name\" : \"WriteBoolValue\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Bool\" , \"valueType\" : \"Bool\" }, { \"resourceName\" : \"EnableRandomization_Bool\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"WriteBoolArrayValue\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/WriteBoolArrayValue\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"BoolArray\" , \"valueType\" : \"BoolArray\" }, { \"resourceName\" : \"EnableRandomization_BoolArray\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"Bool\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/Bool\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Bool\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"BoolArray\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Boolean-Device/BoolArray\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"BoolArray\" , \"valueType\" : \"BoolArray\" } ] } ] } } From this output, look for the URL associated to the PUT command (the first URL listed). This is the command eKuiper will use to call on the device. There are two parameters for this command: Bool : Set the returned value when other services want to get device data. The parameter will be used only when EnableRandomization_Bool is set to false. EnableRandomization_Bool : Enable/disable the randomization generation of bool values. 
If this value is set to true, then the 1st parameter will be ignored. You can test calling this command with its parameters using curl as shown below. curl -X PUT \\ http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue \\ -H 'Content-Type: application/json' \\ -d '{\"Bool\":\"true\", \"EnableRandomization_Bool\": \"true\"}'","title":"Get and Test the Command URL"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#create-rules","text":"Now that you have EdgeX and eKuiper running, the EdgeX stream defined, and you know the command to actuate Random-Boolean-Device , it is time to build the eKuiper rules.","title":"Create rules"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#the-first-rule","text":"Again, the 1st rule is to monitor for events coming from the Random-UnsignedInteger-Device device (one of the default virtual device managed devices), and if a uint8 reading value is found larger than 20 in the event, then send the command to Random-Boolean-Device device to start generating random numbers (specifically - set random generation bool to true). Given the URL and parameters to the command, below is the curl command to declare the first rule in eKuiper. 
curl -X POST \\ http:// $ekuiper_server :59720/rules \\ -H 'Content-Type: application/json' \\ -d '{ \"id\": \"rule1\", \"sql\": \"SELECT uint8 FROM demo WHERE uint8 > 20\", \"actions\": [ { \"rest\": { \"url\": \"http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\", \"method\": \"put\", \"dataTemplate\": \"{\\\"Bool\\\":\\\"true\\\", \\\"EnableRandomization_Bool\\\": \\\"true\\\"}\", \"sendSingle\": true } }, { \"log\":{} } ] }'","title":"The first rule"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#the-second-rule","text":"The 2nd rule is to monitor for events coming from the Random-Integer-Device device (another of the default virtual device managed devices), and if the average for int8 reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device device to stop generating random numbers (specifically - set random generation bool to false). Here is the curl request to setup the second rule in eKuiper. The same command URL is used as the same device action ( Random-Boolean-Device's PUT bool command ) is being actuated, but with different parameters. curl -X POST \\ http:// $ekuiper_server :59720/rules \\ -H 'Content-Type: application/json' \\ -d '{ \"id\": \"rule2\", \"sql\": \"SELECT avg(int8) AS avg_int8 FROM demo WHERE int8 != nil GROUP BY TUMBLINGWINDOW(ss, 20) HAVING avg(int8) > 0\", \"actions\": [ { \"rest\": { \"url\": \"http://edgex-core-command:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue\", \"method\": \"put\", \"dataTemplate\": \"{\\\"Bool\\\":\\\"false\\\", \\\"EnableRandomization_Bool\\\": \\\"false\\\"}\", \"sendSingle\": true } }, { \"log\":{} } ] }'","title":"The second rule"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#watch-the-ekuiper-logs","text":"Both rules are now created in eKuiper. eKuiper is busy analyzing the event data coming for the virtual devices looking for readings that match the rules you created. 
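The dataTemplate fields in the two rules above are escaped twice (once for the shell-quoted curl payload and once inside the rule's JSON), which makes them hard to read. As a sketch, with all escaping stripped, the literal request body rule1's REST sink PUTs to core-command is shown below; rule2 sends the same shape with both values set to "false".

```shell
# The request body that rule1's REST action sends once both layers of
# escaping are removed; rule2 uses "false" for both fields.
body='{"Bool":"true", "EnableRandomization_Bool": "true"}'
echo "$body"
```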
You can watch the edgex-kuiper container logs for the rule triggering and command execution. docker logs edgex-kuiper","title":"Watch the eKuiper Logs"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#explore-the-results","text":"You can also explore the eKuiper analysis that caused the commands to be sent to the service. To see the data from the analysis, use the SQL below to query eKuiper filtering data. SELECT int8 , \"true\" AS randomization FROM demo WHERE uint8 > 20 The output of the SQL should look similar to the results below. [{ \"int8\" : -75 , \"randomization\" : \"true\" }]","title":"Explore the Results"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#extended-reading","text":"Use these resources to learn more about the features of LF Edge eKuiper. eKuiper Github code repository eKuiper reference guide","title":"Extended Reading"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/","text":"MQTT EdgeX - Jakarta Release Overview In this example, we use a script to simulate a custom-defined MQTT device, instead of a real device. This provides a straight-forward way to test the device-mqtt features using an MQTT-broker. Note Multi-Level Topics move metadata (i.e. device name, command name,... etc) from the payload into the MQTT topics. Notice the sections marked with Using Multi-level Topic: for relevant input/output throughout this example. Prepare the Custom Device Configuration In this section, we create folders that contain files required for deployment of a customized device configuration to work with the existing device service: - custom-config |- devices |- my.custom.device.config.toml |- profiles |- my.custom.device.profile.yml Device Configuration Use this configuration file to define devices and schedule jobs. device-mqtt generates a relative instance on start-up.
Create the device configuration file, named my.custom.device.config.toml , as shown below: # Pre-define Devices [[DeviceList]] Name = \"my-custom-device\" ProfileName = \"my-custom-device-profile\" Description = \"MQTT device is created for test purpose\" Labels = [ \"MQTT\" , \"test\" ] [DeviceList.Protocols] [DeviceList.Protocols.mqtt] # Comment out/remove below to use multi-level topics CommandTopic = \"CommandTopic\" # Uncomment below to use multi-level topics # CommandTopic = \"command/my-custom-device\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"message\" Note CommandTopic is used to publish the GET or SET command request Device Profile The DeviceProfile defines the device's values and operation method, which can be Read or Write. Create a device profile, named my.custom.device.profile.yml , with the following content: name : \"my-custom-device-profile\" manufacturer : \"iot\" model : \"MQTT-DEVICE\" description : \"Test device profile\" labels : - \"mqtt\" - \"test\" deviceResources : - name : randnum isHidden : true description : \"device random number\" properties : valueType : \"Float32\" readWrite : \"R\" - name : ping isHidden : true description : \"device awake\" properties : valueType : \"String\" readWrite : \"R\" - name : message isHidden : false description : \"device message\" properties : valueType : \"String\" readWrite : \"RW\" - name : json isHidden : false description : \"JSON message\" properties : valueType : \"Object\" readWrite : \"RW\" mediaType : \"application/json\" deviceCommands : - name : values readWrite : \"R\" isHidden : false resourceOperations : - { deviceResource : \"randnum\" } - { deviceResource : \"ping\" } - { deviceResource : \"message\" } Prepare docker-compose file Clone edgex-compose $ git clone git@github.com:edgexfoundry/edgex-compose.git $ git checkout main !!! note Use main branch until jakarta is released. 
Generate the docker-compose.yml file (notice this includes mqtt-broker) $ cd edgex-compose/compose-builder $ make gen ds-mqtt mqtt-broker no-secty ui Check the generated file $ ls | grep 'docker-compose.yml' docker-compose.yml Mount the custom-config Open the edgex-compose/compose-builder/docker-compose.yml file and then add volumes path and environment as shown below: # docker-compose.yml device-mqtt : ... environment : DEVICE_DEVICESDIR : /custom-config/devices DEVICE_PROFILESDIR : /custom-config/profiles ... volumes : - /path/to/custom-config:/custom-config ... Note Replace the /path/to/custom-config in the example with the correct path Enabling Multi-Level Topics To use the optional setting for MQTT device services with multi-level topics, make the following changes in the device service configuration files: There are two ways to set the environment variables for multi-level topics. If the code is built with compose builder, modify the docker-compose.yml file in edgex-compose/compose-builder: # docker-compose.yml device-mqtt : ... environment : MQTTBROKERINFO_INCOMINGTOPIC : \"incoming/data/#\" MQTTBROKERINFO_RESPONSETOPIC : \"command/response/#\" MQTTBROKERINFO_USETOPICLEVELS : \"true\" ... Otherwise if the device service is built locally, modify these lines in configuration.toml : # Comment out/remove when using multi-level topics #IncomingTopic = \"DataTopic\" #ResponseTopic = \"ResponseTopic\" #UseTopicLevels = false # Uncomment to use multi-level topics IncomingTopic = \"incoming/data/#\" ResponseTopic = \"command/response/#\" UseTopicLevels = true Note If you have previously run Device MQTT locally, you will need to remove the services configuration from Consul. 
This can be done with: curl --request DELETE http://localhost:8500/v1/kv/edgex/devices/2.0/device-mqtt?recurse=true In my.custom.device.config.toml : [DeviceList.Protocols] [DeviceList.Protocols.mqtt] # Comment out/remove below to use multi-level topics # CommandTopic = \"CommandTopic\" # Uncomment below to use multi-level topics CommandTopic = \"command/my-custom-device\" Note If you have run Device-MQTT before, you will need to delete the previously registered device(s) by replacing <device-name> in the command below: curl --request DELETE http://localhost:59881/api/v2/device/name/<device-name> where <device-name> can be found by running: curl --request GET http://localhost:59881/api/v2/device/all | json_pp Start EdgeX Foundry on Docker Deploy EdgeX using the following commands: $ cd edgex-compose/compose-builder $ docker-compose pull $ docker-compose up -d Using a MQTT Device Simulator Overview Expected Behaviors Using the detailed script below as a simulator, there are three behaviors: Publish random number data every 15 seconds. Default (single-level) Topic: The simulator publishes the data to the MQTT broker with topic DataTopic and the message is similar to the following: {\"name\":\"my-custom-device\", \"cmd\":\"randnum\", \"method\":\"get\", \"randnum\":4161.3549} Using Multi-level Topic: The simulator publishes the data to the MQTT broker with topic incoming/data/my-custom-device/randnum and the message is similar to the following: {\"randnum\":4161.3549} Receive the reading request, then return the response.
Default (single-level) Topic: The simulator receives the request from the MQTT broker, the topic is CommandTopic and the message is similar to the following: {\"cmd\":\"randnum\", \"method\":\"get\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\"} The simulator returns the response to the MQTT broker, the topic is ResponseTopic and the message is similar to the following: {\"cmd\":\"randnum\", \"method\":\"get\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\", \"randnum\":42.0} Using Multi-level Topic: The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f and message returned is similar to the following: {\"randnum\":\"42.0\"} The simulator returns the response to the MQTT broker, the topic is command/response/# and the message is similar to the following: {\"randnum\":\"4.20e+01\"} Receive the set request, then change the device value. Default (single-level) Topic: The simulator receives the request from the MQTT broker, the topic is CommandTopic and the message is similar to the following: {\"cmd\":\"message\", \"method\":\"set\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\", \"message\":\"test message...\"} The simulator changes the device value and returns the response to the MQTT broker, the topic is ResponseTopic and the message is similar to the following: {\"cmd\":\"message\", \"method\":\"set\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\"} Using Multi-level Topic: The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/testmessage/set/293d7a00-66e1-4374-ace0-07520103c95f and the message is similar to the following: {\"message\":\"test message...\"} The simulator changes the device value and returns the response to the MQTT broker, the topic is command/response/# and the message is similar to the following: {\"message\":\"test message...\"} Creating and Running a MQTT Device Simulator To implement the simulated 
custom-defined MQTT device, create a javascript, named mock-device.js , with the following content: Default (single-level) Topic: function getRandomFloat ( min , max ) { return Math . random () * ( max - min ) + min ; } const deviceName = \"my-custom-device\" ; let message = \"test-message\" ; let json = { \"name\" : \"My JSON\" }; // DataSender sends async value to MQTT broker every 15 seconds schedule ( '*/15 * * * * *' , ()=>{ let body = { \"name\" : deviceName , \"cmd\" : \"randnum\" , \"randnum\" : getRandomFloat ( 25 , 29 ). toFixed ( 1 ) }; publish ( 'DataTopic' , JSON . stringify ( body )); }); // CommandHandler receives commands and sends response to MQTT broker // 1. Receive the reading request, then return the response // 2. Receive the set request, then change the device value subscribe ( \"CommandTopic\" , ( topic , val ) => { var data = val ; if ( data . method == \"set\" ) { switch ( data . cmd ) { case \"message\" : message = data [ data . cmd ]; break ; case \"json\" : json = data [ data . cmd ]; break ; } } else { switch ( data . cmd ) { case \"ping\" : data . ping = \"pong\" ; break ; case \"message\" : data . message = message ; break ; case \"randnum\" : data . randnum = 12.123 ; break ; case \"json\" : data . json = json ; break ; } } publish ( \"ResponseTopic\" , JSON . stringify ( data )); }); Using Multi-level Topic: function getRandomFloat ( min , max ) { return Math . random () * ( max - min ) + min ; } const deviceName = \"my-custom-device\" ; let message = \"test-message\" ; let json = { \"name\" : \"My JSON\" }; // DataSender sends async value to MQTT broker every 15 seconds schedule ( '*/15 * * * * *' , ()=>{ let body = getRandomFloat ( 25 , 29 ). toFixed ( 1 ); publish ( 'incoming/data/my-custom-device/randnum' , body ); }); // CommandHandler receives commands and sends response to MQTT broker // 1. Receive the reading request, then return the response // 2. 
Receive the set request, then change the device value subscribe ( \"command/my-custom-device/#\" , ( topic , val ) => { const words = topic . split ( '/' ); var cmd = words [ 2 ]; var method = words [ 3 ]; var uuid = words [ 4 ]; var response = {}; var data = val ; if ( method == \"set\" ) { switch ( cmd ) { case \"message\" : message = data [ cmd ]; break ; case \"json\" : json = data [ cmd ]; break ; } } else { switch ( cmd ) { case \"ping\" : response . ping = \"pong\" ; break ; case \"message\" : response . message = message ; break ; case \"randnum\" : response . randnum = 12.123 ; break ; case \"json\" : response . json = json ; break ; } } var sendTopic = \"command/response/\" + uuid ; publish ( sendTopic , JSON . stringify ( response )); }); To run the device simulator, enter the commands shown below with the following changes: $ mv mock-device.js /path/to/mqtt-scripts $ docker run -d --restart=always --name=mqtt-scripts \\ -v /path/to/mqtt-scripts:/scripts \\ dersimn/mqtt-scripts --url mqtt://172.17.0.1 --dir /scripts Note Replace the /path/to/mqtt-scripts in the example mv command with the correct path Execute Commands Now we're ready to run some commands. 
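The multi-level command topic handled by the simulator's subscribe callback above encodes the device name, command, method, and request UUID as path segments (command/<device>/<cmd>/<method>/<uuid>). A minimal sketch of that parsing, using the same index positions as the script's topic.split('/'):

```shell
# Split a multi-level command topic the way the mock-device.js callback does:
# command/<device>/<cmd>/<method>/<uuid>
topic='command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f'
IFS='/' read -r _prefix device cmd method uuid <<EOF
$topic
EOF
echo "device=$device cmd=$cmd method=$method"
```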
Find Executable Commands Use the following query to find executable commands: $ curl http://localhost:59882/api/v2/device/all | json_pp { \"deviceCoreCommands\" : [ { \"profileName\" : \"my-custom-device-profile\" , \"coreCommands\" : [ { \"name\" : \"values\" , \"get\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/values\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"randnum\" , \"valueType\" : \"Float32\" }, { \"resourceName\" : \"ping\" , \"valueType\" : \"String\" }, { \"valueType\" : \"String\" , \"resourceName\" : \"message\" } ] }, { \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"message\" , \"valueType\" : \"String\" } ], \"name\" : \"message\" , \"get\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/message\" , \"set\" : true }, { \"name\" : \"json\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/MQTT-test-device/json\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"json\" , \"valueType\" : \"Object\" } ] } ], \"deviceName\" : \"my-custom-device\" } ], \"apiVersion\" : \"v2\" , \"statusCode\" : 200 } Execute SET Command Execute a SET command according to the url and parameterNames, replacing [host] with the server IP when running the SET command.
$ curl http://localhost:59882/api/v2/device/name/my-custom-device/message \\ -H \"Content-Type:application/json\" -X PUT \\ -d '{\"message\":\"Hello!\"}' Execute GET Command Execute a GET command as follows: $ curl http://localhost:59882/api/v2/device/name/my-custom-device/message | json_pp { \"event\" : { \"origin\" : 1624417689920618131 , \"readings\" : [ { \"resourceName\" : \"message\" , \"binaryValue\" : null , \"profileName\" : \"my-custom-device-profile\" , \"deviceName\" : \"my-custom-device\" , \"id\" : \"a3bb78c5-e76f-49a2-ad9d-b220a86c3e36\" , \"value\" : \"Hello!\" , \"valueType\" : \"String\" , \"origin\" : 1624417689920615828 , \"mediaType\" : \"\" } ], \"sourceName\" : \"message\" , \"deviceName\" : \"my-custom-device\" , \"apiVersion\" : \"v2\" , \"profileName\" : \"my-custom-device-profile\" , \"id\" : \"e0b29735-8b39-44d1-8f68-4d7252e14cc7\" }, \"apiVersion\" : \"v2\" , \"statusCode\" : 200 } Schedule Job The schedule job is defined in the [[DeviceList.AutoEvents]] section of the device configuration file: [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"message\" After the service starts, query core-data's reading API.
The results show that the service auto-executes the command every 30 secs, as shown below: $ curl http://localhost:59880/api/v2/reading/resourceName/message | json_pp { \"statusCode\" : 200 , \"readings\" : [ { \"value\" : \"test-message\" , \"id\" : \"e91b8ca6-c5c4-4509-bb61-bd4b09fe835c\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"resourceName\" : \"message\" , \"origin\" : 1624418361324331392 , \"profileName\" : \"my-custom-device-profile\" , \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"String\" }, { \"mediaType\" : \"\" , \"binaryValue\" : null , \"resourceName\" : \"message\" , \"value\" : \"test-message\" , \"id\" : \"1da58cb7-2bf4-47f0-bbb8-9519797149a2\" , \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"String\" , \"profileName\" : \"my-custom-device-profile\" , \"origin\" : 1624418330822988843 }, ... ], \"apiVersion\" : \"v2\" } Async Device Reading The device-mqtt service subscribes to the DataTopic and waits for the real device to send a value to the MQTT broker; it then parses the value and forwards it to the northbound services.
The data format contains the following values: name = device name cmd = deviceResource name method = get or set cmd = device reading The following results show that the mock device sent the reading every 15 secs: $ curl http://localhost:59880/api/v2/reading/resourceName/randnum | json_pp { \"readings\" : [ { \"origin\" : 1624418475007110946 , \"valueType\" : \"Float32\" , \"deviceName\" : \"my-custom-device\" , \"id\" : \"9b3d337e-8a8a-4a6c-8018-b4908b57abb8\" , \"binaryValue\" : null , \"resourceName\" : \"randnum\" , \"profileName\" : \"my-custom-device-profile\" , \"mediaType\" : \"\" , \"value\" : \"2.630000e+01\" }, { \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"Float32\" , \"id\" : \"06918cbb-ada0-4752-8877-0ef8488620f6\" , \"origin\" : 1624418460007833720 , \"mediaType\" : \"\" , \"profileName\" : \"my-custom-device-profile\" , \"value\" : \"2.570000e+01\" , \"resourceName\" : \"randnum\" , \"binaryValue\" : null }, ... ], \"statusCode\" : 200 , \"apiVersion\" : \"v2\" } MQTT Device Service Configuration MQTT Device Service has the following configurations to implement the MQTT protocol. Configuration Default Value Description MQTTBrokerInfo.Schema tcp The URL schema MQTTBrokerInfo.Host 0.0.0.0 The URL host MQTTBrokerInfo.Port 1883 The URL port MQTTBrokerInfo.Qos 0 Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) MQTTBrokerInfo.KeepAlive 3600 Seconds between client ping when no active data flowing to avoid client being disconnected.
Must be greater than 2 MQTTBrokerInfo.ClientId device-mqtt ClientId to connect to the broker with MQTTBrokerInfo.CredentialsRetryTime 120 The retry times to get the credential MQTTBrokerInfo.CredentialsRetryWait 1 The wait time(seconds) when retry to get the credential MQTTBrokerInfo.ConnEstablishingRetry 10 The retry times to establish the MQTT connection MQTTBrokerInfo.ConnRetryWaitTime 5 The wait time(seconds) when retry to establish the MQTT connection MQTTBrokerInfo.AuthMode none Indicates what to use when connecting to the broker. Must be one of \"none\" , \"usernamepassword\" MQTTBrokerInfo.CredentialsPath credentials Name of the path in secret provider to retrieve your secrets. Must be non-blank. MQTTBrokerInfo.IncomingTopic DataTopic (incoming/data/#) IncomingTopic is used to receive the async value MQTTBrokerInfo.ResponseTopic ResponseTopic (command/response/#) ResponseTopic is used to receive the command response from the device MQTTBrokerInfo.UseTopicLevels false (true) Boolean setting to use multi-level topics MQTTBrokerInfo.Writable.ResponseFetchInterval 500 ResponseFetchInterval specifies the retry interval(milliseconds) to fetch the command response from the MQTT broker Note Using Multi-level Topic: Remember to change the defaults in parentheses in the table above. Overriding with Environment Variables The user can override any of the above configurations using environment variables to meet their requirement, for example: # docker-compose.yml device-mqtt : . . . environment : MQTTBROKERINFO_CLIENTID : \"my-device-mqtt\" MQTTBROKERINFO_CONNRETRYWAITTIME : \"10\" MQTTBROKERINFO_USETOPICLEVELS : \"false\" ...","title":"MQTT"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#mqtt","text":"EdgeX - Jakarta Release","title":"MQTT"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overview","text":"In this example, we use a script to simulate a custom-defined MQTT device, instead of a real device.
This provides a straight-forward way to test the device-mqtt features using an MQTT-broker. Note Multi-Level Topics move metadata (i.e. device name, command name,... etc) from the payload into the MQTT topics. Notice the sections marked with Using Multi-level Topic: for relevant input/output throughout this example.","title":"Overview"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#prepare-the-custom-device-configuration","text":"In this section, we create folders that contain files required for deployment of a customized device configuration to work with the existing device service: - custom-config |- devices |- my.custom.device.config.toml |- profiles |- my.custom.device.profile.yml","title":"Prepare the Custom Device Configuration"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#device-configuration","text":"Use this configuration file to define devices and schedule jobs. device-mqtt generates a relative instance on start-up. Create the device configuration file, named my.custom.device.config.toml , as shown below: # Pre-define Devices [[DeviceList]] Name = \"my-custom-device\" ProfileName = \"my-custom-device-profile\" Description = \"MQTT device is created for test purpose\" Labels = [ \"MQTT\" , \"test\" ] [DeviceList.Protocols] [DeviceList.Protocols.mqtt] # Comment out/remove below to use multi-level topics CommandTopic = \"CommandTopic\" # Uncomment below to use multi-level topics # CommandTopic = \"command/my-custom-device\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"message\" Note CommandTopic is used to publish the GET or SET command request","title":"Device Configuration"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#device-profile","text":"The DeviceProfile defines the device's values and operation method, which can be Read or Write. 
Create a device profile, named my.custom.device.profile.yml , with the following content: name : \"my-custom-device-profile\" manufacturer : \"iot\" model : \"MQTT-DEVICE\" description : \"Test device profile\" labels : - \"mqtt\" - \"test\" deviceResources : - name : randnum isHidden : true description : \"device random number\" properties : valueType : \"Float32\" readWrite : \"R\" - name : ping isHidden : true description : \"device awake\" properties : valueType : \"String\" readWrite : \"R\" - name : message isHidden : false description : \"device message\" properties : valueType : \"String\" readWrite : \"RW\" - name : json isHidden : false description : \"JSON message\" properties : valueType : \"Object\" readWrite : \"RW\" mediaType : \"application/json\" deviceCommands : - name : values readWrite : \"R\" isHidden : false resourceOperations : - { deviceResource : \"randnum\" } - { deviceResource : \"ping\" } - { deviceResource : \"message\" }","title":"Device Profile"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#prepare-docker-compose-file","text":"Clone edgex-compose $ git clone git@github.com:edgexfoundry/edgex-compose.git $ git checkout main !!! note Use main branch until jakarta is released. Generate the docker-compose.yml file (notice this includes mqtt-broker) $ cd edgex-compose/compose-builder $ make gen ds-mqtt mqtt-broker no-secty ui Check the generated file $ ls | grep 'docker-compose.yml' docker-compose.yml","title":"Prepare docker-compose file"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#mount-the-custom-config","text":"Open the edgex-compose/compose-builder/docker-compose.yml file and then add volumes path and environment as shown below: # docker-compose.yml device-mqtt : ... environment : DEVICE_DEVICESDIR : /custom-config/devices DEVICE_PROFILESDIR : /custom-config/profiles ... volumes : - /path/to/custom-config:/custom-config ... 
Note Replace the /path/to/custom-config in the example with the correct path","title":"Mount the custom-config"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#enabling-multi-level-topics","text":"To use the optional setting for MQTT device services with multi-level topics, make the following changes in the device service configuration files: There are two ways to set the environment variables for multi-level topics. If the code is built with compose builder, modify the docker-compose.yml file in edgex-compose/compose-builder: # docker-compose.yml device-mqtt : ... environment : MQTTBROKERINFO_INCOMINGTOPIC : \"incoming/data/#\" MQTTBROKERINFO_RESPONSETOPIC : \"command/response/#\" MQTTBROKERINFO_USETOPICLEVELS : \"true\" ... Otherwise if the device service is built locally, modify these lines in configuration.toml : # Comment out/remove when using multi-level topics #IncomingTopic = \"DataTopic\" #ResponseTopic = \"ResponseTopic\" #UseTopicLevels = false # Uncomment to use multi-level topics IncomingTopic = \"incoming/data/#\" ResponseTopic = \"command/response/#\" UseTopicLevels = true Note If you have previously run Device MQTT locally, you will need to remove the services configuration from Consul. 
This can be done with: curl --request DELETE http://localhost:8500/v1/kv/edgex/devices/2.0/device-mqtt?recurse=true In my.custom.device.config.toml : [DeviceList.Protocols] [DeviceList.Protocols.mqtt] # Comment out/remove below to use multi-level topics # CommandTopic = \"CommandTopic\" # Uncomment below to use multi-level topics CommandTopic = \"command/my-custom-device\" Note If you have run Device-MQTT before, you will need to delete the previously registered device(s) by replacing <device-name> in the command below: curl --request DELETE http://localhost:59881/api/v2/device/name/<device-name> where <device-name> can be found by running: curl --request GET http://localhost:59881/api/v2/device/all | json_pp","title":"Enabling Multi-Level Topics"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#start-edgex-foundry-on-docker","text":"Deploy EdgeX using the following commands: $ cd edgex-compose/compose-builder $ docker-compose pull $ docker-compose up -d","title":"Start EdgeX Foundry on Docker"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#using-a-mqtt-device-simulator","text":"","title":"Using a MQTT Device Simulator"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overview_1","text":"","title":"Overview"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#expected-behaviors","text":"Using the detailed script below as a simulator, there are three behaviors: Publish random number data every 15 seconds. Default (single-level) Topic: The simulator publishes the data to the MQTT broker with topic DataTopic and the message is similar to the following: {\"name\":\"my-custom-device\", \"cmd\":\"randnum\", \"method\":\"get\", \"randnum\":4161.3549} Using Multi-level Topic: The simulator publishes the data to the MQTT broker with topic incoming/data/my-custom-device/randnum and the message is similar to the following: {\"randnum\":4161.3549} Receive the reading request, then return the response.
Default (single-level) Topic: The simulator receives the request from the MQTT broker, the topic is CommandTopic and the message is similar to the following: {\"cmd\":\"randnum\", \"method\":\"get\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\"} The simulator returns the response to the MQTT broker, the topic is ResponseTopic and the message is similar to the following: {\"cmd\":\"randnum\", \"method\":\"get\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\", \"randnum\":42.0} Using Multi-level Topic: The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f and message returned is similar to the following: {\"randnum\":\"42.0\"} The simulator returns the response to the MQTT broker, the topic is command/response/# and the message is similar to the following: {\"randnum\":\"4.20e+01\"} Receive the set request, then change the device value. Default (single-level) Topic: The simulator receives the request from the MQTT broker, the topic is CommandTopic and the message is similar to the following: {\"cmd\":\"message\", \"method\":\"set\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\", \"message\":\"test message...\"} The simulator changes the device value and returns the response to the MQTT broker, the topic is ResponseTopic and the message is similar to the following: {\"cmd\":\"message\", \"method\":\"set\", \"uuid\":\"293d7a00-66e1-4374-ace0-07520103c95f\"} Using Multi-level Topic: The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/testmessage/set/293d7a00-66e1-4374-ace0-07520103c95f and the message is similar to the following: {\"message\":\"test message...\"} The simulator changes the device value and returns the response to the MQTT broker, the topic is command/response/# and the message is similar to the following: {\"message\":\"test message...\"}","title":"Expected 
Behaviors"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#creating-and-running-a-mqtt-device-simulator","text":"To implement the simulated custom-defined MQTT device, create a javascript, named mock-device.js , with the following content: Default (single-level) Topic: function getRandomFloat ( min , max ) { return Math . random () * ( max - min ) + min ; } const deviceName = \"my-custom-device\" ; let message = \"test-message\" ; let json = { \"name\" : \"My JSON\" }; // DataSender sends async value to MQTT broker every 15 seconds schedule ( '*/15 * * * * *' , ()=>{ let body = { \"name\" : deviceName , \"cmd\" : \"randnum\" , \"randnum\" : getRandomFloat ( 25 , 29 ). toFixed ( 1 ) }; publish ( 'DataTopic' , JSON . stringify ( body )); }); // CommandHandler receives commands and sends response to MQTT broker // 1. Receive the reading request, then return the response // 2. Receive the set request, then change the device value subscribe ( \"CommandTopic\" , ( topic , val ) => { var data = val ; if ( data . method == \"set\" ) { switch ( data . cmd ) { case \"message\" : message = data [ data . cmd ]; break ; case \"json\" : json = data [ data . cmd ]; break ; } } else { switch ( data . cmd ) { case \"ping\" : data . ping = \"pong\" ; break ; case \"message\" : data . message = message ; break ; case \"randnum\" : data . randnum = 12.123 ; break ; case \"json\" : data . json = json ; break ; } } publish ( \"ResponseTopic\" , JSON . stringify ( data )); }); Using Multi-level Topic: function getRandomFloat ( min , max ) { return Math . random () * ( max - min ) + min ; } const deviceName = \"my-custom-device\" ; let message = \"test-message\" ; let json = { \"name\" : \"My JSON\" }; // DataSender sends async value to MQTT broker every 15 seconds schedule ( '*/15 * * * * *' , ()=>{ let body = getRandomFloat ( 25 , 29 ). 
toFixed ( 1 ); publish ( 'incoming/data/my-custom-device/randnum' , body ); }); // CommandHandler receives commands and sends response to MQTT broker // 1. Receive the reading request, then return the response // 2. Receive the set request, then change the device value subscribe ( \"command/my-custom-device/#\" , ( topic , val ) => { const words = topic . split ( '/' ); var cmd = words [ 2 ]; var method = words [ 3 ]; var uuid = words [ 4 ]; var response = {}; var data = val ; if ( method == \"set\" ) { switch ( cmd ) { case \"message\" : message = data [ cmd ]; break ; case \"json\" : json = data [ cmd ]; break ; } } else { switch ( cmd ) { case \"ping\" : response . ping = \"pong\" ; break ; case \"message\" : response . message = message ; break ; case \"randnum\" : response . randnum = 12.123 ; break ; case \"json\" : response . json = json ; break ; } } var sendTopic = \"command/response/\" + uuid ; publish ( sendTopic , JSON . stringify ( response )); }); To run the device simulator, enter the commands shown below with the following changes: $ mv mock-device.js /path/to/mqtt-scripts $ docker run -d --restart=always --name=mqtt-scripts \\ -v /path/to/mqtt-scripts:/scripts \\ dersimn/mqtt-scripts --url mqtt://172.17.0.1 --dir /scripts Note Replace the /path/to/mqtt-scripts in the example mv command with the correct path","title":"Creating and Running a MQTT Device Simulator"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-commands","text":"Now we're ready to run some commands.","title":"Execute Commands"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#find-executable-commands","text":"Use the following query to find executable commands: $ curl http://localhost:59882/api/v2/device/all | json_pp { \"deviceCoreCommands\" : [ { \"profileName\" : \"my-custom-device-profile\" , \"coreCommands\" : [ { \"name\" : \"values\" , \"get\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/values\" , \"url\" :
\"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"randnum\" , \"valueType\" : \"Float32\" }, { \"resourceName\" : \"ping\" , \"valueType\" : \"String\" }, { \"valueType\" : \"String\" , \"resourceName\" : \"message\" } ] }, { \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"message\" , \"valueType\" : \"String\" } ], \"name\" : \"message\" , \"get\" : true , \"path\" : \"/api/v2/device/name/my-custom-device/message\" , \"set\" : true }, { \"name\" : \"json\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/MQTT-test-device/json\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"json\" , \"valueType\" : \"Object\" } ] } ], \"deviceName\" : \"my-custom-device\" } ], \"apiVersion\" : \"v2\" , \"statusCode\" : 200 }","title":"Find Executable Commands"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-set-command","text":"Execute a SET command according to the url and parameterNames, replacing [host] with the server IP when running the SET command. 
$ curl http://localhost:59882/api/v2/device/name/my-custom-device/message \\ -H \"Content-Type:application/json\" -X PUT \\ -d '{\"message\":\"Hello!\"}'","title":"Execute SET Command"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-get-command","text":"Execute a GET command as follows: $ curl http://localhost:59882/api/v2/device/name/my-custom-device/message | json_pp { \"event\" : { \"origin\" : 1624417689920618131 , \"readings\" : [ { \"resourceName\" : \"message\" , \"binaryValue\" : null , \"profileName\" : \"my-custom-device-profile\" , \"deviceName\" : \"my-custom-device\" , \"id\" : \"a3bb78c5-e76f-49a2-ad9d-b220a86c3e36\" , \"value\" : \"Hello!\" , \"valueType\" : \"String\" , \"origin\" : 1624417689920615828 , \"mediaType\" : \"\" } ], \"sourceName\" : \"message\" , \"deviceName\" : \"my-custom-device\" , \"apiVersion\" : \"v2\" , \"profileName\" : \"my-custom-device-profile\" , \"id\" : \"e0b29735-8b39-44d1-8f68-4d7252e14cc7\" }, \"apiVersion\" : \"v2\" , \"statusCode\" : 200 }","title":"Execute GET Command"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#schedule-job","text":"The schedule job is defined in the [[DeviceList.AutoEvents]] section of the device configuration file: [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"message\" After the service starts, query core-data's reading API.
The results show that the service auto-executes the command every 30 secs, as shown below: $ curl http://localhost:59880/api/v2/reading/resourceName/message | json_pp { \"statusCode\" : 200 , \"readings\" : [ { \"value\" : \"test-message\" , \"id\" : \"e91b8ca6-c5c4-4509-bb61-bd4b09fe835c\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"resourceName\" : \"message\" , \"origin\" : 1624418361324331392 , \"profileName\" : \"my-custom-device-profile\" , \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"String\" }, { \"mediaType\" : \"\" , \"binaryValue\" : null , \"resourceName\" : \"message\" , \"value\" : \"test-message\" , \"id\" : \"1da58cb7-2bf4-47f0-bbb8-9519797149a2\" , \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"String\" , \"profileName\" : \"my-custom-device-profile\" , \"origin\" : 1624418330822988843 }, ... ], \"apiVersion\" : \"v2\" }","title":"Schedule Job"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#async-device-reading","text":"The device-mqtt service subscribes to a DataTopic and waits for the real device to send values to the MQTT broker; device-mqtt then parses the values and forwards them to the northbound.
The data format contains the following values: name = device name cmd = deviceResource name method = get or set cmd = device reading The following results show that the mock device sent the reading every 15 secs: $ curl http://localhost:59880/api/v2/reading/resourceName/randnum | json_pp { \"readings\" : [ { \"origin\" : 1624418475007110946 , \"valueType\" : \"Float32\" , \"deviceName\" : \"my-custom-device\" , \"id\" : \"9b3d337e-8a8a-4a6c-8018-b4908b57abb8\" , \"binaryValue\" : null , \"resourceName\" : \"randnum\" , \"profileName\" : \"my-custom-device-profile\" , \"mediaType\" : \"\" , \"value\" : \"2.630000e+01\" }, { \"deviceName\" : \"my-custom-device\" , \"valueType\" : \"Float32\" , \"id\" : \"06918cbb-ada0-4752-8877-0ef8488620f6\" , \"origin\" : 1624418460007833720 , \"mediaType\" : \"\" , \"profileName\" : \"my-custom-device-profile\" , \"value\" : \"2.570000e+01\" , \"resourceName\" : \"randnum\" , \"binaryValue\" : null }, ... ], \"statusCode\" : 200 , \"apiVersion\" : \"v2\" }","title":"Async Device Reading"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#mqtt-device-service-configuration","text":"MQTT Device Service has the following configurations to implement the MQTT protocol. Configuration Default Value Description MQTTBrokerInfo.Schema tcp The URL schema MQTTBrokerInfo.Host 0.0.0.0 The URL host MQTTBrokerInfo.Port 1883 The URL port MQTTBrokerInfo.Qos 0 Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) MQTTBrokerInfo.KeepAlive 3600 Seconds between client ping when no active data flowing to avoid client being disconnected.
Must be greater than 2 MQTTBrokerInfo.ClientId device-mqtt ClientId to connect to the broker with MQTTBrokerInfo.CredentialsRetryTime 120 The number of retries to get the credential MQTTBrokerInfo.CredentialsRetryWait 1 The wait time (seconds) between retries to get the credential MQTTBrokerInfo.ConnEstablishingRetry 10 The number of retries to establish the MQTT connection MQTTBrokerInfo.ConnRetryWaitTime 5 The wait time (seconds) between retries to establish the MQTT connection MQTTBrokerInfo.AuthMode none Indicates what to use when connecting to the broker. Must be one of \"none\" , \"usernamepassword\" MQTTBrokerInfo.CredentialsPath credentials Name of the path in secret provider to retrieve your secrets. Must be non-blank. MQTTBrokerInfo.IncomingTopic DataTopic (incoming/data/#) IncomingTopic is used to receive the async value MQTTBrokerInfo.ResponseTopic ResponseTopic (command/response/#) ResponseTopic is used to receive the command response from the device MQTTBrokerInfo.UseTopicLevels false (true) Boolean setting to use multi-level topics MQTTBrokerInfo.Writable.ResponseFetchInterval 500 ResponseFetchInterval specifies the retry interval (milliseconds) to fetch the command response from the MQTT broker Note Using Multi-level Topic: Remember to change the defaults in parentheses in the table above.","title":"MQTT Device Service Configuration"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overriding-with-environment-variables","text":"The user can override any of the above configurations using environment variables to meet their requirements, for example: # docker-compose.yml device-mqtt : . . . environment : MQTTBROKERINFO_CLIENTID : \"my-device-mqtt\" MQTTBROKERINFO_CONNRETRYWAITTIME : \"10\" MQTTBROKERINFO_USETOPICLEVELS : \"false\" ...","title":"Overriding with Environment Variables"},{"location":"examples/Ch-ExamplesAddingModbusDevice/","text":"Modbus EdgeX - Ireland Release This page describes how to connect Modbus devices to EdgeX.
In this example, we simulate the temperature sensor instead of using a real device. This provides a straightforward way to test the device service features. Temperature sensor: https://www.audon.co.uk/ethernet_sensors/NANO_TEMP.html User manual: http://download.inveo.com.pl/manual/nano_t/user_manual_en.pdf Important Notice To fulfill the issue #61 , there is an important incompatible change after v2 (Ireland release). In the Device Profile attributes section, the startingAddress becomes an integer data type and zero-based value. In v1, startingAddress was a string data type and one-based value. Environment You can use any operating system that can install docker and docker-compose. In this example, we use Ubuntu to deploy EdgeX using docker. Modbus Device Simulator 1.Download ModbusPal Download the fixed version of ModbusPal from the https://sourceforge.net/p/modbuspal/discussion/899955/thread/72cf35ee/cd1f/attachment/ModbusPal.jar . 2.Install required lib: sudo apt install librxtx-java 3.Startup the ModbusPal: sudo java -jar ModbusPal.jar Modbus Register Table You can find the available registers in the user manual. Modbus TCP \u2013 Holding Registers Address Name R/W Description 4000 ThermostatL R/W Lower alarm threshold 4001 ThermostatH R/W Upper alarm threshold 4002 Alarm mode R/W 1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher 4004 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105) Setup ModbusPal To simulate the sensor, do the following: Add mock device: Add registers according to the register table: Add the ModbusPal support value auto-generator, which can bind to the registers: Run the Simulator Enable the value generator and click the Run button. Set Up Before Starting Services The following sections describe how to complete the set up before starting the services. 
If you prefer to start the services and then add the device, see Set Up After Starting Services Create a Custom configuration folder Run the following command: mkdir -p custom-config Set Up Device Profile Run the following command to create your device profile: cd custom-config nano temperature.profile.yml Fill in the device profile according to the Modbus Register Table , as shown below: name : \"Ethernet-Temperature-Sensor\" manufacturer : \"Audon Electronics\" model : \"Temperature\" labels : - \"Web\" - \"Modbus TCP\" - \"SNMP\" description : \"The NANO_TEMP is a Ethernet Thermometer measuring from -55\u00b0C to 125\u00b0C with a web interface and Modbus TCP communications.\" deviceResources : - name : \"ThermostatL\" isHidden : true description : \"Lower alarm threshold of the temperature\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 3999 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"RW\" scale : \"0.1\" - name : \"ThermostatH\" isHidden : true description : \"Upper alarm threshold of the temperature\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4000 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"RW\" scale : \"0.1\" - name : \"AlarmMode\" isHidden : true description : \"1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4001 } properties : valueType : \"Int16\" readWrite : \"RW\" - name : \"Temperature\" isHidden : false description : \"Temperature x 10 (np. 
10,5 st.C to 105)\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4003 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.1\" deviceCommands : - name : \"AlarmThreshold\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"ThermostatL\" } - { deviceResource : \"ThermostatH\" } - name : \"AlarmMode\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"AlarmMode\" , mappings : { \"1\" : \"OFF\" , \"2\" : \"Lower\" , \"3\" : \"Higher\" , \"4\" : \"Lower or Higher\" } } In the Modbus protocol, we provide the following attributes: 1. primaryTable : HOLDING_REGISTERS, INPUT_REGISTERS, COILS, DISCRETES_INPUT 2. startingAddress This attribute defines the zero-based startingAddress in Modbus device. For example, the GET command requests data from the Modbus address 4004 to get the temperature data, so the starting register address should be 4003. Address Starting Address Name R/W Description 4004 4003 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105) 3. IS_BYTE_SWAP , IS_WORD_SWAP : To handle the different Modbus binary data order, we support Int32, Uint32, Float32 to do the swap operation before decoding the binary data. For example: { primaryTable: \"INPUT_REGISTERS\", startingAddress: \"4\", isByteSwap: \"false\", isWordSwap: \"true\" } 4. RAW_TYPE : This attribute defines the binary data read from the Modbus device, then we can use the value type to indicate the data type that the user wants to receive. We only support Int16 and Uint16 for rawType. The corresponding value type must be Float32 and Float64 . For example: deviceResources : - name : \"Temperature\" isHidden : false description : \"Temperature x 10 (np. 
10,5 st.C to 105)\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4003 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.1\" In the device-modbus, the Property valueType decides how many registers will be read. Like Holding registers, a register has 16 bits. If the Modbus device's user manual specifies that a value has two registers, define it as Float32 or Int32 or Uint32 in the deviceProfile. Once we execute a command, device-modbus knows its value type and register type, startingAddress, and register length. So it can read or write value using the modbus protocol. Set Up Device Service Configuration Run the following command to create your device configuration: cd custom-config nano device.config.toml Fill in the device.config.toml file, as shown below: [[DeviceList]] Name = \"Modbus-TCP-Temperature-Sensor\" ProfileName = \"Ethernet-Temperature-Sensor\" Description = \"This device is a product for monitoring the temperature via the ethernet\" labels = [ \"temperature\" , \"modbus TCP\" ] [DeviceList.Protocols] [DeviceList.Protocols.modbus-tcp] Address = \"172.17.0.1\" Port = \"502\" UnitID = \"1\" Timeout = \"5\" IdleTimeout = \"5\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Temperature\" The address 172.17.0.1 is point to the docker bridge network which means it can forward the request from docker network to the host. Use this configuration file to define devices and AutoEvent. Then the device-modbus will generate the relative instance on startup. 
The device-modbus offers two types of protocol, Modbus TCP and Modbus RTU, which can be defined as shown below: protocol Name Protocol Address Port UnitID BaudRate DataBits StopBits Parity Timeout IdleTimeout Modbus TCP Gateway address TCP 10.211.55.6 502 1 5 5 Modbus RTU Gateway address RTU /tmp/slave 502 2 19200 8 1 N 5 5 In the RTU protocol, Parity can be: N - None is 0 O - Odd is 1 E - Even is 2, default is E Prepare docker-compose file Clone edgex-compose $ git clone git@github.com:edgexfoundry/edgex-compose.git Generate the docker-compose.yml file $ cd edgex-compose/compose-builder $ make gen ds-modbus Add Custom Configuration to docker-compose File Add prepared configuration files to docker-compose file, you can mount them using volumes and change the environment for device-modbus internal use. Open the docker-compose.yml file and then add volumes path and environment as shown below: device-modbus : ... environment : ... DEVICE_DEVICESDIR : /custom-config DEVICE_PROFILESDIR : /custom-config volumes : ... - /path/to/custom-config:/custom-config Start EdgeX Foundry on Docker Since we generate the docker-compose.yml file at the previous step, we can deploy EdgeX as shown below: $ cd edgex-compose/compose-builder $ docker-compose up -d Creating network \"compose-builder_edgex-network\" with driver \"bridge\" Creating volume \"compose-builder_consul-acl-token\" with default driver ... Creating edgex-core-metadata ... done Creating edgex-core-command ... done Creating edgex-core-data ... done Creating edgex-device-modbus ... done Creating edgex-app-rules-engine ... done Creating edgex-sys-mgmt-agent ... done Set Up After Starting Services If the services are already running and you want to add a device, you can use the Core Metadata API as outlined in this section. If you set up the device profile and Service as described in Set Up Before Starting Services , you can skip this section. 
To add a device after starting the services, complete the following steps: Upload the device profile above to metadata with a POST to http://localhost:59881/api/v2/deviceprofile/uploadfile and add the file as key \"file\" to the body in form-data format, and the created ID will be returned. The following example command uses curl to send the request: $ curl http://localhost:59881/api/v2/deviceprofile/uploadfile \\ -F \"file=@temperature.profile.yml\" Ensure the Modbus device service is running, adjust the service name below to match if necessary or if using other device services. Add the device with a POST to http://localhost:59881/api/v2/device , the body will look something like: $ curl http://localhost:59881/api/v2/device -H \"Content-Type:application/json\" -X POST \\ -d '[ { \"apiVersion\": \"v2\", \"device\": { \"name\" :\"Modbus-TCP-Temperature-Sensor\", \"description\":\"This device is a product for monitoring the temperature via the ethernet\", \"labels\":[ \"Temperature\", \"Modbus TCP\" ], \"serviceName\": \"device-modbus\", \"profileName\": \"Ethernet-Temperature-Sensor\", \"protocols\":{ \"modbus-tcp\":{ \"Address\" : \"172.17.0.1\", \"Port\" : \"502\", \"UnitID\" : \"1\", \"Timeout\" : \"5\", \"IdleTimeout\" : \"5\" } }, \"autoEvents\":[ { \"Interval\":\"30s\", \"onChange\":false, \"SourceName\":\"Temperature\" } ], \"adminState\":\"UNLOCKED\", \"operatingState\":\"UP\" } } ]' The service name must match/refer to the target device service, and the profile name must match the device profile name from the previous steps. Execute Commands Now we're ready to run some commands. 
Find Executable Commands Use the following query to find executable commands: $ curl http://localhost:59882/api/v2/device/all | json_pp { \"apiVersion\" : \"v2\" , \"deviceCoreCommands\" : [ { \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"coreCommands\" : [ { \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"AlarmThreshold\" , \"get\" : true , \"set\" : true , \"parameters\" : [ { \"valueType\" : \"Float32\" , \"resourceName\" : \"ThermostatL\" }, { \"valueType\" : \"Float32\" , \"resourceName\" : \"ThermostatH\" } ], \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold\" }, { \"get\" : true , \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"AlarmMode\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmMode\" , \"parameters\" : [ { \"resourceName\" : \"AlarmMode\" , \"valueType\" : \"Int16\" } ] }, { \"get\" : true , \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"Temperature\" , \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/Temperature\" , \"parameters\" : [ { \"valueType\" : \"Float32\" , \"resourceName\" : \"Temperature\" } ] } ] } ], \"statusCode\" : 200 } Execute SET command Execute SET command according to url and parameterNames , replacing [host] with the server IP when running the SET command. $ curl http://localhost:59882/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold \\ -H \"Content-Type:application/json\" -X PUT \\ -d '{\"ThermostatL\":\"15\",\"ThermostatH\":\"100\"}' Execute GET command Replace [host] with the server IP when running the GET command.
$ curl http://localhost:59882/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold | json_pp { \"statusCode\" : 200 , \"apiVersion\" : \"v2\" , \"event\" : { \"origin\" : 1624324686964377495 , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"f3d44a0f-d2c3-4ef6-9441-ad6b1bfb8a9e\" , \"sourceName\" : \"AlarmThreshold\" , \"readings\" : [ { \"resourceName\" : \"ThermostatL\" , \"value\" : \"1.500000e+01\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"9aa879a0-c184-476b-8124-34d35a2a51f3\" , \"valueType\" : \"Float32\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"origin\" : 1624324686963970614 , \"profileName\" : \"Ethernet-Temperature-Sensor\" }, { \"value\" : \"1.000000e+02\" , \"resourceName\" : \"ThermostatH\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"bf7df23b-4338-4b93-a8bd-7abd5e848379\" , \"valueType\" : \"Float32\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"origin\" : 1624324686964343768 , \"profileName\" : \"Ethernet-Temperature-Sensor\" } ], \"apiVersion\" : \"v2\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" } } AutoEvent The AutoEvent is defined in the [[DeviceList.AutoEvents]] section of the device configuration file: [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Temperature\" After service startup, query core-data's API. The results show that the service auto-executes the command every 30 seconds.
$ curl http://localhost:59880/api/v2/event/device/name/Modbus-TCP-Temperature-Sensor | json_pp { \"events\" : [ { \"readings\" : [ { \"value\" : \"5.300000e+01\" , \"binaryValue\" : null , \"origin\" : 1624325219186870396 , \"id\" : \"68a66a35-d3cf-48a2-9bf0-09578267a3f7\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"mediaType\" : \"\" , \"valueType\" : \"Float32\" , \"resourceName\" : \"Temperature\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" } ], \"apiVersion\" : \"v2\" , \"origin\" : 1624325219186977564 , \"id\" : \"4b235616-7304-419e-97ae-17a244911b1c\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"sourceName\" : \"Temperature\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" }, { \"readings\" : [ { \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"resourceName\" : \"Temperature\" , \"valueType\" : \"Float32\" , \"id\" : \"56b7e8be-7ce8-4fa9-89e2-3a1a7ef09050\" , \"origin\" : 1624325189184675483 , \"value\" : \"5.300000e+01\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" } ], \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"sourceName\" : \"Temperature\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"fbab44f5-9775-4c09-84bd-cbfb00001115\" , \"origin\" : 1624325189184721223 , \"apiVersion\" : \"v2\" }, ... ], \"apiVersion\" : \"v2\" , \"statusCode\" : 200 } Set up the Modbus RTU Device This section describes how to connect the Modbus RTU device. We use Ubuntu OS and a Modbus RTU device for this example. Modbus RTU device: http://www.icpdas.com/root/product/solutions/remote_io/rs-485/i-7000_m-7000/i-7055.html User manual: http://ftp.icpdas.com/pub/cd/8000cd/napdos/7000/manual/7000dio.pdf Connect the device Connect the device to your machine (laptop, gateway, etc.) via an RS485/USB adaptor and power on. Execute a command on the machine, and you can find a message like the following: $ dmesg | grep tty ... ...
[18006.167625] usb 1-1: FTDI USB Serial Device converter now attached to ttyUSB0 It shows the USB attach to ttyUSB0, then you can check whether the device path exists: $ ls /dev/ttyUSB0 /dev/ttyUSB0 Deploy the EdgeX Modify the docker-compose.yml file to mount the device path to the device-modbus: Change the permission of the device path sudo chmod 777 /dev/ttyUSB0 Open docker-compose.yml file with text editor. $ nano /docker-compose.yml Modify the device-modbus section and save the file device-modbus: ... devices: - /dev/ttyUSB0 Deploy the EdgeX $ docker-compose up -d Add device to EdgeX Create the device profile according to the register table $ nano modbus.rtu.demo.profile.yml name : \"Modbus-RTU-IO-Module\" manufacturer : \"icpdas\" model : \"M-7055\" labels : - \"Modbus RTU\" - \"IO Module\" description : \"This IO module offers 8 isolated channels for digital input and 8 isolated channels for digital output.\" deviceResources : - name : \"DO0\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 0 } properties : valueType : \"Bool\" readWrite : \"RW\" - name : \"DO1\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 1 } properties : valueType : \"Bool\" readWrite : \"RW\" - name : \"DO2\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 2 } properties : valueType : \"Bool\" readWrite : \"RW\" deviceCommands : - name : \"DO\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"DO0\" } - { deviceResource : \"DO1\" } - { deviceResource : \"DO2\" } Upload the device profile $ curl http://localhost:59881/api/v2/deviceprofile/uploadfile \\ -F \"file=@modbus.rtu.demo.profile.yml\" Create the device entity to the EdgeX. You can find the Modbus RTU setting on the device or the user manual. 
$ curl http://localhost:59881/api/v2/device -H \"Content-Type:application/json\" -X POST \\ -d '[ { \"apiVersion\" : \"v2\" , \"device\" : { \"name\" : \"Modbus-RTU-IO-Module\" , \"description\" : \"The device can be used to monitor the status of the digital input and digital output channels.\" , \"labels\" :[ \"IO Module\" , \"Modbus RTU\" ], \"serviceName\" : \"device-modbus\" , \"profileName\" : \"Modbus-RTU-IO-Module\" , \"protocols\" :{ \"modbus-rtu\" :{ \"Address\" : \"/dev/ttyUSB0\" , \"BaudRate\" : \"19200\" , \"DataBits\" : \"8\" , \"StopBits\" : \"1\" , \"Parity\" : \"N\" , \"UnitID\" : \"1\" , \"Timeout\" : \"5\" , \"IdleTimeout\" : \"5\" } }, \"adminState\" : \"UNLOCKED\" , \"operatingState\" : \"UP\" } } ]' Test the GET or SET command","title":"Modbus"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus","text":"EdgeX - Ireland Release This page describes how to connect Modbus devices to EdgeX. In this example, we simulate the temperature sensor instead of using a real device. This provides a straightforward way to test the device service features. Temperature sensor: https://www.audon.co.uk/ethernet_sensors/NANO_TEMP.html User manual: http://download.inveo.com.pl/manual/nano_t/user_manual_en.pdf","title":"Modbus"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#important-notice","text":"To fulfill the issue #61 , there is an important incompatible change after v2 (Ireland release). In the Device Profile attributes section, the startingAddress becomes an integer data type and zero-based value. In v1, startingAddress was a string data type and one-based value.","title":"Important Notice"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#environment","text":"You can use any operating system that can install docker and docker-compose.
In this example, we use Ubuntu to deploy EdgeX using docker.","title":"Environment"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus-device-simulator","text":"1.Download ModbusPal Download the fixed version of ModbusPal from the https://sourceforge.net/p/modbuspal/discussion/899955/thread/72cf35ee/cd1f/attachment/ModbusPal.jar . 2.Install required lib: sudo apt install librxtx-java 3.Startup the ModbusPal: sudo java -jar ModbusPal.jar","title":"Modbus Device Simulator"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus-register-table","text":"You can find the available registers in the user manual. Modbus TCP \u2013 Holding Registers Address Name R/W Description 4000 ThermostatL R/W Lower alarm threshold 4001 ThermostatH R/W Upper alarm threshold 4002 Alarm mode R/W 1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher 4004 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105)","title":"Modbus Register Table"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#setup-modbuspal","text":"To simulate the sensor, do the following: Add mock device: Add registers according to the register table: Add the ModbusPal support value auto-generator, which can bind to the registers:","title":"Setup ModbusPal"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#run-the-simulator","text":"Enable the value generator and click the Run button.","title":"Run the Simulator"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-before-starting-services","text":"The following sections describe how to complete the set up before starting the services. 
If you prefer to start the services and then add the device, see Set Up After Starting Services","title":"Set Up Before Starting Services"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#create-a-custom-configuration-folder","text":"Run the following command: mkdir -p custom-config","title":"Create a Custom configuration folder"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-device-profile","text":"Run the following command to create your device profile: cd custom-config nano temperature.profile.yml Fill in the device profile according to the Modbus Register Table , as shown below: name : \"Ethernet-Temperature-Sensor\" manufacturer : \"Audon Electronics\" model : \"Temperature\" labels : - \"Web\" - \"Modbus TCP\" - \"SNMP\" description : \"The NANO_TEMP is a Ethernet Thermometer measuring from -55\u00b0C to 125\u00b0C with a web interface and Modbus TCP communications.\" deviceResources : - name : \"ThermostatL\" isHidden : true description : \"Lower alarm threshold of the temperature\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 3999 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"RW\" scale : \"0.1\" - name : \"ThermostatH\" isHidden : true description : \"Upper alarm threshold of the temperature\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4000 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"RW\" scale : \"0.1\" - name : \"AlarmMode\" isHidden : true description : \"1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4001 } properties : valueType : \"Int16\" readWrite : \"RW\" - name : \"Temperature\" isHidden : false description : \"Temperature x 10 (np. 
10,5 st.C to 105)\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4003 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.1\" deviceCommands : - name : \"AlarmThreshold\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"ThermostatL\" } - { deviceResource : \"ThermostatH\" } - name : \"AlarmMode\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"AlarmMode\" , mappings : { \"1\" : \"OFF\" , \"2\" : \"Lower\" , \"3\" : \"Higher\" , \"4\" : \"Lower or Higher\" } } In the Modbus protocol, we provide the following attributes: 1. primaryTable : HOLDING_REGISTERS, INPUT_REGISTERS, COILS, DISCRETES_INPUT 2. startingAddress This attribute defines the zero-based startingAddress in Modbus device. For example, the GET command requests data from the Modbus address 4004 to get the temperature data, so the starting register address should be 4003. Address Starting Address Name R/W Description 4004 4003 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105) 3. IS_BYTE_SWAP , IS_WORD_SWAP : To handle the different Modbus binary data order, we support Int32, Uint32, Float32 to do the swap operation before decoding the binary data. For example: { primaryTable: \"INPUT_REGISTERS\", startingAddress: \"4\", isByteSwap: \"false\", isWordSwap: \"true\" } 4. RAW_TYPE : This attribute defines the binary data read from the Modbus device, then we can use the value type to indicate the data type that the user wants to receive. We only support Int16 and Uint16 for rawType. The corresponding value type must be Float32 and Float64 . For example: deviceResources : - name : \"Temperature\" isHidden : false description : \"Temperature x 10 (np. 
10,5 st.C to 105)\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : 4003 , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.1\" In the device-modbus, the Property valueType decides how many registers will be read. Like Holding registers, a register has 16 bits. If the Modbus device's user manual specifies that a value has two registers, define it as Float32 or Int32 or Uint32 in the deviceProfile. Once we execute a command, device-modbus knows its value type and register type, startingAddress, and register length. So it can read or write value using the modbus protocol.","title":"Set Up Device Profile"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-device-service-configuration","text":"Run the following command to create your device configuration: cd custom-config nano device.config.toml Fill in the device.config.toml file, as shown below: [[DeviceList]] Name = \"Modbus-TCP-Temperature-Sensor\" ProfileName = \"Ethernet-Temperature-Sensor\" Description = \"This device is a product for monitoring the temperature via the ethernet\" labels = [ \"temperature\" , \"modbus TCP\" ] [DeviceList.Protocols] [DeviceList.Protocols.modbus-tcp] Address = \"172.17.0.1\" Port = \"502\" UnitID = \"1\" Timeout = \"5\" IdleTimeout = \"5\" [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Temperature\" The address 172.17.0.1 is point to the docker bridge network which means it can forward the request from docker network to the host. Use this configuration file to define devices and AutoEvent. Then the device-modbus will generate the relative instance on startup. 
The device-modbus offers two protocol types, Modbus TCP and Modbus RTU, which can be defined as shown below: protocol Name Protocol Address Port UnitID BaudRate DataBits StopBits Parity Timeout IdleTimeout Modbus TCP Gateway address TCP 10.211.55.6 502 1 5 5 Modbus RTU Gateway address RTU /tmp/slave 502 2 19200 8 1 N 5 5 In the RTU protocol, Parity can be: N - None is 0 O - Odd is 1 E - Even is 2, default is E","title":"Set Up Device Service Configuration"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#prepare-docker-compose-file","text":"Clone edgex-compose $ git clone git@github.com:edgexfoundry/edgex-compose.git Generate the docker-compose.yml file $ cd edgex-compose/compose-builder $ make gen ds-modbus","title":"Prepare docker-compose file"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#add-custom-configuration-to-docker-compose-file","text":"Add the prepared configuration files to the docker-compose file; you can mount them using volumes and set the environment variables device-modbus uses internally. Open the docker-compose.yml file and then add volumes path and environment as shown below: device-modbus : ... environment : ... DEVICE_DEVICESDIR : /custom-config DEVICE_PROFILESDIR : /custom-config volumes : ... - /path/to/custom-config:/custom-config","title":"Add Custom Configuration to docker-compose File"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#start-edgex-foundry-on-docker","text":"Since we generated the docker-compose.yml file in the previous step, we can deploy EdgeX as shown below: $ cd edgex-compose/compose-builder $ docker-compose up -d Creating network \"compose-builder_edgex-network\" with driver \"bridge\" Creating volume \"compose-builder_consul-acl-token\" with default driver ... Creating edgex-core-metadata ... done Creating edgex-core-command ... done Creating edgex-core-data ... done Creating edgex-device-modbus ... done Creating edgex-app-rules-engine ... done Creating edgex-sys-mgmt-agent ... 
done","title":"Start EdgeX Foundry on Docker"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-after-starting-services","text":"If the services are already running and you want to add a device, you can use the Core Metadata API as outlined in this section. If you set up the device profile and service as described in Set Up Before Starting Services , you can skip this section. To add a device after starting the services, complete the following steps: Upload the device profile above to metadata with a POST to http://localhost:59881/api/v2/deviceprofile/uploadfile and add the file as key \"file\" to the body in form-data format; the created ID will be returned. The following example command uses curl to send the request: $ curl http://localhost:59881/api/v2/deviceprofile/uploadfile \\ -F \"file=@temperature.profile.yml\" Ensure the Modbus device service is running; adjust the service name below to match if necessary or if using other device services. Add the device with a POST to http://localhost:59881/api/v2/device ; the body will look something like: $ curl http://localhost:59881/api/v2/device -H \"Content-Type:application/json\" -X POST \\ -d '[ { \"apiVersion\": \"v2\", \"device\": { \"name\" :\"Modbus-TCP-Temperature-Sensor\", \"description\":\"This device is a product for monitoring the temperature via the ethernet\", \"labels\":[ \"Temperature\", \"Modbus TCP\" ], \"serviceName\": \"device-modbus\", \"profileName\": \"Ethernet-Temperature-Sensor\", \"protocols\":{ \"modbus-tcp\":{ \"Address\" : \"172.17.0.1\", \"Port\" : \"502\", \"UnitID\" : \"1\", \"Timeout\" : \"5\", \"IdleTimeout\" : \"5\" } }, \"autoEvents\":[ { \"Interval\":\"30s\", \"onChange\":false, \"SourceName\":\"Temperature\" } ], \"adminState\":\"UNLOCKED\", \"operatingState\":\"UP\" } } ]' The service name must match/refer to the target device service, and the profile name must match the device profile name from the previous steps.","title":"Set Up After Starting 
Services"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-commands","text":"Now we're ready to run some commands.","title":"Execute Commands"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#find-executable-commands","text":"Use the following query to find executable commands: $ curl http://localhost:59882/api/v2/device/all | json_pp { \"apiVersion\" : \"v2\" , \"deviceCoreCommands\" : [ { \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"coreCommands\" : [ { \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"AlarmThreshold\" , \"get\" : true , \"set\" : true , \"parameters\" : [ { \"valueType\" : \"Float32\" , \"resourceName\" : \"ThermostatL\" }, { \"valueType\" : \"Float32\" , \"resourceName\" : \"ThermostatH\" } ], \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold\" }, { \"get\" : true , \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"AlarmMode\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmMode\" , \"parameters\" : [ { \"resourceName\" : \"AlarmMode\" , \"valueType\" : \"Int16\" } ] }, { \"get\" : true , \"url\" : \"http://edgex-core-command:59882\" , \"name\" : \"Temperature\" , \"path\" : \"/api/v2/device/name/Modbus-TCP-Temperature-Sensor/Temperature\" , \"parameters\" : [ { \"valueType\" : \"Float32\" , \"resourceName\" : \"Temperature\" } ] } ] } ], \"statusCode\" : 200 }","title":"Find Executable Commands"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-set-command","text":"Execute the SET command according to the url and parameterNames , replacing [host] with the server IP when running the SET command. 
$ curl http://localhost:59882/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold \\ -H \"Content-Type:application/json\" -X PUT \\ -d '{\"ThermostatL\":\"15\",\"ThermostatH\":\"100\"}'","title":"Execute SET command"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-get-command","text":"Replace [host] with the server IP when running the GET command. $ curl http://localhost:59882/api/v2/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold | json_pp { \"statusCode\" : 200 , \"apiVersion\" : \"v2\" , \"event\" : { \"origin\" : 1624324686964377495 , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"f3d44a0f-d2c3-4ef6-9441-ad6b1bfb8a9e\" , \"sourceName\" : \"AlarmThreshold\" , \"readings\" : [ { \"resourceName\" : \"ThermostatL\" , \"value\" : \"1.500000e+01\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"9aa879a0-c184-476b-8124-34d35a2a51f3\" , \"valueType\" : \"Float32\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"origin\" : 1624324686963970614 , \"profileName\" : \"Ethernet-Temperature-Sensor\" }, { \"value\" : \"1.000000e+02\" , \"resourceName\" : \"ThermostatH\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"bf7df23b-4338-4b93-a8bd-7abd5e848379\" , \"valueType\" : \"Float32\" , \"mediaType\" : \"\" , \"binaryValue\" : null , \"origin\" : 1624324686964343768 , \"profileName\" : \"Ethernet-Temperature-Sensor\" } ], \"apiVersion\" : \"v2\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" } }","title":"Execute GET command"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#autoevent","text":"The AutoEvent is defined in the [[DeviceList.AutoEvents]] section of the device configuration file: [[DeviceList.AutoEvents]] Interval = \"30s\" OnChange = false SourceName = \"Temperature\" After service startup, query core-data's API. The results show that the service auto-executes the command every 30 seconds. 
$ curl http://localhost:59880/api/v2/event/device/name/Modbus-TCP-Temperature-Sensor | json_pp { \"events\" : [ { \"readings\" : [ { \"value\" : \"5.300000e+01\" , \"binaryValue\" : null , \"origin\" : 1624325219186870396 , \"id\" : \"68a66a35-d3cf-48a2-9bf0-09578267a3f7\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"mediaType\" : \"\" , \"valueType\" : \"Float32\" , \"resourceName\" : \"Temperature\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" } ], \"apiVersion\" : \"v2\" , \"origin\" : 1624325219186977564 , \"id\" : \"4b235616-7304-419e-97ae-17a244911b1c\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"sourceName\" : \"Temperature\" , \"profileName\" : \"Ethernet-Temperature-Sensor\" }, { \"readings\" : [ { \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"resourceName\" : \"Temperature\" , \"valueType\" : \"Float32\" , \"id\" : \"56b7e8be-7ce8-4fa9-89e2-3a1a7ef09050\" , \"origin\" : 1624325189184675483 , \"value\" : \"5.300000e+01\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" } ], \"profileName\" : \"Ethernet-Temperature-Sensor\" , \"sourceName\" : \"Temperature\" , \"deviceName\" : \"Modbus-TCP-Temperature-Sensor\" , \"id\" : \"fbab44f5-9775-4c09-84bd-cbfb00001115\" , \"origin\" : 1624325189184721223 , \"apiVersion\" : \"v2\" }, ... ], \"apiVersion\" : \"v2\" , \"statusCode\" : 200 }","title":"AutoEvent"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-the-modbus-rtu-device","text":"This section describes how to connect the Modbus RTU device. We use Ubuntu OS and a Modbus RTU device for this example. 
Modbus RTU device: http://www.icpdas.com/root/product/solutions/remote_io/rs-485/i-7000_m-7000/i-7055.html User manual: http://ftp.icpdas.com/pub/cd/8000cd/napdos/7000/manual/7000dio.pdf","title":"Set up the Modbus RTU Device"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#connect-the-device","text":"Connect the device to your machine (laptop, gateway, etc.) via an RS485/USB adaptor and power it on. Execute a command on the machine, and you can find a message like the following: $ dmesg | grep tty ... ... [18006.167625] usb 1-1: FTDI USB Serial Device converter now attached to ttyUSB0 This shows the USB adaptor attached to ttyUSB0; you can then check whether the device path exists: $ ls /dev/ttyUSB0 /dev/ttyUSB0","title":"Connect the device"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#deploy-the-edgex","text":"Modify the docker-compose.yml file to mount the device path to the device-modbus: Change the permission of the device path sudo chmod 777 /dev/ttyUSB0 Open the docker-compose.yml file with a text editor. $ nano docker-compose.yml Modify the device-modbus section and save the file device-modbus: ... 
devices: - /dev/ttyUSB0 Deploy the EdgeX $ docker-compose up -d","title":"Deploy the EdgeX"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#add-device-to-edgex","text":"Create the device profile according to the register table $ nano modbus.rtu.demo.profile.yml name : \"Modbus-RTU-IO-Module\" manufacturer : \"icpdas\" model : \"M-7055\" labels : - \"Modbus RTU\" - \"IO Module\" description : \"This IO module offers 8 isolated channels for digital input and 8 isolated channels for digital output.\" deviceResources : - name : \"DO0\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 0 } properties : valueType : \"Bool\" readWrite : \"RW\" - name : \"DO1\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 1 } properties : valueType : \"Bool\" readWrite : \"RW\" - name : \"DO2\" isHidden : true description : \"On/Off , 0-OFF 1-ON\" attributes : { primaryTable : \"COILS\" , startingAddress : 2 } properties : valueType : \"Bool\" readWrite : \"RW\" deviceCommands : - name : \"DO\" readWrite : \"RW\" isHidden : false resourceOperations : - { deviceResource : \"DO0\" } - { deviceResource : \"DO1\" } - { deviceResource : \"DO2\" } Upload the device profile $ curl http://localhost:59881/api/v2/deviceprofile/uploadfile \\ -F \"file=@modbus.rtu.demo.profile.yml\" Create the device entity in EdgeX. You can find the Modbus RTU settings on the device or in the user manual. 
$ curl http://localhost:59881/api/v2/device -H \"Content-Type:application/json\" -X POST \\ -d '[ { \"apiVersion\" : \"v2\" , \"device\" : { \"name\" : \"Modbus-RTU-IO-Module\" , \"description\" : \"The device can be used to monitor the status of the digital input and digital output channels.\" , \"labels\" :[ \"IO Module\" , \"Modbus RTU\" ], \"serviceName\" : \"device-modbus\" , \"profileName\" : \"Modbus-RTU-IO-Module\" , \"protocols\" :{ \"modbus-rtu\" :{ \"Address\" : \"/dev/ttyUSB0\" , \"BaudRate\" : \"19200\" , \"DataBits\" : \"8\" , \"StopBits\" : \"1\" , \"Parity\" : \"N\" , \"UnitID\" : \"1\" , \"Timeout\" : \"5\" , \"IdleTimeout\" : \"5\" } }, \"adminState\" : \"UNLOCKED\" , \"operatingState\" : \"UP\" } } ]' Test the GET or SET command","title":"Add device to EdgeX"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/","text":"SNMP EdgeX - Ireland Release Overview In this example, you add a new Patlite Signal Tower which communicates via SNMP. This example demonstrates how to connect a device through the SNMP Device Service. Patlite Signal Tower, model NHL-FB2 Setup Hardware needed In order to exercise this example, you will need the following hardware A computer able to run EdgeX Foundry A Patlite Signal Tower (NHL-FB2 model) Both the computer and Patlite must be connected to the same ethernet network Software needed In addition to the hardware, you will need the following software Docker Docker Compose EdgeX Foundry V2 (Ireland release) curl to run REST commands (you can also use a tool like Postman) If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry. Add the SNMP Device Service to your docker-compose.yml The EdgeX docker-compose.yml file used to run EdgeX must include the SNMP device service for this example. 
You can either: download and use the docker-compose.yml file provided with this example or use the EdgeX Compose Builder tool to create your own custom docker-compose.yml file adding device-snmp. See Getting Started using Docker if you need assistance running EdgeX once you have your Docker Compose file. Add the SNMP Device Profile and Device SNMP devices, like the Patlite Signal Tower, provide a set of managed objects to get and set property information on the associated device. Each managed object has an address called an object identifier (or OID) that you use to interact with the SNMP device's managed object. You use the OID to query the state of the device or to set properties on the device. In the case of the Patlite, there are managed objects for the colored lights and the buzzer of the device. You can read the current state of a colored light (get) or turn the light on (set) by making a call to the proper OIDs for the associated managed object. For example, on the NH series signal towers used in this example, a \"get\" call to the 1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1 OID returns the current state of the Red signal light. A return value of 1 would signal the light is off. A return value of 2 says the light is on. A return value of 3 says the light is flashing. Read this SNMP tutorial to learn more about the basics of the SNMP protocol. See the Patlite NH Series User's Manual for more information on the SNMP OIDs and function calls and parameters needed for some requests. Add the Patlite Device Profile A device profile has been created for you to get and set the signal tower's three colored lights and to get and set the buzzer. The patlite-snmp device profile defines three device resources for each of the lights and the buzzer. 
Current State, a read request device resource to get the current state of the requested light or buzzer Control State, a write request device resource to set the current state of the light or buzzer Timer, a write request device resource used in combination with the control state to set the state after the number of seconds provided by the timer resource Note that the attributes of each device resource specify the SNMP OID that the device service will use to make a request of the signal tower. For example, the device resource YAML below (taken from the profile) provides the means to get the current Red light state. Note that a specific OID is provided that is unique to the RED light, current state property. - name : \"RedLightCurrentState\" isHidden : false description : \"red light current state\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"R\" defaultValue : \"1\" Below are the device resource definitions for the Red light control state and timer. Again, unique OIDs are provided as attributes for each property. - name : \"RedLightControlState\" isHidden : true description : \"red light state\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"W\" defaultValue : \"1\" - name : \"RedLightTimer\" isHidden : true description : \"red light timer\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"W\" defaultValue : \"1\" In order to set the Red light on, one would need to send an SNMP request to set OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1 to a value of 2 (on state) along with a number of seconds delay to the timer at OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1 . Sending a zero value (0) to the timer would say you want to turn the light on immediately. 
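The coupling of the control-state and timer OIDs described above can be illustrated with a small Go sketch. This is a hypothetical helper, not code from the SNMP device service; the function buildRedLightSet and its validation rules are invented for the example, while the OID values come from the patlite-snmp profile shown above.

```go
package main

import (
	"errors"
	"fmt"
)

// Red-light OIDs from the patlite-snmp device profile.
const (
	redControlOID = "1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1"
	redTimerOID   = "1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1"
)

// buildRedLightSet validates a control state and pairs it with a timer
// delay in seconds (0 = act immediately), returning the OID-to-value map
// that a single SNMP set request would need to carry for both OIDs.
func buildRedLightSet(state, timerSeconds int) (map[string]int, error) {
	if state < 1 || state > 4 {
		return nil, errors.New("control state must be between 1 and 4")
	}
	if timerSeconds < 0 {
		return nil, errors.New("timer must be zero or more seconds")
	}
	return map[string]int{
		redControlOID: state,
		redTimerOID:   timerSeconds,
	}, nil
}

func main() {
	// Turn the red light on (state 2) immediately (timer 0).
	pdu, err := buildRedLightSet(2, 0)
	fmt.Println(pdu, err)
}
```

Pairing the two values in one structure mirrors why the profile defines deviceCommands that write both resources in a single operation.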
Because setting a light or buzzer requires both the control state and timer OIDs to be set together (simultaneously), the device profile contains deviceCommands to set the light and timer device resources (and therefore their SNMP property OIDs) in a single operation. Here is the device command to set the Red light. - name : \"RedLight\" readWrite : \"W\" isHidden : false resourceOperations : - { deviceResource : \"RedLightControlState\" } - { deviceResource : \"RedLightTimer\" } You will need to upload this profile into core metadata. Download the Patlite device profile to a convenient directory. Then, using the following curl command, request the profile be uploaded into core metadata. curl -X 'POST' 'http://localhost:59881/api/v2/deviceprofile/uploadfile' --form 'file=@\"/home/yourfilelocationhere/patlite-snmp.yml\"' Alert Note that the curl command above assumes that core metadata is available at localhost . Change localhost to the host address of your core metadata service. Also note that you will need to replace the /home/yourfilelocationhere path with the path where the profile resides. Add the Patlite Device With the Patlite device profile now in metadata, you can add the Patlite device in metadata. When adding the device, you typically need to provide the name, description, labels and admin/op states of the device. You will also need to associate the device to a device service (in this case the device-snmp device service). You will need to associate the new device to a profile - the patlite profile just added in the step above. And you will need to provide the protocol information (such as the address and port of the device) to tell the device service where it can find the physical device. If you wish the device service to automatically get readings from the device, you will also need to provide AutoEvent properties when creating the device. The curl command to POST the new Patlite device (named patlite1 ) into metadata is provided below. 
You will need to change the protocol Address (currently 10.0.0.14 ) and Port (currently 161 ) to point to your Patlite on your network. In this request to add a new device, AutoEvents are set up to collect the current state of the 3 lights and buzzer every 10 seconds. Notice the reference to the current state device resources in setting up the AutoEvents. curl -X 'POST' 'http://localhost:59881/api/v2/device' -d '[{\"apiVersion\": \"v2\", \"device\": {\"name\": \"patlite1\",\"description\": \"patlite #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"patlite\"],\"serviceName\": \"device-snmp\",\"profileName\": \"patlite-snmp-profile\",\"protocols\": {\"TCP\": {\"Address\": \"10.0.0.14\",\"Port\": \"161\"}}, \"AutoEvents\":[{\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"RedLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"GreenLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"AmberLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"BuzzerCurrentState\"}]}}]' Info Rather than making a REST API call into metadata to add the device, you could alternatively provide device configuration files that define the device. These device configuration files would then have to be provided to the service when it starts up. Since you did not create a new Docker image containing the device configuration and just used the existing SNMP device service Docker image, it was easier to make simple API calls to add the profile and device. However, this would mean the profile and device would need to be added each time metadata's database is cleaned out and reset. Test If the device service is up and running and the profile and device have been added correctly, you should now be able to interact with the Patlite via the core command service (and SNMP under the covers via the SNMP device service). 
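The command-service endpoints used in the tests that follow all share the same path shape. As a hedged illustration (the helper function commandURL is invented here, not part of any EdgeX SDK), the pattern can be made explicit in Go:

```go
package main

import "fmt"

// commandURL builds a core command service endpoint of the form
// http://<host>:<port>/api/v2/device/name/<device>/<command>,
// matching the curl requests used against the Patlite device.
func commandURL(host string, port int, device, command string) string {
	return fmt.Sprintf("http://%s:%d/api/v2/device/name/%s/%s",
		host, port, device, command)
}

func main() {
	// The GET request for the Green light's current state.
	fmt.Println(commandURL("localhost", 59882, "patlite1", "GreenLightCurrentState"))
	// prints http://localhost:59882/api/v2/device/name/patlite1/GreenLightCurrentState
}
```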
Get the Current State To get the current state of a light (in the example below the Green light), make a curl request like the following to the command service. curl 'http://localhost:59882/api/v2/device/name/patlite1/GreenLightCurrentState' | json_pp Alert Note that the curl command above assumes that the core command service is available at localhost . Change the host address of your core command service if it is not available at localhost . The results should look something like those below. { \"statusCode\" : 200 , \"apiVersion\" : \"v2\" , \"event\" : { \"origin\" : 1632188382048586660 , \"deviceName\" : \"patlite1\" , \"sourceName\" : \"GreenLightCurrentState\" , \"id\" : \"1e2a7ba1-c273-46d1-b919-207aafbc60ba\" , \"profileName\" : \"patlite-snmp-profile\" , \"apiVersion\" : \"v2\" , \"readings\" : [ { \"origin\" : 1632188382048586660 , \"resourceName\" : \"GreenLightCurrentState\" , \"deviceName\" : \"patlite1\" , \"id\" : \"a41ac1cf-703b-4572-bdef-8487e9a7100e\" , \"valueType\" : \"Int32\" , \"value\" : \"1\" , \"profileName\" : \"patlite-snmp-profile\" } ] } } Info Note the value will be one of 4 numbers indicating the current state of the light Value Description 1 Off 2 On - solid and not flashing 3 Flashing on 4 Flashing quickly on Set a light or buzzer on To turn a signal tower light or the buzzer on, you can issue a PUT device command via the core command service. The example below turns on the Green light. curl --location --request PUT 'http://localhost:59882/api/v2/device/name/patlite1/GreenLight' --header 'Content-Type: application/json' --data-raw '{\"GreenLightControlState\":\"2\",\"GreenLightTimer\":\"0\"}' This command sets the light on (solid versus flashing) immediately (as denoted by the GreenLightTimer parameter being set to 0). The timer value is the number of seconds to delay in making the request to the light or buzzer. Again, the control state can be set to one of four values as listed in the table above. 
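The four-value state table above can be captured in a small Go lookup, handy when post-processing readings from the current-state resources. The map name lightStateNames is invented for this sketch; only the value meanings come from the table.

```go
package main

import "fmt"

// lightStateNames maps the Int32 reading values returned by a
// *CurrentState resource to their meanings, per the state table.
var lightStateNames = map[int]string{
	1: "Off",
	2: "On - solid and not flashing",
	3: "Flashing on",
	4: "Flashing quickly on",
}

func main() {
	// The reading value "1" from the GET example decodes to:
	fmt.Println(lightStateNames[1]) // Off
}
```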
Alert Again note that the curl command above assumes that the core command service is available at localhost . Change the host address of your core command service if it is not available at localhost . Observations Did you notice that EdgeX obfuscates almost all information about SNMP, and managed objects and OIDs? The power of EdgeX is to abstract away protocol differences so that to a user, getting data from a device or setting properties on a device such as this Patlite signal tower is as easy as making simple REST calls into the command service. The only place that protocol information is really seen is in the device profile (where the attributes specify the SNMP OIDs). Of course, the device service must be coded to deal with the protocol specifics and it must know how to translate the simple command REST calls into protocol specific requests of the device. But even device service creation is made easier with the use of the SDKs which provide much of the boilerplate code found in almost every device service regardless of the underlying device protocol.","title":"SNMP"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#snmp","text":"EdgeX - Ireland Release","title":"SNMP"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#overview","text":"In this example, you add a new Patlite Signal Tower which communicates via SNMP. This example demonstrates how to connect a device through the SNMP Device Service. 
Patlite Signal Tower, model NHL-FB2","title":"Overview"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#setup","text":"","title":"Setup"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#hardware-needed","text":"In order to exercise this example, you will need the following hardware A computer able to run EdgeX Foundry A Patlite Signal Tower (NHL-FB2 model) Both the computer and Patlite must be connected to the same ethernet network","title":"Hardware needed"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#software-needed","text":"In addition to the hardware, you will need the following software Docker Docker Compose EdgeX Foundry V2 (Ireland release) curl to run REST commands (you can also use a tool like Postman) If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry.","title":"Software needed"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-snmp-device-service-to-your-docker-composeyml","text":"The EdgeX docker-compose.yml file used to run EdgeX must include the SNMP device service for this example. You can either: download and use the docker-compose.yml file provided with this example or use the EdgeX Compose Builder tool to create your own custom docker-compose.yml file adding device-snmp. See Getting Started using Docker if you need assistance running EdgeX once you have your Docker Compose file.","title":"Add the SNMP Device Service to your docker-compose.yml"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-snmp-device-profile-and-device","text":"SNMP devices, like the Patlite Signal Tower, provide a set of managed objects to get and set property information on the associated device. Each managed object has an address called an object identifier (or OID) that you use to interact with the SNMP device's managed object. You use the OID to query the state of the device or to set properties on the device. 
In the case of the Patlite, there are managed objects for the colored lights and the buzzer of the device. You can read the current state of a colored light (get) or turn the light on (set) by making a call to the proper OIDs for the associated managed object. For example, on the NH series signal towers used in this example, a \"get\" call to the 1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1 OID returns the current state of the Red signal light. A return value of 1 would signal the light is off. A return value of 2 says the light is on. A return value of 3 says the light is flashing. Read this SNMP tutorial to learn more about the basics of the SNMP protocol. See the Patlite NH Series User's Manual for more information on the SNMP OIDs and function calls and parameters needed for some requests.","title":"Add the SNMP Device Profile and Device"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-patlite-device-profile","text":"A device profile has been created for you to get and set the signal tower's three colored lights and to get and set the buzzer. The patlite-snmp device profile defines three device resources for each of the lights and the buzzer. Current State, a read request device resource to get the current state of the requested light or buzzer Control State, a write request device resource to set the current state of the light or buzzer Timer, a write request device resource used in combination with the control state to set the state after the number of seconds provided by the timer resource Note that the attributes of each device resource specify the SNMP OID that the device service will use to make a request of the signal tower. For example, the device resource YAML below (taken from the profile) provides the means to get the current Red light state. Note that a specific OID is provided that is unique to the RED light, current state property. 
- name : \"RedLightCurrentState\" isHidden : false description : \"red light current state\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"R\" defaultValue : \"1\" Below are the device resource definitions for the Red light control state and timer. Again, unique OIDs are provided as attributes for each property. - name : \"RedLightControlState\" isHidden : true description : \"red light state\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"W\" defaultValue : \"1\" - name : \"RedLightTimer\" isHidden : true description : \"red light timer\" attributes : { oid : \"1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1\" , community : \"private\" } properties : valueType : \"Int32\" readWrite : \"W\" defaultValue : \"1\" In order to set the Red light on, one would need to send an SNMP request to set OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1 to a value of 2 (on state) along with a number of seconds delay to the timer at OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1 . Sending a zero value (0) to the timer would say you want to turn the light on immediately. Because setting a light or buzzer requires both the control state and timer OIDs to be set together (simultaneously), the device profile contains deviceCommands to set the light and timer device resources (and therefore their SNMP property OIDs) in a single operation. Here is the device command to set the Red light. - name : \"RedLight\" readWrite : \"W\" isHidden : false resourceOperations : - { deviceResource : \"RedLightControlState\" } - { deviceResource : \"RedLightTimer\" } You will need to upload this profile into core metadata. Download the Patlite device profile to a convenient directory. Then, using the following curl command, request the profile be uploaded into core metadata. 
curl -X 'POST' 'http://localhost:59881/api/v2/deviceprofile/uploadfile' --form 'file=@\"/home/yourfilelocationhere/patlite-snmp.yml\"' Alert Note that the curl command above assumes that core metadata is available at localhost . Change localhost to the host address of your core metadata service. Also note that you will need to replace the /home/yourfilelocationhere path with the path where the profile resides.","title":"Add the Patlite Device Profile"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-patlite-device","text":"With the Patlite device profile now in metadata, you can add the Patlite device in metadata. When adding the device, you typically need to provide the name, description, labels and admin/op states of the device when creating it. You will also need to associate the device to a device service (in this case the device-snmp device service). You will need to associate the new device to a profile - the patlite profile just added in the step above. And you will need to provide the protocol information (such as the address and port of the device) to tell the device service where it can find the physical device. If you wish the device service to automatically get readings from the device, you will also need to provide AutoEvent properties when creating the device. The curl command to POST the new Patlite device (named patlite1 ) into metadata is provided below. You will need to change the protocol Address (currently 10.0.0.14 ) and Port (currently 161 ) to point to your Patlite on your network. In this request to add a new device, AutoEvents are set up to collect the current state of the 3 lights and buzzer every 10 seconds. Notice the reference to the current state device resources in setting up the AutoEvents. 
curl -X 'POST' 'http://localhost:59881/api/v2/device' -d '[{\"apiVersion\": \"v2\", \"device\": {\"name\": \"patlite1\",\"description\": \"patlite #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"patlite\"],\"serviceName\": \"device-snmp\",\"profileName\": \"patlite-snmp-profile\",\"protocols\": {\"TCP\": {\"Address\": \"10.0.0.14\",\"Port\": \"161\"}}, \"AutoEvents\":[{\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"RedLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"GreenLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"AmberLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"BuzzerCurrentState\"}]}}]' Info Rather than making a REST API call into metadata to add the device, you could alternately provide device configuration files that define the device. These device configuration files would then have to be provided to the service when it starts up. Since you did not create a new Docker image containing the device configuration and just used the existing SNMP device service Docker image, it was easier to make simple API calls to add the profile and device. However, this would mean the profile and device would need to be added each time metadata's database is cleaned out and reset.","title":"Add the Patlite Device"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#test","text":"If the device service is up and running and the profile and device have been added correctly, you should now be able to interact with the Patlite via the core command service (and SNMP under the covers via the SNMP device service).","title":"Test"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#get-the-current-state","text":"To get the current state of a light (in the example below the Green light), make a curl request like the following of the command service. 
curl 'http://localhost:59882/api/v2/device/name/patlite1/GreenLightCurrentState' | json_pp Alert Note that the curl command above assumes that the core command service is available at localhost . Change the host address of your core command service if it is not available at localhost . The results should look something like the example below. { \"statusCode\" : 200 , \"apiVersion\" : \"v2\" , \"event\" : { \"origin\" : 1632188382048586660 , \"deviceName\" : \"patlite1\" , \"sourceName\" : \"GreenLightCurrentState\" , \"id\" : \"1e2a7ba1-c273-46d1-b919-207aafbc60ba\" , \"profileName\" : \"patlite-snmp-profile\" , \"apiVersion\" : \"v2\" , \"readings\" : [ { \"origin\" : 1632188382048586660 , \"resourceName\" : \"GreenLightCurrentState\" , \"deviceName\" : \"patlite1\" , \"id\" : \"a41ac1cf-703b-4572-bdef-8487e9a7100e\" , \"valueType\" : \"Int32\" , \"value\" : \"1\" , \"profileName\" : \"patlite-snmp-profile\" } ] } } Info Note the value will be one of 4 numbers indicating the current state of the light Value Description 1 Off 2 On - solid and not flashing 3 Flashing on 4 Flashing quickly on","title":"Get the Current State"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#set-a-light-or-buzzer-on","text":"To turn a signal tower light or the buzzer on, you can issue a PUT device command via the core command service. The example below turns on the Green light. curl --location --request PUT 'http://localhost:59882/api/v2/device/name/patlite1/GreenLight' --header 'Content-Type: application/json' --data-raw '{\"GreenLightControlState\":\"2\",\"GreenLightTimer\":\"0\"}' This command sets the light on (solid versus flashing) immediately (as denoted by the GreenLightTimer parameter being set to 0). The timer value is the number of seconds delay in making the request to the light or buzzer. Again, the control state can be set to one of four values as listed in the table above. Alert Again note that the curl command above assumes that the core command service is available at localhost . 
Change the host address of your core command service if it is not available at localhost .","title":"Set a light or buzzer on"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#observations","text":"Did you notice that EdgeX hides almost all information about SNMP, managed objects, and OIDs? The power of EdgeX is to abstract away protocol differences so that to a user, getting data from a device or setting properties on a device such as this Patlite signal tower is as easy as making simple REST calls into the command service. The only place that protocol information is really seen is in the device profile (where the attributes specify the SNMP OIDs). Of course, the device service must be coded to deal with the protocol specifics and it must know how to translate the simple command REST calls into protocol-specific requests of the device. But even device service creation is made easier with the use of the SDKs which provide much of the boilerplate code found in almost every device service regardless of the underlying device protocol.","title":"Observations"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/","text":"Modbus - Data Type Conversion In use cases where the device resource uses an integer data type with a float scale, precision can be lost following transformation. For example, a Modbus device stores the temperature and humidity in an Int16 data type with a float scale of 0.01. If the temperature is 26.53, the read value is 2653. However, following transformation, the value is 26. To avoid this scenario, the device resource data type must differ from the value descriptor data type. This is achieved using the optional rawType attribute in the device profile to define the binary data read from the Modbus device, and a valueType to indicate what data type the user wants to receive. 
If the rawType attribute exists, the device service parses the binary data according to the defined rawType , then casts the value according to the valueType defined in the properties of the device resources. The following extract from a device profile defines the rawType as Int16 and the valueType as Float32: EdgeX 2.0 For EdgeX 2.0 the device profile has many changes. Please see Device Profile section for more details. Example - Device Profile deviceResources : - name : \"humidity\" description : \"The response value is the result of the original value multiplied by 100.\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : \"1\" , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.01\" units : \"%RH\" - name : \"temperature\" description : \"The response value is the result of the original value multiplied by 100.\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : \"2\" , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.01\" units : \"degrees Celsius\" Read Command A Read command is executed as follows: The device service executes the Read command to read binary data The binary reading data is parsed as an Int16 data type The integer value is cast to a Float32 value Write Command A Write command is executed as follows: The device service casts the requested Float32 value to an integer value The integer value is converted to binary data The device service executes the Write command When to Transform Data You generally need to transform data when scaling readings between a 16-bit integer and a float value. The following limitations apply: rawType supports only Int16 and Uint16 data types The corresponding valueType must be Float32 or Float64 If an unsupported data type is defined for the rawType attribute, the device service throws an exception similar to the following: Read command failed. 
Cmd:temperature err:the raw type Int32 is not supported Supported Transformations The supported transformations are as follows: From rawType To valueType Int16 Float32 Int16 Float64 Uint16 Float32 Uint16 Float64","title":"Modbus - Data Type Conversion"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#modbus-data-type-conversion","text":"In use cases where the device resource uses an integer data type with a float scale, precision can be lost following transformation. For example, a Modbus device stores the temperature and humidity in an Int16 data type with a float scale of 0.01. If the temperature is 26.53, the read value is 2653. However, following transformation, the value is 26. To avoid this scenario, the device resource data type must differ from the value descriptor data type. This is achieved using the optional rawType attribute in the device profile to define the binary data read from the Modbus device, and a valueType to indicate what data type the user wants to receive. If the rawType attribute exists, the device service parses the binary data according to the defined rawType , then casts the value according to the valueType defined in the properties of the device resources. The following extract from a device profile defines the rawType as Int16 and the valueType as Float32: EdgeX 2.0 For EdgeX 2.0 the device profile has many changes. Please see Device Profile section for more details. 
Example - Device Profile deviceResources : - name : \"humidity\" description : \"The response value is the result of the original value multiplied by 100.\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : \"1\" , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.01\" units : \"%RH\" - name : \"temperature\" description : \"The response value is the result of the original value multiplied by 100.\" attributes : { primaryTable : \"HOLDING_REGISTERS\" , startingAddress : \"2\" , rawType : \"Int16\" } properties : valueType : \"Float32\" readWrite : \"R\" scale : \"0.01\" units : \"degrees Celsius\"","title":"Modbus - Data Type Conversion"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#read-command","text":"A Read command is executed as follows: The device service executes the Read command to read binary data The binary reading data is parsed as an Int16 data type The integer value is cast to a Float32 value","title":"Read Command"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#write-command","text":"A Write command is executed as follows: The device service casts the requested Float32 value to an integer value The integer value is converted to binary data The device service executes the Write command","title":"Write Command"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#when-to-transform-data","text":"You generally need to transform data when scaling readings between a 16-bit integer and a float value. The following limitations apply: rawType supports only Int16 and Uint16 data types The corresponding valueType must be Float32 or Float64 If an unsupported data type is defined for the rawType attribute, the device service throws an exception similar to the following: Read command failed. 
Cmd:temperature err:the raw type Int32 is not supported","title":"When to Transform Data"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#supported-transformations","text":"The supported transformations are as follows: From rawType To valueType Int16 Float32 Int16 Float64 Uint16 Float32 Uint16 Float64","title":"Supported Transformations"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/","text":"Sending and Consuming Binary Data From EdgeX Device Services EdgeX - Ireland Release Overview In this example, we will demonstrate how to send EdgeX Events and Readings that contain arbitrary binary data. DeviceService Implementation Device Profile To indicate that a deviceResource represents a Binary type, the following format is used: deviceResources : - name : \"camera_snapshot\" isHidden : false description : \"snapshot from camera\" properties : valueType : \"Binary\" readWrite : \"R\" mediaType : \"image/jpeg\" deviceCommands : - name : \"OnvifSnapshot\" isHidden : false readWrite : \"R\" resourceOperations : - { deviceResource : \"camera_snapshot\" } Device Service Here is a snippet from a hypothetical Device Service's HandleReadCommands() method that produces an event that represents a JPEG image captured from a camera: if req . DeviceResourceName == \"camera_snapshot\" { data , err := cameraClient . GetSnapshot () // returns ([]byte, error) check ( err ) cv , err := sdkModels . NewCommandValue ( reqs [ i ]. DeviceResourceName , common . ValueTypeBinary , data ) check ( err ) responses [ i ] = cv } Calling Device Service Command Querying core-metadata for the Device's Commands and DeviceName provides the following as the URL to request a reading from the snapshot command: http://localhost:59990/api/v2/device/name/camera-device/OnvifSnapshot Unlike with non-binary Events, making a request to this URL will return an event in CBOR representation. CBOR is a representation of binary data loosely based off of the JSON data model. 
This Event will not be human-readable. Parsing CBOR Encoded Events To access the data enclosed in these Events and Readings, they will first need to be decoded from CBOR. The following is a simple Go program that reads in the CBOR response from a file containing the response from the previous HTTP request. The Go library recommended for parsing these events can be found at https://github.com/fxamacker/cbor/ package main import ( \"io/ioutil\" \"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos/requests\" \"github.com/fxamacker/cbor/v2\" ) func check ( e error ) { if e != nil { panic ( e ) } } func main () { // Read in our cbor data fileBytes , err := ioutil . ReadFile ( \"/Users/johndoe/Desktop/image.cbor\" ) check ( err ) // Decode into an EdgeX Event eventRequest := & requests . AddEventRequest {} err = cbor . Unmarshal ( fileBytes , eventRequest ) check ( err ) // Grab binary data and write to a file imgBytes := eventRequest . Event . Readings [ 0 ]. BinaryValue ioutil . WriteFile ( \"/Users/johndoe/Desktop/image.jpeg\" , imgBytes , 0644 ) } In the code above, the CBOR data is read into a byte array , an EdgeX Event struct is created, and cbor.Unmarshal parses the CBOR-encoded data and stores the result in the Event struct. Finally, the binary payload is written to a file from the BinaryValue field of the Reading. This method would work as well for decoding Events off the EdgeX message bus. Encoding Arbitrary Structures in Events The Device SDK's NewCommandValue() function above only accepts a byte slice as binary data. Any arbitrary Go structure can be encoded in a binary reading by first encoding the structure into a byte slice using CBOR. The following illustrates this method: // DeviceService HandleReadCommands() code: foo := struct { X int Y int Z int Bar string } { X : 7 , Y : 3 , Z : 100 , Bar : \"Hello world!\" , } data , err := cbor . Marshal ( & foo ) check ( err ) cv , err := sdkModels . NewCommandValue ( reqs [ i ]. DeviceResourceName , common . 
ValueTypeBinary , data ) responses [ i ] = cv This code takes the anonymous struct with fields X, Y, Z, and Bar (of different types) and serializes it into a byte slice using the same cbor library, and passes the output to NewCommandValue() . When consuming these events, another level of decoding will need to take place to get the structure out of the binary payload. func main () { // Read in our cbor data fileBytes , err := ioutil . ReadFile ( \"/Users/johndoe/Desktop/foo.cbor\" ) check ( err ) // Decode into an EdgeX Event eventRequest := & requests . AddEventRequest {} err = cbor . Unmarshal ( fileBytes , eventRequest ) check ( err ) // Decode into arbitrary type foo := struct { X int Y int Z int Bar string }{} err = cbor . Unmarshal ( eventRequest . Event . Readings [ 0 ]. BinaryValue , & foo ) check ( err ) fmt . Println ( foo ) } This code takes a command response in the same format as the previous example, but uses the cbor library to decode the CBOR data inside the EdgeX Reading's BinaryValue field. Using this approach, an Event can be sent containing an arbitrary, flexible structure. 
Use cases could be a Reading containing multiple images, a variable length list of integer read-outs, etc.","title":"Sending and Consuming Binary Data From EdgeX Device Services"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#sending-and-consuming-binary-data-from-edgex-device-services","text":"EdgeX - Ireland Release","title":"Sending and Consuming Binary Data From EdgeX Device Services"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#overview","text":"In this example, we will demonstrate how to send EdgeX Events and Readings that contain arbitrary binary data.","title":"Overview"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#deviceservice-implementation","text":"","title":"DeviceService Implementation"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#device-profile","text":"To indicate that a deviceResource represents a Binary type, the following format is used: deviceResources : - name : \"camera_snapshot\" isHidden : false description : \"snapshot from camera\" properties : valueType : \"Binary\" readWrite : \"R\" mediaType : \"image/jpeg\" deviceCommands : - name : \"OnvifSnapshot\" isHidden : false readWrite : \"R\" resourceOperations : - { deviceResource : \"camera_snapshot\" }","title":"Device Profile"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#device-service","text":"Here is a snippet from a hypothetical Device Service's HandleReadCommands() method that produces an event that represents a JPEG image captured from a camera: if req . DeviceResourceName == \"camera_snapshot\" { data , err := cameraClient . GetSnapshot () // returns ([]byte, error) check ( err ) cv , err := sdkModels . NewCommandValue ( reqs [ i ]. DeviceResourceName , common . 
ValueTypeBinary , data ) check ( err ) responses [ i ] = cv }","title":"Device Service"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#calling-device-service-command","text":"Querying core-metadata for the Device's Commands and DeviceName provides the following as the URL to request a reading from the snapshot command: http://localhost:59990/api/v2/device/name/camera-device/OnvifSnapshot Unlike with non-binary Events, making a request to this URL will return an event in CBOR representation. CBOR is a representation of binary data loosely based off of the JSON data model. This Event will not be human-readable.","title":"Calling Device Service Command"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#parsing-cbor-encoded-events","text":"To access the data enclosed in these Events and Readings, they will first need to be decoded from CBOR. The following is a simple Go program that reads in the CBOR response from a file containing the response from the previous HTTP request. The Go library recommended for parsing these events can be found at https://github.com/fxamacker/cbor/ package main import ( \"io/ioutil\" \"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos/requests\" \"github.com/fxamacker/cbor/v2\" ) func check ( e error ) { if e != nil { panic ( e ) } } func main () { // Read in our cbor data fileBytes , err := ioutil . ReadFile ( \"/Users/johndoe/Desktop/image.cbor\" ) check ( err ) // Decode into an EdgeX Event eventRequest := & requests . AddEventRequest {} err = cbor . Unmarshal ( fileBytes , eventRequest ) check ( err ) // Grab binary data and write to a file imgBytes := eventRequest . Event . Readings [ 0 ]. BinaryValue ioutil . WriteFile ( \"/Users/johndoe/Desktop/image.jpeg\" , imgBytes , 0644 ) } In the code above, the CBOR data is read into a byte array , an EdgeX Event struct is created, and cbor.Unmarshal parses the CBOR-encoded data and stores the result in the Event struct. 
Finally, the binary payload is written to a file from the BinaryValue field of the Reading. This method would work as well for decoding Events off the EdgeX message bus.","title":"Parsing CBOR Encoded Events"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#encoding-arbitrary-structures-in-events","text":"The Device SDK's NewCommandValue() function above only accepts a byte slice as binary data. Any arbitrary Go structure can be encoded in a binary reading by first encoding the structure into a byte slice using CBOR. The following illustrates this method: // DeviceService HandleReadCommands() code: foo := struct { X int Y int Z int Bar string } { X : 7 , Y : 3 , Z : 100 , Bar : \"Hello world!\" , } data , err := cbor . Marshal ( & foo ) check ( err ) cv , err := sdkModels . NewCommandValue ( reqs [ i ]. DeviceResourceName , common . ValueTypeBinary , data ) responses [ i ] = cv This code takes the anonymous struct with fields X, Y, Z, and Bar (of different types) and serializes it into a byte slice using the same cbor library, and passes the output to NewCommandValue() . When consuming these events, another level of decoding will need to take place to get the structure out of the binary payload. func main () { // Read in our cbor data fileBytes , err := ioutil . ReadFile ( \"/Users/johndoe/Desktop/foo.cbor\" ) check ( err ) // Decode into an EdgeX Event eventRequest := & requests . AddEventRequest {} err = cbor . Unmarshal ( fileBytes , eventRequest ) check ( err ) // Decode into arbitrary type foo := struct { X int Y int Z int Bar string }{} err = cbor . Unmarshal ( eventRequest . Event . Readings [ 0 ]. BinaryValue , & foo ) check ( err ) fmt . Println ( foo ) } This code takes a command response in the same format as the previous example, but uses the cbor library to decode the CBOR data inside the EdgeX Reading's BinaryValue field. Using this approach, an Event can be sent containing an arbitrary, flexible structure. 
Use cases could be a Reading containing multiple images, a variable length list of integer read-outs, etc.","title":"Encoding Arbitrary Structures in Events"},{"location":"examples/Ch-ExamplesVirtualDeviceService/","text":"Using the Virtual Device Service Overview The Virtual Device Service GO can simulate different kinds of devices to generate Events and Readings to the Core Data Micro Service. Furthermore, users can send commands and get responses through the Command and Control Micro Service. The Virtual Device Service allows you to execute functional or performance tests without any real devices. This version of the Virtual Device Service is implemented based on Device SDK GO , and uses ql (an embedded SQL database engine) to simulate virtual resources. Introduction For information on the virtual device service see virtual device under the Microservices tab. Working with the Virtual Device Service Running the Virtual Device Service Container The virtual device service depends on the EdgeX core services. By default, the virtual device service is part of the EdgeX community provided Docker Compose files. If you use one of the community-provided Compose files , you can pull and run EdgeX inclusive of the virtual device service without having to make any changes. Running the Virtual Device Service Natively (in development mode) If you're going to download the source code and run the virtual device service in development mode, make sure that the EdgeX core service containers are up before starting the virtual device service. See how to work with EdgeX in a hybrid environment in order to run the virtual device service outside of containers. This same file will instruct you on how to get and run the virtual device service code . GET command example The virtual device service is configured to send simulated data to core data every few seconds (from 10-30 seconds depending on device - see the device configuration file for AutoEvent details). 
You can exercise the GET request on the command service to see the generated value produced by any of the virtual device's simulated devices. Use the curl command below to exercise the virtual device service API (via core command service). curl -X GET localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Warning The example above assumes your core command service is available on localhost at the default service port of 59882. Also, you must replace your device name and command name in the example above with your virtual device service's identifiers. If you are not sure of the identifiers to use, query the command service for the full list of commands and devices at http://localhost:59882/api/v2/device/all . The virtual device should respond (via the core command service) with event/reading JSON similar to that below. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"event\" : { \"apiVersion\" : \"v2\" , \"id\" : \"3beb5b83-d923-4c8a-b949-c1708b6611c1\" , \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"sourceName\" : \"Int8\" , \"origin\" : 1626227770833093400 , \"readings\" : [ { \"id\" : \"baf42bc7-307a-4647-8876-4e84759fd2ba\" , \"origin\" : 1626227770833093400 , \"deviceName\" : \"Random-Integer-Device\" , \"resourceName\" : \"Int8\" , \"profileName\" : \"Random-Integer-Device\" , \"valueType\" : \"Int8\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"value\" : \"-5\" } ] } } PUT command example - Assign a value to a resource The virtual devices managed by the virtual device service can also be actuated. The virtual device can be told to enable or disable random number generation. When disabled, the virtual device service can be told what value to respond with for all GET operations. When setting the fixed value, the value must be valid for the data type of the virtual device. For example, the minimum value of Int8 cannot be less than -128 and the maximum value cannot be greater than 127. 
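The Int8 bounds just described can be checked client-side before issuing the PUT; a minimal Go sketch, where the validInt8 helper is illustrative and not part of the virtual device service API:

```go
package main

import (
	"fmt"
	"strconv"
)

// validInt8 reports whether a fixed-value string fits the Int8 range
// described above (-128..127). strconv.ParseInt with bitSize 8 enforces
// exactly those bounds, returning a range error for anything outside them.
func validInt8(value string) bool {
	_, err := strconv.ParseInt(value, 10, 8)
	return err == nil
}

func main() {
	fmt.Println(validInt8("123")) // within range, accepted
	fmt.Println(validInt8("200")) // exceeds 127, rejected
}
```

Validating the string up front avoids a round trip to the command service that would fail with a type error.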
Below is an example actuation of one of the virtual devices. In this example, it sets the fixed GET return value to 123 and turns off random generation. curl -X PUT -d '{\"Int8\": \"123\", \"EnableRandomization_Int8\": \"false\"}' localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Note The value of the resource's EnableRandomization property is simultaneously updated to false when sending a put command to assign a specified value to the resource. Therefore, setting EnableRandomization_Int8 to false is not actually required in the call above. Return the virtual device to randomly generating numbers with another PUT call. curl -X PUT -d '{\"EnableRandomization_Int8\": \"true\"}' localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Reference Architectural Diagram Sequence Diagram Virtual Resource Table Schema Column Type DEVICE_NAME STRING COMMAND_NAME STRING DEVICE_RESOURCE_NAME STRING ENABLE_RANDOMIZATION BOOL DATA_TYPE STRING VALUE STRING","title":"Using the Virtual Device Service"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#using-the-virtual-device-service","text":"","title":"Using the Virtual Device Service"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#overview","text":"The Virtual Device Service GO can simulate different kinds of devices to generate Events and Readings to the Core Data Micro Service. Furthermore, users can send commands and get responses through the Command and Control Micro Service. The Virtual Device Service allows you to execute functional or performance tests without any real devices. 
This version of the Virtual Device Service is implemented based on Device SDK GO , and uses ql (an embedded SQL database engine) to simulate virtual resources.","title":"Overview"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#introduction","text":"For information on the virtual device service see virtual device under the Microservices tab.","title":"Introduction"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#working-with-the-virtual-device-service","text":"","title":"Working with the Virtual Device Service"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#running-the-virtual-device-service-container","text":"The virtual device service depends on the EdgeX core services. By default, the virtual device service is part of the EdgeX community provided Docker Compose files. If you use one of the community-provided Compose files , you can pull and run EdgeX inclusive of the virtual device service without having to make any changes.","title":"Running the Virtual Device Service Container"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#running-the-virtual-device-service-natively-in-development-mode","text":"If you're going to download the source code and run the virtual device service in development mode, make sure that the EdgeX core service containers are up before starting the virtual device service. See how to work with EdgeX in a hybrid environment in order to run the virtual device service outside of containers. This same file will instruct you on how to get and run the virtual device service code .","title":"Running the Virtual Device Service Natively (in development mode)"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#get-command-example","text":"The virtual device service is configured to send simulated data to core data every few seconds (from 10-30 seconds depending on device - see the device configuration file for AutoEvent details). 
You can exercise the GET request on the command service to see the generated value produced by any of the virtual device's simulated devices. Use the curl command below to exercise the virtual device service API (via core command service). curl -X GET localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Warning The example above assumes your core command service is available on localhost at the default service port of 59882. Also, you must replace your device name and command name in the example above with your virtual device service's identifiers. If you are not sure of the identifiers to use, query the command service for the full list of commands and devices at http://localhost:59882/api/v2/device/all . The virtual device should respond (via the core command service) with event/reading JSON similar to that below. { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"event\" : { \"apiVersion\" : \"v2\" , \"id\" : \"3beb5b83-d923-4c8a-b949-c1708b6611c1\" , \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"sourceName\" : \"Int8\" , \"origin\" : 1626227770833093400 , \"readings\" : [ { \"id\" : \"baf42bc7-307a-4647-8876-4e84759fd2ba\" , \"origin\" : 1626227770833093400 , \"deviceName\" : \"Random-Integer-Device\" , \"resourceName\" : \"Int8\" , \"profileName\" : \"Random-Integer-Device\" , \"valueType\" : \"Int8\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"value\" : \"-5\" } ] } }","title":"GET command example"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#put-command-example-assign-a-value-to-a-resource","text":"The virtual devices managed by the virtual device service can also be actuated. The virtual device can be told to enable or disable random number generation. When disabled, the virtual device service can be told what value to respond with for all GET operations. When setting the fixed value, the value must be valid for the data type of the virtual device. 
For example, the minimum value of Int8 cannot be less than -128 and the maximum value cannot be greater than 127. Below is an example actuation of one of the virtual devices. In this example, it sets the fixed GET return value to 123 and turns off random generation. curl -X PUT -d '{\"Int8\": \"123\", \"EnableRandomization_Int8\": \"false\"}' localhost:59882/api/v2/device/name/Random-Integer-Device/Int8 Note The value of the resource's EnableRandomization property is simultaneously updated to false when sending a PUT command to assign a specified value to the resource. Therefore, setting EnableRandomization_Int8 to false is not actually required in the call above. Return the virtual device to randomly generating numbers with another PUT call. curl -X PUT -d '{\"EnableRandomization_Int8\": \"true\"}' localhost:59882/api/v2/device/name/Random-Integer-Device/Int8","title":"PUT command example - Assign a value to a resource"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#reference","text":"","title":"Reference"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#architectural-diagram","text":"","title":"Architectural Diagram"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#sequence-diagram","text":"","title":"Sequence Diagram"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#virtual-resource-table-schema","text":"Column Type DEVICE_NAME STRING COMMAND_NAME STRING DEVICE_RESOURCE_NAME STRING ENABLE_RANDOMIZATION BOOL DATA_TYPE STRING VALUE STRING","title":"Virtual Resource Table Schema"},{"location":"general/ContainerNames/","text":"EdgeX Container Names The following table maps the default EdgeX Docker image names to the Docker container names, network hostnames, and Docker Compose service names. EdgeX 2.0 For EdgeX 2.0 the EdgeX docker image names have been simplified and made consistent across all EdgeX services. 
Core Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/core-data edgex-core-data edgex-core-data data edgexfoundry/core-metadata edgex-core-metadata edgex-core-metadata metadata edgexfoundry/core-command edgex-core-command edgex-core-command command Supporting Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/support-notifications edgex-support-notifications edgex-support-notifications notifications edgexfoundry/support-scheduler edgex-support-scheduler edgex-support-scheduler scheduler Application & Analytics Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/app-service-configurable edgex-app-rules-engine edgex-app-rules-engine app-service-rules edgexfoundry/app-service-configurable edgex-app-http-export edgex-app-http-export app-service-http-export edgexfoundry/app-service-configurable edgex-app-mqtt-export edgex-app-mqtt-export app-service-mqtt-export emqx/kuiper edgex-kuiper edgex-kuiper rulesengine Device Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/device-virtual edgex-device-virtual edgex-device-virtual device-virtual edgexfoundry/device-mqtt edgex-device-mqtt edgex-device-mqtt device-mqtt edgexfoundry/device-rest edgex-device-rest edgex-device-rest device-rest edgexfoundry/device-modbus edgex-device-modbus edgex-device-modbus device-modbus edgexfoundry/device-snmp edgex-device-snmp edgex-device-snmp device-snmp edgexfoundry/device-bacnet edgex-device-bacnet edgex-device-bacnet device-bacnet edgexfoundry/device-camera edgex-device-camera edgex-device-camera device-camera edgexfoundry/device-grove edgex-device-grove edgex-device-grove device-grove edgexfoundry/device-coap edgex-device-coap edgex-device-coap device-coap Security Docker image name Docker container name Docker network hostname Docker Compose service name vault edgex-vault 
edgex-vault vault postgres edgex-kong-db edgex-kong-db kong-db kong edgex-kong edgex-kong kong edgexfoundry/security-proxy-setup edgex-security-proxy-setup edgex-security-proxy-setup proxy-setup edgexfoundry/security-secretstore-setup edgex-security-secretstore-setup edgex-security-secretstore-setup secretstore-setup edgexfoundry/security-bootstrapper edgex-security-bootstrapper edgex-security-bootstrapper security-bootstrapper Miscellaneous Docker image name Docker container name Docker network hostname Docker Compose service name consul edgex-core-consul edgex-core-consul consul redis edgex-redis edgex-redis database edgexfoundry/sys-mgmt-agent edgex-sys-mgmt-agent edgex-sys-mgmt-agent system","title":"EdgeX Container Names"},{"location":"general/ContainerNames/#edgex-container-names","text":"The following table maps the default EdgeX Docker image names to the Docker container names, network hostnames, and Docker Compose service names. EdgeX 2.0 For EdgeX 2.0 the EdgeX docker image names have been simplified and made consistent across all EdgeX services. 
Core Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/core-data edgex-core-data edgex-core-data data edgexfoundry/core-metadata edgex-core-metadata edgex-core-metadata metadata edgexfoundry/core-command edgex-core-command edgex-core-command command Supporting Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/support-notifications edgex-support-notifications edgex-support-notifications notifications edgexfoundry/support-scheduler edgex-support-scheduler edgex-support-scheduler scheduler Application & Analytics Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/app-service-configurable edgex-app-rules-engine edgex-app-rules-engine app-service-rules edgexfoundry/app-service-configurable edgex-app-http-export edgex-app-http-export app-service-http-export edgexfoundry/app-service-configurable edgex-app-mqtt-export edgex-app-mqtt-export app-service-mqtt-export emqx/kuiper edgex-kuiper edgex-kuiper rulesengine Device Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/device-virtual edgex-device-virtual edgex-device-virtual device-virtual edgexfoundry/device-mqtt edgex-device-mqtt edgex-device-mqtt device-mqtt edgexfoundry/device-rest edgex-device-rest edgex-device-rest device-rest edgexfoundry/device-modbus edgex-device-modbus edgex-device-modbus device-modbus edgexfoundry/device-snmp edgex-device-snmp edgex-device-snmp device-snmp edgexfoundry/device-bacnet edgex-device-bacnet edgex-device-bacnet device-bacnet edgexfoundry/device-camera edgex-device-camera edgex-device-camera device-camera edgexfoundry/device-grove edgex-device-grove edgex-device-grove device-grove edgexfoundry/device-coap edgex-device-coap edgex-device-coap device-coap Security Docker image name Docker container name Docker network hostname Docker Compose service name vault edgex-vault 
edgex-vault vault postgres edgex-kong-db edgex-kong-db kong-db kong edgex-kong edgex-kong kong edgexfoundry/security-proxy-setup edgex-security-proxy-setup edgex-security-proxy-setup proxy-setup edgexfoundry/security-secretstore-setup edgex-security-secretstore-setup edgex-security-secretstore-setup secretstore-setup edgexfoundry/security-bootstrapper edgex-security-bootstrapper edgex-security-bootstrapper security-bootstrapper Miscellaneous Docker image name Docker container name Docker network hostname Docker Compose service name consul edgex-core-consul edgex-core-consul consul redis edgex-redis edgex-redis database edgexfoundry/sys-mgmt-agent edgex-sys-mgmt-agent edgex-sys-mgmt-agent system","title":"EdgeX Container Names"},{"location":"general/Definitions/","text":"Definitions The following glossary provides terms used in EdgeX Foundry. The definitions are based on how EdgeX and its community use the terms versus any strict technical or industry definition. Actuate To cause a machine or device to operate. In EdgeX terms, to command a device or sensor under management of EdgeX to do something (example: stop a motor) or to reconfigure itself (example: set a thermostat's cooling point). Brownfield and Greenfield Brownfield refers to older legacy equipment (nodes, devices, sensors) in an edge/IoT deployment, which typically uses older protocols. Greenfield refers to, typically, new equipment with modern protocols. CBOR An acronym for \"concise binary object representation.\" A binary data serialization format used by EdgeX to transport binary sensed data (like an image). The user can also choose to send all data via CBOR for efficiency purposes, but at the expense of having EdgeX convert the CBOR into another format whenever the data needs to be understood and inspected or to persist the data. Containerized EdgeX micro services and infrastructure (i.e. databases, registry, etc.) 
are built as executable programs, put into Docker images, and made available via Docker Hub (and Nexus repository for nightly builds). A service (or infrastructure element) that is available in Docker Hub (or Nexus) is said to be containerized. Docker images can be quickly downloaded and new Docker containers created from the images. Contributor/Developer If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort. Created time stamp The Created time stamp is the time the data was created in the database and is unchangeable. The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data was sent to EdgeX Foundry and the database. Usually, the Origin and Created time stamps are the same, or very close to being the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different. If persistence is disabled in core-data, the time stamp will default to 0. Device In EdgeX parlance, \"device\" is used to refer to a sensor, actuator, or IoT \"thing\". A sensor generally collects information from the physical world - like a temperature or vibration sensor. Actuators are machines that can be told to do something. Actuators move or otherwise control a mechanism or system - like a valve on a pump. While there may be some technical differences, for the purposes of EdgeX documentation, device will refer to a sensor, actuator or \"thing\". 
Edge Analytics The terms edge or local analytics (the terms are used interchangeably and have the same meaning in this context), for the purposes of edge computing (and EdgeX), refer to an \u201canalytics\u201d service that: - Receives and interprets the EdgeX sensor data to some degree; some analytics services are more sophisticated and able to provide more insights than others - Makes determinations on what actions and actuations need to occur based on the insights it has achieved, thereby driving actuation requests to EdgeX associated devices or other services (like notifications) The analytics service could be some simple logic built into an app service, a rules engine package, or an agent of some artificial intelligence/machine learning system. From an EdgeX perspective, actionable intelligence generation is all the same. From an EdgeX perspective, edge analytics = seeing the edge data and being able to make requests to act on what is seen. While EdgeX provides a rules engine service as its reference implementation of local analytics, app services and its data preparation capability allow sensor data to be streamed to any analytics package. Because of EdgeX\u2019s micro service architecture and distributed nature, the analytics service would not necessarily have to run local to the devices / sensors. In other words, it would not have to run at the edge. App services could deliver the edge data to analytics living in the cloud. However, in these scenarios, the insight intelligence would not be considered local or edge in context. Because of latency concerns, data security and privacy needs, intermittent connectivity of edge systems, and other reasons, it is often vital for edge platforms to retain an analytic capability at the edge or local. Gateway An IoT gateway is a compute platform at the farthest ends of an edge or IoT network. 
It is the host or \u201cbox\u201d to which physical sensors and devices connect and that is, in turn, connected to the networks (wired or wirelessly) of the information technology realm. IoT or edge gateways are compute platforms that connect \u201cthings\u201d (sensors and devices) to IT networks and systems. Micro service In a micro service architecture, each component has its own process. This is in contrast to a monolithic architecture in which all components of the application run in the same process. Benefits of micro service architectures include: - Allow any one service to be replaced and upgraded more easily - Allow services to be programmed using different programming languages and underlying technical solutions (use the best technology for each specific service) - Ex: services written in C can communicate and work with services written in Go - This allows organizations building solutions to maximize available developer resources and some legacy code - Allow services to be distributed across host compute platforms - allowing better utilization of available compute resources - Allow for more scalable solutions by adding copies of services when needed Origin time stamp The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data is sent to EdgeX Foundry and the database. The Created time stamp is the time the data was created in the database. Usually, the Origin and Created time stamps are the same or very close to the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different. Reference Implementation Default and example implementation(s) offered by the EdgeX community. Other implementations may be offered by 3rd parties or for specialization. Resource A piece of information or data available from a sensor or \"thing\". 
For example, a thermostat would have temperature and humidity resources. A resource has a name (ResourceName) to identify it (\"temperature\" or \"humidity\" in this example) and a value (the sensed data - like 72 degrees). A resource may also have additional properties or attributes associated with it. The data type of the value (e.g., integer, float, string, etc.) would be an example of a resource property. Rules Engine Rules engines are important to the IoT edge system. A rules engine is a software system that is connected to a collection of data (either database or data stream). The rules engine examines various elements of the data, monitors the data, and then triggers some action based on the results of that monitoring. A rules engine is a collection of \"If-Then\" conditional statements. The \"If\" informs the rules engine what data to look at and what ranges or values of data must match in order to trigger the \"Then\" part of the statement, which then informs the rules engine what action to take or what external resource to call on, when the data is a match to the \"If\" statement. Most rules engines can be dynamically programmed, meaning that new \"If-Then\" statements, or rules, can be provided while the engine is running. The rules are often defined by some type of rule language with simple syntax to enable non-Developers to provide the new rules. Rules engines are one of the simplest forms of \"edge analytics\" provided in IoT systems. Rules engines enable data picked up by IoT sensors to be monitored and acted upon (actuated). Typically, the actuation is accomplished on another IoT device or sensor. For example, a temperature sensor in an equipment enclosure may be monitored by a rules engine to detect when the temperature is getting too warm (or too cold) for safe or optimum operation of the equipment. The rules engine, upon detecting temperatures outside of the acceptable range, shuts off the equipment in the enclosure. 
Software Development Kit In EdgeX, a software development kit (or SDK) is a library or module to be incorporated into a new micro service. It provides a lot of the boilerplate code and scaffolding associated with the type of service being created. The SDK allows the developer to focus on the details of the service functionality and not have to worry about the mundane tasks associated with EdgeX services. South and North Side South Side: All IoT objects, within the physical realm, and the edge of the network that communicates directly with those devices, sensors, actuators, and other IoT objects, and collects the data from them, is known collectively as the \"south side.\" North Side: The cloud (or enterprise system) where data is collected, stored, aggregated, analyzed, and turned into information, and the part of the network that communicates with the cloud, is referred to as the \"north side\" of the network. EdgeX enables data to be sent \"north,\" \"south,\" or laterally as needed and as directed. \"Snappy\" / Ubuntu Core & Snaps A Linux-based Operating System provided by Ubuntu - formally called Ubuntu Core but often referred to as \"Snappy\". The packages are called 'snaps' and the tool for using them is 'snapd'; they work for phone, cloud, internet of things, and desktop computers. The \"Snap\" packages are self-contained and have no dependency on external stores. \"Snaps\" can be used to create command line tools, background services, and desktop applications. User If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\".","title":"Definitions"},{"location":"general/Definitions/#definitions","text":"The following glossary provides terms used in EdgeX Foundry. 
The definitions are based on how EdgeX and its community use the terms versus any strict technical or industry definition.","title":"Definitions"},{"location":"general/Definitions/#actuate","text":"To cause a machine or device to operate. In EdgeX terms, to command a device or sensor under management of EdgeX to do something (example: stop a motor) or to reconfigure itself (example: set a thermostat's cooling point).","title":"Actuate"},{"location":"general/Definitions/#brownfield-and-greenfield","text":"Brownfield refers to older legacy equipment (nodes, devices, sensors) in an edge/IoT deployment, which typically uses older protocols. Greenfield refers to, typically, new equipment with modern protocols.","title":"Brownfield and Greenfield"},{"location":"general/Definitions/#cbor","text":"An acronym for \"concise binary object representation.\" A binary data serialization format used by EdgeX to transport binary sensed data (like an image). The user can also choose to send all data via CBOR for efficiency purposes, but at the expense of having EdgeX convert the CBOR into another format whenever the data needs to be understood and inspected or to persist the data.","title":"CBOR"},{"location":"general/Definitions/#containerized","text":"EdgeX micro services and infrastructure (i.e. databases, registry, etc.) are built as executable programs, put into Docker images, and made available via Docker Hub (and Nexus repository for nightly builds). A service (or infrastructure element) that is available in Docker Hub (or Nexus) is said to be containerized. Docker images can be quickly downloaded and new Docker containers created from the images.","title":"Containerized"},{"location":"general/Definitions/#contributordeveloper","text":"If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". 
\"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort.","title":"Contributor/Developer"},{"location":"general/Definitions/#created-time-stamp","text":"The Created time stamp is the time the data was created in the database and is unchangeable. The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data was sent to EdgeX Foundry and the database. Usually, the Origin and Created time stamps are the same, or very close to being the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different. If persistence is disable in core-data, the time stamp will default to 0.","title":"Created time stamp"},{"location":"general/Definitions/#device","text":"In EdgeX parlance, \"device\" is used to refer to a sensor, actuator, or IoT \"thing\". A sensor generally collects information from the physical world - like a temperature or vibration sensor. Actuators are machines that can be told to do something. Actuators move or otherwise control a mechanism or system - like a value on a pump. 
While there may be some technical differences, for the purposes of EdgeX documentation, device will refer to a sensor, actuator or \"thing\".","title":"Device"},{"location":"general/Definitions/#edge-analytics","text":"The terms edge or local analytics (the terms are used interchangeably and have the same meaning in this context), for the purposes of edge computing (and EdgeX), refer to an \u201canalytics\u201d service that: - Receives and interprets the EdgeX sensor data to some degree; some analytics services are more sophisticated and able to provide more insights than others - Makes determinations on what actions and actuations need to occur based on the insights it has achieved, thereby driving actuation requests to EdgeX associated devices or other services (like notifications) The analytics service could be some simple logic built into an app service, a rules engine package, or an agent of some artificial intelligence/machine learning system. From an EdgeX perspective, actionable intelligence generation is all the same. From an EdgeX perspective, edge analytics = seeing the edge data and being able to make requests to act on what is seen. While EdgeX provides a rules engine service as its reference implementation of local analytics, app services and its data preparation capability allow sensor data to be streamed to any analytics package. Because of EdgeX\u2019s micro service architecture and distributed nature, the analytics service would not necessarily have to run local to the devices / sensors. In other words, it would not have to run at the edge. App services could deliver the edge data to analytics living in the cloud. However, in these scenarios, the insight intelligence would not be considered local or edge in context. 
Because of latency concerns, data security and privacy needs, intermittent connectivity of edge systems, and other reasons, it is often vital for edge platforms to retain an analytic capability at the edge or local.","title":"Edge Analytics"},{"location":"general/Definitions/#gateway","text":"An IoT gateway is a compute platform at the farthest ends of an edge or IoT network. It is the host or \u201cbox\u201d to which physical sensors and devices connect and that is, in turn, connected to the networks (wired or wirelessly) of the information technology realm. IoT or edge gateways are compute platforms that connect \u201cthings\u201d (sensors and devices) to IT networks and systems.","title":"Gateway"},{"location":"general/Definitions/#micro-service","text":"In a micro service architecture, each component has its own process. This is in contrast to a monolithic architecture in which all components of the application run in the same process. Benefits of micro service architectures include: - Allow any one service to be replaced and upgraded more easily - Allow services to be programmed using different programming languages and underlying technical solutions (use the best technology for each specific service) - Ex: services written in C can communicate and work with services written in Go - This allows organizations building solutions to maximize available developer resources and some legacy code - Allow services to be distributed across host compute platforms - allowing better utilization of available compute resources - Allow for more scalable solutions by adding copies of services when needed","title":"Micro service"},{"location":"general/Definitions/#origin-time-stamp","text":"The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data is sent to EdgeX Foundry and the database. The Created time stamp is the time the data was created in the database. 
Usually, the Origin and Created time stamps are the same or very close to the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different.","title":"Origin time stamp"},{"location":"general/Definitions/#reference-implementation","text":"Default and example implementation(s) offered by the EdgeX community. Other implementations may be offered by 3rd parties or for specialization.","title":"Reference Implementation"},{"location":"general/Definitions/#resource","text":"A piece of information or data available from a sensor or \"thing\". For example, a thermostat would have temperature and humidity resources. A resource has a name (ResourceName) to identify it (\"temperature\" or \"humidity\" in this example) and a value (the sensed data - like 72 degrees). A resource may also have additional properties or attributes associated with it. The data type of the value (e.g., integer, float, string, etc.) would be an example of a resource property.","title":"Resource"},{"location":"general/Definitions/#rules-engine","text":"Rules engines are important to the IoT edge system. A rules engine is a software system that is connected to a collection of data (either database or data stream). The rules engine examines various elements of the data, monitors the data, and then triggers some action based on the results of that monitoring. A rules engine is a collection of \"If-Then\" conditional statements. The \"If\" informs the rules engine what data to look at and what ranges or values of data must match in order to trigger the \"Then\" part of the statement, which then informs the rules engine what action to take or what external resource to call on, when the data is a match to the \"If\" statement. Most rules engines can be dynamically programmed, meaning that new \"If-Then\" statements, or rules, can be provided while the engine is running. 
The rules are often defined by some type of rule language with simple syntax to enable non-Developers to provide the new rules. Rules engines are one of the simplest forms of \"edge analytics\" provided in IoT systems. Rules engines enable data picked up by IoT sensors to be monitored and acted upon (actuated). Typically, the actuation is accomplished on another IoT device or sensor. For example, a temperature sensor in an equipment enclosure may be monitored by a rules engine to detect when the temperature is getting too warm (or too cold) for safe or optimum operation of the equipment. The rules engine, upon detecting temperatures outside of the acceptable range, shuts off the equipment in the enclosure.","title":"Rules Engine"},{"location":"general/Definitions/#software-development-kit","text":"In EdgeX, a software development kit (or SDK) is a library or module to be incorporated into a new micro service. It provides a lot of the boilerplate code and scaffolding associated with the type of service being created. The SDK allows the developer to focus on the details of the service functionality and not have to worry about the mundane tasks associated with EdgeX services.","title":"Software Development Kit"},{"location":"general/Definitions/#south-and-north-side","text":"South Side: All IoT objects, within the physical realm, and the edge of the network that communicates directly with those devices, sensors, actuators, and other IoT objects, and collects the data from them, is known collectively as the \"south side.\" North Side: The cloud (or enterprise system) where data is collected, stored, aggregated, analyzed, and turned into information, and the part of the network that communicates with the cloud, is referred to as the \"north side\" of the network. 
EdgeX enables data to be sent \"north,\" \"south,\" or laterally as needed and as directed.","title":"South and North Side"},{"location":"general/Definitions/#snappy-ubuntu-core-snaps","text":"A Linux-based Operating System provided by Ubuntu - formally called Ubuntu Core but often referred to as \"Snappy\". The packages are called 'snaps' and the tool for using them is 'snapd'; they work for phone, cloud, internet of things, and desktop computers. The \"Snap\" packages are self-contained and have no dependency on external stores. \"Snaps\" can be used to create command line tools, background services, and desktop applications.","title":"\"Snappy\" / Ubuntu Core & Snaps"},{"location":"general/Definitions/#user","text":"If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\".","title":"User"},{"location":"general/PlatformRequirements/","text":"Platform Requirements EdgeX Foundry is an operating system (OS)-agnostic and hardware (HW)-agnostic IoT edge platform. At this time the following platform minimums are recommended: Memory Memory: minimum of 1 GB When considering memory for your EdgeX platform consider your use of database - Redis is the current default. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory. Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see list below). Those strategies have continued to evolve . When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual). 
Antirez Redis RAM Ramifications Redis IO Memory Optimization Storage Hard drive space: minimum of 3 GB of space to run the EdgeX Foundry containers, but you may want more depending on how long sensor and device data is to be retained. Approximately 32GB of storage is minimally recommended to start. Operating Systems EdgeX Foundry has been run successfully on many systems, including, but not limited to, the following systems: Windows (ver 7 - 10) Ubuntu Desktop (ver 14-20) Ubuntu Server (ver 14-20) Ubuntu Core (ver 16-18) Mac OS X 10 Info EdgeX is agnostic with regards to hardware (x86 and ARM), but only releases artifacts for x86 and ARM 64 systems. EdgeX has been successfully run on ARM 32 platforms but has required users to build their own executable from source. EdgeX does not officially support ARM 32.","title":"Platform Requirements"},{"location":"general/PlatformRequirements/#platform-requirements","text":"EdgeX Foundry is an operating system (OS)-agnostic and hardware (HW)-agnostic IoT edge platform. At this time the following platform minimums are recommended: Memory Memory: minimum of 1 GB When considering memory for your EdgeX platform consider your use of database - Redis is the current default. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory. Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see list below). Those strategies have continued to evolve . When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual). 
Antirez Redis RAM Ramifications Redis IO Memory Optimization Storage Hard drive space: minimum of 3 GB of space to run the EdgeX Foundry containers, but you may want more depending on how long sensor and device data is to be retained. Approximately 32 GB of storage is minimally recommended to start. Operating Systems EdgeX Foundry has been run successfully on many systems, including, but not limited to, the following systems: Windows (ver 7 - 10) Ubuntu Desktop (ver 14-20) Ubuntu Server (ver 14-20) Ubuntu Core (ver 16-18) Mac OS X 10 Info EdgeX is agnostic with regard to hardware (x86 and ARM), but only releases artifacts for x86 and ARM 64 systems. EdgeX has been successfully run on ARM 32 platforms but has required users to build their own executable from source. EdgeX does not officially support ARM 32.","title":"Platform Requirements"},{"location":"general/ServiceConfiguration/","text":"Service Configuration Each EdgeX micro service requires configuration (i.e. - a repository of initialization and operating values). The configuration is initially provided by a TOML file but a service can utilize the centralized configuration management provided by EdgeX for its configuration. See the Configuration and Registry documentation for more details about initialization of services and the use of the configuration service. Please refer to the EdgeX Foundry architectural decision record for details (and design decisions) behind the configuration in EdgeX. Please refer to the general Common Configuration documentation for configuration properties common to all services. Find service specific configuration references in the tabs below. EdgeX 2.0 For EdgeX 2.0 the Service configuration section has been standardized across all EdgeX services. 
Core Service Name Configuration Reference core-data Core Data Configuration core-metadata Core Metadata Configuration core-command Core Command Configuration Supporting Service Name Configuration Reference support-notifications Support Notifications Configuration support-scheduler Support Scheduler Configuration Application & Analytics Services Name Configuration Reference app-service General Application Service Configuration app-service-configurable Configurable Application Service Configuration eKuiper rules engine/eKuiper Basic eKuiper Configuration Device Services Name Configuration Reference device-service General Device Service Configuration device-virtual Virtual Device Service Configuration Security Services Name Configuration Reference API Gateway Kong Configuration Add-on Services Configuring Add-on Service System Management Services Name Configuration Reference system management System Management Agent Configuration","title":"Service Configuration"},{"location":"general/ServiceConfiguration/#service-configuration","text":"Each EdgeX micro service requires configuration (i.e. - a repository of initialization and operating values). The configuration is initially provided by a TOML file but a service can utilize the centralized configuration management provided by EdgeX for its configuration. See the Configuration and Registry documentation for more details about initialization of services and the use of the configuration service. Please refer to the EdgeX Foundry architectural decision record for details (and design decisions) behind the configuration in EdgeX. Please refer to the general Common Configuration documentation for configuration properties common to all services. Find service specific configuration references in the tabs below. EdgeX 2.0 For EdgeX 2.0 the Service configuration section has been standardized across all EdgeX services. 
Core Service Name Configuration Reference core-data Core Data Configuration core-metadata Core Metadata Configuration core-command Core Command Configuration Supporting Service Name Configuration Reference support-notifications Support Notifications Configuration support-scheduler Support Scheduler Configuration Application & Analytics Services Name Configuration Reference app-service General Application Service Configuration app-service-configurable Configurable Application Service Configuration eKuiper rules engine/eKuiper Basic eKuiper Configuration Device Services Name Configuration Reference device-service General Device Service Configuration device-virtual Virtual Device Service Configuration Security Services Name Configuration Reference API Gateway Kong Configuration Add-on Services Configuring Add-on Service System Management Services Name Configuration Reference system management System Management Agent Configuration","title":"Service Configuration"},{"location":"general/ServicePorts/","text":"Default Service Ports The following tables (organized by type of service) capture the default service ports. These default ports are also used in the EdgeX provided service routes defined in the Kong API Gateway for access control. 
Core Services Name Port Definition core-data 59880 ZMQ - to be deprecated in a future release 5563 core-metadata 59881 core-command 59882 Supporting Services Name Port Definition support-notifications 59860 support-scheduler 59861 Application & Analytics Services Name Port Definition app-sample 59700 app-service-rules 59701 app-push-to-core 59702 app-mqtt-export 59703 app-http-export 59704 app-functional-tests 59705 app-rfid-llrp-inventory 59711 rules engine/eKuiper 59720 Device Services Name Port Definition device-virtual 59900 device-modbus 59901 device-bacnet 59980 device-mqtt 59982 device-camera 59985 device-rest 59986 device-coap 59988 device-llrp 59989 device-grove 59992 device-snmp 59993 device-gpio 59994 Security Services Name Port Definition kong-db 5432 vault 8200 kong 8000 8100 8443 security-spire-server 59840 security-spiffe-token-provider 59841 Miscellaneous Services Name Port Definition Modbus simulator 1502 MQTT broker 1883 redis 6379 consul 8500 system management 58890","title":"Default Service Ports"},{"location":"general/ServicePorts/#default-service-ports","text":"The following tables (organized by type of service) capture the default service ports. These default ports are also used in the EdgeX provided service routes defined in the Kong API Gateway for access control. 
Core Services Name Port Definition core-data 59880 ZMQ - to be deprecated in a future release 5563 core-metadata 59881 core-command 59882 Supporting Services Name Port Definition support-notifications 59860 support-scheduler 59861 Application & Analytics Services Name Port Definition app-sample 59700 app-service-rules 59701 app-push-to-core 59702 app-mqtt-export 59703 app-http-export 59704 app-functional-tests 59705 app-rfid-llrp-inventory 59711 rules engine/eKuiper 59720 Device Services Name Port Definition device-virtual 59900 device-modbus 59901 device-bacnet 59980 device-mqtt 59982 device-camera 59985 device-rest 59986 device-coap 59988 device-llrp 59989 device-grove 59992 device-snmp 59993 device-gpio 59994 Security Services Name Port Definition kong-db 5432 vault 8200 kong 8000 8100 8443 security-spire-server 59840 security-spiffe-token-provider 59841 Miscellaneous Services Name Port Definition Modbus simulator 1502 MQTT broker 1883 redis 6379 consul 8500 system management 58890","title":"Default Service Ports"},{"location":"getting-started/","text":"Getting Started To get started you need to get EdgeX Foundry either as a User or as a Developer/Contributor. User If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". You will want to follow the Getting Started as a User guide which takes you through the process of deploying the latest EdgeX releases. For demo purposes and to run EdgeX on your machine in just a few minutes, please refer to the Quick Start guide. Developer and Contributor If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort. You will want to follow the Getting Started for Developers guide. 
Hybrid See Getting Started Hybrid if you are developing or working on a particular micro service, but want to run the other micro services via Docker Containers. When working on something like an analytics service (as a developer or contributor) you may not wish to download, build and run all the EdgeX code - you only want to work with the code of your service. Your new service may still need to communicate with other services while you test your new service. Unless you want to get and build all the services, developers will often get and run the containers for the other EdgeX micro services and run only their service natively in a development environment. The EdgeX community refers to this as \"Hybrid\" development. Device Service Developer As a developer, if you intend to connect IoT objects (device, sensor or other \"thing\") that are not currently connected to EdgeX Foundry, you may also want to obtain the Device Service Software Development Kit (DS SDK) and create new device services. The DS SDK creates all the scaffolding code for a new EdgeX Foundry device service; allowing you to focus on the details of interfacing with the device in its native protocol. See Getting Started with Device SDK for help on using the DS SDK to create a new device service. Learn more about Device Services and the Device Service SDK at Device Services . Application Service Developer As a developer, if you intend to get EdgeX sensor data to external systems (be that an enterprise application, on-prem server or Cloud platform like Azure IoT Hub, AWS IoT, Google Cloud IOT, etc.), you will likely want to obtain the Application Functions SDK (App Func SDK) and create new application services. The App Func SDK creates all the scaffolding code for a new EdgeX Foundry application service; allowing you to focus on the details of data transformation, filtering, and otherwise prepare the sensor data for the external endpoint. 
Learn more about Application Services and the Application Functions SDK at Application Services . Versioning Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX. Long Term Support Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.","title":"Getting Started"},{"location":"getting-started/#getting-started","text":"To get started you need to get EdgeX Foundry either as a User or as a Developer/Contributor.","title":"Getting Started"},{"location":"getting-started/#user","text":"If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". You will want to follow the Getting Started as a User guide which takes you through the process of deploying the latest EdgeX releases. For demo purposes and to run EdgeX on your machine in just a few minutes, please refer to the Quick Start guide.","title":"User"},{"location":"getting-started/#developer-and-contributor","text":"If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort. 
You will want to follow the Getting Started for Developers guide.","title":"Developer and Contributor"},{"location":"getting-started/#hybrid","text":"See Getting Started Hybrid if you are developing or working on a particular micro service, but want to run the other micro services via Docker Containers. When working on something like an analytics service (as a developer or contributor) you may not wish to download, build and run all the EdgeX code - you only want to work with the code of your service. Your new service may still need to communicate with other services while you test your new service. Unless you want to get and build all the services, developers will often get and run the containers for the other EdgeX micro services and run only their service natively in a development environment. The EdgeX community refers to this as \"Hybrid\" development.","title":"Hybrid"},{"location":"getting-started/#device-service-developer","text":"As a developer, if you intend to connect IoT objects (device, sensor or other \"thing\") that are not currently connected to EdgeX Foundry, you may also want to obtain the Device Service Software Development Kit (DS SDK) and create new device services. The DS SDK creates all the scaffolding code for a new EdgeX Foundry device service; allowing you to focus on the details of interfacing with the device in its native protocol. See Getting Started with Device SDK for help on using the DS SDK to create a new device service. Learn more about Device Services and the Device Service SDK at Device Services .","title":"Device Service Developer"},{"location":"getting-started/#application-service-developer","text":"As a developer, if you intend to get EdgeX sensor data to external systems (be that an enterprise application, on-prem server or Cloud platform like Azure IoT Hub, AWS IoT, Google Cloud IOT, etc.), you will likely want to obtain the Application Functions SDK (App Func SDK) and create new application services. 
The App Func SDK creates all the scaffolding code for a new EdgeX Foundry application service; allowing you to focus on the details of data transformation, filtering, and otherwise prepare the sensor data for the external endpoint. Learn more about Application Services and the Application Functions SDK at Application Services .","title":"Application Service Developer"},{"location":"getting-started/#versioning","text":"Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX.","title":"Versioning"},{"location":"getting-started/#long-term-support","text":"Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.","title":"Long Term Support"},{"location":"getting-started/ApplicationFunctionsSDK/","text":"Getting Started The Application Functions SDK The SDK is built around the idea of a \"Functions Pipeline\". A functions pipeline is a collection of various functions that process the data in the order that you've specified. The functions pipeline is executed by the specified trigger in the configuration.toml . The first function in the pipeline is called with the event that triggered the pipeline (ex. dtos.Event ). Each successive call in the pipeline is called with the return result of the previous function. 
Let's take a look at a simple example that creates a pipeline to filter particular device ids and subsequently transform the data to XML: package main import ( \"errors\" \"fmt\" \"os\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/transforms\" ) const ( serviceKey = \"app-simple-filter-xml\" ) func main () { // turn off secure mode for examples. Not recommended for production _ = os . Setenv ( \"EDGEX_SECURITY_SECRET_STORE\" , \"false\" ) // 1) First thing to do is to create an new instance of an EdgeX Application Service. service , ok := pkg . NewAppService ( serviceKey ) if ! ok { os . Exit ( - 1 ) } // Leverage the built in logging service in EdgeX lc := service . LoggingClient () // 2) shows how to access the application's specific configuration settings. deviceNames , err := service . GetAppSettingStrings ( \"DeviceNames\" ) if err != nil { lc . Error ( err . Error ()) os . Exit ( - 1 ) } lc . Info ( fmt . Sprintf ( \"Filtering for devices %v\" , deviceNames )) // 3) This is our pipeline configuration, the collection of functions to // execute every time an event is triggered. if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML ); err != nil { lc . Errorf ( \"SetFunctionsPipeline returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } // 4) Lastly, we'll go ahead and tell the SDK to \"start\" and begin listening for events // to trigger the pipeline. err = service . MakeItRun () if err != nil { lc . Errorf ( \"MakeItRun returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } // Do any required cleanup here os . Exit ( 0 ) } The above example is meant to merely demonstrate the structure of your application. Notice that the output of the last function is not available anywhere inside this application. 
You must provide a function in order to work with the data from the previous function. Let's go ahead and add the following function that prints the output to the console. func printXMLToConsole ( ctx interfaces . AppFunctionContext , data interface {}) ( bool , interface {}) { // Leverage the built in logging service in EdgeX lc := ctx . LoggingClient () if data == nil { return false , errors . New ( \"printXMLToConsole: No data received\" ) } xml , ok := data .( string ) if ! ok { return false , errors . New ( \"printXMLToConsole: Data received is not the expected 'string' type\" ) } println ( xml ) return true , nil } After placing the above function in your code, the next step is to modify the pipeline to call this function: if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML , printXMLToConsole //notice this is not a function call, but simply a function pointer. ); err != nil { ... } Set the Trigger type to http in res/configuration.toml [Trigger] Type = \"http\" Using PostMan or curl send the following JSON to localhost:/api/v2/trigger { \"requestId\" : \"82eb2e26-0f24-48ba-ae4c-de9dac3fb9bc\" , \"apiVersion\" : \"v2\" , \"event\" : { \"apiVersion\" : \"v2\" , \"deviceName\" : \"Random-Float-Device\" , \"profileName\" : \"Random-Float-Device\" , \"sourceName\" : \"Float32\" , \"origin\" : 1540855006456 , \"id\" : \"94eb2e26-0f24-5555-2222-de9dac3fb228\" , \"readings\" : [ { \"apiVersion\" : \"v2\" , \"resourceName\" : \"Float32\" , \"profileName\" : \"Random-Float-Device\" , \"deviceName\" : \"Random-Float-Device\" , \"value\" : \"76677\" , \"origin\" : 1540855006469 , \"ValueType\" : \"Float32\" , \"id\" : \"82eb2e36-0f24-48aa-ae4c-de9dac3fb920\" } ] } } After making the above modifications, you should now see data printing out to the console in XML when an event is triggered. 
Note You can find this complete example \" Simple Filter XML \" and more examples located in the examples section. Up until this point, the pipeline has been triggered by an event over HTTP and the data at the end of that pipeline lands in the last function specified. In the example, data ends up printed to the console. Perhaps we'd like to send the data back to where it came from. In the case of an HTTP trigger, this would be the HTTP response. In the case of EdgeX MessageBus, this could be a new topic to send the data back to the MessageBus for other applications that wish to receive it. To do this, simply call ctx.SetResponseData(data []byte) passing in the data you wish to \"respond\" with. In the above printXMLToConsole(...) function, replace println(xml) with ctx.SetResponseData([]byte(xml)) . You should now see the response in your postman window when testing the pipeline.","title":"Application Functions SDK"},{"location":"getting-started/ApplicationFunctionsSDK/#getting-started","text":"","title":"Getting Started"},{"location":"getting-started/ApplicationFunctionsSDK/#the-application-functions-sdk","text":"The SDK is built around the idea of a \"Functions Pipeline\". A functions pipeline is a collection of various functions that process the data in the order that you've specified. The functions pipeline is executed by the specified trigger in the configuration.toml . The first function in the pipeline is called with the event that triggered the pipeline (ex. dtos.Event ). Each successive call in the pipeline is called with the return result of the previous function. 
Let's take a look at a simple example that creates a pipeline to filter particular device ids and subsequently transform the data to XML: package main import ( \"errors\" \"fmt\" \"os\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces\" \"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/transforms\" ) const ( serviceKey = \"app-simple-filter-xml\" ) func main () { // turn off secure mode for examples. Not recommended for production _ = os . Setenv ( \"EDGEX_SECURITY_SECRET_STORE\" , \"false\" ) // 1) First thing to do is to create an new instance of an EdgeX Application Service. service , ok := pkg . NewAppService ( serviceKey ) if ! ok { os . Exit ( - 1 ) } // Leverage the built in logging service in EdgeX lc := service . LoggingClient () // 2) shows how to access the application's specific configuration settings. deviceNames , err := service . GetAppSettingStrings ( \"DeviceNames\" ) if err != nil { lc . Error ( err . Error ()) os . Exit ( - 1 ) } lc . Info ( fmt . Sprintf ( \"Filtering for devices %v\" , deviceNames )) // 3) This is our pipeline configuration, the collection of functions to // execute every time an event is triggered. if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML ); err != nil { lc . Errorf ( \"SetFunctionsPipeline returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } // 4) Lastly, we'll go ahead and tell the SDK to \"start\" and begin listening for events // to trigger the pipeline. err = service . MakeItRun () if err != nil { lc . Errorf ( \"MakeItRun returned error: %s\" , err . Error ()) os . Exit ( - 1 ) } // Do any required cleanup here os . Exit ( 0 ) } The above example is meant to merely demonstrate the structure of your application. Notice that the output of the last function is not available anywhere inside this application. 
You must provide a function in order to work with the data from the previous function. Let's go ahead and add the following function that prints the output to the console. func printXMLToConsole ( ctx interfaces . AppFunctionContext , data interface {}) ( bool , interface {}) { // Leverage the built in logging service in EdgeX lc := ctx . LoggingClient () if data == nil { return false , errors . New ( \"printXMLToConsole: No data received\" ) } xml , ok := data .( string ) if ! ok { return false , errors . New ( \"printXMLToConsole: Data received is not the expected 'string' type\" ) } println ( xml ) return true , nil } After placing the above function in your code, the next step is to modify the pipeline to call this function: if err := service . SetFunctionsPipeline ( transforms . NewFilterFor ( deviceNames ). FilterByDeviceName , transforms . NewConversion (). TransformToXML , printXMLToConsole //notice this is not a function call, but simply a function pointer. ); err != nil { ... } Set the Trigger type to http in res/configuration.toml [Trigger] Type = \"http\" Using PostMan or curl send the following JSON to localhost:/api/v2/trigger { \"requestId\" : \"82eb2e26-0f24-48ba-ae4c-de9dac3fb9bc\" , \"apiVersion\" : \"v2\" , \"event\" : { \"apiVersion\" : \"v2\" , \"deviceName\" : \"Random-Float-Device\" , \"profileName\" : \"Random-Float-Device\" , \"sourceName\" : \"Float32\" , \"origin\" : 1540855006456 , \"id\" : \"94eb2e26-0f24-5555-2222-de9dac3fb228\" , \"readings\" : [ { \"apiVersion\" : \"v2\" , \"resourceName\" : \"Float32\" , \"profileName\" : \"Random-Float-Device\" , \"deviceName\" : \"Random-Float-Device\" , \"value\" : \"76677\" , \"origin\" : 1540855006469 , \"ValueType\" : \"Float32\" , \"id\" : \"82eb2e36-0f24-48aa-ae4c-de9dac3fb920\" } ] } } After making the above modifications, you should now see data printing out to the console in XML when an event is triggered. 
Note You can find this complete example \" Simple Filter XML \" and more examples located in the examples section. Up until this point, the pipeline has been triggered by an event over HTTP and the data at the end of that pipeline lands in the last function specified. In the example, data ends up printed to the console. Perhaps we'd like to send the data back to where it came from. In the case of an HTTP trigger, this would be the HTTP response. In the case of EdgeX MessageBus, this could be a new topic to send the data back to the MessageBus for other applications that wish to receive it. To do this, simply call ctx.SetResponseData(data []byte) passing in the data you wish to \"respond\" with. In the above printXMLToConsole(...) function, replace println(xml) with ctx.SetResponseData([]byte(xml)) . You should now see the response in your postman window when testing the pipeline.","title":"The Application Functions SDK"},{"location":"getting-started/Ch-GettingStartedCDevelopers/","text":"Getting Started - C Developers Introduction These instructions are for C Developers and Contributors to get, run and otherwise work with C-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements . If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User ) What You Need For C Development Many of EdgeX device services are built in C. In the future, other services could be built in C. 
In addition to the hardware and software listed in the Developers guide, to build EdgeX C services, you will need the following: libmicrohttpd libcurl libyaml libcbor paho libuuid hiredis You can install these on Debian 11 (Bullseye) by running: sudo apt-get install libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev libcbor-dev libpaho-mqtt-dev uuid-dev libhiredis-dev Some of these supporting packages have dependencies of their own, which will be automatically installed when using package managers such as APT, DNF, etc. libpaho-mqtt-dev is not included in Ubuntu prior to Groovy (20.10). IOTech provides a package for Focal (20.04 LTS) which may be installed as follows: sudo curl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public -o /etc/apt/trusted.gpg.d/iotech-public.asc echo \"deb https://iotech.jfrog.io/iotech/debian-release $( lsb_release -cs ) main\" | sudo tee -a /etc/apt/sources.list.d/iotech.list sudo apt-get update sudo apt-get install libpaho-mqtt EdgeX 2.0 For EdgeX 2.0 the C SDK now supports MQTT and Redis implementations of the EdgeX MessageBus CMake is required to build the SDKs. Version 3 or better is required. 
You can install CMake on Debian by running: sudo apt-get install cmake Check that your C development environment includes the following: a version of GCC supporting C11 CMake version 3 or greater Development libraries and headers for: curl (version 7.56 or later) microhttpd (version 0.9) libyaml (version 0.1.6 or later) libcbor (version 0.5) libuuid (from util-linux v2.x) paho (version 1.3.x) hiredis (version 0.14) Next Steps To explore how to create and build EdgeX device services in C, head to the Device Services, C SDK guide.","title":"Getting Started - C Developers"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#getting-started-c-developers","text":"","title":"Getting Started - C Developers"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#introduction","text":"These instructions are for C Developers and Contributors to get, run and otherwise work with C-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements. If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User.","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#what-you-need-for-c-development","text":"Many of the EdgeX device services are built in C. In the future, other services could be built in C. In addition to the hardware and software listed in the Developers guide, to build EdgeX C services, you will need the following: libmicrohttpd libcurl libyaml libcbor paho libuuid hiredis You can install these on Debian 11 (Bullseye) by running: sudo apt-get install libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev libcbor-dev libpaho-mqtt-dev uuid-dev libhiredis-dev Some of these supporting packages have dependencies of their own, which will be automatically installed when using package managers such as APT, DNF, etc. 
libpaho-mqtt-dev is not included in Ubuntu prior to Groovy (20.10). IOTech provides a package for Focal (20.04 LTS) which may be installed as follows: sudo curl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public -o /etc/apt/trusted.gpg.d/iotech-public.asc echo \"deb https://iotech.jfrog.io/iotech/debian-release $( lsb_release -cs ) main\" | sudo tee -a /etc/apt/sources.list.d/iotech.list sudo apt-get update sudo apt-get install libpaho-mqtt EdgeX 2.0 For EdgeX 2.0 the C SDK now supports MQTT and Redis implementations of the EdgeX MessageBus CMake is required to build the SDKs. Version 3 or better is required. You can install CMake on Debian by running: sudo apt-get install cmake Check that your C development environment includes the following: a version of GCC supporting C11 CMake version 3 or greater Development libraries and headers for: curl (version 7.56 or later) microhttpd (version 0.9) libyaml (version 0.1.6 or later) libcbor (version 0.5) libuuid (from util-linux v2.x) paho (version 1.3.x) hiredis (version 0.14)","title":"What You Need For C Development"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#next-steps","text":"To explore how to create and build EdgeX device services in C, head to the Device Services, C SDK guide.","title":"Next Steps"},{"location":"getting-started/Ch-GettingStartedDevelopers/","text":"Getting Started as a Developer Introduction These instructions are for Developers and Contributors to get and run EdgeX Foundry. If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User. EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. EdgeX consists of a collection of reference implementation services and SDK tools. The micro services and SDKs are written in Go or C. 
These documentation pages provide a developer with the information and instructions to get and run EdgeX Foundry in development mode - that is running natively outside of containers and with the intent of adding to or changing the existing code base. What You Need Hardware EdgeX Foundry is an operating system (OS) and hardware (HW)-agnostic edge software platform. See the reference page for platform requirements . These provide guidance on a minimal platform to run the EdgeX platform. However, as a developer, you may find that additional memory, disk space, and improved CPU are essential to building and debugging. Software Developers need to install the following software to get, run and develop EdgeX Foundry micro services: Git Use this free and open source version control (SVC) system to download (and upload) the EdgeX Foundry source code from the project's GitHub repositories. See https://git-scm.com/downloads for download and install instructions. Alternative tools (Easy Git for example) could be used, but this document assumes use of git and leaves how to use alternative SVC tools to the reader. Redis By default, EdgeX Foundry uses Redis (version 5 starting with the Geneva release) as the persistence mechanism for sensor data as well as metadata about the devices/sensors that are connected. See https://redis.io/ for download and installation instructions. MongoDB As an alternative, EdgeX Foundry allows use of MongoDB (version 4.2 as of Geneva) as the alternative persistence mechanism in place of Redis for sensor data as well as metadata about the connected devices/sensors. See https://www.mongodb.com/download-center?jmp=nav#community for download and installation instructions. Warning Use of MongoDB is deprecated with the Geneva release. EdgeX will remove MongoDB support in a future release. Developers should start to migrate to Redis in all development efforts targeting future EdgeX releases. 
ZeroMQ Several EdgeX Foundry services depend on ZeroMQ for communications by default. See the installation instructions for your OS. Linux/Unix The easiest way to get and install ZeroMQ on Linux is to use this setup script: https://gist.github.com/katopz/8b766a5cb0ca96c816658e9407e83d00 . Note The 0MQ install script above assumes bash is available on your system and the bash executable is in /usr/bin. Before running the script at the link, run which bash at your Linux terminal to ensure that bash is in /usr/bin. If not, change the first line of the script so that it points to the correct location of bash. MacOS For MacOS, use brew to install ZeroMQ. brew install zeromq Windows For directions on installing ZeroMQ on Windows, please see the Windows documentation: https://github.com/edgexfoundry/edgex-go/blob/master/ZMQWindows.md Docker (Optional) If you intend to create Docker images for your updated or newly created EdgeX services, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides educational information. Additional Programming Tools and Next Steps Depending on which part of EdgeX you work on, you need to install one or more programming languages (Go, C, etc.) and associated tooling. These tools are covered under the documentation specific to each type of development. Go (Golang) C Versioning Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX. Long Term Support Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. 
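The 0MQ note above (check where bash lives before running the setup script) takes only a couple of commands. A sketch, assuming the gist was saved locally as `setup-zeromq.sh` (a placeholder name, not the gist's real filename):

```shell
# Where does bash actually live? Compare this to the script's shebang line.
bash_path=$(command -v bash)
echo "bash is at: ${bash_path:-not installed}"

# Inspect the script's first line; if it is not "#!<that path>", edit it.
# (setup-zeromq.sh is a placeholder name for the downloaded gist script.)
# head -n1 setup-zeromq.sh
```

If the two paths differ, changing the script's first line to the path reported by `command -v bash` is all that is needed.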
The EdgeX community does not offer support on any non-LTS release outside of the latest release.","title":"Getting Started as a Developer"},{"location":"getting-started/Ch-GettingStartedDevelopers/#getting-started-as-a-developer","text":"","title":"Getting Started as a Developer"},{"location":"getting-started/Ch-GettingStartedDevelopers/#introduction","text":"These instructions are for Developers and Contributors to get and run EdgeX Foundry. If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User . EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. EdgeX consists of a collection of reference implementation services and SDK tools. The micro services and SDKs are written in Go or C. These documentation pages provide a developer with the information and instructions to get and run EdgeX Foundry in development mode - that is running natively outside of containers and with the intent of adding to or changing the existing code base.","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedDevelopers/#what-you-need","text":"","title":"What You Need"},{"location":"getting-started/Ch-GettingStartedDevelopers/#hardware","text":"EdgeX Foundry is an operating system (OS) and hardware (HW)-agnostic edge software platform. See the reference page for platform requirements . These provide guidance on a minimal platform to run the EdgeX platform. 
However, as a developer, you may find that additional memory, disk space, and a faster CPU are essential to building and debugging.","title":"Hardware"},{"location":"getting-started/Ch-GettingStartedDevelopers/#software","text":"Developers need to install the following software to get, run and develop EdgeX Foundry micro services:","title":"Software"},{"location":"getting-started/Ch-GettingStartedDevelopers/#git","text":"Use this free and open source version control system (VCS) to download (and upload) the EdgeX Foundry source code from the project's GitHub repositories. See https://git-scm.com/downloads for download and install instructions. Alternative tools (Easy Git for example) could be used, but this document assumes use of git and leaves how to use alternative VCS tools to the reader.","title":"Git"},{"location":"getting-started/Ch-GettingStartedDevelopers/#redis","text":"By default, EdgeX Foundry uses Redis (version 5 starting with the Geneva release) as the persistence mechanism for sensor data as well as metadata about the devices/sensors that are connected. See https://redis.io/ for download and installation instructions.","title":"Redis"},{"location":"getting-started/Ch-GettingStartedDevelopers/#mongodb","text":"As an alternative, EdgeX Foundry allows use of MongoDB (version 4.2 as of Geneva) as the persistence mechanism in place of Redis for sensor data as well as metadata about the connected devices/sensors. See https://www.mongodb.com/download-center?jmp=nav#community for download and installation instructions. Warning Use of MongoDB is deprecated with the Geneva release. EdgeX will remove MongoDB support in a future release. Developers should start to migrate to Redis in all development efforts targeting future EdgeX releases.","title":"MongoDB"},{"location":"getting-started/Ch-GettingStartedDevelopers/#zeromq","text":"Several EdgeX Foundry services depend on ZeroMQ for communications by default. See the installation instructions for your OS. 
Linux/Unix The easiest way to get and install ZeroMQ on Linux is to use this setup script: https://gist.github.com/katopz/8b766a5cb0ca96c816658e9407e83d00 . Note The 0MQ install script above assumes bash is available on your system and the bash executable is in /usr/bin. Before running the script at the link, run which bash at your Linux terminal to ensure that bash is in /usr/bin. If not, change the first line of the script so that it points to the correct location of bash. MacOS For MacOS, use brew to install ZeroMQ. brew install zeromq Windows For directions on installing ZeroMQ on Windows, please see the Windows documentation: https://github.com/edgexfoundry/edgex-go/blob/master/ZMQWindows.md","title":"ZeroMQ"},{"location":"getting-started/Ch-GettingStartedDevelopers/#docker-optional","text":"If you intend to create Docker images for your updated or newly created EdgeX services, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides educational information.","title":"Docker (Optional)"},{"location":"getting-started/Ch-GettingStartedDevelopers/#additional-programming-tools-and-next-steps","text":"Depending on which part of EdgeX you work on, you need to install one or more programming languages (Go, C, etc.) and associated tooling. These tools are covered under the documentation specific to each type of development. Go (Golang) C","title":"Additional Programming Tools and Next Steps"},{"location":"getting-started/Ch-GettingStartedDevelopers/#versioning","text":"Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. 
Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX.","title":"Versioning"},{"location":"getting-started/Ch-GettingStartedDevelopers/#long-term-support","text":"Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.","title":"Long Term Support"},{"location":"getting-started/Ch-GettingStartedDockerUsers/","text":"Getting Started using Docker Introduction These instructions are for users to get and run EdgeX Foundry using the latest stable Docker images. If you wish to get the latest builds of EdgeX Docker images (prior to releases), then see the EdgeX Nexus Repository guide. Get & Run EdgeX Foundry Install Docker & Docker Compose To run Dockerized EdgeX, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides educational information. The following short video is also very informative https://www.youtube.com/watch?time_continue=3&v=VhabrYF1nms Use Docker Compose to orchestrate the fetch (or pull), install, and start the EdgeX micro service containers. Also use Docker Compose to stop the micro service containers. See: https://docs.docker.com/compose/ to learn more about Docker Compose. You do not need to be an expert with Docker (or Docker Compose) to get and run EdgeX. This guide provides the steps to get EdgeX running in your environment. Some knowledge of Docker and Docker Compose is nice to have, but not required. Basic Docker and Docker Compose commands provided here enable you to run, update, and diagnose issues within EdgeX. 
Select an EdgeX Foundry Compose File After installing Docker and Docker Compose, you need an EdgeX Docker Compose file. EdgeX Foundry has over a dozen micro services, each deployed in its own Docker container. This file is a manifest of all the EdgeX Foundry micro services to run. The Docker Compose file provides details about how to run each of the services. Specifically, a Docker Compose file is a manifest file, which lists: The Docker container images that should be downloaded, The order in which the containers should be started, The parameters (such as ports) under which the containers should be run The EdgeX development team provides Docker Compose files for each release. Visit the project's GitHub and find the edgex-compose repository . This repository holds all of the EdgeX Docker Compose files for each of the EdgeX releases/versions. The Compose files for each release are found in separate branches. Click on the main button to see all the branches. The edgex-compose repository contains branches for each release. Select the release branch to locate the Docker Compose files for each release. Locate the branch containing the EdgeX Docker Compose file for the version of EdgeX you want to run. Note The main branch contains the Docker Compose files that use artifacts created from the latest code submitted by contributors (from the nightly builds). Most end users should avoid using these Docker Compose files. They are work-in-progress. Users should use the Docker Compose files for the latest version of EdgeX. In each edgex-compose branch, you will find several Docker Compose files (all with a .yml extension). The name of the file will suggest the type of EdgeX instance the Compose file will help set up. The table below provides a list of the Docker Compose filenames for the latest release (Ireland). 
Find the Docker Compose file that matches: your hardware (x86 or ARM) your desire to have security services on or off filename Docker Compose contents docker-compose-arm64.yml Specifies ARM 64 containers, uses Redis database for persistence, and includes security services docker-compose-no-secty-arm64.yml Specifies ARM 64 containers, uses Redis database for persistence, but does not include security services docker-compose-no-secty.yml Specifies x86 containers, uses Redis database for persistence, but does not include security services docker-compose.yml Specifies x86 containers, uses Redis database for persistence, and includes security services docker-compose-no-secty-with-ui-arm64.yml Same as docker-compose-no-secty-arm64.yml but also includes EdgeX user interface docker-compose-no-secty-with-ui.yml Same as docker-compose-no-secty.yml but also includes EdgeX user interface docker-compose-portainer.yml Specifies the Portainer user interface extension (to be used with the x86 or ARM EdgeX platform) Download an EdgeX Foundry Compose File Once you have selected the release branch of edgex-compose you want to use, download it using your favorite tool. The examples below use wget to fetch Docker Compose for the Ireland release with no security. x86 wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/docker-compose-no-secty.yml -O docker-compose.yml ARM wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/docker-compose-no-secty-arm64.yml -O docker-compose.yml Note The commands above fetch the Docker Compose to a file named 'docker-compose.yml' in the current directory. Docker Compose commands look for a file named 'docker-compose.yml' by default. You can use an alternate file name but then must specify that file name when issuing Docker Compose commands. See Compose reference documentation for help. 
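The table and wget commands above can be combined into a small script that picks the right no-security Compose file for the current machine. A sketch only; `compose_file` is a hypothetical helper and is not part of the edgex-compose tooling:

```shell
# Hypothetical helper: map a CPU architecture (as reported by `uname -m`)
# to the matching no-security Compose filename from the table above.
compose_file() {
  case "$1" in
    aarch64|arm64) echo "docker-compose-no-secty-arm64.yml" ;;
    *)             echo "docker-compose-no-secty.yml" ;;
  esac
}

file=$(compose_file "$(uname -m)")
echo "Fetching ${file} for this machine"
# wget "https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/${file}" -O docker-compose.yml
```

Swap in the `-with-ui` or secure variants from the table if those better match your needs.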
Generate a custom Docker Compose file The Docker Compose files in the ireland branch contain the standard set of EdgeX services configured to use Redis message bus and include only the Virtual and REST device services. If you need to have different device services running or use MQTT for the message bus, you need a modified version of one of the standard Docker Compose files. You could manually add the device services to one of the existing EdgeX Compose files or use the EdgeX Compose Builder tool to generate a new custom Compose file that contains the services you would like included. When you use Compose Builder, you don't have to worry about adding all the necessary ports, variables, etc. as the tool will generate the service elements in the file for you. The Compose Builder tool was added with the Hanoi release. You will find the Compose Builder tool in each of the release branches since Hanoi under the compose-builder folder of those branches. You will also find a compose-builder folder on the main branch for creating custom Compose files for the nightly builds. Do the following to use this tool to generate a custom Compose file: Clone the edgex-compose repository. git clone https://github.com/edgexfoundry/edgex-compose.git 2. Change directories to the clone and checkout the appropriate release branch. Checkout of the Ireland release branch is shown here. cd edgex-compose/ git checkout ireland 3. Change directories to the compose-builder folder and then use the make gen command to generate your custom compose file. The generated Docker Compose file is named docker-compose.yaml . Here are some examples: cd compose-builder/ make gen ds-mqtt mqtt-broker - Generates secure Compose file configured to use MQTT for the message bus, then adds the MQTT broker and the Device MQTT services. make gen no-secty ds-modbus - Generates non-secure compose file with just the Device Modbus device service. 
make gen no-secty arm64 ds-grove - Generates non-secure compose file for ARM64 with just the Device Grove device service. See the README document in the compose-builder directory for details on all the available options. The Compose Builder is different per release, so make sure to consult the README in the appropriate release branch. See Ireland's Compose Builder README for details on the latest release Compose Builder options for make gen . Note The generated Docker Compose file may require additional customizations for your specific needs, such as environment override(s) to set appropriate Host IP address, etc. Run EdgeX Foundry Now that you have the EdgeX Docker Compose file, you are ready to run EdgeX. Follow these steps to get the container images and start EdgeX! In a command terminal, change directories to the location of your docker-compose.yml. Run the following command in the terminal to pull (fetch) and then start the EdgeX containers. docker-compose up -d Info If you wish, you can fetch the images first and then run them. This allows you to make sure the EdgeX images you need are all available before trying to run. docker-compose pull docker-compose up -d Note The -d option indicates you want Docker Compose to run the EdgeX containers in detached mode - that is to run the containers in the background. Without -d, the containers will all start in the terminal and in order to use the terminal further you have to stop the containers. Verify EdgeX Foundry Running In the same terminal, run the process status command shown below to confirm that all the containers downloaded and started. docker-compose ps If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above. If you are using a custom Compose file, your containers list may vary. Also note that some \"setup\" containers are designed to start and then exit after configuring your EdgeX instance. 
Checking the Status of EdgeX Foundry In addition to the process status of the EdgeX containers, there are a number of other tools to check on the health and status of your EdgeX instance. EdgeX Foundry Container Logs Use the command below to see the log of any service. # see the logs of a service docker-compose logs -f [ compose-service-name ] # example - core data docker-compose logs -f data See EdgeX Container Names for a list of the EdgeX Docker Compose service names. A check of an EdgeX service log usually indicates if the service is running normally or has errors. When you are done reviewing the content of the log, press Control-C to stop the output to your terminal. Ping Check Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reachability of a network resource. EdgeX uses the same concept to check the availability or reachability of a micro service. After the EdgeX micro service containers are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that it is available. http://localhost:[service port]/api/v2/ping See EdgeX Default Service Ports for a list of the EdgeX default service ports. \"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues. Consul Registry Check EdgeX uses the open source Consul project as its registry service. All EdgeX micro services are expected to register with Consul as they start. Going to Consul's dashboard UI enables you to see which services are up. Find the Consul UI at http://localhost:8500/ui . EdgeX 2.0 Please note that as of EdgeX 2.0, Consul can be secured. When EdgeX is running in secure mode with secure Consul , you must provide Consul's access token to get to the dashboard UI referenced above. 
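The ping check described above is easy to script with curl. A sketch, not part of the EdgeX tooling: `ping_url` is a hypothetical helper, and port 59880 (core-data's default in EdgeX 2.x) is used only as an example; consult the EdgeX Default Service Ports reference for your release.

```shell
# Hypothetical helper: build the ping URL for a given service port.
ping_url() {
  echo "http://localhost:${1}/api/v2/ping"
}

# Example: core-data (default port 59880 in EdgeX 2.x; verify for your release)
url=$(ping_url 59880)
echo "$url"
# curl -s "$url"   # run against a live EdgeX instance; a response means the service is up
```

Looping over the default service ports with this helper gives a quick availability sweep of a running instance.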
See How to get Consul ACL token for details.","title":"Getting Started using Docker"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#getting-started-using-docker","text":"","title":"Getting Started using Docker"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#introduction","text":"These instructions are for users to get and run EdgeX Foundry using the latest stable Docker images. If you wish to get the latest builds of EdgeX Docker images (prior to releases), then see the EdgeX Nexus Repository guide.","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#get-run-edgex-foundry","text":"","title":"Get & Run EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#install-docker-docker-compose","text":"To run Dockerized EdgeX, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides educational information. The following short video is also very informative https://www.youtube.com/watch?time_continue=3&v=VhabrYF1nms Use Docker Compose to orchestrate the fetch (or pull), install, and start the EdgeX micro service containers. Also use Docker Compose to stop the micro service containers. See: https://docs.docker.com/compose/ to learn more about Docker Compose. You do not need to be an expert with Docker (or Docker Compose) to get and run EdgeX. This guide provides the steps to get EdgeX running in your environment. Some knowledge of Docker and Docker Compose is nice to have, but not required. Basic Docker and Docker Compose commands provided here enable you to run, update, and diagnose issues within EdgeX.","title":"Install Docker & Docker Compose"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#select-a-edgex-foundry-compose-file","text":"After installing Docker and Docker Compose, you need an EdgeX Docker Compose file. 
EdgeX Foundry has over a dozen micro services, each deployed in its own Docker container. This file is a manifest of all the EdgeX Foundry micro services to run. The Docker Compose file provides details about how to run each of the services. Specifically, a Docker Compose file is a manifest file, which lists: The Docker container images that should be downloaded, The order in which the containers should be started, The parameters (such as ports) under which the containers should be run The EdgeX development team provides Docker Compose files for each release. Visit the project's GitHub and find the edgex-compose repository . This repository holds all of the EdgeX Docker Compose files for each of the EdgeX releases/versions. The Compose files for each release are found in separate branches. Click on the main button to see all the branches. The edgex-compose repository contains branches for each release. Select the release branch to locate the Docker Compose files for each release. Locate the branch containing the EdgeX Docker Compose file for the version of EdgeX you want to run. Note The main branch contains the Docker Compose files that use artifacts created from the latest code submitted by contributors (from the nightly builds). Most end users should avoid using these Docker Compose files. They are work-in-progress. Users should use the Docker Compose files for the latest version of EdgeX. In each edgex-compose branch, you will find several Docker Compose files (all with a .yml extension). The name of the file will suggest the type of EdgeX instance the Compose file will help set up. The table below provides a list of the Docker Compose filenames for the latest release (Ireland). 
Find the Docker Compose file that matches: your hardware (x86 or ARM) your desire to have security services on or off filename Docker Compose contents docker-compose-arm64.yml Specifies ARM 64 containers, uses Redis database for persistence, and includes security services docker-compose-no-secty-arm64.yml Specifies ARM 64 containers, uses Redis database for persistence, but does not include security services docker-compose-no-secty.yml Specifies x86 containers, uses Redis database for persistence, but does not include security services docker-compose.yml Specifies x86 containers, uses Redis database for persistence, and includes security services docker-compose-no-secty-with-ui-arm64.yml Same as docker-compose-no-secty-arm64.yml but also includes EdgeX user interface docker-compose-no-secty-with-ui.yml Same as docker-compose-no-secty.yml but also includes EdgeX user interface docker-compose-portainer.yml Specifies the Portainer user interface extension (to be used with the x86 or ARM EdgeX platform)","title":"Select a EdgeX Foundry Compose File"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#download-a-edgex-foundry-compose-file","text":"Once you have selected the release branch of edgex-compose you want to use, download it using your favorite tool. The examples below use wget to fetch Docker Compose for the Ireland release with no security. x86 wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/docker-compose-no-secty.yml -O docker-compose.yml ARM wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/ireland/docker-compose-no-secty-arm64.yml -O docker-compose.yml Note The commands above fetch the Docker Compose to a file named 'docker-compose.yml' in the current directory. Docker Compose commands look for a file named 'docker-compose.yml' by default. You can use an alternate file name but then must specify that file name when issuing Docker Compose commands. 
See Compose reference documentation for help.","title":"Download a EdgeX Foundry Compose File"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#generate-a-custom-docker-compose-file","text":"The Docker Compose files in the ireland branch contain the standard set of EdgeX services configured to use Redis message bus and include only the Virtual and REST device services. If you need to have different device services running or use MQTT for the message bus, you need a modified version of one of the standard Docker Compose files. You could manually add the device services to one of the existing EdgeX Compose files or use the EdgeX Compose Builder tool to generate a new custom Compose file that contains the services you would like included. When you use Compose Builder, you don't have to worry about adding all the necessary ports, variables, etc. as the tool will generate the service elements in the file for you. The Compose Builder tool was added with the Hanoi release. You will find the Compose Builder tool in each of the release branches since Hanoi under the compose-builder folder of those branches. You will also find a compose-builder folder on the main branch for creating custom Compose files for the nightly builds. Do the following to use this tool to generate a custom Compose file: Clone the edgex-compose repository. git clone https://github.com/edgexfoundry/edgex-compose.git 2. Change directories to the clone and checkout the appropriate release branch. Checkout of the Ireland release branch is shown here. cd edgex-compose/ git checkout ireland 3. Change directories to the compose-builder folder and then use the make gen command to generate your custom compose file. The generated Docker Compose file is named docker-compose.yaml . Here are some examples: cd compose-builder/ make gen ds-mqtt mqtt-broker - Generates secure Compose file configured to use MQTT for the message bus, then adds the MQTT broker and the Device MQTT services. 
make gen no-secty ds-modbus - Generates non-secure compose file with just the Device Modbus device service. make gen no-secty arm64 ds-grove - Generates non-secure compose file for ARM64 with just the Device Grove device service. See the README document in the compose-builder directory for details on all the available options. The Compose Builder is different per release, so make sure to consult the README in the appropriate release branch. See Ireland's Compose Builder README for details on the latest release Compose Builder options for make gen . Note The generated Docker Compose file may require additional customizations for your specific needs, such as environment override(s) to set appropriate Host IP address, etc.","title":"Generate a custom Docker Compose file"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#run-edgex-foundry","text":"Now that you have the EdgeX Docker Compose file, you are ready to run EdgeX. Follow these steps to get the container images and start EdgeX! In a command terminal, change directories to the location of your docker-compose.yml. Run the following command in the terminal to pull (fetch) and then start the EdgeX containers. docker-compose up -d Info If you wish, you can fetch the images first and then run them. This allows you to make sure the EdgeX images you need are all available before trying to run. docker-compose pull docker-compose up -d Note The -d option indicates you want Docker Compose to run the EdgeX containers in detached mode - that is to run the containers in the background. Without -d, the containers will all start in the terminal and in order to use the terminal further you have to stop the containers.","title":"Run EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#verify-edgex-foundry-running","text":"In the same terminal, run the process status command shown below to confirm that all the containers downloaded and started. 
docker-compose ps If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above. If you are using a custom Compose file, your containers list may vary. Also note that some \"setup\" containers are designed to start and then exit after configuring your EdgeX instance.","title":"Verify EdgeX Foundry Running"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#checking-the-status-of-edgex-foundry","text":"In addition to the process status of the EdgeX containers, there are a number of other tools to check on the health and status of your EdgeX instance.","title":"Checking the Status of EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#edgex-foundry-container-logs","text":"Use the command below to see the log of any service. # see the logs of a service docker-compose logs -f [ compose-service-name ] # example - core data docker-compose logs -f data See EdgeX Container Names for a list of the EdgeX Docker Compose service names. A check of an EdgeX service log usually indicates if the service is running normally or has errors. When you are done reviewing the content of the log, press Control-C to stop the output to your terminal.","title":"EdgeX Foundry Container Logs"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#ping-check","text":"Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reachability of a network resource. EdgeX uses the same concept to check the availability or reachability of a micro service. After the EdgeX micro service containers are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that it is available. 
http://localhost:[service port]/api/v2/ping See EdgeX Default Service Ports for a list of the EdgeX default service ports. \"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues.","title":"Ping Check"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#consul-registry-check","text":"EdgeX uses the open source Consul project as its registry service. All EdgeX micro services are expected to register with Consul as they start. Going to Consul's dashboard UI enables you to see which services are up. Find the Consul UI at http://localhost:8500/ui . EdgeX 2.0 Please note that as of EdgeX 2.0, Consul can be secured. When EdgeX is running in secure mode with secure Consul , you must provide Consul's access token to get to the dashboard UI referenced above. See How to get Consul ACL token for details.","title":"Consul Registry Check"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/","text":"Getting Started - Go Developers Introduction These instructions are for Go Lang Developers and Contributors to get, run and otherwise work with Go-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements . If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User . What You Need For Go Development In addition to the hardware and software listed in the Developers guide , you will need the following to work with the EdgeX Go-based micro services. Go The open sourced micro services of EdgeX Foundry are written in Go 1.16. See https://golang.org/dl/ for download and installation instructions. Newer versions of Go are available and may work, but the project has not been built and tested against these newer versions of the language. 
Older versions of Go, especially 1.10 or older, are likely to cause issues (EdgeX now uses Go Modules which were introduced with Go Lang 1.11). Build Essentials In order to compile and build some elements of EdgeX, the Gnu C compiler, utilities (like make), and associated libraries need to be installed. Some IDEs may already come with these tools. Some OS environments may already come with these tools. Other environments may require you to install them. For Ubuntu environments, you can install a convenience package called Build Essentials . Note If you are installing Build Essentials, note that there is a build-essential package for each Ubuntu release. Search for 'build-essential' associated with your Ubuntu version via Ubuntu Packages Search . IDE (Optional) There are many tool options for writing and editing Go Lang code. You could use a simple text editor. For more convenience, you may choose to use an integrated development environment (IDE). The list below highlights IDEs used by some of the EdgeX community (without any project endorsement). GoLand GoLand is a popular, although subscription-fee based, Go-specific IDE. Learn how to purchase and download GoLand here: https://www.jetbrains.com/go/ . Visual Studio Code Visual Studio Code is a free, open source IDE developed by Microsoft. Find and download Visual Studio Code here: https://code.visualstudio.com/ . Atom Atom is also a free, open source IDE used with many languages. Find and download Atom here: https://ide.atom.io/ . Get the code This part of the documentation assumes you wish to get and work with the key EdgeX services. This includes but is not limited to Core, Supporting, some security, and system management services. To work with other Go-based security services, device services, application services, SDKs, user interface, or other services, you may need to pull in other EdgeX repository code. See other getting started guides for working with other Go-based services. 
As you will see below, you do not need to explicitly pull in dependency modules (whether EdgeX or 3rd party provided). Dependencies will automatically be pulled through the building process. To work with the key services, you will need to download the source code from the EdgeX Go repository . The EdgeX Go-based micro services are all available in a single GitHub repository download. Once the code is pulled, the Go micro services are built and packaged as platform dependent executables. If Docker is installed, the executable can also be containerized for end user deployment/use. To download the EdgeX Go code, first change directories to the location where you want to download the code (to edgex in the image below). Then use your git tool and request to clone this repository with the following command: git clone https://github.com/edgexfoundry/edgex-go.git Note If you plan to contribute code back to the EdgeX project (as a Contributor), you are going to want to fork the repositories you plan to work with and then pull your fork versus the EdgeX repositories directly. This documentation does not address the process and procedures for working with an EdgeX fork, committing changes and submitting contribution pull requests (PRs). See some of the links below in the EdgeX Wiki for help on how to fork and contribute EdgeX code. https://wiki.edgexfoundry.org/display/FA/Contributor%27s+Guide https://wiki.edgexfoundry.org/display/FA/Contributor%27s+Guide+-+Go+Lang https://wiki.edgexfoundry.org/display/FA/Contributor+Process?searchId=AW768BAW7 Furthermore, this pulls and works with the latest code from the main branch. The main branch contains code that is \"work in progress\" for the upcoming release. If you want to work with a specific release, check out code from the specific release branch or tag (e.g. v2.0.0 , hanoi , v1.3.11 , etc.) 
Build EdgeX Foundry To build the Go Lang services found in edgex-go, first change directories to the root of the edgex-go code cd edgex-go Second, use the community provided Makefile to build all the services in a single call make build Info The first time EdgeX builds, it will take longer than other builds as it has to download all dependencies. Depending on the size of your host machine, an initial build can take several minutes. Make sure the build completes and has no errors. If the build succeeds, you should find new service executables in each of the service folders under the service directories found in the /edgex-go/cmd folder. Run EdgeX Foundry Run the Database Several of the EdgeX Foundry micro services use a database. This includes core-data, core-metadata, support-scheduler, among others. Therefore, when working with EdgeX Foundry it's a good idea to have the database up and running as a general rule. See the Redis Quick Start Guide for how to run Redis in a Linux environment (or find similar documentation for other environments). Run EdgeX Services With the services built, and the database up and running, you can now run each of the services. In this example, the services will run without security services turned on. If you wish to run with security, you will need to clone, build and run the security services. In order to turn security off, first set the EDGEX_SECURITY_SECRET_STORE environment variable to false with an export call. Simply call export EDGEX_SECURITY_SECRET_STORE=false Next, move to the cmd folder and then change folders to the service folder for the service you want to run. Start the executable (with default configuration) that is in that folder. For example, to start Core Metadata, enter the cmd/core-metadata folder and start core-metadata. cd cmd/core-metadata/ ./core-metadata & Note When running the services from the command line, you will usually want to start the service with the & character after the command. 
This makes the command run in the background. If you do not run the service in the background, then you will need to leave the service running in the terminal and open another terminal to start the other services. This will start the EdgeX Go service and leave it running in the background until you kill it. The log entries from the service will still display in the terminal. Watch the log entries for any ERROR indicators. Info To kill a service there are several options, but an easy means is to use pkill with the service name. pkill core-metadata Start as many services as you need in order to carry out your development, testing, etc. As an absolute minimal set, you will typically need to run core-metadata, core-data, core-command and a device service. Selection of the device service will depend on which physical sensor or device you want to use (or use the virtual device to simulate a sensor). Here is the set of commands to launch core-data and core-command (in addition to core-metadata above) cd ../core-data/ ./core-data & cd ../core-command/ ./core-command & Tip You can run some services via Docker containers while working on specific services in Go. See Working in a Hybrid Environment for more details. While the EdgeX services are running you can make EdgeX API calls to localhost . Info No sensor data will flow yet as this just gets the key services up and running. To get sensor data flowing into EdgeX, you will need to get, build and run an EdgeX device service in a similar fashion. The community provides a virtual device service to test and experiment with ( https://github.com/edgexfoundry/device-virtual-go ). Verify EdgeX is Working Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reach-ability of a network resource. EdgeX uses the same concept to check the availability or reach-ability of a micro service. 
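The ping check can also be run from the command line. The sketch below is only an illustration: it assumes curl is installed, and the port 59880 is an assumption taken from the default core-data port - substitute the port of the service you want to check.

```shell
# Sketch of a command-line ping check (assumes curl; 59880 is an
# assumption based on core-data's default port - use your service's port).
PORT=59880
PING_URL=http://localhost:$PORT/api/v2/ping
# --max-time keeps the check from hanging when the service is down.
curl -s --max-time 2 $PING_URL || echo 'service not responding'
```

A running service answers with a small JSON payload; no response means the service is down or unreachable.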
After the EdgeX micro services are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that it is available. http://localhost:[port]/api/v2/ping See EdgeX Default Service Ports for a list of the EdgeX default service ports. \"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues. The example above shows the ping of core-data. Next Steps Application services and some device services are also built in Go. To explore how to create and build EdgeX application and device services in Go, head to the SDK documentation covering these EdgeX elements. Application Services and the Application Functions SDK Device Services in Go EdgeX Foundry in GoLand IDEs offer many code editing conveniences. GoLand was specifically built to edit and work with Go code. So if you are doing any significant code work with the EdgeX Go micro services, you will likely find it convenient to edit, build, run, test, etc. from GoLand or another IDE. Import EdgeX To bring the EdgeX repository code into GoLand, use the File \u2192 Open... menu option in GoLand to open the Open File or Project Window. In the \"Open File or Project\" popup, select the location of the folder containing your cloned edgex-go repo. Open the Terminal From the View menu in GoLand, select the Terminal menu option. This will open a command terminal from which you can issue commands to install the dependencies, build the micro services, run the micro services, etc. Build the EdgeX Micro Services Run \"make build\" in the Terminal view (as shown below) to build the services. This can take a few minutes to build all the services. 
Just as when running make build from the command line in a terminal, the micro service executables that get built in GoLand's terminal will be created in each of the service folders under the service directories found in the /edgex-go/cmd folder. Run EdgeX With all the micro services built, you can now run EdgeX services. You may first want to make sure the database is running. Then, set any environment variables, change directories to the /cmd and service subfolder, and run the service right from the terminal (same as in Run EdgeX Services ). You can now call on the service APIs to make sure they are running correctly. Namely, call on http://localhost:\[service port\]/api/v2/ping to see each service respond to the simplest of requests.","title":"Getting Started - Go Developers"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#getting-started-go-developers","text":"","title":"Getting Started - Go Developers"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#introduction","text":"These instructions are for Go Lang Developers and Contributors to get, run and otherwise work with Go-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements . If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User .","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#what-you-need-for-go-development","text":"In addition to the hardware and software listed in the Developers guide , you will need the following to work with the EdgeX Go-based micro services.","title":"What You Need For Go Development"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#go","text":"The open sourced micro services of EdgeX Foundry are written in Go 1.16. See https://golang.org/dl/ for download and installation instructions. 
Newer versions of Go are available and may work, but the project has not been built and tested against these newer versions of the language. Older versions of Go, especially 1.10 or older, are likely to cause issues (EdgeX now uses Go Modules which were introduced with Go Lang 1.11).","title":"Go"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-essentials","text":"In order to compile and build some elements of EdgeX, the Gnu C compiler, utilities (like make), and associated libraries need to be installed. Some IDEs may already come with these tools. Some OS environments may already come with these tools. Other environments may require you to install them. For Ubuntu environments, you can install a convenience package called Build Essentials . Note If you are installing Build Essentials, note that there is a build-essential package for each Ubuntu release. Search for 'build-essential' associated with your Ubuntu version via Ubuntu Packages Search .","title":"Build Essentials"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#ide-optional","text":"There are many tool options for writing and editing Go Lang code. You could use a simple text editor. For more convenience, you may choose to use an integrated development environment (IDE). The list below highlights IDEs used by some of the EdgeX community (without any project endorsement).","title":"IDE (Optional)"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#goland","text":"GoLand is a popular, although subscription-fee based, Go-specific IDE. Learn how to purchase and download GoLand here: https://www.jetbrains.com/go/ .","title":"GoLand"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#visual-studio-code","text":"Visual Studio Code is a free, open source IDE developed by Microsoft. 
Find and download Visual Studio Code here: https://code.visualstudio.com/ .","title":"Visual Studio Code"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#atom","text":"Atom is also a free, open source IDE used with many languages. Find and download Atom here: https://ide.atom.io/ .","title":"Atom"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#get-the-code","text":"This part of the documentation assumes you wish to get and work with the key EdgeX services. This includes but is not limited to Core, Supporting, some security, and system management services. To work with other Go-based security services, device services, application services, SDKs, user interface, or other services, you may need to pull in other EdgeX repository code. See other getting started guides for working with other Go-based services. As you will see below, you do not need to explicitly pull in dependency modules (whether EdgeX or 3rd party provided). Dependencies will automatically be pulled through the building process. To work with the key services, you will need to download the source code from the EdgeX Go repository . The EdgeX Go-based micro services are all available in a single GitHub repository download. Once the code is pulled, the Go micro services are built and packaged as platform dependent executables. If Docker is installed, the executable can also be containerized for end user deployment/use. To download the EdgeX Go code, first change directories to the location where you want to download the code (to edgex in the image below). Then use your git tool and request to clone this repository with the following command: git clone https://github.com/edgexfoundry/edgex-go.git Note If you plan to contribute code back to the EdgeX project (as a Contributor), you are going to want to fork the repositories you plan to work with and then pull your fork versus the EdgeX repositories directly. 
This documentation does not address the process and procedures for working with an EdgeX fork, committing changes and submitting contribution pull requests (PRs). See some of the links below in the EdgeX Wiki for help on how to fork and contribute EdgeX code. https://wiki.edgexfoundry.org/display/FA/Contributor%27s+Guide https://wiki.edgexfoundry.org/display/FA/Contributor%27s+Guide+-+Go+Lang https://wiki.edgexfoundry.org/display/FA/Contributor+Process?searchId=AW768BAW7 Furthermore, this pulls and works with the latest code from the main branch. The main branch contains code that is \"work in progress\" for the upcoming release. If you want to work with a specific release, check out code from the specific release branch or tag (e.g. v2.0.0 , hanoi , v1.3.11 , etc.)","title":"Get the code"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-edgex-foundry","text":"To build the Go Lang services found in edgex-go, first change directories to the root of the edgex-go code cd edgex-go Second, use the community provided Makefile to build all the services in a single call make build Info The first time EdgeX builds, it will take longer than other builds as it has to download all dependencies. Depending on the size of your host machine, an initial build can take several minutes. Make sure the build completes and has no errors. If the build succeeds, you should find new service executables in each of the service folders under the service directories found in the /edgex-go/cmd folder.","title":"Build EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex-foundry","text":"","title":"Run EdgeX Foundry"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-the-database","text":"Several of the EdgeX Foundry micro services use a database. This includes core-data, core-metadata, support-scheduler, among others. Therefore, when working with EdgeX Foundry it's a good idea to have the database up and running as a general rule. 
See the Redis Quick Start Guide for how to run Redis in a Linux environment (or find similar documentation for other environments).","title":"Run the Database"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex-services","text":"With the services built, and the database up and running, you can now run each of the services. In this example, the services will run without security services turned on. If you wish to run with security, you will need to clone, build and run the security services. In order to turn security off, first set the EDGEX_SECURITY_SECRET_STORE environment variable to false with an export call. Simply call export EDGEX_SECURITY_SECRET_STORE=false Next, move to the cmd folder and then change folders to the service folder for the service you want to run. Start the executable (with default configuration) that is in that folder. For example, to start Core Metadata, enter the cmd/core-metadata folder and start core-metadata. cd cmd/core-metadata/ ./core-metadata & Note When running the services from the command line, you will usually want to start the service with the & character after the command. This makes the command run in the background. If you do not run the service in the background, then you will need to leave the service running in the terminal and open another terminal to start the other services. This will start the EdgeX Go service and leave it running in the background until you kill it. The log entries from the service will still display in the terminal. Watch the log entries for any ERROR indicators. Info To kill a service there are several options, but an easy means is to use pkill with the service name. pkill core-metadata Start as many services as you need in order to carry out your development, testing, etc. As an absolute minimal set, you will typically need to run core-metadata, core-data, core-command and a device service. 
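The start-a-service steps just described can be wrapped in a small helper script. This is only a sketch: the run-service.sh name and the log redirection are conventions added here for illustration, not part of the EdgeX tooling.

```shell
# Sketch: generate a helper that starts one service from its cmd/ folder
# in the background, capturing its output to a log file.
# (run-service.sh and the log redirection are assumptions, not EdgeX tooling.)
export EDGEX_SECURITY_SECRET_STORE=false
cat > run-service.sh <<'EOF'
#!/bin/sh
# usage: ./run-service.sh core-metadata
cd cmd/$1 && ./$1 > $1.log 2>&1 &
EOF
chmod +x run-service.sh
```

With this in the edgex-go root, ./run-service.sh core-data starts core-data in the background; pkill core-data stops it again.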
Selection of the device service will depend on which physical sensor or device you want to use (or use the virtual device to simulate a sensor). Here is the set of commands to launch core-data and core-command (in addition to core-metadata above) cd ../core-data/ ./core-data & cd ../core-command/ ./core-command & Tip You can run some services via Docker containers while working on specific services in Go. See Working in a Hybrid Environment for more details. While the EdgeX services are running you can make EdgeX API calls to localhost . Info No sensor data will flow yet as this just gets the key services up and running. To get sensor data flowing into EdgeX, you will need to get, build and run an EdgeX device service in a similar fashion. The community provides a virtual device service to test and experiment with ( https://github.com/edgexfoundry/device-virtual-go ).","title":"Run EdgeX Services"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#verify-edgex-is-working","text":"Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reach-ability of a network resource. EdgeX uses the same concept to check the availability or reach-ability of a micro service. After the EdgeX micro services are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that it is available. http://localhost:[port]/api/v2/ping See EdgeX Default Service Ports for a list of the EdgeX default service ports. \"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues. 
The example above shows the ping of core-data.","title":"Verify EdgeX is Working"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#next-steps","text":"Application services and some device services are also built in Go. To explore how to create and build EdgeX application and device services in Go, head to the SDK documentation covering these EdgeX elements. Application Services and the Application Functions SDK Device Services in Go","title":"Next Steps"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#edgex-foundry-in-goland","text":"IDEs offer many code editing conveniences. GoLand was specifically built to edit and work with Go code. So if you are doing any significant code work with the EdgeX Go micro services, you will likely find it convenient to edit, build, run, test, etc. from GoLand or another IDE.","title":"EdgeX Foundry in GoLand"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#import-edgex","text":"To bring the EdgeX repository code into GoLand, use the File \u2192 Open... menu option in GoLand to open the Open File or Project Window. In the \"Open File or Project\" popup, select the location of the folder containing your cloned edgex-go repo.","title":"Import EdgeX"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#open-the-terminal","text":"From the View menu in GoLand, select the Terminal menu option. This will open a command terminal from which you can issue commands to install the dependencies, build the micro services, run the micro services, etc.","title":"Open the Terminal"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-the-edgex-micro-services","text":"Run \"make build\" in the Terminal view (as shown below) to build the services. This can take a few minutes to build all the services. 
Just as when running make build from the command line in a terminal, the micro service executables that get built in GoLand's terminal will be created in each of the service folders under the service directories found in the /edgex-go/cmd folder.","title":"Build the EdgeX Micro Services"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex","text":"With all the micro services built, you can now run EdgeX services. You may first want to make sure the database is running. Then, set any environment variables, change directories to the /cmd and service subfolder, and run the service right from the terminal (same as in Run EdgeX Services ). You can now call on the service APIs to make sure they are running correctly. Namely, call on http://localhost:\[service port\]/api/v2/ping to see each service respond to the simplest of requests.","title":"Run EdgeX"},{"location":"getting-started/Ch-GettingStartedHybrid/","text":"Working in a Hybrid Environment In some cases, as a developer or contributor , you want to work on a particular micro service. Yet, you don't want to have to download all the source code, and then build and run all the micro services. There is an alternative approach! You can download and run the EdgeX Docker containers for all the micro services you need and run your single micro service (the one you are presumably working on) natively or from a developer tool of choice outside of a container. Within EdgeX, we call this a \"hybrid\" environment - where part of your EdgeX platform is running from a development environment, while other parts are running from Docker containers. This page outlines how to work in a hybrid development environment. As an example of this process, let's say you want to do coding work with/on the Virtual Device service. You want the rest of the EdgeX environment up and running via Docker containers. How would you set up this hybrid environment? Let's take a look. 
Get and Run the EdgeX Docker Containers If you haven't already, follow the Getting Started using Docker guide to set up your environment (Docker, Docker Compose, etc.) before continuing. Since we plan to work with the virtual device service in this example, you don't need or want to run the virtual device service. You will run all the other services via Docker Compose. Based on the instructions found in the Getting Started using Docker , locate and download the appropriate Docker Compose file for your development environment. Next, issue the following commands to start the EdgeX containers and then stop the virtual device service (which is the service you are working on in this example). docker-compose up -d docker-compose stop device-virtual Run the EdgeX containers and then stop the service container that you are going to work on - in this case the virtual device service container. Note These notes assume you are working with the EdgeX Ireland release. They also assume you have downloaded the appropriate Docker Compose file and have named it docker-compose.yml so you don't have to specify the file name each time you run a Docker Compose command. Some versions of EdgeX may require other or additional containers to run. Tip You can also use the EdgeX Compose Builder tool to create a custom Docker Compose file with just the services you want. See the Compose Builder documentation and check out the Compose Builder tool in GitHub . Run the command below to confirm that all the containers have started and that the virtual device container is no longer running. docker-compose ps Get, Build and Run the (non-Docker) Service With the EdgeX containers running, you can now download, build and run natively (outside of a container) the service you want to work on. In this example, the virtual device service is used to exemplify the steps necessary to get, build and run the native service with the EdgeX containerized services. 
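The container steps just shown (start everything, then stop the service you will run natively) can be captured in one helper script. This is a sketch only: it assumes a docker-compose.yml in the current directory and the Ireland-era service name device-virtual; the start-hybrid.sh name is an added convention.

```shell
# Sketch: helper that brings up EdgeX and stops device-virtual so the
# natively built service can take its place. Assumes docker-compose.yml
# is in the current directory (an assumption - adjust paths/names).
cat > start-hybrid.sh <<'EOF'
#!/bin/sh
docker-compose up -d
docker-compose stop device-virtual
docker-compose ps
EOF
chmod +x start-hybrid.sh
```

Running ./start-hybrid.sh then leaves all containers up except the virtual device service, matching the docker-compose ps check described here.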
However, the practice could be applied to any service. Get the service code Per Getting Started Go Developers , pull the micro service code you want to work on from GitHub. In this example, we use the device-virtual-go as the micro service that is going to be worked on. git clone https://github.com/edgexfoundry/device-virtual-go.git Build the service code At this time, you can add or modify the code to make the service changes you need. Once ready, you must compile and build the service into an executable. Change folders to the cloned micro service directory and build the service. cd device-virtual-go/ make build Clone the service from Github, make your code changes and then build the service locally. Change the configuration Depending on the service you are working on, you may need to change the configuration of the service to point to and use the other services that are containerized (running in Docker). In particular, if the service you are working on is not on the same host as the Docker Engine running the containerized services, you will likely need to change the configuration. Examine the configuration.toml file in the cmd/res folder of the device-virtual-go. Note that the Service (located in the [Service] section of the configuration), Registry (located in the [Registry] section) and all the \"Clients\" (located in the [Clients] section) suggest that the Host of these services is \"localhost\". These and other host configuration elements need to change when the services are not running on the same host - specifically the localhost. When your service is running on a different host than the rest of EdgeX, change the [Service] Host to be the address of the machine hosting your service. Change the [Registry] and [Clients] Host configuration to specify the location of the machine hosting these services. If you do have to change the configuration, save the configuration.toml file after making changes. Run the service code natively. 
The executable created by the make build command is found in the cmd folder of the service. Change folders to the location of the executable. Set any environment variables needed depending on your EdgeX setup. In this example, we did not start the security elements so we need to set EDGEX_SECURITY_SECRET_STORE to false in order to turn off security. Finally, run the service right from a terminal. cd cmd export EDGEX_SECURITY_SECRET_STORE=false ./device-virtual Change folders to the service's cmd/ folder, set env vars, and then execute the service executable in the cmd folder. Check the results At this time, your virtual device micro service should be communicating with the other EdgeX micro services running in their Docker containers. Because Core Metadata callbacks do not work in the hybrid environment, the virtual device service will not receive the Add Device callbacks on the initial run after creating them in Core Metadata. The simple workaround for this issue is to stop ( Ctrl-c from the terminal) and restart the virtual device service (again with ./device-virtual execution). The virtual device service log after stopping and restarting. Give the virtual device a few seconds or so to initialize itself and start sending data to Core Data. To check that it is working properly, open a browser and point your browser to Core Data to check that events are being deposited. You can do this by calling on the Core Data API that checks the count of events in Core Data. http://localhost:59880/api/v2/event/count For this example, you can check that the virtual device service is sending data into Core Data by checking the event count. Note If you choose, you can also import the service into GoLand and then code and run the service from GoLand. 
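Returning to the check-the-results step, the event-count call can be scripted as well; a sketch assuming curl is installed and Core Data is listening on its default port 59880 (taken from the URL above - adjust if your deployment remaps ports):

```shell
# Sketch: query Core Data's event count (assumes curl; 59880 is the
# default Core Data port noted in the text).
COUNT_URL=http://localhost:59880/api/v2/event/count
curl -s --max-time 2 $COUNT_URL || echo 'core-data not reachable'
```

A growing count across repeated calls indicates the virtual device service is successfully depositing events into Core Data.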
Follow the instructions in the Getting Started - Go Developers to learn how to import, build and run a service in GoLand.","title":"Working in a Hybrid Environment"},{"location":"getting-started/Ch-GettingStartedHybrid/#working-in-a-hybrid-environment","text":"In some cases, as a developer or contributor , you want to work on a particular micro service. Yet, you don't want to have to download all the source code, and then build and run all the micro services. There is an alternative approach! You can download and run the EdgeX Docker containers for all the micro services you need and run your single micro service (the one you are presumably working on) natively or from a developer tool of choice outside of a container. Within EdgeX, we call this a \"hybrid\" environment - where part of your EdgeX platform is running from a development environment, while other parts are running from Docker containers. This page outlines how to work in a hybrid development environment. As an example of this process, let's say you want to do coding work with/on the Virtual Device service. You want the rest of the EdgeX environment up and running via Docker containers. How would you set up this hybrid environment? Let's take a look.","title":"Working in a Hybrid Environment"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-and-run-the-edgex-docker-containers","text":"If you haven't already, follow the Getting Started using Docker guide to set up your environment (Docker, Docker Compose, etc.) before continuing. Since we plan to work with the virtual device service in this example, you don't need or want to run the virtual device service. You will run all the other services via Docker Compose. Based on the instructions found in the Getting Started using Docker , locate and download the appropriate Docker Compose file for your development environment. 
Next, issue the following commands to start the EdgeX containers and then stop the virtual device service (which is the service you are working on in this example). docker-compose up -d docker-compose stop device-virtual Run the EdgeX containers and then stop the service container that you are going to work on - in this case the virtual device service container. Note These notes assume you are working with the EdgeX Ireland release. They also assume you have downloaded the appropriate Docker Compose file and have named it docker-compose.yml so you don't have to specify the file name each time you run a Docker Compose command. Some versions of EdgeX may require other or additional containers to run. Tip You can also use the EdgeX Compose Builder tool to create a custom Docker Compose file with just the services you want. See the Compose Builder documentation and check out the Compose Builder tool on GitHub. Run the command below to confirm that all the containers have started and that the virtual device container is no longer running. docker-compose ps","title":"Get and Run the EdgeX Docker Containers"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-build-and-run-the-non-docker-service","text":"With the EdgeX containers running, you can now download, build and run natively (outside of a container) the service you want to work on. In this example, the virtual device service is used to exemplify the steps necessary to get, build and run the native service with the EdgeX containerized services. However, the practice could be applied to any service.","title":"Get, Build and Run the (non-Docker) Service"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-the-service-code","text":"Per Getting Started Go Developers , pull the micro service code you want to work on from GitHub. In this example, we use device-virtual-go as the micro service that is going to be worked on. 
git clone https://github.com/edgexfoundry/device-virtual-go.git","title":"Get the service code"},{"location":"getting-started/Ch-GettingStartedHybrid/#build-the-service-code","text":"At this time, you can add or modify the code to make the service changes you need. Once ready, you must compile and build the service into an executable. Change folders to the cloned micro service directory and build the service. cd device-virtual-go/ make build Clone the service from Github, make your code changes and then build the service locally.","title":"Build the service code"},{"location":"getting-started/Ch-GettingStartedHybrid/#change-the-configuration","text":"Depending on the service you are working on, you may need to change the configuration of the service to point to and use the other services that are containerized (running in Docker). In particular, if the service you are working on is not on the same host as the Docker Engine running the containerized services, you will likely need to change the configuration. Examine the configuration.toml file in the cmd/res folder of the device-virtual-go. Note that the Service (located in the [Service] section of the configuration), Registry (located in the [Registry] section) and all the \"Clients\" (located in the [Clients] section) suggest that the Host of these services is \"localhost\". These and other host configuration elements need to change when the services are not running on the same host - specifically the localhost. When your service is running on a different host than the rest of EdgeX, change the [Service] Host to be the address of the machine hosting your service. Change the [Registry] and [Clients] Host configuration to specify the location of the machine hosting these services. 
If you do have to change the configuration, save the configuration.toml file after making changes.","title":"Change the configuration"},{"location":"getting-started/Ch-GettingStartedHybrid/#run-the-service-code-natively","text":"The executable created by the make build command is found in the cmd folder of the service. Change folders to the location of the executable. Set any environment variables needed depending on your EdgeX setup. In this example, we did not start the security elements, so we need to set EDGEX_SECURITY_SECRET_STORE to false in order to turn off security. Finally, run the service right from a terminal. cd cmd export EDGEX_SECURITY_SECRET_STORE = false ./device-virtual Change folders to the service's cmd/ folder, set env vars, and then execute the service executable in the cmd folder.","title":"Run the service code natively."},{"location":"getting-started/Ch-GettingStartedHybrid/#check-the-results","text":"At this time, your virtual device micro service should be communicating with the other EdgeX micro services running in their Docker containers. Because Core Metadata callbacks do not work in the hybrid environment, the virtual device service will not receive the Add Device callbacks on the initial run after creating them in Core Metadata. The simple workaround for this issue is to stop ( Ctrl-c from the terminal) and restart the virtual device service (again with ./device-virtual execution). The virtual device service log after stopping and restarting. Give the virtual device a few seconds to initialize itself and start sending data to Core Data. To check that it is working properly, open a browser and point your browser to Core Data to check that events are being deposited. You can do this by calling the Core Data API that checks the count of events in Core Data. http://localhost:59880/api/v2/event/count For this example, you can check that the virtual device service is sending data into Core Data by checking the event count. 
Note If you choose, you can also import the service into GoLand and then code and run the service from GoLand. Follow the instructions in the Getting Started - Go Developers to learn how to import, build and run a service in GoLand.","title":"Check the results"},{"location":"getting-started/Ch-GettingStartedSDK-C/","text":"C SDK In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some of the SDK framework and work necessary to complete a device service without actually having a device to talk to. Install dependencies See the Getting Started - C Developers guide to install the necessary tools and infrastructure needed to develop a C service. Get the EdgeX Device SDK for C The next step is to download and build the EdgeX device service SDK for C. First, clone the device-sdk-c from Github: git clone -b v2.0.0 https://github.com/edgexfoundry/device-sdk-c.git cd ./device-sdk-c Note The clone command above has you pull v2.0.0 of the C SDK which is the version compatible with the Ireland release. Then, build the device-sdk-c: make Starting a new Device Service For this guide, you use the example template provided by the C SDK as a starting point for a new device service. You modify the device service to generate random integer values. Begin by copying the template example source into a new directory named example-device-c : mkdir -p ../example-device-c/res/profiles mkdir -p ../example-device-c/res/devices cp ./src/c/examples/template.c ../example-device-c cd ../example-device-c EdgeX 2.0 In EdgeX 2.0 the profiles have been moved to their own res/profiles directory and device definitions have been moved out of the configuration file into the res/devices directory. Build your Device Service Now you are ready to build your new device service using the C SDK you compiled in an earlier step. 
Tell the compiler where to find the C SDK files: export CSDK_DIR = ../device-sdk-c/build/release/_CPack_Packages/Linux/TGZ/csdk-2.0.0 Note The exact path to your compiled CSDK_DIR may differ depending on the tagged version number on the SDK. The version of the SDK can be found in the ./device-sdk-c/VERSION file. In the example above, the Ireland release of 2.0.0 is used. Now build your device service executable: gcc -I $CSDK_DIR /include -L $CSDK_DIR /lib -o device-example-c template.c -lcsdk If everything is working properly, a device-example-c executable will be created in the directory. Customize your Device Service Up to now you've been building the example device service provided by the C SDK. In order to change it to a device service that generates random numbers, you need to modify your template.c method template_get_handler . Replace the following code: for ( uint32_t i = 0 ; i < nreadings ; i ++ ) { /* Log the attributes for each requested resource */ iot_log_debug ( driver -> lc , \" Requested reading %u:\" , i ); dump_attributes ( driver -> lc , requests [ i ]. resource -> attrs ); /* Fill in a result regardless */ readings [ i ]. value = iot_data_alloc_string ( \"Template result\" , IOT_DATA_REF ); } return true ; with this code: for ( uint32_t i = 0 ; i < nreadings ; i ++ ) { const char * rdtype = iot_data_string_map_get_string ( requests [ i ]. resource -> attrs , \"type\" ); if ( rdtype ) { if ( strcmp ( rdtype , \"random\" ) == 0 ) { /* Set the reading as a random value between 0 and 100 */ readings [ i ]. value = iot_data_alloc_i32 ( rand () % 100 ); } else { * exception = iot_data_alloc_string ( \"Unknown sensor type requested\" , IOT_DATA_REF ); return false ; } } else { * exception = iot_data_alloc_string ( \"Unable to read value, no \\\" type \\\" attribute given\" , IOT_DATA_REF ); return false ; } } return true ; Here the reading value is set to a random signed integer. 
Various iot_data_alloc_ functions are defined in the iot/data.h header, allowing readings of different types to be generated. Creating your Device Profile A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it. Follow these steps to create a device profile for the simple random number generating device service. Explore the files in the device-sdk-c/src/c/examples/res/profiles folder. Note the example TemplateProfile.json device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources in the file represent properties of a device (properties like SensorOne, SensorTwo and Switch). A pre-created device profile for the random number device is provided in this documentation. This is supplied in the alternative file format .yaml. Download random-generator-device.yaml and save the file to the ./res/profiles folder. Open the random-generator-device.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber . Note the association of a type with the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an Int32. In real-world IoT situations, this deviceResource list could be extensive, filled with many deviceResources of different types. Creating your Device Device Service accepts pre-defined devices to be added to EdgeX during device service startup. Follow these steps to create a pre-defined device for the simple random number generating device service. Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.json that is already in this folder. 
Open the file with your favorite editor and explore its contents. Note how the file contents represent an actual device with its properties (properties like Name, ProfileName, AutoEvents). A pre-created device for the random number device is provided in this documentation. Download random-generator-device.json and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices folder. Open the random-generator-device.json file in a text editor. In this example, the device described has a profileName: RandNum-Device . In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile Configuring your Device Service Now update the configuration for the new device service. This documentation provides a new configuration.toml file. This configuration file: - changes the port the service operates on so as not to conflict with other device services Download configuration.toml and save the file to the ./res folder. Custom Structured Configuration C Device Services support structured custom configuration as part of the [Driver] section in the configuration.toml file. View the main function of template.c . The confparams variable is initialized with default values for three test parameters. These values may be overridden by entries in the configuration file or by environment variables in the usual way. The resulting configuration is passed to the init function when the service starts. Configuration parameters X , Y/Z and Writable/Q correspond to configuration file entries as follows: [Writable] [Writable.Driver] Q = \"foo\" [Driver] X = \"bar\" [Driver.Y] Z = \"baz\" Entries in the writable section can be changed dynamically if using the registry; the reconfigure callback will be invoked with the new configuration when changes are made. In addition to strings, configuration entries may be integer, float or boolean typed. 
Use the different iot_data_alloc_ functions when setting up the defaults as appropriate. Rebuild your Device Service Now you have your new device service, modified to return a random number, a device profile that will tell EdgeX how to read that random number, as well as a configuration file that will let your device service register itself and its device profile with EdgeX, and begin taking readings every 10 seconds. Rebuild your Device Service to reflect the changes that you have made: gcc -I $CSDK_DIR /include -L $CSDK_DIR /lib -o device-example-c template.c -lcsdk Run your Device Service Allow your newly created Device Service, which was formed out of the Device Service C SDK, to create sensor-mimicking data, which it then sends to EdgeX. Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call: docker-compose up -d Back in your custom device service directory, tell your device service where to find the libcsdk.so : export LD_LIBRARY_PATH = $CSDK_DIR /lib Run your device service: ./device-example-c You should now see your device service having its /Random command called every 10 seconds. You can verify that it is sending data into EdgeX by watching the logs of the edgex-core-data service: docker logs -f edgex-core-data This will print an event record every time your device service is called. 
You can manually generate an event using curl to query the device service directly: curl 0:59992/api/v2/device/name/RandNum-Device01/RandomNumber Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX: http://localhost:59880/api/v2/event/device/name/RandNum-Device01?limit=100 This request asks core data to provide the last 100 events/readings associated with RandNum-Device01.","title":"C SDK"},{"location":"getting-started/Ch-GettingStartedSDK-C/#c-sdk","text":"In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some of the SDK framework and work necessary to complete a device service without actually having a device to talk to.","title":"C SDK"},{"location":"getting-started/Ch-GettingStartedSDK-C/#install-dependencies","text":"See the Getting Started - C Developers guide to install the necessary tools and infrastructure needed to develop a C service.","title":"Install dependencies"},{"location":"getting-started/Ch-GettingStartedSDK-C/#get-the-edgex-device-sdk-for-c","text":"The next step is to download and build the EdgeX device service SDK for C. First, clone the device-sdk-c from GitHub: git clone -b v2.0.0 https://github.com/edgexfoundry/device-sdk-c.git cd ./device-sdk-c Note The clone command above has you pull v2.0.0 of the C SDK, which is the version compatible with the Ireland release. Then, build the device-sdk-c: make","title":"Get the EdgeX Device SDK for C"},{"location":"getting-started/Ch-GettingStartedSDK-C/#starting-a-new-device-service","text":"For this guide, you use the example template provided by the C SDK as a starting point for a new device service. You modify the device service to generate random integer values. 
Begin by copying the template example source into a new directory named example-device-c : mkdir -p ../example-device-c/res/profiles mkdir -p ../example-device-c/res/devices cp ./src/c/examples/template.c ../example-device-c cd ../example-device-c EdgeX 2.0 In EdgeX 2.0 the profiles have been moved to their own res/profiles directory and device definitions have been moved out of the configuration file into the res/devices directory.","title":"Starting a new Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#build-your-device-service","text":"Now you are ready to build your new device service using the C SDK you compiled in an earlier step. Tell the compiler where to find the C SDK files: export CSDK_DIR = ../device-sdk-c/build/release/_CPack_Packages/Linux/TGZ/csdk-2.0.0 Note The exact path to your compiled CSDK_DIR may differ depending on the tagged version number on the SDK. The version of the SDK can be found in the ./device-sdk-c/VERSION file. In the example above, the Ireland release of 2.0.0 is used. Now build your device service executable: gcc -I $CSDK_DIR /include -L $CSDK_DIR /lib -o device-example-c template.c -lcsdk If everything is working properly, a device-example-c executable will be created in the directory.","title":"Build your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#customize-your-device-service","text":"Up to now you've been building the example device service provided by the C SDK. In order to change it to a device service that generates random numbers, you need to modify your template.c method template_get_handler . Replace the following code: for ( uint32_t i = 0 ; i < nreadings ; i ++ ) { /* Log the attributes for each requested resource */ iot_log_debug ( driver -> lc , \" Requested reading %u:\" , i ); dump_attributes ( driver -> lc , requests [ i ]. resource -> attrs ); /* Fill in a result regardless */ readings [ i ]. 
value = iot_data_alloc_string ( \"Template result\" , IOT_DATA_REF ); } return true ; with this code: for ( uint32_t i = 0 ; i < nreadings ; i ++ ) { const char * rdtype = iot_data_string_map_get_string ( requests [ i ]. resource -> attrs , \"type\" ); if ( rdtype ) { if ( strcmp ( rdtype , \"random\" ) == 0 ) { /* Set the reading as a random value between 0 and 100 */ readings [ i ]. value = iot_data_alloc_i32 ( rand () % 100 ); } else { * exception = iot_data_alloc_string ( \"Unknown sensor type requested\" , IOT_DATA_REF ); return false ; } } else { * exception = iot_data_alloc_string ( \"Unable to read value, no \\\" type \\\" attribute given\" , IOT_DATA_REF ); return false ; } } return true ; Here the reading value is set to a random signed integer. Various iot_data_alloc_ functions are defined in the iot/data.h header, allowing readings of different types to be generated.","title":"Customize your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#creating-your-device-profile","text":"A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it. Follow these steps to create a device profile for the simple random number generating device service. Explore the files in the device-sdk-c/src/c/examples/res/profiles folder. Note the example TemplateProfile.json device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources in the file represent properties of a device (properties like SensorOne, SensorTwo and Switch). A pre-created device profile for the random number device is provided in this documentation. This is supplied in the alternative file format .yaml. 
Download random-generator-device.yaml and save the file to the ./res/profiles folder. Open the random-generator-device.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber . Note the association of a type with the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an Int32. In real-world IoT situations, this deviceResource list could be extensive, filled with many deviceResources of different types.","title":"Creating your Device Profile"},{"location":"getting-started/Ch-GettingStartedSDK-C/#creating-your-device","text":"Device Service accepts pre-defined devices to be added to EdgeX during device service startup. Follow these steps to create a pre-defined device for the simple random number generating device service. Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.json that is already in this folder. Open the file with your favorite editor and explore its contents. Note how the file contents represent an actual device with its properties (properties like Name, ProfileName, AutoEvents). A pre-created device for the random number device is provided in this documentation. Download random-generator-device.json and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices folder. Open the random-generator-device.json file in a text editor. In this example, the device described has a profileName: RandNum-Device . In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile","title":"Creating your Device"},{"location":"getting-started/Ch-GettingStartedSDK-C/#configuring-your-device-service","text":"Now update the configuration for the new device service. This documentation provides a new configuration.toml file. 
This configuration file: - changes the port the service operates on so as not to conflict with other device services Download configuration.toml and save the file to the ./res folder.","title":"Configuring your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#custom-structured-configuration","text":"C Device Services support structured custom configuration as part of the [Driver] section in the configuration.toml file. View the main function of template.c . The confparams variable is initialized with default values for three test parameters. These values may be overridden by entries in the configuration file or by environment variables in the usual way. The resulting configuration is passed to the init function when the service starts. Configuration parameters X , Y/Z and Writable/Q correspond to configuration file entries as follows: [Writable] [Writable.Driver] Q = \"foo\" [Driver] X = \"bar\" [Driver.Y] Z = \"baz\" Entries in the writable section can be changed dynamically if using the registry; the reconfigure callback will be invoked with the new configuration when changes are made. In addition to strings, configuration entries may be integer, float or boolean typed. Use the different iot_data_alloc_ functions when setting up the defaults as appropriate.","title":"Custom Structured Configuration"},{"location":"getting-started/Ch-GettingStartedSDK-C/#rebuild-your-device-service","text":"Now you have your new device service, modified to return a random number, a device profile that will tell EdgeX how to read that random number, as well as a configuration file that will let your device service register itself and its device profile with EdgeX, and begin taking readings every 10 seconds. 
Rebuild your Device Service to reflect the changes that you have made: gcc -I $CSDK_DIR /include -L $CSDK_DIR /lib -o device-example-c template.c -lcsdk","title":"Rebuild your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-C/#run-your-device-service","text":"Allow your newly created Device Service, which was formed out of the Device Service C SDK, to create sensor-mimicking data, which it then sends to EdgeX. Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call: docker-compose up -d Back in your custom device service directory, tell your device service where to find the libcsdk.so : export LD_LIBRARY_PATH = $CSDK_DIR /lib Run your device service: ./device-example-c You should now see your device service having its /Random command called every 10 seconds. You can verify that it is sending data into EdgeX by watching the logs of the edgex-core-data service: docker logs -f edgex-core-data This will print an event record every time your device service is called. You can manually generate an event using curl to query the device service directly: curl 0:59992/api/v2/device/name/RandNum-Device01/RandomNumber Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX: http://localhost:59880/api/v2/event/device/name/RandNum-Device01?limit=100 This request asks core data to provide the last 100 events/readings associated with RandNum-Device01.","title":"Run your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/","text":"Golang SDK In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some of the SDK framework and work necessary to complete a device service without actually having a device to talk to. 
Install dependencies See the Getting Started - Go Developers guide to install the necessary tools and infrastructure needed to develop a GoLang service. Get the EdgeX Device SDK for Go Follow these steps to create a folder on your file system, download the Device SDK , and get the GoLang device service SDK on your system. Create a collection of nested folders, ~/edgexfoundry on your file system. This folder will hold your new Device Service. In Linux, create a directory with a single mkdir command mkdir -p ~/edgexfoundry In a terminal window, change directories to the folder just created and pull down the SDK in Go with the commands as shown. cd ~/edgexfoundry git clone --depth 1 --branch v2.0.0 https://github.com/edgexfoundry/device-sdk-go.git Note The clone command above has you pull v2.0.0 of the Go SDK, which is the version associated with Ireland. There are later releases of EdgeX, and it is always a good idea to pull and use the latest version associated with the major version of EdgeX you are using. You may want to check for the latest released version by going to https://github.com/edgexfoundry/device-sdk-go and looking for the latest release. Create a folder that will hold the new device service. The name of the folder is also the name you want to give your new device service. Standard practice in EdgeX is to prefix the name of a device service with device- . In this example, the name 'device-simple' is used. mkdir -p ~/edgexfoundry/device-simple Copy the example code from device-sdk-go to device-simple : cd ~/edgexfoundry cp -rf ./device-sdk-go/example/* ./device-simple/ Copy Makefile to device-simple: cp ./device-sdk-go/Makefile ./device-simple Copy version.go to device-simple: cp ./device-sdk-go/version.go ./device-simple/ After completing these steps, your device-simple folder should look like the listing below. 
Start a new Device Service With the device service application structure in place, it is now time to program the service to act like a sensor-data-fetching service. Change folders to the device-simple directory. cd ~/edgexfoundry/device-simple Open the main.go file in the cmd/device-simple folder with your favorite text editor. Modify the import statements. Replace github.com/edgexfoundry/device-sdk-go/v2/example/driver with github.com/edgexfoundry/device-simple/driver in the import statements. Also replace github.com/edgexfoundry/device-sdk-go/v2 with github.com/edgexfoundry/device-simple . Save the file when you have finished editing. Open the Makefile found in the base folder (~/edgexfoundry/device-simple) in your favorite text editor and make the following changes. Replace: MICROSERVICES=example/cmd/device-simple/device-simple with: MICROSERVICES=cmd/device-simple/device-simple Change: GOFLAGS = -ldflags \"-X github.com/edgexfoundry/device-sdk-go/v2.Version= $( VERSION ) \" to refer to the new service with: GOFLAGS = -ldflags \"-X github.com/edgexfoundry/device-simple.Version= $( VERSION ) \" Change: example/cmd/device-simple/device-simple : go mod tidy $( GOCGO ) build $( GOFLAGS ) -o $@ ./example/cmd/device-simple to: cmd/device-simple/device-simple : go mod tidy $( GOCGO ) build $( GOFLAGS ) -o $@ ./cmd/device-simple Save the file. Enter the following command to create the initial module definition and write it to the go.mod file: GO111MODULE = on go mod init github . com / edgexfoundry / device - simple Use an editor to open and edit the go.mod file created in ~/edgexfoundry/device-simple. Add the code highlighted below to the bottom of the file. This code indicates which version of the device service SDK and the associated EdgeX contracts module to use. require ( github . com / edgexfoundry / device - sdk - go / v2 v2 .0.0 github . 
com / edgexfoundry / go - mod - core - contracts / v2 v2 .0.0 ) Note You should always check the go.mod file in the latest released version of the SDK for the correct versions of the Go SDK and go-mod-contracts to use in your go.mod. Build your Device Service To ensure that the code you have moved and updated still works, build the device service. In a terminal window, make sure you are still in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command: make build If there are no errors, your service is ready for you to add custom code to generate data values as if there were a sensor attached. Customize your Device Service The device service you are creating isn't going to talk to a real device. Instead, it is going to generate a random number where the service would ordinarily make a call to get sensor data from the actual device. Locate the simpledriver.go file in the /driver folder and open it with your favorite editor. In the import() area at the top of the file, add \"math/rand\" under \"time\". Locate the HandleReadCommands() function in this same file (simpledriver.go). Find the following lines of code in this file (around line 139): if reqs [ 0 ]. DeviceResourceName == \"SwitchButton\" { cv , _ := sdkModels . NewCommandValue ( reqs [ 0 ]. DeviceResourceName , common . ValueTypeBool , s . switchButton ) res [ 0 ] = cv } Add the conditional (if-else) code in front of the above conditional: if reqs [ 0 ]. DeviceResourceName == \"randomnumber\" { cv , _ := sdkModels . NewCommandValue ( reqs [ 0 ]. DeviceResourceName , common . ValueTypeInt32 , int32 ( rand . Intn ( 100 ))) res [ 0 ] = cv } else The first line of code checks that the current request is for a resource called \"RandomNumber\". The second line of code generates an integer (between 0 and 99) and uses that as the value the device service sends to EdgeX -- mimicking the collection of data from a real device. 
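Viewed on its own, the value-generation step is tiny. Below is a self-contained sketch of just that step; the randomReading function name is made up for illustration, and in the real driver the value would be wrapped in an sdkModels.CommandValue rather than printed.

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomReading stands in for the hardware-read step of HandleReadCommands:
// it returns a pseudo-random int32 between 0 and 99 inclusive, exactly as
// int32(rand.Intn(100)) does in the driver code above.
func randomReading() int32 {
	return int32(rand.Intn(100))
}

func main() {
	// The real driver passes this value to sdkModels.NewCommandValue with
	// common.ValueTypeInt32; here we simply print it.
	fmt.Println(randomReading())
}
```

In a real device service this is the single point you would replace with an actual sensor read; everything else in HandleReadCommands is plumbing.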
It is here that the device service would normally capture some sensor reading from a device and send the data to EdgeX. HandleReadCommands is where you'd need to do some customization work to talk to the device, get the latest sensor values, and send them into EdgeX. Save the simpledriver.go file. Creating your Device Profile A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it. Follow these steps to create a device profile for the simple random number generating device service. Explore the files in the cmd/device-simple/res/profiles folder. Note the example Simple-Driver.yaml device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources in the file represent properties of a device (properties like SwitchButton, X, Y and Z rotation). A pre-created device profile for the random number device is provided in this documentation. Download random-generator-device.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/profiles folder. Open the random-generator-device.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber . Note the association of a type with the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an INT32. In real-world IoT situations, this deviceResource list could be extensive. Rather than a single deviceResource, you might find this section filled with many deviceResources, each associated with a different type. Creating your Device Device Service accepts pre-defined devices to be added to EdgeX during device service startup. 
Follow these steps to create a pre-defined device for the simple random number generating device service. Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.toml that is already in this folder. Open the file with your favorite editor and explore its contents. Note how DeviceList in the file represents an actual device with its properties (properties like Name, ProfileName, AutoEvents). A pre-created device for the random number device is provided in this documentation. Download random-generator-device.toml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices folder. Open the random-generator-device.toml file in a text editor. In this example, the device described has a ProfileName: RandNum-Device . In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile Validating your Device Go device services provide the /api/v2/validate/device API to validate a device's ProtocolProperties. This feature allows device services whose protocols have strict rules to validate their devices before adding them into EdgeX. The Go SDK provides the DeviceValidator interface: // DeviceValidator is a low-level device-specific interface implemented // by device services that validate device's protocol properties. type DeviceValidator interface { // ValidateDevice triggers device's protocol properties validation, returns error // if validation failed and the incoming device will not be added into EdgeX. ValidateDevice ( device models . Device ) error } When the DeviceValidator interface is implemented, the ValidateDevice function will be called whenever a device is added or updated, validating the incoming device's ProtocolProperties and rejecting the request if validation fails. Configuring your Device Service Now update the configuration for the new device service. This documentation provides a new configuration.toml file.
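A driver-side implementation of the validation idea above might look like the sketch below. Assumptions to note: Device is a simplified stand-in for models.Device from go-mod-core-contracts, and the required protocol property name ("Address" under an "other" protocol section) is hypothetical, chosen only to show the pattern.

```go
package main

import (
	"errors"
	"fmt"
)

// Device is a simplified stand-in for models.Device; only the fields
// needed for validation are shown.
type Device struct {
	Name      string
	Protocols map[string]map[string]string // protocol name -> properties
}

// SimpleDriver sketches a driver implementing the SDK's DeviceValidator idea.
type SimpleDriver struct{}

// ValidateDevice rejects any device whose hypothetical "other" protocol
// section is missing the hypothetical Address property, mirroring how a real
// driver enforces its protocol's rules before a device is added to EdgeX.
func (d *SimpleDriver) ValidateDevice(device Device) error {
	props, ok := device.Protocols["other"]
	if !ok {
		return errors.New("missing 'other' protocol section")
	}
	if props["Address"] == "" {
		return errors.New("protocol property 'Address' is required")
	}
	return nil
}

func main() {
	driver := &SimpleDriver{}
	good := Device{Name: "ok", Protocols: map[string]map[string]string{"other": {"Address": "simple01"}}}
	bad := Device{Name: "bad", Protocols: map[string]map[string]string{"other": {}}}
	fmt.Println(driver.ValidateDevice(good))
	fmt.Println(driver.ValidateDevice(bad))
}
```

When validation returns a non-nil error, the SDK rejects the add/update request, so the invalid device never reaches core metadata.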
This configuration file: changes the port the service operates on so as not to conflict with other device services Download configuration.toml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res folder (overwrite the existing configuration file). Change the host address of the device service to your system's IP address. Warning In the configuration.toml, change the host address (around line 14) to the IP address of the system host. This allows core metadata to call back to your new device service when a new device is created. Because the rest of EdgeX, including core metadata, will be running in Docker, the IP address of the host system on the Docker network must be provided to allow metadata in Docker to call out from Docker to the new device service running on your host system. Custom Structured Configuration EdgeX 2.0 New for EdgeX 2.0 Go device services can now define their own custom structured configuration section in the configuration.toml file. Any additional sections in the TOML are ignored by the SDK when it parses the file for the SDK defined sections. This feature allows a device service to define and watch its own structured section in the service's TOML configuration file. The SDK API provides the following APIs to enable structured custom configuration: LoadCustomConfig(config UpdatableConfig, sectionName string) error Loads the service's custom configuration from a local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration the first time the service is started, if the service is using the Configuration Provider. The UpdateFromRaw interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider.
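The UpdateFromRaw mechanism described above can be sketched as follows. This is a hedched, self-contained illustration: the SimpleCustomConfig struct and its field names are hypothetical, and the SDK normally drives UpdateFromRaw itself after a LoadCustomConfig call, rather than your code calling it directly.

```go
package main

import "fmt"

// SimpleWritable is a hypothetical writable sub-section of the custom config.
type SimpleWritable struct {
	DiscoverSleepDurationSecs int64
}

// SimpleCustomConfig sketches a custom structured configuration section.
// Field names are illustrative, not taken from any real service.
type SimpleCustomConfig struct {
	Host     string
	Port     int
	Writable SimpleWritable
}

// UpdateFromRaw follows the shape the SDK expects of an UpdatableConfig:
// it receives the raw values read from the Configuration Provider and must
// copy them into the typed struct, returning false if the type is wrong.
func (c *SimpleCustomConfig) UpdateFromRaw(rawConfig interface{}) bool {
	updated, ok := rawConfig.(*SimpleCustomConfig)
	if !ok {
		return false
	}
	*c = *updated
	return true
}

func main() {
	cfg := SimpleCustomConfig{}
	raw := &SimpleCustomConfig{Host: "localhost", Port: 59999}
	// In a real service the SDK invokes this after e.g.
	//   service.LoadCustomConfig(&cfg, "SimpleCustom")
	fmt.Println(cfg.UpdateFromRaw(raw), cfg.Host, cfg.Port)
}
```

The same struct (or its writable sub-section) is what you would hand to ListenForCustomConfigChanges so updates from the Configuration Provider are applied through UpdateWritableFromRaw.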
ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error Starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider, the UpdateWritableFromRaw interface will be called on the custom configuration to apply the updates and then signal that the changes occurred via changedCallback. See the Device MQTT Service for an example of using the new Structured Custom Configuration capability. See here for defining the structured custom configuration See here for the custom section in the configuration.toml file See here for loading, validating and watching the configuration Retrieving Secrets The Go Device SDK provides the SecretProvider.GetSecret() API to retrieve the Device Service's secrets. See the Device MQTT Service for an example of using the SecretProvider.GetSecret() API. Note that this code implements a retry loop allowing time for the secret(s) to be pushed into the service's SecretStore via the /secret endpoint. See the Storing Secrets section for more details. Rebuild your Device Service Just as you did in the Build your Device Service step above, build the device-simple service, which creates the executable program that is your device service. In a terminal window, make sure you are in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command: cd ~/edgexfoundry/device-simple make build If there are no errors, your service is created and put in the ~/edgexfoundry/device-simple/cmd/device-simple folder. Look for the device-simple executable in the folder. Run your Device Service Allow the newly created device service, which was formed out of the Device Service Go SDK, to create sensor-mimicking data that it then sends to EdgeX: Follow the Getting Started using Docker guide to start all of EdgeX.
From the folder containing the docker-compose file, start EdgeX with the following call (we're using non-security EdgeX in this example): docker-compose -f docker-compose-no-secty.yml up -d In a terminal window, change directories to the device-simple's cmd/device-simple folder and run the new device-simple service. cd ~/edgexfoundry/device-simple/cmd/device-simple ./device-simple This starts the service and immediately displays log entries in the terminal. Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX: http://localhost:59880/api/v2/event/device/name/RandNum-Device01 This request asks core data to provide the events associated with the RandNum-Device01.","title":"Golang SDK"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#golang-sdk","text":"In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some SDK framework and work necessary to complete a device service without actually having a device to talk to.","title":"Golang SDK"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#install-dependencies","text":"See the Getting Started - Go Developers guide to install the necessary tools and infrastructure needed to develop a GoLang service.","title":"Install dependencies"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#get-the-edgex-device-sdk-for-go","text":"Follow these steps to create a folder on your file system, download the Device SDK , and get the GoLang device service SDK on your system. Create a collection of nested folders, ~/edgexfoundry on your file system. This folder will hold your new Device Service. In Linux, create a directory with a single mkdir command mkdir -p ~/edgexfoundry In a terminal window, change directories to the folder just created and pull down the SDK in Go with the commands as shown.
cd ~/edgexfoundry git clone --depth 1 --branch v2.0.0 https://github.com/edgexfoundry/device-sdk-go.git Note The clone command above has you pull v2.0.0 of the Go SDK which is the version associated to Ireland. There are later releases of EdgeX, and it is always a good idea to pull and use the latest version associated with the major version of EdgeX you are using. You may want to check for the latest released version by going to https://github.com/edgexfoundry/device-sdk-go and look for the latest release. Create a folder that will hold the new device service. The name of the folder is also the name you want to give your new device service. Standard practice in EdgeX is to prefix the name of a device service with device- . In this example, the name 'device-simple' is used. mkdir -p ~/edgexfoundry/device-simple Copy the example code from device-sdk-go to device-simple : cd ~/edgexfoundry cp -rf ./device-sdk-go/example/* ./device-simple/ Copy Makefile to device-simple: cp ./device-sdk-go/Makefile ./device-simple Copy version.go to device-simple: cp ./device-sdk-go/version.go ./device-simple/ After completing these steps, your device-simple folder should look like the listing below.","title":"Get the EdgeX Device SDK for Go"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#start-a-new-device-service","text":"With the device service application structure in place, time now to program the service to act like a sensor data fetching service. Change folders to the device-simple directory. cd ~/edgexfoundry/device-simple Open main.go file in the cmd/device-simple folder with your favorite text editor. Modify the import statements. Replace github.com/edgexfoundry/device-sdk-go/v2/example/driver with github.com/edgexfoundry/device-simple/driver in the import statements. Also replace github.com/edgexfoundry/device-sdk-go/v2 with github.com/edgexfoundry/device-simple . Save the file when you have finished editing. 
Open Makefile found in the base folder (~/edgexfoundry/device-simple) in your favorite text editor and make the following changes. Replace: MICROSERVICES=example/cmd/device-simple/device-simple with: MICROSERVICES=cmd/device-simple/device-simple Change: GOFLAGS = -ldflags \"-X github.com/edgexfoundry/device-sdk-go/v2.Version= $( VERSION ) \" to refer to the new service with: GOFLAGS = -ldflags \"-X github.com/edgexfoundry/device-simple.Version= $( VERSION ) \" Change: example/cmd/device-simple/device-simple : go mod tidy $( GOCGO ) build $( GOFLAGS ) -o $@ ./example/cmd/device-simple to: cmd/device-simple/device-simple : go mod tidy $( GOCGO ) build $( GOFLAGS ) -o $@ ./cmd/device-simple Save the file. Enter the following command to create the initial module definition and write it to the go.mod file: GO111MODULE=on go mod init github.com/edgexfoundry/device-simple Use an editor to open and edit the go.mod file created in ~/edgexfoundry/device-simple. Add the code highlighted below to the bottom of the file. This code indicates which version of the device service SDK and the associated EdgeX contracts module to use. require ( github.com/edgexfoundry/device-sdk-go/v2 v2.0.0 github.com/edgexfoundry/go-mod-core-contracts/v2 v2.0.0 ) Note You should always check the go.mod file in the latest released version of the SDK for the correct versions of the Go SDK and go-mod-contracts to use in your go.mod.","title":"Start a new Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#build-your-device-service","text":"To ensure that the code you have moved and updated still works, build the device service. In a terminal window, make sure you are still in the device-simple folder (the folder containing the Makefile).
Build the service by issuing the following command: make build If there are no errors, your service is ready for you to add custom code to generate data values as if there was a sensor attached.","title":"Build your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#customize-your-device-service","text":"The device service you are creating isn't going to talk to a real device. Instead, it is going to generate a random number where the service would ordinarily make a call to get sensor data from the actual device. Locate the simpledriver.go file in the /driver folder and open it with your favorite editor. In the import() area at the top of the file, add \"math/rand\" under \"time\". Locate the HandleReadCommands() function in this same file (simpledriver.go). Find the following lines of code in this file (around line 139): if reqs [ 0 ]. DeviceResourceName == \"SwitchButton\" { cv , _ := sdkModels . NewCommandValue ( reqs [ 0 ]. DeviceResourceName , common . ValueTypeBool , s . switchButton ) res [ 0 ] = cv } Add the conditional (if-else) code in front of the above conditional: if reqs [ 0 ]. DeviceResourceName == \"RandomNumber\" { cv , _ := sdkModels . NewCommandValue ( reqs [ 0 ]. DeviceResourceName , common . ValueTypeInt32 , int32 ( rand . Intn ( 100 ))) res [ 0 ] = cv } else The first line of code checks that the current request is for a resource called \"RandomNumber\". The second line of code generates an integer (between 0 and 100) and uses that as the value the device service sends to EdgeX -- mimicking the collection of data from a real device. It is here that the device service would normally capture some sensor reading from a device and send the data to EdgeX. The HandleReadCommands is where you'd need to do some customization work to talk to the device, get the latest sensor values and send them into EdgeX.
Save the simpledriver.go file","title":"Customize your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#creating-your-device-profile","text":"A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it. Follow these steps to create a device profile for the simple random number generating device service. Explore the files in the cmd/device-simple/res/profiles folder. Note the example Simple-Driver.yaml device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources in the file represent properties of a device (properties like SwitchButton, X, Y and Z rotation). A pre-created device profile for the random number device is provided in this documentation. Download random-generator-device.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/profiles folder. Open the random-generator-device.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber . Note the association of a type to the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an INT32. In real-world IoT situations, this deviceResource list could be extensive. Rather than a single deviceResource, you might find this section filled with many deviceResources and each deviceResource associated with a different type.","title":"Creating your Device Profile"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#creating-your-device","text":"The device service accepts pre-defined devices to be added to EdgeX during device service startup. Follow these steps to create a pre-defined device for the simple random number generating device service.
Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.toml that is already in this folder. Open the file with your favorite editor and explore its contents. Note how DeviceList in the file represents an actual device with its properties (properties like Name, ProfileName, AutoEvents). A pre-created device for the random number device is provided in this documentation. Download random-generator-device.toml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices folder. Open the random-generator-device.toml file in a text editor. In this example, the device described has a ProfileName: RandNum-Device . In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile","title":"Creating your Device"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#validating-your-device","text":"Go device services provide the /api/v2/validate/device API to validate a device's ProtocolProperties. This feature allows device services whose protocols have strict rules to validate their devices before adding them into EdgeX. The Go SDK provides the DeviceValidator interface: // DeviceValidator is a low-level device-specific interface implemented // by device services that validate device's protocol properties. type DeviceValidator interface { // ValidateDevice triggers device's protocol properties validation, returns error // if validation failed and the incoming device will not be added into EdgeX. ValidateDevice ( device models . Device ) error } When the DeviceValidator interface is implemented, the ValidateDevice function will be called whenever a device is added or updated, validating the incoming device's ProtocolProperties and rejecting the request if validation fails.","title":"Validating your Device"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#configuring-your-device-service","text":"Now update the configuration for the new device service.
This documentation provides a new configuration.toml file. This configuration file: changes the port the service operates on so as not to conflict with other device services Download configuration.toml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res folder (overwrite the existing configuration file). Change the host address of the device service to your system's IP address. Warning In the configuration.toml, change the host address (around line 14) to the IP address of the system host. This allows core metadata to call back to your new device service when a new device is created. Because the rest of EdgeX, including core metadata, will be running in Docker, the IP address of the host system on the Docker network must be provided to allow metadata in Docker to call out from Docker to the new device service running on your host system.","title":"Configuring your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#custom-structured-configuration","text":"EdgeX 2.0 New for EdgeX 2.0 Go device services can now define their own custom structured configuration section in the configuration.toml file. Any additional sections in the TOML are ignored by the SDK when it parses the file for the SDK defined sections. This feature allows a device service to define and watch its own structured section in the service's TOML configuration file. The SDK API provides the following APIs to enable structured custom configuration: LoadCustomConfig(config UpdatableConfig, sectionName string) error Loads the service's custom configuration from a local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration the first time the service is started, if the service is using the Configuration Provider. The UpdateFromRaw interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider.
ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error Starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider, the UpdateWritableFromRaw interface will be called on the custom configuration to apply the updates and then signal that the changes occurred via changedCallback. See the Device MQTT Service for an example of using the new Structured Custom Configuration capability. See here for defining the structured custom configuration See here for the custom section in the configuration.toml file See here for loading, validating and watching the configuration","title":"Custom Structured Configuration"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#retrieving-secrets","text":"The Go Device SDK provides the SecretProvider.GetSecret() API to retrieve the Device Service's secrets. See the Device MQTT Service for an example of using the SecretProvider.GetSecret() API. Note that this code implements a retry loop allowing time for the secret(s) to be pushed into the service's SecretStore via the /secret endpoint. See the Storing Secrets section for more details.","title":"Retrieving Secrets"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#rebuild-your-device-service","text":"Just as you did in the Build your Device Service step above, build the device-simple service, which creates the executable program that is your device service. In a terminal window, make sure you are in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command: cd ~/edgexfoundry/device-simple make build If there are no errors, your service is created and put in the ~/edgexfoundry/device-simple/cmd/device-simple folder.
Look for the device-simple executable in the folder.","title":"Rebuild your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#run-your-device-service","text":"Allow the newly created device service, which was formed out of the Device Service Go SDK, to create sensor-mimicking data that it then sends to EdgeX: Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call (we're using non-security EdgeX in this example): docker-compose -f docker-compose-no-secty.yml up -d In a terminal window, change directories to the device-simple's cmd/device-simple folder and run the new device-simple service. cd ~/edgexfoundry/device-simple/cmd/device-simple ./device-simple This starts the service and immediately displays log entries in the terminal. Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX: http://localhost:59880/api/v2/event/device/name/RandNum-Device01 This request asks core data to provide the events associated with the RandNum-Device01.","title":"Run your Device Service"},{"location":"getting-started/Ch-GettingStartedSDK/","text":"Device Service SDK The EdgeX device service software development kits (SDKs) help developers create new device connectors for EdgeX. An SDK provides the common scaffolding that each device service needs. This allows developers to create new device/sensor connectors more quickly. The EdgeX community already provides many device services. However, there is no way the community can provide for every protocol and every sensor. Even if the EdgeX community provided a device service for every protocol, your use case, sensor, or security infrastructure might require customization. Thus, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity. EdgeX provides two SDKs to help developers create new device services.
Most of EdgeX is written in Go and C. Thus, there's a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, the community may offer alternate language SDKs. The SDKs are libraries that get incorporated into new microservices. They make writing a new device service much easier. By importing the SDK library into your new device service project, developers are left to focus on the code that is specific to the communications with the device via the protocol of the device. The code in the SDK handles the other details, such as: - initialization of the device service - getting the service configured - sending sensor data to core data - managing communications with core metadata - and much more. The code in the SDK also helps to ensure your device service adheres to rules and standards of EdgeX. For example, it makes sure the service registers with the EdgeX registry service when it starts. Use the GoLang SDK Use the C SDK","title":"Device Service SDK"},{"location":"getting-started/Ch-GettingStartedSDK/#device-service-sdk","text":"The EdgeX device service software development kits (SDKs) help developers create new device connectors for EdgeX. An SDK provides the common scaffolding that each device service needs. This allows developers to create new device/sensor connectors more quickly. The EdgeX community already provides many device services. However, there is no way the community can provide for every protocol and every sensor. Even if the EdgeX community provided a device service for every protocol, your use case, sensor, or security infrastructure might require customization. Thus, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity. EdgeX provides two SDKs to help developers create new device services. Most of EdgeX is written in Go and C.
Thus, there's a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, the community may offer alternate language SDKs. The SDKs are libraries that get incorporated into new microservices. They make writing a new device service much easier. By importing the SDK library into your new device service project, developers are left to focus on the code that is specific to the communications with the device via the protocol of the device. The code in the SDK handles the other details, such as: - initialization of the device service - getting the service configured - sending sensor data to core data - managing communications with core metadata - and much more. The code in the SDK also helps to ensure your device service adheres to rules and standards of EdgeX. For example, it makes sure the service registers with the EdgeX registry service when it starts. Use the GoLang SDK Use the C SDK","title":"Device Service SDK"},{"location":"getting-started/Ch-GettingStartedSnapUsers/","text":"Getting Started using Snaps Introduction Snaps are application packages that are easy to install and update while being secure, cross\u2010platform and self-contained. Snaps can be installed on any Linux distribution with snap support . Snap packages of EdgeX services are published on the Snap Store . The list of all EdgeX snaps is available below . EdgeX Snaps The following snaps are maintained by the EdgeX working groups: Platform snap: edgexfoundry : the main platform snap containing all reference core services along with several other security, supporting, application, and device services. Development tools: edgex-ui edgex-cli Application services: edgex-app-service-configurable Device services: edgex-device-camera edgex-device-modbus edgex-device-mqtt edgex-device-rest edgex-device-snmp edgex-device-grove Other EdgeX snaps do exist on the public Snap Store ( search by keyword ) or private stores under brand accounts.
Installing the edgexfoundry snap This is the main platform snap which contains all reference core services along with several other security, supporting, application, and device services. The Snap Store allows access to multiple versions of the EdgeX Foundry snap using channels . If not specified, snaps are installed from the default latest/stable channel. You can see the current snap channels available for your machine's architecture by running the command: snap info edgexfoundry You can install a specific version of the snap by setting the --channel flag. For example, to install the Jakarta (2.1) release: sudo snap install edgexfoundry --channel = 2 .1 To install the latest beta: sudo snap install edgexfoundry --channel = latest/beta # or using the shorthand sudo snap install edgexfoundry --beta Replace beta with edge to get the latest nightly build! Upon installation, the following internal EdgeX services are automatically started: consul vault redis kong postgres core-data core-command core-metadata security-services (see Security Services section below) The following services are disabled by default: app-service-configurable (required for eKuiper) device-virtual kuiper support-notifications support-scheduler sys-mgmt-agent Any disabled services can be enabled and started up using snap set : sudo snap set edgexfoundry support-notifications = on To turn a service off (thereby disabling and immediately stopping it) set the service to off: sudo snap set edgexfoundry support-notifications = off All services are installed on the system as systemd units which, if enabled, will automatically start running when the system boots or reboots. Configuring individual services This snap supports configuration overrides via snap configure hooks which generate service-specific .env files which are used to provide a custom environment to the service, overriding the default configuration provided by the service's configuration.toml file.
If a configuration override is made after a service has already started, then the service must be restarted via command-line (e.g. snap restart edgexfoundry. ), or snapd's REST API . If the overrides are provided via the snap configuration defaults capability of a gadget snap, the overrides will be picked up when the services are first started. The following syntax is used to specify service-specific configuration overrides for the edgexfoundry snap: env... For instance, to set up an override of core data's port use: sudo snap set edgexfoundry env.core-data.service.port = 2112 And restart the service: sudo snap restart edgexfoundry.core-data Note At this time changes to configuration values in the [Writable] section are not supported. For details on the mapping of configuration options to Config options, please refer to Service Environment Configuration Overrides . Viewing logs To view the logs for all services in an EdgeX snap use the snap logs command: sudo snap logs edgexfoundry Individual service logs may be viewed by specifying the service name: sudo snap logs edgexfoundry.consul Or by using the systemd unit name and journalctl : journalctl -u snap.edgexfoundry.consul These techniques can be used with any snap, including application snaps and device service snaps. Security services Currently, the EdgeX snap has security (Secret Store and API Gateway) enabled by default. The security services constitute the following components: kong-daemon (API Gateway a.k.a. Reverse Proxy) postgres (kong's database) vault (Secret Store) Oneshot services perform the necessary security setup and then stop; when listed using snap services , they show up as enabled/inactive : security-proxy-setup (kong setup) security-secretstore-setup (vault setup) security-bootstrapper-redis (secure redis setup) security-consul-bootstrapper (secure consul setup) Vault is known within EdgeX as the Secret Store, while Kong+PostgreSQL are used to provide the EdgeX API Gateway.
For more details, please refer to the snap's Secret Store and API Gateway documentation.","title":"Getting Started using Snaps"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#getting-started-using-snaps","text":"","title":"Getting Started using Snaps"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#introduction","text":"Snaps are application packages that are easy to install and update while being secure, cross\u2010platform and self-contained. Snaps can be installed on any Linux distribution with snap support . Snap packages of EdgeX services are published on the Snap Store . The list of all EdgeX snaps is available below .","title":"Introduction"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#edgex-snaps","text":"The following snaps are maintained by the EdgeX working groups: Platform snap: edgexfoundry : the main platform snap containing all reference core services along with several other security, supporting, application, and device services. Development tools: edgex-ui edgex-cli Application services: edgex-app-service-configurable Device services: edgex-device-camera edgex-device-modbus edgex-device-mqtt edgex-device-rest edgex-device-snmp edgex-device-grove Other EdgeX snaps do exist on the public Snap Store ( search by keyword ) or private stores under brand accounts.","title":"EdgeX Snaps"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#installing-the-edgexfoundry-snap","text":"This is the main platform snap which contains all reference core services along with several other security, supporting, application, and device services. The Snap Store allows access to multiple versions of the EdgeX Foundry snap using channels . If not specified, snaps are installed from the default latest/stable channel. You can see the current snap channels available for your machine's architecture by running the command: snap info edgexfoundry You can install a specific version of the snap by setting the --channel flag.
For example, to install the Jakarta (2.1) release: sudo snap install edgexfoundry --channel = 2 .1 To install the latest beta: sudo snap install edgexfoundry --channel = latest/beta # or using the shorthand sudo snap install edgexfoundry --beta Replace beta with edge to get the latest nightly build! Upon installation, the following internal EdgeX services are automatically started: consul vault redis kong postgres core-data core-command core-metadata security-services (see Security Services section below) The following services are disabled by default: app-service-configurable (required for eKuiper) device-virtual kuiper support-notifications support-scheduler sys-mgmt-agent Any disabled services can be enabled and started up using snap set : sudo snap set edgexfoundry support-notifications = on To turn a service off (thereby disabling and immediately stopping it) set the service to off: sudo snap set edgexfoundry support-notifications = off All services are installed on the system as systemd units which, if enabled, automatically start running when the system boots or reboots.","title":"Installing the edgexfoundry snap"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#configuring-individual-services","text":"This snap supports configuration overrides via snap configure hooks that generate service-specific .env files, which are used to provide a custom environment to the service, overriding the default configuration provided by the service's configuration.toml file. If a configuration override is made after a service has already started, then the service must be restarted via command-line (e.g. snap restart edgexfoundry. ), or snapd's REST API . If the overrides are provided via the snap configuration defaults capability of a gadget snap, the overrides will be picked up when the services are first started. The following syntax is used to specify service-specific configuration overrides for the edgexfoundry snap: env... 
For instance, to set up an override of core data's port use: sudo snap set edgexfoundry env.core-data.service.port = 2112 And restart the service: sudo snap restart edgexfoundry.core-data Note At this time changes to configuration values in the [Writable] section are not supported. For details on the mapping of configuration options to Config options, please refer to Service Environment Configuration Overrides .","title":"Configuring individual services"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#viewing-logs","text":"To view the logs for all services in an EdgeX snap use the snap logs command: sudo snap logs edgexfoundry Individual service logs may be viewed by specifying the service name: sudo snap logs edgexfoundry.consul Or by using the systemd unit name and journalctl : journalctl -u snap.edgexfoundry.consul These techniques can be used with any snap, including application snaps and device service snaps.","title":"Viewing logs"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#security-services","text":"Currently, the EdgeX snap has security (Secret Store and API Gateway) enabled by default. The security services constitute the following components: kong-daemon (API Gateway a.k.a. Reverse Proxy) postgres (kong's database) vault (Secret Store) Oneshot services perform the necessary security setup and then stop; when listed using snap services , they show up as enabled/inactive : security-proxy-setup (kong setup) security-secretstore-setup (vault setup) security-bootstrapper-redis (secure redis setup) security-consul-bootstrapper (secure consul setup) Vault is known within EdgeX as the Secret Store, while Kong+PostgreSQL are used to provide the EdgeX API Gateway. For more details please refer to the snap's Secret Store and API Gateway documentation.","title":"Security services"},{"location":"getting-started/Ch-GettingStartedUsers/","text":"Getting Started as a User This section provides instructions for Users to get EdgeX up and running. 
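The override-and-restart flow described above can be sketched as a small script. This is a minimal sketch: the snap option key follows the env.&lt;service&gt;.&lt;section&gt;.&lt;option&gt; pattern shown in the text, and the commands are only printed (they need the edgexfoundry snap and root privileges to actually run).

```shell
# Build the snap option key for core data's port override, per the
# env.<service>.<section>.<option> syntax described above.
service="core-data"
key="env.$service.service.port"
value="2112"

# Print the commands to run on a host with the edgexfoundry snap installed.
# Note: [Writable] options cannot be overridden this way.
echo "sudo snap set edgexfoundry $key=$value"
echo "sudo snap restart edgexfoundry.$service"
```

After restarting, the service reads the generated .env file and picks up the overridden port.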
If you are a Developer, you should read Getting Started as a Developer . EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. You can download EdgeX micro service source code and build your own micro services. However, if you do not have a need to change or add to EdgeX, then you do not need to download source code. Instead, you can download and run the pre-built EdgeX micro service artifacts. The EdgeX community builds and creates Docker images as well as Snap packages with each release. The community also provides the latest unstable builds (prior to releases). Please continue by referring to: Getting Started using Docker Getting Started using Snaps","title":"Getting Started as a User"},{"location":"getting-started/Ch-GettingStartedUsers/#getting-started-as-a-user","text":"This section provides instructions for Users to get EdgeX up and running. If you are a Developer, you should read Getting Started as a Developer . EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. You can download EdgeX micro service source code and build your own micro services. However, if you do not have a need to change or add to EdgeX, then you do not need to download source code. Instead, you can download and run the pre-built EdgeX micro service artifacts. The EdgeX community builds and creates Docker images as well as Snap packages with each release. The community also provides the latest unstable builds (prior to releases). Please continue by referring to: Getting Started using Docker Getting Started using Snaps","title":"Getting Started as a User"},{"location":"getting-started/Ch-GettingStartedUsersNexus/","text":"Getting Docker Images from EdgeX Nexus Repository Released EdgeX Docker container images are available from Docker Hub . Please refer to the Getting Started using Docker for instructions related to stable releases. 
In some cases, it may be necessary to get your EdgeX container images from the Nexus repository. The Linux Foundation manages the Nexus repository for the project. Warning Containers used from Nexus are considered \"work in progress\". There is no guarantee that these containers will function properly or function properly with other containers from the current release. Nexus contains the EdgeX project staging and development container images. In other words, Nexus contains work-in-progress or pre-release images. These pre-release/work-in-progress Docker images are built nightly and made available at the following Nexus location: nexus3.edgexfoundry.org:10004 Rationale To Use Nexus Images Reasons you might want to use container images from Nexus include: The container is not available from Docker Hub (or Docker Hub is down temporarily) You need the latest development container image (the work in progress) You are working in a Windows or non-Linux environment and you are unable to build a container without some issues. A set of Docker Compose files have been created to allow you to get and use the latest EdgeX service images from Nexus. Find these Nexus \"Nightly Build\" Compose files in the main branch of the edgex-compose repository in GitHub. The EdgeX development team provides these Docker Compose files. As with the EdgeX release Compose files, you will find several different Docker Compose files that allow you to get the type of EdgeX instance set up based on: your hardware (x86 or ARM) your desire to have security services on or off your desire to run with the EdgeX GUI included Warning The \"Nightly Build\" images are provided as-is and may not always function properly or with other EdgeX services. Use with caution and typically only if you are a developer/contributor to EdgeX. These images represent the latest development work and may not have been thoroughly tested or integrated. 
Using Nexus Images The operations to pull the images and run the Nexus Repository containers are the same as when using EdgeX images from Docker Hub (see Getting Started using Docker ). To get container images from the Nexus Repository, in a command terminal, change directories to the location of your downloaded Nexus Docker Compose yaml. Rename the file to docker-compose.yml. Then run the following command in the terminal to pull (fetch) and then start the EdgeX Nexus-image containers. docker-compose up -d Using a Single Nexus Image In some cases, you may only need to use a single image from Nexus while other EdgeX services are created from the Docker Hub images. In this case, you can simply replace the image location for the selected image in your original Docker Compose file. The address of Nexus is nexus3.edgexfoundry.org at port 10004 . So, if you wished to use the EdgeX core data image from Nexus, you would replace the name and location of the core data image edgexfoundry/core-data:2.0.0 with nexus3.edgexfoundry.org:10004/core-data:latest in the Compose file. Note The example above replaces the Ireland core data service from Docker Hub with the latest core data image in Nexus.","title":"Getting Docker Images from EdgeX Nexus Repository"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#getting-docker-images-from-edgex-nexus-repository","text":"Released EdgeX Docker container images are available from Docker Hub . Please refer to the Getting Started using Docker for instructions related to stable releases. In some cases, it may be necessary to get your EdgeX container images from the Nexus repository. The Linux Foundation manages the Nexus repository for the project. Warning Containers used from Nexus are considered \"work in progress\". There is no guarantee that these containers will function properly or function properly with other containers from the current release. Nexus contains the EdgeX project staging and development container images. 
In other words, Nexus contains work-in-progress or pre-release images. These pre-release/work-in-progress Docker images are built nightly and made available at the following Nexus location: nexus3.edgexfoundry.org:10004","title":"Getting Docker Images from EdgeX Nexus Repository"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#rationale-to-use-nexus-images","text":"Reasons you might want to use container images from Nexus include: The container is not available from Docker Hub (or Docker Hub is down temporarily) You need the latest development container image (the work in progress) You are working in a Windows or non-Linux environment and you are unable to build a container without some issues. A set of Docker Compose files have been created to allow you to get and use the latest EdgeX service images from Nexus. Find these Nexus \"Nightly Build\" Compose files in the main branch of the edgex-compose repository in GitHub. The EdgeX development team provides these Docker Compose files. As with the EdgeX release Compose files, you will find several different Docker Compose files that allow you to get the type of EdgeX instance set up based on: your hardware (x86 or ARM) your desire to have security services on or off your desire to run with the EdgeX GUI included Warning The \"Nightly Build\" images are provided as-is and may not always function properly or with other EdgeX services. Use with caution and typically only if you are a developer/contributor to EdgeX. These images represent the latest development work and may not have been thoroughly tested or integrated.","title":"Rationale To Use Nexus Images"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#using-nexus-images","text":"The operations to pull the images and run the Nexus Repository containers are the same as when using EdgeX images from Docker Hub (see Getting Started using Docker ). 
To get container images from the Nexus Repository, in a command terminal, change directories to the location of your downloaded Nexus Docker Compose yaml. Rename the file to docker-compose.yml. Then run the following command in the terminal to pull (fetch) and then start the EdgeX Nexus-image containers. docker-compose up -d","title":"Using Nexus Images"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#using-a-single-nexus-image","text":"In some cases, you may only need to use a single image from Nexus while other EdgeX services are created from the Docker Hub images. In this case, you can simply replace the image location for the selected image in your original Docker Compose file. The address of Nexus is nexus3.edgexfoundry.org at port 10004 . So, if you wished to use the EdgeX core data image from Nexus, you would replace the name and location of the core data image edgexfoundry/core-data:2.0.0 with nexus3.edgexfoundry.org:10004/core-data:latest in the Compose file. Note The example above replaces the Ireland core data service from Docker Hub with the latest core data image in Nexus.","title":"Using a Single Nexus Image"},{"location":"getting-started/quick-start/","text":"Quick Start This guide will get EdgeX up and running on your machine in as little as 5 minutes using Docker containers. We will skip over lengthy descriptions for now. The goal here is to get you a working IoT Edge stack, from device to cloud, as simply as possible. When you need more detailed instructions or a breakdown of some of the commands you see in this quick start, see either the Getting Started as a User or Getting Started as a Developer guides. Setup The fastest way to start running EdgeX is by using our pre-built Docker images. To use them you'll need to install the following: Docker https://docs.docker.com/install/ Docker Compose https://docs.docker.com/compose/install/ Running EdgeX Info Jakarta (v 2.1) is the latest version of EdgeX and used by example in this guide. 
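The single-image substitution described above can be scripted. This sketch rewrites a minimal stand-in compose fragment; the two image names come from the text, but the file path and the tiny compose snippet are illustrative only (in practice you would edit your full docker-compose.yml).

```shell
# Create a minimal stand-in compose fragment containing the released image.
compose=/tmp/compose-snippet.yml
cat > "$compose" <<'EOF'
services:
  core-data:
    image: edgexfoundry/core-data:2.0.0
EOF

# Swap the Docker Hub core data image for the Nexus nightly build
# (nexus3.edgexfoundry.org at port 10004, per the text above).
sed -i 's|edgexfoundry/core-data:2.0.0|nexus3.edgexfoundry.org:10004/core-data:latest|' "$compose"
cat "$compose"
```

After the swap, a docker-compose up -d would pull core-data from Nexus while all other services still come from Docker Hub.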
Once you have Docker and Docker Compose installed, you need to: download / save the latest docker-compose file issue a command to download and run the EdgeX Foundry Docker images from Docker Hub This can be accomplished with a single command as shown below (please note the tabs for x86 vs ARM architectures). x86 curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/jakarta/docker-compose-no-secty.yml -o docker-compose.yml; docker-compose up -d ARM curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/jakarta/docker-compose-no-secty-arm64.yml -o docker-compose.yml; docker-compose up -d Verify that the EdgeX containers have started: docker-compose ps If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above. Connected Devices EdgeX Foundry provides a Virtual device service which is useful for testing and development. It simulates a number of devices , each randomly generating data of various types and within configurable parameters. For example, the Random-Integer-Device will generate random integers. The Virtual Device (also known as Device Virtual) service is already a service pulled and running as part of the default EdgeX configuration. You can verify that Virtual Device readings are already being sent by querying the EdgeX core data service for the event records sent for Random-Integer-Device: curl http://localhost:59880/api/v2/event/device/name/Random-Integer-Device Verify the virtual device service is operating correctly by requesting the last event records received by core data for the Random-Integer-Device. Note By default, the maximum number of events returned will be 20 (the default limit). You can pass a limit parameter to get more or fewer event records. curl http://localhost:59880/api/v2/event/device/name/Random-Integer-Device?limit=50 Controlling the Device Reading data from devices is only part of what EdgeX is capable of. 
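The verification query above can be parameterized as a small sketch. The host, port, device name, and limit parameter all come from the text; the curl call itself is commented out because it needs a running EdgeX stack.

```shell
# Build the core data query for the last events of a device.
# The default limit is 20; pass ?limit=N to get more or fewer records.
host="http://localhost:59880"
device="Random-Integer-Device"
limit=50
url="$host/api/v2/event/device/name/$device?limit=$limit"
echo "$url"
# curl "$url"   # requires a running EdgeX stack
```

Changing the device variable lets you query events for any other registered device the same way.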
You can also use it to control your devices - this is termed 'actuating' the device. When a device registers with the EdgeX services, it provides a Device Profile that describes both the data readings available from that device, and also the commands that control it. When our Virtual Device service registered the device Random-Integer-Device , it used a profile to also define commands that allow you to tell the service not to generate random integers, but to always return a value you set. You won't call commands on devices directly; instead, you use the EdgeX Foundry Command Service to do that. The first step is to check what commands are available to call by asking the Command service about your device: curl http://localhost:59882/api/v2/device/name/Random-Integer-Device This will return a lot of JSON, because there are a number of commands you can call on this device, but the commands we're going to use in this guide are Int16 (the command to get the current integer 16 value) and WriteInt16Value (the command to disable the generation of the random integer 16 number and specify the integer value to return). 
Look for the Int16 and WriteInt16Value commands like those shown in the JSON below: { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"deviceCoreCommand\" : { \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"coreCommands\" : [ { \"name\" : \"WriteInt16Value\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Integer-Device/WriteInt16Value\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Int16\" , \"valueType\" : \"Int16\" }, { \"resourceName\" : \"EnableRandomization_Int16\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"Int16\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Integer-Device/Int16\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Int16\" , \"valueType\" : \"Int16\" } ] } ... ] } } You'll notice that the commands have get or set (or both) options. A get call will return a random number (integer 16), and is what is being called automatically to send data into the rest of EdgeX (specifically core data). You can also call get manually using the URL provided (with no additional parameters needed): curl http://localhost:59882/api/v2/device/name/Random-Integer-Device/Int16 Warning Notice that localhost replaces edgex-core-command here. That's because the EdgeX Foundry services are running in Docker. Docker recognizes the internal hostname edgex-core-command , but when calling the service from outside of Docker, you have to use localhost to reach it. 
This command will return a JSON result that looks like this: { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"event\" : { \"apiVersion\" : \"v2\" , \"id\" : \"6d829637-730c-4b70-9208-dc179070003f\" , \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"sourceName\" : \"Int16\" , \"origin\" : 1625605672073875500 , \"readings\" : [ { \"id\" : \"545b7add-683b-4745-84f1-d859f3d839e0\" , \"origin\" : 1625605672073875500 , \"deviceName\" : \"Random-Integer-Device\" , \"resourceName\" : \"Int16\" , \"profileName\" : \"Random-Integer-Device\" , \"valueType\" : \"Int16\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"value\" : \"-8146\" } ] } } A call to GET of the Random-Integer-Device's Int16 operation through the command service results in the next random value produced by the device in JSON format. The default range for this reading is -32,768 to 32,767. In the example above, a value of -8146 was returned as the reading value. With the service set up to randomly return values, the value returned will be different each time the Int16 command is sent. However, we can use the WriteInt16Value command to disable random values from being returned and instead specify a value to return. Use the curl command below to call the set command to disable random values and return the value 42 each time. curl -X PUT -d '{\"Int16\":\"42\", \"EnableRandomization_Int16\":\"false\"}' http://localhost:59882/api/v2/device/name/Random-Integer-Device/WriteInt16Value Warning Again, notice that localhost replaces edgex-core-command . If successful, the service will confirm your setting of the value to be returned with a 200 status code. A call to the device's SET command through the command service will return the API version and a status code (200 for success). Now every time we call get on the Int16 command, the returned value will be 42 . 
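The set-then-get sequence above can be sketched as a small script. The URLs and payload fields are taken directly from the text; the curl calls are commented out because they require a running EdgeX stack.

```shell
# Pin the virtual device's Int16 reading to 42 by disabling randomization,
# then read it back through the command service.
cmd_host="http://localhost:59882"
device="Random-Integer-Device"
payload='{"Int16":"42", "EnableRandomization_Int16":"false"}'
set_url="$cmd_host/api/v2/device/name/$device/WriteInt16Value"
get_url="$cmd_host/api/v2/device/name/$device/Int16"
echo "PUT $set_url"
echo "GET $get_url"
# curl -X PUT -d "$payload" "$set_url"   # returns 200 on success
# curl "$get_url"                        # every reading is now 42
```

To restore random readings, send another PUT with EnableRandomization_Int16 set back to true.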
A call to GET of the Random-Integer-Device's Int16 operation after setting the Int16 value to 42 and disabling randomization will always return a value of 42. Exporting Data EdgeX provides exporters (called application services) for a variety of cloud services and applications. To keep this guide simple, we're going to use the community provided 'application service configurable' to send the EdgeX data to a public MQTT broker hosted by HiveMQ. You can then watch for the EdgeX event data via the HiveMQ provided MQTT browser client. First add the following application service to your docker-compose.yml file right after the 'app-service-rules' service (the first service in the file). Spacing is important in YAML, so make sure to copy and paste it correctly. app-service-mqtt : container_name : edgex-app-mqtt depends_on : - consul - data environment : CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis EDGEX_PROFILE : mqtt-export EDGEX_SECURITY_SECRET_STORE : \"false\" MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul SERVICE_HOST : edgex-app-mqtt TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS : tcp://broker.mqttdashboard.com:1883 WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_TOPIC : EdgeXEvents hostname : edgex-app-mqtt image : edgexfoundry/app-service-configurable:2.0.0 networks : edgex-network : {} ports : - 127.0.0.1:59702:59702/tcp read_only : true security_opt : - no-new-privileges:true user : 2002:2001 Note This adds the application service configurable to your EdgeX system. 
The application service configurable allows you to configure (versus program) new exports - in this case exporting the EdgeX sensor data to the HiveMQ broker at tcp://broker.mqttdashboard.com:1883 . You will be publishing to the EdgeXEvents topic. For convenience, see documentation on the EdgeX Compose Builder to create custom Docker Compose files. Save the compose file and then execute another compose up command to have Docker Compose pull and start the configurable application service. docker-compose up -d You can connect to this broker with any MQTT client to watch the sent data. HiveMQ provides a web-based client that you can use. Use a browser to go to the client's URL. Once there, hit the Connect button to connect to the HiveMQ public broker. Using the HiveMQ provided client tool, connect to the same public HiveMQ broker your configurable application service is sending EdgeX data to. Then, use the Subscriptions area to subscribe to the \"EdgeXEvents\" topic. You must subscribe to the same topic - EdgeXEvents - to see the EdgeX data sent by the configurable application service. You will begin seeing your random number readings appear in the Messages area on the screen. Once subscribed, the EdgeX event data will begin to appear in the Messages area on the browser screen. Next Steps Congratulations! You now have a full EdgeX deployment reading data from a (virtual) device and publishing it to an MQTT broker in the cloud, and you were able to control your device through commands into EdgeX. It's time to continue your journey by reading the Introduction to EdgeX Foundry, what it is and how it's built. From there you can take the Walkthrough to learn how the micro services work together to control devices and read data from them as you just did.","title":"Quick Start"},{"location":"getting-started/quick-start/#quick-start","text":"This guide will get EdgeX up and running on your machine in as little as 5 minutes using Docker containers. 
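Besides the HiveMQ web client, any command-line MQTT client can watch the exported events. This sketch only prints the subscribe command; mosquitto_sub is an assumption here (it ships with the mosquitto-clients package, not with EdgeX), while the broker address and topic come from the compose fragment above.

```shell
# Broker and topic the configurable app service publishes to (per the text).
broker="broker.mqttdashboard.com"
port=1883
topic="EdgeXEvents"

# Print the subscribe command; run it where mosquitto-clients is installed.
echo "mosquitto_sub -h $broker -p $port -t $topic"
# Press Ctrl+C to stop streaming the exported EdgeX events.
```

Because this is a public broker, anyone subscribed to the EdgeXEvents topic sees the same data, so it is suitable for demos only.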
We will skip over lengthy descriptions for now. The goal here is to get you a working IoT Edge stack, from device to cloud, as simply as possible. When you need more detailed instructions or a breakdown of some of the commands you see in this quick start, see either the Getting Started as a User or Getting Started as a Developer guides.","title":"Quick Start"},{"location":"getting-started/quick-start/#setup","text":"The fastest way to start running EdgeX is by using our pre-built Docker images. To use them you'll need to install the following: Docker https://docs.docker.com/install/ Docker Compose https://docs.docker.com/compose/install/","title":"Setup"},{"location":"getting-started/quick-start/#running-edgex","text":"Info Jakarta (v 2.1) is the latest version of EdgeX and used by example in this guide. Once you have Docker and Docker Compose installed, you need to: download / save the latest docker-compose file issue a command to download and run the EdgeX Foundry Docker images from Docker Hub This can be accomplished with a single command as shown below (please note the tabs for x86 vs ARM architectures). x86 curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/jakarta/docker-compose-no-secty.yml -o docker-compose.yml; docker-compose up -d ARM curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/jakarta/docker-compose-no-secty-arm64.yml -o docker-compose.yml; docker-compose up -d Verify that the EdgeX containers have started: docker-compose ps If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above.","title":"Running EdgeX"},{"location":"getting-started/quick-start/#connected-devices","text":"EdgeX Foundry provides a Virtual device service which is useful for testing and development. It simulates a number of devices , each randomly generating data of various types and within configurable parameters. 
For example, the Random-Integer-Device will generate random integers. The Virtual Device (also known as Device Virtual) service is already a service pulled and running as part of the default EdgeX configuration. You can verify that Virtual Device readings are already being sent by querying the EdgeX core data service for the event records sent for Random-Integer-Device: curl http://localhost:59880/api/v2/event/device/name/Random-Integer-Device Verify the virtual device service is operating correctly by requesting the last event records received by core data for the Random-Integer-Device. Note By default, the maximum number of events returned will be 20 (the default limit). You can pass a limit parameter to get more or fewer event records. curl http://localhost:59880/api/v2/event/device/name/Random-Integer-Device?limit=50","title":"Connected Devices"},{"location":"getting-started/quick-start/#controlling-the-device","text":"Reading data from devices is only part of what EdgeX is capable of. You can also use it to control your devices - this is termed 'actuating' the device. When a device registers with the EdgeX services, it provides a Device Profile that describes both the data readings available from that device, and also the commands that control it. When our Virtual Device service registered the device Random-Integer-Device , it used a profile to also define commands that allow you to tell the service not to generate random integers, but to always return a value you set. You won't call commands on devices directly; instead, you use the EdgeX Foundry Command Service to do that. 
The first step is to check what commands are available to call by asking the Command service about your device: curl http://localhost:59882/api/v2/device/name/Random-Integer-Device This will return a lot of JSON, because there are a number of commands you can call on this device, but the commands we're going to use in this guide are Int16 (the command to get the current integer 16 value) and WriteInt16Value (the command to disable the generation of the random integer 16 number and specify the integer value to return). Look for the Int16 and WriteInt16Value commands like those shown in the JSON below: { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"deviceCoreCommand\" : { \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"coreCommands\" : [ { \"name\" : \"WriteInt16Value\" , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Integer-Device/WriteInt16Value\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Int16\" , \"valueType\" : \"Int16\" }, { \"resourceName\" : \"EnableRandomization_Int16\" , \"valueType\" : \"Bool\" } ] }, { \"name\" : \"Int16\" , \"get\" : true , \"set\" : true , \"path\" : \"/api/v2/device/name/Random-Integer-Device/Int16\" , \"url\" : \"http://edgex-core-command:59882\" , \"parameters\" : [ { \"resourceName\" : \"Int16\" , \"valueType\" : \"Int16\" } ] } ... ] } } You'll notice that the commands have get or set (or both) options. A get call will return a random number (integer 16), and is what is being called automatically to send data into the rest of EdgeX (specifically core data). You can also call get manually using the URL provided (with no additional parameters needed): curl http://localhost:59882/api/v2/device/name/Random-Integer-Device/Int16 Warning Notice that localhost replaces edgex-core-command here. That's because the EdgeX Foundry services are running in Docker. 
Docker recognizes the internal hostname edgex-core-command , but when calling the service from outside of Docker, you have to use localhost to reach it. This command will return a JSON result that looks like this: { \"apiVersion\" : \"v2\" , \"statusCode\" : 200 , \"event\" : { \"apiVersion\" : \"v2\" , \"id\" : \"6d829637-730c-4b70-9208-dc179070003f\" , \"deviceName\" : \"Random-Integer-Device\" , \"profileName\" : \"Random-Integer-Device\" , \"sourceName\" : \"Int16\" , \"origin\" : 1625605672073875500 , \"readings\" : [ { \"id\" : \"545b7add-683b-4745-84f1-d859f3d839e0\" , \"origin\" : 1625605672073875500 , \"deviceName\" : \"Random-Integer-Device\" , \"resourceName\" : \"Int16\" , \"profileName\" : \"Random-Integer-Device\" , \"valueType\" : \"Int16\" , \"binaryValue\" : null , \"mediaType\" : \"\" , \"value\" : \"-8146\" } ] } } A call to GET of the Random-Integer-Device's Int16 operation through the command service results in the next random value produced by the device in JSON format. The default range for this reading is -32,768 to 32,767. In the example above, a value of -8146 was returned as the reading value. With the service set up to randomly return values, the value returned will be different each time the Int16 command is sent. However, we can use the WriteInt16Value command to disable random values from being returned and instead specify a value to return. Use the curl command below to call the set command to disable random values and return the value 42 each time. curl -X PUT -d '{\"Int16\":\"42\", \"EnableRandomization_Int16\":\"false\"}' http://localhost:59882/api/v2/device/name/Random-Integer-Device/WriteInt16Value Warning Again, notice that localhost replaces edgex-core-command . If successful, the service will confirm your setting of the value to be returned with a 200 status code. A call to the device's SET command through the command service will return the API version and a status code (200 for success). 
Now every time we call get on the Int16 command, the returned value will be 42 . A call to GET of the Random-Integer-Device's Int16 operation after setting the Int16 value to 42 and disabling randomization will always return a value of 42.","title":"Controlling the Device"},{"location":"getting-started/quick-start/#exporting-data","text":"EdgeX provides exporters (called application services) for a variety of cloud services and applications. To keep this guide simple, we're going to use the community provided 'application service configurable' to send the EdgeX data to a public MQTT broker hosted by HiveMQ. You can then watch for the EdgeX event data via the HiveMQ provided MQTT browser client. First add the following application service to your docker-compose.yml file right after the 'app-service-rules' service (the first service in the file). Spacing is important in YAML, so make sure to copy and paste it correctly. app-service-mqtt : container_name : edgex-app-mqtt depends_on : - consul - data environment : CLIENTS_CORE_COMMAND_HOST : edgex-core-command CLIENTS_CORE_DATA_HOST : edgex-core-data CLIENTS_CORE_METADATA_HOST : edgex-core-metadata CLIENTS_SUPPORT_NOTIFICATIONS_HOST : edgex-support-notifications CLIENTS_SUPPORT_SCHEDULER_HOST : edgex-support-scheduler DATABASES_PRIMARY_HOST : edgex-redis EDGEX_PROFILE : mqtt-export EDGEX_SECURITY_SECRET_STORE : \"false\" MESSAGEQUEUE_HOST : edgex-redis REGISTRY_HOST : edgex-core-consul SERVICE_HOST : edgex-app-mqtt TRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST : edgex-redis TRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST : edgex-redis WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS : tcp://broker.mqttdashboard.com:1883 WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_TOPIC : EdgeXEvents hostname : edgex-app-mqtt image : edgexfoundry/app-service-configurable:2.0.0 networks : edgex-network : {} ports : - 127.0.0.1:59702:59702/tcp read_only : true security_opt : - no-new-privileges:true user : 2002:2001 Note 
This adds the application service configurable to your EdgeX system. The application service configurable allows you to configure (versus program) new exports - in this case exporting the EdgeX sensor data to the HiveMQ broker at tcp://broker.mqttdashboard.com:1883 . You will be publishing to the EdgeXEvents topic. For convenience, see documentation on the EdgeX Compose Builder to create custom Docker Compose files. Save the compose file and then execute another compose up command to have Docker Compose pull and start the configurable application service. docker-compose up -d You can connect to this broker with any MQTT client to watch the sent data. HiveMQ provides a web-based client that you can use. Use a browser to go to the client's URL. Once there, hit the Connect button to connect to the HiveMQ public broker. Using the HiveMQ provided client tool, connect to the same public HiveMQ broker your configurable application service is sending EdgeX data to. Then, use the Subscriptions area to subscribe to the \"EdgeXEvents\" topic. You must subscribe to the same topic - EdgeXEvents - to see the EdgeX data sent by the configurable application service. You will begin seeing your random number readings appear in the Messages area on the screen. Once subscribed, the EdgeX event data will begin to appear in the Messages area on the browser screen.","title":"Exporting Data"},{"location":"getting-started/quick-start/#next-steps","text":"Congratulations! You now have a full EdgeX deployment reading data from a (virtual) device and publishing it to an MQTT broker in the cloud, and you were able to control your device through commands into EdgeX. It's time to continue your journey by reading the Introduction to EdgeX Foundry, what it is and how it's built. 
From there you can take the Walkthrough to learn how the micro services work together to control devices and read data from them as you just did.","title":"Next Steps"},{"location":"getting-started/tools/Ch-CommandLineInterface/","text":"Command Line Interface (CLI) What is EdgeX CLI? EdgeX CLI is a command-line interface tool for developers, used for interacting with EdgeX Foundry microservices. Installing EdgeX CLI The client can be installed using a snap sudo snap install edgex-cli You can also download the appropriate binary for your operating system from GitHub . If you want to build EdgeX CLI from source, do the following: git clone http://github.com/edgexfoundry/edgex-cli.git cd edgex-cli make tidy make build ./bin/edgex-cli For more information, see the EdgeX CLI README . Features EdgeX CLI provides access to most of the core and support APIs. The commands map directly to the REST API structure. Running edgex-cli with no arguments shows a list of the available commands and information for each of them, including the name of the service implementing the command. Use the -h or --help flag to get more information about each command. 
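For example (a sketch that assumes edgex-cli is already installed and on your PATH), per-command help is requested like this:

```shell
# Print the help text for a single subcommand; "device" is just an example --
# any name from the Available Commands list works the same way.
CMD=device
# "|| true" makes this a no-op if edgex-cli is not installed yet.
edgex-cli "$CMD" --help || true
```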
$ edgex-cli EdgeX-CLI Usage: edgex-cli [command] Available Commands: command Read, write and list commands [Core Command] config Return the current configuration of all EdgeX core/support microservices device Add, remove, get, list and modify devices [Core Metadata] deviceprofile Add, remove, get and list device profiles [Core Metadata] deviceservice Add, remove, get, list and modify device services [Core Metadata] event Add, remove and list events help Help about any command interval Add, get and list intervals [Support Scheduler] intervalaction Get, list, update and remove interval actions [Support Scheduler] metrics Output the CPU/memory usage stats for all EdgeX core/support microservices notification Add, remove and list notifications [Support Notifications] ping Ping (health check) all EdgeX core/support microservices provisionwatcher Add, remove, get, list and modify provision watchers [Core Metadata] reading Count and list readings subscription Add, remove and list subscriptions [Support Notifications] transmission Remove and list transmissions [Support Notifications] version Output the current version of EdgeX CLI and EdgeX microservices Flags: -h, --help help for edgex-cli Use \"edgex-cli [command] --help\" for more information about a command. Commands implemented by all microservices The ping , config , metrics and version commands work with more than one microservice. 
By default these commands will return values from all core and support services: $ edgex-cli metrics Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-metadata 13 1878936 38262 9445 47707 75318280 5967608 core-data 13 1716256 40200 8997 49197 75580424 5949504 core-command 13 1737288 31367 8582 39949 75318280 5380584 support-scheduler 10 2612296 20754 20224 40978 74728456 4146800 support-notifications 10 2714480 21199 20678 41877 74728456 4258640 To only return information for one service, specify the service to use: -c, --command use core-command service endpoint -d, --data use core-data service endpoint -m, --metadata use core-metadata service endpoint -n, --notifications use support-notifications service endpoint -s, --scheduler use support-scheduler service endpoint Example: $ edgex-cli metrics -d Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-data 14 1917712 870037 12258 882295 75580424 64148880 $ edgex-cli metrics -c Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-command 13 1618424 90890 8328 99218 75580424 22779448 $ edgex-cli metrics --metadata Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-metadata 12 1704256 39606 8870 48476 75318280 6139912 The -j/--json flag can be used with most of the edgex-go commands to return the JSON output: $ edgex-cli metrics --metadata --json {\"apiVersion\":\"v2\",\"metrics\":{\"memAlloc\":1974544,\"memFrees\":39625,\"memLiveObjects\":9780,\"memMallocs\":49405,\"memSys\":75318280,\"memTotalAlloc\":6410200,\"cpuBusyAvg\":13}} This could then be formatted and filtered using jq : $ edgex-cli metrics --metadata --json | jq '.' 
{ \"apiVersion\": \"v2\", \"metrics\": { \"memAlloc\": 1684176, \"memFrees\": 41142, \"memLiveObjects\": 8679, \"memMallocs\": 49821, \"memSys\": 75318280, \"memTotalAlloc\": 6530824, \"cpuBusyAvg\": 12 } } Core-command service edgex-cli command list Return a list of all supported device commands, optionally filtered by device name. Example: $ edgex-cli command list Name Device Name Profile Name Methods URL BoolArray Random-Boolean-Device Random-Boolean-Device Get, Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/BoolArray WriteBoolValue Random-Boolean-Device Random-Boolean-Device Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue WriteBoolArrayValue Random-Boolean-Device Random-Boolean-Device Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolArrayValue edgex-cli command read Issue a read command to the specified device. Example: $ edgex-cli command read -c Int16 -d Random-Integer-Device -j | jq '.' { \"apiVersion\": \"v2\", \"statusCode\": 200, \"event\": { \"apiVersion\": \"v2\", \"id\": \"e19f417e-3130-485f-8212-64b593b899f9\", \"deviceName\": \"Random-Integer-Device\", \"profileName\": \"Random-Integer-Device\", \"sourceName\": \"Int16\", \"origin\": 1641484109458647300, \"readings\": [ { \"id\": \"dc1f212d-148a-457c-ab13-48aa0fa58dd1\", \"origin\": 1641484109458647300, \"deviceName\": \"Random-Integer-Device\", \"resourceName\": \"Int16\", \"profileName\": \"Random-Integer-Device\", \"valueType\": \"Int16\", \"binaryValue\": null, \"mediaType\": \"\", \"value\": \"587\" } ] } } edgex-cli command write Issue a write command to the specified device. 
Example using in-line request body: $ edgex-cli command write -d Random-Integer-Device -c Int8 -b \"{\\\"Int8\\\": \\\"99\\\"}\" $ edgex-cli command read -d Random-Integer-Device -c Int8 apiVersion: v2,statusCode: 200 Command Name Device Name Profile Name Value Type Value Int8 Random-Integer-Device Random-Integer-Device Int8 99 Example using a file containing the request: $ echo \"{ \\\"Int8\\\":\\\"88\\\" }\" > file.txt $ edgex-cli command write -d Random-Integer-Device -c Int8 -f file.txt apiVersion: v2,statusCode: 200 $ edgex-cli command read -d Random-Integer-Device -c Int8 Command Name Device Name Profile Name Value Type Value Int8 Random-Integer-Device Random-Integer-Device Int8 88 Core-metadata service edgex-cli deviceservice list List device services $ edgex-cli deviceservice list edgex-cli deviceservice add Add a device service $ edgex-cli deviceservice add -n TestDeviceService -b \"http://localhost:51234\" edgex-cli deviceservice name Shows information about a device service. Most edgex-cli commands support the -v/--verbose and -j/--json flags: $ edgex-cli deviceservice name -n TestDeviceService Name BaseAddress Description TestDeviceService http://localhost:51234 $ edgex-cli deviceservice name -n TestDeviceService -v Name BaseAddress Description AdminState Id Labels LastConnected LastReported Modified TestDeviceService http://localhost:51234 UNLOCKED 7f29ad45-65dc-46c0-a928-00147d328032 [] 0 0 10 Jan 22 17:26 GMT $ edgex-cli deviceservice name -n TestDeviceService -j | jq '.' 
{ \"apiVersion\": \"v2\", \"statusCode\": 200, \"service\": { \"created\": 1641835585465, \"modified\": 1641835585465, \"id\": \"7f29ad45-65dc-46c0-a928-00147d328032\", \"name\": \"TestDeviceService\", \"baseAddress\": \"http://localhost:51234\", \"adminState\": \"UNLOCKED\" } } edgex-cli deviceservice rm Remove a device service $ edgex-cli deviceservice rm -n TestDeviceService edgex-cli deviceservice update Update the device service, getting the ID using jq and confirm that the labels were added $ edgex-cli deviceservice add -n TestDeviceService -b \"http://localhost:51234\" {{{v2} c2600ad2-6489-4c3f-9207-5bdffdb8d68f 201} 844473b1-551d-4545-9143-28cfdf68a539} $ ID=`edgex-cli deviceservice name -n TestDeviceService -j | jq -r '.service.id'` $ edgex-cli deviceservice update -n TestDeviceService -i $ID --labels \"label1,label2\" {{v2} 9f4a4758-48a1-43ce-a232-828f442c2e34 200} $ edgex-cli deviceservice name -n TestDeviceService -v Name BaseAddress Description AdminState Id Labels LastConnected LastReported Modified TestDeviceService http://localhost:51234 UNLOCKED 844473b1-551d-4545-9143-28cfdf68a539 [label1 label2] 0 0 28 Jan 22 12:00 GMT edgex-cli deviceprofile list List device profiles $ edgex-cli deviceprofile list edgex-cli deviceprofile add Add a device profile $ edgex-cli deviceprofile add -n TestProfile -r \"[{\\\"name\\\": \\\"SwitchButton\\\",\\\"description\\\": \\\"Switch On/Off.\\\",\\\"properties\\\": {\\\"valueType\\\": \\\"String\\\",\\\"readWrite\\\": \\\"RW\\\",\\\"defaultValue\\\": \\\"On\\\",\\\"units\\\": \\\"On/Off\\\" } }]\" -c \"[{\\\"name\\\": \\\"Switch\\\",\\\"readWrite\\\": \\\"RW\\\",\\\"resourceOperations\\\": [{\\\"deviceResource\\\": \\\"SwitchButton\\\",\\\"DefaultValue\\\": \\\"false\\\" }]} ]\" {{{v2} 65d083cc-b876-4744-af65-59a00c63fc25 201} 4c0af6b0-4e83-4f3c-a574-dcea5f42d3f0} edgex-cli deviceprofile name Show information about a specified device profile $ edgex-cli deviceprofile name -n TestProfile Name Description Manufacturer 
Model Name TestProfile TestProfile edgex-cli deviceprofile rm Remove a device profile $ edgex-cli deviceprofile rm -n TestProfile edgex-cli device list List current devices $ edgex-cli device list Name Description ServiceName ProfileName Labels AutoEvents Random-Float-Device Example of Device Virtual device-virtual Random-Float-Device [device-virtual-example] [{30s false Float32} {30s false Float64}] Random-UnsignedInteger-Device Example of Device Virtual device-virtual Random-UnsignedInteger-Device [device-virtual-example] [{20s false Uint8} {20s false Uint16} {20s false Uint32} {20s false Uint64}] Random-Boolean-Device Example of Device Virtual device-virtual Random-Boolean-Device [device-virtual-example] [{10s false Bool}] TestDevice TestDeviceService TestProfile [] [] Random-Binary-Device Example of Device Virtual device-virtual Random-Binary-Device [device-virtual-example] [] Random-Integer-Device Example of Device Virtual device-virtual Random-Integer-Device [device-virtual-example] [{15s false Int8} {15s false Int16} {15s false Int32} {15s false Int64}] edgex-cli device add Add a new device. 
This needs a device service and device profile to be created first $ edgex-cli device add -n TestDevice -p TestProfile -s TestDeviceService --protocols \"{\\\"modbus-tcp\\\":{\\\"Address\\\": \\\"localhost\\\",\\\"Port\\\": \\\"1234\\\" }}\" {{{v2} e912aa16-af4a-491d-993b-b0aeb8cd9c67 201} ae0e8b95-52fc-4778-892d-ae7e1127ed39} edgex-cli device name Show information about a specified named device $ edgex-cli device name -n TestDevice Name Description ServiceName ProfileName Labels AutoEvents TestDevice TestDeviceService TestProfile [] [] edgex-cli device rm Remove a device edgex-cli device rm -n TestDevice edgex-cli device list edgex-cli device add -n TestDevice -p TestProfile -s TestDeviceService --protocols \"{\\\"modbus-tcp\\\":{\\\"Address\\\": \\\"localhost\\\",\\\"Port\\\": \\\"1234\\\" }}\" edgex-cli device list edgex-cli device update Update a device This example gets the ID of a device, updates it using that ID and then displays device information to confirm that the labels were added $ ID=`edgex-cli device name -n TestDevice -j | jq -r '.device.id'` $ edgex-cli device update -n TestDevice -i $ID --labels \"label1,label2\" {{v2} 73427492-1158-45b2-9a7c-491a474cecce 200} $ edgex-cli device name -n TestDevice Name Description ServiceName ProfileName Labels AutoEvents TestDevice TestDeviceService TestProfile [label1 label2] [] edgex-cli provisionwatcher add Add a new provision watcher $ edgex-cli provisionwatcher add -n TestWatcher --identifiers \"{\\\"address\\\":\\\"localhost\\\",\\\"port\\\":\\\"1234\\\"}\" -p TestProfile -s TestDeviceService {{{v2} 3f05f6e0-9d9b-4d96-96df-f394cc2ad6f4 201} ee76f4d8-46d4-454c-a4da-8ad9e06d8d7e} edgex-cli provisionwatcher list List provision watchers $ edgex-cli provisionwatcher list Name ServiceName ProfileName Labels Identifiers TestWatcher TestDeviceService TestProfile [] map[address:localhost port:1234] edgex-cli provisionwatcher name Show information about a specific named provision watcher $ edgex-cli provisionwatcher 
name -n TestWatcher Name ServiceName ProfileName Labels Identifiers TestWatcher TestDeviceService TestProfile [] map[address:localhost port:1234] edgex-cli provisionwatcher rm Remove a provision watcher $ edgex-cli provisionwatcher rm -n TestWatcher $ edgex-cli provisionwatcher list No provision watchers available edgex-cli provisionwatcher update Update a provision watcher This example gets the ID of a provision watcher, updates it using that ID and then displays information about it to confirm that the labels were added $ edgex-cli provisionwatcher add -n TestWatcher2 --identifiers \"{\\\"address\\\":\\\"localhost\\\",\\\"port\\\":\\\"1234\\\"}\" -p TestProfile -s TestDeviceService {{{v2} fb7b8bcf-8f58-477b-929e-8dac53cddc81 201} 7aadb7df-1ff1-4b3b-8986-b97e0ef53116} $ ID=`edgex-cli provisionwatcher name -n TestWatcher2 -j | jq -r '.provisionWatcher.id'` $ edgex-cli provisionwatcher update -n TestWatcher2 -i $ID --labels \"label1,label2\" {{v2} af1e70bf-4705-47f4-9046-c7b789799405 200} $ edgex-cli provisionwatcher name -n TestWatcher2 Name ServiceName ProfileName Labels Identifiers TestWatcher2 TestDeviceService TestProfile [label1 label2] map[address:localhost port:1234] Core-data service edgex-cli event add Create an event with a specified number of random readings $ edgex-cli event add -d Random-Integer-Device -p Random-Integer-Device -r 1 -s Int16 -t int16 Added event 75f06078-e8da-4671-8938-ab12ebb2c244 $ edgex-cli event list -v Origin Device Profile Source Id Versionable Readings 10 Jan 22 15:38 GMT Random-Integer-Device Random-Integer-Device Int16 75f06078-e8da-4671-8938-ab12ebb2c244 {v2} [{974a70fe-71ef-4a47-a008-c89f0e4e3bb6 1641829092129391876 Random-Integer-Device Int16 Random-Integer-Device Int16 {[] } {13342}}] edgex-cli event count Count the number of events in core data, optionally filtering by device name $ edgex-cli event count -d Random-Integer-Device Total Random-Integer-Device events: 54 edgex-cli event list List all events, optionally 
specifying a limit and offset $ edgex-cli event list To see two readings only, skipping the first 100 readings: $ edgex-cli reading list --limit 2 --offset 100 Origin Device ProfileName Value ValueType 28 Jan 22 12:55 GMT Random-Integer-Device Random-Integer-Device 22502 Int16 28 Jan 22 12:55 GMT Random-Integer-Device Random-Integer-Device 1878517239016780388 Int64 edgex-cli event rm Remove events, specifying either device name or maximum event age in milliseconds - edgex-cli event rm --device {devicename} removes all events for the specified device - edgex-cli event rm --age {ms} removes all events generated in the last {ms} milliseconds $ edgex-cli event rm -a 30000 $ edgex-cli event count Total events: 0 edgex-cli reading count Count the number of readings in core data, optionally filtering by device name $ edgex-cli reading count Total readings: 235 edgex-cli reading list List all readings, optionally specifying a limit and offset $ edgex-cli reading list Support-scheduler service edgex-cli interval add Add an interval $ edgex-cli interval add -n \"hourly\" -i \"1h\" {{{v2} c7c51f21-dab5-4307-a4c9-bc5d5f2194d9 201} 98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18} edgex-cli interval name Return an interval by name $ edgex-cli interval name -n \"hourly\" Name Interval Start End hourly 1h edgex-cli interval list List all intervals $ edgex-cli interval list -j | jq '.' 
{ \"apiVersion\": \"v2\", \"statusCode\": 200, \"intervals\": [ { \"created\": 1641830955058, \"modified\": 1641830955058, \"id\": \"98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18\", \"name\": \"hourly\", \"interval\": \"1h\" }, { \"created\": 1641830953884, \"modified\": 1641830953884, \"id\": \"507a2a9a-82eb-41ea-afa8-79a9b0033665\", \"name\": \"midnight\", \"start\": \"20180101T000000\", \"interval\": \"24h\" } ] } edgex-cli interval update Update an interval, specifying either ID or name $ edgex-cli interval update -n \"hourly\" -i \"1m\" {{v2} 08239cc4-d4d7-4ea2-9915-d91b9557c742 200} $ edgex-cli interval name -n \"hourly\" -v Id Name Interval Start End 98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18 hourly 1m edgex-cli interval rm Delete a named interval and associated interval actions $ edgex-cli interval rm -n \"hourly\" edgex-cli intervalaction add Add an interval action $ edgex-cli intervalaction add -n \"name01\" -i \"midnight\" -a \"{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"192.168.0.102\\\", \\\"port\\\": 8080, \\\"httpMethod\\\": \\\"GET\\\"}\" edgex-cli intervalaction name Return an interval action by name $ edgex-cli intervalaction name -n \"name01\" Name Interval Address Content ContentType name01 midnight {REST 192.168.0.102 8080 { GET} { 0 0 false false 0} {[]}} edgex-cli intervalaction list List all interval actions $ edgex-cli intervalaction list Name Interval Address Content ContentType name01 midnight {REST 192.168.0.102 8080 { GET} { 0 0 false false 0} {[]}} scrub-aged-events midnight {REST localhost 59880 {/api/v2/event/age/604800000000000 DELETE} { 0 0 false false 0} {[]}} edgex-cli intervalaction update Update an interval action, specifying either ID or name $ edgex-cli intervalaction update -n \"name01\" --admin-state \"LOCKED\" {{v2} afc7b08c-5dc6-4923-9786-30bfebc8a8b6 200} $ edgex-cli intervalaction name -n \"name01\" -j | jq '.action.adminState' \"LOCKED\" edgex-cli intervalaction rm Delete an interval action by name $ edgex-cli intervalaction rm -n 
\"name01\" Support-notifications service edgex-cli notification add Add a notification to be sent $ edgex-cli notification add -s \"sender01\" -c \"content\" --category \"category04\" --labels \"l3\" {{{v2} 13938e01-a560-47d8-bb50-060effdbe490 201} 6a1138c2-b58e-4696-afa7-2074e95165eb} edgex-cli notification list List notifications associated with a given label, category or time range $ edgex-cli notification list -c \"category04\" Category Content Description Labels Sender Severity Status category04 content [l3] sender01 NORMAL PROCESSED $ edgex-cli notification list --start \"01 jan 20 00:00 GMT\" --end \"01 dec 24 00:00 GMT\" Category Content Description Labels Sender Severity Status category04 content [l3] sender01 NORMAL PROCESSED edgex-cli notification rm Delete a notification and all of its associated transmissions $ ID=`edgex-cli notification list -c \"category04\" -v -j | jq -r '.notifications[0].id'` $ echo $ID 6a1138c2-b58e-4696-afa7-2074e95165eb $ edgex-cli notification rm -i $ID $ edgex-cli notification list -c \"category04\" No notifications available edgex-cli notification cleanup Delete all notifications and corresponding transmissions $ edgex-cli notification cleanup $ edgex-cli notification list --start \"01 jan 20 00:00 GMT\" --end \"01 dec 24 00:00 GMT\" No notifications available edgex-cli subscription add Add a new subscription $ edgex-cli subscription add -n \"name01\" --receiver \"receiver01\" -c \"[{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 7770, \\\"httpMethod\\\": \\\"POST\\\"}]\" --labels \"l1,l2,l3\" {{{v2} 2bbfdac0-d2e1-4f08-8344-392b8e8ddc5e 201} 1ec08af0-5767-4505-82f7-581fada6006b} $ edgex-cli subscription add -n \"name02\" --receiver \"receiver01\" -c \"[{\\\"type\\\": \\\"EMAIL\\\", \\\"recipients\\\": [\\\"123@gmail.com\\\"]}]\" --labels \"l1,l2,l3\" {{{v2} f6b417ca-740c-4dee-bc1e-c721c0de4051 201} 156fc2b9-de60-423b-9bff-5312d8452c48} edgex-cli subscription name Return a subscription by its 
unique name $ edgex-cli subscription name -n \"name01\" Name Description Channels Receiver Categories Labels name01 [{REST localhost 7770 { POST} { 0 0 false false 0} {[]}}] receiver01 [] [l1 l2 l3] edgex-cli subscription list List all subscriptions, optionally filtered by a given category, label or receiver $ edgex-cli subscription list --label \"l1\" Name Description Channels Receiver Categories Labels name02 [{EMAIL 0 { } { 0 0 false false 0} {[123@gmail.com]}}] receiver01 [] [l1 l2 l3] name01 [{REST localhost 7770 { POST} { 0 0 false false 0} {[]}}] receiver01 [] [l1 l2 l3] edgex-cli subscription rm Delete the named subscription $ edgex-cli subscription rm -n \"name01\" edgex-cli transmission list To create a transmission, first create a subscription and notifications: $ edgex-cli subscription add -n \"Test-Subscription\" --description \"Test data for subscription\" --categories \"health-check\" --labels \"simple\" --receiver \"tafuser\" --resend-limit 0 --admin-state \"UNLOCKED\" -c \"[{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 7770, \\\"httpMethod\\\": \\\"POST\\\"}]\" {{{v2} f281ec1a-876e-4a29-a14d-195b66d0506c 201} 3b489d23-b0c7-4791-b839-d9a578ebccb9} $ edgex-cli notification add -d \"Test data for notification 1\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} 8df79c7c-03fb-4626-b6e8-bf2d616fa327 201} 0be98b91-daf9-46e2-bcca-39f009d93866} $ edgex-cli notification add -d \"Test data for notification 2\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} ec0b2444-c8b0-45d0-bbd6-847dd007c2fd 201} a7c65d7d-0f9c-47e1-82c2-c8098c47c016} $ edgex-cli notification add -d \"Test data for notification 3\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" 
{{{v2} 45af7f94-c99e-4fb1-a632-fab5ff475be4 201} f982fc97-f53f-4154-bfce-3ef8666c3911} Then list the transmissions: $ edgex-cli transmission list SubscriptionName ResendCount Status Test-Subscription 0 FAILED Test-Subscription 0 FAILED Test-Subscription 0 FAILED edgex-cli transmission id Return a transmission by ID $ ID=`edgex-cli transmission list -j | jq -r '.transmissions[0].id'` $ edgex-cli transmission id -i $ID SubscriptionName ResendCount Status Test-Subscription 0 FAILED edgex-cli transmission rm Delete processed transmissions older than the specified age (in milliseconds) $ edgex-cli transmission rm -a 100","title":"Command Line Interface (CLI)"},{"location":"getting-started/tools/Ch-CommandLineInterface/#command-line-interface-cli","text":"","title":"Command Line Interface (CLI)"},{"location":"getting-started/tools/Ch-CommandLineInterface/#what-is-edgex-cli","text":"EdgeX CLI is a command-line interface tool for developers, used for interacting with EdgeX Foundry microservices.","title":"What is EdgeX CLI?"},{"location":"getting-started/tools/Ch-CommandLineInterface/#installing-edgex-cli","text":"The client can be installed using a snap sudo snap install edgex-cli You can also download the appropriate binary for your operating system from GitHub . If you want to build EdgeX CLI from source, do the following: git clone http://github.com/edgexfoundry/edgex-cli.git cd edgex-cli make tidy make build ./bin/edgex-cli For more information, see the EdgeX CLI README .","title":"Installing EdgeX CLI"},{"location":"getting-started/tools/Ch-CommandLineInterface/#features","text":"EdgeX CLI provides access to most of the core and support APIs. The commands map directly to the REST API structure. Running edgex-cli with no arguments shows a list of the available commands and information for each of them, including the name of the service implementing the command. Use the -h or --help flag to get more information about each command. 
$ edgex-cli EdgeX-CLI Usage: edgex-cli [command] Available Commands: command Read, write and list commands [Core Command] config Return the current configuration of all EdgeX core/support microservices device Add, remove, get, list and modify devices [Core Metadata] deviceprofile Add, remove, get and list device profiles [Core Metadata] deviceservice Add, remove, get, list and modify device services [Core Metadata] event Add, remove and list events help Help about any command interval Add, get and list intervals [Support Scheduler] intervalaction Get, list, update and remove interval actions [Support Scheduler] metrics Output the CPU/memory usage stats for all EdgeX core/support microservices notification Add, remove and list notifications [Support Notifications] ping Ping (health check) all EdgeX core/support microservices provisionwatcher Add, remove, get, list and modify provision watchers [Core Metadata] reading Count and list readings subscription Add, remove and list subscriptions [Support Notifications] transmission Remove and list transmissions [Support Notifications] version Output the current version of EdgeX CLI and EdgeX microservices Flags: -h, --help help for edgex-cli Use \"edgex-cli [command] --help\" for more information about a command.","title":"Features"},{"location":"getting-started/tools/Ch-CommandLineInterface/#commands-implemented-by-all-microservices","text":"The ping , config , metrics and version commands work with more than one microservice. 
By default these commands will return values from all core and support services: $ edgex-cli metrics Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-metadata 13 1878936 38262 9445 47707 75318280 5967608 core-data 13 1716256 40200 8997 49197 75580424 5949504 core-command 13 1737288 31367 8582 39949 75318280 5380584 support-scheduler 10 2612296 20754 20224 40978 74728456 4146800 support-notifications 10 2714480 21199 20678 41877 74728456 4258640 To only return information for one service, specify the service to use: -c, --command use core-command service endpoint -d, --data use core-data service endpoint -m, --metadata use core-metadata service endpoint -n, --notifications use support-notifications service endpoint -s, --scheduler use support-scheduler service endpoint Example: $ edgex-cli metrics -d Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-data 14 1917712 870037 12258 882295 75580424 64148880 $ edgex-cli metrics -c Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-command 13 1618424 90890 8328 99218 75580424 22779448 $ edgex-cli metrics --metadata Service CpuBusyAvg MemAlloc MemFrees MemLiveObjects MemMallocs MemSys MemTotalAlloc core-metadata 12 1704256 39606 8870 48476 75318280 6139912 The -j/--json flag can be used with most of the edgex-go commands to return the JSON output: $ edgex-cli metrics --metadata --json {\"apiVersion\":\"v2\",\"metrics\":{\"memAlloc\":1974544,\"memFrees\":39625,\"memLiveObjects\":9780,\"memMallocs\":49405,\"memSys\":75318280,\"memTotalAlloc\":6410200,\"cpuBusyAvg\":13}} This could then be formatted and filtered using jq : $ edgex-cli metrics --metadata --json | jq '.' 
{ \"apiVersion\": \"v2\", \"metrics\": { \"memAlloc\": 1684176, \"memFrees\": 41142, \"memLiveObjects\": 8679, \"memMallocs\": 49821, \"memSys\": 75318280, \"memTotalAlloc\": 6530824, \"cpuBusyAvg\": 12 } }","title":"Commands implemented by all microservices"},{"location":"getting-started/tools/Ch-CommandLineInterface/#core-command-service","text":"","title":"Core-command service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-command-list","text":"Return a list of all supported device commands, optionally filtered by device name. Example: $ edgex-cli command list Name Device Name Profile Name Methods URL BoolArray Random-Boolean-Device Random-Boolean-Device Get, Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/BoolArray WriteBoolValue Random-Boolean-Device Random-Boolean-Device Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolValue WriteBoolArrayValue Random-Boolean-Device Random-Boolean-Device Put http://localhost:59882/api/v2/device/name/Random-Boolean-Device/WriteBoolArrayValue","title":"edgex-cli command list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-command-read","text":"Issue a read command to the specified device. Example: $ edgex-cli command read -c Int16 -d Random-Integer-Device -j | jq '.' 
{ \"apiVersion\": \"v2\", \"statusCode\": 200, \"event\": { \"apiVersion\": \"v2\", \"id\": \"e19f417e-3130-485f-8212-64b593b899f9\", \"deviceName\": \"Random-Integer-Device\", \"profileName\": \"Random-Integer-Device\", \"sourceName\": \"Int16\", \"origin\": 1641484109458647300, \"readings\": [ { \"id\": \"dc1f212d-148a-457c-ab13-48aa0fa58dd1\", \"origin\": 1641484109458647300, \"deviceName\": \"Random-Integer-Device\", \"resourceName\": \"Int16\", \"profileName\": \"Random-Integer-Device\", \"valueType\": \"Int16\", \"binaryValue\": null, \"mediaType\": \"\", \"value\": \"587\" } ] } }","title":"edgex-cli command read"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-command-write","text":"Issue a write command to the specified device. Example using in-line request body: $ edgex-cli command write -d Random-Integer-Device -c Int8 -b \"{\\\"Int8\\\": \\\"99\\\"}\" $ edgex-cli command read -d Random-Integer-Device -c Int8 apiVersion: v2,statusCode: 200 Command Name Device Name Profile Name Value Type Value Int8 Random-Integer-Device Random-Integer-Device Int8 99 Example using a file containing the request: $ echo \"{ \\\"Int8\\\":\\\"88\\\" }\" > file.txt $ edgex-cli command write -d Random-Integer-Device -c Int8 -f file.txt apiVersion: v2,statusCode: 200 $ edgex-cli command read -d Random-Integer-Device -c Int8 Command Name Device Name Profile Name Value Type Value Int8 Random-Integer-Device Random-Integer-Device Int8 88","title":"edgex-cli command write"},{"location":"getting-started/tools/Ch-CommandLineInterface/#core-metadata-service","text":"","title":"Core-metadata service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-list","text":"List device services $ edgex-cli deviceservice list","title":"edgex-cli deviceservice list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-add","text":"Add a device service $ edgex-cli deviceservice add -n TestDeviceService -b 
\"http://localhost:51234\"","title":"edgex-cli deviceservice add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-name","text":"Shows information about a device service. Most edgex-cli commands support the -v/--verbose and -j/--json flags: $ edgex-cli deviceservice name -n TestDeviceService Name BaseAddress Description TestDeviceService http://localhost:51234 $ edgex-cli deviceservice name -n TestDeviceService -v Name BaseAddress Description AdminState Id Labels LastConnected LastReported Modified TestDeviceService http://localhost:51234 UNLOCKED 7f29ad45-65dc-46c0-a928-00147d328032 [] 0 0 10 Jan 22 17:26 GMT $ edgex-cli deviceservice name -n TestDeviceService -j | jq '.' { \"apiVersion\": \"v2\", \"statusCode\": 200, \"service\": { \"created\": 1641835585465, \"modified\": 1641835585465, \"id\": \"7f29ad45-65dc-46c0-a928-00147d328032\", \"name\": \"TestDeviceService\", \"baseAddress\": \"http://localhost:51234\", \"adminState\": \"UNLOCKED\" } }","title":"edgex-cli deviceservice name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-rm","text":"Remove a device service $ edgex-cli deviceservice rm -n TestDeviceService","title":"edgex-cli deviceservice rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceservice-update","text":"Update the device service, getting the ID using jq and confirm that the labels were added $ edgex-cli deviceservice add -n TestDeviceService -b \"http://localhost:51234\" {{{v2} c2600ad2-6489-4c3f-9207-5bdffdb8d68f 201} 844473b1-551d-4545-9143-28cfdf68a539} $ ID=`edgex-cli deviceservice name -n TestDeviceService -j | jq -r '.service.id'` $ edgex-cli deviceservice update -n TestDeviceService -i $ID --labels \"label1,label2\" {{v2} 9f4a4758-48a1-43ce-a232-828f442c2e34 200} $ edgex-cli deviceservice name -n TestDeviceService -v Name BaseAddress Description AdminState Id Labels LastConnected LastReported Modified TestDeviceService 
http://localhost:51234 UNLOCKED 844473b1-551d-4545-9143-28cfdf68a539 [label1 label2] 0 0 28 Jan 22 12:00 GMT","title":"edgex-cli deviceservice update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceprofile-list","text":"List device profiles $ edgex-cli deviceprofile list","title":"edgex-cli deviceprofile list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceprofile-add","text":"Add a device profile $ edgex-cli deviceprofile add -n TestProfile -r \"[{\\\"name\\\": \\\"SwitchButton\\\",\\\"description\\\": \\\"Switch On/Off.\\\",\\\"properties\\\": {\\\"valueType\\\": \\\"String\\\",\\\"readWrite\\\": \\\"RW\\\",\\\"defaultValue\\\": \\\"On\\\",\\\"units\\\": \\\"On/Off\\\" } }]\" -c \"[{\\\"name\\\": \\\"Switch\\\",\\\"readWrite\\\": \\\"RW\\\",\\\"resourceOperations\\\": [{\\\"deviceResource\\\": \\\"SwitchButton\\\",\\\"DefaultValue\\\": \\\"false\\\" }]} ]\" {{{v2} 65d083cc-b876-4744-af65-59a00c63fc25 201} 4c0af6b0-4e83-4f3c-a574-dcea5f42d3f0}","title":"edgex-cli deviceprofile add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceprofile-name","text":"Show information about a specified device profile $ edgex-cli deviceprofile name -n TestProfile Name Description Manufacturer Model Name TestProfile TestProfile","title":"edgex-cli deviceprofile name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-deviceprofile-rm","text":"Remove a device profile $ edgex-cli deviceprofile rm -n TestProfile","title":"edgex-cli deviceprofile rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-list","text":"List current devices $ edgex-cli device list Name Description ServiceName ProfileName Labels AutoEvents Random-Float-Device Example of Device Virtual device-virtual Random-Float-Device [device-virtual-example] [{30s false Float32} {30s false Float64}] Random-UnsignedInteger-Device Example of Device Virtual device-virtual
Random-UnsignedInteger-Device [device-virtual-example] [{20s false Uint8} {20s false Uint16} {20s false Uint32} {20s false Uint64}] Random-Boolean-Device Example of Device Virtual device-virtual Random-Boolean-Device [device-virtual-example] [{10s false Bool}] TestDevice TestDeviceService TestProfile [] [] Random-Binary-Device Example of Device Virtual device-virtual Random-Binary-Device [device-virtual-example] [] Random-Integer-Device Example of Device Virtual device-virtual Random-Integer-Device [device-virtual-example] [{15s false Int8} {15s false Int16} {15s false Int32} {15s false Int64}]","title":"edgex-cli device list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-add","text":"Add a new device. This needs a device service and device profile to be created first $ edgex-cli device add -n TestDevice -p TestProfile -s TestDeviceService --protocols \"{\\\"modbus-tcp\\\":{\\\"Address\\\": \\\"localhost\\\",\\\"Port\\\": \\\"1234\\\" }}\" {{{v2} e912aa16-af4a-491d-993b-b0aeb8cd9c67 201} ae0e8b95-52fc-4778-892d-ae7e1127ed39}","title":"edgex-cli device add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-name","text":"Show information about a specified named device $ edgex-cli device name -n TestDevice Name Description ServiceName ProfileName Labels AutoEvents TestDevice TestDeviceService TestProfile [] []","title":"edgex-cli device name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-rm","text":"Remove a device edgex-cli device rm -n TestDevice edgex-cli device list edgex-cli device add -n TestDevice -p TestProfile -s TestDeviceService --protocols \"{\\\"modbus-tcp\\\":{\\\"Address\\\": \\\"localhost\\\",\\\"Port\\\": \\\"1234\\\" }}\" edgex-cli device list","title":"edgex-cli device rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-device-update","text":"Update a device This example gets the ID of a device, updates it using that ID and then 
displays device information to confirm that the labels were added $ ID=`edgex-cli device name -n TestDevice -j | jq -r '.device.id'` $ edgex-cli device update -n TestDevice -i $ID --labels \"label1,label2\" {{v2} 73427492-1158-45b2-9a7c-491a474cecce 200} $ edgex-cli device name -n TestDevice Name Description ServiceName ProfileName Labels AutoEvents TestDevice TestDeviceService TestProfile [label1 label2] []","title":"edgex-cli device update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-add","text":"Add a new provision watcher $ edgex-cli provisionwatcher add -n TestWatcher --identifiers \"{\\\"address\\\":\\\"localhost\\\",\\\"port\\\":\\\"1234\\\"}\" -p TestProfile -s TestDeviceService {{{v2} 3f05f6e0-9d9b-4d96-96df-f394cc2ad6f4 201} ee76f4d8-46d4-454c-a4da-8ad9e06d8d7e}","title":"edgex-cli provisionwatcher add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-list","text":"List provision watchers $ edgex-cli provisionwatcher list Name ServiceName ProfileName Labels Identifiers TestWatcher TestDeviceService TestProfile [] map[address:localhost port:1234]","title":"edgex-cli provisionwatcher list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-name","text":"Show information about a specific named provision watcher $ edgex-cli provisionwatcher name -n TestWatcher Name ServiceName ProfileName Labels Identifiers TestWatcher TestDeviceService TestProfile [] map[address:localhost port:1234]","title":"edgex-cli provisionwatcher name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-rm","text":"Remove a provision watcher $ edgex-cli provisionwatcher rm -n TestWatcher $ edgex-cli provisionwatcher list No provision watchers available","title":"edgex-cli provisionwatcher rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-provisionwatcher-update","text":"Update a provision watcher This 
example gets the ID of a provision watcher, updates it using that ID and then displays information about it to confirm that the labels were added $ edgex-cli provisionwatcher add -n TestWatcher2 --identifiers \"{\\\"address\\\":\\\"localhost\\\",\\\"port\\\":\\\"1234\\\"}\" -p TestProfile -s TestDeviceService {{{v2} fb7b8bcf-8f58-477b-929e-8dac53cddc81 201} 7aadb7df-1ff1-4b3b-8986-b97e0ef53116} $ ID=`edgex-cli provisionwatcher name -n TestWatcher2 -j | jq -r '.provisionWatcher.id'` $ edgex-cli provisionwatcher update -n TestWatcher2 -i $ID --labels \"label1,label2\" {{v2} af1e70bf-4705-47f4-9046-c7b789799405 200} $ edgex-cli provisionwatcher name -n TestWatcher2 Name ServiceName ProfileName Labels Identifiers TestWatcher2 TestDeviceService TestProfile [label1 label2] map[address:localhost port:1234]","title":"edgex-cli provisionwatcher update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#core-data-service","text":"","title":"Core-data service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-event-add","text":"Create an event with a specified number of random readings $ edgex-cli event add -d Random-Integer-Device -p Random-Integer-Device -r 1 -s Int16 -t int16 Added event 75f06078-e8da-4671-8938-ab12ebb2c244 $ edgex-cli event list -v Origin Device Profile Source Id Versionable Readings 10 Jan 22 15:38 GMT Random-Integer-Device Random-Integer-Device Int16 75f06078-e8da-4671-8938-ab12ebb2c244 {v2} [{974a70fe-71ef-4a47-a008-c89f0e4e3bb6 1641829092129391876 Random-Integer-Device Int16 Random-Integer-Device Int16 {[] } {13342}}]","title":"edgex-cli event add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-event-count","text":"Count the number of events in core data, optionally filtering by device name $ edgex-cli event count -d Random-Integer-Device Total Random-Integer-Device events: 54","title":"edgex-cli event 
count"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-event-list","text":"List all events, optionally specifying a limit and offset $ edgex-cli event list To see two readings only, skipping the first 100 readings: $ edgex-cli reading list --limit 2 --offset 100 Origin Device ProfileName Value ValueType 28 Jan 22 12:55 GMT Random-Integer-Device Random-Integer-Device 22502 Int16 28 Jan 22 12:55 GMT Random-Integer-Device Random-Integer-Device 1878517239016780388 Int64","title":"edgex-cli event list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-event-rm","text":"Remove events, specifying either device name or maximum event age in milliseconds - edgex-cli event rm --device {devicename} removes all events for the specified device - edgex-cli event rm --age {ms} removes all events generated in the last {ms} milliseconds $ edgex-cli event rm -a 30000 $ edgex-cli event count Total events: 0","title":"edgex-cli event rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-reading-count","text":"Count the number of readings in core data, optionally filtering by device name $ edgex-cli reading count Total readings: 235","title":"edgex-cli reading count"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-reading-list","text":"List all readings, optionally specifying a limit and offset $ edgex-cli reading list","title":"edgex-cli reading list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#support-scheduler-service","text":"","title":"Support-scheduler service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-add","text":"Add an interval $ edgex-cli interval add -n \"hourly\" -i \"1h\" {{{v2} c7c51f21-dab5-4307-a4c9-bc5d5f2194d9 201} 98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18}","title":"edgex-cli interval add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-name","text":"Return an interval by name $ edgex-cli interval 
name -n \"hourly\" Name Interval Start End hourly 1h","title":"edgex-cli interval name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-list","text":"List all intervals $ edgex-cli interval list -j | jq '.' { \"apiVersion\": \"v2\", \"statusCode\": 200, \"intervals\": [ { \"created\": 1641830955058, \"modified\": 1641830955058, \"id\": \"98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18\", \"name\": \"hourly\", \"interval\": \"1h\" }, { \"created\": 1641830953884, \"modified\": 1641830953884, \"id\": \"507a2a9a-82eb-41ea-afa8-79a9b0033665\", \"name\": \"midnight\", \"start\": \"20180101T000000\", \"interval\": \"24h\" } ] }","title":"edgex-cli interval list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-update","text":"Update an interval, specifying either ID or name $ edgex-cli interval update -n \"hourly\" -i \"1m\" {{v2} 08239cc4-d4d7-4ea2-9915-d91b9557c742 200} $ edgex-cli interval name -n \"hourly\" -v Id Name Interval Start End 98a6d5f6-f4c4-4ec5-a00c-7fe24b9c9a18 hourly 1m","title":"edgex-cli interval update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-interval-rm","text":"Delete a named interval and associated interval actions $ edgex-cli interval rm -n \"hourly\"","title":"edgex-cli interval rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-add","text":"Add an interval action $ edgex-cli intervalaction add -n \"name01\" -i \"midnight\" -a \"{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"192.168.0.102\\\", \\\"port\\\": 8080, \\\"httpMethod\\\": \\\"GET\\\"}\"","title":"edgex-cli intervalaction add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-name","text":"Return an interval action by name $ edgex-cli intervalaction name -n \"name01\" Name Interval Address Content ContentType name01 midnight {REST 192.168.0.102 8080 { GET} { 0 0 false false 0} {[]}}","title":"edgex-cli intervalaction 
name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-list","text":"List all interval actions $ edgex-cli intervalaction list Name Interval Address Content ContentType name01 midnight {REST 192.168.0.102 8080 { GET} { 0 0 false false 0} {[]}} scrub-aged-events midnight {REST localhost 59880 {/api/v2/event/age/604800000000000 DELETE} { 0 0 false false 0} {[]}}","title":"edgex-cli intervalaction list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-update","text":"Update an interval action, specifying either ID or name $ edgex-cli intervalaction update -n \"name01\" --admin-state \"LOCKED\" {{v2} afc7b08c-5dc6-4923-9786-30bfebc8a8b6 200} $ edgex-cli intervalaction name -n \"name01\" -j | jq '.action.adminState' \"LOCKED\"","title":"edgex-cli intervalaction update"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-intervalaction-rm","text":"Delete an interval action by name $ edgex-cli intervalaction rm -n \"name01\"","title":"edgex-cli intervalaction rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#support-notifications-service","text":"","title":"Support-notifications service"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-notification-add","text":"Add a notification to be sent $ edgex-cli notification add -s \"sender01\" -c \"content\" --category \"category04\" --labels \"l3\" {{{v2} 13938e01-a560-47d8-bb50-060effdbe490 201} 6a1138c2-b58e-4696-afa7-2074e95165eb}","title":"edgex-cli notification add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-notification-list","text":"List notifications associated with a given label, category or time range $ edgex-cli notification list -c \"category04\" Category Content Description Labels Sender Severity Status category04 content [l3] sender01 NORMAL PROCESSED $ edgex-cli notification list --start \"01 jan 20 00:00 GMT\" --end \"01 dec 24 00:00 GMT\" Category Content 
Description Labels Sender Severity Status category04 content [l3] sender01 NORMAL PROCESSED","title":"edgex-cli notification list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-notification-rm","text":"Delete a notification and all of its associated transmissions $ ID=`edgex-cli notification list -c \"category04\" -v -j | jq -r '.notifications[0].id'` $ echo $ID 6a1138c2-b58e-4696-afa7-2074e95165eb $ edgex-cli notification rm -i $ID $ edgex-cli notification list -c \"category04\" No notifications available","title":"edgex-cli notification rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-notification-cleanup","text":"Delete all notifications and corresponding transmissions $ edgex-cli notification cleanup $ edgex-cli notification list --start \"01 jan 20 00:00 GMT\" --end \"01 dec 24 00:00 GMT\" No notifications available","title":"edgex-cli notification cleanup"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-subscription-add","text":"Add a new subscription $ edgex-cli subscription add -n \"name01\" --receiver \"receiver01\" -c \"[{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 7770, \\\"httpMethod\\\": \\\"POST\\\"}]\" --labels \"l1,l2,l3\" {{{v2} 2bbfdac0-d2e1-4f08-8344-392b8e8ddc5e 201} 1ec08af0-5767-4505-82f7-581fada6006b} $ edgex-cli subscription add -n \"name02\" --receiver \"receiver01\" -c \"[{\\\"type\\\": \\\"EMAIL\\\", \\\"recipients\\\": [\\\"123@gmail.com\\\"]}]\" --labels \"l1,l2,l3\" {{{v2} f6b417ca-740c-4dee-bc1e-c721c0de4051 201} 156fc2b9-de60-423b-9bff-5312d8452c48}","title":"edgex-cli subscription add"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-subscription-name","text":"Return a subscription by its unique name $ edgex-cli subscription name -n \"name01\" Name Description Channels Receiver Categories Labels name01 [{REST localhost 7770 { POST} { 0 0 false false 0} {[]}}] receiver01 [] [l1 l2 l3]","title":"edgex-cli 
subscription name"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-subscription-list","text":"List all subscriptions, optionally filtered by a given category, label or receiver $ edgex-cli subscription list --label \"l1\" Name Description Channels Receiver Categories Labels name02 [{EMAIL 0 { } { 0 0 false false 0} {[123@gmail.com]}}] receiver01 [] [l1 l2 l3] name01 [{REST localhost 7770 { POST} { 0 0 false false 0} {[]}}] receiver01 [] [l1 l2 l3]","title":"edgex-cli subscription list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-subscription-rm","text":"Delete the named subscription $ edgex-cli subscription rm -n \"name01\"","title":"edgex-cli subscription rm"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-transmission-list","text":"To create a transmission, first create a subscription and notifications: $ edgex-cli subscription add -n \"Test-Subscription\" --description \"Test data for subscription\" --categories \"health-check\" --labels \"simple\" --receiver \"tafuser\" --resend-limit 0 --admin-state \"UNLOCKED\" -c \"[{\\\"type\\\": \\\"REST\\\", \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 7770, \\\"httpMethod\\\": \\\"POST\\\"}]\" {{{v2} f281ec1a-876e-4a29-a14d-195b66d0506c 201} 3b489d23-b0c7-4791-b839-d9a578ebccb9} $ edgex-cli notification add -d \"Test data for notification 1\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} 8df79c7c-03fb-4626-b6e8-bf2d616fa327 201} 0be98b91-daf9-46e2-bcca-39f009d93866} $ edgex-cli notification add -d \"Test data for notification 2\" --category \"health-check\" --labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} ec0b2444-c8b0-45d0-bbd6-847dd007c2fd 201} a7c65d7d-0f9c-47e1-82c2-c8098c47c016} $ edgex-cli notification add -d \"Test data for notification 3\" --category \"health-check\" 
--labels \"simple\" --content-type \"string\" --content \"This is a test notification\" --sender \"taf-admin\" {{{v2} 45af7f94-c99e-4fb1-a632-fab5ff475be4 201} f982fc97-f53f-4154-bfce-3ef8666c3911} Then list the transmissions: $ edgex-cli transmission list SubscriptionName ResendCount Status Test-Subscription 0 FAILED Test-Subscription 0 FAILED Test-Subscription 0 FAILED","title":"edgex-cli transmission list"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-transmission-id","text":"Return a transmission by ID $ ID=`edgex-cli transmission list -j | jq -r '.transmissions[0].id'` $ edgex-cli transmission id -i $ID SubscriptionName ResendCount Status Test-Subscription 0 FAILED","title":"edgex-cli transmission id"},{"location":"getting-started/tools/Ch-CommandLineInterface/#edgex-cli-transmission-rm","text":"Delete processed transmissions older than the specificed age (in milliseconds) $ edgex-cli transmission rm -a 100","title":"edgex-cli transmission rm"},{"location":"getting-started/tools/Ch-GUI/","text":"Graphical User Interface (GUI) EdgeX's graphical user interface (GUI) is provided for demonstration and development use to manage and monitor a single instance of EdgeX Foundry. Setup You can quickly run the GUI in a Docker container or as a Snap. You can also download, build and run the GUI natively on your host. Docker Compose The EdgeX GUI is now incorporated into all the secure and non-sure Docker Compose files provided by the project. Locate and download the Docker Compose file that best suits your needs from https://github.com/edgexfoundry/edgex-compose. For example, in the Jakarta branch of edgex-compose the *-with-app-sample* compose files include the Sample App Service allowing the configurable pipeline to be manipulated from the UI. See the four Docker Compose files that include the Sample App Service circled below. Note The GUI can now be used in secure mode as well as non-secure mode. 
See the Getting Started using Docker guide for help on how to find, download and use a Docker Compose file to run EdgeX - in this case with the Sample App Service. Secure mode with API Gateway token When first running the UI in secure mode, you will be prompted to enter a token. Follow the How to get access token? link to view the documentation on how to get an API Gateway access token. Once you enter the token, the UI will have access to the EdgeX services via the API Gateway. Note The UI is no longer restricted to access from localhost . It can now be accessed from any IP address that can access the host system. This is allowed because the UI is secured via API Gateway token when running in secure mode. Snaps Installing EdgeX UI as a snap The latest stable version of the snap can be installed using: $ sudo snap install edgex-ui A specific release of the snap can be installed from a dedicated channel. For example, to install the 2.1 (Jakarta) release: $ sudo snap install edgex-ui --channel=2.1 The latest development version of the edgex-ui snap can be installed using: $ sudo snap install edgex-ui --edge Generate token for entering UI secure mode A JWT access token is required to access the UI securely through the API Gateway. To do so: Generate a public/private keypair $ openssl ecparam -genkey -name prime256v1 -noout -out private.pem $ openssl ec -in private.pem -pubout -out public.pem Configure user and public-key $ sudo snap set edgexfoundry env.security-proxy.user=user01,USER_ID,ES256 $ sudo snap set edgexfoundry env.security-proxy.public-key=\"$(cat public.pem)\" Generate a token $ edgexfoundry.secrets-config proxy jwt --algorithm ES256 \\ --private_key private.pem --id USER_ID --expiration=1h This output is the JWT token for UI login in secure mode. Please keep the token in a safe place for future re-use as the same token cannot be regenerated or recovered from EdgeX's secret-config CLI. The token is required each time you reopen the web page.
Using the edgex-ui snap Open your browser http://localhost:4000 Please log in to EdgeX with the JWT token we generated above. For more details please refer to edgex-ui Snap Native If you are running EdgeX natively (outside of Docker Compose or a Snap), you will find instructions on how to build and run the GUI on your platform in the GUI repository README General GUI Address Once the GUI is up and running, simply visit port 4000 on the GUI's host machine (ex: http://localhost:4000) to enter the GUI Dashboard (see below). The GUI does not require any login. Menu Bar The left side of the Dashboard holds a menu bar that allows you access to the GUI functionality. The \"hamburger\" icon on the menu bar allows you to shrink or expand the menu bar to icons vs icons and menu bar labels. Mobile Device Ready The EdgeX GUI can be used/displayed on a mobile device via the mobile device's browser if the GUI address is accessible to the device. The display may be skewed in order to fit the device screen. For example, the Dashboard menu will often change to icons over the expanded labeled menu bar when shown on a mobile device. Capability The GUI allows you to manage (add, remove, update) most of the EdgeX objects to include devices, device profiles, device services, rules, schedules, notifications, app services, etc. start, stop or restart the EdgeX services explore the memory, CPU and network traffic usage of EdgeX services monitor the data stream (the events and readings) collected by sensors and devices explore the configuration of an EdgeX service Dashboard The Dashboard page (the main page of the GUI) presents you with a set of clickable \"tiles\" that provide a quick view of the status of your EdgeX instance. That is, it provides some quick data points about the EdgeX instance and what the GUI is tracking. 
Specifically, the tiles in the Dashboard show you: the number of device services that it is aware of and their status (locked vs unlocked) the number of devices being managed by EdgeX (through the associated device services) the number of device profiles registered with core metadata the number of schedules (or intervals) EdgeX is managing the number of notifications EdgeX has seen the number of events and readings generated by device services and passing through core data the number of EdgeX micro services currently being monitored through the system management service If for some reason the GUI has an issue or difficulty getting the information it needs to display a tile in the Dashboard when it is displayed, a popup will be displayed over the screen indicating the issue. In the example below, the support scheduling service was down and the GUI Dashboard was unable to access the scheduler service. In this way, the Dashboard provides a quick and easy way to see whether the EdgeX instance is nominal or has underlying issues. You can click on each of the tiles in the Dashboard. Doing so provides more details about each. More precisely, clicking on a tile takes you to another part of the GUI where the details of that item can be found. For example, clicking on the Device Profiles tile takes you to the Metadata page and the Device Profile tab (covered below) System The EdgeX platform is comprised of a set of micro services. The system management service (and associated executors) tracks the micro services status (up or down), metrics of the running service (memory, CPU, network traffic), and configuration influencing the operation of the service. The system management service also provides the ability (through APIs) to start, stop and restart a service. Service information and the ability to call on the start, stop, restart APIs is surfaced through the System page. Warning The system management services are deprecated in EdgeX as of Ireland. 
Their full replacement has not been identified, but adopters should be aware that the service will be replaced in a future release. Please note that the System List display provides access to a static list of EdgeX services. As device services and application services (among other services) may be added or removed based on use case needs (often requiring new custom south and north side services), the GUI is not made aware of these and therefore will not display details on these services. Metrics From the System Service List, you can click on the Metric icon for any service to see the memory, CPU and network traffic telemetry for any service. The refresh rate can be adjusted on the display to have the GUI poll the system management service more or less frequently. Info The metrics are provided via an associated executor feeding the system management agent telemetry data. In the case of Docker, a Docker executor is capturing standard Docker stats and relaying them to the system management agent that in turn makes these available through its APIs to the GUI. Config The configuration of each service is made available for each service by clicking on the Config icon for any service from the System Service List. The configuration is displayed in JSON form and is read only. If running Consul, use the Consul Web UI to make changes to the configuration. Operation From the System Service List, you can request to stop, start or restart any of the listed services with the operation buttons in the far right column. Warning There is no confirmation popup or warning on these requests. When you push a stop, start, restart button, the request is immediately made to the system management service for that operation. The state of the service will change when these operations are invoked. When a service is stopped, the metric and config information for the service will be unavailable.
After starting (or restarting) a service, you may need to hit the Refresh button on the page to get the state and metric/config icons to change. Metadata The Metadata page (available from the Metadata menu option) provides three tabs to be able to see and manage the basic elements of metadata: device services, device profiles and devices. Device Service Tab The Device Service tab displays the device services known to EdgeX (as device services registered in core metadata). Device services cannot be added or removed through the GUI, but information about the existing device services (i.e., port, admin state) and several actions on the existing device services can be accomplished on this tab. First note that for each device service listed, the number of associated devices are depicted. If you click on the Associated Devices button, it will take you to the Device tab to be able to get more information about or work with any of the associated devices. The Settings button on each device service allows you to change the description or the admin state of the device service. Alert Please note that you must hit the Save button after making any changes to the Device Service Settings. If you don't and move away from the page, your changes will be lost. Device Tab The Device Tab on the Metadata page offers you details about all the sensors/devices known to your EdgeX instance. Buttons at the top of the tab allow you to add, remove or edit a device (or collection of devices when deleting and using the selector checkbox in the device list). On the row of each device listed, links take you to the appropriate tabs to see the associated device profile or device service for the device. Icons on the row of each device listed cause editable areas to expand at the bottom of the tab to execute a device command or see/modify the device's AutoEvents. 
The command execution display allows you to select the specific device resource or device command (from the Command Name List ), and execute or try either a GET or SET command (depending on what the associated device profile for the device says is allowed). The response will be displayed in the ResponseRaw area after the try button is pushed. Add Device Wizard The Add button on the Device List tab will take you to the Add Device Wizard . This nice utility will assist you, entry screen by entry screen, in getting a new device setup in EdgeX. Specifically, it has you (in order): select the device service to which the new device will be associated select the device profile to which the new device will be templated or typed after enter general characteristics for the device (name, description, labels, etc.) and set its operating and admin states optionally setup auto events for scheduled data collection enter specific protocol properties for the device (based on known templates the GUI has at its disposal such as REST, MQTT, Modbus, etc.) Once all the information in the Add Device Wizard screens is entered, the Submit button at the end of the wizard causes your new device to be created in core metadata with all appropriate associations. Device Profile Tab The Device Profile Tab on the Metadata page displays the device profiles known to EdgeX and allows you to add new profiles or edit/remove existing profiles. The AssociatedDevice button on each row of the Device Profile List will take you to the Device tab and show you the list of devices currently associated to the device profile. Warning When deleting a profile, the system will popup an error if devices are still associated to the profile. Data Center (Seeing Event/Reading Data) From the Data Center option on the GUI's menu bar you can see the stream of Event/Readings coming from the device services into core data. The event/reading data will be displayed in JSON form.
There are two tabs on the Data Stream page, both with Start and Pause buttons: Event (which allows incoming events to be displayed and the display will include the event's associated readings) Reading (allows incoming readings to be displayed, which will only show the reading and not its associated owning event) Hit the Start button on either tab to see the event or reading data displayed in the stream pane (events are shown in the example below). Push the Pause button to stop the display of event or reading data. Warning In actuality, the event and reading data is pulled from core data via REST call every three (3) seconds - so it is not a live stream display but a poll of data. Furthermore, if EdgeX is set up to have device services send data directly to application services via message bus and core data is not running or if core data is configured to have persistence turned off, there will be no data in core data to pull and so there will be no events or readings to see. Scheduler (Interval/Interval List) Interval and Interval Actions, which help define task management schedules in EdgeX, are managed via the Scheduler page from selecting Scheduler off the menu bar. Again, as with many of the EdgeX GUI pages, there are two tabs on the Scheduler page: Interval List to display, add, edit and delete Intervals Interval Action List to display, add, edit and delete Interval Actions which must be associated to an Interval Interval List When updating or adding an Interval, you must provide a name and an Interval duration string, which takes an unsigned integer plus a unit of measure which must be one of \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\" representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. Optionally provide start/end dates and an indication that the interval runs only once (and thereby ignores the interval). Interval Action List Interval Actions define what happens when the Interval kicks off. 
Interval Actions can define REST, MQTT or Email actions that take place when an Interval timer hits. The GUI provides the means to edit or create any of these actions. Note that an Interval Action must be associated to an already defined Interval. Notifications Notifications are messages from EdgeX to external systems about something that has happened in EdgeX - for example that a new device has been created. Currently, notifications can be sent by email or REST call. The Notification Center page, available from the Notifications menu option, allows you to see new (not processed), processed or escalated (notifications that have failed to be sent within its resend limit) notifications. By default, the new notifications are displayed, but if you click on the Advanced >> link on the page (see below), you can select which type of notifications to display. The Subscriptions tab on the Notification Center page allows you to add, update or remove subscriptions to notifications. Subscribers are registered receivers of notifications - either via email or REST. When adding (or editing) a subscription, you must provide a name, category, label, receiver, and either an email address or REST endpoint. A template is provided to specify either the email or REST endpoint configuration data needed for the subscription. RuleEngine The Rule Engine page, from the RuleEngine menu option, provides the means to define streams and rules for the integrated eKuiper rules engine. Via the Stream tab, streams are defined by JSON. All that is really required is a stream name (EdgeXStream in the example below). The Rules tab allows eKuiper rules to be added, removed or updated/edited as well as started, stopped or restarted. When adding or editing a rule, you must provide a name, the rule SQL and action. 
The action can be one of the following (some requiring extra parameters): send the result to a REST HTTP Server (allowing an EdgeX command to be called) send the result to an MQTT broker send the result to the EdgeX message bus send the result to a log file See the eKuiper documentation for more information on how to define rules. Alert Once a rule is created, it is started by default. Return to the Rules tab on the RulesEngine page to stop a new rule. When creating or editing the rule, if the stream referenced in the rule is not already defined, the GUI will present an error when trying to submit the rule. AppService In the AppService page, you can configure existing configurable application services . The list of available configurable app services is determined by the UI automatically (based on a query for available app services from the registry service). Configurable When the application service is a configurable app service and is known to the GUI, the Configurable button on the App Service List allows you to change the triggers, functions, secrets and other configuration associated to the configurable app service. There are four tabs in the Configurable Setting editor: Trigger which defines how the configurable app service begins execution Pipeline Functions defining which functions are part of the configurable app service pipeline and in which order should they be executed Insecure Secrets - setting up secrets used by the configurable app service when running in non-secure mode (meaning Vault is not used to provide the secrets) Store and Forward which enables and configures the batch store and forward export capability Note When the Trigger is changed, the service must be restarted for the change to take effect. Why Demo and Developer Use Only The GUI is meant as a developer tool or to be used in EdgeX demonstration situations. It is not yet designed for production settings. There are several reasons for this restriction. 
The GUI is not designed to assist you in managing multiple EdgeX instances running in a deployment as would be typical in a production setting. It cannot be dynamically pointed to any running instance of EdgeX on multiple hosts. The GUI knows about a single instance of EdgeX running (by default, the instance that is on the same host as the GUI). The GUI provides no access controls. All functionality is open to anyone that can access the GUI URL. The GUI does not have the Kong token to negotiate through the API Gateway when the GUI is running outside of the Docker network - where the other EdgeX services are running. This would mean that the GUI would not be able to access any of the EdgeX service instance APIs. The EdgeX community is exploring efforts to make the GUI available in secure mode in a future release.","title":"Graphical User Interface (GUI)"},{"location":"getting-started/tools/Ch-GUI/#graphical-user-interface-gui","text":"EdgeX's graphical user interface (GUI) is provided for demonstration and development use to manage and monitor a single instance of EdgeX Foundry.","title":"Graphical User Interface (GUI)"},{"location":"getting-started/tools/Ch-GUI/#setup","text":"You can quickly run the GUI in a Docker container or as a Snap. You can also download, build and run the GUI natively on your host.","title":"Setup"},{"location":"getting-started/tools/Ch-GUI/#docker-compose","text":"The EdgeX GUI is now incorporated into all the secure and non-secure Docker Compose files provided by the project. Locate and download the Docker Compose file that best suits your needs from https://github.com/edgexfoundry/edgex-compose. For example, in the Jakarta branch of edgex-compose the *-with-app-sample* compose files include the Sample App Service allowing the configurable pipeline to be manipulated from the UI. See the four Docker Compose files that include the Sample App Service circled below. Note The GUI can now be used in secure mode as well as non-secure mode. 
See the Getting Started using Docker guide for help on how to find, download and use a Docker Compose file to run EdgeX - in this case with the Sample App Service.","title":"Docker Compose"},{"location":"getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token","text":"When first running the UI in secure mode, you will be prompted to enter a token. Follow the How to get access token? link to view the documentation on how to get an API Gateway access token. Once you enter the token, the UI will have access to the EdgeX services via the API Gateway. Note The UI is no longer restricted to access from localhost . It can now be accessed from any IP address that can access the host system. This is allowed because the UI is secured via API Gateway token when running in secure mode.","title":"Secure mode with API Gateway token"},{"location":"getting-started/tools/Ch-GUI/#snaps","text":"","title":"Snaps"},{"location":"getting-started/tools/Ch-GUI/#installing-edgex-ui-as-a-snap","text":"The latest stable version of the snap can be installed using: $ sudo snap install edgex-ui A specific release of the snap can be installed from a dedicated channel. For example, to install the 2.1 (Jakarta) release: $ sudo snap install edgex-ui --channel=2.1 The latest development version of the edgex-ui snap can be installed using: $ sudo snap install edgex-ui --edge","title":"Installing EdgeX UI as a snap"},{"location":"getting-started/tools/Ch-GUI/#generate-token-for-entering-ui-secure-mode","text":"A JWT access token is required to access the UI securely through the API Gateway. 
To do so: Generate a public/private keypair $ openssl ecparam -genkey -name prime256v1 -noout -out private.pem $ openssl ec -in private.pem -pubout -out public.pem Configure user and public-key $ sudo snap set edgexfoundry env.security-proxy.user=user01,USER_ID,ES256 $ sudo snap set edgexfoundry env.security-proxy.public-key=\"$(cat public.pem)\" Generate a token $ edgexfoundry.secrets-config proxy jwt --algorithm ES256 \\ --private_key private.pem --id USER_ID --expiration=1h This output is the JWT token for UI login in secure mode. Please keep the token in a safe place for future re-use as the same token cannot be regenerated or recovered from EdgeX's secret-config CLI. The token is required each time you reopen the web page.","title":"Generate token for entering UI secure mode"},{"location":"getting-started/tools/Ch-GUI/#using-the-edgex-ui-snap","text":"Open your browser http://localhost:4000 Please log in to EdgeX with the JWT token we generated above. For more details please refer to edgex-ui Snap","title":"Using the edgex-ui snap"},{"location":"getting-started/tools/Ch-GUI/#native","text":"If you are running EdgeX natively (outside of Docker Compose or a Snap), you will find instructions on how to build and run the GUI on your platform in the GUI repository README","title":"Native"},{"location":"getting-started/tools/Ch-GUI/#general","text":"","title":"General"},{"location":"getting-started/tools/Ch-GUI/#gui-address","text":"Once the GUI is up and running, simply visit port 4000 on the GUI's host machine (ex: http://localhost:4000) to enter the GUI Dashboard (see below). The GUI does not require any login.","title":"GUI Address"},{"location":"getting-started/tools/Ch-GUI/#menu-bar","text":"The left side of the Dashboard holds a menu bar that allows you access to the GUI functionality. 
The \"hamburger\" icon on the menu bar allows you to shrink or expand the menu bar to icons vs icons and menu bar labels.","title":"Menu Bar"},{"location":"getting-started/tools/Ch-GUI/#mobile-device-ready","text":"The EdgeX GUI can be used/displayed on a mobile device via the mobile device's browser if the GUI address is accessible to the device. The display may be skewed in order to fit the device screen. For example, the Dashboard menu will often change to icons over the expanded labeled menu bar when shown on a mobile device.","title":"Mobile Device Ready"},{"location":"getting-started/tools/Ch-GUI/#capability","text":"The GUI allows you to manage (add, remove, update) most of the EdgeX objects to include devices, device profiles, device services, rules, schedules, notifications, app services, etc. start, stop or restart the EdgeX services explore the memory, CPU and network traffic usage of EdgeX services monitor the data stream (the events and readings) collected by sensors and devices explore the configuration of an EdgeX service","title":"Capability"},{"location":"getting-started/tools/Ch-GUI/#dashboard","text":"The Dashboard page (the main page of the GUI) presents you with a set of clickable \"tiles\" that provide a quick view of the status of your EdgeX instance. That is, it provides some quick data points about the EdgeX instance and what the GUI is tracking. 
Specifically, the tiles in the Dashboard show you: the number of device services that it is aware of and their status (locked vs unlocked) the number of devices being managed by EdgeX (through the associated device services) the number of device profiles registered with core metadata the number of schedules (or intervals) EdgeX is managing the number of notifications EdgeX has seen the number of events and readings generated by device services and passing through core data the number of EdgeX micro services currently being monitored through the system management service If for some reason the GUI has an issue or difficulty getting the information it needs to display a tile in the Dashboard when it is displayed, a popup will be displayed over the screen indicating the issue. In the example below, the support scheduling service was down and the GUI Dashboard was unable to access the scheduler service. In this way, the Dashboard provides a quick and easy way to see whether the EdgeX instance is nominal or has underlying issues. You can click on each of the tiles in the Dashboard. Doing so provides more details about each. More precisely, clicking on a tile takes you to another part of the GUI where the details of that item can be found. For example, clicking on the Device Profiles tile takes you to the Metadata page and the Device Profile tab (covered below)","title":"Dashboard"},{"location":"getting-started/tools/Ch-GUI/#system","text":"The EdgeX platform is comprised of a set of micro services. The system management service (and associated executors) tracks the micro services status (up or down), metrics of the running service (memory, CPU, network traffic), and configuration influencing the operation of the service. The system management service also provides the ability (through APIs) to start, stop and restart a service. Service information and the ability to call on the start, stop, restart APIs is surfaced through the System page. 
Warning The system management services are deprecated in EdgeX as of Ireland. Their full replacement has not been identified, but adopters should be aware that the service will be replaced in a future release. Please note that the System List display provides access to a static list of EdgeX services. As device services and application services (among other services) may be added or removed based on use case needs (often requiring new custom south and north side services), the GUI is not made aware of these and therefore will not display details on these services.","title":"System"},{"location":"getting-started/tools/Ch-GUI/#metrics","text":"From the System Service List, you can click on the Metric icon for any service to see the memory, CPU and network traffic telemetry for any service. The refresh rate can be adjusted on the display to have the GUI poll the system management service more or less frequently. Info The metrics are provided via an associated executor feeding the system management agent telemetry data. In the case of Docker, a Docker executor is capturing standard Docker stats and relaying them to the system management agent that in turn makes these available through its APIs to the GUI.","title":"Metrics"},{"location":"getting-started/tools/Ch-GUI/#config","text":"The configuration of each service is made available for each service by clicking on the Config icon for any service from the System Service List. The configuration is displayed in JSON form and is read only. If running Consul, use the Consul Web UI to make changes to the configuration.","title":"Config"},{"location":"getting-started/tools/Ch-GUI/#operation","text":"From the System Service List, you can request to stop, start or restart any of the listed services with the operation buttons in the far right column. Warning There is no confirmation popup or warning on these requests. 
When you push a stop, start, restart button, the request is immediately made to the system management service for that operation. The state of the service will change when these operations are invoked. When a service is stopped, the metric and config information for the service will be unavailable. After starting (or restarting) a service, you may need to hit the Refresh button on the page to get the state and metric/config icons to change.","title":"Operation"},{"location":"getting-started/tools/Ch-GUI/#metadata","text":"The Metadata page (available from the Metadata menu option) provides three tabs to be able to see and manage the basic elements of metadata: device services, device profiles and devices.","title":"Metadata"},{"location":"getting-started/tools/Ch-GUI/#device-service-tab","text":"The Device Service tab displays the device services known to EdgeX (as device services registered in core metadata). Device services cannot be added or removed through the GUI, but information about the existing device services (i.e., port, admin state) and several actions on the existing device services can be accomplished on this tab. First note that for each device service listed, the number of associated devices are depicted. If you click on the Associated Devices button, it will take you to the Device tab to be able to get more information about or work with any of the associated devices. The Settings button on each device service allows you to change the description or the admin state of the device service. Alert Please note that you must hit the Save button after making any changes to the Device Service Settings. If you don't and move away from the page, your changes will be lost.","title":"Device Service Tab"},{"location":"getting-started/tools/Ch-GUI/#device-tab","text":"The Device Tab on the Metadata page offers you details about all the sensors/devices known to your EdgeX instance. 
Buttons at the top of the tab allow you to add, remove or edit a device (or collection of devices when deleting and using the selector checkbox in the device list). On the row of each device listed, links take you to the appropriate tabs to see the associated device profile or device service for the device. Icons on the row of each device listed cause editable areas to expand at the bottom of the tab to execute a device command or see/modify the device's AutoEvents. The command execution display allows you to select the specific device resource or device command (from the Command Name List ), and execute or try either a GET or SET command (depending on what the associated device profile for the device says is allowed). The response will be displayed in the ResponseRaw area after the try button is pushed.","title":"Device Tab"},{"location":"getting-started/tools/Ch-GUI/#add-device-wizard","text":"The Add button on the Device List tab will take you to the Add Device Wizard . This nice utility will assist you, entry screen by entry screen, in getting a new device setup in EdgeX. Specifically, it has you (in order): select the device service to which the new device will be associated select the device profile to which the new device will be templated or typed after enter general characteristics for the device (name, description, labels, etc.) and set its operating and admin states optionally setup auto events for scheduled data collection enter specific protocol properties for the device (based on known templates the GUI has at its disposal such as REST, MQTT, Modbus, etc.) 
Once all the information in the Add Device Wizard screens is entered, the Submit button at the end of the wizard causes your new device to be created in core metadata with all appropriate associations.","title":"Add Device Wizard"},{"location":"getting-started/tools/Ch-GUI/#device-profile-tab","text":"The Device Profile Tab on the Metadata page displays the device profiles known to EdgeX and allows you to add new profiles or edit/remove existing profiles. The AssociatedDevice button on each row of the Device Profile List will take you to the Device tab and show you the list of devices currently associated to the device profile. Warning When deleting a profile, the system will pop up an error if devices are still associated to the profile.","title":"Device Profile Tab"},{"location":"getting-started/tools/Ch-GUI/#data-center-seeing-eventreading-data","text":"From the Data Center option on the GUI's menu bar you can see the stream of Event/Readings coming from the device services into core data. The event/reading data will be displayed in JSON form. There are two tabs on the Data Stream page, both with Start and Pause buttons: Event (which allows incoming events to be displayed and the display will include the event's associated readings) Reading (allows incoming readings to be displayed, which will only show the reading and not its associated owning event) Hit the Start button on either tab to see the event or reading data displayed in the stream pane (events are shown in the example below). Push the Pause button to stop the display of event or reading data. Warning In actuality, the event and reading data is pulled from core data via REST call every three (3) seconds - so it is not a live stream display but a poll of data. 
Furthermore, if EdgeX is set up to have device services send data directly to application services via message bus and core data is not running or if core data is configured to have persistence turned off, there will be no data in core data to pull and so there will be no events or readings to see.","title":"Data Center (Seeing Event/Reading Data)"},{"location":"getting-started/tools/Ch-GUI/#scheduler-intervalinterval-list","text":"Interval and Interval Actions, which help define task management schedules in EdgeX, are managed via the Scheduler page from selecting Scheduler off the menu bar. Again, as with many of the EdgeX GUI pages, there are two tabs on the Scheduler page: Interval List to display, add, edit and delete Intervals Interval Action List to display, add, edit and delete Interval Actions which must be associated to an Interval","title":"Scheduler (Interval/Interval List)"},{"location":"getting-started/tools/Ch-GUI/#interval-list","text":"When updating or adding an Interval, you must provide a name and an Interval duration string, which takes an unsigned integer plus a unit of measure which must be one of \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\" representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. Optionally provide start/end dates and an indication that the interval runs only once (and thereby ignores the interval).","title":"Interval List"},{"location":"getting-started/tools/Ch-GUI/#interval-action-list","text":"Interval Actions define what happens when the Interval kicks off. Interval Actions can define REST, MQTT or Email actions that take place when an Interval timer hits. The GUI provides the means to edit or create any of these actions. 
Note that an Interval Action must be associated to an already defined Interval.","title":"Interval Action List"},{"location":"getting-started/tools/Ch-GUI/#notifications","text":"Notifications are messages from EdgeX to external systems about something that has happened in EdgeX - for example that a new device has been created. Currently, notifications can be sent by email or REST call. The Notification Center page, available from the Notifications menu option, allows you to see new (not processed), processed or escalated (notifications that have failed to be sent within its resend limit) notifications. By default, the new notifications are displayed, but if you click on the Advanced >> link on the page (see below), you can select which type of notifications to display. The Subscriptions tab on the Notification Center page allows you to add, update or remove subscriptions to notifications. Subscribers are registered receivers of notifications - either via email or REST. When adding (or editing) a subscription, you must provide a name, category, label, receiver, and either an email address or REST endpoint. A template is provided to specify either the email or REST endpoint configuration data needed for the subscription.","title":"Notifications"},{"location":"getting-started/tools/Ch-GUI/#ruleengine","text":"The Rule Engine page, from the RuleEngine menu option, provides the means to define streams and rules for the integrated eKuiper rules engine. Via the Stream tab, streams are defined by JSON. All that is really required is a stream name (EdgeXStream in the example below). The Rules tab allows eKuiper rules to be added, removed or updated/edited as well as started, stopped or restarted. When adding or editing a rule, you must provide a name, the rule SQL and action. 
The action can be one of the following (some requiring extra parameters): send the result to a REST HTTP Server (allowing an EdgeX command to be called) send the result to an MQTT broker send the result to the EdgeX message bus send the result to a log file See the eKuiper documentation for more information on how to define rules. Alert Once a rule is created, it is started by default. Return to the Rules tab on the RulesEngine page to stop a new rule. When creating or editing the rule, if the stream referenced in the rule is not already defined, the GUI will present an error when trying to submit the rule.","title":"RuleEngine"},{"location":"getting-started/tools/Ch-GUI/#appservice","text":"In the AppService page, you can configure existing configurable application services . The list of available configurable app services is determined by the UI automatically (based on a query for available app services from the registry service).","title":"AppService"},{"location":"getting-started/tools/Ch-GUI/#configurable","text":"When the application service is a configurable app service and is known to the GUI, the Configurable button on the App Service List allows you to change the triggers, functions, secrets and other configuration associated to the configurable app service. 
There are four tabs in the Configurable Setting editor: Trigger which defines how the configurable app service begins execution Pipeline Functions defining which functions are part of the configurable app service pipeline and in which order should they be executed Insecure Secrets - setting up secrets used by the configurable app service when running in non-secure mode (meaning Vault is not used to provide the secrets) Store and Forward which enables and configures the batch store and forward export capability Note When the Trigger is changed, the service must be restarted for the change to take effect.","title":"Configurable"},{"location":"getting-started/tools/Ch-GUI/#why-demo-and-developer-use-only","text":"The GUI is meant as a developer tool or to be used in EdgeX demonstration situations. It is not yet designed for production settings. There are several reasons for this restriction. The GUI is not designed to assist you in managing multiple EdgeX instances running in a deployment as would be typical in a production setting. It cannot be dynamically pointed to any running instance of EdgeX on multiple hosts. The GUI knows about a single instance of EdgeX running (by default, the instance that is on the same host as the GUI). The GUI provides no access controls. All functionality is open to anyone that can access the GUI URL. The GUI does not have the Kong token to negotiate through the API Gateway when the GUI is running outside of the Docker network - where the other EdgeX services are running. This would mean that the GUI would not be able to access any of the EdgeX service instance APIs. The EdgeX community is exploring efforts to make the GUI available in secure mode in a future release.","title":"Why Demo and Developer Use Only"},{"location":"microservices/application/AdvancedTopics/","text":"Advanced Topics The following items discuss topics that are a bit beyond the basic use cases of the Application Functions SDK when interacting with EdgeX. 
Configurable Functions Pipeline This SDK provides the capability to define the functions pipeline via configuration rather than code by using the app-service-configurable application service. See the App Service Configurable section for more details. Custom REST Endpoints It is not uncommon to require your own custom REST endpoints when building an Application Service. Rather than spin up your own webserver inside of your app (alongside the already existing running webserver), we've exposed a method that allows you to add your own routes to the existing webserver. A few routes are reserved and cannot be used: /api/v2/version /api/v2/ping /api/v2/metrics /api/v2/config /api/v2/trigger /api/v2/secret To add your own route, use the AddRoute() API provided on the ApplicationService interface. Example - Add Custom REST route myHandler := func ( writer http . ResponseWriter , req * http . Request ) { service := req . Context (). Value ( interfaces . AppServiceContextKey ).( interfaces . ApplicationService ) service . LoggingClient (). Info ( \"TEST\" ) writer . Header (). Set ( \"Content-Type\" , \"text/plain\" ) writer . WriteHeader ( 200 ) writer . Write ([] byte ( \"hello\" )) } service := pkg . NewAppService ( serviceKey ) service . AddRoute ( \"/myroute\" , myHandler , \"GET\" ) Under the hood, this simply adds the provided route, handler, and method to the gorilla mux.Router used in the SDK. For more information on gorilla mux you can check out the github repo here . You can access the interfaces.ApplicationService API for resources such as the logging client by pulling it from the context as shown above -- this is useful for when your routes might not be defined in your main.go where you have access to the interfaces.ApplicationService instance. Target Type The target type is the object type of the incoming data that is sent to the first function in the function pipeline. 
By default this is an EdgeX dtos.Event since typical usage is receiving Events from the EdgeX MessageBus. There are scenarios where the incoming data is not an EdgeX Event . One example scenario is two application services are chained via the EdgeX MessageBus. The output of the first service is inference data from analyzing the original Event data, and published back to the EdgeX MessageBus. The second service needs to be able to let the SDK know the target type of the input data it is expecting. For usages where the incoming data is not events , the TargetType of the expected incoming data can be set when the ApplicationService instance is created using the NewAppServiceWithTargetType() factory function. Example - Set and use custom Target Type type Person struct { FirstName string `json:\"first_name\"` LastName string `json:\"last_name\"` } service := pkg . NewAppServiceWithTargetType ( serviceKey , & Person {}) TargetType must be set to a pointer to an instance of your target type such as &Person{} . The first function in your function pipeline will be passed an instance of your target type, not a pointer to it. In the example above, the first function in the pipeline would start something like: func MyPersonFunction ( ctx interfaces . AppFunctionContext , data interface {}) ( bool , interface {}) { ctx . LoggingClient (). Debug ( \"MyPersonFunction executing\" ) if data == nil { return false , errors . New ( \"no data received to MyPersonFunction\" ) } person , ok := data .( Person ) if ! ok { return false , errors . New ( \"MyPersonFunction type received is not a Person\" ) } // .... The SDK supports un-marshaling JSON or CBOR encoded data into an instance of the target type. If your incoming data is not JSON or CBOR encoded, you then need to set the TargetType to &[]byte . If the target type is set to &[]byte the incoming data will not be un-marshaled. 
The content type, if set, will be set on the interfaces.AppFunctionContext and can be access via the InputContentType() API. Your first function will be responsible for decoding the data or not. Command Line Options See the Common Command Line Options for the set of command line options common to all EdgeX services. The following command line options are specific to Application Services. Skip Version Check -s/--skipVersionCheck Indicates the service should skip the Core Service's version compatibility check. Service Key -sk/--serviceKey Sets the service key that is used with Registry, Configuration Provider and security services. The default service key is set by the application service. If the name provided contains the placeholder text