From 14607f613566a13f32f2efdf7b0e817daf81f218 Mon Sep 17 00:00:00 2001 From: Phillip LeBlanc Date: Fri, 23 Feb 2024 10:32:09 +0900 Subject: [PATCH] Update dataset spicepod reference (#102) * Update dataset spicepod reference * Update datasets.md --- spiceaidocs/content/en/concepts/_index.md | 32 ----- .../content/en/concepts/rewards/_index.md | 72 ---------- .../content/en/concepts/rewards/external.md | 71 ---------- .../content/en/concepts/time/_index.md | 2 - .../content/en/reference/Spicepod/_index.md | 14 +- .../content/en/reference/Spicepod/datasets.md | 125 ++++++++++++++++++ .../reference/Spicepod/quickstarts-trader.md | 109 --------------- .../en/reference/Spicepod/samples-gardener.md | 75 ----------- .../reference/Spicepod/samples-serverops.md | 99 -------------- spiceaidocs/content/en/training/monitoring.md | 69 ---------- 10 files changed, 138 insertions(+), 530 deletions(-) delete mode 100644 spiceaidocs/content/en/concepts/rewards/_index.md delete mode 100644 spiceaidocs/content/en/concepts/rewards/external.md create mode 100644 spiceaidocs/content/en/reference/Spicepod/datasets.md delete mode 100644 spiceaidocs/content/en/reference/Spicepod/quickstarts-trader.md delete mode 100644 spiceaidocs/content/en/reference/Spicepod/samples-gardener.md delete mode 100644 spiceaidocs/content/en/reference/Spicepod/samples-serverops.md delete mode 100644 spiceaidocs/content/en/training/monitoring.md diff --git a/spiceaidocs/content/en/concepts/_index.md b/spiceaidocs/content/en/concepts/_index.md index cf876a62..7525d20c 100644 --- a/spiceaidocs/content/en/concepts/_index.md +++ b/spiceaidocs/content/en/concepts/_index.md @@ -25,35 +25,3 @@ A `Pod` is a package of configuration and data used to train and deploy Spice.ai A `Pod manifest` is a YAML file that describes how to connect data with a learning environment. A Pod is constructed from the following components: - -### Dataspace - -A [dataspace]({{}}) is a specification on how the Spice.ai runtime and AI engine loads, processes and interacts with data from a single source. A dataspace may contain a single data connector and data processor. There may be multiple dataspace definitions within a pod. The fields specified in the union of dataspaces are used as inputs to the neural networks that Spice.ai trains. - -A dataspace that doesn't contain a data connector/processor means that the observation data for this dataspace will be provided by calling [POST /pods/{pod}/observations]({{}}). - -### Data Connector - -A [data connector]({{}}) is a reuseable component that contains logic to fetch or ingest data from an external source. Spice.ai provides a general interface that anyone can implement to create a data connector, see the [data-components-contrib](https://github.com/spiceai/data-components-contrib/tree/trunk/dataconnectors) repo for more information. - -### Data Processor - -A [data processor]({{}}) is a reusable component, composable with a data connector that contains logic to process raw connector data into [observations]({{}}) and state Spice.ai can use. - -Spice.ai provides a general interface that anyone can implement to create a data processor, see the [data-components-contrib](https://github.com/spiceai/data-components-contrib/tree/trunk/dataprocessors) repo for more information. - -### Actions - -[Actions]({{}}) are the set of actions the Spice.ai runtime can recommend for a pod. 
- -### Recommendations - -To intelligently adapt its behavior, an application should query the Spice.ai runtime for which [action]({{}}) it recommends to take given a specified time. The result of this query is a [recommendation]({{}}). - -If a time is not specified, the resulting recommendation query time will default to the time of the most recently ingested observation. - -### Training Rewards - -[Training Rewards]({{}}) are code definitions in Python that tell the Spice.ai AI Engine how to train the neural networks to achieve the desired goal. A reward is defined for each action specified in the pod. - -In the future we will expand the languages we support for writing the reward functions in. [Let us know](mailto:hey@spiceai.io) which language you want to be able to write your reward functions in! diff --git a/spiceaidocs/content/en/concepts/rewards/_index.md b/spiceaidocs/content/en/concepts/rewards/_index.md deleted file mode 100644 index 98048142..00000000 --- a/spiceaidocs/content/en/concepts/rewards/_index.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -type: docs -title: "Rewards" -linkTitle: "Rewards" -weight: 15 -description: "Documentation for authoring Spice.ai rewards" ---- - -The Spice.ai engine learns and provides recommendations to your application using a type of AI called deep reinforcement learning. To learn more, see [Deep Learning AI]({{}}). - -A fundamental concept in deep reinforcement learning is to reward actions a learning agent takes during a training run. These rewards are numerical values and can be negative or positive. - -In Spice.ai, developers define the rewards the AI engine uses in training runs through reward function definitions. Reward functions are Python functions (with more languages supported in the future) and can be authored either inline in the Spicepod manifest YAML or a separate Python `.py` file. - -To see how to define reward functions using an external file, click [here]({{}}). - -## Rewards in YAML - -To define the reward functions in the YAML directly, put the Python code fragment in the `with` node. - -The reward function must assign a value to `reward` for it to be valid. - -The following variables are available to be used in the reward function: - -| variable | Type | Description | -| ------------- | ---------------------------------------------------------------------- | --------------------------------------------------------------------- | -| current_state | [dict](https://docs.python.org/3.8/library/stdtypes.html#typesmapping) | The observation state when the action was taken | -| next_state | [dict](https://docs.python.org/3.8/library/stdtypes.html#typesmapping) | The observation state one granularity step after the action was taken | - -### Example - -See the full example manifest [here]({{}}). 
- -```yaml -training: - rewards: - - reward: close_valve - # Reward keeping moisture content above 25% - with: | - if next_state["sensors_garden_moisture"] > 0.25: - reward = 200 - - # Penalize low moisture content depending on how far the garden has dried out - else: - reward = -100 * (0.25 - next_state["sensors_garden_moisture"]) - - # Penalize especially heavily if the drying trend is continuing (next_state is drier than current_state) - if next_state["sensors_garden_moisture"] < current_state["sensors_garden_moisture"]: - reward = reward * 2 - - - reward: open_valve_half - # Reward watering when needed, more heavily if the garden is more dried out - with: | - if next_state["sensors_garden_moisture"] < 0.25: - reward = 100 * (0.25 - next_state["sensors_garden_moisture"]) - - # Penalize wasting water - # Penalize overwatering depending on how overwatered the garden is - else: - reward = -50 * (next_state["sensors_garden_moisture"] - 0.25) - - - reward: open_valve_full - # Reward watering when needed, more heavily if the garden is more dried out - with: | - if next_state["sensors_garden_moisture"] < 0.25: - reward = 200 * (0.25 - next_state["sensors_garden_moisture") - - # Penalize wasting water more heavily with valve fully open - # Penalize overwatering depending on how overwatered the garden is - else: - reward = -100 * (next_state["sensors_garden_moisture"] - 0.25) -``` diff --git a/spiceaidocs/content/en/concepts/rewards/external.md b/spiceaidocs/content/en/concepts/rewards/external.md deleted file mode 100644 index d1718f93..00000000 --- a/spiceaidocs/content/en/concepts/rewards/external.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -type: docs -title: "Reward Function Files" -linkTitle: "Reward Function Files" -weight: 15 ---- - -Reward functions may be defined in a single Python (.py) file. - -This file may be authored in standard Python3.8+ code and the file may define global functions and import packages. - -The packages that can be imported are limited to what is [imported by the AI Engine](https://github.com/spiceai/spiceai/blob/trunk/ai/src/requirements/common.txt). - -## Action Reward - -For each action defined in the Spicepod manifest, a corresponding function (i.e. action reward) should be defined in the Python file. The mapping of action to function name is specified in the manifest using the `with` node. - -Each reward function should match the following function signature, with the function name matching that defined in the Spicepod manifest. - -```python -def reward_for_action(current_state: dict, current_state_interpretations: list, next_state: dict, next_state_interpretations: list) -> float: - """ - Returns a reward given the action and observation space - - Args: - action_name: The name of the action to generate a reward for - current_state: Value of the observation state when the action was recommended - current_state_interpretations: Array of interpretations for current_state - next_state: Value of the observation state that immediately follows current_state - next_state_interpretations: Array of interpretations for next_state - - Note: As interactive environments are not fully supported, it may not make sense to use - next_state when calculating the reward - - Returns: - (float): The reward that the agent should receive for taking this action. - """ -``` - -Learn more about interpretations [here]({{}}). 
- -### Example - -For the following manifest: - -```yaml -training: - reward_funcs: my_reward.py - rewards: - - reward: buy - with: buy_reward - - reward: sell - with: sell_reward - - reward: hold - with: hold_reward -``` - -Author a Python file with the following content to define the reward functions: - -`my_reward.py` - -```python -def buy_reward(current_state: dict, current_state_interps, next_state: dict, next_state_interps) -> float: - return complex_calculation(current_state) - -def sell_reward(current_state: dict, current_state_interps, next_state: dict, next_state_interps) -> float: - return current_state["price"] - next_state["price"] - -def hold_reward(current_state: dict, current_state_interps, next_state: dict, next_state_interps) -> float: - return 1 -``` diff --git a/spiceaidocs/content/en/concepts/time/_index.md b/spiceaidocs/content/en/concepts/time/_index.md index 5a4c1d00..b2e921ca 100644 --- a/spiceaidocs/content/en/concepts/time/_index.md +++ b/spiceaidocs/content/en/concepts/time/_index.md @@ -39,8 +39,6 @@ params: If not provided in the manifest, Spicepods will default to a period of **3 days**, intervals of **1 min**, and granularity of **10 seconds**. The period epoch will default to a dynamic epoch of the current time minus the period. In this mode, the period becomes a sliding window over time. -See reference documentation for [Spicepod params]({{}}). - ### Period The `period` defines the entire timespan the Spicepod will use for learning and decision-making. diff --git a/spiceaidocs/content/en/reference/Spicepod/_index.md b/spiceaidocs/content/en/reference/Spicepod/_index.md index 6f10d459..76405bfb 100644 --- a/spiceaidocs/content/en/reference/Spicepod/_index.md +++ b/spiceaidocs/content/en/reference/Spicepod/_index.md @@ -41,7 +41,7 @@ metadata: ## `datasets` -A Spicepod can contain one or more [datasets](https://docs.spice.ai/reference/specifications/dataset-and-view-yaml-specification) referenced by relative path. +A Spicepod can contain one or more [datasets]({{}}) referenced by relative path. **Example** @@ -60,6 +60,18 @@ datasets: dependsOn: datasets/uniswap_eth_usdc ``` +A dataset defined inline. + +```yaml +datasets: + - name: spiceai.uniswap_v2_eth_usdc + type: overwrite + source: spice.ai + acceleration: + enabled: true + refresh: 1h +``` + ## `functions` A Spicepod can contain one or more [functions](https://docs.spice.ai/reference/specifications/spice-functions-yaml-specification) referenced by relative path. diff --git a/spiceaidocs/content/en/reference/Spicepod/datasets.md b/spiceaidocs/content/en/reference/Spicepod/datasets.md new file mode 100644 index 00000000..50da67cc --- /dev/null +++ b/spiceaidocs/content/en/reference/Spicepod/datasets.md @@ -0,0 +1,125 @@ +--- +type: docs +title: "Datasets" +linkTitle: "Datasets" +description: 'Datasets YAML reference' +weight: 80 +--- + +A Spicepod can contain one or more datasets referenced by relative path, or defined inline. 
# `datasets`

Inline example:

`spicepod.yaml`
```yaml
datasets:
  - from: spice.ai/eth/beacon/eigenlayer
    name: strategy_manager_deposits
    params:
      app: goerli-app
    acceleration:
      enabled: true
      mode: inmemory # / file
      engine: arrow # / duckdb
      refresh_interval: 1h
      refresh_mode: full / append # update / incremental
      retention: 30m
```

`spicepod.yaml`
```yaml
datasets:
  - from: databricks.com/spiceai/datasets
    name: uniswap_eth_usd
    params:
      environment: prod
    acceleration:
      enabled: true
      mode: inmemory # / file
      engine: arrow # / duckdb
      refresh_interval: 1h
      refresh_mode: full / append # update / incremental
      retention: 30m
```

`spicepod.yaml`
```yaml
datasets:
  - from: local/Users/phillip/data/test.parquet
    name: test
    acceleration:
      enabled: true
      mode: inmemory # / file
      engine: arrow # / duckdb
      refresh_interval: 1h
      refresh_mode: full / append # update / incremental
      retention: 30m
```

Relative path example:

`spicepod.yaml`
```yaml
datasets:
  - from: datasets/uniswap_v2_eth_usdc
```

`datasets/uniswap_v2_eth_usdc/dataset.yaml`
```yaml
name: spiceai.uniswap_v2_eth_usdc
type: overwrite
source: spice.ai
auth: spice.ai
acceleration:
  enabled: true
  refresh: 1h
```

## `name`

The name of the dataset. This is used to reference the dataset in the pod manifest, as well as in external data sources.

## `type`

The type of dataset. The following types are supported:

- `overwrite` - Overwrites the dataset with the contents of the dataset source.
- `append` - Appends new data from the dataset source to the dataset.

## `source`

The source of the dataset. The following sources are supported:

- `spice.ai`
- `dremio` (coming soon)
- `databricks` (coming soon)

## `auth`

Optional. The authentication profile to use to connect to the dataset source. Use `spice login` to create a new authentication profile.

If not specified, the default profile for the data source is used.

## `acceleration`

Optional. Accelerates queries against the dataset by caching data locally.

## `acceleration.enabled`

Optional. Enable or disable acceleration.

## `acceleration.refresh`

Optional. The interval at which to refresh the dataset's data when the dataset type is `overwrite`. Specified as a [duration literal]({{}}), e.g. `1h` for 1 hour, `1m` for 1 minute, or `1s` for 1 second.

For `append` datasets, the refresh interval is not used.

## `acceleration.retention`

Optional. Only supported for `append` datasets. Specifies how long to retain data updates from the data source before they are deleted. Specified as a [duration literal]({{}}).

If not specified, the default is to retain all data.
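For example, an `append` dataset that retains one week of locally accelerated data could be declared as follows. This is a sketch combining the fields documented above; the dataset name, source, and retention value are illustrative only.

`datasets/eth_recent_blocks/dataset.yaml`
```yaml
# Hypothetical append dataset: new records are appended rather than overwritten.
name: spiceai.eth_recent_blocks
type: append
source: spice.ai
acceleration:
  enabled: true
  retention: 168h # keep 7 days of appended data locally; omit to retain all data
  # `refresh` is omitted: the refresh interval is not used for append datasets
```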
diff --git a/spiceaidocs/content/en/reference/Spicepod/quickstarts-trader.md b/spiceaidocs/content/en/reference/Spicepod/quickstarts-trader.md deleted file mode 100644 index 60e75ff7..00000000 --- a/spiceaidocs/content/en/reference/Spicepod/quickstarts-trader.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -type: docs -title: 'Trader' -linkTitle: 'Example - Trader' -weight: 70 ---- - -From: https://github.com/spiceai/quickstarts/tree/trunk/trader - -```yaml -version: v1beta1 -kind: Spicepod -name: trader -params: - epoch_time: 1605312000 - granularity: 30m - interval: 6h - period: 72h - episodes: 10 -dataspaces: - - from: coinbase - name: btcusd - measurements: - - name: close - data: - connector: - name: file - params: - path: spicepods/data/btcusd.csv - processor: - name: csv - - from: local - name: portfolio - measurements: - - name: usd_balance - initializer: 0 # update with the starting balance to train with - - name: btc_balance - initializer: 0 # update with the starting balance to train with - actions: - small_buy: | - usd_balance -= args.price - btc_balance += 1 - large_buy: | - usd_balance -= args.price - btc_balance += 10 - sell: | - usd_balance += args.price - btc_balance -= 1 - laws: - - usd_balance >= 0 - - btc_balance >= 0 - -actions: - - name: small_buy - do: - name: local.portfolio.small_buy - args: - price: coinbase.btcusd.close - - - name: large_buy - do: - name: local.portfolio.large_buy - args: - price: coinbase.btcusd.close - - - name: sell - do: - name: local.portfolio.sell - args: - price: coinbase.btcusd.close - - - name: hold - -training: - # Compute price change between previous state and this one - # so it can be used in all three reward functions - reward_init: | - prev_price = current_state["coinbase_btcusd_close"] - new_price = next_state["coinbase_btcusd_close"] - change_in_price = new_price - prev_price - - rewards: - - reward: small_buy - # Reward buying when the price decreases - # Penalize buying when the price increases - with: | - reward = -change_in_price - - - reward: large_buy - # Reward buying when the price decreases - # Penalize buying when the price increases - with: | - reward = -10 * change_in_price - - - reward: sell - # Reward selling when the price increases - # Penalize selling when the price decreases - with: | - reward = change_in_price - - - reward: hold - # Penalize holding slightly to incentivize more frequent trading - # Holding during large price movements will be penalized more harshly - with: | - if change_in_price > 0: - reward = -0.1 - else: - reward = 0.1 -``` diff --git a/spiceaidocs/content/en/reference/Spicepod/samples-gardener.md b/spiceaidocs/content/en/reference/Spicepod/samples-gardener.md deleted file mode 100644 index ca8067ef..00000000 --- a/spiceaidocs/content/en/reference/Spicepod/samples-gardener.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -type: docs -title: 'Gardener' -linkTitle: 'Example - Gardener' -weight: 60 ---- - -From: https://github.com/spiceai/samples/blob/trunk/gardener/README.md - -```yaml -version: v1beta1 -kind: Spicepod -name: gardener -params: - epoch_time: 1612557000 - granularity: 10m - interval: 1h - period: 720h -dataspaces: - - from: sensors - name: garden - measurements: - - name: temperature - - name: moisture - data: - connector: - name: file - params: - path: data/garden_data.csv - processor: - name: csv - -actions: - - name: close_valve - - name: open_valve_half - - name: open_valve_full - -training: - rewards: - - reward: close_valve - # Reward keeping moisture content above 25% - with: | - if 
next_state["sensors_garden_moisture"] > 0.25: - reward = 200 - - # Penalize low moisture content depending on how far the garden has dried out - else: - reward = -100 * (0.25 - next_state["sensors_garden_moisture"]) - - # Penalize especially heavily if the drying trend is continuing (next_state is drier than current_state) - if next_state["sensors_garden_moisture"] < current_state["sensors_garden_moisture"]: - reward = reward * 2 - - - reward: open_valve_half - # Reward watering when needed, more heavily if the garden is more dried out - with: | - if next_state["sensors_garden_moisture"] < 0.25: - reward = 100 * (0.25 - next_state["sensors_garden_moisture"]) - - # Penalize wasting water - # Penalize overwatering depending on how overwatered the garden is - else: - reward = -50 * (next_state["sensors_garden_moisture"] - 0.25) - - - reward: open_valve_full - # Reward watering when needed, more heavily if the garden is more dried out - with: | - if next_state["sensors_garden_moisture"] < 0.25: - reward = 200 * (0.25 - next_state["sensors_garden_moisture"]) - - # Penalize wasting water more heavily with valve fully open - # Penalize overwatering depending on how overwatered the garden is - else: - reward = -100 * (next_state["sensors_garden_moisture"] - 0.25) -``` diff --git a/spiceaidocs/content/en/reference/Spicepod/samples-serverops.md b/spiceaidocs/content/en/reference/Spicepod/samples-serverops.md deleted file mode 100644 index 7553c618..00000000 --- a/spiceaidocs/content/en/reference/Spicepod/samples-serverops.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -type: docs -title: 'Server Ops' -linkTitle: 'Example - Server Ops' -weight: 80 ---- - -From: https://github.com/spiceai/samples/tree/trunk/serverops - -```yaml -version: v1beta1 -kind: Spicepod -name: serverops -params: - period: 24h - interval: 10m - granularity: 30s -dataspaces: - - from: hostmetrics - name: cpu - data: - connector: - name: influxdb - params: - url: SPICE_INFLUXDB_URL - token: SPICE_INFLUXDB_TOKEN - org: SPICE_INFLUXDB_ORG - bucket: SPICE_INFLUXDB_BUCKET - measurement: cpu - field: usage_idle - processor: - name: flux-csv - measurements: - # "usage_idle" measures the percentage of time the CPU is idle - # Higher values indicate less CPU usage - - name: usage_idle - -actions: - - name: perform_maintenance - - name: preload_cache - - name: do_nothing - -training: - reward_init: | - high_cpu_usage_threshold = 10 - - cpu_usage_new = 100 - next_state["hostmetrics_cpu_usage_idle"] - cpu_usage_prev = 100 - current_state["hostmetrics_cpu_usage_idle"] - cpu_usage_delta = cpu_usage_new - cpu_usage_prev - - cpu_usage_delta_abs = cpu_usage_delta - if cpu_usage_delta_abs < 0: - cpu_usage_delta_abs *= -1 - - rewards: - - reward: perform_maintenance - # Reward when cpu usage is low and stable - with: | - if cpu_usage_new < high_cpu_usage_threshold: - # The lower the cpu usage, the higher the reward - reward = high_cpu_usage_threshold - cpu_usage_new - - # Add an additional reward if the cpu usage trend is stable - if cpu_usage_delta_abs < 2: - reward *= 1.5 - - else: - # Penalize performing maintenance at a time when cpu usage is high - # The higher the cpu usage, the more harsh the penalty should be - reward = high_cpu_usage_threshold - cpu_usage_new - - - reward: preload_cache - # Reward when cpu usage is low and rising - # Is the cpu usage high now, and was the cpu usage low previously? 
- # If so, previous state was a better time to preload, - # so give a negative reward based on the change - with: | - if cpu_usage_new > high_cpu_usage_threshold and cpu_usage_delta > 25: - reward = -cpu_usage_delta - - # Reward preloading during low cpu usage - else: - reward = high_cpu_usage_threshold - cpu_usage_new - - - reward: do_nothing - # Reward doing nothing under high cpu usage - # The higher the cpu usage, the higher the reward - with: | - if cpu_usage_new > high_cpu_usage_threshold: - reward = high_cpu_usage_threshold - cpu_usage_new - - # Penalize doing nothing slightly when cpu usage is low - else: - reward = -1 - - # If the cpu usage trend is unstable, do not apply the penalty - if cpu_usage_delta_abs > 5: - reward = 0 -``` diff --git a/spiceaidocs/content/en/training/monitoring.md b/spiceaidocs/content/en/training/monitoring.md deleted file mode 100644 index 2bc4c1cf..00000000 --- a/spiceaidocs/content/en/training/monitoring.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -type: docs -title: "Monitoring Training" -linkTitle: "Monitoring Training" -weight: 31 ---- - -Training runs may be monitored for progress, performance, and debugging in several ways. - -The main mediums of monitoring are: - -- Command Line -- Dashboard Monitoring -- Training Loggers - -### Command Line - -Training run progress is logged to the command line by default. - -**_Example output from the Trader Quickstart:_** - -```bash -2021/12/23 05:33:19 trader -> Starting training... -2021/12/23 05:33:19 trader -> Training 10 episodes... -2021/12/23 05:33:20 trader -> Episode 1 completed with score of -560.8. -2021/12/23 05:33:20 trader -> Action Counts: hold = 20, large_buy = 27, sell = 57, small_buy = 28. -2021/12/23 05:33:20 trader -> Episode 2 completed with score of -619.8. -2021/12/23 05:33:20 trader -> Action Counts: hold = 8, large_buy = 6, sell = 102, small_buy = 16. -2021/12/23 05:33:21 trader -> Episode 3 completed with score of -80.8. -2021/12/23 05:33:21 trader -> Action Counts: hold = 116, large_buy = 8, sell = 6, small_buy = 2. -2021/12/23 05:33:21 trader -> Episode 4 completed with score of -40.6. -2021/12/23 05:33:21 trader -> Action Counts: hold = 124, large_buy = 2, sell = 2, small_buy = 4. -2021/12/23 05:33:22 trader -> Episode 5 completed with score of -26.1. -2021/12/23 05:33:22 trader -> Action Counts: hold = 127, large_buy = 2, sell = 1, small_buy = 2. -2021/12/23 05:33:22 trader -> Episode 6 completed with score of -15.9. -2021/12/23 05:33:22 trader -> Action Counts: hold = 129, large_buy = 1, sell = 1, small_buy = 1. -2021/12/23 05:33:23 trader -> Episode 7 completed with score of -1.0. -2021/12/23 05:33:23 trader -> Action Counts: hold = 132, large_buy = 0, sell = 0, small_buy = 0. -2021/12/23 05:33:23 trader -> Episode 8 completed with score of -1.0. -2021/12/23 05:33:23 trader -> Action Counts: hold = 132, large_buy = 0, sell = 0, small_buy = 0. -2021/12/23 05:33:24 trader -> Episode 9 completed with score of -10.8. -2021/12/23 05:33:24 trader -> Action Counts: hold = 130, large_buy = 2, sell = 0, small_buy = 0. -2021/12/23 05:33:24 trader -> Episode 10 completed with score of -6.1. -2021/12/23 05:33:24 trader -> Action Counts: hold = 131, large_buy = 0, sell = 1, small_buy = 0. -2021/12/23 05:33:24 trader -> Max training episodes (10) reached! -``` - -### Dashboard Monitoring - -Training run progress can also be visualized in the dashboard [http://localhost:8000](http://localhost:8000) after navigating to the pod view. 
*(screenshot: dashboard-training-run)*

### Training Loggers

Spice.ai supports logging to monitoring tools like [TensorBoard](https://www.tensorflow.org/tensorboard/).

This logging can be enabled either at the pod level, using the `training.loggers` Spicepod section, or as a parameter to the `spice train` command. Once enabled, the runtime will log training metrics for that tool.

A button to open the tool will appear on the training run in the dashboard. Clicking the button will open the relevant monitoring tool.

**_Example for TensorBoard:_**
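A minimal sketch of enabling TensorBoard at the pod level is shown below; the exact values accepted under `loggers` are an assumption here, not confirmed by this page:

```yaml
training:
  loggers:
    - tensorboard # assumed logger name; logs training metrics for viewing in TensorBoard
```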