From e9bb0d39c12b2974c8a141ca86051f53a6bf487b Mon Sep 17 00:00:00 2001
From: Arne Symons
Date: Mon, 25 Nov 2024 13:01:11 +0100
Subject: [PATCH] fix readme typo

---
 lab1/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lab1/README.md b/lab1/README.md
index 4a61053..17b3adc 100644
--- a/lab1/README.md
+++ b/lab1/README.md
@@ -9,7 +9,7 @@ The goal of this lab is to perform the first run of the Stream framework. You wi
 ## Inputs
 There are three main inputs defined in the `inputs/` folder:
-1. **Workload**: Four back-to-back convolutional layers. The layer names are `Layer0`, `Layer`, etc. You can use [Netron](https://netron.app) to visualize the model.
+1. **Workload**: Four back-to-back convolutional layers. The layer names are `Layer0`, `Layer1`, etc. You can use [Netron](https://netron.app) to visualize the model.
 2. **Hardware**: A sample accelerator is encoded in `hda_bus.yaml`. There are three computing cores, `accelerator1.yaml` through `accelerator3.yaml`. These cores are defined using the ZigZag architecture definition (more information on the ZigZag architecture representation can be found [here](https://kuleuven-micas.github.io/zigzag/hardware.html)). Additionally, there is an `offchip_core` field which specifies the description of the offchip core. This offchip core also uses the ZigZag description, of which only the memory information is used (as no computations can be allocated to the offchip core). The cores are interconnected using the `core_connectivity` field of the HDA description. This specifies on each line a communication channel between two or more cores. A link is automatically added from the offchip core to the different compute cores.
 3. **Mapping**: The mapping specifies for the `Layer0`-`Layer3` which core they can be allocated to, and whether the workload allocation is fixed or not. In this first run, we fix the allocation of the layers to a single core to get a baseline of the execution.
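
As an illustration of the hardware input described in the patched README, the sketch below shows roughly how the `hda_bus.yaml` fields mentioned there could fit together. The field names `offchip_core` and `core_connectivity` and the core file names come from the README text; the top-level layout, core identifiers, offchip file name, and connectivity entry are assumptions, so the actual `inputs/hda_bus.yaml` in `lab1/` remains the authoritative definition.

```yaml
# Hypothetical HDA description sketch; field names follow the README,
# but the exact schema and values are assumptions -- check inputs/hda_bus.yaml.
name: hda_bus_example

# Three compute cores, each defined by a ZigZag architecture description.
cores:
  0: accelerator1.yaml
  1: accelerator2.yaml
  2: accelerator3.yaml

# Offchip core (assumed file name); only its memory information is used,
# since no computations can be allocated to it.
offchip_core: offchip.yaml

# Each entry specifies a communication channel between two or more cores.
# Links from the offchip core to the compute cores are added automatically.
core_connectivity:
  - 0, 1, 2
```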