
System-on-Chip Resource Adaptive Scheduling using Deep Reinforcement Learning

SoCRATES, the System-on-Chip Resource AdapTivE Scheduling, is a DRL scheduler that specializes in scheduling SoC jobs onto heterogeneous resources and achieves state-of-the-art run-time performance. Its Eclectic Interaction Matching technique matches each individual state-action tuple with the rewards received at the system clock frequency.

Our recent paper, DRL for SoC: Myths and Realities, investigates the feasibility of neural schedulers for the domain of SoC resource allocation through extensive experiments and comparison with non-neural, heuristic schedulers.

The scheduler runs on the System-on-Chip (SoC) framework. The simulation is built on the DS3 framework, a high-fidelity, system-level, domain-specific system-on-chip simulation environment that provides plug-and-play run-time policy and energy/power modules. The main objective is to optimize performance (i.e., run-time latency, power dissipation, and energy consumption). To make deep reinforcement learning algorithms easy to use, we run the scheduling algorithms in the DS3Gym simulator, which wraps DS3 in a Gym environment for SoC-level task scheduling. The overall diagram of DS3 and a comparison of job characteristics are illustrated below.


Figure 1: An overview of DS3 workflow.

Figure 2: The edge density and chain ratio of cluster and SoC workloads.
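
Since DS3Gym follows the standard Gym interface, a scheduling episode can be driven with the usual reset/step loop. The sketch below is illustrative only: the environment id "DS3Gym-v0" and the random action are assumptions, not the repository's exact API.

import gym

# Minimal Gym-style episode loop (sketch; the environment id is hypothetical).
env = gym.make("DS3Gym-v0")
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()  # stand-in for a scheduling policy
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("episode return:", episode_return)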

The overall systematic workflow of DS3 with scheduling policies is depicted below.


Figure 3: The architecture of neural schedulers applied to DS3 simulator.

A comparison of the DRL algorithms is illustrated below.


Figure 4: An overview of DRL scheduler properties in terms of job injection frequencies and resource types.

The run-time performance of the different algorithms is evaluated below.


Figure 5: Overall performance of heuristic and DRL scheduling algorithms.

Figure 6: Scalability analysis of different scheduling algorithms.

Installation

First, install DS3Gym and the required dependencies, and then install this repository as a Python package.

Requirements

  • CPU or NVIDIA GPU, Linux, Python 3.6+
  • PyTorch and Python packages; instructions for installing the dependencies follow.

  1. Python environment: we recommend the Conda package manager

conda create -n socrates python=3.6
conda activate socrates

  2. Install the DS3Gym framework and the required Python dependencies

pip install torch numpy
pip install -r requirements.txt
pip install -e .

Usage

This repository supports heuristic and DRL schedulers. To reproduce the results, execute the command corresponding to the desired scheduler.

Currently, we support training agents on CPU only. Training takes approximately 1-2 hours to complete.

python run_socrates_scheduler.py
python run_heuristic_scheduler.py
python run_scarl_scheduler.py
python run_deepsocs_scheduler.py

User customization

The DS3Gym framework allows users to customize different configurations. The supported settings are listed in config.py; an example invocation follows the list.

  • --resource_profile: A list of resource profiles
  • --job_profile: A list of job profiles
  • --scale: Job injection frequency (a lower scale means a faster injection rate)
  • --simulation_length: The total simulation length of one episode
  • --scheduler_name: The scheduler to run (ETF/MET/STF/HEFT/random/SCARL/DeepSoCS/SoCRATES)
  • --max_num_jobs: The length of the job queue
  • --run_mode: The simulation execution mode (run for the standard DS3 framework, step for the DS3Gym framework)
  • --pss: Whether to enable the pseudo-steady-state mode
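
For example, assuming these settings are exposed as command-line flags, a customized run might look like the following (the values are illustrative only):

python run_socrates_scheduler.py --scale 100 --simulation_length 5000 --scheduler_name SoCRATES --run_mode step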

Heuristic Schedulers

This repository implements several well-known heuristic schedulers: MET, ETF, EFT, STF, and HEFT. To run a simulation with one of these schedulers, simply pass the scheduler's name as an argument when running the Python code.
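
For example, assuming the --scheduler_name flag from config.py is accepted on the command line, a HEFT simulation would be launched as:

python run_heuristic_scheduler.py --scheduler_name HEFT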

DRL Schedulers

This repository provides the DRL-based schedulers SoCRATES, DeepSoCS, and SCARL. Detailed information on each scheduler is given below.

SoCRATES (IEEE ICMLA, 2021)

SoCRATES, the System-on-Chip Resource AdapTivE Scheduling, is a DRL scheduler that specializes in scheduling SoC jobs onto heterogeneous resources and achieves state-of-the-art run-time performance. Its Eclectic Interaction Matching technique matches each individual state-action tuple with the rewards received at the system clock frequency.
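
The sketch below illustrates the idea behind Eclectic Interaction Matching: decisions are buffered by the clock tick at which they were taken, and rewards computed at those ticks are matched back to them. All names and data structures here are hypothetical, not the repository's implementation.

from collections import defaultdict

class EclecticInteractionBuffer:
    # Illustrative buffer pairing state-action tuples with clock-tick rewards.
    def __init__(self):
        self.pending = defaultdict(list)  # clock tick -> [(state, action)]
        self.transitions = []             # completed (state, action, reward)

    def record(self, clock, state, action):
        # Buffer a scheduling decision taken at a given system clock tick.
        self.pending[clock].append((state, action))

    def match_reward(self, clock, reward):
        # Match a reward produced at `clock` to the decisions from that tick.
        for state, action in self.pending.pop(clock, []):
            self.transitions.append((state, action, reward))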

@inproceedings{sung2021socrates,
  title={SoCRATES: System-on-Chip Resource Adaptive Scheduling using Deep Reinforcement Learning},
  author={Sung, Tegg Taekyong and Ryu, Bo},
  booktitle={2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)},
  pages={496--501},
  year={2021},
  organization={IEEE}
}

DeepSoCS (Electronics, 2020)

DeepSoCS is the first DRL-based scheduler applied to the DS3 framework. Extending the Decima architecture, DeepSoCS rearranges the given tasks using graph neural networks and policy networks, then applies a greedy algorithm to map tasks to available resources. This mechanism operates similarly to the HEFT algorithm.
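
The two-stage mechanism can be sketched as follows; rank_model.score, resource.exec_time, and resource.available_at are assumed interfaces for illustration, not the repository's code.

def schedule(ready_tasks, resources, rank_model):
    # Stage 1: order tasks by the learned ranking score (GNN + policy network).
    ordered = sorted(ready_tasks, key=rank_model.score, reverse=True)
    assignment = {}
    for task in ordered:
        # Stage 2: greedily map each task to the resource with the earliest
        # estimated finish time, echoing HEFT's mapping policy.
        best = min(resources, key=lambda r: r.available_at + r.exec_time(task))
        assignment[task] = best
        best.available_at += best.exec_time(task)
    return assignment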

@article{sung2020deepsocs,
  title={DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip (SoC) Resource Scheduling},
  author={Sung, Tegg Taekyong and Ha, Jeongsoo and Kim, Jeewoo and Yahja, Alex and Sohn, Chae-Bong and Ryu, Bo},
  journal={Electronics},
  volume={9},
  number={6},
  pages={936},
  year={2020},
  publisher={Multidisciplinary Digital Publishing Institute}
}

SCARL (IEEE Access, 2019)

SCARL applies attentive embedding in its policy network to map jobs to heterogeneous resources in a simple environment.
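
As a rough illustration of attentive job-to-resource matching (the layer sizes and tensor shapes are assumptions, not SCARL's actual architecture):

import torch
import torch.nn as nn

class AttentiveMatcher(nn.Module):
    # Scores each resource for a given job with scaled dot-product attention.
    def __init__(self, job_dim, res_dim, hidden=64):
        super().__init__()
        self.job_proj = nn.Linear(job_dim, hidden)  # query from job features
        self.res_proj = nn.Linear(res_dim, hidden)  # keys from resources

    def forward(self, job, resources):
        q = self.job_proj(job)                # (hidden,)
        k = self.res_proj(resources)          # (num_resources, hidden)
        scores = k @ q / (q.shape[0] ** 0.5)  # scaled dot-product scores
        return torch.softmax(scores, dim=0)   # distribution over resources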

Citation

If you use SoCRLFramework in your work or use any of the models published in SoCRLFramework, please cite:

@article{sung2022deep,
  title={Deep Reinforcement Learning for System-on-Chip: Myths and Realities},
  author={Sung, Tegg Taekyong and Ryu, Bo},
  journal={IEEE Access},
  volume={10},
  pages={98048--98064},
  year={2022},
  publisher={IEEE}
}
@inproceedings{sung2021socrates,
  title={SoCRATES: System-on-Chip Resource Adaptive Scheduling using Deep Reinforcement Learning},
  author={Sung, Tegg Taekyong and Ryu, Bo},
  booktitle={2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)},
  pages={496--501},
  year={2021},
  organization={IEEE}
}
@article{sung2021scalable,
  title={A Scalable and Reproducible System-on-Chip Simulation for Reinforcement Learning},
  author={Sung, Tegg Taekyong and Ryu, Bo},
  journal={arXiv preprint arXiv:2104.13187},
  year={2021}
}

License

SoCRATES is licensed under the MIT license, available in the LICENSE file.
