"When you can write well, you can think well."
--Matt Mullenweg.
Python repository for the paper "Meta-ViterbiNet: Online Meta-Learned Viterbi Equalization for Non-Stationary Channels".
Please cite our paper if you use this code in published research.
Table of contents generated with markdown-toc
This repository implements classical and machine-learning-based detectors for a channel with memory L. We implemented the naive Viterbi algorithm, as well as ViterbiNet, in Python. Our method, which incorporates temporal evolution over a sequence of symbols, is referred to as Meta-ViterbiNet. We also implemented a model-free baseline, a windowed-LSTM detector. The different directories and subdirectories are explained below.
The Python simulations of the simplified communication chain: encoder, channel, and detectors.
Includes all relevant channel functions and classes. The class in "channel_dataset.py" is the main class for aggregating pairs of (transmitted, received) samples. In "channel.py", the ISI AWGN channel is implemented. "channel_estimation.py" calculates the channel coefficients (the h values). Lastly, the BPSK modulator lies in "channel_modulator.py".
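As a rough illustration of what this folder covers, here is a minimal sketch of BPSK modulation and an ISI AWGN channel; the function names, the tap values, and the SNR handling are illustrative assumptions, not the exact code from the repository.

```python
import numpy as np

def bpsk_modulate(bits: np.ndarray) -> np.ndarray:
    """Map bits {0, 1} to BPSK symbols {+1, -1}."""
    return 1 - 2 * bits

def isi_awgn_channel(symbols: np.ndarray, h: np.ndarray, snr_db: float) -> np.ndarray:
    """Pass symbols through an ISI channel with taps h, then add white Gaussian noise."""
    conv = np.convolve(symbols, h)[:len(symbols)]   # channel memory via convolution
    sigma2 = 10 ** (-snr_db / 10)                   # noise variance, assuming unit symbol energy
    return conv + np.sqrt(sigma2) * np.random.randn(len(conv))

# Example: memory-4 channel at 8 dB SNR
bits = np.random.randint(0, 2, size=100)
h = np.array([1.0, 0.5, 0.3, 0.1])
rx = isi_awgn_channel(bpsk_modulate(bits), h, snr_db=8.0)
```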
The backbone detectors: VA, VNET, LSTM, META_VNET and META_LSTM. The meta and non-meta detectors have slightly different APIs, so they are separated in the trainer classes below. We use the VA as the maximum-likelihood (ML) detector, thus assuming full knowledge of the CSI. To keep a single API across the detectors, the SNR and gamma appear in all the appropriate forward calls, even though some detectors ignore them internally. A factory design pattern might have been a better fit here and is left as future work.
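To make the shared-API point concrete, below is a hedged sketch of what a common forward signature might look like; the class names, dimensions, and defaults are hypothetical, and some arguments are intentionally unused, as described above.

```python
import torch
from torch import nn

class Detector(nn.Module):
    """Illustrative base class: all detectors expose one forward signature."""
    def forward(self, rx: torch.Tensor, snr: float, gamma: float) -> torch.Tensor:
        raise NotImplementedError

class WindowedLSTMDetector(Detector):
    """Sketch of a model-free baseline: an LSTM over a sliding window of received samples."""
    def __init__(self, window: int = 4, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=window, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, rx: torch.Tensor, snr: float, gamma: float) -> torch.Tensor:
        # snr and gamma are accepted only to keep the API uniform; they are unused here
        out, _ = self.lstm(rx)          # rx: (batch, sequence, window)
        return self.head(out)           # per-symbol class scores
```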
Error-correction code functions. The code is taken from an external site.
Plotting of the FER versus SNR, and of the FER versus the block index.
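A minimal example of such a plot is sketched below; the SNR points and FER values are arbitrary placeholders used only to make the script run, not results from the paper.

```python
import matplotlib.pyplot as plt

# Placeholder values only, not results from the paper
snr_db = [6, 8, 10, 12]
fer_viterbi = [3e-1, 1e-1, 3e-2, 8e-3]
fer_meta_viterbinet = [2e-1, 6e-2, 1e-2, 2e-3]

plt.semilogy(snr_db, fer_viterbi, "o-", label="Viterbi")
plt.semilogy(snr_db, fer_meta_viterbinet, "s-", label="Meta-ViterbiNet")
plt.xlabel("SNR [dB]")
plt.ylabel("FER")
plt.grid(True, which="both")
plt.legend()
plt.show()
```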
Wrappers for the training and evaluation of the detectors.
The basic trainer class holds the most commonly used methods: train, meta-train, and evaluation (per SNR or per block; see the paper for the two types of evaluation). It also parses the config.yaml file and prepares the deep learning setup (loss, optimizer, ...).
Each trainer inherits from the basic trainer class, extending it as needed. You can run each trainer with the train/evaluate commands in its main.
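The trainer hierarchy described above might look roughly like the sketch below; the class and method names are hypothetical and only illustrate the inheritance and the train/evaluate entry points.

```python
import yaml

class Trainer:
    """Illustrative basic trainer: parses the config and prepares the setup."""
    def __init__(self, config_path: str = "config.yaml"):
        with open(config_path) as f:
            self.config = yaml.safe_load(f)   # hyperparameters, paths, etc.
        self.detector = self.build_detector()  # loss/optimizer would be set up here as well

    def build_detector(self):
        raise NotImplementedError

    def train(self):
        ...   # standard training loop over the channel dataset

    def evaluate(self):
        ...   # per-SNR or per-block evaluation

class ViterbiNetTrainer(Trainer):
    """Illustrative subclass: only the detector construction changes."""
    def build_detector(self):
        ...   # build a ViterbiNet-style detector from self.config

if __name__ == "__main__":
    trainer = ViterbiNetTrainer()
    trainer.train()
    trainer.evaluate()
```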
Extra utils for saving and loading pickle files, calculating the FER and BER metrics, and transitioning over the trellis.
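For example, the FER/BER accounting might be computed along these lines (a hedged sketch; function names are illustrative):

```python
import numpy as np

def calculate_ber(tx_bits: np.ndarray, detected_bits: np.ndarray) -> float:
    """Bit error rate: fraction of bits detected incorrectly."""
    return float(np.mean(tx_bits != detected_bits))

def calculate_fer(tx_blocks: np.ndarray, detected_blocks: np.ndarray) -> float:
    """Frame error rate: fraction of blocks with at least one bit error."""
    return float(np.mean(np.any(tx_blocks != detected_blocks, axis=1)))

# Example: 300 blocks of 120 bits each, with a single flipped bit
tx = np.random.randint(0, 2, size=(300, 120))
rx = tx.copy()
rx[0, 0] ^= 1
print(calculate_ber(tx.ravel(), rx.ravel()), calculate_fer(tx, rx))  # ~2.8e-5, ~3.3e-3
```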
Controls all parameters and hyperparameters.
Keeps the channel coefficient vectors (4 taps, each spanning 300 blocks).
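As an illustration of the layout (not the actual stored values), time-varying taps for a non-stationary channel could be generated and indexed per block like this:

```python
import numpy as np

# Illustrative only: 4 taps, one value per block over 300 blocks
n_taps, n_blocks = 4, 300
base = np.array([1.0, 0.5, 0.3, 0.1])[:, None]
# Slow cosine drift around the base taps to mimic a non-stationary channel
drift = 1 + 0.2 * np.cos(2 * np.pi * np.arange(n_blocks) / n_blocks)
h_per_block = base * drift          # shape: (4, 300)
h_block_10 = h_per_block[:, 10]     # taps used when simulating block 10
```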
Definitions of relative directories.
To execute the code, first download and install Git, Anaconda, and PyCharm.
Then install the environment by following the installation steps below.
Finally, open PyCharm in the root directory and run either one of the trainers or one of the plotters.
This code was simulated on a GeForce RTX 2060 with driver version 432.00 and CUDA 10.1.
- Open Git Bash and cd to a working directory of your choice.
- Clone this repository to your local machine.
- Open the Anaconda prompt and navigate to the cloned repository.
- Run the command "conda env create -f metanet.yml". This should install the required Python environment.
- Open the cloned directory using PyCharm.
- After the project has been opened in PyCharm, go to File -> Settings... (or CTRL+ALT+S).
- In the opened window, open the tab Project -> Project Interpreter.
- In the new window, click on the cog icon and then on Add...
- In the Add Python Interpreter window, click on the Conda Environment tab.
- Select Existing environment and navigate to where the python.exe executable of the metanet environment is installed, under the interpreter setting.
- On Windows it is usually found at C:\Users\<username>\anaconda3\envs\metanet\python.exe.
- On Linux it is usually found under /home/<username>/anaconda3.
- Click OK.
- Done!