Added Support for Modeling Source and Line Resistances for 1.1.4 Release (#98)

* Added support for modeling source and line resistances for passive crossbars/tiles.
* Added C++ and CUDA bindings for modeling source and line resistances for passive crossbars/tiles*.
* Added a new MemTorch logo to `README.md`.
* Added the `set_cuda_malloc_heap_size` routine to patched `torch.mn` modules.
* Added unit tests for source and line resistance modeling.
* Updated ReadTheDocs documentation.
* Transitioned from Gitter to GitHub Discussions for general discussion.

**\*Note** It is strongly suggested to set `cuda_malloc_heap_size` using `m.set_cuda_malloc_heap_size` manually when simulating source and line resistances using CUDA bindings.
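
A minimal sketch of how these additions might be exercised together, assuming the source and line resistances are exposed to `memtorch.mn.Module.patch_model` as `source_resistance` and `line_resistance` keyword arguments (these names, the resistance values, and the heap size below are illustrative assumptions, not the definitive API):

```python
import torch
import memtorch
from memtorch.mn.Module import patch_model
from memtorch.map.Parameter import naive_map

# A small torch.nn.Sequential container (1.1.4 adds patching support for these).
model = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10)
)

patched_model = patch_model(
    model,
    memristor_model=memtorch.bh.memristor.VTEAM,
    memristor_model_params={"time_series_resolution": 1e-10},
    module_parameters_to_patch=[torch.nn.Linear],
    mapping_routine=naive_map,
    transistor=False,           # passive crossbars/tiles
    programming_routine=None,   # permitted by the relaxed 1.1.4 requirements
    tile_shape=(128, 128),
    source_resistance=10.0,     # assumed kwarg name (Ohms)
    line_resistance=5.0,        # assumed kwarg name (Ohms)
)

# Manually raise the CUDA malloc heap size on each patched module before
# simulating source/line resistances with the CUDA bindings; the byte
# count used here is purely illustrative.
for module in patched_model.modules():
    if hasattr(module, "set_cuda_malloc_heap_size"):
        module.set_cuda_malloc_heap_size(int(1e9))
```

Iterating over `patched_model.modules()` and checking for the attribute is a defensive way to reach each patched layer; the exact module the routine is attached to may differ.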
coreylammie authored Sep 21, 2021
1 parent e6825bb commit 0dfdbe1
Showing 41 changed files with 2,315 additions and 197 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -18,4 +18,5 @@ venv/
/cuda_quantization.cpython-38-x86_64-linux-gnu.so
/quantization.cpython-38-x86_64-linux-gnu.so
.idea/
/memtorch.egg-info/
/memtorch.egg-info/
*.csv
29 changes: 12 additions & 17 deletions CHANGELOG.md
@@ -1,22 +1,17 @@
## Added

1. Added another version of the Data Driven Model defined using `memtorch.bh.memristor.Data_Driven2021`.
2. Added CPU- and GPU-bound C++ bindings for `gen_tiles`.
3. Exposed `use_bindings`.
4. Added unit tests for `use_bindings`.
5. Added `exemptAssignees` tag to `scale.yml`.
6. Created `memtorch.map.Input` to encapsulate customizable input scaling methods.
7. Added the `force_scale` input argument to the default scaling method to specify whether inputs are force scaled if they do not exceed `max_input_voltage`.
8. Added CPU and GPU bindings for `tiled_inference`.
1. Added Patching Support for `torch.nn.Sequential` containers.
2. Added support for modeling source and line resistances for passive crossbars/tiles.
3. Added C++ and CUDA bindings for modeling source and line resistances for passive crossbars/tiles\*.
4. Added a new MemTorch logo to `README.md`.
5. Added the `set_cuda_malloc_heap_size` routine to patched `torch.mn` modules.
6. Added unit tests for source and line resistance modeling.
7. Relaxed requirements for programming passive crossbars/tiles.

## Enhanced

1. Modularized input scaling logic for all layer types.
2. Modularized `tile_inference` for all layer types.
3. Updated ReadTheDocs documentation.
**\*Note** It is strongly suggested to set `cuda_malloc_heap_size` using `m.set_cuda_malloc_heap_size` manually when simulating source and line resistances using CUDA bindings.

## Fixed
## Enhanced

1. Fixed GitHub Action Workflows for external pull requests.
2. Fixed error raised by `memtorch.map.Parameter` when `p_l` is defined.
3. Fixed semantic error in `memtorch.cpp.gen_tiles`.
1. Modularized patching logic in `memtorch.bh.nonideality.NonIdeality` and `memtorch.mn.Module`.
2. Updated `ReadTheDocs` documentation.
3. Transitioned from `Gitter` to `GitHub Discussions` for general discussion.
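
As a usage note for the modularized `memtorch.bh.nonideality.NonIdeality` patching logic listed above, the sketch below shows a typical call, continuing from the earlier example; the refactor is internal, so this interface is assumed unchanged, and the chosen non-ideality and proportions are illustrative only:

```python
import memtorch
from memtorch.bh.nonideality.NonIdeality import apply_nonidealities

# Apply stuck-device faults to the previously patched model; the
# proportions below are illustrative only.
patched_model_with_faults = apply_nonidealities(
    patched_model,
    non_idealities=[memtorch.bh.nonideality.NonIdeality.DeviceFaults],
    lrs_proportion=0.25,
    hrs_proportion=0.10,
    electroform_proportion=0,
)
```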
4 changes: 2 additions & 2 deletions README.md
@@ -1,13 +1,13 @@
<h1 align="center">
<br>
MemTorch
<img src="https://github.com/coreylammie/MemTorch/blob/master/logo.svg?raw=True" alt="MemTorch" width="40%"/>
<br>
</h1>

[![](https://img.shields.io/badge/python-3.6+-blue.svg)](https://www.python.org/)
![](https://img.shields.io/badge/license-GPL-blue.svg)
![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3760695.svg)
[![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/memtorch/community)
[![GitHub Discussions](https://img.shields.io/badge/chat-discussions-ff69b4)](https://github.com/coreylammie/MemTorch/discussions/97)
![](https://readthedocs.org/projects/pip/badge/?version=latest)
[![CI](https://github.com/coreylammie/MemTorch/actions/workflows/push_pull.yml/badge.svg)](https://github.com/coreylammie/MemTorch/actions/workflows/push_pull.yml)
[![codecov](https://codecov.io/gh/coreylammie/MemTorch/branch/master/graph/badge.svg)](https://codecov.io/gh/coreylammie/MemTorch)
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -21,7 +21,7 @@
author = "Corey Lammie"

# The full version, including alpha/beta/rc tags
release = "1.1.3"
release = "1.1.4"
autodoc_inherit_docstrings = False

# -- General configuration ---------------------------------------------------
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -23,4 +23,4 @@ We provide documentation in the form of a complete Python API, and numerous inte

memtorch
tutorials
Discuss MemTorch on Gitter <https://gitter.im/memtorch/community>
Discuss MemTorch on GitHub Discussions <https://github.com/coreylammie/MemTorch/discussions/97>
112 changes: 112 additions & 0 deletions logo.svg
5 changes: 1 addition & 4 deletions memtorch/bh/crossbar/Crossbar.py
@@ -203,16 +203,13 @@ def write_conductance_matrix(
conductance_matrix = torch.max(
torch.min(conductance_matrix.to(self.device), max), min
)
if transistor:
if transistor or programming_routine is None:
self.conductance_matrix = conductance_matrix
self.max_abs_conductance = (
torch.abs(self.conductance_matrix).flatten().max()
)
self.update(from_devices=False)
else:
assert (
programming_routine is not None
), "programming_routine must be defined if transistor is False."
if self.tile_shape is not None:
for i in range(0, self.devices.shape[0]):
for j in range(0, self.devices.shape[1]):
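
The `Crossbar.py` hunk above removes the assertion that previously required a `programming_routine` whenever `transistor` was `False`; with the relaxed check, the conductance matrix is written directly in that case. A minimal sketch of exercising the relaxed branch, with constructor and method signatures assumed for illustration:

```python
import torch
import memtorch
from memtorch.bh.crossbar.Crossbar import Crossbar

# Signatures and parameter values below are assumptions for illustration.
crossbar = Crossbar(
    memtorch.bh.memristor.VTEAM,
    {"time_series_resolution": 1e-10},
    shape=(32, 32),
    tile_shape=(16, 16),
)
target_conductances = torch.rand(32, 32) * 1e-3  # target conductances (S)

# Previously this combination raised an AssertionError; with the change
# above, the conductance matrix is written directly instead.
crossbar.write_conductance_matrix(
    target_conductances, transistor=False, programming_routine=None
)
```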