CITATION.cff
cff-version: 1.2.0
title: 'AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks'
message: 'If you use this software, please cite it as below.'
preferred-citation:
  type: article
  authors:
    - given-names: Vivswan
      family-names: Shah
      email: [email protected]
      affiliation: University of Pittsburgh
    - family-names: Youngblood
      given-names: Nathan
      affiliation: University of Pittsburgh
  doi: "10.1063/5.0134156"
  journal: "APL Machine Learning"
  title: 'AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks'
  year: 2023
authors:
  - given-names: Vivswan
    family-names: Shah
    email: [email protected]
    affiliation: University of Pittsburgh
  - family-names: Youngblood
    given-names: Nathan
    affiliation: University of Pittsburgh
identifiers:
  - type: doi
    value: 10.1063/5.0134156
    description: >-
      The DOI of the AnalogVNN article published in
      APL Machine Learning.
repository-code: 'https://github.com/Vivswan/AnalogVNN'
url: 'https://analogvnn.readthedocs.io/'
abstract: >-
  AnalogVNN is a simulation framework built on PyTorch
  that can simulate the effects of optoelectronic
  noise, limited precision, and signal normalization
  present in photonic neural network accelerators. We
  use this framework to train and optimize linear and
  convolutional neural networks with up to 9 layers
  and ~1.7 million parameters, while gaining insights
  into how normalization, activation function,
  reduced precision, and noise influence accuracy in
  analog photonic neural networks. By following the
  same layer structure design present in PyTorch, the
  AnalogVNN framework allows users to convert most
  digital neural network models to their analog
  counterparts with just a few lines of code, taking
  full advantage of the open-source optimization,
  deep learning, and GPU acceleration libraries
  available through PyTorch.
keywords:
  - photonics
  - neural-networks
  - analog-computing
  - deep-learning
license: MPL-2.0