Freewire: Freely Wired Neural Networks

Freewire is a Keras-like API for creating optimized, freely wired neural networks that run on CUDA. Freely wired neural networks are defined at the level of individual nodes (or neurons) and their connections, rather than at the level of homogeneous layers. The goal of Freewire is to let any arbitrary DAG of artificial neurons be defined first, then compiled at runtime into an optimized set of operations that run on CUDA.

This repository is a starting point for exploring how to design and optimize neural networks that are wired in novel ways at the level of individual artificial neurons, while retaining the speed and memory efficiency of traditional neural networks. Future versions will likely make use of sparse tensor operations.

Compiling Parallel Operations

Since freely wired neural networks may not fit the paradigm of having layers, it's necessary to consider ways to optimize them for training and inference. The most time-efficient implementation of a freely wired network would be a series of parallelized operations that extend a 1D tape of numbers, where each operation is a function of the input and the results of all previous operations. This code uses a topological sorting algorithm to find the minimum number of required operations for a given graph.

This graphic shows the 1D tape on the left and the freely wired neural network that it represents on the right (biases are omitted from the image for simplicity). Also note that the 1D tape is extended to 2D to allow training in batches.
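
Concretely, the minimum number of operations equals the depth of the graph: a node can be computed as soon as everything it reads from is already on the tape, so nodes at the same depth are batched into one parallel operation. Below is a minimal sketch of that level-grouping idea, not Freewire's actual compiler; preds is a hypothetical map from each node id to the ids of the nodes it reads from.

def group_into_ops(preds):
  # depth 0 = input nodes; any other node sits one level past its deepest predecessor
  level = {}
  def depth(n):
    if n not in level:
      ps = preds[n]
      level[n] = 0 if not ps else 1 + max(depth(p) for p in ps)
    return level[n]
  for n in preds:
    depth(n)
  # nodes at the same depth form one parallel operation on the tape
  ops = {}
  for n, l in level.items():
    ops.setdefault(l, []).append(n)
  return [ops[l] for l in sorted(ops)]

For the XOR graph below, this grouping yields three operations: the two inputs, the five hidden nodes, and the output.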

XOR Gate Example

from freewire import Node, Graph, Model

# node with no arguments is an input node
inputs = [Node(), Node()]
# first argument of Node constructor is a list of input nodes
hidden = [Node(inputs, activation='sigmoid') for _ in range(5)]
output = Node(hidden, activation='sigmoid')
# specify which nodes are inputs, hidden, or output nodes when generating graph
g = Graph(inputs, hidden, [output])
m = Model(g)
# create training data
data = [
  [0, 0],
  [1, 0],
  [0, 1],
  [1, 1]
]
target = [0, 1, 1, 0]
# similar API to Keras
m.compile(optimizer='sgd', loss='mse')
m.fit(data, target, epochs=10000, batch_size=1)
print("0 xor 0:", m([0, 0]))
print("0 xor 1:", m([0, 1]))
print("1 xor 0:", m([1, 0]))
print("1 xor 1:", m([1, 1]))

Visualization Example

You can visualize a graph to see its architecture and weights, provided the graph is small enough. The visualization is generated with the graphviz library.

A graph's weights and biases start out at zero; they are initialized when the graph is used to construct a model.

from freewire import Node, Graph, Model
from freewire import visualize
inputs = [Node(), Node()]
hidden1 = Node(inputs)
hidden2 = Node([inputs[0], hidden1])
hidden3 = Node([inputs[1], hidden1])
output = Node([hidden2, hidden3])
g = Graph(inputs, [hidden1, hidden2, hidden3], [output])
visualize(g, title="architecture")

Now, create a model from this graph and view the updated weights and biases.

m = Model(g, initialization="he")
visualize(g, title="architecture_and_weights")

More Examples

See the examples folder for more examples, including a network for MNIST with randomly wired layers.
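
As a rough illustration, here is a minimal sketch of a randomly wired network built only from the API shown above; the sizes, fan-in, and seed are arbitrary choices for this sketch, and the actual MNIST example in the repo may be structured differently.

import random
from freewire import Node, Graph, Model

random.seed(0)
inputs = [Node() for _ in range(8)]
# each hidden node reads from a random subset of three input nodes
hidden = [Node(random.sample(inputs, 3), activation='sigmoid') for _ in range(16)]
# the output node reads from every hidden node
output = Node(hidden, activation='sigmoid')
m = Model(Graph(inputs, hidden, [output]))
m.compile(optimizer='sgd', loss='mse')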

Installation

git clone https://github.com/noahtren/Freewire
cd Freewire
pip install -e .

This will automatically install the requirements in requirements.txt:

  • numpy
  • torch==1.2.0
  • graphviz
  • pydot
  • Optional: the mnist package, to run the example on the MNIST dataset
