Quantum reservoir processing - Analysis #16

Open

NicolaBernini opened this issue May 13, 2019 · 1 comment
Labels: readthrough (Performing a Read Through)
Overview

Readthrough and Analysis related to Quantum reservoir processing


Reservoir Computing - Basic Elements

Architecture

[Figure ResComp1: Reservoir Computing architecture]

Neural networks can be thought of as composed of two main architectural elements:

  • a Core, where most of the computation is performed, and
  • a Final Layer, which is application specific, as the training signal is strongest there

Traditional neural networks used in machine learning do not show this distinction very clearly, as

  • they rely everywhere on the same kind of (biologically inspired) artificial neurons
  • they use a fully connected topology everywhere
  • every parameter is trained,
    though it has also been observed that depth alone was not a game-changing factor for them

Deep neural networks used in deep learning show this distinction clearly (a minimal sketch follows the list):

  • the backbone
    • is essentially focused on learning automatic feature detection
    • makes use of powerful filters, like convolutions, which rely on parameter sharing and hence effectively slide (e.g. in the case of images) over the full input domain
  • the final layer(s)
    • use(s) the features learned by the backbone to solve the task
    • make(s) use of traditional neurons and dense layers
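A minimal sketch of this backbone / final layer split, assuming PyTorch; the layer sizes and shapes below are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

# Backbone: convolutional filters with parameter sharing slide over the image
# and perform automatic feature detection.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # shared filters over the full input domain
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # pool features to a fixed-size vector
    nn.Flatten(),
)

# Final layer: a dense, task-specific mapping from features to outputs.
head = nn.Linear(16, 10)  # e.g. 10-class classification

x = torch.randn(1, 3, 32, 32)  # dummy RGB image
logits = head(backbone(x))     # backbone extracts features, head solves the task
```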

In Reservoir Computing this distinction is also clear (a sketch follows the list):

  • the Reservoir
    • is focused on performing a random high dimensional mapping of the input
    • is not trainable
  • the Readout
    • is focused on solving the specific task
    • has trainable connections
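A minimal Echo State Network style sketch of the fixed random Reservoir, in NumPy; the dimensions and scaling constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 3, 200  # hypothetical input and reservoir sizes

# Reservoir weights: drawn once at random and never trained.
W_in = rng.normal(scale=0.5, size=(n_res, n_in))
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # keep spectral radius < 1 for stability

def reservoir_states(inputs):
    """Map an input sequence to high dimensional reservoir states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)  # random nonlinear recurrent update
        states.append(x.copy())
    return np.array(states)
```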

Underlying Idea

The underlying idea seems very similar to the SVM kernel trick: since the Readout performs a classification, hence essentially finds linear separations of its input space, its job should be made easier by that space being high dimensional, which is exactly what the Reservoir output space is.

The idea, then, is that if the Reservoir output space is high dimensional enough, there is no need to train the Reservoir, hence no need to fit this mapping to the data; training can focus entirely on the linear discriminator in that space, which makes it much cheaper and easier (see the sketch below).
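A sketch of this training scheme, continuing the NumPy example above: only the linear readout is fitted, here via ridge regression (a common choice in reservoir computing, assumed for illustration); the data is a toy placeholder.

```python
# Only W_out is trained; the Reservoir mapping stays fixed.
def train_readout(states, targets, ridge=1e-6):
    """Closed-form ridge regression in the reservoir's high dimensional space."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy usage with the reservoir defined above (illustrative data only)
inputs = rng.normal(size=(100, n_in))
targets = rng.normal(size=(100, 1))
S = reservoir_states(inputs)       # fixed random high dimensional mapping
W_out = train_readout(S, targets)  # cheap linear fit: the only training step
predictions = S @ W_out
```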

Work in progress
