Connecting Conv1d layer with LSTM layer #1052
Comments
Hi! I had a look at your model, and just printing the trainable weights for the first LSTM layer, I see that the kernel weights are of size 32 x 32 = 1024. So hls4ml is correctly inferring the size of the weight tensor. I think there is a misunderstanding of the expected size here: the number of samples does not impact the size of the weight tensors; see for example https://medium.com/analytics-vidhya/demystifying-lstm-weights-and-biases-dimensions-c47dbd39b30a
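To illustrate (a minimal standalone sketch, not taken from the model in this issue): an LSTM's trainable weight shapes depend only on the input feature dimension and the number of units, never on how many time steps it processes:

import tensorflow as tf

for timesteps in (2, 8, 16):
    lstm = tf.keras.layers.LSTM(8)
    lstm.build((None, timesteps, 32))  # (batch, timesteps, features)
    kernel, recurrent_kernel, bias = lstm.weights
    # kernel is always (features, 4 * units) = (32, 32), i.e. 1024 values,
    # regardless of the number of time steps
    print(timesteps, kernel.shape, recurrent_kernel.shape, bias.shape)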
Thank you for your reply! After reading the page linked in your response, I see that my terminology might be incorrect. I'm assuming (or rather, I want) the CNN+pooling stack to provide a "sequence length" of 1 and an embedding dimension of samples * channels. We're building latency-constrained models and can't afford to invoke the LSTM equations multiple times per forward pass. In other words, we want to flatten the tensor feeding the LSTM layer into a single embedding vector. Is this possible? Thank you!
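For concreteness, the Keras-side version of what is being asked for would look something like the sketch below (hypothetical code, not from the thread; whether hls4ml converts it correctly is exactly the open question):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Reshape, LSTM

# Stand-in for the (2, 32) tensor coming out of the last Conv1D+pooling stage.
single_step = Sequential([
    Input(shape=(2, 32)),
    Reshape((1, 2 * 32)),  # one time step, 64-feature embedding
    LSTM(8),               # kernel becomes (64, 4 * 8) = 2048 weights
])
single_step.summary()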
Hi Suyash, I don't think something like this is supported in hls4ml at the moment. AFAIK, our implementation keeps the structure of iterating over the time steps to calculate the results. I presume it would be possible to add an optional version that flattens the inputs.
I’m afraid that hls4ml is not properly flattening the tensor between a conv1d layer and an LSTM layer.
For the network generated from the following Keras code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Flatten, Dense, TimeDistributed

def create_model():
    model = Sequential()
    model.add(Conv1D(8, 3, padding='same', activation='relu', input_shape=(16, 1)))
    model.add(MaxPooling1D(pool_size=2, strides=2, padding='same'))
    model.add(Conv1D(16, 3, padding='same', activation='relu'))
    model.add(MaxPooling1D(pool_size=2, strides=2, padding='same'))
    model.add(Conv1D(32, 3, padding='same', activation='relu'))
    model.add(MaxPooling1D(pool_size=2, strides=2, padding='same'))
    # The tail of the model was cut off in the original post; the layers below
    # are reconstructed from the weight sizes in the generated C++ further down
    # (two 8-unit LSTMs and a single-output Dense layer).
    model.add(LSTM(8, return_sequences=True))
    model.add(LSTM(8))
    model.add(Dense(1))
    return model

# Create the model
model = create_model()
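As a quick sanity check on what feeds the first LSTM (the layer indices below assume the reconstructed model above):

m = create_model()
print(m.layers[5].output.shape)      # last MaxPooling1D: (None, 2, 32)
print(m.layers[6].weights[0].shape)  # first LSTM kernel: (32, 32), i.e. 1024 values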
I see the following generated C++ code:
#include <iostream>

#include "network_64_4_64_2_32_2_32_ru.h"
#include "parameters.h"

void network_64_4_64_2_32_2_32_ru(
    hls::stream<input_t> &conv1d_input,
    hls::stream<result_t> &layer15_out
) {

#ifndef __SYNTHESIS__
    static bool loaded_weights = false;
    if (!loaded_weights) {
        // hls-fpga-machine-learning insert load weights
        nnet::load_weights_from_txt<model_default_t, 24>(w2, "w2.txt");
        nnet::load_weights_from_txt<model_default_t, 8>(b2, "b2.txt");
        nnet::load_weights_from_txt<model_default_t, 384>(w5, "w5.txt");
        nnet::load_weights_from_txt<model_default_t, 16>(b5, "b5.txt");
        nnet::load_weights_from_txt<model_default_t, 1536>(w8, "w8.txt");
        nnet::load_weights_from_txt<model_default_t, 32>(b8, "b8.txt");
        // w11/wr11/b11/br11: first LSTM kernel (32 inputs x 4 gates x 8 units
        // = 1024), recurrent kernel (8 x 4 x 8 = 256), bias, and recurrent bias
        nnet::load_weights_from_txt<model_default_t, 1024>(w11, "w11.txt");
        nnet::load_weights_from_txt<model_default_t, 256>(wr11, "wr11.txt");
        nnet::load_weights_from_txt<model_default_t, 32>(b11, "b11.txt");
        nnet::load_weights_from_txt<model_default_t, 32>(br11, "br11.txt");
        // w12/wr12/b12/br12: second LSTM (8 inputs x 4 gates x 8 units = 256, etc.)
        nnet::load_weights_from_txt<model_default_t, 256>(w12, "w12.txt");
        nnet::load_weights_from_txt<model_default_t, 256>(wr12, "wr12.txt");
        nnet::load_weights_from_txt<model_default_t, 32>(b12, "b12.txt");
        nnet::load_weights_from_txt<model_default_t, 32>(br12, "br12.txt");
        nnet::load_weights_from_txt<model_default_t, 8>(w15, "w15.txt");
        nnet::load_weights_from_txt<model_default_t, 1>(b15, "b15.txt");
        loaded_weights = true;
    }
#endif

    // ... (the generated layer calls that follow are trimmed from this post) ...
}
The number of weights in the first LSTM layer is expected to account for all outputs of the last Conv1D+pooling layer, i.e. 2 samples x 32 channels x 4 gates x 8 states = 2048, but it instead appears in the generated C++ as 1024.
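For comparison, the two counts at issue work out as follows (a quick check using the convention from the reply above, where the kernel size depends only on the input features and the units):

features = 32   # channels out of the last Conv1D+pooling stage
timesteps = 2   # time steps out of the last pooling stage
units = 8
print(features * 4 * units)              # 1024: what Keras and hls4ml allocate
print(timesteps * features * 4 * units)  # 2048: what a flattened input would need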
How should I be connecting a pooling layer to an LSTM layer to guarantee that all outputs are conveyed?