Hi, thank you for sharing the code. I'm trying to implement CARM and your code helps a lot.
I'm a little confused about the data processing code in PCA_STFT_visualize.ipynb. Does the data processing in this script exactly match the method in the CARM paper? The script uses np.convolve to calculate the constant offset, while the paper says the offset is calculated by averaging the CSI amplitude over 4 seconds — so I take that to mean simply averaging the amplitude over a 4-second window? Also, the H matrix in the paper is built from 1-second chunks cut from the CSI streams, while the script uses another np.convolve with a window_size of 0.1 s to build this matrix (or maybe just to smooth the signal, I'm not sure). Could you help explain why these np.convolve calls are used? Thanks a lot.
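To make the two interpretations concrete, here is a minimal sketch of what I mean. The sampling rate, dummy data, and variable names are my own assumptions for illustration, not taken from the notebook:

```python
import numpy as np

fs = 1000                       # assumed CSI sampling rate (Hz)
amp = np.random.rand(10 * fs)   # dummy CSI amplitude stream, 10 s long

# Interpretation A (how I read the paper): the constant offset is a single
# scalar, the plain average of the amplitude over a 4 s chunk.
offset_paper = amp[:4 * fs].mean()
detrended_paper = amp - offset_paper

# Interpretation B (what the script seems to do): a running average computed
# with np.convolve, i.e. a time-varying offset that also low-pass filters.
win = 4 * fs
offset_script = np.convolve(amp, np.ones(win) / win, mode='same')
detrended_script = amp - offset_script

# The second np.convolve in the script, with a 0.1 s window, looks like a
# short moving-average smoother applied before building the H matrix.
smooth_win = int(0.1 * fs)
amp_smoothed = np.convolve(amp, np.ones(smooth_win) / smooth_win, mode='same')
```

If interpretation B is correct, the convolution is not just computing the 4-second average but subtracting a sliding mean at every sample, which would behave quite differently from the paper's description.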