There is a considerable difference between the output and the expected output. The expected output is the image obtained by passing the input image through the software model. I use the latest FINN branch. I trained the network outside the FINN Docker container, exported it in QONNX format, and then imported it into FINN. Most of the nodes are RTL, while some are HLS.
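For context, the export/import flow looks roughly like the sketch below (a minimal sketch assuming a Brevitas-trained network; the model, input shape, and file names are placeholders and may differ between Brevitas/FINN versions):

```python
# Minimal sketch of the export/import flow (placeholder model and shapes,
# not the exact training script).
import torch
from brevitas.nn import QuantIdentity
from brevitas.export import export_qonnx
from qonnx.core.modelwrapper import ModelWrapper
from finn.transformation.qonnx.convert_qonnx_to_finn import ConvertQONNXtoFINN

trained_model = QuantIdentity()          # stand-in for the actual trained network
dummy_in = torch.randn(1, 3, 256, 256)   # placeholder input shape

# 1) Outside the FINN Docker: export the trained Brevitas model to QONNX.
export_qonnx(trained_model, dummy_in, "model_qonnx.onnx")

# 2) Inside the FINN Docker: load the QONNX file and convert to FINN-ONNX.
model = ModelWrapper("model_qonnx.onnx")
model = model.transform(ConvertQONNXtoFINN())
model.save("model_finn.onnx")
```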
I have included the intermediate models and build_dataflow_steps.py in the zip file, as well as the output received through FINN and the expected output. The notebook for running on PYNQ is also included. I don't know whether I am doing anything wrong. Also, I removed the last Mul node because it just divides by 255 to scale the output between 0 and 1.
check.zip
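Removing that final Mul node was done along the lines of the sketch below (a sketch only; file names are placeholders and the tensor/node names come from the exported model):

```python
# Sketch: remove the trailing Mul node (the 1/255 scaling) and make its input
# the new graph output. File names are placeholders.
from qonnx.core.modelwrapper import ModelWrapper

model = ModelWrapper("model_finn.onnx")
graph = model.graph
last_node = model.find_producer(graph.output[0].name)
assert last_node.op_type == "Mul", "expected the final node to be the 1/255 Mul"

graph.output[0].name = last_node.input[0]   # rewire output to the Mul's input
graph.node.remove(last_node)

# drop any stale value_info entry for the new output tensor, if present
for vi in list(graph.value_info):
    if vi.name == graph.output[0].name:
        graph.value_info.remove(vi)

model.save("model_finn_no_mul.onnx")
```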
[Update:]
It works with the old layers of v0.9 but not with the latest FINN v0.10 branch with RTL layers. Since I want more throughput, the RTL layers are preferable. But for some reason the output is almost entirely white, with only a few features here and there. I hope this can be solved.
Also, I am targeting the ZCU104 UltraScale+ Evaluation Board.
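For reference, the build configuration looks roughly like the sketch below (the throughput target, clock period, and file names are placeholders; the actual settings are in the attached build_dataflow_steps.py):

```python
# Rough sketch of a build_dataflow configuration targeting the ZCU104
# (placeholder values; see the attached build_dataflow_steps.py for the real ones).
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

cfg = build_cfg.DataflowBuildConfig(
    output_dir="output_zcu104",
    target_fps=3000,                    # placeholder throughput target
    synth_clk_period_ns=5.0,            # 200 MHz, placeholder
    board="ZCU104",                     # Zynq UltraScale+ evaluation board
    shell_flow_type=build_cfg.ShellFlowType.VIVADO_ZYNQ,
    generate_outputs=[
        build_cfg.DataflowOutputType.BITFILE,
        build_cfg.DataflowOutputType.PYNQ_DRIVER,
        build_cfg.DataflowOutputType.DEPLOYMENT_PACKAGE,
    ],
)
build.build_dataflow_cfg("model_finn_no_mul.onnx", cfg)
```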
Arbiter-glitch changed the title from "The model works with V9 but not with new RTL layers in V10." to "The model works with V0.9 but not with new RTL layers in V0.10." on Apr 23, 2024
Arbiter-glitch changed the title from "The model works with V0.9 but not with new RTL layers in V0.10." to "The model works with V0.9 but not with new layers in V0.10." on Apr 24, 2024
To find out why the output is different in FINN v0.10, I took FINN v0.9 and replaced each of the old layers with the new RTL layers, checking the output at each stage. It was only after replacing the old HLS Thresholding layers with the RTL ones that the output deviated from the expected result and produced the white image again. So the problem is most likely in the RTL Thresholding layers.
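A possible workaround (untested sketch, assuming FINN v0.10's preferred_impl_style node attribute and placeholder file paths) would be to pin just the thresholding layers back to their HLS implementation before the specialize-layers step, keeping everything else RTL. The builder's verification steps (cppsim/rtlsim against a reference input/output pair) might also help narrow down where the mismatch first appears.

```python
# Sketch (untested): force the HLS variant only for Thresholding layers,
# keeping everything else RTL, by setting preferred_impl_style before
# the specialize-layers step.
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.custom_op.registry import getCustomOp

model = ModelWrapper("intermediate_before_specialize_layers.onnx")  # placeholder path
for node in model.graph.node:
    if node.op_type == "Thresholding":   # HW abstraction op, pre-specialization
        getCustomOp(node).set_nodeattr("preferred_impl_style", "hls")
model.save("model_thresh_pinned_hls.onnx")
```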
[Image: Expected Output]
[Image: Actual Output Received]
Originally posted by @Arbiter-glitch in #1052