Cycle-free graph violated error in TFC for BNN-PYNQ example for 2w2a (weight 2 bits, activation 2 bits) #938
Hello, I have the same error in the same example with a custom network #936 |
I was also trying my custom network (a fully connected network with 2-bit weights and 2-bit activations) and got the error, so I tried the example notebook. I got the same error with the example net as well. |
Could you please share more details on your model, i.e., the initial ONNX graph right after export and the one right before the failing transformation? It might indeed be the same issue as #936, but without seeing the graph it is not possible to tell. |
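For reference, a minimal sketch of how such an intermediate graph can be dumped for sharing right before the failing step (file names here are placeholders, and the failing transformation is assumed to be the dataflow partitioning described in this issue):

```python
# Minimal sketch, assuming FINN v0.9 module paths; file names are placeholders.
from qonnx.core.modelwrapper import ModelWrapper
from finn.transformation.fpgadataflow.create_dataflow_partition import CreateDataflowPartition

model = ModelWrapper("tfc_w2a2_ready_for_partition.onnx")   # hypothetical input file
model.save("debug_before_dataflow_partition.onnx")          # graph to share / open in Netron
parent_model = model.transform(CreateDataflowPartition())   # the step that raises the error
parent_model.save("debug_dataflow_parent.onnx")
```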
Model.txt (attached). The first image at the top is of the model just after export to ONNX.
Let me know if you require anything else. |
Hm, for some reason your MatMul layers are not converted to the corresponding HLS layer. You are using the bnn-pynq notebooks as is, just loading the w2a2 model at the start? You changed nothing else? Then the problem is very likely that this example notebook is intended for binarized (or bipolar) neural networks. By loading the 2-bit variant, this is not the case any more for you. Thus the InferBinaryMatrixVectorActivation transformation no longer applies to your MatMul layers. You have two options now: either stick to the binarized model to follow the example notebook as it is, or adapt the "Conversion to HLS layers" cells such that they work with the 2-bit (or even more bits) models. For the second option, I suggest you have a look at the InferQuantizedMatrixVectorActivation transformation. |
You are using the bnn-pynq notebooks as is, just loading the w2a2 model at the start? You changed nothing else? :: Yes. I need to use w2a2 or w2a4 (as my intended final network does not give good accuracy below this), so I have to use either the w2a2 or w2a4 configuration. How do I call or use "InferBinaryMatrixVectorActivation"? Any references? (P.S.: When I was using the w1a1 configuration, I was able to successfully run the code.) Thanks |
Look into the first code cell of the "Conversion to HLS layers" section in the notebook; the third line there is the one that needs to be adapted. |
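A minimal sketch of what such an adaptation could look like, assuming FINN v0.9 module paths (file names are placeholders and the exact set of transforms may differ from the notebook cell):

```python
import finn.transformation.fpgadataflow.convert_to_hls_layers as to_hls
from qonnx.core.modelwrapper import ModelWrapper

model = ModelWrapper("tfc_w2a2_streamlined.onnx")  # hypothetical file name
# The w1a1 notebook converts MatMuls with InferBinaryMatrixVectorActivation;
# for 2-bit weights/activations the quantized variant is needed instead.
model = model.transform(to_hls.InferQuantizedMatrixVectorActivation())
# Remaining standalone thresholds and the final TopK conversion.
model = model.transform(to_hls.InferThresholdingLayer())
model = model.transform(to_hls.InferLabelSelectLayer())
model.save("tfc_w2a2_hls_layers.onnx")
```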
@iksnagreb Hi,
|
Nice to hear you are making some progress. Yes, empty runtime weights should be expected in this case. Regarding the mismatch when running on the board: I am just guessing, but likely the dataflow partition moved the input quantization (and maybe the output de-quantization, if you are expecting floating-point outputs) out of the hardware design, such that what is running on the device is purely integer, thus you are getting only integers back. That means, you probably have to quantize your inputs manually (and maybe de-quantize your outputs manually as well, or, alternatively compare against quantized expected outputs for verification). |
Thanks for the reply. Can you provide any example code or links that could be useful for debugging this issue? I do not know how to see which parts of the design end up in HW (PL) versus SW (PS); any calls useful for getting insight would be very handy. "That means, you probably have to quantize your inputs manually (and maybe de-quantize your outputs manually as well, or, alternatively compare against quantized expected outputs for verification)." --> How do I retrieve the information required to do this manually (like mean and dynamic range)? |
To see how FINN partitioned your model, you can have a look into the parent model after creating the dataflow partition: the original notebook saves this parent model as a separate .onnx file right after the CreateDataflowPartition transformation, which you can open in Netron. Guessing from the last model graph you provided, it seems to be everything up to (and including) the first MultiThreshold which is not included in the hardware - this makes sense, as it corresponds to the input quantization. Normally, you would now have to figure out which Quant node (probably just the first one as well) originally corresponds to this MultiThreshold and use the quantization parameters (scale, zero-point, etc.) from there. However, it looks like you have some Mul and Add nodes (we do not care about the Reshape here) preceding the MultiThreshold, which suspiciously look like a conversion to bipolar inputs. Bipolar inputs do not really make sense for your 2w2a model. Is this still the case? You might want to check again whether your inputs and outputs (as the Mul and Add following the last MatMul look suspiciously like the reverse of the bipolar conversion) are treated correctly or whether there are still some leftovers from the binary/bipolar example in there. |
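A minimal sketch of such manual (de-)quantization, assuming the scale and zero-point are read off the corresponding Quant nodes of the exported graph (the parameter values shown are placeholders):

```python
import numpy as np

def quantize_input(x, scale, zero_point, bitwidth, signed=False):
    # Map float inputs into the integer range expected by the accelerator.
    q = np.round(x / scale + zero_point)
    lo = -(2 ** (bitwidth - 1)) if signed else 0
    hi = (2 ** (bitwidth - 1)) - 1 if signed else (2 ** bitwidth) - 1
    return np.clip(q, lo, hi).astype(np.float32)

def dequantize_output(y_int, scale, zero_point):
    # Convert raw integer accelerator outputs back to float for comparison.
    return (np.asarray(y_int, dtype=np.float32) - zero_point) * scale

# Example with placeholder values taken from a hypothetical input Quant node:
x_int = quantize_input(np.random.rand(1, 784).astype(np.float32),
                       scale=1.0 / 255.0, zero_point=0.0, bitwidth=8)
```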
Hi @iksnagreb, finally the code is working fine on the board. I was able to push the input Quant layer inside the dataflow partition and to convert the INT output of the dataflow partition to FLOAT using the quantization values. Thanks a lot for your help. Now, moving forward, I want to play around with the folding factors and performance enhancement. I have a few queries regarding this; if you can answer them or direct me to the relevant section, it would be of great help:
Thanks |
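For reference, folding in FINN is usually adjusted through the PE and SIMD node attributes of the generated HLS layers; a minimal sketch, assuming FINN v0.9 APIs and placeholder file names and values:

```python
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.custom_op.registry import getCustomOp

model = ModelWrapper("tfc_w2a2_dataflow_model.onnx")  # hypothetical path
for node in model.get_nodes_by_op_type("MatrixVectorActivation"):
    inst = getCustomOp(node)
    # PE must divide the output dimension (MH), SIMD the input dimension (MW);
    # higher values mean more parallelism but also more resource usage.
    inst.set_nodeattr("PE", 16)    # placeholder value
    inst.set_nodeattr("SIMD", 16)  # placeholder value
model.save("tfc_w2a2_folded.onnx")
```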
Hi,
|
Hi,
the relevant files are attached herewith. FINN is creating dangling nodes for the other inputs. Any idea how to address this?
Here are a few observations of mine that might help someone:
|
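In case the dangling nodes are just leftover tensors or graph inputs, one generic cleanup that sometimes helps is the tidy-up pass from qonnx (a hedged sketch, assuming FINN v0.9 / qonnx module paths; it may or may not remove the specific nodes seen here):

```python
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.fold_constants import FoldConstants
from qonnx.transformation.general import RemoveStaticGraphInputs, RemoveUnusedTensors

model = ModelWrapper("model_with_dangling_nodes.onnx")  # hypothetical path
model = model.transform(FoldConstants())             # fold constant-only subgraphs
model = model.transform(RemoveUnusedTensors())       # drop tensors no node consumes
model = model.transform(RemoveStaticGraphInputs())   # drop graph inputs that have initializers
model.save("model_cleaned.onnx")
```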
Hi @fpjentzsch,
I still have some leftover nodes which don't go away. I am attaching the image of the model, which in turn gives the cycle-free graph error when I run the "Dataflow" transformation. |
Discussed in #937
Originally posted by pkdeep December 10, 2023
Hello, I am trying to run the BNN-PYNQ example notebook with the TFC network (for MNIST data) with weights = 2 bits and activations also 2 bits. I am getting the cycle-free graph violated error while generating the dataflow partition. Any workaround?
(docker image: xilinx/finn v0.9-2-gb3bdff11-dirty.xrt_202210.2.13.466_18.04-amd64-xrt)
Below is a screenshot of the error.