Early-Exit dev #15
base: master
Conversation
     return 1.0

-    def streams_in(self):
+    def streams_in(self, port_index):
Are these streams distinct from the separate ports?
Yeah, streams are per port, so port 0 could have a different number of parallel streams than port 1.
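For illustration, a minimal sketch of what that per-port interface could look like, assuming coarse_in holds one entry per input port as in the diff above (the class body is a guess, not the PR's actual code):

    class Layer:
        def __init__(self, rows, cols, channels, coarse_in, coarse_out,
                     ports_in=1, ports_out=1, data_width=16):
            self.rows = rows
            self.cols = cols
            self.channels = channels
            # one entry per port, so each port can carry its own
            # number of parallel streams
            self.coarse_in = coarse_in    # e.g. [2, 4] for two input ports
            self.coarse_out = coarse_out
            self.ports_in = ports_in
            self.ports_out = ports_out
            self.data_width = data_width

        def streams_in(self, port_index):
            # streams are per port: port 0 can have a different number
            # of parallel streams than port 1
            return self.coarse_in[port_index]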
-    def workload_in(self, index):
+    def workload_in(self, port_index):
If you have multiple inputs (ports) into a node, would you want workload_in to be the sum across all of them, or to treat the workload of each port independently?
Yeah, it would be the work done by each port independently. For a Concat layer, say, one input port might have to do more work (stream in more data) than the other.
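A hedged sketch of that behaviour for a hypothetical two-input concat layer (the channels_in attribute and the workload formula are assumptions for illustration):

    class ConcatLayer:
        def __init__(self, rows, cols, channels_in):
            self.rows = rows
            self.cols = cols
            self.channels_in = channels_in  # one channel count per input port

        def workload_in(self, port_index):
            # workload is evaluated per port: the port streaming more
            # channels does proportionally more work
            return self.rows * self.cols * self.channels_in[port_index]

    concat = ConcatLayer(rows=32, cols=32, channels_in=[64, 128])
    assert concat.workload_in(1) == 2 * concat.workload_in(0)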
        coarse_in: list[int],
        coarse_out: list[int],
        ports_in=1,
        ports_out=1,
        data_width
Not sure if you get the same error, but is the data_width non-default arg allowed to follow the defaulted port args?
Oh yeah, good point, tbh I haven't tested this. I think this weekend I'm going to get started on the testing. But for now, you can default it to 16. Someone will be working on quantisation soon as well, so all the bitwidth stuff will be changing soon.
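For reference, Python does reject this ordering: a non-default parameter after defaulted ones is a SyntaxError at import time. A small sketch of the failing signature from the diff and the fix of defaulting data_width to 16:

    # SyntaxError: non-default argument follows default argument
    #
    #     def __init__(self, coarse_in, coarse_out,
    #                  ports_in=1, ports_out=1, data_width):
    #         ...

    # defaulting data_width, as suggested above, fixes the ordering
    def __init__(self, coarse_in, coarse_out,
                 ports_in=1, ports_out=1, data_width=16):
        ...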
I tried changing it to 16 and got a weird error 😅 I'll try and figure it out
Also remove all the : list[int] hints as well, I think that's what's breaking it. In the future I think it would be good to give type hints, but let's leave it for now.
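That breakage would be consistent with running on a Python older than 3.9, where subscripting the builtin list in an annotation fails when the def is evaluated. A sketch of the two workarounds, assuming that is the cause:

    # On Python < 3.9 this raises
    # TypeError: 'type' object is not subscriptable
    #
    #     def __init__(self, coarse_in: list[int], coarse_out: list[int]):
    #         ...

    # either drop the hints for now, or use typing.List, which works
    # on older interpreters
    from typing import List

    def __init__(self, coarse_in: List[int], coarse_out: List[int]):
        ...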
Putting indices everywhere might not be the best plan - I can change this to adding single-port versions of the functions in those layers if that would be better.
Nice work! I think we should keep the functions with the port indices.
@@ -26,7 +28,7 @@ def __init__(
         sa =0.5,
         sa_out =0.5
     ):
-        Layer.__init__(self,dim,coarse_in,coarse_out,data_width)
+        Layer.__init__(self, [rows], [cols], [channels], [coarse_in], [coarse_out], data_width)
I didn't add the port information! Whoops, will change now.
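Continuing the Layer sketch from earlier in the thread, a hedged guess at what a single-port layer forwarding its port information might look like (the class name, keyword names, and values are all assumptions):

    class ExampleLayer(Layer):
        def __init__(self, rows, cols, channels, coarse_in, coarse_out,
                     data_width=16, sa=0.5, sa_out=0.5):
            # wrap the scalar dimensions in single-element lists (one
            # entry per port) and pass the port counts explicitly
            Layer.__init__(self, [rows], [cols], [channels],
                           [coarse_in], [coarse_out],
                           ports_in=1, ports_out=1, data_width=data_width)
            self.sa = sa
            self.sa_out = sa_out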
force-pushed from bf5fda6 to 3697d47
* started split layer
* working with main branch now (for current tests)
* added test for optimiser
* working on improving resource modelling
* updated resource models
* changes to split layer
* fixed merge conflict
* updated visualiser and fn model for splitlayer

Co-authored-by: AlexMontgomerie <[email protected]>
* ee parser work bringup
* started updating parser, temp save
* expanded subgraphs, updated explicit edges of subnodes
* added early exit dataflow edges to output of If operation/layer
* adding splitlayers to branching connections, removed extra nodes
* adding buffer layer, reworking ctrl edges
* updated parsering layers
* added Buffer and BufferLayer for hw optimiser
* ignoring egg dir, adding custom setup for recompilation ease
* updated Buffer layer/mod, added Exit layers, updated init
* updated add_hardware with new layers, linking control signals, fixing graph and ctrl edges
* updating add_dimensions function - savepoint
* fixing additional conflicts after rebase
* init hw for split layer, fixed comment typos
* working parser for branchnet onnx graph (somewhat verified)
force-pushed from 3697d47 to 4edc459
Co-authored-by: Ben Biggs <[email protected]>