[csrng/rtl] add reseed intervall sts err #22883
Conversation
Force-pushed from 156427c to 79f56a5
@vogelpi I rebased now. I think this can be reviewed, although I still get some failing tests; I'm working on resolving the failures.
Thanks for working on this @h-filali . There are a couple of issues in this PR I would like to discuss with you one by one. Let's set up a meeting.
hw/ip/csrng/rtl/csrng_cmd_stage.sv
Outdated
if (cmd_sts_err_q) begin
  state_d = Idle;
  cmd_sts_err_release = 1'b1;
I think you should only release the error when getting the ack. Right now you drop the error upon entering the CmdAck state. What's worse, cmd_sts_err_release doesn't seem to have an effect other than delaying the branch checking cmd_ack_i by one clock cycle. Does this make sense?
I changed it such that the check now happens directly in the cmd_stage. I think it should be better now.
(cmdreq_ccmd == INS) ? {{(CtrLen-1){1'b0}},1'b1} :
(cmdreq_ccmd == RES) ? {{(CtrLen-1){1'b0}},1'b1} :
(cmdreq_ccmd == INS) ? {{(CtrLen-1){1'b0}},1'b0} :
(cmdreq_ccmd == RES) ? {{(CtrLen-1){1'b0}},1'b0} :
?
As discussed in our 1:1 this is needed for the reseed_interval values to make any sense. Otherwise a value of 2 for reseed_interval would only allow 1 generate command between two reseeds (instead of two).
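The off-by-one can be sketched with a small Python model. This is an illustration only, not the RTL: it assumes the counter increments once per completed Generate and that Generates are blocked once the counter equals reseed_interval, so the initial value decides how many Generates fit into one interval.

```python
def generates_allowed(init, interval):
    """Count how many Generate commands complete before the reseed
    counter reaches the configured reseed_interval (abstract model)."""
    cnt = init
    n = 0
    while cnt != interval:
        cnt += 1  # counter increments once per completed Generate
        n += 1
    return n

# With an initial counter value of 1, reseed_interval = 2 permits only
# one Generate between reseeds; starting from 0 permits the expected two.
print(generates_allowed(1, 2))  # 1
print(generates_allowed(0, 2))  # 2
```

Under these assumptions, initializing the counter to 0 instead of 1 makes a reseed_interval of N allow exactly N Generates between reseeds.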
Yes, I agree that this makes sense.
Force-pushed from edda763 to e2247c6
Thanks for the update @h-filali . This is going in the right direction but it's not yet working as expected. Please see my comments for details.
hw/ip/csrng/rtl/csrng_core.sv
Outdated
// Set reseed_cnt_reached_d to true if the max number of generate requests between reseeds
// has been reached for the respective counter.
assign reseed_cnt_reached_d[ai] = (shid_q == ai) ?
                                  (state_db_rd_rc == reg2hw.reseed_interval.q) :
                                  reseed_cnt_reached_q[ai];
I fear this is not going to work properly when more than 1 instance is busy. Because:
- shid_q always updates 1 cycle after acmd_sop, which itself is high 1 cycle after the command stage of interest has won arbitration.
- But inside the command stage we check reseed_cnt_reached before initiating arbitration.

As a result, the command stage might use reseed_cnt_reached of the wrong instance.

To solve this, I see the following two options:
1. Feed the reseed counter values for all instances out of the state database to compare them to the limit "live" inside the command stage. This means you need 3 comparators instead of 1, but no muxes and no FFs. It's easy to do and will help us with upcoming changes for M4.
2. The check is performed upon writing to the state database instead. This has the same area requirements, but there is a risk of the comparison getting out of sync when SW updates the reseed limit in the meantime.

I would go for Option 1.
There are 3 different reseed count reached values, one for each endpoint. They all start off at 0, and for them to be increased, a generate command must have been sent at some point, which means shid_q == ai must have been true at some point and the value must have been updated. I probably should have added a comment explaining this, or maybe I'm missing something here. What do you think @vogelpi?
Hm, yes, thanks for clarifying. However, there is still an issue that the comparison can be out of date. What can happen:
1. The last Generate command ends and the state database is updated.
2. The next Generate is received and the command stage checks reseed_cnt_reached_q. It's showing the value from when that reseed counter was last read, not when it was written.
3. It's below the limit, so it's requesting arbitration.
4. It wins arbitration, shid_q is updated, and we read the state database and update reseed_cnt_reached_q.
5. Now it has reached the limit, but the Generate is already in the pipeline and we will ack it at the end.

So we are doing one Generate more than we should.
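The consequence of where the reached flag is refreshed can be sketched with a deliberately simplified Python model. The names and the event ordering are assumptions for illustration, not the actual RTL timing: refreshing the flag when the counter is read from the state database lags the write-back by one command, which lets one extra Generate through.

```python
def accepted_generates(interval, update_on):
    """Abstract model: a flopped 'reached' flag gates new Generates.
    The flag is refreshed either when the counter is read from the
    state DB ('read') or when the incremented value is written back
    ('write')."""
    rc = 0           # reseed counter as stored in the state DB
    reached = False  # models the flopped reached flag
    accepted = 0
    for _ in range(100):
        if reached:                     # command stage checks the flop first
            break
        if update_on == "read":
            reached = (rc == interval)  # sees the pre-increment value
        rc += 1                         # Generate completes, write-back
        if update_on == "write":
            reached = (rc == interval)  # sees the just-written value
        accepted += 1
    return accepted

print(accepted_generates(2, "write"))  # 2
print(accepted_generates(2, "read"))   # 3
```

In this model, updating on writes blocks after exactly interval Generates, while updating on reads accepts one Generate too many.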
I think an example of when the state is read is the error response or the acknowledgement. The command stage needs to read from the state db every time a command is acked, so the reseed_cnt_reached_d variables will be updated any time this happens. For a second Generate to be processed, the previous one has to have read from the state db first.
I'm now also convinced that doing it on state_db writes instead of state_db reads is safer. I changed it to writes.
Thanks @h-filali !
@@ -433,13 +449,18 @@ module csrng_cmd_stage import csrng_pkg::*; #(
assign cmd_stage_ack_o = cmd_ack_q;
What's missing here is to also set cmd_stage_ack_o in case of cmd_err_ack. Right now, just the status is set but the ack is not going out. As a result, CSRNG just swallows the command without doing anything and the app interface dies.
Ah yes totally missed that one. Let me fix that.
Force-pushed from e2247c6 to 078f877
@vogelpi Thanks for reviewing, I addressed your points. Hopefully this works now.
Force-pushed from 078f877 to 1ca97d0
This now looks good, thanks @h-filali ! Let's see what CI has to say to this :-)
Looks very reasonable to me. Just a small question on the condition under which the exceeded-reseed error is triggered.
hw/ip/csrng/rtl/csrng_core.sv
Outdated
// has been reached for the respective counter.
assign reseed_cnt_reached_d[ai] =
  state_db_wr_req && state_db_wr_req_rdy && (state_db_wr_inst_id == ai) ?
  (state_db_wr_rc == reg2hw.reseed_interval.q) :
Do we want this to be >=? Maybe that case can be reached if we are reseeding based on a large reseed value and then, when a lower reseed value is put in, the reseed count is already beyond it.
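A small Python sketch of that scenario, using hypothetical names and only illustrating the comparison itself: if the counter is already beyond a newly lowered reseed_interval, an == check never fires, while >= blocks immediately.

```python
def generates_until_blocked(rc, interval, op):
    """How many further Generates complete before the limit check fires.
    Capped at 1000 to model an == check that can never trigger."""
    def hit(c):
        return c >= interval if op == ">=" else c == interval
    n = 0
    while not hit(rc) and n < 1000:
        rc += 1
        n += 1
    return n

# Counter already at 5 when SW lowers reseed_interval to 3:
print(generates_until_blocked(5, 3, "=="))  # 1000 -> never blocks
print(generates_until_blocked(5, 3, ">="))  # 0    -> blocks immediately
```

In the normal case (counter below the limit) both comparisons behave the same; they only diverge when the counter has already passed a freshly lowered limit.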
This count goes up for each generate command that is completed. When the reseed_interval is reached, no further generates should be accepted. I guess we could add an assertion for that to make sure it doesn't happen. However, I already have another potential PR cooking that tests this.
Ok, it's not a huge deal. I was just thinking it might happen if your counter already has a higher value when you write your reseed interval. We can just put some text in the reseed interval register description saying you should reseed the block to guarantee the new reseed interval takes effect.
Ah now I see, thanks for clarifying. I guess I can do it in a follow up PR once this one is merged.
Yup, that's fine!
Thanks @marnovandermaas for having a look!
Force-pushed from 1ca97d0 to 0417776
Force-pushed from 0417776 to a9e28b6
This commit adds a new status error response that is triggered whenever the number of generates between reseeds exceeds the reseed_interval. Signed-off-by: Hakim Filali <[email protected]>
Force-pushed from a9e28b6 to 7b67a48
Please see the commit messages for info.
Resolves #16499
I still need to do some cleanup and debug the last part of the alert test vseq.
There seems to be an issue in DV where the wait for an acknowledgement doesn't behave as
expected.
I also need to rebase.
The first commit can be ignored since it is already merged.