[entropy_src] Fix handling of backpressure in the hardware pipeline #21799

Merged
vogelpi merged 6 commits into lowRISC:master from entropy-src_fix-dropping on Mar 6, 2024

Conversation

@vogelpi (Contributor) commented Mar 2, 2024

Now that the noise source is no longer disabled when the esrng FIFO fills up, the hardware pipeline no longer handles backpressure from the conditioner appropriately. This becomes very obvious with #21787, which fixes the max rate test as well as the CS AES halt interface (a main source of backpressure in the entropy source).

This PR contains a couple of commits to counteract this:

  1. The FIFO controls are reworked, the entropy drop point is moved to before the postht FIFO, and the sample counter is updated to keep the number of samples going into the conditioner constant, independent of samples being dropped. Even though this behavior should be fine in practice, it's very hard to verify with our DV environment as we can't (and don't want to) accurately model the internals of the pipeline in the scoreboard.
  2. As an alternative, a new FIFO - the distribution FIFO - is added between the postht, precon, observe and bypass FIFOs. The purpose of this FIFO is to absorb backpressure from the conditioner/CS AES halt interface such that we don't have to drop samples and can keep verifying the design. The FIFO has a parameterizable depth to adjust it to pessimistic conditions if needed. With a 32-bit wide, 2-entry deep FIFO things already work pretty well: the max rate test passes again with a rate of almost 90% (see the sketch below).
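
To make item 2 more concrete, below is a minimal sketch of how such a distribution FIFO could be instantiated with the prim_fifo_sync primitive. This is illustrative only and not the RTL of this PR: DistrFifoDepth and the sfifo_distr_* signal names are made up, and the exact prim_fifo_sync port list may differ slightly.

prim_fifo_sync #(
  .Width (32),              // 32-bit wide, matching the postht FIFO output
  .Pass  (1'b1),            // pass-through: zero latency when the FIFO is empty
  .Depth (DistrFifoDepth)   // parameterizable depth, e.g. 2
) u_prim_fifo_sync_distr (
  .clk_i    (clk_i),
  .rst_ni   (rst_ni),
  .clr_i    (!es_delayed_enable),     // clear when the pipeline is disabled
  .wvalid_i (sfifo_distr_push),       // push from the postht FIFO
  .wready_o (sfifo_distr_not_full),
  .wdata_i  (sfifo_distr_wdata),
  .rvalid_o (sfifo_distr_not_empty),
  .rready_i (sfifo_distr_pop),        // pop towards precon/observe/bypass FIFOs
  .rdata_o  (sfifo_distr_rdata),
  .full_o   (sfifo_distr_full),
  .depth_o  ()
);

The pass-through option gives zero latency when the FIFO is empty, so the FIFO only adds buffering while the conditioner applies backpressure.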

This resolves #21686 and is related to #20953.

What's still required and not yet part of this PR:

  • I need to investigate whether dropping entropy at the input of the bypass FIFO is desirable / okay in bypass mode. This should be okay, as discussed with @h-filali.
  • I need to investigate whether there is enough time between the health test done pulse and the pulse to trigger the conditioner such that the last tested samples can flow from the postht FIFO through the distr FIFO and precon FIFO into the conditioner. This is critical to keep the number of bits in the conditioner fixed (a spec requirement). Thinking more about this, it should never happen, as the distr FIFO is instantiated with the pass-through option (0 latency when empty) and it only fills up when the conditioner is active, i.e., before the conditioner becomes active, it should always be empty. But we should maybe write an SVA to ensure this (a sketch follows after this list).
  • There seems to be a kind of deadlock situation when re-enabling the module after having used the conditioner: as the conditioner might still hold some data from before the disable, the delayed-enable module doesn't enable the noise source, so no bits are coming in. No bits coming in means the conditioner will never run to produce the done pulse the delayed-enable module is waiting for... One way out of this is to enable the module in Firmware Override: Extract and Insert mode and push in some fake randomness to manually operate the conditioner. This is now fixed by [entropy_src] Align enable delay module with fixed CS AES Halt interface.
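
Regarding the second open item above, the SVA could be shaped roughly as follows, using the prim_assert.sv `ASSERT macro. This is just a sketch: sha3_start and sfifo_distr_depth are placeholder names, not necessarily the identifiers used in entropy_src_core.sv.

`include "prim_assert.sv"

// Sketch only: the distribution FIFO must be empty whenever the conditioner is
// triggered, otherwise the number of bits per conditioner run is no longer fixed.
`ASSERT(DistrFifoEmptyAtConditionerStart_A,
        $rose(sha3_start) |-> (sfifo_distr_depth == '0), clk_i, !rst_ni)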

@vogelpi (Contributor Author) commented Mar 5, 2024

@h-filali, I didn't get to take care of the three open items above, but I believe some of them are better handled in a separate PR.

Today, I've investigated some more waves to estimate the maximum entropy rate for a given depth of the new distribution FIFO. Based on this, I've now reduced the max delay of the CS AES halt interface for the max rate test (this interface generates a lot of backpressure). I still see some failing tests that I need to investigate tomorrow evening. Plus, I want to document this.

Would you mind reviewing this PR and providing feedback tomorrow please? This will be important to finish the PR and hopefully merge it soon. Thanks!

@vogelpi vogelpi requested review from andreaskurth and removed request for pamaury March 5, 2024 01:15
@h-filali left a comment

@vogelpi thanks a lot for taking care of this. This looks very good!
I mainly have some questions, but if we can clear those up we could merge this IMO.

Comment on lines +2362 to 2424
// The prim_packer_fifo primitive is constructed to only accept pushes if there is indeed space
// available. The pop signal of the preceding esrng FIFO is unconditional, meaning samples can
// be dropped before the esbit FIFO in case of backpressure. The samples are however still
// tested.
assign pfifo_esbit_push = rng_bit_en && sfifo_esrng_not_empty;
assign pfifo_esbit_clr = ~es_delayed_enable;
assign pfifo_esbit_pop = rng_bit_en && pfifo_esbit_not_empty && pfifo_postht_not_full;

Wouldn't this potentially leave us with non-contiguous data?

Contributor

Non-contiguous bits going into the conditioner: yes. I think that's unavoidable if bits can be dropped. But health tests will always be on all bits, thus contiguous.

Does this cause a problem with windowed health tests, though? I mean the window of bits that gets health-tested is no longer necessarily the window of bits that gets pushed into the conditioner. From what I'm told that's allowed by SP 800-90B, but I just wanted to double check.

@h-filali commented Mar 5, 2024

Yes, @vogelpi implemented it such that the counter isn't incremented in case entropy is dropped in continuous mode. This is allowed by the spec.
For the FW OV observe mode, we might need some way to prove that we aren't dropping anything.

Contributor Author (vogelpi)

My understanding is that this is the only way to do this in a spec-compliant way. I've also discussed the implementation with @johannheyszl and he shared my view.

As for the FW Override Observe mode: I think you are correct, @h-filali. In the observe variant, post-health-test entropy goes both into the observe FIFO and the precon FIFO. If we drop before the postht FIFO, the observed data is no longer contiguous. Good catch!

This could be solved by implementing a status bit / recoverable alert that asserts when something is dropped before the health test FIFO. This would also be beneficial for DV.
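
A rough sketch of what such a status bit could look like (hypothetical signal names following the naming scheme of the snippets above, not the RTL added by this PR): whenever a health-tested sample leaves the esrng FIFO but cannot enter the postht FIFO, latch a sticky flag that software can read and that could drive a recoverable alert.

// Sketch only; sfifo_esrng_pop and pfifo_postht_not_full are assumed names.
logic postht_entropy_drop_d, postht_entropy_drop_q;

// Set (and keep) the flag when a sample is popped while the postht FIFO is full.
assign postht_entropy_drop_d = postht_entropy_drop_q |
                               (sfifo_esrng_pop & ~pfifo_postht_not_full);

always_ff @(posedge clk_i or negedge rst_ni) begin
  if (!rst_ni) begin
    postht_entropy_drop_q <= 1'b0;
  end else begin
    postht_entropy_drop_q <= postht_entropy_drop_d;
  end
end

The flag could then be exposed via a recoverable alert status bit so both software and DV can detect the condition.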

Comment on lines +2397 to 2459
// The prim_packer_fifo primitive is constructed to only accept pushes if there is indeed space
// available. In case the single-bit mode is enabled, the pop signal of the preceding esbit FIFO
// is conditional on the full status of the postht FIFO, meaning backpressure can be handled. In
// case the single-bit mode is disabled, the pop signal of the preceding esrng FIFO is
// unconditional, meaning samples can be dropped before the esbit in case of backpressure. The
// samples are however still tested.
assign pfifo_postht_push = rng_bit_en ? pfifo_esbit_not_empty : sfifo_esrng_not_empty;

Same here.

hw/ip/entropy_src/rtl/entropy_src_core.sv (resolved)

I'm not sure where to put this, but I'll put it here. I'm not 100% convinced that we can just drop entropy as long as it's health tested. For normal operation I agree. However, for validation and estimating the min entropy of ENTROPY_SRC, I think we might need to be able to provide contiguous bits to firmware.
See Section 3.1.1, Section 3.1.4.1 or Section 6.3.3 in NIST SP 800-90B.

I guess we could argue that there should be no backpressure reaching the postht FIFO in the case where firmware is reading the entropy from the observe FIFO. Additionally, we can argue that the noise source produces entropy at a rate that is too slow for scenarios like the following to happen:

  • The esrng FIFO pushes entropy to the postht FIFO one cycle after the postht FIFO has been filled.
  • The esrng FIFO pushes entropy to the esbit FIFO one cycle after it has been filled.

We also have the assertion section in entropy_src_core.sv written by @andreaskurth to check whether we lose data anywhere. Do you think this is sufficient to guarantee no entropy is dropped?
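
For reference, an assertion in that spirit could be shaped like the sketch below. The signal names follow the snippets above, but this is not a copy of the existing assertion section in entropy_src_core.sv; the real checks would additionally be qualified with the conditions under which dropping is actually disallowed.

`include "prim_assert.sv"

// Sketch only: in multi-channel mode, a sample waiting in the esrng FIFO must
// always find space in the postht FIFO, otherwise the unconditional pop drops it.
`ASSERT(NoEsrngDropWhilePosthtFull_A,
        (sfifo_esrng_not_empty && !rng_bit_en) |-> pfifo_postht_not_full,
        clk_i, !rst_ni)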

Contributor

I see your point that for validation and min entropy estimation, no entropy bits may be dropped before the observe FIFO, and I think in the real world this holds, because SW can poll the Observe FIFO to extract the necessary bits for validation/estimation (4 kibit or 512 byte, if I'm informed correctly). Ibex can pop these bits from the Observe FIFO (and push them to SRAM for later use in validation/estimation algorithms) at a much faster rate than any noise source could provide them. (Between such 4 kibit windows, it's okay to drop entropy AFAIK.)

To address this concern of yours, we could:

  • Rephrase/extend the comment about entropy dropping to say "it's okay to drop entropy after health tests and before the conditioner outside validation/estimation".
  • In a top-level test that runs validation/min entropy estimation on Ibex, check that no entropy is dropped within a 4 kibit window. (This test can be added in a later, DV-focused milestone.)

I agree, something like this might be nice.

Contributor Author (vogelpi)

I think @h-filali is right that in Observe mode (not Extract & Insert) we can now drop entropy bits in two locations:

  • Before the postht FIFO, we drop upon backpressure from the conditioner. How fast Ibex extracts the data has zero impact on this: the dropping is not controlled by the observe FIFO fill level but solely by the conditioner. But if bits are dropped, the data in the observe FIFO isn't contiguous anymore. This is a problem. In the real world it shouldn't happen because the noise source is very slow. But we should be aware if it happens, so I'll add a status / alert bit for this. This will also help DV. And the existing test written by @pamaury can be extended to check that status bit as well.
  • Before the observe FIFO. This only affects data going into the observe FIFO, not the data going into the conditioner. How much is dropped depends on how fast software can read out the FIFO. There is already a status flag to check whether data has been dropped.

hw/ip/entropy_src/rtl/entropy_src_enable_delay.sv (outdated, resolved)
Comment on lines 61 to 65
// When CSRNG has just started operating its AES, it may take up to 48 cycles to acknowledge
// the request. When running ast/rng at the maximum rate (this is an unrealistic scenario
// primarily used for reaching coverage metrics) we reduce the max acknowledge delay to
// reduce backpressure and prevent entropy bits from being dropped from the pipeline as our
// scoreboard cannot handle this.

Why do we even allow the rng_max_delay to be 1?

Contributor

@h-filali has a fair point that it makes more sense to keep rng_max_delay within realistic bounds than to set m_aes_halt_agent_cfg.device_delay_max to an unrealistically low value because rng_max_delay is unrealistically low. IMO that's a concern for V3, though. And at that point, we may as well model backpressure in the scoreboard.

Contributor Author (vogelpi)

So far, this was needed to reach the coverage metrics with reasonable simulation time. To give you an idea of the noise source rates used at the moment:

  • On Earl Grey, the noise source will produce around 50 - 100 kbps. Assuming the ENTROPY_SRC runs at 100 MHz, this results in one 4-bit symbol every 8000 (at 50 kbps) to 4000 (at 100 kbps) clock cycles.
  • Regular tests generate symbols that are 1 - 12 clock cycles apart, on average that's one 4-bit symbol every 6.5 cycles.
  • The max rate test produces one 4-bit symbol every 2 cycles.
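
As a back-of-the-envelope check of these numbers (just the arithmetic spelled out, not code from the design or the DV environment):

// 100 MHz clock, 4-bit noise source symbols.
localparam int unsigned ClkFreqHz   = 100_000_000;
localparam int unsigned SymbolWidth = 4;

// 50 kbps  -> 12.5 ksymbols/s -> 100e6 / 12.5e3 = 8000 cycles per symbol
localparam int unsigned CyclesPerSymbolAt50kbps  = ClkFreqHz / ( 50_000 / SymbolWidth);
// 100 kbps -> 25 ksymbols/s  -> 100e6 / 25e3   = 4000 cycles per symbol
localparam int unsigned CyclesPerSymbolAt100kbps = ClkFreqHz / (100_000 / SymbolWidth);

So even the regular DV tests are roughly three orders of magnitude faster than the real noise source, and the max rate test is faster still.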

@h-filali commented Mar 5, 2024

@vogelpi for the first of your three open items I would argue that it's fine to drop in front of the bypass FIFO, since if the configuration is not FIPS compliant we don't really care about dropping data (as long as it's not dropped unnecessarily).
If the bypass configuration should be FIPS compliant, then dropping at the bypass FIFO is effectively the same as dropping at the esfinal FIFO.
IMO this is a question of efficiency, so ideally drops should only happen when the esfinal FIFO is full.

@pamaury (Contributor) commented Mar 5, 2024

Should this PR also update the documentation? Or are you planning to do it after all the problems are fixed?

@andreaskurth (Contributor) left a comment

LGTM overall, thanks @vogelpi!


@vogelpi (Contributor Author) commented Mar 5, 2024

@vogelpi for the first of your three open items I would argue that it's fine to drop in front of the bypass FIFO, since if the configuration is not FIPS compliant we don't really care about dropping data (as long as it's not dropped unnecessarily). If the bypass configuration should be FIPS compliant, then dropping at the bypass FIFO is effectively the same as dropping at the esfinal FIFO. IMO this is a question of efficiency, so ideally drops should only happen when the esfinal FIFO is full.

Thanks for sharing your thoughts. I also think it's fine to drop before the bypass FIFO. And it's also more efficient: if you drop the entire seed and the esfinal FIFO is then popped, you have to collect 384 bits again, whereas if you drop a 32-bit word and the esfinal FIFO is then popped, you can fill the FIFO again after having gathered just 32 more bits, which is a lot faster.

@vogelpi (Contributor Author) commented Mar 5, 2024

Should this PR also update the documentation? Or are you planning to do it after all the problems are fixed?

Yes, that's exactly my plan :-)

This commit cleans up and documents the control signals for the main
FIFOs of the pipeline, including the esrng, esbit, postht, precon and
esfinal FIFOs. Most of the changes simplify the code but don't alter
the behavior of the design, as the FIFO primitives used already
internally implement the logic to not accept pushes when full.

However, there are some important changes that are necessary:
1. The esrng FIFO no longer handles backpressure; the previous
   behavior wasn't spec compliant.
2. The sample drop point in case of backpressure is moved to after the
   health tests and the window counter controls are updated. This means
   in case of backpressure, samples are tested but they're not pushed
   into the postht FIFO (or the esbit FIFO in case of single-bit mode).
   The window timer doesn't increment to keep the number of samples
   ending up in the conditioner fixed, independent of backpressure.
   This is required by the spec.

Signed-off-by: Pirmin Vogel <[email protected]>
This commit switches all instances of prim_fifo_sync to use hardened
counters for the pointers.

Signed-off-by: Pirmin Vogel <[email protected]>
@vogelpi (Contributor Author) commented Mar 6, 2024

Thanks @h-filali and @andreaskurth for your reviews! I've extended this PR to:

  1. Include your feedback and most of the changes Hakim prepared in the other PR. The only exceptions are two comments regarding Firmware Override mode. It's actually just Firmware Override: Observe mode which is affected, and I've now documented this elsewhere.
  2. I've added an alert bit to inform software when entropy is dropped before the postht FIFO. This is particularly relevant for Firmware Override: Observe mode as correctly pointed out by Hakim.
  3. I've debugged (hopefully all) remaining failures in the max rate test. It turns out the worst-case latency is a lot higher than I thought. The worst case is the SHA3 core being active back to back (the first run starts when a block size worth of data has been pushed in, the second run is commanded by the main SM). In total, the precon FIFO can block writes for up to 175 clock cycles. The distr FIFO depth is left at 2, but I've changed the constraints of the max rate test to the minimal latency and removed one unnecessary clock cycle from the main SM (related to the CS AES halt interface, which is no longer handled by the main SM of ENTROPY_SRC).
  4. Fixed the bug in the enable delay module I mentioned earlier.

What is left for after M2:

  1. Adding an SVA to ensure the FIFOs are indeed emptied before the conditioner runs.
  2. Documentation
  3. Adjusting @pamaury 's observe mode TLT/SiVal test to check the new alert bit.
  4. Ironing out remaining test failures.

This commit adds a 32-bit wide distribution FIFO of configurable depth.
The FIFO is added between the postht FIFO, the observe FIFO, the bypass
FIFO and the precon FIFO. Its main purpose is to buffer entropy bits
while the conditioner is busy such that we don't have to drop entropy
bits from the hardware pipeline.

Dropping entropy bits is not a big issue per se as it's allowed by the
spec (when done after the health tests and in a way such that the
number of samples going into the conditioner is fixed). Also, under
normal operating conditions, noise source samples arrive at a very low
rate and dropping bits should not be needed.

However, verifying that the `correct` entropy bits are dropped is hard
and seems impossible for our current DV environment as it requires
modeling the hardware pipeline very accurately, which is undesirable.
Thus, the safest approach is to add this new distribution FIFO and
tune its depth parameter to handle potential backpressure from the
conditioner such that dropping bits is not necessary.

Signed-off-by: Pirmin Vogel <[email protected]>
@vogelpi (Contributor Author) commented Mar 6, 2024

I am now running a full regression with this PR. First results using smaller reseed multipliers look good: all tests had a pass rate above 90%. I'll check again tomorrow and merge the PR unless I see unexpected results or obvious CI failures.

This is useful to reduce the backpressure in the pipeline, as
backpressure eventually leads to entropy bits being dropped, which the
scoreboard cannot handle at the moment.

Signed-off-by: Pirmin Vogel <[email protected]>
Now that the CS AES Halt interface is handled by the SHA3 core itself,
the Sha3Quiesce / Sha3MsgDone states can be combined into one state.
This helps reduce the latency of the conditioner and thus the
backpressure onto the entropy pipeline.

The main_stage_rdy_i input signal can be removed as this is identical
to the sha3_state_vld_i input signal checked in Sha3Valid. At this
point in the FSM, it is always asserted and doesn't need to be
checked again.

Signed-off-by: Pirmin Vogel <[email protected]>
Previously, the CS AES Halt interface was only active when the SHA3
engine was performing the final Process operation for which the main SM
always acknowledges the completion with a done pulse. After fixing
the interface to always be active when the SHA3 engine is actively
processing data, the main SM only sends the done pulse for a minority
of SHA3 operations. This can cause the enable delay module to block
the re-enablement of the entropy pipeline after disabling as it keeps
waiting for a done pulse that is never going to arrive.

This commit fixes this issue by using the sha3_block_processed signal
instead of the done pulse. This signal is sent by the SHA3 engine
whenever the processing of a block finishes.

Signed-off-by: Pirmin Vogel <[email protected]>
@vogelpi (Contributor Author) commented Mar 6, 2024

The regression ran through. See below for the full report. Pass rates are again all above 92% and coverage numbers increased again as well. I am merging this.


ENTROPY_SRC Simulation Results

Wednesday March 06 2024 02:26:24 UTC

GitHub Revision: 382f5cdd27

Branch: HEAD

Testplan

Simulator: XCELIUM

Test Results

Stage Name Tests Max Job Runtime Simulated Time Passing Total Pass Rate
V1 smoke entropy_src_smoke 6.000s 21.491us 50 50 100.00
V1 csr_hw_reset entropy_src_csr_hw_reset 4.000s 24.111us 5 5 100.00
V1 csr_rw entropy_src_csr_rw 4.000s 87.364us 20 20 100.00
V1 csr_bit_bash entropy_src_csr_bit_bash 13.000s 612.260us 5 5 100.00
V1 csr_aliasing entropy_src_csr_aliasing 7.000s 115.951us 5 5 100.00
V1 csr_mem_rw_with_rand_reset entropy_src_csr_mem_rw_with_rand_reset 5.000s 52.630us 20 20 100.00
V1 regwen_csr_and_corresponding_lockable_csr entropy_src_csr_rw 4.000s 87.364us 20 20 100.00
entropy_src_csr_aliasing 7.000s 115.951us 5 5 100.00
V1 TOTAL 105 105 100.00
V2 firmware entropy_src_smoke 6.000s 21.491us 50 50 100.00
entropy_src_rng 7.617m 10.040ms 300 300 100.00
entropy_src_fw_ov 3.533m 5.041ms 288 300 96.00
V2 firmware_mode entropy_src_fw_ov 3.533m 5.041ms 288 300 96.00
V2 rng_mode entropy_src_rng 7.617m 10.040ms 300 300 100.00
V2 rng_max_rate entropy_src_rng_max_rate 14.267m 10.023ms 384 400 96.00
V2 health_checks entropy_src_rng 7.617m 10.040ms 300 300 100.00
V2 conditioning entropy_src_rng 7.617m 10.040ms 300 300 100.00
V2 interrupts entropy_src_rng 7.617m 10.040ms 300 300 100.00
V2 alerts entropy_src_rng 7.617m 10.040ms 300 300 100.00
entropy_src_functional_alerts 7.000s 796.716us 50 50 100.00
V2 stress_all entropy_src_stress_all 19.000s 695.097us 50 50 100.00
V2 functional_errors entropy_src_functional_errors 19.200m 10.012ms 978 1000 97.80
V2 intr_test entropy_src_intr_test 4.000s 21.877us 50 50 100.00
V2 alert_test entropy_src_alert_test 6.000s 99.637us 50 50 100.00
V2 tl_d_oob_addr_access entropy_src_tl_errors 7.000s 350.600us 20 20 100.00
V2 tl_d_illegal_access entropy_src_tl_errors 7.000s 350.600us 20 20 100.00
V2 tl_d_outstanding_access entropy_src_csr_hw_reset 4.000s 24.111us 5 5 100.00
entropy_src_csr_rw 4.000s 87.364us 20 20 100.00
entropy_src_csr_aliasing 7.000s 115.951us 5 5 100.00
entropy_src_same_csr_outstanding 6.000s 255.706us 20 20 100.00
V2 tl_d_partial_access entropy_src_csr_hw_reset 4.000s 24.111us 5 5 100.00
entropy_src_csr_rw 4.000s 87.364us 20 20 100.00
entropy_src_csr_aliasing 7.000s 115.951us 5 5 100.00
entropy_src_same_csr_outstanding 6.000s 255.706us 20 20 100.00
V2 TOTAL 2190 2240 97.77
V2S tl_intg_err entropy_src_sec_cm 6.000s 59.622us 5 5 100.00
entropy_src_tl_intg_err 8.000s 375.135us 20 20 100.00
V2S sec_cm_config_regwen entropy_src_rng 7.617m 10.040ms 300 300 100.00
entropy_src_cfg_regwen 6.000s 27.814us 50 50 100.00
V2S sec_cm_config_mubi entropy_src_rng 7.617m 10.040ms 300 300 100.00
V2S sec_cm_config_redun entropy_src_rng 7.617m 10.040ms 300 300 100.00
V2S sec_cm_intersig_mubi entropy_src_rng 7.617m 10.040ms 300 300 100.00
entropy_src_fw_ov 3.533m 5.041ms 288 300 96.00
V2S sec_cm_main_sm_fsm_sparse entropy_src_functional_errors 19.200m 10.012ms 978 1000 97.80
entropy_src_sec_cm 6.000s 59.622us 5 5 100.00
V2S sec_cm_ack_sm_fsm_sparse entropy_src_functional_errors 19.200m 10.012ms 978 1000 97.80
entropy_src_sec_cm 6.000s 59.622us 5 5 100.00
V2S sec_cm_rng_bkgn_chk entropy_src_rng 7.617m 10.040ms 300 300 100.00
V2S sec_cm_ctr_redun entropy_src_functional_errors 19.200m 10.012ms 978 1000 97.80
entropy_src_sec_cm 6.000s 59.622us 5 5 100.00
V2S sec_cm_ctr_local_esc entropy_src_functional_errors 19.200m 10.012ms 978 1000 97.80
V2S sec_cm_esfinal_rdata_bus_consistency entropy_src_functional_alerts 7.000s 796.716us 50 50 100.00
V2S sec_cm_tile_link_bus_integrity entropy_src_tl_intg_err 8.000s 375.135us 20 20 100.00
V2S TOTAL 75 75 100.00
V3 external_health_tests entropy_src_rng_with_xht_rsps 6.733m 10.096ms 50 50 100.00
V3 TOTAL 50 50 100.00
Unmapped tests entropy_src_intr 20.000s 305.328us 46 50 92.00
TOTAL 2466 2520 97.86

Coverage Results

Coverage Dashboard

Score Block Branch Statement Expression Toggle Fsm Assertion CoverGroup
86.52 98.29 95.65 98.42 95.88 88.02 97.92 90.49 57.91

Failure Buckets

  • xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_sha3_*/rtl/sha*.sv,505): Assertion KeccakIdleWhenNoRunHs_A has failed has 12 failures:
    • Test entropy_src_rng_max_rate has 9 failures.
      • 125.entropy_src_rng_max_rate.9014364120238965142270940635501474975683256433157869579065791325258650172874
        Line 1628, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/125.entropy_src_rng_max_rate/latest/run.log

          xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_sha3_0.1/rtl/sha3.sv,505): (time 10141745408 PS) Assertion tb.dut.u_entropy_src_core.u_sha3.KeccakIdleWhenNoRunHs_A has failed
          UVM_ERROR @ 10141745408 ps: (sha3.sv:505) [ASSERT FAILED] KeccakIdleWhenNoRunHs_A
          UVM_INFO @ 10141745408 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • 131.entropy_src_rng_max_rate.236899495760946650809828504548733155713753573564903556500719589735232925938
        Line 3358, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/131.entropy_src_rng_max_rate/latest/run.log

          xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_sha3_0.1/rtl/sha3.sv,505): (time 10017586171 PS) Assertion tb.dut.u_entropy_src_core.u_sha3.KeccakIdleWhenNoRunHs_A has failed
          UVM_ERROR @ 10017586171 ps: (sha3.sv:505) [ASSERT FAILED] KeccakIdleWhenNoRunHs_A
          UVM_INFO @ 10017586171 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • ... and 7 more failures.

    • Test entropy_src_fw_ov has 3 failures.
      • 138.entropy_src_fw_ov.41650487085038753019384779996051077128973553335077624474077811949132628141545
        Line 760, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/138.entropy_src_fw_ov/latest/run.log

          xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_sha3_0.1/rtl/sha3.sv,505): (time 5054016849 PS) Assertion tb.dut.u_entropy_src_core.u_sha3.KeccakIdleWhenNoRunHs_A has failed
          UVM_ERROR @ 5054016849 ps: (sha3.sv:505) [ASSERT FAILED] KeccakIdleWhenNoRunHs_A
          UVM_INFO @ 5054016849 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • 255.entropy_src_fw_ov.99246117437635156296301253462931888139548867739791903369676324299149173648379
        Line 1026, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/255.entropy_src_fw_ov/latest/run.log

          xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_sha3_0.1/rtl/sha3.sv,505): (time 5006449107 PS) Assertion tb.dut.u_entropy_src_core.u_sha3.KeccakIdleWhenNoRunHs_A has failed
          UVM_ERROR @ 5006449107 ps: (sha3.sv:505) [ASSERT FAILED] KeccakIdleWhenNoRunHs_A
          UVM_INFO @ 5006449107 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • ... and 1 more failures.

  • UVM_FATAL (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.fifo_state_err (addr=*) == * has 10 failures:
    • Test entropy_src_functional_errors has 10 failures.
      • 115.entropy_src_functional_errors.75078071060375662640251049559421254546113714057547254303479306202821150308543
        Line 144, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/115.entropy_src_functional_errors/latest/run.log

          UVM_FATAL @ 10060329309 ps: (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.fifo_state_err (addr=0xf698aad8) == 0x1
          UVM_INFO @ 10060329309 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • 130.entropy_src_functional_errors.25461873071430463531742117063295861304747331827716936655703524760626309582683
        Line 144, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/130.entropy_src_functional_errors/latest/run.log

          UVM_FATAL @ 10011751194 ps: (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.fifo_state_err (addr=0x7ee7aed8) == 0x1
          UVM_INFO @ 10011751194 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • ... and 8 more failures.

  • UVM_ERROR (cip_base_scoreboard.sv:228) [scoreboard] Check failed expected_alert[alert_name].expected == * (* [*] vs * [*]) alert fatal_alert triggered unexpectedly has 9 failures:
    • Test entropy_src_fw_ov has 9 failures.
      • 13.entropy_src_fw_ov.19325383436462205450690559581738838570357876489592831150924262186300497331006
        Line 242, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/13.entropy_src_fw_ov/latest/run.log

          UVM_ERROR @ 3300461535 ps: (cip_base_scoreboard.sv:228) [uvm_test_top.env.scoreboard] Check failed expected_alert[alert_name].expected == 1 (0 [0x0] vs 1 [0x1]) alert fatal_alert triggered unexpectedly
          UVM_INFO @ 3300461535 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • 23.entropy_src_fw_ov.62507058252234356379411256938278996405922774101659437699220346838321579602154
        Line 251, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/23.entropy_src_fw_ov/latest/run.log

          UVM_ERROR @ 1644446641 ps: (cip_base_scoreboard.sv:228) [uvm_test_top.env.scoreboard] Check failed expected_alert[alert_name].expected == 1 (0 [0x0] vs 1 [0x1]) alert fatal_alert triggered unexpectedly
          UVM_INFO @ 1644446641 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • ... and 7 more failures.

  • UVM_FATAL (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.fifo_read_err (addr=*) == * has 8 failures:
    • Test entropy_src_functional_errors has 8 failures.
      • 60.entropy_src_functional_errors.69432113377300602794185706050153107335465002105915429934503847089561516782266
        Line 144, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/60.entropy_src_functional_errors/latest/run.log

          UVM_FATAL @ 10012923922 ps: (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.fifo_read_err (addr=0x4aaba9d8) == 0x1
          UVM_INFO @ 10012923922 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • 95.entropy_src_functional_errors.61555795615177738986511809021561705918211246891227839007927347838860639638335
        Line 144, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/95.entropy_src_functional_errors/latest/run.log

          UVM_FATAL @ 10015050053 ps: (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.fifo_read_err (addr=0x9b131ad8) == 0x1
          UVM_INFO @ 10015050053 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • ... and 6 more failures.

  • UVM_FATAL (entropy_src_base_vseq.sv:430) [entropy_src_rng_vseq] Timeout encountered while reading TlSrcObserveFIFO has 5 failures:
    • Test entropy_src_rng_max_rate has 5 failures.
      • 68.entropy_src_rng_max_rate.52719330572346168348803508491307307099287013464460386850938618140576376595942
        Line 1163, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/68.entropy_src_rng_max_rate/latest/run.log

          UVM_FATAL @ 3055600829 ps: (entropy_src_base_vseq.sv:430) [uvm_test_top.env.virtual_sequencer.entropy_src_rng_vseq] Timeout encountered while reading TlSrcObserveFIFO
          UVM_INFO @ 3055600829 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • 91.entropy_src_rng_max_rate.78526044938709536394345996291139368202783654763703308156616874379357117175754
        Line 2762, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/91.entropy_src_rng_max_rate/latest/run.log

          UVM_FATAL @ 5784978381 ps: (entropy_src_base_vseq.sv:430) [uvm_test_top.env.virtual_sequencer.entropy_src_rng_vseq] Timeout encountered while reading TlSrcObserveFIFO
          UVM_INFO @ 5784978381 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • ... and 3 more failures.

  • xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_entropy_src_*/rtl/entropy_src_core.sv,3377): Assertion Final_PreconFifoPushedPostStartup_A has failed has 4 failures:
    • Test entropy_src_intr has 4 failures.
      • 10.entropy_src_intr.65313915749895005563398339137349252441261413720375532290713804988405208610633
        Line 187, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/10.entropy_src_intr/latest/run.log

          xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_entropy_src_0.1/rtl/entropy_src_core.sv,3377): (time 580522373 PS) Assertion tb.dut.u_entropy_src_core.Final_PreconFifoPushedPostStartup_A has failed
          xmsim: *W,SLFINV: Call to process::self() from invalid process; returning null.
          xmsim: *W,SLFINV: Call to process::self() from invalid process; returning null.
          xmsim: *W,SLFINV: Call to process::self() from invalid process; returning null.
          xmsim: *W,SLFINV: Call to process::self() from invalid process; returning null.
        
      • 27.entropy_src_intr.115190854621060026730307960212597583458477795883177744738146018808699503161341
        Line 187, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/27.entropy_src_intr/latest/run.log

          xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_entropy_src_0.1/rtl/entropy_src_core.sv,3377): (time 919545426 PS) Assertion tb.dut.u_entropy_src_core.Final_PreconFifoPushedPostStartup_A has failed
          xmsim: *W,SLFINV: Call to process::self() from invalid process; returning null.
          xmsim: *W,SLFINV: Call to process::self() from invalid process; returning null.
          xmsim: *W,SLFINV: Call to process::self() from invalid process; returning null.
          xmsim: *W,SLFINV: Call to process::self() from invalid process; returning null.
        
      • ... and 2 more failures.

  • UVM_FATAL (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.sfifo_observe_err (addr=*) == * has 4 failures:
    • Test entropy_src_functional_errors has 4 failures.
      • 108.entropy_src_functional_errors.54538816356270949911518707103423393393169591313494601634860942106370440742968
        Line 144, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/108.entropy_src_functional_errors/latest/run.log

          UVM_FATAL @ 10023461344 ps: (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.sfifo_observe_err (addr=0x9e0554d8) == 0x1
          UVM_INFO @ 10023461344 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • 325.entropy_src_functional_errors.63393307184708212108166245119231874618390580427122436098188287714760394590512
        Line 144, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/325.entropy_src_functional_errors/latest/run.log

          UVM_FATAL @ 10032547504 ps: (csr_utils_pkg.sv:577) [csr_utils::csr_spinwait] timeout entropy_src_reg_block.err_code.sfifo_observe_err (addr=0xb643e2d8) == 0x1
          UVM_INFO @ 10032547504 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • ... and 2 more failures.

  • xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_entropy_src_*/rtl/entropy_src_core.sv,3377): Assertion AtReset_PreconFifoPushedPostStartup_A has failed has 2 failures:
    • Test entropy_src_rng_max_rate has 2 failures.
      • 64.entropy_src_rng_max_rate.61940996003561568501075383516090172813441435529220047599906761457503668315049
        Line 4394, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/64.entropy_src_rng_max_rate/latest/run.log

          xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_entropy_src_0.1/rtl/entropy_src_core.sv,3377): (time 8456419056 PS) Assertion tb.dut.u_entropy_src_core.AtReset_PreconFifoPushedPostStartup_A has failed
          UVM_ERROR @ 8456419056 ps: (entropy_src_core.sv:3377) [ASSERT FAILED] AtReset_PreconFifoPushedPostStartup_A
          UVM_INFO @ 8456419056 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

      • 180.entropy_src_rng_max_rate.115400036172159338336715333406832059302930444117018374853991545620235281954284
        Line 2887, in log /home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/180.entropy_src_rng_max_rate/latest/run.log

          xmsim: *E,ASRTST (/home/dev/src/scratch/HEAD/entropy_src-sim-xcelium/default/src/lowrisc_ip_entropy_src_0.1/rtl/entropy_src_core.sv,3377): (time 5499885383 PS) Assertion tb.dut.u_entropy_src_core.AtReset_PreconFifoPushedPostStartup_A has failed
          UVM_ERROR @ 5499885383 ps: (entropy_src_core.sv:3377) [ASSERT FAILED] AtReset_PreconFifoPushedPostStartup_A
          UVM_INFO @ 5499885383 ps: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
          --- UVM Report catcher Summary ---
        

@vogelpi vogelpi merged commit 2b8870c into lowRISC:master Mar 6, 2024
32 checks passed
@vogelpi vogelpi deleted the entropy-src_fix-dropping branch March 20, 2024 22:09
Successfully merging this pull request may close these issues.

[entropy_src] Determine when and what FIFOs can drop entropy