Draft HPDIcache (extended hpdcache) #2506

Draft
wants to merge 4 commits into base: feature/interconnect

Conversation

takeshiho0531
Contributor

  • extended version of cva6_hpdcache_axi_arbiter: support for HPICache
  • extended version of cva6_hpdcache_subsystem: support for HPICache
  • cva6_hpdcache_icache_if_adapter: new adapter for the icache

@takeshiho0531
Contributor Author

@yanicasa
Could you explain how to differentiate the usage of the dreq_o.ready signal for the core from the fetch_obi_rsp_o.gnt signal?

@yanicasa
Contributor

Yes, sure:

  • dreq_o.req && dreq_o.ready grants the virtual address, so that the cache index can be decoded (outside the OBI protocol).
  • fetch_obi_rsp_o.gnt grants the physical address (after the MMU) --> end of the OBI address phase (see the sketch below).
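
In (very simplified) adapter terms it looks like this; the module and signal names below are only illustrative, not the actual cva6_hpdcache_icache_if_adapter code:

```systemverilog
// Rough sketch of the two grant points (illustrative names only).
module handshake_sketch (
    input  logic clk_i,
    input  logic rst_ni,
    // virtual-address request from the core (outside the OBI protocol)
    input  logic dreq_req_i,
    output logic dreq_ready_o,   // grants the virtual address -> cache index decode
    // OBI address phase carrying the physical address (after the MMU)
    input  logic obi_req_i,
    output logic obi_gnt_o,      // grants the physical address -> end of address phase
    input  logic cache_ready_i   // hypothetical internal "can accept" status
);
  // Handshake 1: accept the virtual address so the cache index can be decoded.
  assign dreq_ready_o = cache_ready_i;

  // Handshake 2: close the OBI address phase once the physical address is presented.
  assign obi_gnt_o = obi_req_i && cache_ready_i;
endmodule
```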

Hope this answers your question!

@takeshiho0531
Contributor Author

@yanicasa
Thank you for your reply!

Considering that the physical address is generally sent one cycle after dreq_i.req && dreq_o.ready, is it correct in general for the HPDIcache adapter to send the OBI grant signal one cycle after dreq_i.req && dreq_o.ready, so that the icache receives the physical address (since HPDIcache only has one ready signal)?
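
Roughly, my current adapter logic follows this idea (a simplified sketch with illustrative, flattened signal names, not the exact RTL):

```systemverilog
// Simplified sketch (illustrative names): the OBI grant is asserted one
// cycle after the dreq handshake, when the physical address should arrive.
module delayed_gnt_sketch (
  input  logic clk_i,
  input  logic rst_ni,
  input  logic dreq_req_i,    // virtual-address request from the core
  input  logic dreq_ready_i,  // ready returned by the cache
  input  logic obi_req_i,     // OBI address phase request
  output logic obi_gnt_o      // OBI grant, delayed by one cycle
);
  logic dreq_accepted_q;

  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni) dreq_accepted_q <= 1'b0;
    else         dreq_accepted_q <= dreq_req_i && dreq_ready_i;
  end

  // grant the OBI address phase one cycle after the virtual-address handshake
  assign obi_gnt_o = obi_req_i && dreq_accepted_q;
endmodule
```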

I am implementing HPDCache as an icache (HPICache) based on this understanding, but it is not functioning correctly :‑(
Since HPICache can handle multiple requests, I think the frontend is manipulating its if_ready signal to keep advancing the fetch address. However, when bp_valid=1 is triggered by the fetched data returning from the icache, the fetch address that has already been advanced is not taken back, so the base address for the jump becomes wrong.

In the diagram below, npc_d advances at the two points around 632 ps where if_ready=1 (circled in yellow), which also advances addr; after that, bp_valid=1 occurs. However, since the base address for the jump is wrong, the jump destination becomes wrong as well...

The frontend logic is complicated, and I’m getting confused. Do you have any idea how I could solve this problem...?

[waveform screenshot showing npc_d, if_ready, addr, and bp_valid]

@yanicasa
Contributor

yanicasa commented Sep 24, 2024

As a preamble, I want to mention that the verification of the frontend modifications for the OBI is still ongoing. With the historical icache, we seem to have reached an operational point. Recently, we added a UVM OBI agent in active mode, and when randomization is enabled, some bugs appear.

I am currently analyzing this, and it seems that the ready signal from the cache is more sensitive than expected. There may be internal constraints within the current icache that cause the ready signal to systematically drop during the kill_req phase, unlike what can occur with the hpicache?

I'm sharing this because the current issue with UVM arises when the cache grants the OBI request faster than the icache does, causing unwanted outstanding transactions. Since the frontend does not support outstanding transactions, the address gets out of sync in the instruction queue.

I’m now testing a frontend modification where the FSM obi_r is replaced with a FIFO, which I believe will improve the situation.

To answer the question, it is possible to "grant" all requests directly, and it even seems to me that "grant" can remain static at 1. You just have to keep in mind that the master is not obliged to hold the address, and that therefore the slave must have recorded it!
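
In other words, something along these lines should be enough on the slave side (a simplified sketch; the names and the address width are only illustrative):

```systemverilog
// Simplified sketch: gnt stays statically at 1, so the slave itself must
// record the address, because the master is not obliged to hold it after
// the address phase.
module static_gnt_sketch #(
  parameter int unsigned AddrWidth = 56  // illustrative physical address width
) (
  input  logic                 clk_i,
  input  logic                 rst_ni,
  input  logic                 obi_req_i,
  input  logic [AddrWidth-1:0] obi_addr_i,
  output logic                 obi_gnt_o,
  output logic [AddrWidth-1:0] paddr_q_o   // address captured by the slave
);
  assign obi_gnt_o = 1'b1;  // grant every OBI request immediately

  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni)                     paddr_q_o <= '0;
    else if (obi_req_i && obi_gnt_o) paddr_q_o <= obi_addr_i;  // record at the handshake
  end
endmodule
```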

Some details that could help:

  • on new access, fetch_dreq_i.ready causes an NPC update
  • dreq.req + the virtual address can be re-sent as many times as needed, as long as the OBI transaction has not been triggered
  • kill_req can be sent to the cache at any time, but it is not necessary when the OBI transaction has not been triggered.
  • once the OBI transaction has been triggered, kill_req indicates that the cache should send a response on obi_r_channel as soon as possible; in that case the returned data can be valid or invalid (see the sketch after this list).
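
For that last point, a rough sketch of the behaviour on the cache side (illustrative names, assuming the cache can answer in the same cycle; not the actual hpdcache logic, which would latch the kill if it cannot respond immediately):

```systemverilog
// Simplified sketch: once the address phase has been granted, kill_req
// means "answer on the R channel as soon as possible"; rvalid is still
// asserted, but the returned data may be invalid in that case.
module kill_req_sketch (
  input  logic clk_i,
  input  logic rst_ni,
  input  logic obi_req_i,
  input  logic obi_gnt_i,          // grant of the OBI address phase
  input  logic kill_req_i,         // kill request from the frontend
  input  logic cache_rsp_valid_i,  // cache has data ready for the R channel
  output logic obi_rvalid_o
);
  // one outstanding transaction: granted but not yet answered on the R channel
  logic obi_pending_q;

  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni) begin
      obi_pending_q <= 1'b0;
    end else begin
      if (obi_rvalid_o)           obi_pending_q <= 1'b0;  // response sent
      if (obi_req_i && obi_gnt_i) obi_pending_q <= 1'b1;  // new address phase granted
    end
  end

  // answer when the data is ready, or right away if the request is killed;
  // in the killed case the returned data may be invalid
  assign obi_rvalid_o = obi_pending_q && (cache_rsp_valid_i || kill_req_i);
endmodule
```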

I also notice that the VCS simulation doesn't work in this current PR; I will try on my side to see if I can help you.
