Draft HPDIcache (extended hpdcache) #2506
base: feature/interconnect
Conversation
takeshiho0531
commented
Sep 23, 2024
- extended ver. of cva6_hpdcache_axi_arbiter: support for HPICache
- extended ver. of cva6_hpdcache_subsystem: support for HPICache
- cva6_hpdcache_icache_if_adapter: new adapter for Icache
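To give a feel for what such an adapter does, here is a hypothetical sketch. The port and signal names below are purely illustrative, not the actual interface of `cva6_hpdcache_icache_if_adapter` in this PR: it shows a fetch-side request being forwarded to an HPDCache-style request channel, with the cache's ready back-pressuring the frontend and the frontend's kill propagated as an abort.

```systemverilog
// Illustrative sketch only -- signal names are assumptions, not the
// real ports of cva6_hpdcache_icache_if_adapter.
module icache_if_adapter_sketch (
  input  logic        clk_i,
  input  logic        rst_ni,
  // Frontend side (illustrative subset of a CVA6-style icache request)
  input  logic        req_i,        // fetch request valid
  input  logic [63:0] vaddr_i,      // fetch address
  input  logic        kill_req_i,   // frontend kills the in-flight request
  output logic        ready_o,      // adapter can accept a request
  // Cache side (illustrative subset of an HPDCache-style request channel)
  output logic        core_req_valid_o,
  input  logic        core_req_ready_i,
  output logic [63:0] core_req_addr_o,
  output logic        core_req_abort_o
);
  // Forward the request; a killed request is not presented to the cache,
  // and the kill is also signalled as an abort for a request already sent.
  assign core_req_valid_o = req_i & ~kill_req_i;
  assign core_req_addr_o  = vaddr_i;
  assign core_req_abort_o = kill_req_i;
  // The cache's ready directly back-pressures the frontend.
  assign ready_o          = core_req_ready_i;
endmodule
```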
@yanicasa
❌ failed run, report available here.
Yes, sure:
Hope this answers your question!
@yanicasa Considering that the physical address is generally sent one cycle after, I am implementing HPDCache as an icache (HPICache) based on this understanding, but it is not functioning correctly :-(

In the diagram below, npc_d advances at the two points where. The frontend logic is complicated, and I'm getting confused. Do you have any idea how I could solve this problem?
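For reference, the "physical address one cycle later" pattern can be sketched as follows. This is only my reading of an HPDCache-style virtually-indexed request, where the index is sent first and the translated tag follows with an abort option; the signal names are assumptions and should be checked against the HPDCache documentation:

```systemverilog
// Sketch of the two-cycle request pattern (names are assumptions):
//
// cycle 0: send the virtually-indexed request
//   core_req_valid = 1'b1;
//   core_req.addr_offset = vaddr[IDX_HI:0]; // index bits, available early
//   core_req.phys_indexed = 1'b0;           // tag will arrive next cycle
//
// cycle 1: the MMU delivers the translation result
//   core_req_tag   = paddr[TAG_HI:IDX_HI+1]; // late physical tag
//   core_req_abort = tlb_miss | fetch_ex;    // cancel on failed translation
```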
As a preamble, I want to mention that verification of the frontend modifications for the OBI is still ongoing. With the historical icache, we seem to have reached an operational point. Recently we added a UVM OBI agent in active mode, and when randomization is enabled some bugs appear. I am currently analyzing this, and it seems that the ready signal from the cache is more sensitive than expected. There may be internal constraints within the current icache that cause the ready signal to systematically drop during the kill_req phase, unlike what can occur with the hpicache?

I'm sharing this because the current issue with UVM arises when the cache grants OBI requests faster than the icache, causing unwanted outstanding transactions. Since the frontend does not support outstanding transactions, the address gets out of sync in the instruction queue. I'm now testing a frontend modification where the obi_r FSM is replaced with a FIFO, which I believe will improve the situation.

To answer the question: it is possible to "grant" all requests directly, and it even seems to me that "grant" can remain static at 1. You just have to keep in mind that the master is not obliged to maintain the address, and that therefore the slave must have recorded it! Some details that could help:
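The last point above, that with a static grant the slave must record the address itself, can be sketched like this. The port names are illustrative rather than the exact OBI port names used in the CVA6 frontend: once `gnt` is tied high, the address is only guaranteed valid while `req` is asserted, so the slave latches it on the accepted handshake.

```systemverilog
// Minimal sketch: static grant on an OBI-style A-channel, with the
// address captured on the accepted request. Names are illustrative.
module obi_addr_capture #(
  parameter int unsigned AddrWidth = 32
) (
  input  logic                 clk_i,
  input  logic                 rst_ni,
  input  logic                 req_i,   // A-channel request valid
  output logic                 gnt_o,   // may legally stay at 1'b1
  input  logic [AddrWidth-1:0] addr_i,  // only stable while req_i is high
  output logic [AddrWidth-1:0] addr_q   // latched copy for later phases
);
  // Grant everything immediately; back-pressure is handled elsewhere.
  assign gnt_o = 1'b1;

  // The master may change addr_i the cycle after the handshake, so the
  // slave records it at the req && gnt acceptance point.
  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni)              addr_q <= '0;
    else if (req_i && gnt_o)  addr_q <= addr_i;
  end
endmodule
```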
I also notice that the VCS simulation doesn't work in this current PR; I will try on my side to see if I can help you.