core: mm: ensure all pager VA space is mapped with small pages
Fix can_map_at_level() to ensure all memory areas related to the
pager's pageable virtual memory are mapped with small pages. This
fixes an issue seen when the pager physical RAM ends on a section
boundary (e.g. 512MB, or 2MB in the LPAE case): the virtual memory
mapping above that boundary was prepared with pgdir or wider MMU
tables, while the pager implementation expects 4kB page MMU tables.

Signed-off-by: Etienne Carriere <etienne.carriere@foss.st.com>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
etienne-lms authored and jforissier committed Jul 18, 2024
1 parent e5500ff commit bfb714a
Showing 1 changed file with 3 additions and 2 deletions.
core/mm/core_mmu.c (3 additions, 2 deletions):

```diff
@@ -1805,10 +1805,11 @@ static bool can_map_at_level(paddr_t paddr, vaddr_t vaddr,
 
 #ifdef CFG_WITH_PAGER
 	/*
-	 * If pager is enabled, we need to map tee ram
+	 * If pager is enabled, we need to map TEE RAM and the whole pager
 	 * regions with small pages only
 	 */
-	if (map_is_tee_ram(mm) && block_size != SMALL_PAGE_SIZE)
+	if ((map_is_tee_ram(mm) || mm->type == MEM_AREA_PAGER_VASPACE) &&
+	    block_size != SMALL_PAGE_SIZE)
 		return false;
 #endif
```
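The effect of the fix can be modeled in isolation. The sketch below is a simplified, self-contained approximation, not the real OP-TEE code: the `enum mem_area_type` values and `map_is_tee_ram()` helper mirror OP-TEE naming, but the real map-area struct and the rest of `can_map_at_level()` are omitted, and `pager_allows_block()` is a hypothetical name for just the pager-related check. It shows why, after the fix, a pager VA-space area rejects any block wider than a 4kB small page (such as a 2MB LPAE section), while unrelated areas may still use wide blocks.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define SMALL_PAGE_SIZE	0x1000UL	/* 4 kB small page */
#define SECTION_SIZE	0x200000UL	/* 2 MB pgdir-level block (LPAE) */

/* Simplified stand-ins for OP-TEE's memory area types */
enum mem_area_type {
	MEM_AREA_TEE_RAM,
	MEM_AREA_PAGER_VASPACE,
	MEM_AREA_IO_SEC,
};

static bool map_is_tee_ram(enum mem_area_type type)
{
	return type == MEM_AREA_TEE_RAM;
}

/*
 * Hypothetical helper mirroring the fixed condition in
 * can_map_at_level(): with CFG_WITH_PAGER, TEE RAM and the pager
 * VA space must be mapped with small pages only, so any wider
 * block size is rejected.
 */
static bool pager_allows_block(enum mem_area_type type, size_t block_size)
{
	if ((map_is_tee_ram(type) || type == MEM_AREA_PAGER_VASPACE) &&
	    block_size != SMALL_PAGE_SIZE)
		return false;
	return true;
}
```

Before the fix, the `MEM_AREA_PAGER_VASPACE` condition was missing, so a pager VA-space area above a section-aligned end of pager RAM could be prepared with a 2MB (or wider) block, which the pager's 4kB-granularity page tables cannot handle.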
