feat: arm64: mmu: auto-sensing of best paging stride
Improves the memory mapping process by dynamically selecting the optimal
paging stride (4K or 2M) based on virtual address alignment and mapping
size. This eliminates the need for upfront stride determination, enhancing
flexibility and maintainability in memory management.

Changes:
- Replaced fixed stride selection logic with a dynamic decision loop.
- Removed the `npages` calculation and replaced it with `remaining_sz` to
  track the unprocessed memory size.
- Added assertions to ensure `size` is properly aligned to the smallest
  page size.
- Adjusted loop to dynamically determine and apply the appropriate stride
  (4K or 2M) for each mapping iteration.
- Updated virtual and physical address increments to use the dynamically
  selected stride.

Signed-off-by: Shell <[email protected]>
polarvid authored and mysterywolf committed Dec 14, 2024
1 parent 02a1149 commit 7ff75e2
Showing 1 changed file with 18 additions and 16 deletions: libcpu/aarch64/common/mmu.c
@@ -275,27 +275,27 @@ void *rt_hw_mmu_map(rt_aspace_t aspace, void *v_addr, void *p_addr, size_t size,
     int ret = -1;
 
     void *unmap_va = v_addr;
-    size_t npages;
+    size_t remaining_sz = size;
     size_t stride;
     int (*mapper)(unsigned long *lv0_tbl, void *vaddr, void *paddr, unsigned long attr);
 
-    if (((rt_ubase_t)v_addr & ARCH_SECTION_MASK) || (size & ARCH_SECTION_MASK))
-    {
-        /* legacy 4k mapping */
-        npages = size >> ARCH_PAGE_SHIFT;
-        stride = ARCH_PAGE_SIZE;
-        mapper = _kernel_map_4K;
-    }
-    else
-    {
-        /* 2m huge page */
-        npages = size >> ARCH_SECTION_SHIFT;
-        stride = ARCH_SECTION_SIZE;
-        mapper = _kernel_map_2M;
-    }
+    RT_ASSERT(!(size & ARCH_PAGE_MASK));
 
-    while (npages--)
+    while (remaining_sz)
     {
+        if (((rt_ubase_t)v_addr & ARCH_SECTION_MASK) || (remaining_sz < ARCH_SECTION_SIZE))
+        {
+            /* legacy 4k mapping */
+            stride = ARCH_PAGE_SIZE;
+            mapper = _kernel_map_4K;
+        }
+        else
+        {
+            /* 2m huge page */
+            stride = ARCH_SECTION_SIZE;
+            mapper = _kernel_map_2M;
+        }
+
         MM_PGTBL_LOCK(aspace);
         ret = mapper(aspace->page_table, v_addr, p_addr, attr);
         MM_PGTBL_UNLOCK(aspace);
@@ -314,6 +314,8 @@ void *rt_hw_mmu_map(rt_aspace_t aspace, void *v_addr, void *p_addr, size_t size,
             }
             break;
         }
+
+        remaining_sz -= stride;
         v_addr = (char *)v_addr + stride;
        p_addr = (char *)p_addr + stride;
    }
