Rpk v4.14 meltdown spectre mitigations #5

Closed · wants to merge 31 commits

Commits on Jan 5, 2018

  1. arm64: mm: Use non-global mappings for kernel space

    In preparation for unmapping the kernel whilst running in userspace,
    make the kernel mappings non-global so we can avoid expensive TLB
    invalidation on kernel exit to userspace.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 58de4b2
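
    For readers following the series, a minimal standalone sketch of the mechanism (PTE_NG mirrors the kernel's macro name; the bit position is from the ARMv8-A ARM; this is an illustration, not the patch):

    ```c
    #include <stdint.h>

    /* Stage-1 descriptor attribute bits (ARMv8-A, lower attributes) */
    #define PTE_VALID  (1UL << 0)
    #define PTE_AF     (1UL << 10)  /* access flag */
    #define PTE_NG     (1UL << 11)  /* non-global: TLB entry tagged with the ASID */

    /*
     * With KPTI, kernel mappings carry nG so their TLB entries only match
     * the kernel ASID; returning to userspace then needs no TLB flush,
     * because stale kernel entries can never match the user ASID.
     */
    static uint64_t mk_kernel_pte(uint64_t pa, int kpti)
    {
        uint64_t pte = pa | PTE_VALID | PTE_AF;

        if (kpti)
            pte |= PTE_NG;   /* previously: kernel mappings were global */
        return pte;
    }
    ```
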
  2. arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN

    We're about to rework the way ASIDs are allocated, switch_mm is
    implemented and low-level kernel entry/exit is handled, so keep the
    ARM64_SW_TTBR0_PAN code out of the way whilst we do the heavy lifting.
    
    It will be re-enabled in a subsequent patch.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit eb38782
  3. arm64: mm: Move ASID from TTBR0 to TTBR1

    In preparation for mapping kernelspace and userspace with different
    ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch
    TTBR0 via an invalid mapping (the zero page).
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit b324d16
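
    The hardware accepts an ASID in bits [63:48] of either TTBR; this patch moves it to TTBR1_EL1. A hedged sketch of the resulting register values (TTBR_ASID_MASK anticipates the name a later patch in this series introduces):

    ```c
    #include <stdint.h>

    #define TTBR_ASID_SHIFT 48
    #define TTBR_ASID_MASK  (0xffffUL << TTBR_ASID_SHIFT)

    /* TTBR1_EL1 now carries both the kernel tables and the ASID. */
    static uint64_t mk_ttbr1(uint64_t kernel_pgd_pa, uint16_t asid)
    {
        return (kernel_pgd_pa & ~TTBR_ASID_MASK) |
               ((uint64_t)asid << TTBR_ASID_SHIFT);
    }

    /*
     * During switch_mm, TTBR0_EL1 is pointed at the zero page (an invalid
     * mapping) while the context switch takes place, so no translations
     * can be fetched through TTBR0 with a half-updated context.
     */
    static uint64_t mk_ttbr0_for_switch(uint64_t zero_page_pa)
    {
        return zero_page_pa;    /* ASID field deliberately left as zero */
    }
    ```
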
  4. arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum #E1003

    The pre_ttbr0_update_workaround hook is called prior to context-switching
    TTBR0 because Falkor erratum E1003 can cause TLB allocation with the wrong
    ASID if both the ASID and the base address of the TTBR are updated at
    the same time.
    
    With the ASID sitting safely in TTBR1, we no longer update things
    atomically, so we can remove the pre_ttbr0_update_workaround macro as
    it's no longer required. The erratum infrastructure and documentation
    is left around for #E1003, as it will be required by the entry
    trampoline code in a future patch.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit e8fc49c
  5. arm64: mm: Rename post_ttbr0_update_workaround

    The post_ttbr0_update_workaround hook applies to any change to TTBRx_EL1.
    Since we're using TTBR1 for the ASID, rename the hook to make it clearer
    as to what it's doing.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 343435d
  6. arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN

    With the ASID now installed in TTBR1, we can re-enable ARM64_SW_TTBR0_PAN
    by ensuring that we switch to a reserved ASID of zero when disabling
    user access and restore the active user ASID on the uaccess enable path.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 0e19d7e
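
    Conceptually the uaccess toggles now look like this (a sketch only: the real change lives in the asm-uaccess macros, and helpers such as set_ttbr0(), set_ttbr1_asid() and active_user_asid() are hypothetical names):

    ```c
    /* Hypothetical helpers standing in for the real asm sequences. */
    static inline void uaccess_disable_sketch(void)
    {
        set_ttbr0(zero_page_pa());   /* nothing reachable via TTBR0 */
        set_ttbr1_asid(0);           /* reserved ASID: user TLB entries can't hit */
    }

    static inline void uaccess_enable_sketch(struct mm_struct *mm)
    {
        set_ttbr0(mm_pgd_pa(mm));             /* restore the user tables */
        set_ttbr1_asid(active_user_asid(mm)); /* restore the active user ASID */
    }
    ```
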
  7. arm64: mm: Allocate ASIDs in pairs

    In preparation for separate kernel/user ASIDs, allocate them in pairs
    for each mm_struct. The bottom bit distinguishes the two: if it is set,
    then the ASID will map only userspace.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 93d5213
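
    A sketch of the pairing rule described above (illustrative only, not the allocator itself):

    ```c
    /* ASIDs are handed out two at a time; bit 0 selects the flavour. */
    #define ASID_KERNEL(pair)  ((pair) & ~1UL)  /* matches kernel-mode TLB entries */
    #define ASID_USER(pair)    ((pair) |  1UL)  /* maps only userspace */

    static unsigned long alloc_asid_pair_sketch(unsigned long *next)
    {
        unsigned long pair = *next & ~1UL;

        *next += 2;       /* the allocator now advances in steps of two */
        return pair;
    }
    ```
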
  8. arm64: mm: Add arm64_kernel_unmapped_at_el0 helper

    In order for code such as TLB invalidation to operate efficiently when
    the decision to map the kernel at EL0 is determined at runtime, this
    patch introduces a helper function, arm64_kernel_unmapped_at_el0, to
    determine whether or not the kernel is mapped whilst running in userspace.
    
    Currently, this just reports the value of CONFIG_UNMAP_KERNEL_AT_EL0,
    but will later be hooked up to a fake CPU capability using a static key.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit d39dc2b
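
    As the message says, the helper starts out as a pure compile-time check (this matches the commit description; the static-key form arrives with the fake CPU capability later in the series):

    ```c
    #include <linux/kconfig.h>
    #include <linux/types.h>

    static inline bool arm64_kernel_unmapped_at_el0(void)
    {
        /* For now, purely a build-time decision. */
        return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
    }
    ```
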
  9. arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI

    Since an mm has both a kernel and a user ASID, we need to ensure that
    broadcast TLB maintenance targets both address spaces so that things
    like CoW continue to work with the uaccess primitives in the kernel.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 16cf312
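
    Since the user ASID is the kernel ASID with the bottom bit set, and the TLBI operand carries the ASID in bits [63:48], the user variant of any operation is a fixed bit-48 offset. A hedged sketch of the doubled-up maintenance:

    ```c
    #include <linux/types.h>

    #define USER_ASID_FLAG  (1UL << 48)  /* ASID bit 0 in the TLBI operand */

    /* addr_and_asid: VA in the low bits, ASID in bits [63:48]. */
    static inline void flush_tlb_page_sketch(unsigned long addr_and_asid)
    {
        asm volatile("dsb ishst" ::: "memory");
        asm volatile("tlbi vale1is, %0" :: "r" (addr_and_asid));
        if (arm64_kernel_unmapped_at_el0())   /* also hit the user ASID */
            asm volatile("tlbi vale1is, %0"
                         :: "r" (addr_and_asid | USER_ASID_FLAG));
        asm volatile("dsb ish" ::: "memory");
    }
    ```
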
  10. arm64: entry: Add exception trampoline page for exceptions from EL0

    To allow unmapping of the kernel whilst running at EL0, we need to
    point the exception vectors at an entry trampoline that can map/unmap
    the kernel on entry/exit respectively.
    
    This patch adds the trampoline page, although it is not yet plugged
    into the vector table and is therefore unused.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 26a3432
  11. arm64: mm: Map entry trampoline into trampoline and kernel page tables

    The exception entry trampoline needs to be mapped at the same virtual
    address in both the trampoline page table (which maps nothing else)
    and also the kernel page table, so that we can swizzle TTBR1_EL1 on
    exceptions from and return to EL0.
    
    This patch maps the trampoline at a fixed virtual address in the fixmap
    area of the kernel virtual address space, which allows the kernel proper
    to be randomized with respect to the trampoline when KASLR is enabled.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 7616e38
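
    A simplified sketch of the fixmap half of this mapping (FIX_ENTRY_TRAMP_TEXT and the section symbol follow my reading of the upstream patch and may differ in detail; the patch also installs the same virtual address in the trampoline page table):

    ```c
    #include <linux/init.h>
    #include <asm/fixmap.h>
    #include <asm/pgtable.h>

    static int __init map_entry_trampoline(void)
    {
        extern char __entry_tramp_text_start[];

        /* Same virtual address in both the kernel and trampoline tables. */
        __set_fixmap(FIX_ENTRY_TRAMP_TEXT,
                     __pa_symbol(__entry_tramp_text_start), PAGE_KERNEL_ROX);
        return 0;
    }
    core_initcall(map_entry_trampoline);
    ```
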
  12. arm64: entry: Explicitly pass exception level to kernel_ventry macro

    We will need to treat exceptions from EL0 differently in kernel_ventry,
    so rework the macro to take the exception level as an argument and
    construct the branch target using that.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit aa182ac
  13. arm64: entry: Hook up entry trampoline to exception vectors

    Hook up the entry trampoline to our exception vectors so that all
    exceptions from and returns to EL0 go via the trampoline, which swizzles
    the vector base register accordingly. Transitioning to and from the
    kernel clobbers x30, so we use tpidrro_el0 and far_el1 as scratch
    registers for native tasks.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 0f1a307
  14. arm64: erratum: Work around Falkor erratum #E1003 in trampoline code

    We rely on an atomic swizzling of TTBR1 when transitioning from the entry
    trampoline to the kernel proper on an exception. We can't rely on this
    atomicity in the face of Falkor erratum #E1003, so on affected cores we
    can issue a TLB invalidation to invalidate the walk cache prior to
    jumping into the kernel. There is still the possibility of a TLB conflict
    here due to conflicting walk cache entries prior to the invalidation, but
    this doesn't appear to be the case on these CPUs in practice.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 2dabed8
  15. arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks

    When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register
    during exception entry from native tasks and subsequently zero it in
    the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0
    in the context-switch path for native tasks using the entry trampoline.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit fd6309b
  16. arm64: entry: Add fake CPU feature for unmapping the kernel at EL0

    Allow explicit disabling of the entry trampoline on the kernel command
    line (kpti=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0)
    that can be used to toggle the alternative sequences in our entry code and
    avoid use of the trampoline altogether if desired. This also allows us to
    make use of a static key in arm64_kernel_unmapped_at_el0().
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit beec04e
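
    With the capability in place, the helper can key off a static branch (via cpus_have_const_cap) and the override becomes an early_param. A sketch under those assumptions (the variable and parser names follow my reading of the upstream patch):

    ```c
    #include <linux/init.h>
    #include <linux/kconfig.h>
    #include <linux/string.h>
    #include <asm/cpufeature.h>

    static int __kpti_forced;   /* 0 = auto, 1 = force on, -1 = force off */

    static inline bool arm64_kernel_unmapped_at_el0(void)
    {
        return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
               cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
    }

    static int __init parse_kpti(char *str)
    {
        bool enabled;

        if (strtobool(str, &enabled))
            return -EINVAL;

        __kpti_forced = enabled ? 1 : -1;
        return 0;
    }
    early_param("kpti", parse_kpti);
    ```
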
  17. arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0

    Add a Kconfig entry to control use of the entry trampoline, which allows
    us to unmap the kernel whilst running in userspace and improve the
    robustness of KASLR.
    
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 494177b
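
    Approximately (the help text here is paraphrased rather than verbatim; a later patch in this series rewords the entry):

    ```
    config UNMAP_KERNEL_AT_EL0
    	bool "Unmap kernel when running in userspace (aka \"KAISER\")"
    	default y
    	help
    	  Unmap the kernel whilst running in userspace by switching to a
    	  separate set of page tables on exceptions from and returns to EL0.
    	  This makes KASLR more robust at a small runtime cost.

    	  If unsure, say Y.
    ```
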
  18. arm64: mm: Introduce TTBR_ASID_MASK for getting at the ASID in the TTBR

    There are now a handful of open-coded masks to extract the ASID from a
    TTBR value, so introduce a TTBR_ASID_MASK and use that instead.
    
    Suggested-by: Mark Rutland <[email protected]>
    Reviewed-by: Mark Rutland <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 67bf5f3
  19. arm64: kaslr: Put kernel vectors address in separate data page

    The literal pool entry for identifying the vectors base is the only piece
    of information in the trampoline page that identifies the true location
    of the kernel.
    
    This patch moves it into a page-aligned region of the .rodata section
    and maps this adjacent to the trampoline text via an additional fixmap
    entry, which protects against any accidental leakage of the trampoline
    contents.
    
    Suggested-by: Ard Biesheuvel <[email protected]>
    Tested-by: Laura Abbott <[email protected]>
    Tested-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit c1959d4
  20. arm64: use RET instruction for exiting the trampoline

    Speculation attacks against the entry trampoline can potentially resteer
    the speculative instruction stream through the indirect branch and into
    arbitrary gadgets within the kernel.
    
    This patch defends against these attacks by forcing a misprediction
    through the return stack: a dummy BL instruction loads an entry into
    the stack, so that the predicted program flow of the subsequent RET
    instruction is to a branch-to-self instruction which is finally resolved
    as a branch to the kernel vectors with speculation suppressed.
    
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 22388c0
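
    The shape of that sequence, as an annotated inline-asm fragment (a sketch, not the verbatim trampoline, which also swizzles TTBR1 and the scratch registers):

    ```c
    /* Never returns: control leaves via the RET below. */
    static inline void tramp_exit_sketch(unsigned long kernel_vectors)
    {
        asm volatile(
        "	bl	1f\n"       /* pushes the next insn (the b .) onto the return stack */
        "	b	.\n"        /* predicted RET target: a harmless self-loop */
        "1:	mov	x30, %0\n"  /* architectural RET target: the kernel vectors */
        "	ret\n"          /* mispredicts into the loop, resolves to the vectors */
        : : "r" (kernel_vectors) : "x30");
    }
    ```
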
  21. arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry

    Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
    actually more useful as a mitigation against speculative side-channel
    attacks that can leak arbitrary kernel data to userspace.
    
    Reword the Kconfig help message to reflect this, and make the option
    depend on EXPERT so that it is on by default for the majority of users.
    
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit e3b4439
  22. arm64: Take into account ID_AA64PFR0_EL1.CSV3

    For non-KASLR kernels where the KPTI behaviour has not been overridden
    on the command line we can use ID_AA64PFR0_EL1.CSV3 to determine whether
    or not we should unmap the kernel whilst running at EL0.
    
    Reviewed-by: Suzuki K Poulose <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 19239ca
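
    CSV3 sits in bits [63:60] of ID_AA64PFR0_EL1; a nonzero value means the CPU declares that data cannot be speculatively leaked across translation regimes, so KPTI can default off there. A sketch of the decision described above (the helper name is hypothetical):

    ```c
    #include <linux/types.h>

    #define ID_AA64PFR0_CSV3_SHIFT  60

    static bool kernel_needs_kpti_sketch(u64 pfr0, bool kaslr_enabled, int forced)
    {
        if (forced)                 /* kpti= seen on the command line */
            return forced > 0;
        if (kaslr_enabled)          /* keep KASLR robust regardless of CSV3 */
            return true;
        return !((pfr0 >> ID_AA64PFR0_CSV3_SHIFT) & 0xf);
    }
    ```
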
  23. arm64: cpufeature: Pass capability structure to ->enable callback

    In order to invoke the CPU capability ->matches callback from the ->enable
    callback for applying local-CPU workarounds, we need a handle on the
    capability structure.
    
    This patch passes a pointer to the capability structure to the ->enable
    callback.
    
    Reviewed-by: Suzuki K Poulose <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit bc879b8
  24. drivers/firmware: Expose psci_get_version through psci_ops structure

    Entry into recent versions of ARM Trusted Firmware will invalidate the CPU
    branch predictor state in order to protect against aliasing attacks.
    
    This patch exposes the PSCI "VERSION" function via psci_ops, so that it
    can be invoked outside of the PSCI driver where necessary.
    
    Acked-by: Lorenzo Pieralisi <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit c91c920
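
    The change amounts to one new callback plus its wiring in the PSCI driver; a sketch of the shape (existing members elided, caller-side helper name hypothetical):

    ```c
    #include <linux/types.h>

    /* Sketch of the psci_ops extension added by this patch. */
    struct psci_operations {
        u32 (*get_version)(void);   /* new: issues PSCI_0_2_FN_PSCI_VERSION */
        /* ... cpu_suspend, cpu_off, cpu_on, migrate, ... */
    };

    extern struct psci_operations psci_ops;

    /* Caller side, e.g. the BP hardening code later in this series: */
    static u32 firmware_psci_version_or_zero(void)
    {
        return psci_ops.get_version ? psci_ops.get_version() : 0;
    }
    ```
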
  25. arm64: Move post_ttbr_update_workaround to C code

    We will soon need to invoke a CPU-specific function pointer after changing
    page tables, so move post_ttbr_update_workaround out into C code to make
    this possible.
    
    Signed-off-by: Marc Zyngier <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    Marc Zyngier authored and Ard Biesheuvel committed Jan 5, 2018
    Commit a3c9f0b
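
    From my reading of the upstream patch, the C version wraps the Cavium erratum 27456 I-cache maintenance in a runtime alternative (a sketch; details hedged):

    ```c
    #include <linux/linkage.h>
    #include <asm/alternative.h>
    #include <asm/cpufeature.h>

    asmlinkage void post_ttbr_update_workaround(void)
    {
        asm(ALTERNATIVE("nop; nop; nop",
                        "ic iallu; dsb nsh; isb",
                        ARM64_WORKAROUND_CAVIUM_27456,
                        CONFIG_CAVIUM_ERRATUM_27456));
    }
    ```
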
  26. arm64: Add skeleton to harden the branch predictor against aliasing attacks
    
    Aliasing attacks against CPU branch predictors can allow an attacker to
    redirect speculative control flow on some CPUs and potentially divulge
    information from one context to another.
    
    This patch adds initial skeleton code behind a new Kconfig option to
    enable implementation-specific mitigations against these attacks for
    CPUs that are affected.
    
    Signed-off-by: Marc Zyngier <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit a19574b
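
    The skeleton boils down to a per-CPU callback slot consulted on exception entry; a sketch of its core (per my reading of the upstream patch, with the accessor simplified to this_cpu_ptr here):

    ```c
    #include <linux/percpu.h>
    #include <asm/cpufeature.h>

    typedef void (*bp_hardening_cb_t)(void);

    struct bp_hardening_data {
        int               hyp_vectors_slot;  /* for KVM, used later in the series */
        bp_hardening_cb_t fn;                /* implementation-specific invalidation */
    };

    DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

    static inline void arm64_apply_bp_hardening(void)
    {
        struct bp_hardening_data *d;

        if (!cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR))
            return;

        d = this_cpu_ptr(&bp_hardening_data);
        if (d->fn)
            d->fn();
    }
    ```
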
  27. arm64: KVM: Use per-CPU vector when BP hardening is enabled

    Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.
    
    Signed-off-by: Marc Zyngier <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    Marc Zyngier authored and Ard Biesheuvel committed Jan 5, 2018
    Commit d66d032
  28. arm64: KVM: Make PSCI_VERSION a fast path

    For those CPUs that require PSCI to perform a BP invalidation,
    going all the way to the PSCI code for not much is a waste of
    precious cycles. Let's terminate that call as early as possible.
    
    Signed-off-by: Marc Zyngier <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    Marc Zyngier authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 0e8d0ce
  29. arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75

    Hook up MIDR values for the Cortex-A72 and Cortex-A75 CPUs, since they
    will soon need MIDR matches for hardening the branch predictor.
    
    Signed-off-by: Will Deacon <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit dacdc93
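
    The additions are two primary part numbers and their MIDR wrappers (part numbers from the respective ARM TRMs; MIDR_CPU_MODEL and ARM_CPU_IMP_ARM already exist in cputype.h):

    ```c
    /* Cortex-A72 and Cortex-A75 primary part numbers (implementer 0x41, ARM). */
    #define ARM_CPU_PART_CORTEX_A72  0xD08
    #define ARM_CPU_PART_CORTEX_A75  0xD0A

    #define MIDR_CORTEX_A72 \
        MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
    #define MIDR_CORTEX_A75 \
        MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
    ```
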
  30. arm64: Implement branch predictor hardening for affected Cortex-A CPUs

    Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing
    and can theoretically be attacked by malicious code.
    
    This patch implements a PSCI-based mitigation for these CPUs when available.
    The call into firmware will invalidate the branch predictor state, preventing
    any malicious entries from affecting other victim contexts.
    
    Signed-off-by: Marc Zyngier <[email protected]>
    Signed-off-by: Will Deacon <[email protected]>
    [ardb: fix incorrect stack offset in __psci_hyp_bp_inval_start()]
    Signed-off-by: Ard Biesheuvel <[email protected]>
    wildea01 authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 2549d74
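
    The mechanism, per my reading of the upstream patch: each affected MIDR gets an errata-table entry whose enable hook installs psci_ops.get_version as the per-CPU hardening callback, since the firmware call has the side effect of invalidating the predictor. A sketch, with __psci_hyp_bp_inval_start/_end being the asm sequences mentioned in the note above:

    ```c
    static int enable_psci_bp_hardening(void *data)
    {
        const struct arm64_cpu_capabilities *entry = data;

        if (psci_ops.get_version)
            install_bp_hardening_cb(entry,
                        (bp_hardening_cb_t)psci_ops.get_version,
                        __psci_hyp_bp_inval_start,
                        __psci_hyp_bp_inval_end);
        return 0;
    }

    /* And one errata-table entry per affected core, e.g.:
     * { .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
     *   MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
     *   .enable = enable_psci_bp_hardening, },
     */
    ```
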
  31. arm64: Implement branch predictor hardening for Falkor

    Falkor is susceptible to branch predictor aliasing and can
    theoretically be attacked by malicious code. This patch
    implements a mitigation for these attacks, preventing any
    malicious entries from affecting other victim contexts.
    
    Signed-off-by: Shanker Donthineni <[email protected]>
    Signed-off-by: Ard Biesheuvel <[email protected]>
    Shanker Donthineni authored and Ard Biesheuvel committed Jan 5, 2018
    Commit 68ea632