Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Paolo Bonzini:
 "ARM:
   - Move the arch-specific code into arch/arm64/kvm

   - Start the post-32bit cleanup

   - Cherry-pick a few non-invasive pre-NV patches

  x86:
   - Rework of TLB flushing

   - Rework of event injection, especially with respect to nested
     virtualization

   - Nested AMD event injection facelift, building on the rework of
     generic code and fixing a lot of corner cases

   - Nested AMD live migration support

   - Optimization for TSC deadline MSR writes and IPIs

   - Various cleanups

   - Asynchronous page fault cleanups (from tglx, common topic branch
     with tip tree)

   - Interrupt-based delivery of asynchronous "page ready" events (host
     side)

   - Hyper-V MSRs and hypercalls for guest debugging

   - VMX preemption timer fixes

  s390:
   - Cleanups

  Generic:
   - switch vCPU thread wakeup from swait to rcuwait

  The other architectures, and the guest side of the asynchronous page
  fault work, will come next week"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (256 commits)
  KVM: selftests: fix rdtsc() for vmx_tsc_adjust_test
  KVM: check userspace_addr for all memslots
  KVM: selftests: update hyperv_cpuid with SynDBG tests
  x86/kvm/hyper-v: Add support for synthetic debugger via hypercalls
  x86/kvm/hyper-v: enable hypercalls regardless of hypercall page
  x86/kvm/hyper-v: Add support for synthetic debugger interface
  x86/hyper-v: Add synthetic debugger definitions
  KVM: selftests: VMX preemption timer migration test
  KVM: nVMX: Fix VMX preemption timer migration
  x86/kvm/hyper-v: Explicitly align hcall param for kvm_hyperv_exit
  KVM: x86/pmu: Support full width counting
  KVM: x86/pmu: Tweak kvm_pmu_get_msr to pass 'struct msr_data' in
  KVM: x86: announce KVM_FEATURE_ASYNC_PF_INT
  KVM: x86: acknowledgment mechanism for async pf page ready notifications
  KVM: x86: interrupt based APF 'page ready' event delivery
  KVM: introduce kvm_read_guest_offset_cached()
  KVM: rename kvm_arch_can_inject_async_page_present() to kvm_arch_can_dequeue_async_page_present()
  KVM: x86: extend struct kvm_vcpu_pv_apf_data with token info
  Revert "KVM: async_pf: Fix #DF due to inject "Page not Present" and "Page Ready" exceptions simultaneously"
  KVM: VMX: Replace zero-length array with flexible-array
  ...
torvalds committed Jun 3, 2020
2 parents 6b2591c + 13ffbd8 commit 039aeb9
Showing 155 changed files with 5,309 additions and 3,048 deletions.
41 changes: 40 additions & 1 deletion Documentation/virt/kvm/api.rst
@@ -4336,9 +4336,13 @@ Errors:
#define KVM_STATE_NESTED_VMX_SMM_GUEST_MODE 0x00000001
#define KVM_STATE_NESTED_VMX_SMM_VMXON 0x00000002

#define KVM_STATE_VMX_PREEMPTION_TIMER_DEADLINE 0x00000001

struct kvm_vmx_nested_state_hdr {
        __u32 flags;
        __u64 vmxon_pa;
        __u64 vmcs12_pa;
        __u64 preemption_timer_deadline;

        struct {
                __u16 flags;
@@ -5068,10 +5072,13 @@ EOI was received.
struct kvm_hyperv_exit {
#define KVM_EXIT_HYPERV_SYNIC          1
#define KVM_EXIT_HYPERV_HCALL          2
#define KVM_EXIT_HYPERV_SYNDBG         3
        __u32 type;
        __u32 pad1;
        union {
                struct {
                        __u32 msr;
                        __u32 pad2;
                        __u64 control;
                        __u64 evt_page;
                        __u64 msg_page;
@@ -5081,6 +5088,15 @@ EOI was received.
                        __u64 result;
                        __u64 params[2];
                } hcall;
                struct {
                        __u32 msr;
                        __u32 pad2;
                        __u64 control;
                        __u64 status;
                        __u64 send_page;
                        __u64 recv_page;
                        __u64 pending_page;
                } syndbg;
        } u;
};
/* KVM_EXIT_HYPERV */
@@ -5097,6 +5113,12 @@ Hyper-V SynIC state change. Notification is used to remap SynIC
event/message pages and to enable/disable SynIC messages/events processing
in userspace.

- KVM_EXIT_HYPERV_SYNDBG -- synchronously notify user-space about

Hyper-V Synthetic debugger state change. Notification is used to either update
the pending_page location or to send a control command (send the buffer located
in send_page or recv a buffer to recv_page).
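
As a rough userspace sketch (not part of this patch; run is assumed to be
the vcpu's mmap'ed struct kvm_run, and handle_syndbg() is a hypothetical
VMM helper), the exit could be dispatched like this::

	if (run->exit_reason == KVM_EXIT_HYPERV &&
	    run->hyperv.type == KVM_EXIT_HYPERV_SYNDBG) {
		struct kvm_hyperv_exit *hv = &run->hyperv;

		/* hypothetical helper consuming the syndbg payload */
		handle_syndbg(hv->u.syndbg.msr, hv->u.syndbg.control,
			      hv->u.syndbg.send_page, hv->u.syndbg.recv_page,
			      hv->u.syndbg.pending_page);
	}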

::

/* KVM_EXIT_ARM_NISV */
@@ -5779,7 +5801,7 @@ will be initialized to 1 when created. This also improves performance because
dirty logging can be enabled gradually in small chunks on the first call
to KVM_CLEAR_DIRTY_LOG. KVM_DIRTY_LOG_INITIALLY_SET depends on
KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (it is also only available on
x86 for now).
x86 and arm64 for now).
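
For illustration only (a minimal sketch, assuming vm_fd is an open VM file
descriptor), both manual-protect bits can be enabled together via
KVM_ENABLE_CAP::

	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
		.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
			   KVM_DIRTY_LOG_INITIALLY_SET,
	};

	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		perror("KVM_ENABLE_CAP");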

KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 was previously available under the name
KVM_CAP_MANUAL_DIRTY_LOG_PROTECT, but the implementation had bugs that make
@@ -5804,6 +5826,23 @@ If present, this capability can be enabled for a VM, meaning that KVM
will allow the transition to secure guest mode. Otherwise KVM will
veto the transition.

7.20 KVM_CAP_HALT_POLL
----------------------

:Architectures: all
:Target: VM
:Parameters: args[0] is the maximum poll time in nanoseconds
:Returns: 0 on success; -1 on error

This capability overrides the kvm module parameter halt_poll_ns for the
target VM.

VCPU polling allows a VCPU to poll for wakeup events instead of immediately
scheduling during guest halts. The maximum time a VCPU can spend polling is
controlled by the kvm module parameter halt_poll_ns. This capability allows
the maximum halt time to be specified on a per-VM basis, effectively
overriding the module parameter for the target VM.
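
For example, capping poll time at 100 microseconds for one VM might look
like this (a minimal sketch, assuming vm_fd is an open VM file descriptor)::

	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_HALT_POLL,
		.args[0] = 100000,	/* max poll time in ns; 0 disables polling */
	};

	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		perror("KVM_ENABLE_CAP(KVM_CAP_HALT_POLL)");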

8. Other capabilities.
======================

8 changes: 7 additions & 1 deletion Documentation/virt/kvm/cpuid.rst
@@ -50,8 +50,8 @@ KVM_FEATURE_NOP_IO_DELAY           1          not necessary to perform delays
KVM_FEATURE_MMU_OP                 2          deprecated

KVM_FEATURE_CLOCKSOURCE2           3          kvmclock available at msrs
                                              0x4b564d00 and 0x4b564d01

KVM_FEATURE_ASYNC_PF               4          async pf can be enabled by
                                              writing to msr 0x4b564d02

@@ -86,6 +86,12 @@ KVM_FEATURE_PV_SCHED_YIELD        13          guest checks this feature bit
                                              before using paravirtualized
                                              sched yield.

KVM_FEATURE_ASYNC_PF_INT          14          guest checks this feature bit
                                              before using the second async
                                              pf control msr 0x4b564d06 and
                                              async pf acknowledgment msr
                                              0x4b564d07.

KVM_FEATURE_CLOCKSOURCE_STABLE_BIT 24         host will warn if no guest-side
                                              per-cpu warps are expected in
                                              kvmclock
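
A guest reads these flags from EAX of the KVM features leaf. As a minimal
guest-side sketch (assuming the hypervisor signature at leaf 0x40000000 has
already been verified as "KVMKVMKVM")::

	#include <cpuid.h>	/* GCC/clang __cpuid() helper */

	#define KVM_CPUID_FEATURES		0x40000001
	#define KVM_FEATURE_ASYNC_PF_INT	14

	static int kvm_has_async_pf_int(void)
	{
		unsigned int eax, ebx, ecx, edx;

		__cpuid(KVM_CPUID_FEATURES, eax, ebx, ecx, edx);
		return !!(eax & (1u << KVM_FEATURE_ASYNC_PF_INT));
	}
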
119 changes: 88 additions & 31 deletions Documentation/virt/kvm/msr.rst
@@ -190,41 +190,72 @@ MSR_KVM_ASYNC_PF_EN:
0x4b564d02

data:
Bits 63-6 hold 64-byte aligned physical address of a
64 byte memory area which must be in guest RAM and must be
zeroed. Bits 5-3 are reserved and should be zero. Bit 0 is 1
when asynchronous page faults are enabled on the vcpu, 0 when
disabled. Bit 1 is 1 if asynchronous page faults can be injected
when vcpu is in cpl == 0. Bit 2 is 1 if asynchronous page faults
are delivered to L1 as #PF vmexits. Bit 2 can be set only if
KVM_FEATURE_ASYNC_PF_VMEXIT is present in CPUID.

First 4 bytes of the 64 byte memory location will be written to
by the hypervisor at the time of asynchronous page fault (APF)
injection to indicate the type of asynchronous page fault. A
value of 1 means that the page referred to by the page fault is
not present. A value of 2 means that the page is now available.
Disabling interrupts inhibits APFs. The guest must not enable
interrupts before the reason is read, or it may be overwritten
by another APF. Since APF uses the same exception vector as a
regular page fault, the guest must reset the reason to 0 before
it does something that can generate a normal page fault. If
during a page fault the APF reason is 0, it means that this is
a regular page fault.

During delivery of a type 1 APF, cr2 contains a token that will
be used to notify the guest when the missing page becomes
available. When the page becomes available, a type 2 APF is sent
with cr2 set to the token associated with the page. There is a
special token, 0xffffffff, which tells the vcpu that it should
wake up all processes waiting for APFs; no individual type 2
APFs will be sent.
Asynchronous page fault (APF) control MSR.

Bits 63-6 hold 64-byte aligned physical address of a 64 byte memory area
which must be in guest RAM and must be zeroed. This memory is expected
to hold a copy of the following structure::

        struct kvm_vcpu_pv_apf_data {
                /* Used for 'page not present' events delivered via #PF */
                __u32 flags;

                /* Used for 'page ready' events delivered via interrupt notification */
                __u32 token;

                __u8 pad[56];
                __u32 enabled;
        };

Bits 5-4 of the MSR are reserved and should be zero. Bit 0 is set to 1
when asynchronous page faults are enabled on the vcpu, 0 when disabled.
Bit 1 is 1 if asynchronous page faults can be injected when vcpu is in
cpl == 0. Bit 2 is 1 if asynchronous page faults are delivered to L1 as
#PF vmexits. Bit 2 can be set only if KVM_FEATURE_ASYNC_PF_VMEXIT is
present in CPUID. Bit 3 enables interrupt based delivery of 'page ready'
events. Bit 3 can only be set if KVM_FEATURE_ASYNC_PF_INT is present in
CPUID.
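
Putting the layout together, a guest that wants interrupt based 'page
ready' delivery would compose the MSR value roughly as follows (a sketch;
apf_area_gpa is the assumed guest-physical address of the zeroed, 64-byte
aligned area, and MSR_KVM_ASYNC_PF_INT must be set up first, see below)::

	#define MSR_KVM_ASYNC_PF_EN	0x4b564d02

	u64 val = (apf_area_gpa & ~0x3full) |	/* bits 63-6: area address */
		  (1ull << 0) |			/* bit 0: enable APF */
		  (1ull << 1) |			/* bit 1: allow APF at cpl == 0 */
		  (1ull << 3);			/* bit 3: interrupt based 'page ready' */

	wrmsrl(MSR_KVM_ASYNC_PF_EN, val);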

'Page not present' events are currently always delivered as a synthetic
#PF exception. During delivery of these events the CR2 register contains
a token that will be used to notify the guest when the missing page becomes
available. Also, to make it possible to distinguish between a real #PF and
an APF, the first 4 bytes of the 64 byte memory location ('flags') will be
written to by the hypervisor at the time of injection. Only the first bit
of 'flags' is currently supported; when set, it indicates that the guest is
dealing with an asynchronous 'page not present' event. If during a page
fault APF 'flags' is '0', it means that this is a regular page fault. The
guest is supposed to clear 'flags' when it is done handling the #PF
exception so the next event can be delivered.

Note, since APF 'page not present' events use the same exception vector
as a regular page fault, the guest must reset 'flags' to '0' before it
does something that can generate a normal page fault.

Bytes 4-7 of the 64 byte memory location ('token') will be written to by
the hypervisor at the time of APF 'page ready' event injection. The
content of these bytes is a token which was previously delivered as a
'page not present' event. The event indicates the page is now available.
The guest is supposed to write '0' to 'token' when it is done handling the
'page ready' event and to write '1' to MSR_KVM_ASYNC_PF_ACK after clearing
the location; writing to the MSR forces KVM to re-scan its queue and
deliver the next pending notification.
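
As an illustrative guest-side sketch of this protocol (apf_data is the
assumed mapping of the shared 64 byte area; wake_up_waiters() is a
hypothetical helper)::

	#define MSR_KVM_ASYNC_PF_ACK	0x4b564d07

	void async_pf_page_ready_irq(void)
	{
		u32 token = READ_ONCE(apf_data->token);

		wake_up_waiters(token);			/* hypothetical */
		WRITE_ONCE(apf_data->token, 0);		/* done with this event */
		wrmsrl(MSR_KVM_ASYNC_PF_ACK, 1);	/* let KVM deliver the next one */
	}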

Note, the MSR_KVM_ASYNC_PF_INT MSR, which specifies the interrupt vector
for 'page ready' APF delivery, needs to be written to before enabling the
APF mechanism in MSR_KVM_ASYNC_PF_EN, or interrupt #0 can get injected.
The MSR is available if KVM_FEATURE_ASYNC_PF_INT is present in CPUID.

Note, previously, 'page ready' events were delivered via the same #PF
exception as 'page not present' events, but this is now deprecated. If
bit 3 (interrupt based delivery) is not set, APF events are not delivered.

If APF is disabled while there are outstanding APFs, they will
not be delivered.

Currently type 2 APF will be always delivered on the same vcpu as
type 1 was, but guest should not rely on that.
Currently 'page ready' APF events will always be delivered on the same
vcpu as the 'page not present' event was, but the guest should not rely
on that.

MSR_KVM_STEAL_TIME:
0x4b564d03
@@ -319,3 +350,29 @@ data:

KVM guests can request the host not to poll on HLT, for example if
they are performing polling themselves.

MSR_KVM_ASYNC_PF_INT:
0x4b564d06

data:
Second asynchronous page fault (APF) control MSR.

Bits 0-7: APIC vector for delivery of 'page ready' APF events.
Bits 8-63: Reserved

Interrupt vector for asynchronous 'page ready' notification delivery.
The vector has to be set up before the asynchronous page fault mechanism
is enabled in MSR_KVM_ASYNC_PF_EN. The MSR is only available if
KVM_FEATURE_ASYNC_PF_INT is present in CPUID.
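
The required ordering, as a sketch (vector 0xf3 and apf_area_gpa are
assumptions for illustration)::

	#define MSR_KVM_ASYNC_PF_INT	0x4b564d06
	#define MSR_KVM_ASYNC_PF_EN	0x4b564d02

	wrmsrl(MSR_KVM_ASYNC_PF_INT, 0xf3);	/* vector first ... */
	wrmsrl(MSR_KVM_ASYNC_PF_EN,		/* ... then enable APF */
	       apf_area_gpa | (1ull << 0) | (1ull << 3));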

MSR_KVM_ASYNC_PF_ACK:
0x4b564d07

data:
Asynchronous page fault (APF) acknowledgment.

When the guest is done processing a 'page ready' APF event and the
'token' field in 'struct kvm_vcpu_pv_apf_data' is cleared, it is
supposed to write '1' to bit 0 of the MSR; this causes the host to
re-scan its queue and check if there are more notifications pending.
The MSR is available if KVM_FEATURE_ASYNC_PF_INT is present in CPUID.
5 changes: 1 addition & 4 deletions Documentation/virt/kvm/nested-vmx.rst
@@ -116,10 +116,7 @@ struct shadow_vmcs is ever changed.
        natural_width cr4_guest_host_mask;
        natural_width cr0_read_shadow;
        natural_width cr4_read_shadow;
        natural_width cr3_target_value0;
        natural_width cr3_target_value1;
        natural_width cr3_target_value2;
        natural_width cr3_target_value3;
        natural_width dead_space[4]; /* Last remnants of cr3_target_value[0-3]. */
        natural_width exit_qualification;
        natural_width guest_linear_address;
        natural_width guest_cr0;
1 change: 0 additions & 1 deletion MAINTAINERS
@@ -9368,7 +9368,6 @@ F: arch/arm64/include/asm/kvm*
F: arch/arm64/include/uapi/asm/kvm*
F: arch/arm64/kvm/
F: include/kvm/arm_*
F: virt/kvm/arm/

KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
L: [email protected]
4 changes: 3 additions & 1 deletion arch/arm64/include/asm/kvm_asm.h
@@ -64,12 +64,14 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);

extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
extern void __kvm_timer_set_cntvoff(u64 cntvoff);

extern int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu);

extern int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu);

extern void __kvm_enable_ssbs(void);

extern u64 __vgic_v3_get_ich_vtr_el2(void);
extern u64 __vgic_v3_read_vmcr(void);
extern void __vgic_v3_write_vmcr(u32 vmcr);
46 changes: 6 additions & 40 deletions arch/arm64/include/asm/kvm_host.h
@@ -46,6 +46,9 @@
#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
#define KVM_REQ_RELOAD_GICv4 KVM_ARCH_REQ(4)

#define KVM_DIRTY_LOG_MANUAL_CAPS   (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
                                     KVM_DIRTY_LOG_INITIALLY_SET)

DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);

extern unsigned int kvm_sve_max_vl;
@@ -112,12 +115,8 @@ struct kvm_vcpu_fault_info {
        u64 disr_el1;           /* Deferred [SError] Status Register */
};

/*
 * 0 is reserved as an invalid value.
 * Order should be kept in sync with the save/restore code.
 */
enum vcpu_sysreg {
        __INVALID_SYSREG__,
        __INVALID_SYSREG__,     /* 0 is reserved as an invalid value */
        MPIDR_EL1,      /* MultiProcessor Affinity Register */
        CSSELR_EL1,     /* Cache Size Selection Register */
        SCTLR_EL1,      /* System Control Register */
@@ -415,6 +414,8 @@ struct kvm_vm_stat {
struct kvm_vcpu_stat {
        u64 halt_successful_poll;
        u64 halt_attempted_poll;
        u64 halt_poll_success_ns;
        u64 halt_poll_fail_ns;
        u64 halt_poll_invalid;
        u64 halt_wakeup;
        u64 hvc_exit_stat;
@@ -530,39 +531,6 @@ static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt)
        cpu_ctxt->sys_regs[MPIDR_EL1] = read_cpuid_mpidr();
}

void __kvm_enable_ssbs(void);

static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
                                       unsigned long hyp_stack_ptr,
                                       unsigned long vector_ptr)
{
        /*
         * Calculate the raw per-cpu offset without a translation from the
         * kernel's mapping to the linear mapping, and store it in tpidr_el2
         * so that we can use adr_l to access per-cpu variables in EL2.
         */
        u64 tpidr_el2 = ((u64)this_cpu_ptr(&kvm_host_data) -
                         (u64)kvm_ksym_ref(kvm_host_data));

        /*
         * Call initialization code, and switch to the full blown HYP code.
         * If the cpucaps haven't been finalized yet, something has gone very
         * wrong, and hyp will crash and burn when it uses any
         * cpus_have_const_cap() wrapper.
         */
        BUG_ON(!system_capabilities_finalized());
        __kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, tpidr_el2);

        /*
         * Disabling SSBD on a non-VHE system requires us to enable SSBS
         * at EL2.
         */
        if (!has_vhe() && this_cpu_has_cap(ARM64_SSBS) &&
            arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) {
                kvm_call_hyp(__kvm_enable_ssbs);
        }
}

static inline bool kvm_arch_requires_vhe(void)
{
        /*
@@ -594,8 +562,6 @@ int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
                               struct kvm_device_attr *attr);

static inline void __cpu_init_stage2(void) {}

/* Guest/host FPSIMD coordination helpers */
int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
12 changes: 6 additions & 6 deletions arch/arm64/include/asm/kvm_hyp.h
@@ -55,12 +55,12 @@

int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);

void __vgic_v3_save_state(struct kvm_vcpu *vcpu);
void __vgic_v3_restore_state(struct kvm_vcpu *vcpu);
void __vgic_v3_activate_traps(struct kvm_vcpu *vcpu);
void __vgic_v3_deactivate_traps(struct kvm_vcpu *vcpu);
void __vgic_v3_save_aprs(struct kvm_vcpu *vcpu);
void __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu);
void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if);
int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);

void __timer_enable_traps(struct kvm_vcpu *vcpu);
