- Mar 19, 2018
Christoffer Dall authored

As we are about to be lazier with some of the trap configuration register reads and writes for VHE systems, move the logic that is currently shared between VHE and non-VHE into a separate function which can be called from either the world-switch path or from vcpu_load/vcpu_put.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
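A minimal sketch of the shared helper described here, assuming illustrative names and an abbreviated register set:

```c
/* Sketch only: trap configuration shared by the VHE and non-VHE paths,
 * callable from the world switch or from vcpu_load()/vcpu_put().
 * Names are modeled on the commit description, not the literal patch. */
static void __activate_traps_common(struct kvm_vcpu *vcpu)
{
	write_sysreg(0, pmselr_el0);                         /* sanitize PMU event selector */
	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); /* trap EL0 PMU access */
	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);         /* debug trap configuration */
}
```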
Christoffer Dall authored

When running a 32-bit VM (EL1 in AArch32), the AArch32 system registers can be deferred to vcpu load/put on VHE systems because neither the host kernel nor host userspace uses these registers.

Note that we can't save DBGVCR32_EL2 conditionally based on the state of the debug dirty flag on VHE after this change, because during vcpu_load() we haven't calculated a valid debug flag yet, and once we've restored the register during vcpu_load() we also have to save it during vcpu_put(). This means that we'll always restore/save the register for VHE on load/put; luckily, vcpu load/put are called rarely, so saving an extra register unconditionally shouldn't significantly hurt performance.

We also cannot defer saving FPEXC32_EL2, because this register only holds a guest-valid value for 32-bit guests on the exit path, when the guest has used FPSIMD registers and the early assembly handler taking the EL2 fault has already restored the register. Therefore we have to check on the exit path whether FPSIMD is enabled for the guest and save the register there, for both VHE and non-VHE guests.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
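A sketch of the exit-path save, assuming a helper name and field layout modeled on the description above:

```c
/* Sketch: called on the exit path once we know the guest used FPSIMD;
 * only 32-bit guests have a meaningful FPEXC32_EL2 value. Names are
 * illustrative. */
static void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
{
	if (!vcpu_el1_is_32bit(vcpu))
		return;

	vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2);
}
```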
Christoffer Dall authored

There is no need to have multiple identical functions with different names for saving host and guest state. When saving and restoring state for the host and guest, the state is the same for both contexts, which is why we have the kvm_cpu_context structure. Delete one version and rename the other to a simple save/restore.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
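A sketch of the unified pair, with the register list abbreviated:

```c
/* Sketch: one save/restore pair operating on kvm_cpu_context, used for
 * both the host and the guest context. Register list abbreviated. */
static void __sysreg_save_state(struct kvm_cpu_context *ctxt)
{
	ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(sctlr);
	ctxt->sys_regs[TTBR0_EL1] = read_sysreg_el1(ttbr0);
	/* ... remaining EL1 state ... */
}

static void __sysreg_restore_state(struct kvm_cpu_context *ctxt)
{
	write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], sctlr);
	write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1], ttbr0);
	/* ... remaining EL1 state ... */
}
```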
Christoffer Dall authored

The comment only applied to SPE on non-VHE systems, so we simply remove it.

Suggested-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Christoffer Dall authored

We are about to handle system registers quite differently between VHE and non-VHE systems. In preparation for that, we need to split some of the handling functions between VHE and non-VHE functionality. For now, we simply copy the non-VHE functions, but we do change the use of static keys for VHE and non-VHE functionality now that we have separate functions.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
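As a rough sketch of the pattern (not the literal patch), a shared helper that used to branch on the VHE static key becomes two dedicated variants:

```c
/* Before: one shared helper, branching on the VHE static key. */
static void __hyp_text __activate_traps_arch(struct kvm_vcpu *vcpu)
{
	if (has_vhe()) {
		/* VHE-specific trap setup */
	} else {
		/* non-VHE-specific trap setup */
	}
}

/* After: each world-switch path calls its own variant directly, e.g.
 * activate_traps_vhe() from the VHE run loop and __activate_traps_nvhe()
 * from the hyp (EL2) run loop; names are illustrative. */
```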
Christoffer Dall authored

The VHE switch function calls __timer_enable_traps and __timer_disable_traps, which don't do anything on VHE systems. Therefore, simply remove these calls from the VHE switch function and make the functions non-conditional, as they are now only called from the non-VHE switch path.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Christoffer Dall authored

There is no need to reset the VTTBR to zero when exiting the guest on VHE systems. VHE systems don't use stage 2 translations for the EL2&0 translation regime used by the host.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Christoffer Dall authored

VHE kernels run completely in EL2 and therefore don't have a notion of separate kernel and hyp addresses; they are all just kernel addresses. Therefore, don't call kern_hyp_va() in the VHE switch function.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Christoffer Dall authored

So far this is mostly (see below) a copy of the legacy non-VHE switch function, but we will start reworking these functions in separate directions to work optimally on VHE and non-VHE in later patches. The only difference after this patch between the VHE and non-VHE run functions is that we omit the branch-predictor variant-2 hardening for QC Falkor CPUs, because this workaround is specific to a series of non-VHE ARMv8.0 CPUs.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Christoffer Dall authored

The current world-switch function has functionality to detect a number of cases where we need to fix up some part of the exit condition and possibly run the guest again, before having restored the host state. This includes populating missing fault info, emulating GICv2 CPU interface accesses when mapped at unaligned addresses, and emulating the GICv3 CPU interface on systems that need it.

As we are about to have an alternative switch function for VHE systems, but VHE systems still need the same early fixup logic, factor out this logic into a separate function that can be shared by both switch functions.

No functional change.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
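A sketch of the factored-out fixup, where a true return value means "re-enter the guest" (function and helper names follow the description but are illustrative):

```c
/* Sketch of the shared early exit fixup; returns true when the guest
 * should be re-entered without restoring host state. */
static bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	if (ARM_EXCEPTION_CODE(*exit_code) == ARM_EXCEPTION_TRAP &&
	    !__populate_fault_info(vcpu))
		return true;	/* fault info fixed up, run the guest again */

	/* ... GICv2 access emulation, GICv3 CPU interface emulation ... */

	return false;		/* genuine exit, fall through to the host */
}
```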
Christoffer Dall authored

Instead of having multiple calls from the world switch path to the debug logic, each figuring out if the dirty bit is set and if we should save/restore the debug registers, let's just provide two hooks to the debug save/restore functionality: one for switching to the guest context, and one for switching to the host context. With that, we only have to evaluate the dirty flag once on each path, and we give the compiler some more room to inline some of this functionality.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Christoffer Dall authored

We have numerous checks around the code that test whether HCR_EL2 has the RW bit set, in order to figure out whether we're running an AArch64 or an AArch32 VM. In some cases, directly checking the RW bit (given its unintuitive name) is a bit confusing, and that's not going to improve as we move logic around for the following patches that optimize KVM on AArch64 hosts with VHE. Therefore, introduce a helper, vcpu_el1_is_32bit, and replace existing direct checks of HCR_EL2.RW with the helper.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
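Given the description, the helper is essentially a one-line inversion of the RW check, e.g.:

```c
static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
{
	return !(vcpu->arch.hcr_el2 & HCR_RW);	/* RW clear => EL1 is AArch32 */
}
```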
Christoffer Dall authored

We currently have a separate read-modify-write of HCR_EL2 on entry to the guest for the sole purpose of setting the VF and VI bits when they are pending. Since this is rarely the case (only when using the userspace IRQ chip with interrupts in flight), let's get rid of this operation and instead modify the bits in vcpu->arch.hcr[_el2] directly when needed.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
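A sketch of the direct modification, with a hypothetical wrapper name:

```c
/* Sketch: pending/clearing a virtual IRQ by editing the stored HCR
 * value instead of a read-modify-write of HCR_EL2 on every entry. */
static void vcpu_set_virtual_irq(struct kvm_vcpu *vcpu, bool pend)
{
	if (pend)
		vcpu->arch.hcr_el2 |= HCR_VI;
	else
		vcpu->arch.hcr_el2 &= ~HCR_VI;
}
```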
Shih-Wei Li authored

We always set the IMO and FMO bits in HCR_EL2 when running the guest, regardless of whether we use the vgic or not. By moving these flags to HCR_GUEST_FLAGS we can avoid one of the extra save/restore operations of HCR_EL2 in the world switch code, and we can also soon get rid of the other one.

This is safe: even though the IMO and FMO bits control both taking interrupts to EL2 and remapping ICC_*_EL1 to ICV_*_EL1 when executed at EL1, as long as we ensure that these bits are clear when running the EL1 host, we're OK, because we reset HCR_EL2 to only have the HCR_RW bit set when returning to EL1 on non-VHE systems.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shih-Wei Li <shihwei@cs.columbia.edu>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
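Schematically (flag list abbreviated, not the exact kernel definition):

```c
/* Sketch: IMO/FMO become part of the baseline guest HCR value rather
 * than being OR'd in at world-switch time. Flag set abbreviated. */
#define HCR_GUEST_FLAGS (HCR_TSC | HCR_TWE | HCR_TWI | HCR_VM | \
			 HCR_RW | HCR_FMO | HCR_IMO)
```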
Christoffer Dall authored

VHE actually doesn't rely on clearing the VTTBR when returning to the host kernel, yet that is currently the key mechanism hyp_panic uses to figure out how to attempt to return to a state good enough to print a panic statement. Therefore, we split the hyp_panic function into two functions, a VHE and a non-VHE version, keeping the non-VHE version intact but changing the VHE behavior.

The vttbr_el2 check doesn't really make much sense on VHE, because the only situation where we can get here on VHE is when the hypervisor assembly code actually called into hyp_panic, which only happens when VBAR_EL2 has been set to the KVM exception vectors. On VHE, we can always safely disable the traps and restore the host registers at this point, so we simply do that unconditionally and call into the panic function directly.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Christoffer Dall authored

We already have the percpu area for the host cpu state, which points to the VCPU, so there's no need to store the VCPU pointer on the stack on every context switch. We can be a little more clever and just use tpidr_el2 for the percpu offset and load the VCPU pointer from the host context.

This has the benefit of being able to retrieve the host context even when our stack is corrupted, and it has a potential performance benefit because we trade a store plus a load for an mrs and a load on a round trip to the guest.

This does require us to calculate the percpu offset without including the offset from the kernel mapping of the percpu array to the linear mapping of the array (which is what we store in tpidr_el1), because a PC-relative generated address in EL2 is already giving us the hyp alias of the linear mapping of a kernel address. We do this in __cpu_init_hyp_mode() by using kvm_ksym_ref().

The code that accesses ESR_EL2 was previously using an alternative to use the _EL1 accessor on VHE systems, but this was actually unnecessary as the _EL1 accessor aliases the ESR_EL2 register on VHE, and the _EL2 accessor does the same thing on both systems.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
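A sketch of the lookup enabled by this change (symbol names are illustrative):

```c
/* Sketch: hyp code can now find the host context, and from it the
 * running vcpu, via the per-cpu offset held in tpidr_el2, even with a
 * corrupted stack. */
static struct kvm_vcpu *__hyp_text get_running_vcpu(void)
{
	struct kvm_cpu_context *host_ctxt;

	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
	return host_ctxt->__hyp_running_vcpu;
}
```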
- Feb 26, 2018
Dave Martin authored

The HCR_EL2.TID3 flag needs to be set when trapping guest access to the CPU ID registers is required. However, the decision about whether to set this bit does not need to be repeated at every switch to the guest. Instead, it's sufficient to make this decision once and record the outcome.

This patch moves the decision to vcpu_reset_hcr() and records the choice made in vcpu->arch.hcr_el2. The world switch code can then load this directly when switching to the guest, without the need for conditional logic on the critical path.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Suggested-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
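A sketch of the reset-time decision (function body abbreviated; the AArch64-only condition follows the related ID-register series, which sets TID3 for AArch64 guests):

```c
/* Sketch: decide once at reset instead of on every world switch. */
static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
{
	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
	/* ... other reset-time HCR adjustments ... */
	if (!test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
		vcpu->arch.hcr_el2 |= HCR_TID3;	/* trap guest CPU ID register reads */
}
```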
- Feb 12, 2018
Shanker Donthineni authored

References to the CPU part number MIDR_QCOM_FALKOR were dropped from the mailing list patch due to a mainline/arm64 branch dependency, so this patch adds the missing part number.

Fixes: ec82b567 ("arm64: Implement branch predictor hardening for Falkor")
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Feb 07, 2018
Marc Zyngier authored

Now that we've standardised on SMCCC v1.1 to perform the branch prediction invalidation, let's drop the previous band-aid. If vendors haven't updated their firmware to do SMCCC 1.1, they haven't updated PSCI either, so we don't lose anything.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Marc Zyngier authored

We're about to need kvm_psci_version in HYP too. So let's turn it into a static inline, and pass the kvm structure as a second parameter (so that HYP can do a kern_hyp_va on it).

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
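A sketch of the new shape (version constants illustrative); the kvm parameter exists so a HYP caller can pass a kern_hyp_va()-translated pointer:

```c
/* Sketch only: the returned version constants may differ from the
 * actual kernel definitions. */
static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
{
	if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
		return KVM_ARM_PSCI_0_2;

	return KVM_ARM_PSCI_0_1;
}
```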
- Jan 16, 2018
James Morse authored

When we exit a guest due to an SError, the vcpu fault info isn't updated with the ESR. Today this is only done for traps. The v8.2 RAS Extensions define ISS values for SError. Update the vcpu's fault_info with the ESR on SError so that handle_exit() can determine if this was a RAS SError and decode its severity.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
James Morse authored

Prior to v8.2's RAS Extensions, the HCR_EL2.VSE 'virtual SError' feature generated an SError with an implementation defined ESR_EL1.ISS, because we had no mechanism to specify the ESR value. On Juno this generates an all-zero ESR; the most significant bit, 'ISV', is clear, indicating the remainder of the ISS field is invalid.

With the RAS Extensions we have a mechanism to specify this value, and the most significant bit has a new meaning: 'IDS - Implementation Defined Syndrome'. An all-zero SError ESR now means 'RAS error: Uncategorized' instead of 'no valid ISS'.

Add KVM support for the VSESR_EL2 register to specify an ESR value when HCR_EL2.VSE generates a virtual SError. Change kvm_inject_vabt() to specify an implementation-defined value. We only need to restore the VSESR_EL2 value when HCR_EL2.VSE is set: KVM saves/restores this bit during __{,de}activate_traps() and hardware clears the bit once the guest has consumed the virtual SError.

Future patches may add an API (or KVM CAP) to pend a virtual SError with a specified ESR.

Cc: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
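A sketch of the conditional restore (helper name illustrative):

```c
/* Sketch: only restore VSESR_EL2 while a virtual SError is pending;
 * hardware clears HCR_EL2.VSE once the guest consumes the abort. */
static void __hyp_text __sysreg_restore_vsesr(struct kvm_vcpu *vcpu)
{
	if (vcpu->arch.hcr_el2 & HCR_VSE)
		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
}
```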
- Jan 13, 2018
James Morse authored

Make tpidr_el2 a cpu-offset for per-cpu variables in the same way the host uses tpidr_el1. This lets tpidr_el{1,2} have the same value, and on VHE they can be the same register.

KVM calls hyp_panic() when anything unexpected happens. This may occur while a guest owns the EL1 registers. KVM stashes the vcpu pointer in tpidr_el2, which it uses to find the host context in order to restore the host EL1 registers before parachuting into the host's panic(). The host context is a struct kvm_cpu_context allocated in the per-cpu area and mapped to hyp. Given the per-cpu offset for this CPU, this is easy to find.

Change hyp_panic() to take a pointer to the struct kvm_cpu_context. Wrap these calls with an asm function that retrieves the struct kvm_cpu_context from the host's per-cpu area. Copy the per-cpu offset from the host's tpidr_el1 into tpidr_el2 during kvm init. (Later patches will make this unnecessary for VHE hosts.)

We print out the vcpu pointer as part of the panic message. Add a back reference to the 'running vcpu' in the host cpu context to preserve this.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Jan 09, 2018
Shanker Donthineni authored

Falkor is susceptible to branch predictor aliasing and can theoretically be attacked by malicious code. This patch implements a mitigation for these attacks, preventing any malicious entries from affecting other victim contexts.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
[will: fix label name when !CONFIG_KVM and remove references to MIDR_FALKOR]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Marc Zyngier authored

For those CPUs that require PSCI to perform a BP invalidation, going all the way to the PSCI code for not much is a waste of precious cycles. Let's terminate that call as early as possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Marc Zyngier authored

Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Jan 08, 2018
Marc Zyngier authored

kvm_hyp.h has an odd dependency on kvm_mmu.h, which makes the opposite inclusion impossible. Let's start with breaking that useless dependency.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
- Nov 30, 2017
Alex Bennée authored

There is a fast-path of MMIO emulation inside hyp mode. The handling of single-step there is broadly the same as in kvm_arm_handle_step_debug(), except we just set up the ESR/HSR so that handle_exit() does the correct thing as we exit.

For the case of an emulated illegal access causing an SError, we will exit via the ARM_EXCEPTION_EL1_SERROR path in handle_exit(). We behave as we would during a real SError and clear the DBG_SPSR_SS bit for the emulated instruction.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
- Nov 06, 2017
Christoffer Dall authored

As we are about to be lazy with saving and restoring the timer registers, we prepare by moving all possible timer configuration logic out of the hyp code. All virtual timer registers can be programmed from EL1, and since the arch timer is always a level-triggered interrupt, we can safely do this with interrupts disabled in the host kernel on the way to the guest, without taking vtimer interrupts in the host kernel (yet).

The downside is that the cntvoff register can only be programmed from hyp mode, so we jump into hyp mode and back to program it. This is also safe, because the host kernel doesn't use the virtual timer in the KVM code. It may add a small performance penalty, but only until the following commits, where we move this operation to vcpu load/put.

Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
- Nov 03, 2017
Dave Martin authored

Until KVM has full SVE support, guests must not be allowed to execute SVE instructions. This patch enables the necessary traps, and also ensures that the traps are disabled again on exit from the guest so that the host can still use SVE if it wants to.

On guest exit, high bits of the SVE Zn registers may have been clobbered as a side-effect of the execution of FPSIMD instructions in the guest. The existing KVM host FPSIMD restore code is not sufficient to restore these bits, so this patch explicitly marks the CPU as not containing cached vector state for any task, thus forcing a reload on the next return to userspace. This is an interim measure, in advance of adding full SVE awareness to KVM.

This marking of cached vector state in the CPU as invalid is done using __this_cpu_write(fpsimd_last_state, NULL) in fpsimd.c. Due to the repeated use of this rather obscure operation, it makes sense to factor it out as a separate helper with a clearer name. This patch factors it out as fpsimd_flush_cpu_state(), and ports all callers to use it.

As a side effect of this refactoring, a this_cpu_write() in fpsimd_cpu_pm_notifier() is changed to __this_cpu_write(). This should be fine, since cpu_pm_enter() is supposed to be called only with interrupts disabled.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
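The helper factored out here, per the commit's own description:

```c
/* Invalidate any cached FPSIMD state for this CPU so the next return
 * to userspace reloads it from the task's saved state. */
static inline void fpsimd_flush_cpu_state(void)
{
	__this_cpu_write(fpsimd_last_state, NULL);
}
```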
Dave Martin authored

Currently, a guest kernel sees the true CPU feature registers (ID_*_EL1) when it reads them using MRS instructions. This means that the guest may observe features that are present in the hardware but that the host doesn't understand or doesn't provide support for. A guest may legitimately try to use such a feature as per the architecture, but use of the feature may trap instead of working normally, triggering undef injection into the guest. This is not a problem for the host, but the guest may go wrong when running on newer hardware than the host knows about.

This patch hides from guest VMs any AArch64-specific CPU features that the host doesn't support, by exposing to the guest the sanitised versions of the registers computed by the cpufeatures framework, instead of the true hardware registers. To achieve this, HCR_EL2.TID3 is now set for AArch64 guests, and emulation code is added to KVM to report the sanitised versions of the affected registers in response to MRS and register reads from userspace.

The affected registers are removed from invariant_sys_regs[] (since the invariant_sys_regs handling is no longer quite correct for them) and added to sys_reg_descs[], with appropriate access(), get_user() and set_user() methods. No runtime vcpu storage is allocated for the registers: instead, they are read on demand from the cpufeatures framework. This may need modification in the future if there is a need for userspace to customise the features visible to the guest.

Attempts by userspace to write the registers are handled similarly to the current invariant_sys_regs handling: writes are permitted, but only if they don't attempt to change the value. This is sufficient to support VM snapshot/restore from userspace. Because of the additional registers, restoring a VM on an older kernel may not work unless userspace knows how to handle the extra VM registers exposed to the KVM user ABI by this patch.

Under the principle of least damage, this patch makes no attempt to handle any of the other registers currently in invariant_sys_regs[], or to emulate registers for AArch32; however, these could be handled in a similar way in future, as necessary.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
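A sketch of the on-demand read from the cpufeatures framework (helper name illustrative):

```c
/* Sketch: answer a trapped MRS from the sanitised view of the ID
 * register rather than the raw hardware value. */
static u64 read_id_reg(const struct sys_reg_desc *r)
{
	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);

	return read_sanitised_ftr_reg(id);
}
```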
- Jun 15, 2017
Marc Zyngier authored

In order to start handling guest access to GICv3 system registers, let's add a hook that will get called when we trap a system register access. This is gated by a new static key (vgic_v3_cpuif_trap).

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
- May 16, 2017
James Morse authored

When KVM panics, it hurriedly restores the host context and parachutes into the host's panic() code. At some point panic() touches the physical timer/counter. Unless we are an arm64 system with VHE, this traps back to EL2. If we're lucky, we panic again.

Add a __timer_save_state() call to KVM's hyp_panic() path; this saves the guest registers and disables the traps for the host.

Fixes: 53fd5b64 ("arm64: KVM: Add panic handling")
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
- Feb 03, 2017
Will Deacon authored

The SPE buffer is virtually addressed, using the page tables of the CPU MMU. Unusually, this means that the EL0/1 page table may be live whilst we're executing at EL2 on non-VHE configurations. When VHE is in use, we can use the same property to profile the guest behind its back.

This patch adds the relevant disabling and flushing code to KVM so that the host can make use of SPE without corrupting guest memory, and any attempts by a guest to use SPE will result in a trap.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Dec 09, 2016
Marc Zyngier authored

The ARMv8 architecture allows the cycle counter to be configured by setting PMSELR_EL0.SEL==0x1f and then accessing PMXEVTYPER_EL0, hence accessing PMCCFILTR_EL0. But it disallows the use of PMSELR_EL0.SEL==0x1f to access the cycle counter itself through PMXEVCNTR_EL0.

Linux itself doesn't violate this rule, but we may end up with PMSELR_EL0.SEL being set to 0x1f when we enter a guest. If that guest accesses PMXEVCNTR_EL0, the access may UNDEF at EL1, despite the guest not having done anything wrong.

In order to avoid this unfortunate course of events (haha!), let's sanitize PMSELR_EL0 on guest entry. This ensures that the guest won't explode unexpectedly.

Cc: stable@vger.kernel.org #4.6+
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
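A sketch of the entry-path sanitisation (surrounding trap setup omitted):

```c
/* Sketch: clear PMSELR_EL0 on entry so a leftover SEL == 0x1f can't
 * make a legitimate guest PMXEVCNTR_EL0 access UNDEF at EL1. */
static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
{
	/* ... other trap configuration ... */
	write_sysreg(0, pmselr_el0);
}
```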
- Nov 17, 2016
Suzuki K Poulose authored

The arm64 kernel assumes that FP/ASIMD units are always present and accesses the FP/ASIMD-specific registers unconditionally. This could cause problems when they are absent. This patch adds support for the kernel to handle systems without FP/ASIMD by skipping the register accesses within the kernel. For KVM, we trap accesses to FP/ASIMD and inject an undefined instruction exception into the VM. The callers of the exported kernel_neon_begin_partial() should make sure that FP/ASIMD is supported.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: add comment on the ARM64_HAS_NO_FPSIMD conflict and the new location]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Sep 22, 2016
Vladimir Murzin authored

Currently the GIC backend is selected via the alternatives framework, and this is fine. We are going to introduce vgic-v3 to the 32-bit world, where we don't have the patching framework at hand, so we can either check support for GICv3 every time we need to choose which backend to use, or try to optimise it by using static keys. The latter looks quite promising, because we can share the logic involved in selecting the GIC backend between architectures if both use static keys.

This patch moves arm64 from the alternatives framework to static keys for selecting the GIC backend. For that we embed a static key into vgic_global and enable the key during vgic initialisation based on what has already been exposed by the host GIC driver.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
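A sketch of the resulting backend selection on the save path, assuming the static key lives in kvm_vgic_global_state:

```c
/* Sketch: the backend is chosen through a static key set once at vgic
 * init, so the common case pays no runtime conditional. */
static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu)
{
	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
		__vgic_v3_save_state(vcpu);
	else
		__vgic_v2_save_state(vcpu);
}
```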
- Sep 08, 2016
Marc Zyngier authored

If, when proxying a GICV access at EL2, we detect that the guest is doing something silly, report an EL1 SError instead of ignoring the access.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Marc Zyngier authored

If EL1 generates an asynchronous abort and then traps into EL2 before the abort has been delivered, we may end up with the abort firing at the worst possible place: on the host. In order to avoid this, it is necessary to take the abort at EL2, by clearing the PSTATE.A bit. In order to survive this abort, we do it at a point where we're in a known state with respect to the world switch, and handle the resulting exception, overloading the exit code in the process.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Marc Zyngier authored

The HCR_EL2.VSE bit is used to signal an SError to a guest, and has the peculiar feature of getting cleared when the guest has taken the abort (this is the only bit that behaves as such in this register). This means that if we signal such an abort, we must leave it in the guest context until it disappears from HCR_EL2, at which point it must be cleared from the context. This is achieved by reading back from HCR_EL2 until the guest takes the fault.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
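A sketch of the read-back on the exit path:

```c
/* Sketch: refresh the shadow HCR value while a virtual SError is
 * pending; hardware clears VSE once the guest takes the abort. */
if (vcpu->arch.hcr_el2 & HCR_VSE)
	vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
```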