KVM: x86/mmu: Expand on the comment in kvm_vcpu_ad_need_write_protect()
author Sean Christopherson <seanjc@google.com>
Sat, 13 Feb 2021 00:50:08 +0000 (16:50 -0800)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sat, 20 Mar 2021 09:51:11 +0000 (10:51 +0100)
[ Upstream commit 2855f98265dc579bd2becb79ce0156d08e0df813 ]

Expand the comment about the need to use write-protection for nested EPT
when PML is enabled to clarify that the tagging is a nop when PML is
_not_ enabled.  Without the clarification, omitting the PML check looks
wrong at first^Wfifth glance.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
arch/x86/kvm/mmu/mmu_internal.h

index bfc6389edc28a4c0d0d27ce95cc33276efe04b27..8404145fb179aa475c13b7a3ac8fccf92ac92560 100644
@@ -79,7 +79,10 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
         * When using the EPT page-modification log, the GPAs in the log
         * would come from L2 rather than L1.  Therefore, we need to rely
         * on write protection to record dirty pages.  This also bypasses
-        * PML, since writes now result in a vmexit.
+        * PML, since writes now result in a vmexit.  Note, this helper will
+        * tag SPTEs as needing write-protection even if PML is disabled or
+        * unsupported, but that's ok because the tag is consumed if and only
+        * if PML is enabled.  Omit the PML check to save a few uops.
         */
        return vcpu->arch.mmu == &vcpu->arch.guest_mmu;
 }
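
For illustration only, here is a minimal user-space sketch of the producer/consumer
relationship the expanded comment describes: the producer tags entries without
checking PML, and the consumer only acts on the tag when PML is enabled, so the
omitted check is harmless.  This is not kernel code; the struct fields and the
need_write_protect_tag()/must_write_protect() helpers are hypothetical stand-ins
for the real SPTE and dirty-logging paths.

/* Standalone sketch; build with any C compiler, e.g. gcc -Wall sketch.c */
#include <stdbool.h>
#include <stdio.h>

struct vcpu {
	bool using_nested_mmu;	/* stands in for mmu == &guest_mmu */
	bool pml_enabled;	/* stands in for PML being enabled and supported */
};

/* Producer: tag whenever the nested MMU is in use, with no PML check. */
static bool need_write_protect_tag(const struct vcpu *v)
{
	return v->using_nested_mmu;
}

/* Consumer: the tag is only honored when PML is enabled. */
static bool must_write_protect(const struct vcpu *v, bool tagged)
{
	return v->pml_enabled && tagged;
}

int main(void)
{
	struct vcpu cases[] = {
		{ .using_nested_mmu = true,  .pml_enabled = true  },
		{ .using_nested_mmu = true,  .pml_enabled = false },
		{ .using_nested_mmu = false, .pml_enabled = true  },
	};

	for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		bool tag = need_write_protect_tag(&cases[i]);

		printf("nested=%d pml=%d -> tagged=%d write-protect=%d\n",
		       cases[i].using_nested_mmu, cases[i].pml_enabled,
		       tag, must_write_protect(&cases[i], tag));
	}
	return 0;
}

In the second case the tag is set even though PML is disabled, but the consumer
never acts on it, which is exactly why the comment calls the tagging a nop and
why skipping the PML check in the producer is safe.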