Hardware virtualization


General Idea

Virtualization is an extension of the core computer science idea of abstracting resources, taken to the point where software cannot talk to the hardware directly in any way. Completely abstracting the hardware allows entirely separate processing streams to run at the same time without interfering, essentially multiple operating systems on the same machine.


Proper virtualization is defined by Popek and Goldberg in their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures"[1], which presents the following qualifiers:

  1. Equivalence: A program running under virtualization should behave identically to the same program running directly on the emulated hardware.
  2. Resource control: The managing software must have complete control over all virtualized resources.
  3. Efficiency: A large percentage of the virtual machine's instructions must be executed directly by the hardware, without intervention from the controlling software.

Modern-day implementations adhere to criteria 1 and 2, but criterion 3 has been relaxed into a general judgment of efficiency.


While virtualization has been implemented successfully in pure software, that approach incurs the associated software overhead. A now-viable alternative is hardware-based virtualization.


Hardware Virtualization:

While many hardware architectures, starting with the IBM 7044[2] in the mid-1960s, were designed with hardware virtualization in mind, the x86 line was not. Recently, both AMD and Intel have added their own proprietary virtualization extensions to their processors.



AMD-V

AMD-V starts with the IOMMU, the on-chip virtualization module. It sits on the HyperTransport bus and intercepts a specific range of address requests. When the resident OS writes "0b" to the IOMMU Control Register, the IOMMU is activated. The OS then sets up the guest OS's address space by specifying the start of the space and its length. Next it sets up an internal interrupt table mask, which specifies the interrupts available to the guest OS. For each interrupt, the host OS can specify whether it should be allowed or whether it should cause a switch back to the host OS. This is combined with a set of registers passed back to the host OS with information about the last call. Lastly, the host sets the Control Register to "1b", switching the processor to ring level -1. All of the host's registers are saved and swapped for the guest's. The floating-point stack, however, is not swapped; instead, a separate entry is added to the interrupt redirection table, so the floating-point stack is switched out only when needed, saving time.[3]


Intel VT-i

The Intel VT technology shares most of these features, but with different implementation details. VT has an additional call that lets the guest OS declare where its interrupt indirection table is, if that call is allowed in the interrupt table. There is also a separate bitmap for each of the chip's registers that can be set individually by the host OS. When the CPU switches back to the guest OS, an interrupt can be triggered to tell the guest OS that it has regained control. An added bit array lets the processor support multiple levels of virtual OS recursion: a group of interrupts hard-coded into the CPU, when called, passes focus back to the previous OS and removes the last entry from the bit array. This is handled by the Intel PAL architecture, which also handles the pushing and popping of processor state.[4]


-By Ben Hartman, for CS 441