Virtualisation

Practice

Go to the Yoobee Build Sandbox in Practice Labs and build two Virtual Machines (VMs) with different operating systems.

Explore

Explore Lesson 7 of the CompTIA A+ resource on Virtualisation and Cloud Concepts.

In this topic, we will explore the differences between physical machines and virtual machines to determine which one is more suitable for business implementation and why.  

Our focus here is not necessarily on the Cloud but on virtual and physical machines that are within a business or a data centre. 

Ultimately, whether you are in the business or IT industry, the overarching goals remain consistent: 

  • minimising costs,  
  • maximising productivity,  
  • ensuring system security, and  
  • establishing robust disaster recovery measures.  

These considerations are paramount when determining whether to opt for a physical or virtual infrastructure solution.

Why Virtualisation? 

After exploring Lesson 7 of the CompTIA resource, the first simple question we want to answer is: “Why is virtualisation so important, or even needed?” 

To provide some context to this question, let’s try to understand the drawbacks of relying solely on physical machines. 

Common Challenges with Running Physical Servers 

In the traditional computing model, infrastructure is primarily thought of as hardware. Hardware solutions are physical (hardware you can touch and feel, sometimes called “bare metal”), requiring space, staff, physical security, planning, and capital expenditure.  

Watch - Physical server vs Virtual servers (9:33 minutes)

As you watch the video, take notes about the consequences of both overestimated and underestimated server capacity, e.g., the weeks that can pass between wanting resources and having them.

As you may have noted, some of the common drawbacks and limitations of relying solely on physical machines can be: 

  • Resource Underutilisation: Physical servers often run at less than maximum capacity, leading to wasted resources and inefficiencies. 
  • Total Cost of Ownership (TCO): The total cost of owning and operating physical servers, including hardware, maintenance, and energy consumption, can be significantly higher than in virtualised environments. Managing and maintaining a large fleet of physical machines can be expensive in terms of both time and resources. 
  • Waiting Time for Resources: Provisioning new physical hardware or reallocating resources can involve long lead times, delaying the business's ability to meet its needs and reducing agility. Beyond the significant upfront investment, the traditional computing model imposes a long hardware procurement cycle of acquiring, provisioning, and maintaining on-premises infrastructure, which often leads to inefficiencies in resource allocation. 
  • Scalability Challenges: As you know, scalability is vital when considering enterprise hardware. Adding or removing physical hardware to accommodate changing workloads can be time-consuming and costly. 
  • Limited Disaster Recovery: Recovering from hardware failures or disasters typically involves lengthy downtimes and data loss. 
  • Fault Tolerance: Physical machines have limited built-in fault tolerance mechanisms, making them more susceptible to failures and downtime than virtualised environments with features like live migration and high availability. 
  • Environmental Impact: Operating numerous physical servers consumes significant energy and contributes to carbon emissions.
  • Overestimated and Underestimated Capacity: Predicting the exact resource needs of physical servers can be challenging, leading to instances of either over-provisioning (wasting resources) or under-provisioning (performance bottlenecks).  
A close view of a computer server rack

With a hardware solution, you must ask whether there is enough compute capacity or storage to meet your needs, and you provision capacity by guessing at theoretical maximum peaks. If demand does not reach your projected maximum peak, you pay for expensive resources that sit idle. If demand exceeds your projected maximum peak, you do not have sufficient capacity to meet your needs. And if your needs change, you must spend the time, effort, and money required to implement a new solution. 
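To make the trade-off concrete, here is a back-of-the-envelope sketch in Python. All of the figures (capacity units, costs, demand peaks) are hypothetical and exist only to illustrate the cost of guessing wrongly in either direction:

    # Back-of-the-envelope capacity planning with made-up figures.
    provisioned = 100      # capacity purchased (arbitrary units)
    cost_per_unit = 50.0   # yearly cost per unit of capacity ($, hypothetical)

    for actual_peak in (60, 100, 140):  # demand below, at, and above the guess
        idle = max(0, provisioned - actual_peak)       # over-provisioned waste
        shortfall = max(0, actual_peak - provisioned)  # under-provisioned gap
        print(f"peak={actual_peak:>3}: ${idle * cost_per_unit:,.0f}/yr idle, "
              f"{shortfall} units of unmet demand")

Whatever numbers you plug in, one side of the guess wastes money and the other turns users away; virtualisation lets capacity be reallocated instead of re-purchased.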

For example, if you wanted to provision a new website, you would need to buy the hardware, rack and stack it, put it in a data centre, and then manage it or have someone else manage it. This approach is expensive and time-consuming. 

By understanding these limitations, we can appreciate why virtualisation has become increasingly vital in modern IT environments, and how it addresses these challenges to improve efficiency, flexibility, and resilience in IT infrastructure.

A person working on a computer in an office

Introduction to Virtualisation and Emulation

Virtualisation and emulation are both techniques used to create virtual versions of computing environments, but they serve different purposes and operate in distinct ways. 

In computing, virtualisation is the act of creating a virtual (rather than actual or physical) version of something at the same abstraction level: virtual computer hardware platforms, storage devices, operating systems, or computer network resources. It enables the consolidation and abstraction of these physical resources, allowing multiple virtual instances to run on a single physical infrastructure. 

To understand virtualisation better, let’s consider hardware virtualisation in more detail: 

Hardware Virtualisation 

Hardware virtualisation refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Ubuntu may host a virtual machine that looks like a computer with the Microsoft Windows operating system; Windows-based software can be run on the virtual machine. 

Host Machine and Guest Machine 

  • Host machine: the physical machine on which the virtualisation runs, i.e., the machine running the hypervisor or emulation. 
  • Guest machine: the virtual machine itself. 

The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. 

Hypervisors

A hypervisor is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The main idea is that we run a piece of software, generally called a hypervisor, on top of our hardware, providing an environment in which guest operating systems can run. 

The diagram below shows what is in a physical machine:

The diagram below shows the inner workings of a virtual machine:

The diagram below shows the virtualisation and emulation of physical hardware resources in a PC:

The first three diagrams above give a clear visual idea of how a physical hardware component is divided into smaller independent units. 

We can see the hypervisor layer, which enables the division of a single physical resource into two smaller independent virtual units. This is virtualisation – made possible by the hypervisor layer. 

Each of the small virtual units can emulate the behaviour of the underlying physical resource. 

In a nutshell, virtualisation involves creating a virtual version of a computing environment, including hardware platforms, storage devices, and network resources. It allows multiple virtual machines (VMs) to run on a single physical machine, sharing the underlying hardware resources. A common example of virtualisation is VMware Workstation, where you can run multiple virtual machines with different operating systems on a single physical computer. Each virtual machine operates as if it has its own dedicated hardware, but in reality, they share the resources of the host machine. 

On the other hand, emulation involves using software to simulate hardware that you do not have: mimicking the hardware and functionality of one system on a different system. Emulators reproduce the exact behaviour of the original hardware, allowing software designed for one platform to run on a completely different platform. An example of emulation is the Dolphin emulator, which allows games designed for the Nintendo GameCube and Wii consoles to be played on a PC. The emulator replicates the hardware of these gaming consoles, enabling the games to run as if they were on the original hardware. 

Virtualisation and emulation often go hand-in-hand, e.g., we can virtualise a PC by using it to emulate a collection of less-powerful PCs. In software, you can simulate the behaviour of a device that does not exist, for example, emulating a CD-ROM drive using an ISO file: a virtual CD-ROM drive is created on your computer and the ISO file is mounted as if it were a physical disc. You can simulate any hardware, including the CPU or an entire system! 
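As a small illustration, here is a minimal sketch of mounting an ISO as a virtual CD-ROM on a Linux host. It assumes root privileges and a file named disc.iso in the current directory; the paths are examples only:

    # Minimal sketch: present an ISO file as a virtual CD-ROM drive (Linux).
    import subprocess

    subprocess.run(["mkdir", "-p", "/mnt/virtual_cdrom"], check=True)
    # "-o loop,ro" tells mount to treat the ISO file as a read-only block device.
    subprocess.run(["mount", "-o", "loop,ro", "disc.iso", "/mnt/virtual_cdrom"],
                   check=True)
    # The ISO's contents now appear under /mnt/virtual_cdrom exactly as if a
    # physical disc had been inserted. Clean up with: umount /mnt/virtual_cdrom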

An example of an entire emulated system is the Android SDK emulator, which emulates an Android smartphone with an ARM CPU. The "screen" is mapped to a window on your PC as an emulated mobile device. 


There is no physical phone hardware. So, when the software executes an instruction that tries to write to the "screen", this is intercepted and does something else. It instead updates a buffer in memory which then gets drawn in a window. The software running inside the emulator is unaware that this is happening. 
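The following toy sketch in Python illustrates that interception idea. It is entirely hypothetical, far simpler than any real emulator, and only shows the principle of redirecting "screen" writes into a memory buffer:

    # Toy emulated "screen": guest writes are intercepted into a buffer that
    # the host later draws, instead of touching real display hardware.
    class EmulatedScreen:
        def __init__(self, width, height):
            self.width = width
            self.buffer = [" "] * (width * height)  # in-memory framebuffer

        def write(self, address, char):
            # The guest believes it is writing to screen memory; we simply
            # update our buffer, and the guest never knows the difference.
            self.buffer[address] = char

        def draw(self):
            # The host periodically renders the buffer (here, just print it).
            for row in range(0, len(self.buffer), self.width):
                print("".join(self.buffer[row:row + self.width]))

    screen = EmulatedScreen(width=8, height=2)
    for i, ch in enumerate("HELLO"):
        screen.write(i, ch)  # the guest "writes to the screen"
    screen.draw()            # the host draws the buffer in a window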

You can see the elements of virtualised infrastructure below:

The image above shows a specific type of virtualised environment. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Unlike an emulator, the guest executes most instructions on the native hardware. 

With emulation, we create a virtual environment that models a hardware architecture that is different from the underlying host. We usually do this to develop software for hardware that does not yet exist or to run old software built for an architecture that is no longer available. 

QEMU (Quick Emulator) is a free and open-source emulator. It emulates a computer's processor through dynamic binary translation and provides a set of different hardware and device models for the machine, enabling it to run a variety of guest operating systems. QEMU can emulate an entire PC (i386 processor and interfaces). 
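As a hedged example, the sketch below launches a guest from an ISO image under QEMU via Python. It assumes qemu-system-i386 is installed and that a file named guest.iso exists; the flags shown are standard QEMU options:

    # Minimal sketch: boot a guest OS from an ISO image under QEMU.
    import subprocess

    subprocess.run([
        "qemu-system-i386",     # emulate an i386 PC
        "-m", "512",            # give the guest 512 MB of RAM
        "-cdrom", "guest.iso",  # attach the ISO as a virtual CD-ROM drive
        "-boot", "d",           # boot from the CD-ROM first
    ])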

Types of Hypervisors 

There are two basic ways of implementing a hypervisor: 

Type-1, native, or bare-metal hypervisors:

These hypervisors run directly on the host's hardware to control the hardware and manage guest operating systems. For this reason, they are sometimes called bare-metal hypervisors. A few examples of Type-1 hypervisors are Citrix XenServer, VMware ESXi, and Microsoft Hyper-V. 

The diagram below shows a Type 1, bare-metal hypervisor, which is installed directly on the host hardware along with the management application. The VMs are installed on top of the hypervisor:

Type-2, guest OS, or hosted hypervisors: 

These hypervisors run on a conventional operating system (OS) just as other computer programs do. A virtual machine monitor runs as a process on the host, such as VirtualBox. Type-2 hypervisors abstract guest operating systems from the host operating system, effectively creating an isolated system that the host can interact with. A few examples of Type-2 hypervisors are Microsoft Virtual PC, Oracle VirtualBox, VMware Workstation, and VMware Fusion. Some of these names may be familiar, as a Type-2 hypervisor is what we mostly use on a PC to run a virtual machine for learning or fun. 
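To give a feel for how a Type-2 hypervisor is driven in practice, here is a sketch that creates and starts a VM using VirtualBox's VBoxManage command-line tool (wrapped in Python). The VM name, OS type, memory, and disk size are example values, and VirtualBox must already be installed:

    # Sketch: create and start a VM with VirtualBox's VBoxManage CLI.
    import subprocess

    def vbox(*args):
        subprocess.run(["VBoxManage", *args], check=True)

    vbox("createvm", "--name", "DemoVM", "--ostype", "Ubuntu_64", "--register")
    vbox("modifyvm", "DemoVM", "--memory", "2048", "--cpus", "2")
    vbox("createmedium", "disk", "--filename", "DemoVM.vdi", "--size", "20480")
    vbox("storagectl", "DemoVM", "--name", "SATA", "--add", "sata")
    vbox("storageattach", "DemoVM", "--storagectl", "SATA", "--port", "0",
         "--device", "0", "--type", "hdd", "--medium", "DemoVM.vdi")
    vbox("startvm", "DemoVM")  # boots the (still empty) VM in a window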

The diagram below shows a representation of Type 2-based virtualised infrastructure. The hypervisor is an application running within a native OS, and guest OSes are installed on top of the hypervisor:

The Kernel-based Virtual Machine (KVM)

The Kernel-based Virtual Machine (KVM) is a virtualisation module in the Linux kernel that allows the kernel to function as a hypervisor. This open-source, Linux-based hypervisor is usually classified as a Type-1 hypervisor, since it turns the Linux kernel itself into a “bare-metal” hypervisor; at the same time, the overall system is sometimes categorised as Type-2 because a fully functional operating system is used.
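On a Linux host you can check for the hardware support KVM relies on. The sketch below (Linux-only) looks for the CPU virtualisation flags, where vmx indicates Intel VT-x and svm indicates AMD-V, and checks whether the /dev/kvm device exists, which appears when the KVM module is loaded:

    # Sketch: check for hardware virtualisation support on a Linux host.
    import os

    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    print("Intel VT-x (vmx):", "vmx" in flags)
    print("AMD-V (svm):    ", "svm" in flags)
    print("KVM available:  ", os.path.exists("/dev/kvm"))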

Watch - Virtualisation explained (5:20 minutes)

Watch this brief video to gain a clearer understanding of the fundamental advantages provided by different types of hypervisors used in virtualisation. 

Activity

What type of hypervisor is used in the diagram below? How do you know this?

Watch - Virtualisation explained (8:06 minutes)

Review the following video to ensure you have a good grasp of virtualisation so you can complete Checkpoint 1.

 

Read Checkpoint 1 and complete the tasks. Add to your Hardware Task Forum and discuss your work in the next Live Session.


Now that we understand the underlying technology that drives the creation of virtual machines (VMs), let’s discuss a bit more about how to create a VM, what creating one requires, and the difference between VMs and physical machines. 

What we know now about Virtualisation

At this stage you should be aware that: 

  • A virtual machine (VM) is the virtualisation or emulation of a computer system. Each emulated PC is a 'virtual machine'. 
  • Hypervisors allocate some real system RAM to each VM and share the CPU's time. 
  • Hypervisors emulate other hardware, e.g., disk and network interfaces. 
  • Within each VM you can boot an operating system. 
  • Virtual machines are based on computer architectures and provide the functionality of a physical computer. 
  • Virtualisation separates multiple software environments (OS, drivers, and applications) from each other and from the physical hardware by using a hypervisor to mediate access. 
  • Full hardware virtualisation means different VMs can run different OSes. 

Creating a Virtual Machine (VM)

In general, what do we need? 

To emulate a PC, we must emulate all the components of the PC: hard disk interface, network card, graphics card, keyboard, mouse, clock, memory management unit, etc. 

Multiple instances of these components need to co-exist without being able to interfere with each other (memory access must also be controlled). The software that does this is called a hypervisor. 

Therefore, our virtualisation resource requirements include: 

  • CPU platform: including virtualisation extensions (Intel VT-x or AMD-V) and virtual CPU allocation. 
  • System RAM allocation: each guest OS requires sufficient system memory over and above what is required by the host OS/hypervisor. For example, it is recommended that Windows 10 be installed on a computer with at least 2 GB of memory, so a virtualisation workstation must have at least 4 GB of RAM to run the host and a single Windows 10 guest OS. If you want to run multiple guest OSes concurrently, the resource demands quickly add up (see the sketch after this list). 
  • Mass storage space for disk images: each guest OS also takes up a substantial amount of disk space. The VM's "hard disk" is stored as an image file on the host. 
  • Networking: a hypervisor can create a virtual network environment through which all the VMs on a host communicate, as well as networking shared by the host, by VMs on the same host, and by VMs on other hosts.
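The RAM arithmetic above can be sketched as follows. The figures are hypothetical, and real hypervisor overheads vary:

    # Sketch: budgeting host RAM for concurrent guests (hypothetical figures).
    host_ram_gb = 16   # total RAM in the virtualisation workstation
    host_os_gb = 2     # reserved for the host OS / hypervisor itself
    guest_ram_gb = 2   # e.g., the recommended minimum for a Windows 10 guest

    available = host_ram_gb - host_os_gb
    max_guests = available // guest_ram_gb
    print(f"{available} GB left for guests -> room for {max_guests} "
          f"concurrent guests of {guest_ram_gb} GB each")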

Physical vs. Virtual Machine Feature Comparison 

Although physical servers and virtual machines have different architectures, they also exhibit similarities.  

When it comes down to connecting to a “physical server” vs. a “virtual server,” the experience from a client's perspective is exactly the same. 

Generally, applications are indifferent to whether they connect to a physical server or a virtual machine because virtual machines run the same operating systems found on physical servers, such as Windows Server and Linux. 

As long as the required resources are provided, applications can function in the same way regardless of whether the server is physical or virtual. Now, let's take a look at some other comparisons between physical servers and virtual machines: 

Table 1: Physical vs Virtual Machine Feature Comparison 

Feature | Physical Server | Virtual Machine
Cost | Expensive upfront costs, ranging from thousands to tens of thousands of dollars for a single server | More abstract cost, as it depends on hardware specs and resources allocated from the physical host
Physical footprint | Extensive physical space required, especially for multiple servers | Allows server consolidation and more efficient use of physical space
Lifespan | Typically a 3-5 year lifespan, requiring migration after expiration | Longer lifespan due to abstraction from the hardware; can migrate seamlessly to new hosts
Migration | Complex migration process involving imaging or software migration | Simple hypervisor-level migration process, such as vMotion or Live Migration
Performance | Superior performance due to direct hardware access, especially for business-critical applications | Narrow performance gap thanks to improved hypervisor schedulers and minimal overhead compared to bare metal
Efficiency | Inefficient use of resources, as a single workload per server leaves idle resources wasted | Highly efficient resource utilisation, allowing multiple VMs per host and reducing power/cooling costs
Disaster recovery | Limited mobility and complex backup and restoration processes | Highly mobile and easily replicated, with efficient backup and restoration processes
High availability | Requires complex clustering solutions for high availability | Simplified high availability through virtualisation clusters and easy replication to DR facilities

How do you decide between physical servers and virtual machines? 

The choice has become clearer with the widespread adoption of virtualisation. For many, the benefits offered by virtual machines in terms of cost, physical footprint, lifespan, migration, performance, efficiency, and disaster recovery/high availability outweigh those of running a single workload on a physical server. 

However, does this mean that hosting applications and data on physical workloads is never a viable option? Not necessarily. Physical servers still play a significant role in the enterprise data centre environment.  

There are scenarios and use cases where running an application on a physical server is preferable. This could be due to performance requirements or the need to connect physical devices directly to the server. 

Ultimately, the decision involves both technological considerations and business priorities for your organisation. In most environments, the majority of workloads will be virtual machines and containers, with a smaller number of physical servers supporting various applications. 

Activity

You are installing virtualisation on a workstation that needs to support multiple operating systems. Which type of hypervisor is best suited for this environment?

Discuss your answer in your Hardware Task Forum or in your next Live Session.
