Virtual-Future-Computer Essay Example

In the 1960s and 1970s, virtual machines were popular because they allowed existing software to run on shared hardware without modification. This was especially valuable when mainframe hardware was expensive and limited resources had to be shared efficiently among multiple applications. With the emergence of multitasking operating systems and cheaper hardware, however, the value of virtual machines declined, and as computer architectures evolved it became harder to implement them. By the late 1980s, academics and industry practitioners alike regarded virtual machines as little more than a historical curiosity.

However, in recent years, virtual machines have regained attention in both academia and industry. Venture capital firms compete to fund startups built around virtual-machine technology. Major companies such as Intel, Sun Microsystems, and IBM are developing virtualization strategies aimed at billion-dollar markets. Researchers in labs and universities are likewise using virtual machines as a foundation for solving problems in mobility, security, and manageability.

The fall and subsequent resurgence of virtual machines can be traced to research conducted at Stanford University in the mid-1990s. The researchers explored the use of virtual machines to work around the limitations of existing hardware and operating systems; specifically, they tackled the difficulty of programming massively parallel processing (MPP) machines that could not run existing operating systems. By interposing virtual machines, they made these unwieldy architectures manageable enough to run standard system software. This work led to VMware Inc., the original supplier of virtual machine monitors (VMMs) for commodity computing hardware, and drew the attention of researchers and entrepreneurs intrigued by the implications of having a VMM for commodity platforms.

The revival of VMMs came about for several reasons. Ironically, the modern advances in operating systems and the falling hardware costs that had made VMMs seem unnecessary in the 1980s began creating new problems that VMMs appeared well suited to solve. Cheaper hardware led to a proliferation of underused machines, with significant space and management overheads.

Furthermore, as operating systems gained functionality, they became more capable but also more fragile and more vulnerable. To reduce the impact of crashes and security break-ins, system administrators began running one application per machine, but this approach increased hardware requirements and, with them, costs.

By moving these applications into virtual machines and consolidating the virtual machines onto fewer physical platforms, administrators increase hardware utilization while reducing space and management costs. This shift has restored the VMM to prominence as a way of multiplexing hardware, this time for server consolidation and utility computing.

Looking ahead, the VMM will be less about multiplexing hardware and more about providing security and reliability. In many ways it offers a path for adding functionality that is difficult to build into today's operating systems: because the VMM maintains backward compatibility, innovative operating system solutions can be deployed while preserving the existing software base. The VMM creates a level of indirection between the hardware and the software above it, giving it control over how guest operating systems use hardware resources. It presents a uniform view of the underlying hardware, so machines of different types appear identical and virtual machines can run on any available computer. The VMM also encapsulates a virtual machine's complete software state, which lets it map and remap virtual machines to available hardware resources and even migrate them across machines. This enables load balancing, graceful handling of hardware failures, and easy scaling of the system.

Virtual machines can be easily replicated, letting administrators bring new services online as needed. Encapsulation also lets administrators suspend, resume, checkpoint, and roll back virtual machines, offering an undo capability that aids in recovering from crashes or configuration errors.
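To make the idea of encapsulation concrete, here is a minimal sketch of what suspending a virtual machine might involve: the VMM gathers the guest's CPU and memory state into one structure and writes it to a file that can later be resumed, rolled back to, or copied elsewhere. The structure, field names, and file layout are illustrative assumptions, not any vendor's actual format.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative container for the software state a VMM encapsulates.
 * Real products store far more (device state, MMU state, and so on). */
struct vm_checkpoint {
    uint64_t regs[16];      /* general-purpose CPU registers   */
    uint64_t rip, rflags;   /* program counter and flags       */
    size_t   mem_bytes;     /* size of guest physical memory   */
    uint8_t *mem;           /* snapshot of guest memory pages  */
};

/* Suspend: serialize the encapsulated state to a file so the virtual
 * machine can be resumed later, rolled back, or moved to another host. */
static int vm_suspend_to_file(const struct vm_checkpoint *vm, const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    fwrite(vm->regs, sizeof(vm->regs), 1, f);
    fwrite(&vm->rip, sizeof(vm->rip), 1, f);
    fwrite(&vm->rflags, sizeof(vm->rflags), 1, f);
    fwrite(&vm->mem_bytes, sizeof(vm->mem_bytes), 1, f);
    fwrite(vm->mem, 1, vm->mem_bytes, f);
    fclose(f);
    return 0;
}
```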

Mobility is also supported through encapsulation: a suspended virtual machine can be copied across a network or stored on removable media for transport. The VMM acts as a mediator between virtual machines and the underlying hardware, providing strong isolation while multiplexing many virtual machines on a single hardware platform. Consolidating lightly loaded virtual machines onto one computer reduces hardware costs and space requirements. The VMM also improves reliability and security by containing faults within individual virtual machines, preventing them from affecting applications running in other virtual machines.

Figure 1: The classic VMM. The VMM is a thin software layer that exports a virtual machine abstraction.

The virtual machine that the VMM exports closely resembles the underlying hardware, so any software written for that hardware can run inside it without modification. This improves robustness and security without requiring separate physical machines: if one virtual machine is compromised, only that virtual machine is affected. When implementing a VMM, the designer must keep control of the physical machine while presenting a hardware-like interface to the software inside each virtual machine. Various techniques can achieve this, each with its own design tradeoffs. The key design goals for a VMM are compatibility, performance, and simplicity. Compatibility ensures that legacy software can still run on the VMM platform. The performance goal is for software in the virtual machine to run at close to the speed it would achieve on the real machine. Simplicity matters because a failure in the VMM can bring down every virtual machine on the computer.

In particular, providing secure isolation requires that the VMM be free of bugs that attackers could use to subvert the system. A CPU architecture is virtualizable if it supports the basic VMM technique of direct execution: running the virtual machine directly on the real machine while the VMM retains ultimate control of the CPU.

Direct execution works by running both the privileged and unprivileged code of the virtual machine in the CPU's unprivileged mode, while the VMM itself runs in privileged mode. When the virtual machine attempts a privileged operation, such as disabling interrupts, the CPU traps into the VMM, which then emulates the operation against the virtual state it maintains for that virtual machine.
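The following is a hedged sketch of that trap-and-emulate idea: when deprivileged guest code attempts a privileged operation, the CPU traps to the VMM, which applies the effect to per-virtual-machine state rather than to the real hardware. The trap codes and state layout are illustrative assumptions, not any particular architecture's interface.

```c
#include <stdbool.h>
#include <stdint.h>

/* Virtual CPU state the VMM keeps for each guest. */
struct vcpu {
    bool    interrupts_enabled;  /* virtual interrupt-enable flag        */
    uint8_t cpl;                 /* privilege level the guest believes   */
};

/* Illustrative trap reasons delivered by the CPU to the VMM. */
enum trap_reason { TRAP_CLI, TRAP_STI, TRAP_READ_CPL };

/* Trap handler: emulate the privileged operation on the virtual
 * state instead of letting it touch the physical machine. */
static uint64_t vmm_handle_trap(struct vcpu *v, enum trap_reason why)
{
    switch (why) {
    case TRAP_CLI:                    /* guest tried to disable interrupts */
        v->interrupts_enabled = false;
        return 0;
    case TRAP_STI:                    /* guest tried to enable interrupts  */
        v->interrupts_enabled = true;
        return 0;
    case TRAP_READ_CPL:               /* guest asked for its privilege level */
        return v->cpl;                /* report the virtual level, not the real one */
    }
    return 0;
}
```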

These trap semantics are what make direct execution both safe and transparent: the virtual machine uses the CPU directly for most instructions, yet to the software running inside it the environment looks like an ordinary physical machine.

Unfortunately, many modern CPU architectures, including the widely used x86, were not designed to be virtualizable. For example, operating systems use the x86 POPF instruction to modify the interrupt-enable flag, but POPF does not trap when executed in unprivileged mode; it simply fails to change the flag.

As a result, direct execution cannot be used for privileged-mode code that relies on this instruction. Another obstacle is that some unprivileged instructions expose privileged state: by reading the code segment register, software in a virtual machine can observe the processor's current privilege level. On a virtualizable processor this access would trap, and the VMM could report the privilege level the virtual machine believes it is running at.

The x86 architecture, however, does not trap this access, so under direct execution the software would see the wrong privilege level in the code segment register. To handle VMMs on CPUs that cannot be virtualized directly, two classes of technique are used: paravirtualization and direct execution combined with fast binary translation. In the paravirtualization approach, the VMM builder defines the virtual machine interface by replacing the nonvirtualizable portions of the original instruction set with easily virtualized, more efficient equivalents. Disco, a VMM for the MIPS architecture, took this approach: accesses to the MIPS interrupt flag and certain other instructions and registers were replaced with references to a special memory location shared with the virtual machine. These modifications eliminate unnecessary traps and improve performance, and a modified version of the IRIX operating system was ported to the paravirtualized MIPS architecture. The drawback of paravirtualization is compatibility: any operating system that runs on a paravirtualized VMM must be specifically ported to it, so legacy systems cannot be used and existing machines cannot easily be migrated into virtual machines. Even so, paravirtualization has proven successful over the years and has been favored by academic research projects, while commercial efforts have prioritized maintaining backward compatibility wherever possible, aiming for a VMM that offers both full compatibility and high performance.
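A small sketch in the spirit of Disco's approach: instead of executing a privileged instruction that would need a trap, a paravirtualized guest reads and writes a flag in a memory page shared with the VMM. The structure and function names here are assumptions for illustration, not Disco's actual interface.

```c
#include <stdint.h>

/* Page of state the VMM shares with a paravirtualized guest.
 * (Illustrative layout; the real Disco interface differs.) */
struct pv_shared_page {
    volatile uint32_t interrupts_enabled;  /* virtual interrupt-enable flag   */
    volatile uint32_t pending_interrupts;  /* bitmap of interrupts to deliver */
};

/* The guest kernel is ported so that "disable interrupts" becomes a
 * plain store to the shared page instead of a privileged instruction. */
static inline void guest_disable_interrupts(struct pv_shared_page *sp)
{
    sp->interrupts_enabled = 0;
}

static inline void guest_enable_interrupts(struct pv_shared_page *sp)
{
    sp->interrupts_enabled = 1;
    /* The VMM checks pending_interrupts and delivers any interrupts
     * that were held back while the virtual flag was clear. */
}
```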

To provide fast and compatible virtualization of the x86 architecture, VMware developed a technique that combines traditional direct execution with fast, on-the-fly binary translation. In most modern operating systems, the processor modes used to run ordinary application programs are virtualizable, so that code runs with direct execution. A binary translator handles the privileged-mode code, patching the problematic x86 instructions; the result is a high-performance virtual machine that maintains complete software compatibility. Unlike translators that must map between CPUs with different instruction sets, VMware's translator is simpler because the source and target instruction sets are essentially identical. The VMM gains control of kernel code through the binary translator, which replaces problematic instructions with safe equivalents so that the resulting blocks can execute directly on the CPU. To keep performance high, translated blocks are stored in a trace cache so they need not be retranslated on subsequent executions. The translated code leaves ordinary instructions untouched, but instructions that need special treatment, such as POPF and reads of the code segment register, are replaced with instruction sequences similar to those a paravirtualized virtual machine would use. These changes are made the first time the code runs, rather than by modifying the source code of the operating system or its applications. Although binary translation adds some overhead, it is negligible for most workloads: the translator processes only a fraction of the total code, and once the trace cache has warmed up, execution proceeds at nearly the speed of direct execution. Binary translation also helps optimize workloads dominated by privileged code. Privileged code frequently traps, and under pure direct execution each trap requires an expensive control transfer from the processor to the monitor and back. Binary translation can eliminate many of these traps, reducing the overall virtualization overhead; this matters most on processors with deep instruction pipelines, such as modern x86 CPUs, where traps are particularly costly.
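The following is a highly simplified sketch of the binary-translation idea: the translator scans a block of guest kernel code, copies safe instructions through unchanged, replaces the handful of problematic ones (such as POPF) with calls into the VMM, and caches the result so each block is translated only once. The opcode values, cache layout, and helper names are placeholders, not VMware's implementation.

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_MAX   4096
#define CACHE_SLOTS 1024

/* One translated block, keyed by the guest address it came from. */
struct tblock {
    uint64_t guest_pc;              /* original guest address           */
    uint8_t  code[BLOCK_MAX];       /* translated instruction sequence  */
    size_t   len;
};

static struct tblock trace_cache[CACHE_SLOTS];

/* Placeholder predicate: does this guest instruction need special
 * treatment (for example POPF, or reads of the code segment register)? */
static int needs_patch(const uint8_t *insn) { return insn[0] == 0x9D; /* POPF */ }

/* Placeholder emitters for the two cases. */
static size_t emit_copy(uint8_t *out, const uint8_t *insn, size_t n)
{
    memcpy(out, insn, n);
    return n;
}

static size_t emit_call_vmm(uint8_t *out)
{
    /* Illustrative stand-in: a 5-byte call into a VMM emulation stub. */
    out[0] = 0xE8;
    memset(out + 1, 0, 4);
    return 5;
}

/* Translate one block the first time it runs; afterwards, reuse the
 * cached translation so the translation cost is paid only once. */
static struct tblock *translate_block(uint64_t guest_pc, const uint8_t *src,
                                      const size_t *insn_len, size_t n_insns)
{
    struct tblock *tb = &trace_cache[guest_pc % CACHE_SLOTS];
    if (tb->guest_pc == guest_pc && tb->len)
        return tb;                               /* trace-cache hit */

    tb->guest_pc = guest_pc;
    tb->len = 0;
    for (size_t i = 0; i < n_insns; i++) {
        if (needs_patch(src))
            tb->len += emit_call_vmm(tb->code + tb->len);
        else
            tb->len += emit_copy(tb->code + tb->len, src, insn_len[i]);
        src += insn_len[i];
    }
    return tb;
}
```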
Turning to future hardware support: both Intel's Vanderpool technology and AMD's Pacifica technology have recently introduced hardware support for executing virtual machines. Rather than modifying the existing execution modes, both add a new processor mode designed specifically for running virtual machines. This new mode aims to reduce the number of traps needed to implement a virtual machine and to lower the cost of each trap, improving performance.

This hardware support should make it possible to run virtual machines using direct execution alone on x86 processors, at least for guest operating systems that do not themselves use the new execution modes. If the support proves as effective as IBM's early mainframe virtualization support, it could further reduce virtualization overhead and simplify the implementation of VMMs. Past experience suggests that adequate hardware support can cut overhead to the point where the benefits of preserving the full virtual machine abstraction outweigh any performance advantage gained by breaking compatibility.
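As a conceptual sketch of how a VMM built on such a hardware guest mode might be structured: the VMM enters the guest through a hardware mechanism and gets back an exit reason only for the events it must handle. The `hw_vmenter` routine and the exit codes are hypothetical stand-ins, not Intel's or AMD's actual interface; the placeholders are stubbed out so the sketch is self-contained.

```c
#include <stdint.h>

/* Hypothetical exit reasons reported by a hardware guest-execution mode. */
enum vmexit_reason { EXIT_IO, EXIT_HALT, EXIT_PRIV_INSN, EXIT_EXTERNAL_IRQ };

struct vcpu { uint64_t id; };            /* per-guest CPU state (elided) */

/* Placeholder for the hardware entry point: it would run the guest
 * directly until an event the VMM must handle occurs. */
static enum vmexit_reason hw_vmenter(struct vcpu *v) { (void)v; return EXIT_HALT; }

/* Placeholder handlers for the exits the VMM still has to service. */
static void emulate_io(struct vcpu *v)           { (void)v; }
static void emulate_privileged(struct vcpu *v)   { (void)v; }
static void deliver_external_irq(struct vcpu *v) { (void)v; }

/* The VMM's run loop: most guest code runs with no VMM involvement,
 * so far fewer traps occur than with pure trap-and-emulate. */
static void vmm_run(struct vcpu *v)
{
    for (;;) {
        switch (hw_vmenter(v)) {
        case EXIT_IO:           emulate_io(v);           break;
        case EXIT_PRIV_INSN:    emulate_privileged(v);   break;
        case EXIT_EXTERNAL_IRQ: deliver_external_irq(v); break;
        case EXIT_HALT:         return;                  /* guest halted */
        }
    }
}
```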

Memory virtualization traditionally relies on the VMM maintaining a shadow copy of each virtual machine's memory-management data structures. This shadow page table lets the VMM control exactly which machine memory pages a virtual machine uses. When the operating system inside a virtual machine creates a mapping in its own page table, the VMM detects the change and creates a corresponding entry in the shadow page table that points to the actual machine page. During execution, the hardware uses the shadow page table for address translation, so the VMM always controls how much memory each virtual machine receives and which machine pages back it. Like a traditional operating system's virtual memory subsystem, the VMM can page a virtual machine's memory out to disk, allowing the total memory allocated to virtual machines to exceed the machine's physical memory. This reduces the hardware needed for a given virtual machine workload and lets the VMM adjust each virtual machine's allocation dynamically according to its needs.
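A simplified sketch of the shadow page table bookkeeping just described: when the guest OS maps a virtual page to what it believes is a physical page, the VMM installs the corresponding machine page in the shadow table that the hardware actually walks. The flat arrays and field names are assumptions chosen to keep the example short.

```c
#include <stdint.h>
#include <stddef.h>

#define GUEST_PAGES 1024          /* illustrative sizes */
#define VIRT_PAGES  1024

struct vm_mem {
    /* guest "physical" page number -> real machine page number,
     * maintained by the VMM when it allocates memory to this VM. */
    uint64_t gpn_to_mpn[GUEST_PAGES];
    /* the shadow page table: what the MMU actually uses. */
    uint64_t shadow_pt[VIRT_PAGES];
};

/* Called when the VMM detects that the guest OS wrote a page-table
 * entry mapping virtual page `vpn` to guest-physical page `gpn`. */
static void vmm_shadow_update(struct vm_mem *vm, size_t vpn, size_t gpn)
{
    if (vpn >= VIRT_PAGES || gpn >= GUEST_PAGES)
        return;                                   /* ignore bad entries */
    /* Translate the guest's notion of a physical page into the
     * machine page the VMM actually gave this virtual machine. */
    vm->shadow_pt[vpn] = vm->gpn_to_mpn[gpn];
}
```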

The VMM does face a challenge when it must reclaim memory from a virtual machine by paging part of it out to disk. The guest operating system inside the virtual machine knows far more than the VMM about which pages are good candidates: for example, it can tell that a page is no longer needed because the process that created it has exited, something the VMM cannot see at the hardware level, so the VMM might page out exactly the wrong pages. To address this, VMware's ESX Server takes a paravirtualization-like approach and places a balloon process inside the guest that communicates with the VMM. When the VMM needs to reclaim memory from a virtual machine, it asks the balloon process to "inflate" by allocating more memory. The guest operating system then uses its own page-replacement policy to decide which pages to give up, and the pages handed to the balloon are returned to the VMM for reallocation. Inflating the balloon increases memory pressure inside the guest, which pages memory to its virtual disk intelligently rather than arbitrarily.
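Here is a rough sketch of that ballooning idea: a driver inside the guest allocates ("inflates") memory on request and tells the VMM which pages it has pinned, so the VMM can reclaim exactly those machine pages. The communication channel and function names are assumptions; VMware's actual balloon driver differs.

```c
#include <stdlib.h>

#define PAGE_SIZE          4096
#define MAX_BALLOON_PAGES 65536

/* Pages the in-guest balloon driver has pinned on behalf of the VMM. */
static void  *balloon_pages[MAX_BALLOON_PAGES];
static size_t balloon_count;

/* Placeholder for the real guest-to-VMM channel: tells the VMM that
 * this guest page is now unused and its machine page can be reclaimed. */
static void vmm_release_page(void *guest_page)
{
    (void)guest_page;   /* a real driver would issue a hypercall or I/O here */
}

/* Inflate: the VMM has asked the guest to give back `npages` pages.
 * Allocating them forces the guest OS to pick victims using its own,
 * better-informed page-replacement policy. */
static size_t balloon_inflate(size_t npages)
{
    size_t got = 0;
    while (got < npages && balloon_count < MAX_BALLOON_PAGES) {
        void *p = malloc(PAGE_SIZE);     /* guest decides what to evict */
        if (!p)
            break;                       /* guest is out of memory      */
        balloon_pages[balloon_count++] = p;
        vmm_release_page(p);
        got++;
    }
    return got;
}

/* Deflate: memory pressure has eased; return pages to the guest. */
static void balloon_deflate(size_t npages)
{
    while (npages-- && balloon_count)
        free(balloon_pages[--balloon_count]);
}
```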

Another challenge is the sheer size of modern operating systems and applications: when a computer runs many virtual machines, substantial memory can be wasted storing duplicate copies of the same code and read-only data in each of them. To address this, VMware's designers developed a content-based page sharing technique for their server products. The VMM scans the contents of physical pages and detects when two pages are identical; when they are, it modifies the virtual machines' shadow page tables so that both point to a single copy.

The redundant copies can then be freed, releasing memory for other uses. The shared page is marked copy-on-write, so if a virtual machine later writes to it, that virtual machine quietly receives its own private copy.

For instance, on a computer running 30 virtual machines, each with Microsoft Windows 2000, only one copy of the Windows kernel need exist in memory, significantly reducing physical memory usage.
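A condensed sketch of content-based page sharing: the VMM hashes page contents, and when two pages match byte for byte, it points both mappings at one copy marked copy-on-write. The hash, table, and helper functions are simplified assumptions, not ESX Server's implementation.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define PAGE_SIZE  4096
#define HASH_SLOTS (1u << 16)

struct shared_page {
    uint64_t hash;
    void    *machine_page;   /* the single retained copy */
    bool     in_use;
};

static struct shared_page share_table[HASH_SLOTS];

/* Simple FNV-1a hash over a page; a placeholder for whatever the VMM uses. */
static uint64_t page_hash(const void *page)
{
    const uint8_t *p = page;
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < PAGE_SIZE; i++)
        h = (h ^ p[i]) * 1099511628211ull;
    return h;
}

/* Placeholders for the VMM operations the sharing logic relies on. */
static void map_copy_on_write(void *vm, void *machine_page) { (void)vm; (void)machine_page; }
static void free_machine_page(void *machine_page)           { (void)machine_page; }

/* Try to share `candidate`, a page of virtual machine `vm`; returns true
 * if a byte-identical page already existed and the duplicate was freed. */
static bool try_share_page(void *vm, void *candidate)
{
    uint64_t h = page_hash(candidate);
    struct shared_page *slot = &share_table[h % HASH_SLOTS];

    if (slot->in_use && slot->hash == h &&
        memcmp(slot->machine_page, candidate, PAGE_SIZE) == 0) {
        map_copy_on_write(vm, slot->machine_page);  /* point VM at shared copy */
        free_machine_page(candidate);               /* reclaim the duplicate   */
        return true;
    }
    /* No match: remember this page so future duplicates can share it. */
    slot->hash = h;
    slot->machine_page = candidate;
    slot->in_use = true;
    map_copy_on_write(vm, candidate);
    return false;
}
```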

Looking ahead to future hardware support: operating systems modify their page tables frequently, and keeping software shadow copies up to date adds unwelcome overhead. One potential remedy is hardware-managed shadow page tables, which mainframe architectures have provided for decades; adopting them could further reduce the cost of memory virtualization. Resource management is another promising research area, particularly cooperative resource management decisions made jointly by the VMM and the guest operating systems, and resource management across an entire data center is likely to see significant progress over the next decade. A related topic is I/O virtualization, which concerns managing and multiplexing the input/output (I/O) subsystem.

Thirty years ago, IBM mainframes used a channel-based I/O architecture in which access to I/O devices went through a separate channel processor. The channel processor made it possible to export I/O device access directly to the virtual machine, so virtualizing I/O added little overhead: rather than trapping into the VMM, software in the virtual machine could interact directly with I/O devices such as text terminals, disks, card readers, and card punches.

Current computing environments are harder to handle: they include an enormous range of I/O devices from many vendors, each with its own programming interface.

Consequently, building a virtualization layer that talks directly to every such device would be complex and time-consuming. In addition, some devices, such as modern PC graphics subsystems and server network interfaces, have stringent performance requirements, so efficient I/O virtualization is essential for widespread acceptance. The virtualization layer must still reach the computer's real I/O devices while exporting standard virtual device interfaces to the guests. To achieve this, VMware Workstation, a product aimed at desktop computers, uses the hosted architecture shown in Figure 2. In this architecture, the virtualization layer relies on the device drivers of a host operating system, such as Windows or Linux, to access the devices; because nearly every I/O device ships with drivers for these operating systems, this covers the full range of hardware. The I/O virtualization layer translates a guest's request to read or write a block of its virtual disk into a read or write of a file in the host's file system, and it renders the virtual machine's virtual display card in a window on the host. The host thus controls, drives, and manages the real I/O devices, regardless of which devices the guest believes are present. The hosted architecture has three significant advantages. First, the VMM is simple to install: users install it like an ordinary application on the host rather than on the bare hardware, as a traditional VMM would require. Second, the hosted architecture accommodates the full diversity of I/O devices in the PC marketplace. Third, the VMM can reuse the scheduling, resource management, and other services of the host environment.
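A bare-bones sketch of the hosted-architecture idea for disk I/O: the virtualization layer turns a guest's "read block N of the virtual disk" into an ordinary read of a disk-image file through the host OS, so the host's own device drivers do the hardware work. The file layout and names are assumptions, not VMware Workstation's actual format.

```c
#include <stdio.h>
#include <stdint.h>

#define SECTOR_SIZE 512

/* The virtual disk is just a file in the host's file system; the host
 * OS and its device drivers handle the real storage hardware. */
struct virtual_disk {
    FILE *image;            /* e.g. fopen("guest-disk.img", "r+b") */
};

/* The guest asked its (virtual) disk controller to read `count` sectors
 * starting at `lba`; the virtualization layer satisfies the request
 * with ordinary host file I/O. */
static int vdisk_read(struct virtual_disk *vd, uint64_t lba,
                      void *buf, size_t count)
{
    if (fseek(vd->image, (long)(lba * SECTOR_SIZE), SEEK_SET) != 0)
        return -1;
    return fread(buf, SECTOR_SIZE, count, vd->image) == count ? 0 : -1;
}

static int vdisk_write(struct virtual_disk *vd, uint64_t lba,
                       const void *buf, size_t count)
{
    if (fseek(vd->image, (long)(lba * SECTOR_SIZE), SEEK_SET) != 0)
        return -1;
    return fwrite(buf, SECTOR_SIZE, count, vd->image) == count ? 0 : -1;
}
```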

However, when VMware developed products for the x86 server marketplace, the hosted architecture showed its drawbacks. I/O virtualization became significantly more expensive because each I/O request had to pass through the host operating system and its software layers before reaching the device, and server environments with high-performance network and disk subsystems found that level of overhead unacceptable.

In addition, modern operating systems such as Windows and Linux lack the resource-management support needed to give virtual machines the performance isolation and service guarantees that server environments often require. VMware's ESX Server therefore returns to the more traditional approach of running the VMM directly on the hardware, without a host operating system. This allows ESX Server to provide sophisticated scheduling and resource management and to communicate with devices directly, significantly reducing the virtualization overhead for I/O devices.

Unlike the hosted architecture, which shares the hardware with an existing host operating system, ESX Server includes an I/O subsystem optimized specifically for network and storage devices, and its kernel can use device drivers from the Linux kernel to talk to those devices directly, keeping I/O overhead low. This approach is practical for VMware because ESX Server supports only a certified set of network and storage devices, those found in the major x86 server vendors' machines; by limiting the supported devices, it can manage them directly and keep device management tractable. Another performance optimization in VMware's products is the ability to export highly optimized virtual I/O devices that do not correspond to any real hardware. Guest environments must load a special device driver to use these devices, but in return they get an efficient I/O interface that reduces the overhead of conveying the guest's commands and improves performance.
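As a sketch of the kind of streamlined virtual device interface described above, consider a shared request ring: the guest's special-purpose driver posts requests into the ring and the VMM drains them, so many operations are batched into few guest-to-VMM transitions. The ring layout and names are illustrative assumptions, not VMware's device interface.

```c
#include <stdint.h>

#define RING_SLOTS 256        /* must be a power of two */

/* One I/O request posted by the guest's paravirtual device driver. */
struct vio_request {
    uint64_t guest_addr;      /* buffer address in guest memory */
    uint32_t length;
    uint32_t is_write;
};

/* Ring shared between the guest driver (producer) and the VMM (consumer). */
struct vio_ring {
    struct vio_request req[RING_SLOTS];
    volatile uint32_t head;   /* written by guest */
    volatile uint32_t tail;   /* written by VMM   */
};

/* Guest side: queue a request without trapping into the VMM. */
static int vio_post(struct vio_ring *r, struct vio_request rq)
{
    uint32_t next = (r->head + 1) & (RING_SLOTS - 1);
    if (next == r->tail)
        return -1;            /* ring full */
    r->req[r->head] = rq;
    r->head = next;           /* the VMM sees the new work when it next polls
                               * or when the guest issues a single "kick"    */
    return 0;
}

/* VMM side: drain all pending requests in one pass. */
static void vio_drain(struct vio_ring *r, void (*do_io)(const struct vio_request *))
{
    while (r->tail != r->head) {
        do_io(&r->req[r->tail]);
        r->tail = (r->tail + 1) & (RING_SLOTS - 1);
    }
}
```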

Looking at the future of I/O virtualization, hardware trends favor high-performance I/O device interfaces. Traditional discrete I/O devices, such as the PC keyboard controller and IDE disk controllers, are giving way to channel-like interfaces such as USB and SCSI. Like IBM's mainframe I/O channels, these interfaces are simpler to virtualize and keep overhead low. With suitable hardware support, it becomes feasible to export channel-style I/O devices directly to the software in a virtual machine, eliminating the remaining I/O virtualization overhead. For this to work, the I/O devices themselves must be aware of virtual machines and able to present multiple virtual interfaces.

That way, the VMM can safely assign one virtual interface to each virtual machine, letting the virtual machine's device drivers talk to the I/O device directly without VMM intervention. When the device performs direct memory access on behalf of a virtual machine, however, address remapping is essential: the addresses the guest's device driver supplies must be translated, consistent with the shadow page tables, into the correct locations in the machine's memory.
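A short sketch of that remapping step: before a device is programmed to perform DMA, the address supplied by the guest's driver (a guest-physical address) is translated to the machine address the VMM actually assigned, consistent with the shadow page tables. The table layout and names are assumptions for illustration.

```c
#include <stdint.h>

#define PAGE_SHIFT  12
#define PAGE_MASK   ((1ull << PAGE_SHIFT) - 1)
#define GUEST_PAGES 4096                 /* illustrative limit */

/* Per-VM table kept by the VMM: guest-physical page -> machine page,
 * consistent with the VM's shadow page tables. */
struct dma_map {
    uint64_t gpn_to_mpn[GUEST_PAGES];
};

/* Translate the DMA address the guest driver programmed into the
 * machine address the device must actually use; reject addresses
 * outside the memory that belongs to this virtual machine. */
static int dma_remap(const struct dma_map *m, uint64_t guest_addr,
                     uint64_t *machine_addr)
{
    uint64_t gpn = guest_addr >> PAGE_SHIFT;
    if (gpn >= GUEST_PAGES)
        return -1;                       /* not this VM's memory: refuse */
    *machine_addr = (m->gpn_to_mpn[gpn] << PAGE_SHIFT) | (guest_addr & PAGE_MASK);
    return 0;
}
```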

It is also crucial that a device assigned to a virtual machine can access only that virtual machine's memory, no matter how the software inside the virtual machine programs it. And when multiple virtual machines share the same I/O device, the VMM needs an efficient mechanism for routing device-completion interrupts to the right virtual machine. Finally, virtualization-aware I/O devices must cooperate with the VMM to preserve the separation between hardware and software that makes virtual machine migration and checkpointing possible. Devices that provide this support can remove much of the I/O virtualization burden, letting virtual machines handle even the most I/O-intensive workloads.

In addition to improved performance, there are significant security and reliability benefits, since complex device driver code is removed from the VMM. Administrators can then manage physical machines according to the needs of the data center: the VMM handles hardware problems such as failures by moving virtual machines off a failing computer onto a healthy one, and the ability to migrate running virtual machines also helps with scheduling preventive maintenance, dealing with equipment lease ends, and performing hardware upgrades. With hot migration, administrators can do all of this without disrupting service.
