
Cutting through the virtualization hype:
Strong and Weak Points of Hypervisors
as a Special Class of Operating Systems


In the traditional sense that we will use here, virtualization is the simulation of the hardware upon which other software runs. This simulated hardware environment is called a virtual machine (VM). The classic form of virtualization, known as operating system virtualization, provides the ability to run multiple instances of an OS on the same physical computer under the direction of a special layer of software called the hypervisor. There are several forms of virtualization, distinguished primarily by the hypervisor architecture.

Each such virtual instance (or guest) OS thinks that it is running on real hardware with full access to the address space, but in reality it operates in a separate VM container which maps this address space into a segment of the address space of the physical computer. This operation is called address translation. The guest OS can be unmodified (so-called heavy-weight virtualization) or specifically recompiled for the hypervisor API (para-virtualization). In light-weight virtualization a single OS instance presents itself as multiple personalities (called jails or zones), allowing a high level of isolation of applications from each other at very low overhead.
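To illustrate the address translation idea, here is a toy Python sketch. It is only a conceptual model (real hypervisors use hardware or shadow page tables, not a flat per-guest offset), and the sizes and addresses are made up:

# Toy model of guest-to-host address translation (illustration only).

class GuestContainer:
    def __init__(self, host_base, size):
        self.host_base = host_base   # where this VM's memory starts on the host
        self.size = size             # how much "physical" memory the guest sees

    def translate(self, guest_phys_addr):
        """Map an address the guest believes is physical into a host address."""
        if not 0 <= guest_phys_addr < self.size:
            raise MemoryError("guest access outside its container")
        return self.host_base + guest_phys_addr

# Two guests, each convinced its memory starts at address 0:
vm1 = GuestContainer(host_base=0, size=16 * 2**30)            # first 16 GB
vm2 = GuestContainer(host_base=16 * 2**30, size=16 * 2**30)   # next 16 GB

print(hex(vm1.translate(0x1000)))   # 0x1000
print(hex(vm2.translate(0x1000)))   # 0x400001000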

There is an entirely different type of virtualization, often called application virtualization. The latter provides a virtual instruction set and a virtual implementation of the application programming interface (API) that a running application expects to use, allowing compilers to target this virtual instruction set. Along with huge synergy, it permits applications developed for one platform to run on another without modifying the application itself. The Java Virtual Machine (JVM) and, in a more limited way, Microsoft .Net are two prominent examples of this type of virtualization. This type acts as an intermediary between the application code, the operating system (OS) API and the instruction set of the computer. We will not discuss it here.

Virtualization was pioneered by IBM in the early 1960s with its groundbreaking VM/CMS. It is still superior to many existing VMs, as it handles virtual memory management for guests (in the hosted OS, the virtual memory management layer should be disabled, as it provides nothing but additional overhead). IBM mainframe hardware was also the first virtualization-friendly hardware (IBM and HP virtualization):

Contrary to what many PC VMWARE techies believe, virtualization technology did not start with VMWARE back in 1999. It was pioneered by IBM more than 40 years ago. It all started with the IBM mainframe back in the 1960s, with CP-40, an operating system which was geared for the System/360 Mainframe. In 1967, the first hypervisor was developed and the second version of IBM's hypervisor (CP-67) was developed in 1968, which enabled memory sharing across virtual machines, providing each user his or her own memory space. A hypervisor is a type of software that allows multiple operating systems to share a single hardware host. This version was used for consolidation of physical hardware and to more quickly deploy environments, such as development environments. In the 1970s, IBM continued to improve on their technology, allowing you to run MVS, along with other operating systems, including UNIX on the VM/370. In 1997, some of the same folks who were involved in creating virtualization on the mainframe were transitioned towards creating a hypervisor on IBM's midrange platform.

One critical element that IBM's hypervisor has is the fact that virtualization is part of the system's firmware itself, unlike other hypervisor-based solutions. This is because of the very tight integration between the OS, the hardware, and the hypervisor, which is the systems software that sits between the OS and hardware that provides for the virtualization.

In 2001, after a four-year period of design and development, IBM released its hypervisor for its midrange UNIX systems, allowing for logical partitioning. Advanced Power Virtualization (APV) shipped in 2004, which was IBM's first real virtualization solution and allowed for sharing of resources. It was rebranded in 2008 to PowerVM.

As Intel CPUs became dominant in the enterprise, virtualization technologies invented for other CPUs were gradually reinvented for Intel. In 1998 VMware built VMware Workstation, which ran on a regular Intel CPU despite the fact that at that time Intel CPUs did not directly support virtualization extensions. The first mass deployment of virtualization on the Intel platform was not on servers but for "legacy desktop applications" for Windows 98, when organizations started moving to Windows 2000 and then Windows XP.

There were three major virtualization solutions for Intel-based computers: Xen, VMware and Microsoft Virtual PC.

For system administrators, programmers and consultants, VMware desktop products provided an opportunity to run Linux on the same PC as Windows. This was very convenient for various demos, and such a configuration became the holy grail for all types of consultants, who became major promoters of VMware and ensured its quick penetration at the enterprise level. It also became a common solution for training, as it permits providing each student with a set of virtual desktops and servers that would be too costly to provide as physical hardware.

The other important area is experimentation: you can create a set of virtual machines in no time, without the usual bureaucratic overhead typical for large organizations.

A more problematic area is the use of virtualization for server consolidation. VMware found its niche here, but only in consolidating "fake" servers -- servers that run applications with almost no users and no load. For servers with real load, blades provide a much more solid alternative with similar capabilities and cost. Still, VMware has had some level of success in the server space, but it is difficult to say how much of it is due to the advantages of virtualization and how much is due to the technical incompetence of corporate IT, which simply follows the current fashion.

I was actually surprised that VMware got so much traction with the exorbitant, extortion-level prices they charge -- pricing that is designed almost perfectly to channel all the savings to VMware itself instead of the organization that deploys the VMware hypervisor. For me, blades were always a simpler and more promising server consolidation solution with a better price/performance ratio. So when companies look at virtualization as the way to cut costs, they might be looking at the wrong solution. First of all, you cannot defy gravity with virtualization: you still have a single channel of access to RAM, and with several OSes running concurrently the bridge between RAM and CPU becomes a bottleneck. Only if the application is mostly idle (for example, it hosts a low-traffic website) does it make sense to consolidate it. So the idea works when you consolidate small and lightly loaded servers onto fewer, larger, more heavily loaded physical servers. It also works perfectly well for development and quality-assurance servers, which by definition are mainly circulating air. For everything else your mileage may vary. For example, why on earth would I put Oracle on a virtual machine? To benefit from the ability to migrate to another server? That's a fake benefit, as it almost never happens in real life without an Oracle version upgrade. To provide a more uniform environment for all my Oracle installations? Is that worth the trouble with disk I/O that I will get?

So it is very important to avoid excessive zeal in implementing virtualization in an enterprise environment, and to calculate the five-year total cost of ownership difference between the various options before jumping into the water. If overdone, server consolidation via virtualization can bring up a whole new set of complications. And, other things being equal, one should consider cheaper alternatives to VMware like Xen, especially for Linux servers, as again the truth about VMware is that the lion's share of the savings goes to VMware, not to the company that implements it.

It is very important to avoid excessive zeal. If overdone, server consolidation via virtualization can bring up a whole new set of complications.

In short, there is no free lunch. If used in moderation, and with Xen instead of VMware to avoid excessive licensing costs, this new techno-fashion can help to get rid of "low load" servers as well as cut maintenance costs by replacing some servers running specific applications with "virtual appliances". Also, provisioning becomes really fast, which is extremely important in research and lab environments. One can get a server to experiment on in 5-10 minutes instead of 5-10 days :-). This is a win-win situation. It is quite beneficial for the environment and for the enterprise, as it opens some additional, unforeseen avenues of savings.

But there is another caveat. The price of servers grows very fast beyond the midrange and, say, an Intel server that costs $35K will never be able to replace seven reasonably loaded low-end servers costing $5K each. And using separate servers you do not need to worry about whether they are too loaded or whether the peak loads for different servers happen at the same time. The main competition here is blade servers. For example, the cost of a VMware server license is approximately $5K with an annual maintenance cost of $500. If we can run just four virtual instances under it and the server costs, say, $20K, while a small 1U server capable of running one instance costs $5K (no savings on hardware due to higher margins on medium servers), you lose approximately $1K a year per instance in comparison with using physical servers or blades. Advantages due to better maintainability are marginal (if we assume the 1U servers are identical and use kickstart and, say, Acronis images for OS restore), stability is lower, and behavior under simultaneous peaks is highly problematic. In other words, virtualization is far from being a free lunch.
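To make this kind of comparison concrete, here is a small Python sketch that annualizes the cost per instance over five years. The inputs are just the illustrative figures from the paragraph above; real maintenance contracts and hardware quotes will differ, so treat it as a template rather than a verdict:

# Rough five-year cost per instance, using illustrative figures from the text.

YEARS = 5

def annual_cost_per_instance(hw_cost, instances, vm_license=0.0,
                             vm_maint_per_year=0.0, hw_maint_per_year=0.0):
    total = hw_cost + vm_license + YEARS * (vm_maint_per_year + hw_maint_per_year)
    return total / YEARS / instances

vmware_host = annual_cost_per_instance(hw_cost=20_000, instances=4,
                                       vm_license=5_000, vm_maint_per_year=500)
one_u_box   = annual_cost_per_instance(hw_cost=5_000, instances=1)

print(f"VMware host with 4 guests: ${vmware_host:,.0f} per instance per year")
print(f"Standalone 1U server:      ${one_u_box:,.0f} per instance per year")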

At the same time, heavy reliance on virtualized servers for production applications, as well as the task of managing and provisioning them, is a fairly new area in the "brave new" virtualized IT world, and it increases the importance of monitoring applications and enterprise schedulers. In large enterprises that means additional value provided by already installed HP Operations Manager, Tivoli and other ESM applications. Virtualization has also changed configuration management, capacity management, provisioning, patch management, back-ups, and software licensing. It is inherently favorable toward open source software and OS solutions, where you do not pay for each core or physical CPU on the server.

Dual host virtualization

We will call "Dual host virtualization" the scenario when one physical server hosts just two guest OSes. Exactly two, no more, no less.

If applied to all servers in the datacenter, this approach guarantees a 50% reduction in the number of physical servers. Saving on hardware, which motivates many virtualization efforts, is a questionable idea, as low-end servers represent the most competitive segment of the server market, with profit margins squeezed to the minimum; the margins are generally much larger on mid-range and high-end servers. But in "dual host virtualization" some savings on hardware might be squeezed out. For example, a fully configured Intel server with two four-core CPUs and, say, 32GB of RAM costs less than two servers each with one four-core CPU and 16GB of RAM.

A larger number of applications on a single server is possible, but more tricky: such a virtual server needs careful planning, as it faces memory and CPU bottlenecks, which are especially painful if the "rush hours" are the same for both applications. If the applications have some synergy and have peaks at different times of the day, then one 2U server with, say, two quad-core CPUs and 32GB of memory split equally between two partitions can be even more efficient than two separate servers each with one quad-core CPU and 16GB of memory, provided the speed of memory and CPUs is equal. A quick sanity check of the combined load profile, as in the sketch below, helps here.
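Here is a minimal Python sketch of that sanity check. The hourly demand profiles and the core count are made-up placeholders; the point is only that what matters is the combined peak, not the sum of the averages:

# Quick check: do two applications' peak hours collide on a shared host?
# Hourly CPU-demand profiles (in cores) are made-up placeholders.

app_a = [2, 2, 2, 2, 2, 2, 3, 6, 8, 8, 7, 6, 5, 5, 5, 4, 4, 3, 2, 2, 2, 2, 2, 2]
app_b = [1, 1, 1, 1, 1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 4, 5, 6, 7, 7, 5, 3, 2, 1, 1]

host_cores = 8  # e.g. one 2U box with two quad-core CPUs

combined = [a + b for a, b in zip(app_a, app_b)]
worst_hour = max(range(24), key=lambda h: combined[h])

print(f"worst hour: {worst_hour:02d}:00, demand {combined[worst_hour]} of {host_cores} cores")
if max(combined) > host_cores:
    print("peaks collide -- consolidation onto this host is risky")
else:
    print("combined peak fits -- the two guests can share the host")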

Dual host virtualization works well on all types of enterprise servers: Intel servers, IBM Power servers, HP servers and Sun/Oracle servers (both Intel and UltraSparc based).

If we are talking about the Intel platform, Xen or Microsoft's hypervisor are probably the only realistic options for dual host virtualization; VMware is way too expensive. Using Xen and Linux you can squeeze two virtual servers previously running on individual 1U servers into one single 2U server and get a 30-50% reduction in the cost of both hardware and software maintenance. The latter is approximately $1K per year per server (virtual instances are free under Suse and Red Hat). There are also some marginal savings in electricity and air conditioning. Low-end servers have small and usually less efficient power supplies, and using one 2U server instead of two 1U servers leads to almost 30-40% savings in consumed energy (higher savings are possible if the 2U server uses a single CPU with, say, four or eight cores).
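As a reminder of how little is involved on the Xen side, here is a minimal classic Xen guest configuration for one of the two guests in such a setup. Classic Xen (xm/xl) config files use Python syntax; the name, memory split, LVM volume and bridge below are hypothetical placeholders, not a recommendation:

# /etc/xen/guest1.cfg -- one of the two guests on a dual-host 2U box.
# All names and device paths here are placeholders.

name       = "guest1"
memory     = 16384                           # half of the 32GB in the box, in MB
vcpus      = 4                               # one quad-core CPU's worth
bootloader = "pygrub"                        # boot the paravirtualized guest's own kernel
disk       = ['phy:/dev/vg0/guest1,xvda,w']  # dedicated LVM volume exported as xvda
vif        = ['bridge=xenbr0']               # attach to the bridged network
on_crash   = 'restart'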

If you go beyond the dual host virtualization outlined above, savings on hardware are more difficult to achieve, as low-end Intel servers represent the most competitive segment of the Intel server market, with profit margins squeezed to the minimum; the margins are generally much larger on mid-range and high-end Intel servers. The same is true for other architectures as well. In other words, vendor margins on midrange and high-end servers work against virtualization. This is especially true of HP, which overcharges customers for midrange servers by a tremendous margin while providing mediocre servers that are less suitable for running Linux than servers from Dell.

Types of virtualization

Virtualization is the simulation of the software and/or hardware upon which guest operating systems run. This simulated environment is called a virtual machine (VM). Each instance of an OS and its applications runs in a separate VM, called a guest operating system. These VMs are managed by the hypervisor. There are several forms of virtualization, distinguished by the architecture of the hypervisor.

Full virtualization has some negative security implications. Virtualization adds layers of technology, which can increase the security management burden by necessitating additional security controls. Also, combining many systems onto a single physical computer can cause a larger impact if a security compromise occurs, which is especially grave if it occurs at the hypervisor level (access to the VM console). Further, some virtualization systems make it easy to share information between the systems; this convenience can turn out to be an attack vector if it is not carefully controlled. In some cases, virtualized environments are quite dynamic, which makes creating and maintaining the necessary security boundaries more complex.

There are two types of hypervisors: bare metal (Type 1) hypervisors, which run directly on the server hardware, and hosted (Type 2) hypervisors, which run as an application on top of a conventional host OS.

In both bare metal and hosted virtualization, each guest OS appears to have its own hardware, like a regular computer. This includes CPU, memory, storage and network interfaces.

But in reality it is difficult to virtualize storage and networking, so some additional overhead is inevitable. Some hypervisors also provide direct memory access (DMA) to high-speed storage controllers and Ethernet controllers, if such features are supported by the CPU on which the hypervisor is running. DMA access from guest OSs can significantly increase the speed of disk and network access, although this type of acceleration prevents some useful virtualization features such as snapshots and moving guest OSs while they are running.

Virtualized Networking

Hypervisors usually provide networking capabilities to the individual guest OSs, enabling them to communicate with one another while simultaneously limiting access to the external physical network. The network interfaces that the guest OSs see may be a virtual Ethernet controller, a physical Ethernet controller, or both. Typical hypervisors offer three primary forms of network access: bridged access to the physical network, network address translation (NAT), and host-only (internal) networking.

When a number of guest OSes exist on a single host, the hypervisor can provide a virtual network for these guest OSs. The hypervisor may implement virtual switches, hubs, and other network devices. Using a hypervisor’s networking for communications between guests on a single host has the advantage of greatly increased speed because the packets never hit physical networking devices. Internal host-only networking can be done in many ways by the hypervisor. In some systems, the internal network looks like a virtual switch. Others use virtual LAN (VLAN) standards to allow better control of how the guest systems are connected. Most hypervisors also provide internal network address and port translation (NAPT) that acts like a virtual router with NAT.
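For a concrete flavor, here is how those access models typically look in a classic Xen guest configuration (a Python-syntax config file). The bridge name and addresses are placeholders, and other hypervisors expose the same choices through their own configuration:

# Three network-access models as they appear in a classic Xen guest config.
# Bridge names and IP addresses below are placeholders.

# 1. Bridged: the guest appears directly on the physical LAN via the host bridge.
vif = ['bridge=xenbr0']

# 2. Routed (host-only style): traffic passes through the host's routing table,
#    so the guest is reachable only where the host chooses to route it.
# vif = ['script=vif-route,ip=192.168.50.10']

# 3. NAT: the guest sits behind the host, which rewrites addresses on the way out.
# vif = ['script=vif-nat,ip=10.0.0.2']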

Networks that are internal to a hypervisor’s networking structure can pose an operational disadvantage, however. Many networks rely on tools that watch traffic as it flows across routers and switches; these tools cannot view traffic as it moves in a hypervisor’s network. There are some hypervisors that allow network monitoring, but this capability is generally not as robust as the tools that many organizations have come to expect for significant monitoring of physical networks. Some hypervisors provide APIs that allow a privileged VM to have full visibility to the network traffic. Unfortunately, these APIs may also provide additional ways for attackers to attempt to monitor network communications. Another concern with network monitoring through a hypervisor is the potential for performance degradation or denial of service conditions to occur for the hypervisor because of high volumes of traffic.

Virtualized Storage

Hypervisor systems have many ways of simulating disk storage for guest OSs. All hypervisors, at a minimum, provide virtual hard drives mapped to files, while some of them also have more advanced virtual storage options. In addition, most hypervisors can use advanced storage interfaces on the host system, such as network-attached storage (NAS) and storage area networks (SAN) to present different storage options to the guest OSs.

All hypervisors can present the guest OSs with virtual hard drives through the use of disk images. A disk image is a file on the host that looks to the guest OS like an entire disk drive. Whatever the guest OS writes onto the virtual hard drive goes into the disk image. With hosted virtualization, the disk image appears in the host OS as a file or a folder, and it can be handled like other files and folders. As the speed of read access is important, this is a natural area of application for SSD disks.
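The idea is simple enough to show in a toy Python sketch: a raw image is just a (typically sparse) file on the host, and guest "sector" writes land at plain byte offsets inside it. Real formats such as VMDK or qcow2 add metadata, snapshots and copy-on-write on top of this, and the file name below is of course made up:

# Toy illustration of a raw disk image: a sparse file on the host into which
# guest "sector" writes land at plain byte offsets.

import os

SECTOR = 512
image_path = "guest-disk.img"   # hypothetical file name

# Create a sparse 1 GB image: the host file system allocates blocks lazily.
with open(image_path, "wb") as img:
    img.truncate(1 * 2**30)

def write_sector(path, lba, data):
    """Write one 512-byte 'sector' at logical block address lba."""
    assert len(data) == SECTOR
    with open(path, "r+b") as img:
        img.seek(lba * SECTOR)
        img.write(data)

write_sector(image_path, lba=2048, data=b"\xAA" * SECTOR)
print("apparent size:", os.path.getsize(image_path), "bytes")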

Most virtualization systems also allow a guest OS to access physical hard drives as if they were connected to the guest OS directly. This is different from using disk images: a disk image is a virtual representation of a real drive. The main advantage of using physical hard drives is that, unless SSDs are used, accessing them is faster than accessing disk images.

Typically, virtual systems in an enterprise environment use SAN storage (that's probably why EMC bought VMware). This is an active area of development in the virtualization market, as it permits migration of a guest OS from one physical server (more loaded or less powerful) to another (less loaded and/or more powerful) if one of the virtual images experiences a bottleneck.

Guest OS Images

A full virtualization hypervisor encapsulates all of the components of a guest OS, including its applications and the virtual resources they use, into a single logical entity. An image is a file or a directory that contains, at a minimum, this encapsulated information. Images are stored on hard drives, and can be transferred to other systems the same way that any file can (note, however, that images are often many gigabytes in size). Some virtualization systems use a virtualization image metadata standard called the Open Virtualization Format (OVF) that supports interoperability for image metadata and components across virtualization solutions. A snapshot is a record of the state of a running image, generally captured as the differences between an image and the current state. For example, a snapshot would record changes within virtual storage, virtual memory, network connections, and other state-related data. Snapshots allow the guest OS to be suspended and subsequently resumed without having to shut down or reboot the guest OS. Many, but not all, virtualization systems can take snapshots.
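A snapshot recorded "as the differences between an image and the current state" can be pictured with a toy copy-on-write sketch in Python (storage only; real hypervisors also capture memory, device and network state):

# Toy model of a snapshot as "differences from the base image": only blocks
# written after the snapshot are recorded in the delta.

base_image = {0: b"boot", 1: b"root", 2: b"data"}   # block number -> contents
snapshot_delta = {}                                 # blocks written since the snapshot

def write_block(block, data):
    snapshot_delta[block] = data          # copy-on-write: base image stays untouched

def read_block(block):
    return snapshot_delta.get(block, base_image.get(block))

write_block(2, b"DATA")                   # guest overwrites block 2 after the snapshot
print(read_block(2))                      # b'DATA' -- current state of the guest disk
print(base_image[2])                      # b'data' -- what rolling back would restore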

On some hypervisors, snapshots of the guest OS can even be resumed on a different host. While a number of issues may be introduced to handle real-time migration, including the transfer delay and any differences that may exist between the two physical servers (e.g., IP address, number of processors or hard disk space), most live-migration solutions provide mechanisms to resolve these issues.

Classification of Types of Virtualization by Complexity of Hypervisor

We can distinguish the following five different types of virtualization:

Super-heavy-weight

This is hardware domain-based virtualization that is used only on high-end servers. Domains can, essentially, be described as "blades with common memory and I/O devices". Those "blades on steroids" are probably the closest you can get to extracting more power from a single server without the sacrifices in CPU, memory access and I/O speed that are typical for all other virtualization solutions. Of course there is no free lunch, and you need to pay for such luxury. Sun is the most prominent vendor of such servers (mainframe-class servers like the Sun Fire 15K).

A dynamic system domain (DSD) on Sun Fire 15K is an independent environment, a subset of a server, that is capable of running a unique version of firmware and a unique version of the Solaris operating environment. Each domain is insulated from the other domains. Continued operation of a domain is not affected by any software failures in other domains nor by most hardware failures in any other domain. The Sun Fire 15K system allows up to 18 domains to be configured.

A domain configuration unit (DCU) is a unit of hardware that can be assigned to a single domain; DCUs are the hardware components from which domains are constructed. DCUs that are not assigned to any domain are said to be in no-domain. There are several types of DCUs: CPU/Memory boards, I/O assemblies, etc. Sun Fire 15K hardware requires the presence of at least one board containing CPUs and memory, plus at least one of the I/O board types, in each configured domain. Typically these servers are NUMA based: access to the memory of other domains is slower than access to local memory.

 

Heavy-weight

By heavy-weight virtualization we mean full hardware virtualization as exemplified by VMware. CPU vendors are now paying huge attention to this type of virtualization, as they can no longer increase CPU frequency and are forced onto the path of increasing the number of cores. Intel's latest CPUs, now dominant in the server space, are a classic example of this trend. With eight and ten core CPUs available, it is clear that Intel is putting money on the virtualization trend. IBM Power5/Power6 and Sun UltraSparc T1/T2/T3 are examples among RISC CPUs.

All new Intel CPUs are "virtualization-friendly" and, with the exception of the cheapest models, contain instructions and hardware capabilities that make heavy-weight virtualization more efficient. First of all this is related to the capability of "zero address relocation": the availability of a special register which is added to each address calculation by regular instructions and thus provides the illusion of multiple "zero addresses" to the programs.

VMware is the most popular representative of this approach to hypervisor design, and recently it was greatly helped by Intel and AMD, who incorporated virtualization extensions into their CPUs. VMware started to gain popularity before the latest Intel CPUs with virtualization instruction set extensions, and demonstrated that it is possible to implement this approach reasonably efficiently even without hardware support. VMware officially supports a dozen different types of guests: it can run Linux (Red Hat and Suse), Solaris and Windows as virtual instances (guests) on one physical server. 32-bit Suse can be run in paravirtualized mode on VMware.

The industry consensus is that VMware's solution is overpriced. Please ignore hogwash like the following VMware PR:

Horschman countered the 'high pricing' claim saying "Virtualization customers should focus on cost per VM more than upfront license costs when choosing a hypervisor. VMware Infrastructure's exclusive ability to overcommit memory gives it an advantage in cost per VM the others can't match." And he adds, "Our rivals are simply trying to compensate for limitations in their products with realistic pricing."

This overcommitting of memory is a standard feature related to the presence of a virtual memory subsystem in the hypervisor, and it was first implemented by IBM VM/CMS in the early 1970s. So much for new technology. All those attempts to run dozens of guests on a server with multiple cores (and in mid-2011 you could get an 80-core server -- the HP DL980 -- for less than $60K) are more a result of the incompetence of typical IT brass, and of the related number of servers that simply circulate air in a typical datacenter, than of progress in virtualization technology.

No matter how much you can share memory (and overcommitment is just a new term for what IBM VM has done since 1972), you can't bypass the limitation of a single channel from CPU to memory, unless this is a NUMA server. The more guests are running, the more this channel is stressed, and running dozens of instances is possible mainly in situations when they are doing nothing or close to nothing ("circulating air" in corporate IT jargon). That happens (unpopular, unused corporate web servers are one typical example), but even for web servers paravirtualization and zones are much better solutions.

Even assuming the same efficiency as multiple standalone 1U servers, VMware is not cost efficient unless you can squeeze more than four guests onto a server. And more than four guests is possible only with servers that are doing nothing or close to nothing, because if each guest is equally loaded then each of them can use only 33% or less of the memory bandwidth of the server (which means the memory channel for a guest operating at 333MHz or less, assuming the server uses 1.028GHz memory). Due to this I would not recommend running four heavily used database servers on a single physical server for any organization. But running several servers for the compliance training that was implemented because the company was caught fixing prices, along with a server or two which implement a questionnaire about how good the company IT brass is at communicating IT policy to rank and file, is OK ;-)
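The arithmetic behind this claim can be sketched in a few lines of Python. This is a crude even-split model of a single memory channel shared by simultaneously active guests; caches and idle guests improve on it in practice, which is exactly why lightly loaded guests consolidate well:

# Crude even-split model of the memory-channel bottleneck: N simultaneously
# active guests share one memory channel, so each sees roughly 1/N of the
# bandwidth of a standalone server.

memory_clock_mhz = 1028   # figure used in the text

for active_guests in (1, 2, 3, 4, 8):
    share = memory_clock_mhz / active_guests
    print(f"{active_guests} active guest(s) -> ~{share:.0f} MHz of effective memory clock each")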

The following table demonstrates that the cost savings with fewer than four guests per physical server are non-existent, even if we assume equal efficiency of VMware and separate physical servers. Moreover, the VMware price premium means that you need at least eight guests on a single physical server to achieve the same cost efficiency as four Xen servers running two guests each (Red Hat and Novell do not charge for additional guests on the same physical server, up to a limit).

(All figures in thousands of dollars; SAN cards are Qlogic.)

                      Server  Phys.    Number  SAN    SAN      Server    VM       VM        OS        5-year  Annualized  Cost efficiency of one
                      cost    servers  of      cards  storage  maint.    license  maint.    maint.    TCO     cost per    guest vs. one 1U server
                                       guests                  (annual)           (annual)  (annual)          guest/srv   (annualized)
VMware solution
  Running 2 guests      7       1        2     0.00   0.00     0.42      5        1.4       0.35      25.02   12.51       -3.24
  Running 4 guests     10       1        4     4.00   3.00     0.42      5        1.4       0.35      38.52    9.63       -0.36
  Running 8 guests     20       1        8     4.00   6.00     0.42      5        1.4       0.35      58.52    7.32        3.13

Xen solution
  Running 2 guests      7       1        2     0.00   0.00     0.42      0        0         0.35      13.02    6.51        2.76
  Running 4 guests     10       1        4     4.00   3.00     0.42      0        1.3       0.35      33.02    8.26        1.02

Physical servers
  two 1U servers        5       2        0     0.00   0.00     0.42      0        0         0.35      18.54    9.27        0.00
  four 1U servers       5       4        0     0.00   0.00     0.42      0        0         0.35      37.08    9.27        0.00

Notes

1. Even assuming the same efficiency, there are no cost savings from running 4 or fewer guests per VMware server in comparison with an equal number of standard 1U servers.
2. The cost of blades is slightly higher than an equal number of 1U servers due to the cost of the enclosure, but it can be assumed equal for simplicity.
3. We assume that in the case of two instances no SAN is needed/used (internal drives are used for each guest).
4. We assume that in the case of 4 guests or more, SAN cards and SAN storage are used.
5. For Xen we assume that in the case of 4 or more guests Oracle VM is used (which has maintenance fees).
6. For simplicity the cost of SAN storage is assumed to be a fixed cost of $3K per 1TB per 5 years (includes SAN unit amortization, maintenance and switches; excludes SAN cards in the server itself).

Performance of VMware guests under high load is not impressive, as should be expected for any non-paravirtualized hypervisor. Here is a more realistic assessment from the rival Xen camp:

 Simon Crosby, CTO of the Virtualization and Management Division at Citrix Systems, writes on his blog: "The bottom line: VMware's 'ROI analysis' offers neither an ROI comparison nor any analysis. But it does offer valuable insight into the mindset of a company that will fight tooth and nail to maintain VI3 sales at the expense of a properly thought through solution that meets end user requirements.

The very fact that the VMware EULA still forbids Citrix or Microsoft or anyone in the Xen community from publishing performance comparisons against ESX is further testimony to VMware's deepest fear, that customers will become smarter about their choices, and begin to really question ROI."

The main advantage of heavy-weight virtualization is almost complete isolation of instances. Paravirtualization and blades achieve a similar level of isolation, so this advantage is not exclusive.

"The very fact that the VMware EULA still forbids Citrix or Microsoft or anyone in the Xen community from publishing performance comparisons against ESX is further testimony to VMware's deepest fear, that customers will become smarter about their choices, and begin to really question ROI."

-- Simon Crosby,  Citrix Systems

The fact that CPUs, memory and I/O channels (the PCI bus) are shared among guests means that you will never get the same speed under high simultaneous workloads for several guests as with an equal number of standalone servers, each with the corresponding fraction of CPUs and memory and the same set of applications. Especially problematic is the sharing of the memory bridge, which works at a lower speed than the CPUs and can starve them, becoming the bottleneck well before the CPU does. Each virtual instance of the OS loads pages independently of the others and competes for limited memory bandwidth. Even in the best case that means each guest gets a fraction of memory bandwidth that is lower than the memory bandwidth of a standalone server. So if, for example, two virtual instances are simultaneously active and are performing operations that do not fit in the L2 cache, each gets only about 2/3 of the memory bandwidth of a standalone system (accesses to memory are randomly spread in time, so the sum can be greater than 100%). With memory operating at 1.024GHz that means only about 666MHz of bandwidth is available to each guest, while on a standalone server it would be at least 800MHz and could be as high as 1.33GHz. In other words, you lose approximately 1/3 of memory bandwidth by jumping onto the virtualization bandwagon. That's why heavy-weight virtualization behaves badly on memory-intensive applications.

There can be a lot of synergy if you run two or more instances of identical OSes. Many pages representing identical parts of the kernel and applications can be loaded only once while being used in all virtual instances. But I think you lose stack overflow protection this way, as your pages are shared by different instances.

As memory speed and the memory channel are bottlenecks, adding CPUs (or cores) at some point becomes just a waste of money. The amount of resources used for intercommunication increases dramatically with the growth of the number of CPUs. VMware server farms based on the largest Intel servers, like the HP DL980 (up to eight 10-core CPUs), tend to suffer from this effect. The presence of a full, non-modified copy of the OS for each partition introduces a significant drag on resources (both memory- and CPU-wise). I/O load can be diminished by using SAN storage for each virtual instance's OS and multiple cards in the server. Still, in some deep sense heavy-weight partitioning is inefficient and will always waste a significant part of server resources.

Still, this approach is important for running legacy applications, which is the area where this type of virtualization shines.

Sun calls heavy-weight virtual partitions "logical domains" (LDoms). They are supported on Sun's T1-T3 CPU based servers and all the latest Oracle servers. Sun supports up to 32 guests with this virtualization technology. About the differences from LPARs, see Rolf M Dietze's blog:

Sun’s LDoms supply a virtual terminal server, so you have consoles for the partitions, but I guess this comes out of the UNIX history: You don’t like flying without any sight or instruments at high speed through caves, do you? So you need a console for a partition! T2000 with LDoms seems to support this, at IBM you need to buy an HMC (Linux-PC with HMC-software).

With crossbow virtual network comes to Solaris. LDoms seem to give all advantages of logical partitioning as IBMs have, but hopefully a bit faster and clearly less power consumption.

Sun offers a far more open licensing of course and: You do not need a Windows-PC to administer the machine (iSeries OS/400 is administered from such a thing).

A T2000 is fast and has up to 8 cores (32 thread-CPUs) 16GBRam and has a good price and those that do not really need the pure power and are more interested in partitioning.

The Solaris zones have some restrictions aka no NFS server in zones etc. That is where LDoms come in. That’s why I want to actually compare LDoms and LPARs.

It looks like it becomes cold out there for IBM boxes….

Medium-weight (para-virtualization)

Para-virtualization is a variant of native virtualization, where the VM (hypervisor) emulates only part of the hardware and provides a special API that requires OS modifications. The most popular representative of this approach is Xen, with AIX a distant second:

With Xen virtualization, a thin software layer known as the Xen hypervisor is inserted between the server’s hardware and the operating system. This provides an abstraction layer that allows each physical server to run one or more “virtual servers,” effectively decoupling the operating system and its applications from the underlying physical server.

IBM LPARs for AIX are currently the king of the hill in this area because of their higher stability in comparison with alternatives. IBM actually pioneered this class of VM machines in the late 1960s with the release of the famous VM/CMS. Until recently, Power5-based servers with AIX 5.3 and LPARs were the most battle-tested and reliable virtualized environments based on paravirtualization.

Xen is the king of the paravirtualization hill in the Intel space. Work on Xen has been supported by UK EPSRC grant GR/S01894, Intel Research, HP Labs and Microsoft Research (yes, despite naive Linux zealots' whining, Microsoft did contribute code to Linux ;-). Other things being equal, it provides higher speed and less overhead than native virtualization. NetBSD was the first to implement Xen. Currently the key platform for Xen is Linux, with Novell supporting it in the production version of Suse.

Xen is now resold commercially by IBM, Oracle and several other companies. XenSource, the company created for commercialization of Xen technology, was bought by Citrix.

The main advantage of Xen is that it supports live relocation of guests. It is also a more cost-effective solution than VMware, which is definitely overpriced.

The main problem is that para-virtualization requires modification of the OS kernel so that it is aware of the environment it is running in and passes control to the hypervisor when executing privileged instructions. Therefore it is not suitable for running legacy OSes or Microsoft Windows (although Xen can run Windows unmodified on the newer Intel 51xx CPU series with hardware virtualization support).
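The difference can be pictured with a toy Python sketch (purely conceptual, not how any real hypervisor is implemented): under full virtualization every privileged instruction traps into the hypervisor and gets emulated, while a paravirtualized kernel is patched to ask the hypervisor directly via hypercalls:

# Toy contrast between full virtualization (trap-and-emulate) and
# paravirtualization (explicit hypercalls from a modified guest kernel).

class Hypervisor:
    def emulate_trap(self, instruction):
        # full virtualization: the CPU traps the privileged instruction and the
        # hypervisor decodes and emulates it behind the guest's back
        return f"trapped and emulated: {instruction}"

    def hypercall(self, operation, *args):
        # paravirtualization: the modified guest kernel asks for the operation directly
        return f"hypercall {operation}{args} handled"

hv = Hypervisor()

# Unmodified guest kernel: does not know it is virtualized.
print(hv.emulate_trap("write to CR3"))            # e.g. a page-table switch

# Paravirtualized guest kernel: patched to cooperate with the hypervisor.
print(hv.hypercall("set_page_table", 0x1000))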

Para-virtualization improves speed in comparison with heavy-weight virtualization (much less context switching), but does little beyond that. It is unclear how much faster a para-virtualized instance of an OS is in comparison with heavy-weight virtualization on "virtualization-friendly" CPUs. The Xen page claims that:

Xen offers near-native performance for virtual servers with up to 10 times less overhead than proprietary offerings, and benchmarked overhead of well under 5% in most cases compared to 35% or higher overhead rates for other virtualization technologies.

It's unclear whether this difference was measured on old Intel CPUs or the new 5xxx series that supports virtualization extensions. I suspect the difference on newer CPUs should be smaller.

I would like to stress again that the level of OS modification is very basic, and the important idea of factoring out common functions like virtual memory management, which was implemented in classic VM/CMS, is not utilized. Therefore all the redundant processing typical for heavy-weight virtualization is present in a para-virtualization environment as well.

Note: Xen 3.0 and above support both para-virtualization and full (heavy-weight) virtualization to leverage the hardware virtualization support built into Intel VT-x and AMD Pacifica processors. According to the XenSource Products - Xen 3.0 page:

With the 3.0 release, Xen extends its feature leadership with functionality required to virtualize the servers found in today’s enterprise data centers. New features include:

One very interesting application of paravirtualization is so-called virtual appliances. This is a whole new area that we discuss on a separate page.

Another very interesting application of paravirtualization is "cloud" environments like the Amazon Elastic Compute Cloud.

All in all, paravirtualization and light-weight virtualization (BSD jails and Solaris zones) look like the most promising types of virtualization.

 

Light-weight virtualization

This type of virtualization was pioneered in FreeBSD (jails) and was further developed by Sun and introduced in Solaris 10 as the concept of zones. There are various experimental add-ons of this type for Linux, but none has gained much prominence.

Solaris 10 11/06 and later are capable of cloning a zone as well as relocating it to another box through a feature called Attach/Detach. It is also now possible to run Linux applications in zones on x86 servers (branded zones).

Zones are a really revolutionary and underappreciated development, which was hurt greatly by inept Sun management and the subsequent acquisition by Oracle. The key advantage is that you have a single instance of the OS, so the price that you pay in the case of heavy-weight virtualization is waived. That means that light-weight virtualization is the most efficient resource-wise. It also has great security value. Memory can become a bottleneck here, as all memory accesses are channeled via a single controller, but you have a single virtual memory system for all zones -- a great advantage that permits reusing memory for similar processes.

IBM's "lightweight" product would be "Workload manager" for AIX which is an older (2001 ???)and less elegant technology then BSD Jails and Solaris zones:

Current UNIX offerings for partitioning and workload management have clear architectural differences. Partitioning creates isolation between multiple applications running on a single server, hosting multiple instances of the operating system. Workload management supplies effective management of multiple, diverse workloads to efficiently share a single copy of the operating system and a common pool of resources

IBM's lightweight virtualization in versions of AIX before 6 operated under a different paradigm, with the closest thing to a zone being a "class". The system administrator (root) can delegate the administration of the subclasses of each superclass to a superclass administrator (a non-root user). Unlike zones, classes can be nested:

The central concept of WLM is the class. A class is a collection of processes (jobs) that has a single set of resource limits applied to it. WLM assigns processes to the various classes and controls the allocation of system resources among the different classes. For this purpose, WLM uses class assignment rules and per-class resource shares and limits set by the system administrator. The resource entitlements and limits are enforced at the class level. This is a way of defining classes of service and regulating the resource utilization of each class of applications to prevent applications with very different resource utilization patterns from interfering with each other when they are sharing a single server.

In AIX 6 IBM adopted Solaris style light-weight virtualization.


Blades

Blade servers are an increasingly important part of enterprise datacenters, with consistent double-digit growth which is outpacing the overall server market. IDC estimated that 500,000 blade servers were sold in 2005, or 7% of the total market, with customers spending $2.1 billion.

While blades are not virtualization in the pure technical sense, a rack of blades (a bladesystem) possesses some additional management capabilities that are similar to a virtualized system and that are not present in a stand-alone set of 1U servers. Blades usually have a shared I/O channel to NAS. They also have shared remote management capabilities (iLO on HP blades).

They can be viewed as a "hardware factorization" approach to server construction, which is not that different from virtualization. The first shot in this direction is the new generation of bladesystems like the IBM BladeCenter H (which has offered I/O virtualization since February 2006) and the HP BladeSystem c-Class. A bladesystem saves up to 30% power in comparison with rack-mounted 1U servers with identical CPU and memory configurations.

Sun also offers blades, but it is a minor player in this area. It offers the pretty interesting and innovative Sun Blade 8000 Modular System, which targets a higher end than usual blade servers. Here is how CNET described the key idea behind the server in the article "Sun defends big blade server: Size matters":

Sun co-founder Andy Bechtolsheim, the company's top x86 server designer and a respected computer engineer, shed light on his technical reasoning for the move.

"It's not that our blade is too large. It's that the others are too small," he said.

Today's dual-core processors will be followed by models with four, eight and 16 cores, Bechtolsheim said. "There are two megatrends in servers: miniaturization and multicore--quad-core, octo-core, hexadeci-core. You definitely want bigger blades with more memory and more input-output."

When blade server leaders IBM and HP introduced their second-generation blade chassis earlier this year, both chose larger products. IBM's grew 3.5 inches taller, while HP's grew 7 inches taller. But opinions vary on whether Bechtolsheim's prediction of even larger systems will come true.

"You're going to have bigger chassis," said IDC analyst John Humphries, because blade server applications are expanding from lower-end tasks such as e-mail to higher-end tasks such as databases. On the more cautious side is Illuminata analyst Gordon Haff, who said that with IBM and HP just at the beginning of a new blade chassis generation, "I don't see them rushing to add additional chassis any time soon."

Business reasons as well as technology reasons led Sun to re-enter the blade server arena with big blades rather than more conventional smaller models that sell in higher volumes, said the Santa Clara, Calif.-based company's top server executive, John Fowler. "We believe there is a market for a high-end capabilities. And sometimes you go to where the competition isn't," Fowler said.

As a result of such factorization, more and more functions move into the blade enclosure. Power consumption improves dramatically, as blades typically use low-power CPUs and all blades share the same power supplies, which in the case of a full or nearly full enclosure lets the power supplies work at much greater efficiency (twice or more that of a typical server). That cuts air conditioning costs too. Newer blades also monitor air flow and adjust fans accordingly. As a result the energy bill can be half of that for the same number of 1U servers.
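A back-of-the-envelope Python sketch shows where the "half the energy bill" claim comes from. All wattages and efficiencies below are assumptions for illustration only, not measured figures for any particular product:

# Rough energy comparison: sixteen 1U servers with their own small power
# supplies vs. one blade enclosure with a shared, efficiently loaded supply.
# All figures below are illustrative assumptions, not vendor data.

SERVERS          = 16
LOAD_1U_WATTS    = 350    # component (DC) load of one 1U server (assumed)
PSU_EFF_1U       = 0.70   # small, lightly loaded power supply (assumed)
LOAD_BLADE_WATTS = 250    # one blade: no per-blade PSU, shared fans (assumed)
ENCLOSURE_WATTS  = 400    # shared supplies, fans, management module (assumed)
PSU_EFF_BLADE    = 0.90   # large shared supply near its efficiency sweet spot (assumed)

rack_total  = SERVERS * LOAD_1U_WATTS / PSU_EFF_1U
blade_total = (SERVERS * LOAD_BLADE_WATTS + ENCLOSURE_WATTS) / PSU_EFF_BLADE

print(f"16 x 1U servers : {rack_total:.0f} W at the wall")
print(f"blade enclosure : {blade_total:.0f} W at the wall "
      f"({100 * (1 - blade_total / rack_total):.0f}% less)")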

Blades generally solve the memory bandwidth problem typical for most types of virtualization except domain-based virtualization. Think of blades as predefined (fixed) partitions, each with a fixed number of CPUs and a fixed amount of memory. Dynamic swapping of images between blades is possible. Some I/O can be local, as a blade can typically carry two (half-size blades) or four (full-size blades) 2.5" disks, which permits offloading OS-related I/O from application-related I/O. With solid state drives being a reliable and fast, albeit expensive, alternative to traditional rotating hard drives, and with memory cards like ioDrive, local disk speed can be as good as or better than on a large server with, say, sixteen 15K RPM hard drives.

 

 

Major vendors support

Among major vendors:

Conclusions

There is no free lunch, and virtualization is not a panacea. It increases the complexity of the environment and puts severe stress on the single server that hosts multiple virtual machine instances. Failure of this server leads to failure of all instances. The same is true of a failure of the hypervisor.

All in all, paravirtualization and light-weight virtualization (BSD jails and Solaris zones) look like the most promising types of virtualization.

The natural habitat of virtualization are:

At the same time, virtualization opens new capabilities for running multiple instances of the same application, for example a web server, and some types of virtualization, like paravirtualization and light-weight virtualization (zones), can do it more, not less, efficiently than a similar single physical server with multiple web servers running on different ports.

Sometimes it makes sense to run a single virtual machine instance on a server to get such advantages as on-the-fly relocation of instances, virtual image manipulation capabilities, etc. With technologies like Xen, which claim less than 5% overhead, that approach becomes feasible. "Binary servers" -- servers that host just two applications -- also look very promising, as in this case you can still buy low-cost servers and, in the case of Xen, do not need to pay for the hypervisor.

Migration of rack-mounted servers to blade servers is probably the safest approach to server consolidation. Managers without experience of working in a partitioned environment shouldn't underestimate what their administrators need to learn and the set of new problems that virtualization creates. One good piece of advice is: "Make sure you put the training dollars in."

There are also other problems. A lot of software vendors won't certify applications as compatible with a virtual environment, for example as VMware compatible. In such cases running the application in a virtual environment means that you need to assume the risks and cannot count on vendor tech support to resolve your issues.

All in all, virtualization is mainly playing out now in the desktop and low-end server space. It makes sense to proceed slowly and test the water before jumping in. Those that have adopted virtualization have, on average, only about 20% of their environment virtualized, according to IDC. VMware's pricing structure is a little bit ridiculous and nullifies hardware savings, if any. Their maintenance costs are even worse. That means that alternative solutions like Xen 3 or Microsoft should be considered on the Intel side, and IBM and Sun on the Unix side. As vendor consolidation is ahead, if you don't have a clear benefit from virtualization today, you can wait or limit yourself to "sure bets" like development, testing and staging servers. The next version of Windows Server will put serious pressure on VMware in a year or so. Xen is also making progress, with IBM support behind it. With those competitive pressures, VMware could become significantly less expensive in the future.

VMs are also touted as a solution to the computer security problem. It's pretty obvious that they can improve security. After all, if you're running your browser on one VM and your mailer on another, a security failure in one shouldn't affect the other. If one virtual machine is compromised you can just discard it and create a fresh image. There is some merit to that argument, and in many situations it's a good configuration to use. But at the same time the transient nature of virtual machines introduces new security and compliance challenges not addressed by traditional systems management processes and tools. For example, virtual images are more portable, and the possibility of someone stealing a whole OS image and running it on a different VM is very real. The new security risks inherent in virtualized environments need to be understood and mitigated.

Here is a suitable definition taken from an article published in Linux Magazine:

"(Virtual machines) offer the ability to partition the resources of a large machine between a large number of users in such a way that those users can't interfere with one another. Each user gets a virtual machine running a separate operating system with a certain amount of resources assigned to it. Getting more memory, disks, or processors is a matter of changing a configuration, which is far easier than buying and physically installing the equivalent hardware."

And FreeBSD and Solaris users have a lightweight VM built into the OS. Actually, FreeBSD jails, Solaris 10 zones and Xen are probably the most democratic lightweight VMs. To counter the threat from free VMs, VMware now produces a free version too. VMware Player is able to run virtual machines made in VMware Workstation. There are many free OS images on the website, most of them community made. There are also freeware tools for creating VMs and for mounting, manipulating and converting VMware disks and floppies, so it is possible to create, run and maintain virtual machines for free (even for commercial use).

Here is how this class of virtual machines is described in Wikipedia:

Conventional emulators like Bochs emulate the microprocessor, executing each guest CPU instruction by calling a software subroutine on the host machine that simulates the function of that CPU instruction. This abstraction allows the guest machine to run on host machines with a different type of microprocessor, but is also very slow.

An improvement on this approach is dynamically recompiling blocks of machine instructions the first time they are executed, and later using the translated code directly when the code runs a second time. This approach is taken by Microsoft's Virtual PC for Mac OS X.

VMware Workstation takes an even more optimized approach and uses the CPU to run code directly when this is possible. This is the case for user mode and virtual 8086 mode code on x86. When direct execution is not possible, code is rewritten dynamically. This is the case for kernel-level and real mode code. In VMware's case, the translated code is put into a spare area of memory, typically at the end of the address space, which can then be protected and made invisible using the segmentation mechanisms. For these reasons, VMware is dramatically faster than emulators, running at more than 80% of the speed that the virtual guest OS would run on hardware. VMware boasts an overhead as small as 3%–6% for computationally intensive applications.

Although VMware virtual machines run in user mode, VMware Workstation itself requires installing various drivers in the host operating system, notably in order to dynamically switch the GDT and the IDT tables.

One final note: it is often erroneously believed that virtualization products like VMware or Virtual PC replace offending instructions or simply run kernel code in user mode. Neither of these approaches can work on x86. Replacing instructions means that if the code reads itself it will be surprised not to find the expected content; it is not possible to protect code against reading and at the same time allow normal execution; replacing in place is complicated. Running the code unmodified in user mode is not possible either, as most instructions which just read the machine state do not cause an exception and will betray the real state of the program, and certain instructions silently change behavior in user mode. A rewrite is always necessary; a simulation of the current program counter in the original location is performed when necessary and notably hardware code breakpoints are remapped.

The Xen open source virtual machine partitioning project has been picking up momentum since acquiring the backing of venture capitalists at the end of 2004. Now, server makers and Linux operating system providers are starting to line up to support the project, contribute code, and make it a feature of their systems at some point in the future. Work on Xen has been supported by UK EPSRC grant GR/S01894, Intel Research, HP Labs and Microsoft Research. Novell and Advanced Micro Devices also back Xen.

While everybody seemed to get interested in the open source Xen virtual machine partitioning hypervisor just when XenSource incorporated and made its plans clear for the Linux platform, the NetBSD variant of the BSD Unix platform has been Xen-compatible for over a year now, and will be embracing the technology as fully as Linux is expected to.

Xen has really taken off since December 2004, when the leaders of the Xen project formed a corporation to sell and support Xen and immediately secured $6 million from venture capitalists Kleiner Perkins Caufield & Byers and Sevin Rosen Funds.

Xen is headed up by Ian Pratt, a senior faculty member at the University of Cambridge in the United Kingdom, who is the chief technology officer at XenSource, the company that has been created to commercialize Xen. Pratt told me in December that he had basically been told to start a company to support Xen because some big financial institutions on Wall Street and in the City (that's London's version of Wall Street for the Americans reading this who may not have heard the term) insisted that he do so because they loved what Xen was doing.

Seven years ago, Ian Pratt joined the senior faculty at the University of Cambridge in the United Kingdom, and after being on the staff for two years, he came up with a schematic for a futuristic, distributed computing platform for wide area network computing called Xenoserver. The idea behind the Xenoserver project is one that now sounds familiar, at least in concept, but sounded pretty sci-fi seven years ago: hundreds of millions of virtual machines running on tens of millions of servers, connected by the Internet, and delivering virtualized computing resources on a utility basis where people are charged for the computing they use. The Xenoserver project consisted of the Xen virtual machine monitor and hypervisor abstraction layer, which allows multiple operating systems to logically share the hardware on a single physical server, the Xenoserver Open Platform for connecting virtual machines to distributed storage and networks, and the Xenoboot remote boot and management system for controlling servers and their virtual machines over the Internet.

Work on the Xen hypervisor began in 1999 at Cambridge, where Pratt was irreverently called the "XenMaster" by project staff and students. During that first year, Pratt and his project team identified how to do secure partitioning on 32-bit X86 servers using a hypervisor and worked out a means for shuttling active virtual machine partitions around a network of machines. This is more or less what VMware does with its ESX Server partitioning software and its VMotion add-on to that product. About 18 months ago, after years of coding the hypervisor in C and the interface in Python, the Xen portion of the Xenoserver project was released as Xen 1.0. According to Pratt, it had tens of thousands of downloads. This provided the open source developers working on Xen with a lot of feedback, which was used to create Xen 2.0, which started shipping last year. With the 2.0 release, the Xen project added the Live Migration feature for moving virtual machines between physical machines, and then added some tweaks to make the code more robust.

Xen and VMware's GSX Server and ESX Server have a major architectural difference. VMware's hypervisor layer completely abstracts the X86 system, which means any operating system supported on X86 processors can be loaded into a virtual machine partition. This, said Pratt, puts tremendous overhead on the systems. Xen was designed from the get-go with an architecture focused on running virtual machines in a lean and mean fashion, and it does this by having versions of open source operating systems tweaked to run on the Xen hypervisor. That is why Xen 2.0 only supports Linux 2.4, Linux 2.6, FreeBSD 4.9 and 5.2, and NetBSD 2.0 at the moment; special tweaks of NetBSD and Plan 9 are in the works, and with Solaris 10 soon to be open source, that will be available as well. With Xen 1.0, Pratt had access to the Windows XP source code from Microsoft, which allowed the Xen team to put Windows XP inside Xen partitions. With the future "Pacifica" hardware virtualization features in single-core and dual-core Opterons, and with Intel bringing a version of its "Vanderpool" virtualization features to Xeon and Itanium processors as well as Pentium 4 processors (where it is called "Silvervale" for some reason), both Xen and VMware partitioning software will get hardware-assisted virtual machine partitioning. While no one is saying this outright, because they cannot reveal how Pacifica or Vanderpool actually work, these technologies may do most of the X86 abstraction work and therefore should allow standard, compiled operating system kernels to run inside Xen or VMware partitions. That means Microsoft can't stop Windows from being supported inside Xen over the long haul.

Thor Lancelot Simon, one of the key developers and administrators at the NetBSD Foundation, which controls the development of NetBSD, reminded everyone that NetBSD has been supporting the Xen 1.2 hypervisor and monitor within a variant of the NetBSD kernel (that's NetBSD/xen instead of NetBSD/i386) since March of last year. Moreover, the foundation's own servers are all equipped with Xen, which allows programmers to work in isolated partitions with dedicated resources and not stomp all over each other as they are coding and compiling. "We aren't naive enough to think that any system has perfect security; but Xen helps us isolate critical systems from each other, and at the same time helps keep our systems physically compact and easy to manage," he said. "When you combine virtualization with Xen with NetBSD's small size, code quality, permissive license, and comprehensive set of security features, it's pretty clear you have a winning combination, which is why we run it on our own systems." NetBSD contributor Manuel Bouyer has done a lot of work to integrate the Xen 2.0 hypervisor and monitor into the NetBSD-current branch, and he said he would be making changes to the NetBSD/i386 release that would integrate the /xen kernels into it and allow Xen partitions to run in privileged and unprivileged mode.

The Xen 3.0 hypervisor and monitor is expected some time in late 2005 or early 2006, with support for 64-bit Xeon and Opteron processors. XenSource's Pratt told me recently that Xen 4.0 is due to be released in the second half of 2005, and it will have better tools for provisioning and managing partitions. It is unclear how the NetBSD project will absorb these changes, but NetBSD 3.0 is expected around the middle of 2005. The project says it plans to get one big release of NetBSD out the door once a year going forward.



NEWS CONTENTS

Old News ;-)


[Jul 16, 2017] How to install and setup LXC (Linux Container) on Fedora Linux 26 – nixCraft

Jul 16, 2017 | www.cyberciti.biz
How do I install, create, and manage LXC (Linux Containers – an operating system-level virtualization technology) on a Fedora Linux version 26 server?

LXC is an acronym for Linux Containers. It is nothing but an operating system-level virtualization technology for running multiple isolated Linux distros (system containers) on a single Linux host. This tutorial shows you how to install and manage LXC containers on a Fedora Linux server.

Our sample setup

LXC is often described as a lightweight virtualization technology. You can think of LXC as a chroot jail on steroids. There is no guest operating system involved, so you can only run Linux distros with LXC: you cannot run MS-Windows, *BSD or any other operating system, but you can run CentOS, Fedora, Ubuntu, Debian, Gentoo or any other Linux distro. Traditional virtualization such as KVM/Xen/VMware and paravirtualization need a full operating system image for each instance, and with them you can run any operating system.

Installation

Type the following dnf command to install lxc and related packages on Fedora 26:
$ sudo dnf install lxc lxc-templates lxc-extra debootstrap libvirt perl gpg
Sample outputs:

Fig.01: LXC installation on Fedora 26

Start and enable needed services

First, start the virtualization daemon (libvirtd) and the lxc service using the systemctl command:
$ sudo systemctl start libvirtd.service
$ sudo systemctl start lxc.service
$ sudo systemctl enable lxc.service

Sample outputs:

Created symlink /etc/systemd/system/multi-user.target.wants/lxc.service → /usr/lib/systemd/system/lxc.service.

Verify that services are running:
$ sudo systemctl status libvirtd.service
Sample outputs:

● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-07-13 07:25:30 UTC; 40s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 3688 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─3688 /usr/sbin/libvirtd
           ├─3760 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           └─3761 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, sockets bound exclusively to interface virbr0
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: reading /etc/resolv.conf
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.11.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.13.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.14.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /etc/hosts - 3 addresses
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: read /var/lib/libvirt/dnsmasq/default.hostsfile

And:
$ sudo systemctl status lxc.service
Sample outputs:

● lxc.service - LXC Container Initialization and Autoboot Code
   Loaded: loaded (/usr/lib/systemd/system/lxc.service; enabled; vendor preset: disabled)
   Active: active (exited) since Thu 2017-07-13 07:25:34 UTC; 1min 3s ago
     Docs: man:lxc-autostart
           man:lxc
 Main PID: 3830 (code=exited, status=0/SUCCESS)
      CPU: 9ms

Jul 13 07:25:34 nixcraft-f26 systemd[1]: Starting LXC Container Initialization and Autoboot Code...
Jul 13 07:25:34 nixcraft-f26 systemd[1]: Started LXC Container Initialization and Autoboot Code.

LXC networking

To view the configured networking interface for lxc, run:
$ sudo brctl show
Sample outputs:

bridge name	bridge id		STP enabled	interfaces
virbr0		8000.525400293323	yes		virbr0-nic

You must set the default bridge to virbr0 in the file /etc/lxc/default.conf:
$ sudo vi /etc/lxc/default.conf
Sample config (replace lxcbr0 with virbr0 for lxc.network.link):

lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

Save and close the file. To see DHCP range used by containers, enter:
$ sudo systemctl status libvirtd.service | grep range
Sample outputs:

Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
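
If you prefer to script the bridge change rather than edit /etc/lxc/default.conf by hand, a sed one-liner along the following lines should work (a minimal sketch, assuming the stock file still points lxc.network.link at lxcbr0):
$ sudo cp /etc/lxc/default.conf /etc/lxc/default.conf.bak
$ sudo sed -i 's/^lxc.network.link = lxcbr0/lxc.network.link = virbr0/' /etc/lxc/default.conf
$ grep 'lxc.network.link' /etc/lxc/default.conf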

To check the current kernel for lxc support, enter:
$ lxc-checkconfig
Sample outputs:

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.11.9-300.fc26.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
FUSE (for use with lxcfs): enabled

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
How can I create an Ubuntu Linux container?

Type the following command to create an Ubuntu 16.04 LTS container:
$ sudo lxc-create -t download -n ubuntu-c1 -- -d ubuntu -r xenial -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=xenial, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

To set up the password for the admin (ubuntu) user, run:
$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd ubuntu

Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully

Make sure root account is locked out:
$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd
To start the container, run:
$ sudo lxc-start -n ubuntu-c1
To log in to the container named ubuntu-c1, use the ubuntu user and the password set earlier:
$ lxc-console -n ubuntu-c1
Sample outputs:

Fig.02: Launch a console for the specified container

You can now install packages and configure your server. For example, to enable sshd, run the apt-get command (or the apt command):
ubuntu@ubuntu-c1:~$ sudo apt-get install openssh-server
To exit from lxc-console, type Ctrl+a q; this ends the console session and returns you to the host.
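
Once openssh-server is installed inside the container, you can also reach it over the network instead of using lxc-console. A minimal sketch (assuming the container was assigned an address from the libvirt DHCP range shown earlier):
$ sudo lxc-info -n ubuntu-c1 -iH
192.168.122.56
$ ssh ubuntu@192.168.122.56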

How do I create a Debian Linux container?

Type the following command to create a Debian 9 ("stretch") container:
$ sudo lxc-create -t download -n debian-c1 -- -d debian -r stretch -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a Debian container (release=stretch, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

To set up the root account password, run:
$ sudo chroot /var/lib/lxc/debian-c1/rootfs/ passwd
Start the container and log into it for management purposes by running:
$ sudo lxc-start -n debian-c1
$ lxc-console -n debian-c1

How do I create a CentOS Linux container?

Type the following command to create a CentOS 7 container:
$ sudo lxc-create -t download -n centos-c1 -- -d centos -r 7 -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a CentOS container (release=7, arch=amd64, variant=default)

To enable sshd, run: yum install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Set the root account password and start the container:
$ sudo chroot /var/lib/lxc/centos-c1/rootfs/ passwd
$ sudo lxc-start -n centos-c1
$ lxc-console -n centos-c1

How do I create a Fedora Linux container?

Type the following command to create a Fedora 25 container:
$ sudo lxc-create -t download -n fedora-c1 -- -d fedora -r 25 -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a Fedora container (release=25, arch=amd64, variant=default)

To enable sshd, run: dnf install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Set the root account password and start the container:
$ sudo chroot /var/lib/lxc/fedora-c1/rootfs/ passwd
$ sudo lxc-start -n fedora-c1
$ lxc-console -n fedora-c1

How do I create a CentOS 6 Linux container and store it on btrfs?

You need to format a hard disk (or partition) as btrfs, mount it, and use it as the container path:
# mkfs.btrfs /dev/sdb
# mount /dev/sdb /mnt/btrfs/

If you do not have a /dev/sdb, create an image file using the dd or fallocate command as follows:
# fallocate -l 10G /nixcraft-btrfs.img
# losetup /dev/loop0 /nixcraft-btrfs.img
# mkfs.btrfs /dev/loop0
# mount /dev/loop0 /mnt/btrfs/
# btrfs filesystem show

Sample outputs:

Label: none  uuid: 4deee098-94ca-472a-a0b5-0cd36a205c35
	Total devices 1 FS bytes used 361.53MiB
	devid    1 size 10.00GiB used 3.02GiB path /dev/loop0

Now create a CentOS 6 LXC:
# lxc-create -B btrfs -P /mnt/btrfs/ -t download -n centos6-c1 -- -d centos -r 6 -a amd64
# chroot /mnt/btrfs/centos6-c1/rootfs/ passwd
# lxc-start -P /mnt/btrfs/ -n centos6-c1
# lxc-console -P /mnt/btrfs -n centos6-c1
# lxc-ls -P /mnt/btrfs/ -f

Sample outputs:

NAME       STATE   AUTOSTART GROUPS IPV4            IPV6 
centos6-c1 RUNNING 0         -      192.168.122.145 -    
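
When you are done with the btrfs-backed container, the same -P (alternate container path) option applies to stopping and destroying it (a sketch, using the paths from above):
# lxc-stop -P /mnt/btrfs/ -n centos6-c1
# lxc-destroy -P /mnt/btrfs/ -n centos6-c1
# umount /mnt/btrfs/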
How do I see a list of all available images?

Type the following command:
$ lxc-create -t download -n NULL -- --list
Sample outputs:

Setting up the GPG keyring
Downloading the image index

---
DIST	RELEASE	ARCH	VARIANT	BUILD
---
alpine	3.1	amd64	default	20170319_17:50
alpine	3.1	armhf	default	20161230_08:09
alpine	3.1	i386	default	20170319_17:50
alpine	3.2	amd64	default	20170504_18:43
alpine	3.2	armhf	default	20161230_08:09
alpine	3.2	i386	default	20170504_17:50
alpine	3.3	amd64	default	20170712_17:50
alpine	3.3	armhf	default	20170103_17:50
alpine	3.3	i386	default	20170712_17:50
alpine	3.4	amd64	default	20170712_17:50
alpine	3.4	armhf	default	20170111_20:27
alpine	3.4	i386	default	20170712_17:50
alpine	3.5	amd64	default	20170712_17:50
alpine	3.5	i386	default	20170712_17:50
alpine	3.6	amd64	default	20170712_17:50
alpine	3.6	i386	default	20170712_17:50
alpine	edge	amd64	default	20170712_17:50
alpine	edge	armhf	default	20170111_20:27
alpine	edge	i386	default	20170712_17:50
archlinux	current	amd64	default	20170529_01:27
archlinux	current	i386	default	20170529_01:27
centos	6	amd64	default	20170713_02:16
centos	6	i386	default	20170713_02:16
centos	7	amd64	default	20170713_02:16
debian	jessie	amd64	default	20170712_22:42
debian	jessie	arm64	default	20170712_22:42
debian	jessie	armel	default	20170711_22:42
debian	jessie	armhf	default	20170712_22:42
debian	jessie	i386	default	20170712_22:42
debian	jessie	powerpc	default	20170712_22:42
debian	jessie	ppc64el	default	20170712_22:42
debian	jessie	s390x	default	20170712_22:42
debian	sid	amd64	default	20170712_22:42
debian	sid	arm64	default	20170712_22:42
debian	sid	armel	default	20170712_22:42
debian	sid	armhf	default	20170711_22:42
debian	sid	i386	default	20170712_22:42
debian	sid	powerpc	default	20170712_22:42
debian	sid	ppc64el	default	20170712_22:42
debian	sid	s390x	default	20170712_22:42
debian	stretch	amd64	default	20170712_22:42
debian	stretch	arm64	default	20170712_22:42
debian	stretch	armel	default	20170711_22:42
debian	stretch	armhf	default	20170712_22:42
debian	stretch	i386	default	20170712_22:42
debian	stretch	powerpc	default	20161104_22:42
debian	stretch	ppc64el	default	20170712_22:42
debian	stretch	s390x	default	20170712_22:42
debian	wheezy	amd64	default	20170712_22:42
debian	wheezy	armel	default	20170712_22:42
debian	wheezy	armhf	default	20170712_22:42
debian	wheezy	i386	default	20170712_22:42
debian	wheezy	powerpc	default	20170712_22:42
debian	wheezy	s390x	default	20170712_22:42
fedora	22	amd64	default	20170216_01:27
fedora	22	i386	default	20170216_02:15
fedora	23	amd64	default	20170215_03:33
fedora	23	i386	default	20170215_01:27
fedora	24	amd64	default	20170713_01:27
fedora	24	i386	default	20170713_01:27
fedora	25	amd64	default	20170713_01:27
fedora	25	i386	default	20170713_01:27
gentoo	current	amd64	default	20170712_14:12
gentoo	current	i386	default	20170712_14:12
opensuse	13.2	amd64	default	20170320_00:53
opensuse	42.2	amd64	default	20170713_00:53
oracle	6	amd64	default	20170712_11:40
oracle	6	i386	default	20170712_11:40
oracle	7	amd64	default	20170712_11:40
plamo	5.x	amd64	default	20170712_21:36
plamo	5.x	i386	default	20170712_21:36
plamo	6.x	amd64	default	20170712_21:36
plamo	6.x	i386	default	20170712_21:36
ubuntu	artful	amd64	default	20170713_03:49
ubuntu	artful	arm64	default	20170713_03:49
ubuntu	artful	armhf	default	20170713_03:49
ubuntu	artful	i386	default	20170713_03:49
ubuntu	artful	ppc64el	default	20170713_03:49
ubuntu	artful	s390x	default	20170713_03:49
ubuntu	precise	amd64	default	20170713_03:49
ubuntu	precise	armel	default	20170713_03:49
ubuntu	precise	armhf	default	20170713_03:49
ubuntu	precise	i386	default	20170713_03:49
ubuntu	precise	powerpc	default	20170713_03:49
ubuntu	trusty	amd64	default	20170713_03:49
ubuntu	trusty	arm64	default	20170713_03:49
ubuntu	trusty	armhf	default	20170713_03:49
ubuntu	trusty	i386	default	20170713_03:49
ubuntu	trusty	powerpc	default	20170713_03:49
ubuntu	trusty	ppc64el	default	20170713_03:49
ubuntu	xenial	amd64	default	20170713_03:49
ubuntu	xenial	arm64	default	20170713_03:49
ubuntu	xenial	armhf	default	20170713_03:49
ubuntu	xenial	i386	default	20170713_03:49
ubuntu	xenial	powerpc	default	20170713_03:49
ubuntu	xenial	ppc64el	default	20170713_03:49
ubuntu	xenial	s390x	default	20170713_03:49
ubuntu	yakkety	amd64	default	20170713_03:49
ubuntu	yakkety	arm64	default	20170713_03:49
ubuntu	yakkety	armhf	default	20170713_03:49
ubuntu	yakkety	i386	default	20170713_03:49
ubuntu	yakkety	powerpc	default	20170713_03:49
ubuntu	yakkety	ppc64el	default	20170713_03:49
ubuntu	yakkety	s390x	default	20170713_03:49
ubuntu	zesty	amd64	default	20170713_03:49
ubuntu	zesty	arm64	default	20170713_03:49
ubuntu	zesty	armhf	default	20170713_03:49
ubuntu	zesty	i386	default	20170713_03:49
ubuntu	zesty	powerpc	default	20170317_03:49
ubuntu	zesty	ppc64el	default	20170713_03:49
ubuntu	zesty	s390x	default	20170713_03:49
---
How do I list the containers existing on the system?

Type the following command:
$ lxc-ls -f
Sample outputs:

NAME      STATE   AUTOSTART GROUPS IPV4            IPV6 
centos-c1 RUNNING 0         -      192.168.122.174 -    
debian-c1 RUNNING 0         -      192.168.122.241 -    
fedora-c1 RUNNING 0         -      192.168.122.176 -    
ubuntu-c1 RUNNING 0         -      192.168.122.56  - 
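
Note that the AUTOSTART column is 0 for every container above. To have a container brought up automatically at host boot by lxc.service, you can append the standard lxc.start.auto key to its config file (a sketch; /var/lib/lxc is the usual default path and may differ on your setup):
$ echo 'lxc.start.auto = 1' | sudo tee -a /var/lib/lxc/centos-c1/config
$ echo 'lxc.start.delay = 5' | sudo tee -a /var/lib/lxc/centos-c1/config
$ sudo lxc-autostart --list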
How do I query information about a container?

The syntax is:
$ lxc-info -n {container}
$ lxc-info -n centos-c1

Sample outputs:

Name:           centos-c1
State:          RUNNING
PID:            5749
IP:             192.168.122.174
CPU use:        0.87 seconds
BlkIO use:      6.51 MiB
Memory use:     31.66 MiB
KMem use:       3.01 MiB
Link:           vethQIP1US
 TX bytes:      2.04 KiB
 RX bytes:      8.77 KiB
 Total bytes:   10.81 KiB
How do I stop/start/restart a container?

The syntax is:
$ sudo lxc-start -n {container}
$ sudo lxc-start -n fedora-c1
$ sudo lxc-stop -n {container}
$ sudo lxc-stop -n fedora-c1

How do I monitor container statistics?

To display containers, updating every second, sorted by memory use:
$ lxc-top --delay 1 --sort m
To display containers, updating every second, sorted by cpu use:
$ lxc-top --delay 1 --sort c
To display containers, updating every second, sorted by block I/O use:
$ lxc-top --delay 1 --sort b
Sample outputs:

Fig.03: Container statistics shown with lxc-top

How do I destroy/delete a container?

The syntax is:
$ sudo lxc-destroy -n {container}
$ sudo lxc-stop -n fedora-c2
$ sudo lxc-destroy -n fedora-c2

If a container is running, stop it first and destroy it:
$ sudo lxc-destroy -f -n fedora-c2

How do I create, list, and restore container snapshots?

The syntax is as follows for each snapshot operation. Please note that you must use a snapshot-aware storage backend such as BTRFS, ZFS, or LVM.

Create snapshot for a container

$ sudo lxc-snapshot -n {container} -c "comment for snapshot"
$ sudo lxc-snapshot -n centos-c1 -c "13/July/17 before applying patches"

List snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -L -C

Restore snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -r snap0

Destroy/Delete snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -d snap0

Posted by: Vivek Gite

The author is the creator of nixCraft, a seasoned sysadmin, and a trainer for the Linux operating system and Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector.

[May 29, 2017] Release of Wine 2.8

May 29, 2017 | news.softpedia.com
What's new in this release (see below for details):
- TCP and UDP connection support in WebServices.
- Various shader improvements for Direct3D 11.
- Improved support for high DPI settings.
- Partial reimplementation of the GLU library.
- Support for recent versions of OSMesa.
- Window management improvements on macOS.
- Direct3D command stream runs asynchronously.
- Better serial and parallel ports autodetection.
- Still more fixes for high DPI settings.
- System tray notifications on macOS.
- Various bug fixes.

... improved support for Warhammer 40,000: Dawn of War III, which will be ported to Linux and SteamOS platforms by Feral Interactive on June 8. Wine 2.9 is here to introduce support for tessellation shaders in Direct3D, binary mode support in WebServices, RegEdit UI improvements, and clipboard changes detected through Xfixes.

...

The Wine 2.9 source tarball can be downloaded right now from our website if you fancy compiling it on your favorite GNU/Linux distribution, but please try to keep in mind that this is a pre-release version not suitable for production use. We recommend installing the stable Wine branch if you want to have a reliable and bug-free experience.

Wine 2.9 will also be installable from the software repos of your operating system in the coming days.

[Jan 08, 2016] Canonical claims LXD crushes KVM

May 19, 2015 | ZDNet

Shuttleworth said, "LXD crushes traditional virtualisation for common enterprise environments, where density and raw performance are the primary concerns. Canonical is taking containers to the level of a full hypervisor, with guarantees of CPU, RAM, I/O and latency backed by silicon and the latest Ubuntu kernels."

So what is crushing? According to Shuttleworth, LXD runs guest machines 14.5 times more densely and with 57 percent less latency than KVM. So, for example, you can run 47 KVM Ubuntu VMs on a 16GB Intel server, or an amazing 536 LXD Ubuntu containers on the same hardware.

Shuttleworth also stated that LXD was far faster than KVM. For example, all 536 guests started with LXD in far less time than it took KVM to launch its 37 guests. "On average" he claimed," LXD guests started in 1.5 seconds, while KVM guests took 25 seconds to start."

As for latency, Shuttleworth boasted that, "Without the overhead of emulating a VM, LXD avoids the scheduling latencies and other performance hazards. Using a sample 0MQ [a popular Linux high-performance asynchronous messaging library] workload, LXD guests had 57 percent less latency than KVM guests."

Thus, LXD should cut more than half of the latency for such latency-sensitive workloads as voice or video transcode. This makes LXD an important potential tool in the move to network function virtualisation (NFV) in telecommunications and media, and the convergence of cloud and high performance computing.

Indeed, Shuttleworth claimed that with LXD the Ubuntu containers ran at speeds so close to bare-metal that they couldn't see any performance difference. Now, that's impressive!


LXD, however, as Shuttleworth pointed out, is not a replacement for KVM or other hypervisor technologies such as Xen. Indeed, it can't replace them. In addition, LXD is not trying to displace Docker as a container technology.
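
For readers who want to try LXD themselves, a minimal quick start looks roughly like this (a sketch; package names, the image alias and the bundled client vary by distribution and release):
$ sudo lxd init                  # one-time interactive setup of storage and networking
$ lxc launch ubuntu:16.04 c1     # note: LXD's client is called lxc, unrelated to the lxc-* tools above
$ lxc list
$ lxc exec c1 -- bash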

[May 18, 2011] Xen vs. KVM vs. the rest of the world

Virtually A Machine

No website about Xen can be considered complete without an opinion on this topic. KVM got included into the Linux kernel and is considered the right solution by most distributions and top Linux developers, including Linus Torvalds himself. This made many people think Xen is somehow inferior or is on the way to decline. The truth is, these solutions differ both in terms of underlying technology and common applications.

How Xen works

Xen not only didn't make it into the main tree of the Linux kernel; it doesn't even run on Linux, although it looks like it does. It's a bare-metal hypervisor (or: type 1 hypervisor), a piece of software that runs directly on hardware. If you install a Xen package on your normal Linux distribution, after rebooting you will see Xen messages first. It will then boot your existing system into a first, specially privileged virtual machine called dom0.

This makes the process quite complex. If you start experimenting with Xen and at the first attempt make your machine unbootable, don't worry - it has happened to many people, including Yours Truly. You can also download XenServer - a commercial, but free, distribution of Xen which comes with a simple-to-use installer, a specially tailored minimal Linux system in dom0, and enterprise-class management tools. I'll write some more about the differences between XenServer and "community" Xen in a few days.


It also means you won't be able to manipulate VMs using ordinary Linux tools, e.g. stop them with kill and monitor them with top. However, Xen comes with some great management software, and even greater 3rd-party apps are available (be careful, some of them don't work with XenServer). They can fully utilize interesting features of Xen, like storing snapshots of VMs and live migration between physical servers.
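
For example, with the xl toolstack that ships with recent Xen releases (older setups used xm with similar syntax), day-to-day management from dom0 looks roughly like this (a sketch; guest and host names are invented):
$ sudo xl list                           # show dom0 and all running guests
$ sudo xl save myguest /tmp/myguest.chk  # checkpoint a guest's state to disk
$ sudo xl restore /tmp/myguest.chk       # bring it back later
$ sudo xl migrate myguest otherhost      # live-migrate a guest to another Xen host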

Xen is also special for its use of a technology called paravirtualization. In short, it means that the guest operating system knows it runs on a virtualized system. There is an obvious downside: it needs to be specially modified, although with open source OSes that's not much of an issue. But there's also one very important advantage: speed. Xen delivers almost native performance. Other virtualization platforms use this approach in a very limited way, usually in the form of a driver package that you install on guest systems. This improves the speed compared to a completely non-paravirtualized system, but is still far from what can be achieved with Xen.

How KVM works

KVM runs inside a Linux system, not above it - it's called a type 2, or hosted, hypervisor. This has several significant implications. From a technical point of view, it makes KVM easier to deploy and manage, with no need for special boot-time support; but it also makes it harder to deliver good performance. From a political point of view, Linux developers view it as superior to Xen because it's a part of the system, not an outside piece of software.

KVM requires a CPU with hardware virtualization support. Most newer server, desktop and laptop processors from Intel and AMD work with KVM. Older CPUs or low-power units for netbooks, PDAs and the like lack this feature. Hardware-assisted virtualization makes it possible to run an unmodified operating system at an adequate speed. Xen can do it too, although this feature is mostly used to run Windows or other proprietary guests. Even with hardware support, pure virtualization is still much slower than paravirtualization.
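
A quick way to check whether a particular machine can host KVM at all is to look for the VT-x/AMD-V CPU flags and the KVM kernel modules (a sketch):
$ egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means Intel VT-x or AMD-V is present
$ lsmod | grep kvm                     # expect kvm plus kvm_intel or kvm_amd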

Rest of the world

Some VMware server platforms and Microsoft Hyper-V are bare-metal hypervisors, like Xen. VMware's desktop solutions (Player, Workstation) are hosted, as well as QEMU, VirtualBox, Microsoft Virtual PC and pretty much everything else. None of them employ a full paravirtualization, although they sometimes offer drivers improving the performance of guest systems.

KVM only runs on machines with hardware virtualization support. Some enterprise platforms have this requirement too. VirtualBox and desktop versions of VMware work on CPUs lacking virtualization support, but the performance is greatly reduced.

What should you choose?

For the server, grid or cloud

If you want to run Linux, BSD or Solaris guests, nothing beats the paravirtualized performance of Xen. For Windows and other proprietary operating systems, there's not much difference between the platforms. Performance and features are similar.

In the beginning KVM lacked live migration and good tools. Nowadays most open source VM management applications (like virt-manager on the screenshot) support both Xen and KVM. Live migration was added in 2007. The whole system is considered stable, although some people still have reservations and think it's not mature enough. Out-of-the-box support in leading Linux distributions is definitely a good point.
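
Most of these management tools sit on top of libvirt, so the same virsh commands work against either back end (a sketch; the connection URIs and guest name are only examples):
$ virsh -c qemu:///system list --all     # KVM/QEMU guests
$ virsh -c xen:///system list --all      # Xen guests, when the Xen libvirt driver is present
$ virsh -c qemu:///system migrate --live guest1 qemu+ssh://otherhost/system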

VMware is the most widespread solution - as they proudly admit, it's used by all companies from the Fortune 100. Its main disadvantage is poor support from the open source community. If the free management software from VMware is not enough for you, you usually have no choice but to buy a commercial solution - and they don't come cheap. Expect to pay several thousand dollars per server or even per CPU.

My subjective choice would be: 1 - Xen, 2 - KVM, 3 - VMware ESXi.

For the personal computer

While Xen is my first choice for the server, it would be very far down the list of "best desktop virtualization platforms". One reason is poor support for power management. It is slowly improving, but I still wouldn't install Xen on my laptop. Also, the installation method is more suitable for server platforms and inconvenient for the desktop.

KVM falls somewhere in the middle. As a hosted hypervisor, it's easier to run. Your Linux distribution probably already supports it. Yet it lacks some of the user-friendliness of true desktop solutions, and if your CPU doesn't have virtualization extensions, you're out of luck.

VMware Player (free of charge, but not open source) is extremely easy to use when you want to run VMs prepared by somebody else (hence the name Player - nothing to do with games). Creating a new machine requires editing a configuration file or using external software (e.g. a web-based VM creator). What I really like is the convenient hardware management (see screenshot) - just one click to decide whether your USB drive belongs to the host or the guest operating system, another to mount an ISO image as the guest's DVD-ROM. Another feature is easy file sharing between guest and host. Player's bigger brother is VMware Workstation (about $180). It comes with the ability to create new VMs as well as some other additions. Due to the number of features it is slightly harder to use, but still very user-friendly.

VMware offers special drivers for guest operating systems. They are bundled with Workstation; for Player they have to be downloaded separately (or you can borrow them from a Workstation download, even the demo - the license allows it). They are especially useful if you want to run a Windows guest: even on older CPUs without hardware assist it's quite responsive.

VirtualBox comes close to VMware. It also has the desktop look & feel and runs on non-hardware-assisted platforms. Bundled guest additions improve the performance of virtualized systems. Sharing files and hardware is easy - but not quite that easy. Overall, in both speed and features, it comes second.

My subjective choice: 1 - VMware Player or Workstation, 2 - VirtualBox, 3 - KVM

EDIT: I later found out that new version of VirtualBox is superior to VMware Player.

[Mar 15, 2011] Hype and virtue by Timothy Roscoe, Kevin Elphinstone,Gernot Heiser

In this paper, we question whether hypervisors are really acting as a disruptive force in OS research, instead arguing that they have so far changed very little at a technical level. Essentially, we have retained the conventional Unix-like OS interface and added a new ABI based on PC hardware which is highly unsuitable for most purposes.

Despite commercial excitement, focus on hypervisor design may be leading OS research astray. However, adopting a different approach to virtualization and recognizing its value to academic research holds the prospect of opening up kernel research to new directions.

[Feb 13, 2011] Manage resources on overcommitted KVM hosts

The best way is probably to exclude the memory allocation subsystem of guest systems, presenting them with an unlimited linear memory space (effectively converting them to DOS from the point of view of memory allocation ;-) and handling all memory allocation in the hypervisor... That was done in VM/CMS many years ago. Those guys are reinventing the bicycle, as often happens when an old technology becomes revitalized due to hardware advances.
Because KVM virtual machines are regular processes, the standard memory conservation techniques apply. But unlike regular processes, KVM guests contain a nested operating system, which impacts memory overcommitment in two key ways. KVM guests can have greater memory overcommitment potential than regular processes. This is due to a large difference between minimum and maximum guest memory requirements caused by swings in utilization.

Capitalizing on this variability is central to the appeal of virtualization, but it is not always easy. While the host is managing the memory allocated to a KVM guest, the guest kernel is simultaneously managing the same memory. Lacking any form of collaboration between the host and guest, neither the host nor the guest memory manager is able to make optimal decisions regarding caching and swapping, which can lead to less efficient use of memory and degraded performance.

Linux provides additional mechanisms to address memory overcommitment specific to virtualization.
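
One such mechanism is Kernel Samepage Merging (KSM), which lets the host deduplicate identical memory pages across similar guests. A quick look at it from the host side (a sketch, using the standard sysfs interface):
$ cat /sys/kernel/mm/ksm/run                  # 1 means KSM is scanning
$ echo 1 | sudo tee /sys/kernel/mm/ksm/run    # enable it
$ grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing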

[Jan 15, 2010] Migrate to a virtual Linux environment with Clonezilla

Learn how to use the open source Clonezilla Live cloning software to convert your physical server to a virtual one. Specifically, see how to perform a physical-to-virtual system migration using an image-based method.

IBM and HP virtualization

As mentioned, IBM has one virtualization type on their midrange systems, PowerVM, formerly referred to as Advanced Power Virtualization. IBM uses a type-1 hypervisor for its logical partitioning and virtualization, similar in some respects to Sun Microsystems' LDOMs and VMware's ESX Server. Type-1 hypervisors run directly on a host's hardware, as a hardware control and guest operating system monitor, an approach that is an evolution of IBM's classic original hypervisor, VM/CMS. Generally speaking, they are more efficient, more tightly integrated with hardware, better performing, and more reliable than other types of hypervisors. Figure 1 illustrates some of the fundamental differences between the different types of partitioning and hypervisor-based virtualization solutions. IBM LPARs and HP vPars fall into the first example -- hardware partitioning (through their logical partitioning products), while HP also offers physical partitioning through nPars.


Figure 1. Server virtualization approaches

IBM's solution, sometimes referred to as para-virtualization, embeds the hypervisor within the hardware platform. The fundamental difference with IBM is that there is one roadmap, strategy, and hypervisor, all integrated around one hardware platform: IBM Power Systems. Because of this clear focus, IBM can enhance and innovate, without trying to mix and match many different partitioning and virtualization models around different hardware types. Further, they can integrate their virtualization into the firmware, where HP simply cannot or chooses not to.

Continued



Etc





Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case softpanorama.org is down, you can use the mirror at softpanorama.info.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: October 01, 2017