Virtualization is a crucial technological innovation that makes it possible for skilled Information Technology managers to deploy creative solutions to the challenges faced in performing organizational tasks and running businesses. The advantages that come as a result of virtualization technology are not limited to responses to government regulation, competition, and economic factors such as scarcity of resources. Today, the most popular form is operating system virtualization. It helps IT developers and professionals create systems that are flexible and reliable enough to adjust to changing business conditions by aligning technical resources with strategic objectives. This paper discusses the general aspects of the world of virtualization.
Virtualization can simply be defined as the creation of a virtual form of something, such as an operating system, a hardware platform, a network resource, or a storage device. Virtualization, as such, is a software technology that uses physical resources such as servers to create virtual machines (VMs). With it, users are able to reduce cooling and power requirements, simplify administration and deployment, and consolidate physical resources. A virtual machine is a software implementation of a computing environment in which an operating system (OS) can be installed and run. Virtualization thus allows the hardware to be abstracted from the installed software. It can be achieved in various forms, such as application virtualization with App-V or machine virtualization with Hyper-V, and companies such as Hewlett Packard and Cisco offer storage and network virtualization. Installing operating systems in virtual machines rather than directly on physical hardware offers numerous advantages (Ryan, 2012). VMs can easily be reassigned, copied, and moved between host servers to optimize the utilization of hardware resources. They are installed and used on servers and workstations to run demonstrations and do development (Shackleford, 2013). Virtualization can help companies maximize the value of their IT investments and reduce the complexity, cost, and energy consumption of running IT systems while increasing the overall flexibility of the environment.
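To illustrate the core idea of presenting a virtual form of a physical resource, here is a minimal Python sketch. It models a single logical storage device backed by several physical buffers; the class name and layout are hypothetical, purely for illustration, not a real storage API.

```python
# Minimal sketch of hardware abstraction: one logical disk that hides the
# physical layout of several backing devices. Names are illustrative only.

class VirtualDisk:
    """Presents several physical buffers as one contiguous virtual disk."""

    def __init__(self, backing_sizes):
        # Each "physical device" is modeled as a plain bytearray.
        self.devices = [bytearray(size) for size in backing_sizes]

    def _locate(self, offset):
        # Translate a logical offset into (backing device, local offset).
        for dev in self.devices:
            if offset < len(dev):
                return dev, offset
            offset -= len(dev)
        raise ValueError("offset beyond end of virtual disk")

    def write(self, offset, data):
        # A write may transparently span two physical devices.
        for i, byte in enumerate(data):
            dev, local = self._locate(offset + i)
            dev[local] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            dev, local = self._locate(offset + i)
            out.append(dev[local])
        return bytes(out)
```

A write at offset 2 of a disk built from two 4-byte devices lands partly on each device, yet the caller never sees the split; this is the same abstraction, writ small, that lets virtualization hide physical hardware from installed software.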
As much as virtualization seems to be a brand-new thing, the original frameworks were first used in the 1960s. The CP/CMS system ran on the IBM System/360-67, providing virtualization as a form of time-sharing: each user appeared to have a standalone 360 machine of their own. This framework partitioned virtual disks for single users, and the approach remained popular through the 1970s.
Virtualization largely disappeared during the 1980s and early 1990s, a period in which many products were built around Intel PCs. Merge/386 and Simultask from Locus Computing Corporation allowed MS-DOS to be managed and developed as a guest environment. Insignia Solutions produced SoftPC in 1988 so that PC software could operate on the Macintosh and Sun platforms.
A new wave of virtualization was ushered in during the late 1990s. Virtual PC for Macintosh was released by Connectix in 1997, and other versions for Windows followed before Microsoft bought the company in 2003. VMware entered virtualization in 1999 (Portnoy, 2016), and in the last decade virtualization technology has been acquired by many major players.
Workability of Virtualization
This is essentially how virtualization works. Servers do the actual work, and traditionally each machine was designated to do only one job at a time; this kept the software and hardware problems of one machine from causing problems on the others. The drawback was that much processing capability sat unused and jobs took longer, while the physical space needed was larger and more complex to manage. Virtualizing the data center fixes these problems.
Server virtualization uses specially designed software that allows an administrator to convert one physical server into multiple virtual systems. Today's newer servers ship with built-in support that the systems used initially did not have. A virtualized environment makes it possible for software and hardware together to create virtual servers, but network administrators still need to apply the right software to build the most efficient systems.
The world of technology we are in demands that companies adopt server virtualization that is more efficient. Companies use several approaches to create virtual servers, such as OS-level virtualization, para-virtualization, and full virtualization. All of these approaches share a few common traits. There is a guest and a host, the virtual and the physical server respectively, and the guest acts as if it were a physical server in order to produce the results that are needed (Negus, 2007). The approaches differ, however, in how they allocate the physical server's resources to the needs of the virtual servers.
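The shared trait of these approaches, guests drawing on a host's physical resources, can be sketched with a toy first-fit placement routine. This is not how any specific product allocates resources; the function, host names, and capacity figures are all hypothetical.

```python
# Toy sketch of allocating host resources to guest VMs by first-fit
# placement. Capacities are (cpu_cores, ram_gb); all values hypothetical.

def place_guests(hosts, guests):
    """hosts: {name: (cpu, ram)}; guests: {name: (cpu, ram)}.
    Returns {guest: host}, or raises if a guest cannot be placed."""
    free = {h: list(cap) for h, cap in hosts.items()}
    placement = {}
    for g, (cpu, ram) in guests.items():
        for h, (fcpu, fram) in free.items():
            if fcpu >= cpu and fram >= ram:
                free[h][0] -= cpu   # reserve the host's capacity
                free[h][1] -= ram
                placement[g] = h
                break
        else:
            raise RuntimeError(f"no capacity for guest {g}")
    return placement
```

For example, three guests each needing 4 cores and 8 GB fit onto two 8-core/16 GB hosts as two guests on the first host and one on the second, consolidating hardware exactly as the section describes.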
Types of virtualization
Application-server virtualization has been around since the inception of the virtualization concept. It is essentially advanced load balancing that spreads applications across the servers, allowing IT departments to balance the workload of specific software so that a project or system change can be given first priority when sorting out system issues. Workload balancing also makes applications and servers easier to manage, since they can be administered from a single machine.
Application virtualization is often confused with application-server virtualization. Here, the applications run on a server even though they seem to users to be running naturally from the local hard drive. Storing the information centrally improves use of CPU and RAM, and the software is easier to roll out.
Operating system virtualization is the most common form. It allows multiple systems to operate on a single machine, reducing the need for hardware because fewer actual machines are required. It is efficient, saving companies rack space, hardware, cabling, and spending while the remaining systems run the same quantity of applications.
Administrative virtualization is not a very popular form, as its primary functions are useful only in data centers. The concept covers management and administration, where roles are segmented through user and group policies.
Network virtualization involves the virtual management of IPs and is accomplished through tools such as VLAN tags, switches, NICs, and routing tables.
Storage virtualization places multiple servers under the management of a single virtual storage system. The actual data locations are invisible to the servers, although managing such a system can be more labor-intensive.
Hardware virtualization is also a rare form, and it works much the same way as OS virtualization. The difference is that the machines are partitioned to perform varied tasks rather than being consolidated into single machines that do multiple functions.
Virtual machines are employed to consolidate the workloads of under-utilized servers and make it possible to keep running legacy applications. They provide isolated, secure sandboxes for running untrusted applications. Given the right schedules, they can create operating environments and work on systems where resources are scarce, and they can present hardware configurations, such as multiple processors, that are not physically available. Virtualization allows independent computer networks to be simulated so that different systems run simultaneously, and virtual machines make powerful debugging and performance monitoring possible (Natarajan, 2012). They can even let operating systems run on shared-memory multiprocessors.
A hypervisor, also known as a virtual machine manager, is a program that lets independent operating systems share a single hardware host. Each guest operating system appears to have the host's memory, processor, and other valuable resources to itself. The hypervisor monitors and controls the host's processor and resources, systematically allocating what each guest OS needs to run and making sure the guest operating systems do not interfere with each other.
The two main types of hypervisor are the monolithic hypervisor and the microkernelized hypervisor. The monolithic hypervisor is installed directly on the hardware and holds the drivers and third-party tools needed for the guest VMs and the admin VM to operate. The microkernelized hypervisor is also installed directly on the hardware but offloads driver management and the virtualization stack to a parent partition.
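The hypervisor's job of systematically dividing host resources among guests can be illustrated with weighted proportional shares, a common scheduling idea. The function below is a toy illustration, not any vendor's actual algorithm; the weights and MHz figures are made up.

```python
# Toy proportional-share allocation: split a host CPU budget among guests
# in proportion to their configured weights. Values are illustrative only.

def cpu_shares(total_mhz, guest_weights):
    """Return {guest: allocated_mhz}, proportional to each guest's weight."""
    total_weight = sum(guest_weights.values())
    return {g: total_mhz * w / total_weight
            for g, w in guest_weights.items()}
```

With a 3000 MHz budget and guests weighted 1 and 2, the hypervisor-style split gives them 1000 and 2000 MHz respectively; no guest can starve another, which is the non-interference property described above.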
Environment for virtualization
Virtualization has led to consolidation and better infrastructure control, and it provides the opportunity to create environments that are more secure. Today's virtualization solutions are mostly built around the hypervisor, the basic abstraction layer between the virtual systems and the physical hardware on any virtualized platform. Each platform provides administration tools that can move an active virtual machine to another server without internal interference. A VM is stored as an image file on the physical servers and can be accessed by remounting that image.
VMware is widely regarded as the best virtualization system and the preferred infrastructure provider, offering a reliable and efficient platform for creating private clouds and federating public clouds. VMware Inc., the organization responsible for the software, was founded in California in 1998. Its desktop software is compatible with Mac OS X, Linux, and Microsoft Windows, while its server hypervisors, VMware ESXi and ESX, are bare-metal products embedded to run on the server without the need for an extra operating system. GSX has been among its most popular hosted hypervisors (van Sinderen & Shishkov, 2012). Hypervisors are classed as type 1, which run directly on the hardware, and type 2, which run on top of a host operating system.
The software provides a completely virtualized set of hardware to the guest operating system: a virtualized video adapter, network adapter, and disk adapter, with pass-through to host devices. This makes VMware guests highly portable between computers, since every guest sees identical hardware. Operations on a guest can be paused, the virtual machine moved or copied to another physical computer, and execution resumed there. VMware does not emulate an instruction set for hardware that is not physically present, which boosts performance but can cause problems when a guest is moved between servers whose hardware differs.
VMware ESX Server is the core server product of the VMware Infrastructure components, providing their reliability and management services. Storage requires only a basic array of disk drives, on which the supporting files are virtualized. The server runs on dedicated hardware with a kernel, the VMkernel, that is optimized for the virtualized components, while the service console is installed on a Linux kernel. Three interfaces accomplish the tasks in the VMkernel: the hardware itself, the guest systems, and the console accessing the hardware through the VMkernel. These interfaces simulate the hardware that is presented to the guest systems.
VMware Infrastructure is a solution whose aim is to consolidate full-scale production systems. It allows efficient scaling with high availability, load balancing, and new management features. It is based on ESX Server, which is not dependent on a host operating system and can use local disks, storage area networks, and network-attached storage. VMFS makes provisioning and managing these systems much simpler and the platforms more usable. A virtual machine can be dynamically reallocated from one server to another while users and applications remain active (Malhoit, 2014). The VMware Distributed Resource Scheduler tunes memory and processor allocation automatically, and High Availability (HA) responds automatically to software or hardware problems, making it possible for a VM to migrate within the servers. The result is better load balancing and easier maintenance.
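A scheduler of this kind decides when a cluster is unbalanced enough to justify moving a VM. The sketch below is a hypothetical, much-simplified decision rule, not VMware's actual DRS logic; host names, loads, and the threshold are invented.

```python
# Hypothetical sketch of a DRS-style rebalancing decision: if the CPU-load
# gap between the busiest and idlest host exceeds a threshold, propose
# moving one VM. Not a real product's algorithm; all numbers illustrative.

def propose_migration(host_loads, vm_loads, threshold=20):
    """host_loads: {host: %cpu}; vm_loads: {host: {vm: %cpu}}.
    Returns (vm, source_host, target_host) or None if balanced."""
    busiest = max(host_loads, key=host_loads.get)
    idlest = min(host_loads, key=host_loads.get)
    if host_loads[busiest] - host_loads[idlest] <= threshold:
        return None  # cluster already balanced enough
    # Pick the smallest VM on the busiest host (a fuller scheduler would
    # also verify the move actually narrows the gap).
    vm = min(vm_loads[busiest], key=vm_loads[busiest].get)
    return vm, busiest, idlest
```

With one host at 90% and another at 30%, the rule proposes migrating the lightest VM off the hot host; with hosts at 50% and 45% it proposes nothing, mirroring the "tune automatically, migrate when needed" behavior described above.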
vSphere is the industry's most powerful, scalable, and complete virtualization platform, delivering the application services and infrastructure that organizations need to transform information technology and provide their various solutions. It offers unparalleled efficiency, control, and agility, along with the customer choice expected in the technology industry.
How vSphere is used
vSphere allows IT organizations to delay disruptive and costly datacenter expansion projects by consolidating virtual machines onto fewer servers without performance penalties. It helps improve business continuity and reduce the complexities that can threaten it, giving layered protection against data loss and service outages (Kusnetzky, 2011). It also simplifies the management of geographically distributed production, IT development, and QA environments.
Key components and features
VMware ESX and ESXi provide a robust, high-performance, production-proven virtualization layer in which multiple machines share hardware resources, with performance that can exceed native throughput.
These systems allow ultra-powerful virtual machines with up to five virtual CPUs. The VMware vStorage Virtual Machine File System (VMFS) makes it easier for VMs to perform on virtual storage devices and is needed to keep technologies such as Storage vMotion functional.
vMotion reduces the need to schedule application downtime for server maintenance: machines are live-migrated across servers without user disruption or loss of service. HA provides a cost-effective way to restart applications automatically when an operating system or hardware failure occurs.
Data virtualization works in the service of information agility: it delivers an integrated, unified, simplified, and trusted view of business data at the time a project needs it, to business users, analytics, processes, or consuming applications. Data virtualization integrates data across formats and locations to build a main data layer without replicating the data, delivering a unified data system that can support many users and applications at one time. The result is faster data access, better agility to change, and lower spending on replication. In today's technological field it performs the same quality and transformation functions as the older forms of data integration, such as data federation and data replication, while providing a modern platform that delivers real-time data integration with more agility, more speed, and lower cost (Krishnan, 2013). It can therefore reduce, though not entirely remove, the need for replicated data, traditional data integration, and data warehouses.
Data virtualization is a data services layer and an abstraction layer, highly complementary to both original and derived data sources, devices, and applications, and it provides flexibility between the business and technology layers of information. Data virtualization is able to do the things stated herein.
- Unified data governance and security; all data is made easily discoverable and integrable through the virtual layer, which exposes quality issues and redundancy faster. Because consumers receive output data through its services, data virtualization can impose a common data model, security, and governance while addressing data quality and integration issues.
- Provisioning of agile data services; data virtualization enhances the API economy. The integrated or derived primary data is made accessible in more protocols and formats than the original, which ensures a controlled system can be delivered within a short period.
- Semantic integration of structured and unstructured data; data virtualization bridges the semantics of web and unstructured data with a schema-based understanding of structured data, enabling quality improvement and integration of both.
- Data federation on steroids; this part of data virtualization enhances federation with real-time intelligence, choosing execution strategies automatically based on network awareness, application need, and source constraints.
- Logical decoupling and abstraction; consuming applications, middleware, and disparate data sources that expect and make use of specific interfaces, platforms, query paradigms, security protocols, schemas, and formats can easily interact once data virtualization is introduced.
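The federation idea behind these capabilities can be sketched in a few lines: a virtual layer answers a query by joining two differently shaped sources at request time, without copying either into a warehouse. The sources, table, and field names below are entirely made up for illustration.

```python
# Sketch of data virtualization's core move: a unified virtual view that
# federates a relational source and an application's in-memory records at
# query time, with no replicated store. All names/data are hypothetical.

import sqlite3

# "Source 1": a relational table of customers.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, "Acme"), (2, "Globex")])

# "Source 2": orders held by an application as plain records.
orders = [{"customer_id": 1, "total": 250},
          {"customer_id": 1, "total": 100},
          {"customer_id": 2, "total": 75}]

def customer_totals():
    """Virtual, unified view: join both sources on the fly."""
    names = dict(db.execute("SELECT id, name FROM customers"))
    totals = {}
    for o in orders:
        name = names[o["customer_id"]]
        totals[name] = totals.get(name, 0) + o["total"]
    return totals
```

A consumer calling `customer_totals()` sees one coherent answer and never learns that the underlying data lives in two places and two formats, which is the decoupling the bullet list describes.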
On the other hand, some data virtualization features are included as additional features or add-on modules whose price is folded into the overall cost of a main product. Knowing how to differentiate built-in and add-on data virtualization products from an enterprise data virtualization platform is beneficial in a number of ways:
- Add-on products deliver most of their data virtualization value only alongside pre-requisite products from the same vendor, which creates vendor lock-in.
- They are optimized to serve the main vendor product, for example providing the semantic layer for a BI tool vendor, which defocuses them from being a high-performance enterprise platform that supports varied solution patterns, customers, and heterogeneous sources.
- Their breadth of capabilities is limited in areas such as security and governance, performance, and logical modeling. Still, these tools help users understand how data virtualization works.
- A data virtualization platform, by contrast, is built to provide enterprise data virtualization capabilities in various ways through a virtual data layer. Such platforms are designed for the speed and agility of many concurrent uses, and they compete with other, more efficient middleware.
- Cloud data services; these products are often cloud-deployed and carry pre-packaged integrations to SaaS and cloud applications, on-premise and desktop tools such as Excel, and cloud databases. Rather than being true data virtualization products with delegable query execution, they expose normalized APIs across cloud sources for easy exchange of medium-volume data. Projects involving unstructured data, flat files, large databases, mainframes, major enterprise systems, and big data analytics are out of their scope.
- SQLification products; an upcoming category, especially among Hadoop and Big Data vendors. They virtualize big data technologies and combine them with relational data sources and flat files that can be queried using standard SQL. This is especially useful for the big data stack and beyond.
A data services module is typically offered at additional cost by data warehouse vendors and data integration suites. Such modules are normally very strong in the areas they share with data virtualization, such as transformation and modeling, and they carry robust data quality functions. On the other hand, they are weak in general performance, support for unstructured sources, data model flexibility, virtual security layers, caching, query optimization, and the data virtualization engine itself (Hess & Newman, 2010), so the resulting design is suited to prototyping rather than complete production use. Data blending is a module predominantly offered by business intelligence (BI) vendors: the system can combine a number of sources to feed the BI tool, but the result is available only there and cannot be accessed by any other external application or consumer.
It should be noted that data virtualization is not the same as virtualization in general, a term typically used on its own to refer to hardware virtualization: networks, disks, storage, servers, and so on. Some companies use the phrase data virtualization to describe virtualized database software or hardware storage solutions, but such products cannot provide data services and real-time data integration across structured and unstructured data sources (Wang, 2012). Also, data virtualization does not, in most cases, replicate data from the sources it supports, and it functions while leaving the data in place. Since it is not a replicated data store, it holds data only temporarily, in in-memory caches and cache databases (Golden, 2008). Results can be persisted when explicitly invoked through tools such as ETL. Data virtualization can therefore be described as powerful yet light-weight and agile.
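The temporary, non-replicating caching just described can be sketched as a small time-to-live cache wrapped around a source function. The helper name, TTL, and source are all hypothetical; the point is only that data expires rather than being durably copied.

```python
# Sketch of temporary caching: results live in memory for a short TTL and
# then fall through to the real source again. Names are illustrative only.

import time

def cached(source_fn, ttl_seconds=60):
    """Wrap source_fn with an in-memory cache; nothing is persisted."""
    cache = {}  # key -> (expires_at, value)

    def lookup(key):
        hit = cache.get(key)
        if hit and hit[0] > time.monotonic():
            return hit[1]                 # fresh cache hit
        value = source_fn(key)            # fall through to the source
        cache[key] = (time.monotonic() + ttl_seconds, value)
        return value

    return lookup
```

Repeated lookups within the TTL hit memory and never touch the source again, yet the source remains the only durable copy of the data, matching the "not a replicated data store" point above.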
The primary reason for virtualization is not cost-cutting; rather, it is an enabler of cloud computing and flexible sourcing. Organizations that embrace virtualization gain a different point of view on speed, agility, and the flexibility of moving to a better VM system. There are, as such, several things virtualization does to let cloud computing take effect, pushing organizations in this needed direction.
Cloud computing challenges
As many benefits as cloud computing brings, there are also many challenges related to it.
- Privacy and security; many companies are concerned about cloud computing security, and clients worry about vulnerabilities, especially to attacks. Standards need to be followed to ensure that providers give the correct level of protection to all users.
- Performance; this is a major issue mainly affecting data-intensive and transaction-oriented applications, which may not get adequate performance. Users can face issues of delay and high latency for long periods.
- Reliability; cloud computing does not always guarantee round-the-clock reliability, and cloud services have suffered a number of outages.
- Bandwidth costs; companies expect to cut spending on software and hardware, but can end up incurring higher costs in bandwidth charges. Internet-based applications that are not data-intensive may have low bandwidth costs, while significantly data-intensive applications drive those costs up.
- Control; control of the cloud computing elements usually sits with the platform providers, whose practices and databases are specific to each company.
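The bandwidth concern above can be made concrete with back-of-envelope arithmetic. All prices and volumes here are hypothetical, chosen only to show how quickly a data-intensive workload's transfer charges dwarf a light one's.

```python
# Illustrative arithmetic only: monthly egress cost at a made-up per-GB
# price, comparing a data-intensive application with a light one.

def monthly_bandwidth_cost(gb_per_day, price_per_gb):
    """Rough 30-day cost estimate, rounded to cents."""
    return round(gb_per_day * 30 * price_per_gb, 2)
```

At a hypothetical $0.09/GB, an application moving 500 GB a day costs about $1350 a month in bandwidth alone, while one moving 2 GB a day costs about $5.40, which is why data-intensive workloads can erase the expected hardware savings.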
Agility, flexibility, and speed
It is much easier to bring new servers online once cloud computing is adopted. Deploying a physical server takes an average of 4-6 weeks, while VMs can at times be deployed far faster. The cloud is not strictly necessary for this speed: management tools and operational processes are what make the change in speed possible, and culture change also leads to different behavior and expectations.
Virtualization is a form of technology that makes it easy to access a number of physical devices at one time: one operating system may be analyzing a database spread over multiple computers, or a single computer may be in control of a number of machines (Terzo & Mossucca, 2015). Virtualization enables different kinds of applications to run on their own respective servers instead of a single one, relieving companies of the burden of buying and managing many servers that are not strictly useful. Cloud computing then makes it possible to obtain software and infrastructure off-site, reduce power costs, scale hardware, and save time and labor. The virtual resources of the cloud are cheaper and more dedicated when used by the right professionals. Under cloud computing, software programs are not operated from the personal computer; they run elsewhere, provided there are servers and internet access. That is to say, cloud computing is a single computer acting as multiple computers in different environments (Dittner & Rule, 2007), allowing for scaling. It provides the scalability and flexibility that are a great requirement when many useful things need to run at the same time. Sometimes cloud computing is defined based on the system and the type of virtual machines being used.
Virtualization is what makes the processes possible, while cloud computing is the approach applied to reach the things that are needed. Large organizations with little tolerance for downtime and strong security needs are more likely to benefit from virtualization, while smaller corporations may benefit by focusing on their mission and leaving the database chores to the provider for the benefit of the company. Virtualization, in one way or another, provides for more servers and operational possibilities on a single system, while cloud computing ensures metered use of resources while running these systems from the same hardware (Garfinkel, Rosenblum & Boneh, 2010). Using them together solves the problem of maximizing the resources that are available. Because they operate in different ways, one might also have to choose between the two. Virtualization enables the creation of substitutes for real resources: substitutes with the same external interfaces and functions as their counterparts, differing only in attributes such as cost, performance, and size. The consolidation of logical resources should be thought through before adopting virtualization in a given environment, a consideration apart from the design of the primary server, network, and storage systems. Adding virtualization technology, especially to varied environments, creates a flexible, on-demand infrastructure that facilitates handling the workload.
Dittner, R., & Rule, D. (2007). The Best Damn Server Virtualization Book Period. Burlington: Elsevier.
Garfinkel, T., Rosenblum, M., & Boneh, D. (2010). Paradigms for virtualization based host security.
Golden, B. (2008). Virtualization for dummies. Hoboken, N.J: Wiley.
Hess, K., & Newman, A. (2010). Practical virtualization solutions. Upper Saddle River, NJ: Prentice Hall/Pearson Education.
Krishnan, K. (2013). Data warehousing in the age of big data. Amsterdam: Morgan Kaufmann.
Kusnetzky, D. (2011). Virtualization. Sebastopol, CA: O’Reilly.
Malhoit, L. (2014). VMware vCenter operations manager essentials. Birmingham, UK: Packt Pub.
van Sinderen, M., & Shishkov, B. (2012). Cloud Computing and Services Science. New York: Springer.
Natarajan, S. (2012). Security issues in network virtualization for the future internet. Amherst, Mass.: University of Massachusetts Amherst.
Negus, C. (2007). Virtualization: From the Desktop to the Enterprise. Indianapolis, IN: Wiley Pub.
Portnoy, M. (2016). Virtualization essentials. Indianapolis, Indiana: Sybex.
Ryan, P. (2012). Data Virtualization for Business Intelligence Systems. Morgan Kaufmann.
Shackleford, D. (2013). Virtualization Security. Indianapolis, Ind.: Wiley.
Terzo, O., & Mossucca, L. (2015). Cloud computing with e-science applications. Boca Raton: Taylor & Francis.
Wang, L. (2012). Cloud Computing in Virtualization. Boca Raton, FL: CRC Press.