We live in an era of technological advancement, and computers have become an integral part of our daily lives. They contribute to the daily news we consume, to the arrangement and organization of our finances, and recently even to online experiences such as shopping and social media. In recent years, Internet use has grown substantially, and the introduction of high-speed networking has gradually altered the way we do computing.
A scientist, looking to make discoveries, logs into a computer and uses an entire network of computers to analyse data. A businessman accesses his company’s network in order to manage finances. An army official coordinates information from different sources and military networks to formulate a battle strategy. All three scenarios have something in common: they rely on a concept called grid computing.
The concept of grid computing isn’t as new as one might think: it came into being in the mid-1990s. However, it is not yet perfected. Computer scientists, programmers and engineers are still working on creating, establishing and upholding standard tools and protocols, and many grid computing systems presently rely on proprietary software and tools. The main motivation for introducing grid computing was that high-performance computing resources were expensive and difficult to access. Grids were therefore designed to pool computing, storage and networking resources from multiple geographically distributed locations. Generally, such resources are heterogeneous and dynamic.
At its most basic level, grid computing is a type of distributed computer network that mixes computer resources from various domains to reach a common objective. Processing power, memory and data storage are all community resources that authorized users can access and leverage for specific tasks. The independent computers communicate over the network in such a manner that they look and act like a single computing entity. A grid computing system can be as simple as a collection of similar computers running the same operating system, or as complex as a number of inter-networked systems comprising every conceivable computer platform.
Grid computing can be distinguished from conventional high-performance computing systems in that each node or set of nodes in a grid can be assigned a different task or application. Grids are also more heterogeneous and geographically dispersed than cluster computers.
Let us now delve into the architecture of a grid. Grids provide protocols and services in layers, each serving a distinct function.
Generally, the higher layers are user-centric, whereas the lower layers are hardware-centric, focused on computers and networks. At the base is the network layer, which is responsible for ensuring the connectivity of the resources in the Grid. On top of it sits the resources layer, made up of the actual resources that form the Grid, such as computers, storage systems, electronic data catalogs and other devices that can be connected directly to the network. The middleware layer consists of the tools (servers, networks, etc.) that enable these elements to participate in a unified Grid environment; it can be thought of as the “brain” of the Grid, since it brings the various elements together. The highest layer is the application layer, which includes the user application portals and the development toolkits supporting those applications. This is the layer that grid users actually see.
In most common Grid architectures, the application layer also provides the serviceware: general management functions such as metering how much of the Grid a particular user consumes, billing and other commercial accounting, and tracking who is providing resources and who is using them.
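The layer stack described above can be restated as a small sketch. The layer names and one-line responsibilities below simply paraphrase the text; they are illustrative, not part of any standard grid API:

```python
# A minimal sketch of the four-layer grid model described in the text.
# The names and descriptions restate the article and are illustrative only.
GRID_LAYERS = [
    ("network", "connects the resources that make up the Grid"),
    ("resources", "the actual machines, storage systems and data catalogs"),
    ("middleware", "the 'brain': tools unifying the elements into one Grid"),
    ("application", "user portals and toolkits; the layer users see"),
]

def describe(layers):
    """Print the stack from the user-facing top layer down to the network."""
    for name, role in reversed(layers):
        print(f"{name:>11}: {role}")

describe(GRID_LAYERS)
```

Reading the list bottom-up mirrors the article: hardware-centric layers first, user-centric layers on top.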
In this constantly changing industry, processes, technologies and terminologies evolve quickly, and one needs to stay updated, because many concepts look similar while remaining distinct. One such pair is grid computing and cloud computing. Both are used to process tasks, and both involve massive computer infrastructures and the challenge of managing them; grid techniques may even be used within cloud computing. But a grid is not a cloud.
Let us look at some of the differences between grid computing and cloud computing:
· Grid computing follows a distributed architecture, where a single task is broken down into several smaller tasks across a distributed system spanning multiple computer networks. In cloud computing, by contrast, each user is provisioned with his own private resources by the corresponding service provider.
· Grid computing and cloud computing differ in architecture, business model, interoperability, etc. Grid computing is a collection of computer resources from multiple locations working to achieve a common goal; the grid, as its name suggests, acts as a distributed system for collaborative and collective sharing of resources. Cloud computing, on the other hand, is a form of computing based on virtualized resources, which are clustered over multiple locations.
· Grid computing is based on a distributed system, where computing resources are spread among varied computing units located across different sites, countries and continents. In cloud computing, resources are managed centrally over multiple servers in the cloud provider’s private data centers.
· The primary function of grid computing is job scheduling across the available computing resources: a task is divided into several independent sub-tasks, and each machine on the grid is assigned a separate sub-task. After each sub-task is completed, the results are sent back to the main machine, which aggregates them. Cloud computing, on the other hand, involves drawing pooled resources from a cluster of servers as and when needed.
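The scatter-gather scheduling pattern in the last bullet can be sketched with Python’s standard library. This is a toy sketch, not a real grid scheduler: a thread pool stands in for the geographically dispersed nodes, and the sub-task (summing squares over a slice of data) is an arbitrary illustrative workload:

```python
# Toy sketch of grid-style job scheduling: a large task is split into
# independent sub-tasks, each "node" computes its part, and the results
# are sent back to the main machine, which combines them. The thread
# pool below merely stands in for a set of grid nodes.
from concurrent.futures import ThreadPoolExecutor

def sub_task(chunk):
    """Work assigned to one node: sum the squares of its slice of data."""
    return sum(x * x for x in chunk)

def run_on_grid(data, nodes=4):
    # Divide the task into one independent sub-task per node.
    chunks = [data[i::nodes] for i in range(nodes)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        partials = list(pool.map(sub_task, chunks))
    # The "main machine" aggregates the sub-task results.
    return sum(partials)

total = run_on_grid(list(range(1000)))
```

The answer is the same as computing the sum on one machine; only the division of labour changes, which is exactly the point of the scatter-gather pattern.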
Grid computing refers to sharing computer resources in order to create super-computing capabilities out of desktop computers by using their spare CPU time. In due course, the grid will likely evolve into something treated more as a commodity; the architecture is proven and has been applied successfully in its natural domains. Some companies and organizations are understandably unwilling to share their resources with rivals, and wary of opening them up to other organizations and to the rest of the world. In such cases, a grid can instead link up the PCs in an organization’s own local office, or across its different branches, into a private network.