Commentary: This is a general discussion into which I have wrapped a unique use of virtualization. In the short term, companies can benefit from offloading heightened computational demands: they may prefer to purchase computational power for a limited time rather than make the capital expenditure of purchasing and expanding their own systems. A virtualized environment can also solve issues relating to geographically dispersed personnel. Overall, we are a long way from meaningfully and effectively using the excess computational power residing on the web or across an organization. This discussion, though, hopefully gives some insight into how to use that excess power.
Virtualized computing can occur over any internetwork, including the World Wide Web. The concept centers on distributing the use of excess system resources such as computational power, memory, and storage space in a service-oriented or utilitarian architecture; in simple terms, internet-based resource provisioning. Multiple virtualized environments can exist on the same network, although any piece of physical hardware can be assigned to and managed by only a single virtualization engine. Each virtualized environment, a cloud, encapsulates a unique group of participating hardware resources that is managed through virtualization; Figure 1. Demands for services are then sent out into the cloud to be processed, and the results are returned to the virtual machine.
Figure 1: The Virtualized Concept
The virtual machine can be as simple as a browser, or it can be a complete set of applications, including the operating system, running on a terminal through thin clients such as Citrix. The cloud service can be as simple as a search service such as Google and/or database storage of information. Simple cloud examples are SkyDrive, MobileMe™, and now iCloud™. iCloud™ offers backup, storage, and platform synchronization services to its users over the World Wide Web.
Virtualization
The virtualization concept is one in which operating systems, servers, applications, management, networks, hardware, storage, and other services are emulated in software, yet to the end user the result is completely independent of the hardware or the unique technological nuances of system configurations. Examples of virtualization include software such as VMware Fusion, in which Microsoft's operating system and software run on an Apple MacBook. Another example is the honeypot used in computer network defense: software runs on a desktop computer that gives a hacker attempting to penetrate the system the appearance of a real network inside the DMZ. The idea is to decoy the hacker away from the real systems using a fake one emulated in software. An example of hardware virtualization is the soft modem; PC manufacturers found that it is cheaper to emulate some peripheral hardware in software. The drawback is diminished system performance, because the processor is loaded with the emulation. The Java virtual machine is yet another example of virtualization: a platform-independent engine that permits Java coders to write identical code for every supported platform and have it function as mobile code without accounting for each platform.
Provisioning in Virtualization
Once hardware resources are inventoried and made available for loading, provisioning in a virtualized environment occurs in several ways. First, physical resources are provisioned by management rules in the virtualization software, usually at the load management tier, Figure 1. Second, users of a virtual machine can schedule a number of processors, the amount of RAM required, the amount of disk space, and even the degree of precision required for their computational needs. This occurs in the administration of the virtualized environment tier, Figure 1. Thus, idle or excess resources can, in effect, be economically rationed by an end user who is willing to pay for the level of service desired. In this way the end user enters into an operating lease for the computational resources for a period of time; no longer does the end user need to make a capital purchase of computational resources.
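As a rough illustration, here is a minimal, hypothetical Python sketch of what an end user's scheduling request at the administration tier might look like. The ResourceRequest and CloudScheduler names are invented for this example and do not correspond to any real vendor API.

```python
# Hypothetical sketch of end-user provisioning in a virtualized environment.
# These classes are invented for illustration; they only show how a user
# might schedule processors, RAM, disk, and precision for an operating lease.
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    processors: int   # number of virtual processors to schedule
    ram_gb: int       # amount of RAM required
    disk_gb: int      # amount of disk space required
    precision: str    # degree of computational precision, e.g. "single" or "double"
    lease_hours: int  # the operating-lease period for the resources

class CloudScheduler:
    """Stands in for the administration tier in Figure 1 (hypothetical)."""
    def provision(self, request: ResourceRequest) -> str:
        # In a real system the load-management tier would match the request
        # against inventoried physical hardware; here we just acknowledge it.
        return (f"Leased {request.processors} processors, {request.ram_gb} GB RAM, "
                f"{request.disk_gb} GB disk at {request.precision} precision "
                f"for {request.lease_hours} hours")

if __name__ == "__main__":
    scheduler = CloudScheduler()
    print(scheduler.provision(ResourceRequest(8, 64, 500, "double", 72)))
```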
Computational Power Challenges
I have built machines with multiple processors and arrayed full machines to handle complex computing requirements. Multi-processor machines were used to solve processor-intensive problem sets such as Computer Aided Design (CAD) workloads or high-transaction SQL servers. Not only were multiple processors necessary, but so were multiple buses and drive stacks in order to minimize contention issues. The operating system typically ran on one bus while the application ran over several other buses accessing independent drive stacks. Vendor solutions have since progressed, with newer approaches to storage systems and servers that better support high availability and demand. In another application, arrayed machines were used to handle intensive animated graphics compilations involving solid modeling, ray tracing, and shadowing on animations running at 32 frames per second. This meant that a 3-minute animation had 5,760 frames that needed to be crunched 3 different times. In solving this problem, the load was broken into sets. Parallel machines crunched through the solid model sets, handing off to ray tracing machines and then to shadowing machines. In the end, the parallel tracks converged into a single machine where the sets were reassembled into the finished product. System failures limited work stoppages to a small group of frames that could be 're-crunched' and then injected back into the production flow.
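As a simplified illustration of that production flow, here is a hypothetical Python sketch in which frames pass through three pooled stages (solid modeling, ray tracing, shadowing) and then converge for reassembly. The stage functions are placeholders standing in for the real rendering work.

```python
# Minimal sketch of the animation pipeline described above: frames flow
# through three stages in parallel, then converge into a finished product.
# Stage functions are placeholders, not real rendering code.
from concurrent.futures import ProcessPoolExecutor

def solid_model(frame: int) -> tuple:
    return (frame, "modeled")

def ray_trace(work: tuple) -> tuple:
    frame, _ = work
    return (frame, "ray-traced")

def shadow(work: tuple) -> tuple:
    frame, _ = work
    return (frame, "shadowed")

def render(frames):
    # Each stage acts like a pool of 'machines'; a failed frame could simply
    # be resubmitted ('re-crunched') without stopping the whole flow.
    with ProcessPoolExecutor() as pool:
        modeled = pool.map(solid_model, frames)
        traced = pool.map(ray_trace, modeled)
        done = pool.map(shadow, traced)
        # Converge: reassemble in frame order into the finished product.
        return sorted(done)

if __name__ == "__main__":
    finished = render(range(5760))  # 3 minutes at 32 frames per second
    print(len(finished), "frames assembled")
```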
These kinds of problem sets are becoming more common as computational demands on computers become more pervasive in society. Unfortunately, software and hardware configurations remain somewhat unchanged and in many cases are unable to handle the stresses of complex or high-demand computations. Many software packages cannot recognize more than one processor, or if they do handle multiple processors, the loading is batched and prioritized using a convention like first in, first out (FIFO) or stacked job processing. This is fine for production uses of computational power like the examples given earlier. However, what if the computational demand is not production oriented but instead involves sentient processing or the manufacture of knowledge? I would like to explore an interesting concept in which computational power in the cloud is arrayed in a virtualized neural net.
Arraying for Computational Power in New Ways
One solution is to leverage arcane architectures in a new way. I begin with the creation of a virtual computational node in software, Figure 2, to handle an assigned information process, then organize hundreds or even tens of thousands of computational nodes on a virtualized backplane, Figure 3. The nodes communicate over the virtual backplane, listening for information being passed, processing it, and publishing the new information back to the backplane. A virtualized general manager provides administration of the backplane and is capable of arraying the nodes dynamically in series or parallel to solve computational tasks. The node arrays should be designed using object-oriented concepts: encapsulated in each node are memory, processor power, and its own virtual operating system and applications. The nodes are arrayed polymorphically, and each node inherits public information. In this way, software developers can design workflow management methods, like manufacturing flow, that array nodes and use queues to reduce crunch time, avoid bottlenecks, and distribute the workload. Mind you, this is not physical but virtual. The work packages are handed off to the load manager, which tasks the physical hardware in the cloud, Figure 3.
Figure 3: Complex Computational Architecture
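To make the node/backplane concept concrete, here is a minimal, hypothetical Python sketch. The names Backplane, ComputeNode, DoublerNode, and GeneralManager are invented for illustration; a real implementation would sit atop the load manager that tasks the physical hardware in the cloud.

```python
# Sketch of the virtual node/backplane concept from Figures 2 and 3.
# All class names here are invented for illustration.
import queue

class Backplane:
    """Virtual backplane the nodes listen to and publish on."""
    def __init__(self):
        self.bus = queue.Queue()

    def publish(self, message):
        self.bus.put(message)

    def listen(self):
        return self.bus.get()

class ComputeNode:
    """Encapsulates its own 'memory' and processing; subclasses specialize."""
    def __init__(self, name):
        self.name = name
        self.memory = []

    def process(self, message):
        # Subclasses override this; polymorphism lets the manager array
        # any mix of nodes in series or parallel.
        raise NotImplementedError

class DoublerNode(ComputeNode):
    def process(self, message):
        self.memory.append(message)   # node retains what it has seen
        return message * 2

class GeneralManager:
    """Administers the backplane and arrays nodes to solve a task."""
    def __init__(self, backplane, nodes):
        self.backplane = backplane
        self.nodes = nodes

    def run(self, task):
        self.backplane.publish(task)
        for node in self.nodes:       # series arrangement of nodes
            result = node.process(self.backplane.listen())
            self.backplane.publish(result)
        return self.backplane.listen()

if __name__ == "__main__":
    bp = Backplane()
    manager = GeneralManager(bp, [DoublerNode(f"node{i}") for i in range(3)])
    print(manager.run(1))  # 1 -> 2 -> 4 -> 8
```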
This concept is not new. The telecommunications industry uses a variation of it for specialized switching applications rather than general-use computing. There are also array processors used for parallel processing. Even the fictional story Digital Fortress by Dan Brown centered on a million-processor system. Unfortunately, none of these designs were intended for general-use computing. If arrayed computational architectures were designed to solve complex and difficult information sets, the possibilities would be enormous; for example, arraying nodes to monitor for complex conditions, make decisions on courses of action, and enact the solution.
The challenges of symbolic logic processing can be overcome by using arrayed processing to virtualize neural nets. A combination of sensory arrays for inputs, (node) neural-to-neural (node) processing, and valid pathways or lines of logic would provide the means to complete complex processing and output results that are otherwise difficult to achieve. If enough physical hardware participates across the World Wide Web, the web could become an enormous neural processor solving some of the most incredibly complex computational problem sets.
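As a toy illustration of nodes arrayed as a neural net, here is a hypothetical Python sketch: a sensory array feeds weighted node-to-node connections, and only a pathway whose output clears a threshold is treated as a valid line of logic. The weights and threshold are arbitrary placeholders; a real system would learn them.

```python
# Toy feed-forward pass where each 'node' plays the role of a neuron.
# Weights are arbitrary placeholders, not learned values.
import math

def neuron(inputs, weights, bias):
    """One virtual node: weighted sum squashed to a firing strength (0..1)."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic squash

def layer(inputs, weight_rows, biases):
    """An array of nodes processing the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

if __name__ == "__main__":
    sensory = [0.9, 0.1, 0.4]                    # sensory array inputs
    hidden = layer(sensory, [[0.5, -0.2, 0.8],
                             [0.3, 0.7, -0.5]], [0.0, 0.1])
    output = layer(hidden, [[1.2, -0.6]], [-0.3])
    # A pathway counts as a valid line of logic only above a threshold.
    print("pathway valid" if output[0] > 0.5 else "pathway suppressed")
```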
The World Wide Web and Computational Limitations
This architecture within a cloud is limited to developing knowledge, or lines of logic. Gaps or breaks in a line of logic may be bridged by inference based on history, producing what might be called quantum leaps in knowledge, or wisdom. Wisdom systems are different from knowledge systems. Knowledge is highly structured, and its formation can be automated more easily, whereas wisdom is less structured, having gaps in knowledge and information. Wisdom relies on inference and intuition in order to select valid information from its absence, or out of innuendo, ambiguity, or other noise. Wisdom is more of an art, whereas knowledge is more of a science.
Nonetheless, all the participating computers on the World Wide Web could enable a giant simulated brain. Of course, movies such as The Lawnmower Man, Demon Seed, Colossus: The Forbin Project, and WarGames go the extra mile, making the leap to self-aware machines that conquer the world. For now, though, let's just use them to solve work-related problems.
Brown, D. (2000). Digital Fortress. New York: St. Martin's Press. ISBN 9780312263126.
Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (3rd ed.). New York: John Wiley & Sons.