Although the processing speed of personal computers (PCs) has increased rapidly and the Internet now has immense reach, an average user's computational needs are relatively modest. This means that a massive amount of processing power sits idle.
Distributed computing (DC) is an excellent way of using this spare processing power. DC breaks down a computational problem that would otherwise need powerful and expensive computing resources into small tasks, and distributes them over the Internet to many privately owned PCs.
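To make this concrete, here is a minimal Python sketch of the splitting step. The 1,000,000-number range and the chunk size are assumptions chosen for illustration, not figures from any real project.

```python
# A minimal sketch of how a DC project might split a big job into
# small, independent work units.

def make_work_units(start, stop, chunk_size):
    """Yield (lo, hi) ranges that together cover [start, stop)."""
    for lo in range(start, stop, chunk_size):
        yield (lo, min(lo + chunk_size, stop))

# Example: splitting a primality search over 1..1,000,000 into
# 1,000-number chunks that can each be sent to a different PC.
units = list(make_work_units(1, 1_000_000, 1_000))
print(len(units), units[0], units[-1])   # 1000 independent units
```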
An example of a DC project is PrimeGrid, which aims to find more prime numbers. It is quite a large project, open to the public; users can participate by downloading the BOINC (Berkeley Open Infrastructure for Network Computing) software to their systems.
DC projects use participants' PCs during their downtime: the client runs in the background and picks up work only when the computer is idle and has no other tasks, such as software updates, to perform.
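Below is a rough sketch of that idle-time behaviour in Python, assuming the third-party psutil library is available (pip install psutil). The 25 per cent threshold and the do_work_unit() placeholder are illustrative assumptions; real clients such as BOINC use more sophisticated, platform-specific scheduling.

```python
# A minimal sketch of idle-time scheduling, assuming psutil is installed.
import time
import psutil

IDLE_THRESHOLD = 25.0   # assumed: CPU use (%) below which the PC counts as idle

def do_work_unit():
    # Placeholder for one small chunk of the project's computation.
    sum(i * i for i in range(100_000))

while True:
    # cpu_percent(interval=1) samples system-wide CPU load over 1 second.
    if psutil.cpu_percent(interval=1) < IDLE_THRESHOLD:
        do_work_unit()    # the machine looks idle: crunch a chunk
    else:
        time.sleep(5)     # the owner is busy: back off politely
```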
The crux of DC is online collaboration between many people, often volunteers, to work on large computational problems. Participants can either contribute actively, by working on the problem itself, or simply provide computing resources.
The Internet itself is a great example of DC: it is a vast repository of information, built through the collaboration and sharing of millions of people. Apart from this obvious cooperation, there are several other factors that make DC important.
Distributed systems can perform much better than systems whose functions are centralized in a single location, because DC spreads the computational load across many nodes, so that no single node becomes a bottleneck.
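A simple way to picture this load spreading is round-robin assignment of work units to nodes, sketched below; the node names are hypothetical stand-ins for volunteer PCs.

```python
# A minimal sketch of spreading work units across nodes so that no
# single machine becomes a bottleneck.
from itertools import cycle

nodes = ["pc-alice", "pc-bob", "pc-carol"]       # hypothetical volunteer PCs
work_units = [f"unit-{i}" for i in range(7)]

assignments = {}
for unit, node in zip(work_units, cycle(nodes)):  # round-robin assignment
    assignments.setdefault(node, []).append(unit)

for node, units in assignments.items():
    print(node, "->", units)
# pc-alice -> ['unit-0', 'unit-3', 'unit-6'], and so on.
```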
DC also makes a service more reliable by combining multiple ordinary PCs: with more computing and storage capacity than strictly needed, the service can continue even if one or more servers fail. Moreover, because the service is spread over multiple independent servers, the failure of a single server does not bring the whole system down. When client and server instances are distributed across multiple independent PCs, the system becomes robust.
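The following sketch illustrates this failover idea: a coordinator hands a work unit to the next independent node whenever one fails. The send_to_node() function is a made-up stand-in that fails at random to simulate crashes; it is not any real project's API.

```python
# A minimal sketch of failover across independent nodes.
import random

def send_to_node(node, unit):
    """Pretend to run a unit on a node; fail randomly to simulate crashes."""
    if random.random() < 0.3:
        raise ConnectionError(f"{node} went offline")
    return f"result of {unit} from {node}"

def run_with_failover(unit, nodes):
    for node in nodes:
        try:
            return send_to_node(node, unit)
        except ConnectionError:
            continue                     # try the next independent server
    raise RuntimeError("all nodes failed")  # rare when nodes fail independently

print(run_with_failover("unit-42", ["pc-alice", "pc-bob", "pc-carol"]))
```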
If a new project starts on a small system and succeeds, you might later add more work to it, which calls for more computing power, storage capacity, and network bandwidth. Rather than replacing the old computer with a more powerful one, you can design the system so that it can be expanded incrementally by adding computing resources.
Even if you build and test all parts of a new service on a single computer, you can later run different parts on different computers, or a single part on multiple computers. As long as the service's components interact through well-defined network protocols, the deployment model can be changed easily.
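As a minimal illustration, the component below exposes a trivial computation over plain HTTP using only Python's standard library; the port number and the squaring task are arbitrary choices made for this example. Because callers only need a host and port, the component can move from a laptop to a dedicated server without code changes.

```python
# A minimal sketch of a service component behind a network protocol (HTTP).
from http.server import BaseHTTPRequestHandler, HTTPServer

class SquareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A path like /square/12 gets the response 144.
        n = int(self.path.rsplit("/", 1)[-1])
        body = str(n * n).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), SquareHandler).serve_forever()
```

Any machine that can reach the host can then use it, for instance with curl http://localhost:8000/square/12.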
DC systems can run on many operating systems and on hardware from different vendors. Because they rely on open standards and varied communication protocols, they are largely independent of any particular underlying platform.
There are many ongoing DC projects. For example, SETI@home is a DC project hosted at the University of California, Berkeley, with over 5.2 million participants worldwide. SETI stands for the ‘search for extra-terrestrial intelligence’.
Some other interesting DC projects are Folding@home, Climateprediction.net, World Community Grid, Rosetta@home, LHC@home, Enigma@home, Einstein@home, Quake-Catcher Network, GPUGrid.net, and AQUA@home.
Such projects usually involve splitting a problem into small chunks, finding willing participants, distributing the chunks, and assembling the results. You can contribute by donating your time or your computing resources; just download the required software from the project's website and follow the instructions.
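Here is a compact, self-contained sketch of that whole split/distribute/assemble cycle, simulated locally with a Python process pool standing in for volunteer PCs; the prime-counting task is an illustrative choice, not any project's actual workload.

```python
# A minimal end-to-end sketch of the split/distribute/assemble cycle.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (one work unit)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split: 100,000 numbers into 10 chunks.
    chunks = [(i, i + 10_000) for i in range(0, 100_000, 10_000)]
    # Distribute: each chunk runs in a separate worker process.
    with Pool(processes=4) as pool:
        partials = pool.map(count_primes, chunks)
    # Assemble: combine the partial results into the final answer.
    print("primes below 100,000:", sum(partials))   # 9592
```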
DC saves time and energy, increases efficiency, and enables collaborative work on problems of universal interest. It is more relevant than ever, given the steady increase in processor power, the growing number of socially conscious users, and the flourishing remote-work culture.