Software developers should learn parallel programming

As time passes, we demand that our computers solve bigger and more complex problems. To meet these challenges, our computers must execute quickly. One of the most important ways of increasing the speed of execution is parallel computing, discussed below.

Until around 1980, a computer had a single processor (its "brain," so to speak); computers that operate on one processor are called serial or sequential computers. During the 1980s, serious research and engineering went into building computers with multiple processors that execute in parallel with each other, so that a large computation can be broken into smaller computations handled simultaneously by the individual processors; the partial results of the individual processors are then combined to obtain a solution to the original, large problem.

For example, suppose your computer has 4 processors and you need to add up a list of 10,000 numbers. You can divvy up the list so that, in parallel, each processor adds up 10,000 / 4 = 2,500 list members; then add up the 4 partial sums to get the desired total. The solution is thus obtained much faster than it would be by a single processor of the same speed adding all 10,000 numbers: roughly in the time it takes one processor to add only 2,500 numbers. A sketch of this idea in code appears below.
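To make this concrete, here is a minimal sketch of the summing example in Java (the choice of language, the class name ParallelSum, and the use of an ExecutorService are mine, for illustration only; any language with threads would serve). Four worker threads each add up a 2,500-member share of the list, and the 4 partial sums are then combined:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            final int n = 10_000, workers = 4;
            final long[] data = new long[n];
            for (int i = 0; i < n; i++) data[i] = i + 1;      // the list 1, 2, ..., 10000

            ExecutorService pool = Executors.newFixedThreadPool(workers);
            List<Future<Long>> partials = new ArrayList<>();
            int chunk = n / workers;                          // 10,000 / 4 = 2,500 members each
            for (int w = 0; w < workers; w++) {
                final int lo = w * chunk;
                final int hi = (w == workers - 1) ? n : lo + chunk;
                // Each worker adds up its own 2,500-member share of the list.
                partials.add(pool.submit(() -> {
                    long sum = 0;
                    for (int i = lo; i < hi; i++) sum += data[i];
                    return sum;
                }));
            }
            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // combine the 4 partial sums
            pool.shutdown();
            System.out.println(total);                        // prints 50005000
        }
    }

On a machine with 4 or more processors, the four workers can genuinely run simultaneously, which is where the speedup comes from.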


During the early years of parallel computing, parallel machines were quite expensive; hence they were not widely used except by well-funded organizations that needed fast computation for large problems. Today, however,

"... multicore processors power our laptops and cellphones. Distributed cloud servers or supercomputer clusters process large data sets to improve Facebook news feeds or predict the weather. To take full advantage of these systems, you need parallel algorithms." - [Silv]

But software development has not caught up with this advance in hardware. It is much harder to program parallel computers: one must write code for the individual processors (although often all processors execute the same code, each applied to a different portion of the data), and one must also take care that the code directs the processors to share data and other computing resources, such as memory, printers, and Internet connections, correctly and efficiently. For example, we do not want processor A trying to use a data value that processor B is responsible for computing if B has not finished the computation; a sketch of one way to prevent this appears below.
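Here is a minimal sketch of the point about processors A and B, again in Java; the class name, the thread names, and the use of a CountDownLatch are my own choices for illustration, not the only way to coordinate. Thread A is made to wait until thread B signals that its result is ready, so A can never read the value too early:

    import java.util.concurrent.CountDownLatch;

    public class WaitForResult {
        static long result;                              // computed by B, used by A
        static final CountDownLatch ready = new CountDownLatch(1);

        public static void main(String[] args) throws InterruptedException {
            Thread b = new Thread(() -> {
                long sum = 0;
                for (int i = 1; i <= 2_500; i++) sum += i;
                result = sum;                            // publish the value first...
                ready.countDown();                       // ...then signal that it is ready
            });
            Thread a = new Thread(() -> {
                try {
                    ready.await();                       // A blocks here until B signals
                    System.out.println("A safely uses B's result: " + result);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            a.start();                                   // starting A first shows the wait works
            b.start();
            a.join();
            b.join();
        }
    }

Without the latch (or some equivalent synchronization), A could read result before B has finished computing it, which is exactly the error described above.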


This means there are lots of challenges, and lots of great opportunities, for smart young programmers who learn to code for parallel platforms. If, by using, say, 8 processors, one can cut the computing time for a given problem from 8 hours to 1.2 hours, or from 2 minutes to 20 seconds (not quite the ideal 8-fold speedup, since coordinating the processors carries some overhead), one can make a lot of users of the software very happy. By contrast, a programmer who only knows how to program a single processor is working with the hardware model of the 1950s, unable to exploit the capabilities of modern computer hardware. Thus, parallel programming is a core software technology for the 21st century; expert observers overwhelmingly agree that the future of software development is in parallelism (see, e.g., [Alb], [FullM], [PanSK]).


Niagara students who took CIS 302 (Object Oriented Programming II) from me had an introduction to parallel programming; by contrast, many colleges and universities do not offer their students the opportunity to learn parallel programming.

References

[Alb] Peggy Albright, "Tackling Parallel Programming," Build Your Career column, IEEE Web site.

[FullM] Samuel H. Fuller and Lynette I. Millett, "Computing Performance: Game Over or Next Level?," IEEE Computer 44 (1), 2011, pp. 31-38.

[M&B] Russ Miller and Laurence Boxer, Algorithms Sequential & Parallel: A Unified Approach, 3rd ed., Cengage Learning, Boston, 2013.

[PanSK] Victor Pankratius, Wolfram Schulte, and Kurt Keutzer, "Parallelism on the Desktop," IEEE Software 28 (1), 2011, pp. 14-16.

[Silv] Andrew Silver, "Rethinking CS101," IEEE Spectrum, April 2017, p. 23.