Parallel Programming

Author: Jessica Dagostini – beecrowd

(5 minutes of reading time)


You have probably wondered why laptop and cell phone processors now come with dual, quad, or octa cores, right? After all, how does this different number of cores benefit you? The simplest answer is: performance. However, that depends.

We are getting closer and closer to the limit of Moore's Law, which describes the growth rate of the number of transistors(1) on a single processing chip. Each leap in chip architecture brought the end consumer (developer or not) a huge performance gain in the processes performed by the computer. However, as mentioned, we are reaching the maximum limit of optimizations on a single piece of processing hardware, and this has opened up other possibilities. It is in this context that parallel programming stands out.

Although it was already well studied around the 1960s, many years before we approached this architectural limit, this programming paradigm has been gaining space and prominence in recent years. But what is parallel programming, anyway? Two programmers coding the same program together? Or even two programmers sharing the same keyboard? I don't think so... :)

Parallel programming is the ability to process the same activity on different computing resources at the same time, thus reducing its execution time. Think, for example, of a traditional cake recipe. In this recipe there is no fixed order for adding the ingredients, which lets us put them into the mixing bowl in any sequence. To make this cake we need 2 eggs, 2 cups of wheat flour, 1 cup of milk, 1 cup of sugar, and 2 spoons of chocolate powder. Now suppose three more people have come to help you prepare the recipe. Right away, you realize that you don't have to collect all the ingredients alone, so you ask for their help. The four of you split the ingredients among yourselves and fetch them at the same time, that is, in parallel. After that, you don't need your helpers for the mixing, so you perform that step yourself. But remember that you still need to grease the mold and turn on the oven to preheat it. What do you do? You ask for help again so that, while you mix the ingredients, two of your helpers take care of these two new activities. In the end, you realize that it took much less time to make this cake with help than it would have alone. This thinking is the very basis of parallel programming!
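To make the analogy concrete, here is a minimal sketch in Python (just one possible language for this) that models the recipe with a thread pool from the standard library. The recipe functions are hypothetical stand-ins that merely sleep to simulate work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical recipe steps; time.sleep() stands in for the real work.
def fetch_ingredient(name):
    time.sleep(1)              # a trip to the pantry
    return name

def grease_mold():
    time.sleep(1)

def preheat_oven():
    time.sleep(1)

def mix(ingredients):
    time.sleep(1)
    print("Mixed:", ", ".join(ingredients))

ingredients = ["eggs", "flour", "milk", "sugar", "chocolate powder"]
start = time.perf_counter()

# Step 1: four helpers fetch the ingredients at the same time
# (whoever finishes first goes back for the fifth one).
with ThreadPoolExecutor(max_workers=4) as helpers:
    gathered = list(helpers.map(fetch_ingredient, ingredients))

# Step 2: while you mix, two helpers grease the mold and preheat the oven.
with ThreadPoolExecutor(max_workers=2) as helpers:
    helpers.submit(grease_mold)
    helpers.submit(preheat_oven)
    mix(gathered)              # exiting the "with" block waits for the helpers

print(f"Done in {time.perf_counter() - start:.1f}s; alone it would take ~8s")
```

Run sequentially, the eight one-second steps would take about eight seconds; with the helpers, the whole "cake" finishes in about three.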

Remember the multiple cores present on our devices, mentioned at the beginning of the text? They can be our cake-recipe helpers in running a computer program! When programming software in parallel, the developer needs to identify regions of the code that can be executed in parallel (such as collecting the ingredients for the cake) and delegate those activities so that each core does part of the work in parallel and hands its result back to the part of the program that manages the rest. But the programmer needs to tell the software to make use of all this available parallelism. Just as your helpers would have stood still if you hadn't delegated activities to them in the cake recipe, so do the multiple cores on your device. The developer of the software needs to delegate functions to these parallel resources for us to gain a performance improvement from all those cores.
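As an illustration of that last point, consider the sketch below, again in Python. The workload (a naive prime counter) is a hypothetical example of CPU-heavy code; the sequential version would use one core no matter how many are available, while the version with a process pool explicitly delegates one chunk of work to each core:

```python
import os
from concurrent.futures import ProcessPoolExecutor

# A deliberately CPU-heavy function (hypothetical workload for illustration).
def count_primes(limit):
    count = 0
    for n in range(2, limit):
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    jobs = [100_000] * 8

    # Sequential: the other cores stand still, like helpers with no task.
    # results = [count_primes(job) for job in jobs]

    # Parallel: one worker process per available core shares the jobs.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(count_primes, jobs))

    print(results)
```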

It is worth mentioning that this parallelization is not restricted to the cores of a single processor. We can perform parallel computing using several complete computers connected by a high-speed network, which is what we call a computer cluster. In a cluster, we can make use not only of the parallel cores but also of the entire structure of each computing node, such as its memory, disk, and so on. And as distant as this may seem from your reality, it's closer than you think! Weather forecasting software, for example, would practically not exist without parallel computing (or rather, it might exist, but we wouldn't know today's weather forecast until next month). More and more, we need high processing power to compute complex calculations so that simulation results that impact our lives, such as the weather forecast, arrive in time to be useful. In these cases, the parallel cores of a single machine won't do, and we need a large conglomeration of computers performing these calculations in parallel, which is what we call a supercomputer. But that topic is for the next text.
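For a taste of what programming a cluster looks like, here is a minimal sketch using MPI, a common standard for this kind of parallelism. It assumes the mpi4py package and an MPI runtime are installed on the nodes (an assumption beyond anything in this text); each copy of the program may be running on a different machine:

```python
# Run with an MPI launcher, e.g.: mpirun -n 4 python sum_example.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id (it may live on another machine)
size = comm.Get_size()   # how many processes are cooperating

# Each process computes a partial sum over its own slice of the numbers...
chunk = 1_000_000
partial = sum(range(rank * chunk, (rank + 1) * chunk))

# ...and the partial results are combined across the network on process 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of the first {size * chunk} integers: {total}")
```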


Do you like our content? Then follow us on social media to stay on top of innovation, and read our blog.


(1) Moore's Law dictates that the number of components on a processing chip doubles every 18 months. Most modern processors have over a billion transistors on a single silicon chip. The more transistors you fit on a single chip, the more processing power it has.

Jessica Dagostini is a Principal System Architect at beecrowd. She has a master's degree in Computer Science from the Federal University of Rio Grande do Sul and has had the opportunity to participate in Programming Marathons around Latin America.


