1209

What is the difference between concurrency and parallelism?

Examples are appreciated.

nbro
  • 12,226
  • 19
  • 85
  • 163
StackUnderflow
  • 20,450
  • 13
  • 52
  • 77
  • 99
    short answer: Concurrency is two lines of customers ordering from a single cashier (lines take turns ordering); Parallelism is two lines of customers ordering from two cashiers (each line gets its own cashier). – chharvey Jan 21 '19 at 03:19
  • 3
    @chharvey: I really think this should be the answer. Short (two lines of text, if you leave off "short answer"), to the point, instantly understandable. Nicely done! – Mike Maxwell Apr 21 '20 at 03:00

37 Answers

1442

Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant. For example, multitasking on a single-core machine.

Parallelism is when tasks literally run at the same time, e.g., on a multicore processor.


Quoting Sun's Multithreaded Programming Guide:

  • Concurrency: A condition that exists when at least two threads are making progress. A more generalized form of parallelism that can include time-slicing as a form of virtual parallelism.

  • Parallelism: A condition that arises when at least two threads are executing simultaneously.
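To make the distinction concrete, here is a minimal Go sketch (Go being the language several answers below discuss); the task names and loop counts are invented for illustration. With runtime.GOMAXPROCS(1) the two goroutines can only interleave on a single core (concurrency without parallelism); allowing more OS threads lets them run at the same instant on a multicore machine (parallelism).

package main

import (
    "fmt"
    "runtime"
    "sync"
)

// Two independent tasks. Whether they run in parallel depends on how many
// OS threads the Go scheduler may use, not on how the code is written.
func work(name string, wg *sync.WaitGroup) {
    defer wg.Done()
    for i := 0; i < 3; i++ {
        fmt.Println(name, "step", i) // interleaved on 1 core, possibly simultaneous on many
    }
}

func main() {
    runtime.GOMAXPROCS(1) // concurrency only: the tasks overlap in time but never run at the same instant
    // runtime.GOMAXPROCS(runtime.NumCPU()) // would allow true parallelism on a multicore machine

    var wg sync.WaitGroup
    wg.Add(2)
    go work("A", &wg)
    go work("B", &wg)
    wg.Wait()
}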

Rick
  • 4,467
  • 1
  • 22
  • 49
RichieHindle
  • 244,085
  • 44
  • 340
  • 385
  • 195
    I like this answer, but I'd perhaps go further and characterise concurrency as a property of a program or system (and parallelism as the run-time behaviour of executing multiple tasks at the same time). – Adrian Mouat Apr 06 '11 at 15:52
  • 28
    I like Adrian Mouat's comment very much. See also this excellent explanation: http://www.haskell.org/haskellwiki/Parallelism_vs._Concurrency – jberryman Oct 07 '11 at 02:25
  • 1
    @RichieHindle : Does this mean parallelism is not possible with single core processors ? Would like to know your thoughts on this : http://stackoverflow.com/questions/10245337/how-is-parallelism-on-a-single-thread-core-possible – Jaguar Nov 06 '13 at 05:48
  • 9
    @Raj: Correct, parallelism (in the sense of multithreading) is not possible with single core processors. – RichieHindle Nov 06 '13 at 16:58
  • 5
    If Sequential and Parallel were both values in an enumeration, what would the name of that enumeration be? – toddmo Dec 07 '13 at 22:25
  • May I characterize the difference as in between fine-grained parallelism and coarse-grained parallelism? – Zk1001 Jan 20 '15 at 04:52
  • 14
    To that end, Sun's quote can be reworded as: - Concurrency: A condition that exists when, during a given _period_ of time, two threads are making progress - Parallelism: A condition that arises when, given a particular _point_ in time, two threads are executing simultaneously – Phillip May 11 '15 at 03:27
  • @toddmo, I would call it "ProgrammingModel". – Eido95 Jul 04 '15 at 18:50
  • 3
    Unfortunately, the literal meaning of "concurrency" is "happening at the same time", which is part of the reason there's so much confusion. – einpoklum Apr 13 '17 at 17:50
  • Same has been explained pictorially in this answer: http://stackoverflow.com/a/1898024/1977614 – SKPS May 08 '17 at 18:58
  • 3
    @toddmo I think you would have an enum ProgrammingModel {MultiTask, SingleTask} and an enum ExecutionModel {Sequential, Parallel}. I've chosen MultiTask and SingleTask names for consistency, but MultiTask is a synonym for Concurrent for me. – Marcel Toth Apr 07 '18 at 13:15
  • 1
    @Thetam, ok, I'll mark your comment as answer to my comment question :P – toddmo Apr 09 '18 at 00:30
  • 1
    concurrency is chewing and drinking: while you chew you cannot drink water; only after you chew and swallow can you drink it. Parallelism is chewing along with breathing, if I am not wrong :P – Aadam May 16 '18 at 13:47
  • This is misleading. In general, both concurrency and parallelism are properties of computing, which do not necessarily have anything to do with the (multi)threading implementations used by programs. Multitasking is also a red herring; multitasking can actually run sequentially, at least in the view of a cooperative scheduler. The answer also completely misses some significant instances, e.g. concurrent user interaction and instruction-level parallelism. – FrankHB Aug 09 '18 at 04:28
  • Unfortunately, in the world of Garbage Collectors, the meaning (with regards to GC and an application) is pretty much reversed - a Concurrent GC runs at the same time as the application. – David Soroko Nov 04 '18 at 21:27
614

Why the Confusion Exists

Confusion exists because the dictionary meanings of both these words are almost the same:

  • Concurrent: existing, happening, or done at the same time (dictionary.com)
  • Parallel: very similar and often happening at the same time (Merriam-Webster).

Yet the way they are used in computer science and programming is quite different. Here is my interpretation:

  • Concurrency: Interruptability
  • Parallelism: Independentability

So what do I mean by the above definitions?

I will clarify with a real-world analogy. Let’s say you have to get 2 very important tasks done in one day:

  1. Get a passport
  2. Get a presentation done

Now, the problem is that task-1 requires you to go to an extremely bureaucratic government office that makes you wait for 4 hours in a line to get your passport. Meanwhile, task-2 is required by your office, and it is a critical task. Both must be finished on a specific day.

Case 1: Sequential Execution

Ordinarily, you will drive to the passport office for 2 hours, wait in the line for 4 hours, get the task done, drive back for 2 hours, go home, stay awake 5 more hours, and get the presentation done.

Case 2: Concurrent Execution

But you’re smart. You plan ahead. You carry a laptop with you, and while waiting in the line, you start working on your presentation. This way, once you get back home, you just need to work 1 extra hour instead of 5.

In this case, both tasks are done by you, just in pieces. You interrupted the passport task while waiting in the line and worked on the presentation. When your number was called, you interrupted the presentation task and switched to the passport task. The saving in time was essentially possible due to the interruptability of both tasks.

Concurrency, IMO, can be understood as the "isolation" property in ACID. Two database transactions are considered isolated if their sub-transactions can be performed in any interleaved way and the final result is the same as if the two tasks were done sequentially. Remember that for both the passport and presentation tasks, you are the sole executor.

Case 3: Parallel Execution

Now, since you are such a smart fella, you’re obviously a higher-up, and you have an assistant. So, before you leave to start the passport task, you call him and tell him to prepare the first draft of the presentation. You spend your entire day and finish the passport task, come back, check your mail, and find the presentation draft. He has done a pretty solid job, and with some edits over 2 more hours, you finalize it.

Now, since your assistant is just as smart as you, he was able to work on it independently, without needing to constantly ask you for clarifications. Thus, due to the independentability of the tasks, they were performed at the same time by two different executors.

Still with me? Alright...

Case 4: Concurrent But Not Parallel

Remember your passport task, where you have to wait in the line? Since it is your passport, your assistant cannot wait in line for you. Thus, the passport task has interruptability (you can stop it while waiting in the line, and resume it later when your number is called), but no independentability (your assistant cannot wait in your stead).

Case 5: Parallel But Not Concurrent

Suppose the government office has a security check to enter the premises. Here, you must remove all electronic devices and submit them to the officers, and they only return your devices after you complete your task.

In this case, the passport task is neither independentable nor interruptible. Even if you are waiting in the line, you cannot work on something else because you do not have the necessary equipment.

Similarly, say the presentation is so highly mathematical in nature that you require 100% concentration for at least 5 hours. You cannot do it while waiting in line for passport task, even if you have your laptop with you.

In this case, the presentation task is independentable (either you or your assistant can put in 5 hours of focused effort), but not interruptible.

Case 6: Concurrent and Parallel Execution

Now, say that in addition to assigning your assistant to the presentation, you also carry a laptop with you to the passport task. While waiting in the line, you see that your assistant has created the first 10 slides in a shared deck. You send comments on his work with some corrections. Later, when you arrive back home, instead of needing 2 hours to finalize the draft, you just need 15 minutes.

This was possible because the presentation task has independentability (either one of you can do it) and interruptability (you can stop it and resume it later). So you concurrently executed both tasks, and executed the presentation task in parallel.

Let’s say that, in addition to being overly bureaucratic, the government office is corrupt. Thus, you can show your identification, enter, start waiting in line for your number to be called, bribe a guard and someone else to hold your position in the line, sneak out, come back before your number is called, and resume waiting yourself.

In this case, you can perform both the passport and presentation tasks concurrently and in parallel. You can sneak out, and your position is held by your assistant. Both of you can then work on the presentation, etc.


Back to Computer Science

In the computing world, here are example scenarios typical of each of these cases:

  • Case 1: Interrupt processing.
  • Case 2: When there is only one processor, but all executing tasks have wait times due to I/O.
  • Case 3: Often seen when we are talking about map-reduce or Hadoop clusters.
  • Case 4: I think Case 4 is rare. It’s uncommon for a task to be concurrent but not parallel. But it could happen. For example, suppose your task requires access to a special computational chip which can be accessed only through processor-1. Thus, even if processor-2 is free and processor-1 is performing some other task, the special computation task cannot proceed on processor-2.
  • Case 5: Also rare, but not quite as rare as Case 4. Non-concurrent code can be a critical region protected by mutexes. Once it is started, it must execute to completion. However, two different critical regions can progress simultaneously on two different processors.
  • Case 6: IMO, most discussions about parallel or concurrent programming are basically talking about Case 6. This is a mix and match of both parallel and concurrent executions.

Concurrency and Go

To see why Rob Pike says concurrency is better, you have to understand the reason: you have a really long task in which there are multiple waiting periods where you wait for some external operation like a file read or a network download. In his lecture, all he is saying is, “just break up this long sequential task so that you can do something useful while you wait.” That is why he talks about different organizations with various gophers.

Now the strength of Go comes from making this breaking up really easy with the go keyword and channels. Also, there is excellent underlying support in the runtime to schedule these goroutines.
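As a hedged illustration of that idea (the URLs below are placeholders invented for the sketch, not taken from the answer), here is what "break up the long task" can look like with the go keyword and a channel: each wait-heavy step runs in its own goroutine, and the results are collected as they finish.

package main

import (
    "fmt"
    "net/http"
)

// fetch performs one wait-heavy step (a network download) and reports its result.
func fetch(url string, results chan<- string) {
    resp, err := http.Get(url) // this goroutine blocks here; the others keep making progress
    if err != nil {
        results <- url + ": error: " + err.Error()
        return
    }
    resp.Body.Close()
    results <- url + ": " + resp.Status
}

func main() {
    // Hypothetical URLs, just to show the shape of the code.
    urls := []string{"https://example.com", "https://example.org", "https://example.net"}

    results := make(chan string)
    for _, u := range urls {
        go fetch(u, results) // break the long sequential task into concurrent pieces
    }
    for range urls {
        fmt.Println(<-results) // collect results in whatever order they finish
    }
}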

But essentially, is concurrency better than parallelism?

Are apples better than oranges?

Certary
  • 29
  • 1
  • 9
Methos
  • 10,908
  • 11
  • 41
  • 47
  • Thanks for case 5. I often think parallel implicitly means concurrency. – hqt Jul 28 '19 at 20:18
  • 4
    Node.js event loop is a good example for case 4. Even though processor B has free resources, the request X should be handled by processor A which is busy processing Y. If setTimeout is called for Y, X can be processed, then, after the timeout Y will end being processed too. – Lucas Janon Jul 29 '19 at 04:48
  • 2
    It's worth noting that the two definitions of the word "concurrency" given in the accepted answer and this one are quite **distinct**. The first refers to the conception of running several tasks in overlapping time periods (i.e. parallelism implies concurrency by that definition); the second refers to the conception of interrupting one task to run some other. – Mergasov Nov 01 '19 at 05:39
  • Similar to the comment above - multithreaded Python is an example of case 4. I don't think this case is uncommon. Any global interpreter lock will result in case 4 (if it allows for concurrency at all). – chub500 Apr 20 '20 at 17:38
  • I think Case 5 (Parallel but not concurrent) is a bit misleading, since all parallel programs are a subset of concurrent programs – tnishada Nov 17 '20 at 12:50
  • In case 5, you cannot seem to make up your mind whether it is independentable or not. You seem to say both. – hkBst Feb 19 '21 at 06:58
255

I like Rob Pike's talk: Concurrency is not Parallelism (it's better!) (slides) (talk)

Rob usually talks about Go and usually addresses the question of Concurrency vs Parallelism in a visual and intuitive explanation! Here is a short summary:

Task: Let's burn a pile of obsolete language manuals! One at a time!

Task

Concurrency: There are many concurrent decompositions of the task! One example:

Gophers

Parallelism: The previous configuration occurs in parallel if there are at least 2 gophers working at the same time.

Kyle G
  • 959
  • 8
  • 17
asfer
  • 3,251
  • 1
  • 18
  • 16
  • 9
    For the video, see https://blog.heroku.com/archives/2013/2/24/concurrency_is_not_parallelism – Pramod Jul 27 '13 at 01:54
  • 21
    Sorry, had to downvote it for the "it's better" bit. The correct answer is that it's different. Concurrency is a part of the problem. Parallelism is a part of the solution. – pyon Aug 07 '15 at 02:18
  • @EduardoLeón You obviously did not check the name of the talk. Concurrency is not a problem, it is just a way to think on a problem/task. – asfer Aug 07 '15 at 22:58
  • 6
    @asfer Concurrency is a part of the structure of the problem. By the way, don't conflate "concurrency" (the problem) with "concurrency control" (a solution, often used together with parallelism). – pyon Aug 07 '15 at 23:30
  • 1
    I watched it and honestly I didn't like it. It adds unnecessary complications and nerdiness to something that should be explained in a much simpler way (check the jugglers answer here). – jj_ Mar 23 '17 at 21:44
162

To add onto what others have said:

Concurrency is like having a juggler juggle many balls. Regardless of how it seems, the juggler is only catching/throwing one ball per hand at a time. Parallelism is having multiple jugglers juggle balls simultaneously.

Acumenus
  • 41,481
  • 14
  • 116
  • 107
Thomas T
  • 2,039
  • 1
  • 14
  • 16
  • 3
    I'm gonna be picky, but If you are juggling with a pair number of balls, you can have two balls at the same time (depending on how you juggling). – thebugfinder Feb 11 '15 at 15:13
  • 63
    @thebugfinder, To make sure there is no more room for error in Thomas' example. Concurrency is like a person juggling with only 1 hand. Regardless of how it seems the person is only holding at most one ball at a time. Parallelism is when the juggler uses both hands. – bigtunacan Mar 05 '15 at 05:00
  • what i actually meant to say with "pair number of balls" was "even number of balls" – thebugfinder Mar 05 '15 at 05:08
  • 1
    Very clever answer. I can definitely see thebugfinder's point, but I like this answer a lot if one action at a time is taken into account and agreed upon. – B.K. May 07 '15 at 06:19
  • 2
    I think it's better with "Parallelism is having one person for each ball". If the number of balls increases (imagine web requests), those people can start juggling, making the execution concurrent and parallel. Also I would love it if someone could explain the reactor pattern with the jugglers example.. – jj_ Mar 23 '17 at 21:37
  • But it's a one-armed juggler – Dan Apr 01 '17 at 03:13
142

Say you have a program that has two threads. The program can run in two ways:

Concurrency                 Concurrency + parallelism
(Single-Core CPU)           (Multi-Core CPU)
 ___                         ___ ___
|th1|                       |th1|th2|
|   |                       |   |___|
|___|___                    |   |___
    |th2|                   |___|th2|
 ___|___|                    ___|___|
|th1|                       |th1|
|___|___                    |   |___
    |th2|                   |   |th2|

In both cases we have concurrency from the mere fact that we have more than one thread running.

If we ran this program on a computer with a single CPU core, the OS would be switching between the two threads, allowing one thread to run at a time.

If we ran this program on a computer with a multi-core CPU then we would be able to run the two threads in parallel - side by side at the exact same time.

lucid_dreamer
  • 334
  • 4
  • 9
Pithikos
  • 14,773
  • 14
  • 98
  • 115
  • 5
    I liked the thread blocks. Simple, yet perfect! Thank you for such an amazing answer. – bozzmob Aug 27 '19 at 13:53
  • Nice example. I deduce that you can only have concurrency and never parallelism when there is a single-core CPU. Concurrency = processes take turns (unlike sequential execution) – Abdel Aleem Apr 26 '21 at 11:49
59

Concurrency: If two or more problems are solved by a single processor.

Parallelism: If one problem is solved by multiple processors.


Rajendra Uppal
  • 16,504
  • 15
  • 55
  • 57
  • 58
    I'd disagree with this - a program designed to be concurrent may or may not be run in parallel; concurrency is more an attribute of a program, parallelism may occur when it executes. – Adrian Mouat Apr 06 '11 at 15:43
53

Imagine learning a new programming language by watching a video tutorial. You need to pause the video, apply what has been said in code, then continue watching. That's concurrency.

Now you're a professional programmer. And you enjoy listening to calm music while coding. That's Parallelism.

As Andrew Gerrand said in GoLang Blog

Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.

Enjoy.

Ramy M. Mousa
  • 4,656
  • 3
  • 32
  • 40
39

I will try to explain with an interesting and easy-to-understand example. :)

Assume that an organization organizes a chess tournament where 10 players (with equal chess-playing skills) will challenge a professional champion chess player. And since chess is a 1:1 game, the organizers have to conduct the 10 games in a time-efficient manner so that they can finish the whole event as quickly as possible.

Hopefully the following scenarios will easily describe multiple ways of conducting these 10 games:

1) SERIAL - let's say that the professional plays with each person one by one, i.e. starts and finishes the game with one person and then starts the next game with the next person, and so on. In other words, they decided to conduct the games sequentially. So if one game takes 10 mins to complete then 10 games will take 100 mins. Also assume that the transition from one game to the next takes 6 secs; with 9 transitions between 10 games, that is 54 secs (approx. 1 min).

So the whole event will approximately complete in 101 mins (WORST APPROACH)

2) CONCURRENT - let's say that the professional plays his turn and moves on to the next player, so all 10 players are playing simultaneously, but the professional player is never with two persons at a time; he plays his turn and moves on to the next person. Now assume the professional player takes 6 secs to play his turn, and the transition time between two players is also 6 secs, so the total transition time to get back to the first player is 1 min (10 x 6 secs). Therefore, by the time he is back to the first person, with whom the event was started, 2 mins have passed (10 x time_per_turn_by_champion + 10 x transition_time = 2 mins)

Assuming that each player takes 45 secs to complete their turn, then based on the 10 mins per game from the SERIAL event, the number of rounds before a game finishes should be 600/(45+6) = 11 rounds (approx.)

So the whole event will approximately complete in 11 x time_per_turn_by_player_&_champion + 11 x transition_time_across_10_players = 11 x 51 + 11 x 60 secs = 561 + 660 = 1221 secs = 20.35 mins (approximately)

SEE THE IMPROVEMENT from 101 mins to 20.35 mins (BETTER APPROACH)

3) PARALLEL - let's say the organizers get some extra funds and thus decide to invite two professional champion players (both equally capable), divide the same set of 10 players (challengers) into two groups of 5 each, and assign one group to each champion. Now the event is progressing in parallel in these two sets, i.e. at least two players (one in each group) are playing against the two professional players in their respective groups.

However, within each group the professional player will take one player at a time (i.e. sequentially), so without any calculation you can easily deduce that the whole event will approximately complete in 101/2 = 50.5 mins

SEE THE IMPROVEMENT from 101 mins to 50.5 mins (GOOD APPROACH)

4) CONCURRENT + PARALLEL - In the above scenario, let's say that the two champion players will play concurrently (read the 2nd point) with the 5 players in their respective groups, so now games across groups are running in parallel, but within a group they are running concurrently.

So the games in one group will approximately complete in 11 x time_per_turn_by_player_&_champion + 11 x transition_time_across_5_players = 11 x 51 + 11 x 30 = 561 + 330 = 891 secs = 14.85 mins (approximately)

So the whole event (involving two such groups running in parallel) will approximately complete in 14.85 mins

SEE THE IMPROVEMENT from 101 mins to 14.85 mins (BEST APPROACH)

NOTE: in the above scenario, if you replace the 10 players with 10 similar jobs and the two professional players with two CPU cores, then the following ordering will remain true:

SERIAL > PARALLEL > CONCURRENT > CONCURRENT+PARALLEL (ordered by total time taken, from longest to shortest)

(NOTE: this order might change for other scenarios as it highly depends on the inter-dependency of jobs, communication needs between jobs, and transition overhead between jobs)
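For readers who want to check the arithmetic, here is a small Go sketch that simply reproduces the answer's back-of-the-envelope model under its stated assumptions (10-min games, 6-sec champion turns and transitions, 45-sec player turns, ~11 rounds per game); nothing is measured, the numbers are the answer's own.

package main

import "fmt"

func main() {
    const (
        games          = 10
        gameMins       = 10.0 // minutes per game when played serially
        transitionSecs = 6.0  // seconds to move between boards
        championSecs   = 6.0  // champion's time per turn
        playerSecs     = 45.0 // challenger's time per turn
        rounds         = 11   // ~600 s of play / (45+6) s per round
    )

    serial := games*gameMins + (games-1)*transitionSecs/60 // 100 + 0.9 ≈ 101 mins

    // Champion cycles over all 10 boards each round.
    concurrent := (rounds*(playerSecs+championSecs) + rounds*games*transitionSecs) / 60 // ≈ 20.35 mins

    parallel := serial / 2 // two champions, 5 boards each, still played serially ≈ 50.5 mins

    // Two champions, each cycling concurrently over 5 boards.
    concParallel := (rounds*(playerSecs+championSecs) + rounds*5*transitionSecs) / 60 // ≈ 14.85 mins

    fmt.Printf("serial: %.2f mins\nconcurrent: %.2f mins\nparallel: %.2f mins\nconcurrent+parallel: %.2f mins\n",
        serial, concurrent, parallel, concParallel)
}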

sactiw
  • 20,109
  • 4
  • 35
  • 28
  • 2
    Great explanation. There's one addition. The concurrent model for the 2nd case (when a professional player moves between players) will yield an improvement only if the player really does take 45 seconds for his turn. In other words, we should have I/O waiting in the whole process. If a regular player can take his turn in less than 45 seconds (5 or maybe 10 seconds), the improvement will be smaller. Thus, if we have no I/O waiting time in our work, concurrency will be roughly the same as serial execution. – Psylone Dec 29 '15 at 11:52
  • I think this is the best explanation because I was struggling wrapping my head around "Concurrent + Parallel" scenario. Also before reading this answer, I always thought "Parallelism" was better than "Concurrency" but apparently, it depends on the resource limits. The more "professional chess player" you get, the better your performance will be compared to Concurrency. – Hmerac Jul 03 '20 at 18:43
36

Simple example:

Concurrent is: "Two queues accessing one ATM machine"

Parallel is: "Two queues and two ATM machines"

Saurabh Pakhare
  • 625
  • 8
  • 17
  • And multithreading? Just thinking about how the term multithreading fits into the above scenario. In this case, is Concurrent == Multithreading, as in one from each queue goes to the ATM at each moment? – KhoPhi Jun 19 '17 at 01:10
  • 1
    @KhoPhi Multithreading implies concurrency, but doesn't imply parallelism. Someone correct me if I'm wrong. – bool3max Nov 19 '20 at 19:00
32

They solve different problems. Concurrency solves the problem of having scarce CPU resources and many tasks. So, you create threads or independent paths of execution through code in order to share time on the scarce resource. Up until recently, concurrency has dominated the discussion because of limited CPU availability.

Parallelism solves the problem of finding enough tasks and appropriate tasks (ones that can be split apart correctly) and distributing them over plentiful CPU resources. Parallelism has always been around of course, but it's coming to the forefront because multi-core processors are so cheap.

JP Alioto
  • 43,483
  • 5
  • 85
  • 112
31

concurrency: multiple execution flows with the potential to share resources

Ex: two threads competing for an I/O port.

parallelism: splitting a problem into multiple similar chunks.

Ex: parsing a big file by running two processes on each half of the file.
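A minimal Go sketch of that second example, as one possible reading of it: the answer talks about two processes, while this sketch uses two goroutines in one process, and the file name and the line-counting task are placeholders. The input is split into two halves and each half is processed independently, so the two chunks can be handled in parallel.

package main

import (
    "bytes"
    "fmt"
    "os"
    "sync"
)

// countLines counts newline characters in one chunk of the file.
func countLines(chunk []byte, out *int, wg *sync.WaitGroup) {
    defer wg.Done()
    *out = bytes.Count(chunk, []byte{'\n'})
}

func main() {
    data, err := os.ReadFile("big.txt") // hypothetical input file
    if err != nil {
        panic(err)
    }

    mid := len(data) / 2
    var first, second int
    var wg sync.WaitGroup
    wg.Add(2)
    go countLines(data[:mid], &first, &wg)  // each half is an independent chunk,
    go countLines(data[mid:], &second, &wg) // so the two counts can proceed in parallel
    wg.Wait()

    fmt.Println("newlines:", first+second)
}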

Mihai Toader
  • 11,551
  • 1
  • 27
  • 33
30

Parallelism is the simultaneous execution of processes on multiple cores per CPU or on multiple CPUs (on a single motherboard).

Concurrency is when parallelism is achieved on a single core/CPU by using scheduling algorithms that divide the CPU’s time (time-slicing). Processes are interleaved.

Units:

  • 1 or many cores in a single CPU (pretty much all modern day processors)
  • 1 or many CPUs on a motherboard (think old school servers)
  • 1 application is 1 program (think Chrome browser)
  • 1 program can have 1 or many processes (think each Chrome browser tab is a process)
  • 1 process can have 1 or many threads from 1 program (Chrome tab playing Youtube video in 1 thread, another thread spawned for comments section, another for users login info)
  • Thus, 1 program can have 1 or many threads of execution
  • 1 process is thread(s)+allocated memory resources by OS (heap, registers, stack, class memory)
nabster
  • 1,319
  • 2
  • 19
  • 31
27

Concurrent programming execution has 2 types: non-parallel concurrent programming and parallel concurrent programming (also known as parallelism).

The key difference is that, to the human eye, threads in non-parallel concurrency appear to run at the same time, but in reality they don't. In non-parallel concurrency, threads rapidly switch and take turns using the processor through time-slicing. In parallelism there are multiple processors available, so multiple threads can run on different processors at the same time.

Reference: Introduction to Concurrency in Programming Languages

Apurva Thorat
  • 431
  • 4
  • 8
12

Concurrency => When multiple tasks are performed in overlapping time periods with shared resources (potentially maximizing resource utilization).

Parallelism => When a single task is divided into multiple simple independent sub-tasks which can be performed simultaneously.

Will Ness
  • 62,652
  • 8
  • 86
  • 167
MBK
  • 226
  • 3
  • 7
  • How would you describe a single-core processor system that multi-tasks (time slices) to give the appearance of overlapping processing? When concurrency is defined as execution in overlapping time periods it includes this processing. You have described simultaneous execution which excludes it under your definition of concurrency. – acarlon Apr 02 '14 at 05:26
  • The best definition IMHO, but you should change "shared resources" with "shared mutable resources". – Philippe Feb 09 '21 at 07:48
10

Think of it as servicing queues, where a server can only serve the first job in a queue.

1 server, 1 job queue (with 5 jobs) -> no concurrency, no parallelism (only one job is being serviced to completion; the next job in the queue has to wait till the serviced job is done, and there is no other server to service it)

1 server, 2 or more different queues (with 5 jobs per queue) -> concurrency (since the server is sharing time among the first jobs of all the queues, equally or weighted), but still no parallelism, since at any instant there is one and only one job being serviced.

2 or more servers, one queue -> parallelism (2 jobs done at the same instant) but no concurrency (the servers are not sharing time; the 3rd job has to wait till one of the servers completes.)

2 or more servers, 2 or more different queues -> concurrency and parallelism

In other words, concurrency is sharing time to complete a job. It MAY take the same total time to complete the job, but at least it gets started early. The important thing is that jobs can be sliced into smaller jobs, which allows interleaving.

Parallelism is achieved with just more CPUs , servers, people etc that run in parallel.

Keep in mind, if the resources are shared, pure parallelism cannot be achieved, but this is where concurrency has its best practical use: taking up another job that doesn't need that resource.
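A hedged Go sketch of the "2 or more servers, one queue" row (the job list and server count are invented for illustration): a single jobs channel plays the role of the queue, and each worker goroutine is a server pulling the first available job.

package main

import (
    "fmt"
    "sync"
)

// worker is one "server": it repeatedly takes the first job in the queue.
func worker(id int, jobs <-chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for job := range jobs {
        fmt.Printf("server %d servicing job %d\n", id, job)
    }
}

func main() {
    jobs := make(chan int, 5) // the single queue, holding 5 jobs
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)

    const servers = 2 // with 1 server this degenerates to sequential service
    var wg sync.WaitGroup
    for id := 1; id <= servers; id++ {
        wg.Add(1)
        go worker(id, jobs, &wg)
    }
    wg.Wait() // the 3rd job waits until one of the servers is free
}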

Rahul
  • 101
  • 1
  • 2
8

I'm going to offer an answer that conflicts a bit with some of the popular answers here. In my opinion, concurrency is a general term that includes parallelism. Concurrency applies to any situation where distinct tasks or units of work overlap in time. Parallelism applies more specifically to situations where distinct units of work are evaluated/executed at the same physical time. The raison d'etre of parallelism is speeding up software that can benefit from multiple physical compute resources. The other major concept that fits under concurrency is interactivity. Interactivity applies when the overlapping of tasks is observable from the outside world. The raison d'etre of interactivity is making software that is responsive to real-world entities like users, network peers, hardware peripherals, etc.

Parallelism and interactivity are almost entirely independent dimensions of concurrency. For a particular project, developers might care about either, both, or neither. They tend to get conflated, not least because the abomination that is threads gives a reasonably convenient primitive to do both.

A little more detail about parallelism:

Parallelism exists at very small scales (e.g. instruction-level parallelism in processors), medium scales (e.g. multicore processors) and large scales (e.g. high-performance computing clusters). Pressure on software developers to expose more thread-level parallelism has increased in recent years, because of the growth of multicore processors. Parallelism is intimately connected to the notion of dependence. Dependences limit the extent to which parallelism can be achieved; two tasks cannot be executed in parallel if one depends on the other (Ignoring speculation).

There are lots of patterns and frameworks that programmers use to express parallelism: pipelines, task pools, aggregate operations on data structures ("parallel arrays").

A little more detail about interactivity:

The most basic and common way to do interactivity is with events (i.e. an event loop and handlers/callbacks). For simple tasks events are great. Trying to do more complex tasks with events gets into stack ripping (a.k.a. callback hell; a.k.a. control inversion). When you get fed up with events you can try more exotic things like generators, coroutines (a.k.a. Async/Await), or cooperative threads.
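Since this document's other examples use Go, here is a hedged sketch of that event-loop style in Go's idiom: one loop selects over channels carrying user input and a periodic tick, handling whichever event arrives first. The "clicks" and timings are simulated stand-ins, not a real UI.

package main

import (
    "fmt"
    "time"
)

func main() {
    clicks := make(chan string)
    go func() { // simulated user: "clicks" arrive at unpredictable times
        for _, c := range []string{"open menu", "close menu"} {
            time.Sleep(700 * time.Millisecond)
            clicks <- c
        }
        close(clicks)
    }()

    tick := time.Tick(300 * time.Millisecond) // periodic redraw event

    // The event loop: handle whichever event happens first, never block on just one.
    for {
        select {
        case c, ok := <-clicks:
            if !ok {
                fmt.Println("user done, exiting loop")
                return
            }
            fmt.Println("handling click:", c)
        case <-tick:
            fmt.Println("redraw")
        }
    }
}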

For the love of reliable software, please don't use threads if what you're going for is interactivity.

Curmudgeonliness

I dislike Rob Pike's "concurrency is not parallelism; it's better" slogan. Concurrency is neither better nor worse than parallelism. Concurrency includes interactivity which cannot be compared in a better/worse sort of way with parallelism. It's like saying "control flow is better than data".

Ben Ylvisaker
  • 593
  • 5
  • 6
7

In electronics serial and parallel represent a type of static topology, determining the actual behaviour of the circuit. When there is no concurrency, parallelism is deterministic.

In order to describe dynamic, time-related phenomena, we use the terms sequential and concurrent. For example, a certain outcome may be obtained via a certain sequence of tasks (e.g. a recipe). When we are talking with someone, we are producing a sequence of words. However, in reality, many other processes occur in the same moment, and thus concur to the actual result of a certain action. If a lot of people are talking at the same time, concurrent talks may interfere with our sequence, but the outcomes of this interference are not known in advance. Concurrency introduces indeterminacy.

The serial/parallel and sequential/concurrent characterizations are orthogonal. An example of this is in digital communication. In a serial adapter, a digital message is temporally (i.e. sequentially) distributed along the same communication line (e.g. one wire). In a parallel adapter, this is divided also over parallel communication lines (e.g. many wires), and then reconstructed on the receiving end.

Let us imagine a game, with 9 children. If we arrange them in a chain, give a message to the first and receive it at the end, we would have serial communication. More words compose the message, consisting of a sequence of communication units.

I like ice-cream so much. > X > X > X > X > X > X > X > X > X > ....

This is a sequential process reproduced on a serial infrastructure.

Now, let us imagine dividing the children into groups of 3. We divide the phrase into three parts, give the first to the child at the head of the line on our left, the second to the center line's child, etc.

I like ice-cream so much. > I like    > X > X > X > .... > ....
                          > ice-cream > X > X > X > ....
                          > so much   > X > X > X > ....

This is a sequential process reproduced on a parallel infrastructure (though still partially serialized).

In both cases, supposing there is a perfect communication between the children, the result is determined in advance.

If there are other persons talking to the first child at the same time as you, then we will have concurrent processes. We do not know which process will be considered by the infrastructure, so the final outcome is not determined in advance.

s1l3n0
  • 211
  • 2
  • 4
  • +1 Interesting. In computing one definition, as per the currently accepted answer concurrent means execution in overlapping time periods, not necessarily simultaneously (which would be parallel). In electronics how do you describe circuits that are designed to give the appearance of things happening at the same time, but are just switching very quickly. To continue your ice-cream analogy: I like ice-cream so much > child A1 I like > child B1 ice-cream > child C1 so much > child A2 I like > child B2 ice-cream < child C2 so much... – acarlon Apr 02 '14 at 05:21
  • I first saw this here: http://s1l3n0.blogspot.com/2013/04/serial-vs-parallel-sequential-vs.html. – FrankHB Apr 01 '17 at 16:37
  • Yes, I refined/extendend a bit my answer on one of my personal blog-notes. ;) – s1l3n0 Apr 27 '17 at 11:22
7

I really like Paul Butcher's answer to this question (he's the writer of Seven Concurrency Models in Seven Weeks):

Although they’re often confused, parallelism and concurrency are different things. Concurrency is an aspect of the problem domain—your code needs to handle multiple simultaneous (or near simultaneous) events. Parallelism, by contrast, is an aspect of the solution domain—you want to make your program run faster by processing different portions of the problem in parallel. Some approaches are applicable to concurrency, some to parallelism, and some to both. Understand which you’re faced with and choose the right tool for the job.

dangom
  • 8,347
  • 5
  • 36
  • 61
6

Concurrency is the generalized form of parallelism. For example, a parallel program can also be called concurrent, but the reverse is not true.

  1. Concurrent execution is possible on a single processor (multiple threads, managed by a scheduler or thread-pool)

  2. Parallel execution is not possible on a single processor; it requires multiple processors. (One process per processor)

  3. Distributed computing is also a related topic, and it can also be called concurrent computing, but the reverse is not true, just as with parallelism.

For details read this research paper Concepts of Concurrent Programming

6

Concurrency vs Parallelism

Rob Pike in 'Concurrency Is Not Parallelism'

Concurrency is about dealing with lots of things at once.

Parallelism is about doing lots of things at once.

[Concurrency theory]

Concurrency - handles several tasks at once
Parallelism - handles several threads at once

My vision of concurrency and parallelism

[Sync vs Async]

yoAlex5
  • 13,571
  • 5
  • 105
  • 98
5

I really liked this graphical representation from another answer - I think it answers the question much better than a lot of the above answers

Parallelism vs Concurrency

When two threads are running in parallel, they are both running at the same time. For example, if we have two threads, A and B, then their parallel execution would look like this:

CPU 1: A ------------------------->

CPU 2: B ------------------------->

When two threads are running concurrently, their execution overlaps. Overlapping can happen in one of two ways: either the threads are executing at the same time (i.e. in parallel, as above), or their executions are being interleaved on the processor, like so:

CPU 1: A -----------> B ----------> A -----------> B ---------->

So, for our purposes, parallelism can be thought of as a special case of concurrency

Source: Another answer here

Hope that helps.

Community
  • 1
  • 1
HopeKing
  • 2,574
  • 3
  • 34
  • 54
5

From the book Linux System Programming by Robert Love:

Concurrency, Parallelism, and Races

Threads create two related but distinct phenomena: concurrency and parallelism. Both are bittersweet, touching on the costs of threading as well as its benefits. Concurrency is the ability of two or more threads to execute in overlapping time periods. Parallelism is the ability to execute two or more threads simultaneously. Concurrency can occur without parallelism: for example, multitasking on a single processor system. Parallelism (sometimes emphasized as true parallelism) is a specific form of concurrency requiring multiple processors (or a single processor capable of multiple engines of execution, such as a GPU). With concurrency, multiple threads make forward progress, but not necessarily simultaneously. With parallelism, threads literally execute in parallel, allowing multithreaded programs to utilize multiple processors.

Concurrency is a programming pattern, a way of approaching problems. Parallelism is a hardware feature, achievable through concurrency. Both are useful.

This explanation is consistent with the accepted answer. Actually, the concepts are far simpler than we think. Don't think of them as magic. Concurrency is about a period of time, while parallelism is about exactly the same time, simultaneously.

Community
  • 1
  • 1
Rick
  • 4,467
  • 1
  • 22
  • 49
4

"Concurrency" is when there are multiple things in progress.

"Parallelism" is when concurrent things are progressing at the same time.


Examples of concurrency without parallelism:

  • Multiple threads on a single core.
  • Multiple messages in a Win32 message queue.
  • Multiple SqlDataReaders on a MARS connection.
  • Multiple JavaScript promises in a browser tab.

Note, however, that the difference between concurrency and parallelism is often a matter of perspective. The above examples are non-parallel from the perspective of (observable effects of) executing your code. But there is instruction-level parallelism even within a single core. There are pieces of hardware doing things in parallel with the CPU and then interrupting the CPU when done. The GPU could be drawing to the screen while your window procedure or event handler is being executed. The DBMS could be traversing B-Trees for the next query while you are still fetching the results of the previous one. The browser could be doing layout or networking while your Promise.resolve() is being executed. Etc., etc...

So there you go. The world is as messy as always ;)

Branko Dimitrijevic
  • 47,349
  • 10
  • 80
  • 152
4

The simplest and most elegant way of understanding the two in my opinion is this. Concurrency allows interleaving of execution and so can give the illusion of parallelism. This means that a concurrent system can run your Youtube video alongside you writing up a document in Word, for example. The underlying OS, being a concurrent system, enables those tasks to interleave their execution. Because computers execute instructions so quickly, this gives the appearance of doing two things at once.

Parallelism is when such things really are in parallel. In the example above, you might find the video processing code is being executed on a single core, and the Word application is running on another. Note that this means that a concurrent program can also be in parallel! Structuring your application with threads and processes enables your program to exploit the underlying hardware and potentially be done in parallel.

Why not have everything be parallel then? One reason is that concurrency is a way of structuring programs and is a design decision to facilitate separation of concerns, whereas parallelism is often used in the name of performance. Another is that some things fundamentally cannot fully be done in parallel. An example of this would be adding two things to the back of a queue - you cannot insert both at the same time. Something must go first and the other behind it, or else you mess up the queue. Although we can interleave such execution (and so we get a concurrent queue), you cannot have it parallel.
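To make that queue point concrete, here is a hedged Go sketch (the element type and producer count are arbitrary): two goroutines enqueue concurrently, but a mutex forces the actual inserts to happen one after the other, so the enqueues themselves are never parallel.

package main

import (
    "fmt"
    "sync"
)

// Queue is a FIFO whose inserts are serialized by a mutex:
// enqueues may be attempted concurrently, but only one happens at a time.
type Queue struct {
    mu    sync.Mutex
    items []int
}

func (q *Queue) Enqueue(v int) {
    q.mu.Lock()         // something must go first...
    defer q.mu.Unlock() // ...and the other goes behind it
    q.items = append(q.items, v)
}

func main() {
    var q Queue
    var wg sync.WaitGroup
    for i := 1; i <= 2; i++ {
        wg.Add(1)
        go func(v int) { // two concurrent producers
            defer wg.Done()
            q.Enqueue(v)
        }(i)
    }
    wg.Wait()
    fmt.Println("queue contents:", q.items) // order depends on which producer got the lock first
}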

Hope this helps!

Daniel Soutar
  • 586
  • 1
  • 5
  • 19
3

Concurrency can involve tasks run simultaneously or not (they can indeed be run on separate processors/cores, but they can as well be run in "ticks"). What is important is that concurrency always refers to doing a piece of one greater task. So basically it's a part of some computation. You have to be smart about what you can do simultaneously, what you cannot, and how to synchronize.

Parallelism means that you're just doing some things simultaneously. They don't need to be a part of solving one problem. Your threads can, for instance, solve a single problem each. Of course synchronization stuff also applies but from different perspective.

kboom
  • 1,889
  • 3
  • 24
  • 37
3

Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang is perhaps the most promising upcoming language for highly concurrent programming.

Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising language for high-performance parallel programming on shared-memory computers (including multicores).
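A hedged Go sketch of that fork-join shape (the workload, summing a slice in chunks, is a simple stand-in for the matrix multiply mentioned above): the parent spawns child tasks, continues only after every subtask has finished, and the result is deterministic regardless of how the chunks were scheduled.

package main

import "fmt"

// sumChunk is one child task: it computes a partial result and reports it.
func sumChunk(chunk []int, out chan<- int) {
    s := 0
    for _, v := range chunk {
        s += v
    }
    out <- s
}

func main() {
    data := make([]int, 1000)
    for i := range data {
        data[i] = i + 1 // 1..1000, so the expected total is 500500
    }

    const workers = 4
    out := make(chan int, workers)
    size := len(data) / workers

    // Fork: spawn the child tasks.
    for w := 0; w < workers; w++ {
        lo, hi := w*size, (w+1)*size
        if w == workers-1 {
            hi = len(data)
        }
        go sumChunk(data[lo:hi], out)
    }

    // Join: the parent continues only after every subtask has finished.
    total := 0
    for w := 0; w < workers; w++ {
        total += <-out
    }
    fmt.Println("total:", total) // deterministic: always 500500, however the chunks were scheduled
}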

Copied from my answer: https://stackoverflow.com/a/3982782

Pang
  • 8,605
  • 144
  • 77
  • 113
J D
  • 46,493
  • 12
  • 162
  • 266
3

(I'm quite surprised such a fundamental question has not been resolved correctly and neatly for years...)

In short, both concurrency and parallelism are properties of computing.

As for the difference, here is the explanation from Robert Harper:

The first thing to understand is parallelism has nothing to do with concurrency. Concurrency is concerned with nondeterministic composition of programs (or their components). Parallelism is concerned with asymptotic efficiency of programs with deterministic behavior. Concurrency is all about managing the unmanageable: events arrive for reasons beyond our control, and we must respond to them. A user clicks a mouse, the window manager must respond, even though the display is demanding attention. Such situations are inherently nondeterministic, but we also employ pro forma nondeterminism in a deterministic setting by pretending that components signal events in an arbitrary order, and that we must respond to them as they arise. Nondeterministic composition is a powerful program structuring idea. Parallelism, on the other hand, is all about dependencies among the subcomputations of a deterministic computation. The result is not in doubt, but there are many means of achieving it, some more efficient than others. We wish to exploit those opportunities to our advantage.

They can be seen as orthogonal properties of programs. Read this blog post for additional illustrations. And this one discusses the difference in slightly more detail, concerning components in programming, like threads.

Note that threading and multitasking are both implementations of computing serving more concrete purposes. They can be related to parallelism and concurrency, but not in an essential way. Thus they are hardly good entry points from which to start the explanation.

One more highlight: (physical) "time" has almost nothing to do with the properties discussed here. Time is just a way of implementing the measurement used to show the significance of the properties, but it is far from the essence. Think twice about the role of "time" in time complexity - which is more or less similar, even though the measurement is often more significant in that case.

FrankHB
  • 1,698
  • 17
  • 15
3

"Concurrent" is doing things -- anything -- at the same time. They could be different things, or the same thing. Despite the accepted answer, which is lacking, it's not about "appearing to be at the same time." It's really at the same time. You need multiple CPU cores, either using shared memory within one host, or distributed memory on different hosts, to run concurrent code. Pipelines of 3 distinct tasks that are concurrently running at the same time are an example: Task-level-2 has to wait for units completed by task-level-1, and task-level-3 has to wait for units of work completed by task-level-2. Another example is concurrency of 1-producer with 1-consumer; or many-producers and 1-consumer; readers and writers; et al.

"Parallel" is doing the same things at the same time. It is concurrent, but furthermore it is the same behavior happening at the same time, and most typically on different data. Matrix algebra can often be parallelized, because you have the same operation running repeatedly: For example the column sums of a matrix can all be computed at the same time using the same behavior (sum) but on different columns. It is a common strategy to partition (split up) the columns among available processor cores, so that you have close to the same quantity of work (number of columns) being handled by each processor core. Another way to split up the work is bag-of-tasks where the workers who finish their work go back to a manager who hands out the work and get more work dynamically until everything is done. Ticketing algorithm is another.

Not just numerical code can be parallelized. Files too can often be processed in parallel. In a natural language processing application, for each of the millions of document files, you may need to count the number of tokens in each document. This is parallel, because you are counting tokens, which is the same behavior, for every file.

In other words, parallelism is when the same behavior is being performed concurrently. Concurrently means at the same time, but not necessarily the same behavior. Parallel is a particular kind of concurrency where the same thing is happening at the same time.

Terms for example will include atomic instructions, critical sections, mutual exclusion, spin-waiting, semaphores, monitors, barriers, message-passing, map-reduce, heart-beat, ring, ticketing algorithms, threads, MPI, OpenMP.

Gregory Andrews' work is a top textbook on it: Multithreaded, Parallel, and Distributed Programming.

Geoffrey Anderson
  • 1,293
  • 10
  • 18
2

Parallelism: Having multiple threads do similar tasks which are independent of each other in terms of the data and resources they require to do so. Eg: The Google crawler can spawn thousands of threads and each thread can do its task independently.

Concurrency: Concurrency comes into the picture when you have shared data or shared resources among the threads. In a transactional system this means you have to synchronize the critical section of the code using techniques like locks, semaphores, etc.

Sudip Bhandari
  • 1,639
  • 1
  • 22
  • 23
2

Explanation from this source was helpful for me:

Concurrency is related to how an application handles the multiple tasks it works on. An application may process one task at a time (sequentially) or work on multiple tasks at the same time (concurrently).

Parallelism on the other hand, is related to how an application handles each individual task. An application may process the task serially from start to end, or split the task up into subtasks which can be completed in parallel.

As you can see, an application can be concurrent, but not parallel. This means that it processes more than one task at the same time, but the tasks are not broken down into subtasks.

An application can also be parallel but not concurrent. This means that the application only works on one task at a time, and this task is broken down into subtasks which can be processed in parallel.

Additionally, an application can be neither concurrent nor parallel. This means that it works on only one task at a time, and the task is never broken down into subtasks for parallel execution.

Finally, an application can also be both concurrent and parallel, in that it both works on multiple tasks at the same time, and also breaks each task down into subtasks for parallel execution. However, some of the benefits of concurrency and parallelism may be lost in this scenario, as the CPUs in the computer are already kept reasonably busy with either concurrency or parallelism alone. Combining it may lead to only a small performance gain or even performance loss.

Boolean_Type
  • 953
  • 2
  • 12
  • 32
  • This is already posted in [this existing answer](https://stackoverflow.com/a/42453655). – Pang Jul 06 '18 at 01:31
1

Great, let me take a scenario to show what I understand. Suppose there are 3 kids named A, B and C. A and B talk, C listens. For A and B, they are parallel: A: I am A. B: I am B.

But for C, his brain must take a concurrent process to listen to A and B; it may be something like: I am I A am B.

mannnnerd
  • 99
  • 1
  • 4
1

Concurrency simply means more than one task is running (not necessarily in parallel). For example, assume we have 3 tasks; then at any moment of time, more than one may be running, or all may be running at the same time.

Parallelism means they are literally running in parallel. So in that case all three must be running at the same time.

akhil_mittal
  • 18,855
  • 7
  • 83
  • 82
1

Pike's notion of "concurrency" is an intentional design and implementation decision. A concurrent-capable program design may or may not exhibit behavioral "parallelism"; it depends upon the runtime environment.

You don't want parallelism exhibited by a program that wasn't designed for concurrency. :-) But to the extent that it's a net gain for the relevant factors (power consumption, performance, etc.), you want a maximally-concurrent design so that the host system can parallelize its execution when possible.

Pike's Go programming language illustrates this in the extreme: goroutines are lightweight threads that can all run correctly concurrently, i.e. launching a function with the go keyword creates work that will run in parallel with the caller if the system is capable of it. An application with hundreds or even thousands of goroutines is perfectly ordinary in his world. (I'm no Go expert, that's just my take on it.)

bgat
  • 11
  • 1
0

Excerpt from this amazing blog:

Differences between concurrency and parallelism:

Concurrency is when two tasks can start, run, and complete in overlapping time periods. Parallelism is when tasks literally run at the same time, eg. on a multi-core processor.

Concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of (possibly related) computations.

Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.

An application can be concurrent – but not parallel, which means that it processes more than one task at the same time, but no two tasks are executing at the same instant.

An application can be parallel – but not concurrent, which means that it processes multiple sub-tasks of a task on a multi-core CPU at the same time.

An application can be neither parallel – nor concurrent, which means that it processes all tasks one at a time, sequentially.

An application can be both parallel – and concurrent, which means that it processes multiple tasks concurrently on a multi-core CPU at the same time.

Hamza Belmellouki
  • 1,440
  • 11
  • 29
  • An application can be parallel – but not concurrent, which means that it processes multiple sub-tasks of a task in multi-core CPU at same time. Can you explain this line? As far as I understand, a parallel application is always concurrent. – Vaibhav Gupta Jan 13 '20 at 07:15
0

Simply put, concurrency is dealing with lots of things at once.

The word ‘dealing’ is bolded to show the difference between concurrency and parallelism. Dealing with many things at once means completing many things at once, but it does not matter whether they are executed at the same time or not. On the other hand, doing parallelism means doing lots of things at once (executed at the same time). Hence, a concurrency context can be achieved with one or more processing resources. Dealing with many things at once using one processing resource means that you are doing many things as if they were executed at the same time, by context switching between tasks. On the other hand, a concurrency context with many processing resources means doing parallelism. It means we are doing concurrency by doing parallelism, but not vice versa.

You might want to learn more about concurrency and parallelism and their relationship to today's technology in my article.

fian
  • 29
  • 2
0

Merely to add even more clarification to other good answers:

Starting from the premise that an abstraction of processing (a CPU, as a quite imaginable example) is able to run only one task at a given instant,

Concurrency is a story about the very abstraction of processing itself: it can switch between different tasks.

Parallelism is a story about having more than one abstraction of processing (for example, our CPU has multiple cores). It is what gives our system the ability to do several tasks at the same time (literally). But nothing is said here about the particular abstractions of processing (whether they are concurrent or not).

The emphasis here is on what these stories are about.

So be aware when you are reading the accepted answer:

Concurrency is when two or more tasks can start, run, and complete in overlapping time periods.

Strictly speaking, one can conclude based on that definition that parallelism presupposes concurrency per se.

Mergasov
  • 1,363
  • 1
  • 12
  • 28
-2

Just by consulting the dictionary, you can see that concurrent (from Latin) means to run together, converge, agree; ergo there is a need to synchronize because there is competition for the same resources. Parallel (from Greek) means to duplicate on the side; ergo, to do the same thing at the same time.

rocket441
  • 257
  • 3
  • 7
  • The downvotes you suffered were inappropriate. This is the closest to correct answer so far. Indeed, parallelism is when same behavior is being performed concurrently. Concurrently means at the same time, but not necessarily the same behavior. Parallel is a particular kind of concurrency where the same thing is happening at the same time. – Geoffrey Anderson Dec 31 '18 at 23:35