My parallel-programming education began in earnest when I joined Sequent Computer Systems in late 1990. This education was both brief and effective: within a few short years, my co-workers and I were breaking new ground. Nor was I alone: Sequent habitually hired new-to-parallelism engineers and had them producing competent parallel code within a few months. Nevertheless, more than two decades later, parallel programming is perceived to be difficult to teach and learn. Is parallel programming an exception to the typical transitioning of technology from impossible to expert-only to routine to unworthy of conscious thought?
This paper discusses how parallel programming may change from an expert-only field to a mainstream one. The author points out analogies to other technologies where the same transition has already happened, e.g., the Internet. The conclusion of the paper is that parallel programming may become mainstream if research in that area continues.
The main paper gives an overview of where parallel programming stands today and how it may be improved to become usable by the masses. Its conclusion is quite clear: continued research should lead to better tools and simplified programming.
The appendix discusses several state-of-the-art tools and parallel programming design paradigms. The main focus is on tools that guarantee correctness in terms of deadlock and data-race avoidance. However, performance analysis tools are not discussed. From my point of view, performance and correctness are equally important. Model checking helps to prove the correctness of parallel algorithms. There has been considerable effort in that area, such as proving the correctness of transactional memory systems and of concurrent objects like the Michael-Scott non-blocking queue. An interesting branch for future work may be performance model checking, which, given a model of your multiprocessor machine and your parallel algorithm, tells you whether your algorithm performs and scales.
I liked part D of the appendix and would have hoped for a more detailed discussion of that part. Parallel programming in combination with real-time and energy-efficiency constraints is interesting. Does a high-performance concurrent object also provide low latency? What does the trade-off space look like there? Do high-performance concurrent objects imply energy efficiency?
Modern programming languages may also help to simplify parallel programming. An example is the isolate concept in Dart, where each thread runs in its own private memory (an isolate) and communicates with other threads via message passing. Such a design helps to eliminate concurrency bugs like races but may degrade performance. Providing performance in such an environment is probably key to increasing its acceptability among “macho programmers”.
The paper contains a polemic for the status quo, which sounds boring when put that way; but it’s enjoyable, has a point, and needs to be said.
What is the “status quo” in parallel programming? As in any field of
programming — think programming languages — the status quo is that
it gets done in a lot of different ways. Some people do it in C, some
people do it in Java, some people do it in Haskell. Some people do it
linearizable, some people don’t care. Some people do it
embarrassingly, some people don’t. Some people do it with locks, some
people do it with lock-free techniques, some people do it with
transactional memory. Probably at least one person in history has done
it correctly. Also, as in programming languages, some people claim they know the one true way to do it and that everyone else is wrong; such claims are stupid and best ignored. In summary, the status quo is
that parallel programming, being a big and important endeavor, now
supports, and will continue to support, many approaches.
Of course! This argument does not need expansion. So I am not very
sympathetic with the Abstract, Introduction, and Componentry sections,
which seem to focus on this point.
But then we have Acculturation and Tooling.
Acculturation is a clean articulation of something I’ve seen too little expressed, namely that parallel programming “feels” to be getting easier. I love the “burden of proof” clause: well put, strong, and somewhat counterintuitive. This argument, buttressed by examples from programming as well as cars (network programming? the increasingly broad adoption of higher-order functional programming?), could bear even more such examples.
Tooling, unlike Acculturation, reads as relatively weak and has a weak
conclusion. But the argument you appear to be making is important and
interesting. Namely: Tooling enables Acculturation. The goal of
parallel programming is widespread Acculturation. And many people
attack this problem with new programming paradigms. Though useful,
history suggests that programmers think in a variety of ways, and no
one paradigm will win.* But Acculturation can just as easily happen
via Tooling. And tooling is arguably underserved by researchers today.
So there’s exciting stuff to be done and we should do it. This message
excited me and this way of presenting it seemed insightful.
(*This sentence says all I think needs to be said about Componentry.)
Try to compact the argument along lines like this; or if I’ve misread
you then however you want. Make a single concrete argument. Screw the
hypothetical meanies that say “only one kind of synchronization
primitive,” you are already crushing them in the marketplace. (And
besides this I don’t know who you mean!!!! Hypothetical enemies are
easy to mischaracterize. Try to engage with specific opponents, and
with their *best* arguments, rather than their bad ones.)
The Appendix wasn’t as good as I wanted it. The writing’s slack.
(Consider the first two sentences of the last paragraph in Appendix
A.) All three sections seem to be at least somewhat about Tools; the
section division’s odd. I would love paragraph-long analyses of three
tools, say, two successful and one not (in your opinion), saying why
they succeeded or failed. I hunger for your grounded, specific opinions.
Some writing comments.
Why say “INCREASIngly”?
Lot of capitalization problems in the references.
The best sentence in the main body of the paper (page 1) is the last.
That sentence should probably replace the current abstract.
The abstract contradicts the conclusion. The abstract implies that
“macho” parallel programming is like “giv[ing] that same teenager a
chainsaw” and that a “more promising way” is needed; S1 then calls
macho programming part of a “sterile debate”. The conclusion speaks up
for “the importance of continued investigation into more macho tools”.
So are you a macho man or not? Or are you implying that “macho tools”
are less than, or better than, straight “macho”? That could be clarified.
Metaphors fly fast and furious. (Bicycle, training wheels, machismo,
cars, chauffeurs, fasteners.) Some of them just occupy space. (The
list of fasteners.) Fewer, more precise metaphors would be better.
Please reconsider the word “macho,” which refers specifically to
excessive or exaggerated masculinity. All computer science has that
problem and word choice matters.
McKenney argues that we need approaches for parallel programming that neither
patronize developers nor require so much intimate knowledge that they become
impractical for a wide range of applications. He makes the point that other
‘expert-only’ ideas and technologies became mainstream as well, benefiting
greatly from advances in tooling and education. Thus, what we consider
‘expert-only’ today might well be part of the mainstream in the future.
However, he also argues that these “macho” approaches, as he calls them, guide and shape innovation and are necessary for progress.
McKenney expresses an interesting view, and it would be worthwhile to discuss what we can do concretely to support it. The title and appendix headlines of his paper promise an argument for how to do that, i.e., how to go beyond the current state of affairs. But he remains at a very high level and, to my understanding, expresses mostly hopes and general ideas. Reading the headlines, I had hoped for more concrete discussion, discussion that goes beyond one or two sentences. Since his point of view is, at least for me, not very controversial, I would hope for more concrete points in a presentation, to provide grounds for discussion at the workshop.
One remark he makes, which should receive more general attention, is that parallelization is one optimization among many. While people often agree that parallel programming is used for performance, I think his specific framing is a very useful one that should be made explicit more often, to set the right expectations when it comes to parallelization.
For a publication in the digital library, the paper could benefit from integrating the appendix into the main body and covering it in the conclusion.
The work items themselves might also benefit from being phrased in a less
speculative and more resolute way, perhaps including more concrete proposals.
Sec. 2: Second sentence starts with unexpected phrase.
Appx. B: Second to last sentence contains unexpected upper case word.