Simon Peyton Jones

We all know that Intel and AMD have punted. They can't keep building larger, faster chips for a variety of technical and economic reasons, so they have started placing multiple cores on a single chip. In theory, this keeps overall processing power growing and is easier to build. There's just one catch: it's much harder to program, because to make use of that power, you have to program concurrently.

Don't get me wrong. I'm not complaining. Microprocessor engineers have saved programmers from the hassles of concurrency for years. That's as it should be: get it right once at the lower level to free programmers higher up the abstraction hierarchy to think about the domain problem being solved.

But, alas, that world has come to an end. It's only a matter of time before chips with 8, 16, 32, and, eventually, 1024 cores appear. How will we program them?

Traditional tools for managing concurrency fall short. Processes and even threads are too coarse-grained. Fine-grained concurrency primitives like monitors and condition variables are prone to error. Fabulously prone to error.

But a new abstraction based on transactional memory is gaining traction and will probably land in your favorite programming language in the near future. Simon Peyton Jones of Microsoft Research spoke about transactional memory at OSCON. We've got it on IT Conversations:

Simon Peyton Jones of Microsoft Research introduces a new tool called Transactional Memory to simplify concurrent programming. Due to the increasing prevalence of multi-core hardware, concurrent programming is becoming more and more important. Unfortunately, the most common tools for handling concurrency, locks and condition variables, are over 30 years old and fundamentally flawed. Problems that are simple undergraduate assignments in sequential programming become publishable results when done with concurrency.

Transactional Memory borrows the concept of an atomic transaction from databases. Rather than locking resources, a code block is marked as atomic, and when it runs, the reads and writes are done against a transaction log instead of global memory. When the code is complete, the runtime re-checks all of the reads to make sure they are unchanged and then commits all of the changes to memory at once. If any of the reads are dirty, the transaction is rolled back and re-executed. This, when combined with additional tools for blocking and choice, allows programs to remain simple, correct, and composable while scaling to many threads without the additional overhead that coarse-grained locking incurs.

Listening to this talk will give you a fair understanding of what transactional memory is all about. If you're a developer and have only enough time to listen to one IT Conversations talk this week, this is the one.

