For the first class, we will read Chapters 1 and 3 of QED at Large: A Survey of Engineering of Formally Verified Software [PDF] [DOI] by Talia Ringer, Karl Palmskog, Ilya Sergey, Milos Gligoric, and Zachary Tatlock.

QED at Large is a survey paper about engineering formally verified proofs of program correctness; Chapters 1 and 3 are background and motivation, with a focus on impacts. They should give you a sense of some cool applications of proof assistants over the last several decades.

You can find the video lightning lecture for the first class here (it will be posted when it is ready), and the corresponding slides with notes here. You can find a reading guide and the discussion questions below.

Please note that because of POPL, this reading is not due until January 25th at 10:00 AM. From next week forward, readings will be due the morning before the corresponding class.


Reading Guide

I actually wrote a whole reading guide for this paper a couple of years ago, so please check it out if it helps you!

I'll cover this in the class video, but in Chapter 1, I think the coolest thing to note is the (very briefly mentioned) de Bruijn criterion, which allows for this beautiful separation of concerns between the proof automation we'll build in this class and the core proof checker. Basically, it's the reason you can slap something fancy like a neural network on top of your proof assistant if you'd like, and still know that you'll never break the actual proof assistant.
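To make the de Bruijn criterion concrete, here is a small illustrative sketch in Lean 4 (the paper mostly discusses Coq, but the same principle applies; the theorem name here is just made up for the example). The `omega` decision procedure plays the role of untrusted automation: it searches for a proof term, and the theorem is only accepted after Lean's small kernel independently re-checks that term.

```lean
-- The de Bruijn criterion in action: `omega` is untrusted automation that
-- *constructs* a proof term, but the theorem is only accepted once Lean's
-- small trusted kernel re-checks that term.
theorem comm_example (a b : Nat) : a + b = b + a := by
  omega  -- automation finds the proof; the kernel independently checks it

-- If the automation were buggy and produced an ill-formed term, the kernel
-- would reject it, so a false theorem can never sneak in. That's why you can
-- bolt arbitrarily fancy (even neural) automation on top without risk.
```

The same separation is what lets hammers, machine-learned tactic predictors, and other exotic tooling coexist safely with the core checker.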

Chapter 3 is kind of dense---I wouldn't worry about remembering anything in particular about it. What I hope is that it gives you a sense of some of the cool applications of proof assistants that exist right now, and maybe also gives you some ideas of other potential application domains. It's totally okay to skim it, and to skip over paragraphs you find particularly dense.

Discussion Questions

Please write a short paragraph as a comment in the class forum thread about any one of the following discussion questions (you need to respond to just one question of your choice to get credit, though you're free to write more if interested):

  1. What are some example software systems you'd love to see formally verified in a proof assistant, and what kinds of properties would you like to see verified about those systems? Are there any challenges that you anticipate in proving those properties about those systems?
  2. Section 3.3 lists exactly one machine learning system that has been verified in a proof assistant. Based on what you know so far, do you think there is much hope for verifying state-of-the-art machine learning systems inside of a proof assistant? If so, why and how? And if not, what do you think it would take to be able to formally verify state-of-the-art machine learning systems?
  3. Think a bit about the kinds of systems that have been formally verified in a proof assistant so far---and about the kinds of systems that haven't. Based on what you know so far, what are some properties of a software system that make it particularly amenable to formal verification in a proof assistant? And do you think the barriers to reaching other kinds of software systems are fundamental, or do you think they are surmountable with lots of work?
  4. Write a paragraph responding to someone else's comment in the discussion forum.
  5. Write a paragraph about any other single interesting thought, question, or idea that you had in response to the reading.

Since the first week is asynchronous, we will not have this discussion in person. So, feel free to keep responding to each other if you find the conversation interesting---but no pressure. You'll get credit as long as you write about just one of the above.

Class Schedule | Next >>