Programming languages (PL) are the basis of computing. We study what programs mean, in a very broad sense.
One kind of meaning is about what programs do specifically: what happens when you run a given program on a given input? What output does it produce, what files does it modify, what does it send over the network, how long does it take to run?
Another kind of meaning is about what programs are, or about what programs do in general. For example, if you write a Java method that takes two integers x and y as parameters, and the Java compiler accepts the program, then every call to that method must pass integers for x and y. If another method tried to pass a String instead, the compiler would reject it with a type error.
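The same guarantee can be written down in a few lines; here is a minimal sketch in Haskell (function and argument names invented for illustration):

```haskell
-- A function declared to take two Ints and return an Int.
add :: Int -> Int -> Int
add x y = x + y

caller :: Int
caller = add 1 2         -- accepted: both arguments are Ints

-- bad = add 1 "two"     -- rejected at compile time: a String is not an Int
```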
Testing can show the presence of bugs but not their absence; types can show the absence of bugs (usually not all bugs, but a large number of them). Types reach beyond what a program does on a single input or set of inputs. That is what makes type systems useful.
However, computer science is not just about being useful, and neither are type systems. Rather, type systems have deep connections to the formal theories of mathematical logic, particularly natural deduction, the λ-calculus, and combinatory logic. Just as modular software design is about elegantly combining smaller pieces to create a whole, natural deduction elegantly combines smaller pieces of reasoning. All of this work in logic began before modern computer science, but turned out to be extraordinarily useful for type systems, compilers, and formal verification.
Our research investigates questions such as the following:
Can we design type systems that are substantially more expressive, but can still be type-checked quickly and automatically? For example, can types track the dynamic length of an array and prove that a data structure library preserves invariants?
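One well-known illustration of that kind of expressiveness (a standard textbook construction, not our own system) is a list whose type records its length; in the Haskell sketch below, taking the head of an empty list becomes a compile-time error, and appending provably adds the lengths:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Natural numbers promoted to the type level.
data Nat = Zero | Succ Nat

-- A list whose type records its length.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- Safe head: callable only on non-empty vectors, enforced by the compiler.
vhead :: Vec ('Succ n) a -> a
vhead (Cons x _) = x

-- Appending vectors adds their lengths in the type.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Zero     n = n
  Add ('Succ m) n = 'Succ (Add m n)

append :: Vec m a -> Vec n a -> Vec (Add m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)
```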
Programmers don't always have time to use an expressive type system to show that every part of the program does what it's supposed to do. They may need to defer some checking to run time rather than doing it all at compile time. How should we design type systems that combine static and dynamic checking?
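A minimal sketch of the dynamic half of that combination, using Haskell's standard Data.Dynamic library: the value's type is checked when it is used rather than when the program is compiled, which is the kind of check a gradual type system would insert automatically.

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A value whose static type has been forgotten; checking is deferred.
config :: Dynamic
config = toDyn (42 :: Int)

-- The checks happen at run time: the first succeeds, the second fails.
main :: IO ()
main = do
  print (fromDynamic config :: Maybe Int)     -- Just 42
  print (fromDynamic config :: Maybe String)  -- Nothing: wrong type at run time
```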
Complex programs need to respond quickly to small changes in their input. A compiler should not need to recompile everything when you change one line of code. Programs that reuse previous results and recompute only what a change affects are called incremental. But a subtle, untracked dependency can lead to reusing stale results and generating the wrong code, and tracking every dependency precisely is difficult. So, in practice, compiler developers overapproximate by recompiling too much.
Incremental programming languages track dependencies for you by moving that bookkeeping into the type system. We have techniques that work for smaller programs; how do we scale them up to entire compilers and web browsers?
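To make the dependency problem concrete, here is a toy sketch in Haskell of hand-written dependency tracking for a make-style rebuild (module structure and names invented for illustration); an incremental language would let the type system do this bookkeeping instead of the programmer:

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

-- A toy "compiler": a module's output depends on its own source text
-- and on the outputs of the modules it imports.
type Name = String

data Module = Module { source :: String, imports :: [Name] }

compileOne :: Map Name String -> Module -> String
compileOne outputs m =
  "compiled[" ++ source m ++ concatMap (outputs Map.!) (imports m) ++ "]"

-- Rebuild in dependency order, recompiling a module only when its source
-- changed or one of its dependencies was recompiled. Forgetting an edge in
-- `imports` would mean silently reusing stale output.
rebuild :: Map Name Module   -- old sources
        -> Map Name Module   -- new sources
        -> Map Name String   -- old outputs
        -> [Name]            -- all modules, dependencies first
        -> Map Name String
rebuild old new oldOut order = fst (foldl step (Map.empty, []) order)
  where
    step (outputs, dirty) name =
      let m       = new Map.! name
          changed = fmap source (Map.lookup name old) /= Just (source m)
                    || any (`elem` dirty) (imports m)
          out | changed   = compileOne outputs m
              | otherwise = oldOut Map.! name
      in (Map.insert name out outputs, if changed then name : dirty else dirty)
```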
Investigating these questions requires us to apply a variety of research techniques:
Formal logic—particularly constructive logic, natural deduction, and sequent calculus—is the basis for type system design. Formal logic, via the λ-calculus, also gives us techniques to describe operational semantics—what a given program does on a given input.
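As one example, the operational semantics of the untyped λ-calculus can be written down directly as a small interpreter; here is a minimal big-step, environment-based sketch in Haskell:

```haskell
-- The untyped λ-calculus and a big-step operational semantics for it.
data Term
  = Var String          -- x
  | Lam String Term     -- λx. e
  | App Term Term       -- e1 e2
  deriving Show

-- Values are closures: a λ paired with the environment it was built in.
data Value = Closure String Term Env
type Env = [(String, Value)]

-- eval answers "what does this program compute in this environment?"
eval :: Env -> Term -> Value
eval env (Var x)   = maybe (error ("unbound variable: " ++ x)) id (lookup x env)
eval env (Lam x e) = Closure x e env
eval env (App f a) =
  case eval env f of
    Closure x body cenv -> eval ((x, eval env a) : cenv) body

-- Example: (λx. x) (λy. y) evaluates to the identity closure.
example :: Value
example = eval [] (App (Lam "x" (Var "x")) (Lam "y" (Var "y")))
```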
Mathematical proof techniques, especially induction, enable us to prove that types mean what we say, which in turn allows programmers to depend on the correctness of the type checker's judgment.
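Here, "types mean what we say" is usually made precise as a type soundness theorem, stated for a typing judgment ⊢ e : τ and a small-step evaluation relation e ⟶ e′, and proved by induction on derivations:

\[
\begin{array}{ll}
\textbf{Progress:} & \text{if } \vdash e : \tau \text{, then } e \text{ is a value or } e \longrightarrow e' \text{ for some } e'.\\
\textbf{Preservation:} & \text{if } \vdash e : \tau \text{ and } e \longrightarrow e' \text{, then } \vdash e' : \tau.
\end{array}
\]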
By implementing our formal systems as interpreters, type checkers, and compilers, we gain confidence that the theories we design are consistent with reality, and we expose our advances to the broader community.
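As an illustration of that kind of artifact, here is a minimal type checker for the simply typed λ-calculus, sketched in Haskell (datatypes and names invented for illustration):

```haskell
-- Types and terms of a simply typed λ-calculus with integer literals.
data Ty = TInt | TFun Ty Ty deriving (Eq, Show)

data Expr
  = Lit Int
  | Var String
  | Lam String Ty Expr    -- λ(x : τ). e
  | App Expr Expr
  deriving Show

type Ctx = [(String, Ty)]

-- Returns Just the type of the expression, or Nothing if it is ill-typed.
typeOf :: Ctx -> Expr -> Maybe Ty
typeOf _   (Lit _)     = Just TInt
typeOf ctx (Var x)     = lookup x ctx
typeOf ctx (Lam x t e) = TFun t <$> typeOf ((x, t) : ctx) e
typeOf ctx (App f a)   = do
  TFun dom cod <- typeOf ctx f
  arg          <- typeOf ctx a
  if arg == dom then Just cod else Nothing

-- (λ(x : Int). x) 3 is well typed; applying the literal 3 as a function is not.
wellTyped, illTyped :: Maybe Ty
wellTyped = typeOf [] (App (Lam "x" TInt (Var "x")) (Lit 3))
illTyped  = typeOf [] (App (Lit 3) (Lit 4))
```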