Safe and Secure Code with Goblint

Goblint is developed within the ASSUME EU project.

Goblint is a static analyzer that we develop together with the Technical University of Munich. The main use of the tool is to detect bugs in concurrent C code. Such bugs are notoriously hard to find by testing and are known to cause major safety hazards. Goblint was previously developed as part of the MBAT project, and funding now continues within the ASSUME project, showing the strong industrial interest in this line of research. The main goal of our work is to make the code embedded in your car and other safety-critical systems safer.

Although Goblint is already the best static race detection tool around, it is still somewhat hard for end users to use. During this term, we will be pushing towards a more usable analyzer: exploring differential verification, generating counter-examples, and applying statistical methods to rank warnings. We will be working on this actively and will guide your hand every step of the way. There are many things you can do to help! Goblint is written in OCaml, but not all tasks require OCaml programming; however, if you learn to develop the analyzer, the Munich team has well-funded Ph.D. positions to offer.

There are some concrete topics that we are interested in, listed below, but we would much rather you come and discuss with us so we can find a tailor-made topic for you.

1. Finding the Heartbleed vulnerability with Goblint

Most security vulnerabilities are simple programming errors, so we should be able to just detect simple out-of-bounds array accesses, right? Well, in real code, there is a lot of pointer dereferencing and bit shifting that may confuse a static analyzer. Many analyzers will simply give up and not even raise a warning for fear of being too noisy. The Goblint analyzer, though, is not shy; in fact, it is a sound analyzer, so it has a duty to speak up when in doubt. Your task is to evaluate the performance of a sound analyzer on low-level code, such as the relevant part of OpenSSL, and analyze what needs to be improved for a feasible sound analysis of security vulnerabilities.

2. Differential race detection for Linux Device Drivers

When we run the Goblint analyzer on Linux device drivers, there are a bit too many false positives, not because the analysis is too imprecise, but because there are plenty of implicit assumptions on how the functions exported by the device drivers can be used. Francesco Logozzo, who now spearheads the static analysis efforts at Facebook, has suggested a solution for this: verification modulo versions. We would like to see if we can extend this idea to multithreaded programs. There are some theoretical developments needed here, but the end result should be of great practical use: we want to validate the approach by checking the bleeding-edge development version of Linux device drivers against the stable version and see if any bugs have been introduced.

Depending on your interest, you can do more of the experimental work, implementation work, or theoretical work. The experimental validation requires reading Linux code and figuring out if the analysis output makes sense or not. The Linux kernel is written in low-level C, but uses many custom macros, so this mainly means that you have to learn how the Linux kernel works. Implementation work for the analyzer requires fairly good functional programming skills, as the analyzer is written in OCaml. Either way, I expect that you are also interested in the theoretical developments and are not afraid to read some of the papers and discuss the theory with us, even if you do not understand all the details immediately.

3. Witness generation

Goblint is based on data flow analysis, which over-approximates all possible behaviours of the program. It is, therefore, hard to tell whether a warning generated by the analyzer corresponds to a genuine bug. Making matters worse, as we have no guarantee that Goblint itself is free from bugs, there is also no guarantee that a program "verified" by Goblint is actually free from bugs. We would like to get out of this situation by generating witnesses that attest to the existence or absence of bugs. In the case of generating witnesses of bugs, we would like to use the format used by the model checking community: error-witness automata.

This topic can be fairly advanced, but as a minimum, which only requires Java programming, you would get into the CPAchecker tool and write an overview of how counter-example generation works. The idea then is to figure out ways to generate these counter-examples from an analysis of a multi-threaded program, though we would start with very simple test cases. The implementation for this would be in OCaml, but the more substantial part is reading papers, so that you can be involved in solving the more interesting theoretical challenge of how to actually do this conceptually.