Language-based Guarantees Beyond Safety in Rust-Based Systems

Systems, PL, Static Analysis

Rust is an exciting language for systems research because it combines the performance of C++ with language-based guarantees such as memory safety.

We gained first-hand experience with the opportunities Rust creates for systems research while working on Sesame, which leverages properties of the Rust programming language to enforce privacy policies in web applications.

This research investigates what makes Rust a good match for systems research in principle, and what challenges arise when using Rust for systems in practice. We are interested in building additional tooling (e.g., static analysis, compilers and compiler plugins, formal techniques) to overcome these challenges and unleash Rust's full potential in systems research broadly, and in security- and privacy-conscious systems specifically.

Active Projects

Over the last decade, researchers have developed several systems that rely on Rust to provide important security, privacy, or isolation guarantees. Systems that do so for client-provided extensions or applications are of particular interest to us.

Often, these systems require that extensions be written in "the safe subset of Rust", putting unsafe code out of scope. However, this excludes the majority of Rust programs in practice, as unsafe is common in low-level libraries.
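As a concrete flavor, the sketch below shows a pattern that is routine in low-level Rust: an unchecked slice read justified by a one-time bounds check. The function is hypothetical and not drawn from any particular crate, but demanding "the safe subset" rules out code like it, and with it the many extensions that depend on such libraries.

```rust
// A minimal sketch (hypothetical function) of the kind of unsafe that is
// routine in low-level code: an unchecked slice read used to skip a redundant
// bounds check on a hot path.
fn sum_prefix(bytes: &[u8], n: usize) -> u64 {
    assert!(n <= bytes.len()); // the bound is established once, up front
    let mut total = 0u64;
    for i in 0..n {
        // SAFETY: i < n <= bytes.len(), so the index is in bounds.
        total += unsafe { *bytes.get_unchecked(i) } as u64;
    }
    total
}

fn main() {
    assert_eq!(sum_prefix(&[1, 2, 3, 4], 3), 6);
}
```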

In this research, we discovered that this requirement is about more than memory safety: we devised several examples of unsafe code that is memory safe and "reasonable" but still causes a catastrophic loss of guarantees for many Rust-based systems.
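To give a flavor of what such code can look like, here is a minimal, hypothetical sketch (our illustration, not one of the project's examples). Suppose a system enforces a privacy policy by hiding data behind a wrapper whose only safe accessor performs a policy check; a single transmute defeats the guarantee without violating memory safety.

```rust
use std::mem;

mod policy {
    // The inner String is private; safe code outside this module can only
    // reach it through `read`, which performs the policy check.
    #[repr(transparent)]
    pub struct Policied(String);

    impl Policied {
        pub fn new(data: String) -> Self { Policied(data) }
        pub fn read(&self, policy_ok: bool) -> Option<&String> {
            if policy_ok { Some(&self.0) } else { None }
        }
    }
}

fn main() {
    let secret = policy::Policied::new("ssn: 123-45-6789".into());

    // Memory safe: #[repr(transparent)] guarantees Policied and String share
    // a layout, so this transmute has defined behavior. Yet it reads the data
    // without any policy check -- no memory-safety rule is broken, but the
    // system's privacy guarantee is silently gone.
    let leaked: &String = unsafe { mem::transmute::<&policy::Policied, &String>(&secret) };
    println!("{leaked}");
}
```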

We are building reusable tooling to extend the guarantees and scope of Rust-based systems to code that uses unsafe.

Initially, we focused on surfacing potentially dangerous unsafe code for auditing by developers. A major challenge is reducing the number of unsafe blocks that require auditing by correctly identifying and skipping irrelevant blocks or blocks that are demonstrably not dangerous. See Corinn's honors thesis for more details.

We are currently building a customizable compiler plugin, Sniffer, that can identify and reason about unsafe code automatically, without developer audits or intervention. This includes automatically determining whether an unsafe block poses any danger to high-level system guarantees (e.g., privacy policy enforcement or isolation).

Future Ideas

We are interested in designing efficient, reliable, and easy-to-use general extension mechanisms for systems, such as eBPF in operating systems or user-defined functions (UDFs) in databases. We hope to leverage our existing research and tooling (PEAR and Sniffer) to realize this goal, for example by enhancing or extending Rust with additional properties that ensure extensions satisfy desired requirements by construction.

Past Projects

PEAR is a framework for Rust static analysis that enables complex interprocedural analyses. PEAR provides analysis designers with an easy-to-use API that captures the precise function-call relations in the Rust program under analysis. Specifically, it constructs a monomorphized call graph (MCG) of the underlying program, which higher-level analyses can consume.
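To see why monomorphization and dynamic-dispatch resolution matter for a call graph, consider the small, hypothetical program below (illustrative only; this is not PEAR's API). An analysis that tracks where data can flow needs to know which concrete `write` runs at each call site, which is exactly the information an MCG records.

```rust
trait Sink {
    fn write(&self, msg: &str);
}

struct Console;
struct Network;

impl Sink for Console {
    fn write(&self, msg: &str) { println!("console: {msg}"); }
}
impl Sink for Network {
    fn write(&self, msg: &str) { println!("network: {msg}"); }
}

// Generic call site: a non-monomorphized graph only sees `log::<T> -> Sink::write`.
// A monomorphized call graph instead records log::<Console> -> Console::write
// and log::<Network> -> Network::write as separate edges.
fn log<T: Sink>(sink: &T, msg: &str) {
    sink.write(msg);
}

// Dynamic dispatch: the callee behind `dyn Sink` must be resolved to the set of
// impls that can actually reach this call site (Console and Network here).
fn log_dyn(sink: &dyn Sink, msg: &str) {
    sink.write(msg);
}

fn main() {
    log(&Console, "hello");
    log(&Network, "hello");
    log_dyn(&Network, "hello");
}
```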

PEAR powers two static analysis tools we use in our research: Scrutinizer, Sesame's leakage-freedom static analyzer, and Sniffer, a tool for identifying and reasoning about dangerous unsafe Rust code. PEAR's evaluation confirms that its MCG construction is sound, including resolving dynamic dispatch, and that it allows expressing complex analyses with significantly less effort and higher accuracy than analyses written from scratch.

See Artem's honors thesis for more details.