Keynotes

Analogical Reasoning Engines: Flash Fill vs GPT-4

Sumit Gulwani, Microsoft Research

Abstract: Analogical reasoning is at the heart of problem solving. On one hand, we have domain-specific analogical reasoning engines like Flash Fill, which use symbolic AI to synthesize programs from examples. On the other hand, analogical reasoning is a fundamental emergent capability of Large Language Models (LLMs) like GPT-4, allowing them to enhance various kinds of programming experiences.

This analogical-reasoning-based perspective has two profound implications. First, framing a problem as one of analogical reasoning can elicit a more effective response from the LLM; such analogies can also be extracted from spatio-temporal context or from an example bank. Second, various ideas from programming-by-examples-or-analogies work, such as domain-specific languages, generate-and-rank, failure-guided iteration, and neuro-symbolic cooperation, can be employed to improve LLM performance. This manifests in three forms: (a) prompt engineering, which synthesizes specification-rich, context-aware prompts from various sources, sometimes using the LLM itself, to elicit optimal output; (b) post-processing techniques that guide, rank, and validate the LLM’s output, occasionally employing the LLM for these purposes; and (c) multi-turn workflows that involve multiple LLM invocations, allowing the model more time and iterations to optimize results.

I will illustrate these concepts using a variety of programming-related tasks, including programming from input-output examples or natural-language specifications, and code repair or edit suggestions. These innovations also equip GPT-4 to effectively tackle Flash-Fill-style analogies. So, who wins? Join me for the talk to find out!
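
To make the programming-by-examples setting concrete, the following minimal sketch, written for this program rather than taken from the speaker's systems, illustrates the generate-and-rank idea over a toy string-transformation DSL: enumerate candidate programs, keep those consistent with the given input-output examples, and prefer the simplest survivor. All names in the sketch (OPS, run, synthesize) are illustrative; real engines such as Flash Fill/PROSE use far richer DSLs and search strategies.

```python
from itertools import product

# Hypothetical mini-DSL: a "program" is a sequence of named string operations.
OPS = {
    "upper":      lambda s: s.upper(),
    "lower":      lambda s: s.lower(),
    "first_word": lambda s: s.split()[0] if s.split() else "",
    "last_word":  lambda s: s.split()[-1] if s.split() else "",
    "strip":      lambda s: s.strip(),
}

def run(program, s):
    """Apply each operation of the candidate program to the input string."""
    for op in program:
        s = OPS[op](s)
    return s

def synthesize(examples, max_len=2):
    """Generate-and-rank: enumerate programs up to max_len operations,
    keep those consistent with every input-output example, and rank the
    survivors by length (shorter programs are preferred)."""
    consistent = []
    for n in range(1, max_len + 1):
        for program in product(OPS, repeat=n):
            if all(run(program, inp) == out for inp, out in examples):
                consistent.append(program)
    return sorted(consistent, key=len)

if __name__ == "__main__":
    # The "analogy": do to new inputs whatever these examples demonstrate.
    examples = [("Sumit Gulwani", "GULWANI"), ("Ada Lovelace", "LOVELACE")]
    best = synthesize(examples)[0]
    print(best)                        # a shortest consistent program
    print(run(best, "Grace Hopper"))   # -> HOPPER
```

Failure-guided iteration, mentioned in the abstract, would extend such a loop by feeding back which examples a candidate violates before the next round of generation.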

Bio: Sumit Gulwani is a computer scientist connecting ideas, people, and research & practice. He invented the popular Flash Fill feature in Excel, which has now also found its place in middle-school computing textbooks. He leads the PROSE research and engineering team at Microsoft, which develops APIs for program synthesis and has incorporated them into various Microsoft products, including Visual Studio, Office, Notebooks, PowerQuery, PowerApps, PowerAutomate, PowerShell, and SQL. He is a sponsor of storytelling training and initiatives within Microsoft. He has started a novel research fellowship program in India, a remote apprenticeship model for scaling up impact while nurturing globally diverse talent and growing research leaders. He has co-authored 11 award-winning papers (including 3 test-of-time awards from ICSE and POPL) amongst 150+ research publications across multiple computer science areas and has delivered 65+ keynotes and invited talks. He was awarded the Max Planck-Humboldt Medal in 2021 and the ACM SIGPLAN Robin Milner Young Researcher Award in 2014 for his pioneering contributions to program synthesis and intelligent tutoring systems. He obtained his PhD in Computer Science from UC Berkeley, for which he received the ACM SIGPLAN Outstanding Doctoral Dissertation Award, and his BTech in Computer Science and Engineering from IIT Kanpur, for which he received the President’s Gold Medal.


Privacy-preserving Automated Reasoning

Ruzica Piskac, Yale University

Abstract: Formal methods offer a vast collection of techniques to analyze and ensure the correctness, robustness, and safety of software and hardware systems against a specification. The efficacy of such tools allows their application even to large-scale industrial codebases. Despite the significant success of formal methods techniques, privacy requirements are not considered in their design: when using automated reasoning tools, the implicit requirement is that the formula to be proved is public. To overcome this problem, we propose the concept of privacy-preserving formal reasoning.

We first present a privacy-preserving Boolean satisfiability (ppSAT) solver, which allows mutually distrustful parties to evaluate the conjunction of their input formulas while maintaining privacy. We construct an oblivious variant of the classic DPLL algorithm that can be integrated with existing secure two-party computation (2PC) techniques. This solver takes as input two private SAT formulas and checks whether their conjunction is satisfiable. In verification, however, users might instead want to keep their programs private and show, in a privacy-preserving manner, that their programs adhere to a given public specification. To this end, we developed a zero-knowledge protocol for proving the unsatisfiability of Boolean formulas in propositional logic. Our protocol is based on a resolution proof of unsatisfiability: we encoded verification of the resolution proof as polynomial equivalence checking, which enabled us to use fast zero-knowledge proof (ZKP) protocols for polynomial satisfiability.
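
To give a flavor of the proof object involved, the sketch below, which is illustrative rather than taken from the ppSAT or ZKP implementations, shows the plain (non-private) check that a resolution refutation certifies unsatisfiability; the zero-knowledge and polynomial-encoding machinery described above is omitted, and all function names are assumptions of this sketch.

```python
# Clauses are frozensets of integer literals; -v denotes the negation of v.
def resolve(c1, c2, pivot):
    """Resolve c1 and c2 on variable `pivot`; return the resolvent or None."""
    if pivot in c1 and -pivot in c2:
        return frozenset((c1 - {pivot}) | (c2 - {-pivot}))
    if -pivot in c1 and pivot in c2:
        return frozenset((c1 - {-pivot}) | (c2 - {pivot}))
    return None

def check_refutation(clauses, proof):
    """`proof` is a list of (i, j, pivot, resolvent) steps over the growing
    clause list. The proof is valid iff every step is a correct resolution
    and the final derived clause is empty (i.e., the formula is UNSAT)."""
    derived = list(clauses)
    for i, j, pivot, resolvent in proof:
        expected = resolve(derived[i], derived[j], pivot)
        if expected is None or expected != resolvent:
            return False
        derived.append(resolvent)
    return len(derived) > len(clauses) and derived[-1] == frozenset()

if __name__ == "__main__":
    # (x) AND (-x OR y) AND (-y) is unsatisfiable.
    cnf = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]
    proof = [
        (0, 1, 1, frozenset({2})),   # resolve (x), (-x v y) on x  ->  (y)
        (3, 2, 2, frozenset()),      # resolve (y), (-y) on y      ->  empty
    ]
    print(check_refutation(cnf, proof))  # True
```

In the privacy-preserving setting, the prover convinces the verifier that such a check passes without revealing the formula or the proof itself.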

We will conclude the talk by outlining future directions towards privacy-preserving SMT solvers and fully automated privacy-preserving program verification.

Bio: Ruzica Piskac is an associate professor of computer science at Yale University. Her research interests span the areas of programming languages, software verification, automated reasoning, and code synthesis. A common thread in Ruzica’s research is improving software reliability and trustworthiness using formal techniques. Ruzica joined Yale in 2013 as an assistant professor; prior to that, she was an independent research group leader at the Max Planck Institute for Software Systems in Germany. In July 2019, she was named the Donna L. Dubinsky Associate Professor of Computer Science, one of the highest recognitions that an untenured faculty member at Yale can receive. Ruzica has received various recognitions for research and teaching, including the Patrick Denantes Prize for her PhD thesis, a CACM Research Highlight paper, an NSF CAREER award, the Facebook Communications and Networking award, the Microsoft Research Award for the Software Engineering Innovation Foundation (SEIF), the Amazon Research Award, and the 2019 Ackerman Award for Teaching and Mentoring.


Verified Software Security Down to Gates

Caroline Trippel, Stanford University

Abstract: Hardware-software (HW-SW) contracts are critical for high-assurance computer systems design and an enabler for software design/analysis tools that find and repair hardware-related bugs in programs. For example, memory consistency models define what values shared-memory loads can return in a parallel program, and emerging security contracts define what program data is susceptible to leakage via hardware side channels and what speculative control- and data-flow is possible at runtime. However, these contracts and the analyses they support are useless if we cannot guarantee microarchitectural compliance, which is a “grand challenge.” Notably, some contracts are still evolving (e.g., security contracts), making hardware compliance a moving target. Even for mature contracts, comprehensively verifying that a complex microarchitecture implements some abstract contract is a time-consuming endeavor involving teams of engineers, and it typically requires resorting to incomplete proofs.
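
As a concrete illustration of the kind of question a HW-SW contract answers, and not of the synthesis or verification tooling described in this talk, the sketch below enumerates the sequentially consistent (SC) outcomes of the classic store-buffering litmus test; weaker memory consistency models such as x86-TSO additionally permit the outcome r1 = r2 = 0 because stores can linger in store buffers.

```python
from itertools import permutations

# Thread 1: x = 1; r1 = y          Thread 2: y = 1; r2 = x
T1 = [("store", "x"), ("load", "y", "r1")]
T2 = [("store", "y"), ("load", "x", "r2")]

def sc_outcomes():
    """Enumerate sequentially consistent executions: every interleaving that
    preserves each thread's program order, with loads reading the most
    recent store to that location."""
    events = [("T1", 0), ("T1", 1), ("T2", 0), ("T2", 1)]
    outcomes = set()
    for order in permutations(events):
        # Keep only interleavings that respect program order in both threads.
        if [i for t, i in order if t == "T1"] != [0, 1]:
            continue
        if [i for t, i in order if t == "T2"] != [0, 1]:
            continue
        mem, regs = {"x": 0, "y": 0}, {}
        for t, i in order:
            op = (T1 if t == "T1" else T2)[i]
            if op[0] == "store":
                mem[op[1]] = 1
            else:
                regs[op[2]] = mem[op[1]]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes

if __name__ == "__main__":
    # Under SC the outcome (r1, r2) = (0, 0) never appears.
    print(sorted(sc_outcomes()))   # [(0, 1), (1, 0), (1, 1)]
```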

Our work takes a radically different approach to the challenge above by synthesizing HW-SW contracts from advanced (i.e., industry-scale/complexity) processor implementations. In this talk, I will present our work on: synthesizing security contracts from processor specifications written in Verilog; designing compiler approaches parameterized by these contracts that can find and repair hardware-related vulnerabilities in programs; and updating hardware microarchitectures to support scalable verification and efficient security-hardened programs. I will conclude by outlining remaining challenges in attaining the vision of verified software security down to gates.

Bio: Caroline Trippel is an assistant professor in the Computer Science and Electrical Engineering Departments at Stanford University. Her research interests are in the area of computer architecture, with a focus on promoting correctness and security as first-order computer systems design metrics. A central theme of her work is leveraging formal methods techniques to design and verify hardware systems. Caroline’s research has been recognized with IEEE Top Picks distinctions, an NSF CAREER Award, the 2020 ACM SIGARCH/IEEE CS TCCA Outstanding Dissertation Award, and the 2020 CGS/ProQuest® Distinguished Dissertation Award in Mathematics, Physical Sciences, & Engineering.