Episodes
-
Justin Pombrio (Brown University, USA) gives the third talk in the fifth panel, Inference and Analysis, on the 3rd day of the ICFP conference. Co-written by Shriram Krishnamurthi (Brown University, USA) and Mitchell Wand (Northeastern University, USA).
Many languages use syntactic sugar to define parts of their surface language in terms of a smaller core. Thus some properties of the surface language, like its scoping rules, are not immediately evident. Nevertheless, IDEs, refactorers, and other tools that traffic in source code depend on these rules to present information to users and to soundly perform their operations. In this paper, we show how to lift scoping rules defined on a core language to rules on the surface, a process of scope inference. In the process we introduce a new representation of binding structure--scope as a preorder--and present a theoretical advance: proving that a desugaring system preserves alpha-equivalence even though scoping rules have been provided only for the core language. We have also implemented the system presented in this paper. -
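For readers unfamiliar with the setting, here is a minimal Haskell sketch (not the paper's system, which represents scope as a preorder) of how a surface construct defined by desugaring inherits its scoping from the core:
```haskell
-- A minimal sketch (not the paper's system): a surface 'Let' is defined by
-- desugaring into the core, so its scoping rules follow from the core's.
module ScopeSketch where

-- Core language: variables, lambdas, applications.
data Core
  = CVar String
  | CLam String Core
  | CApp Core Core
  deriving Show

-- Surface language adds 'Let' as syntactic sugar.
data Surface
  = SVar String
  | SLam String Surface
  | SApp Surface Surface
  | SLet String Surface Surface   -- let x = e1 in e2
  deriving Show

-- Desugaring: let x = e1 in e2  ~>  (\x. e2) e1
desugar :: Surface -> Core
desugar (SVar x)       = CVar x
desugar (SLam x b)     = CLam x (desugar b)
desugar (SApp f a)     = CApp (desugar f) (desugar a)
desugar (SLet x e1 e2) = CApp (CLam x (desugar e2)) (desugar e1)

-- Because the bound occurrence of x lands under CLam in the core,
-- a scope-inference pass can conclude that in 'SLet x e1 e2' the
-- binder x scopes over e2 but not over e1.
```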
Martin Avanzini (University of Innsbruck, Austria) gives the second talk in the fifth panel, Inference and Analysis, on the 3rd day of the ICFP conference. This paper introduces a new methodology for the complexity analysis of higher-order functional programs, which is based on three ingredients: a powerful type system for size analysis and a sound type inference procedure for it, a ticking monadic transformation, and constraint solving. Noticeably, the presented methodology can be fully automated, and is able to analyse a series of examples which cannot be handled by most competitor methodologies. This is possible due to various key ingredients, and in particular an abstract index language and index polymorphism at higher ranks. A prototype implementation is available.
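As a rough illustration of the ticking idea (the monad and names below are illustrative, not the paper's implementation), a cost-counting writer monad makes the number of recursive calls observable alongside the result:
```haskell
-- A minimal sketch of a tick-counting monad, in the spirit of a "ticking"
-- transformation: every recursive call pays one tick.
import Control.Monad.Writer (Writer, runWriter, tell)
import Data.Monoid (Sum (..))

type Tick = Writer (Sum Int)

-- Charge one unit of cost.
tick :: Tick ()
tick = tell (Sum 1)

-- Ticked list append: the recorded cost is the length of the first list.
appendT :: [a] -> [a] -> Tick [a]
appendT []     ys = pure ys
appendT (x:xs) ys = do
  tick
  rest <- appendT xs ys
  pure (x : rest)

main :: IO ()
main = print (runWriter (appendT [1, 2, 3 :: Int] [4, 5]))
-- ([1,2,3,4,5], Sum {getSum = 3})
```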
-
Richard A. Eisenberg (Bryn Mawr College, USA) gives the first talk in the fifth panel, Inference and Analysis, on the 3rd day of the ICFP conference. Co-written by J. Garrett Morris (University of Kansas, USA).
We present an approach to support partiality in type-level computation without compromising expressiveness or type safety. Existing frameworks for type-level computation either require totality or implicitly assume it. For example, type families in Haskell provide a powerful, modular means of defining type-level computation. However, their current design implicitly assumes that type families are total, introducing nonsensical types and significantly complicating the metatheory of type families and their extensions. We propose an alternative design, using qualified types to pair type-level computations with predicates that capture their domains. Our approach naturally captures the intuitive partiality of type families, simplifying their metatheory. As evidence, we present the first complete proof of consistency for a language with closed type families. -
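A small Haskell sketch of the kind of partiality at issue: the closed family below has no equation for the empty list, yet GHC today still forms the stuck type. The class at the end merely gestures at the qualified-types style of domain predicate the paper proposes; all names here are illustrative.
```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, FlexibleInstances #-}
module PartialFamilies where

import Data.Kind (Type)

-- A partial closed type family: no equation for the empty list,
-- yet GHC will happily form the (stuck) type  Head '[].
type family Head (xs :: [Type]) :: Type where
  Head (x ': xs) = x

-- In the style the paper advocates, a predicate would capture the domain,
-- so uses of Head come qualified by evidence that the list is non-empty.
class NonEmpty (xs :: [Type])
instance NonEmpty (x ': xs)

headOk :: Head '[Int, Bool]   -- reduces to Int
headOk = 42
```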
Victor Lanvin (ENS Cachan, France) gives the third talk in the fourth panel, Integrating Static and Dynamic Typing, on the 3rd day of the ICFP conference. Co-written by Giuseppe Castagna (CNRS, University of Paris Diderot).
We propose a type system for functional languages with gradual types and set-theoretic type connectives and prove its soundness. In particular, we show how to lift the definition of the domain and result type of an application from non-gradual types to gradual ones and likewise for the subtyping relation. We also show that deciding subtyping for gradual types can be reduced in linear time to deciding subtyping on non-gradual types and that the same holds true for all subtyping-related decision problems that must be solved for type inference. More generally, this work not only enriches gradual type systems with unions and intersections and with the type precision that arises from their use, but also proposes and advocates a new style of programming with gradual types where union and intersection types are used by programmers to instruct the system to perform fewer dynamic checks. -
Yuu Igarashi (Kyoto University, Japan) gives the second talk in the fourth panel, Integrating Static and Dynamic Typing, on the 3rd day of the ICFP conference. Co-written by Taro Sekiyama (IBM Research, Japan), Atsushi Igarashi (Kyoto University, Japan).
We study an extension of gradual typing--a method to integrate dynamic typing and static typing smoothly in a single language--to parametric polymorphism and its theoretical properties, including conservativity of typing and semantics over both statically and dynamically typed languages, type safety, blame-subtyping theorem, and the gradual guarantee--the so-called refined criteria, advocated by Siek et al. We develop System F-G, which is a gradually typed extension of System F with the dynamic type and a new type consistency relation, and a translation to a new polymorphic blame calculus System F-C, which is based on previous polymorphic blame calculi by Ahmed et al. The design of System F-G and System F-C, geared to the criteria, is influenced by the distinction between static and gradual type variables, first observed by Garcia and Cimini. This distinction is also useful to execute statically typed code without incurring additional overhead to manage type names as in the prior calculi. We prove that System F-G satisfies most of the criteria: all but the hardest property of the gradual guarantee on semantics. We show that a key conjecture to prove the gradual guarantee leads to the Jack-of-All-Trades property, conjectured as an important property of the polymorphic blame calculus by Ahmed et al. -
Amal Ahmed (Northeastern University, USA) gives the first talk in the fourth panel, Integrating Static and Dynamic Typing, on the 3rd day of the ICFP conference. Co-written by Dustin Jamner (Northeastern University, USA), Jeremy G. Siek (Indiana University, USA), Philip Wadler (University of Edinburgh, UK).
The polymorphic blame calculus integrates static typing, including universal types, with dynamic typing. The primary challenge with this integration is preserving parametricity: even dynamically-typed code should satisfy it once it has been cast to a universal type. Ahmed et al. (2011) employ runtime type generation in the polymorphic blame calculus to preserve parametricity, but a proof that it does so has been elusive. Matthews and Ahmed (2008) gave a proof of parametricity for a closely related system that combines ML and Scheme, but later found a flaw in their proof. In this paper we present an improved version of the polymorphic blame calculus and we prove that it satisfies relational parametricity. The proof relies on a step-indexed Kripke logical relation. The step-indexing is required to make the logical relation well-defined in the case for the dynamic type. The possible worlds include the mapping of generated type names to their types and the mapping of type names to relations. We prove the Fundamental Property of this logical relation and that it is sound with respect to contextual equivalence. To demonstrate the utility of parametricity in the polymorphic blame calculus, we derive two free theorems. -
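As background, this is the flavour of free theorem that relational parametricity licenses; the snippet below is a standard textbook example, not taken from the paper:
```haskell
-- Standard background, not the paper's development: for any total function
--   r :: forall a. [a] -> [a]
-- parametricity gives   map f . r == r . map f   for every f.
rev :: [a] -> [a]
rev = reverse

checkFreeTheorem :: Bool
checkFreeTheorem =
  map (+ 1) (rev [1, 2, 3 :: Int]) == rev (map (+ 1) [1, 2, 3])

main :: IO ()
main = print checkFreeTheorem   -- True
```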
Peter Thiemann (University of Freiburg, Germany) gives the fourth talk in the third panel, Contracts and Sessions, on the 3rd day of the ICFP conference. Co-written by Atsushi Igarashi (Kyoto University, Japan), Vasco Vasconcelos (University of Lisbon, Portugal), Philip Wadler (University of Edinburgh, UK).
Session types are a rich type discipline, based on linear types, that lifts the sort of safety claims that come with type systems to communications. However, web-based applications and micro services are often written in a mix of languages, with type disciplines in a spectrum between static and dynamic typing. Gradual session types address this mixed setting by providing a framework which grants seamless transition between statically typed handling of sessions and any required degree of dynamic typing.
We propose GradualGV as an extension of the functional session type system GV with dynamic types and casts. We demonstrate type and communication safety as well as blame safety, thus extending previous results to functional languages with session-based communication. The interplay of linearity and dynamic types requires a novel approach to specifying the dynamics of the language. -
Stephanie Balzer (Carnegie Mellon University, USA) gives the third talk in the third panel, Contracts and Sessions, on the 3rd day of the ICFP conference. Co-written by Frank Pfenning (Carnegie Mellon University, USA).
Session-typed languages building on the Curry-Howard isomorphism between linear logic and session-typed communication guarantee session fidelity and deadlock freedom. Unfortunately, these strong guarantees exclude many naturally occurring programming patterns pertaining to shared resources. In this paper, we introduce sharing into a session-typed language where types are stratified into linear and shared layers with modal operators connecting the layers. The resulting language retains session fidelity but not the absence of deadlocks, which can arise from contention for shared processes. We illustrate our language on various examples, such as the dining philosophers problem, and provide a translation of the untyped asynchronous pi-calculus into our language. -
Lucas Waye (Harvard University, USA), gives the second talk in the third panel, Contracts and Sessions, on the 3rd day of the ICFP conference. Co-written by Christos Dimoulas (Harvard University, USA), Stephen Chong (Harvard University, USA).
Modern service-oriented applications forgo semantically rich protocols and middleware when composing services. Instead, they embrace the loosely-coupled development and deployment of services that communicate via simple network protocols. Even though these applications do expose interfaces that are higher-order in spirit, the simplicity of the network protocols forces them to rely on brittle low-level encodings. To bridge the apparent semantic gap, programmers introduce ad-hoc and error-prone defensive code. Inspired by Design by Contract, we choose a different route to bridge this gap. We introduce Whip, a contract system for modern services. Whip (i) provides programmers with a higher-order contract language tailored to the needs of modern services; and (ii) monitors services at run time to detect services that do not live up to their advertised interfaces. Contract monitoring is local to a service. Services are treated as black boxes, allowing heterogeneous implementation languages without modification to services' code. Thus, Whip does not disturb the loosely coupled nature of modern services. -
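A minimal Haskell sketch of higher-order contracts in the Design-by-Contract tradition that Whip builds on; the combinators and names below are illustrative and are not Whip's API:
```haskell
-- Illustrative higher-order contracts: a contract either passes a value
-- through or blames the party that broke it. (Not Whip's API.)
type Party = String

newtype Contract a = Contract { monitor :: Party -> a -> a }

-- First-order contract: check a predicate on the value itself.
flat :: (a -> Bool) -> String -> Contract a
flat ok msg = Contract $ \party v ->
  if ok v
    then v
    else error ("contract violation (" ++ msg ++ "), blame " ++ party)

-- Higher-order contract: wrap a function, checking its argument against
-- the caller and its result against the callee.
func :: Contract a -> Contract b -> Contract (a -> b)
func pre post = Contract $ \party f ->
  \x -> monitor post party (f (monitor pre ("caller of " ++ party) x))

-- Example: a service endpoint that promises a non-negative result
-- for a positive input.
sqrtService :: Double -> Double
sqrtService = monitor (func (flat (> 0) "positive input")
                            (flat (>= 0) "non-negative output"))
                      "sqrtService"
                      sqrt

main :: IO ()
main = print (sqrtService 9)  -- 3.0; sqrtService (-1) would blame the caller
```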
Sebastian Ullrich (KIT, Germany), gives the fourth talk in the second panel, Dependently Typed Programming, on the 3rd day of the ICFP conference. Co-written by Gabriel Ebner (Vienna University of Technology, Austria), Jared Roesch (University of Washington, USA), Jeremy Avigad (Carnegie Mellon University, USA), Leonardo de Moura (Microsoft Research, USA).
Dependent type theory is a powerful framework for interactive theorem proving and automated reasoning, allowing us to encode mathematical objects, data type specifications, assertions, proofs, and programs, all in the same language.
Here we show that dependent type theory can also serve as its own metaprogramming language, that is, a language in which one can write programs that assist in the construction and manipulation of terms in dependent type theory itself. Specifically, we describe the metaprogramming language currently in use in the Lean theorem prover, which extends Lean's object language with an API for accessing natively implemented procedures and provides ways of reflecting object-level expressions into the metalanguage. We provide evidence to show that our language is performant, and that it provides a convenient and flexible way of writing not only small-scale interactive tactics, but also more substantial kinds of automation. -
Andreas Abel (University of Gothenburg, Sweden), gives the first talk in the second panel, Dependently Typed Programming, on the 3rd day of the ICFP conference. Co-written by Andrea Vezzosi (Chalmers University of Technology, Sweden), Theo Winterhalter (ENS Paris-Saclay, France).
Sized types have been developed to make termination checking more perspicuous, more powerful, and more modular by integrating termination into type checking. In dependently-typed proof assistants where proofs by induction are just recursive functional programs, the termination checker is an integral component of the trusted core, as the validity of proofs depends on termination. However, a rigorous integration of full-fledged sized types into dependent type theory is lacking so far. Such an integration is non-trivial, as explicit sizes in proof terms might get in the way of equality checking, making terms appear distinct that should have the same semantics.
In this article, we integrate dependent types and sized types with higher-rank size polymorphism, which is essential for generic programming and abstraction. We introduce a size quantifier 'forall' which lets us ignore sizes in terms for equality checking, along with a second quantifier 'Pi' for abstracting over sizes that do affect the semantics of types and terms. Judgmental equality is decided by an adaptation of normalization-by-evaluation for our new type theory, which features type-shape-directed reflection and reification. It follows that subtyping and type checking of normal forms are decidable as well, the latter by a bidirectional algorithm. -
Antoine Voizard (University of Pennsylvania, USA), gives the first talk in the second panel, Dependently Typed Programming, on the 3rd day of the ICFP conference. Co-written by Stephanie Weirich (University of Pennsylvania, USA), Pedro Henrique Azevedo de Amorim (Ecole Polytechnique, France and University of Campinas, Brazil), Richard A. Eisenberg (Bryn Mawr College, USA).
We propose a core semantics for Dependent Haskell, an extension of Haskell with full-spectrum dependent types. Our semantics consists of two related languages. The first is a Curry-style dependently-typed language with nontermination, irrelevant arguments, and equality abstraction. The second, inspired by the Glasgow Haskell Compiler's core language FC, is its explicitly-typed analogue, suitable for implementation in GHC. All of our results--chiefly, type safety, along with theorems that relate these two languages--have been formalized using the Coq proof assistant. Because our work is backwards compatible with Haskell, our type safety proof holds in the presence of nonterminating computation. However, unlike other full-spectrum dependently-typed languages, such as Coq, Agda or Idris, because of this nontermination, Haskell's term language does not correspond to a consistent logic. -
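For context, today's GHC already approximates dependently-typed programming with GADTs and DataKinds, as in the standard length-indexed vectors below; Dependent Haskell aims to make this style direct. The sketch is background material, not the paper's core calculus:
```haskell
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}
module VecSketch where

data Nat = Z | S Nat

-- Length-indexed vectors: the length lives in the type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Safe head: the empty case is ruled out by the type.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
-- e.g. vhead (VCons 'a' VNil) == 'a'

-- Appending vectors adds the lengths, tracked by a type family.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

vappend :: Vec m a -> Vec n a -> Vec (Add m n) a
vappend VNil         ys = ys
vappend (VCons x xs) ys = VCons x (vappend xs ys)
```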
Robby Findler (Northwestern University, USA), gives the first talk in the first panel, Domain-Specific Languages, on the 3rd day of the ICFP conference. Co-written by Vincent St-Amour, Daniel Feltey, Spencer P. Florence, and Shu-Hung You (Northwestern University, USA).
Domain-specific languages are the ultimate abstraction, dixit Paul Hudak. But what abstraction should we use to build such ultimate abstractions? What is sauce for the goose is sauce for the gander: a language, of course!
Racket is the ultimate abstraction-abstraction, a platform for quickly and easily building new ultimate abstractions. This pearl demonstrates Racket's power by taking a leisurely walk through the implementation of a DSL for Lindenmayer systems, the computational model par excellence of theoretical botany. -
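As a taste of the underlying computational model (this is plain Haskell, not the Racket DSL presented in the pearl), one generation of an L-system rewrites every symbol of the string in parallel:
```haskell
-- A minimal sketch of a Lindenmayer system: each generation rewrites every
-- symbol according to the rules, keeping symbols that have no rule.
import Data.Maybe (fromMaybe)

type Rules = [(Char, String)]

-- One generation of parallel rewriting.
step :: Rules -> String -> String
step rules = concatMap (\c -> fromMaybe [c] (lookup c rules))

-- Lindenmayer's original algae system: A -> AB, B -> A.
algae :: Rules
algae = [('A', "AB"), ('B', "A")]

main :: IO ()
main = mapM_ putStrLn (take 5 (iterate (step algae) "A"))
-- A, AB, ABA, ABAAB, ABAABABA
```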
Francois Pottier (Inria, France), gives the second talk in the fourth panel, Program Construction, on the 2nd day of the ICFP conference.
Traversing and transforming abstract syntax trees that involve name binding is notoriously difficult to do in a correct, concise, modular, customisable manner. We address this problem in the setting of OCaml, a functional programming language equipped with powerful object-oriented features. We use visitor classes as partial, composable descriptions of the operations that we wish to perform on abstract syntax trees. We introduce "visitors", a simple type-directed facility for generating visitor classes that have no knowledge of binding. Separately, we present "alphaLib", a library of small hand-written visitor classes, each of which knows about a specific binding construct, a specific representation of names, and/or a specific operation on abstract syntax trees. By combining these components, a wide range of operations can be defined. Multiple representations of names can be supported, as well as conversions between representations. Binding structure can be described either in a programmatic style, by writing visitor methods, or in a declarative style, via preprogrammed binding combinators.
-
Conal Elliott (Target, USA) gives the first talk in the fourth panel, Program Construction, on the 2nd day of the ICFP conference.
It is well-known that the simply typed lambda-calculus is modeled by any cartesian closed category (CCC). This correspondence suggests giving typed functional programs a variety of interpretations, each corresponding to a different category. A convenient way to realize this idea is as a collection of meaning-preserving transformations added to an existing compiler, such as GHC for Haskell. This paper describes such an implementation and demonstrates its use for a variety of interpretations including hardware circuits, automatic differentiation, incremental computation, and interval analysis. Each such interpretation is a category easily defined in Haskell (outside of the compiler). The general technique appears to provide a compelling alternative to deeply embedded domain-specific languages.
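A hedged sketch of the core idea: the same categorical vocabulary can be interpreted by ordinary functions or reified as syntax for later inspection or compilation. The toy `Syn` category below is illustrative and is not the paper's implementation:
```haskell
{-# LANGUAGE GADTs #-}
import Prelude hiding (id, (.))
import Control.Category (Category (..))

-- A syntactic category: morphisms are a record of how they were built.
data Syn a b where
  Id   :: Syn a a
  Comp :: Syn b c -> Syn a b -> Syn a c
  Prim :: String -> (a -> b) -> Syn a b

instance Category Syn where
  id  = Id
  (.) = Comp

-- One interpretation: back to ordinary functions.
eval :: Syn a b -> (a -> b)
eval Id         = \x -> x
eval (Comp g f) = \x -> eval g (eval f x)
eval (Prim _ f) = f

-- Another interpretation: a printable description of the pipeline.
render :: Syn a b -> String
render Id         = "id"
render (Comp g f) = render g ++ " . " ++ render f
render (Prim n _) = n

pipeline :: Syn Int Int
pipeline = Prim "double" (* 2) . Prim "inc" (+ 1)

main :: IO ()
main = do
  putStrLn (render pipeline)   -- "double . inc"
  print (eval pipeline 10)     -- 22
```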
-
Benjamin Cosman (University of California at San Diego, USA) gives the third talk in the second panel, Tools for Verification, on the 2nd day of the ICFP conference. Co-written by Ranjit Jhala (University of California at San Diego, USA).
We introduce the FUSION algorithm for local refinement type inference, yielding a new SMT-based method for verifying programs with polymorphic data types and higher-order functions. FUSION is concise as the programmer need only write signatures for (externally exported) top-level functions and places with cyclic (recursive) dependencies, after which FUSION can predictably synthesize the most precise refinement types for all intermediate terms (expressible in the decidable refinement logic), thereby checking the program without false alarms. We have implemented FUSION and evaluated it on the benchmarks from the LiquidHaskell suite totalling about 12KLOC. FUSION checks an existing safety benchmark suite using about half as many templates as previously required and nearly 2x faster. In a new set of theorem proving benchmarks FUSION is both 10 - 50x faster and, by synthesizing the most precise types, avoids false alarms to make verification possible. -
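For orientation, the snippet below shows standard LiquidHaskell-style refinement annotations (assuming the usual `len` measure); FUSION's contribution, not shown here, is synthesizing precise refinement types for the unannotated local bindings:
```haskell
-- Illustrative LiquidHaskell-style annotation: only the exported function
-- carries a refinement signature; the local helpers' refinement types are
-- left for the inference engine to synthesize.
{-@ average :: {xs:[Int] | len xs > 0} -> Int @-}
average :: [Int] -> Int
average xs = total `div` count
  where
    -- Local, unannotated bindings: candidates for inferred refinements.
    total = sum xs
    count = length xs

main :: IO ()
main = print (average [1, 2, 3])   -- 2
```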
Konstantin Weitz (University of Washington, USA) gives the second talk in the second panel, Tools for Verification, on the 2nd day of the ICFP conference. Co-written by Steven Lyubomirsky (University of Washington, USA), Stefan Heule (Stanford University, USA), Emina Torlak (University of Washington, USA), Michael D. Ernst (University of Washington, USA), Zachary Tatlock (University of Washington, USA).
Many verification tools build on automated solvers. These tools reduce problems in a specific application domain (e.g., compiler optimization validation) to queries that can be discharged with a highly optimized solver. But the correctness of the reductions themselves is rarely verified in practice, limiting the confidence that the solver's output establishes the desired domain-level property.
This paper presents SpaceSearch, a new library for developing solver-aided tools within a proof assistant. A user builds their solver-aided tool in Coq against the SpaceSearch interface, and the user then verifies that the results provided by the interface are sufficient to establish the tool's desired high-level properties. Once verified, the tool can be extracted to an implementation in a solver-aided language (e.g., Rosette), where SpaceSearch provides an efficient instantiation of the SpaceSearch interface with calls to an underlying SMT solver. This combines the strong correctness guarantees of developing a tool in a proof assistant with the high performance of modern SMT solvers. This paper also introduces new optimizations for such verified solver-aided tools, including parallelization and incrementalization.
We evaluate SpaceSearch by building and verifying two solver-aided tools. The first, SaltShaker, checks that RockSalt's x86 semantics for a given instruction agrees with STOKE's x86 semantics. SaltShaker identified 7 bugs in RockSalt and 1 bug in STOKE. After these systems were patched by their developers, SaltShaker verified the semantics' agreement on 15,255 instruction instantiations in under 2h. The second tool, BGProof, is a verified version of an existing Border Gateway Protocol (BGP) router configuration checker. Like the existing checker, BGProof scales to checking industrial configurations spanning over 240 KLOC, identifying 19 configuration inconsistencies with no false positives. However, the correctness of BGProof has been formally proven, and we found 2 bugs in the unverified implementation. These results demonstrate that SpaceSearch is a practical approach to developing efficient, verified solver-aided tools.
We present a logic, called Relational Higher Order Logic (RHOL), for proving relational properties of a simply typed lambda-calculus with inductive types and recursive definitions. RHOL retains the type-directed flavour of relational refinement type systems but achieves greater expressivity through rules which simultaneously reason about the two terms as well as rules which only contemplate one of the two terms. We show that RHOL has strong foundations, by proving an equivalence with higher-order logic (HOL), and leverage this equivalence to derive key meta-theoretical properties: subject reduction, admissibility of a transitivity rule and set-theoretical soundness.
Moreover, we define sound embeddings for several existing relational type systems such as relational refinement types and type systems for dependency analysis and relative cost, and we verify examples that were out of reach of prior work. -
Kami: A Platform for High-Level Parametric Hardware Specification and Its Modular Verification. Co-written by Joonwon Choi, Benjamin Sherman, Adam Chlipala, and Arvind (Massachusetts Institute of Technology, USA).
It has become fairly standard in the programming-languages research world to verify functional programs in proof assistants using induction, algebraic simplification, and rewriting. In this paper, we introduce Kami, a Coq library that enables similar expressive and modular reasoning for hardware designs expressed in the style of the Bluespec language. We can specify, implement, and verify realistic designs entirely within Coq, ending with automatic extraction into a pipeline that bottoms out in FPGAs. Our methodology, using labeled transition systems, has been evaluated in a case study verifying an infinite family of multicore systems, with cache-coherent shared memory and pipelined cores implementing (the base integer subset of) the RISC-V instruction set. -
Milo Davis (Northeastern University, USA) gives the fourth talk in the second panel, Foundations of Higher-Order Programming, on the 2nd day of the ICFP conference. Co-written by William Meehan and Olin Shivers (Northeastern University, USA).
Algorithms that convert direct-style lambda-calculus terms to their equivalent terms in continuation-passing style (CPS) typically introduce so-called 'administrative redexes' -- useless artifacts of the conversion that must be cleaned up by a subsequent pass over the result to reduce them away. We present a simple, linear-time algorithm for CPS conversion that introduces no administrative redexes. In fact, the output term is a normal form in a reduction system that generalizes the notion of 'administrative redexes' to what we call 'no-brainer redexes,' that is, redexes whose reduction shrinks the size of the term. We state the theorems which establish the algorithm's desirable properties, along with sketches of the full proofs. -
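A hedged sketch of the classic higher-order, one-pass approach in the Danvy-Filinski style, which avoids administrative redexes by using metalanguage continuations; the paper's algorithm and its 'no-brainer' normal forms go further than this illustration:
```haskell
-- One-pass CPS conversion with metalanguage continuations, so no
-- administrative beta-redexes are built. (Illustrative only; it assumes
-- source variables are not named "k0", "k1", ...)
import Control.Monad.State (State, evalState, get, put)

data Term = Var String | Lam String Term | App Term Term
  deriving Show

fresh :: State Int String
fresh = do { n <- get; put (n + 1); pure ("k" ++ show n) }

-- cps e k: convert e, passing its value to the metalanguage continuation k.
cps :: Term -> (Term -> State Int Term) -> State Int Term
cps (Var x)   k = k (Var x)
cps (Lam x b) k = do
  kv   <- fresh
  body <- cps b (\v -> pure (App (Var kv) v))
  k (Lam x (Lam kv body))
cps (App f a) k = do
  kv  <- fresh
  arg <- k (Var kv)            -- build the syntactic continuation once
  cps f (\fv ->
    cps a (\av ->
      pure (App (App fv av) (Lam kv arg))))

toCPS :: Term -> Term
toCPS e = evalState (cps e (\v -> pure (App (Var "halt") v))) 0

main :: IO ()
main = print (toCPS (App (Var "f") (Var "x")))
-- App (App (Var "f") (Var "x")) (Lam "k0" (App (Var "halt") (Var "k0")))
```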
Thibaut Balabonski (LRI, France and University of Paris-Sud, France) gives the third talk in the second panel, Foundations of Higher-Order Programming, on the 2nd day of the ICFP conference. Co-written by Pablo Barenbaum (University of Buenos Aires, Argentina/IRIF, France/University of Paris Diderot, France), Eduardo Bonelli (CONICET, Argentina/Universidad Nacional de Quilmes, Argentina), Delia Kesner (IRIF/University of Paris Diderot, France).
We present a call-by-need strategy for computing strong normal forms of open terms (reduction is admitted inside the body of abstractions and substitutions, and the terms may contain free variables), which guarantees that arguments are only evaluated when needed and at most once. The strategy is shown to be complete with respect to beta-reduction to strong normal form. The proof of completeness relies on two key tools: (1) the definition of a strong call-by-need calculus where reduction may be performed inside any context, and (2) the use of non-idempotent intersection types. More precisely, terms admitting a beta-normal form in pure lambda calculus are typable, typability implies (weak) normalisation in the strong call-by-need calculus, and weak normalisation in the strong call-by-need calculus implies normalisation in the strong call-by-need strategy. Our (strong) call-by-need strategy is also shown to be conservative over the standard (weak) call-by-need. -