ECOOP SUMMER SCHOOL 2010
ECOOP offers seven summer school sessions on June 21-25, in parallel with the technical paper sessions of the conference. Attendance at the summer school sessions is included with registration for the main conference. Sessions are offered on a first-come, first-served basis: if you wish to attend a particular session, make sure you are registered for the full conference, and get to the summer school room early!
Check Your Units! Programming the physical world using F#
Andrew Kennedy, Microsoft Research
The F# programming language combines the succinct, expressive and compositional style of functional programming with the object model of the .NET platform. A unique feature is its support for checking and inference of units-of-measure, through a novel extension to the static type system. This tutorial will introduce the F# language, with particular emphasis on the units-of-measure feature and its application in simulation, graphics, games, finance and many other domains. No prior knowledge of F# will be assumed.
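F#'s units-of-measure checking has no direct analogue in most languages, but the core idea, attaching a compile-time unit tag to a numeric value so that mixing incompatible units fails to compile, can be roughly approximated with phantom type parameters. A hypothetical Java sketch (all names here are invented for illustration; this is not the F# feature itself):

```java
// Hypothetical sketch: phantom type parameters approximating the flavor of
// F#'s units-of-measure checking. Qty, Meters and Seconds are invented names.
public class Units {
    interface Meters {}   // unit tags: never instantiated,
    interface Seconds {}  // they exist only at the type level

    static final class Qty<U> {
        final double value;
        Qty(double value) { this.value = value; }
        // Only quantities carrying the same unit tag can be added.
        Qty<U> plus(Qty<U> other) { return new Qty<>(value + other.value); }
        @Override public String toString() { return Double.toString(value); }
    }

    public static void main(String[] args) {
        Qty<Meters> d1 = new Qty<>(3.0);
        Qty<Meters> d2 = new Qty<>(4.0);
        System.out.println(d1.plus(d2)); // 7.0
        // Qty<Seconds> t = new Qty<>(1.0);
        // d1.plus(t);  // would not compile: Meters is not Seconds
    }
}
```

Unlike this encoding, F#'s type-system extension can also infer and derive compound units (for example, metres divided by seconds yielding a velocity unit), which is what makes it practical for the simulation and physics domains the session covers.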
Thorn -- Robust, Concurrent Scripting on the JVM
John Field, IBM Research
Scripting languages are justifiably popular because of their support for rapid and exploratory development. However, scripts are notoriously hard to compose and to evolve. Additionally, though more and more applications require concurrency - for example, to manage interaction with remote distributed services - support for concurrency in existing scripting languages is weak at best. In this talk, I will describe and demonstrate Thorn, a new concurrent scripting language being developed by IBM and Purdue University. I will show how Thorn's module and type annotation features support the evolution of scripts into industrial-grade programs. I will also show how Thorn's concurrency features can be used to rapidly develop scalable applications, while avoiding many of the pitfalls of Java-style concurrency. (see http://www.thorn-lang.org).
CodeContracts: Language Agnostic Contracts for .NET
Francesco Logozzo, Microsoft Research
Code Contracts provide a language-agnostic way to express coding assumptions in .NET 4.0 programs. They consist of library methods for writing preconditions, postconditions, and object invariants. Because contracts are expressed as an ordinary library, every .NET language can take advantage of them immediately: no special parser or compiler is needed, the language compilers naturally check contracts for well-formedness and compile them to MSIL, and Visual Studio offers programmers the standard IntelliSense provided by the language services. Currently three tools make use of Code Contracts: ccrewrite, which generates runtime checks from the contracts; cccheck, a static checker based on abstract interpretation that verifies contracts at compile time; and ccdoc, a tool that adds contracts to the XML documentation files and to MSDN-style help files. In the lecture, I will introduce and demo the library, outline how the rewriting tool works, detail the internals of the static checker, and explain why it is based on abstract interpretation.
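The .NET library expresses contracts as plain method calls such as `Contract.Requires` and `Contract.Ensures`. To show the idea in isolation, here is a Java-flavored sketch with an invented `Contracts` helper class; it illustrates library-based contracts in general, not the CodeContracts API itself:

```java
// Hypothetical sketch of library-based contracts, in the spirit of .NET's
// Contract.Requires / Contract.Ensures. The Contracts class is invented.
public class ContractsDemo {
    static final class Contracts {
        static void requires(boolean cond, String msg) {
            if (!cond) throw new IllegalArgumentException("Precondition failed: " + msg);
        }
        static void ensures(boolean cond, String msg) {
            if (!cond) throw new IllegalStateException("Postcondition failed: " + msg);
        }
    }

    // A contract-annotated method: the conditions are ordinary calls that the
    // compiler checks for well-formedness, as the abstract describes.
    static int isqrt(int n) {
        Contracts.requires(n >= 0, "n must be non-negative");
        int r = (int) Math.sqrt(n);
        Contracts.ensures(r * r <= n && n < (r + 1) * (r + 1), "r is the integer square root");
        return r;
    }

    public static void main(String[] args) {
        System.out.println(isqrt(17)); // prints 4
    }
}
```

In the real toolchain the checks are not simply inlined like this: ccrewrite moves postconditions to every exit point of the method, and cccheck tries to discharge them statically so no runtime check is needed at all.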
Shared memory, an elusive abstraction
Francesco Zappa Nardelli, INRIA
Multiprocessors provide an abstraction of shared memory, accessible by concurrently executing threads, which supports a wide range of software. However, exactly what this key abstraction is (what the hardware designers implement, and what programmers can depend on) is surprisingly elusive. The sophisticated optimizations implemented by modern multiprocessors have various programmer-visible effects: for some multiprocessors these effects are captured in a well-defined relaxed memory model, making it possible (if challenging) to reason with confidence about the behavior of concurrent programs; for others, however, it has been very unclear what a reasonable model is, despite extensive research over the last three decades. In this talk, I will present relaxed memory models and will reflect on the experience of trying to establish usable models for x86 multiprocessors, and for Power and ARM multiprocessors.
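A classic example of a programmer-visible relaxation is the store-buffering litmus test: each thread writes one location and then reads the other, and on x86 both reads may return 0 even though no interleaving of the four statements allows that. A small Java version of the test (names are mine; the relaxed outcome is nondeterministic and may take many runs to observe):

```java
// Store-buffering litmus test. With plain (non-volatile) fields, both threads
// may read 0: each store can sit in a store buffer while the other thread's
// load reads the old value. Declaring x and y volatile forbids that outcome
// under the Java memory model.
public class StoreBuffering {
    static int x, y, r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Sequential consistency allows (r1, r2) in {(0,1), (1,0), (1,1)};
        // a relaxed model such as x86-TSO additionally permits (0,0).
        System.out.println("r1=" + r1 + " r2=" + r2);
    }
}
```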
Bit rot -- caused by types, fought with types?
Martin Odersky, EPFL
I'm going to report on our experiences in redesigning Scala's collection libraries, focusing on the role that type systems play in keeping software architectures coherent over time. Type systems can make software architecture more explicit but, if they are too weak, can also cause code duplication. We observed that such code duplication led over time to significant bit rot in the code base: coherence was lost and many special cases crept in. To address this problem, we experimented with several new framework architectures. We settled in the end on an architecture that makes heavy use of implicit parameters in combination with a small dose of higher-kinded types. I will show in my talk how some hard typing problems that come up in connection with functional collections can be solved using these abstractions. I'll also describe some initial work aimed at hiding some of the complexity of these advanced constructs from application programmers.
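One of the hard typing problems alluded to is making a single `map` work for every collection while still returning the right collection type. The Scala design threads a builder through as an implicit parameter; Java has no implicits, but the shape of the solution can be approximated by passing the builder explicitly (a rough sketch under that assumption, with invented names):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.TreeSet;
import java.util.function.Function;
import java.util.function.Supplier;

// Rough Java approximation of a builder-driven collection design: one map
// implementation serves all collections, and the builder argument decides
// what collection type comes back. In Scala, the compiler supplies this
// argument implicitly, so call sites stay clean.
public class Builders {
    static <A, B, C extends Collection<B>> C map(
            Collection<A> in, Function<A, B> f, Supplier<C> builder) {
        C out = builder.get();              // the builder fixes the result type
        for (A a : in) out.add(f.apply(a));
        return out;
    }

    public static void main(String[] args) {
        ArrayList<Integer> doubled = map(List.of(1, 2, 3), i -> i * 2, ArrayList::new);
        TreeSet<String> upper = map(List.of("b", "a"), String::toUpperCase, TreeSet::new);
        System.out.println(doubled); // [2, 4, 6]
        System.out.println(upper);   // [A, B]
    }
}
```

Without such a shared, type-directed implementation, each collection class ends up re-implementing `map`, `filter`, and friends by hand, which is exactly the duplication the abstract identifies as a source of bit rot.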
Module Systems: The Good, the Bad and the Ugly
Rok Strniša, Cambridge University
In this lecture, I will present, through examples, the most important module-system properties that I encountered during my research on the Java Module System, a module system targeted at Java 7. I will also explain why these properties matter, what their benefits and drawbacks are, and how they fit into the big picture.
Practical Synthesis of Concurrent Systems
Martin Vechev, IBM Research
Virtually all chips today, from high-end to embedded, are being built with an increasing number of processor cores. To effectively exploit these changes in technology, all future software will have to be concurrent. This poses a tremendous challenge for software reliability methods, as building correct and efficient concurrent software is notoriously difficult. A fundamental reason for this perpetual difficulty, present in all forms of concurrent programming, is that programmers must manually discover how to synchronize interfering threads efficiently and correctly, a task that is extremely hard for humans. In this lecture, I will survey some of the techniques we have developed and used to automatically synthesize correct and efficient synchronization, and show how, using these techniques, we have automatically synthesized efficient usage of a range of classic concurrency constructs, such as atomic sections, memory fences under weak memory models, and Hoare's conditional critical regions. I will also describe how we have used our approach to systematically construct concurrent data structures and concurrent garbage collection algorithms.
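Hoare's conditional critical regions, one of the synthesis targets mentioned above, pair a boolean guard with a critical section: the section runs atomically, and only once its guard holds. Java's built-in monitors encode this pattern with `synchronized` plus `wait`/`notifyAll`, which is roughly the shape of code such synthesis produces. A minimal sketch of a one-slot buffer written as two such regions (names are mine; this illustrates the target construct, not the synthesis algorithm itself):

```java
// A one-slot buffer expressed as two conditional critical regions:
// "await (slot empty) then store" and "await (slot full) then take",
// encoded with Java's monitors (synchronized + wait/notifyAll).
public class OneSlotBuffer {
    private Integer slot = null;

    public synchronized void put(int v) throws InterruptedException {
        while (slot != null) wait();   // guard: region runs only when empty
        slot = v;
        notifyAll();                   // wake threads waiting on the other guard
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) wait();   // guard: region runs only when full
        int v = slot;
        slot = null;
        notifyAll();
        return v;
    }

    public static void main(String[] args) throws InterruptedException {
        OneSlotBuffer buf = new OneSlotBuffer();
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 3; i++) buf.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        System.out.println(buf.take() + " " + buf.take() + " " + buf.take()); // 1 2 3
        producer.join();
    }
}
```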