LIVE 2024
The Tenth Workshop on Live Programming

The Tenth Workshop on Live Programming (LIVE 2024) will take place in Los Angeles, California, in conjunction with SPLASH 2024.

Key dates
Workshop date: October 21, 2024
Format: in-person workshop
Onsite venue: Los Angeles, CA

Where UX meets PL

Programming is cognitively demanding, and too difficult. LIVE is a workshop exploring new user interfaces that improve the immediacy, usability, and learnability of programming. Whereas PL research traditionally focuses on programs, LIVE focuses more on the activity of programming.

Themes

Programmers don't materialise programs out of thin air, but construct them out of existing programs. Embracing this insight leads to a different focus at LIVE compared to traditional PL conferences. Here are some of the qualities that we care about:

Live. Live programming systems give the programmer immediate feedback on the output of a program as it is being edited, replacing the edit-compile-debug cycle with a fluid programming experience. Liveness can also mean providing feedback about how the static meaning of the program is changing, such as its type.
Structured. A program is highly structured and meaningful to the programmer, even in traditionally invalid states. “Structure-aware” programming environments understand and preserve that structure, and allow operations at the level of the structure, rather than at the level of raw text.
Tangible. In the traditional view of programs, execution takes place behind the scenes, and leaves little record of what happened. We are interested in programming systems that make execution transparent, tangible and explorable.
Concrete. People find it easier to start with concrete examples and generalise afterwards. Programming tools tailored to people will support this mode of working.

The majority of LIVE submissions are demonstrations of novel programming systems. Technical papers, insightful and clearly articulated experience reports, theoretical papers that propose and verify generalized principles, literature reviews, and position papers are also welcome.

Our goal is to provide a supportive venue where early-stage work receives constructive criticism. Whether graduate students or tenured faculty, researchers need a forum to discuss new ideas and get helpful feedback from their peers.

Program

This year we've accepted 16 papers, listed below! Schedule TBD.

Code flow canvas - a generic visual programming system

Maikel van de Lisdonk

I strongly believe that visual programming can help software development in many ways. "A picture says more than a thousand words", especially if it's interactive. That's why it's very important that a visual programming system be embeddable in existing code bases and not be a silo. That's what I am focusing on and want to show in this talk.

The visual programming system that I am developing has a generic embeddable core that can be used to develop multiple visual programming languages in a live infinite-canvas web environment. The execution engine is separated from the core so that different execution engines can be implemented and deployed with an application (even using different technologies than the visual programming design core itself, which is JavaScript/web). The results can be viewed and debugged live. How the results look and how they can be debugged depend on the behavior of the targeted application.

In the current demo setups (see https://codeflowcanvas.io) there's a WebGL shader flow and a web-client-app flow. In the web-client-app flow, debugging is done using a timeline slider, which can be used to view all of the steps that the program executed, including its state. In the WebGL pixel shader, the result is updated live and the timeline slider is used to scrub through the generated shaders when creating or modifying a visual flow.

Some nice features include:

  • generic node composition
  • replacing nodes with compatible nodes and seeing the impact of the change directly
  • inserting compatible nodes into connections

Snappets: a VR animation system based on Projective Geometric Algebra

Hamish Todd (Girih games)

Snappets starts from the premise that humans have a huge amount of intuition about how the world works that is embodied in hand movement. The way animators work with conventional software, they will often make motions with their hands to help them imagine movements. Snappets tries to connect the computer directly with that, aiming to make animation more like puppetry or sculpting.

But the idea goes deeper: in programming and mathematics, as well as animation, the idea of "transformation A followed by transformation B" is fundamental. To compose two existing transformations in Snappets, you take your hand through the desired motion and press a button - the system will recognize this as "transform composition" and create an entry in a spreadsheet corresponding to that object.

In "Projective Geometric Algebra" (PGA), the system on which Snappets is based, all geometric objects such as points and planes are also transforms (and therefore elements of a "group"). Snappets exposes PGA operations to the user to allow them to solve geometric operations using transform composition and variants of it.
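
As general PGA background (a standard fact about the algebra, not a claim about Snappets' interface), the "objects are transforms" idea can be made concrete: a normalized plane acts as a reflection, and the geometric product of two planes composes those reflections into a rotation about their intersection line.

```latex
% Planes a and b are elements of the algebra and act as reflections.
% Their geometric product is a rotor composing the two reflections:
R = b\,a, \qquad X \mapsto R\,X\,\widetilde{R},
% which rotates X by twice the dihedral angle between a and b,
% about the intersection line given by their meet a \wedge b.
```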

Inkling: Sketching Dynamic Systems

Marcel Goethals (Ink & Switch)
Alessandro Warth (Ink & Switch)
Ivan Reese (Ink & Switch)

A back-of-the-napkin sketch is one of the most popular ways to explore and share a rough idea. But if the idea is about something dynamic, like a clock or a dance, static sketches can only get you so far. In this video we introduce Inkling, a tablet-based “programmable napkin” that enables users to sketch and play with dynamic systems. The magic of Inkling comes from constraints you can draw and manipulate so quickly and fluidly that you could do it over a beer, in support of a conversation.

Live Programming a Live Programming Environment: An Experience Report

Elliot Evans (elliot.website)
Philippa Markovics (Nextjournal)
Martin Kavalar (Nextjournal)
Andrea Amantini (Nextjournal)
Jack Rusher (Nextjournal)

Is it effective to develop a live programming environment from within another live programming environment? We have been using Clojure and Clerk, a notebook-like live programming environment, to build reports for our users. We are now in the process of using Clerk to build an interactive report builder.

In the paper and videos below, we share our initial progress on going from a report to a report builder UI. We show a demo in which we use our prototype to build a data transformation pipeline while seeing intermediate results. We also share questions we want to answer and directions we want to explore in the future.

EYG a predictable, and useful, programming language

Peter Saxton

EYG is a programming language with the goal of dramatically reducing complexity around software deployment and software dependencies.

To enable this, programs need to be completely predictable in their behavior, and EYG has several features to support this predictability.

  • Managed effects: all program semantics are independent of the environment the program runs in, and it is possible to statically analyze the effects a given program will rely on.
  • Hash references to AST fragments: programs always fully describe their dependencies via these hashes.
  • Closure serialization: generation of program fragments that can be sent to other locations. This allows static analysis with the type system to extend over multiple execution locations, for example a build machine and a web server.
  • A minimal AST: it should be easy to re-implement interpreters or compilers to run EYG programs in the future.
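
The hash-reference idea in the list above can be sketched as content addressing over an AST. This is an illustrative toy (the node shapes and hashing scheme are invented here, not EYG's): child nodes are replaced by their hashes before hashing the parent, so a fragment's hash pins down its entire dependency tree.

```python
# Illustrative content-addressed AST references (invented node
# shapes and hash scheme, not EYG's actual representation).
import hashlib
import json

def hash_node(node: dict) -> str:
    """Hash an AST node, with child nodes replaced by their hashes."""
    resolved = {
        key: hash_node(value) if isinstance(value, dict) else value
        for key, value in node.items()
    }
    # Canonical serialization so equal fragments hash equally.
    payload = json.dumps(resolved, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Structurally identical fragments get the same reference; any
# change anywhere in a fragment changes its hash.
inc_x = {"op": "add", "left": {"op": "var", "name": "x"}, "right": 1}
inc_y = {"op": "add", "left": {"op": "var", "name": "y"}, "right": 1}
```

Referring to fragments by such hashes is what makes a program's dependencies fully described, independent of any file layout or package registry.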

Diff-based interactive compiler debugging and testing

Luyu Cheng (HKUST (The Hong Kong University of Science and Technology))
Lionel Parreaux (HKUST (The Hong Kong University of Science and Technology))

Debugging and testing compilers has always been a problem. Developers need to create a large number of test cases, invoke the compiler on them, and then verify the results against the expected output. Since compilers usually have many stages, developers often find it difficult to know what happens at a certain stage, let alone debug it. Moreover, introducing new features to the compiler may affect existing components in ways that traditional tests cannot reveal.

We propose a novel testing method. Each test file is divided into smaller test blocks by empty lines. The test suite sequentially executes each test block and appends the inferred type, evaluation results, diagnostic messages, and debugging information as comments to each block. Developers can prepend flags to each test block to selectively display debugging information from different stages. Most importantly, these comments are committed to the version control system to track changes to the compiler. By monitoring file changes, the test suite can rerun modified files and then write new results back to the file, displaying them to developers and making modifying tests as smooth as using an interactive notebook.

We name this testing method “diff-based testing” because it relies on the diff operation of version control systems. We have implemented this testing system on the MLscript compiler and developed an interactive editor plugin for exploring debugging information.
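
The core update loop described above can be sketched as follows. This is a hypothetical, simplified harness (the `//>` comment marker and the `run_block` callback are invented for illustration; the actual MLscript tooling differs):

```python
# Hypothetical sketch of the block-based update loop: split a test
# file into blocks at empty lines, evaluate each block, and write
# the results back as comments (marker "//>" is arbitrary).

def update_test_file(source: str, run_block) -> str:
    updated_blocks = []
    for block in source.split("\n\n"):
        # Drop previously generated result comments so a rerun
        # replaces stale output instead of accumulating it.
        code_lines = [line for line in block.splitlines()
                      if not line.startswith("//>")]
        # run_block returns e.g. inferred types and diagnostics.
        results = run_block("\n".join(code_lines))
        comments = ["//> " + result for result in results]
        updated_blocks.append("\n".join(code_lines + comments))
    return "\n\n".join(updated_blocks)

# Committing the updated file means a plain version-control diff
# shows exactly how a compiler change altered each block's output.
```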

Manifold: Throwing Together Software Systems

Jeff Lindsay

Manifold is an object model and runtime for quickly arranging and manipulating software components. Modeled after the Unity scene hierarchy, with the spirit of the Plan 9 filesystem, Manifold was made as a kernel for malleable, re-composable software.

This presentation will show examples made with Manifold across several domains of software. It will show how it can be used in authoring tools and editors to give software builders a live and direct connection to their creations. It will also show that when combined with components that take the Unix philosophy to heart, you can just ... throw together software.

Run, Build and Grow Small Systems Without Leaving Your Text Editor

Albert Zak (UAS Technikum Wien)
Karl M. Göschka (UAS Technikum Wien)

A simple ClojureScript environment integrates: self-rewriting plain text, live inspection, inline visualizations, distributed tracing, deployment, declarative process reconciliation, and object capability security.

Programmers assign functions to nodes. ("What should run where?") A process is a function running on a node; it receives all platform capabilities as an argument. Each node reconciles its process state to match the latest specification. Top level forms set their liveness, and may evaluate across the system on every keypress. Stateful expressions preserve node-local values across code changes. The editor is also a node. Data passing through any expression, on any node, is observable as live text in the editor. Visualizations render inline with the code, through functions defined in the same code.

Examples range from structured personal notes through computational documents, up towards open-world integration with hardware, filesystems, servers, browsers, and third-party dependencies to build, run, inspect, and change small distributed systems from within a lightly enhanced text editor.

DocuApps: Ampleforth Documents as Applications

Gilad Bracha (F5)

We discuss the claim that a live literate editor such as Ampleforth [1] can be used as an application builder and platform. We showcase two applications constructed using Ampleforth as existence proofs of this claim: Telescreen, a presentation tool, and Ozymandias, a computational notebook, both created in Ampleforth. Our experience indicates that a self-contained persistent representation for documents, customizable by the application, is essential. Document nesting (aka transclusion) impacts this persistence scheme, and is crucial in ensuring a composable document model that avoids the siloing of applications. The conclusion is that the initial claims made for Ampleforth are valid.

Definitions and Dimensions of Liveness

Joshua Horowitz (University of Washington)

Over the past 34 years, we've grown a community with liveness as its rallying cry, and a dream of making programming direct, visible, tangible, and alive at its heart. But it's striking to realize that -- even as we come together to discuss liveness at LIVE -- we don't really know what liveness is. Definitions of liveness in the literature are often vague, more gestures towards a direction than falsifiable descriptions. When definitions do flirt with specificity, they sometimes contradict each other in interesting but unexamined ways.

Alongside the ambiguity of liveness's definition, we have a second problem: while it is clear that approaches to liveness range widely, varying across many axes, there has been little work to articulate the dimensions of variation that efforts towards liveness encompass. Searching out patterns and mapping out these dimensions might point us towards unexplored realms of design space, or at least give us a clearer shared language.

This essay begins what I hope will be an ongoing, collaborative effort to figure out what liveness means to us, and to map out the space of possibility we are journeying through together.

ScrapSheets: Async Programs in a Reactive 2D Environment

Taylor Troesh

Interactive spreadsheet programs can be constructed from small rulesets. ScrapSheets demonstrates a novel combination of (1) composable user-editable-rule behaviors (e.g. search/reduce) and (2) asynchronous rule evaluation (e.g. HTTP requests). Furthermore, its realtime rules engine is implemented efficiently via single-pass filtering/merging algorithms.

Arroost: Unblocking creation with friends

Lu Wilson (TodePond)

Live programming is uniquely suited to creative work. It can remove many of the creative blockers that individuals experience when trying to produce such work. But we could place much more explicit emphasis on removing emotional blockers from the creative process, as opposed to focusing only on intellectual blockers. Arroost is a project that seeks to do that — an experimental live programming tool for making music.

Example-driven development: bridging tests and documentation

Oscar Nierstrasz (feenk GmbH)
Andrei Chiş (feenk GmbH)
Tudor Girba (feenk GmbH)

Software systems should be explainable, that is, they should help us to answer questions while exploring, developing or using them. Textual documentation is a very weak form of explanation, since it is not causally connected to the code, so easily gets out of date. Tests, on the other hand, are causally connected to code, but they are also a weak form of explanation. Although some tests encode interesting scenarios that answer certain questions about how the system works, most tests tend to be uninteresting.

Examples are tests that are also factories for interesting system entities. Instead of simply succeeding or failing, an example returns the object under test so that it can be inspected, or reused to compose further tests. An example is causally connected to the system, is always live and tested, and can be embedded into live documentation. Although technically examples constitute just a tiny modification to test methods, their impact is potentially ground-breaking.

We show (i) how Example-Driven Development (EDD) enriches TDD with live programming, (ii) how examples can be molded with tiny tools to answer analysis questions, and (iii) how examples can be embedded within live documentation to make a system explainable.
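
The example-as-factory idea can be sketched in a few lines. This is a minimal illustration with invented names (the EDD work above is built in Smalltalk-style live tooling, not Python): an example asserts like a test but returns the object under test, so examples can be inspected and composed.

```python
# Minimal sketch of examples as tests that are also factories.
class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
        return self

def example_empty_account() -> Account:
    """Asserts like a test, but returns the object under test."""
    account = Account()
    assert account.balance == 0
    return account

def example_funded_account() -> Account:
    # Examples compose: reuse another example instead of a fixture.
    account = example_empty_account().deposit(100)
    assert account.balance == 100
    return account
```

Because each example returns a live object rather than a bare pass/fail, the result can be inspected in the environment, reused by further examples, or embedded into documentation.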

Subsequently: Telling stories with pictures makes programs

Marcel Goethals

Subsequently is an attempt at designing a live programming environment that is deeply visual, tangible and concrete. In Subsequently, you create programs by directly manipulating concrete, visual representations of data values, operations and control flow. All these different aspects are represented on screen as a diagrammatic comic strip. By simply manipulating concrete values step by step, the programmer creates a rich visual narrative that acts both as an executable program and a legible explanation of the algorithm. Because the program is represented concretely, there is no explicit “run” step or debugger. All program state is immediately visualised as you go.

TAPE: From direct to programmatic and back

Ian Clester (Georgia Institute of Technology)

Software for editing media (text, images, audio, video, etc.) faces an essential tension. Direct manipulation enables users to edit media fluidly, as the user can make changes directly to the final product rather than first describing each change programmatically. On the other hand, programming enables users to accomplish tasks that would be tedious or impossible via direct manipulation and can capture intent precisely. Problems arise when these two paradigms are combined, as in editors that support end-user programming. Such applications typically allow the user to go from direct to programmatic manipulation by creating some material directly and then transforming it programmatically (as in a spreadsheet formula). However, if the user then wishes to go back the other way by tweaking the computed output, they face a dilemma: either direct manipulation of the output is forbidden, in which case tweaks must be described programmatically by the user (sacrificing directness), or the computed output must be first copied before it can be changed, in which case the program is abandoned (sacrificing intention). TAPE, the Transformative Action-Preserving Editor, is a text-based prototype intended to suggest a general way out of the dilemma. By allowing direct edits and immediate actions in computed regions, which are automatically recorded as composable transformations, TAPE enables the user to go back and forth between direct and programmatic manipulation without sacrificing either.
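
The way out of the dilemma described above can be sketched abstractly. The following is a hypothetical toy, not TAPE's actual model: a computed region keeps both its program and a list of recorded direct edits, replayed after each recomputation, so tweaking the output abandons neither directness nor the program.

```python
# Hypothetical sketch: direct edits to computed output are recorded
# as composable transformations and replayed on recomputation.
class ComputedRegion:
    def __init__(self, program):
        self.program = program  # e.g. a function of some source text
        self.edits = []         # recorded direct manipulations

    def edit(self, transformation):
        # A direct tweak is captured as a composable function,
        # so it survives when the output is recomputed.
        self.edits.append(transformation)

    def render(self, source):
        output = self.program(source)
        for transformation in self.edits:
            output = transformation(output)
        return output

region = ComputedRegion(str.upper)
region.edit(lambda text: text + "!")   # a direct tweak, preserved
```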

Scoped Propagators

Orion Reed

Graphs, as a model of computation and as a means of interaction and authorship, have found success in specific domains such as shader programming and signal processing. In these systems, computation is often expressed on nodes of specific types, with edges representing the flow of information. This is a powerful and general-purpose model, but it incentivises a closed-world environment where both node and edge types are decided at design-time. By choosing an alternate topology where computation is represented by edges, the incentive for a closed environment disappears. Scoped Propagators are a programming model designed to be embedded within existing environments and user interfaces.
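
The shift from node-centric to edge-centric computation can be sketched in miniature. This is an invented toy API, not the Scoped Propagators implementation: nodes are plain state, and each edge carries a function that fires when its source changes.

```python
# Toy edge-centric propagation: computation lives on edges, so new
# behavior attaches to existing untyped nodes in an open environment.
nodes = {"slider": 10, "label": ""}
edges = [
    # (source, target, function from source value to target value)
    ("slider", "label", lambda v: f"value: {v}"),
]

def set_node(name, value):
    """Update a node and propagate along its outgoing edges."""
    nodes[name] = value
    for src, dst, fn in edges:
        if src == name:
            set_node(dst, fn(nodes[src]))

set_node("slider", 42)
```

Because node types need not be decided at design time, such edges can be layered onto the objects of an existing canvas or user interface.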

Frequently Asked Questions (FAQ) for Submitters

Here are some tips about submitting to LIVE, especially for those who are less familiar with academic workshops:

Why should I submit to LIVE?

LIVE is a community of researchers, developers, and creative people. We get together to share ideas about making programming better through liveness. Submitting work to LIVE makes it part of this conversation.

Academic researchers often use LIVE as a place to develop early stage ideas. Those from outside the academy have found LIVE to be an accessible way to connect with the scholarly community and get in-depth feedback on their work from a new perspective. (Several have even used LIVE as a stepping-stone to full-time work in academia.)

Many people appreciate the deadline and community of a workshop as a way to motivate work on a project. Being forced to explain your ideas to others can be a helpful way to figure out what your ideas are!

What does a submission need to have (and why)?

A LIVE submission is substantial documentation of your work and ideas – not just an abstract (as you may be used to from industry conferences).

Your submission will be reviewed by our program committee. They will provide you with written feedback, and based on their reviews we will select the program for the workshop. This makes for a good guideline on what makes a submission complete – is it fully-formed enough for reviewers to give you helpful feedback? We encourage you to submit works in progress, as long as they meet that threshold.

The majority of LIVE submissions are demonstrations of novel programming systems. Other types of work are also welcome, including technical papers, experience reports, theoretical papers, literature reviews, and position papers.

LIVE submissions will only be shown to the program committee. We do not publish them publicly.

Do I need to submit a fancy academic-formatted paper?

No! We encourage submissions in various formats. Submissions may be short papers, web essays with embedded videos, or demo videos. A written 250-word abstract is required for all submissions. Videos should be up to 20 minutes long, and papers up to 6 pages long.

How does a submission to LIVE differ from a blog post / industry talk / etc.?

LIVE is a scholarly workshop. Scholarly work differs from most of what you'll find on Hacker News in a few important ways:

  • Academic work is analytical and critical: A blog post about a new project plays a largely promotional role. There's room for promotion in a LIVE submission – it's appropriate to clearly and unapologetically communicate what you think your work has to offer. But submitting to LIVE is an opportunity to push further than that in a few directions. For one, it’s an opportunity to look beyond the thing you made and to try to articulate broader lessons. What patterns can you extract from your work? What future possibilities does it suggest? Secondly, submitting to LIVE is an opportunity to think critically about your work. What are its limits? What unsolved problems stand in the way of its potential? This kind of critical analysis doesn’t just defuse potential critics. It’s part of a productive research process, and will help seed fruitful conversation at the workshop.
  • Academic work builds on and makes reference to previous work: At this point, people have been making novel programming systems for over sixty years. That doesn't mean there aren't bold new paths to explore. But to avoid unwittingly retreading the same ground, we need to learn from the past. Ideally, this isn't just a matter of academic etiquette, but is a valuable part of your research process. Your work will be stronger if it explains the inspirations it draws upon and thoughtfully articulates how it differs.

If you'd like further guidance, please get in touch with the program committee: pvh@pvh.ca, gklitt@gmail.com, joshuah@alum.mit.edu

How does review work?

You will receive at least three in-depth reviews of your work from members of the program committee. These reviews will provide you feedback on your work, and will also be used to select which papers to accept. Based on past years, we anticipate accepting 50-90% of submissions. Submissions are due by July 7, and reviews will be released on August 16.

What will I need to do if my work is accepted?

We expect at least one coauthor to attend LIVE in person on October 21, 2024, and present the work. Attendees will need to cover their own registration fee and travel; if you are affiliated with an academic institution, many institutions offer funding support for conference travel. If needed, remote presentations can be accommodated, but in-person participation is strongly encouraged for the full workshop experience.

What are some examples of past submissions?

Work submitted to LIVE comes in different shapes and sizes. Here are a few examples, shared with permission of the authors:

  • At LIVE 2018, Geoffrey Litt presented Margin Notes. His original submission was a web essay.
  • At LIVE 2023, Mary Rose Cook presented Lude. Her original submission was a video.
  • At LIVE 2018, Josh Horowitz presented PANE. His original submission was a web essay.

To see the full range of successful LIVE submissions, check out the topics & presentation recordings from previous years.

What is the LIVE community like?

The LIVE community spans academic and independent researchers, industry programmers, artists, musicians, and all kinds of creative people. We live under an academic umbrella and support academic values, but are intentionally open to creative input from everywhere.

Committee

Program Committee

Alessandro Warth
Ink & Switch
Geoffrey Litt
Ink & Switch
Joshua Horowitz
University of Washington
Jonathan Edwards
Independent
Jun Kato
National Institute of Advanced Industrial Science and Technology (AIST)
Molly Feldman
Oberlin College
Peter van Hardenberg
Ink & Switch
Tudor Girba
feenk
Ravi Chugh
University of Chicago
Ian Arawjo
University of Montreal
David Moon
University of Michigan
Andrew Blinn
University of Michigan
Cyrus Omar
University of Michigan
Justin Lubin
UC Berkeley
Runqianqian (Lisa) Huang
UC San Diego
Clemens Nylandsted Klokmose
Aarhus University
Luke Church
University of Cambridge
Andrew Head
UC Berkeley
Brian Hempel
University of Chicago