Projects

I write code that helps students learn.

I create and maintain a variety of software tools to support my courses. Most of my projects are published on GitHub, and many are publicly deployed and freely available for educators and students to use and extend. If you’d like to use something in your own courses and need help, please get in touch! I’m happy to help, and what I’ve made publicly available is largely a result of what people have wanted to use.

learncs.online: Learn to Program Online

learncs.online is the freely-available version of the materials developed to support CS 124—my CS1 course at Illinois.

learncs.online provides public access to many of the same novel features that have helped make CS 124 so successful.

Many of the components supporting learncs.online emerged from projects described below.

Demo: https://learncs.online/
Credits: Hundreds of course staff have contributed content to learncs.online, creating the first crowdsourced CS1 course. Find the full list of contributors here.

Interactive Code Walkthroughs

One of the most distinctive features of the learncs.online and CS 124 materials is their use of interactive walkthroughs.

This novel component combines the best features of recorded live coding—being able to watch exactly what the instructor is doing and listen to their explanation—while still maintaining the interactive features of a playground. Because the interactive walkthrough is an animated editor and not a video, students can pause at any point and edit or execute the code.
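As a rough sketch of why this works, a walkthrough can be represented as data rather than video. Everything below is hypothetical, not the component’s actual format; the real recording also captures an audio track synchronized with the editor states.

```kotlin
// Hypothetical sketch: a walkthrough as timestamped editor snapshots. Because
// playback is data-driven rather than video frames, pausing at any point
// yields ordinary source text the student can edit and execute.
data class TraceEvent(val timeMs: Long, val source: String)

class WalkthroughPlayer(private val trace: List<TraceEvent>) {
    // Return the editor contents at a given playback time.
    fun seek(timeMs: Long): String =
        trace.lastOrNull { it.timeMs <= timeMs }?.source ?: ""
}

fun main() {
    val player = WalkthroughPlayer(
        listOf(
            TraceEvent(0, "fun main() {}"),
            TraceEvent(1500, "fun main() { println(\"Hello, walkthrough!\") }")
        )
    )
    println(player.seek(2000)) // the paused snapshot is just editable code
}
```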

Interactive walkthroughs are also recorded in-situ directly on the page where they are viewed, providing authors with awareness of how that explanation is used within the context of the surrounding information. By lowering the barrier to providing this kind of content, we have been able to incorporate contributions from hundreds of CS 124 course staff, significantly diversifying the course’s voice.

We are working on making this explanatory component more broadly available. Stay tuned!

Demo: https://www.learncs.online/best#interactive-walkthroughs
Credits: Ruisong Li contributed code to the latest implementation of the interactive walkthrough component.

Rapid and Accurate Programming Exercise Authoring and Autograding

Students learning to program benefit from practice. But the traditional approach to writing autograded programming questions by authoring test suites is slow, inaccurate, and ineffective. Writing test suites that are resistant to memoization and catch all corner cases requires tediously enumerating many inputs. Test suite authoring provides no feedback to instructors about how effective their test suites are, leaving them unsure how many test cases are enough. And software testing frameworks are not designed to be used without access to the tests themselves, which many autograders hide from students during grading.

We created a system called Questioner to address these weaknesses. Questioner leverages a key difference between typical software testing and autograding: when grading, the solution is known! Instead of requiring instructors to write test suites, Questioner has them provide a solution and automates as much of the rest of the process as possible. A testing strategy is created by identifying the methods and inputs that comprise the solution’s public API. That strategy is validated by ensuring that it can identify incorrect examples, most of which are created automatically by mutating the solution. As a dedicated autograder, Questioner crafts its output for beginning programmers. For example, when testing objects, it provides the entire sequence of calls and inputs that led to the failure, mirroring the information that would normally be gleaned by examining the test suite itself.
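The core idea can be sketched in a few lines. This is not Questioner’s actual API, just an illustration of why knowing the solution helps: a submission can be checked against the solution’s behavior on generated inputs, with no hand-written test suite in sight.

```kotlin
import kotlin.random.Random

// Illustrative sketch of solution-based grading, not Questioner's API: with
// the solution known, a "test" is simply agreement on the same input, and
// inputs can be generated rather than tediously enumerated by hand.
fun gradeAgainstSolution(
    solution: (Int) -> Int,
    submission: (Int) -> Int,
    trials: Int = 256
): Int? {
    repeat(trials) {
        val input = Random.nextInt(-1000, 1000)
        if (submission(input) != solution(input)) return input // a failing input
    }
    return null // behaviorally indistinguishable on these trials
}

fun main() {
    val solution = { n: Int -> n * n }
    val buggy = { n: Int -> if (n >= 0) n * n else n * -n } // wrong for negatives
    println(gradeAgainstSolution(solution, buggy)) // prints some failing negative input
}
```

A failing input like this, or for objects the full sequence of calls that exposed the difference, is exactly the kind of information surfaced to students as feedback.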

Since Fall 2020 I have used Questioner to author over 700 programming exercises supporting my CS1 course, covering every concept taught in the course: from the use of print statements on day 1, to the implementation of recursive sorting algorithms at the end of the semester. Questioner has produced an order-of-magnitude speedup in the time required to author accurate questions, allowing me to provide new questions for each weekly assessment, as well as a large and growing library of practice problems to support student learning.

Questioner supports the publicly-available problems on learncs.online. We are also working to make this tool more broadly available at beyondgrader.com.

Demo: https://www.learncs.online/best#homework, https://beyondgrader.com
GitHub: Jenisol solution analysis library, Questioner full system integration
Credits: Ben Nordick and Max Kopinsky worked on an early prototype of the Questioner system. Several other students, including Harsh Deep and Miguel Fernandez, have authored questions using Questioner.

Code Quality Autograding

An oft-cited weakness of autograding is that students don’t get feedback on the quality of their code. Even correct code can have many undesirable features: It may be overly complex, difficult to read, or use too many resources. However, manual evaluation of student code by human graders is error-prone, incredibly inefficient, and frequently results in feedback delivered far too late to support student learning.

Our goal is to provide students with automated code quality feedback in real time, on every submission, and with no human input. This allows students to learn from and correct code quality mistakes, while freeing our staff to do what they do best: supporting student success through individual attention.

We are using Questioner to push the boundaries of what is possible with automated code quality analysis. Currently, through a combination of source and bytecode analysis, we are able to automatically evaluate multiple code quality metrics—including linting (format checking), cyclomatic complexity, runtime and memory efficiency, source line counts, dead code detection, and recursion and feature analysis.

For example, a submission that is correct, but utilizes many more code paths than the solution, usually represents a misunderstanding of the problem and can be simplified. Dead or unexecuted code again usually points to a misunderstanding of the problem and can be removed. Feature analysis allows us to do things like prevent students from solving a problem with a loop, or ensure that they submit a recursive solution when required. Immediate feedback allows students to gradually learn how to craft solutions that are both correct and high-quality.
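As a rough illustration of two of these signals, here is a sketch using naive textual matching, with hypothetical function names. The actual toolchain performs these analyses on parse trees and bytecode via Jeed, which avoids miscounting keywords that appear inside strings or comments.

```kotlin
// Crude sketches of two quality signals, for illustration only: the real
// analyses operate on parse trees and bytecode, not regular expressions.

// Rough proxy for cyclomatic complexity: one path, plus one per branch point.
fun approximateComplexity(source: String): Int {
    val keywords = listOf("if", "for", "while", "when")
    val operators = listOf("&&", "||")
    return 1 +
        keywords.sumOf { Regex("\\b$it\\b").findAll(source).count() } +
        operators.sumOf { op -> source.windowed(op.length).count { it == op } }
}

// Feature analysis sketch: detect a loop in a problem that requires recursion.
fun usesLoop(source: String): Boolean =
    Regex("\\b(for|while|do)\\b").containsMatchIn(source)

fun main() {
    val submission = """
        fun factorial(n: Int): Int {
            var result = 1
            for (i in 1..n) result *= i
            return result
        }
    """.trimIndent()
    println(approximateComplexity(submission)) // 2: one path plus the loop
    println(usesLoop(submission))              // true: flag if recursion was required
}
```

A submission whose complexity greatly exceeds the solution’s is a candidate for the "can be simplified" feedback described above.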

Demo: https://www.learncs.online/best#code-quality, https://beyondgrader.com
GitHub: Most features provided by Jeed
Credits: Ben Nordick has implemented several of the bytecode-based code quality measures, including code and memory efficiency. Our code quality toolchain utilizes Jeed to perform complexity and feature analysis.

Autogenerated Debugging Exercises

Beginning programmers make mistakes, and learning how to fix those mistakes is critical to their development as successful and confident programmers. I tell students: You never stop making mistakes when writing code; you just get better at fixing them.

Students get a certain amount of practice making mistakes simply by completing programming tasks. However, they still may not make enough mistakes to learn how to identify and fix them efficiently and without getting flustered. We observed students who were otherwise well-prepared for our weekly programming quizzes still getting stuck after making a small error, unable to use the error output to diagnose and correct the problem.

To provide students with more practice debugging, we created a tool that autogenerates incorrect submissions to our programming exercises. Our approach utilizes source-level mutation, the ability to programmatically make small changes to source code: literal values can be modified, conditionals subtly altered, increments and decrements swapped, return values replaced, mathematical operations rotated, and entire blocks of code removed.
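Here is a toy illustration of two such mutation operators, applied at the string level for brevity. Jeed applies these transformations to a real parse tree, which avoids false matches inside string literals and comments.

```kotlin
// Toy string-level versions of two mutation operators, for illustration only.

// Swap increments and decrements, using a placeholder to avoid double-swapping.
fun swapIncrementDecrement(source: String): String =
    source.replace("++", "\u0000").replace("--", "++").replace("\u0000", "--")

// Subtly alter a conditional: one of several possible operator rotations.
fun flipComparison(source: String): String =
    source.replace("<=", ">")

fun main() {
    val original = "for (int i = 0; i <= max; i++) { total += values[i]; }"
    println(swapIncrementDecrement(original)) // i-- : the loop never terminates
    println(flipComparison(original))         // i > max : the body never runs
}
```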

We begin with correct submissions to a programming exercise from previous students. For each, we use mutation to produce a set of slightly-modified variants. We use Questioner to validate that each mutated submission is, in fact, incorrect, discarding any that remain correct. Each mutated incorrect submission becomes a single debugging challenge. Because each correct submission can generate dozens of mutants, a large number of correct submissions yields an even larger number of debugging exercises.

When a student begins work on a debugging challenge, they are presented with an incorrect submission and asked to fix it. To do so, they must change no more lines than were altered by the mutation! This prevents students from simply replacing the entire incorrect submission with known-correct code, forcing them to provide a minimal fix for the faulty code. Students gain practice with debugging, working with unfamiliar code, and making minimal changes to correct a problem, in addition to being exposed to multiple different ways of solving the problem.
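Enforcing the constraint is straightforward in sketch form, assuming the mutation preserved the file’s line structure (hypothetical names throughout):

```kotlin
// Sketch of the minimal-fix check: the student may change no more lines than
// the mutation altered. A real implementation would use a proper diff; this
// simple version assumes the line count is unchanged.
fun changedLineCount(before: String, after: String): Int {
    val a = before.lines()
    val b = after.lines()
    if (a.size != b.size) return Int.MAX_VALUE // adding or deleting lines is out
    return a.zip(b).count { (x, y) -> x != y }
}

fun acceptFix(mutated: String, fixed: String, linesMutated: Int): Boolean =
    changedLineCount(mutated, fixed) <= linesMutated

fun main() {
    val mutated = "fun max(a: Int, b: Int) = if (a < b) a else b" // returns the minimum!
    val fixed = "fun max(a: Int, b: Int) = if (a < b) b else a"
    println(acceptFix(mutated, fixed, linesMutated = 1)) // true: a one-line fix
}
```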

Demo: https://www.learncs.online/best#debugging-challenges
Credits: Ben Clarage contributed a first implementation of source-level mutation for Kotlin required to support our debugging exercises. Our debugging exercises generation toolchain utilizes Jeed to mutate student submissions and Questioner to verify correctness, both during generation and during interactive use.

Online Tutoring Site

When the pandemic began in Spring 2020, I rapidly created an online tutoring platform allowing staff to efficiently provide students with remote assistance. It was intentionally designed to allow students to receive one-on-one help while providing staff with the situational awareness required to avoid office hour queue meltdown. We have continued to use a variant of this tool since Spring 2020, and it has connected thousands of students with course staff for tutoring and significantly lowered the barrier for students to ask questions and receive help.

Read more about the design and implementation of our tutoring site, including its role in helping create a supportive environment for students from diverse groups.

Speedy JVM Execution and Analysis

Jeed (GitHub) is the speedy JVM execution and analysis toolkit.

Jeed’s primary purpose is to enable fast and safe execution of untrusted Java and Kotlin code. It’s what powers the Java and Kotlin playgrounds on this site, learncs.online, and the CS 124 website. It also powers our novel homework autograder and debugging question generation. Jeed runs untrusted code in a secure JVM sandbox 1000 times faster than approaches that rely on containerization, making it possible to support a large amount of interactive use with a small amount of server resources. Jeed has been in production use since 2019.
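To illustrate the in-process idea (and only that; this is nothing like Jeed’s actual implementation), here is a sketch that bounds the runtime of untrusted work using a thread with a timeout. Jeed’s real sandbox goes much further, rewriting bytecode to interpose on dangerous operations, confining threads, and capturing output.

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException

// Conceptual sketch only: bounding untrusted work with an in-process thread
// and a timeout, rather than a container.
fun <T> runSandboxed(timeoutMs: Long, task: () -> T): T? {
    val executor = Executors.newSingleThreadExecutor { runnable ->
        Thread(runnable).apply { isDaemon = true } // don't block JVM exit
    }
    return try {
        executor.submit(Callable { task() }).get(timeoutMs, TimeUnit.MILLISECONDS)
    } catch (e: TimeoutException) {
        null // the submission ran too long
    } finally {
        executor.shutdownNow()
    }
}

fun main() {
    println(runSandboxed(100) { (1..100).sum() })   // 5050
    println(runSandboxed(100) { while (true) { } }) // null: never finished
}
```

Because everything stays inside one JVM, there is no container startup cost, which is where the speed advantage over containerization comes from.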

Find out more about Jeed, or read about how Jeed was developed.

Demo: https://www.learncs.online/best#playgrounds, or https://www.jeed.run/
GitHub: Jeed project repository
Credits: Ben Nordick has made substantial contributions to Jeed, including critical work on bytecode rewriting required to achieve sandbox safety. Ben Clarage added initial support for Kotlin source mutation. Hania Dziurdzik provided an initial implementation of Java feature analysis.

Upcoming

Things that are on the way but haven’t quite reached production yet.

Autogenerated Testing Problems

At present we have huge and growing problem banks providing students with practice both writing code and debugging it. But we are not yet evaluating one critical aspect of software creation: testing. Not only is writing good tests an incredibly important part of modern software development, but devising good tests encourages a defensive mindset, anticipating corner cases, identifying problematic inputs, and so on, that helps students write more correct code more efficiently.

To provide students with practice writing tests, we are working on an approach that leverages the same source-level mutation technique used by Questioner and our debugging exercise generation. Given the reference solution provided to Questioner, we use the same combination of mutation and manually-labeled examples to produce a set of incorrect submissions. Next, we require that students provide a test suite which can accurately distinguish the correct submissions from the incorrect ones.
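In sketch form, with hypothetical names, grading a test suite this way reduces to two checks: the suite must accept the reference solution and reject every known-incorrect mutant.

```kotlin
// Sketch of mutation-based test suite grading. A student's test suite is
// modeled as a predicate that accepts or rejects an implementation: it must
// accept the reference solution and reject ("kill") every incorrect mutant.
fun gradeTestSuite(
    testSuite: ((Int) -> Int) -> Boolean,
    solution: (Int) -> Int,
    mutants: List<(Int) -> Int>
): Boolean {
    if (!testSuite(solution)) return false // must accept correct code
    return mutants.all { mutant -> !testSuite(mutant) } // must flag each mutant
}

fun main() {
    val solution = { n: Int -> n + 1 }
    val mutants = listOf({ n: Int -> n - 1 }, { n: Int -> n + 2 })
    // A student test suite: a couple of hand-picked input/output checks.
    val suite = { f: (Int) -> Int -> f(0) == 1 && f(10) == 11 }
    println(gradeTestSuite(suite, solution, mutants)) // true: kills both mutants
}
```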

Once complete, this new system will instantly add test-writing exercises to nearly every problem in our existing library of over 700 Java and Kotlin programming exercises. We plan to deploy this in CS 124 and on learncs.online by Fall 2023.

GitHub: Being implemented as part of Questioner

Experiments

Projects described below either have not been fully completed or are not yet deployed to production in support of one of my courses.

Polyglot Playgrounds

The approach to providing secure high-speed untrusted Java and Kotlin code execution used by Jeed relies on unique features of the Java Virtual Machine. A more general approach is to use containerization to protect the host from untrusted code. While slower, this allows a single backend to easily support student experimentation with multiple different languages.
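A minimal sketch of the containerized approach, with placeholder image and script names: shell out to Docker with the network disabled and resources capped.

```kotlin
import java.util.concurrent.TimeUnit

// Sketch of containerized execution with placeholder image and script names.
// A production backend would also enforce a wall-clock timeout and keep a
// pool of warm containers to hide startup latency.
fun runInContainer(language: String, sourceFile: String): String {
    val process = ProcessBuilder(
        "docker", "run", "--rm",
        "--network", "none",          // no network access for untrusted code
        "--memory", "128m",           // cap memory usage
        "--cpus", "0.5",              // cap CPU usage
        "-v", "$sourceFile:/code:ro", // mount the submission read-only
        "playground-$language",       // hypothetical per-language image
        "/run.sh", "/code"            // hypothetical entry point
    ).redirectErrorStream(true).start()
    val output = process.inputStream.bufferedReader().readText() // read first to avoid pipe deadlock
    process.waitFor(10, TimeUnit.SECONDS)
    return output
}
```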

Demo: https://cs124-illinois.github.io/playground/

Retired

Stuff that I don’t use anymore but haven’t forgotten about.

Slide-Based Participation Tracking

To measure class participation when teaching synchronously in-person, I developed a slide-activity-based tool to determine whether a student was following along with the web-based slides (and therefore was probably in attendance). Student progression through the online slide decks (examples here) was recorded, and their activity overlaid with mine. Students who were following along a reasonable percentage of the time were marked as present and received credit for class participation. The system represented a non-invasive way of recording participation without requiring a secondary device, and was eventually well-received by students. (Note that my feelings about requiring class participation have changed since the pandemic, and I would no longer utilize such a tool.)
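In sketch form, with hypothetical types and an illustrative tolerance, the overlay computation looked something like this:

```kotlin
import kotlin.math.abs

// Hypothetical sketch of the overlay computation: sample both slide streams
// and count how often the student was near the instructor's position.
// Assumes each event list is sorted by time.
data class SlideEvent(val timeSec: Long, val slide: Int)

fun slideAt(events: List<SlideEvent>, timeSec: Long): Int =
    events.lastOrNull { it.timeSec <= timeSec }?.slide ?: 0

fun participationFraction(
    instructor: List<SlideEvent>,
    student: List<SlideEvent>,
    classLengthSec: Long,
    tolerance: Int = 2 // "following along" means within a couple of slides
): Double {
    val samples = 0L until classLengthSec step 10L // sample every 10 seconds
    return samples.count { t ->
        abs(slideAt(instructor, t) - slideAt(student, t)) <= tolerance
    }.toDouble() / samples.count()
}
```

A student whose fraction exceeded some threshold (say, three quarters of the class period) would then be marked present.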