How we create

Hey friends, happy Thursday!

(welcome to my email dispatch! You can sign up for these at https://thesephist.com or read it at https://linus.zone/latest 💌)

Essays, photographs, video games, podcasts -- these are what we create. They live in mediums like word documents, film, canvas, software, and audio. On the other side of the canvas or the lens or the microphone are the humans casting their ideas into form. And in between, at the point of contact between the creator and the creation, the human and the medium, is the interface. The constraints of our creative mediums, and of the interfaces we use to work in them, lay down the laws of physics of how we create.

Interface design for creative, thinking work is particularly important for software, which is a medium with few true constraints. The medium of canvas or photography constrains output to two dimensions of shape and color. Recorded audio is a stream of sounds; writing is a stream of words. But software itself? It can take any perceptual form we can design. Computers also add a new dimension that creative output hasn't had before -- interactivity.

Despite the theoretically boundless possibilities of software as a medium, the interfaces we've designed for interacting with computers when we think and create are very limiting.

Dynamic documents

Take the humble "document" as an example. For decades, document editing programs like word processors effectively emulated a printed sheet of paper, onto which the user typed with an emulated typewriter. Other software tools like spreadsheets did better, managing to escape complete skeuomorphism in favor of an infinite canvas. Notion is another good example here -- they leave the notion of paged paper documents completely behind for a more interactive, data-backed idea of what presenting information on a computer could be. They let you embed databases, calendars, timelines, and other interactive components into documents that are linked together rather than paged. But I think there's much more we can do to keep escaping the skeuomorphic tendencies of yesterday's software: dynamic, programmable documents; software embedded inside word docs (Excel formulas writ large); perhaps even forms of presenting information that leave the "document" moniker behind entirely, in favor of explorable visualizations or virtual spaces we can walk around in.
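As a toy illustration of what "Excel formulas writ large" might look like, here's a minimal sketch of a document whose blocks can be either plain text or live formulas recomputed from named data. Every name here is hypothetical -- this isn't any real product's API, just one way to picture a document that updates itself:

```python
# A toy "dynamic document": blocks are either static text or formulas
# that re-evaluate against the document's named values on every render.

class DynamicDoc:
    def __init__(self):
        self.values = {}   # named data the document can reference
        self.blocks = []   # ordered list of ("text", ...) or ("formula", ...)

    def set(self, name, value):
        self.values[name] = value

    def text(self, s):
        self.blocks.append(("text", s))

    def formula(self, fn):
        # fn is a function of the document's values,
        # re-evaluated each time the document is rendered
        self.blocks.append(("formula", fn))

    def render(self):
        out = []
        for kind, body in self.blocks:
            if kind == "text":
                out.append(body)
            else:
                out.append(str(body(self.values)))
        return "\n".join(out)

doc = DynamicDoc()
doc.set("subscribers", 1200)
doc.text("Newsletter stats:")
doc.formula(lambda v: f"We now have {v['subscribers']} subscribers.")
print(doc.render())

doc.set("subscribers", 1350)  # the prose updates itself on next render
print(doc.render())
```

The point of the sketch is that the "document" stops being a frozen artifact: change the underlying data, and every formula block reflects it, the way a spreadsheet cell does.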

Interfaces beyond the display

It's not only the imagination of software designers that limits our interfaces. There's also a very real, concrete technical limitation: two-dimensional screens, usually with a single pointer. Bret Victor writes in this essay about how even responsive, colorful, high-resolution touchscreens are severely limiting interfaces compared to the diversity and bandwidth of information our bodies can process -- motion, texture, mass, proximity, orientation... Current "interface" technology treats us as something only slightly more advanced than a pair of eyeballs with a few dull pointing fingers. What kinds of interfaces might we imagine if we think beyond the display and the touchscreen? How can computers interact with us the way reality itself does?

Machine meets human

The last area of evolving interfaces, and one of my current favorite areas of research, is using artificial intelligence to augment human intelligence. There is a dizzying array of sub-branches in this thread of ideas. One direction, for example, is designing interfaces where the AI works as a collaborator with the user rather than as a simple tool or feature -- if we embodied the AI with anthropomorphic features like a cursor, a presence, or realistic pauses between interactions, might we take better advantage of it than in interfaces where we simply call on a feature by pressing a button? Another direction of research: I believe generative machine learning models represent a new kind of computing capability. The Distill paper linked earlier in this paragraph calls this a new "cognitive technology", wherein ML models can learn concrete representations (a matrix of numbers) of abstract, implicit concepts like how "formal" a piece of writing is, how "cold" a photograph looks, or the sense in which an idea can be more or less "ambitious". Embodiment -- giving abstract concepts concrete representations -- is a powerful technique for inventing new ways of thinking. I'm curious what evolution in this space will enable our thinking tools to do that we cannot even begin to model with the current crop of tools.
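To make the "concrete representation" idea a little more tangible, here's a minimal sketch of one common technique: if a model's embedding space happens to encode "formality" as a direction, you can estimate that direction from example embeddings and nudge any point along it. The numbers below are made up stand-ins, not the output of any real model:

```python
import numpy as np

# Hypothetical 4-dimensional embeddings of a few example sentences.
# In practice these would come from a real language model.
formal_examples = np.array([[0.9, 0.1, 0.4, 0.2],
                            [0.8, 0.2, 0.5, 0.1]])
informal_examples = np.array([[0.1, 0.9, 0.4, 0.3],
                              [0.2, 0.8, 0.6, 0.2]])

# The "formality direction": the difference of the class means,
# normalized to unit length.
direction = formal_examples.mean(axis=0) - informal_examples.mean(axis=0)
direction /= np.linalg.norm(direction)

def make_more_formal(embedding, amount=0.5):
    """Move an embedding a small step along the formality direction."""
    return embedding + amount * direction

casual = np.array([0.15, 0.85, 0.5, 0.25])
shifted = make_more_formal(casual)

# The shifted embedding scores higher on the formality axis.
print(casual @ direction, shifted @ direction)
```

A fuzzy, implicit concept ("formal") becomes an ordinary vector you can add, scale, and compare -- which is exactly the kind of concrete handle on an abstract idea the paragraph above is pointing at.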

All of these questions, and all the million sub-questions they contain, are what I've been chasing for the last couple of months by reading, tinkering, and building little prototypes. Going forward, I hope to focus my efforts on exploring these kinds of ideas more deliberately and more deeply: How do we think and create with computers today, and what limits us? What can we improve to unlock the interfaces and mediums we have yet to find?

More on this soon :)


Here's what else I've been up to.


A reminder: you can reply to these emails :) They go straight to my inbox.

Wishing you a happy and safe week ahead,

Linus.