Maps and compasses

Hi friends,

(welcome to my email dispatch! You can sign up for these or read the archive at buttondown.email/thesephist 💌)

I’ve been thinking about the idea that there are dual motivations behind anything an artist or creator makes.

The first is simply to make the thing, whether because it’s useful, because it’s profitable, or because it satisfies their desire to create. It’s a very obvious motivation, intrinsic to the work itself.

But often there’s a less obvious motivation as well, in which creating a specific artifact is one step in a longer series towards articulating or validating some larger perspective.

Bret Victor once wrote about how Doug Engelbart’s prototypes and writing were flattened by the media, perceived as individual inventions rather than as a collection of perspectives on a particular vision for the future, perspectives that had to be assembled to see the world the way Engelbart did. The same idea applies to other creative domains.

In each case, there is some larger thesis behind the work that extends beyond the scope of a single creative artifact, a thesis that’s easier to see when we zoom out to look at a career or a portfolio of work rather than a single piece.

Victor put it best: “The least important question you can ask about Engelbart is, ‘What did he build?’ By asking that question, you put yourself in a position to admire him, to stand in awe of his achievements, to worship him as a hero. But worship isn't useful to anyone. Not you, not him.

“The most important question you can ask about Engelbart is, ‘What world was he trying to create?’ By asking that question, you put yourself in a position to create that world yourself.”

I think when we look at the work of researchers, artists, or engineers, we tend to take only the first perspective. But for the people creating the work, the second motivation, building towards a greater vision of the world, is often the more important one, with each piece of work serving only as an instrument towards that vision.

Conversely, if you are a creator, this “portfolio” framing of intent may be comforting. You don’t have to squeeze everything you want to say into your next creative artifact. Over time, your body of work may speak for itself.


What I read

I deeply enjoyed Ken Liu’s short story Single-Bit Error, about science and faith and love. It was my favorite thing I read this week.

Gytis published his work on steering image generation with interpretable CLIP features. You can explore and play with his demo interface and features in the Feature Lab, which lets you create images in a totally different way than prompting or drawing. My favorite part of his work is the idea of a feature icon, a small thumbnail that visually represents a kind of “platonic ideal” of a feature. It serves as a visual label for a feature and feels to me like a new notation for visual concepts.

I really enjoyed this write-up of how Leap Motion designed their VR/AR-native one-handed navigation UI.

My coworker Jordan published a note called “My Career as a Series of Emails”. If you work in tech, and especially if you’re (like me) early in your career, I think you’ll find a lot to like.

We saw two major generative video model companies launch frontier models: Dream Machine from Luma and Gen-3 Alpha from Runway. I think it’s notable that both launches emphasized steerability of generation at least as much as the fidelity of generated video. As models generate more complex media, the interface we use to interact with the model will be the key to unlocking further progress.


What I’m working on

I published my big research blog post on interpreting text embeddings with sparse autoencoders. It’s definitely the most detailed, in-depth piece I’ve ever published on my blog, and it tries to bridge the gap between my interface thinking and my technical explorations. You can read it here.

Midjourney released a personalization feature, and I wrote about some of my thoughts on that in Personalization, measuring with taste, and intrinsic interfaces.

I started using a tool called Nomic Atlas to explore embedding spaces. In addition to some visualizations in my blog post above, I made a map of 1M Wikipedia articles based on the text embedding features they express. Check it out here.

I’ve started training experiments for the next version of Contra, my text autoencoder model that translates text embeddings back into text. This version benefits from a much more diverse dataset and a better architecture that I’m still validating.


A reminder: you can reply to these emails :-) They go straight to my inbox.

Wishing you a happy and safe week ahead,

— Linus