Upcoming paper and presentation on simplicity
I’ll be presenting my Onward! Essay about simplicity at Splash Pasadena 2024 on October 24th.
Good things take time. This one will have taken almost exactly a full year. Not how I planned it.
What started as a random idea that popped up in my morning pages in October 2023, and then turned first into one post and then into a whole series, will now also become a proper scientific paper (with a DOI and everything! Am I doing this right? This is what I should be excited about, isn’t it?). And I will present it in person in the Onward! Essays track at the Splash conference in Pasadena, California on the 24th of October 2024.
So that is where it started. And here is where we are a year later:
A New Cognitive Perspective on Simplicity in System and Product Design
Abstract
What is simple? How can we make simple things? Simplicity seems easy to grasp but is surprisingly difficult to explain. “We know it when we see it”, but we don’t know how to describe it. This essay reflects on a number of observations about simplicity and complexity. It invites a perspective change towards a dynamical systems view of simplicity, explores how we can explain what makes things simple, and how we can utilize this knowledge as makers, creators, product and system designers to craft simple things.
I can’t quite link to the pre-print of the paper just yet, because I’m still waiting for the last review — academic processes are extremely thorough, I learned. UPDATE: Here it is!
It’s been accepted and the content approved, but getting the LaTeX formatting right has turned out to be almost as challenging as writing the paper in the first place.
However, reading a paper, with nine and a half pages full of text, by yourself, using your own eyeballs, is so 2023. You know what I can do instead? Because it’s 2024, I can feed the paper to an AI that turns it into an 8-minute podcast episode, thanks to Google’s NotebookLM (formerly Project Tailwind). Here you go:
As usual, this is both impressive and scary. It clearly picked up the main themes of the paper and wove them into a coherent conversation between two characters. And of course, a lot is also missing. But it may just be perfect as a teaser. Which kind of assumes that after listening you are still interested in reading the source material.
But let’s face it: the way this is going to play out, tools like NotebookLM will be used so that we no longer have to actually read the sources. Which is kind of what the paper is about. And now you have to read it, because otherwise you won’t understand how meta this just was. Gotcha!
On Artificial Intelligence…
In the paper I allude to AI as one of those things that seem to make things simpler when they really just make them more convenient, which the artificial podcast hosts suitably ignore. To be perfectly clear, the paper is not at all about AI, but it’s getting harder and harder to avoid mentioning it.
That’s been on my mind lately. In the last few weeks I have been, once again, embracing all kinds of flavors of AI to try to poke at the cognitive dissonance I feel about it. By the way, the generated audio for the podcast episode above is still the only AI-generated content in this post. But how would you know if that’s true, anyway?
With the first wave (or perhaps merely a ripple?) of Apple Intelligence reaching our devices shortly, even the last hold-outs will be confronted with ridiculously accessible AI-assisted search, transcription, writing, and summarization tools. These will be so deeply integrated that AI-generated content will soon come to a chat message or email near you, I promise. Which it probably already has, and you just didn’t notice. And once we get the ability to AI-generate custom emojis, it’s going to be so over.
Anyway, my point is: this is going to be our future, whether we are opposed to it or not. There is no opt-out for receiving such content. You can only opt out of participating in creating it. But should you?
I don’t think so, but I’m not going to tell you what you should do.
Hmm, actually, I will. I think we have reached a point with AI similar to the one we reached with search engines years ago. You wouldn’t refuse to type keywords into a search engine and opt out of using search altogether. Maybe you are a little picky about which one, but searching the web is simply how the web works now for most of us. I have a feeling generative AI will be similar.
A few years from now, you will of course use generative AI in some capacity, because that will just be how things work. It would be weird if you didn’t. Hopefully you will still be picky about which kind you use, but it will likely have become a thing “everybody” does. Its convenience is just too irresistible. I think this much is already becoming clear.
What will remain is our struggle to figure out how to use it rationally (in the balanced sense), in a good way, in “the right” way, in a way that doesn’t undermine our collective intelligence, our culture, and our humanity. While I can’t tell you what that’s going to look like, I can already tell you that it’s not going to come built into the commercially driven AI-enhanced products that you will find everywhere around you. We will have to figure it out ourselves.
I don’t think we’re up to the task at all. But I do think there are some emerging frameworks that might help a little.
Mirror of the Self is a series of essays investigating the connection between creators and their creations, trying to understand the process of crafting beautiful objects, products, and art.
Using recent works of cognitive scientist John Vervaeke and design theorist Christopher Alexander, we embark on a journey to find out what enables us to create meaningful things that inspire awe and wonder in the people who know, use, and love them.