Unsurprisingly, I’ve been reading a lot about artificial intelligence (AI) lately. AI is deeply interwoven with the subject matter of this newsletter, and I want to write a lot more about that. But as I am traveling this week and also fighting off some bug, I’m going to keep it short today and simply pose a question that might inspire some thinking.
Concerns about AI range from it taking our jobs all the way to eventually enslaving us, which is why we need to make sure that we “align” (another word for control) AI so that it only does what we really want it to do.
What if the current state of technology (in artificial intelligence, but also more generally) is rooted in a mechanical-objectivist world view in which we ignore acting agents, or reduce them to mere observers of a universe that doesn’t really need us for anything? We just happen to be around for mysterious but ultimately unimportant reasons, in a universe that follows strict rules we have mostly discovered, with just a few tough ones outstanding. But in due time, we will crack those too. Yay, technological progress!
With such a world view, I can easily imagine a world controlled by artificial general intelligence (AGI) emerging out of a history of mechanistic objectivism that doesn’t see any value or utility in humans, because everything such an AGI values is exactly what we value today: power and efficiency.
Most of our interactions are results-oriented and transactional. We don’t care who or what we interact with, as long as we get the result we expect. We control and manipulate (in the having mode) to make things happen, sometimes with unintended consequences, but screw those consequences: if we ultimately end up with what we desire, they might be worth it.
In a value system worshipping utility and efficiency, where “life forms” are mere containers (machines) for genes and memes (information) to proliferate, where understanding is reducible to simple chains of cause and effect (with perhaps some probability sprinkled in), and where reasoning is equivalent to computation, I’m beginning to think that an AGI seeing nothing useful or valuable in humans would just be a natural reflection of our own values today: our own ultimate bias, encoded in the models.
However, if you have been reading along, you know that I am not worried that this is what our future will look like. In such a value system, art and beauty have no reason to exist. But last time I checked, even though we’re trying our best to get rid of these inefficient and useless anomalies of humanity, they are still around.
Imagine an alternative value system based on relatedness, where we cultivate practices to increase our awareness of the agents we interact with, where we feel connected to them, where we feel empathy towards them, where we care about them, and where we can expect them, in turn, to feel and interact with us the same way.
If we treated other human intelligences that way today, would the technological systems we design under such a value system look different from the ones we design now? And would an AGI emerging out of such a culture frame things differently?
Mirror of the Self is a weekly newsletter series trying to explain the connection between creators and their creations, and analyze the process of crafting beautiful objects, products, and art.
Using recent works of cognitive scientist John Vervaeke and design theorist Christopher Alexander, we embark on a journey to find out what enables us to create meaningful things that inspire awe and wonder in the people who know, use, and love them.
If you are new to this series, start here: 01 • A secular definition of sacredness.
Overview and synopsis of articles 01-13: Previously… — A Recap.
Overview and synopsis of articles 14-26: Previously… — recap #2.