The more I think about gradual disempowerment, the more I think I should be a lot less libertarian.
Like, imo the central problem with gradual disempowerment is almost identical to (one of) the central problem(s) of unregulated markets: resources go proportionally to those who produce more value-from-the-point-of-view-of-those-with-resources.
From the point of view of human welfare, this is pretty fine when market participants are all fairly similar in terms of their potential productivity. But when some humans have extremely low potential for productivity, those humans' welfare is totally ignored by the market except insofar as other market participants happen to care about it enough to spend resources on it directly / terminally. This is exactly the situation that all humans will probably be in soon.
And it's not like I know what should happen instead; I'd previously believed that UBI would be sufficient to solve this in the human-only-economy case, since surely humans have at least a bit of built-in terminal empathy on average. But it feels like, in an economy and governance structure where humans and human-sympathizing AIs compete with non-human-sympathizing AIs, we're going to have to solve this in a more robust and permanent way.