How do tools differ from trading partners?
William Kiely likes this.
Reminds me of the "you are bugs" scene in The Three-Body Problem.
> And as we go about changing the world to suit our preferences, the rats will remain unconsulted. It seems clear to me that rats will only get what they want, when what they want happens to be nearly-costless to humans.
This seems like it's making progress towards a formalization, though I think it still struggles.
If you imagine that COVID virions were agents, then it seems to me that although there's a sense in which we're much more powerful than them (humanity could, if "it" wanted, defeat them), they can kinda get what they want without enormous costs to humans. And yet humans are still much more powerful than COVID virions.
Ben Weinstein-Raun likes this.
Hosting providers
Right now superstimul.us is hosted on a Vultr VPS instance. I use Vultr because it's a decently reliable VPS host that offers OpenBSD (though this instance is running in Docker on a Debian system). It's much cheaper than AWS, and comparable in pricing and features to many other VPS providers.
But I just went looking at the prices of competitors, and Hetzner is cheap. How is it so cheap? For roughly the same price I'm paying for this host, I could get ~8x the vCPUs and RAM, and 4x the storage.
It would be a hassle to migrate at this point, but I'm definitely tempted.
Ben Millwood likes this.
Ben Weinstein-Raun likes this.
1. It's *even more* cost-competitive if you participate in their Dutch-style server-capacity auction (a neat idea):
- hetzner.com/sb/
- docs.hetzner.com/robot/general…
- There is also open-source tooling to automate interaction with the auction (a rough sketch of the general idea follows this list).
2. Have no illusions that such infrastructure is physically secure against interference (fun example at: notes.valdikss.org.ru/jabber.r…), and account for such in your threat model if/as appropriate.
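For what it's worth, here's a minimal sketch of the general shape that auction-watching tooling tends to take: poll a listing, filter, and print anything under your price cap. The URL and field names below are placeholders/assumptions for illustration, not Hetzner's real endpoint or schema; see the linked docs or the existing open-source projects for the real thing.

```python
# Hypothetical auction-watcher sketch: fetch a JSON listing of auctioned servers
# and filter for cheap, high-RAM machines. URL and field names are assumptions.
import json
import urllib.request

LISTING_URL = "https://example.invalid/server-auction/live_data.json"  # placeholder, not a real endpoint


def cheap_servers(max_price_eur=40.0, min_ram_gb=64):
    """Return listed servers under the price cap with at least min_ram_gb of RAM."""
    with urllib.request.urlopen(LISTING_URL) as resp:
        listing = json.load(resp)
    servers = listing.get("server", [])  # assumed response shape
    return [
        s for s in servers
        if s.get("price", float("inf")) <= max_price_eur
        and s.get("ram_size", 0) >= min_ram_gb
    ]


if __name__ == "__main__":
    for s in sorted(cheap_servers(), key=lambda s: s["price"]):
        print(f"{s['price']:>7.2f} EUR  {s['ram_size']:>4} GB RAM  {s.get('description', '')}")
```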
Ben Weinstein-Raun likes this.
Notes on the outcome of the groups experiment
I think groups are a pretty half-baked feature, and I don't expect people to use them much. A "group" (previously called a "forum"; most of the documentation hasn't been updated to reflect the change) is basically an account that auto-reshares things it's tagged in. You can give it a few different settings for how it responds to follow requests, and another setting for the visibility of its reshares. To administer a group, you have to create a second account for it (which Friendica makes relatively easy, though not trivial), and then switch to being logged in "as" the group.
So, yeah, I guess it might be useful for coordinating things somehow, and the setting with private reshares is maybe promising (though it's also marked "experimental"). But it seems much less natural to me than the corresponding concept on Facebook.
Sam FM likes this.
Some praise for the behind-the-scenes tech I'm using to run this site
- Tailscale: Lightweight personal VPNs. Tailscale is so good. I don't even need to have an open ssh port on the VPS running this instance, because I can connect over tailscale SSH with zero hassle.
- Caddy: Caddy is like nginx if nginx cared about usability. E.g., it makes it trivial to put an HTTP service behind a TLS proxy. Like, it even manages the Let's Encrypt certificate for you (a sketch of what that config looks like is below the list). Totally wild.
- Docker, and the official Friendica images especially: I hate developing for containers, and avoid it when possible. But when someone else has put in the effort to make a high-quality container image, deployment is genuinely much easier, even for hosting the thing on a VPS.
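To illustrate the Caddy point: a complete Caddyfile for putting a backend behind HTTPS can be as short as this. The local address/port here is just an assumption about how a Friendica container might be exposed; Caddy then obtains and renews the certificate on its own.

```
superstimul.us {
	# Caddy automatically obtains and renews the TLS certificate for this domain.
	reverse_proxy 127.0.0.1:8080  # assumed local port for the Friendica container
}
```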
The plan for the beta
Thanks to everyone who's joined to help beta test! I'm very grateful y'all are here! ❤️
My basic plan is to use superstimul.us for the next week, posting here instead of Facebook, getting a sense of the platform so that I can help other people later, and trying to iron out basic issues if they crop up.
After that, I'm going to do a push to invite clusters of people who I'm especially excited about being here. I'll probably reach out to y'all for names of people who are cruxy for your active enjoyment/participation here (feel free to preemptively message me about this!).
Anybody can invite their friends, btw, though I would slightly prefer you held off for now, because I want to be strategic about the launch.
I might do some kind of incentive / costly-signaling scheme where I give $20 or so to the first 30 people who share a substantive post here and not on other social media? Or something; not sure about that yet.
I'm considering going to the southern hemisphere for December and January, to miss the shortest days in California.
New Zealand and Chile both seem like good options: Tons of sun that time of year, good climate, safe cities, relatively cheap. Chile is a lot cheaper, and after having a lot of fun visiting Mexico, I kind of want to try living in a country where I don't know the language.
kip likes this.
I'm really excited for this experiment! Friendica exceeds my expectations in some ways (looks nice, has imo an especially good privacy model, seems easy to update and administer) and falls short in others (ease of finding people, occasional UI weirdness).
Please let me know if you run into any issues, and I'll try to fix them or at least help resolve them.
Ben Weinstein-Raun likes this.
Sam FM likes this.
Sam FM
in reply to Ben Weinstein-Raun • • •I like this model of agency, but I'm not sure I understand the selection pressure conclusion.
I'm understanding this definition of agency as a matter of perspective, rather than an objective quality. A system is "agentic" if it is easier to describe in terms of goals than in terms of low-level parts. Weather used to feel more agentic (gods of rain and lightning) but now feels more mechanical (modern weather forecasters using instruments and mathematical models).
Humans are so complicated that they're rarely considered from a non-agentic perspective. Human biologists use non-agentic models, but even then, they're typically only building a mechanical model of some small subsystem (cardiology, immunology, etc.).
So it follows that from the perspective of someone with limited understanding, a complicated universe would have more "agentic" systems than a simple universe. But I'm not sure what selection pressure exists to push the universe towards being complicated.
Ben Weinstein-Raun likes this.
Ben Weinstein-Raun
in reply to Sam FM • •I wouldn't guess that weather was ever actually well-modeled as agentic; humans often see faces in random noise, and my guess is that this also happens with agency: since so many relevant things in the EEA (the environment of evolutionary adaptedness) are agentic, it's a decent prior to have for a given phenomenon.
I don't think the primary thing going on here is how complicated a system is, but rather the relative usefulness of different frames. Weather wasn't well-modeled as agentic, but it also wasn't well-modeled as a policy, nor reductively, until people understood more about air pressure and the water cycle. And in the absence of an actually-good model, people fell back on the one with the larger evolutionary prior.
A system can be very complicated without being actually agentic; e.g. the behavior of a randomly selected computer program will be hard to understand from any frame, but I think reductionist or policy-based frames will work better than agentic ones.
Ben Millwood likes this.
Sam FM
in reply to Ben Weinstein-Raun • • •
in reply to Ben Weinstein-Raun • • •Right, I agree "complexity" isn't quite the same as "easy to reduce to policy" An if-then tree can be very large and complicated, while still having a structure that is easy to reduce to deterministic policies.
I think the legibility is closer to the key term here. Many computer systems are intended to feel legible and deterministic to the user, so they generally don't feel agentic. When they behave unpredictably we ascribe them agency with phrases like, "Ugh, my laptop is acting up again."
In cases where algorithms are intentionally hidden, they're often easier to understand in terms of agency. Consider the Pacman ghosts. They generally chase after Pacman, but get scared and run away from him when he becomes powerful. If you don't know the simple algorithms they follow, it's much easier to understand the game in terms of character motivations. (Fun fact: the different colors even have distinct "personalities" to their algorithms. I always had a soft spot for Clyde.)
In the natural world, I expect complicated systems to be illegible-by-default, unless they have s
Right, I agree "complexity" isn't quite the same as "easy to reduce to policy". An if-then tree can be very large and complicated, while still having a structure that is easy to reduce to deterministic policies.
I think legibility is closer to the key concept here. Many computer systems are intended to feel legible and deterministic to the user, so they generally don't feel agentic. When they behave unpredictably, we ascribe them agency with phrases like, "Ugh, my laptop is acting up again."
In cases where algorithms are intentionally hidden, they're often easier to understand in terms of agency. Consider the Pacman ghosts. They generally chase after Pacman, but get scared and run away from him when he becomes powerful. If you don't know the simple algorithms they follow, it's much easier to understand the game in terms of character motivations. (Fun fact: the different colors even have distinct "personalities" to their algorithms. I always had a soft spot for Clyde.)
In the natural world, I expect complicated systems to be illegible-by-default, unless they have some selective pressure to optimize for legibility.
Sam FM
in reply to Ben Weinstein-Raun • • •Okay, so if these complex systems like weather and biology are theoretically best described by some ideal set of policies, then would these complex systems, even the stable self-replicating ones, be considered non-agentic?
I'm struggling to see the fundamental difference between a fire that hungrily eats all the wood in the pile, and me, a person that hungrily eats all the snacks in the pantry. Unless we're considering some ineffable free will, I mostly see the difference that my systems are much more complex and illegible, making it hard to map out the full causal chain from the biochemistry underlying my psychology to my hands reaching for a bag of chips.
Combustion is much simpler, but from some all-knowing perspective, they're both self-sustaining chain reactions of chemistry.
renshin
in reply to Ben Weinstein-Raun • • •This accords with what is taught at MAPLE.
The 'building up' of complexity (towards more 'agency') is due to selection pressure. More intelligent entities with more stable goals and drives toward those goals reproduce faster and proliferate.
These stabler agents tend to collect / eat less 'agentic' stuff (like bananas), churning matter into more complex systems. We are matter-converters, and we convert matter into complexity, in a sense. This is what we would call a function of INT (intelligence).
I think INT and agency kind of go hand in hand.
The Buddhist take on this is that all of this is driven by clinging to Self. Clinging to some idea that "I am" or "I have". This accumulation ends up creating more complex Selves. INT is used to self-preserve through more and more complexity. There is this sense that more complexity is more stable.
What we discover is that it's not actually more stable, it's more deluded. More complexity is correlated with greater delusion. Because we just get better at tricking ourselves into believing we're 'stable entities' but we're actually not. It just looks like that from a certain perspective.
It's hard to explain all this here, but if you're interested in a deeper conversation, I am open to it! Appreciate your write-up, and it seems basically accurate to me.