

Finally actually followed through on my long-standing intention to donate blood! Went pretty smoothly, and I'm ~500mL lighter. Next month hopefully I'll try to donate plasma.


On the way home from the coffee shop I encountered a bright blue bird as I walked past a bush, about half a foot from me, which stared at me for a second, quietly squawked, and flew away.

Then soon afterward, in a different bush, I encountered an ooze (pictured, made of bubbly foam) trying to stealth.

I'm concerned that someone has changed the genre of my RPG, in a worrying direction. Will be avoiding taverns.





Advertising, and especially targeted advertising, is widely hated. Something pretty interesting to me: insofar as I'm a rational agent, the amount a given advertiser should pay to show me their ad is positively correlated with how much I want to see that ad. On paper this sounds like an amazing situation, with positive sum trades all around.

But it's easy to observe that most ads are annoying and bad, and that people hate them. wtf is up with this? I don't have time to think about it today, but maybe someone here already knows. @Jeff Kaufman maybe? Or @Daniel Filan ?

in reply to Ben Weinstein-Raun

I would be pretty happy if ads were better. I regularly come across e.g. toddler products that I want to buy, and am very willing to spend money to save time/stress. But these things are almost never advertised to me.

I see *lots* of ads for things I already have, and lots of ads for things that are appropriate for parents of much younger or much older children. I can't actually think of the last time I bought something from an ad, which is shocking considering that I'm in a bunch of baby-related Facebook groups and often get products that people recommend there.

in reply to Ben Weinstein-Raun

Advertisers understand that humans are very manipulable, and are very down to use dark-arts manipulation tactics. I suspect the correlation of values isn't actually very high at all. (Intentionally annoying ads can be very effective.)

I hate having to be constantly on guard from all these attempts to hijack my attention and influence my beliefs/desires. I opt out of targeted marketing whenever I can because I don't want advertising systems to have *even stronger* memetic hooks to grab me with.



Just updated my git auto-wip-branch tool to use GPT-4o instead of 3.5. So much has changed in the 2 years since I made it! But the basic idea is still pretty great, at least for my workflow: github.com/benwr/gwipt


ugh, wtf is it with me and doctors canceling my appointments?

Like, granted, if I were a doctor I would want to cancel benwr's appointments, but how do they know?



Vitrification is an ancient and successful process for ensuring that one's brain makes it thousands of years into the future.


I wrote a plugin to sync flashcards from Logseq (open-source notes app, like a local Roam Research or a list-shaped Obsidian) to Mochi (closed-source spaced repetition app, like a remote / pretty Anki). Works quite well / has a lot of features for a two-weekend side project, though it took more effort than I hoped it would.

github.com/benwr/logseq-mochi-…



My mood tracking app says February was my worst month since I started keeping track in September. Makes sense: surgery recovery, plus GI issues, resulting in more isolation than usual. Hopefully things will improve as the days get longer.


Man, given that LLMs are "dream machines" / "all they can really do is hallucinate", it's wild how much they correctly remember.

Like, Claude 3.7 correctly knows a lot about the API used to write Logseq plugins. Logseq isn't exactly obscure, but it is definitely pretty niche, and the API is based on a relatively obscure database query language and a schema designed specifically for the app.

in reply to Ben Weinstein-Raun

I think they get worse about hallucinating when you ask them for something which doesn't exist.


Nausea is extremely bad for my subjective well-being. I've spent the day mostly in the bathroom due to food poisoning or something, and I feel like this results in suffering comparable per hour to the worst pain I've experienced (which was when I had septic bursitis and my elbow swelled to the size of a tennis ball within a few hours).


I'm pretty sure I'd very gladly have paid 100x the electricity bill and carbon offsets for the whole day, for every time I've been stuck in a bathroom stall when the motion detector shut off the lights.


Human information throughput is allegedly only about 10-50 bits per second. This implies an interesting upper bound, in that the information throughput of biological humanity as a whole can't be higher than around 50 * 10^10 = 500 Gbit/s. I.e., if all distinguishable actions made by humans were perfectly independent, biological humanity as a whole only has 500 Gbit/s of "steering power".

I need to think more about the idea of "steering power" (e.g. some obvious rough edges around amplifying your steering power using external information processing / decision systems), but I have some intuition that one might actually be able to come up with a not-totally-useless concept that lets us say something like "humanity can't stay in 'meaningful control' if we have an unaligned artificial agent with more steering power than humanity, expressed in bits/s".
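A minimal sketch of the arithmetic above, assuming ~10^10 biological humans and the high end (50 bits/s) of the alleged per-person throughput range:

```python
# Upper bound on humanity's aggregate "steering power", assuming every
# person's actions are perfectly independent (an unrealistic best case).

HUMANS = 10**10          # rough order-of-magnitude world population
BITS_PER_SECOND = 50     # high end of the alleged 10-50 bit/s range

total_bits_per_second = HUMANS * BITS_PER_SECOND
print(f"{total_bits_per_second / 1e9:.0f} Gbit/s")  # -> 500 Gbit/s
```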



Usually when people talk about egregores, I think they mostly have ideologies in mind.

There's a somewhat "lower-level" egregore (in the sense of a low-level language, not a low-level demon), that I think is pretty overlooked, that I think of as "emotional cynicism" (to distinguish it as specifically the emotional stance we associate with the word "cynical", and from capital-c Greek Cynicism).

Emotional cynicism seems to me to be near totally dominant in public online discourse, and I think that's both interesting and somewhat concerning.

in reply to Ben Weinstein-Raun

I think I agree with you, though I'm curious for examples of types of things you might see as an expression of emotional cynicism.
in reply to JP Addison

  • a pretty large subset of comments on hacker news, lesswrong, ...
  • ~All reddit/Twitter/bluesky political discourse


Ugh, this guy built my high school dream project and I am simultaneously grinning widely and very jealous.


"I understand your concern, but" / "I share your concern, but"

Recently I've noticed that this phrase seems especially likely to ring hollow: When I hear someone say it about something that feels important to me, I usually don't believe them. Usually the phrase is accompanied by some degree of a "missing mood": If you're concerned, why do you seem to think that a declaration that you understand should be sufficient argument that in fact this concern is a secondary one? How sure are you that you actually understand?

I don't like it when people just straight-up assert that they've considered and rejected your view, without admitting that they might have misunderstood you or miscategorized you. It's like 10x better, imo, to say "I think I understand your concern, but".

in reply to Ben Weinstein-Raun

Yeah it often feels phatic, or what people do when they are trying to appear like a good listener/balanced interlocutor when actually advocating strongly for their own point of view. (Have definitely either done this myself or said things in that same spirit).

What's better? Maybe just 'that makes sense'?

in reply to Ben Weinstein-Raun

It is simply a manipulation intended to protect the speaker's ego



xkcd.com/3039/ wow, for a minute there it sure looked like humans were going to break the speed of light some time in the 80s in order to keep getting higher up


A partial list of people whose art I've loved, and who I might have liked to be friends with, but who I think would not like me very much (all for different reasons):

  • Ursula K. Le Guin (I'm not MtG Green enough)
  • Ayn Rand (I'm too MtG Green)
  • Ezra Koenig (I'm too MtG Blue)

I'm not really sure how or why I generated this list. It feels related to the thing about wanting to get stronger, and deleting my facebook last month. It's kind of an "edge-y" question: I don't know how to emotionally deal with the existence of people in this category, but they go on existing.



Okay, where is the few-shot "reads literally all the article summaries from the whole internet and predicts how much I'd like them" service?
in reply to Ben Weinstein-Raun

I was starting to wonder where it was when GPT 3.5 came out, and now I'm really feeling like it's suspicious
in reply to Ben Weinstein-Raun

I also am really annoyed the "read all my notifications, alert me about the few important ones, and batch summarize the rest" service hasn't been built


An unfortunate thing for me is that I just viscerally really like LLMs, and would like them even more if they were way smarter.


in reply to Ben Weinstein-Raun

I feel like I'm living on the internet these days, since my health is too bad for in-person stuff. So it sucks that the internet is so much more aggro.

> My strategy so far in life has just been to avoid being the kind of person who attracts "sharp" / "angry" critics, and also to filter my social bubble to exclude them. But this doesn't scale if you're trying to do the things I'm trying to do.

What are you doing that is incompatible with filtering your social bubble?

in reply to kip

Mainly: Change what happens in the world on a pretty large scale, while not being Carl Shulman.
in reply to Ben Weinstein-Raun

Re: method "c": I'm wondering if you could intentionally give yourself exposure to critics in a way that's less vulnerable. The most obvious ways to do this might be too insincere for your taste, but idk, maybe there's still something you can do?

Like if I were trying to do this, I might create an anonymous account and intentionally share my most controversial (yet unimportant) thoughts/opinions, in places where some people will probably get mad at me. (Hopefully not in a way that, like, antagonizes people? I'd want it to be net good.) Then I'd try to lean into a mindset that getting criticism is a necessary/normal part of getting noticed.



A mana pool, a ball... uh... a star, rats, a hullabaloo, panama


Idle thought: I wonder if we'll start seeing "training@home" training runs for open-source LLMs. Anyone care to run some numbers or sanity checks on whether this is possible in principle?

The folding@home project has been hugely successful, reaching exaFLOPS-scale compute.

"Training@home" would have to efficiently do partial gradient updates on extremely heterogeneous hardware with widely varying network properties; I'm not sure if this has any chance of producing base models competitive with e.g. Llama. In terms of ops alone, a 1 exaFLOPS network would have taken on the order of 10^7 seconds (roughly 4 months) to train Llama 70b, and I imagine the costs of distributing jobs to such a network and coordinating on weight updates would make this much more expensive. So, probably not going to be competitive?
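A hedged back-of-envelope version of this estimate, using the common ~6 × params × tokens rule of thumb for training FLOPs. The token count and the (pessimistic) utilization figure are my assumptions, not measured values:

```python
# Rough sanity check on the training-time estimate. All inputs are
# assumptions: a Llama-2-70b-scale run, and heavy overhead from
# distributing work across heterogeneous home hardware.

params = 70e9         # 70b-parameter model
tokens = 2e12         # assumed training tokens (Llama-2-era scale)
network_flops = 1e18  # a 1 exaFLOPS folding@home-style network
utilization = 0.1     # guess: distributed-training overhead is severe

train_flops = 6 * params * tokens              # ~6*N*D rule of thumb
seconds = train_flops / (network_flops * utilization)
print(f"{seconds / 86400:.0f} days")           # -> 97 days
```

Even under these charitable assumptions the run takes months; worse utilization or more tokens pushes it well past half a year.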

in reply to Ben Weinstein-Raun

Just this month there was a proof of concept doing distributed training of a 15B parameter model using a new technique to reduce the amount of data that needs to be shared between GPUs, so that it's actually feasible for them to not be co-located. Which is neat! Buuuut they still were using H100s (80GB of memory) as their basic unit of compute. I don't think their technique lets you train models larger than would fit in memory on each GPU, which means any training@home project is going to be limited to single- or low-double-digit billions of parameters. Small models are neat and serve some purposes but we already have a lot of pretty good ones (Llama, Phi, Gemma, NeMo, etc) and it's not clear what the niche would be for a community-trained one. (I mean, porn, I guess, but there's already a lot of NSFW fine-tunes of those models.)
in reply to Kevin Gibbons

I would guess that there will be reasons to at least want an LLM trained on an open corpus, whether it's community-trained or not.

Example reasons include ensuring that the model isn't secretly trying to get you to buy McDonalds, and the possibility that companies start releasing un-fine-tunable models.




Happy new year superstims!


Sparklers are illegal in Alameda county apparently, so I guess I'm off to commit some crimes.




Man, I miss my huge-tree-antenna. Yesterday I set up a big loop antenna along my house's wall. It transmits fine, but the noise it picks up makes it almost useless.
in reply to Ben Weinstein-Raun

Is this a problem that can be solved with money, like by just going ahead and getting an arborist preemptively?
in reply to Gretta Duleba

It definitely can't be solved with only money; it also requires at least coordinating with the landlord, who is a very reasonable person as far as Berkeley rationalist house landlords seem to go, but overall my guess is that it's not worth bothering him about it
in reply to Ben Weinstein-Raun

I am probably being too problem-solvey right now and I hereby resolve to stop after this round, but in my experience, arborists are willing to produce documentation of their findings that can later be shown to landlords!

You just sound sad about your antenna and I wanna fix it.



I've been meaning to start donating blood and/or plasma for a few years now, partly because it's a good thing to do, but also as a way to shed accumulating substances (PFASs have been studied, but also background heavy metals in the case of whole blood donation), but I use topical finasteride for hair loss, which I'd have to stop for a month before donating.

So, say I took a month off from finasteride, and then spent a month donating: whole blood once, and plasma 7 times. If my math is right, I'd have donated / regenerated 1 - 0.92^8 = ~half my blood volume; and ~10% of my body weight. Then maybe back to finasteride for two months, another month of no finasteride, and another donation month?
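The arithmetic above, spelled out (assuming each of the 8 donations removes ~8% of total blood volume, with full regeneration between donations, so each donation dilutes the original blood by a factor of 0.92):

```python
# Fraction of original blood donated/regenerated over a donation month.
# The 8%-per-donation figure is an assumption, not a measured value.

donations = 8          # 1 whole blood donation + 7 plasma donations
fraction_kept = 0.92   # ~8% of volume removed and replaced each time

original_remaining = fraction_kept ** donations
print(f"~{1 - original_remaining:.0%} of original blood replaced")  # -> ~49%
```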

in reply to Ben Weinstein-Raun

It's a shame doctors don't do blood-letting any more.
in reply to Daniel Filan

Maybe precisely in order to incentivize people to donate blood????
in reply to Ben Weinstein-Raun

This doesn't address your musings . . . but I found plasma donation prohibitively unpleasant. It was painful and time-consuming. By comparison, whole blood donation is a simple and easy way to help.


I'm finding it really hard to make #hamradio contacts in Delaware. Weirdly hard, given that the five states with smaller populations than Delaware were all much easier, even though some of them are further from me, and I've had no trouble making contacts in its neighboring states.

A few days ago I decided to try to be more strategic about contacting every US state since I was really close, and I've now spent probably twice the time trying to contact Delaware, as trying to contact all four of the other stragglers combined.

in reply to Ben Weinstein-Raun

I just did the math, and it seems like Delaware is the state with the second-lowest non-urban population. Only Rhode Island should be more difficult
in reply to Ben Weinstein-Raun

Ok, now I looked at the ARRL license counts by state. Going by General+Extra only, modified for non-urban population percentage, Delaware comes out as the worst state.


Today I was inspired to ask ChatGPT for help with my health issues for the first time since o1 was released. It suggested that I might have Cushing's Syndrome, which actually makes a lot of sense. I don't think any doctors ever suggested this directly, but I do have a recollection of a doctor asking me if I was extremely thirsty or urinating a lot (I wasn't), which might have been a question for a relevant differential.

So hopefully tomorrow I'm going to wake up and go get a cortisol test.

in reply to Ben Weinstein-Raun

Hm, cortisol levels are on the high end of normal. I wonder if I did have Cushing's syndrome but am now managing it using ashwagandha and antidepressants.



Merry Christmas superstimulus!




A big chunk of my current best-guess political philosophy is somewhat libertarian, the rough intuition being that in many important respects, things very often go better when people make their own choices, especially about how much things are worth to them.

This is a helpful framework when the agents in your economy / political system are relatively static entities. But as far as I know it doesn't really have anything to say about cases where one agent might mold another agent's preferences, or decide which agents to bring into existence.

Some examples include:

  • having children
  • many aspects of how children are raised
  • building AI agents
in reply to Ben Weinstein-Raun

This suggests that if we want to figure out a liberal philosophy of building AI, we should look to find liberal philosophies of child-rearing.
in reply to Daniel Filan

Also have I tried to sell you on the book "Rationalism, pluralism, and freedom" yet?
in reply to Ben Weinstein-Raun

It's one of these books where there's one idea and the rest of the book is not super interesting once you're sold on the idea but: academic.oup.com/book/2889
in reply to Daniel Filan

It doesn't answer your worries as far as I know, but feels like it offers conceptual vocabulary that's relevant.
in reply to Ben Weinstein-Raun

Population ethics is the ~one area where my moral intuitions bottom out at "there is no actual answer here". Most questions of morality intuitively feel like there is a right answer but thinking about population ethics consistently leaves me with no solid foundations and nowhere to get foundations.

(Related: how should we think about farming animals for meat, given that mostly they wouldn't exist otherwise?)



Proposed fun / slightly edgy party game: Perzendo

Materials: index cards and a pencil, or a google doc.

One player is the perzendo master. This perzendo master writes the names of two people in the room, in a list sorted by some secret property.

The other players take turns. On each turn, a player either proposes a name (of any human, living or dead), or tries to guess the property. The perzendo master puts these names on the list, wherever they fall according to the secret property.

The first player to guess the rule correctly wins.

in reply to Ben Weinstein-Raun

wouldn't the master quite often not know where they fall on the list? or does the property have to be something like "how much I personally want to ask them what's wrong with them" so that there's an answer even for people you've never heard of (presumably, not very much)
in reply to Ben Millwood

Yeah I think you either have to have an "I don't know" bucket, or it has to somehow be always up to the master's impression.


TIL that an experience that I've had ~once every month or so for my whole life, and assumed was near-universal, is actually relatively rare, and correlated with various bad things that I'm not aware of experiencing in relation to it (EBV infection, migraines, head trauma).

Basically, as I experience it (typically right as I'm falling asleep) everything visually starts to feel very small and far away, except that my tongue feels large and cumbersome in my mouth.

It's called Alice in Wonderland Syndrome; other people experience similar size distortions though the details vary a lot.

in reply to Ben Weinstein-Raun

I don't know if we have discussed this . . . but me, too. So maybe it was passed down.
in reply to Ben Weinstein-Raun

I have also experienced this!! Rarely, but enough for me to have noticed the pattern.


I've now rescheduled my entire life around getting a hernia consultation twice, only to have UCSF reschedule at the last minute.




Tried using a portable vertical #hamradio antenna in my back yard this evening, as a replacement for the one I took down from the tree. It worked okay. Nowhere near the coverage I was getting from the 107ft wire, but I did manage to make a couple ft8 QSOs a few states away (South Dakota being the furthest).


Llama 3.3-70b is quite good; I think it's clearly the best local model I've tried. Not quite as good as GPT-4 on things I've tried so far, but I think better than GPT-3.5.


A wind storm two nights ago took a big branch down from the tree my antenna was in, so I took the antenna down until we have a chance to get the tree looked at. Very sad to have to pause my FT8 fun, but even if this is the end for a while I've had a great time.



Pascal's Wager doesn't go far enough:

Granted, the Christian God offers infinite rewards, but as far as I can find this is always in terms of "eternal" life or "eternal" communion with him, and so we can be confident that he is offering rewards only as large as the cardinality of the continuum.

So come on down to Crazy Georg's Omega Plus First Church of G...d: If you can conceive of a God advertising any size of infinite reward, G...d will match it.

in reply to Ben Weinstein-Raun

> Granted, the Christian God offers infinite rewards, but as far as I can find this is always in terms of "eternal" life or "eternal" communion with him, and so we can be confident that he is offering rewards only as large as the cardinality of the continuum.

FWIW I think it's plausible that the Greek words used in the NT don't have this sort of connotation.

in reply to Daniel Filan

I would find this surprising, since I don't model the ancients as having concepts for infinity that could correspond to larger infinities than this
in reply to Ben Weinstein-Raun

I'm imagining that the words / concepts they used were vague enough to include those higher cardinals - e.g. my understanding is that a lot of the words that get translated as "everlasting" could also be translated as "of the ages".
in reply to Ben Weinstein-Raun

I found this relevant and interesting chapter from Unsong by thinking "hmm, but Omega is an ancient word in some sense, and it's been more recently used in the context of infinities... and Jesus also referred to 'alpha and omega' to represent something like infinitude. So I can probably make a joke about kabbalah. Oh, but Scott Alexander will have already done that."

unsongbook.com/interlude-%D7%9…



On Friday I tried to show @Daniel Filan how FT8 works, but I was having a really hard time getting QSOs. I was worried something was wrong with my #hamradio antenna setup, since the internet claimed that band conditions should be good. But this afternoon and evening I had a great time and got 17 QSOs across 5 different bands! So I think Friday must have been something transient rather than my (very janky) setup degrading.

I now have confirmed QSOs in 40 states, and unconfirmed ones in all but 3! (North Dakota, Delaware, and Vermont. Almost managed to get one in Delaware today, but wasn't quite able to complete the protocol) Plus 28 "DX entities" (mostly countries, but includes e.g. Alaska and Hawaii separately) on 6 continents!

Map of listening stations that heard me this afternoon:




Ended up deactivating my facebook yesterday. I wish I could have emotionally handled whatever was going on, but the only way I know how to productively deal with expressions of anger at that depth, apparently doesn't scale past one or two people at a time.

Last night I felt really conflicted about it. Like, I had just been trying to get people to give me harsh feedback, hadn't I? Doesn't this undermine that, or feel like a petty table-flipping move?

I still have some of those worries, but today I'm feeling like it was obviously the right move. Like if I had a gangrenous limb or something and had cut it off: It's pretty awful that I lost a limb, but it's way better than losing my whole self. Plus in this case I can reattach it if I figure out how to get rid of the gangrene.

in reply to Ben Weinstein-Raun

what was going on on FB that made you want to deactivate it, if you want to share? the last thing I looked at of yours seemed to be positively received
in reply to Gina Stuessy

I think this was a different post; basically, I wrote a post about the United Healthcare CEO assassination (the gist was, "it's wrong to express glee about someone's death"). It got a decent number of mildly positive reactions, but also a small cascade of intense negative reactions, a couple of which were kinda vicious.
