in reply to Ben Weinstein-Raun

I've heard this from my mother rather than from Knife Guy YouTube. My anecdotal experience of kitchen work is that sharpening my knives does make a huge difference in a) the amount of force I use for basic tasks and b) the likelihood the knife will slip while doing them, so I do feel the advice holds, at least for regular household tasks and moderate vs. no knife care. I hone my knives every few uses for this reason, but I don't sharpen them often, so they're probably not "knife guy" sharp? They're currently easy to work with, so getting them sharper would probably have diminishing returns.
in reply to Jen Blight

Yeah I think I buy it for fixed-blade knives, especially kitchen knives that are used for chopping and repeated fast slicing, but pocket knives are just so fiddly and used for such non-repetitive tasks that it seems pretty different.
in reply to Ben Weinstein-Raun

It's also very not good to use a much sharper knife than you're used to. That's part of how I cut off 2mm of my thumbtip several years ago.


I've been getting really into pocket knives this week, and especially learning about knife steels. The biggest surprise has been that one of the several aspects of Atlas Shrugged that caused me to lose suspension of disbelief has become much more believable in retrospect:

When I read it I felt like this whole part about the "guy invents a new metal that's just straightforwardly better than existing metals, and names it after himself" was just too farfetched.

But it turns out that actually this is just a thing that can happen. This guy Larrin Thomas basically straight-up did this with a knife steel alloy in 2021. The alloy is notably better than others for the purposes of pocket knives in almost every respect. Like, in any single dimension there are steels that do better, but this alloy is like the Hawaii of the knife steel Pareto frontier. He didn't name it after himself, but I think he could have called it "Larrin Metal" if he had wanted to. He actually called it "MagnaCut".

in reply to Ben Weinstein-Raun

I guess my intuition is that alloys would be "smooth" in their properties. You add more chromium and certain properties increase. Maybe they stop increasing or start reversing after a while, but it's not hard to find the optimal points for each property over time.

With that intuition, it seems surprising that there are good new alloys that haven't already been found.

in reply to JP Addison



Have been wearing minimalist/"barefoot" sandals for the last couple days, and it feels somehow unhinged to say this, but I think they're making me feel noticeably happier?

Like, it reminds me of the thing where anosmia is linked to depression. It's like I regained a nontrivial part of my sense of "what's going on in the world".



Finally actually followed through on my long-standing intention to donate blood! Went pretty smoothly, and I'm ~500mL lighter. Next month hopefully I'll try to donate plasma.
in reply to Ben Weinstein-Raun

Proud of ya! Please make sure you understand the whole plasma donation process before you go forward with that. As I have said elsewhere, my experience with plasma donation was not good . . . so of course, I am concerned for you.


On the way home from the coffee shop, as I walked past a bush, I encountered a bright blue bird about half a foot from me; it stared at me for a second, quietly squawked, and flew away.

Then soon afterward, in a different bush, I encountered an ooze (pictured, made of bubbly foam) trying to stealth.

I'm concerned that someone has changed the genre of my RPG, in a worrying direction. Will be avoiding taverns.

in reply to Ben Weinstein-Raun

Indigo Bunting? Western Bluebird? Scrub Jay? (Of course I don't understand the humor . . . but I bet if I did I would laugh!)




Advertising, and especially targeted advertising, is widely hated. Something pretty interesting to me: insofar as I'm a rational agent, the amount a given advertiser should pay to show me their ad is positively correlated with how much I want to see that ad. On paper this sounds like an amazing situation, with positive sum trades all around.

But it's easy to observe that most ads are annoying and bad, and that people hate them. wtf is up with this? I don't have time to think about it today, but maybe someone here already knows. @Jeff Kaufman maybe? Or @Daniel Filan ?

in reply to Ben Weinstein-Raun

I would be pretty happy if ads were better. I regularly come across e.g. toddler products that I want to buy, and am very willing to spend money to save time/stress. But these things are almost never advertised to me.

I see *lots* of ads for things I already have, and lots of ads for things that are appropriate for parents of much younger or much older children. I can't actually think of the last time I bought something from an ad, which is shocking considering that I'm in a bunch of baby-related Facebook groups and often get products that people recommend there.

in reply to Ben Weinstein-Raun

Advertisers understand that humans are very manipulatable, and are very down to use dark-arts manipulation tactics. I don't suspect the correlation of values to actually be very high at all. (Intentionally annoying ads can be very effective.)

I hate having to be constantly on guard from all these attempts to hijack my attention and influence my beliefs/desires. I opt out of targeted marketing whenever I can because I don't want advertising systems to have *even stronger* memetic hooks to grab me with.



Just updated my git auto-wip-branch tool to use GPT-4o instead of 3.5. So much has changed in the 2 years since I made it! But the basic idea is still pretty great, at least for my workflow: github.com/benwr/gwipt


ugh, wtf is it with me and doctors canceling my appointments?

Like, granted, if I were a doctor I would want to cancel benwr's appointments, but how do they know?



Vitrification is an ancient and successful process for ensuring that one's brain makes it thousands of years into the future.


I wrote a plugin to sync flashcards from Logseq (open-source notes app, like a local Roam Research or a list-shaped Obsidian) to Mochi (closed-source spaced repetition app, like a remote / pretty Anki). Works quite well / has a lot of features for a two-weekend side project, though it took more effort than I hoped it would.

github.com/benwr/logseq-mochi-…



My mood tracking app says February was my worst month since I started keeping track in September. Makes sense: surgery recovery, plus GI issues, resulting in more isolation than usual. Hopefully things will improve as the days get longer.


Man, given that LLMs are "dream machines" / "all they can really do is hallucinate", it's wild how much they correctly remember.

Like, Claude 3.7 correctly knows a lot about the API used to write Logseq plugins. Logseq isn't exactly obscure, but it is definitely pretty niche, and the API is based on a relatively obscure database query language and a schema designed specifically for the app.

in reply to Ben Weinstein-Raun

I think they get worse about hallucinating when you ask them for something which doesn't exist.


Nausea is extremely bad for my subjective well-being. I've spent the day mostly in the bathroom due to food poisoning or something, and I feel like this results in suffering comparable per hour to the worst pain I've experienced (which was when I had septic bursitis and my elbow swelled to the size of a tennis ball within a few hours).


I'm pretty sure I'd very gladly have paid 100x the electricity bill and carbon offsets for the whole day, for every time I've been stuck in a bathroom stall when the motion detector shut off the lights.


Human information throughput is allegedly only about 10-50 bits per second. This implies an interesting upper bound: the information throughput of biological humanity as a whole can't be higher than around 50 bits/s * 10^10 people = 500 Gbit/s. I.e., if all distinguishable actions made by humans were perfectly independent, biological humanity as a whole only has 500 Gbit/s of "steering power".

I need to think more about the idea of "steering power" (e.g. some obvious rough edges around amplifying your steering power using external information processing / decision systems), but I have some intuition that one might actually be able to come up with a not-totally-useless concept that lets us say something like "humanity can't stay in 'meaningful control' if we have an unaligned artificial agent with more steering power than humanity, expressed in bits/s".
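As a quick back-of-the-envelope (a sketch in Python, taking the ~50 bits/s figure and a ~10^10 population at face value, and assuming perfectly independent actions):

```python
# Upper bound on humanity's aggregate "steering power", assuming
# ~50 bits/s per person, ~10^10 people, and perfectly independent actions.
bits_per_person_per_s = 50
population = 10**10

total_bits_per_s = bits_per_person_per_s * population
print(f"{total_bits_per_s:.0e} bit/s = {total_bits_per_s / 1e9:.0f} Gbit/s")
# -> 5e+11 bit/s = 500 Gbit/s
```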



Usually when people talk about egregores, I think they mostly have ideologies in mind.

There's a somewhat "lower-level" egregore (in the sense of a low-level language, not a low-level demon) that I think is pretty overlooked, which I think of as "emotional cynicism" (to pick out specifically the emotional stance we associate with the word "cynical", as distinct from capital-C Greek Cynicism).

Emotional cynicism seems to me to be near totally dominant in public online discourse, and I think that's both interesting and somewhat concerning.

in reply to Ben Weinstein-Raun

I think I agree with you, though I'm curious for examples of types of things you might see as an expression of emotional cynicism.
in reply to JP Addison

  • a pretty large subset of comments on hacker news, lesswrong, ...
  • ~All reddit/Twitter/bluesky political discourse


Ugh, this guy built my high school dream project and I am simultaneously grinning widely and very jealous.


"I understand your concern, but" / "I share your concern, but"

Recently I've noticed that this phrase seems especially likely to ring hollow: When I hear someone say it about something that feels important to me, I usually don't believe them. Usually the phrase is accompanied by some degree of a "missing mood": If you're concerned, why do you seem to think that a declaration that you understand should be sufficient argument that in fact this concern is a secondary one? How sure are you that you actually understand?

I don't like it when people just straight-up assert that they've considered and rejected your view, without admitting that they might have misunderstood you or miscategorized you. It's like 10x better, imo, to say "I think I understand your concern, but".

in reply to Ben Weinstein-Raun

Yeah it often feels phatic, or what people do when they are trying to appear like a good listener/balanced interlocutor when actually advocating strongly for their own point of view. (Have definitely either done this myself or said things in that same spirit).

What's better? Maybe just 'that makes sense'?

in reply to Ben Weinstein-Raun

It is simply a manipulation intended to protect the speaker's ego.



xkcd.com/3039/ wow, for a minute there it sure looked like humans were going to break the speed of light some time in the 80s in order to keep getting higher up


A partial list of people whose art I've loved, and who I might have liked to be friends with, but who I think would not like me very much (all for different reasons):

  • Ursula K. Le Guin (I'm not MtG Green enough)
  • Ayn Rand (I'm too MtG Green)
  • Ezra Koenig (I'm too MtG Blue)

I'm not really sure how or why I generated this list. It feels related to the thing about wanting to get stronger, and deleting my facebook last month. It's kind of an "edge-y" question: I don't know how to emotionally deal with the existence of people in this category, but they go on existing.



Okay, where is the few-shot "reads literally all the article summaries from the whole internet and predicts how much I'd like them" service?
in reply to Ben Weinstein-Raun

I was starting to wonder where it was when GPT 3.5 came out, and now I'm really feeling like it's suspicious
in reply to Ben Weinstein-Raun

I'm also really annoyed that the "read all my notifications, alert me about the few important ones, and batch-summarize the rest" service hasn't been built


An unfortunate thing for me is that I just viscerally really like LLMs, and would like them even more if they were way smarter.


in reply to Ben Weinstein-Raun

I feel like I'm living on the internet these days, since my health is too bad for in-person stuff. So it sucks that the internet is so much more aggro.

> My strategy so far in life has just been to avoid being the kind of person who attracts "sharp" / "angry" critics, and also to filter my social bubble to exclude them. But this doesn't scale if you're trying to do the things I'm trying to do.

What are you doing that is incompatible with filtering your social bubble?

in reply to kip

Mainly: Change what happens in the world on a pretty large scale, while not being Carl Shulman.
in reply to Ben Weinstein-Raun

Re: method "c": I'm wondering if you could intentionally give yourself exposure to critics in a way that's less vulnerable. The most obvious ways to do this might be too insincere for your taste, but idk, maybe there's still something you can do?

Like if I were trying to do this, I might create an anonymous account and intentionally share my most controversial (yet unimportant) thoughts/opinions, in places where some people will probably get mad at me. (Hopefully not in a way that, like, antagonizes people? I'd want it to be net good.) Then I'd try to lean into a mindset that getting criticism is a necessary/normal part of getting noticed.



A mana pool, a ball... uh... a star, rats, a hullabaloo, panama


Idle thought: I wonder if we'll start seeing "training@home" training runs for open-source LLMs. Anyone care to run some numbers or sanity checks on whether this is possible in principle?

The folding@home project has been hugely successful, reaching exaFLOPS levels of compute.

"Training@home" would have to efficiently do partial gradient updates on extremely heterogeneous hardware with widely varying network properties; I'm not sure if this has any chance of producing base models competitive with e.g. Llama. In terms of ops alone, a 1 exaFLOPS network would have taken 10^7 seconds = ~half a year to train Llama 70b, and I imagine the costs of distributing jobs to such a network and coordinating on weight updates would make this much more expensive. So, probably not going to be competitive?

in reply to Ben Weinstein-Raun

Just this month there was a proof of concept doing distributed training of a 15B parameter model using a new technique to reduce the amount of data that needs to be shared between GPUs, so that it's actually feasible for them to not be co-located. Which is neat! Buuuut they still were using H100s (80GB of memory) as their basic unit of compute. I don't think their technique lets you train models larger than would fit in memory on each GPU, which means any training@home project is going to be limited to single- or low-double-digit billions of parameters. Small models are neat and serve some purposes but we already have a lot of pretty good ones (Llama, Phi, Gemma, NeMo, etc) and it's not clear what the niche would be for a community-trained one. (I mean, porn, I guess, but there's already a lot of NSFW fine-tunes of those models.)
in reply to Kevin Gibbons

I would guess that there will be reasons to at least want an LLM trained on an open corpus, whether it's community-trained or not.

Example reasons include ensuring that the model isn't secretly trying to get you to buy McDonalds, and the possibility that companies start releasing un-fine-tunable models.




Happy new year superstims!


Sparklers are illegal in Alameda county apparently, so I guess I'm off to commit some crimes.




Man, I miss my huge-tree-antenna. Yesterday I set up a big loop antenna along my house's wall. It transmits fine, but the noise it picks up makes it almost useless.
in reply to Ben Weinstein-Raun

Is this a problem that can be solved with money, like by just going ahead and getting an arborist preemptively?
in reply to Gretta Duleba

It definitely can't be solved with only money; it also requires at least coordinating with the landlord, who is a very reasonable person as far as Berkeley rationalist house landlords seem to go, but overall my guess is that it's not worth bothering him about it
in reply to Ben Weinstein-Raun

I am probably being too problem-solvey right now and I hereby resolve to stop after this round, but in my experience, arborists are willing to produce documentation of their findings that can later be shown to landlords!

You just sound sad about your antenna and I wanna fix it.



I've been meaning to start donating blood and/or plasma for a few years now, partly because it's a good thing to do, but also as a way to shed accumulating substances (PFASs have been studied, but also background heavy metals in the case of whole blood donation). However, I use topical finasteride for hair loss, which I'd have to stop for a month before donating.

So, say I took a month off from finasteride, and then spent a month donating: whole blood once, and plasma 7 times. If my math is right, I'd have donated / regenerated 1 - 0.92^8 = ~half my blood volume; and ~10% of my body weight. Then maybe back to finasteride for two months, another month of no finasteride, and another donation month?
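Checking that arithmetic (a sketch; the 0.92-per-donation retention factor is from the estimate above, and it assumes blood volume fully regenerates between donations):

```python
# Fraction of original blood remaining after 8 donations,
# each removing ~8% of blood volume (retention factor 0.92).
retained_per_donation = 0.92
donations = 8  # 1 whole blood + 7 plasma

remaining = retained_per_donation ** donations
print(f"donated/regenerated fraction: {1 - remaining:.2f}")
# -> 0.49, i.e. roughly half the original blood volume turned over
```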

in reply to Ben Weinstein-Raun

It's a shame doctors don't do blood-letting any more.
in reply to Daniel Filan

Maybe precisely in order to incentivize people to donate blood????
in reply to Ben Weinstein-Raun

This doesn't address your musings . . . but I found plasma donation prohibitively unpleasant. It was painful and time-consuming. By comparison, whole blood donation is a simple and easy way to help.


I'm finding it really hard to make #hamradio contacts in Delaware. Weirdly hard, given that the five states with smaller populations than Delaware were all much easier, even though some of them are further from me, and I've had no trouble making contacts in its neighboring states.

A few days ago I decided to try to be more strategic about contacting every US state since I was really close, and I've now spent probably twice the time trying to contact Delaware, as trying to contact all four of the other stragglers combined.

in reply to Ben Weinstein-Raun

I just did the math, and it seems like Delaware is the state with the second-lowest non-urban population. Only Rhode Island should be more difficult.
in reply to Ben Weinstein-Raun

Ok, now I looked at the ARRL license counts by state. Going by General+Extra only, modified for non-urban population percentage, Delaware comes out as the worst state.
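In sketch form, that ranking method looks like this (the numbers are placeholders purely to illustrate the formula, not real ARRL license counts or census figures):

```python
# Hypothetical illustration: score each state by
# (General + Extra licensees) * (non-urban population fraction);
# a lower score suggests fewer reachable home-station operators.
states = {
    "Delaware":     {"licenses": 900,  "non_urban_frac": 0.17},  # placeholder
    "Rhode Island": {"licenses": 1800, "non_urban_frac": 0.09},  # placeholder
}
for name, s in states.items():
    print(name, round(s["licenses"] * s["non_urban_frac"]))
```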


Today I was inspired to ask ChatGPT for help with my health issues for the first time since o1 was released. It suggested that I might have Cushing's Syndrome, which actually makes a lot of sense. I don't think any doctors ever suggested this directly, but I do have a recollection of a doctor asking me if I was extremely thirsty or urinating a lot (I wasn't), which might have been a question for a relevant differential.

So hopefully tomorrow I'm going to wake up and go get a cortisol test.

in reply to Ben Weinstein-Raun

Hm, cortisol levels are on the high end of normal. I wonder if I did have Cushing's syndrome but am now managing it using ashwagandha and antidepressants.



Merry Christmas superstimulus!




A big chunk of my current best-guess political philosophy is somewhat libertarian, the rough intuition being that in many important respects, things very often go better when people make their own choices, especially about how much things are worth to them.

This is a helpful framework when the agents in your economy / political system are relatively static entities. But as far as I know it doesn't really have anything to say about cases where one agent might mold another agent's preferences, or decide which agents to bring into existence.

Some examples include:

  • having children
  • many aspects of how children are raised
  • building AI agents
in reply to Ben Weinstein-Raun

This suggests that if we want to figure out a liberal philosophy of building AI, we should look to find liberal philosophies of child-rearing.
in reply to Daniel Filan

Also have I tried to sell you on the book "Rationalism, pluralism, and freedom" yet?
in reply to Ben Weinstein-Raun

It's one of these books where there's one idea and the rest of the book is not super interesting once you're sold on the idea but: academic.oup.com/book/2889
in reply to Daniel Filan

It doesn't answer your worries as far as I know, but feels like it offers conceptual vocabulary that's relevant.
in reply to Ben Weinstein-Raun

Population ethics is the ~one area where my moral intuitions bottom out at "there is no actual answer here". Most questions of morality intuitively feel like there is a right answer but thinking about population ethics consistently leaves me with no solid foundations and nowhere to get foundations.

(Related: how should we think about farming animals for meat, given that mostly they wouldn't exist otherwise?)



Proposed fun / slightly edgy party game: Perzendo

Materials: index cards and a pencil, or a google doc.

One player is the perzendo master. This perzendo master writes the names of two people in the room, in a list sorted by some secret property.

The other players take turns. On each turn, a player either proposes a name (of any human, living or dead), or tries to guess the property. The perzendo master puts these names on the list, wherever they fall according to the secret property.

The first player to guess the property correctly wins.
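A minimal sketch of the mechanics (the secret property and the names below are made up, just to illustrate):

```python
# The master's secret property is any function from a name to a sortable value.
def secret_property(name: str) -> int:
    return len(name)  # example secret: length of the name

sorted_names: list[str] = []

def propose(name: str) -> None:
    """The master inserts the proposed name where the secret property puts it."""
    sorted_names.append(name)
    sorted_names.sort(key=secret_property)
    print(sorted_names)

propose("Euclid")
propose("Ada Lovelace")
propose("Grace Hopper")
# Players watch where each new name lands, and try to guess the property.
```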

in reply to Ben Weinstein-Raun

wouldn't the master quite often not know where they fall on the list? or does the property have to be something like "how much I personally want to ask them what's wrong with them" so that there's an answer even for people you've never heard of (presumably, not very much)
in reply to Ben Millwood

Yeah I think you either have to have an "I don't know" bucket, or it has to somehow be always up to the master's impression.


TIL that an experience that I've had ~once every month or so for my whole life, and assumed was near-universal, is actually relatively rare, and correlated with various bad things (EBV infection, migraines, head trauma) that I'm not aware of experiencing in connection with it.

Basically, as I experience it (typically right as I'm falling asleep) everything visually starts to feel very small and far away, except that my tongue feels large and cumbersome in my mouth.

It's called Alice in Wonderland Syndrome; other people experience similar size distortions though the details vary a lot.

in reply to Ben Weinstein-Raun

I don't know if we have discussed this . . . but me, too. So maybe it was passed down.
in reply to Ben Weinstein-Raun

I have also experienced this!! Rarely, but enough for me to have noticed the pattern.


I've now rescheduled my entire life around getting a hernia consultation twice, only to have UCSF reschedule at the last minute.




Tried using a portable vertical #hamradio antenna in my back yard this evening, as a replacement for the one I took down from the tree. It worked okay. Nowhere near the coverage I was getting from the 107ft wire, but I did manage to make a couple ft8 QSOs a few states away (South Dakota being the furthest).


Llama 3.3-70b is quite good; I think it's clearly the best local model I've tried. Not quite as good as GPT-4 on things I've tried so far, but I think better than GPT-3.5.


A wind storm two nights ago took a big branch down from the tree my antenna was in, so I took the antenna down until we have a chance to get the tree looked at. Very sad to have to pause my FT8 fun, but even if this is the end for a while I've had a great time.