


Is it just me, or does the current generation of big models produce more typos than the previous generation? Here's one I noticed today from Opus 4.5: "non-drowsy" became "non-drowning".

I've noticed at least two from GPT 5 and 5.1 as well, though I didn't think to screenshot them.



Today in "asking models about their preferences". I find this one a bit uncomfortable, tbh.


Today I'm trying out drawstring underwear. I really like it! Since I've been fat, elastic bands always fold or roll over themselves, unless I get a size so big the elastic doesn't even hold them up. With a drawstring I can choose exactly the right fit, which is typically much less pinchy/constricting than even a moderately well-fitting elastic pair. The big downside is that using the bathroom takes an extra 30s or so for tying/untying the drawstring (though at a urinal you can of course use the fly instead).

I originally bought them because they're ~the only literally 100% cellulose-based underwear you can buy (elastic waistbands are made mostly of spandex/elastane), and I was curious. I think I'll buy at least a few more pairs, and maybe start wearing them by default.



I'm pretty excited about the prospect of making a wallet out of UHMWPE tape. It will be like the duct tape wallets of my youth, except that its material properties will be superior to those of expensive wallets. It will also probably cost more than expensive wallets once I factor in prototyping (before even accounting for time spent).


"social health", as a concept kind of like "mental health", seems pretty interesting to me


They minted the last penny today! Thus ends the era of the US government handing out both required solid pieces of a voltaic cell in one (not very) convenient package.
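
For reference, the electrochemistry behind the joke: post-1982 pennies are a zinc core under copper plating, and zinc/copper is the textbook voltaic pair:

    Zn -> Zn²⁺ + 2e⁻    (anode: zinc is oxidized)
    Cu²⁺ + 2e⁻ -> Cu    (cathode: copper is deposited)

That's about 1.10 V under standard conditions; the electrolyte you still have to supply yourself.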



(warning: posting this for archival purposes rather than because it's interesting at all)

I asked my mom what she remembers her grandparents eating; here's some of what she remembers, a lot of which they grew themselves:

  • Potatoes
  • Cabbage
  • Tomatoes
  • Beans and green beans
  • Liver and onions
  • Chicken
  • Eggs
  • Pork / ham as a treat
  • Blackberries and raspberries
  • Rhubarb
  • Lettuce
  • Homemade bread
  • Corn
  • Peas
in reply to Ben Weinstein-Raun

Adding on here: spinach, parsnips, carrots, celery, lots of onions with other things, cucumbers, summer and winter squash, cottage cheese, apples and pears, cherries in the late spring, grapefruit from Florida or California in the winter, and oranges in the summer.


:o 5M people have watched this (pretty good, imo!) video that includes discussion of some work we did at Palisade


Sad fact: the only way to have a proper medianworld is if you're the only occupant.



I'm in Delaware on a train. Unsurprisingly, no humans visible from the train windows; only distant cars that I assume are autonomous.


A year and a half ago, Google Maps changed their location history feature so that it (a) isn't available in the web app, (b) isn't even saved anywhere other than your phone without explicit intervention on your part, and (c) has a very limited UI compared to how it was before.

Yesterday I got a new phone and transferred everything over, incorrectly assuming this would include my location history, or at least that I had been backing it up (since I definitely explicitly opted into the backup system). Having finished with all of that, I factory reset my old phone, packed it up in the trade-in box, and sent it along. Of course, it turns out the history didn't get transferred and the phone hadn't backed anything up for six months, so now I have no record of where I've been for most of the last six months.

They claimed that this was for "increased privacy and control", but I'd bet at 3:1 that they still use my session geolocation history for advertising, and will happily hand it over to law enforcement if asked. So it feels kind of like everybody has access to it except me.
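
If you do still have an old Takeout export of your location history lying around, pulling the raw points back out is straightforward. A minimal sketch, assuming the older Records.json layout with latitudeE7/longitudeE7 fields (newer exports use different filenames and field names):

    import json

    def load_points(path):
        # Yield (lat, lon, timestamp) from a Takeout-style Records.json.
        with open(path) as f:
            records = json.load(f)["locations"]
        for r in records:
            # E7 coordinates are integer degrees scaled by 10^7.
            yield (
                r["latitudeE7"] / 1e7,
                r["longitudeE7"] / 1e7,
                r.get("timestamp") or r.get("timestampMs"),
            )

    points = list(load_points("Records.json"))
    print(f"{len(points)} recorded points")
    if points:
        print("first:", points[0])
        print("last:", points[-1])

Not that a parser helps once the history itself is gone, of course.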

in reply to Ben Weinstein-Raun

😢 I would be so sad to lose my timeline. I'm really paranoid about it too, because they seem to really want to keep it from going anywhere.


LLMs are a huge boost for learning about [fields that are well understood and have lots written about them already], at least if you're me.

Previous attempts to learn category theory went much more slowly per hour spent than the current one, since insofar as I had tutoring, it was built out of humans.

It's still really hard to tell when they're hallucinating, or making mistakes in areas I'm unfamiliar with. There's a crucial skill, at least for now, of noticing when you don't have enough corroborating background to tell that they're not bullshitting, and then going to find that background.

Honestly this is probably good practice for learning from humans as well.



<Anthropic, to the r/anthropic subreddit>: You're absolutely right to point that out! I see the problem now. It's a subtle bug in the token selection algorithm.