LLMs are a huge boost for learning about [fields that are well understood and have lots written about them already], at least if you're me.

Previous attempts to learn category theory went much slower per hour spent than the current one, since, insofar as I had tutoring at all, it was built out of humans.

It's still really hard to tell when they're hallucinating or making mistakes in areas I'm unfamiliar with. There's a crucial skill, at least for now, of noticing when you don't have enough corroborating background to tell whether they're bullshitting, and then going to find that background.

Honestly, this is probably good practice for learning from humans as well.