I wrote a plugin to sync flashcards from Logseq (open-source notes app, like a local Roam Research or a list-shaped Obsidian) to Mochi (closed-source spaced repetition app, like a remote / pretty Anki). Works quite well / has a lot of features for a two-weekend side project, though it took more effort than I hoped it would.
github.com/benwr/logseq-mochi-sync: One-way synchronization of flashcards from Logseq to Mochi
Etymology joke
First you need to know about the fish with the scientific name of _Boops boops_, which is real. This means ‘cow face cow face’ and is not a joke. If you like you can pronounce it ‘bow-ops’ to emphasise the etymology, as in ‘coöperation’. The ‘ops’ part is the bit that means ‘face’ in Ancient Greek.
The joke is this: that the etymology of _oops_ is ‘egg face’ — from the phrase “there is egg on my face” — and is pronounced oöps. (The Ancient Greek word for egg is ‘oon’ which gives us ‘oomancy’, ‘divination by eggs’.)
Man, given that LLMs are "dream machines" / "all they can really do is hallucinate", it's wild how much they correctly remember.
Like, Claude 3.7 correctly knows a lot about the API used to write Logseq plugins. Logseq isn't exactly obscure, but it is definitely pretty niche, and the API is based on a relatively obscure database query language and a schema designed specifically for the app.
Run-time type checking is way more useful than I expected. I've been using it in Julia for 4 years now, and I expected it to provide ~25% of the value of static type checking, but it's actually been closer to 90%.
I guess it's because when I'm developing, I'm constantly running code anyway, either through a notebook or tests. And the change -> run loop in Julia is not noticeably slower than the change -> compile loop in Scala.
The big exception is when I have code that can only reasonably be run on a remote machine and takes 5+ minutes to set up/execute. Then I'd really like more static analysis.
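A minimal sketch of the mechanism in Julia (`mean_score` is a made-up function, just for illustration): annotated arguments are checked at run time via dispatch, so a bad call fails the moment you run it, which in a notebook-driven loop is almost immediate.

```julia
# Hypothetical function; Julia checks the argument annotation when it dispatches.
function mean_score(xs::Vector{Float64})::Float64
    return sum(xs) / length(xs)
end

mean_score([1.0, 2.0, 3.0])  # 2.0
mean_score(["a", "b"])       # MethodError, raised as soon as this line runs
```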
Human information throughput is allegedly only about 10-50 bits per second. This implies an interesting upper bound: with roughly 10^10 living humans, the information throughput of biological humanity as a whole can't be higher than around 50 * 10^10 = 500 Gbit/s. I.e., if all distinguishable actions made by humans were perfectly independent, biological humanity as a whole only has 500 Gbit/s of "steering power".
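Spelled out (back-of-envelope; the ~10^10 population figure is just world population rounded up):

```julia
bits_per_person = 50          # bits/s, top of the cited 10-50 range
population = 10^10            # living humans, rounded up
bits_per_person * population  # 500_000_000_000 bits/s = 500 Gbit/s
```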
I need to think more about the idea of "steering power" (e.g. some obvious rough edges around amplifying your steering power using external information processing / decision systems), but I have some intuition that one might actually be able to come up with a not-totally-useless concept that lets us say something like "humanity can't stay in 'meaningful control' if we have an unaligned artificial agent with more steering power than humanity, expressed in bits/s".
Usually when people talk about egregores, I think they mostly have ideologies in mind.
There's a somewhat "lower-level" egregore (in the sense of a low-level language, not a low-level demon) that I think is pretty overlooked: what I think of as "emotional cynicism" (meaning specifically the emotional stance we associate with the word "cynical", as distinct from capital-C Greek Cynicism).
Emotional cynicism seems to me to be near totally dominant in public online discourse, and I think that's both interesting and somewhat concerning.
- a pretty large subset of comments on hacker news, lesswrong, ...
- ~All reddit/Twitter/bluesky political discourse
Yet another short AXRP episode!
With Anthony Aguirre!
The Future of Life Institute is one of the oldest and most prominent organizations in the AI existential safety space, working on such topics as the AI pause open letter and how the EU AI Act can be improved. Metaculus is one of the premier forecasting sites on the internet. Behind both of them lies one man: Anthony Aguirre, who I talk with in this episode.
One of my favorite tests for chatbots is asking for book recommendations. I give it a list of books I liked and books I didn't like (and some flavor for why) and ask them what to read.
They're... ok at this, mostly. It's funny, because I always feel like this should be a very straightforward traditional ML problem to solve with Goodreads data or whatever, but none of the things that purport to be that (Storygraph, etc.) are any good at all.
Anyway, o3-mini seems to be the best at this so far, for whatever reason. With the same prompt I've been using elsewhere, it gave me 7 books, of which I'd already read and enjoyed 5. The best hit rate on that metric from other chatbots was ~1/4, and in several cases they included books from a series I'd explicitly said, as part of the prompt, that I didn't enjoy.
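(For concreteness, the "traditional ML problem" I'm imagining is something like item-item collaborative filtering over a ratings matrix. A toy sketch, with an invented matrix rather than real Goodreads data:)

```julia
using LinearAlgebra

# Rows = readers, columns = books; 0.0 = unrated. All values invented.
ratings = [5.0 4.0 0.0 1.0;
           4.0 5.0 0.0 2.0;
           1.0 0.0 5.0 4.0]

cosine(a, b) = dot(a, b) / (norm(a) * norm(b) + eps())

# How similar is each other book to book 1? Recommend the closest unread one.
sims = [cosine(ratings[:, 1], ratings[:, j]) for j in 2:size(ratings, 2)]
```

Real recommenders add normalization, implicit feedback, and so on, but the core idea is about this simple.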
I also appreciated this from Claude:
> Ender's Game by Orson Scott Card - Strategic thinking protagonist if you haven't read it already
... Yes, Claude, you're absolutely correct to assume that I've probably already read Ender's Game based on the list of books I enjoyed, well done.
"I understand your concern, but" / "I share your concern, but"
Recently I've noticed that this phrase seems especially likely to ring hollow: When I hear someone say it about something that feels important to me, I usually don't believe them. Usually the phrase is accompanied by some degree of a "missing mood": If you're concerned, why do you seem to think that a declaration that you understand should be sufficient argument that in fact this concern is a secondary one? How sure are you that you actually understand?
I don't like it when people just straight-up assert that they've considered and rejected your view, without admitting that they might have misunderstood you or miscategorized you. It's like 10x better, imo, to say "I think I understand your concern, but".
Yeah it often feels phatic, or what people do when they are trying to appear like a good listener/balanced interlocutor when actually advocating strongly for their own point of view. (Have definitely either done this myself or said things in that same spirit).
What's better? Maybe just 'that makes sense'?
What type of support feels most enjoyable/meaningful to give?
Sometimes friends ask how to support us. Realistically, there are a lot of helpful things people could do, if they wanted to. And this is kind of a long-term Situation we're going through.
Our main goal is just maintaining fun, mutually supportive relationships with people, despite being in a strange and isolating situation. So I think I wanna tailor my "things you could do for us" suggestions to be enjoyable!
for me generally the most satisfying support I have given people is when they've explained something to me, I've understood it, and I've noticed something about it that they find helpful, that they didn't notice without me. This kind of thing: benkuhn.net/listen/
I guess generally I think of my most important virtues as compassion + intelligence, and opportunities to use them both together feel good
to respond to your list of ideas, the idea of doing medical research for someone stresses me out somewhat as I think I would not be good at it, but feel very strongly that it is important to be good at it; this is not unrelated to some unsolved / unaddressed problems in my own life (which I can't fix Right, so I can't fix them at all)
My favorite way to support people is helping them figure something out, do research, make a plan, or work through internal blocks. These involve doing stuff that I'm good at.
My second-favorite category is support that involves doing things I'm not necessarily good at, like cooking, helping clean up, etc. When my own kids are a bit older I think babysitting will also fall into this category.
I'm mildly positive on support via hanging out. I don't really understand it (as in I don't experience support from something similar) but it feels like a fun, low effort way to help people.
man, dancing is great, I am glad the bay has so much of it
don't get to do it nearly as much now that I get up at 6am (for the child) but it's always worth it when I do
More AXRP! Joel Lehman!
Typically this podcast talks about how to avert destruction from AI. But what would it take to ensure AI promotes human flourishing as well as it can? Is alignment to individuals enough, and if not, where do we go from here? In this episode, I talk with Joel Lehman about these questions.
Misty morning at Lanhydrock. Cornwall, England. NMP
From: https://x.com/HoganSOG/status/1882211656283111582/photo/1
#art
Junichiro Sekino 1914-1988
Night in Kyoto
#art
From: https://x.com/marysia_cc/status/1882215670282166390
Tanaka Ryōhei (1933-2019)
Crow and Persimmon in the Snow
From: https://x.com/marysia_cc/status/1881097630148907230/photo/1
#art
Adrià on AXRP!
Yet another new episode!
Suppose we're worried about AIs engaging in long-term plans that they don't tell us about. If we were to peek inside their brains, what should we look for to check whether this was happening? In this episode Adrià Garriga-Alonso talks about his work trying to answer this question.
6 days!
I'm probably starting my sixth day of feeling a lot more normal, health-wise. (Don't know for sure til later in the day.)
I might have returned to the level of health I was at in mid-Dec — which was concerningly bad at the time, but things got significantly worse after that.
While I'm feeling better, I'm trying to un-decondition my body a bit. Strategy: Walk a few minutes, get tired, and then put myself in a quiet/dark/controlled/alone environment to conduct Optimal Rest
the desire for portability is so that I can easily take it to and from work
I think for now I will try doing this with my existing keyboard but I suspect it will be annoying