I've been getting really into pocket knives this week, and especially into learning about knife steels. The biggest surprise has been that one of the several aspects of Atlas Shrugged that caused me to lose suspension of disbelief has become much more believable in retrospect:
When I read it I felt like this whole part about the "guy invents a new metal that's just straightforwardly better than existing metals, and names it after himself" was just too farfetched.
But it turns out that actually this is just a thing that can happen. This guy Larrin Thomas basically straight-up did this with a knife steel alloy in 2021. The alloy is notably better than others for the purposes of pocket knives in almost every respect. Like, in any single dimension there are steels that do better, but this alloy is like the Hawaii of the knife steel Pareto frontier. He didn't name it after himself, but I think he could have called it "Larrin Metal" if he had wanted to. He actually called it "MagnaCut".
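(To make the Pareto-frontier framing concrete, here's a toy sketch in Python. The steels and property scores are entirely made up for illustration, not real test data; the point is just that an alloy can sit on the frontier, like "C" below, without winning any single dimension.)

```python
# A steel is on the Pareto frontier if no other steel matches-or-beats
# it in every property at once. Scores are invented; higher is better.

steels = {
    "A (hard)":      {"edge_retention": 9, "toughness": 3, "corrosion": 5},
    "B (tough)":     {"edge_retention": 4, "toughness": 9, "corrosion": 4},
    "C (all-round)": {"edge_retention": 7, "toughness": 7, "corrosion": 9},
    "D (middling)":  {"edge_retention": 6, "toughness": 5, "corrosion": 6},
}

def dominates(a: dict, b: dict) -> bool:
    """True if `a` is at least as good as `b` everywhere, and strictly better somewhere."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

frontier = [
    name
    for name, props in steels.items()
    if not any(dominates(other, props)
               for other_name, other in steels.items()
               if other_name != name)
]
print(frontier)  # ['A (hard)', 'B (tough)', 'C (all-round)']: D is dominated by C
```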
JP Addison likes this.
I guess my intuition is that alloys would be "smooth" in their properties: you add more chromium and certain properties increase. Maybe they stop increasing or start reversing after a while, but it's not hard to find the optimal point for each property over time.
With that intuition, it seems surprising that there are still good new alloys that haven't already been found.
Ben Weinstein-Raun likes this.
Have been wearing minimalist/"barefoot" sandals for the last couple days, and it feels somehow unhinged to say this, but I think they're making me feel noticeably happier?
Like, it reminds me of the thing where anosmia is linked to depression. It's like I regained a nontrivial part of my sense of "what's going on in the world".
AXRP: Jason Gross
How do we figure out whether interpretability is doing its job? One way is to see if it helps us prove things about models that we care about knowing. In this episode, I speak with Jason Gross about his agenda to benchmark interpretability in this way, and his exploration of the intersection of proofs and modern machine learning.
Ben Weinstein-Raun likes this.
On the way home from the coffee shop, walking past a bush, I encountered a bright blue bird about half a foot from me, which stared at me for a second, quietly squawked, and flew away.
Then soon afterward, in a different bush, I encountered an ooze (pictured, made of bubbly foam) trying to stealth.
I'm concerned that someone has changed the genre of my RPG, in a worrying direction. Will be avoiding taverns.
Men's Dang Soft Boxer Briefs | Duluth Trading Company
Softness from above for down below? You bet! Duluth Trading’s new Dang Soft® Boxer Briefs’ fabric is so silky-smooth, all you’ll feel is sky-high comfort!
www.duluthtrading.com
kip likes this.
Advertising, and especially targeted advertising, is widely hated. Something pretty interesting to me: insofar as I'm a rational agent, the amount a given advertiser should pay to show me their ad is positively correlated with how much I want to see that ad. On paper this sounds like an amazing situation, with positive-sum trades all around.
But it's easy to observe that most ads are annoying and bad, and that people hate them. wtf is up with this? I don't have time to think about it today, but maybe someone here already knows. @Jeff Kaufman maybe? Or @Daniel Filan?
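(A toy model of the "positive sum on paper" part, under the strong assumption that conversion probability rises with how much the user actually values the product; all the numbers here are invented:)

```python
# If ads convert more often when the user genuinely values the product,
# the advertiser's willingness to pay per impression rises with the
# user's value. This assumes away manipulation, which the replies below
# point out is the catch.
import random

random.seed(0)

def simulate_user():
    value = random.uniform(0, 10)   # how much this user wants the product
    p_buy = value / 20              # assumed: conversion rises with value
    margin = 5.0                    # advertiser profit per sale
    bid = p_buy * margin            # expected profit per impression
    return value, bid

samples = [simulate_user() for _ in range(10_000)]
mean_v = sum(v for v, _ in samples) / len(samples)
mean_b = sum(b for _, b in samples) / len(samples)
cov = sum((v - mean_v) * (b - mean_b) for v, b in samples) / len(samples)
print(f"cov(value, bid) = {cov:.2f}")  # positive: bids track user value
```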
I would be pretty happy if ads were better. I regularly come across e.g. toddler products that I want to buy, and am very willing to spend money to save time/stress. But these things are almost never advertised to me.
I see *lots* of ads for things I already have, and lots of ads for things that are appropriate for parents of much younger or much older children. I can't actually think of the last time I bought something from an ad, which is shocking considering that I'm in a bunch of baby-related Facebook groups and often get products that people recommend there.
Ben Weinstein-Raun likes this.
Advertisers understand that humans are very manipulable, and are very down to use dark-arts manipulation tactics. I don't suspect the correlation between ad payments and my values is actually very high at all. (Intentionally annoying ads can be very effective.)
I hate having to be constantly on guard from all these attempts to hijack my attention and influence my beliefs/desires. I opt out of targeted marketing whenever I can because I don't want advertising systems to have *even stronger* memetic hooks to grab me with.
GitHub - benwr/gwipt: Automatically commit all edits to a wip branch with GPT-3 commit messages
Automatically commit all edits to a wip branch with GPT-3 commit messages - benwr/gwipt
GitHub
I wrote a plugin to sync flashcards from Logseq (open-source notes app, like a local Roam Research or a list-shaped Obsidian) to Mochi (closed-source spaced repetition app, like a remote / pretty Anki). Works quite well / has a lot of features for a two-weekend side project, though it took more effort than I hoped it would.
github.com/benwr/logseq-mochi-…
GitHub - benwr/logseq-mochi-sync: One-way synchronization of flashcards from Logseq to Mochi
One-way synchronization of flashcards from Logseq to Mochi - benwr/logseq-mochi-sync
GitHub
Etymology joke
First you need to know about the fish with the scientific name of _Boops boops_, which is real. This means ‘cow face cow face’ and is not a joke. If you like you can pronounce it ‘bow-ops’ to emphasise the etymology, as in ‘coöperation’. The ‘ops’ part is the bit that means ‘face’ in Ancient Greek.
The joke is this: that the etymology of _oops_ is ‘egg face’ — from the phrase “there is egg on my face” — and is pronounced oöps. (The Ancient Greek word for egg is ‘oon’ which gives us ‘oomancy’, ‘divination by eggs’.)
Man, given that LLMs are "dream machines" / "all they can really do is hallucinate", it's wild how much they correctly remember.
Like, Claude 3.7 correctly knows a lot about the API used to write Logseq plugins. Logseq isn't exactly obscure, but it is definitely pretty niche, and the API is based on a relatively obscure database query language and a schema designed specifically for the app.
Ben Weinstein-Raun likes this.
Run-time type checking is way more useful than I expected. I've been using it in Julia for 4 years now, and I expected it to provide ~25% of the value of static type checking, but it's actually been closer to 90%.
I guess it's because when I'm developing, I'm constantly running code anyway, either through a notebook or tests. And the change -> run loop in Julia is not noticeably slower than the change -> compile loop in Scala.
The big exception is when I have code that can only reasonably be run on a remote machine and takes 5+ minutes to set up/execute. Then I'd really like more static analysis.
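(There's no Julia in this post to quote, so here's a minimal Python stand-in for the experience I mean; the `runtime_typed` decorator is hypothetical, written just for this sketch. The point is that the type error surfaces at the call site the moment the code runs, rather than deep inside the function, or never.)

```python
import functools
import inspect

def runtime_typed(fn):
    """Check annotated arguments against their declared types on every call."""
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, val in bound.arguments.items():
            ann = fn.__annotations__.get(name)
            if isinstance(ann, type) and not isinstance(val, ann):
                raise TypeError(f"{fn.__name__}: {name}={val!r} is not a {ann.__name__}")
        return fn(*args, **kwargs)

    return wrapper

@runtime_typed
def mean_length(words: list) -> float:
    return sum(len(w) for w in words) / len(words)

mean_length(["knife", "steel"])  # fine
mean_length("knife steel")       # TypeError immediately, at the call site
```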
Ben Weinstein-Raun likes this.
Human information throughput is allegedly only about 10-50 bits per second. This implies an interesting upper bound: the information throughput of biological humanity as a whole can't be higher than around 50 bits/s × 10^10 people = 500 Gbit/s. I.e., if all distinguishable actions made by humans were perfectly independent, biological humanity as a whole has only ~500 Gbit/s of "steering power".
I need to think more about the idea of "steering power" (e.g. some obvious rough edges around amplifying your steering power using external information processing / decision systems), but I have some intuition that one might actually be able to come up with a not-totally-useless concept that lets us say something like "humanity can't stay in 'meaningful control' if we have an unaligned artificial agent with more steering power than humanity, expressed in bits/s".
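(Spelling out the arithmetic behind the bound, using the post's round numbers:)

```python
# Back-of-envelope: per-person throughput times population, assuming
# perfectly independent actions.
bits_per_person = 50   # upper end of the cited 10-50 bits/s
population = 10**10    # generous round-up from ~8 * 10^9 people

total = bits_per_person * population
print(f"{total:.0e} bits/s = {total / 1e9:.0f} Gbit/s")  # 5e+11 bits/s = 500 Gbit/s
```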
Usually when people talk about egregores, I think they mostly have ideologies in mind.
There's a somewhat "lower-level" egregore (in the sense of a low-level language, not a low-level demon) that I think is pretty overlooked, which I think of as "emotional cynicism" (to pick out specifically the emotional stance we associate with the word "cynical", as distinct from capital-C Greek Cynicism).
Emotional cynicism seems to me to be near totally dominant in public online discourse, and I think that's both interesting and somewhat concerning.
- a pretty large subset of comments on hacker news, lesswrong, ...
- ~All reddit/Twitter/bluesky political discourse
Yet another short AXRP episode!
With Anthony Aguirre!
The Future of Life Institute is one of the oldest and most prominent organizations in the AI existential safety space, working on such topics as the AI pause open letter and how the EU AI Act can be improved. Metaculus is one of the premier forecasting sites on the internet. Behind both of them lies one man: Anthony Aguirre, whom I talk with in this episode.
One of my favorite tests for chatbots is asking for book recommendations. I give them a list of books I liked and books I didn't like (and some flavor for why) and ask what I should read.
They're... ok at this, mostly. It's funny, because I always feel like this should be a very straightforward traditional ML problem to solve with Goodreads data or whatever, but none of the things that purport to be that (Storygraph, etc.) are any good at all.
Anyway, o3-mini seems to be the best at this so far, for whatever reason. With the same prompt I've been using elsewhere, it gave me 7 books, of which I'd already read and enjoyed 5. The best hit rate on that metric from other chatbots was ~1/4, and in several cases they included books from a series I'd explicitly said, in the prompt, that I didn't enjoy.
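(For concreteness, this is the shape of the "straightforward traditional ML problem" I have in mind: user-based collaborative filtering over a ratings matrix. The matrix below is invented; a real system would use Goodreads-scale data.)

```python
import numpy as np

# rows = users, cols = books; 0 = unrated, otherwise 1-5 stars (made-up data)
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 3, 0, 1],
    [1, 0, 4, 5, 4],
], dtype=float)

def recommend(user: int, ratings: np.ndarray, k: int = 2) -> np.ndarray:
    """Score unrated books by similarity-weighted ratings of other users."""
    mask = ratings != 0
    # cosine similarity between `user` and every other user
    sims = ratings @ ratings[user] / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user]) + 1e-9
    )
    sims[user] = 0.0                                # don't count yourself
    scores = sims @ ratings / (sims @ mask + 1e-9)  # weighted average rating
    scores[mask[user]] = -np.inf                    # never recommend already-read books
    return np.argsort(scores)[::-1][:k]

print(recommend(0, ratings))  # indices of books to suggest to user 0
```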
I also appreciated this from Claude:
> Ender's Game by Orson Scott Card - Strategic thinking protagonist if you haven't read it already
... Yes, Claude, you're absolutely correct to assume that I've probably already read Ender's Game based on the list of books I enjoyed, well done.
Ben Weinstein-Raun likes this.
"I understand your concern, but" / "I share your concern, but"
Recently I've noticed that this phrase seems especially likely to ring hollow: when I hear someone say it about something that feels important to me, I usually don't believe them. The phrase usually comes with some degree of "missing mood": if you're actually concerned, why do you seem to think that simply declaring you understand is a sufficient argument that the concern is secondary? How sure are you that you actually understand?
I don't like it when people just straight-up assert that they've considered and rejected your view, without admitting that they might have misunderstood you or miscategorized you. It's like 10x better, imo, to say "I think I understand your concern, but".
Yeah, it often feels phatic, or like the thing people do when they're trying to appear to be a good listener / balanced interlocutor while actually advocating strongly for their own point of view. (I've definitely either done this myself or said things in that same spirit.)
What's better? Maybe just 'that makes sense'?
What type of support feels most enjoyable/meaningful to give?
Sometimes friends ask how to support us. Realistically, there are a lot of helpful things people could do, if they wanted to. And this is kind of a long-term Situation we're going through.
Our main goal is just maintaining fun, mutually supportive relationships with people, despite being in a strange and isolating situation. So I think I wanna tailor my "things you could do for us" suggestions to be enjoyable!
For me, generally the most satisfying support I've given people is when they've explained something to me, I've understood it, and I've noticed something about it that they find helpful and wouldn't have noticed without me. This kind of thing: benkuhn.net/listen/
I guess generally I think of my most important virtues as compassion + intelligence, and opportunities to use them both together feel good.
To respond to your list of ideas: the idea of doing medical research for someone stresses me out somewhat, as I think I would not be good at it but feel very strongly that it is important to be good at it; this is not unrelated to some unsolved / unaddressed problems in my own life (which I can't fix Right, so I can't fix them at all).
My favorite way to support people is helping them figure something out, do research, make a plan, or work through internal blocks. These involve doing stuff that I'm good at.
My second-favorite category is support that involves doing things I'm not necessarily good at, like cooking, helping clean up, etc. When my own kids are a bit older I think babysitting will also fall into this category.
I'm mildly positive on support via hanging out. I don't really understand it (I don't experience that kind of thing as support myself), but it feels like a fun, low-effort way to help people.
kip likes this.
Ben Millwood likes this.