


Advertising, and especially targeted advertising, is widely hated. Something pretty interesting to me: insofar as I'm a rational agent, the amount a given advertiser should pay to show me their ad is positively correlated with how much I want to see that ad. On paper this sounds like an amazing situation, with positive sum trades all around.

But it's easy to observe that most ads are annoying and bad, and that people hate them. wtf is up with this? I don't have time to think about it today, but maybe someone here already knows. @Jeff Kaufman maybe? Or @Daniel Filan ?

in reply to Ben Weinstein-Raun

I would be pretty happy if ads were better. I regularly come across e.g. toddler products that I want to buy, and am very willing to spend money to save time/stress. But these things are almost never advertised to me.

I see *lots* of ads for things I already have, and lots of ads for things that are appropriate for parents of much younger or much older children. I can't actually think of the last time I bought something from an ad, which is shocking considering that I'm in a bunch of baby-related Facebook groups and often get products that people recommend there.

in reply to Ben Weinstein-Raun

Advertisers understand that humans are very manipulable, and are very down to use dark-arts manipulation tactics. I don't suspect the correlation of values to actually be very high at all. (Intentionally annoying ads can be very effective.)

I hate having to be constantly on guard from all these attempts to hijack my attention and influence my beliefs/desires. I opt out of targeted marketing whenever I can because I don't want advertising systems to have *even stronger* memetic hooks to grab me with.



Just updated my git auto-wip-branch tool to use GPT-4o instead of 3.5. So much has changed in the 2 years since I made it! But the basic idea is still pretty great, at least for my workflow: github.com/benwr/gwipt


ugh, wtf is it with me and doctors canceling my appointments?

Like, granted, if I were a doctor I would want to cancel benwr's appointments, but how do they know?



Vitrification is an ancient and successful process for ensuring that one's brain makes it thousands of years into the future.


I wrote a plugin to sync flashcards from Logseq (open-source notes app, like a local Roam Research or a list-shaped Obsidian) to Mochi (closed-source spaced repetition app, like a remote / pretty Anki). Works quite well / has a lot of features for a two-weekend side project, though it took more effort than I hoped it would.

github.com/benwr/logseq-mochi-…



My mood tracking app says February was my worst month since I started keeping track in September. Makes sense: surgery recovery, plus GI issues, resulting in more isolation than usual. Hopefully things will improve as the days get longer.


Still trying to work out how to use superstimulus




Etymology joke


First you need to know about the fish with the scientific name of _Boops boops_, which is real. This means ‘cow face cow face’ and is not a joke. If you like you can pronounce it ‘bow-ops’ to emphasise the etymology, as in ‘coöperation’. The ‘ops’ part is the bit that means ‘face’ in Ancient Greek.

The joke is this: that the etymology of _oops_ is ‘egg face’ — from the phrase “there is egg on my face” — and is pronounced oöps. (The Ancient Greek word for egg is ‘oon’ which gives us ‘oomancy’, ‘divination by eggs’.)



in reply to kip

(this feels a little different than the tone I'd use if I wrote this post *for* this platform. I wrote it for Facebook and decided I endorse cross-posting without thinking very hard about the specific platform)


Man, given that LLMs are "dream machines" / "all they can really do is hallucinate", it's wild how much they correctly remember.

Like, Claude 3.7 correctly knows a lot about the API used to write Logseq plugins. Logseq isn't exactly obscure, but it is definitely pretty niche, and the API is based on a relatively obscure database query language and a schema designed specifically for the app.

in reply to Ben Weinstein-Raun

I think they get worse about hallucinating when you ask them for something which doesn't exist.


Run-time type checking is way more useful than I expected. I've been using it in Julia for 4 years now, and I expected it to provide ~25% of the value of static type checking, but it's actually been closer to 90%.

I guess it's because when I'm developing, I'm constantly running code anyway, either through a notebook or tests. And the change -> run loop in Julia is not noticeably slower than the change -> compile loop in Scala.

The big exception is when I have code that can only reasonably be run on a remote machine and takes 5+ minutes to set up/execute. Then I'd really like more static analysis.
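A minimal Python analog of the pattern (Julia enforces argument types at dispatch time and raises a MethodError; in Python you can approximate this with an explicit check, and the function name here is just illustrative):

```python
def mean(xs):
    # Runtime type check: fail loudly at the call site, roughly like a
    # Julia MethodError, instead of letting a bad value propagate and
    # blow up somewhere confusing downstream.
    if not isinstance(xs, list) or not all(
        isinstance(x, (int, float)) for x in xs
    ):
        raise TypeError("mean expects a list of numbers")
    return sum(xs) / len(xs)

print(mean([1.0, 2.0, 3.0]))  # 2.0
```

Because you're constantly re-running code anyway while developing, the error surfaces almost as early as a static check would have.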



Nausea is extremely bad for my subjective well being. I've spent the day mostly in the bathroom due to food poisoning or something, and I feel like this results in suffering comparable per hour to the worst pain I've experienced (which was when I had septic bursitis and my elbow swelled to the size of a tennis ball within a few hours)


I'm pretty sure I'd very gladly have paid 100x the electricity bill and carbon offsets for the whole day, for every time I've been stuck in a bathroom stall when the motion detector shut off the lights.


Human information throughput is allegedly only about 10-50 bits per second. This implies an interesting upper bound, in that the information throughput of biological humanity as a whole can't be higher than around 50 bits/s × 10^10 people = 500 Gbit/s. I.e., if all distinguishable actions made by humans were perfectly independent, biological humanity as a whole only has 500 Gbit/s of "steering power".

I need to think more about the idea of "steering power" (e.g. some obvious rough edges around amplifying your steering power using external information processing / decision systems), but I have some intuition that one might actually be able to come up with a not-totally-useless concept that lets us say something like "humanity can't stay in 'meaningful control' if we have an unaligned artificial agent with more steering power than humanity, expressed in bits/s".
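The bound above is just back-of-envelope arithmetic (taking the upper end of the alleged per-person rate, and ~10^10 as a round population figure; both inputs are rough assumptions):

```python
bits_per_second_per_human = 50   # upper end of the alleged 10-50 bit/s range
population = 10**10              # ~all of biological humanity, rounded up
total_bits_per_second = bits_per_second_per_human * population
print(total_bits_per_second / 1e9)  # 500.0 (Gbit/s)
```

Using the lower end of the range (10 bits/s) shrinks the bound to 100 Gbit/s, so the conclusion is insensitive to the exact figure within an order of magnitude.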



Johann Friedrich Morgenstern
From https://x.com/0zmnds/status/1890834305208696987
#art


Usually when people talk about egregores, I think they mostly have ideologies in mind.

There's a somewhat "lower-level" egregore (in the sense of a low-level language, not a low-level demon), that I think is pretty overlooked, that I think of as "emotional cynicism" (to distinguish it as specifically the emotional stance we associate with the word "cynical", and from capital-c Greek Cynicism).

Emotional cynicism seems to me to be near totally dominant in public online discourse, and I think that's both interesting and somewhat concerning.

in reply to Ben Weinstein-Raun

I think I agree with you, though I'm curious for examples of types of things you might see as an expression of emotional cynicism.
in reply to JP Addison

  • a pretty large subset of comments on hacker news, lesswrong, ...
  • ~All reddit/Twitter/bluesky political discourse


Yet another short AXRP episode!


With Anthony Aguirre!

The Future of Life Institute is one of the oldest and most prominent organizations in the AI existential safety space, working on such topics as the AI pause open letter and how the EU AI Act can be improved. Metaculus is one of the premier forecasting sites on the internet. Behind both of them lies one man: Anthony Aguirre, who I talk with in this episode.

Video
Transcript



One of my favorite tests for chatbots is asking for book recommendations. I give them a list of books I liked and books I didn't like (and some flavor for why) and ask what I should read.

They're... ok at this, mostly. It's funny because I always feel like this should be a very straightforward traditional ML problem to do with Goodreads data or whatever but none of the things which purport to be that (Storygraph, etc) are any good at all.

Anyway, o3-mini seems to be the best at this so far, for whatever reason. With the same prompt I've been using elsewhere, it gave me 7 books, of which I'd already read and enjoyed 5. The best hit rate on that metric from other chatbots was ~1/4, and in several cases they included books from a series I'd explicitly said in the prompt that I didn't enjoy.

in reply to Kevin Gibbons

I also appreciated this from Claude:

> Ender's Game by Orson Scott Card - Strategic thinking protagonist if you haven't read it already

... Yes, Claude, you're absolutely correct to assume that I've probably already read Ender's Game based on the list of books I enjoyed, well done.

in reply to Kevin Gibbons



Are fans actually white noise machines? If so, why? It seems like they're the sort of thing that has an obvious frequency that would matter so I'm not sure what's going on. Maybe that the air gets routed thru grates and stuff and that creates the white noise?
in reply to Daniel Filan

I guess they sort of obviously don't have the high pitch components that real white noise machines do.


Ugh, this guy built my high school dream project and I am simultaneously grinning widely and very jealous.


"I understand your concern, but" / "I share your concern, but"

Recently I've noticed that this phrase seems especially likely to ring hollow: When I hear someone say it about something that feels important to me, I usually don't believe them. Usually the phrase is accompanied by some degree of a "missing mood": If you're concerned, why do you seem to think that a declaration that you understand should be sufficient argument that in fact this concern is a secondary one? How sure are you that you actually understand?

I don't like it when people just straight-up assert that they've considered and rejected your view, without admitting that they might have misunderstood you or miscategorized you. It's like 10x better, imo, to say "I think I understand your concern, but".

in reply to Ben Weinstein-Raun

Yeah it often feels phatic, or what people do when they are trying to appear like a good listener/balanced interlocutor when actually advocating strongly for their own point of view. (Have definitely either done this myself or said things in that same spirit).

What's better? Maybe just 'that makes sense'?

in reply to Ben Weinstein-Raun

It is simply a manipulation intended to protect the speaker's ego


What type of support feels most enjoyable/meaningful to give?

Sometimes friends ask how to support us. Realistically, there are a lot of helpful things people could do, if they wanted to. And this is kind of a long-term Situation we're going through.

Our main goal is just maintaining fun, mutually supportive relationships with people, despite being in a strange and isolating situation. So I think I wanna tailor my "things you could do for us" suggestions to be enjoyable!

in reply to kip

for me generally the most satisfying support I have given people is when they've explained something to me and I've understood it and I've noticed something about it that they find helpful, that they didn't notice without me. This kind of thing: benkuhn.net/listen/

I guess generally I think of my most important virtues as compassion + intelligence and opportunities to use them both together feel good

to respond to your list of ideas, the idea of doing medical research for someone stresses me out somewhat as I think I would not be good at it, but feel very strongly that it is important to be good at it; this is not unrelated to some unsolved / unaddressed problems in my own life (which I can't fix Right, so I can't fix them at all)

in reply to kip

My favorite way to support people is helping them figure something out, do research, make a plan, or work through internal blocks. These involve doing stuff that I'm good at.

My second-favorite category is support that involves doing things I'm not necessarily good at, like cooking, helping clean up, etc. When my own kids are a bit older I think babysitting will also fall into this category.

I'm mildly positive on support via hanging out. I don't really understand it (as in I don't experience support from something similar) but it feels like a fun, low effort way to help people.



Does anyone know of a laptop price index where I can check if it spiked today?



Many humans being powerless is a familiar situation, but we act like there's some law against every human being powerless. That changes if you introduce a new species with superior performance at everything.
in reply to Katja Grace

also changes if you introduce a species with superior performance at the right things, while still not (maybe even far from) everything


Midges




Anyone have recommendations for TV I should watch tonight? I'm most interested in strategy shows, e.g. if The Traitors were on Netflix that would be my top pick.


At its best, the local YMCA steam room is better than the one at Archimedes Banya - admittedly no eucalyptus scent, but bigger and hotter.
in reply to Daniel Filan

"at its best" because sometimes you go in and it's just not that steamy for some reason. Today I went and the lights didn't work but the steam was on full blast 👌


Back in the day people used to argue whether effective altruism was an opportunity or an obligation. It's just occurred to me that the opportunity side has links to theodicies that I find pretty implausible - "oh we're so lucky that there's so much pointless suffering in the world, so that we have the opportunity to do something about it".


man, dancing is great, I am glad the bay has so much of it

don't get to do it nearly as much now that I get up at 6am (for the child) but it's always worth it when I do



More AXRP! Joel Lehman!


Typically this podcast talks about how to avert destruction from AI. But what would it take to ensure AI promotes human flourishing as well as it can? Is alignment to individuals enough, and if not, where do we go from here? In this episode, I talk with Joel Lehman about these questions.

Video
Transcript



xkcd.com/3039/ wow, for a minute there it sure looked like humans were going to break the speed of light some time in the 80s in order to keep getting higher up


Misty morning at Lanhydrock. Cornwall, England. NMP
From: https://x.com/HoganSOG/status/1882211656283111582/photo/1

#art


Junichiro Sekino 1914-1988
Night in Kyoto
#art

From: https://x.com/marysia_cc/status/1882215670282166390



Transcription software creates a better record of what I say when I'm experiencing transcription software than is ideal


The daytime moon is pretty cool IMO. Somehow a more vivid reminder that we're floating in (hurtling thru?) space.
in reply to Daniel Filan

sometimes I look at it and I'm like, man, it would be REALLY bad if that fell down, but it doesn't, somehow


If I lie down and close my eyes, I often get right to sleep. However, doing so sounds dreadful: I want to live! There are a handful of kinds of living that actually make me sleepy though, and one of them is mathy puzzles you can do in your head. Anyone have good ones?
in reply to Katja Grace

curious for existing examples of the thing, both for guiding what responses I think are appropriate and also so I can try using them too :P


Tanaka Ryōhei (1933-2019)
Crow and Persimmon in the Snow

From: https://x.com/marysia_cc/status/1881097630148907230/photo/1

#art



Miscellaneous life updates