Advertising, and especially targeted advertising, is widely hated. Something pretty interesting to me: insofar as I'm a rational agent, the amount a given advertiser should pay to show me their ad is positively correlated with how much I want to see that ad. On paper this sounds like an amazing situation, with positive sum trades all around.
But it's easy to observe that most ads are annoying and bad, and that people hate them. wtf is up with this? I don't have time to think about it today, but maybe someone here already knows. @Jeff Kaufman maybe? Or @Daniel Filan ?
GitHub - benwr/gwipt: Automatically commit all edits to a wip branch with GPT-3 commit messages
I wrote a plugin to sync flashcards from Logseq (open-source notes app, like a local Roam Research or a list-shaped Obsidian) to Mochi (closed-source spaced repetition app, like a remote / pretty Anki). Works quite well / has a lot of features for a two-weekend side project, though it took more effort than I hoped it would.
github.com/benwr/logseq-mochi-…
GitHub - benwr/logseq-mochi-sync: One-way synchronization of flashcards from Logseq to Mochi
Etymology joke
First you need to know about the fish with the scientific name of _Boops boops_, which is real. This means ‘cow face cow face’ and is not a joke. If you like you can pronounce it ‘bow-ops’ to emphasise the etymology, as in ‘coöperation’. The ‘ops’ part is the bit that means ‘face’ in Ancient Greek.
The joke is this: that the etymology of _oops_ is ‘egg face’ — from the phrase “there is egg on my face” — and is pronounced oöps. (The Ancient Greek word for egg is ‘oon’ which gives us ‘oomancy’, ‘divination by eggs’.)
Man, given that LLMs are "dream machines" / "all they can really do is hallucinate", it's wild how much they correctly remember.
Like, Claude 3.7 correctly knows a lot about the API used to write Logseq plugins. Logseq isn't exactly obscure, but it is definitely pretty niche, and the API is based on a relatively obscure database query language and a schema designed specifically for the app.
Run-time type checking is way more useful than I expected. I've been using it in Julia for 4 years now, and I expected it to provide ~25% of the value of static type checking, but it's actually been closer to 90%.
I guess it's because when I'm developing, I'm constantly running code anyway, either through a notebook or tests. And the change -> run loop in Julia is not noticeably slower than the change -> compile loop in Scala.
The big exception is when I have code that can only reasonably be run on a remote machine and takes 5+ minutes to set up/execute. Then I'd really like more static analysis.
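For flavor, here's a minimal Python sketch of what call-time checking buys you (Julia enforces type annotations natively at dispatch time; this decorator and its names are illustrative stand-ins, not anything from my actual setup):

```python
import functools
import typing

def checked(fn):
    """Check annotated argument types when the function is called
    (a toy analogue of Julia's dispatch-time checking)."""
    hints = typing.get_type_hints(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # Map positional and keyword arguments to their parameter names.
        bound = dict(zip(fn.__code__.co_varnames, args), **kwargs)
        for name, value in bound.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{fn.__name__}: {name} expected {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
        return fn(*args, **kwargs)
    return wrapper

@checked
def mean(xs: list) -> float:
    return sum(xs) / len(xs)
```

So `mean([1.0, 2.0, 3.0])` runs fine, while `mean(3)` fails immediately with a clear TypeError at the call site instead of a confusing error deep inside the function, which is most of what you want from a type checker when you're running the code constantly anyway.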
Human information throughput is allegedly only about 10-50 bits per second. This implies an interesting upper bound, in that the information throughput of biological humanity as a whole can't be higher than around 50 bits/s × 10^10 people = 500 Gbit/s. I.e., if all distinguishable actions made by humans were perfectly independent, biological humanity as a whole only has 500 Gbit/s of "steering power".
I need to think more about the idea of "steering power" (e.g. some obvious rough edges around amplifying your steering power using external information processing / decision systems), but I have some intuition that one might actually be able to come up with a not-totally-useless concept that lets us say something like "humanity can't stay in 'meaningful control' if we have an unaligned artificial agent with more steering power than humanity, expressed in bits/s".
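As a sanity check on the arithmetic (the 50 bits/s figure and the 10^10 population are the rough assumptions above, not measured values):

```python
# Rough upper bound on humanity's collective "steering power",
# assuming ~50 bits/s per person (the high end of the 10-50 bits/s
# estimate) and ~10^10 people.
bits_per_second_per_human = 50
population = 10**10

total_bits_per_second = bits_per_second_per_human * population
print(f"{total_bits_per_second / 1e9:.0f} Gbit/s")  # 500 Gbit/s
```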
Usually when people talk about egregores, I think they mostly have ideologies in mind.
There's a somewhat "lower-level" egregore (in the sense of a low-level language, not a low-level demon), that I think is pretty overlooked, that I think of as "emotional cynicism" (to distinguish it as specifically the emotional stance we associate with the word "cynical", and from capital-c Greek Cynicism).
Emotional cynicism seems to me to be near totally dominant in public online discourse, and I think that's both interesting and somewhat concerning.
- a pretty large subset of comments on hacker news, lesswrong, ...
- ~All reddit/Twitter/bluesky political discourse
Yet another short AXRP episode!
With Anthony Aguirre!
The Future of Life Institute is one of the oldest and most prominent organizations in the AI existential safety space, working on such topics as the AI pause open letter and how the EU AI Act can be improved. Metaculus is one of the premier forecasting sites on the internet. Behind both of them lies one man: Anthony Aguirre, whom I talk with in this episode.
One of my favorite tests for chatbots is asking for book recommendations. I give it a list of books I liked and books I didn't like (and some flavor for why) and ask them what to read.
They're... ok at this, mostly. It's funny because I always feel like this should be a very straightforward traditional ML problem to do with Goodreads data or whatever but none of the things which purport to be that (Storygraph, etc) are any good at all.
Anyway, o3-mini seems to be the best at this so far for whatever reason. With the same prompt as I've been using elsewhere, it gave me 7 books, of which I'd already read and enjoyed 5. The best hit rate on that metric from other chatbots was ~1/4, and in several cases they included books from a series I'd explicitly said, as part of the prompt, that I didn't enjoy.
I also appreciated this from Claude:
> Ender's Game by Orson Scott Card - Strategic thinking protagonist if you haven't read it already
... Yes, Claude, you're absolutely correct to assume that I've probably already read Ender's Game based on the list of books I enjoyed, well done.
"I understand your concern, but" / "I share your concern, but"
Recently I've noticed that this phrase seems especially likely to ring hollow: When I hear someone say it about something that feels important to me, I usually don't believe them. Usually the phrase is accompanied by some degree of a "missing mood": If you're concerned, why do you seem to think that a declaration that you understand should be sufficient argument that in fact this concern is a secondary one? How sure are you that you actually understand?
I don't like it when people just straight-up assert that they've considered and rejected your view, without admitting that they might have misunderstood you or miscategorized you. It's like 10x better, imo, to say "I think I understand your concern, but".
Yeah it often feels phatic, or what people do when they are trying to appear like a good listener/balanced interlocutor when actually advocating strongly for their own point of view. (Have definitely either done this myself or said things in that same spirit).
What's better? Maybe just 'that makes sense'?
What type of support feels most enjoyable/meaningful to give?
Sometimes friends ask how to support us. Realistically, there are a lot of helpful things people could do, if they wanted to. And this is kind of a long-term Situation we're going through.
Our main goal is just maintaining fun, mutually supportive relationships with people, despite being in a strange and isolating situation. So I think I wanna tailor my "things you could do for us" suggestions to be enjoyable!
for me generally the most satisfying support I have given people is when they've explained something to me and I've understood it and I've noticed something about it that they find helpful, that they didn't notice without me. This kind of thing: benkuhn.net/listen/
I guess generally I think of my most important virtues as compassion + intelligence and opportunities to use them both together feel good
to respond to your list of ideas, the idea of doing medical research for someone stresses me out somewhat as I think I would not be good at it, but feel very strongly that it is important to be good at it; this is not unrelated to some unsolved / unaddressed problems in my own life (which I can't fix Right, so I can't fix them at all)
My favorite way to support people is helping them figure something out, do research, make a plan, or work through internal blocks. These involve doing stuff that I'm good at.
My second-favorite category is support that involves doing things I'm not necessarily good at, like cooking, helping clean up, etc. When my own kids are a bit older I think babysitting will also fall into this category.
I'm mildly positive on support via hanging out. I don't really understand it (as in I don't experience support from something similar) but it feels like a fun, low effort way to help people.
man, dancing is great, I am glad the bay has so much of it
don't get to do it nearly as much now that I get up at 6am (for the child) but it's always worth it when I do
More AXRP! Joel Lehman!
Typically this podcast talks about how to avert destruction from AI. But what would it take to ensure AI promotes human flourishing as well as it can? Is alignment to individuals enough, and if not, where do we go from here? In this episode, I talk with Joel Lehman about these questions.
Misty morning at Lanhydrock. Cornwall, England. NMP
From: https://x.com/HoganSOG/status/1882211656283111582/photo/1
#art
Junichiro Sekino 1914-1988
Night in Kyoto
#art
From: https://x.com/marysia_cc/status/1882215670282166390
Tanaka Ryōhei (1933-2019)
Crow and Persimmon in the Snow
From: https://x.com/marysia_cc/status/1881097630148907230/photo/1
#art
JP Addison
in reply to Ben Weinstein-Raun
I think, broadly, targeted ads are better? Like, you'd probably rather your experience was ad-free, but I like the ads that Meta shows me better than the ones on my podcast feed.
If you want those dynamics to lead to a trade where basically an advertiser's preference to show me ads is greater than my preference to not see that ad — I don't think there's a reason to think that should be the case.
If you're instead asking, like, "why are they annoying, shouldn't they specifically try not to be annoying?" I think sadly, while there's a factor pushing them to be pleasant, etc., there's a greater factor pushing them towards annoyingness: by default, ads don't leave an impact. First and foremost they want you to notice them, and remember them. Given that you'd rather direct your attention away, noticing and remembering are actually things you're trying to avoid! So they're adversarial to you, in large part, and will use things like images that are hard to ignore, and repeat themselves ad nauseam.
In conclusion: I've found the way to get the best ads, which is to be susceptible to photos of shirtless guys trying to sell me clothing, which hasn't yet worked, but has put me on to generally things I want to buy. 😁
Ben Weinstein-Raun
in reply to JP Addison
The thing I'm trying to say here is something like:
Ads are fundamentally informative. Sure, it might be that they're less valuable to me than the content they displace, but I don't see an obvious reason why that must be true, and yet it seems nearly universally true.
Maybe the problem is that if an ad were predictably more valuable to me than the displaced content, I would actively seek out the advertised information above whatever other thing I'm doing, at which point we stop calling it advertising and start calling it search or shopping?
I guess maybe I should go read something by Herb Simon about the details here.
Satvik
in reply to Ben Weinstein-Raun
I would be pretty happy if ads were better. I regularly come across e.g. toddler products that I want to buy, and am very willing to spend money to save time/stress. But these things are almost never advertised to me.
I see *lots* of ads for things I already have, and lots of ads for things that are appropriate for parents of much younger or much older children. I can't actually think of the last time I bought something from an ad, which is shocking considering that I'm in a bunch of baby-related Facebook groups and often get products that people recommend there.
Sam FM
in reply to Ben Weinstein-Raun
Advertisers understand that humans are very manipulable, and are very down to use dark-arts manipulation tactics. I don't suspect the correlation of values to actually be very high at all. (Intentionally annoying ads can be very effective.)
I hate having to be constantly on guard from all these attempts to hijack my attention and influence my beliefs/desires. I opt out of targeted marketing whenever I can because I don't want advertising systems to have *even stronger* memetic hooks to grab me with.