On the way home from the coffee shop, as I walked past a bush, I encountered a bright blue bird about half a foot from me. It stared at me for a second, quietly squawked, and flew away.
Soon afterward, in a different bush, I encountered an ooze (pictured; made of bubbly foam) trying to stealth.
I'm concerned that someone has changed the genre of my RPG, in a worrying direction. Will be avoiding taverns.
Men's Dang Soft Boxer Briefs | Duluth Trading Company
Softness from above for down below? You bet! Duluth Trading’s new Dang Soft® Boxer Briefs’ fabric is so silky-smooth, all you’ll feel is sky-high comfort! (www.duluthtrading.com)
Advertising, and especially targeted advertising, is widely hated. Something pretty interesting to me: insofar as I'm a rational agent, the amount a given advertiser should pay to show me their ad is positively correlated with how much I want to see that ad. On paper this sounds like an amazing situation, with positive-sum trades all around.
But it's easy to observe that most ads are annoying and bad, and that people hate them. wtf is up with this? I don't have time to think about it today, but maybe someone here already knows. @Jeff Kaufman maybe? Or @Daniel Filan ?
I would be pretty happy if ads were better. I regularly come across e.g. toddler products that I want to buy, and am very willing to spend money to save time/stress. But these things are almost never advertised to me.
I see *lots* of ads for things I already have, and lots of ads for things that are appropriate for parents of much younger or much older children. I can't actually think of the last time I bought something from an ad, which is shocking considering that I'm in a bunch of baby-related Facebook groups and often get products that people recommend there.
Advertisers understand that humans are very manipulable, and are very down to use dark-arts manipulation tactics. I suspect the correlation of values isn't actually very high at all. (Intentionally annoying ads can be very effective.)
I hate having to be constantly on guard from all these attempts to hijack my attention and influence my beliefs/desires. I opt out of targeted marketing whenever I can because I don't want advertising systems to have *even stronger* memetic hooks to grab me with.
GitHub - benwr/gwipt: Automatically commit all edits to a wip branch with GPT-3 commit messages
I wrote a plugin to sync flashcards from Logseq (open-source notes app, like a local Roam Research or a list-shaped Obsidian) to Mochi (closed-source spaced repetition app, like a remote / pretty Anki). Works quite well / has a lot of features for a two-weekend side project, though it took more effort than I hoped it would.
github.com/benwr/logseq-mochi-…
GitHub - benwr/logseq-mochi-sync: One-way synchronization of flashcards from Logseq to Mochi
Man, given that LLMs are "dream machines" / "all they can really do is hallucinate", it's wild how much they correctly remember.
Like, Claude 3.7 correctly knows a lot about the API used to write Logseq plugins. Logseq isn't exactly obscure, but it is definitely pretty niche, and the API is based on a relatively obscure database query language and a schema designed specifically for the app.
Human information throughput is allegedly only about 10-50 bits per second. This implies an interesting upper bound: the information throughput of biological humanity as a whole can't be higher than around 50 bits/s × 10^10 people = 500 Gbit/s. I.e., if all distinguishable actions made by humans were perfectly independent, biological humanity as a whole only has 500 Gbit/s of "steering power".
I need to think more about the idea of "steering power" (e.g. some obvious rough edges around amplifying your steering power using external information processing / decision systems), but I have some intuition that one might actually be able to come up with a not-totally-useless concept that lets us say something like "humanity can't stay in 'meaningful control' if we have an unaligned artificial agent with more steering power than humanity, expressed in bits/s".
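For what it's worth, here's the back-of-envelope arithmetic as a quick sketch; the 50 bits/s and 10^10-person figures are the rough assumptions from above, not precise numbers:

```python
# Back-of-envelope bound on humanity's aggregate "steering power".
# Assumptions: ~50 bits/s per person (upper end of the 10-50 range),
# ~10^10 humans, and perfectly independent actions.
bits_per_human = 50    # bits/s, per-person information throughput
population = 1e10      # rough human population

total = bits_per_human * population
print(f"{total / 1e9:.0f} Gbit/s")  # -> 500 Gbit/s
```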
Usually when people talk about egregores, I think they mostly have ideologies in mind.
There's a somewhat "lower-level" egregore (in the sense of a low-level language, not a low-level demon), that I think is pretty overlooked, that I think of as "emotional cynicism" (to distinguish it as specifically the emotional stance we associate with the word "cynical", and from capital-c Greek Cynicism).
Emotional cynicism seems to me to be near totally dominant in public online discourse, and I think that's both interesting and somewhat concerning.
- a pretty large subset of comments on hacker news, lesswrong, ...
- ~All reddit/Twitter/bluesky political discourse
- YouTube
"I understand your concern, but" / "I share your concern, but"
Recently I've noticed that this phrase seems especially likely to ring hollow: when I hear someone say it about something that feels important to me, I usually don't believe them. The phrase usually comes with some degree of "missing mood": if you're really concerned, why do you act as though declaring that you understand is sufficient argument that the concern is secondary? And how sure are you that you actually understand?
I don't like it when people just straight-up assert that they've considered and rejected your view, without admitting that they might have misunderstood you or miscategorized you. It's like 10x better, imo, to say "I think I understand your concern, but".
Yeah, it often feels phatic, or like what people do when they're trying to appear to be a good listener / balanced interlocutor while actually advocating strongly for their own point of view. (I've definitely either done this myself or said things in that same spirit.)
What's better? Maybe just 'that makes sense'?
A partial list of people whose art I've loved, and who I might have liked to be friends with, but who I think would not like me very much (all for different reasons):
- Ursula K. Le Guin (I'm not MtG Green enough)
- Ayn Rand (I'm too MtG Green)
- Ezra Koenig (I'm too MtG Blue)
I'm not really sure how or why I generated this list. It feels related to the thing about wanting to get stronger, and deleting my facebook last month. It's kind of an "edge-y" question: I don't know how to emotionally deal with the existence of people in this category, but they go on existing.
I feel like I'm living on the internet these days, since my health is too bad for in-person stuff. So it sucks that the internet is so much more aggro.
> My strategy so far in life has just been to avoid being the kind of person who attracts "sharp" / "angry" critics, and also to filter my social bubble to exclude them. But this doesn't scale if you're trying to do the things I'm trying to do.
What are you doing that is incompatible with filtering your social bubble?
Re: method "c": I'm wondering if you could intentionally give yourself exposure to critics in a way that's less vulnerable. The most obvious ways to do this might be too insincere for your taste, but idk, maybe there's still something you can do?
Like if I were trying to do this, I might create an anonymous account and intentionally share my most controversial (yet unimportant) thoughts/opinions, in places where some people will probably get mad at me. (Hopefully not in a way that, like, antagonizes people? I'd want it to be net good.) Then I'd try to lean into a mindset that getting criticism is a necessary/normal part of getting noticed.
Idle thought: I wonder if we'll start seeing "training@home" training runs for open-source LLMs. Anyone care to run some numbers or sanity checks on whether this is possible in principle?
The folding@home project has been hugely successful, reaching exaFLOPS-scale compute.
"Training@home" would have to efficiently do partial gradient updates on extremely heterogeneous hardware with widely varying network properties; I'm not sure if this has any chance of producing base models competitive with e.g. Llama. In terms of ops alone, a 1 exaFLOPS network would have taken 10^7 seconds = ~half a year to train Llama 70b, and I imagine the costs of distributing jobs to such a network and coordinating on weight updates would make this much more expensive. So, probably not going to be competitive?
I would guess that there will be reasons to at least want an LLM trained on an open corpus, whether it's community-trained or not.
Example reasons include ensuring that the model isn't secretly trying to get you to buy McDonalds, and the possibility that companies start releasing un-fine-tunable models.
Happy new year superstims!
I am probably being too problem-solvey right now and I hereby resolve to stop after this round, but in my experience, arborists are willing to produce documentation of their findings that can later be shown to landlords!
You just sound sad about your antenna and I wanna fix it.
I've been meaning to start donating blood and/or plasma for a few years now, partly because it's a good thing to do, but also as a way to shed accumulating substances (PFASs have been studied; whole blood donation also removes background heavy metals). The catch is that I use topical finasteride for hair loss, which I'd have to stop for a month before donating.
So, say I took a month off from finasteride, and then spent a month donating: whole blood once, and plasma 7 times. If my math is right, I'd have donated / regenerated 1 - 0.92^8 = ~half my blood volume; and ~10% of my body weight. Then maybe back to finasteride for two months, another month of no finasteride, and another donation month?
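A quick sketch of that arithmetic (the ~8%-per-donation figure is implied by the 0.92 factor above; the model assumes full regeneration, i.e. dilution rather than shrinkage, between donations):

```python
# Fraction of original blood replaced after 8 donations, assuming each
# donation removes ~8% of current (well-mixed) volume and the body fully
# regenerates the volume between donations.
remaining = 0.92 ** 8
print(f"donated/regenerated: {1 - remaining:.1%}")  # ~48.7%, i.e. ~half
```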
I'm finding it really hard to make #hamradio contacts in Delaware. Weirdly hard, given that the five states with smaller populations than Delaware were all much easier, even though some of them are further from me, and I've had no trouble making contacts in its neighboring states.
A few days ago I decided to try to be more strategic about contacting every US state, since I was really close, and I've now spent probably twice as much time trying to contact Delaware as trying to contact all four of the other stragglers combined.
Today I was inspired to ask ChatGPT for help with my health issues for the first time since o1 was released. It suggested that I might have Cushing's Syndrome, which actually makes a lot of sense. I don't think any doctors ever suggested this directly, but I do have a recollection of a doctor asking me if I was extremely thirsty or urinating a lot (I wasn't), which might have been a question for a relevant differential.
So hopefully tomorrow I'm going to wake up and go get a cortisol test.
A big chunk of my current best-guess political philosophy is somewhat libertarian, the rough intuition being that in many important respects, things very often go better when people make their own choices, especially about how much things are worth to them.
This is a helpful framework when the agents in your economy / political system are relatively static entities. But as far as I know it doesn't really have anything to say about cases where one agent might mold another agent's preferences, or decide which agents to bring into existence.
Some examples include:
- having children
- many aspects of how children are raised
- building AI agents
Population ethics is the ~one area where my moral intuitions bottom out at "there is no actual answer here". Most questions of morality intuitively feel like there is a right answer, but thinking about population ethics consistently leaves me with no solid foundations and nowhere to get any.
(Related: how should we think about farming animals for meat, given that mostly they wouldn't exist otherwise?)
Proposed fun / slightly edgy party game: Perzendo
Materials: index cards and a pencil, or a google doc.
One player is the perzendo master, who writes the names of two people in the room in a list sorted by some secret property.
The other players take turns. On each turn, a player either proposes a name (of any human, living or dead) or tries to guess the property. The perzendo master inserts each proposed name into the list wherever it falls according to the secret property.
The first player to guess the property correctly wins.
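If it helps to see the mechanic concretely, here's a minimal sketch of the master's bookkeeping; the names and the example property are hypothetical, not part of the game description:

```python
# Minimal sketch of the perzendo master's list: names stay sorted by a
# secret key function (here, hypothetically, name length).
import bisect

secret_property = len  # the secret sorting property
names = sorted(["Ada", "Katharine"], key=secret_property)

def propose(name: str) -> None:
    """Insert a proposed name where it falls under the secret property."""
    keys = [secret_property(n) for n in names]
    names.insert(bisect.bisect(keys, secret_property(name)), name)

propose("Grace")
print(names)  # ['Ada', 'Grace', 'Katharine']
```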
TIL that an experience that I've had ~once every month or so for my whole life, and assumed was near-universal, is actually relatively rare, and correlated with various bad things that I'm not aware of experiencing in relation to it (EBV infection, migraines, head trauma).
Basically, as I experience it (typically right as I'm falling asleep) everything visually starts to feel very small and far away, except that my tongue feels large and cumbersome in my mouth.
It's called Alice in Wonderland Syndrome; other people experience similar size distortions though the details vary a lot.
Pascal's Wager doesn't go far enough:
Granted, the Christian God offers infinite rewards, but as far as I can find this is always in terms of "eternal" life or "eternal" communion with him, and so we can be confident that he is offering rewards only as large as the cardinality of the continuum.
So come on down to Crazy Georg's Omega Plus First Church of G...d: If you can conceive of a God advertising any size of infinite reward, G...d will match it.
> Granted, the Christian God offers infinite rewards, but as far as I can find this is always in terms of "eternal" life or "eternal" communion with him, and so we can be confident that he is offering rewards only as large as the cardinality of the continuum.
FWIW I think it's plausible that the Greek words used in the NT don't have this sort of connotation.
I found this relevant and interesting chapter from Unsong by thinking "hmm, but Omega is an ancient word in some sense, and it's been more recently used in the context of infinities... and Jesus also referred to 'alpha and omega' to represent something like infinitude. So I can probably make a joke about kabbalah. Oh, but Scott Alexander will have already done that."
On Friday I tried to show @Daniel Filan how FT8 works, but I was having a really hard time getting QSOs. I was worried something was wrong with my #hamradio antenna setup, since the internet claimed that band conditions should be good. But this afternoon and evening I had a great time and got 17 QSOs across 5 different bands! So I think Friday must have been something transient rather than my (very janky) setup degrading.
I now have confirmed QSOs in 40 states, and unconfirmed ones in all but 3! (North Dakota, Delaware, and Vermont. Almost managed to get one in Delaware today, but wasn't quite able to complete the protocol) Plus 28 "DX entities" (mostly countries, but includes e.g. Alaska and Hawaii separately) on 6 continents!
Map of listening stations that heard me this afternoon:
Ended up deactivating my facebook yesterday. I wish I could have emotionally handled whatever was going on, but the only way I know how to productively deal with expressions of anger at that depth apparently doesn't scale past one or two people at a time.
Last night I felt really conflicted about it. Like, I had just been trying to get people to give me harsh feedback, hadn't I? Doesn't this undermine that, or feel like a petty table-flipping move?
I still have some of those worries, but today I'm feeling like it was obviously the right move. Like if I had a gangrenous limb or something and had cut it off: It's pretty awful that I lost a limb, but it's way better than losing my whole self. Plus in this case I can reattach it if I figure out how to get rid of the gangrene.