It's time to narrow the 150 entries in the Book Review Contest to about a dozen finalists. I can't read 150 reviews alone, so I need your help.
You'll find the entries in six Google Docs (thanks to a reader for collating them):
A - D
E - I
L - P
R - S
Th - The N
The O - Y
Please pick as many as you have time for, read them, and rate them using this form.
Don't read them in order! If you read them in order, I'll have 1,000 votes on the first review, 500 on the second, and so on to none in the second half. Either pick a random review (thanks to AlexanderTheGrand and Taymon for making a random-review-chooser script here) or pick whichever seems most interesting to you. List of all books reviewed below.
https://www.astralcodexten.com/p/choose-book-review-finalists-2024
-
Suffering is part of the human condition, except when it isn't.
I met a man at an ACX meetup once who claimed he has never felt anxiety, not even the littlest bit. His father was the same way, so maybe it's genetic.
Some people feel more pain than others. The "more pain" category includes some big demographic groups like redheads, who seem to feel some types of pain more intensely and may need up to 20% more anaesthetic, though their exact processing differences are complicated. But there are also various lesser-known genetic conditions that can make bizarre things - water, light touch, mild temperature changes - excruciatingly painful. The most exotic cause of this syndrome has to be platypus venom, which is both painful in and of itself and also seems to increase the body's overall capacity to feel pain; for years after a platypus scratch, every tiny scrape will hurt worse than usual.
https://www.astralcodexten.com/p/profile-the-far-out-initiative
-
Manifold pivot || Lab leak hindcasting || CFTC extra-double-bans prediction markets
https://www.astralcodexten.com/p/mantic-monday-51324
-
Most recent post here.
Table Of Contents:
1: Comments From Robin
2: Comments About/From Goldin et al
3: Comments From The Rest Of You Yokels
https://www.astralcodexten.com/p/mantic-monday-51324
-
If you're from a country that doesn't have emotional support animals, here's how it works.
Sometimes places ban or restrict animals. For example, an apartment building might not allow dogs. Or an airline might charge you money to transport your cat. But the law requires them to allow service animals, for example guide dogs for the blind. A newer law also requires some of these places to allow emotional support animals, ie animals that help people with mental health problems like depression or anxiety. So for example, if you're depressed, but having your dog nearby makes you feel better, then a landlord has to let you keep your dog in the apartment. Or if you're anxious, but petting your cat calms you down, then an airline has to take your cat free of charge.
Clinically and scientifically, this is great. Many studies show that pets help people with mental health problems. Depressed people really do benefit from a dog who loves them. Anxious people really do feel calmer when they hold a cute kitten.
Legally, it's a racket.
https://www.astralcodexten.com/p/the-emotional-support-animal-racket
-
California's state senate is considering SB1047, a bill to regulate AI. Since OpenAI, Anthropic, Google, and Meta are all in California, this would affect most of the industry.
If the California state senate passed a bill saying that the sky was blue, I would start considering whether it might be green, or colorless, or maybe not exist at all. And people on Twitter have been saying that this bill would ban open-source AI - no, all AI! - no, all technology more complicated than a toaster! So I started out skeptical.
But Zvi Mowshowitz (summary article in Asterisk, long FAQ on his blog) has looked at it more closely and found:
https://www.astralcodexten.com/p/asteriskzvi-on-californias-ai-bill
-
Original post here.
Table Of Contents:
1: Response From The Author
2: Attempted Fact Checks
3: People With Personal Experience At Their Workplace
4: People With Personal Experience In Civil Rights
5: The Origins Of Modern Wokeness
6: Other Countries
7: EEOC Lawsuits
8: Other Good Comments
9: Conclusions And Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-cf9
-
The Origins Of Woke, by Richard Hanania, has an ambitious thesis. And it argues for an ambitious thesis. But the thesis it has isn't the one it argues for.
The claimed thesis is "the cultural package of wokeness is downstream of civil rights law". It goes pretty hard on this. For example, there's the title, The Origins Of Woke. Or the Amazon blurb: "The roots of the culture lie not in the culture itself, but laws and regulations enacted decades ago". Or the banner ad:
The other thesis, the one it actually argues for, is "US civil rights law is bad". On its own, this is a fine thesis. A book called Civil Rights Law Is Bad would - okay, I admit that despite being a professional Internet writer I have no idea how the culture works anymore, or whether being outrageous is good or bad for sales these days. We'll never know, because Richard chose to wrap his argument in a few pages on how maybe this is the origin of woke or something. Still, the book is on why civil rights law is bad.
https://www.astralcodexten.com/p/book-review-the-origins-of-woke
-
Robin Hanson replied here to my original post challenging him on health care here.
On Straw-Manning
Robin thinks I'm straw-manning him. He says:
https://www.astralcodexten.com/p/response-to-hanson-on-health-care
-
In November 2022, Aella posted this Twitter poll:
19% of women without pre-menstrual symptoms believed in the supernatural, compared to 39% of women with PMS. I can't do chi-squared tests in my head, but with 1,074 votes this looks significant. Weird!
Here's another one:
Now 72% of people with PMS self-describe as neurotic, compared to only 45% without. Aella writes more about this here, and sebjenseb confirms here. I'm less weirded out by this one, because you can imagine that people feel neurotic because of PMS symptoms, but it's still a surprisingly strong effect.
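The "looks significant" eyeball claim about the first poll can be checked with a by-hand chi-squared test. A minimal sketch, assuming (since the poll reports only percentages) that the 1,074 votes split evenly between the two groups; the real split isn't given, so the counts below are illustrative:

```python
# Hypothetical 2x2 counts: the poll reports only percentages, so assume
# ~537 respondents per group (the actual split is unknown).
n = 1074
a, b = 102, 435   # no PMS: ~19% believe in the supernatural, rest don't
c, d = 209, 328   # PMS:    ~39% believe, rest don't

# Chi-squared statistic for a 2x2 table (no continuity correction):
# chi2 = n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(round(chi2, 1))  # ~52, far above the 3.84 cutoff for p < 0.05
```

Under these assumptions the statistic comes out around 52, far past the 3.84 critical value for p < 0.05, so the eyeball judgment holds up.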
https://www.astralcodexten.com/p/survey-results-pms-symptoms
-
One of the most common arguments against AI safety is:
Here's an example of a time someone was worried about something, but it didn't happen. Therefore, AI, which you are worried about, also won't happen.
I always give the obvious answer: "Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn't more like those?" The people I'm arguing with always seem so surprised by this response, as if I'm committing some sort of betrayal by destroying their beautiful argument.
The first hundred times this happened, I thought I must be misunderstanding something. Surely "I can think of one thing that didn't happen, therefore nothing happens" is such a dramatic logical fallacy that no human is dumb enough to fall for it. But people keep bringing it up, again and again. Very smart people, people who I otherwise respect, make this argument and genuinely expect it to convince people!
Usually the thing that didnât happen is overpopulation, global cooling, etc. But most recently it was some kind of coffeepocalypse:
https://www.astralcodexten.com/p/desperately-trying-to-fathom-the
-
Robin Hanson of Overcoming Bias more or less believes medicine doesn't work. [EDIT: see his response here, where he says this is an inaccurate summary of his position. Further chain of responses here and here.]
This is a strong claim. It would be easy to round Hanson's position off to something weaker, like "extra health care isn't valuable on the margin". This is how most people interpret the studies he cites. Still, I think his current, actual position is that medicine doesn't work. For example, he writes:
https://www.astralcodexten.com/p/contra-hanson-on-medical-effectiveness
-
[previously in series: 1, 2, 3, 4, 5]
When that April with his sunlight fierce
The rainy winter of the coast doth pierce
And filleth every spirit with such hale
As horniness engenders in the male
Then folk go out in crop tops and in shorts
Their bodies firm from exercise and sports
And men gaze at the tall girls and the shawties
And San Franciscans long to go to parties.
https://www.astralcodexten.com/p/ye-olde-bay-area-house-party
-
Lumina, the genetically modified anti-tooth-decay bacterium that I wrote about in December, is back in the news after lowering its price from $20,000 to $250 and getting endorsements from Yishan Wong, Cremieux, and Richard Hanania (as well as anti-endorsements from Saloni and Stuart Ritchie). A few points that have come up:
https://www.astralcodexten.com/p/updates-on-lumina-probiotic
-
Original post here. Table of contents below. I want to especially highlight three things.
First, Saar wrote a response to my post (and to zoonosis arguments in general). I've put a summary and some of my responses at 1.11, but you can read the full post on the Rootclaim blog.
Second, I kind of made fun of Peter for giving some very extreme odds, and I mentioned they were sort of trolling, but he's convinced me they were 100% trolling. Many people held these poorly-done calculations against Peter, so I want to make it clear that's my fault for misrepresenting it. See 3.1 for more details.
Third, in my original post, I failed to mention that Peter also has a blog, including a post summing up his COVID origins argument.
Thanks to some people who want to remain anonymous for helping me with this post. Any remaining errors are my own.
1: Comments Arguing Against Zoonosis
— 1.1: Is COVID different from other zoonoses?
— 1.2: Were the raccoon-dogs wild-caught?
— 1.3: 92 early cases
— 1.4: COVID in Brazilian wastewater
— 1.5: Biorealism's 16 arguments
— 1.6: DrJayChou's 7 arguments
— 1.7: How much should coverup worry us?
— 1.8: Have Worobey and Pekar been debunked?
— 1.9: Was there ascertainment bias in early cases?
— 1.10: Connor Reed / Gwern on cats
— 1.11: Rootclaim's response to my post
2: Comments Arguing Against Lab Leak
— 2.1: Is the pandemic starting near WIV reverse correlation?
3: Other Points That Came Up
— 3.1: Apology to Peter re: extreme odds
— 3.2: Tobias Schneider on Rootclaim's Syria Analysis
— 3.3: Closing thoughts on Rootclaim
4: Summary And Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-5d7
-
[I havenât independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I canât guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-april-2024
-
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times.
This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen.
You can find the list below, in the following order:
Africa & Middle East
Asia-Pacific (including Australia)
Europe (including UK)
North America & Central America
South America
There should very shortly be a map of these meetups on the LessWrong community page.
https://www.astralcodexten.com/p/spring-meetups-everywhere-2024
-
Saar Wilf is an ex-Israeli entrepreneur. Since 2016, he's been developing a new form of reasoning, meant to transcend normal human bias.
His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn't exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it.
But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering "Bayes, Bayes, Bayes" under your breath. Nobody - not the statisticians, not Nate Silver, certainly not me - tries to do full Bayesian reasoning on fuzzy real-world problems. They'd be too hard to model. You'd make some philosophical mistake converting the situation into numbers, then end up much worse off than if you'd tried normal human intuition.
Wilf spent years working on this problem, until he was satisfied his method could avoid these kinds of pitfalls. Then he started posting analyses of different open problems to his site, rootclaim.com. Here are three:
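As a toy illustration of the odds form of Bayes' rule that this style of analysis builds on (the prior and likelihood ratios below are invented for the example, not Rootclaim's actual figures): start with prior odds, then multiply in one likelihood ratio per piece of evidence.

```python
# Toy odds-form Bayesian update: posterior odds = prior odds x likelihood ratios.
# All numbers here are made up for illustration; they are not Rootclaim's estimates.

prior_odds = 1 / 4                    # hypothesis starts at 1:4 against (20%)
likelihood_ratios = [3.0, 0.5, 8.0]   # how strongly each piece of evidence favors it

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)  # 0.75: the odds moved from 1:4 to 3:1
```

The hard part, as the paragraph above says, isn't the multiplication; it's justifying each likelihood ratio for a fuzzy real-world problem.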
-
It's every blogger's curse to return to the same arguments again and again. Matt Yglesias has to keep writing "maybe we should do popular things instead of unpopular ones", Freddie de Boer has to keep writing "the way culture depicts mental illness is bad", and for whatever reason, I keep getting in fights about whether you can have probabilities for non-repeating, hard-to-model events. For example:
What is the probability that Joe Biden will win the 2024 election?
What is the probability that people will land on Mars before 2050?
What is the probability that AI will destroy humanity this century?
The argument against: usually we use probability to represent an outcome from some well-behaved distribution. For example, if there are 400 white balls and 600 black balls in an urn, the probability of pulling out a white ball is 40%. If you pulled out 100 balls, close to 40 of them would be white. You can literally pull out the balls and do the experiment.
In contrast, saying "there's a 45% probability people will land on Mars before 2050" seems to come out of nowhere. How do you know? If you were to say "the probability humans will land on Mars is exactly 45.11782%", you would sound like a loon. But how is saying that it's 45% any better? With balls in an urn, the probability might very well be 45.11782%, and you can prove it. But with humanity landing on Mars, aren't you just making this number up?
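The urn claim is easy to check by simulation; a minimal sketch using only the standard library, drawing 100 balls without replacement from the 400-white/600-black urn many times:

```python
import random

random.seed(0)
urn = ["white"] * 400 + ["black"] * 600

# Draw 100 balls without replacement, repeatedly, and count the whites.
trials = 2000
total_white = sum(random.sample(urn, 100).count("white") for _ in range(trials))

print(total_white / trials)  # averages close to 40 white balls per draw
```

Any single draw of 100 balls wanders a few balls either side of 40, but the average across trials settles very close to it, which is exactly the frequentist picture the paragraph describes.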
Since people on social media have been talking about this again, let's go over it one more depressing, fruitless time.
https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist
-
I have data from two big Internet surveys, Less Wrong 2014 and Clearer Thinking 2023. Both asked questions about IQ:
The average LessWronger reported their IQ as 138.
The average ClearerThinking user reported their IQ as 130.
These are implausibly high. Only 1/200 people has an IQ of 138 or higher. 1/50 people has an IQ of 130 or higher, but the ClearerThinking survey used crowdworkers (eg Mechanical Turk) who should be totally average.
Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these donât look like lies. Both surveys asked for SAT scores, which are known to correspond to IQ. The LessWrong average was 1446, corresponding to IQ 140. The ClearerThinking average was 1350, corresponding to IQ 134. People seem less likely to lie about their SATs, and least likely of all to optimize their lies for getting IQ/SAT correspondences right.
And the Less Wrong survey asked people what test they based their estimates off of. Some people said fake Internet IQ tests. But other people named respected tests like the WAIS, WISC, and Stanford-Binet, or testing sessions by Mensa (yes, I know you all hate Mensa, but their IQ tests are considered pretty accurate). The subset of about 150 people who named unimpeachable tests had slightly higher IQ (average 140) than everyone else.
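Those rarity figures follow from the normal curve IQ tests are normed to (mean 100, SD 15); a quick check with Python's standard library:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

# Fraction of the population at or above each score.
share_138 = 1 - iq.cdf(138)   # ~0.0056, i.e. about 1 in 180
share_130 = 1 - iq.cdf(130)   # ~0.0228, i.e. about 1 in 44

print(f"1 in {1 / share_138:.0f}, 1 in {1 / share_130:.0f}")
```

The exact cutoffs give roughly 1 in 180 and 1 in 44, close to the post's rounder 1-in-200 and 1-in-50 figures.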
Thanks to Spencer Greenberg of ClearerThinking, I think I'm finally starting to make progress in explaining what's going on.
https://www.astralcodexten.com/p/the-mystery-of-internet-survey-iqs