Episodes
-
If you’re from a country that doesn’t have emotional support animals, here’s how it works.
Sometimes places ban or restrict animals. For example, an apartment building might not allow dogs. Or an airline might charge you money to transport your cat. But the law requires them to allow service animals, for example guide dogs for the blind. A newer law also requires some of these places to allow emotional support animals, i.e. animals that help people with mental health problems like depression or anxiety. So for example, if you’re depressed, but having your dog nearby makes you feel better, then a landlord has to let you keep your dog in the apartment. Or if you’re anxious, but petting your cat calms you down, then an airline has to take your cat free of charge.
Clinically and scientifically, this is great. Many studies show that pets help people with mental health problems. Depressed people really do benefit from a dog who loves them. Anxious people really do feel calmer when they hold a cute kitten.
Legally, it’s a racket.
https://www.astralcodexten.com/p/the-emotional-support-animal-racket
-
California’s state senate is considering SB1047, a bill to regulate AI. Since OpenAI, Anthropic, Google, and Meta are all in California, this would affect most of the industry.
If the California state senate passed a bill saying that the sky was blue, I would start considering whether it might be green, or colorless, or maybe not exist at all. And people on Twitter have been saying that this bill would ban open-source AI - no, all AI! - no, all technology more complicated than a toaster! So I started out skeptical.
But Zvi Mowshowitz (summary article in Asterisk, long FAQ on his blog) has looked at it more closely and found:
https://www.astralcodexten.com/p/asteriskzvi-on-californias-ai-bill
-
Original post here.
Table Of Contents:
1: Response From The Author
2: Attempted Fact Checks
3: People With Personal Experience At Their Workplace
4: People With Personal Experience In Civil Rights
5: The Origins Of Modern Wokeness
6: Other Countries
7: EEOC Lawsuits
8: Other Good Comments
9: Conclusions And Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-cf9
-
The Origins Of Woke, by Richard Hanania, has an ambitious thesis. And it argues for an ambitious thesis. But the thesis it has isn’t the one it argues for.
The claimed thesis is “the cultural package of wokeness is downstream of civil rights law”. It goes pretty hard on this. For example, there’s the title, The Origins Of Woke. Or the Amazon blurb: “The roots of the culture lie not in the culture itself, but laws and regulations enacted decades ago”. Or the banner ad:
The other thesis, the one it actually argues for, is “US civil rights law is bad”. On its own, this is a fine thesis. A book called Civil Rights Law Is Bad would - okay, I admit that despite being a professional Internet writer I have no idea how the culture works anymore, or whether being outrageous is good or bad for sales these days. We’ll never know, because Richard chose to wrap his argument in a few pages on how maybe this is the origin of woke or something. Still, the book is on why civil rights law is bad.
https://www.astralcodexten.com/p/book-review-the-origins-of-woke
-
Robin Hanson replied here to my original post (here) challenging him on health care.
On Straw-Manning
Robin thinks I’m straw-manning him. He says:
https://www.astralcodexten.com/p/response-to-hanson-on-health-care
-
In November 2022, Aella posted this Twitter poll:
19% of women without pre-menstrual symptoms believed in the supernatural, compared to 39% of women with PMS. I can’t do chi-squared tests in my head, but with 1,074 votes this looks significant. Weird!
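A rough way to sanity-check that intuition, for readers who want it: the poll only reports percentages, so the even split between the two groups in the sketch below is an assumption made purely for illustration, not Aella’s actual breakdown.

```python
# Rough significance check for the first poll. The poll only reports
# percentages, so the 50/50 split between groups is an assumption made
# purely for illustration, not Aella's actual numbers.
from scipy.stats import chi2_contingency

total_votes = 1074
group_size = total_votes // 2  # assumed: half of respondents report PMS

pms_yes = round(0.39 * group_size)     # 39% of the PMS group believe in the supernatural
no_pms_yes = round(0.19 * group_size)  # 19% of the no-PMS group do

table = [
    [pms_yes, group_size - pms_yes],
    [no_pms_yes, group_size - no_pms_yes],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p well below 0.05 under these assumptions
```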
Here’s another one:
Now 72% of people with PMS self-describe as neurotic, compared to only 45% without. Aella writes more about this here, and sebjenseb confirms here. I’m less weirded out by this one, because you can imagine that people feel neurotic because of PMS symptoms, but it’s still a surprisingly strong effect.
https://www.astralcodexten.com/p/survey-results-pms-symptoms
-
One of the most common arguments against AI safety is:
Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen.
I always give the obvious answer: “Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn’t more like those?” The people I’m arguing with always seem so surprised by this response, as if I’m committing some sort of betrayal by destroying their beautiful argument.
The first hundred times this happened, I thought I must be misunderstanding something. Surely “I can think of one thing that didn’t happen, therefore nothing happens” is such a dramatic logical fallacy that no human is dumb enough to fall for it. But people keep bringing it up, again and again. Very smart people, people who I otherwise respect, make this argument and genuinely expect it to convince people!
Usually the thing that didn’t happen is overpopulation, global cooling, etc. But most recently it was some kind of coffeepocalypse:
https://www.astralcodexten.com/p/desperately-trying-to-fathom-the
-
Robin Hanson of Overcoming Bias more or less believes medicine doesn’t work. [EDIT: see his response here, where he says this is an inaccurate summary of his position. Further chain of responses here and here.]
This is a strong claim. It would be easy to round Hanson’s position off to something weaker, like “extra health care isn’t valuable on the margin”. This is how most people interpret the studies he cites. Still, I think his current, actual position is that medicine doesn’t work. For example, he writes:
https://www.astralcodexten.com/p/contra-hanson-on-medical-effectiveness
-
[previously in series: 1, 2, 3, 4, 5]
When that April with his sunlight fierce
The rainy winter of the coast doth pierce
And filleth every spirit with such hale
As horniness engenders in the male
Then folk go out in crop tops and in shorts
Their bodies firm from exercise and sports
And men gaze at the tall girls and the shawties
And San Franciscans long to go to parties.
https://www.astralcodexten.com/p/ye-olde-bay-area-house-party
-
Lumina, the genetically modified anti-tooth-decay bacterium that I wrote about in December, is back in the news after lowering its price from $20,000 to $250 and getting endorsements from Yishan Wong, Cremieux, and Richard Hanania (as well as anti-endorsements from Saloni and Stuart Ritchie). A few points that have come up:
https://www.astralcodexten.com/p/updates-on-lumina-probiotic
-
Original post here. Table of contents below. I want to especially highlight three things.
First, Saar wrote a response to my post (and to zoonosis arguments in general). I’ve put a summary and some of my responses at 1.11, but you can read the full post on the Rootclaim blog.
Second, I kind of made fun of Peter for giving some very extreme odds, and I mentioned they were sort of trolling, but he’s convinced me they were 100% trolling. Many people held these poorly-done calculations against Peter, so I want to make it clear that’s my fault for mis-presenting it. See 3.1 for more details.
Third, in my original post, I failed to mention that Peter also has a blog, including a post summing up his COVID origins argument.
Thanks to some people who want to remain anonymous for helping me with this post. Any remaining errors are my own.
1: Comments Arguing Against Zoonosis
— 1.1: Is COVID different from other zoonoses?
— 1.2: Were the raccoon-dogs wild-caught?
— 1.3: 92 early cases
— 1.4: COVID in Brazilian wastewater
— 1.5: Biorealism’s 16 arguments
— 1.6: DrJayChou’s 7 arguments
— 1.7: How much should coverup worry us?
— 1.8: Have Worobey and Pekar been debunked?
— 1.9: Was there ascertainment bias in early cases?
— 1.10: Connor Reed / Gwern on cats
— 1.11: Rootclaim’s response to my post
2: Comments Arguing Against Lab Leak
— 2.1: Is the pandemic starting near WIV reverse correlation?
3: Other Points That Came Up
— 3.1: Apology to Peter re: extreme odds
— 3.2: Tobias Schneider on Rootclaim’s Syria Analysis
— 3.3: Closing thoughts on Rootclaim
4: Summary And Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-5d7
-
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-april-2024
-
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times.
This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen.
You can find the list below, in the following order:
Africa & Middle East
Asia-Pacific (including Australia)
Europe (including UK)
North America & Central America
South America
There should very shortly be a map of these meetups on the LessWrong community page.
https://www.astralcodexten.com/p/spring-meetups-everywhere-2024
-
Saar Wilf is an ex-Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias.
His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it.
But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver, certainly not me - tries to do full Bayesian reasoning on fuzzy real-world problems. They’d be too hard to model. You’d make some philosophical mistake converting the situation into numbers, then end up much worse off than if you’d tried normal human intuition.
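For the mechanics, here is what a single update looks like on toy numbers; every figure below is invented for illustration, and the hard part that this sketch skips is choosing such numbers for fuzzy real-world evidence.

```python
# Toy Bayesian update with made-up numbers, just to show the mechanics.
# Real-world analyses like Rootclaim's involve many pieces of evidence and
# much harder judgment calls about what these probabilities should be.
prior = 0.5                # assumed prior probability of hypothesis H
p_e_given_h = 0.8          # assumed probability of the evidence if H is true
p_e_given_not_h = 0.2      # assumed probability of the evidence if H is false

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(f"P(H | evidence) = {posterior:.2f}")  # 0.80 with these numbers
```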
Wilf spent years working on this problem, until he was satisfied his method could avoid these kinds of pitfalls. Then Rootclaim started posting analyses of different open problems to its site, rootclaim.com. Here are three:
-
It’s every blogger’s curse to return to the same arguments again and again. Matt Yglesias has to keep writing “maybe we should do popular things instead of unpopular ones”, Freddie de Boer has to keep writing “the way culture depicts mental illness is bad”, and for whatever reason, I keep getting in fights about whether you can have probabilities for non-repeating, hard-to-model events. For example:
What is the probability that Joe Biden will win the 2024 election?
What is the probability that people will land on Mars before 2050?
What is the probability that AI will destroy humanity this century?
The argument against: usually we use probability to represent an outcome from some well-behaved distribution. For example, if there are 400 white balls and 600 black balls in an urn, the probability of pulling out a white ball is 40%. If you pulled out 100 balls, close to 40 of them would be white. You can literally pull out the balls and do the experiment.
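To make the “you can literally do the experiment” point concrete, here is a minimal simulation sketch (drawing with replacement, which the thought experiment doesn’t specify; the run counts are arbitrary):

```python
# Simulate the urn: 400 white and 600 black balls, draw 100 (with replacement),
# repeat many times, and see how many whites a typical draw contains.
import random

urn = ["white"] * 400 + ["black"] * 600
runs = 1000
whites_per_run = [
    sum(random.choice(urn) == "white" for _ in range(100)) for _ in range(runs)
]
print(sum(whites_per_run) / runs)                         # averages out near 40
print(sum(35 <= w <= 45 for w in whites_per_run) / runs)  # most runs land near 40
```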
In contrast, saying “there’s a 45% probability people will land on Mars before 2050” seems to come out of nowhere. How do you know? If you were to say “the probability humans will land on Mars is exactly 45.11782%”, you would sound like a loon. But how is saying that it’s 45% any better? With balls in an urn, the probability might very well be 45.11782%, and you can prove it. But with humanity landing on Mars, aren’t you just making this number up?
Since people on social media have been talking about this again, let’s go over it one more depressing, fruitless time.
https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist
-
I have data from two big Internet surveys, Less Wrong 2014 and Clearer Thinking 2023. Both asked questions about IQ:
The average LessWronger reported their IQ as 138.
The average ClearerThinking user reported their IQ as 130.
These are implausibly high. Only 1/200 people has an IQ of 138 or higher. 1/50 people have IQ 130, but the ClearerThinking survey used crowdworkers (e.g. Mechanical Turk) who should be totally average.
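That “implausibly high” is just a normal-curve tail calculation. Here is a minimal sketch of where figures like 1/200 and 1/50 come from, assuming the conventional IQ scaling of mean 100 and standard deviation 15:

```python
# Fraction of the population at or above a given IQ, assuming the conventional
# scaling: IQ is normally distributed with mean 100 and standard deviation 15.
from scipy.stats import norm

for iq in (130, 138, 140):
    tail = norm.sf(iq, loc=100, scale=15)  # P(IQ >= iq)
    print(f"IQ {iq}+: {tail:.4f} of the population, about 1 in {round(1 / tail)}")
```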
Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these don’t look like lies. Both surveys asked for SAT scores, which are known to correspond to IQ. The LessWrong average was 1446, corresponding to IQ 140. The ClearerThinking average was 1350, corresponding to IQ 134. People seem less likely to lie about their SATs, and least likely of all to optimize their lies for getting IQ/SAT correspondences right.
And the Less Wrong survey asked people what test they based their estimates off of. Some people said fake Internet IQ tests. But other people named respected tests like the WAIS, WISC, and Stanford-Binet, or testing sessions by Mensa (yes, I know you all hate Mensa, but their IQ tests are considered pretty accurate). The subset of about 150 people who named unimpeachable tests had slightly higher IQ (average 140) than everyone else.
Thanks to Spencer Greenberg of ClearerThinking, I think I’m finally starting to make progress in explaining what’s going on.
https://www.astralcodexten.com/p/the-mystery-of-internet-survey-iqs
-
Both the Atlantic’s critique of polyamory and my defense of it shared the same villain - “therapy culture”, the idea that you should prioritize “finding your true self” and make drastic changes if your current role doesn’t seem “authentically you”.
A friend recently suggested a defense of this framework, which surprised me enough that I now relay it to you.
https://www.astralcodexten.com/p/in-partial-grudging-defense-of-some
-
(inspired by Aid Airdrop Kills Five People In Gaza After Parachute Fails)
https://www.astralcodexten.com/p/verses-on-five-people-being-killed
-
Robots of prediction, predictions of robots
https://www.astralcodexten.com/p/mantic-monday-31124
-
There are ACX meetup groups all over the world. Lots of people are vaguely interested, but don't try them out until I make a big deal about it on the blog. Since learning that, I've tried to make a big deal about it on the blog twice annually, and it's that time of year again.
If you're willing to organize a meetup for your city, please fill out the organizer form.
https://www.astralcodexten.com/p/spring-meetups-everywhere-2024-call