• 59 Posts
  • 1.73K Comments
Joined 3 years ago
Cake day: June 27, 2023

  • The only people I trust as little as I trust the owners of corporate social media are the politicians who have decided to cash in on the moment by "regulating" them. I mean, here in progressive Massachusetts, the state house of representatives just this week passed a bill that, depending on the whims of the Attorney General, would require awful.systems to verify the ages of its users by gathering their government-issued IDs or biometrics. We are, you see, a "public website, online service, online application or mobile application that displays content primarily generated by users and allows users to create, share and view user-generated content with other users". And so we would have to "implement an age assurance or verification system to determine whether a current or prospective user on the social media platform" is 16 or older. (Or 14 or 15 with parental consent, but your humble mods lack the resources to parse divorce laws in all localities worldwide, sort out issues of disputed guardianship, etc., etc.) The meaning of what "practicable" age verification is supposed to be would depend upon regulations that the Attorney General has yet to write.

    So, yeah, as an old-school listserv nerd who had the "I am not on Facebook" T-shirt 15 years ago, I don't trust any of these people.


  • From what appears to be the guy’s Substack:

    East Asian people are on average more intelligent than Black people. Which is factual based on the vast majority of tests we have developed and observed over the decades.

    And:

    I am an advocate for ending mass migration and initiating mass deportations for illegal migrants in western countries. Not because I am a white supremacist (I am not white) or because I believe there is necessarily anything innately special about being white. I believe these things for three reasons. First, nations have the right to preserve their ethnic identity, and second low skill immigration saturates the job markets of these countries making jobs which could once earn a living wage become unlivable, increasing the amount of value draining people in society by both importing them and undercutting low skill natives. lastly, generally, whiteness in these countries is a decent correlative to some of the things I value.

    And:

    It is true that many of the features which white supremacist value have little to do with the genetic predisposition to European ancestry and instead have to do with higher IQ; which is relatively more common among whites than most other groups.

    So, a common-or-garden guy who is not left-wing or right-wing but a secret third thing that is also right-wing.




  • "Scientists invented a fake disease. AI told people it was real"

    https://www.nature.com/articles/d41586-026-01100-y

    But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

    The condition doesn't appear in the standard medical literature — because it doesn't exist. It's the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. "I wanted to see if I can create a medical condition that did not exist in the database," she says.

    The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.



  • I aired some Reviewer #2 grievances in the bsky comments:

    https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c

    "Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI's ChatGPT 'to get to the edge of what's known in quantum physics.'"

    As a physicist, I have never pressed F to doubt harder.

    "In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents." To the best of my knowledge, these suggestions were never evaluated by any other researchers.

    (The original paper was published as a "comment": https://www.nature.com/articles/s42256-022-00465-9)

    Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.

    https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643

    "In a 2025 study, ChatGPT passed the test more reliably than actual humans did."

    If this is referring to Jones and Bergen's "Large Language Models Pass the Turing Test", that's a preprint (arXiv:2503.23674) that has yet to pass peer review more than a year after its posting.

    "A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win"

    Which researchers?

    (Hint: Eliezer Yudkowsky is not a researcher.)

    AI: "I will convince you to let me out of this box"

    Humanity (wringing hands): "Oh, where is our savior? Who will stand fast in the face of all entreaties?"

    Bartleby the Scrivener: hello

    "…a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor."

    Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.

    https://repository.uantwerpen.be/docman/irua/371b9dmotoM74

    "In late 2022, four computer scientists published a paper motivated in part by concerns about 'deceptive alignment,' … one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it's already happening."

    Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; "posted" is not the same as "published". And claims in this area are rife with criti-hype:

    https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/

    Oh, right, the "Future of Life Institute". Pepperidge Farm remembers:

    "In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper."

    https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism

    "Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro … has written articles for the site in the past."

    https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/