A machine learning librarian at Hugging Face just released a dataset composed of one million Bluesky posts, complete with when they were posted and who posted them, intended for machine learning research.

Daniel van Strien announced the dataset in a post on Bluesky on Tuesday.

“This dataset contains 1 million public posts collected from Bluesky Social’s firehose API, intended for machine learning research and experimentation with social media data,” the dataset description says. “Each post contains text content, metadata, and information about media attachments and reply relationships.”

The data isn’t anonymous. In the dataset, each post is listed alongside the user’s decentralized identifier, or DID; van Strien also made a search tool for finding users based on their DID and published it on Hugging Face. A quick skim through the first few hundred of the million posts shows people doing normal types of Bluesky posting—arguing about politics, talking about concerts, saying stuff like “The cat is gay” and “When’s the last time yall had Boston baked beans?”—but the dataset has swept up a lot of adult content, too.
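Because each record carries the author’s DID, posts are directly traceable to accounts. A researcher who wants to keep posts by the same author linkable without exposing the raw identifier could replace each DID with a salted hash. The sketch below is illustrative only: the record shape and field names are assumptions, not the dataset’s actual schema, and salted hashing is pseudonymization rather than true anonymization (the post text itself can still identify someone).

```python
import hashlib

# Hypothetical record shape -- the real dataset's schema may differ.
posts = [
    {"author_did": "did:plc:abc123", "text": "The cat is gay"},
    {"author_did": "did:plc:abc123", "text": "Another post by the same account"},
    {"author_did": "did:plc:xyz789", "text": "When's the last time yall had Boston baked beans?"},
]

def pseudonymize(post, salt="example-salt"):
    """Replace the author DID with a salted SHA-256 digest.

    Posts by the same author map to the same digest, so authorship
    structure is preserved, but the DID itself is no longer present.
    """
    digest = hashlib.sha256((salt + post["author_did"]).encode("utf-8")).hexdigest()[:16]
    return {**post, "author_did": digest}

pseudo_posts = [pseudonymize(p) for p in posts]
```

Note that this only weakens linkage to the public account; anyone who can guess the salt, or who searches the verbatim post text on Bluesky, can still re-identify the author.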

  • Brumefey@sh.itjust.works · 15 points · 8 hours ago

    I don’t know why social media is used for training. It’s like the worst quality of data ever, and it results in answers like « go kill yourself » when prompted about something sad…

    • foremanguy@lemmy.ml · 2 points · 4 hours ago

      They are used because they are examples of “real life” (not really, but you know) conversation

      • Pandemanium@lemm.ee · 1 point · 3 minutes ago

        But why do we need to recreate “real life”? Don’t we already do this relatively well in books, TV, and movies? People keep saying we won’t use AI to replace creative writing, but this (and propaganda, making bot conversations seem like real people) are the only use cases for this kind of data. LLMs don’t need to improve their conversation skills. What they really need is to stop hallucinating, and this kind of data won’t help with that.