cross-posted from: https://lemmy.ca/post/37011397

!opensource@programming.dev

The popular open-source VLC video player was demonstrated on the floor of CES 2025 with automatic AI subtitling and translation, generated locally and offline in real time. Parent organization VideoLAN shared a video on Tuesday in which president Jean-Baptiste Kempf shows off the new feature, which uses open-source AI models to generate subtitles for videos in several languages.
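A minimal sketch of how this kind of local, offline subtitle generation can work, using the open-source Whisper speech-recognition model from Python. The article doesn't specify VLC's actual pipeline (the feature is built into the player itself rather than scripted), so the model size and file names below are illustrative assumptions.

```python
# Hypothetical sketch: local, offline subtitle generation with the
# open-source Whisper model. Runs entirely on the local machine;
# no network calls are made after the model is downloaded.
import whisper  # pip install openai-whisper


def format_timestamp(seconds: float) -> str:
    """Convert seconds to the SRT timestamp format HH:MM:SS,mmm."""
    millis = int(round(seconds * 1000))
    hours, millis = divmod(millis, 3_600_000)
    minutes, millis = divmod(millis, 60_000)
    secs, millis = divmod(millis, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"


model = whisper.load_model("base")       # "base" is a small multilingual model
result = model.transcribe("movie.mp4")   # hypothetical input file

# Write the recognized segments out as an .srt subtitle file.
with open("movie.srt", "w", encoding="utf-8") as srt:
    for i, segment in enumerate(result["segments"], start=1):
        srt.write(f"{i}\n")
        srt.write(f"{format_timestamp(segment['start'])} --> "
                  f"{format_timestamp(segment['end'])}\n")
        srt.write(f"{segment['text'].strip()}\n\n")
```

The resulting `.srt` file can then be loaded alongside the video in any player, VLC included; the real-time feature shown at CES does the same kind of recognition continuously as the video plays.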

  • LandedGentry@lemmy.zip · 14 hours ago

    Honestly though? If your audio is even half-decent you’ll get like 95% accuracy. Considering a lot of media just wouldn’t have any subtitles at all, that is a pretty fair trade-off to me.

    • TheMachineStops · 11 hours ago

      From experience, AI translation is still garbage, especially for languages like Chinese, Japanese, and Korean. But if it only subtitles in the original language, such as creating English subtitles for English audio, then it is probably fine.

      • catloaf@lemm.ee · 7 hours ago

        That’s probably more due to a lack of training data than anything else. Existing models are mostly made by American companies and trained on English-language material. Naturally, the further a language is from that training data, the worse the result.

        • TheMachineStops · 6 hours ago

          It is not the lack of training material that is the issue; the model doesn’t understand context and cultural references. Someone commented here that Crunchyroll’s AI subtitles translated “Asura Hall”, a proper name, as “asshole”.