Hi there, I’ve recently tried to use the Usenet and I am amazed how much stuff is on there and how fast it can be accessed. Yet… Readarr has been giving me a headache recently, and I think this is due to some peculiarity of the Usenet.

It recently started downloading sources for many files with wild naming schemes at the end of the filename, like

(2019).zip.vol31+32.par2 yEnc

just to complain that it didn’t find any files in the download. Now, I get that yEnc is some sort of cipher format, and since the files are usually under 10 MB, I figure these are probably single chapters or something. Searching the Usenet by hand, I’ll usually find many parts of the same audiobook with those numbers slapped onto them. Some don’t even follow consecutive numbering and contain vol3+79 or something.

So: How am I supposed to download those and how am I supposed to teach Readarr how to handle them?

  • Laser@feddit.de · 1 year ago

    yEnc isn’t a cipher, but rather an encoding for mapping binary to text, similar to base64 (but much more efficient). So this denotes yEnc encoding.
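For the curious, the core of yEnc is tiny: add 42 to every byte, then escape the few characters that would break transport over NNTP. A minimal sketch in Python (the real format also adds =ybegin/=yend headers, line wrapping, and CRC checksums, which are omitted here):

```python
# Minimal sketch of the yEnc core transform, not a full implementation.

CRITICAL = {0x00, 0x0A, 0x0D, 0x3D}  # NUL, LF, CR, '=' must be escaped

def yenc_encode(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        c = (b + 42) % 256
        if c in CRITICAL:
            out.append(0x3D)           # escape marker '='
            c = (c + 64) % 256
        out.append(c)
    return bytes(out)

def yenc_decode(data: bytes) -> bytes:
    out = bytearray()
    it = iter(data)
    for c in it:
        if c == 0x3D:                  # escaped byte follows
            c = (next(it) - 64) % 256
        out.append((c - 42) % 256)
    return bytes(out)

raw = bytes(range(256))
assert yenc_decode(yenc_encode(raw)) == raw
```

Because only four byte values need escaping, the overhead is typically 1–2%, versus base64’s fixed 33%.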

    The files you’re seeing are PAR2 files, which are used for repairing damaged downloads. They’re useless without the base file. The file in your example contains 32 recovery blocks. That means if your base file has 32 or fewer damaged blocks, this parity file can repair it.
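The recovery-block idea can be illustrated with plain XOR parity, though this is a big simplification: real PAR2 uses Reed–Solomon codes, so N recovery blocks can repair any N damaged blocks, while a single XOR parity block can only repair one:

```python
# Greatly simplified illustration of a recovery block.
# Real PAR2 uses Reed-Solomon coding; XOR is the one-block special case.

def xor_blocks(blocks):
    """XOR equal-length blocks together byte by byte."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data_blocks = [b"chap", b"ter1", b"-txt"]
parity = xor_blocks(data_blocks)       # the "recovery block"

# Pretend block 1 was lost: rebuild it from the survivors plus parity.
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert recovered == data_blocks[1]
```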

    Usually, you’d download all files belonging together in a single download and let your downloader do the rest. This is normally done by loading an NZB file that you either get from a Usenet search engine or an indexer.
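An NZB is just an XML file listing the message-IDs of every article (segment) that makes up each file; the downloader fetches all of them and reassembles the pieces. A rough sketch with made-up sample data:

```python
# Sketch of what's inside an NZB file. The subject, sizes, and
# message-ids below are invented for illustration.
import xml.etree.ElementTree as ET

NZB = """<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file subject="Some Audiobook (2019).zip.vol31+32.par2 yEnc (1/2)">
    <segments>
      <segment bytes="739067" number="1">part1of2.example@news</segment>
      <segment bytes="739067" number="2">part2of2.example@news</segment>
    </segments>
  </file>
</nzb>"""

ns = {"nzb": "http://www.newzbin.com/DTD/2003/nzb"}
root = ET.fromstring(NZB)
for f in root.findall("nzb:file", ns):
    segs = f.findall(".//nzb:segment", ns)
    print(f.get("subject"), "->", len(segs), "segments")
```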

    • NorgurOP · 1 year ago

      Thanks :) Yet… why does Readarr keep fetching those and not the base file, I wonder?

      • Fisch@lemmy.ml · 1 year ago

        You can make a custom format that matches the stuff you don’t want and add it to your quality profile with a negative score. Readarr won’t pick them up anymore then. If you don’t know how to do that, I could explain it.

        • NorgurOP · 1 year ago

          Hey, while I indeed didn’t even know one could make a custom format with a negative score, I solved the issue by manually adding the corresponding releases those par2 files belong to.

  • Backfire@lemmy.world · 1 year ago

    Sounds like there’s a step missing somewhere.

    You should probably use a downloader such as NZBGet or SABnzbd to download the whole collection of .rar archives (if it’s archived at all) and the .par2 files associated with it. You’d download an NZB file for the content you want: a simple file that tells the downloader how to fetch everything, repair the files when necessary, and finally extract a usable file for you.

    That output is something Readarr should be able to use, like an epub.

    • NorgurOP · 1 year ago

      Those do come from a private indexer (and given the explanation by @Laser@feddit.de, they are supposed to be there).

      • Petter1@lemm.ee · 1 year ago

        But only finding the yEnc par2 files means there was no better-named file around, so Readarr tried its best with a loosely fitting filename. Your indexer, or maybe your Usenet provider, may simply not have the exact file you want.