I recently got it into my head to compare the popular video codecs, in an effort to better understand how av1 performs compared to x264 and x265. I also had ideas of using an Intel video card to compress a home video security setup, and wanted to know what level of compression I would need to get good results.

The Setup
I used the 4k, 6.3GB Blender project Tears of Steel as a source. I downscaled the video to 1080p with each of the three codecs, then compared the results at various crf levels.

To compare the results I used imgsli, FFMetrics, and my own picture viewer to look for differences.
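
If you want to reproduce the metrics side of this comparison, ffmpeg's libvmaf filter can compute VMAF scores directly. A minimal sketch (it assumes an ffmpeg build with libvmaf enabled, and scales the 4k source back down to match the 1080p encode):

    ffmpeg -i dest.1080p.av1.mkv -i source.2160p.mkv \
        -lavfi "[1:v]scale=w=1920:-2[ref];[0:v][ref]libvmaf" \
        -f null -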

The Results

crf | av1 (KB)                             | x265 (KB)                        | x264 (KB)
18  | 419,261                              | 632,079                          | 685,217 – x264 visually lossless
21  | 352,337                              | 390,358 – x265 visually lossless | 411,439
24  | 301,517 – av1 VMAF visually lossless | 250,426                          | 263,524 – x264 good enough
27  | 245,685                              | 165,079 – x265 good enough       | 176,919
30  | 205,008                              | 110,062                          | 122,458
33  | 168,192                              | 73,528                           | 86,899
36  | 139,379 – av1 my visually lossless   | 48,516                           | 63,214
39  | 116,096                              | 31,670                           | 47,161
42  | 97,365 – av1 my good enough          | 20,636                           | 35,801
45  | 81,805                               | 13,598                           | 27,484
48  | 69,044                               | 9,726                            | 20,823
51  | 58,316                               | 8,586 – worst possible           | 16,120 – worst possible
54  | 48,681                               | -                                | -
57  | 39,113                               | -                                | -
60  | 29,062                               | -                                | -
63  | 16,533 – worst possible              | -                                | -

Here is av1 crf 36 vs crf 24.

I go into more detail on the hows and whys of my choices, and how I came to these conclusions, in my journal-style blog post. But in essence: if you want to lose practically no visual information, crf 24 through 36 for av1, crf 21 for x265, and crf 18 for x264 will do the job.

If you are low on space, using my ‘good enough’ choices will get you practically the same visual results while using less space, depending on the codec.
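
For reference, the x265 and x264 encodes at the "visually lossless" settings would look something like the following (a sketch; -preset slow is a placeholder here, not necessarily what I used):

    ffmpeg -i source.2160p.mkv -map 0:v:0 -map -0:a -map -0:s -map_metadata -1 \
        -c:v libx265 -preset slow -vf scale=w=1920:-2 -crf 21 dest.1080p.x265.mkv

    ffmpeg -i source.2160p.mkv -map 0:v:0 -map -0:a -map -0:s -map_metadata -1 \
        -c:v libx264 -preset slow -vf scale=w=1920:-2 -crf 18 dest.1080p.x264.mkv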

  • DaGeek247@kbin.social (OP) · 1 year ago
    From my blog post, I’m using the following command to encode the video:

    ffmpeg -i source.2160p.mkv \
        -map 0:v:0 \
        -map -0:a -map -0:s -map_metadata -1 \
        -c:v libsvtav1 \
        -preset 3 \
        -vf scale=w=1920:-2 \
        -crf 23 \
        dest.1080p.av1.mkv
    
    
    • Atemu@lemmy.ml · 1 year ago

      That is not representative of what you’d get with an Intel card then. While they implement the same standard (AV1), they’re entirely different encoders with entirely different image quality characteristics.
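
      For instance, an Intel card would go through ffmpeg’s QSV encoder instead of libsvtav1. A rough, untested sketch (-global_quality is QSV’s closest analogue to crf, and the values are not comparable one-to-one):

          ffmpeg -i source.2160p.mkv -map 0:v:0 \
              -vf scale=w=1920:-2,format=nv12 \
              -c:v av1_qsv -global_quality 23 \
              dest.1080p.av1-qsv.mkv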

      • Victor@lemmy.world · 1 year ago

        How does that work? Aren’t two encoders of the same format supposed to produce the same output for the same input and configuration using some given algorithm? Otherwise I’d consider them different formats/codecs… 🤷‍♂️ Maybe that’s wrong of me?

        • LufyCZ@lemmy.world · 1 year ago

          The issue is, you can optimize a software encoder continually, use tricks for better quality, etc.

          A hardware encoder is just that - hardware. As soon as it’s burned to the silicon, you’re not making any (at least substantial) changes to it. You might also be limited by what you can actually do directly in hardware without using too much die space.

          Tldr.: no, you won’t get the same result

          • Victor@lemmy.world · 1 year ago

            > Tldr.: no, you won’t get the same result

            What I’m saying is, shouldn’t you?

            • rentar42@kbin.social · 1 year ago (edited)

              What you describe is true for many file formats, but for most lossy compression systems the “standard” basically only strictly explains how to decode the data, and any encoder that produces output that successfully decodes that way is fine.

              The standard also defines a collection of “tools” that encoders can use, and how exactly to use, combine, and tweak those tools is up to the encoder.

              And over time, new and better combinations of these tools are found for specific scenarios. That’s how different encoders of the same codec can produce very different output.

              As a simple example, almost all video codecs by default describe each frame relative to the previous one (i.e., they describe which parts moved and what new content appeared). There is of course also the option to send a completely new frame, which usually takes up more space. But when one scene cuts to another, sending a new frame can be much better. A “bad” encoder might not have “new scene” detection and still try to “explain the difference” relative to the previous scene, which can easily take up more space than just sending the entire new frame.
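
              You can see this kind of scene-change detection from the outside with ffmpeg’s select filter (a rough illustration of the concept, not what any particular encoder does internally):

                  # print info only for frames whose scene-change score exceeds 0.3
                  ffmpeg -i source.mkv -vf "select='gt(scene,0.3)',showinfo" -f null - 2>&1 | grep pts_time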

              • Victor@lemmy.world · 1 year ago

                > the “standard” basically only strictly explains how to decode the data and any encoder that produces output that successfully decodes that way is fine

                Ah, okay, this explains the whole aspect of it then, for me. :-) If this is how a certain format is described, then it makes sense that encoders can produce different data, which then will be decoded as different output as well, all while all parties are compliant with the specification. That makes much more sense. Thanks for taking the time to explain everything, including I-frames and P-frames! ;-)

      • jbk · 1 year ago

        Doesn’t libsvtav1 produce the same output on all platforms, since it’s CPU-based? At least that’s the exact encoder OP specified.

        • Atemu@lemmy.ml · 1 year ago

          Yes, yes it will. (Well, at least it should. If it doesn’t, that’s a bug.)

          The problem here is that the premise of this post is evaluating buying a GPU with an AV1 encoder in order to transcode a media library. Any GPU-based AV1 encoder will produce very different results from svt-av1; likely much worse, at that.
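
          One way to see the difference would be to encode the same clip with both encoders at a matched bitrate, then score each against the source with VMAF as in the main post (a sketch; filenames and bitrate are arbitrary, and av1_qsv assumes an Intel card):

              ffmpeg -i clip.mkv -c:v libsvtav1 -b:v 4M clip.svt.mkv
              ffmpeg -i clip.mkv -vf format=nv12 -c:v av1_qsv -b:v 4M clip.qsv.mkv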