With my 14.5TB hard drive nearly full and wanting to hold off a bit longer before shelling out for a 60TB RAID array, I’ve been trying to replace as many x264 releases in my collection as I can with x265 releases of equivalent quality. While popular movies are usually available in x265, less popular movies and TV shows tend to have fewer x265 options, with low-quality MeGusta encodes often being the only x265 choice.

While x265 playback is more demanding than x264 playback, its compatibility is much closer to x264’s than to the newer x266 codec’s. Is there a reason many release groups still opt for x264 over x265?

  • icedterminal@lemmy.world · 4 months ago

    It’s not odd at all; it’s well known that this is how it works. Ask any video editor in the professional field, or search the Internet yourself. Better yet, do a test run with ffmpeg, the open-source software that handles encoding and decoding and is free for anyone to download.
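    If you want to try that test run, here’s a rough sketch of what I mean (assuming an ffmpeg build with both libx265 and NVENC support and an Nvidia card; the file names and quality values are placeholders, not recommendations):

    ```python
    import subprocess

    SOURCE = "sample.mkv"  # placeholder: any short test clip

    # Software (CPU) encode with libx265.
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", "libx265", "-preset", "medium", "-crf", "22",
        "-c:a", "copy", "cpu_x265.mkv",
    ], check=True)

    # Hardware (GPU) encode with NVENC's HEVC encoder.
    # Preset names ("p1".."p7") depend on your ffmpeg and driver versions.
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", "hevc_nvenc", "-preset", "p5", "-cq", "22",
        "-c:a", "copy", "gpu_nvenc.mkv",
    ], check=True)
    ```

    Then compare the file sizes and eyeball the two outputs side by side.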

    Hardware-accelerated encoding is faster because it takes shortcuts. It’s handled by dedicated hardware found in GPUs. There are parameters, defined at the firmware level of the GPU and out of your control, that allow hardware-accelerated encoding to be faster. This comes at the cost of quality and file size (larger files) in exchange for faster processing and lower power consumption. If quality is your concern, you never use a GPU. No matter which one you use (AMD AMF, Intel QSV, or Nvidia NVENC/NVDEC/CUDA), you’re going to end up with a video that appears more blocky or grainy at the same bitrate. These defects are called “artifacts” and they make videos look bad.
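    If you’d rather have numbers than eyeball it, ffmpeg’s ssim filter can score each encode against the original (closer to 1.0 is better). A sketch reusing the files from the example above; for a strict same-bitrate comparison you’d encode both with a fixed -b:v instead of CRF/CQ:

    ```python
    import subprocess

    SOURCE = "sample.mkv"  # the same placeholder clip as above

    def ssim_against_source(encoded: str) -> None:
        """Have ffmpeg print an SSIM score for an encode vs. the original."""
        subprocess.run([
            "ffmpeg", "-hide_banner", "-i", encoded, "-i", SOURCE,
            "-lavfi", "ssim", "-f", "null", "-",
        ], check=True)  # the score is written to the log at the end of the run

    for name in ("cpu_x265.mkv", "gpu_nvenc.mkv"):
        ssim_against_source(name)
    ```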

    Software encoding uses the CPU entirely, and you have granular control over the whole process. There are preset defaults that apply if you don’t define parameters yourself, but every single one of them can be overridden. Because it’s inherently limited by the power of your CPU, it’s slower and consumes more power.
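    For example, with libx265 you can swap the preset and pass native x265 options straight through ffmpeg. The values below are purely illustrative, not a recommended tuning:

    ```python
    import subprocess

    SOURCE = "sample.mkv"  # placeholder clip

    # Software encode where the defaults are deliberately overridden:
    # a slower preset plus a couple of hand-picked native x265 options.
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", "libx265", "-preset", "slow", "-crf", "20",
        "-x265-params", "bframes=8:aq-mode=3",
        "-c:a", "copy", "tuned_x265.mkv",
    ], check=True)
    ```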

    I can go a lot more in depth, but I’m choosing to stop here because this comment could get absurdly long.

    • cuppaconcrete@aussie.zone · 4 months ago

      My understanding is that all of the codecs we are discussing are deterministic. If you have evidence to the contrary I’d love to see it.

      • RvTV95XBeo@sh.itjust.works · 4 months ago

        GPU encoders like NVENC run their own algorithms that are optimized for the graphics card’s dedicated encoding hardware. The output is compatible with HEVC, the same format x265 produces, but the encoder is not identical and there are far fewer options to tweak to optimize your video.

        The encode finishes orders of magnitude faster, but (in my experience) the output is objectively worse, introducing lots of artifacts.
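        You can see how few knobs the hardware encoder exposes by asking ffmpeg to list each encoder’s options (assuming your build includes hevc_nvenc); libx265 additionally accepts a long list of native options via -x265-params, which NVENC has no real equivalent for:

        ```python
        import subprocess

        # Print the options each HEVC encoder exposes to ffmpeg.
        for encoder in ("libx265", "hevc_nvenc"):
            subprocess.run(
                ["ffmpeg", "-hide_banner", "-h", f"encoder={encoder}"],
                check=True,
            )
        ```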

      • Randomgal@lemmy.ca · 4 months ago

        This. It sounds really odd to me that the GPU would somehow make what are pretty much just math calculations “different” from what the CPU would do.

        • entropicdrift@lemmy.sdf.org · 4 months ago

          GPU encoders basically all run at the equivalent of “fast” or “veryfast” CPU encoder settings.

          Most high-quality, low-size encodes are run at the “slow”, “veryslow”, or “placebo” CPU encoder presets, with a lot of the parameters that aren’t tunable on GPU encoders set to specific values depending on the content type.
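          In ffmpeg terms, that’s roughly the gap between what a GPU encoder locks you into and something like the following (a sketch; the preset, CRF, and tune values are just examples of the knobs involved):

          ```python
          import subprocess

          SOURCE = "sample.mkv"  # placeholder clip

          # A typical "quality over speed" software encode: a slow preset plus
          # a content-specific tune (x265 ships tunes such as "grain").
          subprocess.run([
              "ffmpeg", "-y", "-i", SOURCE,
              "-c:v", "libx265", "-preset", "veryslow", "-crf", "18",
              "-tune", "grain",
              "-c:a", "copy", "slow_tuned.mkv",
          ], check=True)
          ```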

        • conciselyverbose@sh.itjust.works · 4 months ago (edited)

          So GPU encoding isn’t even using the GPU cores. It’s using separate fixed-function hardware that supports far fewer operations than a CPU does. They’re not running the same code.

          But even if you did compare GPU cores to CPU cores, they’re not the same. GPUs have a different set of operations from a CPU, because they’re designed for different things. GPUs bundle a bunch of “cores” under one control unit. They all execute the exact same operation at the same time, and have significantly less capability beyond that. Code that diverges a lot, especially if there’s no easy way to restructure the data so that all 32 cores under a control unit* take the same branch, can pretty easily fail to benefit from that capability.

          As architectures get more complex, GPUs are adding features that don’t have great analogues in a CPU yet, and CPUs have more options for applying the same operation to (smaller) sets of data points. But at the end of the day, the answer to your question is that they aren’t doing the same math, and because of the limitations of the kind of math GPUs are best at, no one is particularly incentivized to build a software encoder that leverages the GPU cores.

          *last I checked, that’s what a warp on nvidia cards was. It could change if there’s a reason to.