mpv Resampling

A sequel in the making

Introduction

This page is meant to be treated as a follow-up to this scaler comparison done with ImageMagick. I'm only going to talk about niche topics here, so just refer to that other page if you only want to read about scalers.

The Effect of the Downsampler in Upsampling Evaluation

Downsampling in linear light is widely accepted as the "usually correct" approach, as it better preserves bright highlights, which can make a huge difference depending on what you're downsampling. That doesn't mean linear light downsampling will always look better: if you're downsampling black text on a white background, for example, you're probably better off doing it in gamma light to prevent the background from dilating into the text, since that would hurt legibility.

This usually isn't a problem when using the built-in filters because mpv doesn't upsample in either linear or gamma light by default; it does it in sigmoid light instead to reduce ringing artifacts. However, that's usually not true for shaders, especially not when they're trained with artificial data (the LR image is created by downsampling the HR reference). If the test procedure includes downsampling in linear light, any model trained with images downsampled in linear light is going to have an unfair advantage.

In short, downsampling in linear light dilates bright structures while eroding dark ones (relative to downsampling in gamma light). This is generally accepted as correct but it might not always be ideal depending on the content.
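
For reference, this is roughly what the two approaches look like with ImageMagick; the -colorspace round trip below is the usual way to make it resize in linear light (filenames are just illustrative):

magick convert input.png -filter box -resize 50% gamma.png

magick convert input.png -colorspace RGB -filter box -resize 50% -colorspace sRGB linear.png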

The problem arises when we have to decide how to downsample training data before feeding it to a model. If we feed it images that have been downsampled in linear light, the model will produce images with dilated dark structures and eroded bright ones: the exact opposite of what linear light downsampling does. The model is simply learning how to "undo" the downsampling, so this behaviour makes perfect sense.

Interestingly though, most models seem to show better reconstruction quality when you feed them LR images that have been downsampled in gamma light, and pretty much all mpv shaders have been trained with data downsampled in gamma light (including NNEDI3, FSRCNNX, RAVU, etc).

For these reasons, the upsampling tests will be done with an image downsampled in gamma light.

Upsampling Shaders

I still do not have a good way of automating mpv tests and therefore I'll have to stick to a single test image, which is going to be aoko.png:

Aoko

The small number of samples under test makes this very unscientific, but it is what it is.

Upsampling Methodology

The following shaders were "benchmarked":

  1. ArtCNN
  2. RAVU
  3. FSRCNNX

I've also added a few built-in filters to the mix just to have some reference points.

The test image is downsampled with:

magick convert aoko.png -filter box -resize 50% downsampled.png

It is then converted to grayscale with:

magick convert downsampled.png -colorspace gray downsampled.png

The original image is also converted to grayscale using the same command to create the reference:

magick convert aoko.png -colorspace gray reference.png

We need to convert them to grayscale because some of these shaders do not have RGB support.

The images were then upsampled back with:

mpv --no-config --vo=gpu-next --no-hidpi-window-scale --window-scale=2.0 --pause=yes --screenshot-format=png --sigmoid-upscaling --deband=no --dither-depth=no --screenshot-high-bit-depth=no --glsl-shader="path/to/meme/shader" downsampled.png
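
If you want to reproduce the MAE/PSNR/SSIM scores yourself after taking the screenshot in mpv (bound to the s key by default), ImageMagick's compare is one option (MS-SSIM isn't available there, so you'd need another tool for that one):

magick compare -metric MAE upsampled.png reference.png null:

magick compare -metric PSNR upsampled.png reference.png null:

magick compare -metric SSIM upsampled.png reference.png null: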

Upsampling Results

Shader/Filter MAE PSNR SSIM MS-SSIM MAE (N) PSNR (N) SSIM (N) MS-SSIM (N) Mean
ArtCNN_C16F64 4.64E-03 36.8640 0.9916 0.9995 1.0000 1.0000 1.0000 1.0000 1.0000
ArtCNN_C4F32 5.48E-03 35.6177 0.9893 0.9994 0.9455 0.8879 0.9662 0.9901 0.9474
ArtCNN_C4F16 6.39E-03 34.4153 0.9860 0.9992 0.8866 0.7798 0.9173 0.9741 0.8895
ArtCNN_C4F8 8.24E-03 32.7009 0.9802 0.9989 0.7659 0.6256 0.8306 0.9446 0.7917
FSRCNNX_x2_16-0-4-1 9.75E-03 31.7201 0.9758 0.9976 0.6679 0.5374 0.7654 0.8232 0.6985
FSRCNNX_x2_8-0-4-1 1.08E-02 31.0438 0.9705 0.9975 0.6020 0.4766 0.6863 0.8140 0.6447
ravu-lite-ar-r4 1.09E-02 30.4805 0.9681 0.9974 0.5941 0.4259 0.6505 0.8062 0.6192
ravu-lite-ar-r3 1.12E-02 30.2648 0.9666 0.9974 0.5741 0.4065 0.6277 0.8046 0.6032
ravu-zoom-ar-r3 1.11E-02 30.0447 0.9684 0.9972 0.5796 0.3867 0.6552 0.7853 0.6017
ravu-lite-ar-r2 1.14E-02 29.7235 0.9660 0.9975 0.5602 0.3579 0.6191 0.8131 0.5876
ravu-zoom-ar-r2 1.15E-02 29.6350 0.9661 0.9970 0.5544 0.3499 0.6208 0.7612 0.5716
lanczos 1.53E-02 28.3051 0.9497 0.9962 0.3093 0.2303 0.3759 0.6897 0.4013
polar_lanczossharp 1.61E-02 27.8741 0.9448 0.9949 0.2535 0.1915 0.3033 0.5721 0.3301
bilinear 2.00E-02 25.7442 0.9245 0.9889 0.0000 0.0000 0.0000 0.0000 0.0000
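
In case you're wondering about the (N) columns: checking the numbers, they're simply min-max normalized against the best and worst scores in each column, with MAE inverted since lower is better, and the Mean is the average of the four normalized scores:

For PSNR, SSIM and MS-SSIM (higher is better): normalized = (value - min) / (max - min)

For MAE (lower is better): normalized = (max - value) / (max - min)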

Upsampling Commentary

As we can see in the table above, ArtCNN seems to be the best option when it comes to luma doubling. I personally think the C4F16 version should be good enough for most video content, but if your hardware can handle the bigger models and you don't mind the heat/noise, then why not.

For lower scaling factors between 1x and 2x, you can expect this list to remain mostly the same as long as you use a sharp downsampling filter for the luma doublers. The choice of filter actually has a huge impact on how sharp the final image will be; sticking to the hermite default makes all doublers much softer than ravu-zoom, for example. Also keep in mind that the difference between the shaders becomes smaller as you decrease the scaling factor.

Chroma Shaders

The following shaders were "benchmarked":

  1. ArtCNN
  2. Chroma From Luma Prediction
  3. JointBilateral
  4. KrigBilateral

Chroma Methodology

A "near lossless" 420 version of aoko.png was created (both quantizer bounds set to 0 for maximum quality, with -y 420 forcing 4:2:0 chroma subsampling):

avifenc aoko.png --min 0 --max 0 -y 420 420.avif

The mpv options remain the same, with the exception that we don't need --window-scale=2.0 anymore, and since we're comparing chroma here, the images weren't converted to grayscale.
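
In other words, the command presumably ends up looking something like this (shader path illustrative):

mpv --no-config --vo=gpu-next --no-hidpi-window-scale --pause=yes --screenshot-format=png --sigmoid-upscaling --deband=no --dither-depth=no --screenshot-high-bit-depth=no --glsl-shader="path/to/chroma/shader" 420.avif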

Chroma Results

Shader/Filter MAE PSNR SSIM MS-SSIM MAE (N) PSNR (N) SSIM (N) MS-SSIM (N) Mean
ArtCNN_C4F32_Chroma 2.98E-03 42.8633 0.9917 0.9980 1.0000 1.0000 1.0000 1.0000 1.0000
ArtCNN_C4F16_Chroma 3.59E-03 40.9809 0.9895 0.9977 0.8393 0.7828 0.8648 0.7843 0.8178
cfl 3.81E-03 39.8314 0.9893 0.9977 0.7822 0.6502 0.8534 0.8190 0.7762
cfl_lite 3.90E-03 39.5782 0.9889 0.9977 0.7575 0.6210 0.8308 0.7970 0.7516
krigbilateral 4.22E-03 38.6022 0.9873 0.9976 0.6746 0.5083 0.7326 0.7517 0.6668
fastbilateral 4.37E-03 37.1016 0.9858 0.9974 0.6350 0.3352 0.6391 0.6289 0.5595
jointbilateral 4.60E-03 37.3093 0.9852 0.9973 0.5750 0.3592 0.6040 0.5840 0.5305
lanczos 5.70E-03 36.3881 0.9800 0.9972 0.2878 0.2529 0.2864 0.4703 0.3243
polar_lanczossharp 5.89E-03 35.9492 0.9795 0.9972 0.2385 0.2022 0.2530 0.4910 0.2962
bilinear 6.80E-03 34.1966 0.9753 0.9965 0.0000 0.0000 0.0000 0.0000 0.0000

Chroma Commentary

If we look at the numbers, ArtCNN is easily the best option. That said, I wouldn't recommend spending your resources on this unless you're playing native resolution video and your GPU has nothing better to do.

The other shaders all suffer from chromaloc issues and they can be easily fooled when there's no correlation between luma and chroma. I'd personally skip all of them.

Antiring

Antiringing is a topic I hadn't covered in the previous iteration of this page, but now that we have more than a single option, we can also compare them.

In short, antiringing filters attempt to remove the overshoots generated by sharp resampling filters when they meet a sharp intensity delta. What is commonly referred to as ringing is simply a consequence of the filter's impulse response.

The following image shows this very well:

ar_comparison

The negative weights in the filter are there to let it respond quickly to high-frequency transitions, but they make the filter overshoot a little bit before reaching its final destination. The "intensity" of the ringing is directly related to the magnitude of the secondary lobes. The second lobe, which is almost always negative, is responsible for the overshooting you can see in this example, but filters with more lobes ring once per lobe, and the ringing can be "positive" as well (within the range set by the original pixels) with positive lobes. The "length" of the rings is directly related to the length of the lobes, which is why filters like polar lanczos have "longer" rings (the zero crossings don't fall exactly at the integers, but rather slightly after them).
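
To make the lobe talk concrete, the orthogonal Lanczos kernel with a lobes is:

lanczos(x) = sinc(x) * sinc(x/a) for |x| < a, and 0 otherwise, where sinc(x) = sin(pi*x) / (pi*x)

For a = 3, the kernel is negative on (1, 2) and positive again on (2, 3), which is exactly the alternating lobe polarity behind the ring pattern described above.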

Aoko.png isn't a good test image to see ringing problems so I'm switching to violet.png here. If you've never seen it before, this is what violet.png looks like:

Violet

Antiring Methodology

The methodology here is almost identical to the one used for upsampling, with the only difference being that we have to include --scale-antiring to use mpv's (or libplacebo's) native AR solution. We also don't have to convert to grayscale this time, since the shaders support RGB images just fine.
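
For instance, the runs with libplacebo's AR at strength 0.6 would look something like this, assuming ewa_lanczossharp is the built-in filter behind the polar_lanczossharp entries:

mpv --no-config --vo=gpu-next --no-hidpi-window-scale --window-scale=2.0 --pause=yes --screenshot-format=png --sigmoid-upscaling --deband=no --dither-depth=no --screenshot-high-bit-depth=no --scale=ewa_lanczossharp --scale-antiring=0.6 violet.png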

AR is only really necessary when you're using sharp filters; it makes no sense alongside blurry filters because they don't ring hard enough for it to be noticeable. There are a few sharp memes that are worth trying with AR if you feel adventurous, but generally speaking I think polar lanczossharp is pretty well balanced (a bit blurry, even).

I'm including my AR shader in this comparison because I think it's better at keeping everything but the overshoots intact. That does create some weird artifacts with sharp transitions sometimes, especially if you use it at ludicrous scaling factors, but the output is generally good enough on real content.

Antiring Results

Filter MAE PSNR SSIM MS-SSIM MAE (N) PSNR (N) SSIM (N) MS-SSIM (N) Mean
polar_lanczossharp_ar_060 4.62E-03 38.3812 0.9826 0.9984 0.8805 0.8303 0.9474 0.5981 0.8141
polar_lanczossharp_pc_080 4.65E-03 38.3944 0.9825 0.9984 0.7577 1.0000 0.8078 0.6900 0.8139
polar_lanczossharp_ar_055 4.63E-03 38.3829 0.9826 0.9984 0.8318 0.8527 0.9138 0.6534 0.8129
polar_lanczossharp_pc_085 4.64E-03 38.3931 0.9825 0.9984 0.7828 0.9829 0.8244 0.6539 0.8110
polar_lanczossharp_ar_065 4.61E-03 38.3782 0.9827 0.9983 0.9228 0.7924 0.9731 0.5380 0.8066
polar_lanczossharp_ar_050 4.64E-03 38.3834 0.9826 0.9984 0.7784 0.8586 0.8732 0.7045 0.8037
polar_lanczossharp_pc_075 4.66E-03 38.3939 0.9825 0.9984 0.7191 0.9934 0.7781 0.7222 0.8032
polar_lanczossharp_pc_090 4.64E-03 38.3904 0.9825 0.9984 0.7989 0.9483 0.8331 0.6221 0.8006
polar_lanczossharp_ar_070 4.60E-03 38.3741 0.9827 0.9983 0.9563 0.7400 0.9906 0.4747 0.7904
polar_lanczossharp_pc_070 4.67E-03 38.3934 0.9824 0.9984 0.6775 0.9875 0.7445 0.7500 0.7899
polar_lanczossharp_ar_045 4.66E-03 38.3826 0.9825 0.9984 0.7183 0.8491 0.8231 0.7515 0.7855
polar_lanczossharp_pc_095 4.64E-03 38.3850 0.9825 0.9984 0.8039 0.8794 0.8319 0.5998 0.7787
polar_lanczossharp_pc_065 4.68E-03 38.3921 0.9824 0.9984 0.6305 0.9702 0.7055 0.7747 0.7702
polar_lanczossharp_ar_075 4.59E-03 38.3685 0.9827 0.9983 0.9827 0.6682 1.0000 0.4073 0.7646
polar_lanczossharp_pc_100 4.64E-03 38.3808 0.9825 0.9984 0.8014 0.8251 0.8272 0.5919 0.7614
polar_lanczossharp_ar_040 4.68E-03 38.3803 0.9824 0.9984 0.6523 0.8197 0.7634 0.7948 0.7575
polar_lanczossharp_pc_060 4.69E-03 38.3898 0.9823 0.9984 0.5876 0.9415 0.6656 0.7990 0.7484
polar_lanczossharp_ar_080 4.59E-03 38.3615 0.9827 0.9983 0.9960 0.5779 0.9981 0.3335 0.7264
polar_lanczossharp_pc_055 4.70E-03 38.3878 0.9823 0.9984 0.5422 0.9158 0.6240 0.8203 0.7256
polar_lanczossharp_ar_035 4.69E-03 38.3765 0.9824 0.9984 0.5820 0.7701 0.6947 0.8333 0.7200
polar_lanczossharp_pc_050 4.72E-03 38.3835 0.9822 0.9984 0.4902 0.8598 0.5697 0.8427 0.6906
polar_lanczossharp_ar_085 4.59E-03 38.3533 0.9827 0.9983 1.0000 0.4731 0.9879 0.2573 0.6796
polar_lanczossharp_ar_030 4.71E-03 38.3716 0.9823 0.9984 0.5073 0.7070 0.6182 0.8690 0.6753
polar_lanczossharp_pc_045 4.73E-03 38.3806 0.9822 0.9984 0.4285 0.8230 0.5117 0.8494 0.6532
polar_lanczossharp_ar_025 4.73E-03 38.3656 0.9822 0.9984 0.4296 0.6301 0.5353 0.9001 0.6238
polar_lanczossharp_ar_090 4.59E-03 38.3439 0.9827 0.9982 0.9922 0.3529 0.9688 0.1754 0.6223
polar_lanczossharp_pc_040 4.75E-03 38.3763 0.9821 0.9984 0.3813 0.7680 0.4600 0.8662 0.6189
polar_lanczossharp_pc_035 4.76E-03 38.3720 0.9821 0.9984 0.3320 0.7127 0.4066 0.8836 0.5837
polar_lanczossharp_ar_020 4.75E-03 38.3581 0.9821 0.9985 0.3488 0.5339 0.4432 0.9280 0.5635
polar_lanczossharp_ar_095 4.59E-03 38.3329 0.9826 0.9982 0.9704 0.2117 0.9398 0.0888 0.5527
polar_lanczossharp_pc_030 4.77E-03 38.3667 0.9820 0.9984 0.2725 0.6445 0.3413 0.8941 0.5381
polar_lanczossharp_ar_015 4.78E-03 38.3496 0.9820 0.9985 0.2660 0.4253 0.3446 0.9518 0.4969
polar_lanczossharp_pc_025 4.79E-03 38.3606 0.9819 0.9984 0.2203 0.5660 0.2776 0.9081 0.4930
polar_lanczossharp_ar_100 4.60E-03 38.3206 0.9826 0.9982 0.9369 0.0539 0.9039 0.0000 0.4737
polar_lanczossharp_pc_020 4.80E-03 38.3541 0.9819 0.9985 0.1672 0.4828 0.2127 0.9175 0.4451
polar_lanczossharp_ar_010 4.80E-03 38.3398 0.9819 0.9985 0.1803 0.2999 0.2376 0.9717 0.4224
polar_lanczossharp_pc_015 4.82E-03 38.3473 0.9818 0.9985 0.1135 0.3962 0.1430 0.9272 0.3950
polar_lanczossharp_pc_010 4.83E-03 38.3407 0.9817 0.9985 0.0691 0.3107 0.0826 0.9356 0.3495
polar_lanczossharp_ar_005 4.82E-03 38.3288 0.9818 0.9985 0.0916 0.1580 0.1233 0.9884 0.3403
polar_lanczossharp_pc_005 4.84E-03 38.3330 0.9817 0.9985 0.0240 0.2130 0.0178 0.9423 0.2993
polar_lanczossharp 4.84E-03 38.3164 0.9817 0.9985 0.0000 0.0000 0.0000 1.0000 0.2500

Antiring Commentary

Previously, this section talked about the different strengths and weaknesses of Pixel Clipper and libplacebo's AR, but that's not really necessary anymore. Libplacebo's AR has recently been updated and it looks great now. It's actually shocking how similar the two solutions look at 2x, considering how different the mechanisms are.

The only caveat is that libplacebo's AR is still a bit blurrier at similar strengths, so keep that in mind.

As you can see in the table above, the sweet spot for libplacebo's AR seems to be around 0.6, taking all the metrics into account. I could end the commentary here, but if you pay attention you can also see that 0.6 isn't at the top of any of the individual metrics; it simply scores above average on all of them. This is actually pretty interesting, as these metrics tell us different things:

If we choose to focus on MAE, the sweet spot seems to be around 0.85. MAE is very good at telling us what is numerically closer to the reference, without biasing the result towards any specific type of artifact regardless of how humans perceive it.

If you only want to know how bad the worst pixels are, you can focus on PSNR instead, and in this case Pixel Clipper seems to be much better than libplacebo's AR (presumably because the latter is smoother). The sweet spot seems to be around 0.8 for PC here, and around 0.5 for libplacebo's AR.

SSIM is good at telling us about sharpness and the overall structural profile of the picture; it doesn't care much about brightness or contrast variations, but it heavily punishes blurriness and sharp transitions that shouldn't exist. We can see that libplacebo's AR is better than PC here (again, because it's smoother), with the 0.75 strength at the top.

MS-SSIM is just SSIM computed at different scales to emulate a viewer looking at the picture from different distances. This metric tends to correlate better with the human perception of quality than the other three, and if you're not very close to the display, it's probably the better metric. For MS-SSIM, the picture with no AR at all is the top scorer. FIR filters need to overshoot a little bit to reach some intensity levels, and in this case (since polar lanczossharp isn't that sharp to begin with) the ringing doesn't seem to be a big enough problem to hurt the score. This isn't true for all filters, and it might not be true for all scaling factors either, but for polar lanczossharp at 2x you could perhaps make the argument that AR is unnecessary.

To conclude, my opinion is that you can probably skip AR unless you're using a very sharp filter. If you do want to use AR, the ideal strength will depend on the choice of filter, scaling factor and content type, but something around 0.6 should be pretty safe (if you want to use PC for the higher PSNR, you should probably use it at higher strengths like 0.8).

Downsampling Antiring

Pixel Clipper actually has a downsampling variant as well, which I believe is probably less broken than mpv's native --dscale-antiring solution for orthogonal filters. Libplacebo's new polar AR has been recently enabled for downsampling as well, so we can also test that here.

Downsampling Antiring Methodology

My current method of evaluating downsampling filters is to concede that, at exactly 0.5x, box in linear light is as good as it gets, since in this case we're just averaging 4 pixels together. We can't use box itself for other scaling factors though, so the aim is to find a filter that produces a similar result while also being usable at any arbitrary scaling factor.

The same violet.png test image was used to test downsampling AR, and it was downsampled with ImageMagick as follows:

magick convert violet.png -colorspace RGB -filter box -resize 50% -colorspace sRGB reference.png
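
In other words, every reference pixel is just the linear light average of the corresponding 2x2 block, with srgb_decode/srgb_encode standing in for the sRGB transfer function round trip performed by the -colorspace options:

reference_pixel = srgb_encode((srgb_decode(p1) + srgb_decode(p2) + srgb_decode(p3) + srgb_decode(p4)) / 4)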

The test images were then generated using the following commands:

mpv --no-config --vo=gpu-next --no-hidpi-window-scale --window-scale=0.5 --pause=yes --screenshot-format=png --linear-downsampling --correct-downsampling --deband=no --dither-depth=no --screenshot-high-bit-depth=no --dscale=filter --glsl-shader="Pixel Clipper_downsampling.glsl" violet.png

mpv --no-config --vo=gpu-next --no-hidpi-window-scale --window-scale=0.5 --pause=yes --screenshot-format=png --linear-downsampling --correct-downsampling --deband=no --dither-depth=no --screenshot-high-bit-depth=no --dscale=filter --scale-antiring=1.0 violet.png

Please note that you have to set --scale-antiring=1.0 instead of --dscale-antiring=1.0. That's probably a bug, but it is what it is.

Downsampling Antiring Results

Filter MAE PSNR SSIM MS-SSIM MAE (N) PSNR (N) SSIM (N) MS-SSIM (N) Mean
catrom_pc 1.69E-03 49.0252 0.9976 0.9997 1.0000 1.0000 1.0000 0.9904 0.9976
catrom 1.70E-03 48.7158 0.9975 0.9997 0.9889 0.9342 0.9861 1.0000 0.9773
lanczos_pc 1.78E-03 48.7595 0.9974 0.9997 0.8918 0.9435 0.9396 0.8517 0.9066
polar_lanczossharp_ar 1.85E-03 47.3664 0.9970 0.9997 0.8022 0.6470 0.7183 0.7255 0.7232
polar_lanczossharp_pc 2.05E-03 46.9530 0.9968 0.9997 0.5401 0.5590 0.6008 0.6447 0.5862
lanczos 2.09E-03 46.4377 0.9967 0.9996 0.4900 0.4493 0.5663 0.5351 0.5102
hermite 2.09E-03 45.8359 0.9966 0.9996 0.4854 0.3213 0.5352 0.3482 0.4225
hermite_pc 2.17E-03 45.7873 0.9965 0.9996 0.3857 0.3109 0.4918 0.2945 0.3707
polar_lanczossharp 2.46E-03 44.8483 0.9957 0.9996 0.0147 0.1111 0.0607 0.2214 0.1020
mitchell 2.41E-03 44.3562 0.9956 0.9995 0.0863 0.0063 0.0362 0.0535 0.0456
mitchell_pc 2.48E-03 44.3264 0.9956 0.9995 0.0000 0.0000 0.0000 0.0000 0.0000

Downsampling Antiring Commentary

The first thing we can see is that AR seems to be a net-positive in general when using ringy filters. Only Mitchell and Hermite got worse with AR.

It's also good to see that libplacebo's AR seems to work even better when downsampling, as it can remove ringing without introducing any significant blur. This reinforces that there's little reason to use Pixel Clipper over libplacebo's native AR if you're using polar filters.

Personally, I'd say you probably need some kind of AR for anything sharper than catrom, but catrom itself is still perfectly usable without AR.

Performance Benchmarks

Benchmarking mpv performance is a bit tricky due to a few things:

  1. Due to the relatively light load, it's difficult to predict which power state the GPU will stay in while we benchmark. To artificially increase the load, in an attempt to level the playing field, we can cheese mpv into outputting frames as quickly as possible, but that's not necessarily representative of a real-world scenario.
  2. Hardware, OS, drivers, display servers, etc. all affect mpv performance in complex ways that are usually unpredictable, so even if my benchmarks were perfect, they'd still only be relevant for those with similar systems.

In any case, the following numbers were produced with these mpv settings, playing a 720p anime episode (~24 minutes long) on a 5600X + 6600XT + Windows 11 setup:

Measure-Command { mpv --no-config --vo=gpu-next --gpu-api=vulkan --audio=no --untimed=yes --video-sync=display-desync --vulkan-swap-mode=immediate --window-scale=2.0 --fullscreen=yes }
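
For reference, Time is the elapsed wall-clock time in seconds as reported by Measure-Command, and FPS works out to the episode's total frame count divided by it (every row in the table multiplies out to roughly the same ~85.7k frames):

FPS = total_frames / elapsed_seconds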

Preset/Shader FPS Time Relative
fast 2057 41.71 1.0000
default 1615 53.13 0.7851
fastbilateral 1605 53.46 0.7802
jointbilateral 1544 55.57 0.7506
cfl_lite 1509 56.85 0.7337
ravu-lite-r2 1453 59.04 0.7065
cfl 1435 59.80 0.6975
krigbilateral 1417 60.55 0.6888
ravu-lite-ar-r2 1411 60.81 0.6859
ravu-lite-r3 1384 61.98 0.6730
ravu-lite-ar-r3 1369 62.68 0.6654
ravu-zoom-r2 1364 62.89 0.6632
high-quality 1351 63.50 0.6568
ravu-lite-r4 1292 66.40 0.6281
ravu-lite-ar-r4 1287 66.66 0.6257
ravu-zoom-ar-r2 1278 67.16 0.6210
FSR 1255 68.35 0.6102
high-quality+AR 1178 72.86 0.5724
pixelclipper 1127 76.13 0.5479
ravu-zoom-r3 1088 78.83 0.5291
nnedi3-nns32-win8x4 1008 85.13 0.4900
ravu-zoom-ar-r3 1006 85.33 0.4888
FSRCNNX_x2_8-0-4-1 912 94.08 0.4433
ArtCNN_C4F8 903 95.03 0.4389
nnedi3-nns64-win8x4 640 134.07 0.3111
ArtCNN_C4F16 402 213.66 0.1952
FSRCNNX_x2_16-0-4-1 359 238.75 0.1747
nnedi3-nns128-win8x4 334 256.81 0.1624
nnedi3-nns256-win8x4 188 455.49 0.0916
ArtCNN_C4F32 133 643.60 0.0648

Please keep in mind that the choice of built-in filter doesn't matter that much, as the weights get stored in a LUT, so filters like lanczos and spline36 have virtually identical performance. In short: filters with larger radii will obviously be slower, polar filters are slower than their orthogonal counterparts, and AR also makes things slower.

For the shaders, the numbers are more or less what I expected, but it's nice to see that in the grand scheme of things these shaders don't really slow playback down that much. Good old polar lanczossharp with AR is already as slow as most shaders.

Still though, the conclusion here seems to be that if you're running mpv on a semi-decent computer, these shaders will never be a problem.

Outro

I want to make it clear that you shouldn't take these results as gospel. Mathematical image quality metrics don't always correlate perfectly with how humans perceive image quality, and your personal preference is entirely subjective. Take this page for what it is: research that produces numbers. Don't take those numbers at face value before understanding what they actually mean.