• wavebeam@lemmy.world · 2 days ago

      Count my opinion as however invalid you feel like after knowing that I’m one of those CRT screen lovers for retro video games, but I honestly think the AI smoothing looks WAY WORSE than just the low-res original. I don’t know why people like it more? But I guess people also like that TV motion smoothing junk that’s enabled by default on new TVs too.

      • Dozzi92@lemmy.world · 2 days ago

        I think it’s good on small (phone) screens. The second I zoom in, yeah, the lines are sharp, but shit looks weird. Zoomed out, where the picture is like 3"x3" (7.6csq or some shit, I dunno, I mental-math’d it), it seems to pop. But I also hate designing anything for phone screens, because it inevitably makes the viewing on a larger screen with a TV-like aspect ratio terrible.

      • ricecake@sh.itjust.works · 2 days ago

        It depends on which type of AI upscaling is being used.
        Some are basically a neural net that understands how pixelation interacts with light, shadow, and color gradients, and those can work really well. They leave the original pixels intact, figure out the best guess for the gaps using traditional methods, and then correct the guesses using feedback from the neural net.
        Others are way closer to “generate me an image that looks exactly the same as this one but has three times the resolution”. That approach uses a lot more information about how people look (in the photos it was trained on) than just how light and structure interact.

        The former is closer to how your brain works. Shadow and makeup can be separated because you (at the squishy level, not consciously) know shadows don’t do that, and the light reflection hints at depth, and so on.
        The latter is more concerned with fixing “errors”, which might involve changing the original image data if it brings the total error down, or it’ll just make up things that aren’t there because they’re plausible.

        Inferring detail tends to look nicer, because it’s using information that’s there to fill the gaps. Generating detail is just smearing in shit that fits and tweaking it until it passes a threshold of acceptability.
        The first is more likely to be built into a phone camera to offset a smaller lens. The second is showing up a lot more to “make your pictures look better” by tweaking them to look like photos people like.
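        The first category above (“leave the original pixels intact, figure out the best guess for the gaps using traditional methods”) is roughly classic bilinear interpolation. A minimal NumPy sketch of that first pass (the function name and the tiny 2x2 example are illustrative, not from any particular camera pipeline; a real system would then hand this guess to a neural net for correction):

        ```python
        import numpy as np

        def bilinear_upscale(img, factor=2):
            """Upscale a grayscale image by `factor`, keeping every original
            pixel exactly where it was and filling the new in-between pixels
            by blending the nearest known neighbours."""
            h, w = img.shape
            ys = np.arange(h * factor) / factor   # fractional source rows
            xs = np.arange(w * factor) / factor   # fractional source cols
            y0 = np.floor(ys).astype(int)
            x0 = np.floor(xs).astype(int)
            y1 = np.minimum(y0 + 1, h - 1)        # clamp at the image edge
            x1 = np.minimum(x0 + 1, w - 1)
            wy = (ys - y0)[:, None]               # vertical blend weight
            wx = (xs - x0)[None, :]               # horizontal blend weight
            return (img[np.ix_(y0, x0)] * (1 - wy) * (1 - wx)
                  + img[np.ix_(y1, x0)] * wy * (1 - wx)
                  + img[np.ix_(y0, x1)] * (1 - wy) * wx
                  + img[np.ix_(y1, x1)] * wy * wx)

        # Tiny 2x2 "image": the original values survive untouched at even
        # indices of the output, and the gaps are averaged neighbours.
        img = np.array([[0.0, 4.0],
                        [8.0, 12.0]])
        up = bilinear_upscale(img)
        print(np.array_equal(up[::2, ::2], img))  # → True (originals intact)
        ```

        Because the blend weights are exactly zero at the original grid positions, this method can never rewrite existing pixels; the generative approach has no such constraint, which is exactly the difference being described above.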

      • bridgeenjoyer@sh.itjust.works · 2 days ago

        We are just old. I HATE tv smoothing, it looks like ass. I love crts and good flat screens but man that smoothing shit is awful. Also gaming on crt is peak for me.

        Kids know nothing else.

    • Windex007@lemmy.world · 2 days ago

      Yeah, sure, but have you considered the feelings of the people who came here to just comment “AI slop”?

    • anomnom@sh.itjust.works · 2 days ago

      I think it mostly looks so good because this was taken at a bill signing by a professional photographer with a big, fast lens, a very good camera sensor, and decent lighting in that room. Possibly with a fill flash indirectly lighting it too.

      AI filters are only necessary when you are compromising on one of the other ingredients, primarily bad lighting, lens, or equipment. Or a bad/lazy photographer.