• UnderpantsWeevil@lemmy.world · 9 months ago

    The Google AI that pre-loads the search results can't distinguish real photos from fake AI-generated ones. So there's no way to filter out all the trash, because we've made generative AI just good enough to snooker the search AI.

    • samus12345@lemmy.world · 9 months ago

      A lot of them mention in the description that they're using an AI art generator. Even filtering out only the self-reported ones would be useful.
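
      A minimal sketch of that kind of self-report filter, assuming each search result exposes a free-text description (the result shape and the generator names are illustrative, not anything Google actually returns):

```python
# Minimal sketch of a self-report filter over search results.
# The result dicts and generator names here are illustrative assumptions,
# not real search data or an exhaustive list of generators.
AI_HINTS = ("midjourney", "stable diffusion", "dall-e", "dall·e",
            "ai generated", "ai-generated", "ai art generator")

def self_reports_ai(description: str) -> bool:
    """True if the description itself mentions an AI art generator."""
    text = description.lower()
    return any(hint in text for hint in AI_HINTS)

results = [
    {"title": "Sunset over the bay", "description": "Shot on a Canon 5D."},
    {"title": "Castle in the clouds", "description": "Made with Midjourney v6."},
]
kept = [r for r in results if not self_reports_ai(r["description"])]
print(kept)  # only the result that doesn't self-report as AI art
```

      It obviously only catches uploaders who bother to say so, but that's the point of filtering the self-reported ones first.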

      • UnderpantsWeevil@lemmy.world · 8 months ago

        That still requires a uniform method of tagging art as such. It's absolutely a thing that could be done, but there's no upside to the effort. If your images all get tagged "AI" and another generator's don't, what benefit is that to you? And that's before we even get into which digital standard gets used for the tagging. Do we embed the tag in the image data itself (more reliable, but harder to implement)? As structured metadata (easier to apply, but also easier to spoof or strip off)? Or is Google just expected to parse this information from a kaleidoscope of generation and hosting standards?
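
        For the metadata route, there is at least an existing vocabulary to hang it on: IPTC's digital source type codes include a "trained algorithmic media" term that tools can write into an image's XMP packet. A rough sketch of checking for it, assuming the tag is present and hasn't been stripped (the crude byte scan and the command-line usage are just for illustration):

```python
import sys
from pathlib import Path

# IPTC's "trained algorithmic media" digital source type. Tools that follow
# the spec write this URI into the image's embedded XMP metadata.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def tagged_as_ai(path: Path) -> bool:
    """Crude check: scan the raw file bytes for the IPTC marker inside the
    embedded XMP. Trivial to strip, so a miss proves nothing -- this only
    catches images that self-identify."""
    return AI_SOURCE_TYPE in path.read_bytes()

if __name__ == "__main__":
    for arg in sys.argv[1:]:  # e.g. python check_tag.py photo1.jpg photo2.png
        p = Path(arg)
        print(f"{p}: {'tagged AI' if tagged_as_ai(p) else 'no AI tag found'}")
```

        Which is exactly the weakness: the tag lives in metadata, so anyone re-encoding or re-hosting the file can drop it without touching a pixel.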

        At times like this, it would be helpful for, say, the FCC or ICANN to get involved. But that would be Big Government Overreach, so it ain't going to happen.