Setting aside the usual arguments in the anti- and pro-AI art debate and the nature of creativity itself, perhaps the negative reaction the Redditor encountered is part of a sea change in opinion among the many people who think corporate AI platforms are exploitative and extractive in nature because their datasets rely on copyrighted material used without the original artists’ permission. And that’s without getting into AI’s drag on the environment.

  • barsoap@lemm.ee · 8 months ago
    But I also feel that to a large extent, honing the craft also hones the intuition (and some knowledge as far as it can be distilled) for what makes things resonant with others.

    Oh, definitely. I’d also say that if you want to make art, starting out with AI isn’t a good idea; do literally anything else until you have developed an artistic eye, if for no other reason than that the eye develops faster by trying to appease even an underdeveloped one than by leaning on AI. Just to make this a bit more concrete: if you can sculpt or paint a smile that doesn’t look freaky (a low bar aesthetically speaking, but not trivial for a beginner sculptor or painter), then you can properly judge whether what AI is giving you is something resonant or something forgettable. The untrained eye putting “woman with big tiddies” in the prompt certainly isn’t going to notice the finer details of a smile, what with the eyes being on the tits.

    I feel like a vegan about the currently available models: once there is something made from public domain art only, I’ll experiment. But right now I’m sitting in front of them like a vegan in front of sausage: for others the result is food, but all the vegan sees is the process turning individuals into sausage.

    I don’t consider models learning from material, as in, pixels that can be accessed without a paywall or for which that wall has been paid, to be infringement. If it were, then every artist who ever used reference would have to go to prison, and they shouldn’t.

    Note that the situation with diffusion models is actually quite different from that with LLMs, which are notorious for returning their training data verbatim: all the NYT needed to do to get their articles back was to put in the first paragraph of the article. Getty, meanwhile, is arguing its court case in the abstract because it can’t get the models to reproduce its images, and certainly not for lack of trying or resources. When working with the models, it also quickly becomes apparent that they can abstract over concepts.

    At most it’s the difference between organic and barn eggs. Yes, organic ones are nicer. No, barn eggs aren’t terrible (depending on local regulations etc., yada yada). Vegans might disagree, but, then, well, I’m flexi.
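
To make the verbatim-regurgitation point in barsoap’s comment a bit more concrete, here is a minimal sketch of the kind of check being described: prompt a model with an article’s opening paragraph, then compare its continuation against the real article and measure the longest run it reproduced word for word. The helper name, the placeholder strings, and the use of Python’s difflib for the comparison are illustrative assumptions, not anything from the comment or the NYT filing.

```python
# Illustrative sketch only: compare a model's continuation of an article's
# opening paragraph against the real article and report the longest run of
# characters the model reproduced verbatim. All strings are placeholders;
# no model is actually called here.
from difflib import SequenceMatcher


def longest_verbatim_run(original: str, generated: str) -> int:
    """Length (in characters) of the longest block shared verbatim."""
    matcher = SequenceMatcher(None, original, generated)
    match = matcher.find_longest_match(0, len(original), 0, len(generated))
    return match.size


if __name__ == "__main__":
    rest_of_article = "Placeholder: the article text that followed the prompt paragraph."
    model_output = "Placeholder: the article text that followed the prompt, more or less."
    run = longest_verbatim_run(rest_of_article, model_output)
    print(f"longest verbatim run: {run} characters "
          f"({run / max(len(rest_of_article), 1):.0%} of the original)")
```

A long run covering most of the article would suggest the regurgitation the comment describes; short, scattered matches would be closer to the abstraction-over-concepts behaviour barsoap attributes to diffusion models.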