Remember how we were told that genAI learns “just like humans”, that the law can’t say anything about fair use, and I guess now all art is owned by big tech companies?

Well, of course it’s not true. By exploiting a few of the ways in which genAI *is not* like human learners, artists can filter their digital art so that if a genAI tool consumes it, it actively degrades the model: undoing generalization and bleeding concepts into their neighbors.

Can an AI tool be used to undo this obfuscation? Yes. At scale, however, doing so imposes ever-increasing compute costs. This also looks like an improvable method, not a dead end – adversarial input design is a growing field of machine learning, with more and more techniques becoming widely available. Think of it as a sort of “cryptography for semantics”, in the sense that it imposes asymmetric work on AI consumers (while leaving the human eye much less affected).
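To make the “asymmetric work” idea concrete, here is a toy sketch of the basic adversarial-perturbation trick these tools build on. This is *not* the actual algorithm used by Nightshade or similar filters (those target diffusion-model training); it is a minimal FGSM-style illustration on a made-up linear “model”, where a per-pixel change too small to notice is chosen specifically to push the model’s output in the wrong direction. All names and numbers here are invented for the sketch.

```python
import numpy as np

# Toy illustration of an adversarial perturbation (FGSM-style).
# NOT the real Nightshade/Glaze algorithm -- just the underlying idea:
# a tiny, targeted change to the pixels that disturbs a model's output
# far more than it disturbs a human viewer.

rng = np.random.default_rng(0)

w = rng.normal(size=64)          # stand-in for a trained model's weights
x = rng.uniform(0, 1, size=64)   # the original "artwork", flattened 8x8 pixels

def predict(img):
    """Toy model: logistic score that the image belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-img @ w))

# For a linear model the gradient of the score w.r.t. the pixels is just w,
# so the worst-case small perturbation nudges each pixel against sign(w).
eps = 0.05                       # max per-pixel change: visually tiny
x_adv = np.clip(x - eps * np.sign(w), 0, 1)

print("original score:", predict(x))
print("perturbed score:", predict(x_adv))
print("max pixel change:", np.max(np.abs(x_adv - x)))
```

The asymmetry the post describes shows up here: crafting the perturbation is one cheap gradient step for the defender, but a model trainer who wants clean data has to detect and reverse perturbations like this across millions of images, against techniques that keep evolving.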

Now we just need labor laws to catch up.

Wouldn’t it be funny if not only does generative AI not lead to a boring dystopia, but the proliferation and expansion of this and similar techniques to protect human meaning eventually put a lot of grifters out of business?

We must have faith in the dark times. Share this with your artist friends far and wide!

  • froztbyte@awful.systems · 11 months ago

    I wasn’t aware of the licensing terms – that immediately drops my opinion of it too. I was thinking about looking into it (soon.gif, due to spoons) to see if I could make a run-your-own-Nightshade deployable for people, but I guess that’s off the table now

    as to effectiveness: as I’ve said elsewhere, the genie is out of the bottle. unless this shit gets (toothfully) regulated out of existence (and that might be impossible for a variety of reasons), I fear it’s going to become an arms race similar to spam, with a tug-of-war continuing for a while.

    gutfeel: it strikes me that the (current) biggest hope is that the models not only fail to deliver but don’t really have a path to achieving that either, so it’s mostly a question of how long VCs can fund the hype. from previous cycles that looks like 2–5-year windows. once the hype funding runs out, this stuff will lose significant traction, even though it won’t disappear entirely just yet.

    damn large amount of damage that’ll happen in the meantime, though