

sigh
You said it doesn’t matter if we can tell how something was made.
This conversation is over. Thanks
Ah, to be fair, I did misinterpret your previous statement.
But no, I am arguing that we are not able to ignore knowledge of the production process. Nothing mystical about that.
That’s not a rejection of what I said, so I assume you agree.
No, I’m saying we can no longer meaningfully separate the product and the process.
I am saying that we can no longer meaningfully separate the two things.
First you said “it doesn’t matter if we can tell or not”, which I responded to.
So I’m confused by your reply here.
This argument strikes me as a tautology. “If we don’t care if it’s different, then it doesn’t matter to us”.
But that ship has sailed. We do care.
We care because the use of AI says something about our view of ourselves as human beings. We care because these systems represent a new serfdom in so many ways. We care because AI is flooding our information environment with slop and enabling fascism.
And I don’t believe it’s possible for us to go back to a state of not-caring about whether or not something is AI-generated. Like it or not, ideas and symbols matter.
If you can tell it was produced in a certain way by the way it looks, then that means it cannot be materially equivalent to the non-AI stock image, no?
I mean, it can’t really do ‘every random idea’ though, right? Any output is limited to the way the system was trained to approximate certain stylistic and aesthetic features of imagery. For example, the banner image here has stereotypically AI-style texture, lighting, and so on. This shows us that the system has at least as much control as the user.
In other words: it is incredibly easy to spot AI-generated imagery, so if the output is obviously AI, then can we really say that the AI generated a “stock image”, or did it generate something different in kind?
I dunno what to tell you other than that I have been consistently pointing out that AI is a process, not a tool.
If the result of that process is the same wherever it’s introduced, then your model of the world has to be able to account for that.
Want to know how I know that it does?
Because the result is the same over and over and over and over and over again. Every single time!
Look at how great AI is for Colombian students as a way to change their material conditions 🙌 https://restofworld.org/2025/colombia-meta-ai-education/
Hold on though there are talking droids in the Star Wars documentaries and those happened a LONGGG time ago apparently
Edit: to be fair that was in a galaxy far, far away so it’s entirely possible that neither Karl nor Richard were aware of the technology
You think? I dunno, I could totally see Marx getting ChatGPT to generate a quick first draft of Capital, would definitely speed up the process of spreading his ideas … I mean spreading his material
Richard
edit: folks don’t like Richard Marx, hey believe me I get it
If I made a tool which literally said to you, out loud in a tinny computerised voice, “cognitive effort isn’t worth it, why are you bothering to try”, would it be fair to say it was putting forward the idea that cognitive effort isn’t worth it and why bother trying?
If so, what’s the difference when that statement is implied by the functioning of the AI system?
And that social role is, at least in part, to advance the idea that communication and cognition can be replicated by statistically analyzing an enormous amount of input text, while ignoring the human and social context and conditions that actual communication takes place in. How can that not be considered political?
Will read your link, but when I saw the phrase “democratising creativity” I rolled my eyes hard and then grabbed this for you from my bookmarks. But I’ll read the rest anyway
https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything
Edit: yeah, so that piece starts out by saying that art is about the development of what I’m taking to be a sort of ‘curatorial’ ability, but ends up arguing that, as long as the slop machines are nominally controlled by workers, it’s fine actually. I couldn’t disagree more.
Elsewhere in a discussion with another user here, I attempted to bring up Ursula Franklin’s distinction between holistic and prescriptive technologies. AI is, to me, exemplary of a prescriptive process, in that its entire function is to destroy opportunities for decision-making by the user. The piece you linked admits this is the goal:
“What distinguishes it is its capacity to automate aspects of cognitive and creative tasks such as writing, coding, and illustration that were once considered uniquely human.”
I reject this as being worthwhile. The output of those human pursuits can be mimicked by this technology, but, because (as the link I posted makes clear) these systems do not think or understand, they cannot be said to perform those tasks any more than a camera can be said to be painting a picture.
And despite this piece arguing that the people using these processes are merely incorporating a ‘tool’ into their work, and that AI will open up avenues for incredible new modes of creativity, I struggle to think of an example where the message some GenAI output conveyed was anything other than “I do not really give a shit about the quality of the output”.
These days our online environment suffers constantly from this stream of “good enough, I guess, who cares” stuff that insults the viewer by presuming they just want to see some sort of image at the top of a page, and don’t care about anything beyond this crass consumptive requirement.
The banner image in question is a great example of this. The overall aesthetic is stereotypical of GenAI images, which supports the notion that control of the process was more or less ceded to the system (or, alternately, that these systems provide few opportunities for directing the process). There are bizarre glitches that the person writing the prompt couldn’t be bothered to fix; the composition is directionless; the question marks have a jarring crispness that clashes with the rest of the image; and the tablets? signs? are made from some unknown material, perhaps the same indistinct stuff as the ground these critters are standing on.
It’s all actively hostile to a sense of community, as it pretends that communication is something that can just as well be accomplished by a statistical process, because who cares about trying to create something from the heart?
These systems are an insult to human intelligence while also undermining it by automating our decision-making processes. I wrote an essay about this if you’re interested, which I’ll link here and sign off, because I don’t want to be accused again of repeating myself unnecessarily: https://thedabbler.patatas.ca/pages/ai-is-dehumanization-technology.html
This is a crisp answer, nice one.