
Context: The decline started way before AI.

  • MudMan@fedia.io · 2 months ago

    My “credibility on this topic” is of zero interest to me. I am not here to appeal to authority. I know you didn’t mean it like that, but man, it’s such a social media argument point to make that it jumped right out at me. For the record, it’s not that I haven’t heard about problems with training on AI-generated content (and on filtering out that content). It’s that I don’t need to flaunt my e-dick and will openly admit when I haven’t gone deep into an issue. I have not read the papers I’ve heard of and I have not specifically searched for more of them, so I’ll get back to you on that one if and when I do.

    Anyway, that aside, you are presenting a bizarre scenario. You’re arguing that corporations will be demonstrably worse off by moving all coding to be machine-generated but they will do it anyway. Ad infinitum. Until there are no human coders left. At which point they will somehow keep doing it despite the fact that AI training would have entirely unraveled as a process by then.

    Think you may have extrapolated a bit too far on that one? I think you may have extrapolated a bit too far on that one. Corpos can do a lot of dumb shit, but they tend to be very sensitive about stuff that costs them money. And even if that weren’t the case, the insane volume of cheap skilled labor that scenario would generate pretty much guarantees some competing upstart would replace them with what would be, in your sci-fi scenario, a massively superior alternative.

    FWIW, no, that’s not the same as outsourcing. Outsourcing hasn’t “often been a bad idea”. Having been on both sides of that conversation, it’s “a bad idea” when you have a home base with no incentive to help their outsourced peers and a superiority complex. There’s nothing inherently worse about an outsourced worker/developer. The thing that closes the gap on outsourcing cost/performance is, if anything, that over time outsourced workers get good and expect to get paid to match. I am pretty much okay with every part of that loop. Different pet peeve, though, we may want to avoid that rabbit hole.

      • MudMan@fedia.io · 2 months ago

        You are saying a lot of things that sound good to you without much grounding. You claiming this is a “widespread and significant issue” is going to need some backing up, because I may be cautious about not claiming more knowledge than I have, but I know enough to tell you it’s not particularly well understood, nobody is in a position to predict the workarounds and it’s by no means the only major issue. The social media answer would be to go look it up, but it’s the weekend and I refuse to let you give me homework. I have better things to do today.

        That’s the problem with being cautious about things. Not everybody has to be. Not everybody knows they should be, or when. I don’t know if you’re Dunning-Kruger incarnate or an expert talking down to me (pretty sure it’s not the second, though).

        And I’m pretty sure of that because yeah, it is an infinite doomsday slippery slope scenario. That I happen to know well enough to not have to be cautious about not having done all the reading.

        I mean, your original scenario is that. You’re sort of walking it back here where it’s just some effect, not the endgame. And because now you’re not saying “if AI actually replaces programmers wholesale” anymore the entire calculation is different. It goes back to my original point: What data will AI use to train? The same data they have now. Because it will NOT in fact replace programmers wholesale and the data is not fungible, so there still will be human-generated code to train on (and whatever the equivalent high enough quality hybrid or machine-generated code is that clears the bar).
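
        To make that concrete, here’s a toy sketch of the loop we’re arguing about. To be clear, this is my own illustration, not anybody’s actual training pipeline: the “model” is just a fitted Gaussian, the “code” is numbers drawn from it, and the corpus mix is an assumption picked purely for demonstration.

        ```python
        # Toy illustration of the "training on model output" loop.
        # Assumptions for demonstration only: the "model" is a fitted 1-D
        # Gaussian and each generation refits on its own samples.
        import numpy as np

        rng = np.random.default_rng(42)

        def run_loop(keep_human: bool, n: int = 20, gens: int = 200) -> float:
            """Refit the 'model' on its own samples for `gens` rounds; return final spread."""
            human = rng.normal(0.0, 1.0, size=n)           # stand-in for human-written code
            mu, sigma = human.mean(), human.std()
            for _ in range(gens):
                synthetic = rng.normal(mu, sigma, size=n)  # last generation's output
                corpus = np.concatenate([human, synthetic]) if keep_human else synthetic
                mu, sigma = corpus.mean(), corpus.std()
            return sigma

        print(f"human data kept in corpus: sigma ~ {run_loop(True):.3f}")   # stays anchored near 1
        print(f"synthetic-only corpus:     sigma ~ {run_loop(False):.3f}")  # tends to drift toward 0
        ```

        As long as human-written data keeps flowing into the corpus, the refit stays anchored to it. Only the synthetic-only loop degenerates, and that loop requires the wholesale-replacement premise I just rejected.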

        AI has a problem with running out of (good) data to train on, but that only tells you there is a hard limit to the current processes, which we already knew. Whether current AI is as good as it’s going to get or there is a new major breakthrough in training or model design left to be discovered is anybody’s guess.

        If there is one, then the counter gets reset and we will see how far that can take the technology, I suppose. If there is not, then we know how far we’ve taken it and we can see how far it’s growing and how quickly it’s plateauing. There is no reason to believe it will get worse, though.

        Will companies leap into it too quickly? They already have. We’re talking about a thing that’s in the past. But the current iteration of the tech is incapable of removing programmers from the equation. At most it’s a more practical reference tool and a way to blast past trivial tasks. There is no doomsday loop to be had unless the landscape shifts significantly, despite what AI shills have been trying to sell people. This is what pisses me off the most about this conversation: the critics are buying into the narrative of the shills, aggressively, in ways that don’t really hold up to scrutiny for either camp.