• 0 Posts
  • 8 Comments
Joined 3 days ago
Cake day: August 11th, 2025

  • It seems like the most immature and toxic thing to me to invoke terms like “gaslighting,” ironically “toxic,” and all the other terms you associate with these folks, defensively and for any reason, whether or not it aligns with what the word actually means. It’s like a magic phrase that instantly makes the person you use it against evil, manipulative, and abusive, and the person using it a moral saint and vulnerable victim, while indirectly muting all those who have genuine uses for the terms. Or maybe I’m just exaggerating, and it’s just the typical over- and misuse of words.

    Anyhow, sadly necessary disclaimer: I agree with almost all of the current criticism raised against AI, and my disagreements are purely with mischaracterizations of the underlying technology.

    EDIT: I just reminded myself of when a teacher went ballistic at our class for misusing the term “antisocial,” saying we were eroding and polluting all the genuine and very serious uses of the term. Hm, yeah, it’s probably just that same old thing. Not wrong to go ballistic over it, though.




  • I find it fascinating how oblivious people pretend to be about what our natural social hierarchies are, making fringe speculations ranging from proto-capitalism through alpha-male fantasies to proto-communism.

    Maybe it’s too obvious, or too boring, but it’s families. Incidentally, the same happens to be true of actual, natural wolf packs.



  • I don’t know whether the current AI phase is a bubble, but I agree with you that even if it were a bubble and it burst, that wouldn’t somehow stop or end AI; it would spark a new wave of innovation instead.

    I’ve seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn’t exactly die.


  • Likewise, if you instruct the AI to break the word down into letters, one per line, first, it gets it right more often. I think that’s the point the post is trying to make.

    The letter-counting issue is actually a fundamental problem of whole-word or subword tokenization that has had an obvious solution since ~2016, and I don’t get why commercial AI won’t implement one. Probably because it’s a lot of training-code complexity (but not much compute) for solving a very small problem.
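
    As a minimal sketch of what a subword tokenizer actually hands the model, and of the one-letter-per-line workaround, assuming the tiktoken package is installed; the token splits shown in the comments are only an example and depend on the tokenizer:

        import tiktoken

        # Subword tokenizer used by several OpenAI chat models; exact splits vary by tokenizer.
        enc = tiktoken.get_encoding("cl100k_base")
        word = "strawberry"
        tokens = enc.encode(word)
        pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]
        print(pieces)  # e.g. ['str', 'aw', 'berry'] -- a handful of tokens, not ten letters

        # The workaround from the comment, spelled out locally: list the word
        # one letter per line first, then count over letters instead of tokens.
        letters = list(word)
        print("\n".join(letters))
        print("r count:", letters.count("r"))  # 3

    The second half mirrors the prompting trick above: once the word is spelled out one letter per line, counting becomes trivial.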