The thing that kept me from ever doing anything with it was that I didn’t feel like anyone would buy into it enough to even take part in a conversation where it was deployed.
Yeah, I think it’s got to work for people to buy into it. And frankly, my earliest implementations were “inconsistent” at best.
My thought right now is that the tool needs to do a first pass to encode the “meta-structure”, or perhaps… scaffolding(?) of a conversation, then proceed to encode the impressions/leanings. I have tools that can do this in part, but it needs to be… “bigger”, whatever that means. So there’s sentiment analysis, easy enough. There’s key phrase extraction. And that’s fine for a single comment… but how do we encode the dynamic of a conversation? That’s quite a bit trickier.
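Just to make that first pass concrete, here’s a rough sketch of the kind of scaffolding + impressions encoding I mean. None of this is the real tool; the field names (id, parent_id, author, body) and the little word lists are placeholders, and the sentiment/key-phrase bits are deliberately naive stand-ins for whatever proper tooling you’d actually use:

```python
# Minimal sketch of a "first pass" that encodes the scaffolding of a thread:
# who replied to whom, plus a crude per-comment sentiment score and key phrases.
# Field names (id, parent_id, author, body) are placeholders, not any real API.
import json
import re
from collections import Counter

POSITIVE = {"agree", "good", "fair", "thanks", "right", "helpful"}
NEGATIVE = {"wrong", "bad", "stupid", "nonsense", "icky", "strawman"}
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "it", "that", "i", "you"}

def naive_sentiment(text: str) -> float:
    """Lexicon-based stand-in for a real sentiment model."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

def key_phrases(text: str, n: int = 5) -> list[str]:
    """Most frequent non-stopword terms, as a stand-in for key phrase extraction."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(n)]

def encode_thread(comments: list[dict]) -> dict:
    """Turn a flat comment list into scaffolding plus per-comment impressions."""
    return {
        "replies": {c["id"]: c.get("parent_id") for c in comments},
        "comments": [
            {
                "id": c["id"],
                "author": c["author"],
                "sentiment": naive_sentiment(c["body"]),
                "key_phrases": key_phrases(c["body"]),
            }
            for c in comments
        ],
    }

if __name__ == "__main__":
    thread = [
        {"id": 1, "parent_id": None, "author": "alice", "body": "I think the tool is a good idea."},
        {"id": 2, "parent_id": 1, "author": "bob", "body": "That is nonsense, you built a strawman."},
    ]
    print(json.dumps(encode_thread(thread), indent=2))
```

The single-comment stuff is the easy half; the “dynamic of a conversation” part is what the reply map only barely gestures at.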
still seems to me u guys are doing it for witchhunting. if someone doesn’t like someone they can just ban them. you two going on and on about writing a program and using ai to catch people you don’t like is icky. I’ll be one of the people voting against this if it ever goes wide on lemmy. no thanks. u all need to touch grass, ur way too caught up in lemmy
Yeah, generally having it read the conversation (I think as JSON, maybe markdown for the first pass, I can’t remember; it’s a little tricky to get the comments into a format where it’ll reliably grasp the structure and who said what, but it’s doable), then produce its output as JSON, and then have those JSON pieces fed as input to further stages, seems like it works pretty well. It falls apart if you try to do too much at once. If I remember right, the passes I wound up doing were:
What are the core parts of each person’s argument?
How directly is the other person responding to each core part in turn?
Assign scores to each core part based on how directly each user responded to it: if you responded to it, you’re good; if you ignored it or just said your own thing, not so good; if you pretended it said something totally different so you could go on a little tirade, very bad.
And I think that was pretty much it. It can’t do all of that at once reliably, but it can do each piece pretty well and then pass the answers on to the next stage. Just from what I’ve observed of political arguments on Lemmy, I think that would eliminate well over 50% of the bullshit, though. There are way too many people who are more excited about debunking some strawman concept they’ve got in their head than they are about understanding what the other person is actually saying. I feel like something like that would do a lot to counteract it.
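If it helps picture it, the overall shape was roughly this. It’s not the actual code, just a sketch: run_llm() is a placeholder for whatever model call you’d wire in, and the prompt wording is made up, but it shows the idea of each pass emitting JSON that gets fed into the next one:

```python
# Rough sketch of the staged approach: each pass gets the conversation (or the
# previous pass's JSON output) and must answer in JSON, which is then fed to
# the next pass. run_llm() is a placeholder, not a real library call.
import json

def run_llm(prompt: str) -> str:
    """Placeholder for a call to whatever model you use; must return a JSON string."""
    raise NotImplementedError("plug in your model call here")

# Made-up prompt wording for the three passes described above.
PASSES = [
    "List the core parts of each person's argument. Answer as JSON: "
    '{"core_points": [{"author": "...", "point": "..."}]}',
    "For each core point below, say how directly the other person responded "
    "(responded / ignored / misrepresented). Answer as JSON.",
    "Assign a score to each core point: 1.0 if it was responded to directly, "
    "0.5 if ignored, 0.0 if misrepresented. Answer as JSON.",
]

def judge_conversation(conversation: list[dict]) -> dict:
    """Run each pass in turn, feeding the previous pass's JSON output into the next."""
    context = json.dumps({"conversation": conversation})
    result: dict = {}
    for instruction in PASSES:
        prompt = f"{instruction}\n\nInput:\n{context}"
        result = json.loads(run_llm(prompt))  # each stage has to emit valid JSON
        context = json.dumps(result)          # chain it into the next stage
    return result

# Example input shape (placeholder field names, not a real Lemmy API):
# judge_conversation([
#     {"author": "alice", "body": "My point is X because Y."},
#     {"author": "bob", "body": "X is irrelevant, here's my own thing."},
# ])
```

Keeping each pass that small is the whole trick; asking for all three judgments in one go is where it started getting unreliable for me.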
The fly in the ointment is that people would have to consent to having their conversation judged by it, and I feel like there is probably quite a lot of overlap between the people who need it in order to have a productive interaction, and those who would never in a million years agree to have something like that involved in their interactions…