There was a paper about this not long ago. The problem is how LLMs get trained: a right answer gets a point, everything else gets no points. This rewards guessing (which sometimes produces a point) over answering “I don’t know/I can’t do this” (which never produces a point).
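The expected-value argument can be sketched in a few lines (the probabilities and scoring are illustrative assumptions, not from any specific paper):

```python
# Toy comparison of expected score under binary grading:
# a right answer scores 1, anything else (including "I don't know") scores 0.

def expected_score_guess(p_correct: float) -> float:
    # Guessing pays off whenever the guess happens to be right.
    return p_correct * 1 + (1 - p_correct) * 0

def expected_score_abstain() -> float:
    # "I don't know" is never the keyed answer, so it always scores 0.
    return 0.0

# Even a small chance of guessing right beats abstaining every time:
print(expected_score_guess(0.10))  # 0.1
print(expected_score_abstain())    # 0.0
```

Under this grading scheme there is no probability of being correct at which abstaining is the better strategy, which is the incentive problem the comment describes.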
It’s like when developers give a wrong answer during technical interviews rather than saying “I’d have to look it up” or “I’d have to check the documentation.”
I would code a GUI in visual basic