This has been my experience as well. It keeps emphasizing “beauty” and keeps missing “correctness”.
LLMs are systems that output human-readable natural language answers, not true answers.
And a good part of the time, the answers have a… subtly loose relationship with the truth.
It generates an answer that looks correct. Actual correctness is accidental. That’s how you wind up with documents full of references that don’t exist: it just knows what references look like.
It doesn’t ‘know’ anything. It is glorified text autocomplete.
Current AI is intelligent the way hoverboards hover.
LLMs are the smartest thing ever on subjects you have no fucking clue about. On subjects you have at least a year of experience with, they suddenly become the dumbest shit you’ve ever seen.
Semantics 😴
Sementics 💦
Not even remotely.
You could claim that it knows the pattern of how references are formatted, depending on what you mean by the word “know”. Therefore, it’s a 100% uninteresting discussion of semantics.
The theory of knowledge (epistemology) is a distinct and storied area of philosophy, not a debate about semantics.
There remains to this day strong philosophical debate on how we can be sure we really “know” anything at all, and thought experiments such as the Chinese Room illustrate that “knowing” is far, far more complex than we might believe.
For instance, is it simply following a set path like a river in a gorge? Is it ever actually “considering” anything, or just doing what it’s told?
No one cares about the definition of knowledge to this extent except for philosophers. The person who originally used the word “know” most definitely didn’t give a single shit about the philosophical perspective. Therefore, you shitting yourself over a word not being used exactly as you’d like, instead of understanding its usage in context, is very much semantics.
When you debate whether a being truly knows something or not, you are, in fact, engaging in the philosophy of epistemology. You can no more avoid epistemology when discussing knowledge than you can avoid discussing physics when describing the flight of a baseball.
So it’s 50% better than my code?
If the code cannot uphold correctness, it is 0% better than your code.