From what I understand of LLMs, your assessment seems likely to me. LLMs might actually be pretty accurate when asked to do relatively simple, shorter tasks.
Yeah, I asked it to generate SDKs from API documentation and it failed to pull all the routes into methods, so it's very temperamental. If there's a simpler SDK conversion program that I'm missing, I'd much rather use a hard-coded logic machine than a fuzzy LLM.
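For what it's worth, if the API docs exist as an OpenAPI/Swagger spec, deterministic tools like OpenAPI Generator already do this. The same idea can be hand-rolled; here's a minimal sketch that turns every route in a spec into a client method stub (assuming a local openapi.json and that each operation has an operationId; both are assumptions, not a general-purpose generator):

```python
import json

def generate_client(spec_path: str = "openapi.json") -> str:
    """Emit Python source for a client with one method per route in the spec."""
    with open(spec_path) as f:
        spec = json.load(f)

    lines = [
        "import requests",
        "",
        "class Client:",
        "    def __init__(self, base_url: str):",
        "        self.base_url = base_url.rstrip('/')",
        "",
    ]
    for path, operations in spec.get("paths", {}).items():
        for http_method, op in operations.items():
            # Skip non-operation keys such as "parameters" or "summary".
            if http_method not in ("get", "post", "put", "patch", "delete"):
                continue
            # Fall back to a name derived from the path if operationId is missing.
            fallback = f"{http_method}_" + path.strip("/").replace("/", "_").replace("{", "").replace("}", "")
            name = op.get("operationId") or fallback
            lines += [
                f"    def {name}(self, **kwargs):",
                f"        # {http_method.upper()} {path}",
                # OpenAPI path templates use {param}, so str.format fills them in.
                f'        url = self.base_url + "{path}".format(**kwargs)',
                f"        return requests.{http_method}(url)",
                "",
            ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_client())
```

Because it just walks spec["paths"], it can't silently drop routes the way a fuzzy generation step can; anything missing in the output is missing in the spec.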