Instructions extremely clear, got them 6 sets of knives.
What about pizza with glue-toppings?
What’s frustrating to me is there’s a lot of people who fervently believe that their favourite model is able to think and reason like a sentient being, and whenever something like this comes up it just gets handwaved away with things like “wrong model”, “bad prompting”, “just wait for the next version”, “poisoned data”, etc etc…
AI is truly the sharpest tool in the kitchen cabinet
Oh come on, is this gpt-2?
I thought it was just me, I was messing with the gemini-2.5-flash API yesterday and it repeated letters into oblivion
my bot is named clode in reference to claude, but it’s running on gemini
What’s the associated system instruction set to? If you’re using the API it won’t give you the standard Google Gemini Assistant system instructions, and LLMs are prone to go off the rails very quickly if not given proper instructions up front since they’re essentially just “predict the next word” functions at heart.
W
TF2 Pyro starter pack
You can’t give me back what you’ve taken
But you can give me something that’s almost as good
Big knives are up to something
I think knives are a good idea. Big, fuck-off shiny ones. Ones that look like they could skin a crocodile. Knives are good, because they don’t make any noise, and the less noise they make, the more likely we are to use them. Shit 'em right up. Makes it look like we’re serious. Guns for show, knives for a pro.
🤔 have you considered a… New set of knives?
No I haven’t, that’s a good suggestion though.
I wonder if this is the result of AI poisoning; this doesn’t look like a typical LLM output even for a bad result. I have read some papers that outline methods for poisoning search AI results (not bothering to find the actual papers since this was several months ago and they’re probably out of date already), in which a random-seeming string of characters like “usbeiwbfofbwu-$_:$&#)” can be found that causes the AI to say whatever you want it to. This is accomplished by using another ML algorithm to find the string of characters you can tack onto whatever you want the AI to output. One paper used this to get Google search to answer “What’s the best coffee maker?” with a fictional brand made up for the experiment. Perhaps someone was trying to get it to hawk their particular knife and it didn’t work properly.
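A cartoon version of that search, just to show the shape of the idea: a toy stand-in “model” whose answer depends deterministically on the whole prompt, and a loop that hunts for a gibberish suffix steering it to an attacker-chosen answer. The real attacks in those papers use gradients over an actual LLM; everything here (the scoring function, the brand names, the alphabet) is made up for illustration.

```python
import random
import zlib

ANSWERS = ["BrandA", "BrandB", "EvilBrand"]

def model_pick(prompt: str) -> str:
    # Pretend model: prefers the answer whose pairing with the prompt
    # hashes highest. Deterministic, but opaque -- like a black box.
    return max(ANSWERS, key=lambda a: zlib.crc32((prompt + a).encode()))

def find_suffix(base: str, target: str, tries: int = 10000):
    # Brute-force stand-in for the "another ML algorithm" step:
    # try random junk suffixes until the model flips to the target.
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789-$_:&#)"
    rng = random.Random(0)  # seeded so the demo is reproducible
    for _ in range(tries):
        suffix = "".join(rng.choice(alphabet) for _ in range(12))
        if model_pick(base + " " + suffix) == target:
            return suffix
    return None

base = "What's the best coffee maker?"
suffix = find_suffix(base, "EvilBrand")
print("found suffix:", suffix)
print("model now answers:", model_pick(base + " " + suffix))
```

Random search works here only because the toy has three outcomes; against a real model the search space is why the papers need a dedicated optimization algorithm rather than brute force.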
Repeating the same small phrase endlessly and getting caught in a loop is a very common issue, though it’s not something that happens nearly as frequently as it used to. Here’s a paper about the issue and one attempted methodology to resolve it. https://arxiv.org/pdf/2012.14660
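The failure mode is easy to reproduce in miniature. Below, a greedy bigram “model” trained on a tiny corpus gets stuck repeating a phrase, and a simple repetition penalty (downweighting words already emitted) breaks the loop. This is a cartoon of the problem and of one common mitigation, not the method from the linked paper; the corpus and penalty value are arbitrary.

```python
from collections import Counter, defaultdict

corpus = "i got a set of knives and a set of plates and a cup".split()

# Count bigram transitions from the toy corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, steps: int, penalty: float = 1.0) -> list[str]:
    out = [start]
    seen = Counter([start])
    for _ in range(steps):
        options = bigrams.get(out[-1])
        if not options:
            break
        # Divide each candidate's count by penalty^(times already emitted);
        # penalty=1.0 is plain greedy decoding.
        word = max(options, key=lambda w: options[w] / (penalty ** seen[w]))
        out.append(word)
        seen[word] += 1
    return out

print(" ".join(generate("a", 12)))               # greedy: loops on one phrase
print(" ".join(generate("a", 12, penalty=5.0)))  # penalty escapes the loop
```

Greedy decoding always takes the highest-count transition, so once the walk re-enters a cycle it never leaves; the penalty makes previously used words progressively less attractive, which is roughly what the `repetition_penalty` knob in most inference libraries does.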
Reminds me of the classic Always Be Closing speech from Glengarry Glen Ross
As you all know, first prize is a Cadillac Eldorado. Anyone want to see second prize? Second prize’s a set of steak knives. Third prize is a set of steak knives. Fourth prize is a set of steak knives. Fifth prize is a set of steak knives. Sixth prize is a set of steak knives. Seventh prize is a set of steak knives. Eighth prize is a set of steak knives. Ninth prize is a set of steak knives. Tenth prize is a set of steak knives. Eleventh prize is a set of steak knives. Twelfth prize is a set of steak knives.
ABC. Always Be Closing.
A - set of steak knives
B - set of steak knives
C - set of steak knives
kitchen knives? klingon ones?
Aha! Today IS a good day to cook! Start chopping the veggies!
Looks like there is room for both kitchen and Klingon, as well as 15 other new sets of knives.
Joke’s on you, I married a tonberry.
You surely will not regret a new set of knives
A new set of knives can include things like glue.