I don’t see the problem. Sometimes it’ll be fifteen, and then it will be perfect every time. This saves the user literal hours poring over documentation and agonizing over which esoteric function to use, which far outweighs the few times this number will be nine.
Ok I see what happened here. You said “the numbers above” and it saw the A in the column name. In hexadecimal that’s 10. But you also said “numbers” plural, and “1” isn’t plural, so it dropped the 1 and took A + 2 + 3 = 15.
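And for anyone who wants to check the joke’s arithmetic, it does land on 15:

```python
print(int("A", 16) + 2 + 3)   # A is 10 in hex, so 10 + 2 + 3 = 15
```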
Makes perfect sense, maybe just write better prompts next time. /s
Doesn’t even need the /s. That is largely how those glorified search engines work.
Woah woah woah, stop it right there. I won’t stand for slander against actual search engines!
I doubt it. The column name isn’t part of the data.
However “the numbers above” is data.
3 letters + 7 letters + 5 letters = 15 letters.
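Which, for what it’s worth, checks out:

```python
print(sum(len(word) for word in "the numbers above".split()))   # 3 + 7 + 5 = 15
```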
Stunned that they’re fucking with their flagship Office product. Without Excel, everyone could simply drop Office.
Been a sysadmin at small companies for 10 years, and that means I’m the one vetting and purchasing software. Last shop was all in on Google for Business and Google for auth. Worked pretty well, but accounting and HR still had to have Excel.
It’s not even so much that other software can’t do simple Excel tasks, it’s the risk of your numbers getting lost in translation. In any case, nothing holds a candle to the power of Excel. And now they want to fuck with it?!
Excel is often used by people that don’t know what a database is, and you end up with thousands of rows of denormalised data just waiting for typos or extra white spaces to fuck up the precarious stack of macros and formulae. Never mind the update/insert anomalies and data corruption waiting to strike.
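For a concrete illustration of the whitespace failure mode (made-up table and column names, with pandas standing in for whatever lookup the sheet does): one stray trailing space and the match quietly comes back empty.

```python
import pandas as pd

# Hypothetical denormalised data: the customer name picked up a trailing space somewhere
orders = pd.DataFrame({"customer": ["Acme ", "Bolt"], "total": [100, 200]})
names  = pd.DataFrame({"customer": ["Acme", "Bolt"], "region": ["EU", "US"]})

print(orders.merge(names, on="customer", how="left"))    # "Acme " matches nothing -> region is NaN

clean = orders.assign(customer=orders["customer"].str.strip())
print(clean.merge(names, on="customer", how="left"))     # after stripping, both rows match
```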
I have a passionate hate for Excel, but I understand that not everyone is willing to learn more robust data processing.
The precarious stack of macros and formulae that you also can’t version control properly because it’s a superficially-XML-ified memory dump, not textual source code.
Almost every nontrivial use of Excel would be better off as, if not a database, at least something like a Jupyter notebook with pandas.
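If anyone wants to try it, here’s a minimal sketch of what the notebook-plus-pandas version of the meme’s task looks like (the column values are typed in by hand, not read from the screenshot):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3]})   # the meme's column, reproduced manually
print(df["A"].sum())                  # 6, deterministically, no prompt involved
```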
I haven’t even thought of that. I might just try this for fun.
A large language model shouldn’t even attempt to do math imo. They made an expensive hammer that is semi good at one thing (parroting humans) and now they’re treating every query like it’s a nail.
Why isn’t OpenAI working more modularly, whereby the LLM calls up specialized algorithms once it has identified the nature of the question? Or is it already modular and they just suck at anything that can’t be calibrated purely with brute-force computing?
Yep. Instead of focusing on humans communicating more effectively with computers, which are good at answering questions that have correct, knowable answers, we’ve invented a type of computer that can be wrong because maybe people will like the vibes more? (And we can sell vibes)
OpenAI already makes it write Python functions to do the calculations.
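Roughly, the modular setup the question above describes looks like this. A minimal sketch under my own assumptions (the llm_generate fallback is a hypothetical stub, and none of this is OpenAI’s actual plumbing):

```python
import ast
import operator as op

# Deterministic arithmetic path: evaluate simple expressions with real code,
# never with token prediction.
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def llm_generate(question: str) -> str:
    return "(hand the question to the language model here)"   # hypothetical stub

def answer(question: str) -> str:
    expr = question.removeprefix("What is").rstrip("?").strip()
    try:
        return str(safe_eval(expr))    # route arithmetic to the calculator
    except (ValueError, SyntaxError):
        return llm_generate(question)  # everything else goes to the model

print(answer("What is 1 + 2 + 3?"))    # -> 6
```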
So it’s going to write Python functions to calculate the answer, where all the variables are stored in an Excel spreadsheet, a program that can already do the calculations? And how many forests did we burn down for that wonderful piece of MacGyvered software, I wonder.
The AI bubble cannot burst soon enough.
There are math-specific LLMs, & coding-specific ones
( Yi Coder is one, which I’ve used to translate bits of code into some language I can sorta understand… Julia… I’ve been trying to learn programming for decades, & brain-injury can go eat rocks. : ), too.
LM Studio has a search-function, so search for “math” in its models-search, & see what it comes up with.
I’ve used such things to give me a derivative of some horrible equation NASA published decades ago, & then go finding an online derivatives-finder to check it with…
The thing that kills me is that IT SHOULD BE CHECKED, dammit!
ie: IF the LLM did some bullshit “arithmetic” on a column-of-numbers, THEN the regular code of the spreadsheet should
- display the function that the AI used, if any, &
- suggest the SUM() function, AND SHOW THAT-FUNCTION’S RESULT (something like the sketch below).
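That cross-check is trivial in regular, deterministic code. A sketch with made-up values and a hypothetical ai_answer, since none of this is Copilot’s real plumbing:

```python
column = [1, 2, 3]        # made-up cell values
ai_answer = 15            # hypothetical figure returned by the assistant
expected = sum(column)    # what SUM(A1:A3) would display

if ai_answer != expected:
    print(f"AI said {ai_answer}, but SUM() gives {expected} -- show both, don't hide it")
```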
This whole “LLM: take the wheel” idiocy … incomprehensible.
DuckDuckGo’s AI is hit-or-miss, & sometimes it is stubbornly wrong: no correction gets through to it.
_ /\ _
One of the other replies said that “1”+(2+3) is “15” in JavaScript. So my last theory as to what was going on was that the creator of the meme had =“1”, 2 and 3 as the cell contents, and then Copilot used Python code to sum those, not SUM(), which would have answered 5.
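A rough Python illustration of that theory, with the cell contents assumed as described (str() stands in for the JavaScript-style coercion, since Python itself would just throw a TypeError):

```python
cells = ["1", 2, 3]   # assumed contents: the first cell is text, the other two are numbers

# Concatenation instead of addition reproduces the meme's answer
print(cells[0] + str(sum(cells[1:])))                        # '15'

# Excel's SUM() skips text cells entirely, which is where the 5 would come from
print(sum(c for c in cells if isinstance(c, (int, float))))  # 5
```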
But since the answer is a black box, who really knows. This blind trust that OpenAI and MS expect makes it unusable for anything that needs to be correct and verifiable. Indeed incomprehensible that they think this is a good idea. I’ll have to try finding something better on LM Studio the next time I have a math problem, thanks for that tip.
Ed Zitron wrote an article a while ago about Business Idiots. From what I recall, the people in charge of these big companies are out of touch with users and the product, and so they make nonsense decisions. Companies aren’t run by the best and brightest. They’re run by people who do best in that environment.
Microsoft seems to be full of business idiots.
not shown: row 5 - “January 25th”
Relax bro, it’s just vibe working, it doesn’t have to be correct
This will. Do not mess with Excel.
Simple, 3+2 = 5. Add 1, which came before the 3 and 2, and you get 1 then 5, 15. It’s new new math, or as I will henceforth call it, mAIth.
New-hoo-hoo math
It’s so simple, so very simple
That only an AI can do it
Damn it now this is going to be stuck in my head for the day. “64. How did 64 get into it? I hear you cry.”
It equally fascinates and scares me how widely AI is already being adopted by companies, especially at this stage. I can understand playing around a little with AI, even if its energy requirements can pose an ethical dilemma, but actually implementing it in a workflow seems crazy to me.
Actually, I think the profit motive will correct the mistakes here.
If AI works in their workflow and uses less energy than before… well, that’s an improvement. If it uses more energy, they will revert back because it makes less economic sense.
This doesn’t scare me at all. Most companies strive to stay as profitable as possible and if a 1+1 calculation costs a company more money by using AI to do it, they’ll find a cheaper way… like using a calculator, as they have before.
We’re just nearing the peak of the Gartner hype cycle, so it seems like everyone is doing it and it’s being sold at a loss. This will correct.
You put too much faith in people to make good decisions. This could decrease profits by a wide margin and they’d keep using it. Tbh some would stick with the decision even if it throws them into the red.
You have more faith in people than I do.
I have managers that get angry if you tell them about problems with their ideas. So we have to implement their ideas despite the fact that they will cause the company to lose money in the long run.
Management isn’t run by bean counters (if it was, it wouldn’t be so bad); management is run by egos in suits. If they’ve staked their reputation on AI, they will dismiss any and all information that says their idea is stupid.
The problem is how long it takes to correct against stupid managers. Most companies aren’t fully rational, it’s just when you look at long term averages that the various stupidities usually cancel out (unless they bankrupt the company)
unless they bankrupt the company
Even then it’s not a guarantee. They just get one of their government buddies to declare them too important to the economy (reality is irrelevant here), and get a massive bailout.
This doesn’t scare me at all. Most companies strive to stay as profitable as possible and if a 1+1 calculation costs a company more money by using AI to do it, they’ll find a cheaper way
This sounds like easy math, but the calculation isn’t really about whether AI uses more or less energy. Its stated goal is to replace people. People have a much, much more complicated cost formula, full of subjective measures. An AI doesn’t need health insurance. An AI doesn’t make public comments on social media that might reflect poorly on your company. An AI won’t organize and demand a wage increase. An AI won’t sue you for wrongful termination. An AI can be held responsible for a problem and it can be written off as “growing pains”.
How long will the “potential” of the potential benefits encourage adopters to give it a little more time? How much damage will it do in the meantime?
Simple:
Whenever you see a bad, incorrect answer, always give the AI a shit ton of praise. If it gives a correct answer, chastise it to death.
Great. Now we’re going to need therapists for AIs.
Make sure to thumbs up, like and subscribe!
One of the principal justifications for George Osborne’s 2010 austerity plans turned out to be erroneous thanks to an Excel error.
That was without any help from AI; things could be about to get much worse.
I’ve had some fun trying to open old spreadsheet files. It’s not been that painful. (Mostly because the people I had to help never discovered macros. In the optimal case they didn’t even know about functions.) After all, you don’t have weird external data sources. The spreadsheet is a frozen pile of data with strict rules.
I would love to be a fly on the wall when, in 10 years, someone needs to open an Excel file with Copilot stuff in it and needs fully reproducible results.
Wait, you’re telling me it redoes all of the prompts every time you open the document? That’s such a bad way of doing it, it’s borderline criminal.
At the very least, why doesn’t Copilot just replace that prompt with the appropriate SUM(A1:A3) formula?
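What you’d hope for is something like this: materialise the request as an ordinary formula once, and let Excel’s own engine own the result from then on. A sketch using openpyxl with made-up cell positions, not anything Copilot actually does:

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for row, value in enumerate((1, 2, 3), start=1):
    ws.cell(row=row, column=1, value=value)

# Write the plain formula instead of re-running a prompt on every open
ws["A4"] = "=SUM(A1:A3)"
wb.save("no_copilot_needed.xlsx")   # Excel evaluates A4 to 6 when the file is opened
```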
Then Microsoft can’t motivate you to keep paying for the subscription
Need more context. If this is an engineering calculation then it’s wrong…but if it’s just an upper level manager doing numeric gibberish, then it’s probably no worse than their made-up input data anyway.