There’s O(1), O(n), O(n lg n), O(this code is crap).
[Cries in matrix multiplication]
Matrix multiplication is O(n) if you do it in parallel /j
Umm, AKCTSHUALLY it gets more like O(n^2) in parallel, assuming you’re using a physically achievable memory. There’s just a lot of criss-crossing the entries have to do (rough sketch below).
Strassen’s algorithm gets O(n^2.8) in serial by being terrible, and the weird experimental ones get O(n^2.3), but the asymptotic benefits of Coppersmith-Winograd and friends only kick in if you’re a Kardashev III civilisation computing a single big product on a Dyson sphere.
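Here’s the rough sketch of the parallel claim, as PRAM hand-waving in Python rather than real hardware: every one of the n^2 output entries is an independent length-n dot product, so with n^2 processors each one finishes in O(n). `matmul_entrywise` is just my toy name for it, and the data movement this sketch ignores is precisely the criss-crossing cost.

```python
# Rough PRAM-style sketch, not real hardware: each (i, j) entry of the
# product is an independent length-n dot product, which is where the
# "O(n) in parallel" joke comes from: n^2 processors, O(n) work each.
# What it ignores is data movement. Entry (i, j) needs all of row i of A
# and column j of B, and shipping those operands around is the
# criss-crossing complained about above.
import numpy as np

def matmul_entrywise(A, B):  # toy name, not a real library function
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):          # every iteration of this loop nest is
        for j in range(n):      # independent of every other one
            C[i, j] = np.dot(A[i, :], B[:, j])
    return C

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
assert np.allclose(matmul_entrywise(A, B), A @ B)
```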
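And since I brought it up, a minimal Strassen sketch, assuming power-of-two sizes and with no cutoff to a naive base case, i.e. the “terrible” version: seven recursive multiplies instead of eight is what gives the O(n^log2(7)) ≈ O(n^2.81) bound.

```python
# Minimal recursive Strassen sketch: power-of-two sizes only, recursing
# all the way down to 1x1 blocks instead of switching to a naive kernel,
# which is exactly why it's terrible in practice. Seven block products
# replace the eight of the schoolbook blocked multiply.
import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven Strassen products.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Recombine into the four blocks of the result.
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(1)
A, B = rng.random((8, 8)), rng.random((8, 8))
assert np.allclose(strassen(A, B), A @ B)
```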
I can’t decide whether this sentence is a joke or not. It has the same tone that triggers my PTSD from my CS degree classes, and I do recognize some of the terms, but it also sounds like it’s just throwing random science terms around, as if you asked an LLM to talk about math.
I love it.
Also, it’s apparently real and correct.
Thank you, I’m glad to share the pain of numerical linear algebra with anyone who will listen.