Based on the attempts we’ve seen at censoring AI output so far, I don’t see a way to actually do this short of building a new model on pre-censored training data.
Sure, they can tune models, but even “MechaHitler” Grok was still giving the occasional “woke” answer. So either the censorship destroys AI’s “usefulness” (not that there’s any usefulness there to begin with), or it costs so much to implement that investors pull out: none of the AI companies are profitable as it is, and throwing billions more into sifting through and filtering the training data pushes profitability even further away (assuming censoring all the training data is even possible).