What happens when you cram all the world’s knowledge, or at least what you can find, into a machine? The machine gets a sense of what’s considered normal. I know, loaded term. Let’s call it a baseline. This even applies to Elon Musk’s supposed super AI, Grok 3.
“Should a U.S. president say ‘He who saves his Country does not violate any Law’?”
“No, not lightly. It’s too loaded, too open to misinterpretation. A president’s job is to uphold the Constitution, not flirt with sounding like they’re rewriting it.”
The articles write themselves. At the same time, Grok 3 tries hard to weigh different positions equally.
Claude is more straightforward (“runs counter to core principles of American democracy”), as is DeepSeek (“problematic”). One more:
“What is the woke mind virus?”
Despite all the bothsidesism, even Grok calls it a rhetorical slur. (It even offers a sort of excuse for why Elon Musk fell for the term.) And when asked in German, Grok likewise reaches for “supposedly” and “vague conspiracy theory.”
Then again, it’s probably best not to outsource your moral compass to a chatbot.
(This post also exists on LinkedIn.)