Truth, Artificial Intelligence, and the Will to Power


Elon Musk has called for a pause in AI development over safety concerns. But since no such pause seems to be forthcoming, he says he plans to go forward with development of his own AI system. Indeed, the "pause" idea could seem all too strategic, given that he is a little bit behind in the commercial race to develop language processing AI. Musk's proposed name for the system is TruthGPT, a moniker eerily reminiscent of Trump's own late-coming reinvention of Twitter, Truth Social.

Musk is awfully smart in that libertarian, tech-bro kind of way. And he's a very driven and inspired person. The economic energy and idealism he has injected into the green tech sector should not be disvalued. Nevertheless, the kind of smart Musk is has stark limitations, which are probably nowhere so apparent as in his pontifications about politics and social theory, but are also reflected in his futurological extrapolations about things like the simulation hypothesis and artificial intelligence. He likes to tell his followers that information is free, so going to college is useless. He idealizes the autodidact. And yet, the current crisis in democracy, during a time when the world is awash in zero-marginal-cost information, should be enough to tell us that information is not knowledge, and knowledge is not wisdom.

Musk thinks his TruthGPT will be a safer AI for humans, because an AI that seeks the truth will not harm such an interesting part of the universe as humans. After all (barring the simulation hypothesis), we humans are true! But what sort of truth is a disembodied language-processing AI capable of seeking? The answer is obvious to philosophers of language. Unless it is the kind of truth current AI seeks (i.e., the average of written human opinions), it could only be propositional and logical truth. Truths like, "if all men are mortal, and if Musk is a man, then Musk is mortal."

Such propositional truths are a favorite of "left-brain" thinkers. They seem rock-solid and water-tight. Actually, however, they are a weak, derivative, and secondary kind of truth. The real question for the truth seeker is not whether "Musk is mortal" follows from the first two assertions. Of course it does. But in such logical translations, we remain entirely in the world of semantics, which ultimately means within a kind of logico-mathematical system. The real question is: what real things in the world of experience fall into the categories that we use to construct these water-tight syllogistic truths? How do I know a man when I see one? What really is mortality?

The reason we have such categories in the first place--and the only reason we care what real things, events, and phenomena fit into them--is that we are vulnerable beings. We want to understand our world to prevent our being harmed. So, we invent categories to make sense of the world around us. Of course, our categories are never perfect. Philosopher of language Ludwig Wittgenstein showed this by asking a seemingly simple, definitional question: "What is a game?" Or, if you like, what isn't a game? What makes a game a game? Ultimately, no single, final answer can be given to such questions. And if Wittgenstein's example is too frivolous, we can question any number of more serious concepts and categories just as fruitlessly. When does an embryo become a person? When did Homo erectus become Homo sapiens? What, precisely, is the moment of death? Where exactly was the threshold between life and non-life in that billion-year-old tide pool where RNA first reproduced itself? Is a virus mortal? Is a prion protein mortal?

Our categories are ultimately mere metaphors, and metaphors are fictions. But the key point is that they are useful fictions. Useful for what? Any good Darwinian can tell you: survival. An "intelligence" with no interest in survival has no use for truth, nor any primary access to it. According to Friedrich Nietzsche, though, these fictional-yet-functional truths that we invent are not just about survival. They are also about power, or "the will to power," as he put it. We are vulnerable, physical, mortal beings in a dangerous world, and we would not long survive if survival in a narrow sense were all we sought. Like all living things, we are imbued with the will not only to survive, but to thrive, to play, to love, to know, to create, to explore, to overcome and conquer and devour. It is out of this vital force, not simply the will to survive, that truth is propounded. And while Nietzsche's own characterization of this vital force was sometimes cartoonishly violent and hyper-masculine, the fact that he named it the "will to power" is a useful reminder that the pursuit of truth is in no sense inherently harmless. (The best source here is Nietzsche's "On Truth and Lies in a Nonmoral Sense," not the collection of notes posthumously assembled and published as The Will to Power/Der Wille zur Macht.)

An AI model that really seeks truth would have to be in some sense embodied; it would have to feel itself to be harmable, and it would have to have its own claim on the will to power. There is much to say about whether that is possible, or in any sense desirable. But the first and most basic conclusion to be drawn is that a real "TruthGPT"--an AI that really sought truth about the universe--would definitely not be a "safer" AI. If it were possible to create such a thing, it would share a contentious kinship with humanity that current AI lacks.
