WEDNESDAY, MAY 17, 2023
Is that really the best it can do? Frankly, we were puzzled. A front-page report in today's New York Times offered this head-scratching headline:
Microsoft Says New A.I. Shows Signs of Human Reasoning
The new A.I. has begun "to show signs of human reasoning?" Despondent analysts asked us this:
Is that really the best it can do?
Please forgive these youngsters. They've been bred on the claims of the later Wittgenstein, roughly as described in this short essay by Professor Horwich, long ago in the New York Times.
To what sorts of claims do we refer? Along the way, Horwich says this about the traditional fruits of philosophy:
HORWICH (3/3/13): Philosophy is respected, even exalted, for its promise to provide fundamental insights into the human condition and the ultimate character of the universe, leading to vital conclusions about how we are to arrange our lives. It’s taken for granted that there is deep understanding to be obtained of the nature of consciousness, of how knowledge of the external world is possible, of whether our decisions can be truly free, of the structure of any just society, and so on—and that philosophy’s job is to provide such understanding. Isn’t that why we are so fascinated by it?
If so, then we are duped and bound to be disappointed, says Wittgenstein. For these are mere pseudo-problems, the misbegotten products of linguistic illusion and muddled thinking...
Ooh boy! According to the later Wittgenstein, the lofty work we often describe as the highest-order human thinking consists of solutions to "mere pseudo-problems, the misbegotten products of linguistic illusion and muddled thinking!"
Now, we're told that A.I. has begun to show signs of attaining the kind of human ability which culminated in that! The analysts turned to us with inquiring eyes, saying this:
Can this really be the best those Microsoft lunkheads can do?
For extra credit only: At the start of his report in the Times, Cade Metz describes the kinds of human reasoning this Microsoft system has been asked to perform:
METZ (5/17/23): When computer scientists at Microsoft started to experiment with a new artificial intelligence system last year, they asked it to solve a puzzle that should have required an intuitive understanding of the physical world.
“Here we have a book, nine eggs, a laptop, a bottle and a nail,” they asked. “Please tell me how to stack them onto each other in a stable manner.”
The researchers were startled by the ingenuity of the A.I. system’s answer. Put the eggs on the book, it said. Arrange the eggs in three rows with space between them. Make sure you don’t crack them.
“Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up,” it wrote. “The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.”
Long ago and far away, as mere college freshmen, we took the philosophy department's introductory course, Phil 3: Problems in Philosophy.
We were asked to study six different philosophical "problems." As we've admitted before, one of the problems was this:
How can you know that 7 plus 5 equals 12?
Who are these "problems" problems for? we muttered under our breath one crisp, clear autumn morning. Still, we feel that we ought to be fair:
None of the "problems" we tackled that year were quite as confounding as that!