Maarten Sap, a computer scientist at Carnegie Mellon University, fed more than 1,000 theory of mind tests into large language models and found that the most advanced transformers, like ChatGPT and GPT-4, passed only about 70 percent of the time. (In other words, they were 70 percent successful at attributing false beliefs to the people described in the test situations.) The discrepancy between his data and Dr. Kosinski’s could come down to differences in the testing, but Dr. Sap said that even passing 95 percent of the time would not be evidence of real theory of mind. Machines usually fail in a patterned way, unable to engage in abstract reasoning and often making “spurious correlations,” he said.

Dr. Ullman noted that machine learning researchers have struggled over the past couple of decades to capture the flexibility of human knowledge in computer models. This difficulty has been a “shadow finding,” he said, hanging behind every exciting innovation. Researchers have shown that language models will often give wrong or irrelevant answers when primed with unnecessary information before a question is posed; some chatbots were so thrown off by hypothetical discussions about talking birds that they eventually claimed that birds could speak. Because their reasoning is sensitive to small changes in their inputs, scientists have called the knowledge of these machines “brittle.”

Dr. Gopnik compared the theory of mind of large language models to her own understanding of general relativity. “I have read enough to know what the words are,” she said. “But if you asked me to make a new prediction or to say what Einstein’s theory tells us about a new phenomenon, I’d be stumped because I don’t really have the theory in my head.” By contrast, she said, human theory of mind is linked with other common-sense reasoning mechanisms; it stands strong in the face of scrutiny.

In general, Dr. Kosinski’s work and the responses to it fit into the debate about whether the capacities of these machines can be compared to the capacities of humans — a debate that divides researchers who work on natural language processing. Are these machines stochastic parrots, or alien intelligences, or fraudulent tricksters? A 2022 survey of the field found that, of the 480 researchers who responded, 51 percent believed that large language models could eventually “understand natural language in some nontrivial sense,” and 49 percent believed that they could not.

Dr. Ullman doesn’t discount the possibility of machine understanding or machine theory of mind, but he is wary of attributing human capacities to nonhuman things. He noted a famous 1944 study by Fritz Heider and Marianne Simmel, in which participants were shown an animated movie of two triangles and a circle interacting. When the subjects were asked to write down what transpired in the movie, nearly all described the shapes as people.

“Lovers in the two-dimensional world, no doubt; little triangle number-two and sweet circle,” one participant wrote. “Triangle-one (hereafter known as the villain) spies the young love. Ah!”

It’s natural and often socially required to explain human behavior by talking about beliefs, desires, intentions and thoughts. This tendency is central to who we are — so central that we sometimes try to read the minds of things that don’t have minds, at least not minds like our own.


