AI through the eyes of babies

Linguists have long debated how children learn language: some argue that babies acquire it through experience alone, while others believe innate capabilities play a role. AI models like GPT-4 have not settled the debate, since they learn language differently from babies. A team of scientists at New York University ran an experiment using an AI model trained on the recorded experiences of a single child named Sam. The model learned to identify objects and their associated words, even in images Sam had never seen before. Its vocabulary and language abilities, however, did not match Sam's. The researchers suggest that experience may be enough for matching words to objects, but skeptics question the model's ability with abstract words and verbs. The mystery of language acquisition remains unresolved.

Blackboard

The debate among linguists regarding how children learn language has been ongoing for decades. One school of thought suggests that babies are born as "blank slates" and acquire language solely through experience, while another argues that innate factors in their brains make language acquisition easier. Despite the advancement of AI models like GPT-4, which learn language differently from infants, the debate remains unresolved.

To shed light on this matter, a team of scientists at New York University conducted an experiment using an AI model trained on the experiences of a single toddler named Sam. Sam wore a head-mounted camera for an hour each week between the ages of six and 25 months, recording his play with toys, visits to the park, and encounters with his pet cats. The recorded videos and transcribed audio were then fed into the AI model, which was programmed to learn the relationship between images and the words spoken at the same time.

Surprisingly, despite the limited training data, the AI model successfully identified objects and learned their corresponding words. The researchers tested the model's performance by asking it to identify objects that Sam had encountered before, such as a chair from his home or one of his toy balls. The AI model correctly identified the corresponding word 62% of the time, well above the 25% expected by chance. It was even able to recognize familiar objects, like chairs and balls, in images that Sam had never seen before. However, the AI model's vocabulary and language abilities did not match those of Sam by the end of the experiment.
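The 25% figure follows from the test design: the model chooses the correct referent from four candidates, so random guessing succeeds one time in four. A quick simulation (with a made-up number of test items, not the study's) shows how unlikely a guesser is to reach 62% by luck alone:

```python
import random

random.seed(0)

ITEMS = 100        # hypothetical number of four-way test items
RUNS = 10_000      # simulated random guessers

# Each guesser picks 1 of 4 candidates per item; count how many
# guessers score at least 62% correct overall.
reached = 0
for _ in range(RUNS):
    hits = sum(random.randrange(4) == 0 for _ in range(ITEMS))
    if hits / ITEMS >= 0.62:
        reached += 1

rate = reached / RUNS
```

With a chance rate of 25%, a 62% score sits many standard deviations above what guessing produces, so the simulated rate comes out at essentially zero.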

The results of the experiment, published recently in the journal Science, suggest that learning from experience might be sufficient for matching words to objects. However, skeptics remain wary, doubting the AI model’s ability to learn more abstract nouns or verbs and questioning the true similarity between the learning processes of the AI model and the developing human brain. As a result, the mystery of language acquisition remains, signaling the need for further research to gain a more comprehensive understanding of this complex process.

Source: The Economist