Unleashing the Power of AI: Exploring Its Ability to Replicate Human Compositional Thinking

Summary: Researchers have developed a technique called Meta-learning for Compositionality (MLC), which enhances the generalization skills of artificial neural networks. MLC involves episodic training to improve the network’s ability to grasp and expand on new concepts. Surprisingly, MLC has matched or surpassed human performance in various tasks. This technique could be used to enhance the capabilities of popular models like ChatGPT and GPT-4.

Key Facts:

  1. The MLC technique focuses on episodic training of neural networks to improve their compositional generalization skills.
  2. In tasks involving novel word combinations, MLC performed as well as or better than human participants.
  3. AI models like ChatGPT and GPT-4 still struggle with compositional generalization despite recent advances; MLC could help close that gap.

Source: NYU

Humans have the ability to learn a new concept and immediately apply it to understand related uses of that concept. But can machines do the same?

In the late 1980s, philosophers and cognitive scientists Jerry Fodor and Zenon Pylyshyn suggested that artificial neural networks, which power AI and machine learning, are not capable of making these connections, known as “compositional generalizations.”

However, over the years, scientists have been working on ways to instill this capacity in neural networks, with mixed success. The debate has been ongoing.

Researchers from New York University and Pompeu Fabra University in Spain have now developed a technique called Meta-learning for Compositionality (MLC), reported in the journal Nature, that advances the ability of tools like ChatGPT to make compositional generalizations.

MLC outperforms existing approaches and performs on par with or even better than humans in certain cases.

MLC trains neural networks, the engines behind technologies like ChatGPT and other natural language processing systems, to improve their compositional generalization skills through practice.

While developers of existing systems have hoped that compositional generalization would simply emerge from standard training methods, MLC shows that explicitly practicing these skills lets such systems unlock new capabilities.

“For 35 years, researchers in cognitive science, AI, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” says Brenden Lake, an assistant professor at NYU. “We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”

To enhance compositional learning in neural networks, the researchers created MLC, a novel learning procedure where a network is continuously updated to improve its skills over a series of episodes.

In each episode, MLC receives a new word and is asked to use it compositionally. For example, it might be given the word “jump” and asked to create new word combinations like “jump twice” or “jump around right twice.” MLC then receives a new episode with a different word, continuing to improve its compositional skills.

To test the effectiveness of MLC, the researchers had human participants perform the same kinds of tasks given to the model: learning the meanings of nonsense words and applying them compositionally in new ways.

MLC performed as well as or better than humans in these experiments. It also outperformed ChatGPT and GPT-4, which struggled with this type of learning task.

“Large language models such as ChatGPT still struggle with compositional generalization, though they have improved in recent years,” says Marco Baroni, a researcher at Pompeu Fabra University. “But we believe that MLC can further improve their compositional skills.”

About this artificial intelligence research news

Author: James Devitt
Source: NYU
Contact: James Devitt – NYU
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Human-like systematic generalization through a meta-learning neural network” by Brenden Lake et al. Nature


Abstract

Human-like systematic generalization through a meta-learning neural network

The power of human language and thought arises from systematic compositionality—the algebraic ability to understand and produce novel combinations from known components.

Fodor and Pylyshyn famously argued that artificial neural networks lack this capacity and are therefore not viable models of the mind. Neural networks have advanced considerably in the years since, yet the systematicity challenge persists.

Here we successfully address Fodor and Pylyshyn’s challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills.

To do so, we introduce the meta-learning for compositionality (MLC) approach for guiding training through a dynamic stream of compositional tasks. To compare humans and machines, we conducted human behavioral experiments using an instruction learning paradigm.

After considering seven different models, we found that, in contrast to perfectly systematic but rigid probabilistic symbolic models, and perfectly flexible but unsystematic neural networks, only MLC achieves both the systematicity and flexibility needed for human-like generalization. MLC also advances the compositional skills of machine learning systems in several systematic generalization benchmarks.

Our results show how a standard neural network architecture, optimized for its compositional skills, can mimic human systematic generalization in a head-to-head comparison.

