New findings reveal untrained AI can mirror human brain responses

Johns Hopkins says brain-inspired AI can mimic neural activity before training.

Brain-based designs could cut compute and speed AI learning.

Researchers at Johns Hopkins report that brain-inspired AI architectures can display human-like neural activity before any training, suggesting that structural design may provide a stronger starting point than data-heavy training alone. The findings challenge long-held assumptions about how machine intelligence develops.

Researchers tested modified transformers, fully connected networks, and convolutional networks across multiple variants. They compared the responses of untrained models with neural data recorded from humans and primates viewing identical images, giving a direct measure of how much architecture alone, before any learning, shapes brain-like responses.
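A minimal sketch of what such a comparison can look like, assuming a representational-similarity-style analysis: untrained networks are left at random initialization, their responses to the same stimulus images are collected, and the stimulus-by-stimulus geometry of those responses is correlated with the geometry of the neural recordings. The specific models (ResNet-18, ViT-B/16), the `neural_responses` placeholder, and the random stimuli below are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: compare untrained networks to neural data via representational
# similarity. All data here is random stand-in data.
import numpy as np
import torch
import torchvision.models as models
from scipy.stats import spearmanr


def rdm(features: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the responses to every pair of stimuli."""
    return 1.0 - np.corrcoef(features)


def upper_triangle(m: np.ndarray) -> np.ndarray:
    """Flatten the upper triangle (excluding the diagonal) of a square matrix."""
    return m[np.triu_indices_from(m, k=1)]


# Untrained networks: weights stay at random initialization (weights=None).
untrained_cnn = models.resnet18(weights=None).eval()
untrained_vit = models.vit_b_16(weights=None).eval()

# `images` stands in for the stimuli shown to human observers
# (n_stimuli, 3, 224, 224); random tensors are used here for illustration.
images = torch.randn(50, 3, 224, 224)

# `neural_responses` stands in for recorded brain activity to the same
# stimuli (n_stimuli, n_channels); random values are used here.
neural_responses = np.random.randn(50, 100)

with torch.no_grad():
    cnn_feats = untrained_cnn(images).numpy()
    vit_feats = untrained_vit(images).numpy()

# Correlate each untrained model's stimulus geometry with the neural data;
# a higher rho means closer alignment with the brain responses.
brain_rdm = upper_triangle(rdm(neural_responses))
for name, feats in [("untrained CNN", cnn_feats), ("untrained ViT", vit_feats)]:
    rho, _ = spearmanr(upper_triangle(rdm(feats)), brain_rdm)
    print(f"{name}: Spearman rho vs. neural data = {rho:.3f}")
```

In this framing, a higher correlation for the untrained convolutional network than for the untrained transformer would indicate that the convolutional architecture itself, rather than any learned weights, is doing the aligning.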

Transformers and fully connected networks showed little change in brain alignment when scaled up. Convolutional models, by contrast, produced response patterns that aligned more closely with human brain activity, suggesting that architecture is a decisive factor early in development.

Untrained convolutional models matched aspects of systems trained on millions of images. The results suggest brain-like structures could cut reliance on vast datasets and energy-intensive computation. The implications may reshape how advanced models are engineered.

Further research will examine simple, biologically inspired learning rules. The team plans to integrate these mechanisms into future AI frameworks. The goal is to combine architecture and biology to accelerate meaningful advances.
