Humans and other animals learn to extract general concepts from sensory experience without extensive teaching. This ability is thought to be facilitated by offline states such as sleep, during which previous experiences are systematically replayed. However, the characteristically creative nature of dreams suggests that learning semantic representations may go beyond merely replaying previous experiences.

Deperrois et al. suggest that generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations. Image credit: Stefan Keller.
“The importance of sleep and dreams for learning and memory has long been recognized — the impact that a single restless night can have on our cognition is well known,” said lead author Dr. Nicolas Deperrois, a researcher in the Department of Physiology at the University of Bern.
“What we lack is a theory that ties this together with consolidation of experiences, generalization of concepts and creativity.”
During sleep, we commonly experience two types of sleep phases, alternating one after the other: non-REM sleep, when the brain replays the sensory stimuli experienced while awake, and REM sleep, when spontaneous bursts of intense brain activity produce vivid dreams.
In their research, Dr. Deperrois and his colleagues used simulations of the brain cortex to model how different sleep phases affect learning.
To introduce an element of the unusual into the artificial dreams, the researchers took inspiration from a machine-learning technique called generative adversarial networks (GANs).
In a GAN, two neural networks compete with each other: a generator produces new images while a discriminator judges whether they come from the training dataset, in this case a series of simple pictures of objects and animals. The competition pushes the generator to create artificial images that can look superficially realistic to a human observer.
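The adversarial game can be illustrated with a toy sketch. This is an assumption for illustration only: the study couples its "generator" and "discriminator" roles to cortical feedback and feedforward pathways, not to the toy scalar networks used here. A one-parameter generator shifts noise toward the real data while a logistic discriminator tries to tell real from fake.

```python
# Toy sketch of the adversarial game behind GANs (illustration only; the
# study's model implements these roles within cortical pathways).
import numpy as np

rng = np.random.default_rng(0)

real = rng.normal(3.0, 0.5, size=1000)   # "real" data: scalars centred on 3.0

theta = 0.0              # generator parameter: offsets its noise samples
w, b = 0.0, 0.0          # discriminator: logistic score sigmoid(w*x + b)
d_lr, g_lr = 0.1, 0.01   # a faster critic keeps the game stable

def sigmoid(x):
    # Clip to avoid overflow when the critic's weights grow large.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

for _ in range(2000):
    x_real = rng.choice(real, 64)
    x_fake = rng.normal(0.0, 0.5, size=64) + theta

    # Discriminator ascends log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += d_lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += d_lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascends log D(fake): it shifts theta to fool the critic.
    d_fake = sigmoid(w * x_fake + b)
    theta += g_lr * np.mean((1 - d_fake) * w)

# theta is pushed toward the real data's mean: the fakes grow more realistic.
```

The same pressure that drives `theta` toward the data here is what, in the model, drives the dreamed images to become more realistic as learning proceeds.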
The researchers then simulated the cortex during three distinct states: wakefulness, non-REM sleep, and REM sleep.
During wakefulness, the model is exposed to pictures of boats, cars, dogs and other objects. In non-REM sleep, it replays the sensory inputs with some occlusions.
REM sleep creates new sensory inputs through the GANs, generating twisted but realistic versions and combinations of boats, cars, dogs etc.
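The two kinds of dream inputs can be sketched as simple data transformations. The 8x8 "images" and latent codes below are hypothetical stand-ins; in the actual model, occlusions act on stored episodic memories and the mixing acts on learned cortical representations.

```python
# Toy sketch of the two dream types as data transformations (hypothetical
# 8x8 images; the model operates on learned representations instead).
import numpy as np

rng = np.random.default_rng(1)

# Two waking experiences stored as episodic memories.
img_a = rng.random((8, 8))
img_b = rng.random((8, 8))

def nrem_replay(img, rng, patch=4):
    """Perturbed dreaming: replay a stored image with a square occlusion."""
    out = img.copy()
    r, c = rng.integers(0, img.shape[0] - patch, size=2)
    out[r:r + patch, c:c + patch] = 0.0   # occluded region
    return out

def rem_dream(z_a, z_b, rng, noise=0.1):
    """Adversarial dreaming: combine two latent codes plus spontaneous noise."""
    alpha = rng.random()
    return alpha * z_a + (1 - alpha) * z_b + noise * rng.standard_normal(z_a.shape)

occluded = nrem_replay(img_a, rng)                      # familiar, partly masked
dreamed = rem_dream(img_a.ravel(), img_b.ravel(), rng)  # novel combination
```

The non-REM transformation keeps the dream close to a single waking experience, while the REM transformation produces inputs that belong to no single experience, mirroring the "twisted but realistic" combinations described above.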
To test the performance of the model, a simple classifier evaluates how easily the identity of the object (boat, dog, car etc.) can be read from the cortical representations.
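Such a readout can be sketched as a minimal linear probe. The 16-dimensional latent vectors and the nearest-centroid classifier below are assumptions for illustration; the paper's readout details may differ.

```python
# Minimal linear readout probe on hypothetical latent representations
# (an illustrative sketch, not the paper's actual classifier).
import numpy as np

rng = np.random.default_rng(2)

# Latents for two object classes, shifted apart so they are separable.
z_boat = rng.normal(size=(50, 16)) + 2.0
z_dog = rng.normal(size=(50, 16)) - 2.0

def readout_accuracy(z0, z1):
    """Fit a nearest-centroid probe on half the latents, test on the rest."""
    c0 = z0[:25].mean(axis=0)
    c1 = z1[:25].mean(axis=0)

    def predict(z):
        # Label 1 when a latent lies closer to centroid c1, else 0.
        return (np.linalg.norm(z - c1, axis=1)
                < np.linalg.norm(z - c0, axis=1)).astype(int)

    correct = (predict(z0[25:]) == 0).sum() + (predict(z1[25:]) == 1).sum()
    return correct / 50.0

acc = readout_accuracy(z_boat, z_dog)
```

The probe's accuracy is high exactly when the latent space keeps the classes apart, which is the property the team used to compare models with and without each sleep phase.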
“Non-REM and REM dreams become more realistic as our model learns,” said senior author Dr. Jakob Jordan, also from the Department of Physiology at the University of Bern.
“While non-REM dreams resemble waking experiences quite closely, REM dreams tend to creatively combine these experiences.”
Notably, when the REM sleep phase was suppressed in the model, or when its dreams were made less creative, the accuracy of the classifier decreased.
When the non-REM sleep phase was removed instead, the cortical representations became more sensitive to sensory perturbations.
Wakefulness, non-REM and REM sleep appear to have complementary functions for learning: experiencing the stimulus, solidifying that experience, and discovering semantic concepts, according to the team.
“We think these findings suggest a simple evolutionary role for dreams, without interpreting their exact meaning,” Dr. Deperrois said.
“It shouldn’t be surprising that dreams are bizarre: this bizarreness serves a purpose. The next time you’re having crazy dreams, maybe don’t try to find a deeper meaning — your brain may be simply organizing your experiences.”
The study was published online on the arXiv.org e-print archive.
_____
Nicolas Deperrois et al. 2022. Learning cortical representations through perturbed and adversarial dreaming. arXiv: 2109.04261