Learning and memory in modern-day digital learning environments

The researchers in this project aim to generate new knowledge about the neural underpinnings of optimal learning during natural viewing behavior in digital multimodal learning environments. They hope this will provide a basis for more effective teaching and learning methods.

For thousands of years, people have acquired knowledge by reading handwritten or printed materials and talking with each other. These behaviors have changed radically following the digital revolution that began some twenty-five years ago and has continued apace ever since. Most of our learning now occurs in interaction with multimodal tools on computer, tablet, and cellphone screens. We rarely rely on a single source of information; we constantly flit between sources that combine written text, still images, sound, and video.

The learning process in these dynamic multimodal environments requires us to constantly deal with a mass of impressions and information types, which places great demands on our cognitive abilities and impacts how we perceive, select, sort, process, and remember information. Therefore, future neurocognitive research must determine how knowledge is acquired and optimized in present-day learning contexts.

Current research tends to study learning and memory processes in somewhat unnatural laboratory experiments, where static information is presented sequentially at a single location on a computer screen and participants are instructed not to move their gaze away from the stimuli. The researchers believe that such experimental situations cannot capture the natural behavior that occurs as we freely look around and explore different types of information on, for instance, a computer screen or a tablet. They argue that knowledge of the neurocognitive foundations of learning in these new multimodal information environments is therefore still very limited. This major gap in our knowledge needs to be filled if we are to understand learning and memory beyond current simplified models.

The project aims to generate new knowledge by using the latest developments in two technologies: eye tracking and analysis of brain activity (EEG). Eye tracking reveals how we visually “sample” the world as we process and learn new information, but it does not provide insight into the underlying neural mechanisms. EEG gives that insight but has traditionally not allowed for unrestricted eye movements, because the eye movements themselves contaminate the EEG signal with artifacts. This shortcoming has prevented researchers from adequately understanding how learning and memory are orchestrated as we explore our surroundings. The project combines the strengths of both methods in a novel way.

The project uses simultaneous recording of EEG and eye tracking, together with advanced machine learning methods for multivariate pattern analysis (MVPA). This allows the researchers to study how we piece together and learn new information as our spontaneous eye movements sample the presented learning materials, and how this process relates to critical internal and external factors that affect learning. The latest neurofeedback methods will also be used to improve and optimize the learning process.
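To make the general idea of MVPA on co-registered data more concrete, the sketch below shows a minimal, purely illustrative example: EEG epochs time-locked to fixation onsets (the link provided by eye tracking) are decoded with a linear classifier at each time point to ask when brain activity distinguishes later-remembered from later-forgotten items. The data shapes, labels, and classifier choice are assumptions for illustration only, not the project’s actual analysis pipeline.

```python
# Minimal, hypothetical sketch of time-resolved MVPA on fixation-locked EEG epochs.
# Simulated data stand in for real recordings; nothing here reflects the project's code.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 200, 64, 150      # epochs time-locked to fixation onsets
X = rng.standard_normal((n_epochs, n_channels, n_times))  # simulated EEG: epochs x channels x time
y = rng.integers(0, 2, n_epochs)                  # 1 = item later remembered, 0 = later forgotten

# Decode the memory outcome from the channel pattern at each time point separately.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak cross-validated decoding accuracy:", accuracy.max())
```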

The project’s combination of research techniques and methods will provide unique insights into how the brain uses our natural viewing behavior to build coherent representations of new knowledge. The project results will provide a basis for actively influencing and improving knowledge acquisition in modern-day multimodal learning environments.

Project:
“A closer look at knowledge acquisition in the digital era”

Principal investigator:
Dr. Andrey Nikolaev

Co-investigators:
Lund University
Inês Bramão
Nils Holmberg
Mikael Johansson
Roger Johansson

Institution:
Lund University

Grant:
SEK 4.5 million