Projects

ALGO – Adaptive Live Game Operations

The ALGO project started in 2019 in collaboration with Tactile Games, with the purpose of investigating how to accurately estimate players' responses to new content in puzzle games. During the project, we have investigated how to define concepts such as content difficulty, and we have worked on building models that can accurately estimate the difficulty of a newly designed digital puzzle. Towards this objective, we have developed reinforcement-learning-based agents that can play puzzle games, and we have explored how to combine the synthetic behaviour data produced by such agents with data produced by human players.
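As a minimal, hypothetical sketch of this idea (not the project's actual pipeline; all function names, probabilities, and calibration numbers below are invented for illustration), one can simulate agent playthroughs of a level to obtain a synthetic pass rate, then calibrate a simple model that maps agent pass rates to observed player pass rates:

```python
import random


def simulate_agent_runs(solve_prob, n_runs=1000, seed=0):
    """Simulate n_runs attempts by an agent whose per-attempt success
    probability on this level is solve_prob; return the observed pass
    rate, which serves as a synthetic difficulty signal."""
    rng = random.Random(seed)
    wins = sum(rng.random() < solve_prob for _ in range(n_runs))
    return wins / n_runs


def fit_linear(xs, ys):
    """Least-squares line mapping agent pass rate -> player pass rate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx


# Invented calibration data: agent pass rates vs. player pass rates
# measured on levels where both kinds of data are available.
agent_rates = [0.9, 0.7, 0.5, 0.3]
player_rates = [0.8, 0.6, 0.45, 0.25]
a, b = fit_linear(agent_rates, player_rates)

# Estimate the player pass rate of a new level from synthetic runs alone.
new_level_agent_rate = simulate_agent_runs(solve_prob=0.6)
predicted_player_rate = a * new_level_agent_rate + b
```

The design point the sketch captures is that the agent supplies cheap, unlimited playthroughs of unreleased content, while the (scarcer) human data is only needed once, to calibrate the mapping from synthetic to real behaviour.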

Deep Visual Perception Learning

Can we build a computational model of the brain? What can such an effort tell us about the human mind? Would such a model be a true example of artificial intelligence? These are some of the questions that have motivated many AI researchers in creating machines that can mimic natural intelligence. With this project, we aim to develop the foundations of a long-term effort to answer these questions, starting with human visual perception.

We are investigating how to build interpretable deep learning models of visual perception, with the long-term goal of establishing deep neural networks as a tool for interpreting the human brain. In particular, given the mediating role of eye movements in the visual perception process, we are working on building models that leverage both EEG and eye-tracking using neural network architectures that can process spatio-temporal data, so that we can capture both topological and sequential patterns in the brain response.
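To make the fusion idea concrete, here is a minimal sketch, assuming NumPy; the array shapes, the moving-average kernel, and the summary statistics are illustrative assumptions, not the project's actual architecture. EEG is treated as a (channels × time) signal filtered along time to expose sequential structure, while eye-tracking fixations contribute spatial summary features, and the two are concatenated into one feature vector:

```python
import numpy as np


def temporal_conv(eeg, kernel):
    """1-D convolution along the time axis, applied per EEG channel,
    as a stand-in for a temporal feature extractor."""
    return np.stack([np.convolve(ch, kernel, mode="valid") for ch in eeg])


def fuse_features(eeg, gaze):
    """Toy fusion of EEG and eye-tracking into one feature vector.

    eeg:  (channels, time) array of voltages
    gaze: (fixations, 2) array of x/y fixation coordinates
    """
    smoothing = np.ones(5) / 5.0  # simple moving-average kernel
    eeg_feat = temporal_conv(eeg, smoothing).mean(axis=1)  # per-channel summary
    gaze_feat = np.concatenate([gaze.mean(axis=0),   # mean fixation point
                                gaze.std(axis=0)])   # spatial dispersion
    return np.concatenate([eeg_feat, gaze_feat])


rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))     # 8 channels, 256 time steps (synthetic)
gaze = rng.uniform(0, 1, size=(20, 2))  # 20 fixations on a unit screen (synthetic)
features = fuse_features(eeg, gaze)     # shape: (8 + 4,) = (12,)
```

In a real spatio-temporal model the hand-written convolution and summary statistics would be replaced by learned layers, but the shape of the problem is the same: temporal patterns from the EEG stream and spatial patterns from gaze are mapped into a shared representation.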

This project is part of the Pioneer Center of Artificial Intelligence.

APPLE – Adaptive Procedural Physical Learning Environment

Based on the digital educational platform developed by YOLI, the APPLE project's objective is to integrate multi-modal learning analytics and procedural content generation to improve the learning experience of pre-school children and to empower kindergarten educators.

We are investigating how the fusion of data that can be collected through YOLI's hybrid digital-physical learning platform can be used to capture the learning experience, and how these captured experiences can be used to personalise the learning content. These studies aim to contribute to a better understanding of the role of multi-modal data in children's learning analytics and the potential of content personalisation to improve the learning experience.

This project is co-financed by YOLI, KMD and the IT University of Copenhagen.