Small Language Models Alignment and Safety
Despite the burgeoning potential of Transformer-based language models, the game industry, like many others, faces significant challenges in deploying Large Language Models (LLMs): limited computational resources, stringent requirements for context sensitivity, and the need for robust safety measures.
A player can’t wait several seconds for a response, as the delay breaks immersion. Nor can the solution monopolise computational power or hallucinate answers that ruin the rest of the gaming experience. Current solutions fall short of meeting these needs comprehensively: they require large amounts of computation and lack the customisation and safeguarding mechanisms needed to keep interactions both engaging and secure.
The project, conducted in collaboration with Raw Power Games, is focused on researching and developing technologies to improve the quality, safety and efficiency of machine-learning-based language models to be deployed in digital games and other interactive applications.
Deep Visual Perception Learning
Can we build a computational model of the brain? What can such an effort tell us about the human mind? Would such a model be a true example of artificial intelligence? These are some of the questions that have motivated many AI researchers in creating machines that can mimic natural intelligence. With this project, we aim to develop the foundations of a long-term effort to answer the first of these questions, starting with human visual perception.
We are investigating how to build interpretable deep learning models of visual perception, with the long-term goal of establishing deep neural networks as a tool to interpret the human brain. In particular, given the mediating role of eye movements in the visual perception process, we are working on building models that leverage both EEG and eye-tracking data, using neural network architectures that can process spatio-temporal data, so that we can capture both topological and sequential patterns in the brain response.
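As a rough illustration of the fusion idea described above, the sketch below extracts per-channel EEG features with a temporal convolution (capturing topological, per-channel, and temporal structure), summarises a gaze sequence with a simple recurrent pass, and concatenates the two into one feature vector. All shapes, weights, and function names are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def eeg_features(eeg, kernel):
    """Temporal convolution over each EEG channel, then mean-pool.

    eeg: (channels, timesteps) array; kernel: (k,) temporal filter."""
    conv = np.stack([np.convolve(ch, kernel, mode="valid") for ch in eeg])
    return np.maximum(conv, 0.0).mean(axis=1)      # ReLU + mean-pool -> (channels,)

def gaze_features(fixations, W, h0):
    """Simple recurrent pass over a sequence of gaze fixations.

    fixations: (steps, 2) array of (x, y) positions."""
    h = h0
    for xy in fixations:
        h = np.tanh(W @ np.concatenate([h, xy]))   # update hidden state
    return h                                       # (hidden,)

# Hypothetical dimensions: 8 EEG channels, 128 timesteps, 5 fixations.
eeg = rng.standard_normal((8, 128))
fix = rng.standard_normal((5, 2))

kernel = rng.standard_normal(9)
hidden = 4
W = rng.standard_normal((hidden, hidden + 2)) * 0.5

# Late fusion by concatenation: 8 EEG features + 4 gaze features.
fused = np.concatenate([eeg_features(eeg, kernel),
                        gaze_features(fix, W, np.zeros(hidden))])
print(fused.shape)  # (12,)
```

A trained model would learn the filter and recurrent weights end-to-end and feed the fused vector to a task head; concatenation is just one simple fusion choice among several.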
This project is part of the Pioneer Center of Artificial Intelligence.
APPLE – Adaptive Procedural Physical Learning Environment and CREATE
Based on a digital educational platform developed by YOLI, the APPLE project’s objective is to integrate multi-modal learning analytics and procedural content generation to improve the learning experience of pre-school children and to empower kindergarten educators.
We are investigating how the fusion of data collected through YOLI’s hybrid digital-physical learning platform can be used to capture the learning experience, and how these captured experiences can be used to personalise the learning content. These studies aim to contribute to a better understanding of the role of multi-modal data in children’s learning analytics and of the potential of content personalisation to improve the learning experience.
This project is co-financed by YOLI, KMD and the IT University of Copenhagen.
ALGO – Adaptive Live Game Operations [completed]
The ALGO project started in 2019 in collaboration with Tactile Games, with the purpose of investigating how to accurately estimate the player’s response to new content in puzzle games. During the project, we investigated how to define concepts such as content difficulty, and we worked on building models that can accurately estimate the difficulty of a newly designed digital puzzle. Towards this objective, we developed reinforcement-learning-based agents that can play puzzle games, and we explored how to combine the synthetic behaviour data produced by such agents with data produced by human players.
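The idea of estimating level difficulty from agent play-throughs, then blending the result with player-derived data, can be sketched with a toy stand-in. The project's agents are reinforcement-learning-based; here a simple stochastic agent on a hypothetical puzzle plays the same role, and difficulty is proxied by the agent's failure rate within a move budget. All probabilities, budgets, and names below are illustrative assumptions.

```python
import random

def rollout(solve_prob, max_moves=30):
    """One synthetic play-through of a toy puzzle: each move succeeds
    with probability solve_prob; the level is cleared on the first success."""
    for move in range(1, max_moves + 1):
        if random.random() < solve_prob:
            return move          # moves needed to clear the level
    return None                  # failed within the move budget

def estimated_difficulty(solve_prob, episodes=2000):
    """Difficulty proxy: fraction of agent rollouts that fail the level."""
    fails = sum(rollout(solve_prob) is None for _ in range(episodes))
    return fails / episodes

def blended_difficulty(agent_est, player_est, weight=0.7):
    """Combine the synthetic (agent) estimate with a player-derived one."""
    return weight * agent_est + (1 - weight) * player_est

random.seed(1)
easy = estimated_difficulty(0.15)   # agent clears most rollouts
hard = estimated_difficulty(0.01)   # agent fails most rollouts
print(easy < hard)  # True: harder levels fail more often
```

In practice the agent's per-level statistics (failure rate, moves used, and so on) would feed a learned model calibrated against real player data, rather than the fixed weighted average used here.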