
EYE TRACKING

Analyzing Concentration Levels in Online Learning with Facial Features


ABOUT OUR PROJECT

Our project collected eye and facial data in a privacy-preserving way and fed that data into machine learning models, producing automated programs that can detect whether a given data set represents someone who is distracted or concentrating. The project was divided into three main periods, known as milestones, each with its own goals, discussed below.


COLLECTING DATA AND FINDING AN EYE TRACKER

In the first milestone, we focused on generating the seed data. All members recorded short, one-minute video clips of themselves learning online. In roughly half of the clips we were intentionally distracted, looking around the room, while in the other half we stayed focused, staring directly at the webcam. We then began using deepfake software to swap our faces with celebrity portraits as a way of preserving privacy. Additionally, we researched and selected an eye tracker for eye data collection and extraction, settling on an open-source package called Gaze Tracking.
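Gaze Tracking reports per-frame numbers such as `horizontal_ratio()` and `vertical_ratio()`, values between 0.0 and 1.0 where 0.5 means the pupils point at the center (and `None` when no pupil is found in a frame). A minimal sketch, assuming the per-frame ratios have already been read from one clip; the helper name and the choice of summary statistics below are our own illustration, not part of the library:

```python
from statistics import mean, pstdev

def clip_features(h_ratios, v_ratios):
    """Summarize per-frame gaze ratios (0.0-1.0) into one feature row.

    Gaze Tracking's horizontal_ratio()/vertical_ratio() return None when
    no pupil is detected, so those frames are dropped first.
    """
    pairs = [(h, v) for h, v in zip(h_ratios, v_ratios)
             if h is not None and v is not None]
    hs = [h for h, _ in pairs]
    vs = [v for _, v in pairs]
    return {
        "h_mean": mean(hs),   # average horizontal gaze position
        "h_std": pstdev(hs),  # how much the gaze wanders sideways
        "v_mean": mean(vs),
        "v_std": pstdev(vs),
        # fraction of frames with a roughly centered gaze (0.35-0.65 band
        # is an arbitrary illustrative threshold)
        "center_frac": sum(1 for h, _ in pairs if 0.35 <= h <= 0.65) / len(pairs),
    }

# A steady, centered gaze: low spread, high center fraction.
steady = clip_features([0.5, 0.52, 0.48, None], [0.5, 0.5, 0.5, 0.5])
```

One such row per one-minute clip is what would later be written to file for the models.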

DEEP FAKING AND EYE DATA MODELING

In the second milestone, we generated a large pool of deepfake videos, 200 concentrated and 200 distracted, from our original recordings, ensuring we had a substantial amount of eye data to extract. Once this pool was generated, we used Gaze Tracking to extract the eye data from the deepfake videos and store it in files readable by the machine learning models. As a sanity check, we trained a few models on the eye data alone to verify we were on the right track, and the test was a success.
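The writeup does not name the models used for the eye-only test, but any simple classifier over the stored feature rows works as a sanity check. As an illustration only, here is a nearest-centroid classifier in plain Python, standing in for whatever models the team actually trained:

```python
def nearest_centroid_fit(rows, labels):
    """Compute one mean feature vector (centroid) per class label."""
    sums, counts = {}, {}
    for row, label in zip(rows, labels):
        acc = sums.setdefault(label, [0.0] * len(row))
        for i, x in enumerate(row):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def nearest_centroid_predict(centroids, row):
    """Assign the label whose centroid is closest in squared distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(centroids[label], row))

# Toy feature rows [h_std, center_frac]: distracted clips wander more.
train_rows = [[0.02, 0.90], [0.03, 0.95], [0.20, 0.30], [0.25, 0.20]]
train_labels = ["concentrated", "concentrated", "distracted", "distracted"]
centroids = nearest_centroid_fit(train_rows, train_labels)
```

The same fit/predict split applies regardless of which real model family the files are fed into later.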

FACIAL DATA EXTRACTION AND FINAL MACHINE LEARNING TRAINING

In the third and final milestone, the team, with considerable help from our sponsors, used a program to extract facial data and combined it with the eye data stored earlier; both were then fed into a machine learning model together. We tried several combinations of parameters in the code to find an optimal set, but across every combination we tried, accuracy consistently fell within 60 to 70%. In other words, the machine had learned to classify a combined set of eye and facial features as distraction or concentration well above the 50% chance baseline. Considering that this project relied on multiple open-source tools and was our team’s first time working with machine learning, we believe this accuracy could be raised further in the future and reach more “deployment ready” targets.
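The parameter sweep described above amounts to scoring every combination on held-out clips and keeping the best. A minimal sketch; the parameter names in the usage example are placeholders, since the writeup does not list the actual parameters tried:

```python
from itertools import product

def accuracy(predictions, labels):
    """Fraction of clips classified correctly."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def grid_search(evaluate, grid):
    """Try every parameter combination; return (best_params, best_accuracy).

    `evaluate` maps a parameter dict to held-out accuracy;
    `grid` maps each parameter name to its candidate values.
    """
    names = sorted(grid)
    best_params, best_acc = None, -1.0
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        acc = evaluate(params)
        if acc > best_acc:
            best_params, best_acc = params, acc
    return best_params, best_acc

# Hypothetical sweep: a stand-in evaluate() that prefers a 30-frame window.
best, best_acc = grid_search(
    lambda p: 0.65 if p["window"] == 30 else 0.55,
    {"window": [10, 30], "smooth": [True, False]},
)
```

With real data, `evaluate` would train the combined eye-and-face model under the given parameters and report its held-out accuracy, which is where the 60 to 70% figures came from.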
