WEyeDS is a webcam gaze estimation dataset intended to support research on appearance-based gaze estimation. The dataset was initially collected online with 38 participants and later expanded to 54 participants. We trained and evaluated a CNN model to benchmark the dataset. The code for data collection and model training, as well as the dataset itself, is available below.
The dataset contains full-face images as well as left- and right-eye crops for each participant. Each image is accompanied by a label giving the coordinates of the dot the participant was looking at. Each coordinate is a number between 0 and 1, expressed as a proportion of the width or height of the participant's viewport. The dataset also contains metadata about each participant's machine and screen details.
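Because the labels are proportions of the viewport, they can be mapped back to pixel coordinates using the screen details in the metadata. The sketch below illustrates that conversion; the names gaze_x, gaze_y, viewport_width, and viewport_height are hypothetical placeholders, not the dataset's actual column names.

```python
# Minimal sketch (not the official loader): convert a viewport-proportion
# gaze label (values in [0, 1]) to pixel coordinates on the participant's
# viewport. Field names here are illustrative placeholders.

def label_to_pixels(gaze_x: float, gaze_y: float,
                    viewport_width: int, viewport_height: int) -> tuple[int, int]:
    """Map normalized (0-1) gaze coordinates to viewport pixel coordinates."""
    px = round(gaze_x * viewport_width)
    py = round(gaze_y * viewport_height)
    return px, py

# Example: a dot 25% across and 60% down a 1920x1080 viewport.
print(label_to_pixels(0.25, 0.60, 1920, 1080))  # (480, 648)
```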
The data were collected online using JATOS and lab.js. The code for the data collection is also available on GitHub.
Data Collection Code

We are happy to share the data with anyone interested! Please fill out the form below and we will send you the download link. Please make sure to use your institutional email. If you have any questions, please reach us at robbinslab@richmond.edu.
Download Dataset

Please use the following citation if you are using the dataset. Thank you!
@inproceedings{evdokimov2024weyeds,
author = {Evdokimov, Anatolii and Finegan-Dollak, Catherine and Robbins, Arryn},
title = {WEyeDS: A desktop webcam dataset for gaze estimation},
year = {2024},
isbn = {9798400706073},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3649902.3655646},
doi = {10.1145/3649902.3655646},
booktitle = {Proceedings of the 2024 Symposium on Eye Tracking Research and Applications},
articleno = {52},
numpages = {2},
keywords = {appearance-based models, dataset, eye tracking, gaze estimation, webcam},
location = {Glasgow, United Kingdom},
series = {ETRA '24}
}
We are happy to answer any of your questions! Please send an email to robbinslab@richmond.edu, and we will do our best to address them.