About

I'm a second-year PhD student in Computer Vision at the University of Bristol. My area of interest is Video Understanding, specifically Skill Determination and Egocentric Video. I'm supervised by Prof. Walterio Mayol-Cuevas and Dr. Dima Damen and began my PhD in September 2016. I completed my MEng in Computer Science at the University of Bristol in June 2016.

Projects

The Pros and Cons: Rank-aware Temporal Attention for Skill Determination in Long Videos

Hazel Doughty, Walterio Mayol-Cuevas and Dima Damen

We present a new model to determine relative skill from long videos, through learnable temporal attention modules. Previous work formulates skill determination for common tasks as a ranking problem, yet measures skill from randomly sampled video segments. We believe this approach to be limiting since many parts of the video are irrelevant to assessing skill, and there may be variability in the skill exhibited throughout a video. Assessing skill from a single section may not reflect the overall skill in the video. We propose to train rank-specific temporal attention modules, learned with only video-level supervision, using a novel rank-aware loss function. In addition to attending to task-relevant video parts, our proposed loss jointly trains two attention modules to separately attend to video parts which are indicative of higher (pros) and lower (cons) skill. We evaluate the approach on the public EPIC-Skills dataset and additionally collect and annotate a larger dataset for skill determination with five previously unexplored tasks. Our method outperforms previous approaches and classic softmax attention on both datasets by over 4% in pairwise accuracy, and by as much as 12% on individual tasks. We also demonstrate our model's ability to attend to rank-aware parts of the video.

[arXiv] [Dataset]
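
As a loose illustration of the idea, the sketch below shows learnable temporal attention over per-segment features with a two-branch margin ranking loss, written in PyTorch. It is a schematic reading of the abstract, not the paper's architecture: the module names, the feature dimension, and the exact form of the rank-aware loss are all assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalAttention(nn.Module):
        """Scores each video segment and returns an attention-weighted feature."""
        def __init__(self, feat_dim):
            super().__init__()
            self.scorer = nn.Linear(feat_dim, 1)

        def forward(self, segments):                           # (T, feat_dim)
            weights = F.softmax(self.scorer(segments), dim=0)  # (T, 1)
            return (weights * segments).sum(dim=0)             # (feat_dim,)

    # One module attends to evidence of higher skill (pros), the other to
    # evidence of lower skill (cons); a shared linear ranker scores features.
    pros_att, cons_att = TemporalAttention(1024), TemporalAttention(1024)
    ranker = nn.Linear(1024, 1)

    def rank_aware_loss(high_segs, low_segs, margin=1.0):
        # Pros branch: the higher-skill video should outscore the lower one.
        loss_pros = F.relu(margin - (ranker(pros_att(high_segs))
                                     - ranker(pros_att(low_segs))))
        # Cons branch: the lower-skill video should outscore the higher one,
        # pushing this branch toward evidence of poor execution.
        loss_cons = F.relu(margin - (ranker(cons_att(low_segs))
                                     - ranker(cons_att(high_segs))))
        return (loss_pros + loss_cons).squeeze()

Note that only video-level supervision (which of the two videos is more skilled) enters the loss; the segment-level attention weights are learned indirectly through the ranking objective.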

Scaling Egocentric Vision: The EPIC-KITCHENS Dataset

Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray in European Conference on Computer Vision (ECCV), 2018. (Oral)

First-person vision is gaining interest as it offers a unique viewpoint on people's interaction with objects, their attention, and even intention. However, progress in this challenging domain has been relatively slow due to the lack of sufficiently large datasets. In this paper, we introduce EPIC-KITCHENS, a large-scale egocentric video benchmark recorded by 32 participants in their native kitchen environments. Our videos depict non-scripted daily activities: we simply asked each participant to start recording every time they entered their kitchen. Recording took place in 4 cities (in North America and Europe) with participants belonging to 10 different nationalities, resulting in highly diverse cooking styles. Our dataset features 55 hours of video consisting of 11.5M frames, which we densely labeled for a total of 39.6K action segments and 454.3K object bounding boxes. Our annotation is unique in that we had the participants narrate their own videos (after recording), thus reflecting true intention, and we crowd-sourced ground-truths based on these. We describe our object, action and anticipation challenges, and evaluate several baselines over two test splits, seen and unseen kitchens.

[arXiv] [Webpage & Dataset]
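
For readers who want to explore the annotations, a minimal pandas example is below. The file name and column names reflect the public release as I recall it, so treat them as assumptions and check the dataset webpage for the authoritative schema.

    import pandas as pd

    # Assumed layout of the released action annotations: one row per
    # narrated action segment within a longer kitchen video.
    labels = pd.read_csv("EPIC_train_action_labels.csv")

    first = labels.iloc[0]
    print(first["participant_id"], first["video_id"], first["narration"])
    print(first["start_timestamp"], first["stop_timestamp"])

    # Number of annotated action segments per participant.
    print(labels.groupby("participant_id").size())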

Who's Better? Who's Best? Pairwise Deep Ranking for Skill Determination

Hazel Doughty, Dima Damen and Walterio Mayol-Cuevas in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. (Spotlight)

We present a method for assessing skill from video, applicable to a variety of tasks, ranging from surgery to drawing and rolling pizza dough. We formulate the problem as pairwise (who's better?) and overall (who's best?) ranking of video collections, using supervised deep ranking. We propose a novel loss function that learns discriminative features when a pair of videos exhibit variance in skill, and learns shared features when a pair of videos exhibit comparable skill levels. Results demonstrate our method is applicable across tasks, with the percentage of correctly ordered pairs of videos ranging from 70% to 83% across four datasets. We demonstrate the robustness of our approach via sensitivity analysis of its parameters. We see this work as an effort toward the automated organization of how-to video collections and, overall, generic skill determination in video.

[arXiv] [Bibtex] [Dataset]
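
The loss described in the abstract has two regimes, which can be sketched roughly as follows. This is a minimal schematic in PyTorch; the function name, the squared-difference similarity term, and the margin value are assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def pairwise_skill_loss(score_a, score_b, comparable, margin=1.0):
        """score_a, score_b: scalar skill scores from a shared (Siamese)
        network for videos A and B, with A annotated as higher skill
        whenever the pair is ranked. `comparable` marks pairs that
        exhibit similar skill."""
        if comparable:
            # Comparable skill: pull the two scores together,
            # encouraging shared features.
            return (score_a - score_b) ** 2
        # Clear skill gap: hinge/margin ranking loss,
        # encouraging discriminative features.
        return F.relu(margin - (score_a - score_b))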

Publications

2018

Doughty, H., Mayol-Cuevas, W. and Damen, D., 2018. 'The Pros and Cons: Rank-aware Temporal Attention for Skill Determination in Long Videos'. arXiv preprint arXiv:1812.05538.

Damen, D., Doughty, H., Farinella, G.M., Fidler, S., Furnari, A., Kazakos, E., Moltisanti, D., Munro, J., Perrett, T., Price, W. and Wray, M., 2018. 'Scaling Egocentric Vision: The EPIC-KITCHENS Dataset'. European Conference on Computer Vision (ECCV).

Doughty, H., Damen, D. and Mayol-Cuevas, W., 2018. 'Who's Better? Who's Best? Pairwise Deep Ranking for Skill Determination'. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).