Terrifyingly, Facebook wants its AI to be your eyes and ears
They have gathered over 3,025 hours of daily-life activity video for their Ego4D dataset
What is Ego4D?
Facebook describes the heart of the project as a massive-scale, egocentric dataset and benchmark suite collected across 74 worldwide locations and nine countries, with over 3,025 hours of daily-life activity video.
The “Ego” in Ego4D means egocentric (or “first-person” video), while “4D” stands for the three dimensions of space plus one more: time. In essence, Ego4D seeks to combine photos, video, geographical information and other data to build a model of the user’s world.
There are two components: a large dataset of first-person photos and videos, and a “benchmark suite” consisting of five challenging tasks that can be used to compare different AI models or algorithms with each other. These benchmarks involve analyzing first-person videos to remember past events, create diary entries, understand interactions with objects and people, and forecast future events.
The dataset includes more than 3,000 hours of first-person video from 855 participants going about everyday tasks, captured with a variety of devices including GoPro cameras and augmented reality (AR) glasses. The videos cover activities at home, in the workplace, and in hundreds of social settings.
What is in the data set?
Although this is not the first such video dataset to be introduced to the research community, it is 20 times larger than existing publicly available datasets. It includes video, audio, 3D mesh scans of the environment, eye gaze, stereo, and synchronized multi-camera views of the same event.
Most of the recorded footage is unscripted or “in the wild”. The data is also quite diverse as it was collected from 74 locations across nine countries, and those capturing the data have various backgrounds, ages and genders.
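To make the scale and variety of these annotations concrete, here is a minimal, hypothetical sketch in Python of what a single Ego4D-style clip record might look like. The field names and file paths are illustrative assumptions for this article, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ClipRecord:
    """Hypothetical record for one egocentric clip (illustrative only)."""
    clip_id: str
    video_path: str                 # first-person video footage
    audio_path: Optional[str]       # synchronized audio, where recorded
    mesh_scan_path: Optional[str]   # 3D mesh scan of the environment
    gaze_path: Optional[str]        # eye-gaze tracking data
    stereo: bool = False            # stereo capture available?
    multi_cam_paths: List[str] = field(default_factory=list)  # other synchronized views of the same event
    location: str = ""              # one of the 74 collection sites
    scripted: bool = False          # most footage is unscripted, "in the wild"

# Example: an unscripted kitchen clip with audio and an environment scan
clip = ClipRecord(
    clip_id="clip_0001",
    video_path="videos/clip_0001.mp4",
    audio_path="audio/clip_0001.wav",
    mesh_scan_path="meshes/kitchen_scan.ply",
    gaze_path=None,
    location="site_12",
)
print(clip.clip_id, clip.scripted)
```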
What can we do with it?
Commonly, computer vision models are trained and tested on annotated images and videos for a specific task. Facebook argues that current AI datasets and models represent a third-person or "spectator" view, resulting in limited visual perception. Understanding first-person video could help researchers design robots that engage better with their surroundings.
Furthermore, Facebook argues egocentric vision can potentially transform how we use virtual and augmented reality devices such as glasses and headsets. If we can develop AI models that understand the world from a first-person viewpoint, just like humans do, VR and AR devices may become as valuable as our smartphones.
Can AI make our lives better?
Facebook has also developed five benchmark challenges as part of the Ego4D project. The challenges aim to build a better understanding of video materials to develop useful AI assistants, with a focus on first-person perception. The five benchmarks are:
Episodic memory: what happened when? (for example, "Where did I leave my keys?")
Forecasting: what am I likely to do next?
Hand and object manipulation: what am I doing, and how?
Audio-visual diarization: who said what, and when?
Social interaction: who is interacting with whom?
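To give a rough sense of what each benchmark asks of an AI model, here is a short, hypothetical sketch in Python. The example queries are illustrative assumptions made for this article, not part of Ego4D or Facebook's actual benchmark interface.

```python
# Illustrative mapping from each Ego4D benchmark to the kind of query it targets.
# The query strings are made-up examples, not drawn from the dataset.
BENCHMARK_EXAMPLES = {
    "episodic_memory": "Where did I leave my keys?",
    "forecasting": "Where am I likely to move next, and what will I do?",
    "hands_and_objects": "What object am I manipulating, and how does it change?",
    "audio_visual_diarization": "Who said what, and when, in this conversation?",
    "social_interactions": "Who is looking at or talking to me right now?",
}

for benchmark, example_query in BENCHMARK_EXAMPLES.items():
    print(f"{benchmark}: {example_query}")
```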
What about privacy?
Obviously, there are significant privacy concerns. If this technology is paired with smart glasses constantly recording and analyzing the environment, the result could be constant tracking and logging (via facial recognition) of people moving around in public.
While the above may sound dramatic, similar technology has already been trialed in China, and the potential dangers have been explored by journalists.
Facebook says it will maintain high ethical and privacy standards for the data gathered for the project, including consent of participants, independent reviews, and de-identifying data where possible.
As such, Facebook says the data was captured in a “controlled environment with informed consent”, and in public spaces “faces and other PII [personally identifying information] are blurred”.
But despite these reassurances (and noting this is only a trial), there are concerns over the future of smart-glasses technology coupled with the power of a social media giant whose intentions have not always been aligned with its users' interests.
The future?
The ImageNet dataset, a huge collection of tagged images, has helped computers learn to analyze and describe images over the past decade or more. Will Ego4D do the same for first-person video?
We may get an idea next year. Facebook has invited the research community to participate in the Ego4D competition in June 2022, and pit their algorithms against the benchmark challenges to see if we can find those keys at last.
Article by Jumana Abu-Khalaf, Research Fellow in Computing and Security, Edith Cowan University, and Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Story by The Conversation
An independent news and commentary website produced by academics and journalists.