1. ABOUT THE DATASET
--------------------

Title: Leeds Object Affordance Dataset (LOAD)
Creator(s): Alexia Toumpa [1], Anthony G. Cohn [2]
Organisation(s): 1. University of Leeds, 2. University of Leeds
Rights-holder(s): Alexia Toumpa, Anthony G. Cohn
Publication Year: 2022

Description: This is an RGB-D (no audio) video dataset collected with a Kinect camera. The videos capture everyday-life activities; in each video a single human agent interacts with various objects. The purpose of this dataset is to explore the different ways objects interact and the various affordances they hold with respect to the activities taking place in the scene. The data were collected in an indoor environment at 30 fps, with the camera mounted in a fixed location.

The collection of this dataset was approved by the University of Leeds Ethics Committee (ref LTCOMP-003, date of approval 16-10-20).

Cite as: Toumpa, Alexia and Cohn, Anthony G. (2022): 'Object-agnostic Affordance Categorization via Unsupervised Learning of Graph Embeddings'. University of Leeds. [Dataset] https://doi.org/10.5518/1186

Related publication: Toumpa, Alexia and Cohn, Anthony G., "Object-agnostic Affordance Categorization via Unsupervised Learning of Graph Embeddings", Journal of Artificial Intelligence Research (JAIR), 2022 (Accepted)

Contact: Alexia Toumpa, scat@leeds.ac.uk


2. TERMS OF USE
---------------

Copyright 2022 Alexia Toumpa, Anthony G. Cohn


3. PROJECT AND FUNDING INFORMATION
----------------------------------

Title: Object Affordance Categorization
Dates: 2018-2021
Funding organisation(s): 1) University of Leeds, 2) European Union's Horizon 2020 research and innovation programme, 3) Alan Turing Institute Fellowship
Grant no.: 1) scholarship, 2) Grant agreement No. 825619 (AI4EU)


4. CONTENTS
-----------

File listing:

LOAD\
|-- images\
    |-- Subject1\
        |-- vid{1:18}\
    |-- Subject2\
        |-- vid{1:19}\
    |-- Subject3\
        |-- vid{1:21}\

images/: Contains the RGB-D images of all the videos and all the subjects in the dataset.
Format: PNG for both the RGB and the depth data.
Image size:
- RGB: 360x640 pixels
- Depth: 424x512 pixels


5. METHODS
----------

The data were created in the Robotics Lab at the E.C. Stoner Building at the University of Leeds. All videos were captured in an indoor environment under artificial light. A Kinect camera, mounted on a stable surface, was used to record this visual-only video data.
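
The directory layout in section 4 can be walked programmatically. The sketch below is a minimal, illustrative Python example: the "LOAD/images" root path is an assumption about where the archive is unpacked, and the per-frame file naming inside each vid folder is not specified in this README, so only the video directories themselves are enumerated.

```python
from pathlib import Path

# Per-subject video counts, as documented in the file listing (section 4).
VIDEO_COUNTS = {"Subject1": 18, "Subject2": 19, "Subject3": 21}

def video_dirs(root="LOAD/images"):
    """Yield the path of every video directory in the documented layout."""
    base = Path(root)
    for subject, n_videos in VIDEO_COUNTS.items():
        for i in range(1, n_videos + 1):
            yield base / subject / f"vid{i}"

dirs = list(video_dirs())
print(len(dirs))   # 58 video directories across the three subjects
print(dirs[0])     # first entry: Subject1/vid1 under the chosen root
```

From each video directory, the RGB (360x640) and depth (424x512) PNG frames can then be loaded with any image library once the frame naming convention is known.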