This readme.txt file was generated on 13-10-2020 by Adam Richard-Bollans

GENERAL INFORMATION

1. Title of Dataset: Comparing Category and Typicality Judgements for Spatial Prepositions

2. Author Information
	A. Principal Investigator Contact Information
		Name: Adam Richard-Bollans
		Institution: University of Leeds
		Email: mm15alrb@leeds.ac.uk

3. Approximate date of data collection: 10-08-2020

4. Information about funding sources that supported the collection of the data: EPSRC Studentship

DATA & FILE OVERVIEW

File List:
	analysis - contains the collected data, the Python scripts used for analysis, and the results. See contained readme for further info.
	software - contains the Unity3D data collection environment. See contained readme for further info.

METHODOLOGICAL INFORMATION

1. Description of methods used for collection/generation of data:

Various accounts of cognition and semantic representation have highlighted that, for some concepts, different factors may influence category and typicality judgements. In particular, some features may be more salient in categorisation tasks, while other features are more salient when assessing typicality. In this experiment we explore the extent to which this is the case for the English spatial prepositions `in', `inside', `on', `on top of', `over', `above', `under', `below' and `against'. We hypothesise that object-specific features --- related to object properties and affordances --- are more salient in categorisation, while geometric and physical relationships between objects are more salient in typicality judgements.

To test this hypothesis we conducted a study using virtual environments to collect both category and typicality judgements in 3D scenes. The data collection framework is built on the Unity3D game development software, which provides ample functionality for the kinds of task we implement. Two tasks were created for our study --- a Categorisation Task and a Typicality Task.
In the Categorisation Task participants are shown a figure-ground pair (highlighted and accompanied by a text description) and asked to select all prepositions in the list which fit the configuration. Participants may select `None of the above' if they deem none of the prepositions to be appropriate. In the Typicality Task participants are given a description and shown two configurations. Participants are asked to select the configuration which best fits the description. Again, participants may select none if they deem neither of the configurations to be appropriate.

We created 18 virtual 3D scenes, each containing a single highlighted figure-ground pair. Four scenes each were created for `in', `on', `over' and `under', and these scenes were also shared with their respective geometric counterparts: `inside', `on top of', `above' and `below'. Two scenes were created for `against'. In the Typicality Task, participants compare scenes/configurations associated with the preposition given in the description.

The current dataset is from an online study where participants were recruited via internal university mailing lists along with recruitment of friends and family. Each participant first performed the Categorisation Task on 6 randomly selected scenes and then the Typicality Task on 15 randomly selected scenes, which took participants roughly 5 minutes. 30 native English speakers participated, providing 180 annotations in the Categorisation Task and 447 annotations in the Typicality Task.

As the study was hosted online, we first asked participants to demonstrate basic competence. This was assessed by showing participants two simple scenes with an unambiguous description of an object. Participants were asked to select the object which best fits the description.
If the participant makes an incorrect guess in either scene, they are returned to the start menu.

2. Methods for processing the data:

The collected data (in the 2020 study folder) is processed using the Python scripts given in the analysis folder. First, `process_data.py` is run to clean the annotations and calculate user agreement in the tasks. Then `cat_typicality_analysis.py` is run to test the hypothesis.

3. Instrument- or software-specific information needed to interpret the data: Python 3

DATA-SPECIFIC INFORMATION FOR: annotationlist.csv

Variable List:
	For the categorisation (sv_mod) task: [annotation id, user id, date-time, figure, ground, task, scene, N/A, selected prepositions, camera position/rotation]
	For the typicality (typ) task: [annotation id, user id, date-time, configuration 1, configuration 2, task, selected configuration, preposition, N/A, N/A, N/A]
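The two row layouts above can be separated by the task label in the sixth column. The sketch below is illustrative only and is not part of the dataset's own scripts: the field names are our own shorthand, and the sample rows are invented; only the column order and the `sv_mod`/`typ` task labels come from the variable list above.

```python
import csv
from io import StringIO

# Column layouts as given in the Variable List above (names are our own
# shorthand, not taken from the dataset).
SV_MOD_FIELDS = ["annotation_id", "user_id", "datetime", "figure", "ground",
                 "task", "scene", "na", "selected_prepositions", "camera"]
TYP_FIELDS = ["annotation_id", "user_id", "datetime", "config1", "config2",
              "task", "selected_config", "preposition", "na1", "na2", "na3"]

def parse_annotations(csv_text):
    """Split raw annotation rows into categorisation and typicality records."""
    categorisation, typicality = [], []
    for row in csv.reader(StringIO(csv_text)):
        if not row:
            continue
        # The task label sits in the sixth column of both layouts.
        if row[5] == "sv_mod":
            categorisation.append(dict(zip(SV_MOD_FIELDS, row)))
        elif row[5] == "typ":
            typicality.append(dict(zip(TYP_FIELDS, row)))
    return categorisation, typicality

# Invented example rows, for illustration only:
sample = (
    "1,u1,2020-08-10 12:00,cup,table,sv_mod,scene3,,on;on top of,pos\n"
    "2,u1,2020-08-10 12:05,c1,c2,typ,c1,on,,,\n"
)
cats, typs = parse_annotations(sample)
```

Reading the real file would replace `StringIO(csv_text)` with an open file handle on annotationlist.csv.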