Database: Multi-Experiment Auditory Localisation Dataset with Individual and Non-Individual HRTFs (2024-2026)

Subtitle: Raw localisation data from three research projects investigating localisation performance across different HRTF conditions in virtual reality.
Cite as: Poole, K., Daugintis, R., Pirard, L., Barumerli, R., and Picinali, L. (2026). "Multi-Experiment Auditory Localisation Dataset with Individual and Non-Individual HRTFs", The SONICOM Ecosystem: Database #58. URL: https://ecosystem.sonicom.eu/databases/58

Metadata

  • DOI: not assigned yet
  • Uploaded by: Roberto Barumerli (ORCID: 0000-0002-0155-3921; email: roberto@robertobarumerli.com)
  • Date (created): 2026-01-27 14:59:57 (GMT)
  • Date (updated): 2026-01-30 12:20:48 (GMT)
  • Production Year: 2024-2026
  • Resource Type: Dataset (SONICOM Ecosystem)
  • Rights: CC BY 4.0 Attribution
  • Subject Areas: Life Science, Other SONICOM Ecosystem
  • General Description: This dataset contains raw auditory localisation responses from 34 participants tested across 7 HRTF conditions in virtual reality: Individual HRTFs (Measured or Synthetic) and Non-Individual HRTFs (Best-match, Worst-match, KEMAR, Random). The data comprises 17,227 trials collected from three research projects investigating how different HRTF types affect localisation performance. The 34 participants were distributed across the three studies with partial overlap; 9 participants completed all three experiments, while others participated in one or two.
    All experiments used identical methodology and procedures: the same VR environment with a procedural training protocol, a main task with 33 target directions, and three blocks of 99 trials with randomised HRTF conditions (three per experiment, always including the acoustically measured individual HRTF as reference). Participants indicated the perceived location of a train of three white noise bursts by manual pointing. The dataset provides target directions and directional responses in both spherical and horizontal-polar coordinate systems, alongside great circle error measurements (see the coordinate sketch below).

    Authors: Roberto Barumerli (data management), Katarina Poole (data collection and management), Ludovic Pirard (data collection and management), Rapolas Daugintis (data collection and management), Lorenzo Picinali (supervision)
    Affiliation: Audio Experience Design group, Dyson School of Design Engineering, Faculty of Engineering, Imperial College London
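
    The relationship between the two coordinate systems and the great circle error can be illustrated with a few lines of code. The sketch below is a minimal example, assuming the common interaural-polar convention (azimuth positive towards the left, elevation positive upwards, lateral angle towards the left ear, polar angle around the interaural axis); the exact convention used in the dataset should be verified against the files, and the function names are illustrative rather than part of the dataset.

        import numpy as np

        def sph_to_hp(azi_deg, ele_deg):
            """Convert spherical (azimuth, elevation) angles to horizontal-polar
            (lateral, polar) angles, all in degrees (assumed convention)."""
            azi, ele = np.radians(azi_deg), np.radians(ele_deg)
            x = np.cos(ele) * np.cos(azi)   # front
            y = np.cos(ele) * np.sin(azi)   # left
            z = np.sin(ele)                 # up
            lat = np.degrees(np.arcsin(np.clip(y, -1.0, 1.0)))
            pol = np.degrees(np.arctan2(z, x))
            return lat, pol

        def great_circle_error(azi_t, ele_t, azi_r, ele_r):
            """Angular distance in degrees between target and response directions."""
            azi_t, ele_t, azi_r, ele_r = map(np.radians, (azi_t, ele_t, azi_r, ele_r))
            cos_d = (np.sin(ele_t) * np.sin(ele_r)
                     + np.cos(ele_t) * np.cos(ele_r) * np.cos(azi_t - azi_r))
            return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

        # Example: target at 30 deg azimuth / 0 deg elevation, response at 40 / 10.
        print(sph_to_hp(30, 0))                   # approximately (30.0, 0.0)
        print(great_circle_error(30, 0, 40, 10))  # roughly 14 degrees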

    ------------------------------------------------------------------------------

  • Abstract: Understanding how Head-Related Transfer Functions (HRTFs) affect spatial audio localisation is critical for developing effective 3D audio applications. This dataset provides comprehensive localisation performance data comparing acoustically measured individual HRTFs to several alternatives, such as synthetic individual HRTFs and non-individual HRTFs (best-match, worst-match, generic KEMAR, random). The data enables analysis of localisation performance across both horizontal and vertical dimensions, investigation of individual differences in spatial hearing, and evaluation of HRTF personalisation approaches. Each trial records target position, participant response, and localisation error in multiple coordinate systems, providing rich data for auditory modelling, psychoacoustic research, and spatial audio algorithm development.

    ------------------------------------------------------------------------------

  • Methods: Experimental Setup: Experiments were conducted in the Turret Lab at Imperial College London, using individually measured HRTFs [1].

    Paradigm: Localisation experiment implemented in a virtual reality environment, with a standardised procedural training protocol. Trains of three white noise bursts were presented from 33 spatial directions covering the full sphere. Further implementation details are given in [2].

    Experiment 1 (Daugintis2025): assessment of Best-match and Worst-match non-individual HRTF selection strategies (18 participants).
    Experiment 2 (Poole2025): comparison of Individual Measured HRTFs against generic KEMAR and Synthetic individual HRTFs generated through upper-body photogrammetry and boundary element methods (20 participants).
    Experiment 3 (Pirard2025): comparison of Individual Measured HRTFs against randomly selected non-individual HRTFs and Synthetic individual HRTFs generated through photogrammetry-reconstructed head meshes and boundary element methods (20 participants).

    HRTF Conditions (7 total):
    • Individual: Participant's own measured HRTF as in [1]
    • Measured: SONICOM-measured individual HRTFs
    • Synthetic: Synthetically generated individual HRTFs
    • Kemar: Generic KEMAR mannequin HRTF (Knowles EarSim, large ears)
    • Best: Best-matching non-individual HRTF for each participant
    • Worst: Worst-matching non-individual HRTF for each participant
    • Random: Randomly selected non-individual HRTF
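
    For analyses that pool conditions into the individual vs. non-individual families named in the General Description, a small lookup table can be used. The sketch below is illustrative; the keys follow the 'condition' labels above, while the grouping itself is an assumption based on that description.

        # Grouping of the seven 'condition' labels into the two families
        # described in the General Description (individual vs. non-individual).
        CONDITION_FAMILY = {
            "Individual": "individual",
            "Measured":   "individual",
            "Synthetic":  "individual",
            "Best":       "non-individual",
            "Worst":      "non-individual",
            "Kemar":      "non-individual",
            "Random":     "non-individual",
        }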

    Equipment: Sennheiser HD 599 headphones (no equalisation applied) or Sennheiser HD 650 headphones (with individual equalisation).
    Response Method: Participants indicated perceived sound direction using hand orientation in VR, recorded as azimuth and elevation angles.
    Data Collection: 2024-2026, with timestamps recorded for each trial.

    References:
    [1] Engel I., Daugintis R., Vicente T., et al. “The SONICOM HRTF Dataset.” Journal of the Audio Engineering Society 71, no. 5 (2023): 241–53.
    [2] Daugintis, R., Geronazzo, M., Poole, K. C., and Picinali, L. (2026). “Perceptual evaluation of an auditory model-based similarity metric for head-related transfer functions.” Accepted for publication.

    ------------------------------------------------------------------------------

  • Technical Remarks: Dataset Structure: Single CSV file (3.9 MB) containing 17,227 rows (excluding header) with 19 columns per trial.
    Column Descriptions:
    • experiment: Study identifier (Daugintis2025, Poole2025, Pirard2025)
    • condition: HRTF condition (Individual, Best, Worst, Measured, Synthetic, Kemar, Random)
    • participant: Participant ID (P0001-P0351; IDs from the SONICOM dataset)
    • hrtf_id: HRTF identifier used for the trial
    • repetition: Repetition number within condition
    • trial: Trial number within session
    • azi_target, ele_target: Target direction in spherical coordinates (degrees)
    • azi_response, ele_response: Response direction in spherical coordinates (degrees)
    • lat_target, pol_target: Target direction in horizontal-polar coordinates (degrees)
    • lat_response, pol_response: Response direction in horizontal-polar coordinates (degrees)
    • great_circle_error: Angular error between target and response (degrees)
    • hrtf_file: HRTF filename (.3dti-hrtf.bytes format; see the SONICOM dataset)
    • headphones: Headphone model used
    • headphones_eq: Equalisation status (NA = none applied)
    • time: Trial timestamp (DD/MM/YYYY HH:MM:SS)
    Missing Values: Encoded as "NA" where applicable.
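
    A minimal loading sketch in Python/pandas, assuming a placeholder filename ("localisation_data.csv" is illustrative; use the actual CSV name from the download):

        import pandas as pd

        # Placeholder filename; replace with the CSV shipped with this database entry.
        df = pd.read_csv("localisation_data.csv", na_values=["NA"])

        # Timestamps follow DD/MM/YYYY HH:MM:SS, so parse them day-first.
        df["time"] = pd.to_datetime(df["time"], format="%d/%m/%Y %H:%M:%S")

        # Quick sanity checks against the description above.
        print(len(df))                                    # expected: 17227 trials
        print(df["participant"].nunique())                # expected: 34 participants
        print(df.groupby("experiment")["condition"].unique())

        # Mean great-circle error per HRTF condition.
        print(df.groupby("condition")["great_circle_error"].mean().round(1))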

    ------------------------------------------------------------------------------

  • Data Source: HRTFs were measured following the SONICOM dataset methods, as in Engel I., Daugintis R., Vicente T., et al. “The SONICOM HRTF Dataset.” Journal of the Audio Engineering Society 71, no. 5 (2023): 241–53.

