TY - JOUR
AU - Lee, Hyunkook
AB - The aim of the study was to develop a method for the automatic classification of three spatial audio scenes, differing in the horizontal distribution of foreground and background audio content around a listener, in binaurally rendered recordings of music. For the purpose of the study, audio recordings were synthesized using thirteen sets of binaural room impulse responses (BRIRs), representing the acoustics of both semi-anechoic and reverberant venues. Head movements were not considered in the study. The proposed method makes no assumptions about the number or characteristics of the audio sources. A least absolute shrinkage and selection operator (LASSO) was employed as the classifier. According to the results, the spatial scenes can be identified automatically using a combination of binaural and spectro-temporal features. The method exhibits satisfactory classification accuracy when it is trained and then tested on different stimuli synthesized using the same BRIRs (accuracy ranging from 74% to 98%), even in highly reverberant conditions; however, its generalizability needs to be improved further. The study demonstrates that, in addition to binaural cues, Mel-frequency cepstral coefficients constitute an important carrier of the spatial information required for the classification of spatial audio scenes.
TI - Automatic Spatial Audio Scene Classification in Binaural Recordings of Music
JF - Applied Sciences
DO - 10.3390/app9091724
DA - 2019-04-26
UR - https://www.deepdyve.com/lp/multidisciplinary-digital-publishing-institute/automatic-spatial-audio-scene-classification-in-binaural-recordings-of-FK330N1CNS
SP - 1724
VL - 9
IS - 9
DP - DeepDyve
ER -
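
For illustration only, the sketch below shows one way the pipeline described in the abstract could be approximated in Python: MFCCs as spectro-temporal features and an L1-regularized (LASSO-style) classifier for the three scene classes. The library choices (librosa, scikit-learn), feature summary, and hyperparameters are assumptions made here; the sketch omits the binaural cues used in the study and does not reproduce the authors' implementation.

# Illustrative sketch (not the authors' code): MFCC features plus an
# L1-regularized classifier for three-way spatial audio scene classification.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def mfcc_features(path, sr=48000, n_mfcc=20):
    """Load a binaural recording and summarize per-channel MFCCs.

    Binaural cues (e.g., interaural level/time differences) used in the
    study are omitted for brevity; only spectro-temporal features remain.
    """
    y, sr = librosa.load(path, sr=sr, mono=False)  # stereo: shape (2, n_samples)
    feats = []
    for channel in np.atleast_2d(y):
        mfcc = librosa.feature.mfcc(y=channel, sr=sr, n_mfcc=n_mfcc)
        # Summarize the time axis with per-coefficient means and std devs.
        feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
    return np.concatenate(feats)


def train_scene_classifier(paths, labels):
    # 'paths' and 'labels' are hypothetical: file paths of synthesized
    # excerpts and their scene labels (e.g., 0, 1, 2 for the three scenes).
    X = np.vstack([mfcc_features(p) for p in paths])
    y = np.asarray(labels)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)
    # An L1 penalty gives LASSO-style feature selection inside the classifier.
    clf = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf

Logistic regression with an L1 penalty stands in here for the LASSO classifier named in the abstract; evaluating generalization across different BRIR sets, as the study does, would require splitting the data by BRIR rather than at random.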