MLEnd London Sounds

A dataset for acoustic scene recognition


About Dataset


Auditory perception is one of the most intriguing abilities of humans and many animal species. In addition to allowing us to recognise speech or music, auditory perception can help us make sense of our environment by recognising acoustic scenes. Can we develop machines that have the same ability?

The MLEnd London Sounds dataset will give you an opportunity to explore machine listening, specifically problems around acoustic scene recognition. The dataset consists of more than 2,500 audio files recorded across London, at iconic places such as The British Museum, Covent Garden and The Southbank Centre.

The MLEnd datasets have been created by students at the School of Electronic Engineering and Computer Science, Queen Mary University of London. Other datasets include the MLEnd Spoken Numerals and the MLEnd Hums and Whistles datasets, also available on Kaggle. Do not hesitate to reach out if you want to know more about how we did it.

Enjoy!


Sample Dataset

Here are some samples from the MLEnd London Sounds dataset.

British Museum: Forecourt

Euston: Library

Euston: Gardens

Kensington: Marine

MLEnd Campus: Canal

MLEnd Campus: Square

Southbank: Bridge

Westend: Market

Westend: Trafalgar




Download Data

Install mlend

To download the London Sounds data, the first step is to install the mlend library. Use pip to install it.

pip install mlend



Download a subset of the data

To download a subset of the data, for example only one area ('british_museum') with two spots ('forecourt' and 'greatcourt'), use the following piece of code:

import mlend
from mlend import download_london_sounds, london_sounds_load

# Restrict the download to one area and two spots
subset = {'Area': ['british_museum'], 'Spot': ['forecourt', 'greatcourt']}

datadir = download_london_sounds(save_to='../MLEnd', subset=subset, pbar_style='colab')

This code downloads the data to the given path ('../MLEnd') and returns the path of the downloaded data as datadir (= '../MLEnd/london_sounds').
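As a quick sanity check, you can list what was downloaded. This is a minimal sketch using only the Python standard library; it assumes the call above has already populated datadir:

from pathlib import Path

# Count the files downloaded into datadir. The '*' pattern is deliberately
# broad since the exact folder layout and file extensions may vary.
files = [p for p in Path(datadir).rglob('*') if p.is_file()]
print(f'{len(files)} files downloaded under {datadir}')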



Download the full dataset

To download the full dataset, use an empty subset, as in the following piece of code:

import mlend
from mlend import download_london_sounds, london_sounds_load

# An empty subset selects every area and spot, i.e. the full dataset
subset = {}
datadir = download_london_sounds(save_to='../MLEnd', subset=subset, pbar_style='colab')



Load the Data and benchmark sets

After downloading the partial or full dataset, mlend allows you to load it with a specified training/testing split method ('Benchmark_A' or 'random'). Note that mlend does not read the audio files into memory; instead, it returns the paths of the files, so that you can read and preprocess the audio as required by your model. For more details, check help(london_sounds_load).


import mlend
from mlend import download_london_sounds, london_sounds_load

subset = {'Area': ['british_museum'], 'Spot': ['forecourt', 'greatcourt']}

datadir = download_london_sounds(save_to='../MLEnd', subset=subset, pbar_style='colab')

# Split the downloaded data into training and testing sets using Benchmark_A
TrainSet, TestSet, MAPs = mlend.london_sounds_load(datadir_main=datadir,
                                                   train_test_split='Benchmark_A',
                                                   verbose=1, encode_labels=True)
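The paths returned by london_sounds_load can then be read with your audio library of choice. The sketch below reads one of the downloaded recordings directly from datadir using librosa (an assumed external dependency, installed with pip install librosa); it also assumes the recordings are stored as .wav files, so adjust the pattern if needed:

import glob, os
import librosa  # assumed external dependency: pip install librosa

# Collect the audio files that were downloaded under datadir
audio_files = glob.glob(os.path.join(datadir, '**', '*.wav'), recursive=True)

# Load the first recording; sr=None keeps the original sampling rate
y, sr = librosa.load(audio_files[0], sr=None)
print(audio_files[0], y.shape, sr)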



MLEnd Documentation

For the mlend documentation, use help(fun) in a Python terminal or a Jupyter notebook. Alternatively, check out

MLEnd Documentation
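For example, the following prints the docstrings of the two functions used above:

import mlend
from mlend import download_london_sounds, london_sounds_load

help(download_london_sounds)
help(london_sounds_load)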