We are the pioneers of artificial audio intelligence. By giving technology a sense of hearing that extends beyond speech and music, we enable our customers to create new and valuable experiences for consumers.
As part of AALabs, the company’s R&D division, you will contribute to researching and evaluating new algorithms to push the limits of our unique sound recognition system. Responsibilities include developing new algorithms in-house, identifying and reporting on state-of-the-art methods, and evaluating both types of solutions on large-scale field data sets.
You’ll be a key part of a highly innovative company working at the cutting edge of consumer technology. We are looking for people who thrive as part of a dedicated and innovative team, love challenges and are passionate about audio/sound, DSP and Machine Learning.
- Demonstrable skills in Machine Learning applied to Audio Signals.
- Demonstrable experience with at least one type of Machine Learning algorithm (e.g., Deep Neural Networks, Hidden Markov Models, Support Vector Machines, Decision Trees) applied to the processing of Audio Signals.
- Good knowledge of Digital Signal Processing for audio signals.
- Scripting and algorithm prototyping: Python, bash.
- Development under Linux/Unix is mandatory; Windows is optional.
- Experience with at least one standard Machine Learning package, including but not limited to: TensorFlow, Keras, MXNet, scikit-learn, Torch, Theano, HTK, or Kaldi.
- Signal processing with microphone arrays.
- Advanced acoustics, e.g., sound field processing.
- Programming: C/C++ coding and code optimisation.
- Ability to define and deliver on a research and evaluation methodology.
- Good communication skills.
- Excellent problem-solving skills.
- A track record of academic publications is a strong plus.
- Enjoy working as a member of a team and using your own initiative.
- Self-confident and highly motivated.
- Ability to deal confidently with a variety of people at all levels.
- Able to manage your own workload and meet deadlines.
- Good organisational skills.
- Good standard of written and spoken English.
Must have either a Master’s degree with two years’ industrial experience, or a PhD in one of the following topics:
- Machine Learning applied to Audio Signals
- Digital Signal Processing of Audio Signals
- Automatic Speech/Speaker Recognition
- Music Information Retrieval
- Acoustic Event Detection
- Statistical Speech Synthesis
- Thematic Indexing of Audio Tracks (e.g., Speaker Diarization, Acoustic Segmentation of Video Documents).
- Two years’ industrial experience if you hold a Master’s degree; or
- PhD entry level; or
- Equivalent combination of education and experience in an appropriate field.