

Best Paper Award for the IRC-SET conference on Science, Engineering and Technology

Published on: 22-Aug-2017

Congratulations to Liu Zhang and Professor Eddie Ng on winning the Best Paper Award at the IRC-SET conference on Science, Engineering and Technology, organised by the International Researchers Club! The paper is titled “When Siri Knows How You Feel: Application of Machine Learning in Automatic Sentiment Recognition from Human Speech”.

Set up in 2001, the International Researchers Club’s vision is to create a vibrant and innovative research community for Singapore. The IRC-SET conference aims to provide a platform for young and talented researchers to share fresh results, obtain comments, and exchange innovative ideas on leading-edge research in multi-disciplinary areas.

The awarded paper explores the possibility of applying supervised machine learning to recognising sentiment in English utterances at the sentence level. It examines the effect of combining acoustic and linguistic features on classification accuracy.

Research Approach

Six audio tracks are randomly selected as training data from 40 YouTube monologue videos with a strong presence of sentiment. Speakers express sentiments towards products, films, or political events. These sentiments are manually labelled as positive or negative based on the independent judgements of three experimenters. A wide range of acoustic and linguistic features is then extracted and analysed using sound-editing and text-mining tools respectively.
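Since three experimenters label each sample independently, their judgements must be resolved into a single gold label. A minimal sketch of one plausible resolution rule, simple majority voting, is shown below; the function name and the voting rule are illustrative assumptions, as the article does not specify how disagreements were reconciled:

```python
from collections import Counter

def majority_label(labels):
    """Resolve independent annotator judgements into one gold label by
    majority vote. With three annotators and two classes (positive /
    negative), a strict majority always exists.

    NOTE: hypothetical resolution rule -- the source does not state how
    the experimenters' labels were combined.
    """
    label, _count = Counter(labels).most_common(1)[0]
    return label

# Two of three experimenters judged the utterance positive:
print(majority_label(["positive", "positive", "negative"]))  # → positive
```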

A novel approach is proposed, which uses a simplified sentiment score to integrate linguistic features and estimate sentiment valence. This approach improves negation analysis and hence increases overall accuracy. Results show that when both linguistic and acoustic features are used, the accuracy of sentiment recognition improves significantly, and that excellent prediction is achieved when each of the four classifiers is trained on the combined features, with kNN and the neural network achieving the higher accuracies. Possible sources of error and inherent challenges of audio sentiment analysis are discussed to provide potential directions for future research towards fully automated audio sentiment analysis.
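The core idea of a lexicon-based sentiment score with negation handling can be sketched as follows. The tiny lexicon, the negation word list, and the fixed look-back window are all illustrative assumptions for the sketch, not the paper's actual word lists or scoring rule:

```python
# Hypothetical polarity lexicon and negation list -- illustrative only.
LEXICON = {"great": 1, "love": 1, "excellent": 1,
           "bad": -1, "hate": -1, "terrible": -1}
NEGATIONS = {"not", "never", "no"}

def sentiment_score(tokens, window=3):
    """Sum lexicon polarities over a tokenised sentence, flipping the
    sign of any polar word that appears within `window` tokens after a
    negation word. A positive total suggests positive valence, a
    negative total suggests negative valence.

    NOTE: a minimal sketch of one possible simplified sentiment score;
    the paper's exact formulation is not given in the article.
    """
    score = 0
    for i, tok in enumerate(tokens):
        polarity = LEXICON.get(tok, 0)
        if polarity == 0:
            continue
        # Negation analysis: look back up to `window` tokens.
        negated = any(t in NEGATIONS for t in tokens[max(0, i - window):i])
        score += -polarity if negated else polarity
    return score

print(sentiment_score("this film is great".split()))      # → 1
print(sentiment_score("i do not love this film".split())) # → -1
```

In the full pipeline described above, a score like this would be one linguistic feature concatenated with acoustic features before training the classifiers; without the sign flip, "do not love" would be wrongly scored as positive.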
