Announcement for Downloading the Full-Text File
Please respect the Copyright Act.
All digital full-text dissertations and theses on this website are authorized by their copyright owners. These copyrighted full-text dissertations and theses may be used only for academic, research, and non-commercial purposes. Users of this website may search, read, and print them for personal use. In accordance with the Copyright Act of the Republic of China, please do not reproduce, distribute, alter, or edit the content of these dissertations and theses without permission, and do not create any work based upon a pre-existing work by reproduction, adaptation, distribution, or other means.
URN: etd-0721108-005430
Statistics: This thesis has been viewed 3508 times and downloaded 1261 times.
Author: Chen-yu Pai
Author's Email Address: Not public
Department: Computer Science and Engineering
Year: 2007
Semester: 2
Degree: Master
Type of Document: Master's Thesis
Language: English
Page Count: 47
Title: Analysis and Detection of Emotion Change in Continuous Speech
Keywords: continuous speech, emotion recognition
Abstract: Speech communication plays an important role for human beings. Human speech conveys not only syntax but also the speaker's feeling at the moment. In this thesis we use 11 kinds of speech features, including formant, shimmer, jitter, Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC), Mel-Frequency Cepstral Coefficients (MFCC), the first derivative of MFCC (D-MFCC), the second derivative of MFCC (DD-MFCC), Log Frequency Power Coefficients (LFPC), Perceptual Linear Prediction (PLP), and RelAtive SpecTrAl PLP (RastaPLP), as features for emotion classification. These features are commonly used in speech recognition, and we try to find the relation between emotion and these features. The methods we use to analyze the features are sequential forward selection (SFS) and sequential backward selection (SBS). Under the KNN classifier, 32 features were chosen, yielding a recognition rate of 84% on our emotion corpus database. We also use the weighted KNN and WDKNN classification methods to classify the emotion in speech, and we compare the performance of SVM against weighted KNN and WDKNN. These 32 features are the most appropriate features for emotion recognition and are used in the continuous speech emotion recognition system.
Advisor Committee: Tsang-Long Pao - advisor
Chia-Ming Chang - co-chair
Shih-Hsuan Yang - co-chair
Files
Date of Defense: 2008-07-03
Date of Submission: 2008-07-21
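The abstract's feature-selection procedure (sequential forward selection evaluated under a KNN classifier) can be sketched in a few lines. This is an illustrative toy, not the thesis's actual implementation: the dataset, helper names, and k value are invented for demonstration, and a hand-rolled nearest-neighbour vote stands in for whatever classifier the author used.

```python
# Sketch of sequential forward selection (SFS) with a simple KNN
# classifier. All data and names here are hypothetical examples,
# NOT the thesis's corpus or code.
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def accuracy(train_X, train_y, test_X, test_y, feats, k=3):
    """KNN accuracy using only the feature indices listed in `feats`."""
    pick = lambda row: [row[i] for i in feats]
    hits = sum(
        knn_predict([pick(r) for r in train_X], train_y, pick(x), k) == y
        for x, y in zip(test_X, test_y)
    )
    return hits / len(test_y)

def sfs(train_X, train_y, test_X, test_y, n_select):
    """Greedily add the single feature that most improves accuracy.
    SBS is the mirror image: start from all features, drop one at a time."""
    selected = []
    remaining = list(range(len(train_X[0])))
    while len(selected) < n_select:
        best = max(remaining,
                   key=lambda f: accuracy(train_X, train_y,
                                          test_X, test_y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the thesis this greedy loop would run over the 11 feature families (formant, jitter, MFCC, and so on, expanded into individual coefficients) until the 32-feature subset is reached; the sketch only shows the selection mechanics.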