
Title page for etd-0721110-131232


URN etd-0721110-131232 Statistics This thesis has been viewed 2501 times and downloaded 5 times.
Author Meng-Kai Jiang
Author's Email Address Not public.
Department Computer Science and Engineering
Year 2009 Semester 2
Degree Master Type of Document Master's Thesis
Language Chinese (zh-TW.Big5) Page Count 45
Title A Facial Expression Classification System based on Facial Components and Texture Information
Keyword
  • Active Shape Model
  • Support Vector Machine
  • Facial expression recognition
  • Face detection
  • Gabor filter
Abstract In recent years, emotion analysis has become a popular research topic. Facial expression plays a very important role in emotion analysis because it changes instantly and visibly. Most traditional expression classification systems track facial component regions such as the eyes, eyebrows, and mouth. Although these prominent features are the main clues for expression recognition, finer changes in the facial muscles can also be used to perceive variations in expression. This thesis therefore uses facial components together with dynamic facial textures such as frown lines, nose wrinkles, and nasolabial folds to classify facial expressions. First, Adaboost and the Active Shape Model (ASM) are integrated to detect the face accurately. The facial feature points produced by the ASM are then used to locate the important facial feature regions. Gabor filters and Laplacian-of-Gaussian edge detection extract texture features from these regions; the resulting feature vectors represent how facial texture changes from one expression to another. Finally, a Support Vector Machine classifies six facial expression types: neutral, happiness, surprise, anger, disgust, and fear. On the Cohn-Kanade database, the average recognition rate of the proposed method reaches 91.7%. In addition, in a real-time test with five persons, the expression recognition rate reaches 93%.
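The thesis record contains no code, so the following is a minimal Python sketch of the pipeline the abstract describes, built on OpenCV and scikit-learn. It is not the author's implementation: OpenCV's Haar cascade (an Adaboost-trained detector) stands in for the Adaboost stage, the ASM feature-point step is omitted because stock OpenCV ships no ASM, a fixed face crop stands in for the ASM-derived feature regions, and all parameter values (Gabor kernel size, SVM settings) are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def detect_face(gray):
    # Haar-cascade face detection; OpenCV's cascade is Adaboost-trained,
    # standing in for the thesis's Adaboost+ASM detection stage.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Fixed-size crop in place of the ASM-located feature regions.
    return cv2.resize(gray[y:y + h, x:x + w], (128, 128))

def texture_features(face):
    # Gabor filter bank (four orientations) plus a Laplacian-of-Gaussian
    # response, each pooled to mean/std to form a fixed-length vector.
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        resp = cv2.filter2D(face, cv2.CV_32F, kern)
        feats += [resp.mean(), resp.std()]
    log = cv2.Laplacian(cv2.GaussianBlur(face, (5, 5), 1.5), cv2.CV_32F)
    feats += [log.mean(), log.std()]
    return np.array(feats)

# Classification: X rows are texture vectors from labeled training images;
# y holds the six expression labels (neutral, happiness, surprise, anger,
# disgust, fear). The RBF-kernel SVM parameters here are assumptions, not
# the settings reported in the thesis.
# clf = SVC(kernel="rbf", C=10.0).fit(X, y)
# label = clf.predict([texture_features(detect_face(test_gray))])
```

In this sketch each Gabor orientation contributes only two pooled statistics; a per-region feature grid, as the ASM-based region extraction in the thesis implies, would yield a richer vector at higher dimensionality.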
Advisor Committee
  • Chen-Chiung Hsieh - advisor
Files In-campus access at 5 years; off-campus access at 10 years
Date of Defense 2010-07-12 Date of Submission 2010-07-21

