Announcement for Downloading Full-Text Files
Please respect the Copyright Act.
All digital full-text dissertations and theses on this website are authorized by their copyright owners. These copyrighted full-text dissertations and theses may be used only for academic, research, and non-commercial purposes. Users of this website may search, read, and print them for personal use. In accordance with the Copyright Act of the Republic of China, please do not reproduce, distribute, alter, or edit the content of these dissertations and theses without permission, and do not create derivative works from them by reproduction, adaptation, distribution, or other means.
URN: etd-0905105-142322
Statistics: This thesis has been viewed 3325 times and downloaded 1109 times.
Author: Hao-Chao Chang
Author's Email Address: firstname.lastname@example.org
Department: Communication Engineering
Year: 2004
Semester: 2
Degree: Ph.D.
Type of Document: Doctoral Dissertation
Language: English
Page Count: 84
Title: AUTOMATIC TEXT EXTRACTION IN VIDEO USING ARTIFICIAL NEURAL NETWORK
Keywords: text detection, text extraction, text recognition
Abstract: Videotext detection, recognition, and extraction are considered key components of video retrieval, commentary, and analysis systems, with many practical applications in multimedia systems, digital libraries, and video indexing.
In this dissertation, we propose a videotext extraction method that takes text characteristics into consideration, including brightness, physical constraints, geometric restrictions, and specific stroke directions. The system consists of four major modules. First, we employ a color image edge operator to detect the strong edges in an image. Next, we apply a text enhancement operator to strengthen the edges of potential text. Then, a text range detection operator is used to identify blocks of text. Finally, we utilize a neural network with a back-propagation learning algorithm to extract the text.
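The abstract only outlines the four modules. As an illustration of the first and last stages, the following sketch pairs a per-channel Sobel gradient magnitude (one common choice of color edge operator; the dissertation's exact operator is not specified here) with a minimal one-hidden-layer network trained by plain backpropagation. All names (`color_edge_strength`, `TinyBackpropNet`) and parameters are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def color_edge_strength(img):
    """Illustrative color edge operator: per-channel Sobel gradient
    magnitude, combined by taking the maximum across the 3 channels.
    `img` is an H x W x 3 float array."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w, _ = img.shape
    out = np.zeros((h, w))
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    for c in range(3):
        gx = np.zeros((h, w))
        gy = np.zeros((h, w))
        for i in range(3):
            for j in range(3):
                patch = pad[i:i + h, j:j + w, c]
                gx += kx[i, j] * patch
                gy += ky[i, j] * patch
        out = np.maximum(out, np.hypot(gx, gy))
    return out

class TinyBackpropNet:
    """Minimal one-hidden-layer sigmoid network trained with plain
    backpropagation, standing in for the text/non-text classifier."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.h = self._sig(x @ self.w1 + self.b1)
        return self._sig(self.h @ self.w2 + self.b2)

    def train(self, x, y, lr=2.0, epochs=5000):
        for _ in range(epochs):
            out = self.forward(x)
            # Mean-squared-error gradient through the output sigmoid.
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ self.w2.T) * self.h * (1 - self.h)
            self.w2 -= lr * self.h.T @ d_out / len(x)
            self.b2 -= lr * d_out.mean(axis=0)
            self.w1 -= lr * x.T @ d_h / len(x)
            self.b1 -= lr * d_h.mean(axis=0)
```

In a full pipeline, the edge map would feed the enhancement and range detection stages, and the network would be trained on features from candidate text blocks rather than the toy targets used below.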
We have applied the proposed system to detect and extract text from a set of images embedded with text in different languages. Experimental results show that the system is robust to variations in contrast, font size, font color, and background complexity.
Advisor Committee:
Jia-Ching Cheng - advisor
Shuenn-Shyang Wang - co-chair
Date of Defense: 2005-07-29
Date of Submission: 2005-09-05