Abstract:
Audio information is useful in many applications, such as e-learning, language study, and conferences. This kind of information is frequently used for learning, for memorization, or for searching for an audio subsequence dealing with a particular subject. However, users find it difficult to exploit information in audio format, especially when searching for an audio sequence across different locations. To help them, we focus our work on the problem of finding an audio subsequence in a distributed audio database. The search is performed in voice mode: the user dictates a text into a microphone. The goal is to receive the dictated text from the microphone, compare it with all the sequences in the database, and return the sequence closest to the dictated one. This goal requires substantial work, which will be proposed as a doctoral subject. In this paper, we propose to acquire the query by playing a sequence from the database and capturing it with a microphone. We present a new system that identifies an audio subsequence using basic audio descriptors. Each audio document in the corpus is pre-processed before the extraction of a set of basic audio descriptors that characterize its temporal and spectral information. An audio signal is therefore represented by a sequence of characteristics called, in this paper, an "audio fingerprint". The search process is based on a new concept called the "interference wave". This wave is generated from the content of the audio signals and is used to compute the similarity rate between two audio signals.
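To give a concrete picture of the fingerprinting idea described above, the following is a minimal sketch of turning a signal into a per-frame sequence of basic temporal and spectral descriptors, and of scoring two such sequences. The specific descriptors (zero-crossing rate and spectral centroid), the frame parameters, and the similarity formula are illustrative assumptions, not the paper's actual "interference wave" method.

```python
import numpy as np

def fingerprint(signal, frame_size=1024, hop=512):
    """Illustrative 'audio fingerprint': for each frame, one temporal
    descriptor (zero-crossing rate) and one spectral descriptor
    (spectral centroid). Descriptor choice is an assumption here."""
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        # Temporal descriptor: fraction of sample pairs with a sign change.
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
        # Spectral descriptor: centroid of the magnitude spectrum (in bins).
        mag = np.abs(np.fft.rfft(frame))
        bins = np.arange(len(mag))
        centroid = np.sum(bins * mag) / (np.sum(mag) + 1e-12)
        frames.append((zcr, centroid))
    return np.array(frames)

def similarity(fp_a, fp_b):
    """Crude similarity rate in (0, 1]: identical fingerprints give 1.0.
    This is a placeholder for the paper's interference-wave comparison."""
    n = min(len(fp_a), len(fp_b))
    mean_dist = np.linalg.norm(fp_a[:n] - fp_b[:n], axis=1).mean()
    return 1.0 / (1.0 + mean_dist)

# Usage: compare a query capture against each database sequence and keep
# the one with the highest similarity rate.
```

A real system would normalize descriptors, tolerate time offsets between the query and the stored sequence, and distribute the comparison across database nodes; this sketch only shows the fingerprint-and-compare shape of the pipeline.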