In-sensor low-complexity audio pattern recognition for pervasive networking
M. Martalò
2010-01-01
Abstract
In recent years, wireless sensor networking has become a key technology for making pervasive communications a reality. To this end, wireless sensor nodes need to consume as little energy as possible and, thus, the complexity of any onboard signal processing operation needs to be kept as low as possible. In this paper, we present a low-complexity detection approach for the recognition of different audio signal patterns, useful, for example, for intrusion control in critical areas. The proposed detection algorithm evolves through two main processing phases: (a) coarse and (b) fine. The evolution between these two phases is described through a finite state machine (FSM) model. Fine processing (in the frequency domain) is carried out only when an “atypical” audio signal is detected, whereas coarse processing (in the time domain), performed a larger number of times, has a much lower complexity. Our results show that the proposed processing technique efficiently detects the presence of signals of interest (identified by properly selected spectral signatures) and reliably distinguishes between different audio signal patterns, e.g., between speech and non-speech signals. We first present simulation-based performance results for the proposed detection algorithm and then validate our approach with realistic experimental results based on audio signals acquired with a commercial microphone.
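
To make the coarse/fine alternation concrete, the minimal Python sketch below models the detector as a two-state FSM. It is only an illustration under assumptions of our own: the time-domain energy threshold and the band-energy "spectral signature" test are hypothetical placeholders, not the decision rules adopted in the paper.

import numpy as np

# Hypothetical two-state (coarse/fine) detector illustrating the FSM idea
# from the abstract. The thresholds and the spectral test are assumptions
# made for this sketch, not the criteria used in the paper.
COARSE, FINE = "coarse", "fine"

class TwoPhaseDetector:
    def __init__(self, energy_threshold=0.01, band_hz=(300.0, 3400.0),
                 band_ratio=0.6, sample_rate=8000):
        self.state = COARSE
        self.energy_threshold = energy_threshold  # coarse (time-domain) trigger
        self.band_hz = band_hz                    # assumed band of the signature, Hz
        self.band_ratio = band_ratio              # assumed in-band energy fraction
        self.sample_rate = sample_rate

    def process_frame(self, frame):
        """Process one audio frame (1-D NumPy array); return a label or None."""
        if self.state == COARSE:
            # Low-complexity time-domain check: mean squared amplitude.
            if np.mean(frame ** 2) > self.energy_threshold:
                self.state = FINE                 # "atypical" frame: escalate
            return None
        # Fine phase: frequency-domain check against the assumed signature.
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / self.sample_rate)
        in_band = spectrum[(freqs >= self.band_hz[0]) & (freqs <= self.band_hz[1])].sum()
        ratio = in_band / (spectrum.sum() + 1e-12)
        self.state = COARSE                       # return to cheap coarse processing
        return "signal of interest" if ratio > self.band_ratio else "other"

In use, short frames (e.g., 20 ms at 8 kHz) would be fed to process_frame() one at a time; most frames incur only the cheap time-domain test, and the FFT-based analysis runs only on the frame following a coarse-phase trigger, which mirrors the complexity-saving behaviour the abstract describes.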