Web mining & computer vision: New partners for object-based activity recognition
Riboni, Daniele;
2017-01-01
Abstract
In several domains, including healthcare and home automation, it is important to unobtrusively monitor the activities of daily living (ADLs) that people execute at home. A popular approach consists in attaching sensors to everyday objects to capture user interaction, and in using ADL models to recognize the current activity from the temporal sequence of used objects. However, both knowledge-based and data-driven approaches to object-based ADL recognition suffer from issues that limit their applicability in real-world deployments. Hence, in this paper, we pursue an alternative approach: mining ADL models from the Web. Existing attempts in this direction are mainly based on Web page mining and lexical analysis. One issue with those attempts is the high level of noise found in the textual content of Web pages. To overcome that issue, we rely on the intuition that pictures illustrating the execution of a given activity convey much more compact and expressive information than the textual content of a Web page about the same activity. Accordingly, we present a novel method that couples Web mining and computer vision to automatically extract ADL models from visual items. Our method relies on Web image search engines to select the most relevant pictures for each considered activity. We use off-the-shelf computer vision APIs and a lexical database to extract the key objects appearing in those pictures. We introduce a probabilistic technique to measure the relevance between activities and objects. Through experiments with a large dataset of real-world ADLs, we show that our method significantly improves on the existing approach.
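The abstract outlines the pipeline but does not specify the probabilistic relevance technique here. The Python sketch below is one plausible instantiation, not the authors' actual method: it assumes each retrieved picture has already been reduced to a set of normalized object labels (e.g., produced by a vision API and mapped through a lexical database; WordNet is our assumption), estimates the relevance of an object to an activity as the smoothed fraction of that activity's pictures containing the object, and recognizes an activity from a sequence of used objects by summing log-relevance scores. All function names and toy data are hypothetical.

```python
import math
from collections import Counter

def build_adl_models(activity_pictures, smoothing=1.0):
    """Estimate object-activity relevance from Web pictures.

    `activity_pictures` maps each activity name to a list of pictures,
    each represented as the set of object labels returned by a
    computer-vision API and normalized through a lexical database.
    Returns, per activity, a Laplace-smoothed estimate of
    P(object | activity): the fraction of that activity's pictures
    in which the object appears.
    """
    vocabulary = {obj for pics in activity_pictures.values()
                  for pic in pics for obj in pic}
    models = {}
    for activity, pics in activity_pictures.items():
        counts = Counter(obj for pic in pics for obj in pic)
        n = len(pics)
        models[activity] = {
            obj: (counts[obj] + smoothing) / (n + smoothing * len(vocabulary))
            for obj in vocabulary
        }
    return models

def recognize(models, used_objects):
    """Rank activities by summed log-relevance of the observed objects."""
    scores = {
        activity: sum(math.log(rel.get(obj, 1e-9)) for obj in used_objects)
        for activity, rel in models.items()
    }
    return max(scores, key=scores.get)

# Toy example (hypothetical labels, not from the paper's dataset):
pictures = {
    "preparing coffee": [{"mug", "kettle"}, {"mug", "coffee maker"}],
    "brushing teeth":   [{"toothbrush", "sink"}, {"toothpaste", "sink"}],
}
models = build_adl_models(pictures)
print(recognize(models, ["mug", "kettle"]))  # -> preparing coffee
```

Laplace smoothing keeps unseen activity-object pairs from zeroing out a score; the relevance measure actually used in the paper may differ.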
File | Type | Size | Format | Access
---|---|---|---|---
17-wetice.pdf | pre-print version | 695.02 kB | Adobe PDF | Restricted (archive managers only); a copy can be requested
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.