Inferring human activity in mobile devices by computing multiple contexts

dc.contributor.author: Chen, Ruizhi
dc.contributor.author: Chu, Tianxing
dc.contributor.author: Liu, Keqiang
dc.contributor.author: Liu, Jingbin
dc.contributor.author: Chen, Yuwei
dc.creator.orcid: https://orcid.org/0000-0003-0148-3609
dc.date.accessioned: 2021-10-28T19:15:55Z
dc.date.available: 2021-10-28T19:15:55Z
dc.date.issued: 2015-08-28
dc.description.abstract: This paper introduces a framework for inferring human activities in mobile devices by computing spatial contexts, temporal contexts, spatiotemporal contexts, and user contexts. A spatial context is a significant location defined as a geofence, which can be a node associated with a circle, or a polygon; a temporal context contains time-related information such as a local time tag, a time difference between geographical locations, or a timespan; a spatiotemporal context is defined as a dwelling length at a particular spatial context; and a user context includes user-related information such as the user's mobility contexts, environmental contexts, psychological contexts, or social contexts. Using the measurements of the built-in sensors and radio signals in mobile devices, we can snapshot a contextual tuple every second that includes the aforementioned contexts. Given a contextual tuple, the framework evaluates the posterior probability of each candidate activity in real time using a Naïve Bayes classifier. A large dataset containing 710,436 contextual tuples was recorded over one week in an experiment carried out at Texas A&M University Corpus Christi with three participants. The test results demonstrate that the multi-context solution significantly outperforms the spatial-context-only solution: a classification accuracy of 61.7% is achieved for the spatial-context-only solution, while 88.8% is achieved for the multi-context solution.
dc.identifier.citation: Chen, R., Chu, T., Liu, K., Liu, J. and Chen, Y., 2015. Inferring human activity in mobile devices by computing multiple contexts. Sensors, 15(9), pp. 21219-21238.
dc.identifier.doi: https://doi.org/10.3390/s150921219
dc.identifier.uri: https://hdl.handle.net/1969.6/89920
dc.language.iso: en_US
dc.publisher: MDPI
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: human activity recognition
dc.subject: mobile context computation
dc.subject: location awareness
dc.subject: smartphone positioning
dc.title: Inferring human activity in mobile devices by computing multiple contexts
dc.type: Article
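The abstract above describes evaluating, for every per-second contextual tuple (spatial, temporal, spatiotemporal, and user context), the posterior probability of each candidate activity with a Naïve Bayes classifier. The Python snippet below is a minimal sketch of that step under the usual conditional-independence assumption; all activity labels, context values, priors, and likelihood tables are hypothetical placeholders, not values from the paper.

# Minimal sketch (not the authors' code) of Naive Bayes posterior evaluation
# over a per-second contextual tuple. Labels and probabilities are hypothetical.

ACTIVITIES = ["working", "dining", "commuting"]
CONTEXT_KEYS = ["spatial", "temporal", "spatiotemporal", "user"]

# P(activity): hypothetical priors, e.g. estimated from labelled tuples
PRIOR = {"working": 0.5, "dining": 0.2, "commuting": 0.3}

# P(context value | activity) per context type; hypothetical likelihoods
LIKELIHOOD = {
    "working":   {"spatial": {"office_geofence": 0.8},
                  "temporal": {"weekday_morning": 0.7},
                  "spatiotemporal": {"dwell_long": 0.9},
                  "user": {"static": 0.8}},
    "dining":    {"spatial": {"cafeteria_geofence": 0.7},
                  "temporal": {"noon": 0.8},
                  "spatiotemporal": {"dwell_short": 0.6},
                  "user": {"static": 0.5}},
    "commuting": {"spatial": {"road_geofence": 0.6},
                  "temporal": {"weekday_morning": 0.4},
                  "spatiotemporal": {"dwell_short": 0.7},
                  "user": {"driving": 0.9}},
}

def posterior(contextual_tuple, eps=1e-3):
    """Return normalized posteriors P(activity | contextual tuple).

    Naive Bayes assumption: the four context observations are conditionally
    independent given the activity, so the joint likelihood is a product;
    unseen context values fall back to a small probability eps.
    """
    scores = {}
    for activity in ACTIVITIES:
        p = PRIOR[activity]
        for key in CONTEXT_KEYS:
            p *= LIKELIHOOD[activity][key].get(contextual_tuple[key], eps)
        scores[activity] = p
    total = sum(scores.values())
    return {activity: s / total for activity, s in scores.items()}

# One per-second contextual tuple (hypothetical values); the activity with
# the largest posterior is reported as the inferred activity.
example = {"spatial": "office_geofence", "temporal": "weekday_morning",
           "spatiotemporal": "dwell_long", "user": "static"}
print(posterior(example))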

Files

Original bundle
Name: Chen_Ruizhi_sensors.pdf
Size: 1.23 MB
Format: Adobe Portable Document Format
Description: Article

License bundle
Name: license.txt
Size: 1.72 KB
Description: Item-specific license agreed upon at submission