Tuesday, January 02, 2007

[Reading] XPod: a Human Activity Aware Learning Mobile Music Player

http://ebiquity.umbc.edu/paper/html/id/335/XPod-A-Human-Activity-Aware-Learning-Mobile-Music-Player

Sandor Dornbush, Jesse English, Tim Oates, Zary Segall, and Anupam Joshi
Jan 08, 2007

I think this is ongoing work. In the paper, to be presented on Jan 08, 2007 (an upcoming date), the authors seem to have left out the emotion part, which is the part I am most interested in. They try several methods, including decision trees, AdaBoost, SVM, KNN, and neural networks, to classify five different user states. They collect GSR, 2D acceleration, skin temperature, BVP, time, song information, and beats per minute to predict how a user would rate a song in the future, using 565 training instances. The results that consider states are a little better than those that do not; more precisely, the states are derived from the physiological information gathered by the sensors. The accuracy ranges from 31.87% to 46.72%, and the mean squared error ranges from 0.17 to about 0.45.
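To make the setup concrete, here is a minimal sketch (not the authors' code) of how such a classifier comparison could look in Python with scikit-learn. The synthetic data, the feature layout, and the five-class labels are my own assumptions standing in for the paper's sensor features and user states.

```python
# Hypothetical sketch: compare the five classifier families mentioned
# in the paper on stand-in physiological features. Data is synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# 565 instances, mirroring the paper's training-set size; 7 features
# standing in for GSR, 2D acceleration (x, y), skin temperature, BVP,
# time of day, and beats per minute. Labels are five user states.
X = rng.normal(size=(565, 7))
y = rng.integers(0, 5, size=565)

classifiers = {
    "decision tree": DecisionTreeClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "neural network": MLPClassifier(max_iter=1000),
}

for name, clf in classifiers.items():
    # 5-fold cross-validation accuracy for each classifier
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2%} mean accuracy")
```

On random data like this, accuracy should hover near the 20% chance level for five classes, which is a useful baseline for judging the 31.87%-46.72% figures reported above.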
I don't think the results are very good. They should probably collect more training data, and some more interesting features could be added to XPod.
