Saturday, December 30, 2006

[Reading] XPod - A Human Activity and Emotion Aware Mobile Music Player

http://ebiquity.umbc.edu/paper/html/id/280/

XPod - a human activity and emotion aware mobile music player

Authors: Sandor Dornbush, Kevin Fisher, Kyle McKay, Alex Prikhodko, and Zary Segall

Book Title: Proceedings of the International Conference on Mobile Technology, Applications and Systems

Date: November 17, 2005

Abstract: In this paper, we consider the notion of collecting human emotion and activity information from the user, and explore how this information could be used to improve the user experience with mobile music players. This paper proposes a mobile MP3 player, XPod, which is able to automate the process of selecting the song best suited to the emotion and the current activity of the user. The XPod concept is based on the idea of automating much of the interaction between the music player and its user. The XPod project introduces a "smart" music player that learns its user's preferences, emotions and activity, and tailors its music selections accordingly. The device is able to monitor a number of external variables to determine its user's levels of activity, motion and physical states to make an accurate model of the task its user is undertaking at the moment and predict the genre of music that would be appropriate. The XPod relies on its user to train the player as to what music is preferred and under what conditions. After an initial training period, the XPod is able to use its internal algorithms to make an educated selection of the song that would best fit its user's emotion and situation. We use the data gathered from a streaming version of the BodyMedia SenseWear to detect different levels of user activity and emotion. After determining the state of the user the neural network engine compares the user's current state, time, and activity levels to past user song preferences matching the existing set of conditions and makes a musical selection. The XPod system was trained to play different music based on the user's activity level. A simple pattern was used so the state-dependent customization could be verified. XPod successfully learned the pattern of listening behavior exhibited by the test user. As the training proceeded the XPod learned the desired behavior and chose music to match the preferences of the test user. XPod automates the process of choosing music best suited for a user's current activity. The success of the initial implementation of XPod concepts provides the basis for further exploration of human- and emotion-aware mobile music players.

==== After Reading ====
It is a prototype application. They collect acceleration, galvanic skin response (GSR), skin temperature, heat flux, and near-body temperature as input data. Using a fully connected neural network, the system predicts whether the user will skip the current song. It resembles the product recently released by Nike and Adidas.
But I think the claim of choosing songs by emotion detection is a little weak. Mostly what they did is feed the bio-sensor signals into the NN. They considered the changes of the signals, but they did not explain how emotion is actually modeled or how they infer the result.
Moreover, I think some bio-signals can help the recognition of activity, especially the activity level.
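To check my own understanding of the skip-prediction idea, here is a toy C++ sketch of a small fully connected network mapping the five SenseWear signals to a skip probability. It is not the authors' implementation: the layer sizes, learning rate, constant weight initialization, and the two toy samples are all my own assumptions.

    // Toy sketch of the XPod idea: 5 bio-signal inputs -> hidden layer -> P(skip).
    // NOT the authors' code; sizes, learning rate, and data are invented.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    struct TinyMLP {
        std::vector<std::vector<double>> w1;  // w1[j][i]: input i -> hidden j
        std::vector<double> b1, w2;           // w2[j]: hidden j -> output
        double b2 = 0.0;

        // Constant init keeps the sketch short; real code would randomize.
        TinyMLP(int in, int hid)
            : w1(hid, std::vector<double>(in, 0.1)), b1(hid, 0.0), w2(hid, 0.1) {}

        double forward(const std::vector<double>& x,
                       std::vector<double>* hout = nullptr) const {
            std::vector<double> h(w1.size());
            for (size_t j = 0; j < w1.size(); ++j) {
                double s = b1[j];
                for (size_t i = 0; i < x.size(); ++i) s += w1[j][i] * x[i];
                h[j] = sigmoid(s);
            }
            double o = b2;
            for (size_t j = 0; j < h.size(); ++j) o += w2[j] * h[j];
            if (hout) *hout = h;
            return sigmoid(o);  // P(user skips this song)
        }

        // One step of plain backpropagation on squared error.
        void train(const std::vector<double>& x, double target, double lr) {
            std::vector<double> h;
            double p = forward(x, &h);
            double dout = (p - target) * p * (1 - p);
            for (size_t j = 0; j < w2.size(); ++j) {
                double dh = dout * w2[j] * h[j] * (1 - h[j]);
                w2[j] -= lr * dout * h[j];
                for (size_t i = 0; i < x.size(); ++i) w1[j][i] -= lr * dh * x[i];
                b1[j] -= lr * dh;
            }
            b2 -= lr * dout;
        }
    };

    int main() {
        // Inputs (normalized): acceleration, GSR, skin temp, heat flux,
        // near-body temp -- the SenseWear channels listed in the paper.
        TinyMLP net(5, 4);
        std::vector<double> relaxed = {0.1, 0.2, 0.5, 0.3, 0.5};  // toy sample
        std::vector<double> running = {0.9, 0.7, 0.6, 0.8, 0.6};  // toy sample
        for (int epoch = 0; epoch < 2000; ++epoch) {
            net.train(relaxed, 0.0, 0.5);  // keeps slow song when relaxed
            net.train(running, 1.0, 0.5);  // skips slow song when running
        }
        std::printf("P(skip | relaxed) = %.2f\n", net.forward(relaxed));
        std::printf("P(skip | running) = %.2f\n", net.forward(running));
    }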

Friday, December 22, 2006

[Reading] A Study in Users' Physiological Response to an Empathic Interface Agent

Helmut Prendinger, Christian Becker, and Mitsuru Ishizuka.
A study in users' physiological response to an empathic interface agent. [pdf]
International Journal of Humanoid Robotics, Vol. 3, No. 3, Sept. 2006, pp 371-391.
http://www.worldscinet.com/191/03/0303/S0219843606000801.html


Game: Skip-Bo
Emotion recognition: SC, EMG, & game state (time) as input; 2 axes (arousal & valence) as output. (ref 29)
Method: ANOVA
Subject: 32 (14m, 18f)
Hypothesis:
  • If the virtual game opponent behaves ``naturally'' in that it follows its own goals and expresses associated positively or negatively valenced affective behaviors, users will be less aroused or stressed than when the agent does not do so.
  • If the game opponent is oriented only toward its own goals and displays associated behaviors, users will be less aroused or stressed than when the agent does not express any emotion at all.
Agent's behavior:
  • non-emotional
  • self-centered emotional
  • negative empathic
  • positive empathic
Agent's abilities: (in PAD space)
  • auditory speech
  • facial
  • body gesture
Result:
  • The positive empathic condition was experienced as significantly more arousing or stressful than the negative empathic condition.
  • Users seemingly do not respond significantly differently when empathic agent behavior is absent.
Notes:
  • high EMG -> more negative valence
  • global baseline -> to account for individual differences

a good word to use

  empathy
n : understanding and entering into another's feelings

This is what I want the computer to know and to do.

Tuesday, December 05, 2006

[listen] Brooky's proposal

  • Learnability - number of features vs. training set size (how much data do I need to train a trustworthy model?)
  • Technical details of methods - such as SVM, DBN
  • CRF - how to use it.
Go, go, go!

Experiment Prework

The induction material has been recorded as a number of sound files. I have made some movie clips for the different mood inductions. I will finish the Flash for connecting the database and playing the different clips tomorrow or on Thursday.

Something I still need to do is ask SY about borrowing the bio-sensors. I need time....!!!
So many things to do.

Tuesday, November 28, 2006

Experiment Design

The early version is done, but there are still some things that need to be done.
  • make the self-evaluation table, graded by degree
  • Which comes first, the calm-down phase or the self-evaluation? Or should the self-evaluation be done only after the whole session is finished?
  • How to fit the enhancement of facial expressions and movements into the whole scenario
  • Detailed instructions for the calm-down phase
  • Explanation of the post-session processing
  • music choice
This is tomorrow's list..

Friday, November 17, 2006

[NetArt] table

Some notes about the data we need to keep

User information:
  • Id
  • nickname
  • password
  • image
  • email
  • ...
Book information:
  • book id
  • book name
  • ISBN
  • cover(image)
  • side(image)
  • writer
  • publisher
  • pages
  • publish time
  • ranking
  • ...
Book comments
  • who gives the comment (user id)
  • time
  • which book (book id)
  • content
  • ranking
  • ...
Linking tables (who owns what)
  • linking id
  • user id (owner)
  • book id (owned)
  • date (the time the relation was created)
  • how should the position on the bookshelf be represented?
Book linkage
  • same book, different version
  • same book, different cover
  • same book, different language
  • different book, same ISBN (journal?)
  • ...
Other things to consider
  • extensibility
  • the difference between a user and a second-hand book seller
  • the "location of book in the book shelf"
  • does one person have only one bookshelf, or many?
  • .... (I am still thinking)
The ER model still needs to be drawn, and we should decide where to keep the database.
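Before drawing the ER model, a quick sanity check of the records above as C++ structs. The field types and the row/column encoding of the shelf position are only my assumptions, not decisions:

    // Sketch of the tables above as plain records; types are guesses.
    #include <string>

    struct User    { int id; std::string nickname, password, image, email; };
    struct Book    { int id; std::string name, isbn, cover, side, writer,
                     publisher; int pages; std::string publishTime; double ranking; };
    struct Comment { int userId, bookId; std::string time, content; double ranking; };
    struct Ownership {                 // the "who owns what" linking table
        int linkId, userId, bookId;
        std::string date;              // when the relation was created
        int shelfRow, shelfCol;        // one possible shelf-position encoding
    };

    int main() {
        Ownership o{1, 42, 7, "2006-11-17", 2, 5};  // user 42 owns book 7
        (void)o;
    }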

Wednesday, November 15, 2006

[Reading] From Physiological Signals to Emotion: Implementing and Comparing Selected Methods for Feature Extraction and Classification

Wagner, J., Kim, Jonghwa, and André, E.

This paper appears in: Multimedia and Expo, 2005. ICME 2005. IEEE International Conference on
Publication Date: 6-8 July 2005
On page(s): 940-943
ISBN: 0-7803-9331-7
Digital Object Identifier: 10.1109/ICME.2005.1521579
Posted online: 2005-10-24

elicitation method: music
elicited emotion: joy, anger, sadness, pleasure (arousal & valence)
measurement: SC, EMG, ECG, RSP
analysis methods: kNN, LDF (linear discriminant function), multi-layer perceptron
result: 9x%, better than the MIT one.
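The kNN baseline is simple enough to sketch in C++. This is not the paper's code: the real system extracts statistical features from SC, EMG, ECG, and RSP, while the toy features and labels below are invented.

    // Minimal k-nearest-neighbour sketch of the kind of classifier compared
    // in the paper (kNN vs. LDF vs. MLP). All numbers are made up.
    #include <algorithm>
    #include <cstdio>
    #include <utility>
    #include <vector>

    struct Sample {
        std::vector<double> features;  // e.g. per-channel mean/std features
        int label;                     // 0 joy, 1 anger, 2 sadness, 3 pleasure
    };

    int knnPredict(const std::vector<Sample>& train,
                   const std::vector<double>& x, int k) {
        // Rank training samples by (squared) Euclidean distance to x.
        std::vector<std::pair<double, int>> d;
        for (const Sample& s : train) {
            double sum = 0;
            for (size_t i = 0; i < x.size(); ++i)
                sum += (s.features[i] - x[i]) * (s.features[i] - x[i]);
            d.push_back({sum, s.label});
        }
        std::sort(d.begin(), d.end());
        int votes[4] = {0, 0, 0, 0};  // majority vote among the k nearest
        for (int i = 0; i < k && i < (int)d.size(); ++i) ++votes[d[i].second];
        return (int)(std::max_element(votes, votes + 4) - votes);
    }

    int main() {
        std::vector<Sample> train = {  // toy {SC, EMG, RSP} feature vectors
            {{0.8, 0.6, 0.7}, 0}, {{0.9, 0.9, 0.8}, 1},
            {{0.2, 0.3, 0.3}, 2}, {{0.4, 0.2, 0.4}, 3},
        };
        std::vector<double> query = {0.85, 0.8, 0.75};
        std::printf("predicted class: %d\n", knnPredict(train, query, 3));
    }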

[Reading] Emotion Analyzing Method Using Physiological State

Mera, Kazuya and Ichimura, Takumi

Book Series: Lecture Notes in Computer Science
Publisher: Springer Berlin / Heidelberg
ISSN: 0302-9743
Subject: Computer Science
Volume: 3214/2004
Book: Knowledge-Based Intelligent Information and Engineering Systems
DOI: 10.1007/b100910
Copyright: 2004
ISBN: 978-3-540-23206-3
Pages: 195-201

1. using utterances in dialogue to find the emotion (by ECG)
2. using the method proposed by R. Picard to recognize the emotion (extracting 20 features)
3. using an NN to combine 1 + 2

Considers 20 emotions, which are classified into 6 emotion groups:
  • Well-Being: joy, distress
  • Fortunes-of-Others: happy-for, gloating, resentment, sorry-for
  • Prospect-based: hope, fear
  • Confirmation: satisfaction, relief, fears-confirmed, disappointment
  • Attribution: pride, admiration, shame, disliking
  • Well-Being/Attribution: gratitude, anger, gratification, remorse
A method combining context and physiological signal processing to detect the client's emotion.

Tuesday, November 14, 2006

[Readings] Emotion Recognition from Physiological Signals Using Wireless Sensors for Presence Technologies

Journal: Cognition, Technology & Work
Publisher: Springer London
ISSN: 1435-5558 (Print) 1435-5566 (Online)
Subject: Computer Science and Engineering
Issue: Volume 6, Number 1 / February, 2004
Category: Original Article
DOI: 10.1007/s10111-003-0143-x
Pages: 4-14
Online Date: Thursday, February 19, 2004

Fatma Nasoz, Kaye Alvarez, Christine L. Lisetti, and Neal Finkelstein

subjects: 29
elicitation method: movie
methods: kNN (71%), DFA (74%), MBP (83%)
elicited emotions: sadness, anger, surprise, fear, frustration, amusement (following Gross & Levenson, 1995)
measures: GSR, HR, temperature
data saving: 3-dimensional array of real numbers (see page 12)
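A note-to-self on the 3-dimensional array they describe for saving the data; the dimension names (subject x signal x time) are my reading of the paper, not its exact layout:

    // data[subject][signal][t] holds one real-valued sample.
    #include <vector>
    using Recordings = std::vector<std::vector<std::vector<double>>>;

    int main() {
        const int subjects = 29, signals = 3, samples = 1000;  // GSR, HR, temp
        Recordings data(subjects, std::vector<std::vector<double>>(
                            signals, std::vector<double>(samples, 0.0)));
        data[0][1][42] = 72.5;  // subject 0, HR channel, sample 42
    }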



Friday, November 03, 2006

[readings] Multimodal Emotion Recognition & Expressivity Analysis

Multimedia and Expo, 2005. ICME 2005. IEEE International Conference on

Kollias, S. Karpouzis, K.


The paper presents a framework for multimodal emotion recognition and expressivity analysis. First, it introduces the technologies involved in emotion recognition, such as speech, visual, and physiological signal processing. Then it discusses, section by section, the important issues in emotional speech analysis, visual analysis, ECA expressivity, and physiological signal analysis. Finally, it lists some key problems identified for the integration of single-mode emotion analysis techniques.
I think this is a high-level paper that surveys and lists the problems in the field of emotion recognition. The cited papers are distributed across three fields: speech, facial expression, and physiological signals. I will read more of part 3, on physiological signal processing for emotion recognition.

Thursday, November 02, 2006

[listen] blog analysis

1. Degree distribution
  • WWW: power-law distribution
  • Blog: log-normal distribution + power-law distribution
  • Social network: log-normal distribution
2. Small-world property -> blog linkage resembles the six degrees of separation
3. Blogs have many properties in between the WWW and social networks
4. Some techniques used in blog search: PageRank [98], HITS [99] (see the sketch after this list)
5. Community discovery:
  • approaches: a) mutual awareness, b) ranking-based clustering method
  • communities emerge through the sustained action of individual bloggers, NOT the navigation of casual web surfers.
6. Trend extraction:
  • solutions: statistics, SVD, HOSVD
  • limitations: aggregation, single trend, unstructured data
7. Spam blog detection: related to web spam detection.
  • detection methods: temporal coherence, link coherence
8. Some conferences about this:
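To remember how PageRank actually works, a minimal power-iteration sketch in C++. This is not from the talk; the tiny graph and the commonly quoted damping factor 0.85 are my own choices.

    // PageRank by power iteration on a 4-page toy graph.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        // out[i] lists the pages that page i links to.
        std::vector<std::vector<int>> out = {{1, 2}, {2}, {0}, {2}};
        const int n = (int)out.size();
        const double d = 0.85;  // damping factor
        std::vector<double> pr(n, 1.0 / n), next(n);

        for (int iter = 0; iter < 50; ++iter) {
            std::fill(next.begin(), next.end(), (1.0 - d) / n);
            for (int i = 0; i < n; ++i)
                for (int j : out[i])
                    next[j] += d * pr[i] / out[i].size();  // spread rank along links
            pr = next;
        }
        for (int i = 0; i < n; ++i) std::printf("page %d: %.3f\n", i, pr[i]);
    }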

Wednesday, November 01, 2006

[∞]emotion in the proposal

The emotion part is the one people like to criticize. The definition of emotion is subjective; people can define emotion by themselves. For example, in David's talk today, he wants to find the emotions in lyrics. But the emotion in lyrics may be the listener's emotion or the writer's emotion, and we should define which one we mean.
So, in my proposal, I should state my definition of emotion. I think the emotion is the patient's emotion. The observer is a helper; he/she is not the one who defines the emotion in my experiment.
The definition above may help me.

==
Another interesting point is the difference between listening to a sentence and reading a sentence.

Tuesday, October 31, 2006

[group meeting] Some reminders

about proposal
  1. The problem definition needs to include input, output, requirements, and assumptions.
  2. We need a step-by-step plan with milestones and checkpoints instead of just future work. The plan should include a timeline.
  3. The importance of the problem should be described.
  4. If everything were done, what would my contribution be?
  5. Evaluation needs to be defined.

Friday, October 27, 2006

[ProComp] Acquiring Live Data

The following steps outline a minimal set of functions required to acquire live data from a connected encoder.

To acquire live data:
1. Initialize the COM environment by calling CoInitialize or CoInitializeEx.
    • HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    • CheckHRESULT(hr);
    • SUCCEEDED(hr);
2. Create an instance of a TTLLiveCtrl object and query for its ITTLLive or ITTLLive2 interface.
    • HRESULT hr = g_TTLLive.CreateInstance(CLSID_TTLLive);
    • CheckHRESULT(hr);
    • SUCCEEDED(hr);
3. Detect and open encoder connection(s) using the OpenConnection or OpenConnections method.
    • g_TTLLive->OpenConnections(TTLAPI_OCCMD_AUTODETECT, 1000, NULL, NULL);
4. Close any unwanted connections using CloseConnection.
    • g_TTLLive->CloseConnections();
5. Optionally define the encoder enumeration order by calling AssignEncoderHND.
    • not sure yet
6. Create logical channels corresponding to all encoder physical channels using AutoSetupChannels.
7. Synchronously start all channels using StartChannels.
8. Retrieve data periodically using ReadChannelData or ReadChannelDataVT.
9. When finished, close all connections using CloseConnections.
10. Release queried ITTLLive or ITTLLive2 interface.
11. Match the previous call to CoInitialize or CoInitializeEx with a call to CoUninitialize.

==
Based on the documentation of ProComp. A consolidated sketch of these steps follows.
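Putting the steps together in one place. Only CoInitializeEx/CoUninitialize are standard COM calls I can write for real; the TTLLive calls are quoted from the document above and left as comments, because without the SDK headers their exact types and signatures are assumptions on my part.

    // Consolidated sketch of steps 1-11 (TTLLive parts as comments).
    #include <windows.h>

    int main() {
        HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);  // step 1
        if (FAILED(hr)) return 1;

        // Steps 2-10, as quoted in the document (smart-pointer type and
        // argument lists depend on the TTLLive SDK headers):
        //   hr = g_TTLLive.CreateInstance(CLSID_TTLLive);            // step 2
        //   g_TTLLive->OpenConnections(TTLAPI_OCCMD_AUTODETECT,
        //                              1000, NULL, NULL);            // step 3
        //   g_TTLLive->AutoSetupChannels();                          // step 6
        //   g_TTLLive->StartChannels();                              // step 7
        //   loop: g_TTLLive->ReadChannelData(...) or
        //         g_TTLLive->ReadChannelDataVT(...)                  // step 8
        //   g_TTLLive->CloseConnections();                           // step 9
        //   g_TTLLive = NULL;  // release the queried interface      // step 10

        CoUninitialize();                                             // step 11
        return 0;
    }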

Wednesday, October 25, 2006

[Proposal] note

content:
  • motivation
  • problem definition
  • survey of related work
  • proposed solution
  • testing & evaluation (optional)

[Proposal] Coming soon..

In two weeks, I need to propose my thesis. The main idea has been drawn out, and I have a good story to tell everyone about why I am doing this. What I should do now is work hard and write down the details.

Monday, October 23, 2006

[Readings]Evolving Pet Robot with Emotional Model

Kubota, N. Nojima, Y. Baba, N. Kojima, F. Fukuda, T.

This paper appears in: Evolutionary Computation, 2000. Proceedings of the 2000 Congress on
Publication Date: 2000
Volume: 2, On page(s): 1231-1237 vol.2
ISBN: 0-7803-6375-2
References Cited: 23
Digital Object Identifier: 10.1109/CEC.2000.870791

Nowadays, many pet robots already exist. The authors present a pet robot with an emotional model and describe the details in the paper. First, the robot has some sensors through which people can interact with it. The sensor signals feed into an emotional model of the authors' own design. (The model includes emotion, mood, and feelings.) The robot expresses its emotion through actions such as moving, dancing, and making sounds. Moreover, they use a potential field to represent a trick and try to teach the robot the ``trick''; this is the behavioral learning step. Finally, they use a GA to optimize the fuzzy controllers and path planning for the mobile robot.

In this paper, everything the robot feels is defined by humans. I think the authors model the robot's behavior with emotional concepts, but everything about the model is predefined, without reference to animal psychology. It really works, but it is pre-defined. That is the other side of using emotion; I am not focusing on this part now.

[Readings] Artificial Emotion and Its Recognition, Modeling and Applications: An Overview

This paper really is an overview. The authors use many references to describe the whole picture of artificial emotion, taking the view of medical science. But according to the book I recently read, that is just one part of the theory in emotion psychology. Moreover, they describe different emotion recognition methods. For emotion modeling, both the analytic/symbolic method and the bottom-up method are mentioned.

I think this really is an overview paper with no technical details inside. What I should do is read the references and find the recognition methods.

By the way, it is really strange that the paper is written in simplified Chinese yet published in the Proceedings of the 5th World Congress on Intelligent Control and Automation. The conference was held in China; I think this is the reason why it is written in Chinese.

Tuesday, October 17, 2006

[∞]Recently

  • Recent progress.
    • Since my plog was cracked, I have moved my research blog to http://searchsheila.blogspot.com. The reading reports and ideas are still shown on that page.
    • Recently, I have been reading about some applications and models of emotion; I am looking for a good application for my thesis and for demoing the idea.
    • In this week's group meeting, others gave me some different ideas, such as making a ghost house. The ghost house is "personalized": different people have different feelings about the same ghost house, and we can control the scenes in the house to customize it. Other ideas were a ``sleeping helper'', a ``living room aura controller'', etc. These cover both directions of emotion: one positive, the other negative.
  • research goals this semester
    • By the end of the semester, I want to begin the experiment.
    • Improve my English writing skill.
  • recent research goals and readings
    • choosing an emotion model
    • reading research from the MIT Media Lab Affective Computing Group.

Saturday, October 14, 2006

Self-Cam: Feedback from What Would Be Your Social Partner

Teeters, Alea and Kaliouby, Rana El and Picard, Rosalind
(MIT Media Lab, Affective Computing)

Self-Cam can record 24 feature points of the wearer's face and infer the wearer's hidden affective-cognitive states. It can identify 20 facial and head movements and 11 gestures, and it uses a Bayesian network to infer the affective-cognitive states. The states include agreeing, disagreeing, interested, confused, concentrating, and thinking. The wearer can explore his/her own appearance in the interaction. (A toy sketch of this kind of inference follows.)
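The real system uses a dynamic Bayesian network over the detected movements. To fix the idea for myself, here is a much simpler naive-Bayes-flavored C++ sketch over two binary cues; every probability below is invented for illustration.

    // P(state | cues) ~ P(state) * prod_i P(cue_i | state), then argmax.
    #include <cstdio>

    int main() {
        const char* states[3] = {"agreeing", "confused", "thinking"};
        double prior[3] = {0.4, 0.3, 0.3};     // invented priors
        double pNod[3]  = {0.8, 0.2, 0.3};     // P(head nod | state)
        double pBrow[3] = {0.2, 0.7, 0.4};     // P(brow raise | state)

        bool nod = true, brow = false;         // cues observed this frame
        double best = -1.0; int arg = 0;
        for (int s = 0; s < 3; ++s) {
            double p = prior[s]
                     * (nod  ? pNod[s]  : 1 - pNod[s])
                     * (brow ? pBrow[s] : 1 - pBrow[s]);
            if (p > best) { best = p; arg = s; }
        }
        std::printf("most likely state: %s\n", states[arg]);
    }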

This is a little piece of equipment that can be used to record the feature points. I might use it to detect some personal activities and emotions. As a result, I could provide feedback that helps people in their interactions. (CHI?)

I think it relates to how people like taking photos of themselves. In many cameras, the manufacturer provides a "self-portrait" function, which takes advantage of a human instinct. I think this may be workable for the recognition of emotions.

Thursday, October 12, 2006

Ideas about app

  • Music can affect emotion
  • Pictures (photos) can, too
  • An emotion treatment?
  • An environment that makes you comfortable
  • The ``role'' of emotion in the app -> it should improve the system

emotion design

The book ``Emotional Design'' tells us why we love an artifact, from the view of psychology. The author discusses daily products, games, computers, and robotics. He uses a three-level model to describe human actions and to reflect on design; the three levels are visceral (instinctive), behavioral, and reflective. Many issues describing the interaction between computers and humans are covered.

I got some ideas from the book. I am trying to add emotion to a computer system, but I am still confused about how to add it and how much I should add. I think I should add core affect into the system. (System dependent?) I don't need to add too many emotions to the system; I just need to add those which are correct and useful. In the future, I should do some experiments on the `effect' of emotions.

I don't want to build an emotion model myself; I will try to use an existing one. What I need to do now is choose a good application to prove that emotion is important in a computer system.

Thursday, October 05, 2006

5 models of emotion

I read the first part of the book ``What Is an Emotion?''. It mentions five different views of and research areas in emotion, arising from different personal philosophies. They are:
  • Considering emotions and mental phenomena to be different. (distinguishing their difference)
  • Grouping emotions into generic types. (classification)
  • Using physiological changes to detect emotional changes. (dominant now, using physiology)
  • The role of emotion. (the social effect of emotion, person-to-person effects, etc.)
  • Analyzing emotion into its components or aspects.
Because of the different philosophies, there are different theories, such as sensation theories, physiological theories, behavioral theories, evaluative theories, and cognitive theories.

Monday, June 05, 2006

ANOVA

A test to measure the difference between the means of two or more groups.
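Concretely, one-way ANOVA computes F = (between-group mean square) / (within-group mean square). A small C++ sketch with made-up data:

    // One-way ANOVA: F = (SSB / (k-1)) / (SSW / (N-k)).
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<std::vector<double>> groups = {
            {4.1, 3.8, 4.5}, {5.2, 5.9, 5.5}, {3.0, 2.7, 3.3}};  // toy data
        int k = (int)groups.size(), N = 0;
        double grand = 0;
        for (auto& g : groups) for (double v : g) { grand += v; ++N; }
        grand /= N;  // grand mean over all observations

        double ssb = 0, ssw = 0;  // between- and within-group sums of squares
        for (auto& g : groups) {
            double m = 0;
            for (double v : g) m += v;
            m /= g.size();  // group mean
            ssb += g.size() * (m - grand) * (m - grand);
            for (double v : g) ssw += (v - m) * (v - m);
        }
        double F = (ssb / (k - 1)) / (ssw / (N - k));
        std::printf("F(%d, %d) = %.2f\n", k - 1, N - k, F);
    }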

Tuesday, May 09, 2006

group meeting (more about ColorCocktail)

  1. How should hue be incorporated into the color model?
  2. Choose a drawing, analyze the emotion, generate text, show the corresponding result to the input user, and make a cocktail recommendation?
  3. The error calculation (understand it soon)
  4. Related papers on color and emotions.
  5. People drinking cocktails care about sweetness and the taste of alcohol
  6. Draw without context?

Friday, May 05, 2006

A small summary of what I heard these two days & what to do

  1. Data collection could be done as a small game; it may be a bit of a trick, but it lets humans & machines complement each other
  2. Robotics wants to merge with the living room work to make a soft robot
  3. The paper needs revising soon; I really feel everyone's suggestions are great, and I feel sheepish about what I wrote XD
  4. I need to exercise to have the energy to think
  5. User interface? Interaction? vs. mood. I need to find a better theory, figure out how to improve it, and apply it back to an interesting app.

Monday, May 01, 2006

These two weeks

I feel like there has been no progress; that's terrible.
I need to bring research time back in quickly
and not let coursework take it all over >"<

==
The above is a rant!!

It was mentioned in today's meeting
that it is time to decide what to do...
I want to do UI, I want to do emotion, and I want interaction with people.
I am not quite sure how big the scale of the thesis should be.
I need to think it through carefully!

Monday, April 17, 2006

The paper is finished

Writing a paper is a thinking process.
While writing, I kept blaming myself for not reading more;
it was hard to convince even myself....

Still, I came to understand some things from many angles.
For example, with the ColorCocktail paper this time,
I feel the color and mood parts were not explored deeply enough (I should go study some psychology).
Many things that should have been quantified or qualified were not spelled out in detail.

This is a start anyway; next time will be better.
I hope the robotics FINAL will be even greater ^^

Wednesday, April 12, 2006

Robotics Proposal

Robot waiter, the thing we are thinking about.
Currently focused on susi, an agent.
So, what's next?

bartender -> waiter -> ColorCocktail

Writing Paper

These days I have slowly been turning ideas into words.
At first it didn't feel like there was much that was special, but the more I wrote, the more problems appeared,
and suddenly I found many things being thought through because of it.
I am not satisfied with this work myself.
Not managing my time well is one reason, and running out of words makes me dislike it even more.

Let me first summarize the lessons:
1. Remember to split the sections properly into different files; it makes working simultaneously easier
2. Do the references with a tool as you go, instead of just leaving them in the file like this time
3. Things are brewed slowly. Start keeping your ideas from the moment the work starts; figures and tables are even more important
4. Practice writing in English a lot more!

I will add more as I think of it. This is a somewhat forced ending, but I am glad I still gave it a try.
I hope I never live like that again; it doesn't suit me XD I like going slowly with a schedule!!

Friday, April 07, 2006

Mood Light

Found these today while randomly searching around:
http://www.traxon-usa.com/
http://www.mood-light.com/
They look like really fun things...

I also found a book I want to read:
Colors for Your Every Mood: Discover Your True Decorating Colors

And here I am, still writing the ColorCocktail paper. Keep going, keep going!

Tuesday, April 04, 2006

Writing Paper

The paper is harder to write than I imagined.

The wording keeps failing to flow; precise language is needed to describe things,
the ideas in my head have to be explained in detail,
and what was done has to be presented well.

This was my first attempt at a longer piece of technical writing in English.
I gained something from it more or less, though it was hard on my advisor (I think XD).

Keep it up!!

Thursday, March 30, 2006

A few thoughts on LEDs

Today I brought up the LED matter with 理律,
and we discussed it a bit...
Many fun ideas came up.

I previously presented a prototype about LEDs and a wall, the Hello Wall.
理律 is thinking it could be realized by turning it into a folding screen.
Its nature is actually quite similar to that Hello Wall,
but somehow that doesn't feel like enough...

I thought that if every tiny "light dot" were treated as an independent individual
and given simple behaviors so they could evolve and change...,
people might come to treat it as a pet...
If so, human interaction with it would surely increase.
The collective could also be viewed as a single agent, acting as your watchdog (a bit like the screen at polly's door),
only more abstract...

My advisor would rather do something related to children,
such as toys.
That kind of interaction is of course richer, but I suddenly feel there are many unpredictable parameters in it...
The behavior seems very hard to model.

I think this is really interesting...
Nothing is settled yet, but I am looking forward to it. Time to go take a psychology course ^_^

But before that, keep pushing on the paper!!
I promised my advisor a draft by 4/2.

Thursday, March 23, 2006

day of ?

There always has to be a beginning; this is just that.
What are these days really for?