An LED needs a series resistor.
Suppose the LED is rated at x V, with a maximum operating current of y mA.
Then the series resistance should be (supply voltage - x) / z, where z is the chosen current in amperes and z <= y.
Typically z = 0.01 (i.e. 10 mA).
So a red 1.8 V LED on a 5 V supply needs roughly a 330-ohm resistor.
How to wire it:
the LED's long leg goes to the 5 V supply,
the short leg goes to the resistor,
and the resistor's other leg goes to ground.
Diagram to be added later.
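The formula above can be sanity-checked in a few lines of Python (a minimal sketch; the 5 V supply and rounding up to the E12 standard resistor series are assumptions, not part of the original note):

```python
# Series resistor for an LED: R = (V_supply - V_forward) / I.
# E12 is the common standard resistor value series; we round UP to the
# next stock value so the actual current stays below the chosen limit.
E12 = [10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82]

def led_resistor(v_supply, v_forward, i_amps=0.01):
    """Return the exact resistance and the next-larger E12 value (ohms)."""
    r = (v_supply - v_forward) / i_amps
    # Expand E12 across decades and pick the smallest value >= r.
    candidates = sorted(v * 10**d for d in range(6) for v in E12)
    standard = next(c for c in candidates if c >= r)
    return r, standard

exact, standard = led_resistor(5.0, 1.8)   # red LED, 5 V supply, 10 mA
print(exact, standard)                      # 320.0 330
```

This reproduces the note's figure: (5 - 1.8) / 0.01 = 320 ohms, which rounds up to the stock 330-ohm value.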
Thursday, January 18, 2007
Tuesday, January 02, 2007
[Reading] XPod: a Human Activity Aware Learning Mobile Music Player
http://ebiquity.umbc.edu/paper/html/id/335/XPod-A-Human-Activity-Aware-Learning-Mobile-Music-Player
Sandor Dornbush, Jesse English, Tim Oates, Zary Segall, and Anupam Joshi
Jan 08, 2007
I think the work is an ongoing one. In the paper to be presented on Jan 08, 2007 (an upcoming date), the writers appear to have taken out the emotion part, which is the one I am interested in. They try different methods, including decision trees, AdaBoost, SVM, KNN, and neural networks, to classify 5 different states. They collect GSR, acceleration (2D), skin temperature, BVP, time, song information, and beats per minute to predict how a user would rate a song in the future. They have 565 training instances. The result that considers states is a little better than the one without states. More precisely, the states are built from the physiological information gathered from the sensors. The results range from 31.87% to 46.72%, and the mean squared error ranges from 0.17 to about 0.45.
I think the results are not very good. They should collect more training data, and some more interesting points should be added to XPod.
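The classifier comparison described above can be sketched with scikit-learn (a hypothetical reconstruction: the synthetic data, feature layout, and model settings are my assumptions; the paper only names the five classifier families, 7-odd sensor features, 5 states, and 565 instances):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 565 labelled instances: 7 sensor-derived
# features (GSR, accel-x, accel-y, skin temp, BVP, time, BPM) and one
# of 5 user states as the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(565, 7))
y = rng.integers(0, 5, size=565)

classifiers = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "neural net": MLPClassifier(max_iter=500, random_state=0),
}

# 5-fold cross-validated accuracy per classifier, as a rough analogue
# of the paper's per-method result range.
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%}")
```

On random labels like these, every method hovers near the 20% chance level for 5 classes, which is a useful baseline to keep in mind when reading the paper's 31.87%–46.72% range.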
Monday, January 01, 2007
[Reading] Using Human Physiology to Evaluate Subtle Expressivity of a Virtual Quizmaster in a Mathematical Game
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WGR-4F4WYNR-1&_coverDate=02%2F01%2F2005&_alid=516189552&_rdoc=1&_fmt=&_orig=search&_qd=1&_cdi=6829&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=c9c7e4550c1e39c2d84e639d3e50adcd
Helmut Prendinger and Junichiro Mori and Mitsuru Ishizuka
year 2003.
Abstract: The aim of the experimental study described in this article is to investigate the effect of a life-like character with subtle expressivity on the affective state of users. The character acts as a quizmaster in the context of a mathematical game. This application was chosen as a simple, and for the sake of the experiment, highly controllable, instance of human–computer interfaces and software. Subtle expressivity refers to the character's affective response to the user's performance by emulating multimodal human–human communicative behavior such as different body gestures and varying linguistic style. The impact of empathic behavior, which is a special form of affective response, is examined by deliberately frustrating the user during the game progress. There are two novel aspects in this investigation. First, we employ an animated interface agent to address the affective state of users rather than a text-based interface, which has been used in related research. Second, while previous empirical studies rely on questionnaires to evaluate the effect of life-like characters, we utilize physiological information of users (in addition to questionnaire data) in order to precisely associate the occurrence of interface events with users’ autonomic nervous system activity. The results of our study indicate that empathic character response can significantly decrease user stress and that affective behavior may have a positive effect on users’ perception of the difficulty of a task.
Keyword: Life-like characters; Affective behavior; Empathy; Physiological user information; Evaluation
==== After Read ====
The writers use physiological signals to evaluate a user interface and the interaction between a human and a computer game. Their primary hypothesis is that if a life-like character provides affective feedback to the user, it can effectively reduce user frustration and stress. They use bio-sensors including GSR and BVP, plus a short questionnaire. The only feature they extract from each sensor signal is its mean. The game is easy: summing up five given numbers. The results show that the hypothesis holds, except for the relation between game score and empathy.
The feature they extract is very simple, and they use only a few sensors, yet the results are good. I think the writers want to tell us that just a few sensors are enough to demonstrate the benefit of a well-designed user interface. More applications could be derived from this work.
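The single feature used above, the mean of each physiological signal over a window after an interface event, is easy to reproduce (a minimal sketch; the sampling rate, window length, and fabricated GSR trace are my assumptions, not values from the paper):

```python
import numpy as np

def window_means(signal, events, rate_hz, window_s=5.0):
    """Mean of a physiological signal in a fixed window after each event.

    signal  : 1-D array of samples (e.g. a GSR or BVP trace)
    events  : event onset times in seconds (e.g. empathic response shown)
    rate_hz : sampling rate of the signal
    """
    n = int(window_s * rate_hz)
    means = []
    for t in events:
        start = int(t * rate_hz)
        means.append(signal[start:start + n].mean())
    return np.array(means)

# Fabricated 20 s GSR trace at 32 Hz: baseline 2.0, then a jump to 3.0
# at t = 10 s, with one quiz event in each half.
gsr = np.concatenate([np.full(320, 2.0), np.full(320, 3.0)])
print(window_means(gsr, events=[0.0, 10.0], rate_hz=32))
```

Comparing such per-event means between the empathic and non-empathic conditions is essentially the whole analysis pipeline the study needs, which supports the point that a few sensors and a trivial feature can already be informative.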
Labels: [readings], application, evaluation, HCI, related work
Saturday, December 30, 2006
[Reading] XPod - A Human Activity and Emotion Aware Mobile Music Player
http://ebiquity.umbc.edu/paper/html/id/280/
XPod - a human activity and emotion aware mobile music player
Authors: Sandor Dornbush, Kevin Fisher, Kyle McKay, Alex Prikhodko, and Zary Segall
Book Title: Proceedings of the International Conference on Mobile Technology, Applications and Systems
Date: November 17, 2005
Abstract: In this paper, we consider the notion of collecting human emotion and activity information from the user, and explore how this information could be used to improve the user experience with mobile music players. This paper proposes a mobile MP3 player, XPod, which is able to automate the process of selecting the song best suited to the emotion and the current activity of the user. The XPod concept is based on the idea of automating much of the interaction between the music player and its user. The XPod project introduces a "smart" music player that learns its user's preferences, emotions and activity, and tailors its music selections accordingly. The device is able to monitor a number of external variables to determine its user's levels of activity, motion and physical states to make an accurate model of the task its user is undertaking at the moment and predict the genre of music would be appropriate. The XPod relies on its user to train the player as to what music is preferred and under what conditions. After an initial training period, the XPod is able to use its internal algorithms to make an educated selection of the song that would best fit its user's emotion and situation. We use the data gathered from a streaming version of the BodyMedia SenseWear to detect different levels of user activity and emotion. After determining the state of the user the neural network engine compares the user's current state, time, and activity levels to past user song preferences matching the existing set of conditions and makes a musical selection. The XPod system was trained to play different music based on the user’s activity level. A simple pattern was used so the state dependant customization could be verified. XPod successfully learned the pattern of listening behavior exhibited by the test user. As the training proceeded the XPod learned the desired behavior and chose music to match the preferences of the test user. 
XPod automates the process of choosing music best suited for a user’s current activity. The success of the initial implementation of XPod concepts provides the basis for further exploration of human- and emotion-aware mobile music players.
==== After Reading ====
It is a prototype application. They collect acceleration, galvanic skin response (GSR), skin temperature, heat flow, and near-body temperature as input data. Using a fully connected neural network, the system predicts whether the user will skip a song. It looks like the products recently provided by Nike and Adidas.
But I think the claim of choosing songs by emotion detection is a little weak. Most of what they did is feed the bio-sensor signals into a neural network; they considered the changes in the signals, but they did not explain how emotion is taken into account or how they infer the result.
Moreover, I think some bio signals can help the recognition of activity, especially its 'activeness'.
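A fully connected network of the kind described, mapping the five sensor inputs to a skip/no-skip decision, could look like the following (a from-scratch sketch on synthetic data; the architecture, training rule, and the fake "skip" labels are my assumptions, since the paper does not give them):

```python
import numpy as np

rng = np.random.default_rng(1)

# Inputs per the paper: acceleration, GSR, skin temperature, heat flow,
# near-body temperature. Target: 1 if the user skipped the song.
X = rng.normal(size=(200, 5))
y = (X[:, 1] + X[:, 3] > 0).astype(float)   # synthetic skip rule

# One hidden layer, sigmoid output, plain batch gradient descent on
# binary cross-entropy.
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # predicted P(skip)
    d_out = (p - y)[:, None] / len(X)   # dLoss/dlogit for cross-entropy
    d_h = (d_out @ W2.T) * (1 - h**2)   # backprop through tanh
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.0%}")
```

Note that nothing in this pipeline is specifically "emotion": the network just maps raw signal values to a preference, which is exactly the weakness pointed out above.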