FLEAT VI
Thursday, August 13 • 1:35pm - 2:00pm
'Interactivity' and Digital Content in 2015: Illustrations and Reflections


Learner interactivity with digital content is usually determined by language-based responses and the use of menus, icons, and pointers. However, technologies increasingly available in our laptops, smartphones and game consoles enable the simultaneous processing of a user’s gaze, facial expression, speech and hand gestures.

Spearheaded by Intel’s $100 million initiative, DARPA, and Samsung, these emerging technologies are becoming less expensive and more commonplace. These “sensing” technologies allow the detection and logging of explicit and implicit responses from learners, providing a wealth of data pertaining to a learner’s attentional, affective, and cognitive states.

Detecting a user’s gaze may yield empirical evidence of attentional processes (Smith, 2012; Winke et al., 2013) and provide information about what learners are looking at when they are reading a text or watching a video.

Emotion-recognition technologies can detect a learner’s emotions well enough to make inferences about his/her affect and attentional resources (Moods, 2014). For example, a “sensitive artificial listener” (SAL) can detect and process gaze and facial expression, thus enabling it to respond to the learner with more appropriate listening behavior (Schröder et al., 2012) and “backchanneling” (e.g., head movement, brief vocalizations, glances, and facial expressions).

Computing applications using the Intel RealSense SDK enable learners to manipulate objects on a screen using finger and hand movements. Detecting gesture in the assessment of comprehension can eliminate the need to rely exclusively on selecting multiple-choice answers or typing for younger or less verbal learners.

Bi-directional video, such as Kinect Sesame Street TV, enables children to engage in two-way conversations with onscreen characters, which react to the child’s physical and spoken responses to questions and suggestions (Rothschild, 2013).

Through a variety of short video illustrations, this talk will provide a glimpse of some intriguing applications and promote reflection on the nature of possible online “interactivity”.

Speakers
Karen Price

Boston University

Thursday August 13, 2015 1:35pm - 2:00pm EDT
Barker 110 (Thompson) 12 Quincy St, Cambridge, MA