2018

Wrobleski, Brad; Ivanov, Alexander; Eidelberg, Eric; Etemad, Katayoon; Gadbois, Denis; Jacob, Christian
Lucida: Enhancing the Creation of Photography through Semantic, Sympathetic, Augmented Voice Agent Interaction (Proceedings Article)
In: International Conference on Human Computer Interaction, pp. 200-216, Springer, 2018.
@inproceedings{B2018,
title = {Lucida: Enhancing the Creation of Photography through Semantic, Sympathetic, Augmented Voice Agent Interaction},
author = {Brad Wrobleski and Alexander Ivanov and Eric Eidelberg and Katayoon Etemad and Denis Gadbois and Christian Jacob},
doi = {10.1007/978-3-319-91250-9_16},
year = {2018},
date = {2018-07-15},
urldate = {2018-07-15},
booktitle = {International Conference on Human Computer Interaction},
pages = {200-216},
organization = {Springer},
abstract = {We present a dynamic framework for the integration of Machine Learning (ML), Augmented Reality (AR), Affective Computing (AC), Natural Language Processing (NLP), and Computer Vision (CV) to make possible the development of a mobile, sympathetic, ambient (virtual), augmented intelligence (Agent). For this study we developed a prototype agent to assist photographers in the learning and creation of photography. Learning the art of photography is complicated by the technical complexity of the camera, the limitations of the user's ability to see photographically, and the lack of real-time instruction and emotive support. The study examined the interaction patterns between human student and instructor, the disparity between human vision and the camera, and the potential of an ambient agent to assist students in learning. The study measured the efficacy of the agent and its ability to transmute the human-to-human method of instruction into human-to-agent interaction. This study illuminates the effectiveness of agent-based instruction. We demonstrate that a mobile, semantic, sympathetic, augmented-intelligence ambient agent can improve the learning of photography metering in real time, 'on location'. We show that the integration of specific technologies and design produces an effective architecture for the creation of augmented agent-based instruction.},
keywords = {machine learning},
pubstate = {published},
tppubtype = {inproceedings}
}