
Chenyang Zhang [www.chenyang.me]
Statement of Purpose
December 15, 2023

I want to design next-generation spatial user interfaces and intelligent tools. This ambition took root in my middle school years, when the mobile internet was beginning to boom and I was deeply inspired by the design innovations of electronic products such as the iPhone. That interest led me to a multidisciplinary education at ShanghaiTech University, where I studied CS, design, and the humanities, and I further refined my skills and taste through research at UIUC and Harvard. I believe designing human-centered interfaces is critical to applying technology effectively, and I find it fascinating to explore design challenges through a variety of research methods. For these reasons, I am determined to pursue a Ph.D. in HCC at Georgia Tech.
Sparks of Artificial General Intelligence (AGI) and emerging spatial computing platforms open a vast design space for Human-Computer Interaction, and three challenges within it excite me the most. First, spatial computing platforms break the boundaries of traditional screen-based interfaces, calling for new information input and display methods that exploit the high interaction freedom of 3D space. Second, as Large Language Models (LLMs) begin to show AGI capabilities, the boundary between human and AI abilities has become increasingly blurred, so human-AI collaboration requires a more precise division of roles to support human values such as creativity. Lastly, as rapid technological advancement places higher demands on people's digital literacy and adaptability, we need to design user interfaces for technologically underprivileged groups, so that technology becomes a new opportunity for personal growth rather than a barrier. Over the past few years, I have conducted research on several projects spanning these themes, primarily focused on human-centered user interface design. For my Ph.D. studies, I want to take a broader perspective while still addressing the design challenges around these three themes.
Exploring intuitive interaction design for spatial computing. Recognizing that human gaze reveals spatial attention during interaction, I explored using the eyes as an input source for spatial computing headsets and led a research project on VR gaze interaction. Existing gaze interaction methods mainly use gaze direction for target pointing, but they struggle with the Midas Touch problem in target selection: distinguishing a 'looking' intent from a 'choosing' intent. The key problem is that gaze direction does not naturally carry reliable information to differentiate these two intentions, so an additional input signal is needed for selection. After analyzing the design space of eye input, we found that visual depth, calculated from binocular vision, is an intriguing dimension. For instance, we can look through a window at a distant landscape or focus closer to observe dust on the glass; we can intentionally change how far we focus to selectively observe different objects. Inspired by this, we placed a 'virtual window' in front of users so that they can select an object by pulling their focus closer, with the object's information displayed on the window. This idea led to FocusFlow, a hands-free gaze selection method for VR headsets based on visual depth shift. We also designed multiple user studies to evaluate the usability and learnability of this novel eye input technique, and found that immediate, clear visual feedback greatly helps users understand and master the interaction. This intuitive and efficient gaze-depth interaction method has demonstrated application value in both general and professional scenarios; it was showcased as a demo at UIST 2023, and the full version is currently under review. [project page]
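To make the interaction concrete, the sketch below shows how a depth-shift trigger might work. It is my own minimal illustration under assumed parameters (window distance, depth tolerance, dwell time), not the published FocusFlow implementation.

    # Hypothetical sketch of depth-shift selection, not the actual FocusFlow code:
    # a selection fires when the user's estimated gaze depth moves from the scene
    # to a near "virtual window" plane and dwells there long enough.

    WINDOW_DEPTH_M = 0.5      # assumed distance of the virtual window (meters)
    DEPTH_TOLERANCE_M = 0.15  # how close gaze depth must be to the window plane
    DWELL_FRAMES = 18         # ~0.2 s at 90 Hz; filters accidental depth jitter

    def detect_depth_selections(gaze_depths_m):
        """Yield the frame index each time a selection is detected.

        gaze_depths_m: per-frame gaze depth estimates in meters, e.g. derived
        from the vergence (convergence angle) of the two eyes' gaze rays.
        """
        dwell = 0
        for frame, depth in enumerate(gaze_depths_m):
            if abs(depth - WINDOW_DEPTH_M) <= DEPTH_TOLERANCE_M:
                dwell += 1
                if dwell == DWELL_FRAMES:  # fire once per sustained depth shift
                    yield frame
            else:
                dwell = 0  # focus returned to the scene; reset the dwell timer

    # A user views a distant landscape (8 m), then pulls focus to the window
    # plane long enough to trigger exactly one selection.
    samples = [8.0] * 30 + [0.5] * 30 + [8.0] * 10
    print(list(detect_depth_selections(samples)))  # -> [47]

In such a design, the dwell threshold is the key free parameter: it trades selection speed against accidental triggers from natural vergence jitter.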
Supporting human creativity in future human-AI interaction. The advent of LLMs has
provided a powerful technological foundation for human-AI interaction. During the summer of
2023, I explored the collaboration patterns between designers and LLMs at the Harvard Visual
Computing Group. We aimed to leverage the LLM's fuzzy matching capability to translate UI designers' design principles into environment-customized visualization parameters, thereby
enhancing the UI's adaptability to its context. This process revealed a dual ambiguity, in describing both environmental conditions and human needs, that complicates communication between designers and LLMs. While vague design principles ease the designer's burden, they can lead to unstable model decisions; in contrast, highly specific descriptions rely heavily on preset conditions provided by the designer, diminishing the LLM's inferential value. To address this, we proposed a JSON-structured declarative natural language grammar for conveying design constraints to LLMs, and explored ways to define and balance ambiguity in human-AI natural language communication. From this research, I realized that we need to rethink the division of value between humans and AI and bring a more human-centered mindset to AI applications. [project page]
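As a purely illustrative example, a declarative constraint in the spirit of such a grammar might pair a natural-language design principle with structured conditions and parameter bounds; the schema and field names below are my own assumptions, not the project's actual grammar.

    import json

    # Hypothetical design constraint in a JSON-structured declarative form.
    # All field names here are illustrative assumptions, not the project's grammar.
    constraint = {
        "principle": "keep labels readable in bright environments",  # designer intent, in natural language
        "when": {"ambient_lux": {"min": 10000}},                     # sensed environmental condition
        "then": {                                                    # concrete visualization parameters
            "label_contrast_ratio": {"min": 7.0},
            "font_weight": "bold",
        },
        "priority": "high",                                          # hint for resolving conflicting constraints
    }
    print(json.dumps(constraint, indent=2))

Structuring constraints this way keeps the principle human-readable for the designer while giving the LLM explicit, checkable slots to fill, which is one way to bound the ambiguity on both sides.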
I have also approached the value of HCI artifacts from a creativity perspective, seeking to understand how tools can aid the creative process. Joining Professor Sarah Sterman's newly formed lab, which focuses on creativity research, I engaged in qualitative research on the roles of student documentation in Physical Computing education. We first analyzed course syllabi across various majors and educational levels using open coding, developing an understanding of the motivations, tools, content, and learning goals of student-created documentation. We then conducted semi-structured interviews with course instructors from various majors to probe how they value documentation and the challenges they face in teaching practice. This hands-on experience introduced me to qualitative research methodologies, strengthening my ability to derive insights from textual data and to learn directly from our users. [project page]
Making technology accessible to more people. Growing up in a small city in China with limited resources, I benefited from the transformative power of HCI artifacts, symbolized by the digital devices and mobile internet of the 2010s. While appreciating the opportunities technology provided, I also recognized its challenges. During an educational outreach trip in Southwest China, I came to see the complex impact of technology. Despite widespread Internet access enabled by government initiatives, the information divide persists in underdeveloped areas: children are often dangerously exposed to low-quality, sensory-charged online content, and local teachers struggle to integrate multimedia technology effectively into their teaching. These experiences highlighted the limitations of current HCI in marginalized communities and fueled my motivation to develop more inclusive technological solutions.
During my doctoral studies, I aim to integrate these three topics to explore user interface and intelligent tool design for spatial computing scenarios. Specifically, I hope to research telepresence technology in mixed reality, enabling users in different places, or even at different times, to communicate and collaborate as if they were in the same space. This raises fundamental research questions about how to reconstruct the spatial and temporal context of immersive communication and how to interact with objects in the virtual environment. Moreover, spatial computing moves the human-AI collaboration space from 2D screens into 3D environments; how to create more context-aware AI systems, and how to enable more end users to intuitively build customized, flexible virtual tools in mixed reality, excites me greatly. Georgia Tech is an ideal research environment for me: its extensive HCI community offers a solid foundation for my academic journey. In particular, Professor Yalong Yang's research on display and interaction techniques spanning 2D and 3D user interfaces, along with Professor Thad Starner's work on mixed reality interactive systems and multi-sensory interfaces, greatly inspires my spatial computing research. Additionally, the research of Professors HyunJoo Oh and Cindy Xiong on human-centered design, creativity, and cognitive processes offers wonderful perspectives for my artifact research. Within the School of Interactive Computing's interdisciplinary research context, I hope to design intelligent user interfaces that address challenging problems in spatial computing, combining different technologies and research methods and thereby contributing new possibilities to the field of Human-Computer Interaction.

