Ongoing Research Directions

AI-Assisted Social Interventions for Autistic Employment

Collaborative Vocational Search for Autistic Adults (2023-present, sponsored by NSF)

Problem Statement

Autistic adults possess unique strengths that can benefit the workplace, such as hyperfocus, attention to detail, tolerance for repetition, and innovative thinking. However, the unemployment rate for autistic adults has exceeded 80% for over a decade, and an estimated 4.5 million autistic adults, representing 1.9% of the population, remained unemployed in 2024.

Vision and Approaches

This employment disparity is often attributed to autistic traits such as differences in communication style, executive functioning challenges, anxiety around social networking, and emotional dysregulation. In this research, we aim to build empirical insights and human-AI collaboration tools that autistic adults can leverage, together with their social surroundings, to succeed in their vocational search. In particular, we focus on three primary areas: (1) collaborative communication support, (2) collaborative vocational search sensemaking, and (3) collaborative interview preparation.

Impact

The successful implementation of this research will provide practical support for autistic adults struggling with the challenges of job-seeking. By fostering collaboration through innovative interventions, this research seeks to help autistic adults overcome these barriers and secure employment. On a broader societal level, increasing the inclusion of autistic individuals in the workforce will allow them to contribute their unique skills, such as hyperfocus, consistency, and creative problem-solving, toward addressing future challenges.

Collaborator(s) and Partner(s):

Elizabeth Foster (Melwood), Elizabeth Green (LinkTalent), Dave Caudel (Frist Center for Autism and Innovation at Vanderbilt University), Donna Peppard (University of Pennsylvania)

Outcome(s):

  • [CHI2024] Collaborative Job Seeking for People with Autism: Challenges and Design Opportunities
  • [NWRC2024] Collaborative Design for Job-Seekers with Autism: A Conceptual Framework for Future Research

LLM-Driven Agentic System for MCI and Early-Stage Dementia Support

Developing an LLM-Driven Agentic System to Support Individuals with Mild Cognitive Impairment (MCI) or Early-Stage Dementia (2024-Present)

Problem Statement

The global population is aging, and a growing share of older adults live with mild cognitive impairment (MCI), which affects roughly 22.7% of older adults in the United States. Despite technological advances, individuals with MCI and their caregivers struggle to obtain personalized, context-aware guidance suited to their unique behaviors and social environments.

Vision and Approaches

The long-term vision is to create an agentic system that supports individuals with MCI or early-stage dementia by blending state-of-the-art AI with continuous sensing and caregiver collaboration. The system combines three components:

  • LLM-Driven Conversational Intelligence: An LLM serves as the system's conversational core, drawing on physiological data, living environment, dietary habits, and social engagement to tailor advice, personalize reminders and encouragement (e.g., gentle exercise, social interaction), and maintain an empathetic dialogue.
  • Wearable-Based Behavioral Sensing: Wearables continuously monitor physiological signals and daily behaviors (heart rate, heart rate variability, stress levels, sleep quality, and activity data via smartwatches), capturing a real-time picture of the user's state and triggering timely in-app responses or alerts.
  • Caregiver-AI Collaboration Loop: The system incorporates caregiver observations and preferences into an adaptive intervention loop, providing insights and reducing monitoring burden.
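The sensing-to-suggestion pipeline can be sketched in miniature as below. This is an illustrative sketch only, not the deployed system: the data fields, thresholds, and function names are hypothetical stand-ins, and a real deployment would pass the assembled context to the LLM rather than apply fixed rules.

```python
from dataclasses import dataclass

@dataclass
class WearableReading:
    """A snapshot of wearable-sensed state (illustrative fields only)."""
    heart_rate: int      # beats per minute
    sleep_hours: float   # previous night's sleep
    steps: int           # steps so far today

def build_prompt_context(reading: WearableReading, caregiver_note: str) -> dict:
    """Assemble the context an LLM prompt would draw on: sensed state
    plus caregiver observations. Thresholds are placeholder values."""
    flags = []
    if reading.sleep_hours < 6:
        flags.append("poor sleep")
    if reading.steps < 2000:
        flags.append("low activity")
    return {
        "physiology": {"heart_rate": reading.heart_rate},
        "flags": flags,
        "caregiver_note": caregiver_note,
    }

def suggest_intervention(context: dict) -> str:
    """Rule-based stand-in for the LLM's tailored, empathetic suggestion."""
    if "low activity" in context["flags"]:
        return "Gently encourage a short walk or light exercise."
    if "poor sleep" in context["flags"]:
        return "Suggest a calming evening wind-down routine."
    return "Offer a friendly check-in and a social-engagement prompt."
```

In the envisioned system, the `flags` and `caregiver_note` would become part of the LLM's conversational context, closing the caregiver-AI loop.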

Impact

By integrating personalized conversational support, continuous behavioral sensing, and caregiver collaboration, the system aims to deliver timely, context-aware interventions that enhance daily routines, promote brain-healthy behaviors, and support long-term cognitive and emotional well-being, laying the foundation for scalable, human-centered digital health solutions for aging populations.

Collaborator(s) and Partner(s):

N/A

Outcome(s):

  • [JApplGerontol2025] Utilizing Conversational AI Technology for Social Connectedness Among Older Adults: A Systematic Review

Human-AI Collaboration for ADHD Construction Workers in XR

Extending the attention span of ADHD construction workers through human-AI collaboration design (2023-present, sponsored by NSF)

Problem Statement

ADHD, characterized by difficulties with attention, hyperactivity, and impulsivity, increases the risk of injuries in high-risk environments. While ADHD can bring strengths such as creativity and hyperfocus, it can also lead to workplace problems: difficulty sustaining focus, challenges with time management, prioritization, and task organization, missed deadlines, miscommunication, and frequent mistakes.

Vision and Approaches

This research aims to use extended-reality (XR) simulations of future construction job sites, together with biomechanical and psychophysiological metrics, to understand worker–technology interactions during human-machine collaborative tasks for workers with ADHD. The goal is to design AI systems that assess and improve work performance, maintain task focus, create supportive environments, and address safety concerns for these workers in construction. Strategies include body-doubling techniques, continuous performance monitoring, and AI-based stimuli in the XR platform that help workers sustain attention and enhance overall performance.
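The continuous performance-monitoring strategy can be illustrated with a minimal sketch: a sliding window over attention scores that decides when to trigger a refocusing stimulus in the XR scene. The window size, the threshold, and the score scale are hypothetical placeholders, not validated parameters from this project.

```python
from collections import deque

class AttentionMonitor:
    """Sliding-window monitor over psychophysiological attention scores
    (0.0 = fully distracted, 1.0 = fully focused).

    Window size and trigger threshold are illustrative placeholders."""

    def __init__(self, window: int = 5, threshold: float = 0.5):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def update(self, score: float) -> bool:
        """Record a new score; return True when the windowed mean drops
        below the threshold, i.e., a refocusing stimulus (e.g., a visual
        cue in the XR scene) should be triggered."""
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough history yet
        mean = sum(self.scores) / len(self.scores)
        return mean < self.threshold
```

A windowed mean rather than a single reading avoids reacting to momentary fluctuations, which matters when stimuli themselves can become distractions.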

Impact

AI-based adaptive stimuli in XR can significantly benefit workers with ADHD by creating a personalized, responsive environment that helps maintain focus and engagement. By employing real-time techniques responsive to a worker's cognitive state, this approach aims to reduce distractions, sustain attention, and improve overall task performance and productivity.

Collaborator(s) and Partner(s):

N/A

Outcome(s):

N/A

Multi-Robot and Multi-Video Sensemaking for Public Safety and Disaster Recovery

Multi-robot + Multi-video Sensemaking for Public Safety and Disaster Recovery (2023-present)

Problem Statement

Ground robots are increasingly available and autonomous, generating video that can enhance situational awareness for public servants (e.g., police officers ensuring public safety, disaster responders assessing damaged infrastructure or locating people and animals in need). Robots can perform surveillance in areas where aerial imagery is unavailable. The challenge is that analyzing vast amounts of video from multiple robots, and issuing commands to them, demands significant human attention. Visual evidence collection through robots is growing, but dedicated designs and technologies for supporting human reasoning in this context remain underdeveloped.

Vision and Approaches

The project aims to design, develop, deploy, and evaluate datasets, scenarios, and interactive systems that facilitate multi-robot + multi-video sensemaking for public sector workers to achieve situational awareness for both real-time decision-making and post-event investigations. Situational awareness applications include public safety (e.g., addressing assault, vandalism) and disaster recovery (e.g., clearing roads after a hurricane).
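One building block of such a system is triaging many concurrent robot video streams so a single operator sees the most urgent clips first. The sketch below assumes urgency scores arrive from upstream video-understanding models; the class, field names, and scores are hypothetical illustrations, not part of the project's actual design.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Clip:
    """A scored video clip from one robot; ordering compares urgency only."""
    urgency: float                       # higher = more urgent
    robot_id: str = field(compare=False)
    summary: str = field(compare=False)  # upstream model's description

def triage(clips: list[Clip], top_k: int = 2) -> list[Clip]:
    """Surface the top_k most urgent clips so one operator can keep
    situational awareness across many robots at once."""
    return heapq.nlargest(top_k, clips)
```

For example, an operator dashboard could call `triage` each refresh cycle, showing the highest-urgency feeds prominently and queuing the rest for post-event review.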

Impact

The ultimate goal is to improve the efficiency and effectiveness of group robot operations for public safety and disaster recovery, minimize officers' effort and risk, and foster seamless collaboration between humans, videos, and robots. This will help society leverage robots and videos for situational awareness and contribute to the HCI, Robotics, and Computer Vision communities through a conceptual framework of multi-robot, multi-video sensemaking for social-good scenarios, benchmark datasets and tasks, and advances in video understanding and interactive control of robot groups.

Collaborator(s) and Partner(s):

Steve Peterson (NIH), Michael Lighthiser (GMU)

Outcome(s):

N/A

Scalable Human-in-the-Loop and Actionable Explainable AI

Connecting AIs and Humans through Scalable Human-in-the-Loop and Actionable XAI (2021-present)

Problem Statement

An unexpected AI failure can have severe consequences for human lives, affecting safety, productivity, trust, and ethics. Understanding an AI model's vulnerabilities is essential, yet challenging and resource-intensive for machine learning (ML) engineers.

Vision and Approaches

The goal is to develop novel human-AI collaboration designs that help ML engineers investigate and fix AI vulnerabilities more efficiently. The approach rests on two methodological pillars: (1) Scalable Human-in-the-Loop—maintaining a reasonable level of human input in the modeling process, especially with large-scale data; (2) Actionable XAI (explainable AI)—enabling humans to convert insights gained from XAI-based assessment of a model's decision-making into direct actions that update the model.
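The actionable-XAI idea of turning explanation feedback into a model update is often realized as a training objective that combines the task loss with an explanation-alignment penalty. The sketch below illustrates that general pattern in NumPy; the weighting and the mean-squared penalty are illustrative choices, not the specific formulations used in the publications listed under Outcomes.

```python
import numpy as np

def explanation_alignment_loss(task_loss: float,
                               model_attr: np.ndarray,
                               human_mask: np.ndarray,
                               lam: float = 0.1) -> float:
    """Combine the task loss with a penalty that pulls the model's
    attribution map toward a human-annotated importance mask, so human
    feedback on explanations directly updates the model during training.

    `lam` is an illustrative weight; real systems tune it per dataset."""
    penalty = float(np.mean((model_attr - human_mask) ** 2))
    return task_loss + lam * penalty
```

Minimizing this combined objective with gradient descent makes the human's correction of a wrong explanation an "action" on the model, rather than a passive observation.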

Impact

The proposed solutions can enhance human-AI interaction, give ML engineers a better capability to "communicate" with future AI models, and empower ML engineers across diverse data types (images, videos, text) to build more reliable and controllable AI.

Collaborator(s) and Partner(s):

Liang Zhao (Emory University), Young-Ho Kim (Naver)

Outcome(s):

  • [ICDM2021] GNES: Learning to Explain Graph Neural Networks
  • [KDD2022] RES: A Robust Framework for Guiding Visual Explanation
  • [CSCW2022] Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment
  • [CSCW2023] Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
  • [CSCW2024] 3DPFIX: Improving Remote Novices' 3D Printing Troubleshooting through Human-AI Collaboration

Completed Research Directions