AdaptAI is accepted at CHI 2025 (Late-Breaking Work). [CHI Paper Link 🔗] | [arXiv Paper Link 🔗]
AdaptAI is a research project focused on AI-driven personalization to boost workplace productivity and well-being. By integrating egocentric vision, audio, physiological signals, and motion data, and by leveraging Large Language Models (LLMs) and Vision-Language Models (VLMs), AdaptAI offers:
- Physical Well-being Support: Timely interventions, such as movement reminders and micro-break suggestions.
- Mental Well-being Assistance: A Tone-Adaptive Conversational Agent (TCA) that adapts its tone to provide supportive interactions during periods of stress (see the sketch after this list).
- Task Automation: AI agents that handle small routine tasks such as scheduling meetings and drafting emails.
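As a rough illustration of the tone-adaptation idea, here is a minimal sketch of how a stress signal could steer the agent's system prompt. The stress heuristic, thresholds, and prompt wording are illustrative assumptions, not the values used in the paper:

```python
# Hypothetical sketch of tone adaptation for the TCA. The stress proxy,
# thresholds, and prompts below are illustrative only.

def stress_score(heart_rate_bpm: float, resting_hr_bpm: float = 65.0) -> float:
    """Crude stress proxy: relative elevation of heart rate over a resting baseline."""
    return max(0.0, (heart_rate_bpm - resting_hr_bpm) / resting_hr_bpm)

def tone_system_prompt(score: float) -> str:
    """Pick a system prompt whose tone matches the inferred stress level."""
    if score > 0.4:   # strongly elevated heart rate -> calming, brief replies
        return ("You are a calm, reassuring assistant. Keep answers short, "
                "avoid adding new tasks, and acknowledge the user's workload.")
    if score > 0.15:  # mildly elevated -> supportive but normal detail
        return "You are a supportive assistant. Be encouraging and concise."
    return "You are a neutral, efficient workplace assistant."

if __name__ == "__main__":
    hr = 88.0  # e.g. the latest reading from the heart-rate sensor stream
    print(tone_system_prompt(stress_score(hr)))
```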
We used open-source models hosted on Groq during development and testing. During these phases, some of the models we used were in Groq's preview category. To run AdaptAI, you must provide your own Groq API key.
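A minimal sketch of calling a Groq-hosted model with the official `groq` Python client (`pip install groq`) is shown below. The model id is an example; preview models change, so check Groq's current model list before substituting your own:

```python
import os
from groq import Groq

# Reads the key from the environment: export GROQ_API_KEY=...
client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # example model id, substitute as needed
    messages=[
        {"role": "system", "content": "You are a supportive workplace assistant."},
        {"role": "user", "content": "Suggest a two-minute desk stretch."},
    ],
)
print(completion.choices[0].message.content)
```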
- Logitech C920 HD Pro Webcam (used for egocentric vision and audio)
- MoveSense HR2 (used for ECG, heart-rate monitoring, and motion data)
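For reference, here is a hedged sketch of streaming heart rate from the sensor over BLE using the `bleak` library (`pip install bleak`). It assumes the MoveSense HR2 exposes the standard Bluetooth Heart Rate service; the device address is a placeholder you must replace with your own sensor's address:

```python
import asyncio
from bleak import BleakClient

HR_MEASUREMENT_UUID = "00002a37-0000-1000-8000-00805f9b34fb"  # standard HR characteristic
DEVICE_ADDRESS = "XX:XX:XX:XX:XX:XX"  # placeholder: replace with your sensor's address

def on_hr(_, data: bytearray) -> None:
    # Per the BLE Heart Rate profile: byte 0 holds flags; if bit 0 is clear,
    # the heart rate is a uint8 in byte 1, otherwise a uint16 in bytes 1-2.
    if data[0] & 0x01:
        bpm = int.from_bytes(data[1:3], "little")
    else:
        bpm = data[1]
    print(f"Heart rate: {bpm} bpm")

async def main() -> None:
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(HR_MEASUREMENT_UUID, on_hr)
        await asyncio.sleep(30)  # stream notifications for 30 seconds
        await client.stop_notify(HR_MEASUREMENT_UUID)

asyncio.run(main())
```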
Please cite the following if you reference our work:
@misc{gadhvi2025adaptaipersonalizedsolutionsense,
      title={AdaptAI: A Personalized Solution to Sense Your Stress, Fix Your Mess, and Boost Productivity},
      author={Rushiraj Gadhvi and Soham Petkar and Priyansh Desai and Shreyas Ramachandran and Siddharth Siddharth},
      year={2025},
      eprint={2503.09150},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2503.09150},
}
A big thank you to everyone who took part in our user study and to those who helped create the demo video! A special thanks to Groq for their incredibly fast model inference times!