ENHANCING STROKE CARE WITH VIRTUAL ASSISTANTS: CHALLENGES AND INNOVATIONS
Virtual assistants (VAs) are transforming healthcare by using AI to support patients and physicians. They answer questions, collect patient data, and simplify complex analyses, making healthcare more efficient and personalized. For physicians, VAs help with demanding tasks such as analyzing patient data and predicting long-term health outcomes. By automating routine work and offering insights into care quality, VAs let clinicians focus on making informed decisions and improving patient care. Though still in development, these systems are set to revolutionize stroke care by enhancing efficiency, improving outcomes, and smoothing the entire process for both patients and healthcare providers. So how is this tool being developed?
The development of the clinician assistant follows the five-step human-centered design cycle of research, defining problems, designing solutions, development, and user evaluation (Harte et al.). The VA uses the Rasa open-source framework (Rasa open-source), which provides an accessible platform for building virtual-assistant models. Unlike generative AIs such as ChatGPT, the VA cannot formulate new sentences or execute arbitrary commands. Instead, it uses only responses and workflows that have been created and evaluated with clinicians and clinical data scientists. While this limits the VA's abilities, it eliminates the risk of providing incorrect suggestions and information. The VA is also open-source, meaning that all of its code, logic, design, and documentation will be available in an understandable form for users who want to inspect the system and gain a better understanding of how it works.

Creating the VA involves a large interdisciplinary team to tackle the many challenges of building the system. For example, the lawyers, designers, and developers in the project must carefully inspect and integrate the EU's ethics-by-design regulations (EU). These regulations require that AI systems make their logic clearly understandable to most users, a problem that many AI researchers have been investigating because complex systems are difficult to represent in an understandable way (Sivaraman et al.). Another major challenge is preserving the agency of clinicians, so that the VA assists in analysis but does not disrupt or take over the role of the analyzer (Li et al.).
To ensure agency, the VA will not assist users with interpreting data or coming to conclusions. Instead, it focuses on carrying out analyses defined by the user and suggesting additional analyses that they could conduct. Technical challenges include improving the system's performance so that the VA understands and carries out as many user commands as possible, and ensuring that it responds within seconds.
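One way to picture this division of labor, assuming a simple analysis catalogue (all analysis names here are invented, not the project's actual catalogue), is a function that runs only the analysis the user asked for, returns raw figures without any interpretation, and attaches a list of related analyses the clinician might choose next:

```python
# Sketch of agency-preserving behavior: the assistant executes the user's
# chosen analysis and proposes related follow-ups, but never interprets the
# results or draws conclusions. The catalogue below is illustrative.

RELATED_ANALYSES = {
    "door_to_needle": ["door_to_imaging", "thrombolysis_rate"],
    "thrombolysis_rate": ["door_to_needle"],
}

def run_analysis(name: str, values: list) -> dict:
    # Execute only what the user requested and return raw numbers; the
    # clinician decides what the numbers mean.
    return {
        "analysis": name,
        "mean": sum(values) / len(values),
        "suggestions": RELATED_ANALYSES.get(name, []),
    }

out = run_analysis("door_to_needle", [42.0, 38.0, 55.0])
print(out["mean"])         # raw figure for the clinician to interpret
print(out["suggestions"])  # optional next steps, chosen (or not) by the user
```

The suggestions are deliberately non-binding: they widen the clinician's options rather than steering the conclusion.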
At the time of writing, 130 clinicians have been involved in the project by participating in interviews, workshops, or prototype testing. Following the human-centered design cycle, we involve clinicians in every step to review, guide, and test our work, and this feedback continuously shapes the design and development of the system so that its functionality aligns with clinical needs. When researching and defining the AI-related problems we need to overcome, interviews and workshops with clinicians guide our understanding of how shortcomings in AI systems (e.g., an AI that does not explain or visualize its logic) might influence clinical use. When designing the VA, clinicians may critique our work or even participate in co-design workshops where they have more control over shaping the VA's behavior. During development and evaluation, large groups of clinicians get access to early prototypes of the VA to test it, provide feedback, and share data about its user experience (UX). Finally, once we analyze data from all these activities, we return to clinicians with our conclusions to discuss future steps. Moreover, all of these user activities provide data that we use to train the VA to better understand clinicians' requests and to create new responses.
Unlike many AI systems, the VA will not automatically learn and adapt over time. Instead, the VA will log interactions (e.g., which analyses users conducted), and a team of designers and developers will use this data to manually update the VA with an improved model and new features. New versions will also be tested with users before being deployed. This ensures that future versions of the VA maintain the same clinical validity and that human overseers can understand the VA's evolution over time, as required by the EU's ethics-by-design regulations.
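A minimal sketch of such interaction logging, assuming a JSON-lines log and invented field names, might look like the following. Note that only the kind of request is recorded, never results or patient data:

```python
# Sketch of interaction logging for manual review: the assistant appends one
# JSON line per interaction recording *which* analysis was requested, and a
# development team later reads the log to plan the next hand-curated release.
# Field names are illustrative, not the project's actual schema.
import datetime
import io
import json

def log_interaction(logfile, intent, analysis=None):
    """Append one JSON line describing an interaction to an open file."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "intent": intent,
        "analysis": analysis,  # name of the requested analysis only, no results
    }
    logfile.write(json.dumps(entry) + "\n")

# Example usage with an in-memory buffer standing in for a real log file.
buf = io.StringIO()
log_interaction(buf, "run_analysis", "door_to_needle")
record = json.loads(buf.getvalue())
print(record["intent"], record["analysis"])
```

Because the log is append-only and human-readable, overseers can trace exactly which usage patterns motivated each manual update.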
The VA will comply with GDPR and RES-Q’s data protections, processing only aggregated data without accessing user or patient information. It determines required analyses and informs RES-Q without handling sensitive details. A team of developers, lawyers, and GDPR experts continuously reviews its protocols to ensure compliance with evolving privacy standards.
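The separation of duties described here could be pictured roughly as follows. This is a sketch under the assumption of a request/response split between assistant and registry; all names and numbers are invented, not taken from RES-Q:

```python
# Sketch of the privacy boundary: the assistant only builds an analysis
# *request* and forwards it; the registry computes on its own data and
# returns aggregates, so no record-level patient data ever passes through
# the assistant. All names and figures below are illustrative.

def build_request(analysis, hospital_group):
    # The assistant's only output: a description of what to compute.
    return {"analysis": analysis, "group": hospital_group, "level": "aggregate"}

def registry_stub(request):
    # Stand-in for the registry side (e.g. RES-Q): it would run the analysis
    # internally and return aggregated figures only, never patient records.
    assert request["level"] == "aggregate"
    return {"analysis": request["analysis"], "n_patients": 128, "mean_minutes": 46.5}

req = build_request("door_to_needle", "region_a")
summary = registry_stub(req)
print(summary["analysis"])
```

The design choice is that sensitive computation stays on the registry's side of the boundary, which keeps the assistant itself out of scope for record-level GDPR processing.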

26. 6. 2025 | Hamzah Ziadeh