
Creating the VA involves a large interdisciplinary team that tackles the many challenges of building such a system. For example, the project's lawyers, designers, and developers must carefully inspect and integrate the EU's ethics-by-design regulations (EU). These regulations require that AI systems make their logic clearly understandable to most users, a problem that many AI researchers have been investigating due to the difficulty of representing complex systems (Sivaraman et al.). Another major challenge is preserving the agency of clinicians so that the VA assists in analysis without disrupting or taking over the role of the analyst (Li et al.).
To preserve that agency, the VA will not interpret data or draw conclusions for users; instead, it focuses on carrying out analyses defined by the user and suggesting additional analyses they could conduct. Technical challenges include improving the system's performance so that the VA can understand and carry out as many of the users' commands as possible, and ensuring that it responds within seconds.
At the time of writing, 130 clinicians have participated in the project through interviews, workshops, or prototype testing. Following the human-centered design cycle, we involve clinicians in every step to review, guide, and test our work, and their feedback continuously shapes the design and development of the system so that its functionality aligns with clinical needs. During research and problem definition, interviews and workshops with clinicians deepen our understanding of how shortcomings in AI systems (e.g., an AI that does not explain or visualize its logic) might affect clinical use. During design, clinicians critique our work or participate in co-design workshops, where they take greater control over shaping the VA's behavior. During development and evaluation, large groups of clinicians receive early prototypes of the VA to test it, provide feedback, and share data about its user experience (UX). Finally, once we analyze data from all these activities, we return to clinicians with our conclusions to discuss next steps. All of these activities also provide data that we use to train the VA to better understand clinicians' requests and to create new responses.
Unlike many AI systems, the VA will not automatically learn and adapt over time. Instead, it will log interactions (e.g., which analyses users conducted), and a team of designers and developers will use these data to manually update the VA with an improved model and new features. New versions will also be tested with users before deployment. This ensures that future versions of the VA maintain the same clinical validity and that human overseers can understand the VA's evolution over time, as required by the EU's ethics-by-design regulations.
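To illustrate what such interaction logging could look like, here is a minimal Python sketch. The field names, the JSONL log format, and the `append_log` helper are all our own assumptions for illustration, not the project's actual implementation.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class InteractionLogEntry:
    """One logged VA interaction; all field names are illustrative."""
    command: str        # what the user asked for, e.g. "compare cohorts"
    analysis_type: str  # category of analysis the user conducted
    va_version: str     # model/feature version that handled the request
    succeeded: bool     # whether the VA could carry out the command
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_log(entry: InteractionLogEntry,
               path: str = "va_interactions.jsonl") -> None:
    """Append one entry as a JSON line for later manual review by the team."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```

An append-only log like this keeps a complete, human-readable record of every version's behavior, which is what lets overseers reconstruct how the VA evolved between manual updates.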
The VA will comply with GDPR and RES-Q’s data protections, processing only aggregated data without accessing user or patient information. It determines required analyses and informs RES-Q without handling sensitive details. A team of developers, lawyers, and GDPR experts continuously reviews its protocols to ensure compliance with evolving privacy standards.

26. 6. 2025 | Hamzah Ziadeh