Description
The increasing adoption of artificial intelligence (AI) is driving massive transformations across many sectors, including finance, robotics, manufacturing, and healthcare. Designing, developing, and deploying reliable and robust AI models is critical for building trustworthy systems that deliver trusted services in high-stakes decision-making, such as AI-assisted robotic surgery, automated financial trading, and autonomous driving. Nevertheless, AI applications are vulnerable to reliability issues, such as concept drift, dataset shift, model misspecification, misconfigured model parameters, perturbations, and adversarial attacks that can deceive both human and machine comprehension, posing tangible threats to stakeholders at every level. This workshop aims to bring together state-of-the-art advances in artificial intelligence to address the challenges of ensuring reliability, security, and privacy in trustworthy systems. Topics of interest include, but are not limited to, (i) trustworthy large AI models, (ii) bias and fairness, (iii) explainability, (iv) robust mitigation of adversarial attacks, (v) improved privacy and security in model development, (vi) scalability, and (vii) resource efficiency.
We invite research on all aspects of learning algorithms that address reliability, robustness, and security. The workshop will provide an excellent opportunity for AI researchers and analytics experts from academia and industry to advance trustworthy AI systems by developing and assessing theoretical and empirical methods, practical applications, and new ideas, and by identifying directions for future study.
Topics of interest include:
Robustness of machine learning/deep learning/reinforcement learning algorithms and trustworthy systems in general.
Confidence, consistency, and uncertainty in model predictions for reliability beyond robustness.
Transparent AI concepts in data collection, model development, deployment and explainability.
Adversarial attacks: evasion, poisoning, extraction, inference, and hybrid attacks.
New solutions that make systems robust and secure against novel or potentially adversarial inputs, and that handle model misspecification, corrupted training data, concept drift, dataset shift, and missing or manipulated data instances.
Theoretical and empirical analysis of reliable/robust/secure ML methods.
Comparative studies against competing methods that lack certified reliability or robustness properties.
Applications of reliable/robust machine learning algorithms in domains such as healthcare, biomedicine, finance, computer vision, natural language processing, big data, and other relevant areas.
Unique societal and legal challenges facing reliability for trustworthy AI systems.
Secure learning from data with high rates of missing values, incompleteness, and noise.
Private learning from sensitive and protected data.
Paper Submission Deadline: 31 October 2025
Notification of Acceptance: 15 November 2025
Camera-Ready Submission: 25 November 2025
Workshop Date: 12 December 2025
Submission Template: ACML2025-Latex
Submission page: https://forms.gle/bBspZ5fu8D224tv99