2nd Workshop on Reliable Machine Learning and AI for High-Performance Distributed Systems: Privacy, Security, Trustworthy, Fairness (PSTF-AI 2025)

Scope

With the rapid advancement of artificial intelligence and machine learning technologies, their deployment in large-scale, mission-critical systems — such as cloud platforms, high-performance computing (HPC) environments, and critical infrastructure like power systems — raises pressing challenges in privacy, security, trustworthiness, and fairness. Ensuring reliability across these domains is essential for safe and ethical AI adoption.

This workshop aims to bring together researchers, industry experts, and practitioners to discuss the latest developments in reliable AI and ML. We focus on methods, frameworks, and applications that address privacy risks, strengthen security, promote fairness, and enhance trustworthiness — especially in distributed, large-scale environments including HPC, cloud computing, and smart energy systems. PSTF-AI 2025 will serve as a platform to exchange ideas, present cutting-edge research, and foster collaborations for building more robust and responsible AI systems.


WORKSHOP AREAS

Topics of interest include, but are not limited to:

  1. Privacy-preserving machine learning and federated learning in cloud and HPC environments
  2. Scalable AI security solutions for high-performance computing infrastructures
  3. Trustworthy AI in large-scale distributed systems and real-time applications
  4. Fairness and bias detection in AI models for industrial and critical infrastructure applications
  5. Cybersecurity and AI-driven threat detection in smart grids and power systems
  6. Digital twins and reliable AI for monitoring and resilience in power and energy networks
  7. Blockchain and decentralized trust management for AI and machine learning ecosystems
  8. Quantum computing and post-quantum cryptography for secure AI deployment
  9. Explainable AI (XAI) frameworks and interpretable models for high-stakes decision systems
  10. Benchmarking and validation frameworks for privacy, security, fairness, and trust in AI across cloud, HPC, and distributed environments

PAPER SUBMISSION

All submissions should be written in English and submitted via our submission system. A paper submitted to PSTF-AI 2025 cannot be under review for any other conference or journal during the entire period that it is under consideration for PSTF-AI 2025, and must be substantially different from any previously published work. Submissions are reviewed in a single-blind manner. Please note that all submissions must strictly adhere to the IEEE templates provided below, which also serve as a formatting guideline. In particular, all submissions must use either the LaTeX template or the MS Word template. Please follow the instructions below exactly to ensure that your submission can ultimately be included in the proceedings. If you have any questions about PSTF-AI 2025, please feel free to contact Dr. Youyang Qu: quyy@sdas.org.


IMPORTANT DATES

  • Full paper due: May 20, 2025
  • Acceptance notification: June 20, 2025
  • Camera-ready copy: July 20, 2025
  • Conference dates: August 15-17, 2025

ORGANIZATION

    GENERAL CHAIRS

    Youyang Qu, Qilu University of Technology (Shandong Academy of Science), Jinan, China
    Longxiang Gao, Qilu University of Technology (Shandong Academy of Science), Jinan, China
    Keshav Sood, Deakin University, Melbourne, Australia

    PROGRAM COMMITTEE

    Jianghua Liu, Nanjing University of Science and Technology, Nanjing, China
    Yueyue Dai, Huazhong University of Science and Technology, Wuhan, China
    Bruce Gu, Qilu University of Technology (Shandong Academy of Science), Jinan, China
    Xuemeng Zhai, University of Electronic Science and Technology of China, Chengdu, China
    Haibo Cheng, City University of Macau, Macau, China


