A Parisian Chapter in 6G Security: Concluding My Secondment Experience

The team at Montimage

My secondment at Montimage in Paris marked an exciting and technically enriching chapter in my Ph.D. journey, focused on the design and development of AI-driven security solutions for 6G networks. In this second and final blogpost about my secondment, I summarize both the technical work and life in Paris over the four months it spanned. The work primarily explored how Explainable AI (XAI) can be integrated into 6G security systems to make machine learning (ML) models more transparent, trustworthy, and resilient against adversarial attacks.

Building Trustworthy AI for 6G

The central aim of my research during this period was to strengthen the trustworthiness of AI models used in 6G environments, particularly for adversarial attack detection. As AI/ML models become deeply embedded in 6G architecture, handling tasks like traffic prediction, dynamic resource management, and attack detection, they also become potential targets for manipulation. Adversarial examples, data poisoning, and model evasion attacks represent real threats that can degrade performance or compromise entire network segments.
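To make the first of these threats concrete, the short sketch below crafts an adversarial example with the fast gradient sign method (FGSM): each input feature is nudged in the direction that most increases the classifier's loss. The toy classifier and the 20-feature flow representation are purely illustrative and not taken from any Montimage component.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Craft an FGSM adversarial example: take one signed-gradient step
    that increases the classifier's loss on the true label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each feature by epsilon in the sign of its gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Illustrative usage with a toy traffic classifier (benign vs. malicious).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(1, 20)   # one flow described by 20 numeric features
y = torch.tensor([0])    # ground-truth label: benign
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())
```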

To counter these challenges, my work focused on integrating XAI techniques into the AI pipeline, not just for post-hoc interpretability but as an active defense mechanism. The idea was to design systems that can detect evasion attacks through their explanations, a form of explanation-based security.
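As a rough sketch of the concept (not the mechanism actually built during the secondment), the snippet below compares a sample's feature attributions against the explanation profile of clean traffic and flags strong deviations. In practice, SHAP or LIME attributions would likely replace the simple leave-one-out scores used here, and the data is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for a traffic classifier trained on clean flow features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
feature_means = X_train.mean(axis=0)

def attribution(x: np.ndarray) -> np.ndarray:
    """Leave-one-out attribution: how much the predicted 'malicious'
    probability changes when each feature is replaced by its mean."""
    base = clf.predict_proba(x.reshape(1, -1))[0, 1]
    scores = np.empty(x.size)
    for i in range(x.size):
        x_masked = x.copy()
        x_masked[i] = feature_means[i]
        scores[i] = base - clf.predict_proba(x_masked.reshape(1, -1))[0, 1]
    return scores

# Baseline explanation profile computed on a slice of clean traffic.
clean_norms = np.array([np.linalg.norm(attribution(x)) for x in X_train[:100]])

def looks_evasive(x: np.ndarray, z: float = 3.0) -> bool:
    """Flag samples whose explanation deviates strongly from the clean baseline."""
    score = np.linalg.norm(attribution(x))
    return abs(score - clean_norms.mean()) > z * clean_norms.std()
```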

One of my major tasks during the secondment was to assess the practical role of XAI in mitigating adversarial threats to AI models. Through literature reviews, I observed that while interpretability has been widely discussed, its role in direct attack prevention and resilience enhancement is still underexplored.

The famous statue of Nike in the Louvre Museum

Exploring Montimage’s AI Ecosystem

At the beginning of the secondment, I familiarized myself with two of Montimage’s flagship open-source tools: Montimage AI Platform (MAIP) and MMT-Probe.

  • MAIP is a versatile AI platform for network traffic analysis, integrating capabilities such as data preprocessing, feature extraction, model training, and explainability visualization.
  • MMT-Probe, on the other hand, is a high-performance packet analysis tool capable of parsing .pcap files and extracting hundreds of protocol-level features across more than 700 network protocols. This level of granularity made it possible to create rich, labeled datasets for AI-driven traffic analysis and intrusion detection.

Working with these tools gave me hands-on experience with real-world telecom data and a clear understanding of how AI and network monitoring intersect in industrial-grade systems. It also enabled me to identify where XAI mechanisms could be embedded directly into operational pipelines, enhancing both transparency and defense capabilities.
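As a rough illustration of what such a pipeline can look like, assuming per-flow features have already been exported to a CSV file (the file name, the label column, and the classifier choice below are hypothetical placeholders, not MAIP or MMT-Probe APIs), a minimal training-plus-explainability sketch might be:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical CSV of per-flow features extracted from packet captures.
df = pd.read_csv("flow_features.csv")
X = df.drop(columns=["label"])
y = df["label"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# Global feature importances give a first, coarse layer of explainability
# before per-sample attributions are wired into the monitoring pipeline.
top = sorted(zip(X.columns, clf.feature_importances_), key=lambda t: -t[1])[:10]
for name, score in top:
    print(f"{name:30s} {score:.3f}")
```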

Academic Meets Industry

This secondment provided an excellent bridge between academic research and industrial implementation. Collaborating with Montimage’s AI and development teams allowed me to refine theoretical ideas into practical, deployable solutions. I received invaluable mentorship from experts in network security, AI model validation, and protocol analysis, while contributing my own insights on XAI-driven defense mechanisms.

Beyond the technical work, the experience also enhanced my perspective on scalability, performance optimization, and data handling within real-time 6G environments, key aspects often simplified in academic prototypes.

The stunning Galeries Lafayette

Life in Paris

Outside the lab, living in Paris was an equally memorable part of the journey. The city offered a perfect balance between technical focus and cultural exploration. Even after long research days, walking along the Seine River provided a refreshing break. The diversity of ideas, people, and cultures in Paris reflected the same kind of interdisciplinary collaboration I experienced at Montimage.

Concluding Thoughts

As I conclude this final secondment blogpost, I reflect on the invaluable experience of working at the intersection of AI, cybersecurity, and next-generation networking. The hands-on exposure to Montimage’s tools and industrial practices has greatly strengthened my ability to translate academic research into tangible, real-world impact.

This secondment has not only deepened my technical expertise but also reinforced my commitment to developing secure, explainable, and trustworthy AI for 6G systems, an essential step toward a transparent and resilient digital future.
