
Interview with Team GENSHIELD-AI, Third-Prize Winners of the COcyber AI Cybersecurity Deephack

Fri, 06/06/2025 - 04:33

From April 24 to 26, 2025, COcyber AI Cybersecurity Deephack brought together a diverse group of participants - from PhD students and young professionals to startup founders and cyber enthusiasts - for an intense 48-hour challenge focused on one critical question: can artificial intelligence stop the next big phishing attack?

With dual-use applicability at the core of the challenge, teams had to consider the needs of both civilian and defence sectors, addressing issues like multilingual threats, adversarial techniques, and infrastructure limitations.

From natural language processing to anomaly detection and threat intelligence, participants had to think fast, collaborate efficiently, and push the boundaries of what’s currently possible in AI-driven cybersecurity. Working under pressure through checkpoints, mentoring sessions, and live pitching, they built prototypes that could function across languages, contexts, and threat levels. But behind the tech, the real engine of the Deephack was collaboration.

We sat down with the winning teams to hear how they approached this complex task and where they plan to go next. In this article, we introduce you to Team GENSHIELD-AI, the third-prize winner of the COcyber AI Cybersecurity Deephack.

 

Can you introduce your team? Who are you, what are your backgrounds, and what brought you together to take part in this Deephack?

We are a team of PhD researchers and research engineers at the School of Computer Science, University College Dublin, Ireland. The team comprises four members: Vidura Ravihansa, a first-year PhD researcher; Isuru Pinto, a research engineer; Rashmi Ratnayake, a third-year PhD researcher; and Chamara Sandeepa, a fourth-year PhD researcher. We all come from security and AI backgrounds. We founded our team, GENSHIELD-AI, over a year ago, and we work on both sides of the problem: the safety of AI itself and the use of AI to enhance security and privacy. The Deephack challenge was a perfect opportunity to apply our ideas and vision in a real-world, dual-use context.

Phishing attacks are evolving and growing more sophisticated. What approach did you use to build an AI-driven system that can detect and prevent phishing in real time? How did you design your AI-driven solution, and what technologies or methods did you use (e.g. NLP, anomaly detection, threat intelligence)?

To tackle phishing attacks, we built a system called PhishNet that uses a combination of Large Language Models (LLMs) and Graph Neural Networks (GNNs).

The LLM helps us understand the meaning of a message, even when it contains no links, and it works across different languages. It looks for suspicious wording or behaviour that might indicate phishing.

The GNN looks at how messages flow between people in an organization. It spots unusual communication patterns, like someone pretending to be a boss or sending messages at odd times. If something looks suspicious, the LLM checks the message content more deeply.

We also used open phishing threat intelligence databases to check if any part of the message matches known phishing sources. So, even without obvious signs like fake links, our system can catch tricky phishing attempts and help prevent them in real time. 
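The three signals described above can be sketched as a simple fusion step. This is a minimal illustration only: the team's actual LLM and GNN are stood in for by placeholder heuristics (the phrase list, the odd-hours rule, and the `phish.example` domain list are all hypothetical), so that the combination logic is runnable on its own.

```python
# Minimal sketch of combining three phishing signals into one verdict.
# The LLM and GNN components are stubbed with placeholder heuristics;
# only the fusion logic reflects the pipeline described above.

SUSPICIOUS_PHRASES = {"verify your account", "urgent", "password expired"}
KNOWN_BAD_DOMAINS = {"phish.example"}  # stand-in for a threat-intel feed

def llm_content_score(text: str) -> float:
    """Placeholder for the LLM: fraction of suspicious phrases found."""
    text = text.lower()
    hits = sum(1 for p in SUSPICIOUS_PHRASES if p in text)
    return min(1.0, hits / 2)

def gnn_flow_score(sender: str, recipient: str, hour: int) -> float:
    """Placeholder for the GNN: flags messages sent at odd times."""
    return 0.8 if hour < 6 or hour > 22 else 0.1

def threat_intel_match(sender: str) -> bool:
    """Check the sender's domain against known phishing sources."""
    return sender.split("@")[-1] in KNOWN_BAD_DOMAINS

def classify(text: str, sender: str, recipient: str, hour: int) -> str:
    # A threat-intel hit is treated as conclusive on its own.
    if threat_intel_match(sender):
        return "phishing"
    # Otherwise, weight the content score against the flow-anomaly score.
    score = (0.6 * llm_content_score(text)
             + 0.4 * gnn_flow_score(sender, recipient, hour))
    return "phishing" if score >= 0.5 else "benign"
```

Note how the GNN signal lets a mildly suspicious message tip over the threshold when it also arrives at an unusual hour, mirroring the "if something looks suspicious, check the content more deeply" interplay described above.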

 

How did you ensure your solution could be applied in both civilian and defence contexts? Dual-use was a key aspect of this deephack. What design choices helped you meet that goal?

For defence contexts, the GNNs are implemented to understand how messages flow within an organization. This helps identify critical communication paths and spot unusual or potentially malicious activity. To protect sensitive data during training, we used Federated Learning, so the models learn without sharing private information across organizations. We have also designed local phishing-intelligence platforms for defence applications that must remain isolated.

For civilian use, we designed our LLM-based phishing detection to work independently, analysing email content and detecting phishing even in personal messages and in multiple languages.

By combining these flexible components, our system can adapt to different environments while keeping security and privacy in mind.
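The federated approach mentioned above can be illustrated with the standard federated-averaging step, in which only model parameters, never private messages, leave each organization. This is a generic sketch of that idea, not the team's actual training code; the weighting by client data size is a common convention assumed here.

```python
# Generic federated-averaging (FedAvg) step: each organization trains
# locally and sends only its model parameters; the server averages them,
# weighted by how much data each client trained on. No raw messages are
# ever shared between organizations.

def fed_avg(client_weights: list[list[float]],
            client_sizes: list[int]) -> list[float]:
    """Combine per-client parameter vectors into one global model."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with parameter vectors [1.0, 2.0] and [3.0, 4.0],
# holding 1 and 3 data samples respectively:
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```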

In your opinion, how can AI strengthen cybersecurity resilience in the years to come across both public and defence sectors? We’d love to hear your broader reflections based on what you learned through this challenge.

With the rise of powerful AI tools like LLMs and generative AI, we’re entering a new era where threats can be more convincing, personalized, and harder to detect. Attackers are already starting to use AI to craft sophisticated phishing messages and bypass traditional security.

To stay ahead, we believe we’ll need to use AI to fight AI, building intelligent systems that can detect and respond to these threats in real time. But just as important is explainability. People increasingly rely on AI and offload critical decisions to it, so it is important to set guardrails, making AI actions interpretable and explainable. In both public and defence sectors, users and decision-makers need to judge whether an AI system is trustworthy and understand why something is flagged as a threat. So, making AI systems transparent and accountable will be key to strengthening cybersecurity resilience in the future.

What’s next for your project? Do you plan to further develop your solution or bring it closer to market or real-world deployment?

Our goal is to take GENSHIELD-AI further by developing practical AI tools that help both organizations and individuals strengthen their security and privacy.

We also want to focus on making sure these tools are not just effective, but also explainable and verifiable — so users can understand and trust the decisions made by AI. This is especially important for sensitive environments like defence and critical infrastructure. We’re excited to keep building on what we started with PhishNet and explore real-world deployment opportunities.