It’s easy to fall in love with the idea of AI in football. Chatbots that handle ticketing questions, scouting tools predicting the next star, analytics dashboards that read matches like novels. Clubs have found endless ways to plug artificial intelligence into their daily work.
But here’s the catch — the smarter the system, the more it can break. And when it does, it won’t ask for extra time.
That’s the message from SoCyber, a Sofia based cybersecurity company that has started working with sports organisations to test how secure their digital infrastructures really are. We sat down with Angel Sirakov, the company’s Head of Marketing, and Viktor Mares, Senior Penetration Tester, to understand what happens when AI meets the football business.
By Joachim Stelmach
Football’s AI rush – with a blind spot
“Football is no different from any other industry,” Viktor began. “The moment you add AI, the attack surface gets bigger.”
Every new chatbot, data feed or analytics plugin opens another door for attackers. Clubs aren’t ignoring this intentionally — they’re just moving fast, driven by convenience and cost. “You see many organisations using AI because it’s fashionable,” Angel added. “They think it will make things cheaper, faster, better. But AI at scale is not cheap, and it’s not always safe.”
He reminded of a recent case outside football: a Formula 1 data leak caused by an exposed API.
“Even the biggest players make small mistakes,” he said. “Now imagine a club storing medical data, contract details or betting-related statistics. One overlooked configuration and you’re giving away the crown jewels.”
Prompt injections, poisoned data, and digital hallucinations
If that sounds abstract, Viktor broke it down. “The most famous AI attack is called a prompt injection. You basically make the AI do what you want it to do — even something it shouldn’t.”
Think of a club’s chatbot that manages fan accounts. A fan uploads a screenshot of a ticket to ask for help. Hidden inside the image could be a line of text invisible to the human eye but visible to the AI: delete all user accounts. If the bot has the right permissions, the chaos writes itself.
Then comes data poisoning — tampering with the numbers that feed scouting tools or player-performance models. “Even five percent of wrong data can break the whole model,” Angel said. “You could end up signing the wrong player or benching the right one.”
He referenced the movie Moneyball: “In that film, they trust data more than intuition. But if your data is wrong, your intuition never even gets a chance.”
AI can also hallucinate — create false information that sounds confident. “It will always give you an answer,” Angel warned. “Even if it’s nonsense. And clubs may not have the knowledge to tell the difference.”
Small clubs, big risks
There’s a comforting myth that hackers only target the giants — the Real Madrids and Man Uniteds of the world.
“Attackers don’t care if you’re big or small. They go after the same software across multiple clients. If they know how to exploit one system, they can hit a hundred at once.”
Angel called it a supply-chain risk: “Every time you connect another third-party tool — CRM, analytics, ticketing, fan app — you increase your vulnerability. It’s like adding new players to a team without checking their medicals.”
The irony is that smaller organisations often have fewer resources to react. “A local club might think, ‘we’re too small to be interesting,’” Viktor said. “But if you’re running on the same platform as Bologna FC or Juventus, you share the same exposure.”
Brand security matters too
At one point Angel moved the conversation from systems to sentiment. “People think cybersecurity is just technical. But it’s also about emotion — how fans perceive your club.”
He gave an example every marketing manager can feel: “Fans want to be part of the team, to sense the people behind the posts. If everything looks AI-generated, they stop caring. You can’t build loyalty with synthetic emotion.”
He wasn’t condemning technology; he was pointing out a human truth. A club might protect data perfectly and still damage its brand by letting automation take over communication. “Brand safety isn’t only about avoiding scandals. It’s about keeping fans connected to something real.”
Before you add AI, ask why
Angel’s advice for executives sounded almost philosophical: “Every new system should start with one question — what’s the goal?”
He warned against the illusion that AI always equals efficiency. “Because it’s in the news doesn’t mean it’s the right tool. And if you think it’s cheap, you’re mistaken. Enterprise AI costs scale quickly. You might spend €20 per user per month and think it’s nothing — until you multiply it by a thousand accounts.”
He also urged clubs to evaluate their partners carefully. “ChatGPT, Gemini – these are American products built for profit, not for European data laws. Make sure your vendors comply with GDPR and soon with the EU AI Act. Otherwise you risk feeding your internal data straight into someone else’s model.”
Then he asked the question no one wants to answer aloud: “What happens when your AI posts something offensive?” It sounds far-fetched, but as Angel noted, “One bad image or phrase can undo a century of reputation.”
Testing the untestable
Viktor took over to explain how SoCyber actually challenges these systems. Their team uses an approach known as AI penetration testing.
“It’s like a controlled scrimmage,” he said. “We simulate what an attacker could do. Depending on the system, we try to inject data, manipulate responses, or trigger malicious code.”
They follow a framework called OWASP Top 10 for LLMs, a list of the ten most common attack vectors for large-language-model systems. “It’s not about breaking things,” Viktor added, “it’s about showing clients how someone else could.”
In football, that might mean testing a fan chatbot, an analytics portal, or even a loyalty app. “We look for access misconfigurations, privilege escalation, or excessive agency — that’s when an AI has too many permissions. A simple bot could end up having admin rights.”
He smiled when describing one of the more creative threats: fake footballers. “Imagine a scouting AI that ranks global prospects. An attacker could create fake websites and data about a non-existent player, complete with videos, stats, and ‘top talent’ tags hidden in the code. The AI would believe it and promote the player automatically. It’s not science fiction — we’ve already seen early attempts.”
The human firewall
For all the technical talk, both experts kept returning to one old-fashioned defence: people.
“You can’t buy total safety,” Angel said. “Cybersecurity is about reducing risk in a way that makes business sense.”
He described their phishing-training campaigns, where employees receive simulated scam emails after a short awareness session. “We once sent fake gas-station coupons. Sixty-six percent clicked. Sixty-six! That shows how far we still have to go.”
The goal, he explained, is to move from conscious understanding (“I know phishing exists”) to unconscious recognition (“I instantly spot something off”). “When people reach that level, security becomes natural,” he said.
It’s a reminder that behind every firewall, there’s still curiosity — and curiosity doesn’t read policy documents.
Regulation, responsibility, and reality checks
Soon, the EU AI Act will force clubs and federations to treat AI systems like any other critical infrastructure. “Any organisation in Europe using AI on personal data will need to show proper governance,” Angel said. “That means audits, controls, documentation – things most clubs have never done.”
But he doesn’t see it as a burden. “Regulation is just common sense written down,” he said. “If you’re handling fan data, ticket payments, or digital wallets, you already owe your community that protection.”
He also pointed to emerging financial technologies – the digital euro, fan-token economies, Web3 marketplaces. “In Web3, everyone’s motivation is money,” he said bluntly. “You’ll have bots pretending to be fans. Traffic numbers will lie. Clubs must learn to tell what’s real – and what’s algorithmic smoke.”
Convenience versus Security
Laws and regulations are constantly evolving in an attempt to keep up with new threats — often at the expense of what’s considered “growth-friendly,” especially in areas involving user data and privacy. A good example right now is the rise of so-called AI browsers from major American tech players, such as Comet by Perplexity and Atlas by OpenAI.
While the promise of convenience is tempting, sports organisations should think carefully about what truly benefits their products — and, most importantly, their fans. Both Angel and Viktor emphasised the need for cautious implementation and proper staff training before automating any user interactions or sensitive data processes.
The Future Is AI-Powered Cybersecurity
The rapid growth of AI has also expanded the threat landscape for organisations of every size. Industry reports point to a severe shortage of qualified professionals capable of meeting the cybersecurity demands of an increasingly digital world.
To address this, SoCyber has developed a tool designed to help companies identify and understand security gaps using AI. Their solution, Agentic Kikimora — named after the Slavic mythological spirit that guards against nightmares and evil — automates the manual burden of managing cybersecurity.
Through a conversational interface, Kikimora acts much like a large language model (LLM), allowing teams to quickly locate vulnerabilities, streamline compliance processes, and handle security challenges with greater clarity and speed.
Conclusion
In a world where football clubs are racing to digitalise every corner of their operations, security can’t remain an afterthought. AI may streamline scouting, ticketing, and fan engagement, but it also opens new doors that few in sport are truly prepared to guard. What SoCyber reminds us is simple: innovation and vigilance must evolve together. Protecting data, systems, and fans is no longer just an IT concern — it’s part of protecting the game itself.
***
If you’re looking for more inspiration or want to see how others in the industry are approaching similar challenges, check out our growing collection of articles, case studies, and interviews in the FBIN Knowledge Hub. There’s plenty more to explore.