Share
Facebook Facebook icon Twitter Twitter icon LinkedIn LinkedIn icon Email

Brain Circuits

What do you know about cybersecurity risks when scaling industrial AI?

Published May 14, 2025 in Brain Circuits • 2 min read

Companies are rushing to embrace the game-changing opportunities to improve operations and scale that AI offers; yet many are unaware of the pitfalls. Test your knowledge of the risks here – and read on for tips on building security into your strategy.

True or false?

  1. Industrial AI uses more advanced technology than generative AI.
  2. Chinese competitor DeepSeek is on a par with Silicon Valley products such as OpenAI, Google, and Meta in terms of capability to deal with cyber threats.
  3. The increased connectivity of industrial AI systems creates additional entry points for hackers.
While DeepSeek is fast and less expensive than its US competitors, independent security evaluations have found a series of weaknesses in it.

Answers

  1. False. Industrial AI relies on traditional machine learning (ML), rather than the newer and less well-tested generative AI, and has been used for tasks including predictive maintenance, quality checks, and energy management for more than 15 years.
  2. False. While DeepSeek is fast and less expensive than its US competitors, independent security evaluations have found a series of weaknesses in it and exposed susceptibility to cyber threats including prompt injection attacks, jailbreaking, and data poisoning, revealing that cheaper options may be a false economy in the long term.
  3. True. Scaling broadens the attack surface, making industrial systems more vulnerable to cyber threats. Models can be targeted by adversarial attacks, data poisoning, and model inversion techniques that expose sensitive information.
“By proactively integrating security measures into AI development and deployment, organizations can minimize risks.”

5 key strategies to improve cyber resilience

1. Secure AI model development and deployment

  • Explainability
  • Governance
  • Developer training
  • Secure software development

2. Adversarial AI and model manipulation

  • Bias audits
  • Adversarial training
  • Input sanitization
  • Validate training data

3. Data privacy and protection

  • Encryption
  • Privacy compliance
  • EU AI Act
  • Access control

4. AI supply chain and third-party security

  • Vet external AI tools
  • Security in the cloud
  • Software bill of materials

5. AI-specific incident response and monitoring

  • Continuous AI threat monitoring
  • Incident response for AI failures
  • Fair use policies

Key learning

By proactively integrating security measures into AI development and deployment, organizations can minimize risks while maximizing the potential of industrial AI tools.

Related