
How to Secure AI Systems Against Current and Future Cybersecurity Challenges

2026-03-26 by AICC

An eBook titled “AI Quantum Resilience”, published by Utimaco, finds that security risks are seen as the primary obstacle to organisations adopting AI effectively on their own data.

The value of AI depends heavily on the quantity and quality of data an organisation accumulates. However, organisations face significant security risks when building AI models and training them on sensitive data. These risks extend beyond commonly highlighted threats such as intellectual property theft during inference (e.g., via prompt engineering).

The eBook’s authors emphasise that organisations must manage security threats throughout every phase of AI development and deployment. They also stress that companies must proactively adapt their security protocols, especially since quantum-computing-powered decryption tools could soon be accessible to malicious actors.

Utimaco identifies three main threat areas:

  • Training data manipulation by adversaries that subtly degrades model performance but is difficult to detect.
  • Model extraction or copying that violates intellectual property rights and undermines competitive advantage.
  • Exposure of sensitive data used during model training or inference phases.

According to the report, current public-key cryptography is expected to become vulnerable within the next decade due to advances in quantum computing. Malicious groups are already collecting encrypted data with the intent to decrypt it once sufficiently powerful quantum systems emerge, a tactic known as ‘harvest now, decrypt later’. This creates an urgent need to protect datasets with long-term sensitivity, including AI training data, financial records, and intellectual property.

Transitioning to quantum-resistant cryptography will significantly impact encryption protocols, key management, system interoperability, and overall performance. Utimaco estimates this migration will take several years.

‘Crypto-agility’, the ability to change cryptographic algorithms without redesigning entire systems, is recommended as a strategic approach. This often involves hybrid cryptography, which combines established encryption methods with the new post-quantum standards finalised by NIST (such as ML-KEM and ML-DSA).
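
To make the hybrid idea concrete, here is a minimal sketch in Python (using the pyca `cryptography` package) that derives a single session key from both a classical X25519 exchange and a post-quantum KEM secret. The post-quantum secret is mocked with random bytes purely as an assumption for illustration; in practice it would come from an ML-KEM encapsulation, which mainstream Python libraries are only beginning to support.

```python
# Minimal sketch of hybrid key derivation: a classical X25519 exchange
# combined with a post-quantum KEM secret. The PQ secret is mocked here;
# a real deployment would obtain it from an ML-KEM (FIPS 203) encapsulation.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical part: ephemeral X25519 key agreement between two parties.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum part: placeholder bytes standing in for an ML-KEM shared secret.
pq_secret = os.urandom(32)

# Both secrets feed one KDF, so an attacker must break BOTH algorithms
# to recover the session key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-x25519+ml-kem",
).derive(classical_secret + pq_secret)
```

Because both secrets feed a single key-derivation step, either algorithm can later be swapped out without redesigning the surrounding protocol, which is the essence of crypto-agility.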

The authors also caution that cryptography alone cannot address the full spectrum of AI security risks. They advocate deploying hardware-based trust devices to isolate cryptographic keys and sensitive operations from conventional IT environments.

Across the AI development lifecycle, security measures should cover the data ingestion, training, deployment, and inference stages. Keys used to encrypt data and digitally sign AI models can be securely created and stored within tamper-resistant hardware boundaries. This enables verification of a model’s integrity prior to deployment, while sensitive data processed during inference remains protected.
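
As an illustration of this signing-and-verification flow, the sketch below hashes a hypothetical model artefact, signs the digest, and refuses to deploy the model if verification fails. A software Ed25519 key stands in for the HSM-resident key purely for illustration; in production the private key would never leave the tamper-resistant hardware.

```python
# Sketch: sign a model artefact at training time, verify it before deployment.
# The software Ed25519 key stands in for an HSM-resident signing key.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path: str) -> bytes:
    """SHA-256 digest of a serialised model file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

MODEL_PATH = "model.onnx"                   # hypothetical artefact name
with open(MODEL_PATH, "wb") as f:           # dummy file so the sketch
    f.write(b"demo model weights")          # runs end to end

signing_key = Ed25519PrivateKey.generate()  # in production: inside the HSM
verify_key = signing_key.public_key()

# Training pipeline: sign the model's digest.
signature = signing_key.sign(file_digest(MODEL_PATH))

# Deployment pipeline: refuse to load a model that fails verification.
try:
    verify_key.verify(signature, file_digest(MODEL_PATH))
    print("model integrity verified; safe to deploy")
except InvalidSignature:
    raise SystemExit("model artefact was tampered with; aborting deployment")
```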

Specialised hardware enclaves isolate AI workloads, preventing even privileged system administrators from accessing data during processing. A remote party can verify the enclave’s trusted state through a process called remote attestation, establishing a robust ‘chain of trust’ from the physical hardware up to the application level.
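
The following sketch captures only the core idea behind attestation, not any vendor’s actual protocol: a hardware-held key signs a measurement (hash) of the enclave’s code, and a relying party checks both the signature and the expected measurement before releasing secrets. Real schemes such as Intel SGX, AMD SEV-SNP, or TPM quotes use vendor-specific report formats and certificate chains.

```python
# Sketch of the attestation idea: a hardware key signs a measurement of
# the enclave's code; the relying party checks signature AND measurement.
# Real schemes (SGX, SEV-SNP, TPM quotes) use vendor-specific formats.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stands in for a fused hardware key
device_pub = device_key.public_key()       # distributed via the vendor's PKI

enclave_code = b"ai-inference-enclave-v1"  # hypothetical workload image
measurement = hashlib.sha256(enclave_code).digest()
quote = device_key.sign(measurement)       # the "attestation report"

# Relying party: trusts device_pub and knows the expected measurement.
expected = hashlib.sha256(b"ai-inference-enclave-v1").digest()
try:
    device_pub.verify(quote, measurement)
    assert measurement == expected, "unexpected enclave code"
    print("enclave attested; releasing keys for the AI workload")
except InvalidSignature:
    raise SystemExit("attestation failed; withholding keys")
```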

Hardware-based key management also generates tamper-proof logs tracking access and cryptographic operations to help organisations comply with regulations such as the EU AI Act.
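
One common way to achieve this kind of tamper evidence, sketched below rather than taken from Utimaco’s actual log format, is to chain each log entry to the hash of its predecessor, so that any retroactive edit invalidates every subsequent hash.

```python
# Sketch of a tamper-evident audit log: each entry embeds the hash of the
# previous one, so altering any past record breaks every later hash.
# This illustrates the pattern, not any specific vendor's log format.
import hashlib
import json
import time

def append_entry(log: list[dict], event: str) -> None:
    """Append an event, chained to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "event": event, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash and check the chain is unbroken."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "key_generated kid=42")
append_entry(log, "model_signed kid=42")
assert verify_chain(log)

log[0]["event"] = "forged"      # any retroactive edit...
assert not verify_chain(log)    # ...breaks the chain
```

Verification is linear in the number of entries, and anchoring the latest hash inside the hardware module (or publishing it externally) prevents an attacker from silently rewriting the entire chain.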

Though many AI system vulnerabilities are already known and sometimes exploited, the emerging threat of quantum-enabled decryption is less immediate but has serious long-term implications for data security and infrastructure design. Utimaco recommends:

  • Strengthening security controls across the entire AI development and deployment lifecycle.
  • Implementing ‘crypto-agility’ to enable a smooth transition to post-quantum cryptographic standards.
  • Adopting hardware-based trust mechanisms wherever critical data or intellectual property assets are involved.
