
Introducing AI to the C-suite: Three opportunities, three risks

by Naomi Haefner • Published April 23, 2026 in Talent • 5 min read
AI is giving cyberattackers a wider range of targets and the opportunity to scale up. Hackers can manipulate AI tools such as chatbots by feeding them malicious information or asking them to reveal sensitive company data. At consultancy firm McKinsey, a vulnerability in an internal chatbot was recently shown to potentially expose millions of private conversations and other sensitive data, highlighting the scale of risk such tools can introduce if not properly secured.
Advances in AI also mean that, increasingly, attackers target employees directly. Sophisticated deepfake technology enables highly convincing phishing and impersonation attacks. Engineering company Arup was the victim of a HK$200m (around US$26m) deepfake fraud in 2024, after an employee was duped into sending funds to criminals via an AI-generated video call. UK retailer Marks & Spencer also attributed a 2025 cyberattack costing around £136m (US$180m) to a social engineering attack targeting a third party with access to its systems.
Businesses must think beyond technological solutions and recognize that adequately training and raising awareness among their people is their first – and potentially best – line of defense. Resilience also depends on how work is designed. Clear reporting lines, robust approval processes, and careful access control across the employee lifecycle – from onboarding to role changes to offboarding – can all reduce opportunities for manipulation or error. CHROs are becoming essential partners to CIOs and CISOs in reinforcing cybersecurity as a shared responsibility in daily operations, rather than just an IT policy.

With new styles of attack emerging all the time, ongoing training and clear communication are essential to keep employees up to speed with evolving threats. “Bite-sized” units of training, delivered as a regular feature of the working day, are therefore more effective than, say, an annual 20-minute video. Shorter pieces of training are more likely to hold people’s attention and can carry the latest information in response to new threats.
A new trend is emerging among forward-looking IT teams, building on the established “red team” technique in which companies stage simulated attacks on their own systems. Rather than testing systems, however, the new approach tests employee responses to “fake” threats, such as simulated phishing emails. Teams can then measure how quickly and accurately employees report such threats and tailor training accordingly, down to specific individuals.
This approach can also trigger instant interventions. For example, if an employee tries to bypass a multi-factor authentication system, an instant pop-up intervention warns them of the importance of complying with online security requirements. This is an effective reminder that saves both time and the blushes that can result from a call from IT.
Organizations are also recognizing the importance of gaining a general understanding of employees’ digital behaviors, including how different departments communicate, to reveal gaps in process or understanding. For example, is there an established process through which senior management can request and authorize the release of data or funds, and one that is recognized by all employees?
While close monitoring of employee digital activity is crucial, businesses should take care not to let security practices erode trust. There is a fine line between vigilance and overreach. If employees fear blame, feel constantly watched, or believe they will be punished for an honest mistake, they may delay reporting incidents, gifting attackers extra time.
If employees feel they will be supported and understood, rather than blamed, they will feel part of the cybersecurity team.
Supportive cultural change should accompany these new testing processes, so the workforce understands that such testing is part of a necessary effort to protect everyone. Positive reinforcement – giving praise when an individual correctly identifies a phishing attempt, for example – is generally more effective than penalizing errors. C-suite collaboration between CHROs, CIOs, and CISOs will be more important than ever in creating psychological safety, building trust in management, and designing effective intervention mechanisms.
A crucial element of psychological safety and a positive cybersecurity culture in general is transparency. Employees need to understand what is being monitored, why it is necessary, and where the boundaries are, so that security strengthens trust rather than weakening it. Making every employee aware of cybersecurity processes makes everyone feel safer and, perhaps most importantly, like they are on the same side.
The ultimate goal for organizations should be to make cybersecurity measures second nature. Organizations that treat cybersecurity as a human issue as much as a technical one – shaping how work is designed as well as how people behave – will build the most effective resilience against the evolving risk environment. While attacks are inevitable, companies that instill strong habits and a culture of shared responsibility will be best placed to detect, respond to, and recover from them.

Professor of Artificial Intelligence and Innovation
Naomi Haefner is Professor of Artificial Intelligence and Innovation. Her research examines how artificial intelligence is reshaping innovation strategy, organizational design, and leadership decision-making.
Her work bridges rigorous academic insight with practical application. As a researcher, educator, and advisor, she designs and delivers executive education programs for board members, senior leaders, and public-sector innovators navigating complex technological change. Her approach integrates frameworks that equip leaders to respond to fast-moving developments with structure and impact.
