What every executive should know about AI and cybersecurity

Published 21 September 2023 in Technology • 10 min read

Understanding AI’s role in cybersecurity is crucial. By grasping its potential and challenges, you can ensure a secure and forward-thinking digital environment for your organization. 

Artificial intelligence (AI) is everywhere – from chatbots handling customer queries to analytics optimizing supply chains to increase efficiency and effectiveness. But beneath its shiny exterior, the rapid advance of AI has a dark downside. When misused, the tools designed for productive tasks can also identify security gaps and exploit them. As companies rapidly adopt AI to enhance their businesses, cybercriminals leverage it just as rapidly to power their malicious efforts. This situation is a new digital arms race. Companies must continually upgrade their AI-driven security measures to counter the AI used by hackers to craft ever more effective attacks. For business executives, the imperative is clear: stay informed and be proactive. It’s no longer enough to respond to threats as they come.

Understanding the AI-powered threat landscape

As you navigate this evolving digital terrain, you must understand how the development of AI is transforming the cyberthreat landscape. Three types of attacks are most concerning to companies – cyber fraud, data breaches, and ransomware encryption. We will not address a fourth type of potential attack – cyber sabotage – which is intended to inflict permanent damage on systems, especially those controlling infrastructure or essential services. But many of the ways that AI can be leveraged in cyber fraud, data breaches, and ransomware attacks can also be used for cyber sabotage.

Cyber fraud 

Cyber fraud involves deceptive schemes created for financial gain. It can target individuals, businesses, or institutions from any global location. Common forms include phishing, where deceitful methods are used to extract sensitive data; online identity theft for impersonating victims; unauthorized bank transfers and online credit card usage; deceptive e-commerce practices; advance-fee scams promising larger future sums; and online investment frauds. 

Data breaches

Cyberattacks that aim to steal data from companies are commonly called “data breaches” or “data theft”. When the intention is specifically to infiltrate a network or system and steal sensitive data for illicit purposes, such as selling the data on the dark web, using the data for identity theft, or engaging in corporate espionage, it is considered a data breach.  

Ransomware attacks

Ransomware attacks have become a critical cyber threat in recent years. At the heart of a ransomware attack is encryption – a method through which hackers make a victim’s files or systems inaccessible until a ransom is paid. Ransomware encryption attacks involve malicious software that, once installed on a victim’s computer or network, encrypts files, folders, or even entire drives. The encryption is often robust, rendering the files unreadable without a unique decryption key. Victims are then sent a ransom note demanding payment (typically in cryptocurrency like Bitcoin) for the decryption key. 

To raise the stakes even further, the most sophisticated hacks increasingly involve “double extortion,” where the attackers steal critical data – for example, customers’ and employees’ personally identifiable information – and use encryption to prevent the company from accessing it. The combined threat of a data leak and disabled critical systems increases the pressure to pay the ransom. 

The impact of AI on cyber threats

In cyber fraud, data breaches, and ransomware attacks, malicious payloads are often delivered through phishing emails, software updates, or infected websites. Unsuspecting users download an attachment or click on a link that appears legitimate, only to unknowingly allow the malicious payload to infiltrate their system.

Malicious actors can leverage AI to facilitate and enhance cyber fraud, data breaches, and ransomware attacks in the following ways: 

Advanced phishing techniques  

AI has made it possible for hackers to craft highly personalized phishing emails. By analyzing vast data sets, from social media to online purchases, AI-driven tools like WormGPT, a hacking tool based on the GPT-J language model, generate phishing emails tailored to individual recipients, making them not only convincing but virtually indistinguishable from legitimate correspondence.  

Efficient vulnerability exploitation  

AI tools can scan and identify weak points in networks faster than human hackers. Integrating AI-driven tools means vulnerabilities can be exploited more promptly and on a broader scale. Additionally, AI algorithms can analyze massive amounts of data to pinpoint potentially vulnerable targets, from individual users to businesses with weak security postures. 

Accelerated data breaches  

Once a system has been infiltrated, AI can rapidly sift through and analyze vast amounts of data, pinpointing and extracting the most valuable information. This efficiency makes data breaches more damaging as sensitive data can be accessed and exfiltrated faster. 

Evasive malware  

AI-powered malware is designed to evade traditional security measures by adapting its code in real time, enabling it to make decisions on spreading techniques, targeting criteria, or data extraction methods. Similarly, AI-enhanced ransomware can navigate intelligently within a network, prioritizing the most critical files to maximize impact. 

Rapid password cracking  

With machine learning, algorithms can predict and crack password patterns more effectively than traditional methods, facilitating unauthorized access to systems and accounts faster than ever before. 

Augmented ransomware techniques  

AI can amplify ransomware attacks by developing complex encryption methods, making data recovery without the decryption key challenging. Once the data is encrypted, AI-driven ransomware can strategically demand ransoms, assessing the victim’s potential to pay based on the data’s sensitivity. 

Automated large-scale attacks  

Traditional cyberattacks require substantial human oversight. However, AI automation allows attacks to be executed at an unprecedented scale and speed. This makes attacks more extensive and complicates mitigation efforts, given the sheer volume and rapid evolution of threats. 

Leveraging AI to strengthen cyber defense 

AI increases both the likelihood that your company will be attacked and the damage those attacks can cause. The foundation of effective defense is strong security controls and up-to-date, vendor-supported applications and hardware systems.  

An essential security control is multi-factor authentication (MFA). At a minimum, it needs to be in place for all internet-facing applications. Likewise, all your software applications and hardware systems must be upgraded to the latest vendor-supported versions. Running unsupported or out-of-date software, hardware, or firmware is a sure-fire recipe for disaster. The risk is especially acute if your organization has internet-facing applications that were not developed and administered by the IT organization but were created by “shadow IT” groups. Getting your security fundamentals in shape was essential even before AI, but the risks are increasing because attacks will be faster and more sophisticated.

If all this leaves you feeling queasy, don’t despair – while AI introduces these potential avenues for enhanced cyberattacks, it can also be leveraged to augment your defense. AI-enhanced security systems can detect anomalies, predict potential threats, and respond quickly to breaches, often faster than human analysts. Thus, as AI shapes the future of cyberattacks, it’s equally shaping the future of cybersecurity. As cyber threats become more sophisticated, the tools we use to counteract them must also evolve. AI has proven to be a game-changer in this realm, offering enhanced capabilities to strengthen cyber defense mechanisms. These include: 

Predictive analytics 

Predictive analytics has been transformative because it emphasizes a proactive stance over a reactive approach, in which action is taken only after an incident. Companies can glean insights into potential future threats by training AI models on extensive historical datasets, which might encompass past cyberattacks, vulnerabilities, and network activities. These models can identify likely attack vectors or foresee vulnerabilities, enabling organizations to fortify their systems before an actual threat materializes.  
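
To make this concrete, here is a minimal sketch in Python of how such a model might be trained, assuming a small, entirely hypothetical history of assets (days unpatched, open ports, failed logins) labeled by whether each was later involved in an incident. It is illustrative only, not a production risk model.

```python
# Minimal sketch of predictive analytics for cyber risk (illustrative only).
# All feature names, values, and labels below are hypothetical; a real model
# would be trained on a far larger history of incidents and telemetry.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical history: one row per asset, labeled from past incident records.
history = pd.DataFrame({
    "unpatched_days": [3, 45, 120, 7, 90, 2, 60, 180],
    "open_ports":     [2, 12, 25, 3, 18, 1, 9, 30],
    "failed_logins":  [1, 40, 75, 5, 55, 0, 20, 90],
    "had_incident":   [0, 1, 1, 0, 1, 0, 0, 1],
})

features = ["unpatched_days", "open_ports", "failed_logins"]
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(history[features], history["had_incident"])

# Score current assets so the security team can prioritize hardening work
# before a threat materializes.
current_assets = pd.DataFrame({
    "unpatched_days": [10, 150],
    "open_ports":     [4, 22],
    "failed_logins":  [3, 80],
})
risk_scores = model.predict_proba(current_assets[features])[:, 1]
print(dict(zip(["asset-a", "asset-b"], risk_scores.round(2))))
```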

Behavioral analytics

Traditional cybersecurity methods often relied on known malware signatures. However, as cyber threats evolve, solely relying on signature-based detection becomes inadequate. Enter behavioral analytics. By employing AI, systems can continuously monitor and learn a network’s ‘normal’ behavior patterns. Any deviation from this established norm – be it a slight anomaly in data transfer rates or uncharacteristic access patterns – can trigger alarms. This real-time monitoring and detection drastically cuts down the time between intrusion and detection, often catching breaches as they happen and dramatically limiting potential damage. 
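
The sketch below illustrates the principle with an off-the-shelf anomaly detector, assuming hypothetical per-host telemetry (hourly outbound volume and number of distinct destinations). Real deployments use far richer features and streaming pipelines.

```python
# Minimal sketch of behavioral anomaly detection (illustrative only).
# The telemetry features and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Learn the 'normal' baseline: outbound MB and distinct destinations per hour.
rng = np.random.default_rng(0)
baseline = np.column_stack([
    rng.normal(50, 10, 500),  # typical outbound volume in MB
    rng.normal(8, 2, 500),    # typical number of distinct destinations
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: the second looks like bulk exfiltration to many hosts.
new_activity = np.array([
    [55, 9],     # in line with the learned norm
    [900, 120],  # large deviation -> candidate breach, raise an alert
])
print(detector.predict(new_activity))  # 1 = normal, -1 = anomaly
```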

Automated response 

When it comes to cybersecurity breaches, swift action is paramount. The longer a breach goes unaddressed, the more the potential damage multiplies. AI-driven systems are equipped to provide immediate responses when threats are detected. Instead of waiting for human intervention, these systems can autonomously take measures such as isolating affected nodes, shutting down specific processes, or even launching countermeasures. Meanwhile, human response teams are alerted and can step in for a more detailed intervention, assured that initial containment measures are already in place. 
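
A minimal sketch of such an automated first response follows. The functions isolate_host and notify_analysts are hypothetical stand-ins for calls to your EDR, firewall, and ticketing systems; the threshold and host name are invented for illustration.

```python
# Minimal sketch of an automated containment playbook (illustrative only).
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auto-response")

def isolate_host(host: str) -> None:
    # Placeholder: a real implementation would call an EDR or firewall API
    # to quarantine the machine from the network.
    log.info("Isolating %s from the network", host)

def notify_analysts(host: str, score: float) -> None:
    # Placeholder: a real implementation would open a ticket or page on-call staff.
    log.info("Alerting analysts: %s flagged with anomaly score %.2f", host, score)

def handle_detection(host: str, anomaly_score: float, threshold: float = 0.9) -> None:
    """Contain first, then hand over to humans for detailed investigation."""
    if anomaly_score >= threshold:
        isolate_host(host)  # immediate containment without waiting for a human
        notify_analysts(host, anomaly_score)
    else:
        log.info("Score %.2f below threshold for %s; keep monitoring", anomaly_score, host)

handle_detection("finance-laptop-042", anomaly_score=0.97)
```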

Potential pitfalls in AI-driven cybersecurity 

It’s crucial to recognize that AI can be a force multiplier in defense strategies, but it also brings challenges. Embracing AI-driven cybersecurity requires not only understanding its strengths but also being cognizant of its pitfalls. 

Overreliance 

Given its capabilities, it’s tempting to view AI as a silver bullet for all cybersecurity woes. But this mindset can be a double-edged sword. AI tools, no matter how advanced, have limitations. They operate based on their training and the data they have been fed. Thus, they might miss new or nuanced threats that a human expert with years of experience and intuition might catch. Over-relying on AI and sidelining human judgment can lead to vulnerabilities. It’s essential to remember that the best defense often combines the computational power of AI with the discernment and expertise of human professionals. 

Manipulation 

One inherent weakness of many AI systems, especially those based on machine learning, is their vulnerability to adversarial attacks. In such scenarios, attackers craft specific inputs to deceive the AI model. By feeding these systems misleading or poisoned data, hackers can trick them into making false predictions or bypassing security measures. For instance, an AI system trained to detect malware could be fed data that makes harmful software appear benign, potentially allowing threats to infiltrate the network. 
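
The toy example below sketches one such technique, data poisoning, by flipping a share of training labels in a synthetic dataset and showing how detection accuracy degrades. It is illustrative only and refers to no specific product.

```python
# Minimal sketch of training-data poisoning (illustrative only, synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", round(clean_model.score(X_test, y_test), 2))

# Poisoning: an attacker flips 30% of the training labels so that malicious
# examples are learned as benign (and vice versa).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", round(poisoned_model.score(X_test, y_test), 2))
```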

Data privacy concerns 

At the heart of AI’s power in cybersecurity is data. The more data an AI system processes, the more accurate and efficient it becomes. However, this insatiable need for data intersects with growing concerns about user privacy. As companies amass vast amounts of data to feed their AI systems, questions arise: Where is this data stored? Who has access to it? How is it being used? Regulations like the General Data Protection Regulation (GDPR) have been implemented to address some of these concerns. Nevertheless, the ethical implications of data collection, storage, and usage remain a hot topic. 

A call to action 

Recognizing the evolving dynamics of the AI-driven cyber landscape is undoubtedly vital. But awareness alone isn’t enough. The next and equally significant step is action. Executives must transform their insights into tangible strategies to navigate the digital frontier successfully. Here’s a practical roadmap for companies to fortify their defenses while leveraging AI’s transformative potential. 

Fix the basics 

Before you start worrying about leveraging AI, make sure you have a solid foundation of basic security controls and procedures in place. As mentioned above, this includes requiring MFA, upgrading software and systems to the latest vendor-supported versions, and rapidly implementing security patches. 

Develop hybrid models 

In the world of cybersecurity, neither machines nor humans should be an island. While AI tools excel in rapidly processing vast amounts of data, detecting patterns, and automating responses, they can sometimes miss the intricate details or context behind certain activities. Humans can bridge this gap with their ability to understand nuance, intuition, and context. The optimal defense mechanism, therefore, is a harmonious fusion of AI and human expertise. In this hybrid model, AI systems manage vast datasets, rapid scans, and initial responses, all while human experts oversee these processes, providing the essential layers of context, judgment, and decision-making. This combined approach ensures a thorough, efficient, and adaptable defense mechanism capable of addressing both broad-scale and subtle threats. 

Engage in continuous training 

The cyber realm is fluid, with threats continuously evolving, adapting, and becoming more sophisticated. To stay a step ahead, it’s not enough to deploy AI tools and then leave them static. These tools need to grow and learn continuously. Regularly updating and training AI models with the latest threat intelligence is imperative. By exposing them to new attack patterns, strategies, and vulnerabilities, AI systems can be primed to recognize and counter emerging threats swiftly and effectively. 
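
As a rough sketch, the snippet below shows how a detection model could be updated incrementally as new labeled threat intelligence arrives, rather than being trained once and left static. The weekly feed, features, and labels are hypothetical.

```python
# Minimal sketch of continuous model updates from a threat-intelligence feed
# (illustrative only; the feed and its features are hypothetical).
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 0 = benign, 1 = malicious
rng = np.random.default_rng(0)

def weekly_threat_feed(week: int):
    """Stand-in for a real feed: feature vectors plus analyst-confirmed labels."""
    X = rng.normal(loc=week * 0.1, scale=1.0, size=(200, 5))  # behavior drifts over time
    y = (X.sum(axis=1) > week * 0.5).astype(int)
    return X, y

# Update the model each week instead of leaving it static.
for week in range(1, 5):
    X_new, y_new = weekly_threat_feed(week)
    model.partial_fit(X_new, y_new, classes=classes)
    print(f"week {week}: accuracy on latest batch = {model.score(X_new, y_new):.2f}")
```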

Uphold ethical AI practices

Cybersecurity isn’t just about building digital walls and alarms. In an age where data is the new gold, it’s crucial to remember the ethical implications associated with its mining and use. In this digital Wild West, companies must ensure their AI-driven cybersecurity tools don’t tread on thin ethical ice. To support this, you should establish robust policies prioritizing user privacy, guaranteeing that AI systems handle data with the utmost confidentiality.  

Transparency in operations is another pillar, so users and stakeholders should understand how these tools function and make decisions. Lastly, aligning AI operations with broader ethical considerations ensures that companies don’t compromise on principles while pursuing security.

Authors

Michael D. Watkins

Professor of Leadership and Organizational Change at IMD

Michael D. Watkins is Professor of Leadership and Organizational Change at IMD and the author of The First 90 Days, Master Your Next Move, Predictable Surprises, and 12 other books on leadership and negotiation. His book The Six Disciplines of Strategic Thinking explores how executives can learn to think strategically and lead their organizations into the future. A Thinkers 50-ranked management influencer and recognized expert in his field, he is featured in HBR Guides and HBR’s 10 Must Reads on leadership, teams, strategic initiatives, and new managers. Over the past 20 years, he has used his First 90 Days® methodology to help leaders make successful transitions, both in his teaching at IMD, INSEAD, and Harvard Business School, where he gained his PhD in decision sciences, and through his private consultancy practice, Genesis Advisers. At IMD, he directs the First 90 Days open program for leaders taking on challenging new roles and co-directs the Transition to Business Leadership (TBL) executive program for future enterprise leaders.

Ralf Weissbeck

Former Group Chief Information Officer and member of the Executive Committee at The Adecco Group

Ralf Weissbeck is the former CIO of The Adecco Group. He co-led the recovery from the 2022 Akka Technologies ransomware attack and led the recovery from the 2017 Maersk ransomware attack, which shut down 49,000 devices and 7,000 servers and destroyed 1,000 applications.
