
by Michael D. Watkins, Ralf Weissbeck Published 21 September 2023 in Technology • 10 min read
Artificial intelligence (AI) is everywhere, from chatbots handling customer queries to analytics optimizing supply chains for greater efficiency and effectiveness. But beneath its shiny exterior, the rapid advance of AI has a dark downside. When misused, tools designed for productive tasks can also identify and exploit security gaps. As companies rapidly adopt AI to enhance their businesses, cybercriminals leverage it just as rapidly to power their malicious efforts. The result is a new digital arms race: companies must continually upgrade their AI-driven security measures to counter the AI that hackers use to craft ever more effective attacks. For business executives, the imperative is clear: stay informed and be proactive. It's no longer enough to respond to threats as they come.
As you navigate this evolving digital terrain, you must understand how the development of AI is transforming the cyberthreat landscape. Three types of attacks are most concerning to companies: cyber fraud, data breaches, and ransomware encryption. We will not address a fourth type of potential attack, cyber sabotage, which is intended to wreak permanent damage on systems, especially those controlling infrastructure or essential services. However, many of the ways AI can be leveraged in cyber fraud, data breaches, and ransomware attacks can also be used for cyber sabotage.
Cyber fraud involves deceptive schemes created for financial gain. It can target individuals, businesses, or institutions from any global location. Common forms include phishing, where deceitful methods are used to extract sensitive data; online identity theft for impersonating victims; unauthorized bank transfers and online credit card usage; deceptive e-commerce practices; advance-fee scams promising larger future sums; and online investment frauds.
Cyberattacks that aim to steal data from companies are commonly called "data breaches" or "data theft". When the intention is specifically to infiltrate a network or system and steal sensitive data for illicit purposes, such as selling the data on the dark web, using the data for identity theft, or engaging in corporate espionage, it is considered a data breach.
Ransomware attacks have become a critical cyber threat in recent years. At the heart of a ransomware attack is encryption: a method through which hackers make a victim's files or systems inaccessible until a ransom is paid. Ransomware encryption attacks involve malicious software that, once installed on a victim's computer or network, encrypts files, folders, or even entire drives. The encryption is often robust, rendering the files unreadable without a unique decryption key. Victims are then sent a ransom note demanding payment (typically in cryptocurrency like Bitcoin) for the decryption key.
To raise the stakes even further, the most sophisticated hacks increasingly involve "double extortion", where the attackers steal critical data (for example, customers' and employees' personally identifiable information) and use encryption to prevent the company from accessing it. The combined threat of a data leak and disabled critical systems increases the pressure to pay the ransom.
In cyber fraud, data breaches, and ransomware attacks, malicious payloads are often delivered through phishing emails, software updates, or infected websites. Unsuspecting users download an attachment or click on a link that appears legitimate, unknowingly allowing the malware to infiltrate their system and, in the case of ransomware, encrypt it.
Malicious actors can leverage AI to facilitate and enhance cyber fraud, data breach, and ransomware attacks in the following ways:
AI has made it possible for hackers to craft highly personalized phishing emails. By analyzing vast data sets, from social media to online purchases, AI-driven tools like WormGPT, a hacking tool based on the GPT-J AI model, generate phishing emails tailored to individual recipients, making them not only convincing but virtually indistinguishable from legitimate correspondence.
AI tools can scan and identify weak points in networks faster than human hackers. Integrating AI-driven tools means vulnerabilities can be exploited more promptly and on a broader scale. Additionally, AI algorithms can analyze massive amounts of data to pinpoint potentially vulnerable targets, from individual users to businesses with weak security postures.
Once a system has been infiltrated, AI can rapidly sift through and analyze vast amounts of data, pinpointing and extracting the most valuable information. This efficiency makes data breaches more damaging, as sensitive data can be accessed and exfiltrated faster.
AI-powered malware is designed to evade traditional security measures by adapting its code in real time, enabling it to make decisions on spreading techniques, targeting criteria, or data extraction methods. Similarly, AI-enhanced ransomware can navigate intelligently within a network, prioritizing the most critical files to maximize impact.
With machine learning, algorithms can predict and crack password patterns more effectively than traditional methods, facilitating unauthorized access to systems and accounts faster than ever before.
AI can amplify ransomware attacks by developing complex encryption methods, making data recovery without the decryption key challenging. Once the data is encrypted, AI-driven ransomware can strategically demand ransoms, assessing the victim's potential to pay based on the data's sensitivity.
Traditional cyberattacks require substantial human oversight. However, AI automation allows attacks to be executed at an unprecedented scale and speed. This makes attacks more extensive and complicates mitigation efforts due to the sheer volume and rapid evolution of threats.
AI will increase the likelihood that your company will be attacked and that the attacks will be more damaging. The foundation of effective defense is strong security controls and up-to-date, vendor-supported applications and hardware systems.
An essential security control is Multi-factor Authentication (MFA). At a minimum, it needs to be in place for all internet-facing applications. Likewise, all your software applications and hardware systems must be upgraded to the latest vendor-supported versions. Having unsupported or out-of-date software or hardware/firmware is a sure-fire recipe for disaster. This is especially risky if your organization has internet-facing applications that were not developed and administered by the IT organization but were created by "shadow IT" groups. Getting your security fundamentals in shape was essential before AI, and the risks are only increasing because attacks will be faster and more sophisticated.
If all this leaves you feeling queasy, don't despair: while AI introduces these potential avenues for enhanced cyberattacks, it can also be leveraged to augment your defense. AI-enhanced security systems can detect anomalies, predict potential threats, and respond quickly to breaches, often faster than human analysts. Thus, as AI shapes the future of cyberattacks, it's equally shaping the future of cybersecurity. As cyber threats become more sophisticated, the tools we use to counteract them must also evolve. AI has proven to be a game-changer in this realm, offering enhanced capabilities to strengthen cyber defense mechanisms. These include:
Predictive analytics has been transformative, emphasizing a proactive stance instead of a reactive approach where actions are taken only after an incident. Companies can glean insights into potential future threats by training AI models on extensive historical datasets, which might encompass past cyberattacks, vulnerabilities, and network activities. These models can identify likely attack vectors or foresee vulnerabilities, enabling organizations to fortify their systems before an actual threat materializes.
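To make the idea concrete, here is a minimal sketch of predictive threat analytics, assuming a hypothetical CSV of historical incidents with a handful of numeric features. The file names, column names, and model choice (a scikit-learn random forest) are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of predictive threat analytics on hypothetical incident data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Each row describes one past incident: exposed services, patch lag, privileged
# accounts, failed logins, plus whether it ultimately resulted in a breach.
df = pd.read_csv("incident_history.csv")  # hypothetical dataset
features = ["exposed_services", "days_since_last_patch",
            "privileged_accounts", "failed_logins_per_day"]
X, y = df[features], df["resulted_in_breach"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a simple classifier to flag assets whose profile resembles past victims.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Rank current assets by predicted breach risk so hardening can be prioritized.
current_assets = pd.read_csv("current_assets.csv")  # hypothetical inventory
current_assets["breach_risk"] = model.predict_proba(current_assets[features])[:, 1]
print(current_assets.sort_values("breach_risk", ascending=False).head())
```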
Traditional cybersecurity methods often relied on known malware signatures. However, as cyber threats evolve, solely relying on signature-based detection becomes inadequate. Enter behavioral analytics. By employing AI, systems can continuously monitor and learn a network's 'normal' behavior patterns. Any deviation from this established norm, be it a slight anomaly in data transfer rates or uncharacteristic access patterns, can trigger alarms. This real-time monitoring and detection drastically cuts down the time between intrusion and detection, often catching breaches as they happen and dramatically limiting potential damage.
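As a rough illustration of the approach, the sketch below fits an IsolationForest (scikit-learn) to a synthetic baseline of per-host behavior and flags a deviating observation. The features and numbers are invented purely for demonstration; real deployments learn from far richer telemetry.

```python
# A minimal sketch of behavioral anomaly detection on synthetic per-host features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline: rows are hosts, columns are behavioral features such as
# MB transferred per hour, distinct destinations contacted, and off-hours logins.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[500, 20, 1], scale=[50, 5, 1], size=(1000, 3))

# Learn what "normal" looks like from the baseline period.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# New observations: one typical host, one exfiltrating far more data than usual.
new_activity = np.array([
    [510, 22, 1],      # typical behavior
    [9000, 400, 30],   # sudden spike in transfers, destinations, odd-hour logins
])
labels = detector.predict(new_activity)  # -1 flags an anomaly, 1 means normal
for row, label in zip(new_activity, labels):
    print(row, "ANOMALY - investigate" if label == -1 else "normal")
```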
When it comes to cybersecurity breaches, swift action is paramount. The longer a breach goes unaddressed, the more the potential damage multiplies. AI-driven systems are equipped to provide immediate responses when threats are detected. Instead of waiting for human intervention, these systems can autonomously take measures such as isolating affected nodes, shutting down specific processes, or even launching countermeasures. Meanwhile, human response teams are alerted and can step in for a more detailed intervention, assured that initial containment measures are already in place.
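A minimal sketch of such an autonomous first response is shown below. Here, isolate_host and notify_soc are hypothetical placeholders for whatever EDR, network-access, or orchestration APIs an organization actually uses, and the alert threshold is arbitrary.

```python
# A minimal sketch of automated first response with human follow-up.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auto-response")

def isolate_host(host_id: str) -> None:
    # Placeholder: in practice this would call your EDR or network-access API.
    log.info("Host %s isolated from the network", host_id)

def notify_soc(host_id: str, score: float) -> None:
    # Placeholder: page the on-call analyst with context for manual review.
    log.info("SOC notified: host %s, anomaly score %.2f", host_id, score)

def handle_alert(host_id: str, anomaly_score: float, threshold: float = 0.8) -> None:
    """Contain first, then hand off to humans for detailed investigation."""
    if anomaly_score >= threshold:
        isolate_host(host_id)           # immediate containment, no waiting
    notify_soc(host_id, anomaly_score)  # humans always review, contained or not

handle_alert("srv-db-07", anomaly_score=0.93)  # illustrative host name and score
```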
It’s crucial to recognize that AI can be a force multiplier in defense strategies, but it also brings challenges. Embracing AI-driven cybersecurity requires not only understanding its strengths but also being cognizant of its pitfalls.
Given its capabilities, it’s tempting to view AI as a silver bullet for all cybersecurity woes. But this mindset can be a double-edged sword. AI tools, no matter how advanced, have limitations. They operate based on their training and the data they have been fed. Thus, they might miss new or nuanced threats that a human expert with years of experience and intuition might catch. Over-relying on AI and sidelining human judgment can lead to vulnerabilities. It’s essential to remember that the best defense often combines the computational power of AI with the discernment and expertise of human professionals.
One inherent weakness of many AI systems, especially those based on machine learning, is their vulnerability to adversarial attacks. In such scenarios, attackers craft specific inputs to deceive the AI model. Hackers can trick these systems into making false predictions or bypassing security measures by feeding them misleading or poisoned data. For instance, an AI system trained to detect malware could be provided data that makes harmful software appear benign, potentially allowing threats to infiltrate the network.
At the heart of AI’s power in cybersecurity is data. The more data an AI system processes, the more accurate and efficient it becomes. However, this insatiable need for data intersects with growing concerns about user privacy. As companies amass vast amounts of data to feed their AI systems, questions arise: Where is this data stored? Who has access to it? How is it being used? Regulations like the General Data Protection Regulation (GDPR) have been implemented to address some of these concerns. Nevertheless, the ethical implications of data collection, storage, and usage remain a hot topic.
Recognizing the evolving dynamics of the AI-driven cyber landscape is undoubtedly vital. But awareness alone isn’t enough. The next and equally significant step is action. Executives must transform their insights into tangible strategies to navigate the digital frontier successfully. Here’s a practical roadmap for companies to fortify their defenses while leveraging AI’s transformative potential.
Before you start worrying about leveraging AI, make sure that you have a solid foundation of basic security controls and procedures in place. As mentioned above, this includes required use of MFA, upgrades of software and systems to the latest vendor-supported versions, and rapid implementation of security patches.
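As one small illustration of the MFA building block, the sketch below uses the open-source pyotp library to enrol a user and verify a time-based one-time password. The account and issuer names are made up, and in practice MFA enforcement belongs in your identity provider rather than in hand-rolled code.

```python
# A minimal sketch of time-based one-time passwords (TOTP), one MFA mechanism.
# Requires: pip install pyotp
import pyotp

# Enrolment: generate a per-user secret and a provisioning URI the user scans
# into an authenticator app (Google Authenticator, Authy, etc.).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: after the password check, also require the current 6-digit code.
code_from_user = totp.now()  # in reality, typed by the user from their app
print("MFA passed" if totp.verify(code_from_user) else "MFA failed")
```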
In the world of cybersecurity, neither machines nor humans should be an island. While AI tools excel in rapidly processing vast amounts of data, detecting patterns, and automating responses, they can sometimes miss the intricate details or context behind certain activities. Humans can bridge this gap with their ability to understand nuance, intuition, and context. The optimal defense mechanism, therefore, is a harmonious fusion of AI and human expertise. In this hybrid model, AI systems manage vast datasets, rapid scans, and initial responses, all while human experts oversee these processes, providing the essential layers of context, judgment, and decision-making. This combined approach ensures a thorough, efficient, and adaptable defense mechanism capable of addressing both broad-scale and subtle threats.
The cyber realm is fluid, with threats continuously evolving, adapting, and becoming more sophisticated. To stay a step ahead, it’s not enough to deploy AI tools and then leave them static. These tools need to grow and learn continuously. Regularly updating and training AI models with the latest threat intelligence is imperative. By exposing them to new attack patterns, strategies, and vulnerabilities, AI systems can be primed to recognize and counter emerging threats swiftly and effectively.
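A compact sketch of that retraining loop follows, assuming a placeholder load_latest_telemetry function stands in for whatever pipeline delivers fresh behavioral data and threat intelligence into training sets.

```python
# A minimal sketch of periodically refitting a detection model on fresh data.
import numpy as np
from sklearn.ensemble import IsolationForest

def load_latest_telemetry() -> np.ndarray:
    # Placeholder: pull the most recent window of per-host behavioral features.
    return np.random.default_rng().normal(size=(1000, 3))

def retrain_detector() -> IsolationForest:
    """Refit the anomaly detector so it tracks current 'normal' behavior."""
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(load_latest_telemetry())
    return detector

# Run this on a schedule (cron, Airflow, etc.) so the model never goes stale.
detector = retrain_detector()
```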
Cybersecurity isn’t just about building digital walls and alarms. In an age where data is the new gold, it’s crucial to remember the ethical implications associated with its mining and use. In this digital Wild West, companies must ensure their AI-driven cybersecurity tools don’t tread on thin ethical ice. To support this, you should establish robust policies prioritizing user privacy, guaranteeing that AI systems handle data with the utmost confidentiality.
Transparency in operations is another pillar: users and stakeholders should understand how these tools function and make decisions. Lastly, aligning AI operations with broader ethical considerations ensures that companies don’t compromise on principles while pursuing security.
Professor of Leadership and Organizational Change at IMD
Michael D Watkins is Professor of Leadership and Organizational Change at IMD, and author of The First 90 Days, Master Your Next Move, Predictable Surprises, and 12 other books on leadership and negotiation. His book, The Six Disciplines of Strategic Thinking, explores how executives can learn to think strategically and lead their organizations into the future. A Thinkers50-ranked management influencer and recognized expert in his field, his work features in HBR Guides and HBR’s 10 Must Reads on leadership, teams, strategic initiatives, and new managers. Over the past 20 years, he has used his First 90 Days® methodology to help leaders make successful transitions, both in his teaching at IMD, INSEAD, and Harvard Business School, where he gained his PhD in decision sciences, and through his private consultancy practice Genesis Advisers. At IMD, he directs the First 90 Days open program for leaders taking on challenging new roles and co-directs the Transition to Business Leadership (TBL) executive program for future enterprise leaders, as well as the Program for Executive Development.
Former Group Chief Information Officer and member of the Executive Committee at The Adecco Group.
Ralf Weissbeck is the former CIO of The Adecco Group. He co-led the recovery of the 2022 Akka Technologies ransomware attack. He also led the recovery of the 2017 Maersk ransomware attack that shut down 49,000 devices and 7,000 servers and destroyed 1,000 applications.