Over the past 16 years, businesses have experienced many major, self-inflicted crises. Consider, for example, the malfunction of the Samsung Galaxy Note 7 (and of several unrelated Samsung product lines) or the large-scale opening of unauthorized accounts at Wells Fargo. These replaced Volkswagen’s emissions cheating and Takata’s faulty airbags on our front pages. The five-year-old News Corp phone-hacking scandal is mostly forgotten, but the BP oil spill hasn’t been. Everyone recalls the malfeasance at global financial institutions such as Merrill Lynch, AIG, and Royal Bank of Scotland, which caused the worst global recession in 80 years. Satyam’s bankruptcy is also forgotten, though its falsification of financials was reminiscent of Enron’s 2001 meltdown. This list traverses industries and nations, and sadly, a cursory online search would find many more crises.

Because of the Columbia space shuttle disaster (insulating foam broke off during launch and damaged the heat shield), and the Challenger disaster 17 years earlier, we believe we know why organizations self-inflict crises. Countless executives and MBAs have studied the key lesson, powerfully summarized by a Columbia investigation board member, General Duane Deal: “The foam did it … the institution allowed it.” They have learned about individual and institutional biases that warp our worldviews. They know the absence of psychological safety keeps team members from disagreeing with dominant opinions. They understand that organizational failures result from rigid reporting lines, a “one right way” problem-solving ethos, cultures that shoot – or set unreasonable standards for – the messenger, and restrictive communications protocols. Hopefully, these lessons have saved lives on some occasions, and enabled more informed decisions among a broader swath of companies.

The companies named above were well known in their industries; most were globally known brands. They employed the best and brightest professionals and executives. So why are we seeing so many self-inflicted crises now? Why did these smart people not prevent these crises and, worse, why did some actively create them? When the crises erupted, their top executives denounced unnamed mid-tier managers, or other companies, for as long as they could. Sometimes these efforts failed, as when the chief of Samsung’s mobile phone business blamed “a minor flaw in the battery manufacturing process.” At other times, they successfully deflected responsibility, as at News Corp and in the financial industry. What precise role did leadership play? These three questions are interconnected.

Digital technologies are permeating the core of 21st century business models. They enable firms to distribute the creation of intellectual property across time and space, and then reassemble the pieces in ways not possible before. Mid-tier executives, who have far more decision-making power devolved to them than their counterparts did 25 years ago, drive this workflow. They lead teams in which globally dispersed people from multiple organizations collaborate on mission-critical tasks. But in most companies, they lack the information they need, can’t communicate with team members in real time, or can’t foresee the implications of key decisions. Undoubtedly, one of them “pulls the trigger” when something goes wrong – whether it is a design problem at Samsung, the inability to design to needed standards at Volkswagen, or the opening of unauthorized accounts at Wells Fargo. They are blamed because they can easily be blamed.

It isn’t hard to understand the earmuffs, shackles, and blindfolds that compromise these mostly decent people who usually try to do their best. Have we rethought how we work in a digital age, when work increasingly requires large doses of unseen discretionary effort? Have we redesigned processes and structures to surface problems before they become crises? Have we allowed the free flow of key information to decision makers? Have we created collaborative, learning-focused cultures? As we worship at the venture-capital-created altar of releasing “minimum viable products,” have we addressed how to preclude the possibility of catastrophic failures? Have we created inter-company coordination protocols that don’t rely on threats of litigation? Have we … In most companies, we have not. And so in 21st century companies, we haven’t truly fixed how “… the institution allowed it.”

Why not? Simply because General Deal’s lesson is incomplete unless we add, “… the top leaders enabled it.” Top leaders enabled crises that ended lives, beggared people, destroyed institutions and reputations, and brought industries and economies to the precipice of disaster? Yes. The motive force behind institutional failure is leadership failure. The failure may be unintended, but that doesn’t exculpate individuals who spend their adult lives seeking the power and prestige of top positions.

Top leaders are enabling the current failures in two ways. First, though they speak of “ecosystems” and “a VUCA world,” they fail to rationally consider the implications of these realities for the day-to-day jobs of their mid-tier executives. They can’t remove the earmuffs, shackles, and blindfolds if they don’t know these exist, if they think that 20th century human organizations can thrive amidst 21st century technology. They don’t even recognize that the slate of questions posed above is relevant, even critically important. Second, they don’t consider at a human level how their stated strategic intents shape the acceptable ethical boundaries for those who must turn those intents into reality. In the highly interconnected digital world, it is very hard to rationally consider the many factors that affect any event. The difficulties are magnified when the factors change unpredictably and with great speed, giving rise to precious few situations with “one right answer” and many with “no good answers.” Given these archaic structures and processes, and without repeated, clear guidance on “what we don’t ever risk,” is it any surprise that decisions about ambiguous options subsequently turn out to be ethically compromised?

While an editor “pulled the trigger” on illegally hacking the mobile phone of a kidnapped child, Rupert Murdoch enabled the decision. He didn’t set ethical standards in a scoop-focused media market, and he hired executives who didn’t establish policies and procedures to preclude such acts. (Indeed, he rehired an executive cleared of criminal wrongdoing, signaling that her ethical and managerial failures didn’t matter.) While mid-tier executives and engineers “pulled the trigger” on designing Volkswagen engines that responded falsely to emissions tests, Ferdinand Piëch’s and Martin Winterkorn’s win-at-all-costs performance demands, and the absence of appropriate procedural safeguards, enabled – even encouraged – them to do so. While batteries from external suppliers may yet be shown to have caused the Galaxy Note 7 fires, executives at Samsung’s pinnacle cannot be absolved of responsibility for its top-down, follow-your-orders culture.

Virtually every 21st century business scandal is reducible to this morality tale of a technology that allows us to do things we couldn’t before, coupled with major institutional failures that were enabled by failures of omission and commission of corporate leaders. We can’t do anything about the technology, but we can tackle the institutional and human failures. That responsibility ultimately lies at the very top. When they hire CEOs, Boards of Directors must make ethical norms the deal-breaking criterion. CEOs and their direct reports must rethink not just how to compete using digital technology, but more importantly, how work should be done in a digital-technology-mediated world. If not, we will experience more scandals, each of which will be succinctly diagnosable as, “The [X] did it … the institution allowed it … the top leaders enabled it.”


Amit Mukherjee is Professor of Leadership and Strategy at IMD business school in Singapore.