Why management cannot be reduced to an algorithm
Many managerial decisions are not optimization problems; they are judgment problems. The view that management is a mechanical activity – one that can be performed by algorithmic systems without meaningful loss – rests on a false assumption: that the optimal execution of strategy is a fully determined process which can, in principle, be specified completely in advance. This assumption fails for at least two reasons, one practical and one ethical.
The practical reason is that many management decisions involve weighing considerations that cannot be measured precisely or objectively, and that cannot be compared to one another on any universal scale. In these situations, humans exercise judgment. When a manager decides how to deliver critical feedback to an employee, they are balancing a complex web of interrelated considerations: the employee’s need to hear the truth, their emotional state that day, the relationship the manager hopes to maintain, the signal the conversation sends to the rest of the team, and a host of other factors. There is no formula for making such a decision. Nor is it a measurement problem that better data could solve; it is intrinsic to the situation that such factors must be weighed through judgment, not calculation.
The ethical reason stems from a basic fact: management decisions frequently have human consequences. When a customer service policy affects whether a vulnerable person receives help, when a staffing decision determines whether someone keeps their job, or when an algorithmic recommendation is applied to a real human case – these are not merely operational matters. They affect the lives and welfare of real human beings.
As such, these decisions fall into the realm of ethics – and ethical choices cannot and should not be outsourced to algorithms. They must remain the privilege and the responsibility of human beings. As an IBM training manual put it nearly 50 years ago: “A computer can never be held accountable, therefore a computer must never make a management decision.” This principle extends to any decision that affects human lives.
This point applies even to those who believe, as Milton Friedman did, that a corporation should be bound only by legal constraints and some minimal, poorly defined notion of ‘ethical custom.’ Ultimately, it doesn’t matter whether you think your company has moral obligations beyond the most basic – what matters is that your customers, employees, and regulators do, and they will act accordingly if you fail to meet them. No algorithm can anticipate every situation in which applying its logic will provoke public outrage, regulatory scrutiny, or legal liability. Human judgment is required not only to do the right thing, but also to recognize when doing the wrong thing will be costly – and those two considerations are not always separable in practice.