In the past couple of weeks, the business community has been bombarded with headlines about ChatGPT, the artificial intelligence (AI) software created by Microsoft-backed company OpenAI that can answer questions, write essays, and generate code. It has quickly amassed millions of users and been praised by business leaders, including the billionaire Elon Musk, who tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI.”
This is not just a milestone in the development of AI; it has huge implications for many different types of businesses. Text-based generative AI models are still in their infancy, but they hold the potential to drive a revolution in productivity, the commodification of knowledge, and a reckoning for leadership development.
There are also major risks. Without the appropriate safeguards, bots like ChatGPT could facilitate plagiarism at scale – not to mention the amplification of human bias, to the detriment of diversity and inclusion.
Companies need to be aware of these upsides and downsides, and most importantly, that the current body of knowledge in almost any field could become commodified — that is, acquired as easily as goods, produce, or stock.
Will AI replace human workers?
I must make a frank admission: ChatGPT responded to my queries on leadership, the characteristics of high-performing teams, and psychological safety even better than I could. So, it’s not hard to see how natural language agents could disrupt at least some parts of many professions, from journalism to law. ChatGPT has, for instance, already performed the job of an investment analyst, writing a research note on how stocks perform during layoffs, with impressive results.
The bot is also likely to add momentum to the automation of legal writing (such as drafting contracts) and basic news reporting. It produces convincing and coherent responses to questions, but its answers often need fact-checking. So, it is useful as a starting point, but not for producing finished work.
That could change. ChatGPT is trained on millions of data points, and it will get better if there continues to be exponential growth in information available for it to digest.
Yet while most of the excitement focuses on its ability to produce text, its major business impact comes from its ability to understand text. What separates ChatGPT from powerful search engines such as Google and knowledge repositories like Wikipedia is its capacity for knowledge synthesis – the ability to identify, appraise, and link information to distill and present arguments.
This talent offers users more personalized and relevant content, which also informs decision making. Eventually, ChatGPT should be able to help answer important business questions, such as how to form a competitive strategy. Users will still need to apply human judgement and context to those decision options, but the AI can help them to extract insights and take better actions.
So, ChatGPT may complement and augment human capabilities, instead of replacing them. It is likely that we will see people and machines working together in intelligent combinations that enrich each other’s strengths. This means people can produce more work more quickly – a potential revolution in productivity for many different types of businesses.
A new generation of “originality filters”
There will be major implications for leadership development, too. Today, much of what is put forward as “new” is old wine in new bottles. There is a strong market for leadership training – worth an estimated $378 billion – but it’s crying out for original ideas.
Here, even ChatGPT struggles to distinguish new ideas from old ones. When I asked what fresh leadership ideas have been written about in the past five years, it highlighted “transformational leadership” and “emotional intelligence”. However, the former was developed by James MacGregor Burns in the 1970s, and the latter by Daniel Goleman in 1995.
If anything, ChatGPT could help to crystallize generally accepted leadership principles. It could also help academics to look up concepts and check whether something identical or similar has already been published. This could herald a new generation of “originality filters”, allowing academics to focus on genuinely groundbreaking research (such as on leading in hybrid workplaces), rather than rehashing well-worn frameworks.
But there is also the potential for the bot to facilitate plagiarism on steroids because it can imitate academic work. It can also evade the current generation of plagiarism checkers – which look for similar phrases, not similar ideas.
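To see why phrase-based checkers miss imitated ideas, consider a minimal sketch (a hypothetical illustration, not any real checker’s algorithm) that flags overlap in word sequences: a verbatim copy scores perfectly, while a paraphrase of the same idea scores zero.

```python
# Hypothetical sketch of a phrase-based plagiarism check: it compares
# overlapping word trigrams, so identical wording is caught but a
# paraphrase expressing the same idea in new words slips through.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def phrase_overlap(a, b, n=3):
    """Fraction of shared n-grams, relative to the shorter text."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / min(len(ga), len(gb))

original = "transformational leaders inspire followers to exceed their own expectations"
copied = "transformational leaders inspire followers to exceed their own expectations"
paraphrase = "leaders who transform motivate teams to surpass what they thought possible"

print(phrase_overlap(original, copied))      # 1.0 – verbatim copy is flagged
print(phrase_overlap(original, paraphrase))  # 0.0 – same idea, no shared phrases
```

Catching the paraphrase would require comparing meanings rather than surface phrases, which is exactly the capability today’s checkers lack.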
Algorithmic bias is the enemy of diversity
ChatGPT also risks reflecting and amplifying human biases and casual prejudices, making it an enemy of diversity and of all the benefits diversity is proven to bring to teams and organizations.
Because the system is trained on a huge data set, it will reproduce whatever biases are embedded in that data. For instance, when I asked ChatGPT about the differences between male and female leaders, it said “male leaders are generally more likely to be seen as competent and decisive, while female leaders are generally more likely to be seen as likable and supportive”. Those gender stereotypes are widely regarded as regressive.
At a time when many companies will be looking to deploy AI systems in their businesses, they will need to be acutely aware of these risks and take steps to reduce them.
A first step would be greater visibility into how the system makes decisions. At present, it is a “black box”: even the humans who design it struggle to fully understand how it reaches its conclusions. ChatGPT is likely to have major implications for both business and society, but we are only just beginning to uncover its potential.