“Data integrity” focus: At the foundation of a robust AI framework lies the imperative to mitigate bias, safeguard privacy, and respect intellectual property rights. With the advent of GenAI, the risk of inadvertently revealing sensitive training data becomes a pressing concern. This underscores the need for a pre-emptive focus on the implications of bias, discrimination, and decision-making based on unverified data, which in turn requires interdisciplinary collaboration, bringing together expertise from data privacy, customer services, legal, marketing, and development to identify and mitigate risks.
Given the particularly sensitive nature of data within the insurance sector, Swiss insurer Die Mobiliar proactively formulated a data strategy well before the emergence of ChatGPT. It assembled an interdisciplinary team from compliance, business security, data science, and IT architecture. This team’s mission is to ensure that the company’s data strategy and its ethical considerations stay aligned, which it does by exchanging knowledge, offering advice, and maintaining a comprehensive perspective. The team is governed by a board made up of wider stakeholders, who carry information back to their respective business lines and alert the team to relevant issues. This two-tiered governance model provides direction and encourages cooperative efforts on matters of digital responsibility, establishing a solid foundation for the use of GenAI that is committed to data integrity.
Collaborative intelligence: GenAI experimentation is not an isolated pursuit; it’s a collaborative effort that benefits from shared insights and advancements. The learnings from one department can propel innovation throughout an organization, emphasizing that the value lies not just in individual achievements but in how they collectively uplift the whole. This can be done by building what transformation experts describe as corporate centers of excellence (COEs), that is, a model built around a centralized knowledge base that creates hardwired processes and standardized systems for efficiently producing and deploying AI models and solutions across the organization. These COEs can be standalone cost centers, either fully funded by corporate headquarters or cofunded by various business units. Alternatively, a COE can operate under IT as a separate unit with clear boundaries, or under a dominant business unit or key function such as marketing.
GenAI also introduces a dynamic that requires continual interaction with technology, moving beyond static learning curves. As these tools learn autonomously, employees face the ongoing challenge of adapting to their evolving capabilities. The integration of GenAI into the workplace is redefining the interplay between human cognition and artificial intelligence, raising critical questions about how to employ it in ways that enhance rather than hinder human capabilities. To navigate this effectively, it is useful to first differentiate among the participants in the interaction (individuals, groups, or machines) and then to identify the initiator of the interaction (a human or a machine). This categorization results in six possible collaborative uses of GenAI, such as “CoachGPT,” which offers employees suggestions on managing their work by observing what they do and the environment they work in, or “BossGPT,” which advises and coordinates a group of people on what they could or should do to maximize team output. GenAI is becoming a core component of our cognitive processes, demanding a thoughtful approach to its adoption that prioritizes augmentation and seamless integration into the fabric of organizational intelligence.
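To make the two dimensions concrete, the minimal Python sketch below simply enumerates the three-by-two grid they imply. The placement of CoachGPT and BossGPT in the machine-initiated cells is an illustrative assumption inferred from their descriptions above; the other four combinations are left unnamed.

```python
from itertools import product

# The two dimensions of the interaction described above.
INITIATORS = ["human", "machine"]                  # who starts the interaction
PARTICIPANTS = ["individual", "group", "machine"]  # who takes part in it

# Illustrative placement of the two named examples (an assumption,
# inferred from their descriptions in the text); the remaining four
# combinations are left unnamed here.
EXAMPLES = {
    ("machine", "individual"): "CoachGPT: suggests how an employee could manage their work",
    ("machine", "group"): "BossGPT: advises and coordinates a team to maximize output",
}

def collaboration_modes():
    """Enumerate the six initiator-by-participant combinations."""
    return [
        {
            "initiator": initiator,
            "participant": participant,
            "example": EXAMPLES.get((initiator, participant), "unnamed mode"),
        }
        for initiator, participant in product(INITIATORS, PARTICIPANTS)
    ]

if __name__ == "__main__":
    for mode in collaboration_modes():
        print(f"{mode['initiator']:>7} -> {mode['participant']:<10} {mode['example']}")
```

Running the script prints all six cells, which makes it easy to see that the two named examples cover only the machine-initiated side of the grid and that human-initiated modes remain to be characterized in the same terms.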