
Brain Circuits

Do you know your CPU from your GPU? Test your knowledge of AI terminology 

Published 12 February 2025 in Brain Circuits • 6 min read

Buzzwords and acronyms abound in the world of AI and seem to be multiplying faster than new model releases. Here are three questions to test your ability to talk tech (and a glossary below if you get stuck).

What’s the main difference between a CPU and a GPU?

A computer’s CPU (central processing unit) acts as its core processor, executing instructions and performing calculations to run programs. While CPUs handle general computing tasks, GPUs (graphics processing units) are specialized circuits designed specifically for processing graphics. Originally developed for gaming, the ability of GPUs to handle parallel computations also makes them ideal for generative AI tasks.
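The parallelism point can be made concrete with a toy sketch. This is plain Python, not real GPU code; the function names and pixel values are invented for illustration, and `map()` here only models the same-operation-on-many-elements pattern a GPU exploits (Python itself runs it serially):

```python
# Illustrative sketch (not actual GPU code): a GPU applies the same
# operation to many data elements at once (data parallelism), while a
# CPU core typically steps through them one by one.

def cpu_style(pixels, brighten):
    # Sequential: one pixel at a time, like a single CPU core.
    out = []
    for p in pixels:
        out.append(brighten(p))
    return out

def gpu_style(pixels, brighten):
    # Conceptually parallel: the same small operation ("kernel") applied
    # to every pixel at once -- the single-instruction, multiple-data
    # pattern that makes GPUs fast at graphics and at AI workloads.
    return list(map(brighten, pixels))

pixels = [10, 20, 30, 40]
brighten = lambda p: min(p + 50, 255)
assert cpu_style(pixels, brighten) == gpu_style(pixels, brighten) == [60, 70, 80, 90]
```

Training a neural network is essentially millions of such identical small calculations, which is why the same hardware built for pixels suits generative AI.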

Why is it important to be able to tell a black box model from a white one?

Models can be classified as either ‘white box’ or ‘black box’ based on their transparency. White box models have clear, interpretable internal processes – you can trace exactly how they transform inputs into outputs. Black box models also take inputs and produce outputs, but their internal workings are opaque and cannot be examined. Though this opacity makes some users uncomfortable, black box models increasingly guide decision-making in fields such as finance. When using black box software, make sure your non-technical teams know which tools are in use and what they can (and cannot) do.
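The distinction can be sketched in a few lines of code. This is a hypothetical illustration (the credit-scoring rule and the weights are invented): a white box exposes every step of its reasoning, while a black box exposes only inputs and outputs:

```python
# Hypothetical illustration: a 'white box' credit rule can be traced
# step by step; a 'black box' only exposes inputs and outputs.

def white_box_score(income, debt):
    # Every step is visible and explainable.
    ratio = debt / income          # step 1: debt-to-income ratio
    score = 1.0 - ratio            # step 2: higher ratio -> lower score
    return round(score, 2)

class BlackBoxModel:
    # Internals hidden: callers see only predict(inputs) -> output.
    def __init__(self, weights):
        self._weights = weights    # opaque learned parameters
    def predict(self, income, debt):
        w0, w1 = self._weights
        return round(w0 * income + w1 * debt, 2)

print(white_box_score(100_000, 20_000))   # 0.8 -- and we can explain why
model = BlackBoxModel((0.00001, -0.00002))
print(model.predict(100_000, 20_000))     # a number, but no explanation
```

Real black box systems (deep neural networks, for example) have millions of such opaque parameters rather than two, which is why their decisions resist inspection.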

What is ‘singularity’?

The technological singularity refers to a hypothetical future point when artificial intelligence advances beyond human control, leading to rapid, irreversible technological growth. The concept suggests AI would become capable of recursive self-improvement, creating ever more intelligent versions of itself at an exponential rate. Debate continues over whether such an event would benefit humanity through unprecedented scientific and medical advances or pose existential risks through uncontrollable AI development.

Glossary

AGI (artificial general intelligence): A form of AI capable of understanding, learning, and performing any intellectual task that a human can do.

AI (artificial intelligence): The simulation of human intelligence in machines, enabling them to perform tasks such as learning, reasoning, problem-solving, and decision-making.

Anthropic: US public-benefit startup founded in 2021 by former members of OpenAI that researches and develops AI to “study their safety properties at the technological frontier” and deploy “safe, reliable models” for the public. It has developed a family of LLMs called Claude as a competitor to OpenAI’s ChatGPT and Google’s Gemini.

Black box model: A machine learning (ML) model with internal workings that are not easily interpretable or understandable by humans, even though they produce accurate predictions.

Chatbot: A software application that uses AI to simulate and process human-like conversations, enabling users to interact with digital systems through text or voice.

ChatGPT: A generative AI chatbot developed by OpenAI and released in November 2022. Users enter prompts to receive human-like, AI-generated text and, in later versions, images.

Claude: A family of LLMs developed by Anthropic as a competitor to OpenAI’s ChatGPT and Google’s Gemini.

CPU (central processing unit): The primary component of a computer that executes instructions and performs calculations to run programs. Handles general-purpose tasks such as arithmetic, logic, control, and input/output operations.

Deep learning: A subset of ML that uses artificial neural networks with multiple layers to model and analyze complex patterns in data.

DeepSeek: A Chinese open-source AI model released in January 2025 that works in a very similar way to ChatGPT. It was allegedly built with less money and computing power than its rivals’ models.

Gemini: Formerly known as Bard, Gemini is a generative AI chatbot developed by Google and launched in 2023 as a direct response to the rise of OpenAI’s ChatGPT.

GenAI: AI that can create new content, such as text, images, audio, or code, by learning patterns from existing data. Examples include language models like ChatGPT and image-generation tools.

GPT (generative pre-trained transformer): An AI model developed by OpenAI and introduced in 2018 as part of a series of transformer-based language models. Transformers can process input data in parallel, making them faster and more efficient than traditional recurrent neural networks.

GPU (graphics processing unit): An electronic circuit designed to accelerate the processing of images and videos. Ideal for machine learning, deep learning, and data processing applications.

Hybrid AI: An approach that combines multiple AI techniques, such as symbolic AI (rule-based systems) and ML, to leverage the strengths of each. It aims to improve decision-making and problem-solving by integrating reasoning-based and learning-based methods.

LLaMA: A family of LLMs released by Meta AI starting in February 2023 to compete with OpenAI’s GPT and Google’s Gemini, but with one key difference: parent company Meta Platforms has made all LLaMA models free for almost anyone to use for both research and commercial purposes.

LLM (large language model): An AI model designed to process and generate human-like text based on large datasets of written language.

Meta AI: A research division of Meta Platforms (formerly Facebook) that develops AI, augmented reality, and virtual reality technologies. It considers itself an academic research laboratory focused on generating knowledge for the AI community and as such is separate from Meta’s Applied Machine Learning (AML) team, which focuses on the practical applications of its products.

ML (machine learning): A subset of AI that enables computers to learn from and make predictions or decisions based on data, without being explicitly programmed.

MLOps (machine learning operations): A set of practices and tools that aim to streamline the deployment, monitoring, and management of ML models in production environments.

Neural network: A type of ML model inspired by the structure and function of the human brain. Used to recognize patterns, classify data, and make predictions.

NLP (natural language processing): A branch of AI that focuses on the interaction between computers and human language. Enables machines to understand, interpret, and generate human language in a way that is meaningful and useful.

Nvidia: One of the world’s largest and most profitable technology companies, Nvidia was founded to make a specific kind of chip called a graphics card (also called a GPU, see above) that enables the creation of sophisticated 3D visuals for gaming. It subsequently expanded into more general computing, developing GPUs that handle parallel workloads far better than CPUs and can perform calculation-heavy tasks such as machine learning quickly.

OpenAI: A US AI research organization founded to develop “safe and beneficial” AGI. Best known for the ChatGPT family of LLMs, the DALL-E series of text-to-image models, and a text-to-video model named Sora. It is 49% owned by Microsoft, which has invested US$13 billion in it and provides computing resources to OpenAI through its cloud platform, Microsoft Azure.

Quantum computing: Computing that uses the principles of quantum mechanics to process information, enabling certain complex problems to be solved much faster than on classical computers.

Singularity: A point in the future when AI surpasses human intelligence, leading to rapid, self-improving advancements.

Supervised learning: A type of ML where a model is trained on labeled data, meaning each input is paired with the correct output.
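A minimal sketch of what “labeled data” means in practice. The dataset and labels below are invented, and the “model” is a deliberately simple nearest-neighbour rule rather than anything a production system would use:

```python
# Supervised learning in miniature: labeled examples (input -> output)
# and a 1-nearest-neighbour 'model' that predicts from them.
# The feature values and labels are invented for illustration.

labeled_data = [
    ((1.0, 1.0), "spam"),      # each input is paired with its correct output
    ((1.2, 0.9), "spam"),
    ((5.0, 5.1), "not spam"),
    ((4.8, 5.3), "not spam"),
]

def predict(x):
    # Return the label of the closest training example.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(labeled_data, key=lambda pair: dist(pair[0], x))
    return label

assert predict((1.1, 1.0)) == "spam"
assert predict((5.2, 5.0)) == "not spam"
```

The labels are what make this “supervised”: the model is corrected against known answers, unlike unsupervised methods that must find structure in unlabeled data on their own.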

Symbolic AI: A branch of AI that focuses on using symbolic representations, such as logic, rules, and objects, to model human reasoning and problem-solving.

Transformer: A deep learning model architecture primarily used for NLP tasks, such as language translation, text generation, and sentiment analysis.

White box model: A model whose internal workings are transparent and interpretable. The decision-making process can be easily understood and traced, allowing humans to see how inputs are transformed into outputs.


Further reading

GAIN: Demystifying GenAI for office and home

AI x Sustainability: The new innovation engine

Generative AI: What comes next?

Bias in Generative AI: A risk that must be addressed now

Think AI is a useful ‘copilot’? Soon it will take the controls 

All views expressed herein are those of the authors and have been specifically developed and published in accordance with the principles of academic freedom. As such, such views are not necessarily held or endorsed by TONOMUS or its affiliates.

Authors


Michael R. Wade

TONOMUS Professor of Strategy and Digital

Michael R Wade is TONOMUS Professor of Strategy and Digital at IMD and Director of the TONOMUS Global Center for Digital and AI Transformation. He directs a number of open programs such as Leading Digital and AI Transformation, Digital Transformation for Boards, Leading Digital Execution, Digital Transformation Sprint, Digital Transformation in Practice, Business Creativity and Innovation Sprint. He has written 10 books, hundreds of articles, and hosted popular management podcasts including Mike & Amit Talk Tech. In 2021, he was inducted into the Swiss Digital Shapers Hall of Fame.


Amit M. Joshi

Professor of AI, Analytics and Marketing Strategy at IMD

Amit Joshi is Professor of AI, Analytics, and Marketing Strategy at IMD and Program Director of the AI Strategy and Implementation program, Generative AI for Business Sprint, and the Business Analytics for Leaders course. He specializes in helping organizations use artificial intelligence and develop their big data, analytics, and AI capabilities. An award-winning professor and researcher, he has extensive experience of AI and analytics-driven transformations in industries such as banking, fintech, retail, automotive, telecoms, and pharma.


José Parra Moyano

Professor of Digital Strategy

José Parra Moyano is Professor of Digital Strategy. He focuses on the management and economics of data and privacy and how firms can create sustainable value in the digital economy. An award-winning teacher, he also founded his own successful startup, was appointed to the World Economic Forum’s Global Shapers Community of young people driving change, and was named on the Forbes ‘30 under 30’ list of outstanding young entrepreneurs in Switzerland. At IMD, he teaches in a variety of programs, such as the MBA and Strategic Finance programs, on the topics of AI, strategy, and innovation.

