
The ethics of digital persuasion: how businesses should navigate psychological profiling 

Published May 12, 2025 in Technology • 12 min read

Apple’s ‘Evil Steve’ test is just one way to ensure that you are handling your customer data ethically and responsibly.

In the rapidly evolving landscape of digital persuasion, companies face mounting pressure to collect and use consumer data responsibly. Consumer trust continues to erode as revelations about data misuse emerge with concerning regularity. This erosion coincides with a technological revolution that has made psychological profiling and behavioral modification more accessible than ever before.

The convergence of big data, artificial intelligence, and behavioral science has created unprecedented capabilities for organizations to understand and influence human behavior. Executives now confront difficult questions about where to draw ethical boundaries in this new frontier. Apple, known for its commitment to user privacy, developed an innovative approach to evaluate the ethical implications of its data practices. The company created the ‘Evil Steve’ test, named after its late co-founder Steve Jobs.

This thought experiment asks project team members to imagine a future where an evil CEO has taken control of Apple with the intention of harming and exploiting users. Team members must consider whether they would still feel comfortable with the data collection methods and product designs they’ve implemented under these hypothetical circumstances. If the answer is no, they return to the drawing board.

This practical exercise forces teams to confront worst-case scenarios and design products with safeguards that extend beyond the tenure of the current leadership, according to Sandra Matz-Cerf, a computational social scientist who serves as the David W Zalaznick Associate Professor of Business at Columbia Business School.

“Apple has an interesting strategy for anticipating such negative outcomes and acknowledging that data is permanent – but leadership might not be,” explained Matz-Cerf, who said this approach represented a significant shift from compliance-focused ethics to values-based decision-making. As organizations face increasing scrutiny over their data practices, such proactive ethical frameworks may prove essential.

The current state of psychological profiling in business

How advanced are most organizations when it comes to using big data and related technologies for psychological profiling and behavioral modification? While adoption varies, Matz-Cerf said the potential for sophisticated targeting continues to grow.

“Many companies have a pretty decent sense of how they can use data to not only predict but also change behavior,” explained Matz-Cerf, who recently authored Mindmasters: The Data-Driven Science of Predicting and Changing Human Behaviour. “The equation is pretty simple: The more you know about your counterpart, the easier it is for you to influence their behavior. In most cases, we’re talking about consumers, but there’s certainly potential to use similar approaches internally with employees.”

However, she suggested that explicit psychological considerations remain somewhat limited in practice. Most organizations rely heavily on demographic data and past behaviors rather than deeper psychological insights. This gap stems partly from technological limitations that previously made psychological profiling difficult to implement at scale.

The emergence of large language models (LLMs) changed the landscape dramatically. These advanced AI systems have consumed vast amounts of internet content, making them remarkably adept at understanding human psychology.

“Because they have ‘read’ the entire internet, these models are behavioral experts and probably understand more about psychological science than most humans (or even researchers),” Matz-Cerf observed. “This allows them to not only infer psychological characteristics from pretty much any type of data input – whether that’s your purchases, searches, conversations, posts or more – but also generate content that speaks to these characteristics.”
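
To make that concrete, here is a minimal sketch of what trait inference from everyday text can look like in code. It is an illustrative assumption rather than a documented method from any of the researchers quoted here: call_llm is a hypothetical placeholder for whichever model API an organization uses, and the Big Five framing in the prompt is just one common choice of trait model.

```python
# Illustrative sketch only: asking a general-purpose LLM to infer
# Big Five personality traits from a snippet of user-generated text.
# `call_llm` is a hypothetical placeholder, not a real library call.
import json

def build_trait_prompt(user_text: str) -> str:
    """Builds a prompt asking the model to score Big Five traits."""
    return (
        "Rate the author of the following text on the Big Five traits "
        "(openness, conscientiousness, extraversion, agreeableness, "
        "neuroticism), each on a 0-1 scale. Reply with JSON only.\n\n"
        f"Text: {user_text}"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a request to a hosted model."""
    raise NotImplementedError("Wire this up to a model provider.")

def infer_traits(user_text: str) -> dict:
    """Returns e.g. {'openness': 0.8, ...} parsed from the model reply."""
    return json.loads(call_llm(build_trait_prompt(user_text)))
```

The same pattern runs in reverse for generation: inferred traits can be folded back into a second prompt that tailors copy to them, which is why this capability no longer requires specialized expertise.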

Sam Kirshner, an associate professor in the School of Information Systems and Technology Management at UNSW Business School who studies how algorithms and AI shape behavioral decision-making in operations and technology management, points to concerning examples of how these capabilities are already being used. He references Sarah Wynn-Williams’ memoir, Careless People, which claims that Facebook allowed companies selling beauty and weight-loss products to explicitly target teenage girls immediately after they deleted a selfie – a moment likely marked by feelings of insecurity.

“While such data-driven manipulation was once limited to major tech platforms, the rise of Generative AI has made these tools more widely accessible,” he said. “As a result, more companies will now have the power to influence people – for better or worse.”

Michael Yaziji, Professor of Strategy and Leadership at IMD, places these developments in a historical context. “Psychological manipulation for profit is nothing new,” he said. “Back in the 1950s, Vance Packard exposed this in The Hidden Persuaders, documenting how marketers tapped into our subconscious motivations and emotional triggers to influence what we buy. Every consumer-facing company has been doing this for decades – profiling customers and nudging buying behavior.”

The difference today lies in the scale and precision of these techniques. “We’re not talking about broad demographic targeting anymore – they’re tracking your scrolling speed, how long you linger on content, and even your momentary emotional states,” explained Yaziji, an expert in strategy, leadership, and sustainability who is currently researching the impact of artificial intelligence. “The goal remains the same as in Packard’s day – influencing behavior – but the methods have become frighteningly precise, with modern AI processing billions of micro-signals every second.”

Potential benefits of psychological targeting

Despite valid concerns, psychological targeting offers significant potential benefits when applied responsibly. Matz-Cerf framed it as a neutral tool that can be directed toward positive outcomes: “As with every other technology, psychological targeting is a tool that allows us to tap into the needs and motivations of people and change the way they interact with the world. That can create incredible opportunities for supporting people to accomplish their goals even when doing so isn’t easy.”

She highlighted a real-world example where her team partnered with US non-profit SaverLife to help low-income individuals improve their savings habits. By targeting users with messages tailored to their personality traits, they achieved meaningful results.

“Among those who received the personality-tailored messaging, 11% managed to save $100 – up from 4% among people who didn’t receive any messaging and 7% among people who were targeted with SaverLife’s best-performing generic campaign,” Matz-Cerf reported. “That’s still far from perfect, of course. But think of it this way: of every 100 people reached by our campaign, we managed to get an additional five to at least double their savings and build a critical emergency cushion.”

Kirshner pointed to the growing role of AI as a personalized assistant. He described how freely available LLMs provided him with customized travel recommendations during a recent trip to Scotland. “By telling Gemini that I’m a vegetarian who enjoys nature, visiting distilleries, nice hotels, and avoiding crowded tourist areas, I was able to get a personalized and highly relevant itinerary with activities, restaurants, and hotels, which I largely followed,” he explained.

He envisions a future where digital twins will understand not just our preferences but our deeper psychological motivations, allowing them to make increasingly complex decisions on our behalf. Even seemingly simple tasks like grocery shopping involve nuanced preferences these systems will need to grasp.

“Take the task of buying yogurt. My AI knows I prefer mango coconut yogurt. But what happens when the store’s AI offers vanilla Greek yogurt at a steep discount?” Kirshner asked. “Without knowing how price-sensitive I am or how much I like Greek yogurt, how would my AI know which choice I’d prefer?”
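
Kirshner’s question can be made concrete with a toy decision model. The sketch below is a deliberately simplified assumption, not a description of how any shipping assistant works: each option is scored as a taste preference minus a price penalty, and without a calibrated price_sensitivity value the agent simply cannot rank the two yogurts on the user’s behalf.

```python
# Toy model of the yogurt dilemma: a digital twin can only rank
# options if it knows how the user trades taste off against price.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    taste_score: float  # how much the user likes it, 0-1
    price: float        # shelf price in dollars

def utility(option: Option, price_sensitivity: float) -> float:
    """Higher is better: taste minus a weighted price penalty."""
    return option.taste_score - price_sensitivity * option.price

mango = Option("mango coconut", taste_score=0.9, price=3.00)
greek = Option("vanilla Greek (discounted)", taste_score=0.6, price=1.50)

# A price-insensitive user sticks with the favourite...
assert utility(mango, 0.05) > utility(greek, 0.05)
# ...while a price-sensitive user takes the discount instead.
assert utility(greek, 0.30) > utility(mango, 0.30)
```

The point is not the arithmetic but the missing parameter: the more of a user’s psychology an assistant has to estimate, the more personal data it needs – the tension the next section turns to.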

The dark side of digital persuasion

Despite these potential benefits, significant concerns exist about the risks associated with psychological profiling and behavioral modification technologies. Privacy loss is a primary concern, and Matz-Cerf challenged the common refrain that privacy concerns only matter to those with “something to hide.”

“Here’s the problem with this kind of mindset,” she asserted. “First of all, not having to worry about your personal data getting into the wrong hands is a privilege that not all people get to enjoy (think of gay individuals in parts of the world where homosexuality is still criminalized). But more importantly, it’s a privilege that might not be granted to you tomorrow. Because your data is permanent, but the leadership of companies or governments with access to the data isn’t.”

Privacy fundamentally concerns personal autonomy, and Matz-Cerf explained that giving up privacy means giving up control over your life and the choices you make. “Once a third party understands who you are, they can leverage those insights to influence your decisions – whether those are as small as the toothpaste you choose or as big as the political candidate you vote for,” she said.

Kirshner illustrated how surveillance itself shapes behavior with a relatable example: “I ask those who admit to being bad singers whether they ever sing in the shower. Most say yes. Then I ask: ‘If you knew your friends were secretly listening at the bathroom door while you belted out Taylor Swift, would you still sing?’ Most admit they wouldn’t – they’d shower in silence out of embarrassment.”

This principle extends to many contexts, including driving habits. “The same thing happens in traffic: even if everyone is driving just 5km/h over the speed limit, the moment a parked police car comes into view, drivers instinctively hit the brakes to get under the limit,” he observed. This constant sense of being observed tends to promote conformity rather than creativity or self-expression.

Yaziji reflected on the broader societal impacts of AI, particularly regarding mental health and social connections, by referencing media critic Neil Postman’s warnings about entertainment-driven media. “Neil Postman’s warning in Amusing Ourselves to Death feels eerily prophetic now,” he said. “He argued that media shapes not just what we consume but how we think, pushing society toward prioritizing entertainment above all.”

Modern platforms like TikTok and YouTube exemplify this concern, with AI systems “designed with one purpose: keeping you glued to endless streams of bite-sized content, sacrificing depth and critical thinking along the way.” Yaziji describes this as a paradigm shift. “We’re not the consumers anymore – we’re what’s being consumed.”

Organizational approaches and ethical considerations

When asked about organizational approaches to these technologies, the experts offered varying perspectives on corporate intentions and practices.

Matz-Cerf took a relatively optimistic view: “I genuinely believe that most organizations have good intentions when they use data,” she asserted. “Of course, the application of technologies like psychological targeting is meant to boost profits, but they often do that by creating value. It’s rare for a company to come in and ask how they can best exploit and harm their customers.”

However, she acknowledged that good intentions alone prove insufficient without proper safeguards. Without appropriate guardrails in place, even relatively benevolent actors can create harm. “This is partly because safeguarding data from outside attacks is hard, and partly because leaders will always face trade-offs, where using data in a particular way benefits consumers but not the company or vice versa.”

Kirshner offered a more cautious assessment, highlighting how competitive pressures can lead to ethical compromises. “While I would like to believe that most organizations have good intentions when they use data, competitive pressures can lead managers to morally disengage – that is, to find ways of justifying behavior they might otherwise consider wrong.”

He described how industry norms can normalize questionable practices. “If several competitors are already cutting ethical corners by launching AI tools that manipulate consumer behavior or compromise privacy, managers may begin to see these actions as normal. The more common the behavior becomes, the easier it is to shift responsibility onto ‘industry standards’ or say: ‘Everyone else is doing it.’”

Comparative justification represents another common rationalization, he added. “A company might justify using mildly manipulative algorithms by pointing out that a rival uses far more aggressive targeting. These rationalizations allow people to maintain a sense of moral integrity while still engaging in questionable decisions.”

Practical guidance for ethical implementation

For organizations seeking to implement these technologies ethically and responsibly, Matz-Cerf encouraged a fundamental shift in mindset: “The simplest answer is to shift from asking yourself what you can legally get away with to what is ethical and aligned with your core values. It sounds so simple, but in my experience, the former approach dominates all too often.”

Emerging technical solutions that address privacy concerns while preserving the benefits of personalization are an important consideration in this process, she added. Instead of pooling data on centralized servers, machine learning approaches like federated learning train algorithms directly on users’ devices, ensuring that sensitive information never leaves their hands. Apple’s Siri and Google’s predictive text on Android devices, for example, leverage the computing power of your smartphone to train their models locally.
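
For readers who want a feel for the mechanics, here is a minimal federated-averaging sketch in plain Python. It is a simplified illustration of the general idea under stated assumptions, not how Siri, Gboard, or TensorFlow Federated are actually implemented: each simulated device fits a tiny linear model on its own private data, and only the resulting weights, never the raw data, travel to the server to be averaged.

```python
# Minimal federated-averaging sketch: raw data stays on each "device";
# only locally trained model weights are sent to the server and averaged.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    """One client's training round: gradient descent on its private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Three devices, each holding private samples from the same true model.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))  # this data never leaves the device

global_w = np.zeros(2)
for _ in range(10):
    # Each device trains locally; the server sees only the weights.
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)  # federated averaging

print(global_w)  # converges towards [2.0, -1.0] with no data pooling
```

Real deployments add layers this sketch omits – secure aggregation, differential-privacy noise, and client sampling among them – but the core privacy property is visible even here: the server improves a shared model without ever holding anyone’s raw data.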

The adoption of these privacy-preserving technologies continues to accelerate. “Although the transition to techniques like federated learning won’t happen overnight, their adoption is already expanding rapidly,” Matz-Cerf noted. In addition to companies like Google making much of their foundational research accessible through academic papers and open-source frameworks (such as TensorFlow Federated), she said there is a growing industry of consulting companies supporting the integration for SMBs that might lack access to internal expertise.

Kirshner emphasizes the importance of public education and awareness. To ensure an ethically beneficial approach to AI development and use, he said organizations and their leaders must look beyond internal compliance frameworks and consider the broader ecosystem in which their technologies operate. “One of the most effective long-term strategies is to increase public awareness and digital literacy,” he affirmed.

He also highlighted the important role of consumer demand in driving positive change, similar to sustainability trends in other industries. Just as growing consumer demand for sustainability is starting to reshape industries like fashion and food, a well-informed public can create new market dynamics around ethical AI. “If people begin to prioritize privacy, fairness, and explainability when choosing digital services, this will open the door for startups to build ethically grounded alternatives and put pressure on larger tech firms to reform questionable practices,” he said.

Key takeaways for business professionals

For business leaders navigating the complex landscape of psychological profiling and behavioral modification technologies, six practical insights emerge from these expert perspectives:

  • Implement proactive ethical frameworks like Apple’s ‘Evil Steve’ test. By considering worst-case scenarios for data usage, organizations can build safeguards that transcend current leadership and market conditions. This approach acknowledges that the data collected today will outlive current management teams and governance structures.
  • Recognize that technological capabilities now make psychological targeting accessible at scale. With the emergence of LLMs, organizations can implement sophisticated psychological profiling without specialized expertise. This democratization creates both opportunities and responsibilities.
  • Focus on value creation rather than exploitation. The most sustainable applications of these technologies help consumers achieve their goals rather than manipulating them against their interests. The SaverLife example demonstrates how personalized interventions can support positive behavioral change.
  • Explore privacy-preserving technologies like federated learning. These approaches enable personalization without centralized data collection, potentially resolving the tension between privacy and customization. Growing support from major tech companies and consultancies makes these solutions increasingly accessible.
  • Watch for signs of moral disengagement within your organization. When teams justify decisions by pointing to industry norms or competitor behavior, they may be rationalizing ethically questionable practices. Create space for explicit ethical discussions separate from legal compliance considerations.
  • Finally, prepare for increasing consumer awareness and demand for digital ethics. As public understanding grows, companies with strong ethical practices may gain competitive advantages similar to those enjoyed by sustainable brands in other sectors. Early adoption of responsible approaches may yield long-term market benefits.

The path forward requires balancing technological possibilities with ethical responsibilities. As Matz-Cerf explained, psychological targeting remains fundamentally a tool – one that can either support or undermine human autonomy and well-being. The difference lies not in the technology itself but in how organizations choose to implement it.

This article was first published by UNSW Business School in Sydney, Australia, and is republished with its permission. 
