In the last few years, UX design professionals, digital marketers, and conversion optimization ninjas have increasingly started using psychology to design intuitive websites, engaging apps and higher converting marketing campaigns.

There’s no shortage of evidence that a good understanding of interactive psychology can transform your formerly unknown app into a trusted and addictive product.

However, there’s one elephant in the room that nobody likes to talk about.

Tai, Banksy's “elephant in the room”. Photograph: Damian Dovarganes/AP

It’s the dirty little secret that scientists, web psychology gurus and digital media pros rarely acknowledge, let alone discuss. It’s the topic that emerges the minute you stop “selling” persuasive technology as a magic formula, and take an honest look at it.

The elephant I’m talking about is the “backfire”: when we misapply psychology so badly that we accidentally trigger the opposite behavior of what we intended.

Like when we try so hard to prove that users can trust us that we accidentally trigger distrust. Or when we launch an unprimed social media campaign and accidentally advertise to a massive audience that couldn’t care less about our company and its products or services.

What follows is an overview of a soon-to-be-published paper on how psychology can easily backfire in behavior change technologies, which Dr. Agnis Stibe (MIT Media Lab) and I co-authored for the 11th International Conference on Persuasive Technology (PT), published in Springer’s Lecture Notes in Computer Science (LNCS). In this article, I’ll give you a friendly introduction to our study and discuss the twelve ways we identified in which psychology can easily trigger more harm than good.

I’ll also discuss how you can take a realistic risk management approach to managing the threat of backfires, without blowing too much time and money scanning for every possible threat.


Common misperceptions around behavioral science

While teaching digital psychology, I often deal with misperceptions and unrealistic expectations about applying behavioral science principles to interactive media.

From my experience, there are a number of people who believe that they can simply design addictive technologies by gamifying their app, or layering on some social pressure. Without doubt, these principles can work in the right context, but the problem is that without practical or scholarly experience, professionals just don’t understand the conditions that can influence the effectiveness of these principles, how different principles combine, or how things can go dreadfully wrong.


How to discredit yourself (easier done than said)

Perhaps the biggest misperception about behavioral science is that psychology always produces the desired outcome, that interventions always work, and that data scientists can easily manipulate you into buying products or services by deploying mind-reading algorithms and tailored ads.

There’s some truth behind each of these, but in practice, the outcomes are far smaller than most organizations are willing to admit, and there’s a lot that can go wrong.

Perhaps the biggest risk is trying to win user trust while accidentally losing it: a backfire that we call “Self-Discrediting”, when someone accidentally trashes their own reputation.

Say a developer learns that customers don’t feel safe when making a purchase on their e-commerce site. They respond with a “no holds barred” approach to boosting user trust and roll out the next iteration of their e-commerce checkout page with six safety certificates, anti-virus badges, a gold seal with their return policy, guarantees, testimonials, big brand logos, customer help lines, and so many pictures of locks and shields that you’ll feel like you’re in Fort Knox. And of course, they tell users that their site is 100% safe and secure.


Is there any guarantee that all these trust-building design elements will increase user trust and sales? Is it possible to overdo it, by trying so hard that we actually trigger distrust?

Trust is a sensitive principle that can easily backfire due to the risks and rewards involved in trust judgements. Think about it in your daily life. How often do you trust anyone who explicitly says “trust me”?

I can’t speak about others, and I have no scientific paper to quote on this exact case, but from my personal experience, anyone who’s aggressively pressured me to grant them my trust has eventually betrayed me. I’m sure that over the course of your life you have learned to recognize the red flags of dishonesty. Chances are that you can name a few.

Misplaced trust can lead to dire outcomes, so fortunately our brains are constantly scanning for threats. Although you can consciously describe the red flags that signal a liar, when you distrust a person, you may not know why. Similarly, you may not know why you trust one person over another.

What’s ironic about trust psychology is that people extend trust to abstract things like brands, companies, and interactive technology. Just as it goes with humans, so it goes with websites and apps. People decide whether they can trust a website in a fraction of a second, often based on superficial design features, though some users may also seek out third-party reviews or feedback.

For our fictional e-commerce example, there is a point where the developers have employed so many trust mechanics that they produce the opposite effect. Instead of boosting trust, a tasteless overuse of confidence-winning measures triggers a threat response, and the company is perceived as untrustworthy. Instead of boosting sales, overdoing it reduces them. Instead of boosting credibility, our developers have practiced self-discrediting by accidentally undermining their own credibility.


Overdoing it can backfire for many reasons, adding not just mistrust but complexity too

Four types of outcomes

When it comes to behavior change, we generally want to see a shift in someone’s beliefs, attitudes, and intentions, and ultimately, their behaviors. However, there is another way we can look at these outcomes, which Dr. Stibe and I classified under four categories, listed in the Intention-Outcome matrix below.

Axis 1 shows whether an outcome is intended or unintended: whether we’re achieving our target behavior or some behavior that we didn’t expect. Axis 2 shows whether the outcome is positive or negative: whether it is a good outcome for us and our target audience, or a bad outcome that runs contrary to our goals.


Intention-Outcome matrix


Within the Intention-Outcome matrix, we defined the four outcomes as follows:

  • Target Behavior: The intended positive behavioral outcome being sought, and typically reported, such as increasing user trust and sales in an e-commerce sale funnel.
  • Unexpected Benefits: An unintended positive behavioral outcome that was not intentionally sought but was positive nonetheless, and may be reported as an additional benefit. This might include attempts to boost trust in the buying process that rub off, making the brand itself seem more trustworthy.
  • Dark Patterns: An intended negative outcome, such as deception or fraud, when persuasion is used (and potentially abused) for the benefit of the evil developer. This could include attempts to boost trust in phishing scams.
  • Backfiring: Unintended negative outcomes where the opposite outcome happens, such as triggering distrust instead of trust. This quadrant is further subdivided into a risk management matrix that contrasts the likelihood of backfiring with its potential severity.
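For readers who think in code, the matrix boils down to a simple lookup on the two axes. This little sketch (the function name and boolean flags are my own illustrative shorthand, not from the paper) shows how an outcome maps onto the four quadrants:

```python
# A minimal sketch of the Intention-Outcome matrix as a lookup. The flags
# and quadrant labels mirror the four categories described above.

def classify_outcome(intended: bool, positive: bool) -> str:
    """Map an outcome's two axes onto its quadrant."""
    matrix = {
        (True, True): "Target Behavior",
        (False, True): "Unexpected Benefits",
        (True, False): "Dark Pattern",
        (False, False): "Backfire",
    }
    return matrix[(intended, positive)]

# Triggering distrust while trying to build trust: unintended and negative.
print(classify_outcome(intended=False, positive=False))  # Backfire
```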


A risk management approach to psychological backfires

We believe that backfires are present in all persuasive media to some degree, so it’s unlikely that you will ever build an interactive technology or campaign that is 100% backfire-free. Instead of advocating a total war on psychological backfires, we thought it would be more practical to take a risk management approach, which means learning to live with backfires and learning to manage them.

Below is our Likelihood-Severity matrix, a risk management framework. During our study, we used a consensus-based approach to arrange the different types and categories of backfires into a simple framework that you can use not just to learn about the backfires, but also to get a sense of how likely each is to occur and how severe it could become.



Likelihood-Severity matrix

Within this matrix, we identified six broad categories with twelve types of backfires. However, to the best of my knowledge, this was the first attempted systematic study on backfiring risks in persuasive technology, so in the future, we hope other scholars will build on our research, and help extend this taxonomy. But for now, you can take this as a list of common and obvious psychological offenders.
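If you want to operationalize this kind of triage, a toy sketch like the one below can help. Note that the likelihood and severity ratings here are placeholders I made up for the example, not the consensus ratings from our study:

```python
# Illustrative triage: score each backfire as likelihood x severity on a
# 1-3 scale, then address the highest scores first. The ratings below are
# placeholders for the example, NOT the study's consensus ratings.

def risk_score(likelihood: int, severity: int) -> int:
    """Higher score = triage sooner."""
    return likelihood * severity

backfires = [
    ("Superficializing", 2, 1),
    ("Self-Discrediting", 2, 2),
    ("Mistargeting", 2, 2),
    ("Anti-Modeling", 3, 3),
    ("Reverse Norming", 3, 3),
]

triage_order = sorted(backfires, key=lambda b: risk_score(b[1], b[2]), reverse=True)
for name, likelihood, severity in triage_order:
    print(f"{name}: risk {risk_score(likelihood, severity)}")
```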

Below is a description of each backfire category and type:


  • Superficializing: The superficial application of theory, such as copying surface tactics without understanding the underlying strategies, principles, or reasoning.


Research shows that gamifying apps like Foursquare may cease to be persuasive over the long term, and may potentially deplete intrinsic motivation.

Fineprint Fallacy

  • Overemphasizing: Motivating people to take action based on one strongly emphasized benefit, while omitting (or hiding) harmful factors in the fine print, ingredients, details, and so on.



Louisville-based KFC announced in 2006 that all 5,500 of its U.S. restaurants had stopped frying chicken in artery-clogging trans fat.



You too can create overemphasized advertisements for low-sodium products with Shutterstock. Simply add a badge to your product's landing page, and fail to notify prospects about the high percentage of trans fats.

Personality Responses

  • Defiance Arousing: Triggering resistance to messages that are incompatible with a person’s self-identity, which can induce unpleasant cognitive dissonance, leading the audience to reject or oppose the message.


Cover of Philip Morris’s “Youth Smoking Prevention: Raising Kids Who Don’t Smoke” study, Vol. 1: Issue 2. “Talk, They’ll Listen” was the tagline. It was later found that the campaign actually encouraged more teens to smoke.


  • Self-Licensing: When someone does something good in one area, they sometimes feel they have a license to misbehave in other areas.

Credibility Damage

  • Self-Discrediting: When the source disseminates discrediting information, causing a misalignment of source and message credibility.
  • Message Hijacking: When a third-party actor re-contextualizes the source’s message, giving it a new meaning, which in many cases undermines the intervention by turning it into a public joke.



Stoner Sloth was an Australian anti-drug campaign that backfired with drastic consequences. The campaign, which portrayed a stoned sloth in teenage situations, was mocked by the online community and even hijacked: it was turned into a parody site, and a parody Twitter account was created.


The logo for the hijacked Twitter account. From their profile: “Stoner Sloth seeks out and reviews the best Canna-companies in Colorado to help you enjoy your smoking experience with a whole lot less work on your part!”


Bono endorsed the BBC’s “Beat Bullying” campaign, which backfired when bracelet wearers suffered more attacks from their bullies.

Poor Judgment

  • Mistailoring: When a tailored messaging system provides information that produces negative outcomes in some users, outcomes that could have been avoided with a more appropriate message.

A drinking screener showed both low and high drinking students how much they consume in comparison to an average consumption. Those that were above the norm felt encouraged to drink less, while those below received an implied message to drink more.

  • Mistargeting: When a message that was intended for one audience segment is misinterpreted by another group of people.

The PlayPump campaign removed perfectly functional water pumps in South Africa and installed a cumbersome toy that relied on child labour to operate.

  • Misdiagnosing: When a behavior change intervention does not properly diagnose user behavior or psychological processes.
  • Misanticipating: Changes in policies or directives that lead to unanticipated shifts in beliefs, attitudes or behaviors.


Social Psychology

  • Anti-Modeling: Demonstrating negative behavior (often in order to demonize it), which raises people’s awareness and may trigger them to perform the bad behavior, especially during moments of susceptibility.
  • Reverse Norming: Interventions that use examples of popular bad behaviors can establish the bad behavior as a social norm.



Two separate studies indicated that the D.A.R.E. program was ineffective and, in some cases, pushed kids toward drug use and lowered self-esteem. Researchers suspected that the intervention’s message made some kids want to try drugs as a way of fitting in. Read this web banner, then ask yourself what outcome might result from an anti-drug web banner that suggests kids want to be on drugs.

Practical steps to reduce backfires

It’s not practical to remove every single potential backfire from your communications, landing pages, interfaces, and so on. So instead of advocating that digital media pros aim to remove all possible backfires, we recommend that you learn to live with them and manage them.

What follows are some principles I put together, to help you better manage backfire risks.

  • Know the limits of removing backfires: The first thing you need to do is adopt the right mindset, accepting that no matter what you do, there is always a chance that some aspect of your design will cause negative impacts in some populations. Once you’ve taken steps to hunt down and remove the highest-risk backfires, accept that it’s not possible to predict how all people will respond. This mindset will keep you on your toes, scanning for potential backfires.
  • Take a risk management approach: Instead of trying to remove every single backfire, prioritize the backfires with the highest likelihood of occurring, so that you can focus on eliminating the highest-risk ones. For instance, superficializing is a lower-risk backfire that is more likely to produce something that simply doesn’t work (wasting your time and money), whereas the social psychology backfires are far more likely to cause serious harm. To get going, use the Likelihood-Severity matrix to scan for the highest-risk backfires in quadrant D, where you’ll want to see if any of your materials may trigger anti-modeling or reverse norming outcomes. If you’re clean on those, move to quadrant C, then B, and finally A.
  • Nip backfires in the bud while it’s cost-effective to address them: It will probably cost you less time and money to remove backfires from a new product or campaign that’s under development than to fix a problem after you’ve rolled it out. As one of my Krav Maga instructors told me, the best way to get out of a bad situation is to avoid getting into it in the first place. Look before you leap.
  • Systematically scan for emergent backfires: Given that you’ll never fully remove all backfires, you’ll need to stay vigilant and on the lookout for backfires that may creep into your projects, whether because you, others, or a changing world caused them to emerge. Scan for backfires as you would in a risk management review, asking yourself, “What can go wrong?” and “How can we stop it before it emerges?” Plan ahead by asking, “If it did occur, what would we do?”
  • Become an obsessive tester: Even with years of scientific and practical experience, behavior change pros can rarely predict which messages and design patterns will be most effective when rolled out to new populations. However, these pros can normally tell you how to run a simple experiment to test which messages work best. So if you want to become great at designing persuasive technologies, you’ll have to develop a basic understanding of the psychological principles and architectures that drive behavior, and then learn which of them work with your target audience. Think of it this way: science will tell you which psychological principles should work in theory, backed by very robust models. But then you’ll have to build messages, interfaces, and real-world materials that you’ll need to test and optimize. I’m of the opinion that conversion rate optimization research helps professionals identify the core psychological ingredients that boost a technology’s persuasiveness, while also reducing their worst backfires.
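To make the testing point concrete, here is a minimal sketch of a two-proportion z-test for comparing the conversion rates of two message variants, using only Python’s standard library. The sample numbers are made up for illustration, and in practice you’d likely reach for a proper stats library:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a/conv_b: number of conversions; n_a/n_b: number of visitors.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 165/2000 vs. variant A's 120/2000.
z, p = two_proportion_z(120, 2000, 165, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If p falls below your significance threshold (commonly 0.05), the difference between the variants is unlikely to be noise; otherwise, keep testing before declaring a winner.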


Ironic humor

We could not stop laughing while co-authoring this paper. And the reason is simple. When you finally see the psychological principle that triggered a backfire, it hits you like the punchline of a great joke. You realize how absurd it was to do X to achieve Y.

As long as the joke isn’t about your digital marketing campaign or app, you’ll probably appreciate the humor. But you may have a different perspective if your backfired app has become the laughing stock of your industry.

The stigma associated with rolling out a backfiring campaign or technology is perhaps too much to bear for most individuals and organizational leaders. Perhaps this is why we suspect a significant publication bias in the scientific community around unreported psychological backfires, and why it’s so difficult to find much research on this subject.

The point of talking about taboo subjects is to broaden one’s perspective: not by denying reality, but by developing a bigger understanding of what’s going on. I hope this discussion of how psychology can go dreadfully wrong has broadened your perspective on using digital psychology and inspired new ideas on how to make sure you’re hitting more of those intended positive outcomes.



This summary was authored by me (Brian Cugelman). I’d like to thank my co-author, Dr. Stibe; all those who contributed examples and feedback; the Persuasive 2016 conference, which accepted our paper; and Jesse Ship, who brought the editorial scalpel to this article.


Share your backfires

If you happen to identify any new types of backfires, or new examples of the ones we identified, please share them below. It would be great to get references with full names, URLs, and citations if possible. This would be an awesome help for our follow-up research. THANKS!!!



