Black Mirror’s Joan is Awful: Could Joan is Awful Happen to You?
- theghoulsnextdoor
- Aug 5
- 17 min read
Black Mirror’s Joan is Awful is a cautionary tale of our dangerous future in the shadow of hyper-surveillance, deep-fakes, and AI ownership. Joan is Awful explores the horrors of accepting shady terms and conditions.
The Ghouls unpack the ways that AI and Data Mining are horrifying even now.
Media from this week's episode:
Black Mirror - Joan is Awful (2023): An average woman is stunned to discover a global streaming platform has launched a prestige TV drama adaptation of her life, in which she is portrayed by Hollywood A-lister Salma Hayek.
Directed by: Ally Pankiw
Black Mirror’s Joan is Awful: The Danger of Accepting Terms and Conditions
by gabe castro
RED: Quotes, someone else's words.
Synopsis
Joan is Awful is a cautionary tale of our dangerous future in the shadow of hyper-surveillance, deep-fakes, and AI ownership. Opening the 6th season of the sci-fi horror show Black Mirror, Joan is Awful explores the horrors of accepting shady terms and conditions. Black Mirror is known for taking our current tech and distorting it into something horrific and challenging. This episode demonstrates a very real fear surrounding ownership of our image. With AI growing more powerful every day at recreating human likeness, and more frequently being used to bring actors back from the dead, how can we maintain control of our image?
Joan Tait is an ordinary woman who, after a particularly challenging day, finds that her life has been turned into a hit Streamberry series called “Joan is Awful,” starring Salma Hayek as an overdramatized, villainous version of herself. All her intimate moments, revealing her as a flawed and incredibly human human, are on display. In the pilot episode alone, we are introduced to Joan as she fires an employee while admitting to shady environmental dealings in the company, has a sad therapy session, and cheats on her fiancé. Even her subsequent crashout over the show is on the show as the program uses Joan’s real data (listening through her phone and any tech available) to recreate the day’s events.
She’s fired from her job (for breaking her NDA - though that was Salma Hayek, not Joan!), broken up with, and utterly alone as everyone watches her worst moments play out on their home screens. She tries to fight back and sue Streamberry, but she has no ground to stand on: they own her, since she blindly accepted the platform’s terms and conditions. (Fun fact: the show launched a “make your own Joan Is Awful” poster application that required people to check a terms-and-conditions box, which then gave the company ownership of your face, so.) She considers suing Salma Hayek, but it’s revealed that it isn’t even her on the screen - it’s an AI-generated version. Ms. Hayek had also blindly signed away her life.
Ever-clever Joan, in an act of desperation, commits an outrageous public stunt aimed at upsetting Salma Hayek, since it is Hayek’s likeness that audiences see performing it. After Salma Hayek learns she too has no power in this situation, the two team up and try to take down the entire operation.
During their attempt, they learn there are many versions of the show, each making some other person look awful. Joan and Salma break in to destroy the equipment and discover that the Joan we’ve been following isn’t Joan Prime at all but a digital version of actress Annie Murphy portraying the real Joan - and that she has now teamed up with the real Annie Murphy to destroy everything. The deception goes several layers deep, all stemming from that original Joan. OG Joan destroys the machine and takes back her life in an unexpectedly happy ending for Black Mirror.
Accept the Terms and Conditions, Now They Own Your Face
In today’s world, accepting the terms and conditions can have deadly and life-altering consequences. While Joan is Awful is comical and over-the-top in representing the lengths greedy studios and companies will go to own our lives, we have seen this abuse of power in a variety of ways. Just a few weeks after this episode aired (June 15, 2023), SAG-AFTRA went on strike (July 2023). Actors and writers formed a dual strike, the likes of which Hollywood hadn’t seen since the 1960s. Among the many issues driving the labor dispute, at the top were disagreements over “intellectual property rights, artistic integrity, the lack of residuals from streaming services, and new developments within artificial intelligence and synthetic media technology (media produced by generative AI, colloquially known as deepfakes).” So it’s eerily prescient that Joan was fighting Streamberry, Black Mirror’s stand-in for its own home network, Netflix, as it aggressively abused both the actors and the consumers of its media.
While Joan’s horror story revolved around her life being stolen and manipulated for views, the horror for Salma Hayek (and by extension the “real” Annie Murphy) was having her artistic and intellectual property stolen, robbing her not only of well-earned residuals but of her entire personhood. The real villain in each instance (other than Big TV) is the fine print. We’ve already seen just how far companies will take their signed documents, going as far as using them to wipe away wrongful death. Such was the case for the family of Kanokporn Tangsuan, who died of an allergic reaction after eating at an Irish pub in Disney Springs. The family had reported her peanut and dairy allergies, but they were ignored, leading to her tragic death. Her husband, Jeffrey Piccolo, sought to sue the entertainment giant Disney, but the company disputed the claim because Piccolo had agreed to terms and conditions when signing up for a one-month free trial of its streaming service, Disney+. As detailed in the AP News article Why Disney's wrongful death suit serves as a warning to consumers, the company wrote in a motion seeking to have the case dismissed: “The first page of the Subscriber Agreement states, in all capital letters, that ‘any dispute between You and Us, Except for Small Claims, is subject to a class action waiver and must be resolved by individual binding arbitration’.”
While Disney did backtrack, ultimately not using that defense in the suit, it still sets a bad precedent. In those moments where Joan sits across from a lawyer demanding retribution, insisting that surely there is someone she can hold accountable, we see this reality in sharp focus. No one is coming to save us; we’ve all dug our own graves. How many of us truly read the terms and conditions? And further, how many of us truly understand what we’re agreeing to, regardless? Not all of us are skilled at lawyer-speak, and these companies know that. This episode was rare for Black Mirror in that the technology didn’t threaten life in the traditional sense (though arguably all the copies in the system are ‘dead’); the horror is instead linked to our identities being stolen.
We’ve talked previously on the Ghouls (most recently in our Alien: Romulus episode) about production companies bringing actors back from the dead to star in their movies. In the newest Alien film, the studio “resurrected” actor Ian Holm to appear as a synthetic human. By utilizing Ian Holm’s likeness, the film unintentionally echoes the very abuses it condemns: Holm, much like the character he played, has been immortalized and puppeted - his image used to serve a larger, more powerful company. The same summer this episode premiered, a production house was trying to cast the long-dead actor James Dean in a new film, Back to Eden. The hope was to have AI-generated deepfake technology walk, talk, and interact as Dean would with the other actors in the film. Actor Susan Sarandon spoke out against the new technology and its threat to her and other actors’ livelihoods during the strike, saying it could make her “say and do things I have no choice about” - much the way we see Salma Hayek argue in the show. A BBC article, How AI is resurrecting dead actors, drops a few names that have already suffered the reanimation horrors: “Carrie Fisher, Harold Ramis and Paul Walker are just a few notable celebrities who reprised iconic film roles posthumously. Brazilian singer Elis Regina was also recently resurrected for a car advert, where she was shown duetting with her daughter Maria Rita.”
This raises a variety of questions about ownership and identity. Who owns the rights to someone’s face when they’re no longer with us? What about their voice and brand? How can anyone control their image when companies can manipulate it into performing in projects they will never have a say over? Actors, influencers, and creators whose faces and likenesses are their livelihood have every right to fear a future where they are essentially owned - and worse, owned indefinitely. In an article on Consequence Film, Hollywood studios proposed using AI to scan and own extras' likenesses forever, they report on the fears of actors during the strike: “During a press conference, SAG-AFTRA National Executive Director and Chief Negotiator Duncan Crabtree-Ireland said that the AMPTP made a so-called “groundbreaking” AI proposal that would allow the likenesses of film and television background performers to be used in perpetuity.
‘They propose that our background performers should be able to be scanned, get paid for one day’s pay, and their company should own that scan of their image, their likeness and should be able to use it for the rest of eternity in any project they want with no consent and no compensation,’ he said. ‘So if you think that’s a groundbreaking proposal, I suggest you think again.’”
Even looking past the obvious worries of their bodies being abused or manipulated for views, there’s also a case to be made against the deceased being used in advertisements. (Even without the AI ickiness, I’m against using dead identities to sell things. Ex: Casely had a whole line of phone cases based around artist Frida Kahlo, who, in her life, was strongly anti-capitalist and would have hated that.) It’s a dangerous landscape right now, where even our editing software threatens to steal our intellectual property, reusing our likenesses for advertising now, while we’re alive, and forever after. That was the case with CapCut’s recent updates to its terms of service. (I had to cancel our subscription to avoid our work being stolen.) CapCut rolled out new terms that included the right to copy, share, and modify your content for marketing campaigns and promotional material; the right to use your name, image, and voice; and the right to give other people or companies permission to use them too. Further, this wasn’t a temporary clause - the rights are described in the terms as perpetual, worldwide, royalty-free, and sublicensable.
While the Ghouls aren’t famous enough to have our faces stolen, it’s not a far-off possibility. Deepfakes exist and grow more convincing every day. Hell, AI makes it so easy to manipulate images that it has a chokehold on entire generations. I recently encountered AI-altered voices and faces on Instagram - and while it can be framed as an accessibility feature, it unnerved me. My phone is set to Spanish, so Reels started automatically converting to Spanish - not dubbed, mind you! These people’s mouths moved correctly to the words, and it was in their own voices. It’s a scary time to be a content creator, whether you’re one of us lowly influencer/internet stars or a top-paid actor.
We talked at length about the impact of deepfakes on women and creators specifically in our CAM episode. In that episode, Kat shares a lot of helpful tips for protecting your work - hopefully they still apply (the episode came out two years ago!). So there is no end to the horrors we can both imagine and live through in a world where we are expected to sign our lives and likenesses away any time we want to watch the newest media darling.
Two Last Thoughts, Sorry!
While the theft of Joan’s life and Salma’s likeness is the overarching story being told here, there’s another one I failed to notice on my first watch of the episode. Before Joan watches Joan is Awful, she goes to the therapist and waxes on about her sad life. She expresses a feeling that was incredibly familiar: she doesn’t feel like the main character of her life. At some point she had simply woken up and been here, never having consciously made the decisions that led her to this life - her job, her fiancé, her overall bleak existence. Meta-speaking, this is entirely true, as Annie Murphy’s Joan didn’t make any of those decisions. She even voices this thought at the end, while destroying the computer: the decision had already been made, and she was just living it out. But Prime Joan also felt this sentiment, which is why we see it unfold for Annie-Joan in the first place. So while there is a literal truth to this sentiment, what Prime Joan is feeling in that moment is dissociation - a very real symptom of which is feeling like you’re not the main character of your life, that you’re watching your own life as if it’s on television. As someone who lived most of their youth in such a state, I identified heavily with all iterations of Joan in that moment. It is a truly isolating and depressing place to exist, not feeling like an active participant in your own life. It added to the happy ending to see Prime Joan at the end, in her real therapy session, finally feeling like she wasn’t simply playing a part but was truly living her life.
Lastly, there’s an exposition moment in the episode where the head of Streamberry discloses to a reporter that more “Are Awful” shows are set to premiere. The reporter asks: why “Awful” and not something nicer? CEO Mona Javadi admits they tried a series of “Are Awesome” shows with test audiences and found that viewers most enjoyed watching people at their worst. While I will admit that reality TV and most TV support this argument, I still hate it as an excuse. I just don’t buy it. I love messy TV, don’t get me wrong, but I also love sappy and sweet programs (Love on the Spectrum is my palate cleanser after any trashy reality love show). I’ve just never been a fan of the narrative that people only want to see others fail, that we are all waiting to watch a disaster. While we certainly love to watch a trainwreck, there’s more to us as viewers than that. We love a happy ending.
Black Mirror’s Joan is Awful: Our Flaws Fuel Marketing Trends
by Kat Kushin
RED: Quotes, someone else's words.
Surveillance and Data Mining
We’ve all had a conversation with friends about something and later seen ads about it on our computers or phones. That’s because our data is being tracked CONSTANTLY. Even when you go out of your way to disable tracking, there are ways it is still being exploited without consent. For some tech, even just using it makes you vulnerable to this kind of privacy invasion, as there are settings you cannot turn off. One of the biggest issues is that this level of surveillance is not being regulated at the level it needs to be, and many companies simply do it until they get caught and subsequently sued…looking at you, Meta. The issue, largely, is that data mining is encouraged under capitalism because our information, attention, and identities hold value.
The ways this has been exploited aren’t yet at the level of Joan is Awful - we don’t have camera and drone coverage at the rate shown in the show - but many tech companies have already been sued for overreach in their data collection. Apple was sued over Siri listening in on our conversations and, in a settlement reported as of May 2025, agreed to pay 95 million dollars to resolve the 2021 lawsuit. A settlement was also reached in a lawsuit against Meta CEO Mark Zuckerberg stemming from the privacy scandal involving the political consulting firm Cambridge Analytica, as reported on July 17th, 2025. Microsoft has been sued by authors over the use of their books in AI training, as reported on June 25th, 2025. So this is all happening presently. As we delve into AI and AI training, countless companies have faced extensive backlash over their use of AI and the risks of art and data theft posed by the models being trained. Adobe, Microsoft, Meta, and Apple have all heavily integrated AI into their products without the regulation necessary to protect their user bases. Meta’s and Adobe’s AI models pull directly from artists, graphic designers, and the like. Microsoft’s and Adobe’s AI use user data, screenshots of writing, and other works to train their models. Apple has similarly continued to embed AI ever more deeply into its systems. It’s becoming harder and harder to avoid, and harder and harder to disable, the features that track our user data.
As gabe discusses in their section, AI is being used to exploit literal identities through AI extras, but it extends even further than that. We could very well see an AI Salma Hayek movie in our lifetime, or at least one built on some other actor’s likeness. If you want to hear us unpack tech companies misusing user data through data mining, I recommend our Possessor episode; and for surveillance stress, check out our surveillance episode, where we discuss Snowden.
Deepfakes and AI
Deepfake technology has expanded through the use of AI. Like anything, there are good ways to use this tech that increase accessibility, and extremely harmful ways to use it to defame a person, exploit their image, and cause harm. We see one of the positive uses in a malaria ad where David Beckham allowed his voice and face to be altered with AI to speak different languages and reach more people, discussed in a WIPO Magazine article titled Artificial intelligence: deepfakes in the entertainment industry. But even the positive use cases raise ethical issues. An art installation that lets users take a “surreal” selfie with Salvador Dalí is fun, but it’s questionable whether Dalí would have wanted his image manipulated this way - he could never have imagined technology being used like this.
The much more concerning side is what is now developing, which mimics what we see in Joan is Awful, just with less automation. Deepfake actors are being hired to do all the work of acting, but under the wrapper of more famous actors - so Kayla Lorette wrapped in an Annie Murphy wrapped in a Salma Hayek suit, and so on. The process is described somewhat upsettingly: “Commercial applications of deepfakes currently include both hiring the underlying ‘deepfake actors,’ as well as individuals whose likeness is used as a ‘wrapper’ (i.e., the visage or likeness portrayed in the content) for the underlying performance. Where the so-called wrapper is a famous personality, this may save the underlying talent hours of time they would otherwise need to spend on set; that burden can be shifted to the deepfake actor instead. Additionally, such technology allows influencers to create personalized messages for hundreds or thousands of individuals without the need to actually record each message.” In essence, the company saves money, but the actor is robbed.
This technology was recently used to change Snoop Dogg’s lyrics for an ad he had previously participated in, using Synthesia AI (Snoop Dogg | Synthesia.io). The website brags about the money-saving potential, and it seems that in this instance Snoop Dogg permitted the edit. But before this technology existed, they would simply have paid him again for his time to re-record the video, along with all the other actors, performers, and tech crew on the commercial. Now a quick edit saves them a lot of time and money - but in a world where money is life, that’s ethically questionable.
Our Flaws Fuel Marketing Trends
Even more upsetting, in case you haven’t heard of this: Meta has been flagged for manipulating user mental health to feed ad revenue. A TechCrunch article titled Meta whistleblower Sarah Wynn-Williams says company targeted ads at teens based on their ‘emotional state’ reports that, testifying before senators including Marsha Blackburn, Wynn-Williams claimed that Meta (which was then known as Facebook) had targeted 13- to 17-year-olds with ads when they were feeling down or depressed. “It could identify when they were feeling worthless or helpless or like a failure, and [Meta] would take that information and share it with advertisers,” Wynn-Williams told the senators on the subcommittee for crime and terrorism within the Judiciary Committee. “Advertisers understand that when people don’t feel good about themselves, it’s often a good time to pitch a product — people are more likely to buy something.” She continued that the company informed advertisers when teens were depressed so ads could be delivered at their most vulnerable moments. One example provided: a teen girl deletes a selfie, and an advertiser is cued in to push beauty products to that child because she may feel bad about her appearance. Similarly, weight-loss products were pushed to teens who expressed concerns about body confidence.
On the use of this data, Wynn-Williams claimed Meta pitched 13- to 17-year-olds as a highly valuable advertising demographic precisely because of their vulnerability. She raised the ethical problems with this to Meta executives, arguing that a trillion-dollar company did not need to exploit a vulnerable population in this way. The TechCrunch article suggests that, given the manipulation of teen emotional states, it is reasonable to assume the same was done to adults to fuel advertisements. More concerning, it states that Facebook was researching young mothers and their emotional states for undisclosed reasons.
Showcasing a bit of their intent, it was stated in the hearing that Meta executives would not allow their own children to use the products they built. “I would say, ‘Oh has your teen used the new product we’re about to launch?,’” Wynn-Williams said. “And they’re like, ‘My teenagers are not allowed on Facebook. I don’t have my teenager on Instagram.’ These executives … they know. They know the harm this product does. They don’t allow their own teenagers to use the products that Meta develops. The hypocrisy is at every level.” Meta has since denied these claims, though considering the nature of the lawsuits it has settled in recent years, we’ll leave it to you, listeners, to decide what you believe to be true.
Could “Joan is Awful” happen to you?
I’ll close my section with some final thoughts on whether Joan is Awful could happen to you. An article of a similar title, Could “Joan Is Awful” Happen To Me? An Australian Legal Perspective On The Dangers of Not Reading The Terms and Conditions, unpacks just that. The short answer is…yes. The long answer is that the show sensationalizes a bit, and there are things you can do to fight this kind of infringement. The article specifically highlights terms that are contrary to public policy: the level of surveillance in Joan is Awful could plausibly be argued to violate public policy even if you’ve signed your personal rights away, since it would be against the best interests of society as a whole for everyone to have a “_ is awful” story aired about them - the surveillance involved would breach countless NDAs, governmental secrets, and so on. The article also helpfully gives next steps not highlighted in the show, namely a cease and desist letter, recommended as the first response to something like this. Even if you would not win a case, many people could likely seek compensation from the company, or at least delay it in distributing the show if enough lawsuits piled up. The site lists cease and desist letter templates, such as a standard letter and a letter alleging defamation.
The only catch: if something like this were to happen, the company would likely have prepared a defense already - an act this bold is not one a company of that size would attempt unprepared. The ultimate moral of the story: read the terms and conditions, if you can, for as much as you can.








