Scammy slop or useful information?
It might be the news. It might be social media. It might be emails. Whatever it is, there’s a good chance you wish there was less of it bombarding your brain.
We’ve been here before. The biographer of René Descartes, Adrien Baillet, warned in 1685 that “the multitude of books which grows every day” would bring Europe into “a state as barbarous as that of the centuries that followed the fall of the Roman Empire”. We feel you, Adrien. But he was wrong. I mean, completely wrong.
And yet we can identify with what Baillet was worried about: being overwhelmed by alerts, videos, ads, and, yes, even books. It has already been a year since a post on the r/singularity subreddit claimed that “Amazon Is Being Flooded With Books Entirely Written by AI”. This January, Wired ran an article headlined “Scammy AI-Generated Book Rewrites Are Flooding Amazon”.
How on earth did our predecessors, who suffered from the same info-dilemma almost 340 years ago, manage not only to survive but to thrive? Did Baillet have a similar mindset to today’s AI doom-mongers? Or was his Cassandra-like warning born of a failure to grasp how the human brain adapts to new situations?
One roadmap that might help us find an exit from the traffic-clogged information superhighway (remember that?) is generative artificial intelligence. You might think that LLMs are the villains of this piece as they remorselessly churn out answers to queries, beautifully composed emails, and lazy student papers, all while using significant energy resources. But is it possible that chatbots and their AI ilk might actually help us navigate the torrent of digital doo-doo?
Silicon Valley tech bro culture is famous for the mantra of “move fast and break things”, which implies that innovation requires disruption, and short-term consequences be damned. I’m certainly no cheerleader for this toddler mentality as corporations like OpenAI propel new AI products into the public sphere without care for copyright or chaos. But if there’s one thing that gen AI accomplishes at superhuman scale, it’s structuring information. We can argue over whether LLMs are truly intelligent or actually creating things, but they definitely structure unimaginable quantities of data into a form that we puny humans can interpret (even if hallucinations still happen all the time).
The new skill set that 17th-century intellectuals needed to develop was figuring out how to structure information into useful books and then categorize those books in such a way as to make the information they contained discoverable and retrievable. In our era, this was precisely what Google did when its algorithms made keyword search possible. The pre-Google era of Geocities, AltaVista, and the Yahoo directory seems positively prehistoric now, yet it was only 25 years ago.
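To make that parallel concrete, here is a minimal sketch of the data structure keyword search is built on: an inverted index that maps each word to the documents containing it. The toy corpus below is made up for illustration, but the principle is what turns a pile of pages into something discoverable and retrievable.

```python
from collections import defaultdict

# A toy corpus standing in for books, web pages, or anything else worth finding.
documents = {
    "baillet_1685": "the multitude of books grows every day",
    "wired_2024": "scammy ai generated book rewrites are flooding amazon",
    "reddit_2023": "amazon is being flooded with books entirely written by ai",
}

# Inverted index: word -> set of document ids that contain the word.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return the ids of documents containing every word in the query (simple AND search)."""
    words = query.lower().split()
    results = index[words[0]].copy() if words else set()
    for word in words[1:]:
        results &= index[word]
    return results

print(search("books"))      # e.g. {'baillet_1685', 'reddit_2023'}
print(search("amazon ai"))  # e.g. {'wired_2024', 'reddit_2023'}
```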
We need a tech solution to a tech problem. The unfortunate reality is that Google might be the corporation that saves us yet again. Google’s new mantra is, “Let Google do the Googling for you,” with its generative AI. What this does to the web remains to be seen, but at least we might be shielded from the glut of crap.
Speaking of crap, some tech experts are proposing a new term for the mudslide of AI-generated material that threatens to make the internet unusable: slop. The idea is that until the term “spam” was coined, annoying scammy emails were seen by users as simply the cost of doing business in this miraculous world where envelopes, stamps, and mail carriers were no longer necessary. By labeling crap as spam, regular folk found it easier to push back, and email services found ways to flag and hide it from inboxes. The hope is that “slop” will work the same way.
I’m not sure whether slop will catch on. Junk mail existed (and still exists) before mail was electronic, whereas images of Jesus as a crustacean are a whole new world of weird. I have been saying in this newsletter and elsewhere that surrealism would be a key aesthetic of this decade, and that was before genetically modified prophets appeared on the scene.
Let Shrimp Jesus help us pray for salvation from information.
Skyanora from Scarlett
You might have already seen this on mainstream media due to the celebrity angle, but OpenAI withdrew one of the artificial voices from its new model GPT-4o (just call it “Omni”) this week after actor Scarlett Johansson threatened to sue the company for using a voice that sounded like hers, even though she had refused to do a deal with them. The voice, called Sky, was remarkably similar to Samantha, the AI assistant voiced by Johansson in the 2013 film Her, which starred Joaquin Phoenix. OpenAI CEO Sam Altman has received a ton of online flak for ignoring Johansson’s refusal, especially after he pre-confessed his evil plan in movie-villain style by tweeting the single word “her” right before Omni launched.
In a statement on the matter, Johansson said, “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.” Her union, SAG-AFTRA, also called for legislation to protect its members against unauthorized digital replication.
The OpenAI Sky debacle isn’t about moving fast and breaking things – it’s about moving fast and stealing labor. The current batch of generative AI products is built on the theft of IP (intellectual property), backed by the assumption that infinitely funded legal defense teams and mass adoption by consumers will shield the thieves from consequences.
With OpenAI showing blatant disregard for IP rights, disbanding its safety team, and looking set to power the generative AI behind Apple and Microsoft products, we might be entering a bizarre new corporate world where Google is actually the good guy.
Or… the US could act as quickly as the EU did on Tuesday by enacting the world’s first major law regulating AI. The AI Act lays down rules on artificial intelligence that become stricter as the risk increases, along with fines that can reach 7% of an offender’s annual global revenue. In a nutshell, European lawmakers have basically decided to “move fast and regulate things”. Putting public safety and creator protection first is surely the best path for everyone except the billionaires in this destabilizing new reality.
Baguette-scented stamps from France
What I love about this story is how it melds a basic human sensory experience with an old form of communication and a modern technology, none of which is digital. The French postal service La Poste has just launched a postage stamp celebrating the iconic baguette, featuring scratch-and-sniff ink that mimics the scent of a bakery.
Calling the world-renowned behemoth bread stick “the jewel of our culture” on its website, La Poste printed images of baguettes using fragrance-filled microcapsules to create a €1.96 letter stamp with an added olfactory offering. The French baguette was awarded UNESCO heritage status in 2022, which gives me hope that my local delicacy of poutine might one day achieve similar cultural recognition worldwide.
Total Recall from Satya
This week Microsoft announced that new Copilot+ PCs will ship with the company’s Recall AI built into Windows. As CEO Satya Nadella explained to The Wall Street Journal, these devices will continuously capture what is on your screen and be able to recall that information when prompted with natural language searches. In other words, you vaguely remember spending hours searching for vegan handbags or who built the pyramids, and now your computer uses what Nadella calls its “photographic memory” to run a semantic search over all your history and display a summary of things you had forgotten about.
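Microsoft hasn’t published Recall’s internals, so the snippet below is only a rough sketch of the general recipe behind this kind of “photographic memory”: capture snapshots, extract their text, and rank them against a natural-language query. The capture() and recall() functions here are hypothetical, and the bag-of-words embed() stands in for whatever local embedding model a real implementation would use.

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a bag-of-words count. A real system would run a local
    neural embedding model over the text extracted from each screen snapshot."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

snapshots = []  # (timestamp, text, vector) triples kept in a local store

def capture(timestamp, screen_text):
    """Store a snapshot's extracted text alongside its vector."""
    snapshots.append((timestamp, screen_text, embed(screen_text)))

def recall(query, top_k=3):
    """Return the stored snapshots most similar to a natural-language query."""
    q = embed(query)
    ranked = sorted(snapshots, key=lambda s: cosine(q, s[2]), reverse=True)
    return [(ts, text) for ts, text, _ in ranked[:top_k]]

capture("03:12", "vegan leather handbags price comparison")
capture("03:47", "who built the pyramids of giza")
print(recall("vegan handbags"))  # the 03:12 snapshot ranks first
```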
As mentioned in the intro to this newsletter, how information is structured via interfaces for regular people is the new battleground in tech, culture, and (of course) finance. Google became the giant it is today by neatly structuring the sprawling web of the early 2000s in a way that was useful when somebody wanted to find something online. The AI era has arrived at a time when the amount of data available is exponentially higher and the need for structure therefore even more critical. I’m interested to see how commercial pressures interact with this market demand. After all, Google went from being useful to annoying at best as it sold off SERP rankings to advertisers.
Of course, it is useful to be able to simply ask your phone or laptop to summarize web page visits on a particular subject, but as many online commentators have noted, the privacy risks are enormous, despite Nadella’s assurances that all searches will be performed and all data stored locally on the device. Even so, a lot of male Twitter/X commenters don’t want their laptop taking automatic snapshots of what they’re doing online at 3 a.m., for some reason. If they’re skilled Incognito mode users, they shouldn’t worry – Recall can be turned off. But the default for AI tech appears to be continuous self-surveillance, and it’s hard to imagine that Apple won’t release something similar this summer.
Sara Wilson – brand, content & community strategist
Sara is an expert in the online behavior of younger audiences, so I was extremely interested to find out how digital communities have evolved over the last few years and how she categorizes TikTok. (Okay, that’s two questions in a 1Qi, but I’m the boss so I can bend the rules!)
Q. You coined the phrase “digital campfires” to refer to private social media spaces in a Harvard Business Review piece that was published the month before the pandemic hit. The beforetimes now feel almost like ancient history, so I’m curious to know what has changed for digital communities and their users in the four years since then?
A. When I wrote The Era of Antisocial Social Media in 2020, I wanted to name a trend I had observed wherein younger audiences were moving away from open social platforms like Facebook and Instagram to smaller, more intimate digital platforms and spaces. I called these platforms and spaces “digital campfires”. I wrote the piece to help marketers understand how to show up in those spaces in authentic ways, because where audiences go, brands need to know how to go, too.
In the four years that have passed since, in some ways so much has changed, and in others, nothing has changed. In the immediate months following the publication of that piece, digital campfire platforms like Discord, Roblox, and Snapchat exploded in popularity. Others became the stuff of legend (remember Clubhouse?!)
Of course, that explosion tracked with the broader momentum all digital platforms were experiencing at the time which cooled off as the pandemic wound down. Now that we're a few years out, what I see is that online communities have come to play a much more central role in the way we are experiencing life, both online and off. Micro-communities and niche cultures are on the rise as a whole, algorithm-free spaces are driving business, cultural, and societal shifts, and shared cultural experiences are declining. These are just three data points I observe that suggest what was once a nascent trend has metastasized into a dominant paradigm of consumption.
Q. Do you see TikTok as social media in any way, or simply an entertainment and social commerce platform?
A. I hesitate to get too preoccupied with definitions or semantics because TikTok has completely redefined how we consume and create online. The way I see it, TikTok is a social media platform, the primary currency of which is entertaining creator-led content. Entertaining creator content is what connects people to each other and to myriad communities based on users' interests, passions, values, and beliefs, which are often hyper-niche.
What TikTok has done that is so novel, among other things, is making commerce a form of entertainment as well. In many ways, I see TikTok as the mother of all digital campfire platforms because it's an entry point for zillions of online communities, each with its own set of creators.
Sara Wilson is a journalist-turned-marketer who has worked with YouTube, Nike, Bumble, The New York Times, and others to find, engage, and grow obsessive digital communities through her consultancy SW Projects. Sara frequently writes and speaks on the subject of digital marketing trends. She coined the term “digital campfires” in the Harvard Business Review to describe the types of spaces where young audiences gather online today, co-authors a report called The Brand Yearbook featuring the handful of brands that captured attention, mind-share, and wallets among Gen Z audiences in the previous year, and leads talks and workshops on social innovation and Gen Z consumption trends for global companies such as Microsoft and McKinsey.
She has been featured as an authority on digital strategy in The Washington Post, AdWeek, Forbes, and Fast Company, among others, and on multiple podcasts including Creator Upload, Girlboss Radio, AdWeek’s Brave Commerce, and Stylus’ Future Thinking. Previously, Sara ran lifestyle partnerships at Facebook and Instagram, was an editor at The Huffington Post and Los Angeles magazine, and wrote for several leading publications, including The Economist, People magazine, and The Independent. Find her on LinkedIn or at swprojects.co.
EV stealth collides with pedestrian health
Trying to figure out whether what we see and read is real isn’t the only way humans are having to adjust to new technologies. If ever there was a good example of “every silver lining has a cloud”, it’s a study published in the British Medical Journal this week showing that urban UK pedestrians are three times more likely to be injured by an electric or hybrid car than by one with an internal combustion engine (ICE).
The study calculated casualty rates per million miles driven, since there are far fewer EVs on Britain’s roads than ICE cars. But increased adoption of electric and hybrid vehicles could mean both lower emissions and more injuries. This is a perfect example of how social habits (you can’t rely on your ears anymore when crossing the street) and public safety (better climate impact, worse pedestrian health) need to be analyzed and publicized independently of whether a technology is deemed “good” or “bad”.
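For anyone curious why the per-mile normalization matters: raw casualty counts would make the far larger ICE fleet look more dangerous simply because it drives more miles. The numbers below are purely hypothetical (the real figures are in the study), but they show how dividing by exposure flips the picture.

```python
# Hypothetical figures for illustration only; see the BMJ study for the real data.
ice_casualties, ice_miles = 900, 1_000_000_000   # many casualties, but vastly more miles driven
ev_casualties, ev_miles = 90, 33_000_000         # fewer casualties over far fewer miles

ice_rate = ice_casualties / (ice_miles / 1_000_000)  # casualties per million miles
ev_rate = ev_casualties / (ev_miles / 1_000_000)

print(f"ICE: {ice_rate:.2f} casualties per million miles")       # 0.90
print(f"EV/hybrid: {ev_rate:.2f} casualties per million miles")  # ~2.73, roughly three times higher
```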
These issues are worth keeping in mind as we collectively debate or panic over the impact of AI. The bottom line: whatever benefits might accrue from tech, there are bound to be unintended consequences that should lead to caution when implementing products and services.
Is Transhumanism a New Religion?
In this recent interview with Paris Marx for his Tech Won’t Save Us podcast, American writer Meghan O’Gieblyn explains how the popular conception of transhumanism may have its roots in religious belief, despite the professed atheism of its supporters. One of the movement’s key goals is to develop technology that would allow anybody to “upload” their consciousness to a device or cloud and, in doing so, free themselves from the confines (and mortality) of their human biology. Raised in a strict religious environment, O’Gieblyn (who also writes the advice column for Wired magazine) is well placed to observe that, for transhumanists, “it’s almost like information has become a metaphor for the soul.” Unfortunately for atheists who are not okay with being dead one day, tech won’t save them, as the podcast’s name implies.
Dreamy donuts
I think I know what this Midjourney user was going for with his detailed prompt requiring “long flexible cylindrical pink guns that shoot glaze”, but maybe his vision was a bit too specific (Pantene? Okay…) for the current capabilities of image AI generators.
Please let me know what you think about Discomfort Zone by commenting directly in the online version of the newsletter, or by emailing me at john@johnbdutton.com.
And why not connect with me on LinkedIn if you haven’t already?