Human intelligence is still improving
OpenAI, the company behind ChatGPT – still the best-known generative AI tool – has a vested interest in improving its products. But there are signs that LLMs are approaching a performance ceiling, despite claims that GPT-5 will perform at “PhD levels”. (Sidebar: OpenAI CEO Sam Altman often talks about achieving AGI, or “artificial general intelligence”, but a person with a PhD is by definition highly specialized, and that’s the opposite of general intelligence.)
Why would there be a limit on LLM performance? In a nutshell, the quantity of training data available for the models to ingest is shrinking fast. Once you’ve scooped up (in other words, stolen) the entire internet, all you can do to improve performance is to tweak how the data is processed. In addition, the compute power and the energy required to run AI platforms are becoming prohibitively expensive.
These diminishing returns mean that the performance curves of the various LLMs are tapering off. Sure, give ChatGPT or Perplexity a narrowly defined task that would take a person a great deal of time to complete, and it might beat the best available human in a given organization. If your goals are to save time and produce an above-average output, an LLM might do the job just fine: a tirelessly perky, helpful chatbot sidekick performing at 80-90% of expert proficiency.
But if you need an output in the 90-100% range… if “good enough” isn’t good enough… you’ll need a genuine expert.
This all makes sense, right? Well, the reality is that OpenAI needs to maintain the hype cycle if it wants businesses to invest millions in its AI products. Right now, this approach appears to be working. However, OpenAI is also carrying out in-house studies that are presented as pseudo-academic (i.e. non-peer-reviewed) papers to give the illusion that the hype is based on science. These studies provide a sheen of objectivity to hide extravagant, unsupported claims.
Here’s a great example: in the conclusion of a paper published last week, “LLM Critics Help Catch LLM Bugs”, OpenAI boldly stated, “From this point on the intelligence of LLMs and LLM critics will only continue to improve. Human intelligence will not.”
The vast, vast majority of people who read that paper will, like me, only skim it. They will probably also read the opening abstract and the conclusion. The latter is where the outrageous hype is neatly slotted in. The trouble is, the two ideas presented in the claim above – that LLMs will get smarter but people won’t – are based on… nothing.
First of all, the diminishing returns outlined above, together with natural limits on processing power and energy, preclude gen AI models from improving indefinitely. But secondly – and more importantly – there’s no reason whatsoever to believe that human intelligence will not improve. That claim is based on a fundamental misconception of human intelligence: that it is centralized in a single brain. In truth, our intelligence is distributed.
From James Surowiecki’s 2004 book The Wisdom of Crowds to a range of scientific studies, the idea that the human race’s superpower is its collective, socially organized intelligence has been accepted for some time. We are a social species that evolved incredibly complex brains and languages to collaborate more successfully than other creatures. This collective intelligence is the reason we now – for better or worse – rule the world. No single person’s super smart spaceship skills propelled us to the moon. Even a unique genius like Isaac Newton knew he stood “on the shoulders of giants”. Intelligence is a team sport.
Our ability to share, compare, and process information is what constitutes our intelligence, not an IQ test or the ability to win at chess. The simple fact is that diverse groups of people perform better than individual experts (or even groups of experts).
Ironically, the very existence of LLMs provides proof that human intelligence is still improving. They are tools that leverage the collective intelligence of the past to fuel the collective intelligence of the future. No technology appears like magic. Aliens didn’t show us how to make computers. Gods didn’t teach us how to tame fire. Everything in human culture, including artificial intelligence, is an expression, a crystallization, and a commodification, of human intelligence.
This is why I’m a pragmatist about AI, not an optimist or a pessimist. We will find ways to use this technology; some good, some bad. But I am a pessimist about the influence of giant corporations and venture capitalists on the development of AI. Beware of the hype. And believe in the human.
Gen AI video ads for major brands have arrived
A slightly different format for the Discomfort Zone news items this week: one big story in two parts, plus some snippets!
I don’t usually write about specific advertisements but I’ll make an exception for these ads from retailer Toys “R” Us and tech firm Motorola. Why? Because they were made using generative AI video tools.
Toys “R” Us and Motorola are two brands whose corporate owners flirted with bankruptcy this century. Is it a coincidence that both have released AI-generated ads in the last two weeks?
Toying with the brand
Toys “R” Us debuted what it calls the “first ever brand film created with Sora” at the Cannes Lions Festival in June. Sora is OpenAI’s video generation tool, which is still not available to the general public. There’s good storytelling (clearly human-written) in the Toys “R” Us video, but the aesthetic still veers into the uncanny valley, which might not be what a toy store should be aiming for.
Sora is technically impressive. However, as Walter Woodman of the Shy Kids production company made clear in my interview with him last year, following the release of the Air Head video made using Sora, creating shots is one thing; making a film that works in its entirety requires a lot of human creativity and skill. From strategists to scriptwriters, and from musicians to actors, making ads involves so much more than entering a prompt or ten into an AI tool.
Nik Kleverov, CCO at Californian creative agency/production company Native Foreign, which made the video, told CNN, “Everything you see was created with text but some shots came together quicker than others; some took more iterations. The blocking, the way the character looks, what they’re wearing, the emotion, the background – it has to be a perfect dance. Sometimes you would create something that was almost right and other times not so right.”
The use of AI in creative production has its advantages and disadvantages. Advertisers typically demand perfection when investing in TV commercials and right now shots made with Sora have a lot of inconsistencies and rough edges. Is gen AI a tool or a toy? For Toys “R” Us, it doesn’t seem to matter.
Toys “R” Us Global CMO Kim Miller Olko was quoted as saying, “Our brand embraces innovation and the emotional appeal of Toys 'R' Us to connect with consumers in unexpected ways." Whether the unexpected is appreciated by fans of Geoffrey the Giraffe remains to be seen.
Motorola steps onto the AI catwalk
The formerly huge brand Motorola is combining high fashion with gen AI video and audio in an effort to catapult itself into relevance among chic consumers, launching a new ad campaign called “Styled by Moto” for its Razr folding smartphones.
The 30-second ad reportedly took months to create using a variety of image and video generation AI tools, including Sora, Adobe Firefly, Midjourney, Krea, Magnific, and Luma, to subtly reproduce the Motorola logo and brand colors in clothing items worn by AI models.
The ad looks good and the concept is perfectly serviceable, but Motorola has opened the door to some legal issues by using gen AI. First of all, the jingle featured in the ad was created on Udio, one of the two music generation tools sued last week by the Recording Industry Association of America (RIAA). Secondly, if any real-life human fashion models look remotely like the AI-generated models in the ad, expect lawyers to start smiling and rubbing their hands together.
I love a good “big idea” brand TV commercial, but by embracing gen AI in the production process both Toys “R” Us and Motorola might discover that the upside (saving money on talent) is outweighed by the downside (imperfect production and legal troubles). I’d love to hear your thoughts!
More AI takeover stuff
While some people in entertainment and marketing worry about their career prospects, others are cashing in on the AI boom.
The company that manages the IP of four dead Hollywood legends, Judy Garland, James Dean, Burt Reynolds, and Sir Laurence Olivier, has sold the rights to audio AI company ElevenLabs to use the stars’ voices to read in-app texts.
Legendary (and still very much alive) NBC sportscaster Al Michaels has sold his soul/voice to the corporation he used to work for so that fans of Olympic events can hear his AI-generated voice provide “Your Daily Olympic Recap on Peacock”.
Finally, even if you aren’t legendary you may see your work as an influencer disappear into the AI black hole. TikTok has introduced avatar influencers in multiple languages as it rebrands its Creative Assistant as Symphony Assistant.
Chi Odogwu, ghostwriter and consultant
It’s fair to say that writers like me have a love-hate relationship with AI tools: helpful for research and copy editing, but potentially fatal to our careers. Chi Odogwu is a ghostwriter who is also a big proponent of the smart use of LLMs, so I was curious to know how he strikes a balance between adoption and terror when it comes to gen AI.
Q. As a professional ghostwriter who also leverages AI, do you see your services as under threat from increasingly capable LLMs such as GPT-4o?
A. Many people imagine a ghostwriter as someone who writes on behalf of a client and then gets paid a considerable amount of money to allow the client to put their name on the piece. I see myself more as a co-writer and thinking partner. My approach is best described by the phrase I often tell my clients, "Your words, my hands."
A great ghostwriter serves as a catalyst for their clients' ideas and as a collaborator in shaping their vision. You need to ask the right questions, listen actively, and think critically about verbal and nonverbal communication, such as body language, tone, facial expressions, and mannerisms conveyed by the client. These nuanced details add critical elements that make the story compelling and authentically capture the client's voice.
Today, no large language models (LLMs) can offer the tangible and intangible benefits a human ghostwriter offers. LLMs like ChatGPT-4 make writers' lives easier by speeding up the research process and performing analytical tasks like data analysis, text summarization, and content categorization. But at this stage, they are glorified word calculators that excel at predicting the next word in a sentence.
However, even as LLMs evolve, the human element will remain crucial in writing. Clients will need a human ghostwriter to strategize on a piece's direction and critical themes. Clients will also need help with editorial discretion on what to include or leave out to make the strongest impact. None of these benefits can be provided by an AI at this time.
In addition, ghostwriters like me gut-check a piece of writing to ensure it resonates with the target audience by drawing on their life experiences, emotional intelligence, and understanding of human nature. I also conduct in-depth interviews with clients and ask for feedback and guidance to ensure the writing process is collaborative and inclusive.
My top AI tools are OpenAI's Playground and ChatGPT-4. I use them to generate rough drafts from transcripts, provide inspiration when I'm stuck, and suggest different ways to structure an argument or story. Working from an AI-generated first draft helps me work faster, streamlines my workflow, and allows me to qualitatively and quantitatively analyze every piece of writing to ensure the proper context is preserved.
I don't believe advanced LLM systems like ChatGPT-4 threaten my services. Over time, there will be even greater demand for human content as people crave more authenticity and creativity in a world saturated with AI-generated content.
Just as AutoCAD didn't replace architects and Excel didn't replace financial analysts, there's a good chance professionals in my field will thrive in the coming years. However, these could also be famous last words. The only certainty is that nobody knows anything for sure. We have to buckle up, go along for the ride, and hope everything works out.
Chi Odogwu is an AI-powered ghostwriter and consultant who specializes in writing thought leadership content on LinkedIn and in business publications. He helps founders attract clients and drive revenue growth by using story-driven content that strategically positions them as industry leaders and experts in their respective fields.
His articles have been published in Entrepreneur, Bankrate, the NY Post, Fox Business, Forbes, NextAdvisor, and CNET. He has also ghostwritten for clients on platforms including LinkedIn, CoinDesk, CoinTelegraph, Insider, and more.
In addition, Chi hosts the long-running Bulletproof Entrepreneur Podcast, where he interviews experts in online business, finance, AI, crypto, and emerging tech. His interviews and conversations uncover valuable insights on building future-proof businesses in the age of AI.
Randomness and creativity
Mark Rank is a professor at Washington University in St. Louis. His latest book, The Random Factor: How Chance and Luck Profoundly Shape Our Lives and the World Around Us, makes the unsettling claim that randomness has more of an impact on our lives than predictability and individual actions.
In this article, Rank explores the key role that random chance played in the surrealist movement of a century ago. Quite simply, randomness is a fundamental feature of creativity. The parlor game known in the original French as cadavre exquis (exquisite corpse) was invented by the surrealists and relies on each player adding words or images to the unseen input of previous players. The result is a unique creation that no single person is ever likely to produce.
Interestingly, generative AI performs the opposite activity to an exquisite corpse. Text generation is based on the most likely next token in a string, not the most random. As noted in the intro above, human intelligence is collective, and better outcomes are achieved when a randomly diverse group is assembled to solve a problem. What the surrealists knew was that creation can also be collective and random.
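The contrast can be sketched in a few lines of Python. The probabilities below are made up for illustration (a real LLM computes a distribution like this over its whole vocabulary at every step), but the two picking strategies capture the difference between an LLM's default behavior and the surrealists' game:

```python
import random

# Toy next-token distribution for the prompt "The cat sat on the ..."
# (invented probabilities for illustration, not from a real model)
next_token_probs = {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "moon": 0.05}

def greedy_pick(probs):
    """What an LLM does by default: choose the most likely next token."""
    return max(probs, key=probs.get)

def exquisite_corpse_pick(probs):
    """The surrealists' move: pick at random, ignoring likelihood entirely."""
    return random.choice(list(probs))

print(greedy_pick(next_token_probs))            # always "mat"
print(exquisite_corpse_pick(next_token_probs))  # any of the four, equally often
```

Greedy picking converges on the predictable; the random pick is what makes an exquisite corpse produce sentences no single author would write.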
The ultimate act of creation – the evolution of life itself – is driven by random mutations that produce organisms with increased chances of survival and reproduction. In other words, if you’re reading this, it’s thanks to randomness.
Blame it on the Fame
If you were around in the late 1980s you know who Milli Vanilli were. The R&B-dance-pop duo of Rob Pilatus and Fab Morvan recorded one of the biggest albums of the decade and had a string of hit singles. Except that they didn’t. The two young men from Munich had been cast as the faces of a fake group by German hitmaker Frank Farian, who had performed a similar piece of musical trickery in the 1970s with disco stars Boney M.
I usually recommend a single episode of a podcast in this section of the newsletter, but the limited series Blame it on the Fame is seven culturally fascinating episodes covering the power of celebrity, the power of the recording industry, and the power of white businesspeople over a couple of poor Black kids who couldn’t even speak English properly.
Not long after Milli Vanilli won the Grammy Award for Best New Artist in February 1990, the illusion crumbled when it was revealed that session singers hired by Farian, not Pilatus and Morvan, had performed the songs. The Blame it on the Fame podcast series retells the whole story in dramatic detail.
Mental health warning: you will have “Girl You Know It’s True” playing constantly in your head after listening to this.
Doom, gloom, or boom?
As asked in the FOMO Food section above, I’d love to get your thoughts on the impact that AI will have on creative careers. You can comment directly in the Substack app or drop me a line by emailing john@johnbdutton.com.
And why not connect with me on LinkedIn if you haven’t already?