Generative AI risks suffering the same fate as NFTs, the metaverse and crypto: more hype than genuine game changer.
This time last year, NFTs, the metaverse and crypto dominated headlines and conversations. McKinsey & Company predicted that by 2030 the metaverse would drive $5 trillion in value, and numerous brands wasted no time snapping up real estate in Decentraland. Meanwhile, NFTs were all the rage, with investors paying up to $300,000 for a Bored Ape Yacht Club NFT.
Today, no one talks about any of that. Generative AI is all the rage now, and it’s all we talk about. Is generative AI really the next big thing, or is it simply a powerful but overhyped tool that can drive efficiency for some people in certain scenarios?
The Risk: Hype can be deadly for new tech. It can tarnish the reputation of innovations that actually deliver real-world benefits, just not at the level promised by their enthusiasts. Blockchains have very useful applications, and NFTs can play an interesting role in loyalty programs. The metaverse is coming, likely driven by a rising generation of Roblox users. That said, brands aren’t investing in NFTs and the metaverse anymore, thanks to the numerous scandals of the past 12 months.
Generative AI is at risk of suffering a similar fate. Earlier this year, pundits promised that generative AI could produce flawlessly written news articles and research papers (ChatGPT was even listed as an author on some). But that was all before the general public understood that generative AI has an accuracy problem.
The FTC & Generative AI: Recently, the Washington Post reported that the FTC is investigating OpenAI over data leaks and ChatGPT hallucinations (a topic we covered previously).
According to the Post, “The Federal Trade Commission has opened an expansive investigation into OpenAI, probing whether the maker of the popular ChatGPT bot has run afoul of consumer protection laws by putting personal reputations and data at risk.”
The privacy breaches are scary. In March, OpenAI began contacting users to inform them that payment information shared in a chat may have been exposed. It turned out that users’ chats could be visible to other users.
If the Commission determines that OpenAI has violated consumer protection laws, the company could be subject to a consent decree.
The Commission is also concerned about the reputational harm ChatGPT can inflict on everyday citizens. We all heard about the disgraced lawyer who was fined and slammed by a judge over a ChatGPT-generated brief that cited hallucinated cases.
According to the Washington Post, the FTC wants OpenAI to share any research and assessments it has about the degree of ChatGPT’s hallucinations. Additionally, the FTC made “extensive demands about records related to ways OpenAI’s products could generate disparaging statements, asking the company to provide records of the complaints people send about its chatbot making false statements.”
The day before the Washington Post reported on the FTC investigation, Insider ran an article about ChatGPT getting lazier and dumber. Evidently, GPT-4 users have been complaining on Twitter about declining performance and worsening accuracy.
In a recent AdMonsters FAQ, Ryan Treichler of Spectrum Labs explained that we should think of large language models as “word calculators.” These tools are really good at predicting how to complete sentences in a particular context, period. Additionally, these models are designed to generate wholly new text in response to every prompt. That quest for new text, combined with an inability to verify the accuracy of their responses, goes a long way toward explaining why generative AI is rife with hallucinations.
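To make the “word calculator” framing concrete, here is a minimal sketch of next-word prediction in Python. The tiny vocabulary and probabilities are invented for illustration, standing in for the vast distributions a real LLM learns, but the structural point holds: the generation loop only asks which word plausibly comes next, never whether the finished sentence is true.

```python
# A minimal sketch of the "word calculator" idea: a toy next-word predictor.
# The vocabulary, probabilities, and seed below are invented for illustration;
# real LLMs learn distributions over tens of thousands of tokens.
import random

# Toy "language model": for each context word, a distribution over next words.
NEXT_WORD_PROBS = {
    "the":   {"court": 0.5, "ruling": 0.3, "lawyer": 0.2},
    "court": {"ruled": 0.6, "found": 0.4},
    "ruled": {"that": 1.0},
    "that":  {"the": 1.0},
}

def generate(seed: str, length: int = 8) -> str:
    """Repeatedly sample a plausible next word.

    Note what is missing: there is no step that checks whether the
    resulting sentence is *true*, only that each word is likely to
    follow the previous one.
    """
    words = [seed]
    for _ in range(length):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break  # no known continuation for this word
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the court ruled that the lawyer" -- fluent, but never fact-checked
```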
But is this a problem with LLMs, or with the users who expect too much from them? When the general public first learned about ChatGPT, it was positioned by many as a great way to create articles, case briefs and term papers quickly. The early coverage had the ring of a get-rich-quick scheme for anyone relying on content for revenue or racing to finish a class assignment. Many are now learning how painful it is to believe that hype. Those people are a lot like the investors who lost their life savings on crypto or NFTs.
In the end, we may find that for most users LLMs are closer to spellcheck than to a revolutionary technology that will change everything. They can’t complete tasks from A to Z, but in certain scenarios they can help people parse data and surface insights sooner using natural language prompts. That’s a big benefit, to be sure, but it’s a far cry from the hype we’ve been promised.
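As an illustration of that narrower, still-useful role, here is a hedged sketch of the “insights via natural language prompts” workflow using the OpenAI Python SDK. The CSV snippet and the question are hypothetical, and the closing comment is the point: the model’s answer is a draft to verify, not a fact to publish.

```python
# A minimal sketch of using natural language prompts to get insights from data,
# via the OpenAI Python SDK (pip install openai). The CSV data is invented for
# illustration; any real use should verify the answer, since nothing here
# checks it for accuracy.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical ad-revenue data pasted directly into the prompt.
csv_snippet = """month,channel,revenue_usd
2023-04,display,120000
2023-04,video,95000
2023-05,display,101000
2023-05,video,143000"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a data analyst. Answer only from the data provided."},
        {"role": "user",
         "content": f"Given this CSV:\n{csv_snippet}\n\n"
                    "Which channel grew month over month, and by how much?"},
    ],
)

print(response.choices[0].message.content)
# Treat the answer as a draft: spot-check it against the raw numbers.
```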