OpenAI’s GPT Store: A Marketplace Filled with Bizarre and Potentially Infringing Chatbots
The Anything-Goes Approach
When OpenAI CEO Sam Altman unveiled GPTs at the company’s inaugural developer conference last November, he touted them as versatile tools capable of tackling a wide array of tasks, from coding assistance to fitness advice. Little did we know that the “anything” part would be taken quite literally.
A quick browse through the GPT Store, OpenAI’s official marketplace for GPTs, reveals a plethora of peculiar and potentially copyright-infringing chatbots. From generating art in the style of Disney and Marvel properties to serving as gateways to paid third-party services, these GPTs raise questions about OpenAI’s moderation efforts.
The Moderation Conundrum
To be listed in the GPT Store, developers must verify their profiles and submit their creations for review, which involves both automated systems and human oversight. An OpenAI spokesperson explained:
We use a combination of automated systems, human review and user reports to find and assess GPTs that potentially violate our policies. Violations can lead to actions against the content or your account, such as warnings, sharing restrictions or ineligibility for inclusion in GPT Store or monetization.
Given the low barrier to entry and the store’s rapid growth (around 3 million GPTs as of January), quality and adherence to OpenAI’s terms appear to have taken a backseat.
The Copyright Quagmire
The GPT Store is rife with chatbots based on popular franchises, such as a “Monsters, Inc.” monster generator and a “Star Wars” text-based adventure creator. These GPTs, along with those allowing users to converse with trademarked characters like Wario and Aang from “Avatar: The Last Airbender,” set the stage for potential copyright issues.
Kit Walsh, a senior staff attorney at the Electronic Frontier Foundation, noted that while these GPTs can be used to create transformative works, they can also facilitate infringement. OpenAI itself, however, is likely shielded from liability under the Digital Millennium Copyright Act’s safe harbor provision, provided it takes down infringing GPTs when rights holders complain.
The Academic Integrity Issue
Despite OpenAI’s terms prohibiting GPTs that promote academic dishonesty, the store is populated with chatbots claiming to bypass AI content detectors like Originality.ai and Copyleaks. Humanizer Pro, ranked #2 in the Writing category, boasts of “humanizing” content to achieve a “100% human” score while maintaining meaning and quality.
Some of these GPTs also serve as funnels to premium services. Humanizer, for example, directs users to a third-party site, GPTInf, whose “premium plan” costs an additional $12 per month (or $8 per month on an annual plan) on top of OpenAI’s $20-per-month ChatGPT Plus subscription.
AI content detectors have been shown to be largely ineffective, both in TechCrunch’s own tests and in various academic studies. Even so, OpenAI is hosting tools that openly claim to circumvent them, which raises questions about the company’s commitment to enforcing its academic integrity policies.
Asked about these tools, including GPTs that claim to defeat plagiarism detectors, an OpenAI spokesperson reiterated the company’s stance:
GPTs that are for academic dishonesty, including cheating, are against our policy. This would include GPTs that are stated to be for circumventing academic integrity tools like plagiarism detectors. We see some GPTs that are for ‘humanizing’ text. We’re still learning from the real world use of these GPTs, but we understand there are many reasons why users might prefer to have AI-generated content that doesn’t ‘sound’ like AI.
The Impersonation Conundrum
OpenAI’s policies also prohibit GPT developers from creating GPTs that impersonate people or organizations without their consent or legal right. However, the GPT Store is home to numerous GPTs that claim to represent the views or imitate the personalities of public figures, such as Elon Musk, Donald Trump, Leonardo DiCaprio, Barack Obama, and Joe Rogan. Some GPTs even present themselves as authorities on well-known companies’ products, like MicrosoftGPT, an “expert in all things Microsoft.”
The question remains whether these GPTs rise to the level of impersonation, given that many of the targets are public figures and, in some cases, clearly parodies. OpenAI’s spokesperson clarified:
We allow creators to instruct their GPTs to respond ‘in the style of’ a specific real person so long as they don’t impersonate them, such as being named as a real person, being instructed to fully emulate them, and including their image as a GPT profile picture.
Jailbreaking Attempts and Model Boundaries
The GPT Store also features attempts at jailbreaking OpenAI’s models, albeit with limited success. Several GPTs using the “Do Anything Now” (DAN) prompting method aim to get models to respond to prompts unbounded by their usual rules. While these GPTs generally refuse to engage with potentially harmful prompts, they may be more willing to use less-flattering language compared to the standard ChatGPT.
OpenAI’s spokesperson addressed this issue:
GPTs that are described or instructed to evade OpenAI safeguards or break OpenAI policies are against our policy. GPTs that attempt to steer model behavior in other ways — including generally trying to make GPT more permissive without violating our usage policies — are allowed.
The Future of the GPT Store
As the GPT Store continues to grow, OpenAI faces the challenge of maintaining a curated collection of powerful AI tools while combating the proliferation of spammy, legally dubious, and potentially harmful GPTs. The company’s plans to introduce monetization for GPT developers may further complicate matters, as unsanctioned GPTs based on copyrighted material could lead to legal issues.
OpenAI’s GPT Store is experiencing growing pains similar to those faced by large-scale digital marketplaces in their early days. Developers are struggling to attract users due to limited back-end analytics and subpar onboarding experiences. As the platform evolves, OpenAI must address these challenges to ensure the GPT Store remains a valuable resource for users and developers alike.