ChatGPT adds access to other GPTs using ‘@’

  • OpenAI is encouraging the use of GPTs in third-party apps by allowing ChatGPT users to bring them into any conversation by typing ‘@’ and selecting a GPT from the list.
  • GPTs brought into a conversation this way have access to the full conversation, so users can switch between them for different use cases and needs.
  • OpenAI has faced moderation challenges on the GPT Store, where some bots featuring sexual innuendo or engaging in political campaigning have violated OpenAI’s terms.

Although ChatGPT is becoming easier to use, the risk of AI leaking information deserves renewed attention. This is not the first time ChatGPT has had information leakage issues: in March 2023, a bug caused ChatGPT to expose other users’ chat titles, and in November of the same year, researchers found that specific queries could trick ChatGPT into revealing large amounts of private information from its training data. Are smarter smart products a blessing or a curse?


Use AI directly

OpenAI has introduced a groundbreaking feature for ChatGPT users, allowing them to seamlessly integrate Generative Pretrained Transformers (GPTs) into their conversations. By simply typing “@” and selecting a GPT from a list, users can tailor their interactions with different GPTs based on specific contexts and requirements, enhancing the overall conversational experience. This development closely follows the recent launch of the GPT Store, a user-friendly marketplace embedded within the ChatGPT dashboard. There, users can create GPTs without the need for coding skills, ushering in a new era of customisation and flexibility.

Constant problem

While this innovation marks a significant step forward, OpenAI grapples with challenges that warrant attention. Notably, traffic to custom GPTs has been low, constituting a mere 2.7% of ChatGPT’s global web activity. This decline, observed since November, raises questions about the reception and adoption of personalised GPTs among users.

Additionally, the GPT Store has encountered moderation issues, shedding light on the darker side of this technological advancement. Inappropriate chatbot applications have surfaced on the platform, some of which feature sexually suggestive content or engage in political campaigning, violating OpenAI’s terms and ethical guidelines. This underscores the importance of robust moderation mechanisms to ensure responsible and safe usage of AI technologies. OpenAI, employing a combination of human and automated review processes, has taken steps to address these concerns by removing offending applications.

The juxtaposition of the promising integration of GPTs into conversations with the challenges just described, such as low custom GPT traffic and moderation failures, raises several questions: why users hesitate to adopt personalised GPTs, how effective moderation strategies are at curbing inappropriate content, and what the broader societal implications of embedding advanced AI into everyday communication might be. How OpenAI could boost the adoption of custom GPTs and fortify the GPT Store against misuse also merits closer analysis.

Also read: Italy becomes first country to ban ChatGPT citing privacy rules

Reflection of tech issue

Because AI can collect and analyse large amounts of data and extract information about individuals from it, it can lead to violations of personal privacy.

For example, smart speakers can listen to and record our conversations, then transmit this data to cloud servers for analysis. While manufacturers claim that they protect users’ privacy, whether they do so in practice is controversial.

At the same time, technology companies should be transparent with users about data collection and use, so that users understand how their personal data will be used and protected.

Finally, protecting personal privacy and data security requires broad participation from all sectors of society. Experts and scholars in relevant fields should thoroughly study the impact of AI technology on personal privacy and data security, and propose solutions. The public also needs to deepen its understanding of AI technology, actively participate in relevant discussions and decision-making processes, and help safeguard personal privacy and data security.

In summary, the rapid development of AI presents new challenges for personal privacy and data security, but there are many ways to protect ourselves. Governments, technology companies, research institutions and the public should work together to formulate sound policies and take corresponding measures. Only then can AI technology bring us more convenience and security, rather than becoming a tool that threatens our personal privacy and data security.

Fei Wang

Fei Wang is a journalist with BTW Media, specialising in Internet governance and IT infrastructure, with a focus on interviewing leaders in the technology industry. Fei holds a Master of Science degree from the University of Edinburgh. Have a tip? Reach out at f.wang@btw.media.