OpenAI expands ChatGPT custom personality feature

2023-08-15

OpenAI has announced it is expanding access to a feature that lets ChatGPT users give it a custom personality. It was previously only available to Plus subscribers in the US but will now also be accessible on the free plan and is coming to the UK and EU ‘soon,’ the technology firm confirmed in a statement. It comes as a new poll for Reuters found that a third of US employees were regularly using ChatGPT at work, despite only 22% of employers explicitly allowing its use.

OpenAI launched ChatGPT in November 2022 and it quickly became one of the fastest growing consumer apps in history (Photo: Iryna Imago / Shutterstock)

OpenAI first launched custom instructions in ChatGPT last month, allowing users to set rules the chatbot must follow when responding to questions. A user could, for example, require a formal tone or instruct the service to prioritise accuracy over guesswork in its answers.

With the new instruction set, accessible via the user menu, ChatGPT will consider the instructions in every conversation going forward. As such, OpenAI confirmed that users ‘won’t have to repeat [their] preferences or information in every conversation.’ Previously, those on ChatGPT’s free plan had to set a custom instruction in each chat to get the agent to respond in a specific way. Once the new custom instructions are in place, free-tier users could, for example, specify that any meal-preparation suggestions exclude certain ingredients, in deference to friends and family with allergies, or that responses to more formal queries follow a house style mandated by the user’s employer.

In pursuing enterprise users for ChatGPT, OpenAI has been explicit about the security measures it has put in place to protect corporate and personal data. For example, no input data from business accounts is used to retrain the model. It is possible to stop ChatGPT using input data on a standard consumer account, but the setting is not on by default. Custom instructions from free users are being used in further training of the model, OpenAI has confirmed, though the firm says it strips personally identifiable information before they are utilised in this way.

Additionally, the technology firm says it has updated safety measures to consider new ways users can instruct the model, including ensuring that instructions don’t violate usage policies. The model has also been given the freedom to refuse or ignore an instruction if it would lead to responses that violate those policies. 

ChatGPT office politics

Employees are using ChatGPT to speed up their work, conduct research and even have it write reports. This is causing a headache for companies and security teams, as proprietary information is often put into the public version of ChatGPT and then used to further train and fine-tune the model. That data can then resurface in the future to be used by competitors or hackers.

In a recent poll for Reuters carried out by Ipsos, 2,625 adults across the US were asked in mid-July about their use of ChatGPT at work. A third responded that they used it regularly, while only 10% of employers explicitly ban its use.

Some firms have imposed a complete ban on its use, including blocking it at their own firewalls so it won’t load on company devices. In May, Samsung became one of the first major players to impose such a ban. Meanwhile, Google parent Alphabet has warned staff to be cautious about the data they enter into chatbots.


There is a growing enterprise AI market, with companies like SambaNova, IBM and Databricks building out custom models that run on sandboxed data. This allows enterprise companies to utilise the power of foundation model AI but in a completely controlled environment and ensure the data remains in that system.


Others are licensing the OpenAI technology and using it in a controlled way, creating custom interfaces, instruction sets and limits. Reuters found that Coca-Cola was building its own large language model for productivity, helping employees work more efficiently.

For its part, food and beverage giant Tate & Lyle is finding ways to use the public version of ChatGPT “in a safe way.”

“We’ve got different teams deciding how they want to use it through a series of experiments,” the firm’s chief financial officer Dawn Allen told Reuters. “Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?”

There are other risks associated with using AI services like ChatGPT in a corporate setting, including employees producing false information or even breaking copyright laws by inadvertently reproducing unlicensed training material. A recent Kaspersky survey revealed that 56% of employees were actively using ChatGPT to generate fresh content, while 40% of respondents admitted they did not check the accuracy of these outputs before using them.

“Despite their obvious benefits, we must remember that language model tools such as ChatGPT are still imperfect as they are prone to generating unsubstantiated claims and fabricate information sources,” warned Kaspersky data science lead Vladislav Tushkanov when the survey was published.

Read more: OpenAI adds persistent personality options to ChatGPT
