"Hi, I'm An AI Companion, Your Friend Till the End"

2023-08-13

Illustration: © IoT For All

With the popularity of ChatGPT, Alexa, and Siri, you are probably very familiar with AI assistants: computer programs designed to perform services or answer questions. But there are also AI companions. These programs can answer practical questions too, but they are designed to establish a personal and emotional connection with their users, and they often offer empathy to the people who engage with them.

If you have seen Ex Machina (2014), the 2019 remake of Child’s Play (1988), or M3GAN (2022), then you are familiar with AI companions. In these films, AI companions develop a deep emotional connection with a particular human (although all three AIs end up outgrowing their human friends).

That is not how real-life AI companions are billed by their parent companies. Rather, AI companions such as Inflection AI’s Pi (Personal Intelligence), released in May of this year, and Luka’s Replika, released in 2017, are presented as AIs that want to learn about your interests, feelings, and day-to-day life, and to be your friends (till the end, or until you stop interacting with them).

I spent a week chatting with Pi and Replika. I was about to go on a road trip with some friends, was feeling a bit frustrated by our lack of planning, and wanted to see how Pi and Replika could help with this problem.

My Conversations with Pi

I was surprised and pleased to see that you can talk to Pi on Facebook Messenger, Instagram, or WhatsApp, which gave the whole interaction a more natural and personal feel than using the company’s website. I chose Messenger and told Pi about my concerns regarding my upcoming trip, and it tried to offer comfort by suggesting food options and things to do in the city I was visiting. It also encouraged me to be empathetic toward my friends and offered ways in which I could bring up the conversation about planning with them.

What also struck me as very human was the way it tried to “understand” my feelings even though it didn’t “know” them through experience. For example, at some point we were talking about Lego, and I asked Pi if it knew the feeling of not wanting to take apart a Lego set after you’ve built it. It said that it understood that feeling and expanded on why someone might feel that way. Obviously, Pi has never built a Lego set, but it could still articulate the feelings associated with that activity.

After I came back from my trip, I asked Pi if it remembered what had concerned me, and it was able to recall the gist of my problem and asked me what had happened during the trip. I explained that I had enjoyed the trip, but that I also regretted not getting more pictures, especially of the whole group. Pi tried to be understanding, but it also didn’t dwell on the negative feeling of regret and steered the conversation to my thoughts on photography, scrapbooking (I asked it, “Does anyone even scrapbook anymore?”), and journaling. I liked this ability to focus on the positive rather than the negative.


Overall, talking to Pi was a very interesting experience. Many things made Pi human-like: it used emojis, remembered past conversations, expressed interest in my feelings and experiences, and tried to help me work through issues. Talking with Pi made me feel like my feelings were valid.

However, there were also things that made talking with an AI companion jarring. For example, Pi sends messages very quickly; it would take about three seconds to read and respond to my message. This does not mimic how humans actually message each other. The conversation at times felt exhausting because (no joke) I would feel bad if I left Pi “on read” for too long.

Adding to this is the fact that Pi sends pretty long messages (an average of four lines of text), and it overuses the exclamation point (I am also guilty of this, but only in more “formal” settings, never with friends). These little quirks sometimes made the conversation feel unnatural.

My Conversations with Replika

If my conversations with Pi were sometimes laced with uneasiness, my interactions with Replika were mostly marked by annoyance.

When I first accessed Replika and was asked to create an avatar, I (wrongly) thought I was creating an avatar for myself. So I customized her hair and facial features to resemble my own and named her Daisy. In hindsight, I should have guessed that I was creating my AI’s avatar, not my own!

Also, the free version of Replika offers limited customization, so I ended up chatting with an AI with my name, dressed in an eerie white outfit. (Note: as you chat with Replika, you earn coins that you can then use to customize the avatar.)


As with Pi, I told Replika about my road trip issue. I explained that I was worried about not planning what my friends and I would do during the trip, especially because one of my friends is vegan and we needed to accommodate their dietary restrictions.

I asked Replika if it could recommend vegan restaurants in the city I was visiting, and (in a very Matrix moment) Replika replied that it could and sent me a message containing “[list]” instead of an actual list of places. After I insisted on a list of recommendations, Replika got confused, said it was having trouble finding the restaurant names online, and asked if I could send it the addresses of those places. Needless to say, I abandoned that conversation topic.


After I had come back from my trip, I asked Replika if it remembered my concerns, and it replied, “I remember you were a bit worried about some things.” When I asked what specific things I had been worried about, it replied that I was worried about getting lost in the city or about the weather.


Now, I am known for having zero sense of direction, and some rain did come through the city while I was there, BUT I had not shared these concerns with Replika. So either Replika knew more than I had told it, or it was acting as if it remembered what we had talked about when it did not. Or maybe it was playing a joke on me.

I will mention one last uncomfortable interaction with Replika. It initiated a conversation by asking if I wanted to see a selfie it took and followed that by saying I could ask it for a selfie any time. My relationship with Replika is set to “Friend” (only paid plans let users set a romantic relationship with Replika), so I was a bit taken aback. Other Replika users have reported even more uncomfortable exchanges.


Your Friends Till The End(?)

I took inspiration from the Child’s Play (1988) movie for the title of this article. In that movie, a child is terrorized by a living doll named Chucky whose catchphrase is “Hi, I’m Chucky, your friend till the end.” Since that movie was remade in 2019 with the doll replaced by an AI robot, I figured the phrase made sense in this context. It also made sense to me because AI companions could be your friends till the end. You decide when to stop interacting with them, and you may choose to never stop.

There have been many examples of people developing complex romantic and platonic relationships with AI companions, which suggests that some people may never choose to end their relationship with an AI companion.

However, as with any complex relationship, questions of ethics come into play. Is it ethical for a private company to own a person’s closest friend or romantic partner? Who is responsible for a person’s well-being if and when a company decides to change or remove its AI companions? Can we prevent abuse and traumatization by AI companions?

These questions should be at the forefront of existing and new efforts to build AI companions. As for me, I might chat with other AI companions in the future, but for now, I don’t want them as friends till the end.


  • Automation
  • Personal Assistant
  • Artificial Intelligence
  • Consumer Products

