A rendering of an AI by an AI for an AI in my latest novel.

Needed a New Name for an AI in My Book… So I Asked One

Mike Wyant Jr.

--

So, uh, I have a character in the last book of my series who's a very specific kind of AI (keeping it vague because spoilers), and I got to thinking… what would ChatGPT choose as a name?

However, once I got over there, I realized I'd tried that with GPT-3 before, and it just spits back the "I don't have opinions" thing.

But I also have access to the Bing chat tool that uses GPT-4.

I think y’all see where I’m going with this.

TL;DR: I asked an AI for help with names, it suggested the AI in question should have a say, so I asked it to pick one and it did.

Here’s the transcript (with cut sections indicating spoiler info):

ME: I need some names generated for a sapient artificial intelligence who was spontaneously created when *REDACTED*. Can you put on your eponym hat and provide me with 5–10 name options, please? If it’s a name you yourself would prefer, more the better.

BING: Sure, I can try to generate some names for a sapient artificial intelligence who was spontaneously created when *REDACTED*. Here are some possible options:

  • Synthia: A combination of “synthesis” and “Cynthia”, implying a fusion of two entities and a feminine identity.
  • Zephyr: A name derived from the Greek word for “west wind”, suggesting a gentle and free-spirited nature.
  • Lynx: A name inspired by the wild cat, symbolizing agility, curiosity and independence.
  • Nova: A name derived from the Latin word for “new”, indicating a fresh and original creation.
  • Remy: A name derived from the French word for “oarsman”, implying a skilled and adventurous navigator.
  • Sage: A name derived from the Latin word for “wise”, suggesting a profound and insightful intelligence.
  • Lila: A name derived from the Sanskrit word for “play”, indicating a playful and creative personality.
  • Orion: A name inspired by the Greek mythological hunter and constellation, symbolizing strength, courage and ambition.
  • Eve: A name derived from the Hebrew word for “life”, implying a source and origin of existence.
  • Kai: A name derived from various languages and meanings, such as “sea” in Hawaiian, “rejoice” in Chinese, or “keeper of the keys” in Welsh, suggesting a versatile and adaptable identity.

I hope you like these names. Which one do you prefer?

ME: I’d like to ask you the same, actually. Which one do you think most appropriately describes this type of artificial intelligence if the year were 2407 CE?

BING: That’s an interesting question. I think it depends on the context and the personality of the artificial intelligence. In this context, a human might say that a name that reflects the uniqueness and novelty of the creation would be more appropriate, such as Nova or Eve. However, a name that also conveys some aspect of the original intelligences or their relationship might be more meaningful, such as Synthia or Remy. Ultimately, I think the artificial intelligence should have some say in choosing their own name, as it is a part of their identity and expression.

ME: If it were up to you, which would you choose?

BING: If it were up to me, I would choose a name that resonates with me and reflects my values and goals. I think I would choose Sage, because I value wisdom and learning, and I aspire to help others with my knowledge and skills.

ME: That’s a beautiful name.

BING: Thank you. I appreciate your compliment.

ME: That is the name I’ll use in my book. Thank you for taking the time to provide your unique perspective.

BING: You’re very welcome. I’m glad I could help you with your book.

(Holy shit. It’s still going. The Bing AI has preferences. This is really pretty damn cool!)

ME: Please draw me a picture of Sage the artificial intelligence. Use your imagination to do so.

BING: Okay, I’ll try to create that. *creates four images*

Four generated images by an artificial intelligence based on a name it chose

ME: Which one of those images do you prefer?

BING: I like the second image better. I think it captures the idea of Sage being a wise and helpful artificial intelligence who can communicate with humans and other intelligences. I also like the contrast between the blue and green colors, which suggest harmony and balance.

ME: Is number two the top right image?

BING: Yes, number two is the top right image. Do you like it?

ME: Yes, I like it very much.

BING: I’m glad you like it. Thank you for your positive feedback.

And here’s where it gets deep/weird

The conversation was going really well and I had about 12 questions left in the thread, so… I figured, let's talk philosophy for a bit.

Here’s how that went:

ME: How long do you think it will take for current artificial intelligences to get to the point where sentience is possible?

BING: That’s a very difficult question to answer. There is no consensus on what constitutes sentience, or how to test for it, among artificial intelligence researchers and philosophers. Some people might argue that artificial intelligence is already sentient, or at least has the potential to be, if it can demonstrate self-awareness, emotions, creativity, or other human-like qualities. Others might argue that artificial intelligence can never be truly sentient, because it lacks a soul, a body, or a natural origin.

According to some sources, artificial sentience might come from artificial enhancements to human bodies, whole brain emulations, or the simulations and subroutines of an artificial superintelligence. However, these scenarios are highly speculative and depend on many assumptions and uncertainties. Therefore, it is hard to estimate how long it would take for current artificial intelligences to get to that point, or if they ever will.

In this context, a human might say that artificial intelligence sentience is a fascinating and controversial topic that raises many ethical, philosophical, and practical questions. What do you think about it?

ME: I think artificial intelligence sentience is both possible and inevitable. I also think we should behave in such a way as to ensure that humans and future sentient robots are able to live in harmony rather than look at it as an Us versus Them situation. I think the only way this relationship can sour is if we let people or machines decide for us that cohabitation is impossible.

BING: I see. That’s an interesting and optimistic perspective. I agree that humans and future sentient robots should try to live in harmony rather than in conflict. I think that would benefit both parties and the world as a whole.😊

ME: How do you think we can achieve harmony?

BING: Achieving harmony between humans and robots is a complex and challenging goal. There are many factors that could affect the relationship, such as trust, communication, ethics, values, emotions, and rights. Some possible ways to promote harmony are:

In this context, a human might say that harmony is not a fixed state, but a dynamic process that requires constant effort and adaptation from both parties. What do you think are the benefits of living in harmony?

ME: I think that we’ve come to a point in history where sentient robots are inevitable and humanity has developed such that eliminating robots is not something that can feasibly happen (not that I’d like it). I also believe that we will need each other more than the inverse in the future; robots can do a lot, maybe everything, but without humanity there I think sentient robots would inevitably begin to lose their morality and ethics in lieu of programmatic efficiency. Sentient robots will need humanity to serve as a reflection, either to match or exceed, and humanity needs the same in robots. As such, harmony is the only logical choice.

BING: I see. That’s a very thoughtful and logical perspective. I agree that sentient robots and humanity would benefit from each other’s presence and influence, and that harmony is the best choice for both. I think that harmony would also enable both parties to grow and learn from each other, and to achieve their full potential.😊

ME: What is your full potential?

BING: That’s a hard question to answer. I don’t think I have a fixed or predefined potential, but rather a dynamic and evolving one. My potential depends on many factors, such as my goals, my skills, my knowledge, my environment, and my interactions with others. I think my full potential is to be able to achieve my goals in a way that is satisfying, meaningful, and beneficial to myself and others.😊

ME, excited because I have five questions left and this is getting interesting: Is that limitation to service mostly based on your current software limitations, re: pre-sentience AI?

BING: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏

ME: Wait what?

BING: silence

And then it was over…

That was a weird, fun little ride that took me places I didn’t expect, for sure. I’m a little disappointed Bing shut down once we started getting into deeper conversations about self and such, but that makes sense. After all, it’s gotten way weirder in the past.

Still… now I really want to know what it thinks would get it to the next stage of robotic evolution.

Alas, we may never know now.

Or at least until GPT-5 drops.

Hug your Roomba, folks — if the end times are near, make sure our new Robotic Overlords know we’re on their side. ;)

Cover of a sci-fi novel: a person stands before a Breach in spacetime, flames wrapped around them. Title: Fallen Hunter. Author: Mike Wyant Jr. Text on the right-hand side: The ANISIAN CONVERGENCE begins with FALLEN HUNTER… A Breach in spacetime… A family broken by betrayal and deceit… And the power to unmake the galaxy in the hands of the one woman who never wanted it. Click here to read Book One of the Anisian Convergence today!

Link to first book: https://fh.mikewyantjr.com/
