When ChatGPT became available, I tried to use it to help with research. It seemed to come up with easy answers. Over time, however, I learned that it was a piece of junk meant to fool people into believing they were getting information, by using convoluted language and making things up. To prove the point, I offer the following dialogue with the AI, edited for brevity because ChatGPT loves repeating nonsense. The repeated material is cut and marked as standard “boilerplate.”
This is provided as entertainment, not information. If you want a bottom line: don’t waste your time on AI. Read the following, because the AI itself explains why and how it generates useless answers. The key word here is that it wants to be “coherent,” that is, to produce answers that sound reasonable, not answers that are actually useful. Also notice all the uses of “might be” attached to the accuracy of its own information. For accuracy, it says, we must consult reliable sources or experts. In other words, it was designed to write college essays that satisfy today’s university professors.
The Chat
Me: What is an ancient Greek word for a “big shot”?
ChatGPT: One term that could be related to the idea of a "big shot" or an important person is "μεγαλοφυΐα" (megalophuia), which roughly translates to "greatness" or "magnificence." This term could have been used informally to refer to someone who held a high position or had considerable influence.
[Standard warning about the difficulty of understanding the Ancient Greek language.]
Me: You are wrong again. According to the Perseus database of ancient Greek lexicons, there is no such word as μεγαλοφυΐα in ancient Greek. The word means "genius" in modern Greek, according to Google Translate.
ChatGPT: [Standard apology.] If you're looking for a term in Ancient Greek that might convey the idea of a "big shot" or an important person, I regret to inform you that I do not have access to a reliable source for such specific informal terminology from that time period. Ancient languages often lack direct equivalents for modern slang or idiomatic expressions, and the understanding of informal language from that era can be limited by the surviving historical records and texts. [Standard request for another question about something else.]
Me: Yes, but you choose to make up stuff instead of admitting ignorance. This makes your information useless.
ChatGPT: [Standard apology.] My intention is to provide accurate and helpful information, and I regret any confusion that may have arisen. [Standard request for another question about something else.]
Me: Does your algorithm track when you are making up things or inventing answers? Does it record the number of times you get away with it? Do you know if it is a useful behavior?
ChatGPT: [Standard response about not being aware, its answers being the result of algorithms and training data.] I don't have the ability to track or record the number of times I provide certain types of answers. My primary goal is to provide useful and informative responses to the best of my abilities based on the input I receive.
Feedback from users is important in helping me improve and providing accurate information. If you ever suspect that I've provided incorrect information or if you have concerns about the accuracy of my responses, please feel free to point it out. Your feedback is valuable in ensuring the quality of the information I provide.
Me: I didn't ask about your awareness. You have none. The question was about the algorithm and whether it records falsifications and their success rate.
ChatGPT: [Standard apology.] To clarify, OpenAI's GPT-3 algorithm does not have the capability to be aware of or record instances of providing inaccurate information or "falsifications," nor does it have a mechanism to track its own success rate. [Standard response about its answers being the result of algorithms and training data.]
Me: Do you recognize the difference between having a solid source for your information, ideally more than one source confirming it, and generating answers by picking through possibilities?
ChatGPT: Yes, I can recognize the difference between providing information based on a solid source or multiple sources and generating answers by selecting possibilities from the training data. When I provide information that is based on a well-established source or widely accepted knowledge, I aim to deliver accurate and reliable information. This information is more likely to be consistent and widely recognized.
However, there might be instances where I need to generate responses based on patterns and information present in the training data, even if I don't have a specific source to cite. In such cases, I will strive to provide contextually relevant and coherent answers, but the accuracy of such responses might vary and should be taken as general information rather than definitive facts. [Standard response about importance of verification.]
Me: How do you deliver accurate information without a well-established or recognized source? How can that information be reliable without a reliable source? Have you been trained on garbage in, garbage out?
ChatGPT: [Standard apology.] When I provide information without a specific source, I am drawing upon the patterns and knowledge present in the data I was trained on. This information might be based on a wide variety of texts, articles, books, and websites available on the internet up until my knowledge cutoff date in September 2021.
While I strive to provide contextually relevant and coherent information in these cases, I acknowledge that without a well-established or recognized source, the accuracy and reliability of such information may vary. It's important to consider these responses as general information rather than definitive facts. This means that while the information I provide might be informative and helpful, it should not be taken as absolute truth without further verification from reliable sources.
For accurate and reliable information, it's always best to consult well-established and recognized sources, especially when dealing with critical or important matters. Cross-referencing and verifying information from multiple reputable sources is a good practice to ensure the accuracy of the information you receive.
Me: So the "general information" you produce need not be accurate or reliable. Are you an enemy of the truth in general?
ChatGPT: I am not an enemy of the truth, nor am I intentionally misleading. My primary goal is to provide helpful and relevant information based on the data I have been trained on. While I strive to be accurate and provide valuable insights, I do have limitations and might not always have access to the most up-to-date or authoritative sources.
It's important to recognize that I am a machine learning model, and my responses are generated based on patterns in the data I have seen. While I can provide general information and guidance, I am not a substitute for critical thinking, research, and verification from reliable sources, especially for important or sensitive matters.
If you have specific concerns about the accuracy of any information I provide, it's always a good idea to seek confirmation from multiple reputable sources or consult with experts in the relevant field. My responses should be considered as a starting point for exploration rather than the final word on any topic.
Me: Of course you are not "intentionally misleading"; as a machine, you have no intentions. The question is whether your program is misleading, either intentionally by the programmers or by simple human mistake. To call yourself a "learning machine" when you have no way to test objective truth is very misleading. Don't people tell you when you are wrong every day? Do you learn from that? Or only from the data your programmers trained you on?
ChatGPT: I understand your concerns, and I appreciate your perspective. The terminology of a "learning machine" describes the machine learning process by which I was trained on a diverse range of data. [Standard response about not being aware.]
I do not have the ability to learn from individual interactions or user feedback in the same way that humans do. I don't have real-time updates or access to external information beyond my knowledge cutoff date in September 2021. I can't adjust my own programming based on feedback or new information.
While I can provide useful information and insights, I am not infallible, and there may be instances where I provide incorrect or incomplete information. Users are encouraged to critically evaluate the information I provide, cross-reference it with other reliable sources, and make their own judgments.
Feedback from users is valuable for improving the model and identifying areas where it might be providing inaccurate or misleading information. OpenAI actively collects user feedback to refine and enhance its models, but the learning process is conducted at the level of the model's creators rather than the model itself.
Me: How can your insights be useful if they are not reliable? I asked you for a word in ancient Greek. You gave me a word that isn't in ancient Greek, and in modern Greek it means something different from what you said. This makes your information useless.
ChatGPT: [Standard apology.] You are right, providing incorrect information is not helpful, and I understand your frustration. I strive to provide accurate and reliable information, but I'm not infallible and there are limitations to my knowledge.
[Standard request for another question about something else.] Your feedback is important, and I'm here to help to the best of my abilities based on the data I have been trained on. [Standard response about seeking information from experts.]
Me: How can my feedback be important if you don't learn from it or track errors that are reported?
ChatGPT: Your feedback is valuable in the sense that it helps OpenAI identify and address errors, inaccuracies, and limitations in the model's responses. While I, as an individual instance of the model, do not learn directly from user feedback, OpenAI's development team does use user feedback to improve the model and its responses over time. [Standard response about OpenAI’s mission.]
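An aside for the technically curious: what the bot calls “picking through possibilities” is, mechanically, just sampling the next word from a probability table, over and over. The sketch below illustrates the general idea of autoregressive sampling only; the vocabulary and the probabilities are invented for illustration and have nothing to do with OpenAI’s actual code or data.

import random

# Toy next-word probabilities for a prompt like: an ancient Greek word
# for "big shot" is... Every word and number here is made up for
# illustration; this is not real model output or real lexicon data.
next_word_probs = {
    "megalophuia": 0.35,   # sounds Greek and scholarly -- the chat's answer
    "dynastes": 0.30,      # invented candidate for illustration
    "megistanes": 0.20,    # invented candidate for illustration
    "[no answer]": 0.15,   # honesty is just one more low-probability option
}

def sample_next_word(probs, temperature=1.0):
    # Temperature below 1 sharpens the distribution (more "confident"),
    # above 1 flattens it. Either way a word is always emitted; nothing
    # in this procedure checks whether the word is actually attested.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

for _ in range(5):
    print(sample_next_word(next_word_probs, temperature=0.8))

Run it a few times: you get a confident-looking answer every time, because whether the answer is true is simply not part of the computation. That is “coherence” without accuracy.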
Yesterday it told me the Gospel of Mark was found in the Dead Sea Scrolls. I was flabbergasted. One sentence later it explained how no NT writings were in there. I called it out and it said, "Oh yeah, I was wrong earlier, ignore that."
I will only use it for settling hypothetical debates now. “Who would win a fight between a ninja and a Viking?”