Sophia Behar

November Reflections

Today, it is extremely easy to find news articles outlining the recent progress we have made with Artificial Intelligence, especially with ChatGPT, which many consider the most human-like AI chatbot. But have you ever wondered whether chatbots are actually very human-like, or whether our brains are just tricking us into thinking they are? That is the question that Celeste Rodriguez Louro's article "The Unspoken Rule of Conversation That Explains Why AI Chatbots Feel So Human" aims to investigate.


The article first explains that, from a linguist's perspective, text produced by generative AI tends to be "ungrounded" because it lacks the mutual understanding critical to typical human conversation. The average person interacting with AI, though, is likely to assume that whoever or whatever they are conversing with also comprehends what is being said. Yet even though AI can use language to some extent, it cannot actually think. In the past, when AI was less developed, it was quite common for chatbots to respond in ways that were grammatically accurate but did not fully address the prompt, making it clear that the chatbot was not human. With more recent versions of AI, this distinction is far less obvious, and the reason can be explained using pragmatics.


Pragmatics is a subfield of linguistics that studies how language is used in actual conversation, in particular the impact of context on meaning. Pragmatics thus looks at the underlying 'rules' of conversation as well as the assumptions individuals make about each other when they converse. Take the cooperative principle as an example: the understanding that what someone says is intended to promote a successful conversation. Now, what counts as a "successful conversation"?


In 1975, the philosopher Paul Grice proposed four conversational principles, formally known as maxims. If an interlocutor does not follow one of these maxims during a conversation, they are said to have violated it. The first is the maxim of quality: interlocutors should not give false information or information unsupported by evidence. Lies are a clear violation of this maxim, and the reason they work at all is that listeners assume the speaker is trying to be cooperative and is therefore telling the truth. The next is the maxim of quantity: interlocutors should be as informative as needed, but no more. For example, if you ask someone you have just met where they live, you would be satisfied to hear the country or city. If they instead respond with their exact street address and its precise distance from the nearest supermarket, they have violated the maxim by giving too much unnecessary information; if they simply say they live on Earth, they have violated it by giving too little. The third is the maxim of relevance (Grice's "maxim of relation"): interlocutors are expected to provide only information relevant to the current topic of discussion. The last is the maxim of manner: interlocutors should avoid obscurity of expression and ambiguity, and instead be brief, orderly, and clear.


When it comes to the maxims, early AI mainly struggled with relevance and quantity. Today, as it has begun to master these, AI can come across as increasingly human. Its biggest remaining problem is violating the maxim of quality, since AI often makes up information. Many people are caught out by this because they assume that AI, like a cooperative human speaker, is following all of the maxims — as when a lawyer in British Columbia unknowingly cited two cases that were entirely 'hallucinated' by ChatGPT.


All in all, I leave it to you to decide whether AI really is human-like, or whether it is just a model that has gotten better at following Grice's four maxims, the pillars of how we converse with one another.


Credit: Julia Forneck (Medium)


Works Cited


Bailey, Rania. “AI & the Quantity Maxim.” Medium, 5 July 2024, raniabailey.medium.com/ai-the-quantity-maxim-304be66e4fa1. Accessed 30 Nov. 2024.


Louro, Celeste Rodriguez. "The Unspoken Rule of Conversation That Explains Why AI Chatbots Feel So Human." The Conversation, 21 Nov. 2024, theconversation.com/the-unspoken-rule-of-conversation-that-explains-why-ai-chatbots-feel-so-human-243805. Accessed 30 Nov. 2024.


Proctor, Jason. “B.C. Lawyer Reprimanded for Citing Fake Cases Invented by ChatGPT.” CBC, 26 Feb. 2024, www.cbc.ca/news/canada/british-columbia/lawyer-chatgpt-fake-precedent-1.7126393. Accessed 30 Nov. 2024.

