February Observations
- Sophia Behar
- Feb 28
It is commonly agreed that lying typically does more harm than good. However, when someone does choose to lie, others often do not even realize it. Humans actually struggle greatly with detecting lies, and Paul Grice’s four conversational maxims help explain why. As I discussed in my “November Reflections” post, the assumption that whoever we are conversing with is cooperating with us predisposes us to believe that whatever they say is true. Combine this with the fact that the accuracy of polygraphs is disputed, and lie detection starts to look like an unusually challenging task. Well, it turns out that AI and large language models may be the solution!
Earlier this month, I came across a TED Talk titled “What if AI Could Spot Your Lies?” that psychologist and PhD candidate Riccardo Loconte gave to highlight the findings of his recent research project. The project examined whether large language models could be trained to detect lies effectively. A key aspect of the study was the fine-tuning of an existing model. Because large language models are so widely applicable and already contain vast amounts of knowledge, giving a model more focused training can push it toward its maximum potential on a specific task. In this case, the researchers fine-tuned Google’s large language model FLAN-T5 on datasets containing a mix of truthful and deceptive statements. These were collected by asking interviewees either to answer a question truthfully or to make up a story that would justify a dishonest answer. A portion of the statements was held back from training and used afterward to test the model. Across different experiments, the fine-tuning was done in slightly different ways, which affected the final results.
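To make the setup concrete, here is a minimal sketch of how such a fine-tuning dataset might be prepared. The prompt wording, labels, example statements, and split ratio are my own illustrative assumptions, not details from Loconte’s actual study; the key idea is simply that some labeled statements are held out so the model is tested on text it never saw during training.

```python
import random

def build_examples(statements):
    """Turn (text, label) pairs into prompt/target strings suitable for a
    sequence-to-sequence model such as FLAN-T5 (hypothetical prompt format)."""
    return [
        {"input": f"Is the following statement truthful or deceptive? {text}",
         "target": label}
        for text, label in statements
    ]

def train_test_split(examples, test_fraction=0.2, seed=0):
    """Shuffle the examples and hold out a fraction for evaluation, so the
    model is judged on statements it never encountered during fine-tuning."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Invented toy data standing in for the interview statements.
statements = [
    ("I spent last weekend hiking with my family.", "truthful"),
    ("I was at the office all night finishing the report.", "deceptive"),
    ("The package arrived on Tuesday morning.", "truthful"),
    ("I have never met that person before.", "deceptive"),
    ("We watched the match at a friend's house.", "truthful"),
]

train_set, test_set = train_test_split(build_examples(statements))
print(len(train_set), len(test_set))  # 4 1
```

The train/test separation is what the study’s generalization finding hinges on: a model that only ever sees one kind of statement in training may not transfer its judgments to a new context.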
The study’s key takeaway was that large language models can, in fact, outperform humans at detecting deceptive statements. However, they have a key limitation: the models struggle to generalize their learning across different contexts unless they are shown similar examples during training. This limitation stems from the fact that no single rule determines whether someone is lying; instead, the linguistic cues of deception depend on context and situation.
What does this mean for the future? As AI continues to grow, it is probable that this lie-detection technology will be refined and integrated into our lives. This could have significant benefits for security, allowing us to spot individuals’ malicious intentions before they are acted upon, and could also help flag fake news online. The applications of this technology are endless. Yet Riccardo Loconte warns that this progress could also erode trust, with more people accusing others of lying purely on AI’s say-so, fracturing relationships. The key, therefore, is for AI to eventually be able not just to offer an opinion on whether a statement is deceptive or truthful, but also to list the precise reasons why it thinks someone is lying. This would allow us to keep exercising our own critical thinking in making the final judgement.
All in all, lie detection is one of the areas that can benefit the most from artificial intelligence, and I am highly intrigued to see how language models will eventually overcome their generalization struggles to create the most accurate and effective piece of technology possible.

Credit: William Oles (Medium)
Works Cited
Loconte, Riccardo. “What If AI Could Spot Your Lies?” TED Talks, Oct. 2024, www.ted.com/talks/riccardo_loconte_what_if_ai_could_spot_your_lies. Accessed 28 Feb. 2025.
Oles, William. “The Linguistics of Lying.” Medium, 5 May 2020, medium.com/@williamoles19/the-linguistics-of-lying-26e80151f62e. Accessed 28 Feb. 2025.