
Why AI Struggles with Counting A’s in ‘Banana’ — And Why ChatGPT Proved Me Wrong

Mohamed Elrefaey
5 min read · Nov 18, 2024


Today, I dove into the fascinating peculiarities of artificial intelligence with a simple yet surprisingly tricky question:

“Why do AI models struggle with questions like, ‘How many A’s are in the word ‘banana’?’”

What followed was an eye-opening (and a little ironic) chat with ChatGPT. It explained its limitations, nailed my test question, and left me both entertained and impressed. Here’s how it all unfolded.

The First Question: Why the Struggle?

ChatGPT started with an honest and insightful answer about why AI often struggles with questions that seem straightforward:

1. Tokenization Issues

  • Many language models operate on tokenized text, where a word like “banana” might be split into subwords or tokens. The model might not process the word as a whole, making it difficult to count specific letters.
  • For instance, “banana” could be tokenized into [“ba”, “na”, “na”], complicating the count of “A”s without additional logic. (The sketch just below shows what a real tokenizer actually produces for the word.)
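To make the token view concrete, here is a minimal sketch using the open-source tiktoken tokenizer. The choice of tokenizer is my own assumption, and the exact split of “banana” depends on which encoding a given model uses:

import tiktoken

# Load a tokenizer of the kind used by several OpenAI models (an assumption;
# other models ship their own tokenizers with different vocabularies).
enc = tiktoken.get_encoding("cl100k_base")
word = "banana"

token_ids = enc.encode(word)                        # integer IDs: what the model actually receives
pieces = [enc.decode([tid]) for tid in token_ids]   # the subword strings those IDs map back to

print(token_ids)  # a short list of integers, not six letters
print(pieces)     # subword pieces; the split varies by tokenizer

Whatever the split turns out to be, the model is handed token IDs rather than the six individual characters of “banana”, which is why a letter count calls for extra reasoning on top of what the tokens provide.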

2. Lack of Character-Level Understanding

  • LLMs are optimized for understanding and generating text at the word or phrase level, not at the level of individual characters. (The one-liner below shows how trivial the count becomes once characters are in view.)
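For contrast, once you do operate on characters, the count is a one-liner. A hypothetical helper, my own illustration rather than anything ChatGPT produced:

def count_letter(word: str, letter: str) -> int:
    # Case-insensitive, character-by-character count of a single letter.
    return word.lower().count(letter.lower())

print(count_letter("banana", "a"))  # prints 3

That gap, between a token-level view and a character-level one, is exactly what makes the banana question harder for an LLM than it looks.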
