Many capabilities, but no reasoning

I was blown away by a demonstration of the lack of reasoning in ChatGPT, Llama, Claude, and other LLM systems.

If you ask an LLM to finish a sentence with one word, explicitly stating that the word should have four letters, it cannot do it. Supply the sentence fragment “the cat sat in the ” and it will add a word, but not a four-letter one.
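If you want to run the experiment yourself rather than type it into a chat window, here is a minimal sketch using the OpenAI Python client. The model name and the exact prompt wording are my own choices, not anything from the original demonstration; any chat-style LLM endpoint would do.

```python
# Minimal sketch of the four-letter-word test, assuming the OpenAI Python client.
# The model name and prompt wording are assumptions; swap in any model you like.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Finish this sentence with exactly one word, and that word "
            "must have exactly four letters: 'the cat sat in the '"
        ),
    }],
)

answer = response.choices[0].message.content.strip()
final_word = answer.rstrip(".!'\"").split()[-1]  # last word of the reply
print(f"Model replied: {answer!r}")
print(f"Length of the supplied word: {len(final_word)}")  # check it against 4 yourself
```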

I figured this was a funny example, but surely not something that would still happen. Surely this had been fixed as the models were upgraded. Yet I was able to reproduce it instantly:

Surely this was just a problem with ChatGPT 3.5, though?

This failure is endemic to LLM technology as a whole. It is possible to bolt on post-processors that catch obvious mistakes like this, and it surely will not stay this easy to expose the flaw for long, but the underlying technology is incapable of reasoning. It cannot think. If you find yourself using an AI chat to do your thinking or to make decisions, please learn more about how the thing really works before it leads you into a dangerous error.
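As a concrete illustration of the kind of post-processor mentioned above, here is a hedged sketch of an external length check that would simply reject a completion violating the constraint. The function name and behaviour are my own invention for illustration; nothing like this is built into the models themselves.

```python
# Hypothetical post-processor sketch: an external check that a completion
# really is a single word of the required length. This is a filter bolted on
# after the text comes back, not something the model does internally.
import string


def satisfies_constraint(completion: str, required_length: int = 4) -> bool:
    """Return True only if the completion is one word of the required length."""
    words = completion.strip().split()
    if len(words) != 1:
        return False
    word = words[0].strip(string.punctuation)
    return len(word) == required_length


# Answers like the ones the article describes would be rejected here.
print(satisfies_constraint("mat"))     # False: three letters
print(satisfies_constraint("sofa."))   # True: four letters once punctuation is stripped
print(satisfies_constraint("carpet"))  # False: six letters
```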

The video below is a good start.