ChatGPT and Consequence Space

Back to Taleb's tweet

What ChatGPT is missing is that while academia is in probability space, life is in consequence space, and it really, really is constructed to think like an academic. The switch is not trivial.

Yesterday we discussed probability space vs consequence space. Taleb says: what ChatGPT is missing is that while academia is in probability space, life is in consequence space.

What does this mean?

Here is where we need to understand how ChatGPT works. When you ask a question, ChatGPT uses a (very sophisticated) probability model. ChatGPT takes the question and prior context as input to the model, which returns the words most likely to follow. Kind of like a very, very sophisticated autocomplete. ChatGPT doesn't really 'understand' what it outputs, or the consequences of the output, beyond it being the 'most probable' continuation.
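To make the "sophisticated autocomplete" idea concrete, here is a minimal sketch of next-token prediction. The vocabulary, probabilities, and the `next_token_distribution` stand-in are all made up for illustration; this is not OpenAI's model or API, just the shape of the loop.

```python
import random

def next_token_distribution(context: str) -> dict[str, float]:
    """Stand-in for a language model: maps a context string to
    probabilities over a tiny made-up vocabulary."""
    if context.endswith("the sky is"):
        return {"blue": 0.7, "clear": 0.2, "falling": 0.1}
    return {"the": 0.5, "a": 0.3, "very": 0.2}

def autocomplete(context: str, steps: int = 3) -> str:
    """Repeatedly append a high-probability token.
    The choice is driven only by probability, never by consequence."""
    for _ in range(steps):
        dist = next_token_distribution(context)
        tokens, probs = zip(*dist.items())
        context += " " + random.choices(tokens, weights=probs)[0]
    return context

print(autocomplete("the sky is", steps=1))  # most often: "the sky is blue"
```

Nothing in that loop asks "what happens if this answer is wrong?" - it only asks "what usually comes next?"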

To take an extreme (hypothetical) example, if you ask "What should I do about depression?", it could well recommend suicide, simply because depression and suicide often appear together in the training corpus. OpenAI has implemented external guardrails to prevent this (ChatGPT will refuse to answer the question or point you to professional help), but the model itself has no means of evaluating the consequence of its answer. Humans manually add guardrails from the outside, but you can't guardrail millions of such situations, and many people have found trivial ways around the guardrails.
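Here is what an *external* guardrail looks like in spirit: a filter bolted on after the model, not something the model reasons about. The keyword list and refusal text below are hypothetical, not OpenAI's actual safety system.

```python
# Hypothetical keyword-based guardrail wrapped around a model's output.
BLOCKED_TOPICS = {"suicide", "self-harm"}

def guarded_answer(question: str, model_answer: str) -> str:
    text = (question + " " + model_answer).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "I can't help with that. Please consider talking to a professional."
    return model_answer

# The model still has no notion of consequence; the keyword list does all the
# work, so a rephrased prompt (slang, another language, role-play) slips past.
```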

The issue is the design of the system itself - it's based 100% on probabilities, 0% on consequences. For an academic, the most probable answer is the best answer. But real people live life in consequence space, where higher consequences require much higher confidence.
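One way to see the difference: in consequence space, whether an 80%-confident answer is acceptable depends on the stakes. The thresholds below are illustrative assumptions, just to show how the decision rule changes shape.

```python
# Consequence-space decision rule: the confidence you demand scales with stakes.
def accept_answer(confidence: float, stakes: str) -> bool:
    required = {"trivial": 0.6, "important": 0.95, "irreversible": 0.999}
    return confidence >= required[stakes]

print(accept_answer(0.8, "trivial"))       # True  - fine for autocomplete
print(accept_answer(0.8, "irreversible"))  # False - high consequence demands
                                           #         far higher confidence
```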

That said, for low-consequence situations it can be a really great tool. Want to automate trivial, monotonous tasks? Go for it. Just don't trust it with your life.