We have been talking the last few days about ChatGPT. One of the interesting things about a deep learning AI system is how it 'stores' knowledge.
In traditional computing systems, we can point to a part of the storage device and say "these bytes are stored here" and "these other bytes are stored there". If a part of the storage is damaged, we might lose access to the file whose bytes were stored in that part, but still have perfect access to other files stored elsewhere.
By contrast, in a deep learning AI system there isn't a single place where we can say "this information is stored here" or "that information is stored over there". Any particular piece of knowledge is distributed across all parts of the system, and any one part of the system contains some component of every piece of knowledge in the system. If we lose certain slivers of the system, we don't lose certain information while keeping everything else. Instead, the AI system as a whole just gets less accurate.
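A tiny sketch can make this concrete. The example below is not how ChatGPT itself stores knowledge; it's a classic linear associative memory, a simplified stand-in that shows the same distributed-storage property. Several key-to-value associations are stored in a single weight matrix, so every entry of the matrix holds a little of every association. When we "damage" the matrix by zeroing out a random fraction of its entries, no single association is wiped out; every recall just gets a bit noisier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store several key -> value pairs in ONE weight matrix W, as a sum of
# outer products. Every entry of W holds a bit of every stored pair --
# no pair lives in any one place.
dim, n_pairs = 64, 4
keys = np.linalg.qr(rng.standard_normal((dim, n_pairs)))[0].T  # orthonormal keys
values = rng.standard_normal((n_pairs, dim))
W = sum(np.outer(v, k) for k, v in zip(keys, values))

def recall_error(weights):
    # Mean recall error across ALL stored pairs.
    return np.mean([np.linalg.norm(weights @ k - v) for k, v in zip(keys, values)])

print("intact:", recall_error(W))  # near zero: every pair recalled almost exactly

# "Damage" the memory: zero out a random ~20% of the weights.
mask = rng.random(W.shape) > 0.2
damaged = W * mask
errors = [np.linalg.norm(damaged @ k - v) for k, v in zip(keys, values)]
print("per-pair errors after damage:", np.round(errors, 2))
# No single pair is erased; every recall degrades a little, together.
```

Contrast this with file storage: zeroing 20% of a disk destroys some files completely and leaves the rest untouched, whereas here the loss is spread evenly across everything the matrix "knows".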
Think about our own brains. As neurons atrophy, we don't completely forget one thing while perfectly remembering everything else. Instead, memory as a whole gets hazy.
And probably the simplest everyday example is spectacles (reading glasses). If you have ever worn scratched glasses, you know that a scratch doesn't block out the part of the scene behind it; that part just looks a little blurry. That is because every spot on the lens gathers light from the entire scene.
This idea is called superposition, because any one part is a summation of pieces of the whole.