I will have to write under the assumption that I will be misunderstood. There is no way to avoid that, and keeping silent is no longer an option.
Instead of four levels of complexity, I'm going to go for just two. There are two classes of machines:
- Machines that can be understood, and
- Machines that cannot be understood.
Now there is a nice division, and a proper binary partition, I think. Except, it's semantically incomplete. Understood by whom? Is it
- Machines I can understand, and
- Machines I don't understand?

Or is it

- Machines that some human somewhere can understand, eventually, and
- Machines that no mortal human anywhere could ever understand?
There are machines that a human might reasonably expect to understand in a lifetime of attempts to characterize them, and there are machines whose understanding would take more time than any human is willing or able to devote to it.
Now the boundary is fuzzy, but it provides a better working basis for discussion.
The next problem is to try to characterize the boundary.
Whoops. There are two ways to cut this boundary. One is the boundary between real machines and ideal machines:
- A human might understand an ideal machine.
- A human can never fully describe and understand a real machine.
Fortunately, we can understand many real machines sufficiently for practical purposes. That is, we can build understandable models of ideal machines that match real machines closely enough for many practical purposes.
That makes the pill a little easier to get down. Unfortunately for patent examiners, judges and juries, plaintiffs and defendants, the question of which practical purposes are served remains open for any specific machine.
There is another division, somewhat orthogonal to the division between engineering specifications and mathematical models on the one hand, and actual machines on the other.
It applies to our engineering models and projects onto the real machines modelled thereby.
Again, the boundary is a bit fuzzy, but it involves an arcane device from computer science called a stack.
A stack is a place to remember things. We push facts onto the stack, do some work that might make us forget those facts, and then come back to the stack. The "last-in, first-out" nature of the stack helps us keep our work organized and flowing. Until we find that something in our work alters a fact we have buried down the stack somewhere.
So it's useful to have another stack, and we can shift facts from one stack to the other, keeping them in order, and then shift them back when we are done.
Some people don't like stacks. Too many constraints. It's easier to have a pigeon-hole rack of boxes to keep those pesky facts in, and then you can just grab whatever facts you need when you need them. Pigeon-hole racks are useful for many things, like post-office boxes and such. But they really don't provide a basis for remembering what you need to work on next in solving a problem, or in maintaining control of a machine. Stacks provide that organization.
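For the programmers reading along, here is a minimal sketch of the difference, in Python. The names (facts, holding, pigeon_holes) are mine, purely for illustration, not taken from any particular machine:

```python
# A stack is just a list we only touch at one end.
facts = []
facts.append("the switch was off")      # push a fact
facts.append("the motor is spinning")   # push another
latest = facts.pop()                    # last in, first out

# A second stack lets us shift buried facts aside, in order,
# and shift them back when the detour is done.
holding = []
while facts:
    holding.append(facts.pop())         # move everything over
# ... do the work that needed the buried fact ...
while holding:
    facts.append(holding.pop())         # restore the original order

# A pigeon-hole rack: grab any fact whenever you like,
# but nothing tells you what to work on next.
pigeon_holes = {"switch": "off", "motor": "spinning"}
print(pigeon_holes["motor"])
```

The stack tells you what to come back to next; the pigeon-hole rack just holds things.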
This is the best basis for the partition that I know of:
- Machines that only have to track a few facts (operating states) can usually be understood. But they aren't very flexible. Think of a simple light switch (sketched in code after this list).
- Machines that have a single stack to track facts can be described and understood in most cases. The stack provides our basis for understanding. If we or the machines lose track of what we are doing, we can go back to the stack to remind ourselves.
- Machines that have two (or more) stacks are a bit trickier, but still are generally within reach, as long as we don't end up shifting too many things from one stack to another.
- Machines that keep too many facts (states) randomly accessible are easy to lose control of, easy to think we understand when we don't.
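To make the first class concrete, here is a minimal sketch of a light switch as a machine with exactly two states to keep track of. The class and method names are mine, purely illustrative:

```python
# A light switch: two states, no stack, no memory beyond "which state am I in?"
class LightSwitch:
    def __init__(self):
        self.state = "off"

    def flip(self):
        # The whole behaviour is one small table from current state to next state.
        self.state = "on" if self.state == "off" else "off"
        return self.state

switch = LightSwitch()
print(switch.flip())  # on
print(switch.flip())  # off
```

Every behaviour of that machine is visible in one small transition table, which is why we can honestly claim to understand it.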
Unfortunately, human language, when analyzed mathematically, falls into that last class. So does pretty much every machine, tool, or system with enough flexibility to be useful. Animals, also, when we try to analyze them in some methodical way, fall into the last class. Humans? Of course we fall into that last class. Simple feels free, but we quickly find simple too confining. Being free of cares is not freedom.
The tools we call computers? Well, that's a good topic for discussion and consideration. Later.