The things you can fix in an organisational process are not the things that are broken
The process you can see is not the process that is happening
I watched a fascinating Equinox programme on Odyssey Channel last night. It was from 2001, called Battle of the Robots – The Hunt for AI, and followed three teams in their attempts to get real machines to behave like the ones in the movies. Apart from being an object lesson in the impossibility of defining AI, let alone achieving it, the programme was a beautiful example of how completely normal human processes can sabotage and eventually destroy the very things they think they are trying to achieve. It also intersected with a conversation I had with a potential client a couple of days ago, which then led me straight to the Tao.
The potential client and I have been talking since last year about some work they say they want me to do in their organisation. It has to do with the way information is managed and controlled between the organisation and its client group, and with the website that is supposed to mediate and facilitate the process. Except that it doesn't. It fails on many levels.
My contact and I were talking about how to go about my intervention, and she simply added to a long list of postponements; there is always a fresh reason why they are not yet "ready" for me to start the programme. But in fact they will never be ready: the readiness is the problem I'm supposed to be working on. I tried to explain that the reason their discussion tool, for example, sucks so badly is that this is how the organisation thinks; it wants to control the flow of information, and so the decisions it makes will all arise from that unspoken, unacknowledged imperative. It is not the people who do this thinking, it is the organisation that does it, and it guides their decisions into reliable, comfortable paths. But the thinking process is not on a scale that is accessible to the people in it.
Her response to that was to note that I am "a great believer in process". Actually, no: I think most overt organisational processes are a waste of time.
All of this organisation's decision-making processes are clear, documented and carefully followed, and all of them lead inevitably to bad information management decisions because (Ding! Ding!) the processes we can see and document are not the processes that are going on. We can consult on, reach consensus on and make changes to the sequence of events that describes how decisions are made, and it will make not one iota of difference to the quality of those decisions. The kind of decisions an organisation makes is entailed in its DNA: the spoonbill will always do spoonbill things, the tiger will be tigerish, and the way they solve the problem of hunger or a place to sleep will arise effortlessly from their needs and their available tools.
That's why I don't want to fiddle with process: partly because I can't understand what is really going on to produce the results we all agree could be much better, and partly because what I want to do is give the spoonbill a set of claws and the tiger, wings. Then see how the decisions change, add some more tools and iterate, enabling the new thinking to evolve and emerge from an organisation that is being changed from the new claws and the unfamiliar wings inwards. I also suspect that the organisation, as it is, understands that and, through its decision makers, tries to avoid those changes.
For an excellent book on the subject, try Richard Farson's Management of the Absurd. I especially like his comment that "We want for ourselves, not what we are missing, but more of what we already have". The physically able want more physical ability, the intellectual want more brainpower, the beautiful want more beauty. But if what we already have could solve the problem, we wouldn't have the problem. Problems are, like pain, a useful sign that something is wrong and that a change is needed to fix it, not more of what you already have.
So, how did I get here from watching a programme on AI? Damned if I know, but the trigger was watching the three participants in the programme trying to hash their ideas into something concrete that worked. Hugo de Garis's approach was to go virtual with an extremely high-powered processor and work on evolving neural nets. Whether that is the way is open to question, but I admired his willingness to ignore the physical in an attempt to get to the essence of thinking, recognition and so on. The project became bogged down in rampant organisational snits and bitchery and eventually fell into the dotcom hole, so we'll never know the answer there, although a study of the human relationships and their contribution to the failure might be fun, if hazardous.
The MIT project and Steve Grand's almost solo effort were both interesting because they both tried to build anthropomorphs, to one degree or another. Both had heads with eyes and ears, and both had two arms, one on each side a little below the head, attached to torsos. And both failed: the MIT project because it wouldn't work, and Grand's, paradoxically, because it did.
Towards the end, Steve finally fired up his little machine called Lucy, and discovered that it could "see", "hear" and "speak", exactly as he had planned. And then he sat there and looked at the system in complete silence. His wife, who was also his assistant, remarked that he was, for the first time she could remember, speechless.
It seems to me that both he and MIT had revealed something fundamental about how we think about intelligence: we can't begin to think about it until we put it somewhere, preferably somewhere that looks like us. But what if intelligence doesn't have a somewhere hook that we can grab?
Steve Grand's speechlessness says to me that he had discovered the Tao of AI which is the inverse of the truth that "a journey of a thousand miles begins with one step". Steve had taken that one step and then been floored by the Tao which says something like, "on the journey towards AI, every step leaves you as far away from your goal as before".
I think we can't create AI and never will, because intelligence is not a property of artifice; it is an emergent property of systems that extends without boundary through time and space. We can plug into it, express it, act in it, and we often misinterpret it, but we can't contain it or make it, because it exists on a scale to which we don't have access, a bit like organisational decision-making processes. No doubt Archimedes was right when he asked for a lever long enough and a place to stand so that he could move the world: in principle he could do it, but in fact he couldn't; the picture was a mirage. The search for AI is the modern equivalent of that lever. We can imagine it, but we won't ever build it.
The intelligence we can imagine is not the intelligence that we use
The process we can see is not the process that is happening
The things we can fix are not the things that are broken.
Time for tea.