June 21, 2010

My theory on software problems

We seem to be rapidly converging on three types of problems in the software domain:

A) Problems that can’t be solved easily by humans, but are trivial for computers, even at large scale

B) Problems that can’t be solved easily by humans or computers (primarily because of scale)

C) Problems that are easily solved by humans, but nigh-impossible for computers

Over time, we continue to see set (B) shrink and set (A) grow. But we see very little improvement in set (C).

Examples of (C) include:

  • Voice recognition
  • Establishing connections between pieces of data, based on semantics (see the toy sketch after this list)
  • Natural Language Processing
  • Monster AI (in games)
  • Forecasting & Predictions
  • Troubleshooting and Debugging
  • Developing Software
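
As a toy illustration of the semantics item above (my own example, not from the original post): surface-level token matching, the kind of thing computers do trivially, says these two records are nearly unrelated, while a human sees at a glance that they describe the same person.

    # Bag-of-words overlap between two descriptions of the same person.
    a = "NYC-born author, b. 1955"
    b = "Writer born in New York City in 1955"
    overlap = set(a.lower().split()) & set(b.lower().split())
    print(overlap)  # {'1955'} -- almost no overlap, despite identical meaning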

Over the last 20 years (or so), I have seen people predict confidently that any one of these problems would be easily solved in the next few years, and, without exception, they have been wrong.  Not a little wrong.  Not slightly wrong – spectacularly and utterly wrong.

I know this because I remember the frustration of arguing with the “visionaries” about these problems. They would posit some claim, such as “We will see computers automatically connect semantic markup.” I would object that this was a far more complicated problem than they thought, and they would sniff at me and roll their eyes; I “just didn’t get it.” Or they would predict that no one would be writing software in 10 years, or that IT would disappear, and so on.

Well, I’m tired of this disdain for the real world, and the track record shows exactly how right I was.

The fact is, set (C) above is the set of things that require human-level AI to solve. But once we have human-level AI, all of these problems become trivial at the same time. (This is analogous to NP-completeness: just as an efficient algorithm for one NP-complete problem would yield one for all of them, a human-level AI would crack every problem in this set at once.)
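
To make that analogy concrete, here is a minimal sketch (my illustration, not the author's) of the “solve one, solve them all” structure: given a hypothetical human-level AI oracle, each problem in set (C) reduces to a single question for that oracle, much as NP-complete problems reduce to one another.

    class HumanLevelAI:
        # Hypothetical oracle; nothing like this exists yet, which is the point.
        def answer(self, question, data):
            raise NotImplementedError("requires human-level AI")

    def transcribe(audio, oracle):
        # Voice recognition as a single oracle call.
        return oracle.answer("What words are spoken in this audio?", audio)

    def debug(program, failing_test, oracle):
        # Troubleshooting and debugging as a single oracle call.
        return oracle.answer("Why does this test fail, and what is the fix?",
                             (program, failing_test))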

We chip away at the AI problem a little every year: computers get faster, algorithms get smarter, and things that were essentially impossible become merely difficult. But until we get a near-complete AI model working reliably, I submit that the problems at the top of this post will not go away.

Think about it: each of these problems requires a rich understanding of human context, of judgement, of positing alternate universes in one’s head to determine whether alternate paths lead to success or failure. And until a piece of software has the ability to do those things, it will not solve these problems.
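
Here is a minimal sketch (my framing, not the post's) of what “positing alternate universes” looks like computationally: enumerate candidate actions, simulate each one forward, and keep those whose imagined future succeeds. The loop itself is trivial; the hard part the post points at is building simulate() and succeeded() for the real world.

    def choose(actions, state, simulate, succeeded):
        viable = []
        for action in actions:
            future = simulate(state, action)   # imagine the alternate path
            if succeeded(future):              # judge success or failure
                viable.append(action)
        return viable

    # Toy usage: which increments keep a running total at or under 9?
    print(choose([1, 2, 3], 7, lambda s, a: s + a, lambda f: f <= 9))  # [1, 2]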

So from now on, my answer to all pretenders to this throne will be: “The problem you describe requires human-level AI to solve. If you want to make progress on your problem, go solve the human-level AI problem first.”

/rant off