Technology Faith

I just returned from America a few days ago.  The BriForum conference ran in Chicago from June 16 to 18 and I was lucky enough to attend.  The conference deserves a proper write-up, but I’ll save that for a separate post.

I didn’t have time to do any blogging during this work/holiday trip so this is the first post in around six weeks.  It seemed strange at first to stop but now it seems just as strange to start up again.

The topic of this post is what I would call technology faith.  The title probably gives it away but for those of you that aren’t already several paragraphs ahead, I’ll explain.

We, as a civilization, have built many inventions, and some of them have become embedded in our society as daily tools.  Things like the automobile, the television, and the elevator are used without a second thought.  It is simply expected that these things will work.  We have faith that they will do what they are meant to do and will not break down.

The newer the technology, the more likely it is to be faulty.  We still have faith that it will work well, but we are often tolerant of its failings given the benefits and its young age.

Enter the era of computers and suddenly the opportunity for failure increases exponentially.  Not only can things fail at the hardware level, they can also fail because of bad code.  Add to this the near-limitless combinations of components and software that users expect to mix together, and you have something that is very hard to test and guarantee.

This is certainly not news.  However, during this last trip I came to realize a few things that were new to me.

People are still treating computers like a new technology.  The truth is that the core computer architecture dates from the 1940s.  It was heavily popularized in the business world in the 1950s and boosted further by the introduction of PCs in the 1970s.  The point is that computers have been around for quite some time.  Yet people still act as if the technology were brand new.

Looking back, technologies that we use in our daily lives were also at one point considered unstable or difficult to use.  The original automobiles were so complicated that it took quite a bit of training to operate one.  It took nearly 20 years to build a successful popular automobile.  In general, it takes about 10 to 20 years to stabilize and popularize a new technology.

So why hasn’t this been true with computers?

The simple answer is that computers keep obsoleting their previous generations.  Each new generation inherits a core base but performs so much better that the original weaknesses are overlooked.  In other words, there is no common technology base, because it keeps getting thrown out.  The real twist is that each new generation is that much more complicated than the previous one, which all but guarantees that the new platform will be more fragile than the last.

I had a bizarre thought about how much further computers have to go.  For example, if the human brain were modeled like a computer, any fault would cause instant paralysis.  Worse, most unexpected situations would likely trigger an error.  Since the world is full of unexpected situations, that quickly leads to a very unhappy experience.

The point of this is that computers should function more like a brain in their ability to keep working regardless of the environment.  Obviously the brain has limits as well, but the ability to carry on rather than freeze is a key difference.  There is existing technology that would already allow more tolerance of unknown events, but the real flaw seems to be the strictly binary nature of computer processing.  Most decisions come down to true/false tests, so vague data never leads to a “maybe”.

If code accesses memory in a way that causes a fatal exception (in a driver, say), it seems a bit extreme to take down the whole machine.  I understand why, and under the current thinking it is the only way to go (short of exception handling), but the point is that the computer is intolerant simply because it has no recourse for this kind of problem.  It’s as if something bad happened and, having no idea what to do, the computer just gave up.  A brain wouldn’t give up that easily.  It prefers life, and either finds a solution or finds a way to avoid answering the annoying question.
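To make this concrete, here’s a toy sketch in Python (my own illustration; the names run_isolated and flaky_sensor are invented, and a real kernel obviously can’t wrap a bad driver in try/except like this) of the “degrade instead of die” idea: the faulty component is isolated, and the system falls back to a stale-but-usable value instead of halting everything.

    # Toy sketch: isolate a flaky subsystem and degrade instead of dying.
    # Names here are invented for illustration; this is not how a real
    # kernel handles driver faults.

    def run_isolated(task, fallback, name):
        """Run a subsystem; on any fault, report it and fall back."""
        try:
            return task()
        except Exception as exc:
            print(f"{name} failed ({exc}); carrying on with a fallback value")
            return fallback

    def flaky_sensor():
        # Stand-in for the misbehaving driver mentioned above.
        raise IOError("bus timeout")

    last_known_temp = 21.5  # stale but usable reading
    reading = run_isolated(flaky_sensor, fallback=last_known_temp, name="sensor")
    print(f"temperature: {reading}")  # the machine keeps running

This is the same instinct behind supervisor-style designs (Erlang is the famous example), where a failed component is restarted while the rest of the system carries on.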

At this point it would be wiser not to be so faithful to computer technology.  It seems like a good time to make it more accountable for its bad behavior.  I suspect that people either don’t know it could be better or have simply become used to the weaknesses.  Personally, I’m getting tired of errors with no cures that stop me from getting something done.  It’s much better to be intolerant and expect change.

It seems like the first step is to say “we aren’t going to take it anymore”.  The next step is for hardware and software companies to evolve the model so that the platform becomes much more reliable and usable.  There is still too much uncritical love of technology in the industry; it would be much wiser to address the old weaknesses and provide a truly usable platform.  Apple does seem much more interested in this approach than others.

One comment on “Technology Faith”
  1. http://jonathanc.myopenid.com/ says:

    Welcome back Jeff!

    The problem with faster computers is that they crash faster 😉

    Most of the time, the ability to recover or continue depends on the actual breakage. For example, if the handle of your coffee mug is broken, you (the human brain that all machines need to mimic) can still use the mug if you don’t mind lukewarm coffee. However, if the mug is shattered, then your options are pretty limited.

    There have been quite a few discussions about whether crashing is a better option than trying to recover and continue, which is particularly interesting from a security perspective. A couple of links below, but I am sure there are plenty more.

    http://blogs.msdn.com/larryosterman/archive/2008/05/01/resilience-is-not-necessarily-a-good-thing.aspx

    http://blogs.msdn.com/jmstall/archive/2007/07/26/there-are-things-worse-than-crashing.aspx

