I’ve had a bad software day. Occasionally I want to just toss the computer in the rubbish bin (Aussie/UK slang). More often I accept it, move on, and try something different. Usually that works. Today it did not seem to matter what I tried.
It would be comical if it were not happening to me. Most computer users could sympathize with the predicament.
The most common problem I have is when computers will not talk to each other. In this case it is between a virtual machine and a host computer. All I wanted to do was use the nifty feature of being able to debug a .Net service from another machine. You would think that this kind of operation would be relatively straightforward. That assumption is dead wrong.
http://www.ross.navy.mil/images/oman/fortress.jpg
There is a classic battle being waged in the computer world. The good guys want to keep their machines running and isolated from the actions of the bad guys. The bad guys treat it like a game to take over and do whatever they want to the good guys’ machines. The classic media model is Mr IT versus the teenage hacker. This battle had been going on long before Mr IT was born, but the point is that the reach of the Internet now makes it so much easier to attack. There are certainly hackers out there reading these words in the hope that I might reveal something that would make it that much easier to break into something somewhere. I’m not really interested in that battle directly for this post. I’m more interested in talking about the consequences.
Because of the hacker pressure (and even research groups that specialize in exposing weaknesses), software companies are putting in new code to make their products more secure. Other companies are building new products to make insecure things more secure. It is kind of like going to the airport. As the years progress, it gets harder and harder to get through to the other side. It could get to the point that many travelers will just skip traveling in the first place. I think the same kind of thing is happening with software. Microsoft is continually sending out updates to address security flaws. Some of these updates are rather radical in nature (like XP SP2, for example). Because of these changes, it has become very difficult to configure XP correctly so that valid things are still allowed to happen. The same is apparently even more true for Vista.
If two machines will not talk to each other, the cause is most likely the firewalls installed on them. It could also be something on the fringe, like anti-virus software, but it usually comes down to the policies within the firewalls.
This is where I take the gloves off. This is my opinion and is not a statement from Citrix. Most firewall software sucks. I’m not just saying that it is bad, I’m saying that it is outright terrible. You would expect software that can potentially shut down access to the outside world to be a bit easier to manage and to offer easy ways to form links between two machines. I suspect that much of the firewall framework is left over from the Unix model, which tends to have little sympathy for the average user.
This is where it gets even uglier. Today I tried to change the configuration on my firewall (I’m not going to say whose it is, but you might be able to figure it out). I was having terrible trouble getting the two sides to work together and figured that it would be good to look at the policies and edit them if necessary. I opened up the manager for this and almost straight off got a scripting error for what looked to be a web page popup. This was very odd. A quick search on the net revealed that this was a well known problem with mixing IE 7 with this particular firewall. And, get this, the company had not provided a fix even though IE 7 has been available for months now. They knew about it last October at least. The problem has grown so annoying that a particular person, not connected with the company at all, has volunteered to fix the offending DLL in the firewall for $5 over PayPal. And it is not a scam! It really is being fixed by this person, and people are just excited that it works for them after the fix. This is terrible! What the hell is going on? How did it get this bad?
Why do we buy bad software? In my case I didn’t buy it. My company provided it to me. Could I buy something else instead and get reimbursed for it? Probably not. Company policies on purchases tend to force people to just use one thing. So, is this a potential answer? Yes, but it probably isn’t the whole story.
The biggest names get the biggest number of users. It’s a trust thing, obviously. Even if their product is inferior and does not do the job it was meant to do, people will still buy it based on the name. I think this is probably what is going on. I admit that I would be hard pressed to come up with an alternative product. I would need to research it and perhaps even try a few before I was happy.
Another reason we end up with bad software is bundling. If there is a collection of average products for a certain price, we will buy that before going out and buying individual products. This has been the whole Microsoft philosophy of bundling things with the operating system over the last 20 years. It might not be the best, but it comes with all these other things. I would classify this as the belief that you are getting more for your money if you buy in clusters. It appeals to the buyer but not necessarily to the user. If the buyer and the user are the same person, then the buyer will learn to do something else. If the user and the buyer are different people, then you have a pattern that could go on for decades.
If you could not tell, I’m a bit grumpy about this situation. The last reason I think we buy bad software and use it is that we have too high a tolerance for poor behaviour and performance. We expect it to be bad and we let it be bad in the hope that it does something good and will just leave us alone. I couldn’t care less about firewalls personally for my day to day job. I want them to protect my systems, but I also want them to let my systems work together. I would prefer to have a hands-off relationship with my firewalls, but I’m willing to help them learn what the best thing to do is. Is that too much to ask?
This really doesn’t leave me or you in a better place. Perhaps this is my way of dealing with a very frustrating day. It is like blogging therapy, isn’t it? Does any of this ring a bell for you?
I could spend the next fifteen minutes complaining about how Microsoft made it so difficult to debug .Net remotely (using DCOM for goodness sake!) but I’ll stick with the firewalls tonight. That is my long term beef with non-friendly security barriers.
(many hours have passed)
There was a major setback with this post last night. The server citrite.org failed as I tried to update the post. I guess I’m not supposed to post this. Or maybe it was trying to give me another example of software going bad. Either way, I lost the last few paragraphs. WordPress was nice enough to save a copy with its autosave feature. It just didn’t get that last bit. Perhaps the rant factor got too high.
I was thinking about this last night after it failed and I concluded that there must be something positive to do or say. There is. It is possible to learn from the mistakes made.
The question is “How can you make bad software good?”.
You could spend a lifetime answering that question. I’m only going to spend the next few minutes highlighting what could be done.
- Keep it simple
- Make it a very visual experience based on existing real-world models
- When it fails, make sure it can recover without data loss
- When it fails, allow for it to find another way to complete the task
Simplicity is the key to making software more stable and also easier to manage. Complexity breeds instability and is difficult to maintain from both a user and an administrator point of view. It has been said that personas can help to simplify a software product by giving focus to the market it is meant for. Picasa is a good example of this. Most users only want certain features anyway. The advanced features are rarely used, but the programmers thought it would be cool to include them (thinking that they themselves are typical users). Having too much just muddies the waters, and it really does not help the average user get things done.
Tasks based on text are not as effective as visual diagrams. The mind is more tuned to analyzing pictures than to building pictures in the brain based on words. This is especially true since words can often be misunderstood. In the firewall world, many of the controls are based on text that is not even English. What user is going to know the depths of TCP/IP with all its lovely acronyms? (Thanks in part to the Unix programmers for that.) It is unrealistic to expect that a user (like an older parent) is going to know anything about this. The result is that the user is going to make some poor choices and either make things too secure or not secure at all. If, however, this problem were presented graphically, with real systems carrying real names, it would be more like forming friendships. If I like that system and trust it, then I’m going to make it my friend. Otherwise I’m going to be wary of newcomers. They say that a picture is worth a thousand words. I think it is worth more than that when it comes to security software. Personally, I would be impressed with a more graphical network display with active traffic represented. It is just as valuable to know what gets through as what does not. I’ve seen some of this in network management software from years ago, but I have not seen it in any PC firewall software.
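To make the friendship idea a little more concrete, here is a toy sketch in Python. The machine names and trust levels are invented for illustration, and this has nothing to do with any real firewall product; it is just the kind of model that could sit underneath a friendlier, more graphical display:

```python
# Toy trust model for a firewall that treats machines like acquaintances.
# Names and levels are made up purely for illustration.

TRUST_LEVELS = {"friend": 2, "acquaintance": 1, "stranger": 0}

class FriendlyFirewall:
    def __init__(self):
        self.trust = {}                     # machine name -> numeric trust level

    def befriend(self, machine, level="friend"):
        """Record how much we trust a named machine."""
        self.trust[machine] = TRUST_LEVELS[level]

    def allows(self, machine, sensitive=False):
        """Let friends in; demand more trust for sensitive traffic."""
        level = self.trust.get(machine, TRUST_LEVELS["stranger"])
        required = TRUST_LEVELS["friend"] if sensitive else TRUST_LEVELS["acquaintance"]
        return level >= required

fw = FriendlyFirewall()
fw.befriend("dev-laptop")                        # the machine I like and trust
print(fw.allows("dev-laptop", sensitive=True))   # True
print(fw.allows("random-box"))                   # False, wary of newcomers
```

A user never needs to see any of this, of course. The whole point is that the picture on top would show named machines and the traffic flowing between them, and clicking one to make it a friend would be all the policy editing most people ever need.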
If something does fail, then you do not want it to lose data or disrupt communication. In the firewall case, this means telling the firewall that it is misbehaving and cutting off too much. It would be like a virtual slap to bring the firewall back into line. The trouble is that it is hard to keep live traffic from being lost. Editors, on the other hand, can potentially recover. Auto-save has been a paradigm in editors for years. However, no editor (that I know of) can guarantee full recovery. If you typed something between saves, it is gone. That is what happened to me last night. I think it is time this moved forward. There is plenty of technology available to handle it. For example, the editor could form links with other programs, on the local system or on remote ones, that gather all the data in case of a failure. These programs would be fairly simple data collectors that are basically there just to guarantee that nothing is lost EVER. If you distribute where the input data lives, it is that much less likely that the data will be lost. So, if the editor does die, the other programs live on. They could guarantee permanent storage of the data so that when the original program finally comes back, everything is right where you left it. The beauty of this is that it could be completely offloaded from the local CPU, so even if the machine had a blue screen you would still have your data intact. I’ll call this dispersion theory. If you disperse your data through the network as it is created, you can gather it later when things go wrong. I also suspect this is closer to how the brain works, with redundancy built in.
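Here is a rough sketch of what I mean by dispersion, written in Python purely for illustration. The collector addresses are invented, and a real version would need session identifiers, ordering, and acknowledgements, but the basic idea is that every bit of typing is mirrored, fire-and-forget, to machines that will outlive the editor:

```python
# Sketch of "dispersion": mirror every edit to one or more collector machines
# so that a crash of the editor (or the whole box) never takes the text with it.
# The collector addresses below are made up for illustration.

import socket

class DispersedBuffer:
    def __init__(self, collectors):
        self.text = []
        self.collectors = collectors        # list of (host, port) pairs
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def type(self, chars):
        """Append to the local buffer and mirror the same bytes elsewhere."""
        self.text.append(chars)
        for addr in self.collectors:
            try:
                # Fire-and-forget; if one collector is down, the others still have a copy.
                self.sock.sendto(chars.encode("utf-8"), addr)
            except OSError:
                pass                        # a real version would queue and retry

    def contents(self):
        return "".join(self.text)

buf = DispersedBuffer([("192.168.0.10", 9999), ("192.168.0.11", 9999)])
buf.type("This sentence would survive a crash of the editor process.")
```

A collector would be nothing more than a loop that receives these packets and appends them to permanent storage; when the editor finally comes back, it asks the collectors for everything it missed.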
If a task fails with a specific technique, then there should always be a backup. I used to use this strategy with an old program I wrote at IBM years ago called QCONFIG. It was a simple DOS program that would go and collect how your machine was configured and what kind of hardware it had. At the time, it was very popular within IBM and often used for diagnostics. Part of the secret of how it worked was using alternative techniques to get the information. It would always try the best one first, but if the system did not support that function it would move on to the next best. This layering approach meant that it was rarely unable to find out something about that area of the machine. Most programmers like to focus on just one technique of doing things. That might be fine in the latest environments, but it might not apply to the older ones. Or perhaps there is a timing failure with a specific transaction and the program needs to handle the transaction differently or through a different interface. The point is that redundancy is a good thing when it comes to handling errors. I’m reminded of the movie “The Hunt for Red October”, where the US submarine detects something unusual in the ocean. The software thinks it is some kind of seismic activity. I remember the technician saying that the software was “going back to its roots”. This odd behavior turns out to be a major plus for the crew when they realize that the signal is actually a Russian submarine with new quiet propulsion technology. The point of bringing this up is that sometimes software that is considered to have no value does have value if it makes failure less likely. It is still possible to keep things simple if the layered interfaces are hidden from the top layers. In fact, having a layer dedicated to dispatching the tasks is key to hiding the complexity of dealing with more than one situation.
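To show the layering idea in miniature, here is a sketch in Python. The probe functions are hypothetical stand-ins (QCONFIG was a DOS program and worked nothing like this), but the dispatch pattern is the same: try the best technique first and quietly fall back to cruder ones so the caller almost always gets something:

```python
# Layered detection: try the best technique first, fall back to the next best.
# The probe functions are hypothetical stand-ins for illustration only.

def probe_via_new_api():
    raise OSError("not supported on this machine")

def probe_via_old_api():
    raise OSError("interface timed out")

def probe_via_guesswork():
    return "generic hardware, details unknown"

def detect_hardware():
    """Dispatch layer: hides the fact that several techniques exist."""
    for probe in (probe_via_new_api, probe_via_old_api, probe_via_guesswork):
        try:
            return probe()
        except OSError:
            continue                # that layer failed; drop down to the next one
    return None                     # every layer failed, which should be rare

print(detect_hardware())            # -> "generic hardware, details unknown"
```

The caller only ever sees detect_hardware(); the fact that three techniques exist, and that two of them failed along the way, stays hidden below the dispatch layer.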
If you have made it this far, congratulations. It took me two days to get this far. I think the software was trying to inspire me to say something beyond what I would normally say.
Thanks for being patient. Here’s to a better software day!