In my career, I’ve been paid to program at ten different companies. Of those companies, only two have taken computer-related security seriously, and three have had serious security breaches. There is no overlap between those two groups.
Of the three breaches, two were known security issues that had been brought to management’s attention, but management chose to ignore them. One of these caused serious financial harm¹. Due to the nature of the problem and management’s reluctance to discuss it, we couldn’t determine the exact damage, but between known financial losses and the cost of responding to the incident, I would conservatively estimate that we lost at least $100,000 and possibly as much as a quarter million. Fixing the problem before the breach would have taken only two or three days of developer time. Given that relatively small cost, why didn’t it get fixed?
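To put rough numbers on the asymmetry (illustrative only: the incident figures are my estimates above, and the developer day rate is an assumption, not a figure from our books), suppose a hole like ours has even a 10% chance of being exploited in a given year:

    Expected annual loss: 0.10 × $250,000 = $25,000 per year
    Cost of the fix:      3 developer days × ~$1,000 per loaded day ≈ $3,000, once

On those assumptions, the one-time fix costs about an eighth of a single year’s expected loss, and the gap only grows the longer the hole stays open. Yet it still didn’t happen.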
Let’s say you buy a house. The bank requires you to take out homeowner’s insurance, and even if it didn’t, you probably would. Why? No one else shares the risk, and you have no one to answer to but yourself. You’d be a fool not to buy the insurance. For a company, paying attention to security is the same kind of insurance: it costs money up front and you may never need it, but if you do, you’ll be grateful you paid.
Perhaps the largest problem in dealing with security is the “blame game”. Everyone wants to pass the buck on justifying the cost. Management is often under huge pressure to either immediately make money or immediately save money, so no one wants to be the person responsible for justifying the cost of added security, which on the surface appears to do neither. Further, there’s an opportunity cost: every hour spent on security issues is an hour not spent on new features or “real” problems. Couple that with a poor understanding of risk, and security holes don’t get fixed.
Another problem is that we often don’t know whether our security fixes work. One programmer told me that a manager rejected his fix on the grounds that plugging one security hole wouldn’t plug the others; no work on the problem was chosen over incomplete work. A manager’s ignorance helped ensure that a system remained vulnerable. Perversely, there’s an understandable business reason for this. The business side knows that many programmers, left to their own devices, can find tons of fun things to do that don’t improve the bottom line. Programmers often have to be reined in, and it can be very difficult to tell whether that’s what’s happening when it comes to security.
Also, how do you judge whether your security fixes have worked? Unless you’re watching closely, there’s a good chance a secure system will quietly repel every assault and you won’t even notice. A successful result can therefore look like money wasted!
Admittedly, my work history is a small sample, but knowing that 30% of the companies I’ve worked for have suffered serious computer security failures is disturbing, and constantly reading about the security failures of other companies disturbs me further. For the people making these decisions, if things go really poorly, the worst that’s likely to happen is that they’re looking for a new job. That’s a small price to pay in the grand scheme of things, and self-interest makes our industry unlikely to change. Who’s going to agitate for better data protection laws knowing that they themselves might be bitten by them? I do, because I think it’s the ethical thing to do, but I’m clearly in the minority.
1: One of the other breaches probably would have bankrupted the company due to a serious contractual violation, but it was quickly fixed and hushed up.