The Halting Problem, proved undecidable by Alan Turing in 1936, states that it is impossible to construct a general algorithm that can decide, for every possible program and input pair, whether the program will terminate. In other words, the only general way to determine how a program will behave is to execute it, and even that may not give you a definite decision (just because an algorithm appears to be stuck in a loop, for example, doesn't automatically imply that it will never terminate at a later stage). The keyword here is general: it may well be possible to tell whether a particular instance of an algorithm will terminate.
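The classic argument is a proof by contradiction, and it is short enough to sketch in code. The sketch below assumes a hypothetical general function `halts(program, argument)` (a name I'm inventing purely for illustration) and shows why no such function can exist:

```python
# Proof sketch: ASSUME a general `halts` oracle exists.
# (It cannot -- which is exactly what the contradiction below shows;
# the body here is a placeholder, not a real implementation.)

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) terminates."""
    raise NotImplementedError("no general implementation can exist")

def paradox(program):
    # Do the opposite of whatever `halts` predicts for program(program).
    if halts(program, program):
        while True:   # loop forever if predicted to halt
            pass
    else:
        return        # halt immediately if predicted to loop
```

Now consider `paradox(paradox)`: if `halts(paradox, paradox)` returns True, then `paradox(paradox)` loops forever, and if it returns False, it halts. Either answer contradicts the oracle's prediction, so a general `halts` cannot be written.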
The Halting Problem is often brought up to suggest that it is impossible to write perfect application security assessment tools. This is formally true; consider, for example, the limits it places on static code analysis tools (which are nonetheless useful regardless, more on this below). However, I've come across numerous situations where people invoke the Halting Problem to form irrational arguments. The conclusions reached in these situations may end up being true, but the arguments themselves are illogical if the premises and inference do not flow into the conclusion.
Here are two examples of bad arguments I’ve heard people make in the name of the Halting Problem:
Claim: Static code analyzers are not useful at all because they are limited by the halting problem.
Response: It is true that a code analyzer cannot include a general signature that solves the halting problem. However, code analyzers are extremely useful regardless of this limitation. They can parse source code for known patterns of vulnerabilities with a reasonable degree of assurance. Most high-risk vulnerabilities affecting web applications (think 80-20 rule) turn out to be instances of popular classes of defects, many variants of which are not limited by the halting problem. Therefore, static code analyzers are useful to information security regardless of the halting problem.
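To make the point concrete, here is a minimal sketch of pattern-based scanning. The rule names and regular expressions are illustrative assumptions of mine, not any particular product's rule set; the point is that flagging well-known defect patterns requires no reasoning about termination at all:

```python
import re

# Two illustrative rules for well-known vulnerability classes.
# (Rule names and patterns are assumptions for this sketch.)
RULES = [
    ("SQL injection (string-built query)",
     re.compile(r'["\']SELECT .*["\']\s*\+')),   # string literal concatenated into a query
    ("Code injection (eval on data)",
     re.compile(r'\beval\s*\(')),                # eval() applied to anything
]

def scan(source: str):
    """Return (line number, rule name) pairs for every pattern match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for name, pattern in RULES:
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = '''
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
result = eval(user_input)
'''
print(scan(sample))
```

A real analyzer works on parsed syntax trees and data flow rather than raw regexes, but the principle is the same: known defect classes reduce to mechanically checkable patterns.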
Claim: Logic flaws (that give rise to security vulnerabilities) in applications can never (ever!) be found by an application assessment tool because of the halting problem.
Response: Assessment tools could indeed have been far more effective at automatically finding logic issues had there been a solution to the halting problem. However, this limitation does not imply that assessment tools can never be programmed to find logic issues with security implications: it is possible to reduce some defects that appear to be 'logic-only issues' to known patterns that can be automatically parsed with a reasonable amount of certainty. Therefore, it is not true that *all* logic flaws are impossible to detect via automated analysis.
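For example, a missing authorization check is usually thought of as a pure logic flaw, yet under a project convention it becomes a mechanical pattern. The sketch below assumes a hypothetical convention that request handlers are named `handle_*` and must call a `require_auth` helper (both names are my assumptions for illustration), and walks the syntax tree to find handlers that never do:

```python
import ast

def handlers_missing_auth(source: str):
    """Flag functions named handle_* that never call require_auth().

    The handle_*/require_auth convention is an assumed project rule;
    the technique is what matters: a 'logic' flaw reduced to a pattern.
    """
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("handle_"):
            # Collect the names of all plain function calls in this handler.
            called = {
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            }
            if "require_auth" not in called:
                missing.append(node.name)
    return missing

sample = '''
def handle_view(request):
    require_auth(request)
    return load(request)

def handle_delete(request):   # logic flaw: no auth check
    return delete(request)
'''
print(handlers_missing_auth(sample))  # -> ['handle_delete']
```

The check is sound only relative to the assumed convention, but that is exactly the point: once an apparent logic issue is pinned to a convention or pattern, automation can catch it without solving anything undecidable.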
I’ve noticed that marketing departments of some information security companies like to throw around the limitations implied by Turing’s result to sell their consulting services. I agree that a human brain must always be involved during security assessments (a fool with a tool is still a fool), so much so that I consider assessment tools to be only a first-pass sweep for vulnerabilities during any security assessment. This is because security assessment tools are limited in their ability to find many classes of security issues; however, their limitations arise from factors in addition to the undecidability result proved by Turing. Furthermore, as I stated above, certain problems that may at first glance seem purely logical in nature can be reduced to specific patterns based upon probability and frequency.
I’ve also noticed that some security tool vendors like to use the halting problem as an excuse for why their commercial tools lack the ability to test for certain classes of vulnerabilities, even when the feature set being requested is not bound by the halting problem.
It is possible to arrive at true conclusions from flawed premises and inferences. However, such arguments are inherently flawed because, for an argument to be logical and rational, the premises and inferences must lead to the conclusion. I hope you will keep this in mind the next time someone invokes Turing during a discussion.