Security Watch: Problems with Penetration Testing

November 6, 2008
By Kenneth van Wyk

Penetration testing is as popular as ever, yet it continues to miss the mark. As a means of validating the security of an application system, it fails miserably on several counts.

I continue to find organizations that make extensive use of penetration testing as their primary means of security-testing systems before they go live, or periodically while they are in production. There are myriad problems with this approach, but I’d like to address one in particular here that you likely haven’t considered.

My principal gripe with penetration testing is language. I’ll explain.

Over the years, I’ve seen, reviewed, and participated in hundreds of “pen tests,” and I’ve seen security engineers neglect the issue of language over and over. That is, they fail to adapt to the language of their audience. Ironically, those same engineers can almost always cite one of Sun Tzu’s admonishments: know your enemy as you know yourself and you need not fear the outcome of a thousand battles.

Why is this such an important issue? Well, consider what the pen test report and its findings are intended to accomplish.

If the pen test is intended to provide the CIO, IT security manager, or IT manager with visibility into the system’s vulnerabilities, that’s one thing. But if the pen test is intended to help the software developers who wrote the application being tested go and fix their mistakes, then that’s an entirely different thing.

Although these two purposes share the same goal of securing the “system,” they differ significantly in their audience. Not convinced? Consider the following scenario.

The pen test team does their test and finds numerous SQL injection defects in a Web-based application. They deliver their report and the security manager sets up a meeting with the software development team and presents the findings. The security manager delivers a message saying, “SQL injection is bad. Your software contains SQL injection flaws (see here!). Make it stop.”

A perfectly natural human response to this message is to retreat and patch the software to stop that SQL syntax from being injected into the Web application. The developers are likely to write some logic that goes like: if (SQL syntax is present in an input) disallow the input.
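In Java, that quick patch might look something like the following. This is a hypothetical sketch, shown only to illustrate the pattern; the class name and the banned-fragment list are invented for this example:

    import java.util.Arrays;
    import java.util.List;

    // Hypothetical blacklist filter of the kind a team might bolt on
    // after a pen test report -- shown only to illustrate the approach.
    public class NaiveInputFilter {

        // A "blacklist" of SQL fragments the team has decided to ban.
        private static final List<String> BANNED = Arrays.asList(
                "'", "--", ";", " or ", " union ", " select ", " drop ");

        // Reject any input that contains a banned fragment.
        public static boolean isAllowed(String input) {
            String lower = input.toLowerCase();
            for (String bad : BANNED) {
                if (lower.contains(bad)) {
                    return false;
                }
            }
            return true;
        }
    }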

Then, the pen test is repeated, the problem is resolved, and everyone is happy. Right? Wrong.

The problem with this approach is that it is almost always a negative model, not a positive one. That is, the programmers will naturally be drawn to checking a “blacklist” of banned SQL syntax, and then disallowing the input. This type of negative validation can invariably be broken by a determined adversary.

Now, consider this alternative approach to the same scenario. Instead of saying “SQL is bad…,” our software-savvy security manager says, “our pen test team uncovered several mutable database queries in your application and were able to exploit them. Since mutable queries can by definition be altered, we’d like you to change your queries to use immutable calls. Java, for example, can do this via an API called PreparedStatement.” (Implemented properly, PreparedStatement and other forms of parameterized SQL queries in languages other than Java stop SQL injection in its tracks.)
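By way of illustration, here is a minimal sketch of such an immutable query using java.sql.PreparedStatement; the table, column, and method names are invented for this example:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {

        // Look up a user's email address by username using a parameterized query.
        public static String findEmail(Connection conn, String username)
                throws SQLException {
            // The query structure is fixed up front; user input is bound as data
            // and can never change the SQL itself.
            String sql = "SELECT email FROM users WHERE username = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, username); // bound parameter, not string concatenation
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("email") : null;
                }
            }
        }
    }

Because the query text contains only a placeholder, whatever the user supplies is treated strictly as a parameter value, which is exactly the “immutable call” property the security manager is asking for.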

The message here means the same thing as in the first case. The difference is that the security team is giving the developers actionable guidance in language that makes sense to the developers. It is specific. It tells the developers what to do.

This approach does, however, require the security team to understand the software technology they are testing. That can make it tough for many security engineers and managers, but it is nonetheless vital to accomplishing the goals of the penetration test. Consider looking for software development skills in your in-house or outsourced pen testing team!

If you want to effect change in the software you’re testing, you need to speak to the software developers, and you need to speak to them in language that is meaningful to them.

That’s not to say there’s anything wrong with a pen test report that has an executive summary or even a list of findings in terms we’re all familiar with today. Vulnerability descriptions, screen shots of successful attacks, and the like are all useful and meaningful to the security and IT management team. We want and need this information.

But if our message also needs to be sent to the software team who wrote the code we’re testing, then we need to adjust our language significantly. It’s also useful to be aware of and to make use of mechanisms that the developers use, such as bug tracking databases. Security teams possess enormous pools of vulnerability and testing data, but we often fail to get that data into the tracking tools used by the development teams.

I’m convinced of this approach, and I’m convinced it’s the direction we all need to be heading. Attackers are increasingly focusing their attention on application-level vulnerabilities. We in the security field have to learn to speak to the application developers in meaningful ways. Don’t just tell them what they’re doing wrong; tell them how to do things right.
