Inconsistent Scan Results (Dynamic & Static)

Day in and day out, multiple dynamic and static scans run in our office, and this has been the case for the past 4+ years. While the tools and team members kept changing, two things stayed constant: myself, and the routine complaint that scan results were inconsistent. Whether dynamic or static, the results varied across runs even when all other factors, such as the code/app version and scanner settings, remained unchanged.

Since the scanners were proprietary, we had little insight into their internal details. All we could do, for all this time, was file ticket after ticket on the vendor portal.

With the demand to lower the cost of securing the SDLC, our focus started shifting to open-source security tools. However, the open-source tools reported far less than the COTS scanners did, so we had to customize them and add custom rules. It was during this work that we figured out why results would differ across runs even when every other variable remained the same.

Every scan involves a series of tests. Take the example of finding SQL injection in an application: there are hundreds of SQL injection payloads, and each payload must be run across SiteX's entire attack surface. Because the scan has an estimated time of completion, each payload is given a maximum time to run; once that period is reached, the payload times out, and no results are reported for it.

The reason a test payload can fail is that scan performance is a function of many factors (current load on the machine, disk I/O, etc.). A given payload may time out in one run but complete in another, leading to inconsistent scan results across time.
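To make the failure mode concrete, here is a minimal sketch of how a per-payload timeout silently drops results. The names `run_payload` and `PAYLOAD_TIMEOUT` are hypothetical stand-ins for a scanner's internals, not any specific vendor's API:

```python
# Sketch: payloads that exceed the per-payload time budget are silently
# dropped, so the findings list depends on machine load at scan time.
import concurrent.futures

PAYLOAD_TIMEOUT = 5  # seconds each payload is allowed to run (assumed value)

def scan(payloads, run_payload):
    findings, timed_out = [], []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for payload in payloads:
            future = pool.submit(run_payload, payload)
            try:
                result = future.result(timeout=PAYLOAD_TIMEOUT)
                if result:
                    findings.append((payload, result))
            except concurrent.futures.TimeoutError:
                # No finding, no error reported for this payload.
                timed_out.append(payload)
    return findings, timed_out
```

On a loaded machine more payloads land in `timed_out`, on an idle one fewer do, which is exactly the run-to-run inconsistency described above.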

The solution would be to track the unexecuted payloads in the scanner and rerun them individually, ensuring a consistent output.
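One way this tracking could look, as a hedged sketch: keep a pending list of timed-out payloads and rerun only those in subsequent rounds until every payload has produced a definitive result (the function and parameter names here are illustrative, not from any real scanner):

```python
# Sketch: rerun timed-out payloads in follow-up rounds so that every
# payload eventually contributes a result, or is reported as incomplete.
def scan_until_complete(payloads, run_one, max_rounds=3):
    """run_one(payload) returns a finding (or None) or raises TimeoutError."""
    findings = {}
    pending = list(payloads)
    for _ in range(max_rounds):
        still_pending = []
        for payload in pending:
            try:
                findings[payload] = run_one(payload)
            except TimeoutError:
                still_pending.append(payload)  # track for the next round
        pending = still_pending
        if not pending:
            break
    return findings, pending  # non-empty pending => flag the report as partial
```

Surfacing the leftover `pending` list is the key design choice: instead of silently omitting results, the scan explicitly reports which tests never ran.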

Views by Somen Das


There is 1 Comment

Just a thought:
Can we add a configuration in the framework that ensures any unexecuted/failed payload is retried, say 3-5 times, until it executes successfully (pass or fail)? This would also let us confirm whether there is a genuine cause behind a payload not being executed.

But let's say 100 payloads go unexecuted in a single day's scan, which is quite possible. In a standard vulnerability management environment, we only have certain allowed time windows to run scans on approved zones (especially the production environment).
Once the time window expires or the scan completes, we may get a notification listing the failed executions.
But then we need to wait for that zone's next window to execute the failed payloads.
This could cause an overhead of unexecuted scans piling up.
Re-running them individually is definitely the best solution. But it seems to be a process-based workaround that resolves 50% of the problem. If we can come up with the remaining 50% from a technical perspective, I believe we can get a good grip on defeating this ever-recurring loophole in VM solutions.