Day in, day out at the office, multiple dynamic and static scans are in progress. This has been the case for the past 4+ years. Two things stayed constant while time and team members kept changing: myself, and the routine complaint that the scan results were inconsistent. Be it dynamic or static, results varied across runs even when all the other factors, like code/app version and scanner settings, remained unchanged.
With the scanners being proprietary, we had little insight into their internals. What followed, all this time, was ticket after ticket after ticket on the vendor portal.
With a demand to lower the cost of securing the SDLC, our focus started shifting to open source security tools. However, out of the box the open source tools fell far short of what the COTS scanners would report, so we had to customize them and add custom rules. That is when we finally figured out why results differed across time with the rest of the variables remaining the same.
Every scan involves a series of tests. Take finding SQL injection in an application: there are hundreds of SQL injection payloads, and each payload must be run across SiteX's entire attack surface. Because the scan has an estimated time of completion, each payload is given a maximum time to run; once that period is exceeded, the payload times out, and no results are reported for it.
The reason a test payload fails is that scan performance is a function of many factors (current load on the machine, disk I/O, etc.), so a payload may time out in one run but complete in another, leading to inconsistent scan results across time.
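A minimal sketch of this effect, in Python. This is not any vendor's actual engine; the payload strings, timings, and budget are made up purely to show how a fixed per-payload time budget plus variable machine load produces different result counts for the same app and same settings:

```python
# Simulate a scanner that gives each payload a fixed time budget.
# A payload whose (simulated) execution time exceeds the budget
# times out and silently reports nothing.

PAYLOAD_BUDGET = 1.0  # seconds allowed per payload (illustrative value)

def run_scan(payloads, exec_times):
    """Return (completed, timed_out) for one simulated scan run."""
    completed, timed_out = [], []
    for name in payloads:
        if exec_times[name] <= PAYLOAD_BUDGET:
            completed.append(name)   # payload finished, result recorded
        else:
            timed_out.append(name)   # budget exceeded: result dropped
    return completed, timed_out

payloads = ["' OR 1=1 --", "' UNION SELECT NULL --", "'; WAITFOR DELAY --"]

# Run 1: machine lightly loaded, every payload finishes in time.
light_load = {p: 0.4 for p in payloads}
# Run 2: heavy disk I/O pushes one payload past its budget.
heavy_load = dict(light_load, **{"'; WAITFOR DELAY --": 1.7})

r1, t1 = run_scan(payloads, light_load)
r2, t2 = run_scan(payloads, heavy_load)
print(len(r1), len(r2))  # same app, same settings, different result counts
```

Nothing about the code, app version, or scanner settings changed between the two runs; only the load did, yet the second run quietly reports one finding fewer.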
The solution would be to track the unexecuted (timed-out) payloads in the scanner and rerun them individually until every payload has reported, ensuring consistent output across runs.
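The track-and-rerun idea above could be sketched like this. All names here (`scan_with_retries`, `run_payload`, the budgets) are hypothetical, not a real scanner API; the point is only the shape of the fix: record what timed out, then retry each of those payloads alone with a more generous budget:

```python
def scan_with_retries(payloads, run_payload, budget=1.0, retry_budget=5.0):
    """First pass with the normal per-payload budget; timed-out payloads
    are tracked and rerun individually with a larger budget."""
    findings, timed_out = {}, []
    for p in payloads:
        ok, result = run_payload(p, budget)
        if ok:
            findings[p] = result
        else:
            timed_out.append(p)        # track the unexecuted payload
    for p in timed_out:                # second pass: rerun individually
        ok, result = run_payload(p, retry_budget)
        if ok:
            findings[p] = result
    return findings, timed_out

# Illustrative runner: one payload "needs" 3s, so it only completes
# when given the retry budget.
def fake_runner(payload, budget):
    needed = 3.0 if "WAITFOR" in payload else 0.2
    return (needed <= budget, f"tested {payload}")

findings, retried = scan_with_retries(
    ["' OR 1=1 --", "'; WAITFOR DELAY --"], fake_runner)
```

With the retry pass, both payloads end up in `findings` regardless of which run happened to be slow, which is exactly the consistency the scans were missing.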
Views by Somen Das