Secure software development is an old wine in a new bottle.
The above statement is true to some extent. Secure software developers are not aliens; they are ordinary programmers who, through interest, circumstance, accident or some other cause, became secure software developers. When you want to learn to drive a car, you learn the foundations first: making turns, handling the steering, controlling speed, following the rules, and finally becoming a resilient driver who can, to some extent, predict situations and avoid accidents caused by other drivers' mistakes. Similarly, a secure application is one developed by a team whose motto is to develop secure software, whose goal is to make it resilient, and whose focus is to ensure that the software not only meets its purpose but can also withstand at least the known attacks.
There are two types of programmers: normal programmers and secure programmers. Normal programmers are those who are shy, perturbed on hearing jargon like buffer overflow or SQL injection, who think that secure programming is rocket science, and whose fear inhibits them from learning the tricks of building secure applications. There is also a category of programmers who ask why they should learn these tricks at all, when they just code from the HLD and LLD; coding has become very scientific thanks to software engineering, with very limited scope for creativity, and is no longer artistic. Another interesting set of programmers are those who have embraced legacy platforms and argue to the core that mainframe applications are insulated from modern-day attacks. The suggestion to both of these breeds is that knowledge of secure programming does no harm, and they can still apply its tricks whenever necessary. In fact, several aspects of secure software development are already covered when proper quality management processes are followed; only a delta needs to be learned, at least to start with. The intention of this article is to encourage a normal programmer to learn a few extra steps and become a secure software developer.
We are all well aware of the phases of the SDLC, viz. Requirements, Analysis, Design, Code, Test and Implement. The phases are similar across models like waterfall, modified waterfall, iterative, agile, etc. A programmer who is well aware of these phases, the activities performed in them, and the quality aspects to be followed in each phase only needs to understand a few more delta concepts to become a secure software developer.
Firstly, just as quality cannot be bolted on right before the code goes into production, and can be ensured only when processes are followed from the requirements phase through implementation, security too should be thought about right from the requirements phase. The phases of software development, with the security 'delta' in each, are described below.
While capturing the functional requirements, the security requirements should also be studied. Do we need this functionality online at all? Can it be achieved in a more secure way? What statutory and regulatory compliance requirements does the application need to address? Have those been identified? Is everyone involved in the development aware of them, in addition to having the required technical expertise? A training plan should be drawn up and enforced for every team member so that they produce secure code. Most importantly, if there are certain insecure requirements that the client still wants implemented for business reasons, an explicit sign-off needs to be obtained so that everyone understands and accepts the risks involved. And while writing test cases to address the requirements, test cases addressing security also need to be written.
During the analysis phase, threat modelling needs to be performed. This is done to first understand what threats the application may encounter, and then to plan mitigation steps so that security is built in from the beginning of the application's development. Threat modelling can, to some extent, be equated to site selection before we choose a place to live: Is it located near an airport? Is it in a crowded area where the day-to-day commute may be a problem? Is there any law-and-order problem in the locality? In the tangible world these are easy to study, whereas threat modelling an intangible application is a little more difficult; the skill comes with practice and perseverance.
Secure coding standards and guidelines are still being developed for some niche platforms and languages, whereas they are readily available for the standard ones like Java, C, etc. These are available aplenty in public forums and can easily be learnt and applied. Such guidelines help avoid mistakes that could become costly if they are not caught until later phases. Some of these checklists were prepared a decade ago to ensure quality; only small delta changes need to be added to them to address security. Situations like buffer overflows can be predicted and avoided during the coding phase itself. A peer review, as part of the quality process, is an exercise where a peer checks the code to ensure that it does only what the requirement specifies. The delta here is that the peer should also review the code for the presence of any malicious code or extra statements that are not required by the specifications.
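One of the most widely published secure-coding guidelines is to use parameterized queries instead of building SQL by string concatenation. The sketch below illustrates the idea using Python's built-in sqlite3 module; the table, values and injection payload are purely illustrative.

```python
import sqlite3

# Hypothetical in-memory table used only to demonstrate the guideline.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic SQL injection payload

# Unsafe: concatenation lets the payload rewrite the query's logic,
# so the WHERE clause becomes always-true and leaks the row.
unsafe_sql = "SELECT role FROM users WHERE name = '" + user_input + "'"
leaked = conn.execute(unsafe_sql).fetchall()

# Safe: the "?" placeholder treats the payload as plain data, not SQL.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)     # the injected query returns data it should not
print(safe_rows)  # [] -- no user is literally named "alice' OR '1'='1"
```

The same placeholder style (with driver-specific syntax) applies to most database APIs, and it is exactly the kind of rule a peer reviewer can check for mechanically.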
Learning anything new involves a little unlearning of the past and re-learning. Developers used to write detailed error messages, including the exact SQL that caused the error, with the intention of helping whoever supports the application (say, when it breaks down at 2 AM!). From a security angle, such detailed error messages help an attacker craft focused attacks. It is therefore recommended to show only error numbers to the user, with a separate known-error database created for the support personnel to interpret and handle the situation.
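The error-number pattern described above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the list standing in for the known-error database and the function name are assumptions for the example.

```python
import uuid

# Stand-in for the known-error database that support personnel consult.
support_log = []

def handle_db_error(exc: Exception) -> str:
    """Record full details internally; show the user only a reference number."""
    error_id = uuid.uuid4().hex[:8]            # short error reference
    support_log.append((error_id, repr(exc)))  # full detail, internal only
    return f"Something went wrong. Quote error ref {error_id} to support."

# The user sees a reference number, never the failing SQL or a stack trace;
# the support engineer looks the reference up in support_log at 2 AM.
message = handle_db_error(RuntimeError("SELECT * FROM accounts failed"))
print(message)
```

In a real system the log would be a protected datastore rather than a list, but the principle is the same: detail for support, opacity for the attacker.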
As part of quality processes, test cases are written to check whether the functionality is met, and the end user is assumed to be a well-behaved person who just wants to use the application for its intended purpose. For testing whether an application is secure, abuse cases also need to be checked, to make sure the application is resilient to attacks. For example, if a user ID field does not limit the number of characters that can be entered, an attacker can supply more input than the field expects, making the outcome unpredictable; this is a buffer overflow attack. Edit checks are already known to us, so it is always safe to ensure that a field expecting the "Name" of an individual accepts only alphabetic characters and throws an error when "<" or ">" symbols are entered; this prevents someone from injecting scripts. Such edit checks help avoid cross-site scripting, an attack in which the attacker injects client-side script into web pages viewed by other users.
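The edit checks described above amount to allow-list validation: accept only the characters and length the field is meant to hold, and reject everything else. A minimal sketch (the pattern and length limit are illustrative choices):

```python
import re

# Allow-list for a "Name" field: letters and spaces only, 1 to 50
# characters. Anything else -- including "<" and ">" used to inject
# <script> tags, or an oversized input -- is rejected outright.
NAME_PATTERN = re.compile(r"[A-Za-z ]{1,50}")

def is_valid_name(value: str) -> bool:
    """Return True only if the whole value matches the allow-list."""
    return NAME_PATTERN.fullmatch(value) is not None

print(is_valid_name("Natarajan Swaminathan"))          # True
print(is_valid_name("<script>alert('x')</script>"))    # False: "<" rejected
print(is_valid_name("A" * 200))                        # False: too long
```

Allow-listing what is valid is generally safer than block-listing known-bad characters, because the attacker has to fit the payload inside the allowed alphabet rather than find one character you forgot to ban.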
Learning is a journey, not a destination. In a world where application hacks happen ever more frequently, it is better to learn these tricks so that a developer can promote himself to the next level and, in doing so, make applications more resilient to attacks. Good luck.
Authored by Natarajan Swaminathan