It takes very little convincing to prove that automation saves time and increases efficiency. When it comes to security tools, ease of use has improved drastically; you can spider and scan a website by just entering its URL. The efficiency of these automated scans, however, has not increased by a comparable amount. Most pages of a web application require some form of authentication before users can access them. Entering parameters for HTTP/NTLM and form-based authentication is possible, but more complex authentication methods require some form of script-based authentication. Scanning pages that sit behind login screens is still a pain.
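For the simpler cases, form-based authentication can be configured through ZAP's API. The following is a minimal sketch using the `zapv2` Python client; the context name, login URL, and credentials are placeholders for illustration, not values from any real application.

```python
import urllib.parse
from zapv2 import ZAPv2

# Connect to a locally running ZAP instance (default API port assumed).
zap = ZAPv2(apikey='changeme',
            proxies={'http': 'http://127.0.0.1:8080',
                     'https': 'http://127.0.0.1:8080'})

# Create a context for the application and tell ZAP how to log in.
context_id = zap.context.new_context('demo-app')
zap.context.include_in_context('demo-app', 'https://example.com/.*')
login_data = 'username={%username%}&password={%password%}'
zap.authentication.set_authentication_method(
    context_id, 'formBasedAuthentication',
    'loginUrl=' + urllib.parse.quote('https://example.com/login', safe='') +
    '&loginRequestData=' + urllib.parse.quote(login_data, safe=''))

# Register a user whose credentials ZAP submits when it needs a session.
user_id = zap.users.new_user(context_id, 'scan-user')
zap.users.set_authentication_credentials(
    context_id, user_id, 'username=scanner&password=secret')
zap.users.set_user_enabled(context_id, user_id, True)
```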
It is possible to scan pages that require authentication by manually walking through the web application via a proxy, having a tool like ZAP listen on that proxy or w3af capture the request-response pairs, and then scanning what was recorded. Doing this manually on a regular basis, however, is simply not feasible. These repetitive tasks have to be automated, and Selenium scripts were part of the solution to this problem. Writing walk-through scripts for web applications using Selenium has its fair share of advantages: while it is time-consuming up front, it saves a ton of time in the long run. If the web application changes, say new features or pages are added, they can easily be added to the script. You also have the flexibility to leave out pages that you don't want the scanners to touch. A minimal walk-through routed through a local proxy is sketched below.
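This sketch assumes ZAP (or any intercepting proxy) is listening on 127.0.0.1:8080; the URLs, element locators, and credentials are hypothetical placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Point the browser at the intercepting proxy so every request is recorded.
options = webdriver.ChromeOptions()
options.add_argument('--proxy-server=http://127.0.0.1:8080')
options.add_argument('--ignore-certificate-errors')  # the proxy re-signs TLS traffic
driver = webdriver.Chrome(options=options)

try:
    # Log in first so the recorded session covers authenticated pages.
    driver.get('https://example.com/login')
    driver.find_element(By.NAME, 'username').send_keys('scanner')
    driver.find_element(By.NAME, 'password').send_keys('secret')
    driver.find_element(By.ID, 'submit').click()

    # Walk through the pages we want scanned; anything left out stays untouched.
    for path in ('/dashboard', '/reports', '/settings/profile'):
        driver.get('https://example.com' + path)
finally:
    driver.quit()
```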
Tools like ZAP and w3af are extremely useful, open source, customizable and relatively simple to use. They spider and scan every page and URL they can get their hands on, find vulnerabilities and give detailed information about each one found. The GUI versions of these tools just need the URL of a site to scan it, and the scans can be customized to a certain extent, but the problem of scanning pages behind authentication that I mentioned above still exists. Over these past few months, we have been automating these tools to increase their efficiency drastically and to make them much easier to integrate into the DevOps cycle; a sketch of what driving such a scan looks like follows. To automate or customize something, you need to understand how it works so that you know exactly what it is that you're changing.
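As a rough idea of what that automation looks like, here is a hedged sketch that drives ZAP's spider and active scanner through the `zapv2` Python client; the target URL and API key are placeholders and error handling is omitted.

```python
import time
from zapv2 import ZAPv2

target = 'https://example.com'
zap = ZAPv2(apikey='changeme',
            proxies={'http': 'http://127.0.0.1:8080',
                     'https': 'http://127.0.0.1:8080'})

# Spider the target so ZAP learns the site structure.
spider_id = zap.spider.scan(target)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

# Actively scan everything the spider (or a Selenium walk-through) found.
ascan_id = zap.ascan.scan(target)
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(5)

# Pull the alerts for reporting or triage.
for alert in zap.core.alerts(baseurl=target):
    print(alert['risk'], alert['alert'], alert['url'])
```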
Making these tools spider an application that we had already written a Selenium walk-through for made no sense; we knew exactly which pages and parameters we did and did not want to scan. Being extra cautious while writing your Selenium script is highly recommended, because tools like ZAP and w3af fuzz a lot, and a little forethought goes a long way. Say, for example, your Selenium script creates or deletes something as simple as an empty folder on your web application. ZAP and w3af capture the parameters required to create or delete that folder and fuzz them, and you'll either be greeted with hundreds of empty folders or you'll have deleted a few important ones. Trust me, this has happened. These tools also sometimes report vulnerabilities that don't actually exist, and careful inspection by a professional pentester is needed to mark them as false positives.

Integrating these tools with Docker was the perfect recipe for solving the scaling issue: we could spawn as many scans against as many web applications as we wanted, as long as our servers could handle the load, as sketched below. Concentrating on the product at the beginning and postponing security for a later date might seem like a good idea, but it is quite possibly the worst mistake any company can make.
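Our exact setup isn't shown here, but as a rough sketch under stated assumptions, ZAP's official Docker image can be used to spawn one containerized scan per target; the image, the baseline scan, and the target list below are illustrative, not our actual configuration.

```python
import subprocess

# Hypothetical list of targets; in practice this would come from a config file or CI job.
targets = ['https://app1.example.com', 'https://app2.example.com']

procs = []
for target in targets:
    # Each scan runs in its own container, so scaling is limited only by server capacity.
    procs.append(subprocess.Popen([
        'docker', 'run', '--rm', '-t',
        'owasp/zap2docker-stable',          # official ZAP image
        'zap-baseline.py', '-t', target,    # baseline scan of the target
    ]))

# Wait for all scans to finish before collecting results.
for proc in procs:
    proc.wait()
```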
Writing these scripts and customizing these tools taught me how dangerous and damaging some of these vulnerabilities can be, and how important it is to implement SecDevOps so they are fixed before the application goes into production.