Developers strive to release bug-free applications. But apps are complex, humans are fallible, and deadlines are always looming. Hence, bugs happen. A strong development process establishes a feedback loop to discover and fix bugs before an app ever reaches production. Yet far too many bugs will still sneak through, giving us things like Patch Tuesday, always-critical updates, forced password resets, and underground markets for stolen credit cards and personal info.
Conducting security testing after an app reaches production should never be the only stage where security appears in the app’s lifecycle, but it’s still an important one. Modern software development approaches like agile and devops emphasize frequent releases with ever-evolving features. This pace makes it even harder for security teams to keep up with releases. They can turn to automation and scanners to ease the burden, but those tools have blind spots. This is especially true when vulnerabilities (vulns) appear in production apps that such tools have already tested.
In recent years, app owners have embraced bug bounty programs to reward researchers who discover and report vulns in a way that minimizes risk to the app and its users. This approach helps scale a security team and match the continuous nature of software releases. Bug bounties can be effective, but they’re also chaotic. They tend to reveal vulnerability hotspots rather than provide comprehensive insight into an app’s security.
Penetration tests (aka pen tests) are a type of manual security testing that provides insight into an app’s security by systematically reviewing its features and components. This type of exercise improves coverage of an app’s security because the test is intended to explore the complete app rather than just focus on one type of vuln or one particular section. Pen tests follow methodologies related to topics like input validation, authentication, and access controls in order to identify flaws in the app’s implementation. The results of these tests help give developers a sense of confidence in how well the app protects its users, their data, and the systems it’s built upon.
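As a hypothetical illustration of the kind of input-validation flaw such a methodology probes for (the function names and schema here are invented for the example), compare a query built by string formatting with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL,
    # so a name like "' OR '1'='1" matches every row in the table.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the input is bound as data, never parsed as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

A tester who spots the first pattern doesn’t just file one bug; the report can flag string-built queries as a theme to fix app-wide.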
Cobalt pen tests harness the experience and skills of security professionals around the world. They focus the chaotic nature of a bug bounty crowd into the methodological scrutiny of an app’s security. Not only does this help ensure the app’s security is effectively tested, it also raises the quality of reports by providing detailed, validated vulns.
The Elements of a Cobalt Pen Test
A typical Cobalt pen test is led by a researcher who coordinates the collaboration with two to three others. The Lead will have extensive skill and experience within the security industry. Their role includes validating the findings from the other researchers, coordinating with the client throughout the test phase, and communicating those findings in a final report.
The researchers focus on testing the application for various security vulns. For web apps, this methodology aligns with the OWASP Top 10 and its Application Security Verification Standard (ASVS). While researchers may rely on various tools for analysis, the majority of their effort is manual and serves as a complement to automated scanning. The researchers turn their understanding of the app into creative ways to bypass security controls or break assumptions in the app’s design.
A pen test kicks off with a meeting between the lead researcher and the app’s owners and developers. This discussion covers topics like verifying the scope of the testing, explaining key features and data flows, and ensuring test accounts are in place. It’s also a chance to talk about some high-level threat models in order to help shape the pen test and make it more effective.
The test itself typically lasts two weeks. During this period the researchers will distribute the work of reviewing the app’s various features and components among themselves. They’ll share notes with each other, describe tests they’ve tried or plan to try, and document vulns in the Cobalt platform.
Once the test is complete, the lead researcher collates the individual findings into a report that provides background about the engagement as well as recommendations based on themes or repeated issues that the researchers observed. For example, recommendations may note that a lack of input validation is pervasive throughout the app, that the app relies on trivially-spoofed tokens for enforcing privilege levels, or that it’s missing centralized anti-CSRF protection.
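To make the “trivially-spoofed token” finding concrete, here is a hypothetical sketch (the key and field names are invented for illustration): a token that just states the user’s role can be edited by anyone, while an HMAC-signed token ties the role to a server-side secret so tampering is detectable.

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # hypothetical key; never sent to clients

def insecure_token(role: str) -> str:
    # Trivially spoofed: the client controls the value, so changing the
    # cookie to "role=admin" silently escalates privileges.
    return f"role={role}"

def signed_token(role: str) -> str:
    # Safer sketch: an HMAC signature binds the role to the server secret.
    sig = hmac.new(SECRET_KEY, role.encode(), hashlib.sha256).hexdigest()
    return f"role={role}&sig={sig}"

def verify_token(token: str) -> bool:
    fields = dict(part.split("=", 1) for part in token.split("&"))
    expected = hmac.new(SECRET_KEY, fields["role"].encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(fields.get("sig", ""), expected)
```

A pen tester who can forge the first kind of token will report not just the one endpoint where it was found, but the design flaw behind it.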
The benefit of a pen test shouldn’t just be discovering vulns in an app; it should also be using that knowledge to reduce the risk associated with the app. Each finding has a risk score associated with it to help the app’s developers understand and prioritize the work needed to resolve it. In this phase, the lead researcher is available to re-test and verify that vulns have been fixed correctly. This step helps ensure that the fix addresses the underlying cause of the vuln.
The Development Ecosystem
Creating a secure application requires care to design, effort to implement, and diligence to maintain. Programming is an ongoing process. Security testing should be as well. A security development lifecycle guides a team from awareness of security concepts to releasing secure code. Cost, coverage, and quality also influence how security testing is done and who does it, from internal security teams to hordes of bug bounty researchers.
Bringing the scrutiny of a bug bounty crowd into a methodical pen test is a way to bridge the gap between internal and external testing. Whether aligning pen tests with major feature releases or using them as periodic checkups, you can discover what kinds of vulns have slipped through your development process. Use a pen test to find vulns, reduce risk, and provide feedback for developers.