Refocusing Vendor Security on Risk Reduction
Modern software companies rely on a lot of third-party software services. Data flows across organizational boundaries, and security risk moves from first-party to third-party, challenging security teams that have responsibility without control. Traditional security teams address this through certifications and questionnaires, supporting risk visibility and acceptance. What’s often overlooked is the opportunity to actually reduce risk by collaborating with implementation teams on secure configuration decisions specific to the software in question. Similar to the challenge of design involvement (threat modeling) in AppSec versus post-implementation analysis (running scans), this approach requires an embedded, empowered, and empathetic security team.
The old way: Marketing wants to buy HubSpot. Security asks for a SOC 2 and pen test, sends a 200-question spreadsheet, waits 3 weeks for responses, notes that the vendor doesn’t enforce password rotation, asks an exec to “accept the risk,” then moves on. Six months later, a developer connects HubSpot to the entire customer database via a Zapier integration using an API key that never expires, tied to an intern’s account.
The better way: Security sits with marketing during onboarding. Together they enable OIDC, set up role-based access so only marketing can see marketing data, configure audit logging to the SIEM, and document that the Salesforce integration uses a service account with read-only permissions and automatically rotated credentials.
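Those decisions are more durable when they’re written down somewhere the team can re-check later. Here’s a minimal sketch, assuming a purely illustrative VendorOnboardingDecisions record and review_gaps helper (neither comes from any vendor’s API):

```python
# Hypothetical sketch - none of these names come from any vendor's API.
from dataclasses import dataclass, field


@dataclass
class VendorOnboardingDecisions:
    vendor: str
    sso_protocol: str                                   # e.g. "OIDC" or "SAML"
    rbac_roles: list[str] = field(default_factory=list)
    audit_logs_to_siem: bool = False
    integrations: dict[str, str] = field(default_factory=dict)  # name -> access level


def review_gaps(d: VendorOnboardingDecisions) -> list[str]:
    """Return follow-ups for the security and implementation teams to discuss."""
    gaps = []
    if d.sso_protocol.upper() not in {"OIDC", "SAML"}:
        gaps.append("No federated SSO configured")
    if not d.audit_logs_to_siem:
        gaps.append("Audit logs are not shipped to the SIEM")
    for name, access in d.integrations.items():
        if access != "read-only":
            gaps.append(f"Integration '{name}' has {access} access - confirm it's required")
    return gaps


hubspot = VendorOnboardingDecisions(
    vendor="HubSpot",
    sso_protocol="OIDC",
    rbac_roles=["marketing-admin", "marketing-viewer"],
    audit_logs_to_siem=True,
    integrations={"Salesforce sync": "read-only"},
)
print(review_gaps(hubspot))  # [] when the decisions above hold
```

The point isn’t the tooling - it’s that the configuration choices made during onboarding become reviewable artifacts instead of tribal knowledge.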
Avoid generic “vendor risk”
Security teams that care about third-party SaaS risk find themselves involved in the procurement process. This is reasonable - knowing (asset inventory) is half the battle. But typical security team activities are surface-level and generic:
- Request and review a SOC 2 report
- Request and review a penetration test
- Request and review responses to a security questionnaire
- Consult a third-party vendor risk service (e.g. SecurityScorecard)
Sure, sometimes these exercises reduce risk. Don’t have a SOC 2? No purchase. Unresolved high-risk findings? Commit to fix before we sign a contract. But that only works when there’s a favorable power differential. While a large purchaser may influence a small supplier’s security practices, there’s less potential in more equal or inverse dynamics. If the supplier won’t budge, the security team needs to spend reputational capital (or convince someone else to spend it) to push back against compelling business interests.
In the median case, vendors have already prepared industry-standard responses by hiring audit and pen test firms incentivized to give them the sales-enabling materials they need. The riskiest outcome from a “vendor security review” is asking a harried executive to approve and accept risk for deficiencies divorced from the context of how the software will actually be used. A marketing team isn’t abandoning their new analytics platform because the vendor’s audit noted they didn’t offboard personnel within 30 days, or had a stored XSS vulnerability.
Perhaps most distressing for those who’ve seen the sausage made: SOC 2 reports, pen tests, and other external security attestations provide very little value for understanding risk. It’s extremely difficult to reason about first-party risk, let alone third-party risk. Large security teams spend tremendous effort identifying and reducing causes of security breaches in their own infrastructure and often fail. The idea that we can elucidate third-party system risk as part of a brief “vendor review” feels unrealistic. Cookie-cutter audits and commoditized pen tests simply don’t provide much assurance. We might get some signal about security maturity, but exhaustive security analysis that would let organizations make confident buy/no-buy decisions wouldn’t be cost-effective or practical for the hundreds of vendors organizations use.
Like many “supply chain” problems, there aren’t easy answers. As an ecosystem, we’ve accepted the risk of this free-flowing interconnectedness to enjoy the benefits of collaborative integrations and software specialization. That said, my take is we spend too much time on risk visibility and not enough on risk reduction. For SaaS vendors, risk reduction activity is essentially hardening the service - making sure the decisions made while configuring the software reflect the best available security choices, accounting for tradeoffs.
I’m not saying don’t ask for SOC 2 or pen tests. Maybe you even want security questionnaires to look good for auditors or cover your bases with regulators in the event of a breach. But if you do it and don’t do what I describe below, you’re missing an opportunity to optimize the energy you spend on your security program.
One note: these activities are different from the very “external” vendor security review process that asks for evidence and risk approvals. They require the security team to be deeply connected to the IT function that clicks the buttons to enable identity/access/logs, and to have strong relationships with teams using software so there’s trust to accept recommendations. Expect less paperwork and more teamwork.
Instead, support smart security decisions
Understand data flows
New software review should start by understanding what data the vendor will collect and how. Similar to threat modeling, this requires context gathering - sitting down with implementers to make sure the security team understands what the heck this software is doing in the first place.
Recommendations often come out of just this step: Can we self-host rather than use cloud? Do we intend to turn on such-and-such third-party integrations?
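Even a rough, structured data-flow map makes these conversations concrete. A small sketch, with entirely made-up data categories and destinations:

```python
# Hypothetical sketch - data categories and destinations are illustrative only.
DATA_FLOWS = [
    # (data category, direction, destination, contains PII?)
    ("email engagement metrics", "outbound", "vendor cloud", False),
    ("customer contact records", "outbound", "vendor cloud", True),
    ("lead scores",              "inbound",  "our CRM",      False),
]


def sensitive_outbound(flows):
    """Surface flows worth a deeper conversation with the implementers."""
    return [f for f in flows if f[1] == "outbound" and f[3]]


for category, _, destination, _ in sensitive_outbound(DATA_FLOWS):
    print(f"Review: {category} flows to {destination} - can we minimize, mask, or self-host?")
```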
Define access
Pretty much any software will have you make security-related decisions about access. Typically this means roles and permissions (coarse- or fine-grained) plus identity (username/password, MFA, OAuth, SAML). I’ve seen plenty of wild west situations with access to new software tooling. This is an ideal opportunity for a security team to deliver a win-win: making access work smoothly from day one through well-defined role-based access control strategies that can be applied to new use cases.
This is where you make an impact: “Hey, they offer SAML and SCIM, let’s use that.”
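If the vendor supports SCIM (RFC 7643/7644) alongside SAML, provisioning and deprovisioning can be driven by your identity provider rather than one-off invites. A minimal sketch of a SCIM-style user creation call, assuming a placeholder vendor endpoint and provisioning token:

```python
# Hypothetical sketch - the endpoint and token are placeholders, but the payload
# shape follows the SCIM 2.0 core User schema (RFC 7643).
import requests

BASE_URL = "https://scim.example-vendor.com/scim/v2"  # placeholder vendor SCIM endpoint
TOKEN = "REPLACE_ME"                                  # provisioning token issued by the vendor

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",
    "active": True,
    # Role/group mapping is usually driven from the IdP side rather than set per user here.
}

resp = requests.post(
    f"{BASE_URL}/Users",
    json=new_user,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("Provisioned user id:", resp.json().get("id"))
```

In practice an IdP’s built-in SCIM integration usually handles this; the payoff is that offboarding in the IdP also removes access in the vendor.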
Configure auditing
If the software has particularly sensitive data, it may be wise to track access, changes, or sensitive operations. Consider what you’d do in the event of a breach - are there audit logs configurable for the software? Would it make sense to write detections? Does the software have capability to ship audit logs to your SIEM?
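Many vendors expose audit events through an API or an export feature. The pattern below is a hedged sketch of polling such an API and forwarding events to a SIEM HTTP collector, with both endpoints as placeholders:

```python
# Hypothetical sketch - both endpoints are placeholders; most vendors and SIEMs
# offer some variant of this pattern (polling API, webhook, or native export).
import requests

VENDOR_AUDIT_URL = "https://api.example-vendor.com/v1/audit-events"  # placeholder
SIEM_COLLECTOR_URL = "https://siem.example.com/ingest/vendor-audit"  # placeholder


def forward_audit_events(since: str) -> int:
    """Pull vendor audit events newer than `since` and forward them to the SIEM."""
    events = requests.get(
        VENDOR_AUDIT_URL,
        params={"since": since},
        headers={"Authorization": "Bearer <vendor-token>"},
        timeout=10,
    ).json().get("events", [])

    for event in events:
        # Exports, permission changes, and admin actions are prime detection material.
        requests.post(SIEM_COLLECTOR_URL, json=event, timeout=10)
    return len(events)


print(forward_audit_events(since="2024-01-01T00:00:00Z"), "events forwarded")
```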
Lock down integrations
This feels like the biggest area where vendor risk gets introduced, and the part that’s hardest for a security team to get a handle on, because these integrations are often nuanced and not well understood except by those configuring or administering the software.
Nearly all software these days integrates with other systems. Does it read/write from your data analytics pipeline? Tie into your observability stack? Does it need to integrate with your marketing website?
The boom in “Non-Human Identity” security points to the risk associated with these integrations when they’re created and maintained without security rigor. We see API keys stored in plaintext on endpoints, integrations granted full administrative access, and integrations tied to real people’s accounts that break when those people leave the company. Rather than trying to solve these problems after the fact with tooling, providing guidance upfront during onboarding reduces risk from the get-go and obviates the need for potentially breaking changes later.
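A lightweight way to act on this during onboarding is to inventory integration credentials and run a few hygiene checks against them. A sketch, assuming an illustrative inventory format (the field names are made up):

```python
# Hypothetical sketch - the inventory format and field names are made up.
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)

integrations = [
    {"name": "Zapier -> customer DB", "owner_type": "human", "scope": "admin",
     "created": datetime(2023, 1, 10, tzinfo=timezone.utc), "expires": None},
    {"name": "Salesforce sync", "owner_type": "service_account", "scope": "read-only",
     "created": datetime(2024, 5, 1, tzinfo=timezone.utc),
     "expires": datetime(2024, 8, 1, tzinfo=timezone.utc)},
]


def hygiene_findings(item):
    """Flag credential patterns that tend to bite later."""
    findings = []
    if item["owner_type"] == "human":
        findings.append("tied to a human account (breaks on offboarding)")
    if item["expires"] is None:
        findings.append("credential never expires")
    if item["scope"] == "admin":
        findings.append("full administrative scope - confirm least privilege")
    if datetime.now(timezone.utc) - item["created"] > MAX_KEY_AGE:
        findings.append("older than the rotation window")
    return findings


for item in integrations:
    for finding in hygiene_findings(item):
        print(f"{item['name']}: {finding}")
```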
Harden with security guides
Beyond these standard security configuration areas, many software services have unique configuration knobs that can profoundly affect security. Larger vendors have started publishing “security guides” that help implementers understand their options and how they work. It should be the security team’s responsibility to consume these and ensure best practices are followed if they make sense for the organization’s context.
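One way to make a vendor’s security guide actionable is to distill it into a checklist that can be re-run against an exported settings snapshot. A sketch with invented setting names:

```python
# Hypothetical sketch - the setting names are invented; a real baseline would be
# distilled from the vendor's published security guide.
HARDENING_BASELINE = {
    "enforce_sso": True,
    "public_link_sharing": False,
    "api_access_ip_allowlist": True,
}


def configuration_drift(exported_settings: dict) -> list[str]:
    """Compare an exported settings snapshot against the desired baseline."""
    return [
        f"{setting} is {exported_settings.get(setting)!r}, expected {expected!r}"
        for setting, expected in HARDENING_BASELINE.items()
        if exported_settings.get(setting) != expected
    ]


snapshot = {"enforce_sso": True, "public_link_sharing": True}
for gap in configuration_drift(snapshot):
    print("Drift:", gap)
```

Re-running a checklist like this periodically also catches configuration drift after the initial onboarding.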
The bottom line: vendor security isn’t just about collecting attestations and checking boxes. It’s about being embedded enough in your organization to understand how software actually gets used, building relationships strong enough that people trust your recommendations, and focusing your energy on the configuration decisions that actually reduce risk. That’s harder than asking for a SOC 2 report, but it’s also where the real security work happens.