Minimum Viable Security Knowledge and Team Topologies for Security
I’ve mentioned an idea in talks that I’ve never actually written down: what I consider security teams should be aiming for as part of their “awareness” approaches when supporting Engineering and/or Development teams. It’s the idea of “Minimum Viable Security Knowledge”.
To understand this idea, we must first discuss and understand Team Topologies and two key concepts within it:
- Enabling team (one of the fundamental topologies)
- Facilitating (one of the core interaction modes)
An Enabling team is composed of specialists in a given technical or product domain, and helps bridge capability gaps in stream-aligned teams. It makes informed suggestions on adequate tooling, practices, framework and ecosystem choices for the application stack. It has a strong collaborative nature: it focuses on understanding the problems faced by stream-aligned teams and on providing effective guidance. Think guidance, not execution; these teams actively avoid becoming “ivory towers” of knowledge. Problems first, not solutions.
Some of the expected behaviours of an Enabling team are the following:
- Seeks to understand the needs of stream-aligned teams, with regular checkpoints and jointly agreeing when more collaboration is needed
- Delivers both good and bad news (“we can reduce X” or “we need to move away from doing Y”)
- May act as proxy to external teams and services
- Promotes learning across stream-aligned teams
- Potentially enables Communities of Practice
An Enabling team’s core interaction mode is Facilitating, which has a very specific meaning in Team Topologies: helping (or being helped by) another team to clear impediments.
This mode helps reduce gaps in capabilities and is suited to situations where one or more teams would benefit from the active help of another team facilitating or coaching an aspect of their work. This is the main operating mode of an Enabling team, providing support to many other teams. It can also help discover gaps or inconsistencies in existing components and services.
Key team behaviours include helping and being helped, as well as training in facilitation techniques (running workshops, for instance).
With this background, we can now discuss what I tend to mean by “Minimum Viable Security Knowledge”.
An enabling team must first fully appreciate the context of a particular team; this context will comprise at least the following:
- Definitions of Ready and Definitions of Done
- Programming languages and number of code repositories they deal with (fragmentation of their code base)
- Repeatable patterns and templates
- Types and maturity of automated testing
- Supporting documentation for their own engineering and development processes
- Interactions with Product Management, how the dynamics affect backlog management (including how teams deal with bugs, for instance)
- Cognitive load and workload management
I define “Minimum Viable Security Knowledge” as “the minimum security knowledge required by a particular team, in their current context, in order to be self-sufficient in meeting security objectives most of the time, without requiring external assistance from an enabling team”.
I posit that if Enabling teams make achieving this “minimum viable security knowledge” their own team goal, delivering capability and facilitating knowledge transfer until it becomes a reality, then we’ve effectively optimised the value creation structure to deliver secure products and services. Teams external to the stream-aligned team (your “typical” development team) no longer have to intervene in the value stream itself, and so no longer negatively affect that team’s flow of work.
So, some of the things I propose should be a part of “minimum viable security knowledge” are the following:
- how to integrate security tooling in CI/CD systems
- how to interpret the results of the security tooling used
- how to fix findings coming from the security tools used, or at a minimum how to access resources that can provide direction on how to solve those findings
- how to manage false positives from tools (ideally in context, e.g. as code, in order to reduce cognitive load)
- how to request policy exceptions when a control can’t be pragmatically and effectively deployed
- how to threat model a new product feature (at a minimum, understanding the 4 question framework and OWASP Top10 and respective mitigations)
- how to perform code reviews in a way that allows compliance teams to verify that guidance has been adhered to with regards to segregation of duties
- how to monitor security observability systems (ideally integrated in their own observability eco-system) to understand the security posture of their products and services
- how to raise and tag security tickets in backlog to enable visibility at the team, product and organisational level
- how to request support from an enabling team, so it can provide additional expert guidance at the time of need (just-in-time style)
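To make the “false positives as code” point above concrete, here is a minimal Python sketch of triaging tool findings against a version-controlled suppression list. The finding format, rule IDs and file paths are all hypothetical, loosely modelled on what a typical SAST tool reports; real tools each have their own native suppression mechanisms, and this only illustrates the shape of the idea.

```python
# Hypothetical suppressions-as-code list. In practice this would live in its
# own file in the repository and be reviewed via pull request like any other
# change, so every suppression carries a recorded justification.
SUPPRESSIONS = [
    {"rule_id": "B105", "path": "tests/fixtures/config.py",
     "reason": "hard-coded password is test fixture data, not a real credential"},
]


def is_suppressed(finding, suppressions):
    """Return True if a finding matches a recorded, justified suppression."""
    return any(
        s["rule_id"] == finding["rule_id"] and s["path"] == finding["path"]
        for s in suppressions
    )


def triage(findings, suppressions):
    """Split tool output into actionable findings and suppressed false positives."""
    actionable = [f for f in findings if not is_suppressed(f, suppressions)]
    suppressed = [f for f in findings if is_suppressed(f, suppressions)]
    return actionable, suppressed


if __name__ == "__main__":
    # Simplified findings, as a SAST tool might report them.
    findings = [
        {"rule_id": "B105", "path": "tests/fixtures/config.py", "line": 3},
        {"rule_id": "B608", "path": "app/db.py", "line": 42},
    ]
    actionable, suppressed = triage(findings, SUPPRESSIONS)
    print(f"{len(actionable)} actionable, {len(suppressed)} suppressed")
```

Because the suppressions live in the codebase, the team sees the justification in context rather than in a separate tool console, which is precisely the cognitive-load reduction the bullet above is after.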
If Enabling teams make a point of supporting stream-aligned and platform teams to acquire and maintain this “minimum viable security knowledge” in a way that is context-sensitive to the scope of each team, potentially through communities of practice, security champions networks and other effective mechanisms for disseminating that knowledge throughout the organisation, then I’m confident the reputation of that security team would be stellar. We would also be enabling teams to be self-sufficient, while highlighting any gaps in that self-sufficiency so that we can strategically plug them. We’d all be better off.
Having an operational focus in how we build the scope of work for enabling teams is key to optimising the value creation structure, while keeping assurance through our systems of accountability.