Fluid Security: Managing Risk in Hybrid Networks
For years, security teams have been tasked with supporting the shift to the cloud while retaining responsibility for securing on-premise infrastructure and assets. Many are no doubt dreaming of the day they reach cloud-only status and can shake off the fragmentation that creates so much complexity and inefficiency.
But even a cloud-only scenario involves a great deal of variation. A mixture of public and private clouds (potentially run by multiple cloud service providers [CSPs] or on-premise) simply shifts the source of the complexity.
As hybrid networks will be the reality in enterprises for the next several years, security teams need a strong strategy to manage risk throughout the organization. That strategy must cover physical infrastructure through its retirement, as well as the cloud in its many varieties and as it evolves. It should also ensure that a heterogeneous environment is free of redundancy in technology, processes and personnel.
One of the most important concepts behind this strategy is the idea that security should be fluid. Fluid security can expand to meet new demands and apply to new technologies and to new vendors, all without a great upheaval at each change. To achieve fluid security, the security program needs to be agnostic, unified and continuous.
All Data Are Created Equal
Being agnostic, in security terms, means that no matter where data come from (whatever the environment, vendor, etc.), they are regularly collected in a central location, where like data are normalized and merged.
Normalization means data are no longer vendor-specific and, instead, fit a taxonomy universal to the organization; data merging creates clean, de-duplicated data sets to increase efficiency in analysis and other processes. These data handling practices are the first crucial steps to simplifying and centralizing management of a complex, fragmented environment.
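The normalize-and-merge idea can be sketched in a few lines. This is a minimal illustration, not any vendor's actual schema or API: the two scanner feeds, their field names and the "universal taxonomy" keys are all hypothetical.

```python
# Minimal sketch of normalization and merging, assuming two hypothetical
# scanner feeds with vendor-specific field names. All field names and the
# universal-taxonomy keys here are illustrative, not any vendor's schema.

def normalize(record, field_map):
    """Map a vendor-specific record onto the organization's universal taxonomy."""
    return {universal: record[vendor] for universal, vendor in field_map.items()}

# Hypothetical vendor schemas mapped onto one universal taxonomy.
SCANNER_A = {"asset": "host_ip", "cve": "cve_id", "severity": "risk_score"}
SCANNER_B = {"asset": "target", "cve": "vuln_ref", "severity": "cvss"}

feed_a = [{"host_ip": "10.0.0.5", "cve_id": "CVE-2021-44228", "risk_score": 10.0}]
feed_b = [{"target": "10.0.0.5", "vuln_ref": "CVE-2021-44228", "cvss": 10.0},
          {"target": "10.0.0.7", "vuln_ref": "CVE-2019-0708", "cvss": 9.8}]

def merge(*feeds_with_maps):
    """Normalize every feed, then de-duplicate on (asset, CVE)."""
    seen = {}
    for feed, field_map in feeds_with_maps:
        for record in feed:
            row = normalize(record, field_map)
            seen[(row["asset"], row["cve"])] = row  # duplicate keys collapse
    return list(seen.values())

merged = merge((feed_a, SCANNER_A), (feed_b, SCANNER_B))
# The occurrence reported by both scanners appears only once in the merged set.
```

The key design choice is keying the merge on a stable identity pair (asset, CVE) so the same occurrence reported by multiple tools collapses into one record, which is what makes downstream analysis operate on a single clean data set.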
Such practices also ensure that all processes are based on a comprehensive and accurate data set, free of blind spots. For example, in vulnerability prioritization, it is more efficient to analyze one complete set of vulnerability occurrences than multiple sets from multiple scanners, running on-premise and in the cloud, each producing a different set of priorities.
Having centralized data repositories goes a long way toward reducing complexity, but the ability to model those data is a game changer. Having an always up-to-date model of hybrid network infrastructure, security controls, assets, vulnerabilities and threats opens up new insight into the interrelationships of an environment. Modeling can support a variety of security management processes, helping to unify teams around a single view of the organization's attack surface.
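What "insight into interrelationships" means in practice can be sketched with a toy model. Everything below is an assumption for illustration: the asset names, zones, CVE and the simplified firewall-rule format are hypothetical, and a real model would be far richer.

```python
# Toy sketch of a network model relating assets, vulnerabilities and
# security controls. Asset names, zones, the CVE and the rule format
# are all hypothetical, chosen only to show the idea of modeling.

model = {
    "assets": {
        "web-01": {"zone": "cloud",   "vulns": ["CVE-2021-44228"]},
        "db-01":  {"zone": "on-prem", "vulns": []},
    },
    "controls": [
        # Simplified firewall rule: which zone may reach which.
        {"type": "firewall", "allow": [("cloud", "on-prem")]},
    ],
}

def exposed_paths(model):
    """List (vulnerable asset -> reachable asset) pairs the controls permit."""
    allowed = {pair for c in model["controls"] for pair in c.get("allow", [])}
    paths = []
    for src, s in model["assets"].items():
        if not s["vulns"]:
            continue  # only vulnerable assets can originate an exposed path
        for dst, d in model["assets"].items():
            if src != dst and (s["zone"], d["zone"]) in allowed:
                paths.append((src, dst))
    return paths
```

Querying the model (here, `exposed_paths(model)`) surfaces a relationship no single data source shows on its own: a vulnerable cloud host that the firewall controls allow to reach an on-premise asset.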
Disconnected processes are a key ingredient in the recipe for failure in securing any environment, and the likelihood of process disconnects only increases in hybrid environments. One of the main reasons is the separation of teams responsible for different portions of the network. In hybrid environments, not only can there be separation between the security and operations teams, but the growing DevOps/DevSecOps team adds yet another layer of departmental complexity.
While each team has its own focus and processes, it is important that those processes are aligned toward a common goal. For example, while DevSecOps may have processes for "security in code," the services they create are part of a larger ecosystem and need to be monitored for changes in their risk status and their effect on compliance status. This requires that security teams have proper oversight of the cloud to discover and analyze vulnerabilities on services and containers, and that operations teams be able to test accessibility, cloud firewall rules, security tags and configurations for policy compliance.
Both of these examples are instances where the model of the hybrid environment is particularly useful. With an offline model updated regularly via application programming interface (API) connections, security and operations teams do not require administrative access to cloud platforms and can perform their processes without disruption to the cloud deployment. When risk or violations are discovered, security and operations teams can report back to DevSecOps and work with them to make the necessary changes to eliminate the issue.
For a fluid security strategy to endure, cyberhygiene processes to reduce risk and compliance violations must be ongoing. An all-too-common perception of cloud deployments is "set it and forget it," as services may have incredibly short lifespans. But, due to the replicative nature of DevOps (e.g., the move from image to instance, the easy construction of container-based services), it is important to understand that risk is also easily replicated, and on a larger and faster scale than in traditional, on-premise infrastructure. The cloud must have the same vigilance that is applied to other portions of the infrastructure, even if the tools and processes to achieve that vigilance are different.
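The replication of risk from image to instance can be made concrete with a short sketch. It assumes, purely for illustration, that every instance inherits its base image's vulnerabilities; the image name and CVE are hypothetical.

```python
# Sketch of risk replicating from image to instance, under the simplifying
# assumption that each instance inherits all of its base image's
# vulnerabilities. The image name and CVE are illustrative only.

images = {"base-web:1.0": ["CVE-2021-44228"]}

def spawn(image, count):
    """Launching N instances from one image replicates its risk N times."""
    return [{"image": image, "vulns": list(images[image])} for _ in range(count)]

fleet = spawn("base-web:1.0", 50)
occurrences = sum(len(inst["vulns"]) for inst in fleet)
# One vulnerable image becomes 50 vulnerability occurrences across the fleet.
```

The corollary is the inverse: fixing the one image (and redeploying) remediates every replicated occurrence at once, which is why image-level hygiene pays off at cloud scale.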
Ensuring that the data handling and unified management processes described herein become the new working order is crucial to the security of hybrid networks. This fluid security approach lays a foundation on which a mature program can be built, one ready to tackle the challenges of today and to support innovation into the future. While cloud may be the cutting edge today, there is no telling how dynamic computing will continue to evolve, or what will outdo it as the next big thing. If the rapid adoption of the cloud is any indication, whatever is next will be here sooner than anyone thinks.
Williams is vice president of products at Skybox Security. He brings more than 20 years of product innovation and thought leadership experience in the cybersecurity space. He has conceived, developed and delivered to market dozens of award-winning products in both the consumer and enterprise arenas. Before working at Skybox Security, Williams held various executive technology roles at leading security companies including McAfee, nCircle, BigFix, IBM and CloudPassage. Additionally, as a research and security strategist for the Information Security and Risk Practice at Gartner, Williams was influential in defining the security category and conducted analysis in multiple research areas including vulnerability and threat management, network security, and risk management.