
Mike Spanbauer, Senior Director and Technology Evangelist, Security, Juniper Networks
When it comes to securing an organisation, there are several questions to answer. Where are the organisation’s users predominantly located – onsite or remote? Are there any predicted shifts in these locations over the coming years? Is the organisation running on-premises applications in the data centre or primarily using cloud-based applications? How diverse are the applications and data types that users depend on for productivity?
No two organisations are set up the same. Some possess industry-specific custom applications, while others use common productivity apps in a simpler office environment. Combine these factors with distributed sites in multiple geographies, each with different service providers supporting the links, and disparate security controls, each with its own management system – and you get a sense of the number of variables that must be considered, if not addressed. Once an organisation maps out these variables, the complexity of securing its infrastructure becomes clear. Securing this tangle of technology has been an operational balancing act for many security and infrastructure teams for more than 20 years.
Intrusion detection systems, intrusion prevention systems, firewalls, endpoint protection, secure web gateways and many other individual technologies require specific expertise, coupled with applicable domain knowledge and some degree of operational investment, to deploy initially and then run smoothly over time. These are not “set and forget” technologies – they must be adapted to changing situations, as adversaries rarely sit idle.
In the past few years, we have seen an increased interest in moving these technologies to cloud-hosted services, delivered as a package. Who better to tune these technologies than the vendors who develop, deploy and run them every day? This innovative concept has a few different labels around the industry, but the one that resonates the most is the secure access service edge (SASE).
Different paths for different needs
Over the past decade, I have worked with many vendors in the security ecosystem and, therefore, can appreciate the appeal of SASE for organisations of all sizes. As a concept, SASE purports to deliver most of the critical capabilities that security architectures aspire to provide, including production-class uptime, frequent security efficacy tuning, deployment flexibility and operational ease of use.
This concept works very well for new sites or wholesale cutovers to this service. However, nearly all organisations have some degree of existing investment in technologies or processes, making wholesale adoption difficult, if not impossible. As a result, adoption for most organisations will be a lengthy project due to service and application dependencies, internal stakeholder buy-in and uptime mandates that do not permit a single operational window in which to execute a cutover.
Remember that data centre application question at the beginning of this article? For customers who still run applications in their data centre while also using SaaS and/or public cloud-hosted applications, the operational benefits of SASE can be undermined. When troubleshooting user experience or application issues, policy consistency is essential. Having multiple policy engines, whose configurations are difficult or impossible to reconcile, creates a new operational challenge that clouds rather than clarifies visibility. If one of the tenets of this new security service vision is operational agility, a single policy engine should be the goal. Otherwise, it creates new maintenance and day-to-day operational challenges, as well as incident and investigation challenges that delay response and resolution for operations teams.
What about performance to and from the point of presence for the SASE service? Or the aggregation of multiple links, spanning both point-to-point connections between data centres and traffic from campus or home offices to the cloud? SD-WAN solves some of this; still, it can be challenging to ensure a high-quality user experience when security services are introduced that obscure the telemetry and details often relied on for service health.
Choose your own adventure
Today, many offerings in the market have multiple policy configurations and deployment mechanisms for different locations in the network. With each instance having its own management UI, policy structure and event format, gaining visibility into the behaviour of any given service or application across the environment is difficult, and troubleshooting is nearly out of the question. Cloud-hosted security should make operational consumption simpler, not more complex.
User experience remains the measure of success for most organisations. Excellent security should aspire to be as transparent as possible to the end-user. However, some approaches propose additional UIs, additional on-premises hardware and multiple policy engines, all of which increase complexity. Meeting customers where they are on their journey to the cloud – while they still have data centres and distributed campus requirements – is the best way to address these challenges without further complicating operations.