Video Quick Take: Palo Alto Networks’ Tim Junio on Cybersecurity

Julie Devoll, HBR
Welcome to the HBR Quick Take. I’m Julie Devoll, Editor for Special Projects and Webinars at HBR, and today I’m joined by Tim Junio, Senior Vice President of Product at Palo Alto Networks, and co-founder of Cortex Expanse, a Palo Alto Networks company.

Today we are discussing Attack Surface Management and how network security has changed during the past year and a half.

Tim, thank you so much for joining us today.

Tim Junio, Palo Alto Networks
Thank you so much for having me.

Julie Devoll, HBR
So Tim, the pandemic caused an almost instant shift to more work from home and accelerated the move to the cloud. How has that impacted cybersecurity?

Tim Junio, Palo Alto Networks
So the move to the cloud and increased work from home has expanded what we call the enterprise information technology attack surface. The attack surface is basically any asset on your network that is also connected to the public internet and can be a point of entry for somebody attempting to do harm to your organization.
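The idea of an internet-facing asset can be made concrete with a minimal sketch: probing a host for commonly exposed service ports. The host and port list here are hypothetical placeholders, not part of any real assessment tooling.

```python
import socket

# Hypothetical set of commonly exposed service ports:
# SSH, HTTP, HTTPS, RDP, alternate HTTP.
COMMON_PORTS = [22, 80, 443, 3389, 8080]

def exposed_ports(host, ports=COMMON_PORTS, timeout=1.0):
    """Return the subset of ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # A successful connect means the service is reachable
            # from wherever this script runs.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

Any port this returns is, in the sense Junio describes, part of the attack surface: a reachable point of entry that defenders must know about.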

Julie Devoll, HBR
What risks does this new normal highlight for enterprises?

Tim Junio, Palo Alto Networks
So the problem with a wider attack surface is that it’s very difficult for large organizations to know absolutely everything that’s on the network all of the time. And when you add commercial cloud environments and people working from home, it increases the complexity of that problem, because if a new asset is stood up somewhere in the world– including for legitimate business reasons– and the employee who stood it up, now empowered to make purchasing decisions at the level of individual IT assets, didn’t follow the central protocol for setting up that asset and registering it with corporate, you now have a risk somewhere on the internet that may contain sensitive information about your organization or about its customers. So we’ve seen, over the last year, a wide range of those types of events where employees– in good faith– have unfortunately left access to sensitive data open on the public internet in unmonitored internet assets.

A second problem associated with having a wider attack surface and, more broadly, a large enterprise IT attack surface, is when there are sensitive systems exposed to the public internet, and what’s called a new exploit or vulnerability is published– meaning somebody in research discovered a new flaw in the security of that device– it becomes possible for malicious actors to start to look throughout the world for any systems that are vulnerable to that exploit or vulnerability, and can then attempt to take over those devices.

So what you see is organizations in a race to try and find all vulnerable assets when new research is published, and because inventory is hard to do, it’s difficult to patch all of those devices quickly, and that means that this race is often a losing race for large organizations like companies and government agencies, whereas if a new exploit or vulnerability is published, malicious actors can make use of it almost immediately to start looking for systems that they can break into.

Julie Devoll, HBR
So with the shift to the cloud and more remote workers, what does this do to the traditional ideas around network security and asset management?

Tim Junio, Palo Alto Networks
So the traditional approach has been to try to have a perimeter and keep bad things out of your network, and to have better and better perimeters– basically micro-segmentation– carving up the network so that you have what we would call defense in depth, so that if an attacker got in, it would be hard for them to move around.

That is still part of core principles for how to do enterprise IT and corporate security, but the overall architecture is moving in a different direction, which is toward continuous validation that authorized access is occurring to your systems and your data. So you’ve probably heard of this referenced as zero trust architecture, which is basically the idea that any asset on your network, or any identity on your network, any person on your network, has to be continuously validated as authorized.

And so there’s a lot of technology and development that works together to create a total zero trust solution, but what we’re seeing is organizations in a transition mode where most are not quite in a zero trust environment yet. They’re on some maturity path. And along the way, it can be quite messy: organizations may be using multiple cloud providers, have regional offices connecting via traditional gateways and points of presence, and have employees using workstations that are not configured for a zero trust model and instead have a classic VPN into the corporate network.

All of that legacy IT creates a lot of risk for organizations, even as they’re trying to upgrade to newer, more sophisticated technologies. So the point is, for any large organization, if you think of the cumulative history of decades of enterprise IT, that is already a risk, because that needs to be upgraded and built into new network architectures. And then if you look at something such as commercial cloud environments, those are presently, for the most part, loosely managed, I would say, by large organizations.

Or they may be using tools like what we sell at Palo Alto Networks in our Prisma Cloud product, or comparable management tools for cloud security. But usually they’re not uniformly applied. Companies might be in AWS, Azure, Alibaba, GCP– which is Google Cloud Platform– and may not be uniformly applying their security software across all of those different commercial cloud platforms. And not applying the policy and software systems uniformly increases risk, because that’s where bad actors are going to try and get in.

Julie Devoll, HBR
So you mentioned that scanning from the inside out, as traditionally done in vulnerability management (VM), doesn’t work. What’s wrong with this approach?

Tim Junio, Palo Alto Networks
Traditional vulnerability management scanning works for a particular part of the solution, which is finding whether or not a known system is exploitable– so if the software can actually be hacked to do something bad. However, vulnerability management is particularly weak at discovery of new assets.

And so if you have things on your network that you don’t know about, either because you never had a complete inventory, or maybe you just acquired a new subsidiary and you’re still learning about what their IT posture is, or maybe an employee did something that was not authorized or exceeded the scope of what they were supposed to do– such as set up assets in a cloud environment that was not approved by corporate– all of those scenarios can result in not having a complete inventory to feed into vulnerability management products. And so if you don’t even know that the asset exists, it’s very difficult to check whether or not it is secure, and then to prioritize which of those assets need to be secured most quickly.
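The inventory-gap problem Junio describes– assets that exist but were never registered– reduces, at its core, to a set difference between what external discovery finds and what corporate records say. Here is a minimal sketch with invented hostnames:

```python
# Minimal sketch: flag assets discovered on the public internet that are
# missing from the registered corporate inventory. All data is invented.
registered_inventory = {"mail.example.com", "www.example.com"}
discovered_assets = {"mail.example.com", "www.example.com", "dev-db.example.com"}

# Anything discovered but not registered is an unmanaged asset:
# it exists, it is reachable, but no VM scanner knows to check it.
unknown_assets = discovered_assets - registered_inventory

for asset in sorted(unknown_assets):
    print(f"unmanaged asset found: {asset}")
```

This is why discovery has to precede scanning: a vulnerability management product can only assess hosts that appear in the left-hand set it is given.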

Julie Devoll, HBR
So getting visibility into all of these assets seems like it could be an arduous task that takes a long time. How long does it really take?

Tim Junio, Palo Alto Networks
So it depends on the size of the organization. If you’re a major internet service provider, it can take you literally years, but let’s call that the upper boundary. For most organizations, you can get a pretty good understanding of what your assets on the public internet are, and build an inventory, in two to three weeks through a combination of data, software, and people who can help curate network maps for you.

Julie Devoll, HBR
So then once a company does have a complete record of all its assets, what do they need to look out for?

Tim Junio, Palo Alto Networks
So for all large organizations, continuous monitoring is critically important, because there are new exploits and vulnerabilities being published regarding assets on the public internet on a regular basis, and very severe ones at a rate of roughly one every other week right now. So if you pay attention to what’s happening in internet security news, every couple of weeks some major product manufacturer– and by major product manufacturer, I mean the companies that are creating the guts of the internet, the hardware and software that the internet runs on, and that your large organization depends on– every couple of weeks there has been some published internet-facing exploit, where it could be that the administrator login has a login bypass vulnerability.

It could be that you can do remote code execution on the device, meaning you would be able to install software remotely over the internet. Those types of access risks that come from newly published exploits and vulnerabilities, in addition to occurring on a regular basis and getting published publicly on a regular basis, because the companies are publishing that security information themselves– in addition to that, we’re seeing criminal actors, malicious actors begin looking for those exploitable systems very quickly after publication of the new security information, the new exploits and vulnerabilities.

So we’re seeing a pattern where things on the internet are broken. The bad people– criminal actors, nation-state actors– find out very quickly how that particular asset is vulnerable, and then they start going after it faster than most large organizations are able to find and fix. So large organizations– like companies, government agencies– need to use their inventory, most importantly, for continuous monitoring.

So continuous monitoring means understanding the inventory at all moments in time, as well as the security status of each asset within that inventory. And that’s really important, because what we’re seeing is, on a regular basis, product manufacturers and security researchers are publishing new exploits and vulnerabilities regarding assets– meaning software and devices– connected to the public internet.

So if you’re a large company or government agency and a new exploit or vulnerability is published regarding some type of asset that you are operating on your network, you need to very quickly find all of them and ensure that they’re patched such that they’re not vulnerable to malicious actors trying to find them and break into them over the public internet.
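The find-and-patch race described above amounts to cross-referencing each new advisory against the asset inventory. A minimal sketch, with an entirely invented inventory and advisory feed, might look like this:

```python
# Minimal sketch: match newly published advisories against an asset
# inventory to find what needs patching first. All product names,
# versions, and hosts here are hypothetical.
inventory = [
    {"host": "vpn-gw-1", "product": "AcmeVPN", "version": "9.1"},
    {"host": "web-01", "product": "nginx", "version": "1.25"},
]
advisories = [
    {"product": "AcmeVPN", "affected_versions": {"9.0", "9.1"},
     "severity": "critical"},
]

def assets_to_patch(inventory, advisories):
    """Return (host, severity) pairs for assets matching an advisory."""
    hits = []
    for adv in advisories:
        for asset in inventory:
            if (asset["product"] == adv["product"]
                    and asset["version"] in adv["affected_versions"]):
                hits.append((asset["host"], adv["severity"]))
    return hits
```

Continuous monitoring, in this framing, is simply running this match every time either input changes: a new advisory is published, or a new asset appears in the inventory.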

Julie Devoll, HBR
Tim, thank you so much. This has been a great discussion.

Tim Junio, Palo Alto Networks
Thank you.

Julie Devoll, HBR
To learn more about Attack Surface Management, visit paloaltonetworks.com.
