EP. 40 AI Era Cloud Breaches with Matthew Toussain

About This Episode
Join host Matt Pacheco as he sits down with Matthew Toussain, Founder and CEO of Open Security and former US Air Force cyber warfare specialist from the military’s first specialized cybersecurity class. In this episode, Matthew reveals the shocking reality of cloud security in the AI era—from unsecured S3 buckets that still plague enterprises to the terrifying new world of AI-generated vulnerabilities being created faster than defenders can handle.
Know the Guests
Matthew Toussain
Founder and CEO of Open Security, Inc.
Matthew Toussain is the Founder and CEO of Open Security, Inc., bringing over 15 years of cybersecurity expertise shaped by his service as one of the first US Air Force cyber warfare specialists. He is recognized for his work in penetration testing, red team operations, and as the Lead Developer of the Subterfuge Project, a widely adopted security tool released at DEFCON 20. Through Open Security and his role as IANS Faculty, Matthew delivers military-grade cybersecurity practices to the private sector, with a focus on cloud security, vulnerability management, and AI security.
Know Your Host
Matt Pacheco
Sr. Manager, Content Marketing Team at TierPoint
Matt heads the content marketing team at TierPoint, where his keen eye for detail and deep understanding of industry dynamics are instrumental in crafting and executing a robust content strategy. He excels in guiding IT leaders through the complexities of the evolving cloud technology landscape, often distilling intricate topics into accessible insights. Passionate about exploring the convergence of AI and cloud technologies, Matt engages with experts to discuss their impact on cost efficiency, business sustainability, and innovative tech adoption. As a podcast host, he offers invaluable perspectives on preparing leaders to advocate for cloud and AI solutions to their boards, ensuring they stay ahead in a rapidly changing digital world.
Transcript
00:00 – Cloud Career Journey
Matt Pacheco
And welcome to the Cloud Currents podcast, where we explore the strategies and technologies shaping the future of cloud computing and cybersecurity. I'm your host, Matt Pacheco, and I help businesses understand cloud trends so they can make better decisions about their IT strategy. Today in our episode, we're diving into the critical intersection between cloud and cybersecurity with Matthew Toussain, Founder and CEO of Open Security. Matthew brings a unique perspective from his background as a former US Air Force cyber warfare specialist, where he was part of the military's first specialized cybersecurity class. Really cool. Matthew's journey from military service to entrepreneurship offers valuable insights for organizations navigating complex security challenges. His company, Open Security, brings military-grade cybersecurity practices.
That's a mouthful, but it's a really awesome way to describe what you guys do, and you bring it to the private sector with particular expertise in cloud security, vulnerability management, and emerging threats like AI-enabled attacks, which we'll talk all about today, since AI is a hot topic. Throughout the conversation, we'll also explore cloud security incidents and examples, the implications of AI on security postures, vulnerability lifecycle management, and approaches to building successful security teams in this era of potential security talent shortages. So that's a lot I just threw at you. Short version is: welcome to the podcast, Matthew. We're happy to have you.
Matthew Toussain
Absolutely. Matt. It is fantastic to be here. I can't wait to get started. You're right. That is a ton of different topics to go through and I can't wait to have the conversation about basically all of them, especially the AI one. AI is, it's just fun right now.
Matt Pacheco
Oh. And it gets more fun as we advance and every few months it gets even better. So I'm really excited to talk about that with you, but let's start a little bit about you and your career path. So you took a, as I mentioned before, you took a very interesting turn from military to cyber security in the private sector. Can you walk us through your entire journey, where you started and where you're at today?
Matthew Toussain
Absolutely. Interesting is a very nice way to put it. The turn might have been a little more rough than interesting. I, in fact, was supposed to be a lawyer. I don't know if you're familiar with the old show JAG from back in the day, with Harm and everyone in their Navy whites and all that stuff. Oh man, I loved that. I was absolutely going to be a JAG for the entirety of my life. And then I got in the Air Force, and they had just started up this new cybersecurity thing, and they had a club about it. I was like, you know, I like computers, why not give it a shot? It was the best thing ever. And I immediately changed kind of the destiny of my career as a result of that.
So I went to the Air Force, did a lot of work with cybersecurity when we were starting to kind of build that as a program within the military in general. I got to do some work with the NSA, which was fantastic. And then at the back end of that, I got to make my own little company to do cybersecurity work and take all of the things that we'd learned and employ them for organizations out there in the wild.
Matt Pacheco
It's definitely interesting. Really cool. So how would you say having experience in the Air Force kind of shaped your approach to your work in the private sector?
Matthew Toussain
That is a very interesting question. Most of the time when folks ask me that kind of question, they're asking how I learned everything in the Air Force. And then my weird response is, actually, I didn't, because the government is terrible at teaching anybody how to do anything. But the government is great at figuring out larger, more strategic problems. And so one of the things that I've been able to take and move into the private sector is an understanding of what really matters from a risk perspective. And I'll give a little bit of a story here. When the Colonial Pipeline breach happened, what a lot of people don't know is that although the pipeline went down, the pipeline itself, the operational technology, the cyber-physical systems related to the pipeline, were never touched by the attacker. It was their business systems.
And as a result, the organization made the business decision to turn off the pipeline, because they didn't know how to make money without it. And I think one of the best things the military does in teaching you about the more strategic, larger implications of things is the interrelationship between them, where you can look at something and recognize, hey, look, there are secondary and tertiary exploitation realities that are going to happen as a result of this one core problem that are much larger than the individual issue we're looking at. Because I think, as a nerd myself, and I am definitely a nerd, I like to code from a cloud-based perspective. I was just working on a full CI/CD GitHub Actions setup with a little bit of a cloud AI workflow on Memorial Day weekend, because what else am I going to do?
You know what I mean? Fun. But from all of those things combined, what we recognize is that the nerd stuff is fantastic, it's cool, and it's a lot of fun, and it might have really big implications. But the larger, more strategic fallout starts there and really relates to how it gets used in the real world by the folks who, you know, live and breathe it.
Matt Pacheco
What led you to make that move? Like, what was that spark that led you to found Open Security? And how has the company evolved over time since its inception?
Matthew Toussain
So Open Security has become a little bit of a lifestyle company for me. So it's evolved kind of as I've evolved as well, which is kind of interesting type of situation. But the way it all started is I was in the Air Force, I was super duper interested in cyber security. I mean, I had changed my whole directive, if you will, my prime directive from going into law into going into cybersecurity. So I was all in about that. But graduating from the Air Force Academy, I was an officer. And as a result, what that meant was I was a manager and I didn't really have to do very much actual hands on keyboard technical things.
And so I went to my boss at the time, my commander, and I said, hey, look, I would love to be able to do this on the side as well. I'd like to do off-duty employment so that I can keep my skills as relevant as possible and bring that knowledge back to my, you know, Air Force job, and it might help with what we're actually doing. I got approval for that, and that's how Open Security got started. But effectively what happened is I was doing this so that I could kind of keep my skills a little bit more vivid, if you will. But a lot of my friends and colleagues looked at that and said, hey, you know what, I want to do that too.
And so that's kind of how Open Security got started: everyone was looking at this and saying, hey look, we want to do that too. And I went from getting hands-on-keyboard experience to being more of a manager again, but now a manager of my colleagues as opposed to myself, which was not the transition I had intended, but, you know, I was the hero that Gotham needed, if you will.
Matt Pacheco
That's a cool origin story. I love that. So can you tell us a little bit more about Open Security and what you do and what you provide to your customers just so we can level set for the rest of the questions I'm going to ask you for the rest of the episode?
Matthew Toussain
So Open Security is a cybersecurity services consulting firm. We do penetration testing, vulnerability assessment, and a bunch of other services as well. Sometimes it's incident response, so we might show up on the worst day that an organization has ever had. They're compromised, everything is ransomwared out the wazoo, and we're helping them negotiate to pay Bitcoin to get their systems back. One of the things that I'm most passionate about is vulnerability assessment, specifically because when I show up for incident responses, the vast majority of the time it's something basic, something really obvious that could have been fixed and didn't need to happen in the first place. From a cloud perspective, one of the more recent vulnerability, sorry, incident responses that we were doing was for an organization that had MSSQL servers.
So Microsoft SQL servers that were exposed to the Internet, because they had a cloud implementation in AWS, and so there's opportunistic capability to go after those things. They had passwords that were embedded in source code inside of internal applications. Who knows how those got out? But they were on every single system in the entire environment. And they just didn't think about it because they were compiled, and if they're compiled, you can't see them, right? You can't. So the attackers logged straight into the environment and immediately started pivoting laterally into the internal environment to try to get access to domain controllers, internal systems, all of that stuff, which was, you know, wildly dangerous for that organization. And we do all of this type of stuff.
We often show up on the worst days of organizations, but our ideal situation or circumstance is to show up on an average day and make sure that the worst day never happens.
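The embedded-credential problem described above is what lightweight secret scanners are built to catch before anything gets compiled. A minimal sketch in Python, with purely illustrative patterns and a hypothetical sample string (production tools such as gitleaks or truffleHog use far richer rule sets):

```python
import re

# Hypothetical patterns for illustration only; real scanners maintain
# hundreds of rules tuned per credential format.
SECRET_PATTERNS = [
    re.compile(r'(?i)(password|passwd|pwd)\s*=\s*["\'][^"\']{4,}["\']'),
    re.compile(r'(?i)(api[_-]?key|secret)\s*=\s*["\'][A-Za-z0-9/+=]{16,}["\']'),
]

def find_embedded_secrets(text: str) -> list[str]:
    """Return source lines that appear to hard-code a credential."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits

# Hypothetical source snippet: one safe line, one hard-coded password.
sample = 'conn_str = "Server=db1"\npassword = "Sup3rS3cret!"\n'
```

Running a check like this in CI, before the build step, is cheap insurance against the "if it's compiled, you can't see it" assumption.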
09:06 - Common Cloud Security Gaps
Matt Pacheco
I like that. That's kind of cool. So digging into cybersecurity incidents and response, you just shared a story of a client you've worked with having potential gaps. Can you elaborate on some of the more common security gaps you see in cloud environments?
Matthew Toussain
I can. Unfortunately, the most common vulnerability that I tend to see in cloud environments is so darn, not just basic, but annoyingly basic. Right. It's passwords, and it's not just passwords, it's authentication in general. So this might be your IAM roles that you've got within AWS. It might be the way that you're integrating those IAM roles into an identity management system, like, let's say, JumpCloud or SailPoint or something like that. It might just be straight-up basic passwords or API keys, maybe API keys that are embedded in source code. Maybe we're doing infrastructure as code, and if you're doing infrastructure as code, every credential for everything must be somewhere. Why does that matter? Because if you're doing it right, maybe those are environment variables that aren't actually accessible to an attacker.
If you're doing it wrong, though, then maybe they are a little bit accessible to the adversary. These tend to be the most common credential-style problems that we see in cloud situations. The next style is no passwords at all: unsecured S3 buckets. Literally just unsecured S3 buckets. I can kind of full stop right there. Yeah, we have Azure cloud storage too, we've got buckets of all kinds and sorts. But S3 bucket storage specifically being unsecured has been the underpinning of so many cloud-related compromises that it is absurd.
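The environment-variable pattern mentioned above can be sketched in a few lines of Python. The variable name `DB_PASSWORD` is a hypothetical stand-in; in a real deployment the value would be injected by a secrets manager (AWS Secrets Manager, Vault, and the like) rather than set by hand:

```python
import os

def get_db_password() -> str:
    """Fetch the credential from the environment at runtime, never from source.

    Nothing secret lives in the repository or the compiled artifact;
    DB_PASSWORD is an assumed variable name for illustration.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Fail loudly rather than fall back to a hard-coded default.
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

The design point is that the secret exists only in the running process's environment, so a leaked repo or binary discloses nothing.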
Matt Pacheco
And people must have that aha moment when you share that with them, like, wait, what? Do they not realize that's an issue? Why is it overlooked so often?
Matthew Toussain
I think it's abstraction. Yeah. So why is it overlooked so often? I would say that it's abstraction. If we're looking at, say, a cloud environment, for example, there's so much difference between a cloud implementation and just building something in code, and they might be literally the same application that's being put out there. But if you're doing everything through GitHub Actions for a CI/CD workflow, do you have something that is, you know, scanning your Terraform configuration to see whether the microservices you're implementing are properly secured or not? On the other hand, if you have a more monolithic implementation, everything is kind of seeing-is-believing, because the same thing that you have in dev is the same thing that you have in prod is the same thing that the attackers are actually experiencing or going after.
That's not to say that the cloud-style implementation is less secure; it just means that it's more complex, and more situationally complex on a regular basis. This means that attackers have a lot of opportunity to aggress those kinds of things in a way that they might not for monolithic infrastructure, whereas with monolithic infrastructure we're still always focused on the patch management problem. So neither of these situations is necessarily ideal, but they have different styles of contemporary attack surface that we see exploited by adversaries.
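The kind of CI check described here, scanning infrastructure-as-code for insecure settings, can be illustrated with a toy policy over a parsed plan. The dict shapes below are simplified stand-ins; real pipelines would run a dedicated tool such as tfsec, Checkov, or OPA against actual Terraform plan JSON:

```python
def find_open_buckets(resources: list[dict]) -> list[str]:
    """Return names of S3-style bucket resources whose ACL permits public access.

    `resources` mimics a parsed Terraform plan; the structure is a
    simplified, hypothetical stand-in for illustration.
    """
    flagged = []
    for resource in resources:
        acl = resource.get("acl", "private")  # assume private when unspecified
        if resource.get("type") == "aws_s3_bucket" and acl != "private":
            flagged.append(resource["name"])
    return flagged

# A toy "plan" with one misconfigured bucket and one safe one.
plan = [
    {"type": "aws_s3_bucket", "name": "logs", "acl": "public-read"},
    {"type": "aws_s3_bucket", "name": "backups", "acl": "private"},
]
```

Failing the build when the flagged list is non-empty is one way to surface the abstraction problem before it reaches prod.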
Matt Pacheco
And I guess you mentioned environments and cloud environments. Are you talking more about the public cloud, so Azure, AWS, Google Cloud, or kind of multi-cloud, or even hybrid, where people have on-prem infrastructure? Do you see any difference between the platforms in how you approach incident response and securing them? And are the issues different from platform to platform or approach to approach?
Matthew Toussain
That's a fantastic question. I wouldn't actually say that they're necessarily different from platform to platform. I do believe that from an incident response perspective, Google Cloud Platform, AWS, and Azure all three offer essentially the same capabilities. And if we're seeing differences from platform to platform, they tend to be related more to configuration than possibility space. And if we're talking about configuration, the harder it is to manage the configuration, the more likely it is we're going to find vulnerabilities. And from that perspective, I can rank the three: Azure's the worst, then it's AWS, and then it's GCP. Why? Because that's the level of complexity. If you've ever seen Azure's interface, it's a nightmare. It's an absolute nightmare.
AWS is so ridiculous that we prefer to go through the AWS CLI and, like, manage our infrastructure via code rather than actually use the interface itself. Whereas with GCP, that's a lot less common to see. Now, GCP is of course the third most utilized out of the three, for good reason, and I'm not necessarily suggesting that you leverage it over the other platforms because it's more secure. I'm just saying that ease of use is a really important thing to recognize, because the more complex something is, and the more of that complexity is interfaced directly with the user, the more likely it is that we have configuration setups that aren't as secure as they could be. Now, all of that being said, I would say that most implementations of all three platforms tend to be reasonably similar.
You do have that hierarchy, but they tend to be reasonably similar. Where things go crazy is when we start to see hybrid cloud environments, where we've got a public and a private set of implementations. The MSSQL example that I was giving a little bit ago, that was a private cloud with a little bit of public cloud. So they had AWS and they had an internal private cloud as well. And what they decided to do, and this is not too atypical, is they just created a straight VPN tunnel between the two that bridged the environments together, and that made it possible for them, and by them I mean the attackers, to directly pivot and move laterally into the internal domain environment once they got that initial compromise, which is terrifying. Like, that's not necessarily how you want to design environments.
And I would say that if you have a hybrid environment, you typically have a lot more attack surface than if you have any environment that is fully public, regardless of which three providers you tend to leverage. And of course there's more providers, don't get me wrong. But those three tend to be the largest by far.
15:10 - AI Impact on Cybersecurity
Matt Pacheco
Cool. Thank you for that. Yeah, I agree 100%. The more complex, the more opportunity for vulnerabilities. And we'll go a little more into vulnerabilities in a bit. But I want to get into the fun topic right now, the really, really fun topic of AI and security, because I know there's a huge intersection between the two. I know a lot of people in the space are leveraging AI to identify threats, do all that cool managed detection and response, all that fun stuff. Let's start with an easy one. AI is a major trend in cloud security. How are you seeing organizations implement AI? And do you think they're doing it too soon, before they're ready for it?
Matthew Toussain
Yes. So yes to both, technically. So how am I seeing organizations implement environments with artificial intelligence enabled within them? What do I mean by that? Everything that you could possibly imagine is getting thrown in, like everything but the kitchen sink at the moment when it comes to artificial intelligence implementation. We've got AI chatbots doing retrieval-augmented generation, serving data that is, say, corporate-sensitive to interfaces with third-party customers. We're seeing artificial intelligence utilization for standardized logic inside of applications, though a little bit less of that, to be honest. We are seeing a huge amount of artificial intelligence being leveraged for automatic generation of code right now. So agentic artificial intelligence doing generation of code and such, very common. And we're seeing a huge amount of this. I have done quite a bit of development myself.
In fact, I've done quite a bit of development with agentic AI myself as well, and I can tell you that it's shockingly powerful. It creates sometimes really good code, but most importantly, it creates a lot of code, and that code tends to work just well enough. And if we're talking about code that works just well enough, we're also talking about a lot of cybersecurity attack surface. So I would say that if there's something keeping me up at night right now from a cybersecurity perspective, it's artificial intelligence, and it is the generation of code via artificial intelligence models that is being automatically run into production and deployed by many organizations out there. Because it doesn't necessarily give us new categories of vulnerabilities.
Like, we're not talking about AI specific vulnerabilities here, but we are talking about a lot of the same OWASP top 10 vulnerabilities, just more of them faster by folks who know a lot less about development than what our traditional folks have known for the last 20 years. And that's rather terrifying.
Matt Pacheco
That is terrifying. Would you say, how do I put this, that AI is helping more with identifying vulnerabilities, or is it doing more to create vulnerabilities?
Matthew Toussain
I love this question. Let me actually add a third pillar to it. So is AI creating more vulnerabilities, is it helping to identify more vulnerabilities, and is it helping to exploit more vulnerabilities? Because there's actually the blue team, the red team, and then the resiliency side of the topic. And I think this is really important to recognize, because the first folks who have really started to adopt artificial intelligence very heavily are our developers. So the most significant implication of artificial intelligence as a result of that is the creation of new vulnerabilities, faster. The second biggest community in this are actually the defenders. People think the attackers are going to, like, take up all of this AI stuff and use it. But what folks don't realize is how big the dang cybersecurity market is right now.
I was just at RSA a month ago, and every vendor on the conference floor had AI in the name. In fact, I met two different people whose names were Al, and they spelled their name capital A, capital L, because they didn't want to get mistaken for AI. So that's how bad the darn thing was. So from a cyber defensive perspective, we are throwing money at this like never before. And I do think that some of those things, not most of them, but some of those things, are actually bearing some very significant fruit. If we look at some of the work that CrowdStrike has done from an artificial intelligence defensive perspective, really valuable. If we look at ephemeral endpoint detection and response systems like Prisma Cloud by Palo Alto, some of the AI work that they've done, really potent.
And so I think that from a cybersecurity perspective, we're seeing a lot more implementation on the defensive side right now of AI than we're seeing on the offensive side. Which brings us to the third piece. How are adversaries employing artificial intelligence to attack us more heavily? And I think the number one way that we're seeing that happen right now is actually the bottom of the barrel easy attacks, not the hardcore stuff. People always think AI is going to elevate the attack. It doesn't. All it does is take the bottom of the barrel stuff and make it a lot more common. Let me give you a direct example. So the MGM breach, that happened a year and a half ago now.
So when the MGM got breached, effectively it was folks with a British accent calling them up on the phone and saying, hey, let me reset some passwords and give me access to accounts. Now, if you can talk somebody into giving you access, that's obviously bad, right? But that's how social engineering works in general. Phishing attacks, same kind of idea, maybe a little bit easier with a link click, but still it's really the same thing: convincing somebody to do something they shouldn't be doing so we can get access to their systems as if we were them, ourselves being the attacker in this kind of situation. Now, the problem with voice-based phishing, or potentially the benefit from a defensive perspective of voice-based phishing, is that we're trading off one to one: one minute of human time on the attacker side per one minute of the victim's.
With email-based phishing, I might spend a little bit of time creating a ruse to send a phishing email out to a thousand different email accounts at the same time. But with voice-based phishing, every single call takes my time at the same rate it takes time from whoever I'm trying to exploit. This is where artificial intelligence really gets interesting, because if AI can actually impersonate voices to a certain degree, then I can suddenly scale my voice-based social engineering attacks the same way that we traditionally were able to do with email-based social engineering. We're starting to see attackers do this, and we're expecting over the next year to two years for this to become kind of a primary vector of attack by adversaries.
And if you are more vulnerable than the MGM Grand Hotel or Caesars Palace, then suddenly this might be a really large issue for you. So if we look at this from the perspective of three different tiers or pillars of attack, we've got developers making more vulnerability attack surface, we've got defenders trying to protect that attack surface better, and then we've got attackers leveraging AI to aggress the attack surface even more heavily in a more automated sense. So it's kind of a one-defensive-to-two-offensive style situation. I don't think that we're winning as a result of that, but it is kind of a full spectrum of change that's happening at the moment.
Matt Pacheco
Wow, that is. That is alarming. Definitely. I haven't experienced personally the AI calls yet. I've heard of people getting them with the voice of a family member. Sometimes even it's. It's kind of weird. But I've never actually had.
Matthew Toussain
Maybe it was too good or maybe.
Matt Pacheco
I just don't pick up the phone because I'm a millennial.
Matthew Toussain
That's me. That's definitely me.
Matt Pacheco
But anyways, you mentioned developers creating more attack surfaces potentially through their code, and then also attackers using and leveraging AI to potentially increase those low hanging fruit attacks. What are some pieces of advice you would give to an organization coming to you on how to defend against some of those things or high level advice on what to do about these things?
Matthew Toussain
Absolutely. I think that's a great question. Let me actually pivot the AI vulnerability space into a slightly tangential vulnerability style: M365 Copilot is what I'm talking about here. So M365 Copilot is AI. It is pseudo-cloud-related, I guess. It's very cloud-related if you really think about it, because it's all connected to the Microsoft Graph API, and we're all talking about O365 at this point, all of those types of things. Does this make you more vulnerable than not having M365 Copilot at all? And the answer is a little bit nuanced, because what do we even mean by vulnerability? Vulnerability, or cybersecurity risk, is this kind of confluence of impact and likelihood: the impact of a negative event occurring times the likelihood of that actually happening in the first place.
What we see with M365 Copilot is less of an increase in the impact and much more of an increase in the likelihood. And so what's effectively happening here is we were already vulnerable, we just didn't know it and the attackers couldn't figure it out. But when we add AI into the pipe, it suddenly becomes a lot easier for the attacker to figure it out. For example, I don't know if you've used SharePoint before, but I have. And when I say the word use, I'm using the word use very weakly, because it's impossible to use SharePoint. I don't know if you've ever tried to find something on SharePoint, but it's impossible. You know what can find things on SharePoint? M365 Copilot. And so what most enterprises don't know right now is how much is exposed to their average user via SharePoint in general.
And the users don't know that either, but Copilot does. And if your users are asking questions of Copilot, they're going to get answers that relate to your real environment. I'll give a couple of specific examples. What if you have a performance improvement plan going on in your organization because somebody might be about to get fired? Is that performance improvement plan stored somewhere on OneDrive or SharePoint inside of your Microsoft-centric environment? If so, does that mean Copilot has access to it? Yes, it does. If your entitlements management, which is to say permissions on individual files, is not set properly, and who amongst us has proper permissions on everything, then guess what? Copilot knows everything.
A colleague of mine is actually working on an incident-response-style engagement right now for a major organization where Copilot was able to identify criminality on behalf of the C-level individuals inside of the organization, and an employee found out about it and started sending emails to other folks within the organization about it. This is a massive problem. This is the kind of problem where the FBI gets involved. This is the problem where, on the low end, everybody involved gets fired, and on the high end, it's a nightmare. And it's all related to entitlements management. The impact was already there, but the likelihood is being amplified by artificial intelligence.
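The risk framing in this exchange, impact times likelihood, fits in a few lines of code. The numbers below are illustrative only, showing how an assistant that surfaces misconfigured files moves the likelihood term while the impact term stays fixed:

```python
def risk_score(impact: float, likelihood: float) -> float:
    """Cybersecurity risk as a confluence of impact and likelihood."""
    return impact * likelihood

# Illustrative, made-up values: the sensitive file was always reachable
# (impact unchanged), but an AI assistant that can find it on request
# raises the likelihood of discovery.
before = risk_score(impact=9.0, likelihood=0.1)  # buried in SharePoint, hard to find
after = risk_score(impact=9.0, likelihood=0.6)   # surfaced by a Copilot-style query
```

Same data, same permissions, but the product of the two terms, and so the risk, goes up several-fold.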
Matt Pacheco
Wow, that is crazy. Yeah, I was going to ask about permissions. So is that on an administrator level, or is that something each individual has control over? Like, when I'm in SharePoint, well, I'm working through Teams, and Teams uses SharePoint in the background, and it's all SharePoint at the end of the day. When I, I guess, do permissions on my files, I can say who I want. But is Copilot bypassing that, or is that something that has to happen on the administrative level to make sure it doesn't happen?
Matthew Toussain
Great question. So Microsoft was very stringent about making sure that Copilot can't access anything that you can't access, which is a good thing, right? It's a fantastic thing. So from the perspective of Copilot, it really isn't increasing that impact metric, as in it isn't giving you access to anything you didn't already have access to before. It's just that you were already much more vulnerable than you realized, and what we're finding out with Copilot is that all of that vulnerability space was already there. Let me give you one extra example here. We're starting to see a lot of extortion-only-style compromise events by adversaries.
And so what that effectively means is, traditionally the adversary might get into your environment, they might escalate privileges, they might move laterally, get domain administrator, look for your backups, delete your backups, encrypt all of your systems, and then extort you to get those things back. Now what we then started to see as an evolution of this type of compromise is double extortion. This is also often referred to as multifaceted extortion. And the idea there is, hey, pay me if you want your systems back, and also, if you don't pay me, I'm going to give all of this information to the Internet, your competitors are going to have access to it, everybody's going to know that you were compromised, et cetera. This is double extortion. We see double extortion in almost every single ransomware event that happens today.
However, one of the things that's really interesting, and a lot of folks don't know about this, is that we see a lot more, about 40% more, extortion-only attacks against organizations than we see extortion plus ransomware. What that effectively means is they get access to your data and they say, pay me or I'm going to send this data out to the Internet and everybody's going to know that you were hacked. That's successful enough for most attackers that they actually do it more frequently than ransomware. Guess what AI gives you? As the first user that you phished, you get access to all of the information inside the environment that a standard domain user has access to.
What that means is I can steal all of that data and say, I have all of this and I'm going to give it all to the Internet unless you pay me right now. And so I think that M365 Copilot specifically is making it much more common for us to see ransomware-less extortion attacks against organizations, because attackers don't have to run nearly as much of that attack campaign. They don't need to move laterally, they don't need to escalate privileges, they don't need to look for your backups. All they have to do is get access to that first user, ask Copilot, and then extort you.
29:07 - Vulnerability Assessment Best Practices
Matt Pacheco
My mind is blown. I had no idea that was a vulnerability. And then there are the implications of that, because, one, you have people who could be phished. All it takes is one person and they have access to your systems, like you said. But that also brings up internal threats. Sometimes there are internal malicious actors, depending on the company, who could share that information as well. Especially now, when you think about companies that have contracts with the government, there are a lot of different scary things there. So I guess my next question is about assessing your organization for vulnerabilities. Can you tell us some of the best practices organizations can actually use to start looking at some of these things?
Matthew Toussain
Yeah. So from the perspective of most organizations out there, what can you really do to help secure yourself? I do think that entitlements management is really important. I hate to say this because it's so boring, but what is old is new again. All of the things that we didn't want to do from a security perspective in the past, because we were lazy, or the juice wasn't worth the squeeze, or we were going to put it off to next year, all of those things are coming back to bite us. Because if we look at AI vulnerabilities, they're not really increasing the attack surface. They're just enabling adversaries to exploit the things that we didn't bother with in the past because they just weren't worth the time invested. Let's take a look at a specific example, from, let's say, a web application perspective.
So let's say that we have a traditional web app, right? Maybe it's a banking website, just for example, and it's got a login page. That means you type in your username, you type in your password, and you log in, hopefully successfully. Guess what? Because you have a login feature, that could enable SQL injection attacks. Are you doing bounds checking? Are you doing proper filtering? Yes, no, maybe. That's a standard OWASP Top 10 vulnerability. These have been around for a long time, so you're probably secured against them. Well, from an artificial intelligence perspective, what you're doing there is securing the inbound information. You're saying, hey, look, make sure this inbound information is valid and not malicious, and if it is, we'll filter it. But what about the outbound return information?
You see, the way we're implementing artificial intelligence in a lot of mobile applications, in serverless applications, even in monolithic servers, is we say, hey, look, we're going to treat this artificial intelligence agent as if it were our database. You get a request in, you give it to your database, and deterministically the database is always going to give you the same response back. But guess what? AI is non-deterministic. And so what that means is I can ask the same question, get through the filter, because the question was allowed through the filter, and AI is going to give me a different response back every single time. And so if I ask, hey, give me an example of a JavaScript message, and it gives me a pop-up as a result, that is an exploitation opportunity.
And if your guardrails and filters inside of AI models aren't protecting against those kinds of things, then that type of attack surface is suddenly possible. But what is the actual defense? It's the same thing as we had before. With typical web applications, we're filtering the information coming in. Guess what? The exact same filters are now necessary for the information coming back, because we don't trust the information coming in arbitrarily. But in the past we did trust our databases, because the response was always the same; it's deterministic. With AI, we shouldn't trust the responses at all either. So we shouldn't trust the information coming back, which means the same defenses we're using on the front end now need to belong on the back end as well whenever AI is involved. That's not anything particularly novel per se.
It doesn't mean that we need to change or invent new technologies, but it does mean that we need to employ them in different ways across our new level of attack surface.
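The symmetric filtering described above, treating the model's non-deterministic output as untrusted in exactly the same way as user input, can be sketched as a minimal Python example. The `sanitize` helper, the blocked-pattern list, and the stub model are all hypothetical illustrations, not a complete defense for any particular framework:

```python
import html
import re

# Patterns refused in either direction; a hypothetical, minimal deny-list.
BLOCKED = re.compile(r"<script|javascript:|onerror=|onload=", re.IGNORECASE)

def sanitize(text: str) -> str:
    """Reject obviously active content, then HTML-escape the rest."""
    if BLOCKED.search(text):
        raise ValueError("active content rejected")
    return html.escape(text)

def handle_request(prompt: str, model) -> str:
    """Filter the inbound prompt AND the model's outbound response.

    The model is non-deterministic, so its output may contain markup
    the input filter never saw; it gets the exact same treatment.
    """
    clean_prompt = sanitize(prompt)   # traditional input filtering
    response = model(clean_prompt)    # non-deterministic AI response
    return sanitize(response)         # same filter on the way back out

# Demo with a stub "model" that happens to emit markup this time.
print(handle_request("give me sample text", lambda p: "plain <b>bold</b> text"))
# plain &lt;b&gt;bold&lt;/b&gt; text
```

The point is the symmetry: the exact same function guards both directions, because with a non-deterministic backend the response can no longer be trusted the way a deterministic database response was.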
Matt Pacheco
So from a vulnerability lifecycle perspective, we've heard about assessing these types of things. What are some effective vulnerability lifecycle assessments? What do they include, and what would you advise be included in something like that?
Matthew Toussain
Absolutely, yeah. So vulnerability lifecycle assessments have been one of my passion projects over the last couple of years, very specifically because I think if we look at what's been emerging from the perspective of threat actor intelligence and their tactics, techniques, and procedures, what we're seeing is a need for much deeper levels of resiliency in organizations to be able to stave off these attacks fundamentally. For example, let's look at the exploitation window. Traditionally, you might see an organization get compromised in December, and then a couple of months go by. The attacker is trying to get access, doing lateral movement, escalating privileges, all of this stuff, and they maybe deploy ransomware come April or May or whenever it might be.
This means that from a defensive perspective, we've got four, five, six months even of opportunity to detect and respond. And if we find them before they've ransomwared us and we kick them out of our environment at that point, guess what? It doesn't matter that we were vulnerable. We won. Unfortunately, today that exploitation window is really starting to shrink. One of the most recent incident responses my organization did was this past February: an organization reached out to us because they had been ransomwared. They were ransomwared on the 2nd of March.
What effectively occurred is that a vulnerability got added to the Cybersecurity and Infrastructure Security Agency's (CISA) Known Exploited Vulnerabilities (KEV) list on the 18th of February, and they were ransomwared by the 2nd of March. That's just a couple of weeks of opportunity to find the vulnerability, remediate the vulnerability, and try to become secure before the attacker gets in, exploits you, and walks away with some cash. That is rough. That is super duper rough. What if you don't even have a vulnerability scanner? Suddenly you're vulnerable and you have no opportunity to defend yourself against that. What does your vulnerability lifecycle look like? I think this is a really important question for folks to answer, because if you know what your vulnerability lifecycle looks like, it tells you two things.
Either we need our vulnerability lifecycle to be faster so that we can be commensurate with attackers, or we need our baseline resilience to be more significant so that we can stretch the exploitation window and have more time. If you don't know the answer to that question, if you don't know how long it's going to take for them to get initial access and then deliver effects in your environment, then you don't know how resilient you need to be. And if you don't know what your vulnerability lifecycle looks like in the environment, then you have no idea what level of resiliency is actually being implemented in your environment. So the first side tells you how resilient you probably need to be in your organization, and the second side, that vulnerability lifecycle, tells you how resilient you actually are today.
Unfortunately, most organizations at the moment aren't really looking at either side of that question, and it's a really important thing to have answered.
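The exploitation-window math walked through above can be sketched with a few lines of Python. The dates mirror the incident described (KEV listing on February 18th, ransomware on March 2nd; the year is assumed here for illustration), and the scan cadences and SLAs are hypothetical numbers:

```python
from datetime import date

def exploitation_window_days(kev_listed: date, compromised: date) -> int:
    """Days defenders had between public KEV listing and exploitation."""
    return (compromised - kev_listed).days

def lifecycle_fits(scan_interval_days: int, remediation_days: int,
                   window_days: int) -> bool:
    """Worst case: the vulnerability appears right after a scan, so the full
    scan interval passes before detection, then remediation time follows.
    The lifecycle 'fits' only if that total beats the attacker's window."""
    return scan_interval_days + remediation_days < window_days

# The incident described: KEV listing Feb 18, ransomware Mar 2 (year assumed).
window = exploitation_window_days(date(2025, 2, 18), date(2025, 3, 2))
print(window)  # 12 days of defensive opportunity

# A monthly scan cadence with a 7-day patch SLA does not fit that window.
print(lifecycle_fits(30, 7, window))  # False
# Weekly scanning with a 3-day SLA would.
print(lifecycle_fits(7, 3, window))   # True
```

This is the "two sides" point in miniature: the window tells you how fast you need to be, and your measured scan-plus-remediate time tells you how fast you actually are.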
Matt Pacheco
Are there specific metrics or processes that security teams should implement to improve vulnerability management programs like that?
Matthew Toussain
I love this question. I'm a big fan of metrics, because I think that if we're doing any kind of work and we have no ability to evaluate how effective it is, are we even doing work at all? The most common metric we see in vulnerability management programs is vulnerabilities over time, where you say, hey, look, if vulnerabilities over time are going up, that's bad; if they're going down, that's good. Now, unfortunately, this is not a very good metric. It is the default metric, the de facto one you're going to see from vulnerability scanners. And the idea there, of course, is pretty straightforward: if we're treating all vulnerabilities as if they were equal and we have more of them, then we're probably more vulnerable. If we have fewer of them, we're probably less vulnerable. But that's really not true.
What about highs and criticals versus lows or medium-level vulnerabilities? Are we treating these all as equal? Guess what: that metric most assuredly does. I think that having standard SLAs, and then tracking the time from detection of a vulnerability to the time we remediate that vulnerability, is perhaps the most valuable single metric that a vulnerability management program can have. Because we can say our SLA for critical vulnerabilities is 24 hours: if we find a critical vulnerability, we want to detect it and respond to it within 24 hours. Is our program achieving that metric? If not, maybe we want to look at advancing the program next. Maybe for high-level vulnerabilities our time from detection to remediation is seven days.
If a vulnerability affects one of our systems that is publicly exposed to the Internet, maybe that same vulnerability we'd call a high in our internal environment is considered a critical. In our internal environment, since we have more defenses, maybe a firewall, EDR in place, maybe a proxy, all kinds of other defenses you can imagine, that vulnerability is mitigated and adjusted downward. And once we start to look at medium and low vulnerabilities, if they have those same mitigations, we might actually be able to tolerate them and accept those vulnerabilities, creating exclusions for them as opposed to consistently tracking them over time and allowing them to deteriorate our attention from the things that matter the most, like our criticals and highs.
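The detection-to-remediation SLA metric described above can be sketched as a small Python example. The SLA thresholds and the sample findings are hypothetical numbers chosen purely for illustration:

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical SLAs, in hours, per severity tier (illustrative numbers).
SLA_HOURS = {"critical": 24, "high": 7 * 24, "medium": 30 * 24}

def sla_compliance(findings):
    """findings: iterable of (severity, detected, remediated) tuples.
    Returns {severity: fraction of findings remediated within SLA}."""
    met, total = defaultdict(int), defaultdict(int)
    for severity, detected, remediated in findings:
        hours = (remediated - detected).total_seconds() / 3600
        total[severity] += 1
        if hours <= SLA_HOURS[severity]:
            met[severity] += 1
    return {sev: met[sev] / total[sev] for sev in total}

findings = [
    ("critical", datetime(2025, 1, 1, 9), datetime(2025, 1, 1, 20)),  # 11h: met
    ("critical", datetime(2025, 1, 2, 9), datetime(2025, 1, 4, 9)),   # 48h: missed
    ("high",     datetime(2025, 1, 1, 0), datetime(2025, 1, 5, 0)),   # 4 days: met
]
print(sla_compliance(findings))  # {'critical': 0.5, 'high': 1.0}
```

Unlike a raw vulnerabilities-over-time count, this treats each severity tier on its own terms, which is exactly the distinction being argued for.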
38:27 - Building Security Teams
Matt Pacheco
That's excellent advice for those listening in who may be concerned after hearing everything we talked about earlier. Let's talk about a subject that's near and dear to my heart, and it's security teams. We talked about vulnerability management programs, and we talked about AI helping security programs do what they do and identify threats and all that great stuff. However, we constantly hear about this ongoing cybersecurity shortage and an overall IT skills gap. A lot of people don't have the skills needed, or there aren't enough people out there for all these organizations to do what they want to do with the cloud and with security. Especially now, as they say, every company is an AI company these days, which is pretty funny.
But all this new technology, specifically in cybersecurity, how do you approach building and developing talent to do this stuff within your organization or when it comes to recruiting efforts?
Matthew Toussain
I think that's really challenging. This is also a fantastic question. I think it's really challenging to build and develop talent at the moment because it's really about experience versus training. And we tend to value experience so much above training that we don't actually invest in training whatsoever. And who can even invest in training? Mostly the larger organizations. And a lot of the larger organizations prefer to hire trained talent as opposed to developing it. So as a result, what we're seeing here is this big split. We're getting a dispersion between highly experienced cybersecurity talent and folks who really desperately want to get into cybersecurity, and very little in the middle. And that's exceptionally unfortunate. It means we're seeing the highly experienced cybersecurity professionals getting paid absurd salaries.
So for myself, as an employer of cybersecurity folks, it's really difficult to retain that talent, because once they get to a certain level, Meta is going to swoop in and say, hey look, we've got a salary for you of 300 or 500 grand per annum. You want that, don't you? Oh, plus stock options. Don't forget about the stock options. And that is really hard to compete with. I think the biggest problem is that we see a very fundamental lack of education in the cybersecurity field. You can't really go get a computer science degree at a university and then immediately dive into cybersecurity professionally. Instead, you probably need to spend a lot of time doing IT help desk work and then slowly graduate away from that. That's boring. That's terrible.
And it shouldn't necessarily be a requirement, because it's only tangentially related to the actual primary work effort, if you will. And there are a lot of different fields in cybersecurity, be they incident response or vulnerability assessment or penetration testing. But the problem across all three of those disciplines is they all require someone who really knows what they're doing, because if you don't, you could very easily cause more harm than good. And there aren't a lot of fields that have that kind of problem. So I think the fundamental problem we really experience here is that many training opportunities in cybersecurity are extremely expensive. I'm not sure if you're familiar with the SANS Institute, but they're probably the largest training organization inside of cybersecurity at the moment. Very expensive.
We're talking about $10,000 to $13,000 per class, and they have, I think, 80 classes available now. So if you were to pay for their entire curriculum, we're talking about going bankrupt ten times over. It's really, really difficult to achieve that kind of experience. For me personally, I found doing open and available capture-the-flag exercises to be a great way to build that experience. And then a fantastic way to prove that you have that experience is often via open source contribution. For example, I've been developing software for years and years now. Open source software, specifically things to help people study for exams.
I've got a vulnerability scanner that's open source and free to help people find vulnerabilities in their environment so they don't get ransomwared, and so they don't have to call me up on a Friday afternoon when they have a really bad situation, these kinds of things. And it doesn't mean that you have to build a program from scratch in order to get hired in cybersecurity. There are lots of open source projects: Sirius Scan, Metasploit, Nmap, all of these kinds of things. And contributing to those projects is just the best way, in my opinion, to pad a cybersecurity resume, because it demonstrates knowledge and it demonstrates intense, absolute passion.
43:53 - Balancing Cost and Security
Matt Pacheco
I love that answer. When we ask this question, we usually hear about upskilling and courses and stuff like that. That's a very unique answer that I really like: getting your hands dirty and doing it, and those exercises you talked about. So thanks for sharing that. I'm going to move us on to another interesting topic that intersects with security, and it's one that often comes up: cloud cost and budget management. How can organizations balance the need to manage those costs with the need to have a truly secure infrastructure? That sounds challenging, and it seems like there has to be some kind of compromise. Is there a compromise? And what's the best way to balance the two?
Matthew Toussain
There's no compromise. Spend the money. I'm kidding. To a certain extent that is true, though. To a certain extent it is about spending the money in the right places. Cybersecurity isn't free, and it's not going to be free. And we can see the industry just exploding from the perspective of sheer market capitalization. There's a reason for that. The question is: are you going to pay an attacker after you get ransomwared, or are you going to pay a defender to prevent you from getting ransomwared in the first place? I think the most slept-on way to do defense, which organizations are just not taking advantage of today, is investing in human capital. There is no better defender than a human defender. I don't care about your AI software implementation of a super proxy solution.
And we'll throw in a flux capacitor here because we're just talking buzzwords at this point. I don't care. People are better than technology. And if you have technology, you need people to operate that technology. You need trained people to operate that technology. So if I'm recommending anything to an organization to become more resilient against cybersecurity, it's invest in people and it's invest in your people.
Matt Pacheco
And I agree 100% with that. Many organizations we talk to about some of these things still have to sell the concept of investing in cybersecurity to their leadership teams. There's always someone you need to sell this to. What advice do you give, and how can security leaders effectively communicate things like ROI to their leadership team to ensure the right programs are put in place to protect them?
Matthew Toussain
That is a fantastic question. I think that if we're looking at ROI and trying to identify what that could be, and then communicate it to the organization, we should leave no stone unturned, because there's a lot of money to be had in small individual components, and if we put those together, we end up with a pretty significant cybersecurity budget. Let me give a couple of examples. One of the styles of cybersecurity engagement that we've been seeing a lot of desire for lately is hiring-process penetration tests. The North Koreans have very commonly been going after getting hired into organizations, perhaps even paying real American citizens so they can use their identity, as opposed to stealing their identity, to get hired. Now they're insiders inside the organization and they can do exploitation. We've seen this reported by the Wall Street Journal and by the BBC.
We've seen KnowBe4, a social engineering defense company, get compromised through that exact kind of attack. It's certainly something that's contemporary and happening right now. If we want to stave off this type of compromise, it requires a much more multifaceted approach rather than just more traditional security. And so we can say something like, hey look, this is a new attack surface and we have no plan for it in our organization at all; let's get a third party to come in and test this for us. And the third party comes in and says, oh wow, this would work against you very, very well. Same idea for vishing, with the MGM breach, for example. Guess what this is: it's ammunition to take to the board and say, hey look, we saw KnowBe4 and ten other companies get exploited through this thing.
We saw MGM and Caesars Palace and ten other companies get exploited by this thing. We have the same exact problems in our organization as both of these did. That means either of those negative events could occur to us. Let's secure against them; here's how much it's going to cost. I tend to find that conversation is a lot easier to have than saying, hey look, the industry standard best practice is to have a quarterly vulnerability assessment and an annual penetration test. At that point you're often told, okay, yeah, that's the industry standard, it's the best practice, but what's the value proposition? I think that having a value-centric conversation about cybersecurity tends to get a lot more buy-in than having an industry best practices conversation.
47:48 - Future Cybersecurity Trends
Matt Pacheco
It's a great point. Let's talk about the future and potential trends that we see looking ahead. What emerging cybersecurity trends do you believe will have the biggest impact on cloud environments?
Matthew Toussain
Agentic AI doing CI/CD pipeline creation from scratch. I think that is terrifying. I don't see a lot of organizations quite doing this just yet, but I do expect to see it over the next year or two. This is a bit op-ed on my side, to be completely honest with you, but I've been using Claude 4 recently, which just came out, and its capability to take actions is insane. To take it for a spin over Memorial Day weekend, one of the things I did was have it make a full-on CI/CD pipeline around GitHub Actions using Claude and AWS Lambdas. And it actually did the implementation of that, not just the code, on its own. I literally just talked into a microphone and it did all of the stuff.
And looking at what it accomplished is rather terrifying, because it works very well, but it doesn't have any security associated with it at all. Every single S3 bucket that it made, and it made three of them, was unsecured. The Claude API keys that it used didn't have any limits on them, so if I wanted to spam those as a third party to cause ridiculous amounts of AI spend, I totally could do that with the thing that I created. These are the styles of vulnerabilities being made via agentic artificial intelligence at the moment. And I think the attack surface is so much larger in cloud applications than in on-prem or monolithic applications that the potential risk is much higher in that sector than in the more traditional ones.
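Auditing AI-generated buckets for the kind of misconfiguration described above can be sketched as a pure-Python check. The configuration shape mirrors the four S3 "Block Public Access" flags, but this operates on plain dictionaries for illustration rather than calling AWS (in a real audit you would pull live settings, for example via boto3's `get_public_access_block`); the bucket names are invented:

```python
# The four S3 "Block Public Access" flags every bucket should set.
REQUIRED_FLAGS = (
    "BlockPublicAcls", "IgnorePublicAcls",
    "BlockPublicPolicy", "RestrictPublicBuckets",
)

def audit_buckets(buckets: dict) -> list:
    """Return names of buckets missing any public-access block flag."""
    insecure = []
    for name, config in buckets.items():
        flags = config.get("PublicAccessBlock", {})
        if not all(flags.get(f, False) for f in REQUIRED_FLAGS):
            insecure.append(name)
    return insecure

# Hypothetical AI-generated pipeline: three buckets, none locked down,
# plus one correctly configured bucket for contrast.
generated = {
    "ci-artifacts":    {"PublicAccessBlock": {}},
    "build-cache":     {"PublicAccessBlock": {"BlockPublicAcls": True}},
    "deploy-logs":     {},
    "secured-example": {"PublicAccessBlock": {f: True for f in REQUIRED_FLAGS}},
}
print(audit_buckets(generated))  # ['ci-artifacts', 'build-cache', 'deploy-logs']
```

A check this simple, run automatically after every agent-generated deployment, is exactly the kind of guardrail the "AI for defense" tier would need to provide at scale.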
Matt Pacheco
So would you say AI is making the same mistakes humans make, based on what you said earlier?
Matthew Toussain
More of them, faster. Yes, 100%, it's making the same mistakes humans are making. You're exactly right about that, but more of them, faster. And that's the terrifying part.
Matt Pacheco
And I guess there's an opportunity for AI that can identify those vulnerabilities to come fix the AI's work that built all that.
Matthew Toussain
So let's go back to the three pillars I mentioned. You first have AI for development, then you've got AI for defense, and then AI for offense. And we're seeing the development of those three sectors happen in that exact order. So right now we're seeing a lot of vulnerabilities coming out, particularly in the cloud space, like unsecured Lambdas, which are just serverless functions. The ability to create code that just gets executed by some arbitrary provider like AWS has so much potential for risk actualization, it's insane. And now, if you've got an AI agent that's making this and setting it up for you on its own, and you never looked at the configuration settings it's configuring in the first place because it just worked, that is horrific. That is absolutely terrifying.
So what is the only way we can defend against that? That's stage two: having AI for defense to do the level of scaling and coverage that's demanded by the amount of AI for development happening now. We haven't had the AI for defense in the cloud space yet, which means we're seeing all the development happen. I do expect to see a lot of opportunistic cloud vendors come out over the next two years, and I think they're going to be necessary. But effectively, they're only necessary because of what we're doing with AI right now from a development perspective.
Matt Pacheco
Very interesting. Final question for you, and this is more of a fun one. What are you most excited about? I asked you about emerging cybersecurity trends related to the cloud; now I'm going to ask: what's the most exciting piece of emerging cybersecurity technology coming within the next few years that you're looking forward to?
Matthew Toussain
So I have an answer, but I don't necessarily believe that my answer will happen. I think the third tier is the most interesting part: AI for offense. We've already had a vulnerability GPT come out, and we've had a penetration testing GPT come out, and they've all been duds, but that doesn't necessarily mean they have to stay that way in the future. I very specifically think that when vulnerabilities are detected, we might be able to have them automatically turned into vulnerability scanning opportunities via AI, because it's not necessarily too difficult to find a vulnerability. But at the moment, I show up to so many incident responses like the one we were talking about earlier, where the organization had a vulnerability that came out on the 18th of February and the ransomware hit on the 2nd of March.
They don't have a vulnerability scanner. They don't have one at all. And that's because there doesn't exist a free vulnerability scanning option: you're picking Rapid7, Tenable, or Qualys, and you're paying out the wazoo for all three of them. Could we have a vulnerability scanner that automatically generates vulnerability scanning modules based off of NIST's data, backed by the government-supported CVE repository, as new vulnerabilities come out? Could that happen automatically? Could it be free, and could it be accessible to bottom-tier or even mid-tier organizations? I think the answer is yes. In fact, I've been working on something of that sort lately myself. I've called it Sirius Scan, and I hope it'll work out. It's gotten me the most excited by far about what's going to happen over the next several years.
And I think that from the perspective of configuration vulnerabilities inside of cloud environments, the same exact kind of vulnerability scan should, could, and maybe will be possible.
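The idea of auto-generating scan modules from CVE data can be sketched as follows. The CVE record shape, the product name, and the version logic here are deliberately simplified, hypothetical stand-ins for real NVD JSON, which carries far richer version-range metadata:

```python
# A simplified, hypothetical CVE record; real NVD entries are richer.
cve = {
    "id": "CVE-2025-0001",
    "product": "examplehttpd",
    "versions_below": "2.4.10",  # versions earlier than this are affected
}

def parse_version(v: str) -> tuple:
    """Turn '2.4.9' into (2, 4, 9) for correct numeric comparison."""
    return tuple(int(p) for p in v.split("."))

def make_check(record: dict):
    """Generate a scan-module-style callable from a CVE record."""
    fixed = parse_version(record["versions_below"])
    def check(product: str, version: str) -> bool:
        """True if the observed service is affected by this CVE."""
        return product == record["product"] and parse_version(version) < fixed
    check.__name__ = f"check_{record['id'].replace('-', '_')}"
    return check

check = make_check(cve)
print(check("examplehttpd", "2.4.9"))   # True: vulnerable version
print(check("examplehttpd", "2.4.10"))  # False: patched version
print(check("otherd", "1.0.0"))         # False: different product
```

Generating a check like this per new CVE entry, automatically and at feed speed, is the core of the free-scanner idea described above; an AI layer would be doing the harder job of translating messier advisory text into the structured record this function consumes.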
Matt Pacheco
Awesome. Well, Matthew, thank you for being on Cloud Currents today. I appreciate the conversation. This is great. I feel like we could probably talk for another two hours, but this was a lot of fun speaking with you and learning from you. So I appreciate you being on the show today.
Matthew Toussain
Absolutely, Matt. My absolute pleasure. I don't know if you're going to be at a cybersecurity conference like DEF CON, but if you are, we're going to have to grab a beer or something.
Matt Pacheco
Absolutely. I'll look to see if they'll send me. Thank you. And to our listeners, thank you for listening in. You can hear this awesome conversation and many more wherever you find podcasts: YouTube, Spotify, Apple, wherever your favorite place is. We appreciate you all listening in, and stay tuned for the next episode. Thank you.