EP. 38 – The CISO’s Guide to GenAI Security with Mohit Kalra

About This Episode
In this episode, host Matt Pacheco sits down with Mohit Kalra, CISO and VP of Security at Typeface, to explore the evolving landscape of AI security. Mohit shares how traditional SaaS security principles must evolve to address GenAI challenges, from data governance and model selection to building enterprise trust through transparency. The conversation covers the intersection of FinOps and security, balancing innovation speed with security rigor, and the essential skills security professionals need to thrive in an AI-first world.
Know the Guests
Mohit Kalra
Chief Information Security Officer and VP of Security at Typeface
Mohit Kalra is the Chief Information Security Officer and VP of Security at Typeface, where he leads trust and safety initiatives across the organization. With over 20 years of experience in technology, Mohit has transitioned from software engineering to security leadership, playing a crucial role in helping companies navigate evolving security landscapes.
Know Your Host
Matt Pacheco
Sr. Manager, Content Marketing Team at TierPoint
Matt heads the content marketing team at TierPoint, where his keen eye for detail and deep understanding of industry dynamics are instrumental in crafting and executing a robust content strategy. He excels in guiding IT leaders through the complexities of the evolving cloud technology landscape, often distilling intricate topics into accessible insights. Passionate about exploring the convergence of AI and cloud technologies, Matt engages with experts to discuss their impact on cost efficiency, business sustainability, and innovative tech adoption. As a podcast host, he offers invaluable perspectives on preparing leaders to advocate for cloud and AI solutions to their boards, ensuring they stay ahead in a rapidly changing digital world.
Transcript
00:00 - Career Journey and Typeface
Matt Pacheco
Hello everyone and welcome to Cloud Currents, a podcast that explores the cutting-edge technologies and strategies shaping the future of cloud computing and cybersecurity. I'm your host Matt Pacheco from TierPoint, where I manage the content strategy for our organization. Today, I'm thrilled to welcome Mohit Kalra, Chief Information Security Officer and VP of Security at Typeface, a leading generative AI platform transforming enterprise content creation. Mohit brings over two decades of technology experience, having transitioned from software engineering to security leadership across companies like IBM, Adobe, and Sprinklr. At Typeface, he built the company's security, compliance, and AI governance frameworks from the ground up while overseeing FinOps, cloud infrastructure, and incident management processes.
In our conversation today, we'll talk about the security challenges facing his business, how companies can build trust in their AI implementations, and the ever-evolving intersection between cloud security, AI, and cost optimization. All really interesting topics that people love hearing and talking about. So we're really excited to have you on here today, Mohit.
Mohit Kalra
Awesome. Great to be here, Matt.
Matt Pacheco
Excellent. So let's jump right in and talk about your career journey and where you started. So can you walk us through your career journey from where you started to where you are today at Typeface?
Mohit Kalra
Yeah, of course. So I fell in love with computers when I started using DOS 3.3, just to date myself, but I pursued an education in computer science and then went into the industry as a software engineer. I worked at IBM for a couple of years and then had a pretty long stint at Adobe. I wrote a lot of software, and eventually I moved into the security space, where I set up the application security and cloud security programs, including the cloud engineering part. From there, after about 19 years, I moved on to Sprinklr. I helped set up their product security program, and here I am at Typeface. It's been almost two years, and I've learned a lot in this company because it's not just security that I do today.
There are a lot of other functions I play a role in. A CISO and a CIO handle a lot of functions such as incident response, GPU and cloud management, IT, et cetera. So with the multiple roles and multiple hats I play here, it's an immense learning opportunity. I have a blend of software engineering at the start of my career, then engineering management, and then I moved into security positions, in both management and leadership roles.
Matt Pacheco
Excellent. I'm curious, what initially sparked your interest in moving into cybersecurity and that world?
Mohit Kalra
Yeah. I was part of a team back at Adobe that had to do some security work in order to secure the applications. And as I was on the receiving side of security, I wanted to get into the doing side of security and things clicked. There was a need in the company. I happened to fill that gap. And then eventually I went full time into cybersecurity and never looked back. So it's super exciting times, you know, to do both security as well as kind of influence the entire company to do security well. And I really love the latter part, so I stuck to it.
Matt Pacheco
Really cool. And it sounds like you've been at a lot of different companies of varying sizes. How has that influenced your leadership style, being at all these different types of companies doing security?
Mohit Kalra
Yeah, so I think you've observed correctly. I was at a very big company, then a smaller company, and the size of the company just keeps shrinking as I go forward. And I think the best part is that as long as you like learning, the size of the company doesn't matter. But what happens is that in bigger companies, you're working in a smaller team, and as the size of the company becomes smaller, you get an opportunity to go much broader. So I don't think much changes in the style of leadership. What changes is the breadth and impact that you can have in a smaller company, and the amount of accelerated learning you can have in a company such as Typeface, which is a startup and provides an immense opportunity to learn as much as you can absorb.
Matt Pacheco
Excellent. And how has your breadth of experience shaped how you approach security in today's AI focused landscape?
Mohit Kalra
Yes. So I did talk about this earlier, but I started as a desktop developer, fixing desktop applications, writing DLLs, et cetera, which kept me very close to the operating system. Then I moved on to mobile and cloud. And what I learned is that the foundational computer science concepts I picked up, either in college or in my initial jobs, were central to how I approach security. When you come from a software engineering background, you can empathize with the developers on the receiving side, but you can also help solution things. And you understand the complexity that software engineering brings.
And then when you're on the security side, I think it works very well because you understand the architecture better, you understand code better, and you use a combination of your human intelligence to do security in addition to the tools to scale it up. So I think that background coming from strong foundational computer science skills does help do security a bit deeper and a bit better.
Matt Pacheco
Really interesting. And we're going to dive into the AI piece in a second, but can you tell us a little bit more about Typeface and what you guys do to kind of set the stage for all the questions I'm going to ask you going forward?
Mohit Kalra
Yeah, of course. So Typeface is a GenAI application, built on some of the best large language models that exist out there. We cater primarily to the marketing teams of other companies, and we try to add a lot of value to their marketing cycles by generating content, whether text, video, or images, making generative AI the enabler for them to produce good-quality content faster and at a much lower cost. So we are trying to reinvent how marketing is done in companies through generative AI. And as generative AI reaches an inflection point and changes how things are done in every industry out there, we are there to enable marketers to do marketing differently with the help of GenAI.
Matt Pacheco
Really interesting. As a marketer, I love that. So let's talk a little bit about security in GenAI. To start, is there a difference between securing AI systems and securing traditional SaaS applications?
Mohit Kalra
So I think there's good news and bad news. The good news is there's a lot to learn in GenAI security. The bad news is that you cannot give up on what you were doing earlier. Security is a place where you don't give up on the old; you add the new on top of it. So it's kind of a stacking thing. You continue to do SaaS security as you were; that doesn't change. But yes, you do generative AI security very differently, because generative AI brings in new challenges, as new technology does. So security teams need to be ready that they're not going to give up on the old security asks.
The style of doing it might change with the advent of AI, but on top of that, they need to think very clearly about how GenAI is being done in the company. And you need to put on two hats there. First: if my company is using GenAI, what security do I need to put around that? And second: if I'm offering GenAI as part of my product, how do I need to think about GenAI in that case? And GenAI brings challenges not just in security, where the underlying model might be misused by attackers or might be a source of data leaks, but also around privacy and legal concerns.
What AI has done today is bring legal, privacy, and security teams much closer together. And collective trust in GenAI is what security teams need to establish, whether you're bringing the technology inward and starting to use it in your company, or pushing it outward to your customers, who will ask: how do I trust the AI that you're using, and how can I feel good about the content that you're creating using your GenAI?
10:10 - GenAI Security Fundamentals, Ethical AI and Responsible AI
Matt Pacheco
Are there any, I guess, misconceptions that you encounter when it comes to securing generative AI systems?
Mohit Kalra
So GenAI starts with doing basic security first. One of the misconceptions people have is that when GenAI is in play, you don't have to do traditional security. That's one misconception I usually see. Old security doesn't go away, and the new way of securing GenAI certainly comes into play. Also, GenAI, like other AI, is a data problem to start with. The data that was fed into the large language model: where did that come from? What did you overlay on top of that large language model? And then how is the customer data being used in the context of a large language model? Once you have the data governance and the data flows in check, you need to be very careful about the models you pick and choose. You need to vet the models you're bringing in.
Like any other piece of the technology stack, a model needs to be secure, it needs to be well created, and it needs to come from a good supply chain. And then, once you are in the large language model space, you want to make sure the large language model still follows the principle of tenant isolation: one customer's data is not leaking into another customer's, customer data in general is not leaking into the large language model, and there are no privacy concerns where PII goes into an ether of large language models and then resurfaces in unexpected ways. So a combination of traditional security that cannot be given up, plus specialization in GenAI, I believe is the right way to approach GenAI security.
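To make that data-governance point concrete, here is a minimal sketch of the kind of guardrail a platform team might put in front of an LLM call: redacting obvious PII and pinning every request to a tenant-scoped context. The regex patterns and the request shape are illustrative assumptions, not Typeface's implementation.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted PII
# detection library and tenant-specific policies.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace obvious PII with typed placeholders before the text can
    reach a large language model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

def build_request(tenant_id: str, user_text: str) -> dict:
    """Pin every request to a single tenant so retrieval, caching, and
    logging stay isolated per customer (the tenant-isolation principle)."""
    return {
        "tenant_id": tenant_id,           # never mix context across tenants
        "prompt": redact_pii(user_text),  # PII never enters the model call
    }

# The email address is scrubbed before any model call is made.
print(build_request("acme-corp", "Reach me at jane@example.com"))
```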
Matt Pacheco
Excellent. And then there's also the dimension of ethical AI and responsible AI. Can you talk a little bit about how you factor ethical AI in when building security frameworks for AI systems?
Mohit Kalra
Completely agree. AI safety is extremely important. Ethical AI, how you are using the AI, is an extremely important part that any organization needs to think through. On one hand there is security, there is privacy, and then there are the legalities of using GenAI. But on the other side, safety is an important part of any AI system out there. Through experience and through talking to our customers, we have seen that's one of the things they really care about: they do not want to generate content that is not ethical or not responsible. And to dig a bit deeper into that, if you use AI to generate things that are unacceptable, such as gore or CSAM, that leads to tarnishing of the brand.
So enterprises are super careful to make sure that the AI in play is not being used irresponsibly. There are checks and balances that the AI application provides to quickly catch these, and there are workflows that bring a human into the loop to review the AI output before publishing. So we see that AI safety puts a burden on the AI application provider to check the prompts coming in, so that bad content isn't being requested, and to check that the output generated is not bad. And once the output is generated, there are automated checks to make sure it is good quality in terms of responsible AI.
And then we put the burden back on the consumer of that content: please make some final checks before you publish to the internet out there. And there are other technological measures; for example, you can sign your image with a C2PA signature, which helps people outside the ecosystem identify that the image they're seeing was actually generated through AI.
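As a rough sketch of the pipeline described here, checking the incoming prompt, checking the generated output, and holding the result for human review, the `moderate` check below is a hypothetical stand-in; a production system would call a vetted safety model rather than a keyword list.

```python
from dataclasses import dataclass

# Stand-in for a real safety classifier; a production system would call
# a dedicated moderation model, not a keyword list.
BLOCKLIST = {"gore"}

def moderate(text: str) -> bool:
    """Hypothetical check; returns True when the text looks safe."""
    return not any(term in text.lower() for term in BLOCKLIST)

@dataclass
class Generation:
    content: str
    approved_by_human: bool = False  # flipped only after manual review

def generate_safely(prompt: str, model_call) -> Generation:
    # 1. Check the incoming prompt before anything is generated.
    if not moderate(prompt):
        raise ValueError("prompt rejected by input safety check")
    output = model_call(prompt)
    # 2. Automated check on the generated output.
    if not moderate(output):
        raise ValueError("output rejected by output safety check")
    # 3. Hold for human review; publishing requires approved_by_human.
    return Generation(content=output)

# Example with a toy model; a real call would go to an LLM endpoint.
draft = generate_safely("tagline for a shoe launch", lambda p: f"Step into more: {p}")
print(draft)
```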
Matt Pacheco
Very interesting. And it's interesting that a lot of enterprises are focused on the safety of AI and all of that. So when it comes to these enterprises that you work with, what are they most concerned about when adopting gen AI technologies as it relates to security?
Mohit Kalra
Yes. So just to dial back a bit, whenever we work with enterprises, it still starts with the traditional security asks: hey, where's your compliance? Can we see your pen test report? How do you do basic security in your company? We have a lot of enterprise customers who will first start with traditional security questions. These are table-stakes questions about our compliance: which compliances we have, whether we are securing our company in the traditional SaaS security way. Once that is out of the way, we are seeing that a lot of customers come and ask very good questions around AI security. They care about many things. One is: what are you going to do with my data?
Are you going to take my data and put it in a model, and can my competitor then benefit from that? Am I using a model that is being influenced by external factors, such as other people's data? Are you providing me proper data isolation within the model itself? So there is a data element to it. Then there is an ownership question: who owns the output? Once we generate content through your software, is it going to be yours, or can we take our output and use it as is? We get that ownership question in terms of AI trust as well. And in terms of security, they do want to know: how are you picking your models? How are you testing them for responsible AI? Can we know the models you have in place?
Usually security teams have a trust portal on their website, where customers can check all the compliances and reports the security team wants to share. One add-on we have made as a company is also publishing the list of models, so that our customers can know the models in play. It's not a black box where they're using a SaaS application that does GenAI; they're also aware of the underlying models and the choices we have made. That gives us more trust and transparency with our customers, so they feel good about what we're implementing on our end.
Matt Pacheco
Are there specific security frameworks or controls that you found most effective for establishing trust within these AI implementations?
Mohit Kalra
One thing we do internally is a model governance program. Any new model being adopted by the company goes through my team, so we can check it for basic parameters such as safety, where the source is coming from in the supply chain, and a few basic checks around data hygiene. There is also some dataset hygiene we do: if you're about to overlay a dataset on top of a model, we'd like to review it. That's the rudimentary review every security team needs to do, irrespective of any framework out there. That said, there are new frameworks coming up. We are exploring ISO 42001 as one of them. It's similar in spirit to SOC 2 compliance and ISO 27001, and it provides a rubric for doing AI security in a much more holistic manner. There is the NIST AI Risk Management Framework as well, a good reference point for teams looking for knowledge that has been commoditized into a framework they can follow to accelerate their path to AI security.
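A model-governance intake like the one described can be as lightweight as a structured record that every candidate model must satisfy before adoption. The fields below are an illustrative sketch, not requirements drawn from ISO 42001 or the NIST AI RMF:

```python
from dataclasses import dataclass, field

@dataclass
class ModelIntakeReview:
    """One record per model a team wants to adopt; the gates here
    mirror the checks described above and are purely illustrative."""
    model_name: str
    source: str                     # provider or repository (supply chain)
    license_ok: bool = False        # legal review of the model license
    safety_tested: bool = False     # responsible-AI / safety evaluation
    dataset_reviewed: bool = False  # hygiene of any overlaid fine-tune data
    notes: list = field(default_factory=list)

    def approved(self) -> bool:
        return self.license_ok and self.safety_tested and self.dataset_reviewed

review = ModelIntakeReview("example-llm-7b", source="vendor-model-hub")
review.license_ok = True
print(review.approved())  # False until safety and dataset checks also pass
```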
18:27 - Enterprise AI Security Concerns
Matt Pacheco
So how do you balance innovation and speed with some of the more rigorous security demands of your enterprise customers?
Mohit Kalra
Firstly, when AI is being adopted within our own enterprise, it is being adopted because it's a force multiplier. It is adding value to the productivity of your developers or to your cloud engineers, et cetera, or even to your GTM team. So when that is being used, I think companies need to be very quick in understanding the basics of AI security and ask the right set of questions and get out of the way quickly. If the AI being adopted is not aligned to the way you want to send your data to the underlying AI SaaS application, you should be able to evaluate that quickly. It'll be slow when you do it for the first couple of times, but after that it picks up.
So on the consumer side of AI technology, security teams need to quickly develop a framework they can evaluate an incoming AI SaaS application against. On the way out, when you are a company providing AI technology, the security team needs to work in tandem with the product team to make sure AI security is happening correctly. I think you have to balance between agility and having an enterprise mindset. When teams are experimenting, it is important that the security team is aware of the AI being used or the data sets being explored. During that phase, awareness is the most important thing the security team should have, but they should not really get in the way unless they see early red flags.
If they see early red flags, hey, this is not the right model to use, or this is a bad data set to use, they should flag it as early as possible, but usually do that evaluation while innovation is happening. Any product goes through two phases: the innovation phase, where I think the security team needs to have a light touch, and then the productizing phase, where the research is actually put into the product. That phase needs to be done more methodically, and not just for security: you need to test for errors, you need to do resiliency, and you need to do security in the same vein.
And when that is happening, the security team should vet the AI technology being adopted and pushed to customers, because there's a basic enterprise expectation that the company we're using knows what they're doing with their AI. They're not just shooting from the hip in AI adoption; they are being intentional and methodical about it. So I think it's a mix: you get out of the way during the innovation phase, but in the productizing phase you need a formal sign-off from the security team. If you don't balance it well, you could really slow down the company's innovation; and if you don't have the checks and balances, you could be shipping something that wouldn't be aligned with your customers' expectations.
Matt Pacheco
I love how you frame that around balance. And there's another piece that needs to be factored in: cost and cost optimization, potentially FinOps as well. Can you talk a little bit about the relationship between FinOps and security?
Mohit Kalra
Yeah, so as I said, I have taken on a couple of roles at Typeface, so I am lucky to be doing security and cloud optimization, as well as cloud architecture and GPU-related architecture in general. When it comes to FinOps, there's a pretty interesting relationship with security that I did not know of, but discovered as I took that journey. In security, for example, you have this concept of attack surface reduction, which essentially means you want to expose as little as you can. A simple example could be in your cloud infrastructure.
You want to expose maybe HTTP and HTTPS, since that's the interface your customers talk to, but you don't want to expose the internal ports of your database or your SSH port to the world. Attack surface reduction says that if you reduce the attack surface, you become more secure. That's not the end of security, you have to do more, but it's a start. FinOps complements attack surface reduction, because one part of FinOps is shutting down waste, in addition to building a more optimal architecture. And when you're shutting down waste, those are usually the same machines that are in some form of neglect, because nobody was even looking at them.
So security and FinOps can really work in tandem: when you reduce waste, you benefit on security, and when you go to secure something that doesn't look right, you might end up saving costs because people realize it wasn't needed in the first place. That's one of the best combinations I've seen for making them work together. But there are other similarities between FinOps and security as well. Both take a holistic view of your entire environment: you're not looking at one feature set or one account, you're looking at everything across the board. Both are also facilitated through tooling, because tooling will give you insights that a human would take a lot longer to gain from manual analysis.
Having tooling that gives you either visualization or a list of vulnerabilities really helps you do your job quickly. And at the end of the day, both FinOps and security are data problems: you might have a long list, but you as a leader in that space need to put your finger on the right problem to solve. If you do that, you become secure in a more strategic way, or you reduce cost in the highest-spend areas. And usually, if you're going to secure something that has high spend, that high-spend area is also important for the company. So sometimes you can learn the architecture by looking at spend, and sometimes you can learn about cost by looking at the security posture of the architecture.
So I feel like you can always feed one learning into the other, and also have a common framework for how you interact with your own product teams and create the backlogs on what they should be doing to get better in both of these fields. It's a fun experiment we did at Typeface, and I felt really good about handling both of these and driving results one way or the other.
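To illustrate how attack surface reduction and FinOps can share tooling, here is a minimal boto3 sketch that flags security group rules open to the world on anything other than HTTP/HTTPS. It assumes AWS credentials and a region are already configured, and it is a starting point rather than a complete audit; the same inventory sweep is often where neglected, wasteful machines turn up.

```python
import boto3

ALLOWED_PORTS = {80, 443}  # the only ports we intend to expose publicly

def find_world_open_ports():
    """Flag security group rules open to 0.0.0.0/0 on unexpected ports."""
    ec2 = boto3.client("ec2")
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            port = rule.get("FromPort")  # absent when the rule covers all ports
            world_open = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if world_open and port not in ALLOWED_PORTS:
                findings.append((sg["GroupId"], port))
    return findings

# Rules flagged here are candidates for both security review and cost
# review: unexpected exposure often points at neglected machines.
for group_id, port in find_world_open_ports():
    print(f"{group_id}: port {port} open to the world")
```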
25:10 - Balancing Innovation, FinOps and Security
Matt Pacheco
No, and I appreciate that. It's a pretty cool role, to have that insight and that oversight over some of those things. So between those different functions, security, operations, and finance, how do you build that cross-functional alignment when it comes to things like cloud governance between those teams?
Mohit Kalra
Yeah, I usually split work into two parts. One is the capabilities you need to develop, right, across security, cloud ops, and FinOps: what is the tool chain you as a team will run, what automation do you need to develop, what understanding do you need to gain, where the burden of delivery is on you as a team. That's one half of the problem. But if you keep finding issues and not fixing them, or keep enumerating risk with no remediation, that's a job half done.
The other side of the job is that you have to influence the right teams to listen to you and work with you on the changes you're asking them to make, and to work in tandem with you to reduce cost, reduce risk, or build a better cloud architecture. The combination of the two is what drives the final result. If you did only half of it, you would know what's wrong, but you wouldn't have fixed it. Once you know that this is the basic framework in which you need to operate, you're looking at your own deliverables, maybe quarterly deliverables or, if you track it that way, sprint deliverables: what do I need to do as a team to have good impact in these areas?
On the other side, you have the backlog that you need to develop, centralize, and then put in the ticketing system of the receiving team, to say that in addition to all the feature work and bug fixing you're doing, here's your backlog for security, FinOps, and cloud that we would like you to execute on. Then, of course, we need to work in a healthy way with those teams to influence them, so that they are picking up that work and having impact. And through dashboarding, you can showcase the good work your team has done, but also the good work the other teams have subscribed to in service of the larger vision.
All of that, of course, is predicated on first messaging why we are doing this and why it is important to the business and the company: why is security, why is cost optimization, why is a good cloud architecture important for the company? If you think in those terms, they are not very different in terms of the execution process and framework. The specialization might be around a dollar value for FinOps, versus risk, vulnerability, or compliance on the security side, but the framework for how we execute them is pretty much the same.
Matt Pacheco
That's really interesting, working with those various teams. You mentioned your team, and I'm going to switch gears a little. A big concern across the entire cybersecurity landscape is skills gaps, and in some instances finding the right talent to fill those gaps. My question to you is, first: how do you keep your team up to date on all of the latest threats and vulnerabilities? Things are evolving all the time; there are new threats. How do you keep your team on top of all of that?
Mohit Kalra
Yeah, I think one is to always be in learning mode. That's the bare minimum: if you have a penchant for learning, you are going to dig into the latest trends in the security space and understand the new classes of vulnerabilities that have sprung up. Let's say prompt injection, or the fact that MCP servers need to be treated like third-party libraries, which could lead to vulnerabilities in your ecosystem. Once you have that always-learning hat on, you will tap into various ways of finding the new classes and new techniques that attackers are using. I use podcasts, when I'm driving to the office and back, as a way to update myself, and I encourage my team to do that as well. And I think everybody has a very unique learning style.
For some it's podcasts, for some it's YouTube videos, and others like to go through a formal curriculum to learn what they're looking for. As a leader and a manager, you need to support those learning styles. So it's a mix of having the desire to learn, and then, as a leader, tapping into the learning style your team might have, creating those unique packages for them, and encouraging them to pursue that path. It also helps to challenge them with things they've never done.
That creates a huge learning opportunity: they can challenge themselves, know nothing when they start, and exit with high impact in the same areas they had no idea they could excel in.
30:59 - Team Building and Skills Development
Matt Pacheco
Yeah, I love that approach to growth and growing your own team. So the other part of my question was going to be about skills gaps. Hiring the right security professionals with the right skills is pretty difficult in some instances, especially when the world is constantly changing. Is that a challenge you face? And if so, how do you overcome it?
Mohit Kalra
It is a challenge for sure. My strategy has been modeled on my own career move into security: having done software engineering makes it easy for me to do product security. Similarly, people who have been SREs in a past life and want to do something new find cloud security architecture a natural transition. So I don't think we have to start from scratch. If you pick the right people, for example people who have done IT, they make really good enterprise security engineers, because they already know how to configure the IT; now they just need to learn how to secure it.
So if you transition people from their core area into the new area you're looking for, the one that ends with security, you usually can make that transition happen, and that upskills the industry around the hiring gap. That's what I have found to be a successful formula. And once they're here, you give them the opportunity and the learning ecosystem so they can continue to learn, come in with that past expertise, build on top of it, and give back to the company.
Matt Pacheco
Like I said, I love that opportunity to grow. That's cool: getting them in with those natural skills that fit, and then helping them grow while they're there. It sounds like a great environment you have. So let's talk a little bit about the future. This is my favorite part of the show, where we talk about what's coming, what we think is going to come, and what we're excited about. How do you see AI security evolving over the next, let's say, three to five years as these new technologies become more widespread?
Mohit Kalra
Yes. So I think we have to wear two hats. One, of course: with the advent of ChatGPT back in 2022, things have really changed. Code is getting generated through AI at a speed that was never possible earlier. Content is getting generated, images are getting generated, a lot is happening. So there's a responsibility on the security team to learn these technologies, learn how there are inflection points in various industries, and be able to reprogram the security program to align with these changing technologies. This is a basic need, and I don't think it requires you to be at a GenAI company. You can be at any company, and AI is going to be knocking on your doorstep for sure.
I think the other part is that if AI is enabling so many other industries to do things faster and better with fewer people, then the same is true for the security industry as well. You have to find opportunities for AI to be a force multiplier for you. It'll start by taking care of the mundane, but with the speed at which technology is changing today, we know it's not only going to take care of the mundane; it's going to take on the creative, and the intelligent parts of security that we do today as well.
So today the need is for us to recognize that our workforce in the future is going to be humans plus AI agents, or some form of AI technology packaged in a way that is a force multiplier. The way all of us work, including security teams, is going to change drastically, and we should be ready for that change. We should learn about the change happening now, have the strength to adopt it when the time comes, make our teams better and faster, and delegate a lot of the work we're doing today to the AI so that we can move to the next tier of problem solving.
35:44 - Future of AI Security & Industry Insights
Matt Pacheco
Are there any emerging security technologies or technology in general that you're excited about for your line of work?
Mohit Kalra
Yeah. So I've been seeing AI used in log analysis for quite some time; it's not new. Finding that needle in a haystack through AI, finding those anomalies, has traditionally been part of a lot of security ecosystems for quite a while now. What I like right now is that AI is being used for more than that. We had tools that told you what was wrong, but now AI is there to analyze what is wrong, determine whether it's applicable to you, and even go all the way down to giving you a remediation path. So the time to remediate an issue is being compressed with the help of AI, very much like what AI has done for code generation.
That compression of remediation timelines comes because AI can tell you what is wrong, do your code review, and do a lot of things that traditionally required humans to spend time, and humans don't scale very well. I think AI is here to help you both on the proactive side, when you're building out your product, and on the reactive side, when things are not going so well and you need assistance from technology to quickly get to why they're not going well and what steps to take. It's like a guide that knows everything, and you can ask it. Of course, you have to be careful how you use the technology; you cannot completely trust it.
So you have to keep that logical mind when you're using your AI. But I feel like AI is about to change how remediation is done, how forensics is done, how analysis is done, and how security situations are managed today. It also brings subject matter expertise that did not exist in your team earlier, because now you have an AI assistant of some sort, a copilot of some sort, that's willing to help you or your developers with that security advice.
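As a sketch of that remediation pattern, feeding a scanner finding to a model and asking whether it applies and what a fix could look like, here is a minimal example assuming an OpenAI-style chat completions client. The model name and prompts are placeholders, and the output is a draft for a human to verify, never an auto-applied fix.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_remediation(finding: str, stack_context: str) -> str:
    """Ask a model whether a scanner finding applies to our stack and,
    if so, what a remediation path could look like. The answer is a
    draft for human review, never an auto-applied fix."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are a security analyst. Be concise, and say "
                           "so explicitly if the finding may not apply.",
            },
            {
                "role": "user",
                "content": f"Finding:\n{finding}\n\nOur stack:\n{stack_context}\n\n"
                           "Is this applicable to us, and what is a remediation path?",
            },
        ],
    )
    return response.choices[0].message.content

print(suggest_remediation(
    "Alert: outdated TLS configuration detected on the public load balancer",
    "AWS ALB terminating TLS in front of Python services",
))
```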
Matt Pacheco
Really exciting, for productivity and for a lot of things. So let's flip it a little. We just talked about things that are interesting in the future. What is one thing right now, if you could change one thing about how the industry approaches cloud or AI security, what would it be?
Mohit Kalra
So I think one thing I would change a bit is that we need to find the right balance between security done via tooling and security done through logical analysis. I sometimes find we tend to gravitate too much toward tooling to solve all our problems, and then we end up creating other kinds of problems, such as a huge amount of security debt or cloud architecture debt that people just don't have the time to work through. If we over-rotate on humans doing all the work, we lack scale. And when we over-rotate on tooling or AI doing all the work, we get a data problem, where so many issues are identified, and so much data is generated, that a regular development team can't feasibly consume it.
So if I could change one thing, I would ask all teams to find that right balance, where you're not over-rotating to one side or the other, and you're using a mix of human intelligence and tooling scale to find the right fit, so that you can reduce risk at the right time, reduce cost in the right manner, or build cloud architectures in the right style.
Matt Pacheco
Excellent. And another question, my final question: what skills do you think will be most valuable for security professionals working in the AI space in the coming years?
Mohit Kalra
I think one is: you need to use AI to your advantage first. Look at where you spend your time and how you can be more productive with AI. Once you do that, you will be able to appreciate how others are going to use it, so you will be more of an enabler than an impediment to AI adoption. Then, once you're out of the way in terms of understanding the value of AI in today's world, the most important part is that, in addition to your core computer science skills, your networking, operating system, or cloud architecture skills, you now need to invest your time in understanding basic machine learning. I think everybody should take one of the basic online courses that are out there.
You need to get your vocabulary right so that you can at least connect with the development teams. Then you need to understand data governance and how data goes into this ether of large language models, and the variety of large language models available from various companies out there. You also need to understand that some of these models will be self-hosted, powered by GPUs in your environment, and some will be a REST API away in some other cloud that you don't own.
So take the basic cloud security and architecture skill set you already have and transform it to also specialize in AI: how AI is built, and how your company is adopting it, whether as a SaaS application or as a large language model used in some part of the product. That would be the one skill I would ask every security professional to take back with them, because it's the most relevant change we're seeing in the industry, and there's no escaping it.
Matt Pacheco
Excellent advice, and I appreciate you being on today, Mohit. This was a great conversation about AI and security topics, so I really appreciate you taking the time to speak with us.
Mohit Kalra
Matt, it was a pleasure. Thank you for having me.
Matt Pacheco
Thank you. And to our listeners, thanks for listening in. An excellent episode with a cybersecurity focus; it's always great to talk about these things and their intersection with AI. Typeface is doing some really cool stuff, so be sure to check them out, and check out more episodes of Cloud Currents. You can find it anywhere you get your podcasts, and we will see you soon.