EP. 36 Sustainable AI Infrastructure with Clean Energy Innovation with Tamanna Sait

About This Episode

Host Matt Pacheco sits down with Tamanna Sait, VP of Engineering for Cloud at Crusoe, who brings over 25 years of experience spanning IBM's public clouds, Facebook's massive private cloud architecture, and customer-side optimization at Netflix and Airbnb. In this episode, we discuss how Crusoe is pioneering the world's first vertically integrated AI infrastructure platform by harnessing large-scale clean energy, from its 1.2-gigawatt Abilene data center to its fully geothermal-powered facility in Iceland. Tamanna shares practical insights on cost optimization strategies that don't compromise reliability, the evolution from traditional cloud to AI-optimized platforms, and leadership principles for building resilient technical teams.

Know the Guests

Tamanna Sait

VP of Engineering for Cloud at Crusoe Energy

Tamanna Sait is the VP of Engineering for Cloud at Crusoe, where she leads the development of energy-efficient AI infrastructure solutions. Born and raised in India, Tamanna earned her undergraduate degree in Computer Science and Engineering before moving to the United States in 1999 to pursue a Master's in Computer Science with a focus on optical networks. Her extensive 25-year career spans the evolution of infrastructure technologies, from traditional networking to modern cloud platforms.

Know Your Host

Matt Pacheco

Sr. Manager, Content Marketing Team at TierPoint

Matt heads the content marketing team at TierPoint, where his keen eye for detail and deep understanding of industry dynamics are instrumental in crafting and executing a robust content strategy. He excels in guiding IT leaders through the complexities of the evolving cloud technology landscape, often distilling intricate topics into accessible insights. Passionate about exploring the convergence of AI and cloud technologies, Matt engages with experts to discuss their impact on cost efficiency, business sustainability, and innovative tech adoption. As a podcast host, he offers invaluable perspectives on preparing leaders to advocate for cloud and AI solutions to their boards, ensuring they stay ahead in a rapidly changing digital world.

Transcript

00:01 - Introduction & Guest Background

Matt Pacheco
Welcome to Cloud Currents, a podcast that navigates the ever-evolving landscape of cloud computing and its impact on modern businesses. I'm your host, Matt Pacheco, and I help businesses understand cloud trends to better navigate their IT decisions. In today's episode, we're diving into the fascinating evolution of cloud infrastructure and the emerging challenges of building sustainable AI platforms with someone who's seen it all from multiple angles. Our guest today is Tamanna Sait, VP of Engineering for Cloud at Crusoe. With over 25 years in the industry, Tamanna has built public clouds at IBM, architected Facebook's massive private cloud, and optimized cloud operations from the customer side at Netflix and Airbnb. Today she's at Crusoe Energy, where she's applying this wealth of experience to create AI infrastructure solutions that address not just performance and scale, but the significant energy challenges that come with modern AI computing.

In our conversation, we'll explore the evolution of cloud infrastructure, strategies for optimizing performance and costs, and the sustainable future of AI computing infrastructure. So thank you for joining us today.

Tamanna Sait
Sure. Hi, Matt. Thank you for having me here. Pleasure to be here.

Matt Pacheco
Excellent. So let's jump right in. Can you walk us through your journey in your career in cloud and where you are today?

Tamanna Sait
Absolutely. I would like to start out from my roots. I have an undergraduate degree in computer engineering from India, and I got my master's in computer science from the State University of New York at Buffalo. And as you highlighted, I've spent over 24 years in infrastructure and infrastructure as a service, and 14-plus of those years have been fully dedicated to cloud and building cloud infrastructure, all the way from bringing infrastructure as a service to a public cloud for the first time, to building a private cloud at scale. Then I hopped on and embraced the journey of being a customer of a public cloud. That was at Airbnb and Netflix, where we built highly reliable, secure, scalable, and efficient cloud platforms.

So now I have taken all my learnings from these experiences, and I'm back to my roots of building a public cloud. I recently joined Crusoe. Crusoe is a leader in the AI space; it is the industry's first vertically integrated AI infrastructure provider, and we provide a reliable, cost-effective, and environmentally aligned AI platform. We do this by harnessing large-scale clean energy, we build AI-optimized data centers on top of that, and then we build AI solutions through our Crusoe cloud that enable all our customers, from AI startups to enterprises. So that's a little bit about me in a nutshell and what I'm currently doing.

03:12 - Evolution of Cloud Perspectives

Matt Pacheco
That's really cool. So you have a lot of various experiences between small and large businesses. How have your perspectives kind of evolved after working with so many different companies? And how are you applying that to what you're doing right now?

Tamanna Sait
Yeah, absolutely. If you really look at my experience and my journey, it spans public clouds, hybrid clouds, and private clouds. It has been a very interesting experience, and the whole journey of transforming from traditional infrastructures and traditional clouds to cloud computing and infrastructure as a service has been fascinating. As we moved towards the public cloud, a few things we saw at the time we were making the shift: the boundaries between compute, network, and storage, and between the people operating them, were pretty much going away. People really wanted simple APIs. They wanted three-click enablement of their infrastructure and services in the cloud.

Moving to the public cloud was more about taking a minimal set of features and capabilities and really being able to scale them out reliably, efficiently, and securely. That was a big shift we had seen. Now, taking this from the traditional world across public cloud, private cloud, and hybrid: as I mentioned, the public cloud is about taking a minimal set of capabilities and being able to scale them out reliably, efficiently, and securely, and that is from the view of a public cloud provider. But if you look at the same thing from the view of a customer of a public cloud, there it was more about scaling and building a reliable, secure, scalable, and efficient cloud platform.

What also became important there was the cost aspect of things, making sure you're thoughtful about the cost efficiencies you drive. In addition to that, it was also about scaling and building your cloud platform within the boundaries and limits of your public cloud provider: how would you architect and build this at scale while working within those boundaries and limits? So that was the public cloud provider and customer perspective. Private cloud was very interesting because, having built one of the largest private clouds, growth and scale was definitely our primary focus. But it was also very interesting that at the same time we had to be thoughtful about efficiency, and we had to think about bending the demand curve.

And the reason for that was primarily the physical infrastructure and hardware constraints we were hitting as we were growing at that scale. So that was a little bit about how I felt about the private cloud journey. Coming to the hybrid cloud was even more interesting, and it was actually the best way for us to go into this space, because we had a lot of traditional, on-prem infrastructures. It was very easy for these kinds of traditional infrastructure companies, which ran on-prem, to quickly adopt new features, new capabilities, and new services in the public cloud while they continued to run critical workloads and data on-prem. And this was the fastest way for them to expand and make use of these capabilities.

What we also saw is that even at that time, when people were transitioning to the cloud, there were always critical workloads and critical data that people still wanted to run on-prem and did not want to migrate to the cloud. That's where hybrid connectivity and hybrid clouds played a very important role, and this really served them well. Another piece of the hybrid cloud was about latency and presence: colos and PoPs, where people wanted to run things closest to the user, keeping it as close as possible to the last mile, while still being able to connect back into the public cloud to leverage its capabilities and services.

Matt Pacheco
That's excellent. You had mentioned your work on one of the largest private clouds around, and that there were potential limits with hardware and resources. Can you elaborate on how, when it came to scale, you addressed some of those challenges?

Tamanna Sait
Absolutely. Some of these limits were literally around how we were scaling out, because there is only so much terrestrial fiber or subsea cable you can lay, or, as you vertically scale up your hardware, only so many ports and so much bandwidth before you are constrained by the hardware. And that's where we had to think about horizontal scaling. What does it mean to really horizontally scale things? And that's why it's not just about the largest private cloud; overall, when you're thinking about scaling, these are the dimensions you want to think of. How can you think of your capacity blocks so you stay within the limits of your providers and then find ways to horizontally scale them out? Those were some of the things we were looking at.

The other piece I spoke about was bending the demand curve. We were also looking at the efficiency side of things and trying to see: can we bend the demand curve here? How can we get more efficient from the demand perspective as well? So there were multiple strategies we put in place, all the way from figuring out a horizontal scaling strategy, staying within the limits and boundaries we were running into while still being able to scale out, to these efficiency initiatives. We were really going and bending the demand curve and also looking at our architectures, and at how we could optimize and build architectures that would bend some of these demand curves.


09:13 - Cost Optimization Strategies

Matt Pacheco
Excellent. It sounds like you did some great stuff throughout your career on these different cloud platforms and strategies. I want to talk about cost, because you mentioned cost a little earlier and it's really important to our audience. It's a topic that keeps coming up over and over. When we survey our target market, mid-size IT decision-makers, cost is usually at the top along with security and AI and all that, which we'll also get to later.

So from a cost perspective, you have a unique view from both working with cloud providers and as a customer. How do the approaches to cost optimization differ between those two perspectives?

Tamanna Sait
Absolutely. I think this topic is very near and dear to my heart, but I would start by taking it back to something that is even more near and dear to me, which is reliability and security. Every time we talk about cost and efficiency, I think it's very important to start by talking about reliability. The way I look at it, reliability is always your number one priority, and at the same time you need to be thoughtful about the efficiency aspects of things. So when we put these two things in the picture, let me take an example. Because no matter what you do, I believe reliability and security always come first; I'm never going to trade them off against cost efficiency.

But what does it mean to balance and achieve both of them, reliability as well as efficiency? I definitely want to double-click on one example I always use, which is spot instances. So what is a spot instance? Public cloud providers offer this option: it's a very cheaply priced compute instance. And why is it cheaper than the other compute instances on offer? One reason is that, on very short notice, the cloud provider can take this compute instance away from you as a customer of the public cloud and reassign it to other customers when new demand comes in. Now think of a situation where I want to spend less money, so I use spot instances.

But your cloud provider does not know what kind of workloads, services, business-critical applications, and real-time applications you might be running on these spot instances. So when they come and try to reclaim the spot instance, it could potentially impact your real-time workloads, your critical traffic, and your business-critical applications. Now let's look at it from a slightly different angle: what if I took the same idea and built my own spot market? As a customer of a public cloud, if I build a reliable, secure, efficient, and scalable cloud platform, and I build my own spot market, I can make sure that I know the configuration of my workloads, I know what my business-critical workloads are, and I know what SLAs and SLOs they need to meet.

So I make sure those business-critical applications continue to run, but at the same time I can start looking at some of my opportunistic workloads. These are offline workloads, maybe some big data or Spark workloads. They can be scheduled offline; they're not time sensitive, not real time, not business critical. What we can do is treat them as opportunistic workloads and very efficiently schedule them on the same compute instances in off-peak hours. What this gives us is reliability, because we won't affect our business-critical applications, but at the same time it gives us a cost benefit and helps drive efficiency, because we are effectively using our compute instances in off-peak hours to take them to maximum utilization.
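
To make this concrete, here is a minimal, hypothetical sketch in Python of the internal spot-market pattern described above: business-critical jobs are always admitted, while opportunistic offline work such as a Spark backfill is scheduled only during off-peak hours and is preempted the moment critical utilization climbs. The job names, the off-peak window, and the 60% utilization ceiling are illustrative assumptions, not figures from the conversation or from any particular provider.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical thresholds for illustration only; real values would come from
# each platform's measured traffic patterns and SLO budgets.
OFF_PEAK_WINDOWS = (range(22, 24), range(0, 6))   # e.g. 10pm-6am local time
CRITICAL_UTILIZATION_CEILING = 0.60               # headroom reserved for critical load


@dataclass
class Job:
    name: str
    opportunistic: bool  # offline batch work (e.g. a Spark backfill) that tolerates preemption


def is_off_peak(now: datetime) -> bool:
    """Return True when the fleet is historically under-utilized."""
    return any(now.hour in window for window in OFF_PEAK_WINDOWS)


def can_schedule(job: Job, critical_utilization: float, now: datetime) -> bool:
    """Admit business-critical jobs unconditionally; admit opportunistic jobs
    only during off-peak hours and only while critical workloads have headroom."""
    if not job.opportunistic:
        return True
    return is_off_peak(now) and critical_utilization < CRITICAL_UTILIZATION_CEILING


def should_preempt(job: Job, critical_utilization: float) -> bool:
    """Reclaim capacity from opportunistic jobs as soon as critical demand rises,
    so efficiency never comes at the expense of the business-critical tier."""
    return job.opportunistic and critical_utilization >= CRITICAL_UTILIZATION_CEILING


if __name__ == "__main__":
    spark_backfill = Job("nightly-spark-backfill", opportunistic=True)
    checkout_api = Job("checkout-api", opportunistic=False)
    late_night = datetime(2024, 1, 1, 23, 30)

    print(can_schedule(spark_backfill, critical_utilization=0.35, now=late_night))  # True
    print(can_schedule(checkout_api, critical_utilization=0.90, now=late_night))    # True
    print(should_preempt(spark_backfill, critical_utilization=0.75))                # True
```

In a real platform this admission check would live inside the cluster scheduler (for example, as priority classes and preemption policies), but the core trade-off is the same one Tamanna describes: reliability first, with efficiency gained from otherwise idle capacity.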

So that's an example of how we've looked at cost efficiency and cost management. But I do strongly believe it's very important to look at both of them together: how can you still achieve efficiency without losing reliability and security, which should always be the first priority for any always-available, 24/7 cloud platform you want to operate? I also want to bring this back to Crusoe and how we are doing this from the public cloud provider dimension. I want to highlight that at Crusoe we take reliability very seriously. We are serving enterprise-scale customers, which means reliability will always be our number one priority over efficiency, and even here we are running a lot of initiatives around it.

We are trying to make sure that we maintain the integrity and the continuity of our systems as we continue to scale them. We are trying to do very proactive rollouts with short term maintenance windows so that we are very proactively rolling out fixes for reliability, performance and scale issues that we see. And this is an ongoing effort. You will see us invest a lot in this. We'll continue to invest and we'll continue to make sure that reliability stays the number one priority for us over efficiency.

Matt Pacheco
That's really interesting. We talked a lot about public cloud, but what about private cloud? Were there any techniques or perspectives you had on cost optimization there as well? I know you mentioned reliability, but are there any strategies you can share for private cloud?

Tamanna Sait
I mean, that's where I was talking about the point around bending the demand curve. That's what happens: even if you're ready and willing to scale, it could be your physical infrastructure and hardware constraints that inhibit you from meeting those kinds of scale and growth needs. And that, in some ways, starts driving you towards efficiency, towards bending the demand curve. That's where you want to go back, really look at your utilization, and find areas of optimization, which could go all the way from figuring out how you're going to horizontally scale while staying within the limits of the capacity you're constrained by. That's one aspect.

The second aspect could be looking into the architectures of how you have designed your services, what that means, and how you can optimize at those levels as well.

Matt Pacheco
Really interesting and thank you for sharing that perspective. What common mistakes do you see organizations making when trying to optimize their cloud spend?

Tamanna Sait
Yeah, and it goes back to the same thing I always talk about: the number one thing is reliability and security. I believe that no matter what you build, if it's not available, if it's not up 24/7 and you cannot use it, nothing else matters. Providing that reliable, secure experience to our customers, our services, our partners, and our stakeholders is super critical. So we do want to optimize for cost, but we want to do it in a very thoughtful way. We definitely want to make sure we are prioritizing reliability and security over those efficiencies. I think it's a very delicate balance, and it's also an art, how you balance the two. But knowing very clearly what your priorities are around reliability, efficiency, and the availability of your cloud and your infrastructure is super critical as you make these decisions.


16:54 - Security & Reliability Integration

Matt Pacheco
Excellent. And let's talk a little bit about security as well, in relation to reliability. How do you maintain the balance between security and reliability as you're scaling your cloud infrastructure?

Tamanna Sait
Right. I always put reliability and security at the same level of criticality, priority, and importance, because I think the security and the privacy of anything we run and build on our cloud infrastructure is super important. It's as critical as your reliability, because your vulnerabilities are the ones that will bring down your cloud; they will bring down your platforms. And for me that's equal to reliability, because if there's anything that's going to interfere with your cloud being operational, with your platform being up 24/7, it's equal to reliability for me. So I have always put security and reliability at the same level of priority and importance, and I feel they're just tied at the hip. They're one and the same when you look at it, because the impact is the same.

Whether there's a security breach or your customers cannot use your cloud, the impact is the same.

Matt Pacheco
Yeah, I guess that's a perfect way of saying it because they do go hand in hand. I like that. What reliability challenges are unique to AI infrastructure compared to traditional cloud workloads?

Tamanna Sait
Yeah, I wouldn't say there's anything fundamentally different we're doing there. There's this element of making sure your AI infrastructure is designed with an energy-first mindset, designed to meet the right power and cooling needs. Ultimately those are some of the nuances or differences that come in, because you're dealing with high-density, power-intensive infrastructure, and it's about making sure this infrastructure stays up, is reliable, and meets your performance needs. So the type of infrastructure, hardware, or compute you're dealing with could be a little bit different, but essentially the principles of your compute, network, and storage, of your AI infrastructure being up and available 24/7, are pretty much the same whether it's a traditional cloud platform or an AI-specific infrastructure platform.

But the variables that really change are making sure that the reliability continues with this high-density compute, with the massive power needs and the cooling needs that come with it.

19:36 - Sustainable AI Infrastructure at Crusoe

Matt Pacheco
That's a great segue into what I want to talk about next: sustainable AI infrastructure. So let's talk about Crusoe a little bit. How are you approaching the significant energy demands of modern AI computing?

Tamanna Sait
So if you really go and see, we sit at the intersection of compute and energy, and we are building systemic solutions, continually balancing the drive for innovation with the energy that powers these solutions. As I mentioned before, we are leaders in the AI space, and we are the industry's first vertically integrated, purpose-built AI infrastructure platform. We are building a reliable, cost-efficient, and environmentally aligned AI cloud platform. We build this AI platform primarily by harnessing large-scale clean energy, then we build our AI-optimized data centers on top of it, and then we provide and develop all these different AI solutions through our Crusoe cloud to serve our customers, all the way from AI startups to actual enterprises.

And it is part of our strategy that we will always first go to places where we can find abundant, low-cost clean energy, then build the data center there, and then build our cloud to enable our customers, from startups to enterprises. And I'm super excited about our Abilene data center. At 1.2 gigawatts, it is going to be one of the biggest AI data centers in the world.

Matt Pacheco
That's so cool. I love what you guys are doing with the data centers and your whole platform. It's really cool. One of the coolest things I think is cooling in data centers that handle AI workloads. You have to be efficient with cooling, you have to be sustainable with it. What are some cool, interesting cooling technologies and energy efficiency practices you're using in your data centers?

Tamanna Sait
We are definitely employing a bunch of things, and liquid cooling is one. As we bring in newer GPUs, which are more power hungry and need the right cooling technologies in place, we are definitely investing in all of that, and liquid cooling has been a big one for us at this point.

Matt Pacheco
Yes, it's really cool. For anyone who's seen it in a data center, it's one of the coolest things you could see. I geek out about that a little. So another question for you. How do you balance the performance requirements of AI workloads with your environmental responsibility?

Tamanna Sait
Absolutely. As you see, we are going to these low-cost, abundant clean energy sources. We have a data center in Iceland, and that one is fully geothermal powered. So we are using a lot of these environmentally friendly, clean energy sources to generate the power to meet those needs. And as we said, we use an energy-first approach, and that has really helped us a lot in this space. We go to areas where we can get low-cost, abundant clean energy, and that's where we start building our AI-optimized data centers and then run our Crusoe cloud.

So that has really enabled us to meet the power needs that AI brings to the table, but also to do it with an energy-first approach using clean energy sources.

Matt Pacheco
Interesting. And a lot of companies are looking to run AI workloads and use AI in their infrastructure, whether it's offering a product or managing their infrastructure. What considerations should organizations use to evaluate the sustainability of their AI initiatives?

Tamanna Sait
Absolutely. I think the first thing is coming to companies like Crusoe, companies like us who are building this energy-first, clean-energy approach into the heart and soul of how we approach and build this vertically integrated AI infrastructure platform. So definitely be aware of where the infrastructure is being hosted, what kind of energy sources are being used, what kind of power is being used for it, and what kind of cooling solutions are being provided. Crusoe is definitely one of the first vertically integrated AI infrastructure solutions building this in an environmentally friendly way by harnessing clean energy sources. Being aware of those aspects would help organizations a lot in really understanding what it means for their sustainability and their clean energy needs.

Matt Pacheco
Excellent. So talking about energy and power, it's in demand right now, to say the least. Lots of companies are looking for places where they can leverage infrastructure that supports AI. So my question to you is, with energy constraints in the future, how might that affect where AI infrastructure is deployed? You mentioned Iceland and Texas already; I'm curious how energy constraints in certain regions might affect where you build.

Tamanna Sait
We have already started seeing this in the way Crusoe is building its AI-optimized data centers, the way it's doubling down on its strategy to first look for the clean energy sources, figure out how we can build an environmentally aligned data center, and then start building our AI-optimized data center on top of that. And the places where we are building are not where everything was traditionally ready and the prior data centers were built. What that means is we are definitely going to these kinds of sources, and we have to build everything from the ground up from an infrastructure perspective to make sure we are powering the data center in these areas where we are getting this clean energy.

So that's how I see the landscape evolving. And Crusoe has definitely made a lot of progress in this space in terms of what it has acquired from a clean energy perspective, with our Abilene, Texas data center and our Iceland data center as well. It's again about going to the clean energy sources and using our energy-first approach to AI.

26:32 - Building Teams & Culture at Crusoe

Matt Pacheco
That's so cool, that geothermal energy. I watched a documentary on Iceland, and it's just so interesting how the whole country is using this energy in a really exciting, very sustainable way. So, really cool. We talked about building data centers, scaling cloud, and your experience doing all these things. I'd love to talk about building teams and culture at Crusoe, because you need people to help you operate some of these really interesting projects. What's your biggest focus as you scale Crusoe's cloud organization?

Tamanna Sait
You absolutely need it, Matt. My biggest focus and priority as we are scaling Crusoe is hiring. We are hiring and making sure that we continue to instill the culture and core values of Crusoe as we move forward. And I definitely want to touch upon one of our core values, which is that we strive very hard to tap into our collective genius. Our collective knowledge draws from the diverse set of experiences and expertise we bring back to the table. We very recently opened our European headquarters, and we are hiring there as well. And I feel this is where our culture and our hiring come together.

Because I believe that with this European headquarters we have been able to tap into our collective genius, and this has positioned us to go and solve problems we have not solved before. It has unlocked so much potential and so many new opportunities to build these innovative AI solutions. So with that, I just wanted to say that we are hiring a lot. We are hiring in Dublin, Ireland, and we are hiring in the US. We are looking for software engineers, network engineers, SREs, and data center and site operations folks. We are hiring across the board, and we would love to hear from you, so please feel free to reach out to me. Hiring has been my top focus as we scale and grow at Crusoe.

Matt Pacheco
That's great. So I can only imagine the market for talent who can do a lot of these interesting things. I mean, everyone's trying to get into the AI infrastructure space and do these types of interesting workloads and do things in their data centers. How do you approach talent acquisition in such a competitive market?

Tamanna Sait
It's actually very interesting. I want to take it back to the fact that you said this space is growing so quickly, but I also want to go back to what I mentioned before, which is that we are learning every day. We are solving problems and challenges we have never solved or embraced before. What that means is we are all moving fast, failing fast, and learning fast. And when we are looking at talent, that's what we want. Because when you come and work in this space, and the space is growing so quickly, we are looking for people who bring a lot of good prior experience, but we are also looking for people who are willing to learn very quickly, because there's so much innovation happening here.

And that's why I always talk about our core value of tapping into your collective genius, because I don't think this is a problem a single person with a single skill set can solve. It's going to be the collective set of experiences, the diversity of experiences, that will need to come together to unlock this new potential and solve these new problems and challenges. We are definitely looking for a big growth mindset: people who are constantly looking to think out of the box, who bring diversity, who can learn very quickly, and who bring different, diverse experiences so that we can solve these problems together with our collective genius.

Matt Pacheco
So culturally, how do you foster innovation while maintaining that energy first mindset across your entire organization?

Tamanna Sait
Yes. So while your strategy and your plan are pivoted on the energy-first mindset, I feel every piece of that pie has so much innovation to do. Just in the energy space alone there is so much innovation to do. Now you take that piece, tie it into the AI world, and then solve problems, and there's so much innovation to do there as well. And I believe one of the biggest things that fuels innovation is diversity of thought. As you bring this collective genius together with these diverse experiences, this diversity of thought and this constant curiosity to keep asking questions, learning, and really pushing yourself to go solve those problems just sets the whole thing apart. And that's what sets us apart.

Matt Pacheco
You know, that's really cool, and I'm sure your team appreciates that. So this one's personal: what leadership principles have you found most effective when building high-performing technical teams, doing all these interesting things at Crusoe or anywhere you've worked? Because you've worked in a few places, you've had a lot of experience. Just some advice for the listeners.

Tamanna Sait
Absolutely. I think one thing I would say is that as you navigate through all of this, there is one piece that is constant, and that is change. One thing I've seen over my last 24 years, over the last 14-plus years I've been in this industry, is that change is constant. A leader who's transformative, who's able to take people through change, who's able to adapt quickly but also move quickly and take everybody along with them through the change, becomes super critical, because our tech industry is constantly evolving. When you feel you've nailed something, you're onto something else. How do you quickly keep pivoting? How do you embrace this change?

So a leader who's able to embrace the change and take their organization through that change, who can be strategic but can also get into the weeds to execute and convert strategy into action: I think these are some of the things that play a very important role in all of this. Hiring the best, empowering them, but also moving together, and moving together through the change as well.

33:21 - Future of Cloud Infrastructure

Matt Pacheco
That's some excellent advice. Thank you. Okay, so for the. We're gonna, we're gonna look towards the future with the next few questions. This would be kind of fun. I love to do this on the podcast. So I'm going to ask you, how do you see the relationship between traditional cloud infrastructure and AI compute evolving over the next few years?

Tamanna Sait
Absolutely. When we talk about traditional compute infrastructure, there has been a big evolution, all the way from on-prem traditional infrastructure to the cloud, to the public cloud. And this is where the first set of boundaries started going away between your compute, network, and storage folks. You were looking for simple APIs; you were looking for simple, three-click enablement of your infrastructure and your services in the cloud. It was not about complexity but about simplicity, where you take a minimal, simple set of capabilities but are able to scale them out in a reliable, secure, and efficient way. As we look into AI infrastructure, I don't think it's going to be very different; it's just the nature of the workloads.

As I said, we're going more into high-density compute and much larger power requirements, but we are still going to be in the cloud, still going to be in a public cloud. And it's about building that vertically integrated stack, providing an excellent infrastructure as a service that is optimized for your GPUs and your AI workloads and provides the best performance at that layer.

I think that becomes super critical for your training, then providing the right requirements for serving your inference workloads, and then at the same time going up the stack to provide AI as a service and managed AI solutions that our enterprise and other customers can very easily adopt as part of our Crusoe cloud, or the public, AI-focused clouds we are building. So that's where I see it all evolving. It's going to be the same thing: it will be your managed AI solutions and services, but more purpose-built and focused on AI needs. The same goes for your infrastructure-as-a-service layer, and then trying to keep up with those intense needs of power and cooling.

Matt Pacheco
Do you see any emerging technologies that will have a big impact on cloud infrastructure over the next few years?

Tamanna Sait
I think the biggest evolving technology is AI.

Matt Pacheco
AI itself.

Tamanna Sait
AI itself is your biggest evolving technology, and it's that vertically integrated stack we went over where evolution and innovation are going to happen at every layer: all the way from what we use to power and cool these systems, to being able to operate them at the right level of reliability, efficiency, security, and scale, and then going into very simple managed AI solutions that solve different business problems and use cases that folks can just plug and play.

Matt Pacheco
So what skills do you believe will be the most valuable for the next generation of cloud engineers?

Tamanna Sait
So definitely things are changing with this whole AI evolution that's happening. Something I was thinking about is that it's also going to be about how close we are to the hardware, to optimize it, to use it efficiently, to really have a very high-precision, well-performing infrastructure that will unleash the innovation in this space of AI. I think that's where a lot of the innovation is going to lie, and that's where a lot of our next generation of skills and talent evolution is going to lie. And then a lot of the pieces that AI is really going to automate are pieces that will go to the lower side of things.

I would say I'm at a point where I'm cautiously watching, because we absolutely make sure that in anything we build and do, we are being responsible; we are bringing responsible AI into anything and everything we do. So I'm going with the flow and watching the evolution, but I definitely feel that this world of AI is just going to bring us closer and closer to the hardware, and to the power and cooling innovation challenges that lie ahead of us, to be able to power these other use cases of AI.

Matt Pacheco
That's very exciting. Final question for you and this is an advice question for our listeners. What advice would you give organizations just beginning their journey towards sustainable cloud operations?

Tamanna Sait
Yeah, the main thing I would tell them is to understand your use cases and understand what problem you're looking to solve. If a purpose-built, AI-optimized infrastructure-as-a-service solution, one that is aligned with clean energy and environmentally aligned, is what you need, then I would definitely recommend you talk to Crusoe. Come talk to us; we are here to support your needs. We are also here to provide you with a vertically integrated AI infrastructure platform that offers different AI solutions to solve your different business use cases as well. So think of us all the way from giving you the clean energy source, to your infrastructure as a service and your cloud platform, to even providing managed AI solutions that solve different problems with you.

Matt Pacheco
Thank you for that. That's great advice.

Tamanna Sait
Leverage Crusoe.

Matt Pacheco
That's a recurring theme. Yeah, you guys are great. You're doing some really interesting things, and I appreciate you being on the episode with us today and discussing your leadership, a little bit about Crusoe, and all the expertise and experience you have. So thank you for being on Cloud Currents today.

Tamanna Sait
Thank you so much, Matt. Thank you, Cloud Currents, for having me. Pleasure being here.

Matt Pacheco
Thank you and for our listeners, thanks for tuning in today. Look out for more episodes wherever you get your podcasts and we'll talk to you soon. Have a great day.