EP 04: AI in the Cloud: Revolutionizing IT Operations with Andi Mann


About This Episode

In this episode of Cloud Currents, host Greg Ahlheim engages with Andi Mann, the founder and global CTO of Sageable. With over 25 years of diverse experience in IT, Andi shares his invaluable insights into the evolving world of cloud technology and digital transformation. The conversation delves into Andi’s professional journey, his strategic approach to technology research and implementation, and his perspectives on the future of IT operations. Join us as we explore key trends in cloud computing, the impact of AI on IT strategy, and the critical role of innovation and leadership in driving business success in the digital era.

Know the Guests

Andi Mann

Founder and CTO of Sageable

Andi, the Founder and CTO of Sageable, boasts over 20 years of global experience in large-scale enterprise systems software, spanning mainframes, midrange, server, and desktop systems. Renowned for his roles as the former CTO of Qumu Corp, Splunk, and CA Technologies, Andi has played instrumental roles within the IT departments of global corporations and enterprise software vendors. His extensive leadership experience includes guiding diverse technical, sales, and marketing teams. A recognized technology expert, Andi is celebrated for delivering business results through innovation and excellence. He excels as a strategic leader, proficient in technology, strategy, marketing, sales, and partnerships.

 

In addition to his notable roles, Andi serves as a trusted advisor, providing invaluable guidance to executives and teams in technology strategy, product innovation, and business development. His digital proficiency extends to being an award-winning producer in social, owned, earned, branded, and multi-media realms, driving transformative engagements. An exceptional communicator, Andi is a published author, compelling speaker, and motivational leader for global conferences, events, social media, press, and sales enablement. As a consistent achiever, he exceeds corporate, team, and personal goals across various key metrics.

Know Your Host

Greg Ahlheim

Sr. Vice President of Product Development

Greg Ahlheim is the Sr. Vice President of Product Development. With over 20 years of experience, Greg joined TierPoint in 2019 and today manages the team responsible for conceiving, designing, and building the industry-leading colocation, cloud, and managed service solutions that help the company’s thousands of clients on their IT transformation journeys. Before TierPoint, Greg held leadership positions at World Wide Technology and CenturyLink (now Lumen). He started his career as an engineer with Bridge Information Systems. Greg holds a Bachelor of Science in Information Technology from Lindenwood University.

Transcript

(0:00) Introduction to Andi Mann

Greg Ahlheim: Hi, welcome to the Cloud Currents podcast. My name is Greg Ahlheim. I'm your host, and today we're being visited by Andi Mann. Andi is the founder and global CTO of Sageable, a research and advisory firm focused on digital transformation in the cloud. Andi has more than 25 years of experience across a lot of different IT roles, CTO, cloud architect, VP of products, as well as many other things, and he's worked with very large companies like Uber, Box, and Splunk, helping those companies shape technology strategy and execution. So Andi, welcome to the podcast. Glad to have you today. I've been looking forward to this.

Andi Mann: Thanks, Greg. It's great to be here.

Greg Ahlheim: Yeah. So you have a really diverse background, and it looks like it's all centered around technology research that allows you to help other companies with their strategy.

Tell me about how that started for you and how you got to where you are. It seems like a really interesting story.

Andi Mann: Yeah, look, I mean, I started doing research and analysis and working with the analyst community way, way back in my days living in Australia, back when I was running IT operations, doing work at the coalface for banks, insurance companies, mining, oil and gas.

And I started to do evaluations of software solutions for implementation. I was leading a small team as a project manager, so I was trying to figure out: what software do we need? What's going to make us better as a business? What can make my team more efficient, more productive? So I started to look for different opportunities to acquire different software products.

And I started to work with software vendors, and with analysts that do research into the market, to figure out what's good and what's great. Subsequently, I actually spent some time in IT and in the vendor world with companies like New Dimension Software of Israel and BMC Software of Houston, Texas.

And I subsequently joined a company called Enterprise Management Associates, actually here where I live now in Boulder, Colorado in the United States. And that got me into five years' worth of deep market research, use cases, ROI calculations, feature-function comparisons, and competitive intelligence.

Helping a lot of the software vendors who were my friends at that time to grow in the marketplace. Subsequently I took all of that knowledge and started building products again at CA Technologies, and subsequently at Splunk, where I was working with, as you pointed out, some amazing large enterprises, some of the best enterprises in the world.

And I continue to look at this function as an advocacy function. How can I advocate for my customers to help my business build better software for them? Because that lets me advocate to other customers to sell my software to help them do better work. So it's always been about growing businesses.

Creating innovation, doing new work in new ways to help grow my business. And I think if you don't understand where you're coming from, so the market research and analysis side, it's really hard to build that strategy and that plan that's going to get you where you want to go. And so now with Sageable, I'm back in the analyst business, doing original research, analyzing results of other people's research, looking at the market features and functions, products, solutions, challenges, use cases.

And just trying to match that all up so that my customers, whether they're technology innovators, technology investors, or end users of technology are meeting at the right point to drive all of their business up.

(3:30) How to Navigate the Information Overload to Drive Actionable Outcomes

Greg Ahlheim: You know, one of the things I found really interesting and cool when I was reading about your company is that your advisory services are centered around actionable research. And that landed with me, because I think about a lot of companies that have access to research articles from all of the different places you can get research.

It's taking those inputs: how do you think about using them to drive a direct outcome, whether it's a transformation, a strategy, growth, whatever the business driver is? I think that's an interesting thing, because there's sometimes an overwhelming amount of research.

Being able to sort through all of that and figure out what's important for me, what I can use as a business leader to advance my company's agenda, the growth, the savings, whatever it is. Is that the way you think about research: taking all of that together and focusing it narrowly on the outcomes of the business you're helping, I guess?

Andi Mann: Right. Yeah, absolutely, Greg. You've got to understand: what does it mean? What can I do with this? Otherwise, you're just reading for reading's sake. Don't get me wrong, I'm a big fan of reading, but when I want to read for no reason at all, I'll read fiction or maybe an autobiography or something. But if I want to drive my business forward, yeah, there's a lot of research.

And some of it's really good, and some of it's really not. But some of it really is good, and it's a case of: how do I take that and do something with it? So say I've got a piece of research that talks about, for example, development: how developers are working, how they're teaming, collaboration techniques and processes. You know, I've written research like this, the State of DevOps reports.

I was an author of multiple versions of that seminal research, in my view. But what we looked at there was, for example, not just what practices are driving business forward, what practices and processes and attitudes help developers to be more effective. We actually looked at the antecedents of that, what happened first, and then what outcomes happened afterwards when you followed these known good patterns.

And so that's been a thread throughout my research, throughout my analysis and my advice to my clients: getting down to that bottom line of what you actually do about this. Is it even helpful to say these things, to reveal this data? If none of my clients can actually do anything with it, I'm just wasting it. So I'm really looking for those actionable results.

Whether it's data or analysis or even opinion: having a path forward where I can take this data point and say, well, if I extrapolate from this, then I can do these things, which will have these results. You know, going back years, I've talked about measuring activities versus the metrics that matter.

There's so many vanity metrics in the world, especially in middle management. But the metrics that matter are the things that actually deliver results against your business goal. And that's what I'm trying to get to. What can we do to help achieve those business goals, whether it's productivity or revenue, profit, growing partnerships, growing a customer base, sustaining in new markets.

What do I do that's going to make a difference? That's where I focus.

Greg Ahlheim: Yeah, and for those who, like you, have a focus on research and advisory, how do you tell a good piece of research input from something you may not want to use? Are there any markers, anything that would make you say, "I'm going to question that piece because of X, Y, or Z," whatever those reasons might be? How do you tell?

Andi Mann: Yeah, there are some things that I see. You know, I always take a look at vendor research a little bit skeptically. Look, I've owned and run vendor research at CA Technologies, at Splunk, at Qumu, and I'm doing that for two reasons, often. One is inbound insight: learning what is happening, actually using that research for product planning or strategy planning, for M&A, these sorts of things.

The other reason is outbound: I'm absolutely using that research to gain awareness and generate prospects. So I publish certain parts of my research. And this is actually a dirty little secret that people don't know: when vendors publish research, they don't necessarily publish all of their research.

They publish the good stuff, right? And they publish the stuff that's not going to help their competitors. So I always look a little skeptically at vendor research. Not that vendor research is necessarily bad, but always take that skeptical eye and try to figure out what's missing as well as what's included.

But look at things like who ran the research. Was it a reputable organization, or was it just, you know, some product marketing leader who put a survey up online, right? Do they understand analytics? Do they understand statistics? Do they understand, you know, standard deviations and how population samples work?

Just look at the fundamentals of the statistics, of who did the research. Look at the demographics: are they your kind of business? If you're running a 500-person business and the research is all 10,000-person businesses, it's not going to really play well for you. You know, you have to make your own choices and balance on that.

Similarly, geographical locations, and the roles and titles that were surveyed. So a lot of it is basic statistics, Greg. It's understanding how research data comes to be and then analyzing whether it applies to you. I wouldn't say that if it fails some of these characteristics, it's bad. But it might not apply to your situation, your business, your use case.

That's probably the biggest thing that I start with: just looking at the fundamentals of what the research is, who conducted it and how, and whether it applies to me and my business. You'll rule out some 60 percent of all research starting there.
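As a rough illustration of the sampling fundamentals Andi is pointing at, here is a minimal sketch, assuming a simple random sample and the standard normal approximation; the function name and survey sizes are illustrative, not from the episode:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p observed among n
    respondents: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 finding from a 100-respondent vendor survey is good to
# roughly +/- 10 points; from 1,000 respondents, roughly +/- 3 points.
print(round(margin_of_error(0.5, 100), 3))   # 0.098
print(round(margin_of_error(0.5, 1000), 3))  # 0.031
```

The same "basic statistics" lens applies to demographics: a tight margin of error over the wrong population still tells you nothing about your own business.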

Yeah, lots of reasons. You know, I finished up a role, by design, as CTO of a public company: turning that around, doing cloud transformation, doing DevOps transformation, doing innovation planning and programs. And I was pretty successful at what I did there, and I have been successful in the past being an advocate for technology innovation, advocating for my customers at Splunk, at CA. And I thought, you know what, I want to have a broader remit.

To be able to advocate on more things than just the products my company believes in and sells. So, you know, at Splunk, at Qumu, at CA, I was helping to sell products by advocating for my company, and helping build better products by advocating for my customers. But it was still bounded. At Sageable, I get to focus on all the things that I've done my entire career without bounding it to my latest employer, you know, all the things in IT operations.

I started out at the coalface, with companies like American Express and Westpac and NatWest and Exxon, then these large multinational vendors, working with big companies, enterprises, with great success. Like you say, large multinational brands, helping them run their IT and drive better businesses as a result.

I just got excited by the idea of working with more people in different industries, in different companies. And just covering all things, IT operations and cloud operations, whatever you call it, right? AI ops, cloud ops, prod ops, dev ops, sec ops, IT ops. If it's got ops in it, I'm about it, and it's been my life.

And so having that broad remit, being able to use my knowledge to help other people grow and get better, and also partially to do some of the non-product stuff. When you work for a software vendor, a lot of the time you have to focus on the product and sales and that sort of thing. And that's fair, that's how you make money.

But I also wanted to work on some of the leadership things: the future of work, remote teaming, coaching senior leaders to build their teams and grow them even better, even remote. These are things I haven't had the opportunity to do in the past, so I'm super excited to be able to.

Greg Ahlheim: Yeah, that's great.

(11:52) Fundamentals for Crafting a Successful IT Strategy

Greg Ahlheim: I'm definitely picking up the passion for what you do.

All the ops things, right? So let's talk a little bit about IT strategy. In your experience helping companies, you're probably finding that they may be at the very beginning, or they may have tried to execute on a strategy and not gotten the results they want.

Maybe they decide they need help, or they've done something and they need to undo it and redo it, right? Kind of a reset. What would you say makes a successful strategy? What are the pillars, the core fundamentals, that make a successful IT strategy for a company?

Andi Mann: Yeah, that's a great question, Greg.

Look, I'm going to say data first. You know, you've got to start with data. If everyone just brings their opinions, then let's use mine, because why not, right? It doesn't have to be right; I just have to talk the loudest in the room. So start with data, and start with a real basis for making these decisions.

What are you going to do? Why are you going to do it? Look at examples, look at case studies, understand the impact of what you're planning. Then there's executive support. Oh, my goodness, Greg, the number of times I've gone into clients at a VP level or even an SVP level, and those leaders have been very excited to do some radical transformation things, but they're not getting the full level of support they need from a CEO and a CTO, whether it's that financial support, whether it's allocating resources, allocating strong teams to new development efforts, allocating humans to innovation, not just business as usual.

These are hard decisions, which take people away from, you know, things like customer support and other things, to do innovation. So having that executive support is super important. Having the budget is important as part of that executive support too, right? You can do a lot on the smell of an oily rag, but ultimately, if you want to drive successful business innovation, you've got to have that budgetary support, which gives you the ability to partner, to hire great experts, and to take time away from business as usual to do imagination, creativity, and innovation planning.

It's absolutely critical to have a good sense of the  market. Who are your competitors? What is the value there? What do customers really want? So all of this is strategic information which helps you build that plan. Then you can start to build your vision, your overall vision and your product vision.

You can start to map that out in horizons, you know, the old McKinsey thing, horizons one, two, three. I'm a bit of a fan of that myself. You can start to allocate resources along those horizons as well, and start to actually do the work of executing on your strategy. Which, by the way, you have written down and shared with everyone.

So everyone knows they're all on the same page. So collaboration is super important as well. These are some of the things that I look for when I go into an organization and review their strategy. You know, you always look at what you did in the past that did not work. That should be part of your strategy going forward.

You should admit to the challenges you faced, the things you did right to face them, the things you did wrong as well, and what you're going to do differently. You need to have, I think, all of that: data, decision-making authority, that executive support, budget, and resources, and a real understanding of what you're doing and why, as it relates to your customers, your competitors, and your employees.

I think that's a really strong starting place if you can get there.

Greg Ahlheim: Yeah. I think, looking back on my career, where some of the companies I've worked with have stumbled and had to regroup is around that executive alignment, right? Really setting and sharing and communicating priorities for the company, and making sure that all the downstream work is lining up to whatever those key objectives are for the organization to be successful.

I love that. I was also thinking about some of the trends that I've seen over the last couple of years. One of the things that just constantly emerges is the rate of change in and around technology. I know it's always been accelerating, but it seems like it leaped to hyperspeed in the last few years, and probably some of that could have been the global pandemic, you know, financial things changing, all of this stuff.

It must make it more difficult for companies to map out a strategy when they're really surrounded by technology that's changing, the global finance market changing, customer use cases changing. It all just seems to be happening very, very fast. How should we think about that in the context of setting a strategy?

Andi Mann: Yeah, look, Greg, that is so hard to manage, because you're right. I think there have been a couple of specific trigger points that caused an explosion. Obviously the internet was one.

The web, not the internet itself: the web and web applications, Web 2.0, the interactive mode, definitely part of the conversation. And then we moved into high-speed automation, we moved into cloud computing, of course, the ability to scale and build fast, the flexibility, the agility, not to mention the low barriers to entry for new technologists, startups doing new cool things.

You don't need to buy all your kit anymore; you just rent it for a couple of hours and see if it works, right? And then now, of course, we've got automation and AI. We can debate whether it's really AI or not, but it's absolutely revolutionary. It's changing the way we work. And again, it's this exponential inflection.

What to do about it as a business leader is really hard. You need good technology people, and you need to rely on them. You know, as a CEO or a COO, for example, or even a chief revenue officer or a chief customer success officer in a business, you need to be able to rely on your technology leaders to follow all that and let you know what's real and what's not.

You know, I talk about AI, and AI is real to some extent and in a lot of ways, but also it's still not really passing the Turing test. It's still using, you know, very biased data sets. There are issues there. You can use it. I've used AI in my own practice, certainly in my last role, and even in my current role I'm using AI, but you've got to understand what it's good for and what it's not.

What are you setting yourself up for, and what are you not? Also things like cybersecurity: what are the security implications of this new solution, these new technologies, that you're looking at? Staying up to speed on all of it, you know, as a non-technical leader, you really need to be careful about management by magazine.

I have had so many leaders who aren't specifically technology leaders come to me and say, hey Andi, what about this thing? What about blockchain? What about AI? What about quantum computing? In the past, even: what about cloud? What about virtualization? Are these going to help my business? For a COO, for a non-technical person, for a CEO, having trust in your technology advisors, on staff or not, to follow these markets and understand them is essential. One of the things that I used to do in my business when I was an employee was actually produce quarterly technology reports for my leadership.

On what's happening in our world that we should be aware of, even weekly readouts of that sort of thing in very fast-moving times. You know, right now, if I was still employed, I'd be doing weekly readouts on AI. So you've got to trust your people. You've got to have that constant communication. You've got to have technology leaders who are committed to understanding and staying on top of all these changes and putting them into a business context for you.

So you can focus on running the business, selling the stuff keeping customers happy, whatever it is you're doing. I really fundamentally believe in that partnership model.

Greg Ahlheim: Yeah, yeah, I think you're right. And, you know, when you think about the responsibility of those IT leaders to track all the things that are going on, and to communicate that in a way that, like you said before, is actionable, right? That leads to real outcomes, or well-informed decisions, maybe, which is a better way to think about it.

What are the skills that make up a great IT leader, from your perspective and in your experience, aside from being great at technology things, right? Because just as you hope your non-technology leaders will delve in a little bit, and be informed and participate, you kind of have to have the other side of that from your IT leaders, I think. Do you see that?

Andi Mann: Yeah, Greg. Look, there are a lot of things I look for, the soft skills or the non-technical skills, when I'm looking for leaders and when I'm working with leaders and helping them.

There's a lot of the human side, right? Just leadership, just being a real human. As an IT leader especially, you can't fall into the trap of the management consultants, of treating people like resources the same way as you might in a customer service environment or retail or something like that.

Almost by definition, every single person in your team is going to be pretty smart, pretty self-actuated. Technology is an interesting area; we attract a lot of people who want to work with computers and not with humans. So you need to be able to understand diversity of opinion, diversity of views, diversity of brain function, you know, neurodiversity, because you will be working with neurodiverse people. I do look for and encourage people to foster diversity, equity, and inclusion, just generally, because I believe in this inherently. But more importantly for me, I'm a business consultant: you do better business, you get better revenue, you create better products if you have more ideas.

I also look for and try to encourage inquisitiveness. You know, one of the things that I love to do is ask people questions rather than tell them. I'd much rather mentor someone by asking them to think deeply about something than tell them what I think about something.

And so inquisitiveness drives innovation. And honestly, Greg, one soft skill that I do look for is impatience. Not impatience with people, but impatience with inefficiencies, with productivity loss, with, you know, dumb ways of doing things. You know, I lived and breathed automation for so much of my career.

And a lot of that is because I'm impatient with stupid processes. I don't want to do mundane, routine stuff over and over again. I want to get a computer to do more of that work so I can do creative things, innovative things. So in my leadership mentoring and so forth, I do actually try to teach this idea that you're going to want to demand better from your system.

It could be people or process, it could be technology, but as a technology leader you should always be seeking to do better: drive more efficiency, drive more productivity, prevent problems from recurring, or from occurring in the first place. And this sort of inquisitiveness to drive productivity and efficiency is such a powerful attitude towards IT, specifically IT operations, but also all of IT.

(23:23) Cloud Governance: Proactive Strategies for Seamless Adoption and Management

Greg Ahlheim: Yeah, it is. I've seen the same in my career too. And I also think it's infectious. Once an organization figures out that you can automate some of those routine, mundane tasks, and that you can monitor or manage things proactively, that in itself is transformational, when you get everybody thinking that way. I love what you said about asking questions and getting the ideas.

And I think, you know, I've seen organizations go from the perspective of, hey, we've got some issues and we don't know what to do with them, to the entire company being aware of the issues and crowdsourcing ideas and thoughts. The people doing the work oftentimes know better than others, and they don't always seem to get a voice, because they're very busy and decisions might be being made in other places.

If we could talk a little bit about cloud things: cloud adoption, cloud management. You know, the public cloud big three are still growing exponentially, and you also still see a lot of private cloud use cases depending on where you are globally. Some places it's a little more of a public cloud thing, some places private cloud.

But I like what you said about observability, and having the ability to manage those things proactively and with a foundation to start with, not "well, we built some things, let's go build some best practices around them." How should companies think about building governance into their practice, not bolting it on after, right? What recommendations would you give?

Andi Mann: Yeah, look, that's a really good point, Greg, that a lot of people need to get on top of. You know, cloud is other people's computers; you've got to have extra governance when you're there. And notwithstanding that private cloud is your computers, it's different techniques, different accessibility, different integrations. You know, you get these atomized environments, and every time you atomize an environment, you get an exponential number of connections. Think about how we see them: architecture diagrams, cloud architectures, right?

And by the way, everyone at home can do this right now. Get a piece of paper, get a pen, and draw one dot. That's like one server. Okay, draw another dot, draw a line between them. That's communication between two servers. Now draw two more dots and connect them all. Count the lines. Exponentially more complex.

That's just four servers. Imagine if I'm running a thousand microservices. So the biggest piece of advice is: don't try to do it manually. The volume of data, of information, you know, even just log data, let alone that deep observability data coming directly out of applications, there's too much of it. Most of it's meaningless.
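The back-of-the-napkin exercise Andi walks through is the pairwise-connection count, which a few lines of Python can sketch; the function name and the sample sizes are illustrative:

```python
def pairwise_connections(n: int) -> int:
    """Number of distinct links when every one of n services can talk
    to every other: n choose 2, i.e. n * (n - 1) / 2."""
    return n * (n - 1) // 2

print(pairwise_connections(2))     # 1: two dots, one line
print(pairwise_connections(4))     # 6: the four-dot drawing
print(pairwise_connections(1000))  # 499500: a thousand microservices
```

Strictly, this growth is quadratic rather than exponential, but the operational point holds: the management surface grows far faster than the number of services.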

That's a bit of a hot take. Everyone says, "I'll collect all the data, I've got to get it all." And you don't, because you only need the bit that is meaningful. But if your application is continually sending you messages saying "I'm doing fine," you know, you don't really care that much about that until it's not doing fine.

So you've got to collect all this data, you can't not, but you can't process it manually. So: automation, data processing, observability tooling. Honestly, I'm from an IT operations background, where I hugged servers and read log files directly and so forth. And in IT ops, in cloud operations, we've got to give that up.

You know, we've got to get to where developers have been for a long time: abstraction. Developers are comfortable with abstraction. They abstract memory. They abstract processes, functions. They call functions; they don't know what's happening inside, they just know about call and response.

And they're comfortable with that, because that's normal. In IT operations, in cloud operations, we're a little bit more circumspect about that. We've got to see what we're running; we literally used to see the physical servers we were running. You've got to get away from that. So deal with abstractions: virtual machines, containers, serverless functions.

Get into these reusable capabilities; it will reduce your complexity over time. Just think of, maybe, an addressing service. If I'm selling t-shirts or widgets or whatever, I might want to use my addressing service in marketing to go and talk to my customers about what's coming. I might want to use my addressing service in delivery to send them a product they just bought.

I don't need to write that code twice. So start to look at consolidating, you know, using multifunction services. Reuse the cloud for what it's good for, scale, agility, flexibility, and really take advantage of those architectural features. But yeah, absolutely operate and manage it like it's at scale, because it is.

And use the tools, use the automation, use the observability, use the analytics to really get inside and understand what data matters and what doesn't.
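As a minimal sketch of the reuse Andi describes, one addressing capability shared by marketing and delivery rather than written twice, here is an illustrative example; all names and data are hypothetical, not from any real system:

```python
from dataclasses import dataclass

@dataclass
class Address:
    street: str
    city: str
    postcode: str

def format_address(addr: Address) -> str:
    """The single, shared addressing capability."""
    return f"{addr.street}, {addr.city} {addr.postcode}"

# Two consumers, zero duplicated address logic:
def marketing_mailer(addr: Address) -> str:
    return f"Mail promo to: {format_address(addr)}"

def delivery_label(addr: Address) -> str:
    return f"Ship order to: {format_address(addr)}"

home = Address("1 Pearl St", "Boulder", "80302")
print(marketing_mailer(home))  # Mail promo to: 1 Pearl St, Boulder 80302
print(delivery_label(home))    # Ship order to: 1 Pearl St, Boulder 80302
```

If the address format ever changes, only `format_address` changes, which is the complexity reduction he is pointing at.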

Greg Ahlheim: Yeah, I like that a lot. I've talked to a lot of business leaders who have adopted a cloud-first strategy, and I oftentimes hear them say things like, we didn't think we were going to spend as much, right?

We thought this would be cheaper; we thought this, that, the other. And if you think about it, in some instances it comes down to governance and observability. A private cloud is a fixed cost; oftentimes, if you're buying it from an MSP, you know what your monthly fee is.

And it may be fixed if it's not variable in usage. Or if you've put it in your own data center and you've got a private cloud, you've paid for that hardware and you've got licensing fees. But in a more public cloud arena, you're dealing with, well, did I allow this application to spin up more compute?

Perhaps it was misconfigured, or we forgot to turn something off after we did a proof of concept, something like that. In your experience, what have you seen as far as how companies are thinking about getting better at managing their costs?

Andi Mann: Yeah, that's a really good point. I mean, I'll tell you my own story as CTO of my last company.

I was consistently being asked to take money out. You know, my company was aiming for an acquisition offer. We were looking for a good exit. We did eventually get that offer, but as CTO, I had to take costs out of the business. And every CIO and CTO who is watching or listening will know your cloud bill is probably one of the biggest single line items in your budget.

Especially if you add up all your cloud bills, right? SaaS, PaaS, infrastructure, on-prem, public, whatever it is. So yeah, you've absolutely got to manage those costs, stay on top of it. Use data analytics to do cost management, to understand what assets are actually driving value in your business versus what are maybe not driving as much value.

Here's an exercise I did with a customer while I was at a vendor. We did data analytics where we looked at all of the development time spent in certain developer productivity tools. So we're thinking here of things like Atlassian tools, GitLab and GitHub tools, these sorts of things. And this business was a large development shop for other businesses.

So obviously profitability and productivity were super important, as was the cost of their cloud products, right? So we looked at: how much time and effort did people put into these specific products? You know, competing repositories: GitLab, Subversion, what have you. What applications were those specific developers and teams working on?

What applications drove most of the revenue for the customer? And that's a really easy conversation. This is what I mean by data-driven decisions, by the way. Rather than going to the development team and saying, hey, I think we need to rationalize our cloud tools, and so I'm going to stop paying for this one.

That CTO was able to go to the team and say, look, I know you're all using the tools you want, and they're probably all pretty good, but bottom line, we can't afford to have all these tools, and this is why, and here are the numbers, and here's why your tool is going away: because honestly, your application is not making enough money to support that tool purchase.
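The data-driven conversation Andi describes could be sketched roughly like this: join tool usage to the revenue of the applications each team works on, then compare revenue supported per license dollar. Every team name, tool, figure, and cost below is made up for illustration.

```python
# Hypothetical figures: hours each team spent in each developer tool,
# the revenue of the application each team works on, and license spend.
tool_hours = [
    ("team-a", "GitLab", 400),
    ("team-b", "Subversion", 350),
    ("team-c", "GitLab", 500),
]
app_revenue = {"team-a": 2_000_000, "team-b": 150_000, "team-c": 3_500_000}
tool_cost = {"GitLab": 90_000, "Subversion": 40_000}  # annual license spend

# Roll application revenue up to the tool that supports it.
revenue_by_tool = {}
for team, tool, hours in tool_hours:
    revenue_by_tool[tool] = revenue_by_tool.get(tool, 0) + app_revenue[team]

# The conversation with the team: revenue supported per license dollar,
# i.e. "here are the numbers, and here's why your tool is going away."
for tool, cost in tool_cost.items():
    ratio = revenue_by_tool[tool] / cost
    print(f"{tool}: ${revenue_by_tool[tool]:,} revenue / ${cost:,} cost = {ratio:.1f}x")
```

With these invented numbers, one tool supports dozens of revenue dollars per license dollar and the other only a few, which is exactly the non-confrontational, numbers-first case the CTO made.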

Little things like that. Staying on top of overruns in test and dev, obviously; automate the goodness out of those environments, right? I was with an infrastructure-as-code vendor recently, and they were talking about their ability to create ephemeral instances of new testing environments.

So exactly what you're saying: shut down those environments automatically after a certain time. Make that the default. Make people justify keeping instances up rather than forcing them to remember to bring them back down again. I personally know CIOs who have been sacked for six-figure cloud overruns, right?

No one wants to be there. Stay on top of your costs, obviously. Automate the heck out of your dev and test environments specifically, but also, if you can, your private environment. And make sure your engineers all know what the data says about the profitability of what they're doing, how they're doing it, the tools they're using, the ecosystems they're using.
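The expiry-by-default idea above can be sketched as a tiny reaper loop: every ephemeral instance carries a TTL, and anything past it is shut down automatically. The `Instance` type and the fleet are invented; in practice the shutdown line would call your cloud provider's API.

```python
import time
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    created_at: float  # epoch seconds
    ttl_seconds: int   # how long it may live; keeping it up needs justification
    running: bool = True

def reap_expired(instances, now=None):
    """Shut down any dev/test instance past its TTL: expiry is the default."""
    now = now if now is not None else time.time()
    reaped = []
    for inst in instances:
        if inst.running and now - inst.created_at > inst.ttl_seconds:
            inst.running = False  # in real life: call your cloud API here
            reaped.append(inst.name)
    return reaped

fleet = [
    Instance("poc-demo", created_at=0, ttl_seconds=3600),    # 1-hour PoC
    Instance("load-test", created_at=0, ttl_seconds=86400),  # 1-day test env
]
print(reap_expired(fleet, now=7200))  # two hours in: only the PoC expires
```

The inversion is the point: nobody has to remember to turn the proof of concept off, because staying up is what requires action.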

And by the way, Greg, the last thing: you talked about private cloud. Have the authority to make those decisions, to run things on-prem if you want to. If that's your shop, if that's what you've got, you're absolutely right: a lot of steady workloads are going to be more expensive in the cloud. Look, you get agility, you get flexibility, you get interconnectivity, you get global reach, you get a lot of other things.

Repatriation is a conversation item at the moment amongst a lot of my clients. Very few of them are thinking too seriously about it, but I'll tell you, most of my large enterprise clients have on-premises infrastructure too, and it's a choice they can make. I think it's a legitimate choice too.

Greg Ahlheim: Yeah, you know, you think, if you really want to get the most out of your IT, whether it's speed to market, cost, whatever, I love the best-venue approach. What is the best venue for this application based on the cost to operate, based on the security profile, based on our ability to manage it and support it, whatever it is?

Right. I think that's brilliant. When I talk to business leaders who say we're all in on public cloud, we're never going back to private, I don't like the we're-all-in-on-this or we're-never-this, because I feel like you're limiting your ability to get the most out of the IT, right?

Andi Mann: Exactly. You've got to make these pragmatic choices. It's something I've built a reputation around, pragmatic choices, and I annoy some people because I don't go with the stylish dogma. You know, I mean, this book is actually on building your own private cloud. I'm absolutely a cloud advocate.

But there are times when you want to do it a little bit differently. There are times when you want to use your own system. There are times when you don't even want to use a VM. Look, in my last role, I was CTO of a video company. We had physical hardware that did video encoding and decoding. It was not just standard out-of-the-box Intel or ARM.

It was dedicated chipsets and dedicated boxes that did video encoding. How do I move that to the cloud? Like, I used to be in mainframe. I'm still covering mainframe operations as of now. There are opportunities. There are companies that are doing mainframe hosting or mainframe emulation. Even Big Blue is getting into that and providing mainframe-type services.

I mean, even a mainframe cloud. But it's a platform choice. It's an architectural choice, a business decision. I have problems with people saying we're cloud only, we're open source only, whatever-it-is only. You're locking yourself into a decision that you don't know is going to be the right one. Make pragmatic decisions when you need to, and they might be: we're cloud only, for now.

But it's got to be a pragmatic decision. Cloud is just another architecture. It's just another platform. You can't say that I'm not going to use anything else. I agree with you, Greg. I think that's just the wrong decision.

Greg Ahlheim: Yeah, and you know, we were talking about the cost aspect of it. Let's maybe talk a little bit about security.

And I think your security, whether you're private cloud and you're autonomous in physical access and probably logical things, or you're in a shared-responsibility model with a public cloud provider, it feels as if companies really have to be on top of things nowadays: the rate of change, the number of exploits,

the routine sort of attacks. You don't even have to be super sophisticated anymore to get access to things that could damage a company, or take their data, whatever. When you meet with companies, where do you start in that conversation about security, best practices, governance, all of that?

Andi Mann: Yeah, I tell you, Greg, I'm a technologist. I love technology, right? I've been hands-on coding, and I built my own video game system when I was a kid; you know, a lot of technology. But where I start is policy. You've got to have the policy. Technology is only as good as the policies that it implements, right?

When it comes to cybersecurity, governance, compliance, all of these sorts of things, you've got to start with policy. Privacy policy, compliance policy, information asset use policy, access control policies. Look, it's not exciting work as a CTO, but you've got to do it, because that sets in writing the standard for everyone to follow.

And if it requires you to implement technology to help them follow it, then go for it. But bottom line, you've got to have policy, and you want policy review and enforcement as well. So I always start with policy. It's really important to get the fundamentals right, so everyone's on the same page and knows what they have to do.

From there, you can start to do some really useful things with technology. So look, obviously, I do believe in access management: you're going to have passwords, you have logons, you have restricted access to certain things in certain places, right? I do believe in observability for cybersecurity, looking in that operational mode, SecOps, security operations: understanding who's getting into systems, where, why, and how.

Is it legitimate? Is it not legitimate? And I do believe that AIOps for cybersecurity is a really important part of that: applying machine learning to understand what is known good security practice versus potentially known bad, and being able to identify those, again, with automation, because the data sets you're trying to pick these needles out of are massive. I definitely believe in automation generally.

Automation is just a win-win for everyone, right? You save money because you do things faster, better, in process, with automation instead of with humans. You free up creativity and innovation because that mundane, routine stuff that I never wanted to do is being done by the computer, which is great.

It frees up humans to think and imagine and do new things in new ways. It reduces errors because it does the right thing every time, assuming you've coded it properly. And it improves compliance because of all of those things. So I'll give you a very specific example. If I want to deploy a new server into production, I can go through all my change approvals and change approval boards and all this sort of stuff; I do think a lot of that still matters.

But I also get to do it with an automated practice if I'm using automation. If I don't use automation, I get to put a server into production which is something I think is probably known good, but might contain a vulnerability that I didn't know about. So all of a sudden, I'm opening myself up to exposure.

If I use an automation routine which is vetted and known to be good, then I just click a button and I get a server deployed in prod that's safe and secure. So I definitely believe in these things. Start with policy, absolutely. You've got to do the basic blocking and tackling: identity and access management, password controls, privileged password management, these sorts of things; separation of duties, using technologies.
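The vetted-routine idea can be sketched as a gate in the deployment path: production only accepts artifacts from a reviewed, known-good catalog, so the one-click path is also the safe path. The image names and catalog here are hypothetical.

```python
# Hypothetical vetted catalog: only automation routines / images that have
# been reviewed and approved may be deployed to production.
VETTED_IMAGES = {"web-server:1.9.2", "api-gateway:3.4.0"}

def deploy(image, environment):
    """Deploy through the automation path only if the artifact is known good."""
    if environment == "prod" and image not in VETTED_IMAGES:
        raise PermissionError(f"{image} is not a vetted image for prod")
    return f"deployed {image} to {environment}"

print(deploy("web-server:1.9.2", "prod"))       # the one-click, known-good path
try:
    deploy("web-server:0.1-snapshot", "prod")   # ad hoc deploy is refused
except PermissionError as err:
    print(err)
```

This is how people who don't think security-first still do secure things: the gate enforces the policy, so the easy path and the compliant path are the same path.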

But then also apply automation to your known good practices, so that even people who don't necessarily think security-first don't even have to. They can just do secure things, because as the leader, as a technology leader, you're putting in place technology to help people meet your policy objectives.

Greg Ahlheim: So automation is super key there. Yeah. So just to reiterate: if you're a business leader and you want to evaluate where you are, or you want to begin fresh, start with corporate policy. That's going to drive everybody in the correct direction. Then you set up sort of a defense-in-depth, layered approach at all the different layers of security.

And one of the things that I love to see, I don't know how many years ago this started, is self-healing in security, right? So you talk about automation. I love automation that deploys things and that can change things. But self-healing, I think, took that to the next level, right?

So you can see a bad thing happening, and if you see this bad thing happening with these markers, execute this, right? Stop bad things. Wonderful. I love that. So we talked about security. We talked about sort of cloud governance around cost. What are the markers of a company that's really on top of these things?

When you meet with a company and they've got a cloud adoption strategy, whatever that means, it could be variations of cloud, hybrid, multi-cloud, how do you know, hey, they've really got it together? What are the key indicators that they're doing things right?

Andi Mann: Yeah, it's all in the KPIs, and I literally track these as metrics, by the way. One of the things I look at is collaboration between teams, so cross-boundary collaboration. This is one of the most important things, and I've done research into how you drive business results, drive governance and cybersecurity benefits, drive productivity, and actually drive employee happiness as well, by the way: how well disparate teams collaborate and integrate and communicate with each other.

So there you look at things like process. You look at efficiency in release: how quickly, from a CTO perspective, I'm developing, delivering, and running software, and whether it's in the cloud or on-premises doesn't really matter. And so I'm looking for how efficient that process is. I'm looking for where, in those boundary conditions, things fall through the cracks.

So you look at value streams, right? This is a concept from the Toyota Way and manufacturing: the idea that every step along the process adds value. In manufacturing, I put a steering wheel on a car; that adds value to the manufacturing of that car. I'm going to sell a car for a lot less without a steering wheel.

If I add a feature or a function to my software product, I'm adding value. When I do QA, I'm adding value. When I release to prod, I'm adding value. So looking at these steps and making sure there are no drop-offs in productivity between those boundary lines, that's a huge indicator for me that they are connecting, collaborating, integrating.

It's a DevOps concept, and I know from my own research that it directly correlates to better outcomes for the business. So I absolutely start by looking at those boundary conditions. And I look at measuring from start to finish, and by that I mean idea to cash, right?

I've got an idea for a new product in the marketplace, blah, blah, blah, and now I'm selling it and I'm earning money, right? That's a whole cycle. If I can't measure that whole cycle, that's a red flag to me. I know at that point there are inefficiencies, there are gaps, there are people making decisions that are maybe less informed, or maybe less

appropriate for the entire business, because they're not seeing outside of their own box. So yeah, I've got some metrics that say there's goodness going on. It's mostly to do with collaboration, efficiency, and productivity. Some of those same metrics will throw up red flags for me.

Greg Ahlheim: Yeah, yeah, that's great.

Is there an easy rule that you would use to know if a KPI is worth measuring? Is it a good KPI? Is this something worth measuring?

Andi Mann: I don't know if it's an easy rule, but it's a rule: what are you going to do with this information? I have literally asked that of people who come to me and say, look at this number, look at this data point, or whatever it is.

Like, okay, great. What are you now going to do about that? And again, I like to ask the question, right? I don't turn around and say, I think that's a useless number. I just say, well, that's interesting. What are you going to do to change that? If there's no answer to that, just stop and move on to the next thing.

We've got enough stuff in our lives, in our business lives specifically; we don't need to spin wheels on numbers that don't matter. So yeah, that's the big one for me. It's just: what are you going to do about it? If there's no doing, maybe move on.

(45:09) Use Cases for Enterprise AI Adoption & AI Predictions

Greg Ahlheim: Let's switch up a little bit. You had mentioned AI earlier, and I think the point you were making was around using AI for AIOps, for operations. What's your take on what's going on with AI, and where are the use cases?

I mean, I do agree with you. I think there are some very, very good use cases. And then there are some things where there's thought or time being spent where I'm going, I don't know if that's going to get you what you're looking for. But maybe I'm wrong, I don't know.

How do you think about it? What are the real use cases, versus where it maybe still needs to bake a little bit?

Andi Mann: Yeah, I think you're spot on, actually, Greg. It's, look, I'm a bit of a skeptic and a bit of a pragmatist. And so I look at all this stuff going, yeah, golly, but what am I going to do with it?

You know, coming back to that fundamental question, right? In my world, dev and ops, there are lots of great use cases for AI, specifically the newest, generative AI: using existing language models to understand what language is. And that could be, by the way, English or German or French, but it could also be C or Python.

It could also be the language of my documentation set. And you can use that generative AI to take a lot of the mundane routine away from technology-focused staff. Now, this is really interesting to me. My first degree was a Bachelor of Arts in English, so I understand language to an extent. And I look at all the people I work with, and often they don't.

And so for a developer to write an epic, a story, a persona, that can be a bit challenging sometimes. So I get them to use Gen AI to write those. Another use case in my world of IT operations is the post-incident review. So when you have a problem, you go through the incident, you fix the problem, and then you review that incident.

What happened? Why did it happen? What did you do to fix it? What should you do next time to stop it happening again? When you're doing problem resolution, you are going 100 miles an hour. Everyone is panicked. You've got your system down. Customers can't buy. Your executive's on your case, maybe literally leaning over your shoulder.

You've got to get it fixed, get it done. Afterwards, you've got to go back and look at all your notes, chat logs, console records: what commands did you issue? What systems did you change? What happened? What worked? What didn't? To do this post-incident review, feed it all into generative AI, especially if your Gen AI has been trained on not just the English language, but also on your system's language.

So I'm not talking about code, but: what does your system look like? What is your architecture? What is a known good deployment? What is a known good performance level? These sorts of things. Gen AI is actually pretty spectacular at synthesizing. Take the mundane, routine work of a human going out and collecting all these data points and all this narrative, putting it all together and presenting it to the team, and get that all done by a computer, because it's all there.
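The workflow Andi describes is mostly assembly: gather the incident artifacts into one prompt and hand it to whatever generative model you use. A minimal sketch, where `summarize` stands in for a real LLM client and all the incident details are invented:

```python
def build_postincident_prompt(chat_log, commands, changes):
    """Assemble the raw incident record into one prompt for a Gen AI model.

    The model call itself is out of scope here; `summarize` in the usage
    below is a placeholder for whatever LLM client you actually use.
    """
    return (
        "Write a post-incident review: what happened, why, how it was "
        "fixed, and how to prevent it next time.\n\n"
        f"Chat log:\n{chat_log}\n\n"
        f"Commands issued:\n{commands}\n\n"
        f"Systems changed:\n{changes}\n"
    )

prompt = build_postincident_prompt(
    chat_log="03:12 ops: checkout latency spiking\n03:15 ops: db pool exhausted",
    commands="kubectl rollout undo deploy/checkout",
    changes="checkout service rolled back to v41",
)
# review = summarize(prompt)  # hand off to your Gen AI of choice
print(prompt.splitlines()[0])
```

The value is in the drudgery avoided: a human no longer has to re-read every chat log and console record to reconstruct the timeline; the model synthesizes it from material that is all already there.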

So there are really good use cases. But you know what? It's also a case of Amara's Law. Amara's Law, if I can quote it properly, is: we overestimate the impact of technology in the short term, and we underestimate the impact of technology in the long term. Right now, we're overestimating the impact of AI in the short term.

There's so far to go. You know, I look at the suggestions on Netflix for my queue, the suggestions on Google for my news reading. Today I got a news article from Asheville, North Carolina, because yesterday I looked up burritos in Boulder, and so there's this new burrito joint in Asheville. I don't know, is this actually useful?

So look, I think there are magnificent use cases for AI. I use AI even now. I don't get it to write my stuff; I get it to turn my bullet points into prose. It's a routine, mundane thing that it can do easily because I've trained it on my style. It's still my words, it's still my ideas. What we are not going to be able to do is just set AI free to go and run our businesses, to find our customers, to build new products, to create marketing messages, even, as we've seen, to do some imagery, with five or seven fingers and three legs and stuff. AI is getting really good, with some amazing use cases, and it will be revolutionary like the internet was, like automation was, like cloud was.

Greg Ahlheim: Yeah, I think I agree with you that the promise is definitely there, right? Although the power to run everything that everybody wants may not be there yet. I'm really curious about your thoughts on how AI could transform operations in the coming years.

If you think about being able to turn AI loose on your CMDB, so it knows all the assets and the connectivity of those assets, and then being able to see, at an organization level, small changes in what's happening with the infrastructure and how it's being used, and what you can infer from that: what's your best guess? I know no one has a crystal ball, but what would you expect to see?

What do you think we're going to see there?

Andi Mann: I'm actually seeing it already, whatever you want to call it: AI, ML, a series of complex if-then statements. I'm already seeing this in operations. Predictive analytics is already a thing in IT operations: looking at known good patterns of performance and availability and uptime and so forth, and then using that to predict what you should be doing in terms of capacity growth, but also looking at known bad processes and bad situations. You go back and you train your machine learning algorithms to understand what happened leading up to that problem.

And is that consistent? Every time we have that problem, do the same things happen in the lead-up? So now my machine can learn what the problem is before it happens. With one customer, one large financial organization, I remember this process gave them nine seconds of warning before the problem was going to happen, and we think, you know, that's nothing, right?

Nine seconds. What do you do in nine seconds? I'll tell you what: automation in nine seconds can fix the problem. Humans can't. Automation can fix that problem in nine seconds. And it's not just that you fixed the problem; you didn't get the outage. You didn't make the call-out. You didn't get 15 people on a conference call.

You didn't shut down your t-shirt ordering business, right? So AI and AIOps are already playing in these areas of predictive analytics and so forth, for capacity planning, but also for problem remediation and problem prevention. I love what you're talking about: the idea of making your assets dynamic, in a cloud computing environment especially.
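A toy version of that predict-then-remediate pattern: a precursor event sequence is learned offline from past incidents, and when it reappears in the live stream, an automation routine acts inside the warning window, where a human never could. Every event name and the remediation here are invented.

```python
# Learned offline from past incidents: the event signature that has
# consistently appeared in the lead-up to this outage.
PRECURSOR = ["queue_depth_high", "gc_pause_long", "conn_pool_low"]

def predicts_outage(recent_events):
    """True if the learned precursor sequence appears, in order, in the stream."""
    it = iter(recent_events)
    return all(step in it for step in PRECURSOR)  # ordered subsequence match

def remediate():
    # Humans can't act in a nine-second window; an automation routine can.
    return "restarted connection pool, drained queue"

stream = ["deploy_ok", "queue_depth_high", "gc_pause_long", "conn_pool_low"]
if predicts_outage(stream):
    print(remediate())
```

Real AIOps tooling replaces the hard-coded precursor list with a trained model, but the payoff is the same: the fix fires before the outage, so there's no call-out and no 15-person bridge.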

The idea of a CMDB just rankles with me. A CMS, a configuration management system, is a little bit better: it's federated, there's more opportunity to have variation. But every single CMS or CMDB I've ever seen or worked with is always out of date. Literally, always. It doesn't matter how good your discovery is; for the most part, if you're not constantly discovering, continuously discovering, it's out of date.

Use AI to understand what a good configuration looks like, what your entire architecture looks like, what a good architecture looks like in a busy period versus in a slow period, and then use that to drive automation to right-size your configurations and provisioning. Keep your cloud costs down by dynamically working from a known set of knowledge: known good process, known good architecture, known good performance rates, known good security detection and prevention.

And then you can apply that. So yeah, look, AI is already making a pretty big impact in IT operations. I want to see a lot more, because I do want to see that intelligence coming out of a system to be able to make those decisions proactively, but pragmatically.
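That rightsizing loop could be sketched like this: compare each service's observed utilization to a known-good profile and compute a resize, shrinking in quiet periods and growing in busy ones. The service names, vCPU counts, and target utilizations are all made up.

```python
# Known-good profile per service: (current vCPUs, target utilization),
# learned from observing busy and slow periods.
known_good = {"web": (4, 0.60), "batch": (8, 0.55)}

def rightsize(name, observed_util):
    """Suggest a vCPU count that moves observed utilization toward target."""
    vcpus, target = known_good[name]
    # scale capacity proportionally so utilization lands near the target
    return max(1, round(vcpus * observed_util / target))

print(rightsize("web", 0.15))    # quiet period: shrink the web tier
print(rightsize("batch", 0.90))  # busy period: grow the batch tier
```

Driving this continuously from live discovery, rather than a stale CMDB snapshot, is what keeps the cloud bill tracking actual demand.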

Greg Ahlheim: Yeah. Yeah. Again, the promise is there. I guess we'll see where it lands.

Andi Mann: We're also seeing a lot of the vendors talking about using AI to help people do their job. So, you know, observability vendors who are training AI on their documentation, so that when I jump in and there's a problem, I go, what's this problem and how do I fix it? And the AI will go, well, I've seen this problem three times before.

Two times before, we fixed it this way. Here's some documentation of our product on what commands you should issue. Should I issue them for you now? This is going to be super positive for people and for our systems.

(54:05) AI: The Next Industrial Revolution?

Greg Ahlheim: It's going to do a lot for site reliability and availability for companies who use those systems to run their business or, you know, deliver critical infrastructure.

Here's a more philosophical question: what can AI do to make lives better for people outside of IT operations? Do you have a sense of what it's going to do for us in general?

Andi Mann: Yeah. I mean, outside of IT, certainly there are also benefits in development, in compliance, in security and cybersecurity teams, and so forth.

Outside of IT specifically, though, is not my special subject; IT is definitely my special subject. But in the mode of your question about crystal balls: look, I see it as an industrial revolution again. It's an industrial revolution just like the Industrial Revolution, of course, but I feel like the internet was a bit of the same, cloud was a bit of the same.

This is going to be along that same vector, I believe, and the same magnitude as well. It'll fundamentally change a lot of things. It'll help a lot of people. You know, I see it being applied to creative industries, and I find problems with that, because humans are the creators; we're great at creativity. I would rather see AI be used to get rid of the mundane, the routine, the boring, the automatable.

I do think that it's going to help us with cross-cultural communication and collaboration: the idea that I can use an AI to understand someone in Japan or Tajikistan or Brazil, and I can talk to them in their language and my language, and they can understand that, because AI has this ability to understand human communication to a large degree. I can just talk English to them, and they can talk Portuguese or Tajik or Japanese to me.

That sort of thing is really cool. I worked with that on video in my last role. I do believe that AI will put a lot of people out of work. You know, the automobile did as well, right, with manufacturers, as we talked about. It's going to do that. We're going to have to reskill people. I think it's going to take a lot of political will.

I'm not going to get into politics or economics too much, but if we free up lots of people by using computers, what are those people going to do then? How are they going to get paid? How are they going to buy the goods and services from the companies that are putting in AI and getting rid of workers?

There are going to have to be some significant cultural and political and economic changes, I think, because this will upset a lot of industries; a lot of individual roles and jobs will go away. But new jobs will come as well. We're already seeing prompt engineer as a role, right? Someone who knows how to ask the right questions of the artificial intelligence.

Oh my goodness, it's Douglas Adams all over again, The Hitchhiker's Guide to the Galaxy, right? What is the question to the answer of 42, life, the universe, and everything? So new jobs are going to come about, right? We're not going to have telephone sanitizers, but we are going to have prompt engineers, and data science is going to boom.

There are going to be a lot of great jobs for people, but we do have to deal with the idea that we're going to get rid of a lot of jobs as well, specifically working-class, unskilled jobs. And we have to figure out how we keep those people in our society. And that's a whole different question, which I do not have the qualifications to answer.

Greg Ahlheim: Yeah, you know, folks of my age, and perhaps yours, can remember when the internet wasn't a thing, and then it became a thing, and we had the same conversations. Some people said, oh, no one's going to conduct business over the internet, or it's not going to transform the economy. And here we are.

And we adjusted to that, right? The culture shifted, jobs changed, some jobs went away, they were replaced with other things. And I expect that will probably happen, just like you said. And,

Andi Mann: And, I mean, do you remember when they said never put your credit card on the internet? Yes. Never put your credit card on the web, right?

Yes. That's gone now, right? We will change our attitudes to this as well. But you're right, it's something we've got to pay attention to. Yeah.

Outro - (58:18) Advice and Takeaways for CTOs

Greg Ahlheim: Well, listen, I know we've taken a lot of your time, and I really appreciate it. My last question for you: when you get the chance to talk to a leader, a business leader, if you could tell them one or two things that are really important about thinking about their IT strategy for success, what are those things?

Do you have any words of wisdom for those leaders who might be listening?

Andi Mann: Oh, I don't know how wise I am, mate. Look, one thing I would say: set your people up for success with guardrails, but not roadblocks. Give them guardrails. Don't let them go off the freeway, but let them go fast on the freeway.

Give them brakes so that if they go too fast, they can slow down. So what I'm talking about here is things like: give them resources, give them tools. Make sure the tools monitor what they do. Monitor the metrics that matter, interdict when you need to, but don't go looking for activity metrics or vanity metrics just to, you know, catch people short. Set your people up for success.

Innovation is a contact sport. Collaboration is a contact sport. You know, I'm a very big fan of remote work, work from home; I think it is the future of work. By the same token, individual humans matter. As a technology leader, understand the humanity of what we do. Tech is great and I love it, but it doesn't exist without the humans creating it and running it.

So, understand the human aspects of plan, build, and run. Connect teams together. I cannot emphasize this enough for C-level leaders in technology especially. You're in control of your org chart. Make it work, make people work together. At my last job, I crushed three teams into one.

Well, maybe not that word, crushed. I brought three teams together because I wanted them to collaborate and communicate. We had separate teams for off-site and on-prem, right? For cloud versus on-prem operations and development. It's like, no, we're a hybrid business. Everyone needs to understand the trade-offs we're making, whether we go to cloud or stay on-prem, right?

So I wanted to bring all those teams together. So collaboration, innovation, they are absolutely contact sports. Flow: understand and honor flow in your world. You know, this is the idea you get in sports, you get in writing. Basketball players talk about it; Michael Jordan talked about, oh, the bucket is big, right?

Getting in the zone is what sports players talk about. I get it riding a bicycle, sometimes I get it skiing, sometimes I get it writing, and developers get it writing code. So this is the idea that you get into a hyper-focused state when you're doing creative work, and it takes a while to get into it.

So if you interrupt one of your team, and it's so bad, managers do this all the time. Management by walking around, I get it. But if you walk up to a developer and say, hey, can you just answer this question for me quickly, you've just wasted an hour of their creative time. Statistics show it takes about 20 to 30 minutes to get into flow, and then you're disrupted for another 20 or 30 minutes when you're taken out of flow.

Productivity, if you care about productivity, you've got to understand flow and honor it for your team. I mean, like I said before, diversity, look at your diversity for a business reason. It'll help you make money. It'll help you come up with better ideas. Give your people time and space to be creative.

If you want to drive innovation, and don't get me wrong, some C-level people don't want to drive innovation. They're in boring industries where that's not important. Honestly, those are just not my customers. If you want to drive innovation, you've got to support that. As I said before, you've got to have executive support.

You've got to give time, resources, and people. You've got to free up that time to be Google time, like one day a week. By the way, Google time doesn't exist anymore, but give time. Protect that time to let people innovate. And the last thing: to a degree, you gotta tolerate failure. Failure is the story of innovation.

Innovation is the story of failure. 95 percent of startups fail. But it's within reason, right? Fail fast, fail small, fail cheap, and fail forward, and I will support you every time. It means you tried something new, maybe it didn't work, it didn't cost a lot, didn't take a lot of time, didn't mess up badly, and you learned from it.

So tolerate failure, otherwise you'll never encourage innovation. So those are some of the things that I would always encourage in senior leadership in a technology world.

Greg Ahlheim: Yeah, that's great advice. There were a lot of really good nuggets in there. That whole thing you said about giving them guardrails, not roadblocks.

That needs to go on a t-shirt, Andi, right? Hey, listen, thank you so much for joining us today. Really appreciate it; it was a delight to talk with you. And where can folks find you? I'm assuming you're on Twitter and other things, right?

Andi Mann: Yeah, Greg, I'm on all the things.

So you can always find Sageable.com, and Sageable on various media. On Twitter and Mastodon I'm at Andi Mann. Since the diaspora I'm on all social media: X/Twitter, Mastodon, Bluesky, LinkedIn, and, of course, Threads. Find me anywhere. Look, you can even find my phone number online.

Greg Ahlheim: Listen, thank you. Really appreciate it, Andi. And we'd love to have you back again if you're open to it at some time in the future.

Andi Mann: Absolutely. It was so much fun, and really interesting to share my ideas. I really enjoyed talking. Great.