Episode 7: Geek Out – Part 1. Mike D’Amato - Shifting Culture and Software Stacks to Avoid Vendor Lock-In
Welcome to episode seven of the Geek Out Podcast. Pete Tseronis, our host, discusses shifting culture and software stacks to avoid vendor lock-in with Mike D’Amato.
Episode Transcript
Pete Tseronis
Hey, this is Pete Tseronis with Dots and Bridges, and I'm very excited today to facilitate another episode of Rancher Government’s Geek Out with Mike D’Amato from Rancher Government. Great to see you, Mike.
Mike D’Amato
Yes, thanks for having me.
Pete Tseronis
I'm fired up, man. And a little bit about you, if I can brag: seasoned DevOps engineer, specializing not only in containers and Kubernetes but in infrastructure and the software stack we'll get into. You're a wonderful storyteller, and this is a passion of yours, so I'd like you to give a little more background about yourself, your passion, and what brought you to Rancher.
Mike D’Amato
So originally I was a Java developer, and the software we were developing was learning management systems, but we had a whole branch of that software code that was just for automated testing. For some reason I honed in on getting those tests to run faster and scale more, and I was trying to find interesting ways of doing it. This was probably 10 or 15 years ago, when Kubernetes was just brand new. I discovered that technology and thought it could really help me build this testing infrastructure out very wide, and it turns out that was also where the entire industry was moving. So I started off there, then learned a lot about OpenShift and got a few contracts there. Eventually I started having some issues with using OpenShift, so I looked at other technologies, discovered Rancher, and got very interested in it. I implemented Rancher on a couple of contracts, and then I sought out a position here because I thought the technology was really cool and I wanted to do more with it. So I've been at Rancher Government for about four years now.
Pete Tseronis
Fantastic. And you're hitting on your journey. As a former federal chief technology officer, I think of when I would meet with the Mike D’Amatos of the world about what's next, where the puck is going. In government, we're at a point where it seems like we're in a continuous modernization, transformational state. The public sector, the federal government, even state and local, we invest in technologies and solutions like Rancher, and not just the products, but the translation of what this means. The capabilities that map to the mission are more than critical, especially now with so much money flowing through the Infrastructure, Inflation Reduction, and CHIPS and Science Acts. So many federal agencies will benefit from understanding some of the terms you used. And as we jump into that, I like to not play what I call buzzword bingo, but Mike, you're such a great storyteller: Kubernetes, containers, DevOps, people hear the term stack, and today we're going to talk about a specific situation. The federal government is in a constant state of transformation. Executive orders come out, guidance documents come out, industry acquisitions, if you will, occur. Let's take the recent Broadcom acquisition of VMware. It can create uncertainty in the marketplace or in the federal government: I had a customer, I had a product, and now there's been an acquisition, and so many of these acquisitions occur. But specifically with Broadcom, in this space of Kubernetes and containers and VMware and its users, can you speak to that acquisition and maybe dispel some of that uncertainty?
|
Mike D’Amato
Yes, so the Broadcom acquisition has definitely caused some panic. The changes they made to their licensing structure are encouraging vendor lock-in, raising support concerns, increasing costs, things like that. And I'd like to make the argument that the people who are victims of these changes are kind of overdue for analyzing their infrastructure anyway. I think this is a perfect opportunity to take a step back and look at their infrastructure, the way they're doing things, even the culture itself, and look at some different, newer technologies and modernize in that way. That's why it's a big piece of this conversation: it's causing the industry as a whole to introspect and consider alternatives.
Pete Tseronis
So let's riff off of that for a minute. When I think of something at the most basic level, whether that's the software stack or development, we use the term DevOps, and now DevSecOps is out there. Mike, maybe help with some examples of how folks should think about that, because whether it's Kubernetes, which we all know helps manage containerized workloads, or containers themselves, which are a transformational way to develop applications, this continuous integration, continuous delivery, and continuous monitoring is the thing the federal government is very focused on. And again, when you have this type of merger, if you will, capabilities and terms, from the customer standpoint, shouldn't be lost. So let's break some of that down and discuss where there is some risk, but also opportunity to look at some other solutions that are in the marketplace.
|
Mike D’Amato
Yes, yes, definitely. I mean, I feel like everyone ultimately wants the same stuff. They want to reduce their costs, they want to increase their security, they want more functionality. And when you're talking DevSecOps and DevOps, it's all about agility and speed and automation, and that's hard when you're stuck in your existing expertise. So I always compare it this way: VMware comes from the bottom up. They're very good at doing VMs and things like that. People who have been using VMware for years are very good at doing those things, and they're stuck in their ways of managing things at that level, whereas we come from the top down. So transitioning from that level up to the higher levels is a bit of a transformation. But that's basically what we do: it's all DevSecOps, all those principles about making things more efficient and more secure, the things we spoke about.
Pete Tseronis
So Mike, when the federal government or any customer of yours is thinking about a migration, a transformation of its infrastructure, something like going from a virtual machine environment to a containerized or Kubernetes environment, is it really hard? Does it need to be hard, or is it easier than people think?
|
Mike D’Amato
It depends on a lot of factors. Sometimes it's very easy; it's very dependent on how the developers designed the applications. Sometimes I can go into a place and just take an application, rebuild it in a container, and deploy it, no big deal. Other times it's a very hefty migration pattern. So I would have to say that generally it's not easy. I would say generally it's very, very difficult, and not only on the software and technology side, but also in the way of thinking. We were just talking about DevSecOps and DevOps and the mentality around that, and unfortunately that's hard to completely grasp at first. It takes a couple of years for people to fully understand why it's even called that, and I think that's the hardest part. You have these VMs, and we're starting to say that virtual machines are essentially almost a legacy technology now. You still need those things, but at a very, very core level. Another example of that is the operating system. Even at that level, people are very attached to their operating systems; they know what they know and they don't want to deviate from that. But nowadays it's getting to the point where operating systems are not really all that interesting, and a general purpose operating system is almost not that useful anymore. So we're trying to say, let's stop using an operating system as a general purpose playground. Instead, it's going to be an immutable operating system whose sole purpose is to run containers. And even that idea is difficult for some people.
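For the "no big deal" case Mike describes, once an application has been rebuilt as a container image, deploying it to a Kubernetes cluster can be a few lines of code. Here is a minimal sketch using the official `kubernetes` Python client; the image name, port, and namespace are placeholders, not anything from a specific Rancher environment.

```python
from kubernetes import client, config

def deploy_containerized_app():
    # Load credentials from the local kubeconfig (the same file kubectl uses).
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Minimal Deployment: one replica of a containerized application.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="demo-app",
                            image="registry.example.com/demo-app:1.0",  # placeholder image
                            ports=[client.V1ContainerPort(container_port=8080)],
                        )
                    ]
                ),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_containerized_app()
```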
Pete Tseronis
How is this conversation we're having similar to one you may have with a customer in government, though, when you may have a senior executive or the director of such-and-such support services? I mean, we all know there are contractors and many tools in an agency. How would you characterize that kind of technical conversation? We're on here geeking out a bit, but there's also a translation that must occur. What should the federal customer be hearing in terms of, it may be hard, but it needs to happen? We hear about DevOps and then the outgrowth of DevSecOps, and we all know that's the integration of security, building it in. Are those discussions challenging, with Feds trying to understand that the migration is necessary and it's not simple?
|
Mike D’Amato
It doesn't always go that well, because of the layers we're dealing with. When you're talking about a very easy layer, if I'm just making a change to an application, like adding an environment variable or something stupid, that's easy for us to understand. But when we start talking about different operating systems and different hypervisors and changing the way we provision things as a whole, now we're talking multiple layers of approvals that you have to go through. And I feel like that's actually the biggest constraint I ever have. I'll go to a customer site and be like, all right, let's run this Terraform code and spin up a hundred VMs in their infrastructure. And they can't, because we have to get a hundred IP approvals first, which takes three to six business days or something. So at the very core, we're not prepared for this shift. That's really, I'd say, the hardest part: not even the technology itself. There is a learning curve, of course, but it's the current processes we have to go through, the red tape we have to go through, to actually do some of these things that we are now calling best practices.
Pete Tseronis
So it's the workforce development component, or just the red tape of introducing a new solution into a customer environment. I get it, being a former federal employee, that you want things done, but there are a lot of T's and I's that need to be crossed and dotted. Well, let's pivot and geek out a little bit. The migration from a traditional VM environment to one that leverages Kubernetes and containers: talk about what makes it so special. What impact does that have? Does it make the workforce more efficient? Does it allow for more capability? Can you distinguish between the two and that migration?
|
Mike D’Amato
Yes, so I always use the same analogy with this whole thing. You're building layers on layers of things and you're abstracting different things away. When VMs were introduced, you could suddenly carve up a machine into smaller pieces, and that can be advantageous. You can isolate things for security purposes, you can standardize the volumes across all these things, and you can do backups and snapshots. There are a lot of cool things you get from VMs. And then Kubernetes is yet another layer of abstraction. Now I can move things around from VM to VM, let's say, without even realizing it. I'm saying I want to run this thing, but I don't know exactly where it's going to run. I know it's somewhere in my data center, but I'm not really sure exactly where. At a very basic level, that's what we're doing here: creating yet another layer of abstraction, and that standardizes the entry point for people who want to build on it. An analogy would be: everybody knows AWS, and we all say we're going to go all in on AWS, put our VMs there, and run all our cloud stuff there. I'm essentially proposing that instead of saying that, we say we're doing that on a Kubernetes layer, and as long as you write your applications to work on Kubernetes, they'll work wherever Kubernetes is running. That's the new baseline.
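To make the "works wherever Kubernetes is running" point concrete, here is a small sketch that applies the same Deployment definition to two different clusters simply by switching kubeconfig contexts. The context names (`on-prem-rke2`, `cloud-cluster`) and the image are illustrative placeholders, not names from the conversation.

```python
from kubernetes import client, config

# One application definition, expressed purely in Kubernetes terms.
# Nothing in it refers to a specific hypervisor, cloud, or data center.
APP = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="portable-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "portable-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "portable-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="portable-app",
                        image="registry.example.com/portable-app:1.0",  # placeholder
                    )
                ]
            ),
        ),
    ),
)

# Hypothetical kubeconfig contexts: one on-prem cluster, one cloud cluster.
for context in ["on-prem-rke2", "cloud-cluster"]:
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace="default", body=APP)
    print(f"Deployed portable-app to cluster context '{context}'")
```

The same object goes to both clusters unchanged; the only thing that varies is which cluster the client points at.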
Pete Tseronis
Now, do you find that with Kubernetes adoption, and we talked a little about some DevSecOps best practices when we were preparing for this, you and I, there is an opportunity to benefit, if you will, from new capability using Kubernetes and containers? Is the security question something that comes up that often, about cyber, about not knowing where data may reside or whether it resides in a certain place? How significant is that part of the conversation, or those anxiety levels, when you talk about cyber or security?
|
Mike D’Amato
All of the above is a problem. Usually there are lots of different technologies that play together. I tend to focus on the core first and then we go up from there. So when you're talking service meshes or runtime security, things like that usually come later. But every piece of this, I always joke, is another job. Logging and monitoring, for example: you could build an entire career off of just logging and monitoring. So yes, to answer your question, it's very difficult to shift from more traditional types of security, where we're creating network rules and routing tables and things, to now trusting a technology like NeuVector, where we're creating network policies in Kubernetes format, or YAML format. Again, another learning curve that you kind of have to get through.
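NeuVector defines its rules through its own custom resources, but the general shift Mike describes, from router and firewall configuration to declarative policy objects stored in the cluster, can be illustrated with a plain Kubernetes NetworkPolicy. The sketch below, using the official Python client, only allows traffic into pods labeled `app=backend` from pods labeled `app=frontend`; the labels and namespace are made up for the example.

```python
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

# Declarative equivalent of a firewall rule: only 'frontend' pods may reach
# 'backend' pods in the 'demo' namespace; other ingress traffic is denied.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-backend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "frontend"}
                        )
                    )
                ]
            )
        ],
    ),
)

networking.create_namespaced_network_policy(namespace="demo", body=policy)
```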
Pete Tseronis
I've heard two terms today: Harvester and NeuVector. Let's unpack those a little bit. The benefit of, let's take Harvester, break that down for us.
|
Mike D’Amato
Harvester is a hypervisor, you could say, but it's built specifically for Kubernetes. It does support legacy types of VMs too, so you could run any kind of VM on Harvester, but I would say its primary shining point, whatever word you want to use for that, is that it can run VMs that ultimately will run Kubernetes, so it really integrates that layer. Harvester is actually a combination of flagship tools that we all love, so Harvester by itself doesn't really mean much. You have SLE Micro, which is an operating system that's immutable and, again, built specifically for containers. Inside of that we have RKE2, which is our Kubernetes engine, and then it runs some software of its own that makes it what Harvester is. So when you go to the Harvester UI, you see the word Harvester and so on; there are some applications there, and it has load balancing, networking, and storage features. It uses Longhorn as its storage backend, so you can distribute your storage across VMs or across your Kubernetes clusters.
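Harvester builds on KubeVirt, so a VM in Harvester is ultimately represented as a Kubernetes custom resource. The sketch below creates a minimal KubeVirt-style VirtualMachine object with the Python client's CustomObjectsApi; the names, sizes, and the container-disk image are illustrative only, and real Harvester VMs usually reference images managed through Harvester's own UI and CRDs.

```python
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()

# A minimal KubeVirt-style VirtualMachine definition. Field values are
# illustrative; exact fields vary by Harvester/KubeVirt version.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "default"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "cpu": {"cores": 2},
                    "resources": {"requests": {"memory": "4Gi"}},
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"},
                    }
                ],
            }
        },
    },
}

crds.create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachines",
    body=vm,
)
```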
Pete Tseronis
Is there a heavy training and learning curve with some of this, or is it adaptable if you're currently supporting a legacy environment? Because with legacy, I think people hear terms like lift and shift, or overnight we're going to convert over. What is that transition like if you're the customer? What's the full disclosure? It's not simple, is it?
|
Mike D’Amato
No. So I've done it a couple of times now. Usually it's a side-by-side kind of migration pattern. You have your data center and you have your workloads running, and we'll have to double up a little bit. Usually this becomes a conversation of, should we buy more hardware, or something like that. Usually I'm lucky enough that I can get rid of some of the older hardware people have. Say you had ten servers; I can ask, could you get by on eight? Then we free up two of them, put Harvester on those two, and slowly migrate things from the old system to the new system, scaling up Harvester as it takes over. So that's usually how it goes. We'll have Harvester up at the same time as your VMware, and we'll look at the workloads that are probably the easiest to move and move those over first. Then we'll analyze the other pieces one by one, move what we can move, scale up as we hit capacity for each thing, and eventually the idea is that you completely migrate over.
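The "easiest workloads first" ordering Mike describes can be sketched as a simple planning exercise. The toy Python below scores workloads by migration difficulty and groups them into waves sized to the capacity freed up for Harvester; every name, score, and number here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    difficulty: int      # 1 = trivially containerized, 5 = heavy refactor
    cpu_cores: int       # cores it will need on the Harvester side

def plan_migration_waves(workloads, wave_capacity_cores):
    """Group workloads into migration waves, easiest first, without
    exceeding the capacity freed up for Harvester in each wave."""
    ordered = sorted(workloads, key=lambda w: w.difficulty)
    waves, current, used = [], [], 0
    for w in ordered:
        if current and used + w.cpu_cores > wave_capacity_cores:
            waves.append(current)
            current, used = [], 0
        current.append(w)
        used += w.cpu_cores
    if current:
        waves.append(current)
    return waves

if __name__ == "__main__":
    inventory = [
        Workload("static-website", difficulty=1, cpu_cores=2),
        Workload("reporting-batch-job", difficulty=2, cpu_cores=4),
        Workload("internal-api", difficulty=3, cpu_cores=4),
        Workload("legacy-erp", difficulty=5, cpu_cores=8),
    ]
    for i, wave in enumerate(plan_migration_waves(inventory, wave_capacity_cores=8), 1):
        print(f"Wave {i}: {[w.name for w in wave]}")
```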
Pete Tseronis
The migration, the cutover. I liked your point about continuously examining each step of the way. It's not all at once, and that's pretty common in these types of migrations. Let's make sure the customer's happy too, because some of that legacy environment is still going to be comfortable to them.
|
Mike D’Amato
There are still things you need to take into consideration. You can't just pull the plug on some applications. In some cases downtime can be catastrophic, and sometimes it costs a lot of money to have downtime, so we need to be cognizant of when we can do those things. Because of that, there's sometimes a little bit of a dance you have to do, so we try to do the smaller things first, and that way it also gives the team a bit of time to learn the technology with some less critical applications. Then, once we're very confident that everything is going well, we'll start pulling the plug on some older things. It is definitely a daunting task, and we try to take as many precautions as we can: backups, snapshots, everything we can do to prevent a potential outage. There is going to be some outage, unfortunately, when we have to move things over, lift and shift like you were saying, but we try to minimize it as much as we can by taking as many precautions as we can.
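One of the precautions Mike mentions is snapshotting storage before a cutover window. For workloads already on Kubernetes, a CSI VolumeSnapshot is one standard way to do that; the sketch below assumes the cluster has the snapshot CRDs and a snapshot-capable CSI driver (Longhorn ships one), and the snapshot class and PVC names are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()

# Take a CSI VolumeSnapshot of a PVC before a cutover window.
# Class and PVC names below are placeholders for this example.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "pre-cutover-db-snapshot", "namespace": "default"},
    "spec": {
        "volumeSnapshotClassName": "longhorn-snapshot-vsc",
        "source": {"persistentVolumeClaimName": "db-data"},
    },
}

crds.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```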
Pete Tseronis
A lot of this is, again, very consistent with the material out there. For the audience, the 2024 report on the cybersecurity posture of the United States, just published in May of 2024, goes back to the building blocks: a path toward secure and measurable software. You and I talked a lot about that leading up to this, back in February. It speaks to embracing the migration and transformation, because at the end of the day it's about the protection of those assets, and with so much role-based access, so many connected digital devices, and the leveraging of the cloud, I'm starting to really get a sense that it sounds simple, but it sure as heck isn't. And working with folks like yourselves at Rancher, in addition to the products you have, can help make that transition seamless and efficient. Appreciate that. Let me ask you, you mentioned NeuVector and RKE2. Can you distinguish those? Because again, products aren't just bolted on; they all work together in a complementary role.
|
Mike D’Amato
Oh, yes. Yes. I mean, you need them both. RKE2 is the Kubernetes layer. At its core, if you install RKE2 by itself, you get basically just vanilla Kubernetes, and there's nothing else there. There's no service mesh or anything like that. NeuVector introduces the idea of runtime security, container security. It will also scan your pipeline, your container images, at build time. So you build your images, you can scan them, and then publish your scan results before you run them. And if something were to change, NeuVector can detect various vulnerabilities in real time. So you would definitely want all these things together, but they're very different tools.
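The build-time scanning Mike describes usually shows up as a gate in the CI pipeline: run the scanner, parse its report, and fail the build if critical vulnerabilities appear. The sketch below assumes a hypothetical JSON report format with a list of vulnerabilities and severities; NeuVector's actual scanner invocation and output schema differ by version, so treat this purely as the shape of the idea.

```python
import json
import sys

# Hypothetical CI gate: read a scanner's JSON report and fail the pipeline
# if vulnerability counts exceed the allowed limits. The report format here
# is invented for illustration; real scanners each have their own schema.
MAX_ALLOWED = {"Critical": 0, "High": 5}

def gate(report_path: str) -> int:
    with open(report_path) as f:
        report = json.load(f)

    counts = {}
    for vuln in report.get("vulnerabilities", []):
        sev = vuln.get("severity", "Unknown")
        counts[sev] = counts.get(sev, 0) + 1

    for severity, limit in MAX_ALLOWED.items():
        found = counts.get(severity, 0)
        if found > limit:
            print(f"FAIL: {found} {severity} vulnerabilities (limit {limit})")
            return 1

    print(f"PASS: vulnerability counts {counts} within limits")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```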
Pete Tseronis
And that is part of the, I'll call it, teaching moments and opportunities in working with folks like yourselves: just understanding, as the customer, what you're buying. You may never see it or use it, but it's intended to create, as we said early on, much more efficiency and, ultimately, that resilience. Maybe we hit on that for a minute: the migration and transformation to this type of environment. Can you speak to the resilience component? What makes you feel good, if you're that customer, that all is not lost if something were to happen, compared to a prior, traditional VM software stack?
|
Mike D’Amato
So, I mean, a VMware hypervisor does have features for making VMs highly available, but it's a much heavier and more complex task. It's not like taking a container, clicking the plus button in the Rancher UI, making it more than one replica, and then scheduling it across the whole cluster. It's a very complicated process, essentially making an entire operating system highly available, versus just your one little application running multiple times. So these are very different concepts, I think.
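The "plus button" Mike mentions is the Rancher UI's way of bumping a Deployment's replica count, and the same operation can be done programmatically. Here is a minimal sketch with the official Python client, assuming a Deployment named `demo-app` in the `default` namespace (placeholder names):

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Equivalent of clicking "+" in the UI: raise the replica count and let the
# scheduler spread the copies across whatever nodes are available.
apps.patch_namespaced_deployment(
    name="demo-app",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```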
Pete Tseronis
Well, to riff off that for a moment, and I think it's important because you're teaching me and I'm learning here: a traditional virtual machine, a VM, versus containers, key differences. I'm going to list a few that I was writing down in my own research, and you tell me which ones are really significant, if not all of them: resource usage, portability, isolation, performance, and scalability. Not to say one is better, but would you say those are five that resonate with you, or are there others that customers should think about?
|
Mike D’Amato
It's hard to name all the benefits of a container-based application versus the other. I mean, even dumb stuff like portability: I can take my container, run it here, and then send it to you, and it runs exactly the same. Was that one of your five?

Pete Tseronis
Yes.

Mike D’Amato
Yes, I mean, they're all true, definitely. Everything's about being faster and cheaper, and an entire operating system is a heavy thing to carry around. A container essentially just uses the host operating system for its kernel, so you're not actually moving that much; you're just moving the application itself around. Even dumb things like electricity get cheaper, because it's much cheaper for me to kick off a container, especially on an operating system made specifically to run containers, than it is to kick off a VM. And that's really a big part of this whole thing: I want to get away from heavily relying on the VMs, and create an abstraction layer where the VM is really just there to facilitate Kubernetes spreading your workloads around for you, rather than you doing that yourself.