M.K. Palmore, Founder and CEO of Apogee Global on DIY Cyber Guy

Episode #91: When AI Becomes the Cybercriminal (And Not Just the Criminal’s Tool)

About M.K. Palmore

Malcolm K. Palmore is the Founder and CEO of Apogee Global RMS and the Apogee Speakers Bureau, where he helps organizations transform security into a strategic advantage. M.K. is a nationally respected cybersecurity executive with over 30 years of leadership experience across the Marines, the FBI, and Google Cloud. He specializes in scaling AI responsibly, defending against cyber threats, and building tech leadership. M.K. also leads efforts to advance diverse talent in cybersecurity as President of Cyversity, and he offers insightful commentary on AI and cybersecurity trends. With frequent keynote appearances at RSAC, Black Hat, HITRUST, and UC Berkeley CLTC, plus recognition in outlets like Business Insider and SiliconANGLE, he consistently brings fresh ideas to the public eye.

M.K.’s Links

M.K.’s Company: https://apogeeglobalrms.io/

M.K.’s personal website

https://mkpalmore.io/

M.K.’s LinkedIn Profile:

https://www.linkedin.com/in/mkpalmore/

Summary:

The discussion focused on the evolving landscape of cybersecurity in relation to AI technologies, highlighting both the challenges and opportunities presented by these advancements. M.K. Palmore emphasized that while AI enhances the capabilities of cybercriminals, it also equips cybersecurity professionals with improved tools for defense. He introduced the OODA loop principle, underscoring the importance of rapid decision-making and the adoption of AI tools, particularly for small to medium-sized businesses, to strengthen their security foundations.

Palmore stressed the necessity of conducting gap assessments to identify vulnerabilities, noting that many organizations neglect basic cybersecurity measures, leaving them exposed. He also highlighted the importance of communicating cybersecurity risks in business terms to engage executives effectively. The conversation concluded with David W. Schropfer expressing gratitude to M.K. for his insights and providing information on how listeners could connect with him, while also outlining the timeline for the episode’s release.

SHOW NOTES:

Episode 91: When AI Becomes the Cybercriminal (And Not Just the Criminal’s Tool)

Welcome back, everybody, to DIY Cyber Guy.

HoF: 3/5

For: Everyone online (everyone)

A fundamental, existential shift is happening in cyberspace.

The enemy is no longer just a human hacker behind a screen. Autonomous, or Agentic AI is being trained to attack. This is not some future sci-fi movie; it is happening today. These AI agents can find and exploit zero-day vulnerabilities in minutes—before you or your security team even know the hole exists.

In a recent article on the future of cybersecurity, the author put it bluntly:

“AI agents are essentially autonomous intelligent systems that can perceive their environment, reason about it, and take actions to achieve specific goals… The speed and scale at which autonomous AI agents operate are beyond human capability.”

SOURCE: Heather Adkins of Google https://www.csoonline.com/article/4069075/autonomous-ai-hacking-and-the-future-of-cybersecurity.html

If a machine can breach your systems faster than you can blink, how do you even begin to build a defense? The human model of defense is now officially obsolete.

This is why the conversation has to pivot from reacting to proactive strategy in cybersecurity.

Here with me to discuss this today is MK Palmore, Founder and Principal Advisor at Apogee Global RMS. MK is a Global CISO, a former Marine Corps officer, and a retired FBI Special Agent who led cybersecurity branches in San Francisco. He has dedicated his 30-plus-year career to crisis management and building world-class security leadership.

Welcome MK.

Does AI give cybersecurity pros an advantage, or a disadvantage?

TRANSCRIPT

0:00 – David W. Schropfer
Welcome back everybody to DIY Cyber Guy. This is episode 91: when AI becomes the cybercriminal, and not just the criminal’s tool. So this is a hair-on-fire three out of five. This is really something that everybody who is online, which is pretty much everybody listening to this podcast, truly needs to understand. It’s the arms race that we’re in: AI is a tool for all of us who want to keep our networks safe, and for the cybercriminals who look to exploit those networks for financial gain or, unfortunately, in some cases, their own personal entertainment. So think of it this way. The enemy is no longer just the human hacker behind the screen. Autonomous, or what’s also known as agentic, AI is being trained, actually trained: just like we’re training AI to make our jobs easier and do our jobs more efficiently, the threat actors are training AI to attack. This isn’t something out of a sci-fi movie. This is happening now. It’s happening today. And these agents are finding and exploiting what we call zero-day vulnerabilities, which are essentially vulnerabilities the cybersecurity community hasn’t seen yet, and therefore hasn’t patched yet or hasn’t reacted to yet. A very interesting article came out, written by Heather Adkins from Google, that was published on csoonline.com, and I do have the link for this in the show notes. The quote is this: “AI agents are essentially autonomous intelligent systems that can perceive their environment, reason about it, and take actions to achieve specific goals. The speed and scale at which autonomous AI agents operate are beyond human capability.” So if a machine can be used to breach your computer faster than a human being possibly can, working at the speed of processing as opposed to working at the speed of a threat actor, the field has changed. The landscape has changed. And like every other cybersecurity threat, we have to react to it. So here with me to discuss all of this today is M.K. Palmore.
M.K. is the founder and principal advisor at Apogee Global RMS. He is also a global CISO, a former Marine Corps officer, and a retired FBI special agent who led cybersecurity branches in San Francisco. And he’s dedicated his 30-plus-year career to crisis management and building world-class security leadership. Welcome, M.K.

3:11 – MK Palmore
Thanks, David. Thanks for having me on. Appreciate it.

3:14 – David W. Schropfer
It’s a pleasure. Let’s dive in. Does AI give cybersecurity professionals an advantage or a disadvantage?

3:25 – MK Palmore
Great question. But like most questions, there are two sides to the answer. The reality of the situation that we are experiencing, and that is continuing to evolve, is that AI certainly gives an advantage to the attacker. As you described in the opening statement about Heather’s article, and Heather is a former colleague of mine from Google, it certainly gives the attacker the ability to move at a speed and pace that they probably didn’t have prior to the advent of agentic AI. While attackers have always had the upper hand, because they have the ability to operate without being seen, issues like speed of detection and speed of response have always been the lagging point for practitioners in terms of identifying adversarial behavior. Both sides now get an opportunity to operate in a fashion that, at the very least, I think will create a level playing field. Let me tell you what I mean by that. As you indicated in the opening, I spent a bunch of time in federal service. Some of that was in the military. There was a theory in the military that’s still continually espoused; it’s taught in business schools. It has to do with an individual’s ability to observe, orient themselves, decide, and act. It’s called the OODA loop principle. It was developed as a warfighting principle back in the ’60s, originating in aerial dogfighting. And I’ve taken that framework and aligned it to what I believe to be the principal issue that has plagued cybersecurity practitioners from the beginning of this challenge: our ability to identify, make assessments of what the environment is, make determinations as to what’s going to impact us, and then make a decision on how to act or react to those pieces of information. Speed and velocity have always been the issue, and they are different terms. Speed means moving fast.
Velocity means moving in a particular direction with particular intent to act. And I think that the advent of AI for both the attacker and the protector now gives the protector an ability to move with velocity: to assess information, make decisions, and ultimately put their environment in a better position to protect itself. So I think for the first time, practitioners really have an opportunity, especially if we lean into and leverage these new tools and this new tool-making. It gives us an ability to really position ourselves to respond in a way that’s substantive and gives the attacker some pause. It’s not going to be easy, don’t get me wrong. What Heather described in that initial statement is, I think, kind of chilling when you think about it: the fact that we now have LLM models, or agentic AI models, that essentially can reason their way through an attack. Once such a model has reasoned through and assessed what system it’s dealing with, it can go out and do a search for the available CVEs associated with that system, grab an exploit that it thinks might work on that targeted system, and detonate it as needed, because it’ll know exactly what works. Think of the speed.
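The OODA cycle MK describes can be pictured as a simple control loop over security telemetry. The sketch below is purely illustrative, not from any real security product; the event fields and the failed-login threshold are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    failed_logins: int

def observe(raw_events):
    # Observe: gather raw telemetry from the environment.
    return [Event(**e) for e in raw_events]

def orient(events, threshold=5):
    # Orient: interpret telemetry against a notion of "normal".
    return [e for e in events if e.failed_logins >= threshold]

def decide(suspicious):
    # Decide: choose a response for each anomalous source.
    return [("block", e.source_ip) for e in suspicious]

def act(decisions):
    # Act: apply the decisions. A real system would push firewall
    # rules or isolate hosts; here we simply return them.
    return decisions

def ooda_cycle(raw_events):
    return act(decide(orient(observe(raw_events))))

actions = ooda_cycle([
    {"source_ip": "203.0.113.9", "failed_logins": 12},
    {"source_ip": "198.51.100.4", "failed_logins": 1},
])
print(actions)  # → [('block', '203.0.113.9')]
```

The point of the framing is the cycle time: whichever side completes observe-orient-decide-act faster sets the tempo, which is why MK distinguishes speed from velocity.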

6:40 – David W. Schropfer
And she was even talking about some of the exploits that haven’t even made any listings yet.

6:49 – MK Palmore
Yeah, they haven’t made listings, but maybe there’s evidence of them that exists somewhere, and an agentic AI model would be able to go out, scour the internet, and pull the necessary research it would need to inform itself to then make a better exploit, or a better decision around which exploit to deliver. So for sure, there I think the pros outweigh the cons, but I still remain bullish and excited about the ability of the practitioner to really engage in a way that, substantively, they haven’t been able to engage. There are automations to be made on both sides of the house, but I think that ultimately, as practitioners and protectors, we’re going to be in a better position to protect the environments that we’re responsible for. That’s good news.

7:34 – David W. Schropfer
It was chilling, and it does describe the thinking and reasoning, elements that computing wasn’t really being utilized for in the past. That was up to the attacker himself or herself to figure out: pulling in as much data as they could from various publicly available sources, and maybe some deep web sources, and working out how to do the attack, where the exploit is, what the known exploit is that can be used against a given company, a given target, at a given time. But now that’s being reasoned in real time by AI. The exploits are being attempted in real time by AI. And it’s happening, as you said, and as the article said, a lot faster than the threat actor acting on their own could possibly do. But you’re saying that the cybersecurity professionals have the upper hand as a result of AI.

8:31 – David W. Schropfer
If the threat actors have such a powerful tool and can use it for things that they truly couldn’t use it before, where does the cybersecurity professional gain come from when it comes to AI?

8:41 – MK Palmore
So for years, we’ve been espousing this idea of defense in depth. And unfortunately, humans have been responsible for identifying where those layers are and what necessary tooling might have to be in place in order to protect an organization with a defense-in-depth model. Imagine, if you will, an agentic model that could actually make those decisions in a much more reasoned fashion and identify the tooling necessary to protect an environment that it’s completely familiar with. You can, in a relatively short period of time, train an agent to understand the entirety of a global environment and make reasoned decisions about where to add protections, or where heightened risk might exist, rather than simply relying on humans. And although we’ve done a pretty decent job of that, it typically has required a lot of manual engagement on our part, especially the risk part. I mean, I established a business to do enterprise risk management specifically because that manual process is tedious. And in that tedium of response, organizations oftentimes choose a much faster route. In other words, they like to cut corners in order to get to the end state and identify where their risks are. There’s a possibility that with AI agents, we can make a much more full-throated assessment of an environment, and actually identify and help organizations prioritize where they should put their resources to better defend against the most relevant types of attacks that might come at their systems.
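The prioritization step MK describes, deciding where to put resources first, is often reduced to a likelihood-times-impact score over a risk register. This toy sketch is not from the episode; the asset names and scores are invented for illustration:

```python
# Toy risk register: score = likelihood (1-5) x impact (1-5),
# then sort so the highest-risk items surface first.
assets = [
    {"name": "public web app",   "likelihood": 4, "impact": 5},
    {"name": "internal wiki",    "likelihood": 2, "impact": 2},
    {"name": "payroll database", "likelihood": 3, "impact": 5},
]

for a in assets:
    a["risk"] = a["likelihood"] * a["impact"]

prioritized = sorted(assets, key=lambda a: a["risk"], reverse=True)
for a in prioritized:
    print(f'{a["name"]}: {a["risk"]}')
# → public web app: 20
#   payroll database: 15
#   internal wiki: 4
```

An AI agent familiar with the whole environment would, in MK's framing, be filling in those likelihood and impact estimates continuously instead of through a tedious manual review.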

10:14 – David W. Schropfer
And when we’re thinking about those types of attacks, you are on the front lines at Apogee Global. So maybe talk about some of the things that you’re actually seeing. If there’s a real-world scenario you can give us, perhaps without naming the client, of course, one that has taken advantage of some of the benefits that AI can bring to this fight.

10:39 – MK Palmore
Yeah, I’ll tell you the way I answer that question, David, because it’s a great question. At the same time, the sweet spot that we like to operate in at Apogee is getting folks to simply understand the fundamental practices that they need to engage in in order to build a functioning security program. My experience over the years, again, with two Fortune 500 companies, and certainly my time in the FBI, is that most small to medium-sized businesses, and that’s kind of the space that I operate in, haven’t done enough to even do the basics. They haven’t built the necessary governance. They haven’t built the necessary risk aperture to even understand how their environment might be impacted by an attack. So our goal is to come in and get them to a baseline where they can actually operate and make decisions, and then build upon that, because they haven’t done the hard work necessary to get to that baseline. Oftentimes, organizations are operating with such speed, and such focus on things like revenue generation or just getting clients in the pipeline, if you’re talking about professional services firms, that they haven’t taken time to pause and really understand what the threat is to their organization. That’s where we come in: we help them understand where they are really at risk and how they might be impacted, and then offer solutions and recommendations around how they could change that scenario. Oftentimes, of course, AI may be a solution that we recommend, or at least implementing some portion of AI or AI-infused tooling to help them better defend their environments. Certainly, in answer to your question, the way I think it helps clients, especially SMBs, the most is that it will allow them to scale in a way that historically would have required them to go out and hire for multiple positions in order to satisfy a particular requirement.
You can cut across that in many ways today by implementing an AI tool that can do the job of several different people.

12:36 – David W. Schropfer
And since you mentioned AI tools, let’s say one of our listeners is curious, does not use AI tools for the purpose of cybersecurity today, and is not quite ready to hire a firm like Apogee. What kind of tools would you recommend for that person to just dip their toe in the water, as it were, to explore their own environment and expose the vulnerabilities that are probably there?

13:06 – MK Palmore
Yeah, I mean, I would start with acquainting yourself with AI tooling. I’m pretty sure, though I don’t know what the numbers are currently, that folks are already using these tools: ChatGPT, Gemini, Claude. People are experimenting with them in their personal lives to try and see what use they can make of them. I can tell you as a new business owner that we use all of those tools heavily to help us do things in a business environment that, again, I historically would not have been able to do as a business owner even a year ago. So it helps me scale my business: everything from marketing to program management to drafting content. Everything that typically would take several man-hours, and excuse the use of the term man-hours, productivity hours, can now be done using AI tooling. So the first thing I would recommend is that they just get acquainted with the tooling in their own personal environment, and then look to adopt this tooling, with some governance on top of it, within their enterprise environments, and identify specific use cases where it can help them solve problems that typically have taken them weeks or months to turn into a deliverable. There are some environments where this may come to mind: auditing environments, certainly, and again, professional services. These are folks who typically spend hours upon hours, and bill lots of hours, on the review of documents. AI can get through that work in aggregate. And I tell you, the return on investment in terms of assessing scores or hundreds of documents at one time and then providing summarizations, that’s easily an interesting and quick return on investment where you can see the benefit of using these types of tools.

14:47 – David W. Schropfer
Can you give an example? Let’s say one of our listeners says: MK, I’m in. I’m going to open up ChatGPT right now and begin to use it, not for drafting an email or some other kind of marketing content, but to start to see if my system is safe, if my data is safe, if my network is safe. Can you give an example of the type of command that you can give to ChatGPT? Obviously, you can’t ask it to do an entire penetration test; it will do nothing like what a firm would do if you hired them for a penetration test or something of that kind. But word for word, what command would you recommend my listeners put into ChatGPT or Gemini to start the process of thinking about using AI for cybersecurity?

15:42 – MK Palmore
That’s an interesting question, because I think if you were to enter any type of prompt that essentially says, “give me an idea as to how to better assess the risk of my environment to a potential cyber attack,” those three models that I mentioned will come up with pretty legitimate answers as to how you start. I’d be willing to bet that the first one or two items they will recommend are that you get a vulnerability and/or gap assessment done, either by an external agency or by conducting one yourself internally, using some of the aligned frameworks that we have all agreed upon in the industry to help organizations reduce the risk of a cyber attack to their enterprise. So that might be the NIST CSF. It might tell you, if you’re in financial services, to use the PCI DSS. If you’re in the government space, the CMMC is quickly emerging as the standard for organizations to align to. And then, of course, you have the NIST standards across some of the frameworks that have been established via NIST. And it will tell you that you’ve got to conduct that gap assessment first, before anyone can tell you exactly what types of tools you may need. Because without identifying your gaps and vulnerabilities, or your governance structure, you really have no idea where to start. So many of those responses would simply take you through the marked steps: get a gap analysis or vulnerability analysis done. It might recommend things like penetration testing, because, as you indicated, we haven’t discovered anything yet that really beats the actual attacker’s viewpoint, although there are things like attack surface management tools that you can use to identify open vulnerabilities. There are lots of methodical steps and ways to go about it.
And the truth of the matter, David, is that organizations haven’t taken the time to go through those steps, bring in organizations like Apogee or others to conduct those gap assessments, and get the answers to the question that you just prompted, which is: how do I better defend our environment?
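A gap assessment, at its simplest, is a comparison of current practice against a framework's control baseline. The control names below are shorthand loosely inspired by the five NIST CSF functions, not the official control catalog, and the "implemented" set is invented for the example:

```python
# Minimal gap-analysis sketch: compare implemented controls against
# a target baseline and report coverage plus what's missing.
baseline = {
    "identify: asset inventory",
    "protect: MFA on all accounts",
    "detect: centralized logging",
    "respond: incident response plan",
    "recover: tested backups",
}

implemented = {
    "identify: asset inventory",
    "detect: centralized logging",
}

gaps = sorted(baseline - implemented)
coverage = len(implemented) / len(baseline)

print(f"Coverage: {coverage:.0%}")  # → Coverage: 40%
for g in gaps:
    print("GAP:", g)
```

Real assessments weigh controls by risk rather than counting them equally, but even this shape shows why the gap analysis comes first: the `gaps` list is what tells you which tools and investments to prioritize.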

17:45 – David W. Schropfer
Got it. So it sounds like gap assessment is the first threshold you step over to begin that type of analysis.

17:53 – MK Palmore
You’ve got to know where you stand, where you’re starting.

17:55 – Unidentified Speaker
Exactly.

17:56 – David W. Schropfer
So I’ve been in cybersecurity for over 15 years, and I’ve had this podcast for over five. And in that time, I’ve come to the conclusion that there are two types of executives at two types of companies: those who know that they’re under some sort of cybersecurity attack, and those who don’t, because everybody is. So what would you say, as somebody who works with these clients, and I’m sure you have some clients coming to you proactively and some coming to you reactively, after the house is fully on fire, to those of my listeners who believe they are not under cybersecurity attack, that their system is safe, that what they’re doing is, quote, good enough? What would convince them to take the next step and start with that gap analysis, or reach out to a company like Apogee?

18:45 – MK Palmore
Yeah, I would say, first of all, hope is not a strategy.

18:51 – MK Palmore
You’ve got to do the work. And at the end of the day, because of the environment we’re in, every organization is a digital one today. There is virtually no business doing meaningful work, looking to build revenue and have impact, that doesn’t have a digital presence of some kind. That digital presence, the reliance on digital equipment, materials, and access, is a source of potential risk to the operations of the business. If you haven’t taken time to properly assess where the risk to your individual business stands, you are simply opening yourself up to possible attack. And as you indicated, every organization is under attack. When someone releases an exploit out into the wild, there are two possibilities: they are either targeting specific organizations, or they’re just looking for low-hanging fruit. And you don’t want to be on either end of those. You don’t want to be a targeted organization, and you don’t want to be low-hanging fruit. The truth of the matter is that if you aren’t protecting yourself in the same fashion that the big folks on the landscape are protecting themselves, you are opening the door to potential exploitation, and you should be prepared across the board. There are things that you can do, the fundamentals, to at least set yourself at a good starting point. But what we’ve found over time, in my time in the FBI, my time as a business owner, and my time in enterprise, is that folks still aren’t doing the fundamentals.

20:14 – David W. Schropfer
And a big part of the reason I wanted you as a guest on this podcast is for that reason. You were in the Marines, you were in the FBI, you were in a Fortune 500, a Fortune 5, with Google, and you’re a small business owner and a startup founder. That is remarkable; I’ve never really thought about those four corners of professionalism, but you’ve touched them all. And I’m curious how the kind of rigor that you would have to approach any task with, for example, in the military, translates all the way around that diamond to a small business owner like yourself who’s trying to grow a business and protect a network.

21:00 – MK Palmore
Yeah, I mean, I’ve had deep experiences that have helped me learn how to advise clients and how to take the necessary steps in order, again, to reduce the risk; you can’t completely eliminate it if you’re operating as a business. That risk reduction comes with required action, required intentionality. I think that’s the piece that’s missing from the landscape today. I had a conversation with a cybersecurity practitioner a little over two weeks ago who had done all the work, had identified the gaps, and for some reason cannot get the executives to lean in on what he has determined to be: here’s what we need to do in order to make ourselves safer. And part of the challenge, I told him, is that he’s probably not having the business and risk discussion he needs to have with those business owners and leaders. He’s coming across simply as someone spewing tech terms, cybersecurity terms, at them. And they have trouble weighing the benefit of turning additional dollars over to him and his team when he can’t really outline the potential risk and impact. I think that challenge still exists for many cybersecurity practitioners. I purposely went back to school many, many years ago, two decades plus now, to get an MBA, even though I wasn’t in business, because I needed to understand business terminology and the business landscape. I think that’s helped me quite a bit as I now start my own business. And my experience with the Fortune 500 companies was that when you get in front of executives, you have to be speaking in terms that are relatable to business risk; otherwise, they will not take it seriously enough to make a decision around the need to invest additional dollars, because it just comes across as a cost expenditure to them.

22:49 – David W. Schropfer
And where’s the biggest dollar impact that threat actors using AI are going to have on a given business?

22:56 – MK Palmore
Well, interestingly enough, the threat of AI now lowers the barrier to entry for threat actors. The tooling that’s available is relatively cheap when you think about it; their cost, or barrier to entry, for getting access to these high-level models and being able to build with them has actually been lowered. I don’t know that Heather covered that in the article, but it means that not only do they have access to new tooling, they actually have an ability to increase their velocity of attacks, which means that we on the defense side are almost mandated to use very similar tooling in order to provide a defense against those attacks. If you are not thinking about using AI in cyber defense, you are essentially missing a whole pivot in the industry, and it definitely increases the risk to your organization. And so it’s important, imperative even, for business owners and stakeholders, especially technology leaders, to get their hands wrapped around this topic.

23:56 – David W. Schropfer
It sounds like a perfect place to leave it. Incredible answer; I really appreciate your feedback, and thanks for being on the podcast, MK. How can listeners find out more about what you do?

24:07 – MK Palmore
So they are free to reach out. I’m pretty prolific on LinkedIn, so find MK Palmore on LinkedIn. But we also have a company website, ApogeeGlobalRMS.io, where you can go see our company service lines. And if you ever want to reach out to simply have a free consult, feel free to do so.

24:25 – Unidentified Speaker
Fantastic.

24:26 – David W. Schropfer
And if you missed any of those URLs that MK just said, you can also go to DIYCyberGuy.com and search for episode 91. That’s 9-1. And I will have a link to the article, plus MK’s LinkedIn profile, company website, et cetera. Thanks very much for being on the podcast. This has been great.

24:46 – MK Palmore
Thanks for having me, David. Appreciate it.

Published by


David W. Schropfer

David W. Schropfer is a technology executive, author, and speaker with deep expertise in cybersecurity, artificial intelligence, and quantum computing. He currently serves as Executive Vice President of Operations at DomainSkate, where he leads growth for an AI-driven cybersecurity threat intelligence platform. As host of the DIY Cyber Guy podcast, David has conducted hundreds of interviews with global experts, making complex topics like ransomware, AI, and quantum risk accessible to business leaders and consumers. He has also moderated panels and delivered keynotes at major industry events, known for translating emerging technologies into actionable insights. David’s entrepreneurial track record includes founding AnchorID (SAFE), a patented zero-trust mobile security platform. He previously launched one of the first SaaS cloud products at SoftZoo.com, grew global telecom revenue at IDT, and advised Fortune 500 companies on mobile commerce and payments with The Luciano Group. He is the author of several books, including Digital Habits and The SmartPhone Wallet, which became an Amazon #1 bestseller in its category. David holds a Master of Business Administration from the University of Miami and a Bachelor of Arts from Boston College.