Neil Gad, CPO/CTO at RealVNC on DIY Cyber Guy

Episode 87: How to Use AI to Create a Competitive Advantage Without Losing Control

About Neil Gad

Neil Gad is the Chief Product & Technology Officer at RealVNC

Neil has 20 years' experience in technology, commercial, and operations roles, and was appointed Chief Product & Technology Officer to define and deliver RealVNC's product vision.
Neil has a background in strategy consulting from BCG and PwC, leading value creation programmes and supporting M&A deals across TMT and many other sectors.
He has since led tech functions in both large corporates and start-ups, with a proven track record of delivery built on creating winning teams that collaborate effectively and focus on customer value.

Neil’s Links

https://www.realvnc.com/
https://www.linkedin.com/in/neilgad/

Summary:

The discussion focused on the implications of AI on security for both home and enterprise networks, highlighting findings from a Microsoft report indicating that 94% of executives experienced identity and access incidents related to AI in the past year. David W. Schropfer emphasized the risks associated with AI, noting that 57% of organizations reported security incidents linked to its usage, particularly when employees upload sensitive data to AI tools. Neil Gad elaborated on the zero trust security model, advocating for minimized access to data and the segmentation of workspaces to mitigate risks.

They discussed the varying security levels of AI products, especially between free and enterprise versions, and the importance of strict data management policies to prevent data leaks. Gad also noted the trend towards on-premise solutions in high-security industries to reduce attack surfaces, while both speakers underscored the need for organizations to balance leveraging AI for competitive advantage with keeping data safe.

SHOW NOTES:

Hair on Fire rating: 2 out of 5

The workplace has changed faster than most cybersecurity teams can adapt. According to Microsoft’s Secure Employee Access Report, “the rise of cloud, distributed workloads, and SaaS applications made it easier for employees to work from anywhere and connect. Now, generative AI is once again reshaping work and collaboration, creating even more access points, workload identities, and access permissions.”

With over 300 billion passwords in circulation and AI expanding the attack surface, identity has become both the new perimeter and the prime target. Nearly every enterprise surveyed—94%—experienced an identity or network access incident last year. As one CISO put it, “complexity is the enemy.” Yet most organizations still rely on a patchwork of tools and siloed teams to protect what has become an AI-driven identity ecosystem.

As AI accelerates productivity, it also accelerates risk. Microsoft’s research shows that 57% of organizations are already seeing security incidents linked to AI usage. Protecting enterprise identity in this new era requires something few companies have achieved—true unification between identity and network access management.

Today, we will explore what that unification really looks like, how AI is forcing enterprises to rethink trust, and what leaders can do before attackers do it for them.

My guest is Neil Gad, Chief Product & Technology Officer at RealVNC. He is a global cybersecurity strategist who has spent two decades helping enterprises modernize identity architecture and bridge the gap between security and innovation.

Can you give me an example of an actual security issue that was caused by AI?

TRANSCRIPT

0:00 – David W. Schropfer
Welcome back, everybody, to DIY Cyber Guy.
This is episode 87, and it's a Hair on Fire two out of five, meaning this is stuff you should know about, especially for people who are in any way responsible for a network: small and medium-sized businesses, large enterprises, or even just a home network with a few kids at home, where you're just trying to keep your network safe and out of trouble. And the topic is AI.

1:52 – David W. Schropfer
Anybody who's using AI, and if you're not, you're probably not listening to this episode, really should pay attention to this, because AI really does change your security footprint and the security vulnerability of your network and of your systems. So the workplace has changed, home usage has changed, faster than most cybersecurity teams can adapt to it. I think we all know that. According to Microsoft's Secure Employee Access Report, and I'll have a link to that in the show notes, and I'm quoting now: the rise of cloud, distributed networks, and SaaS (software as a service) makes it easier for employees to work from anywhere and connect. But generative AI is once again reshaping work and collaboration, creating even more workload identities and more access permissions. And all of that adds up, and I'm not quoting now, but all of that adds up to more security vulnerabilities.

I don't know if you knew this or not, and I try to keep track of this number: we're currently over 300 billion passwords in circulation today. And AI is only expanding that attack surface. And we've talked about that many, many times on this podcast, how the password itself is an attack surface for the threat actors. In other words, the bad guys can, if they find your password on a list, whether they've exploited another company or just bought the list off the deep web, or whatever the case may be, there are ways of figuring out your password. And of course, the good old-fashioned spear phishing of just trying to figure it out: the name of your cat, the name of your dog. Again, we've talked about this, don't do that. That's not a good password.

But the fact that we're all using AI is all of a sudden ballooning the number of apps that we're using and the way that we're using those apps. And the Microsoft report is basically a survey of multiple enterprise executives, and 94% of those executives said that they experienced an identity and network access management incident in the last year. 94% said they experienced an incident that was related to AI. And as one chief information security officer put it, complexity is the enemy of security. And you may have heard me say that before, and lots of people have said that as well, but that's really true. Complexity is the enemy of enterprise security, of home security, of network security. Any type of security really becomes more risky the more complexity that you put into it.

So AI, it's great. And we've talked about this before. It's accelerating productivity, but it's also accelerating risk. And understanding how it's accelerating risk as it's accelerating productivity, whether you're talking about the family you're managing at home or the employees you're managing at work, is something you've got to understand. So Microsoft's research shows that 57% of organizations are already seeing security incidents linked to AI usage. That's incredible, because AI, as we all know, is not that old. Really, it's become more widely used only in the last two, maybe three years, depending on how much you or your organization is an early adopter of that technology. 57% is nothing to sneeze at. That's a huge number, directly attributable to AI usage.

So here with me to talk about this risk and all these other topics today is Neil Gad. Neil is the Chief Product and Technology Officer at RealVNC. He is a global cybersecurity strategist who has spent the last two decades helping enterprises modernize identity architecture and bridge the gap between security and innovation. Welcome, Neil.

5:49 – Neil Gad
David, thank you for having me.

5:51 – David W. Schropfer
Thanks for being on the show. I’m looking forward to the conversation. So, first question, can you give me an example of an actual security issue that was caused by AI?

6:03 – Neil Gad
The typical use case here is an employee in an organization has access to a system, and they upload a bunch of sensitive data to a gen-AI tool. In the spirit of being more productive and getting some insights distilled out of ChatGPT or another tool, they inadvertently upload a bunch of user data, or data from a source system that they have access to. And great, within 30 seconds it gives them the insight that they want, neatly, but they've accidentally exposed the user details of a bunch of their customers, for example, or they've taken a load of IT support tickets and the AI has ingested those. So firstly, that's exposing sensitive data in a way that it shouldn't be. It's also then potentially being used to train those models. So that data is then gone; it's uploaded to a cloud, to a data center somewhere, and can't easily be recovered. So this happens quite a lot.
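
To make that failure mode concrete, here is a minimal, hypothetical pre-upload scrubber in Python: it redacts obviously sensitive values before text ever reaches a gen-AI tool. The patterns and function names are our own illustration, not a RealVNC feature, and real data-loss-prevention tooling uses far richer detection.

```python
import re

# Hypothetical pre-upload scrubber: redact obviously sensitive values
# before text is pasted into a generative-AI tool. These patterns are
# illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_OR_ACCOUNT": re.compile(r"\b\d{12,19}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

if __name__ == "__main__":
    ticket = "Customer jane@example.com, account 4111111111111111, reports login failures."
    print(redact(ticket))
    # -> Customer [REDACTED-EMAIL], account [REDACTED-CARD_OR_ACCOUNT], reports login failures.
```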

7:08 – David W. Schropfer
There's a couple of things I want to unpack there. For one, there are obviously different AI tools. You know, some AI tools are built into Google Docs, for example, and will take a ton of data, extrapolate, and turn that document from 10,000 words into 500 words. There's also, you know, Read.AI, which is transcribing this episode, and lots and lots of other more widely available tools like Google Gemini or ChatGPT, of course. So is there a difference between those products, in the level of security exposure that is created based on the product that you use?

7:59 – Neil Gad
I think most, if not all, of those products will have some kind of upper level, like an enterprise or pro level, where the data can remain private. So if you're using the lower tiers of ChatGPT or another tool, typically you're allowing it to ingest and use your data for its training purposes. If you buy one of the upper tiers, you typically then will get your own instance where the data is not exposed more widely than your own organization. So for example, in the software engineering industry, as we are at RealVNC, we cautiously use AI tools to help us with our software engineering, but in a way that does not expose our source code to the models, because everything stays within our environment. So if you're on one of those upper tiers, then it's much more secure.

8:56 – David W. Schropfer
And we've talked about having proprietary data, or rather training AI in a way that doesn't let that information out into the public, many, many times in this podcast. So I'd really like to emphasize that point. One of the things I've said many times is, if you're not paying for the product, you are the product. So if you're using a free version of any of these, it's taking all of that information, as much as possible, and it's using it to train its core large language model to make the whole product better. So it really sounds like, especially if you're talking about using AI to help with code, using AI to help with security tickets, using AI to help even with documents that refer to sensitive client information, say at your law firm, where you're talking about a criminal case that you're representing, or whatever that might be, putting that into AI could expose it, unless you are using a professional version and you have set up that professional version not to share the information with a large language model. So walk me through this: if you're a CEO listening to this podcast, and your head just exploded because you're not sure if you did or didn't, how would you very quickly check your ChatGPT instance, or your Google Gemini instance, or even, if you're using Jira or another tool that has this fancy or convenient AI summary feature, how do you check to make sure that you're not sharing that with a publicly available large language model?

10:35 – Neil Gad
It will usually be very clear. You should check which tier you are on; you will be on some kind of pro or enterprise subscription tier. And it usually will very clearly, in the feature list, tell you whether it is or is not using your data to train its models, or where your data is exposed to. It is the first thing that larger enterprises would typically look for when they are purchasing these things, and it's usually pretty clear in the pricing models.

11:06 – Neil Gad
I would also say it's really, really important to then look at yourselves, in terms of your organization, at what data could potentially be exposed to the AI. So I think it's also a common thing to then say, OK, great. We bought the upper tier of this product. It's not exposing our data beyond our own organization. But actually, it's much more important to then go back one step and say, well, actually, let's lock down how our data is being exposed to our employees, first of all, so that they could not inadvertently upload something they should not to an AI tool. A set of policies around how you organize and manage your own access, networking, and least privilege is then the next frontier that you should be looking at, before you even think about trusting the AI tool.

12:02 – David W. Schropfer
And a lot of people don't realize that. So when you talk about training and even using, call it a proprietary version of an AI tool, because you're paying for the pro or enterprise level: if one employee says, remember all of these account numbers for all of these clients, and then they provide all the client names and account numbers, and it's trained to remember all of that, it is possible that another employee can say, write a summary of all of our clients and their account numbers, and your AI may spit that out, even if that employee isn't authorized to have access to that information. Can you say more about that as an attack surface?

12:45 – Neil Gad
Yeah, most organizations now will talk about concepts of least privilege or zero trust, where you are, as a practice, isolating data and users and computing devices from each other, and assuming that nobody has access unless you are granting it, and that you are granting it in the minimal possible way. And that way, you remove the possibility of employees inadvertently accessing devices or information that they should not, and then doing something with it that they should not.
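
As a sketch of what "nobody has access unless you are granting it" can look like in code, here is a minimal deny-by-default check in Python. The users and action names are invented for illustration; production zero-trust systems also verify device posture, network context, and session validity.

```python
# Minimal deny-by-default (least-privilege) check. Grants are explicit
# and as narrow as possible; anything not listed is refused.
GRANTS: dict[str, set[str]] = {
    "alice": {"crm:read"},              # support agent: read-only CRM
    "bob":   {"crm:read", "crm:write"}, # account manager
    # note: nobody is granted "billing:export", including to an AI tool
}

def is_allowed(user: str, action: str) -> bool:
    """Deny unless the (user, action) pair was explicitly granted."""
    return action in GRANTS.get(user, set())

assert is_allowed("alice", "crm:read")
assert not is_allowed("alice", "crm:write")
assert not is_allowed("mallory", "crm:read")    # unknown user: denied
assert not is_allowed("bob", "billing:export")  # no grant: denied
```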

13:21 – David W. Schropfer
Okay, so give me an example that lots of people can relate to. So lots of our listeners use Google Workspace. They use it for email, they use it for Google Docs, document sharing, etc. And they use Gemini, which is attached to that. So how can I be sure, or at least a little more comfortable, if I'm the CEO of such an organization, and I'm already using tools within Google Workspace to make sure everybody's logging in to Gemini and every other SaaS product that they use? So I've solved that problem. I can see who's logging into what, and I've got that consolidated within the Google Workspace environment. But how can I be sure that the example I gave doesn't happen, where one employee says, hey, Google Gemini, remember these account numbers, and another employee says, give me a summary?

14:16 – Neil Gad
So the word workspace, I think, is doing a lot of the work there. The larger that workspace is for an individual employee, the greater the risk. So it's really, really important to break down the concept of a shared workspace into a thousand different workspaces, or as granular as possible, so that the information is completely segmented into the narrowest possible slice that you give an employee or set of employees. At RealVNC, we use the concept of my organization and sub-organizations. So we allow our customers to break down the way they think about devices and users in a very granular way. So organizations, when they're thinking about shared workspaces, should think about it in terms of the narrowest possible definition of that workspace for a given employee. It's really hard, because now you have a high proliferation of different systems. Employees are using many, many applications; there's a concept called application sprawl. The access to those applications can often be decentralized. So folks in a marketing function, sales function, finance function, they all have their own apps that they're using. Often the administrators of those apps are within that function, not necessarily the CIO or the IT function anymore. So decentralization of access management creates a greater risk. And so I would encourage all IT folks to get as much centralized control over that access as possible, so as to further reduce the risk of information sitting in different parts of a supposedly shared workspace without visibility from one central position.
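
One way to picture the organization and sub-organization idea in code (the class and method names below are our own sketch, not RealVNC's actual data model): each sub-organization is its own narrow workspace, and a lookup only ever sees the slice it is scoped to.

```python
from dataclasses import dataclass, field

# Sketch of organization -> sub-organization scoping: data is filed
# under the narrowest sub-org, and lookups are confined to that slice.
@dataclass
class SubOrg:
    name: str
    records: list[str] = field(default_factory=list)

@dataclass
class Org:
    name: str
    sub_orgs: dict[str, SubOrg] = field(default_factory=dict)

    def add(self, sub: str, record: str) -> None:
        self.sub_orgs.setdefault(sub, SubOrg(sub)).records.append(record)

    def visible_to(self, sub: str) -> list[str]:
        """An employee scoped to one sub-org sees only that slice."""
        return self.sub_orgs.get(sub, SubOrg(sub)).records

acme = Org("acme")
acme.add("finance", "Q3 invoices")
acme.add("marketing", "campaign plan")
assert acme.visible_to("marketing") == ["campaign plan"]  # no finance data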

16:13 – David W. Schropfer
All right, great. And before my next question, I just want to emphasize to my listeners that RealVNC is not a sponsor. This is not a pay-to-play podcast.

16:21 – David W. Schropfer
I do not now, nor have I ever, charged a fee to anybody for being on the show, unless they wanted to be a sponsor, which we talk about at the end of the episode. But if one of my listeners works for an enterprise in an IT or information security role, or at a medium-sized business where they're in charge of all of IT, or whatever the case may be, and they were to contact RealVNC and say, hey, I heard DIY Cyber Guy episode 87; I'm not sure, across my 1,000 employees, who's using what, who's training my professional instance of ChatGPT or what have you, and whether information is being shared unknowingly between people who do not have access to that information; I need RealVNC to lock it down and help me out. What would RealVNC do in that instance?

17:17 – Neil Gad
Thank you, David. So yeah, we offer a product that allows this zero-trust approach. Many SaaS products that you buy will offer this concept, so that access is zero unless it's granted. There's also the concept of on-premise. For the last two decades, the world has been going more in the direction of cloud. What we are seeing in the industries that we serve, anyway, is an increasing preference for offline, on-premise, controlled environments that are not connected to a cloud anymore.

17:55 – David W. Schropfer
And so it's a controlled environment in the middle of the city with high redundancy, et cetera?

18:04 – Neil Gad
Yeah, precisely. So it’s a very simple but effective way of basically hiding. And you remove an attack surface immediately by saying, well, do you know what? We’re just going to do everything from within our own organization. And so this is usually a really common approach in high security industries.

18:28 – David W. Schropfer
So that’s the first way of reducing attack surface.

18:31 – Neil Gad
And thereafter, you can then segment your infrastructure, your organization into different parts, and therefore you further hide the respective parts of your network from each other. And again, reducing another layer of attack surface.

18:46 – David W. Schropfer
Okay, so in that scenario, a company could still get the full advantage of the large model that's generally driving a ChatGPT app or Gemini app or any other AI-powered app, but you've got your own instance of it. So you're not feeding the generally available, publicly available large language model; you're feeding your own, right? Now talk about how that step could actually create proprietary information, or even a competitive advantage, for a given company, if you are training your own AI internally, maybe as part of your product, whatever the product is that your particular company sells. Do you have examples of clients that do that?

19:37 – Neil Gad
So the advantage that gives you is that the training is happening in a more specific way that’s relevant for your organization. So it can give you an advantage because your use case is more prevalent in the training of the large language model. But what you still want to do is leverage incoming knowledge from outside your organization. So you kind of want the traffic to be one way where you get the benefit of learning from the external environment, but without exposing your information outside.
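
A toy rendering of that one-way traffic idea, with invented host names: the gate permits pulling updates in from a trusted external source but refuses any push of internal data outward.

```python
# Hypothetical "one-way" egress gate: inbound knowledge (for example,
# model or package updates from an allow-listed host) may be pulled in;
# outbound pushes of internal data to any external host are refused.
ALLOWED_PULL_HOSTS = {"models.vendor.example"}  # invented host name

def gate(direction: str, host: str) -> bool:
    """Allow only pulls from trusted hosts; never allow pushes out."""
    if direction == "pull":
        return host in ALLOWED_PULL_HOSTS
    return False  # internal data never leaves, regardless of host

assert gate("pull", "models.vendor.example")      # updates flow in
assert not gate("push", "models.vendor.example")  # data never flows out
assert not gate("pull", "untrusted.example")      # unknown source blocked
```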

20:15 – David W. Schropfer
And is it possible for an actual product, somebody's actual app, to tap into that type of siloed, proprietary learning? So if you happen to be a company that offers one of these AI products that we're talking about to customers, to clients, can this type of architecture that you're describing, using a data center, taking things off the public cloud and putting them into a private cloud, or even an on-premises security center if your organization has that, be used to create a better, more proprietary product out of your own data?

20:58 – Neil Gad
Some of the largest organizations in the world are adopting this approach, where they purchase a proprietary instance for their own organization that is able to learn from the external environment and accelerate its learning from the internal environment. It gets them a more efficient and better product, and it also makes their employees more productive, because they're able to get to insights faster. Some of the large consulting firms, for example, like McKinsey, Deloitte, PwC, will work in this way, where they have their own instance; they're able to actually offer better services to their clients because they're able to learn in a much more efficient, faster way.

21:42 – David W. Schropfer
That’s fantastic. Great advice.

21:47 – David W. Schropfer
Great insight. And especially in an age when everybody is accelerating the use of AI, making sure we all understand what's happening with that data, how to protect it, and even how to turn it into an advantage is such a critical thing at this stage, because this technology is changing the way so many industries work and operate. And I'd argue faster than any other new technology: the personal computer itself, the internet itself, nothing has been adopted as quickly as AI has been in the last few years.

22:26 – Neil Gad
Yeah, it’s scary how fast the adoption is happening. Everybody’s racing to catch up and to be on the right side of the change so that they can harness the power of it and not be left behind. I think it’s creating a stratification of companies that are either catching the wave or missing the wave. So it’s in equal part, I think, really exciting, but also quite scary. So I think educating oneself as to how best to leverage and stay safe in this environment is really, really important.

23:01 – David W. Schropfer
And keeping on top of things by listening to podcasts like DIY Cyber Guy.

23:07 – Neil Gad
Couldn’t resist, couldn’t resist.

23:09 – David W. Schropfer
Neil, it's been great having you on the show. Where can people find out more about what you do?

23:15 – Neil Gad
Thanks, David. It's been a pleasure to be on the podcast. Thank you for inviting me. You can check out realvnc.com. RealVNC was the inventor of remote access technology, and we have the most secure remote access product globally. Most folks in the tech industry will have used or heard of RealVNC through their careers. I would encourage you to check out our webpage and our latest products; we talk all about zero trust, on-premise, and how you can stay safe in the modern world. And you can also find me on LinkedIn. My URL just has my name, Neil Gad, at the end.

23:51 – David W. Schropfer
And that’s Neil Gad, for those of you who don’t know how to spell Neil’s name. You can also find all those links, everybody, on DIYcyberguy.com. Just search for episode 87. Neil, thanks for being here. I can’t wait to have you on the show again. Thank you so much, David.

Published by

David W. Schropfer

David W. Schropfer is a technology executive, author, and speaker with deep expertise in cybersecurity, artificial intelligence, and quantum computing. He currently serves as Executive Vice President of Operations at DomainSkate, where he leads growth for an AI-driven cybersecurity threat intelligence platform. As host of the DIY Cyber Guy podcast, David has conducted hundreds of interviews with global experts, making complex topics like ransomware, AI, and quantum risk accessible to business leaders and consumers. He has also moderated panels and delivered keynotes at major industry events, known for translating emerging technologies into actionable insights. David’s entrepreneurial track record includes founding AnchorID (SAFE), a patented zero-trust mobile security platform. He previously launched one of the first SaaS cloud products at SoftZoo.com, grew global telecom revenue at IDT, and advised Fortune 500 companies on mobile commerce and payments with The Luciano Group. He is the author of several books, including Digital Habits and The SmartPhone Wallet, which became an Amazon #1 bestseller in its category. David holds a Master of Business Administration from the University of Miami and a Bachelor of Arts from Boston College.