
About Cordell Robinson
Episode: #84 – Is Your Data Secure When You Use AI?
Cordell Robinson, JD, BSCS/EE
Mr. Cordell Robinson is a nationally recognized cybersecurity authority whose rare combination of technical expertise, military intelligence experience, and legal training uniquely positions him at the forefront of today’s cyber defense challenges.
He holds a Bachelor of Science in Computer Science and Electrical Engineering from Pepperdine University and a Juris Doctor (JD) from Georgetown Law School. A decorated U.S. Navy veteran, Mr. Robinson served honorably as a Senior Intelligence Analyst, with duty stations at the U.S. Naval Computer and Telecommunications Station Diego Garcia and Fleet Air Reconnaissance Squadron II in Rota, Spain. He completed advanced naval intelligence and aviation training, including the rigorous Survival, Evasion, Resistance, and Escape (SERE) School, and earned multiple commendations for his service.
After transitioning from military service, Mr. Robinson charted an unconventional yet impactful path into cybersecurity, where he quickly distinguished himself in compliance, governance, and regulatory frameworks. His early work in the Department of Defense provided a strong foundation that he later applied to civilian agencies, where he redefined cybersecurity governance for the National Oceanic and Atmospheric Administration (NOAA) and National Weather Service (NWS).
At the U.S. Department of Commerce Headquarters, Mr. Robinson designed and implemented a comprehensive Certification & Accreditation process and developed a robust compliance and governance program aligned with National Institute of Standards and Technology (NIST), Federal Information Security Management Act (FISMA), and Office of Management and Budget (OMB) directives. His leadership extended to serving as Certification Agent for high-profile government and commercial systems, ensuring security at the highest levels of critical infrastructure.
Today, Mr. Robinson is an innovator in cybersecurity automation, advancing new methods to streamline assessments and proactively manage organizational security posture. His expertise bridges the gap between technical rigor and executive decision-making, making him a trusted advisor to senior leaders across government and industry.
Mr. Robinson holds numerous certifications, including:
- NSA IAM/IEM (Infosec Assessment & Evaluation Methodology)
- CRISC (Certified in Risk and Information Systems Control)
- CAP (Certification and Accreditation Professional)
- CISM (Certified Information Security Manager)
- CDPSE (Certified Data Privacy Solutions Engineer)
He is an active member of the Information Systems Security Association (ISSA) and remains committed to shaping the next generation of leaders through his nonprofit, the Shaping Futures Foundation. Outside of his professional work, he enjoys traveling, reading, fitness, and community service.
LinkedIn: www.linkedin.com/in/cordell-robinson-a2213a4
Website: www.bcf-us.com
Instagram: https://www.instagram.com/brownstone_consulting_firm
SHOW NOTES
Summary:
The discussion focused on the critical importance of data security in the context of advancing AI technologies. David W. Schropfer highlighted the historical evolution of internet security, emphasizing the need for responsible engagement with AI and encouraging participants to seek resources for safe usage. He and Cordell Robinson addressed privacy concerns, noting that many users are unaware of how their data is processed by AI systems. They advised caution when sharing sensitive information and stressed the importance of treating AI-generated content as drafts to protect personal data.
Robinson raised issues regarding the careless sharing of personal information on social media, warning that even with privacy settings, data remains vulnerable to misuse. He emphasized the significance of being mindful of one’s digital footprint and the potential consequences of impulsive online behavior. Both speakers discussed the necessity for regulatory frameworks to govern AI, drawing parallels to early internet challenges and advocating for measures similar to GDPR and CCPA.
Robinson also introduced Brownstone Consulting Firm’s role in compliance and AI governance, highlighting the importance of these regulations as businesses prepare for upcoming fiscal directives. The session concluded with plans for promoting their discussion on various platforms.
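The workflow the speakers recommend in this episode, drafting with placeholders and inserting sensitive details only into the final email after the AI has done its editing, can be sketched in a few lines of Python. This is an illustrative sketch only: the placeholder names and the regex patterns are our assumptions, not a complete PII scrubber or anything Robinson's firm publishes.

```python
# Sketch of the "draft with placeholders" practice discussed in this episode:
# send only a redacted draft to an AI service, then restore the sensitive
# details locally before the final email goes out. The patterns below are
# illustrative assumptions and catch only two common sensitive fields.
import re

# Rough patterns for a couple of common sensitive values.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def redact(text):
    """Replace sensitive values with placeholder tokens; remember the originals."""
    restore_map = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            restore_map[token] = match
            text = text.replace(match, token, 1)
    return text, restore_map

def restore(text, restore_map):
    """Re-insert the sensitive values into the AI-edited draft, locally."""
    for token, value in restore_map.items():
        text = text.replace(token, value)
    return text

draft = "Hi Sam, my SSN is 123-45-6789 and account 12345678901 needs review."
safe_draft, mapping = redact(draft)
# safe_draft is what you would paste into the AI tool for editing.
# After the AI returns its edit, restore() puts the real values back.
final = restore(safe_draft, mapping)
assert final == draft
```

The point, as the speakers put it, is that the AI only ever sees the final draft with tokens in it; the sensitive values never leave your machine.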
TRANSCRIPT
0:15 – Unidentified Speaker
Okay. Welcome back, everybody, to DIY Cyber Guide.
0:19 – David W. Schropfer
This is episode 84, Is Your Data Secure When You Use AI? Now, this is a hair-on-fire one out of five, something that you should really understand, especially if you use AI. And this episode is for everybody that either uses AI today or plans on using it in the near future, which I sincerely hope is every last man, woman, and child that’s listening to this podcast right now, because it is an important tool. And if you’re not using it, then you don’t understand it, and you don’t understand the impact that it could have, both positive and negative. So I’d like to start here. On October 29th, 1969, a UCLA student named Charley Kline attempted to send the word “login” from UCLA’s computer to a machine at the Stanford Research Institute. That was using a system called the ARPANET, which we now know today as the internet. It took about an hour and a couple of failed attempts, but after that point, they actually got the word “login” to transmit through the ARPANET from UCLA to Stanford. That was the first successful message of any kind to be sent through what we now know as the internet. Hearing Charley Kline and others who were involved in that project speak about it later, they were kind of surprised that it worked. Then they added multiple nodes all across the country, the idea being that if any one link between any two points in the matrix were to go down, the network itself would still work, because it had lots of other ways to get that data there. So the ARPANET expanded and expanded and expanded again, all the while more and more messages, things like websites, things like email, came into existence, all using this network that these guys were really surprised even worked in the first place. The last thing they were thinking about was security. They just wanted it to work.
And of course, as time went on, usage improved, use cases changed, websites became more and more complicated and were used for more and more things. The issue of security crept up and kind of bit them, right? Because now they had to backtrack and figure out how to secure a system that was really just designed to function, just to send messages from here to there. How do you layer security onto a network that was not designed for it? Fast forward to today: ChatGPT exploded into our consciousness and into our daily lives just a couple of years ago. It was not that long ago. And we all started using it in different ways. Those of you who haven’t, please listen to my last couple of episodes about how to use it and how to use it carefully. And feel free to write me at David at diycyberguide.com, and I’ll tell you exactly which episodes talk about using it carefully, using it judiciously, but getting to know it, because getting to know it is important. But we’re also sprinting ahead, just like the internet sprinted ahead from this weird connection that could transmit the word “login” in October of 1969 to something that then had to become secure. Well, AI is kind of at that same place in time right now. We started using it, we were amazed by what it could do, and we’re very, very quickly piling new use cases onto it, just like we did with the internet. But the question has to come up in your mind and mine and everybody that’s building and working on these things: privacy. What is happening to this data that’s going back and forth at literally light speed? As you ask questions and make challenges of your AI app, to produce a picture, to write an email, to write a poem, to correct your spelling, to do research, whatever the case may be, you’re using it, you’re giving it information. But where is that information going?
Because the questions that you ask say as much about you as the answers that you get. So there was a very interesting article in USA Today back in June, on June 30th, and I’ll read you a quote from it. The quote is: “Everyone wants to be the first to market with their AI-powered product, but they’re not thinking about the implications. The second you input sensitive data into a public model, it’s out there. We’ve got to stop treating cybersecurity like an afterthought.” And the person who said that was Cordell Robinson, who joins me today as my guest. Cordell has a Bachelor of Science in Computer Science and Electrical Engineering from Pepperdine University, he has a Juris Doctor from Georgetown Law School, and he’s a decorated Navy veteran. Cordell, thanks for joining me today.
5:58 – Cordell Robinson
Thank you for having me.
6:00 – David W. Schropfer
It’s really great to have you on the show. I’m looking forward to this discussion. So let’s start here, man. Let’s start here. Does anybody really know what happens to their data once it’s input into AI?
6:14 – Cordell Robinson
So David, I don’t think anybody really knows exactly. Some people know, like some of the technical people in the IT, information technology, industry may know, but your average everyday person doesn’t really know what happens to their data when they input it into AI, artificial intelligence. But what does that tell us about what that’s doing to our privacy?
6:37 – Unidentified Speaker
If I’m asking it to edit emails, or if I’m, I mean, when you ask it to edit an email, you’re delivering the email that you wrote, that you drafted into AI.
6:46 – David W. Schropfer
So that’s just one type of use case. When you ask it to make a picture of you know, a butterfly named Sam or whatever you might ask it to do, you’re putting that information into AI. So if we don’t know what’s happening to it, does that mean that anything could be happening to it?
7:06 – Cordell Robinson
No, I mean, it doesn’t mean anything could be happening to it, but it does mean you should be cautious. Because if you’re going to put in sensitive data, especially if you’re drafting an email that’s supposed to be private, put in only the information that you need to articulate your message if you want to use AI for it. Then add any details that are sensitive to the actual email that you’re going to send later, and don’t put them in AI. Because if you put all that data in AI, now that, plus what AI is going to give you to complete the whole email, is out there on the internet. And if it gets leaked, now that sensitive email could be leaked.
7:59 – David W. Schropfer
Well, there’s a difference between out there on the internet and information that could be leaked.
8:05 – Unidentified Speaker
Like when I use BillPay in my banking app online, it’s out there technically on the internet, but I’m comfortable that it’s secure, and that’s why I use a system like that. So when you’re chatting with ChatGPT or Perplexity or Gemini, you’re not putting that information out there for anybody to see.
8:26 – David W. Schropfer
Something would need to be breached for that information, that email that I put in a ChatGPT or whatever else, to make that public knowledge, correct? Yes, correct.
8:37 – Cordell Robinson
It would have to be breached for someone to get that, but there are hackers out there all the time penetrating into all these AI models and pulling data every day.
8:49 – David W. Schropfer
Yes, sir.
8:50 – Unidentified Speaker
Every day, thousands of times a day.
8:54 – Unidentified Speaker
Yes.
8:55 – David W. Schropfer
Okay. So what’s the cautionary tale here? What’s the worst-case scenario when, let’s say, you put in sensitive data just because you’re trying to get an email edited by ChatGPT, but that email, say, contains sensitive information? What’s the worst-case scenario of what could happen with that data?
9:21 – Cordell Robinson
Well, I mean, worst-case scenario, if you put in sensitive data, so for example, if you put bank account information or something in the email, that’s very sensitive data in that email.
9:35 – Unidentified Speaker
Yeah, don’t do that. Yeah. Don’t do that.
9:38 – Cordell Robinson
Or if you put in someone’s birth date or their social security number, that’s sensitive information that is personal, and it can be taken and utilized for someone to steal your identity or use it for different means. So always be cautious: put in only the data that you need to accomplish what you’re trying to do with AI, instead of all of the extra stuff, because it’s not really necessary. AI is ML, machine learning, and what you put in is what you’re going to get out of it, right? So be very detailed and exact about what you want, and also very cautious, so that you can get the right result. And at the end of the day, what AI gives you should not be the final thing that you send off. Let’s say it’s an email: it’s not going to be the final email that you send off, it should be the final draft. Then you take that final draft and insert that sensitive information into it. Because worst-case scenario, if you insert that sensitive information into AI, now it’s out there. Something could happen or something could not happen, right? A lot of times things don’t happen for years, because sometimes people sit on data and information for a very long time before they decide to do something with it. So you never know. It’s just not good to even put it out there if you don’t have to.
11:22 – David W. Schropfer
Excellent advice. I agree with that. One of the things you mentioned in the USA Today article was that you’ve seen people put sensitive information in social media posts, email posts, that type of thing. Is that something that you have direct experience with? I mean, do people come to you and show you what they’ve done? Or is this just social media posts and things that you’ve seen coming through your feed? Can you talk more about that? Sure.
11:55 – Cordell Robinson
It’s a little bit of both. So I see some social media posts and I’m like, why did this person just post all of this personal information? And it doesn’t have to be a typewritten post. It could even be a video of them talking, and some of the things that they say could be sensitive or personal information about themselves that could be to their detriment. So I’m like, be very careful of what you are putting out there on social media, because you’re putting it out there for the world. And I know people block it off, like, okay, only my friends can see it, or my followers can see it. Yes, that is true, but it’s still on the internet, which means it’s still accessible to attackers.
12:49 – David W. Schropfer
And it’s also accessible, if you make that post on Instagram or Facebook, for example, to Meta, the owner of those companies, and they can use it to profile you all they want. Because, in fact, you’ve agreed to let them do that when you clicked that little box when you opened your account, and when they update their terms of service, it’s always in there: we’re going to take your data and try to profile you so we can sell that profile.
13:19 – Cordell Robinson
That’s what they do. Exactly. Because it’s called a digital footprint. Basically, as soon as you get online, get on the internet, and you have a social media profile, an email address, and so on and so forth, your digital footprint begins to build. And then you build out this huge digital footprint, but you are ultimately in control of it, because you control what’s going to be put out there or not. So make sure you decide what should be posted and what should not be posted. I tell people: don’t post with emotion, never post out of emotion. Sit back and think about it.
14:00 – David W. Schropfer
Just like never pick up the phone and call somebody out of emotion.
14:04 – Unidentified Speaker
Right, call somebody, have a conversation. Don’t run to the internet and post because you’re having this emotional crisis because that’s not going to bode well for you.
14:13 – Cordell Robinson
You know, you’ve seen many people come on the internet and say all kinds of really outlandish things, and then they lose their job, and all these things happen because you should have had that conversation privately with someone to get it out, not on the internet.
14:31 – Unidentified Speaker
It’s not for the internet.
14:33 – Cordell Robinson
That’s not what the internet is for and social media.
14:37 – David W. Schropfer
I was going to say, what you’re saying right now is probably something that every single listener has lived through at least once. They’re upset about something. They just saw something in the town square that was either really offensive or something along those lines, and they’re really angry, their blood’s really up, and then they post something about it. If that person was sitting in front of you right now and said, hey, I’m not in the middle of that moment right now, tell me how to process it. What do I do in a moment like that, when I just took a picture, opened my Instagram app on my phone, and I want to post an inflammatory comment with this picture I just took, because I can’t believe it? What would you say to that person, not in the moment, but as a practice, to make sure that they don’t go too far?
15:36 – Cordell Robinson
I would say, for everyone, before you post, always take a beat, take a moment, think and process what you’re about to post. Especially if you’re making a video, watch it all the way through and process it before you post it. If you write a post, write it out and read it a couple of times before you post it, to make sure that it doesn’t have any information that could be damaging to you. Everyday things, haha, funny things, that’s fine. But when it comes to very serious things, people should be very cognizant, because sensitivities are heightened around the world with so many different things going on all the time. Just because you have this voice on the internet doesn’t mean that you have to say everything all the time, every day, because now you’re drawing too much attention for the wrong thing. It’s counterproductive, I would say.
16:48 – David W. Schropfer
OK, and that’s good advice that could help you avoid stepping on the proverbial digital landmine, something that actually affects your potential, your relationships, your job security, things that one would think would be very, very important to you. Therefore, they should take that advice very, very seriously, I would think. Right.
17:12 – Cordell Robinson
Because your digital footprint can affect your real life. So you have to be very cognizant of your digital footprint, because it does translate into real life and it could affect your real life. And some people get so into social media that it almost seems like it’s not a real place sometimes. They just go on, and it’s so exciting, and everybody gets excited, and you just, ah, okay, I’m going to post, or I’m going to do this video. No, don’t do that.
17:41 – David W. Schropfer
It’s not necessary. It’s like chill out.
17:45 – Cordell Robinson
Take a beat and think about it. Is it really worth me posting this? What is the purpose of me posting this? Is it gonna benefit me or is it gonna harm me? Is it gonna benefit or harm someone else? That’s what people should always think about before posting sensitive information, because employers go and look at your digital footprint, and people that you meet go and look you up, look up your socials, and see what you’re posting about your life. They get an idea of your personality from your digital footprint. And a lot of times, who people are on the internet and who they are in real life can be very different, because people are caricatures of themselves. So it’s like, if you’re going to be that caricature, be a positive caricature, so that nothing’s mistranslated in real life.
18:45 – David W. Schropfer
So for all of you that are listening, that is such good advice. If you have friends and family that post haphazardly or irresponsibly, tell them to listen to Cordell on episode 84 of DIY Cyber Guy, and that’ll bring them back to center and maybe give them some advice so they don’t, you know, nuke their life or their career with a single post. Cordell, let’s talk about you for a second. You served in the US Navy, is that correct?
19:15 – Cordell Robinson
That is correct.
19:16 – David W. Schropfer
Thank you for your service, Cordell, really. Thank you. And you also have a JD, which means you have a law degree, and you’re a computer programmer as well, with a Bachelor of Science in computer science.
19:29 – Cordell Robinson
Yes.
19:29 – David W. Schropfer
You know, you’re the 84th guest I’ve had on this show, give or take, and I’m certain that’s a unique background for any guest. How do you see the law, your study of the law, and your understanding of computer programming fitting together?
19:49 – Cordell Robinson
So it’s very interesting. At first, I did not even know exactly how it fit together, but it was organic; it made sense. When I was a software engineer, I was in law school at the same time. I’d taken the different classes, torts and contract law and all those different things, and I started reading through them. And of course, I know technology from my day job. And I’m like, wait a minute. Then I started looking at policy, and then compliance, and privacy. And I’m like, oh, well, this is all legal. So this marries. And I was like, you know, I don’t want to go into the field of being an attorney and doing all that. I want to do something a little bit more behind the scenes, in the background, something that’s unique. So the legal side of cyber, and compliance and regulations and policy, was very exciting to me.
20:58 – David W. Schropfer
It sounds like an interesting combination, and I’m glad that you’re excited about that, because regulation of some of these things is one way to fix it. I mean, the conversation about regulating the internet has been an incredibly hot topic ever since people kind of woke up after 1969 and said, wow, there’s so much going on here, good and productive, also bad and malicious. How do we regulate that? That’s a question that’s been wrestled with a lot. And I’m fascinated by the idea, take the word fascinated broadly, because the idea of regulating AI is interesting. Can it even be done? What elements of AI would be regulated? What would be the body to regulate it? What would enforcement even look like? And how do you collaborate in a world where there are honest actors, good actors, and there are bad actors, threat actors, out there using the same tools at the same time? So how would you think about regulating AI? Is it even possible? It’s possible.
22:12 – Cordell Robinson
I mean, you have to put in governance first, compliance and governance first. And there are frameworks out there; they’ve gotten really good. Recently, I’ve read a lot of the latest frameworks that have come out for AI governance. NIST actually has an AI framework, and there are some different AI governance frameworks out there by IAPP and other organizations that I’ve read through to understand. So I think you look at those guidelines and then turn those into regulations for yourself, and for your organization, for your company, if it’s on that side. But for you personally, kind of take it as, okay, what do I need to do to regulate my data and my security in this new AI world? Because AI is not just ChatGPT and some videos. AI is actual people creating videos of someone, and it’s not even that real person, them talking and everything: deepfakes and all these different things. So once you put that compliance or regulatory umbrella over it, it keeps it a little bit under control. People need to think back and slow down; everybody’s chasing it.
23:57 – David W. Schropfer
Cause they, you know, it’s a lot of money.
24:00 – Cordell Robinson
They want the money, but it’s like, let’s slow down just a little bit and let’s make sure we get some regulation around it and secure it well before it goes haywire. Because with AI, I mean, there’s just so much information moving so fast, and people are just using it and pushing all this data. I mean, imagine: there are almost 8 billion people on this planet, and all those with access to AI and the internet, it’s just constant, every day, terabytes and terabytes of data, constant. So it’s like, okay, we need to find some type of structure for it. It just can’t be the wild, wild west, because then it becomes chaotic, and then it becomes very dangerous.
24:47 – David W. Schropfer
Well, that’s where we’re heading, right? That’s where we started in 1969 with the internet. It became a useful tool. Then it went haywire.
24:57 – Cordell Robinson
Then guardrails and security layers were built up around it.
25:01 – David W. Schropfer
And, you know, regulations like GDPR in the EU and CCPA in California attempted to say, OK, with all this electronic data, people need the right to pull it back.
25:13 – Unidentified Speaker
People need the right to see what you’ve got, to have an understanding of what’s out there, and that’s definitely curbing the tide.
25:24 – David W. Schropfer
And so it sounds like, in your opinion, Cordell, responsible organizations can do things today, looking at NIST and, I apologize, what was the other organization you mentioned in this context? The International Standards Organization? I’ll just leave it at NIST. I thought you mentioned two, NIST and somebody else, the GI something, but I missed it. Okay. So when you look at standards that are put out there, like NIST, those are good standards to follow in the AI category. Do you think we’re going to get to the point where AI privacy becomes a part of regulations like GDPR, CCPA, and all the other data privacy regulations that are coming up state by state around the US?
26:20 – Cordell Robinson
Definitely. We definitely will. Yeah, we’re headed there very fast. And especially because some people use AI as their therapist.
26:31 – Unidentified Speaker
And I’m like, so you’re telling AI a lot of your deepest, darkest secrets.
26:41 – Cordell Robinson
And if someone gets that, it could be very damaging, you know, because no one’s perfect, right? Everybody has things. But you’re telling AI all this stuff; I don’t think that’s very wise.
26:55 – Unidentified Speaker
Exactly.
26:55 – David W. Schropfer
I’m going to edit this part out, but I have to admit, I enjoy South Park, and this season especially is excellent. There’s a scene where one of the characters is literally using ChatGPT as his therapist. It’s just funny. If you haven’t watched it, I’m sure it’ll give you a good chuckle. But that’s why I was smiling so hard when you were saying that. All right. So let’s wrap up. Cordell, it’s been great having you on the show today.
27:26 – Unidentified Speaker
Thank you for having me. You bet.
27:28 – David W. Schropfer
Where can people find out more about what you do?
27:31 – Cordell Robinson
Sure. So they can go to my website, www.bcf-us.com. That’s B as in Bravo, C as in Charlie, F as in Foxtrot, dash US dot com. Find me on LinkedIn, Cordell Robinson. And I also just started an Instagram, brownstone_consulting_firm, where I’ll be posting little fun videos on security and how to keep things safe.
27:54 – David W. Schropfer
I’ll tell you what, let me ask you that question again, but this time, when you say the URL of Brownstone, talk a little bit about Brownstone, not for 10 minutes, but give an overview: if you need this kind of help, come to bcf-us. Okay, so I’ll start again. Cordell, it’s been great having you on the show today.
28:20 – Cordell Robinson
Where can people find out more about what you do? Sure. So they can find me on my website, www.bcf-us.com. That’s bravo-charlie-foxtrot dash US dot com, which is my company, Brownstone Consulting Firm. We’re a global compliance firm. We will help you with AI governance. We will help you with anything in your compliance house. There’s a regulation out, well, not new, but a regulation directed by the president for CMMC. Look it up, Google it, and there are so many other compliance regulations that are required, especially now as we’re moving into the new fiscal year. So we are your company to go to for your compliance needs. You can also find me on LinkedIn, Cordell Robinson, and on Instagram, brownstone_consulting_firm.
29:12 – David W. Schropfer
I’ve got to believe, with your background in the Navy, plus your legal background, plus your computer science background, you would be uniquely positioned to help companies get government contracts and stay in compliance while they’re operating those contracts. Definitely. Excellent.
29:28 – Cordell Robinson
Well, it’s been great to have you on the show.
29:31 – David W. Schropfer
I’ll certainly have you on again. And thanks for being here.
29:34 – Cordell Robinson
Sounds good. Thank you so much. This is great.
29:37 – David W. Schropfer
Likewise.