Episode 36

Published on:

3rd Feb 2025

Ep36: DeepSeek AI – The Real Issue Isn't China, It’s AI Security

AI Security and Competition: Unpacking the Debate Around DeepSeek

This episode delves into the controversy surrounding DeepSeek, a Chinese AI model that some consider a national security threat. It questions whether this stance is legitimate or merely a tactic by big tech to stifle competition. The episode highlights security breaches across the AI industry, including at OpenAI and Google, arguing that the core issue is how AI handles security rather than where it comes from. The discussion also explores the suspicious uniformity of the anti-DeepSeek narrative, the motivation of big AI corporations to protect their monopolies, and the importance of reading AI privacy policies. The episode critiques the U.S. response to AI competition, drawing parallels to historical moments like the Sputnik era, and advocates for stronger AI security regulations and more open-source innovation. Listeners are encouraged to reflect on whether the fear of DeepSeek is justified or manipulated by big tech interests.


00:00 Introduction: The DeepSeek Controversy

00:08 Data Leaks: A Global Issue

00:39 The Suspicious Narrative Against DeepSeek

01:24 Big AI's Fear of Open Source

01:35 Smart AI Usage Tips

02:29 The Real Issue: AI Governance

03:15 The AI Moat Playbook

04:08 Big Tech's Control Over AI

05:49 The Global AI Competition

09:45 Security and Privacy Concerns

17:22 Conclusion: The Future of AI

---

I do hope you enjoyed this episode of the podcast. Here are some helpful resources, including any sites mentioned in this episode.

--

Sites Mentioned in this Episode

--

Find subscriber links on my site, add the show to your podcast player, or listen on the web player on my site:

Listen to Byte Sized Security

--

Support this Podcast with a Tip:

Support Byte Sized Security

--

If you have questions for the show, feedback, or topics you want covered, please send a short email to marc@bytesizedsecurity.show with the subject line "Byte-Sized Security" so I know it's about the podcast.

Connect with me on TikTok: https://www.tiktok.com/@bytesizedsecurity

Transcript
Speaker:

"DeepSeek is a Chinese AI, it's a national security threat." That's what they're saying. But is it true, or is it just a fear tactic?

So yes, DeepSeek had a serious data leak, uncovered by the security firm Wiz. No security, no protection, that's bad. But let's not pretend this is just a China problem. OpenAI had a chat history leak. Google's Bard exposed user data. Microsoft leaked 38 terabytes of private files. Cybersecurity failures happen across the board, and the problem isn't where AI comes from. It's how it handles security.

But here's the part no one's talking about, and where I think it gets a little bit suspicious: why are so many influencers, blogs, and security experts suddenly saying the same thing? "DeepSeek is a national security threat." "Don't use DeepSeek." "China's AI can't be trusted."

Well, if you want to look at who benefits from this narrative, leaked internal memos from 2023 literally said, "We have no moat; open-source AI is outpacing us." And that same memo said, "People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is."

So big AI corporations fear open-source models like DeepSeek, and others, because they threaten their monopoly. They don't want competition, they want control.

So avoiding AI tools isn't the answer, but using them smarter is. I encourage you to read privacy policies and know what AI tools collect. You should read DeepSeek's privacy policy and see what it collects and what they're going to do with your data. Consider using burner accounts. Never attach real credentials, but if you do, make sure they're secured and have 2FA enabled.

And before you type, and I think this is the most important part: AI chats aren't private, period. Be okay with whatever you type in going public. If you can do that, then whatever model you decide to use, if it gets out, and maybe it will, you'll be okay. So don't be putting in your Social Security number or really private information: proprietary data, confidential material, your company's data. Don't do that. Scrub that first.
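To make that "scrub it first" habit concrete, here is a minimal sketch of pre-filtering a prompt before it ever reaches a hosted model. The two patterns and the `scrub` helper are illustrative assumptions, not a complete PII filter; real scrubbing needs far broader coverage than this.

```python
import re

# Illustrative patterns only: real PII detection needs much more than this
# (names, addresses, API keys, internal project names, and so on).
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(prompt: str) -> str:
    """Replace anything that looks like an SSN or email with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    risky = "My SSN is 123-45-6789, email marc@example.com. Summarize my situation."
    print(scrub(risky))
    # -> My SSN is [REDACTED SSN], email [REDACTED EMAIL]. Summarize my situation.
```

The point isn't this exact code; it's the habit of putting a checkpoint between what you type and what the model provider stores.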

So what is the real issue? Well, AI governance is lagging. We need, one, stronger AI security regulations for all companies, not just Chinese ones; two, transparency and accountability, especially in open-source AI like DeepSeek; and three, less corporate gatekeeping. Because AI should be open and competitive, not monopolized.

Just saying "China bad, don't use DeepSeek" doesn't fix the problem. AI is here to stay, and security should be the focus, not fear mongering. So, do you really think DeepSeek is a security risk? Or are we just being played by big tech?

Now, I wanted to keep this for a separate podcast, but I do want to talk about the AI Moat Playbook and using regulations as a weapon.

So continuing our conversation: if big companies don't actually fear AI risks but rather AI competition, here's how they keep that moat intact.

Step one: you push a narrative. "Open-source AI is a security risk." Right? They're not too worried about models that don't perform, but anything that does perform from any other country, and obviously China is a main competitor, gets called a security risk. So you push that narrative.

Step two: you lobby for regulations. You frame it as necessary for national security. Right? You gotta frame it.

And step three: you get governments to ban or restrict open-source AI, but allow their own proprietary models to operate under controlled conditions. And I don't know if you've noticed, but have you noticed how AI executives keep testifying before Congress about how dangerous AI is? They're crafting AI laws to benefit themselves, not to keep us safe.

The second part of the playbook: push corporate-controlled AI governance. The fact that big AI companies are writing their own AI governance policies under the guise of safety is the biggest red flag. I mean, if AI really needed stronger governance, shouldn't it be government-led and independent? Instead, we see Microsoft, OpenAI, Google, and Anthropic writing the rules for themselves. And why? Well, they're locking in their dominance.

And it's not really any different from what happened with social media. Remember how Big Tech pushed for stricter privacy laws that ended up hurting smaller platforms but left the big players untouched? They're running the same playbook for AI.

Part three is Big Tech's endgame: AI as a government-approved monopoly. If this works, here's what happens next. Governments will only be allowed to use AI from corporate-approved models. The private sector will face restrictions on using open-source AI. The narrative will shift from safety concerns to compliance requirements, making it legally impossible for smaller AI companies to compete. And open-source AI gets strangled by red tape while Big Tech stays dominant.

So the real question isn't "Is AI dangerous?" It's "Who gets to control AI?" And right now it looks like big tech is doing everything it can to make sure they're the only ones who can play. So give that a thought.

There's also another angle: if you look at different articles, you can see that AI dominated by other countries interferes with lucrative defense partnerships, like the ones OpenAI and Anthropic recently signed with defense tech firms such as Anduril and Palantir. I mean, they don't want anyone taking away that money!

All right.

So what did the United States do during Sputnik? Well, I had a little conversation with my buddy ChatGPT about that, and I thought it was rather enlightening. Sputnik was a wake-up call that led the U.S. to double down on science, technology, and education, culminating in NASA, the space race, and a massive investment in STEM fields. It was about outperforming.

But with DeepSeek and other Chinese AI advancements, the reaction from some U.S. firms seems to be more about restriction than acceleration. Instead of boosting domestic AI research or reforming policies to encourage innovation, we're seeing moves to ban or limit access, as if blocking the competition will slow down its progress.

Now, there are legitimate concerns about national security, AI alignment, and economic impacts. But if the goal is mainly to maintain leadership in AI, the better response would be investing more heavily in domestic AI research, fostering public-private AI collaborations, and creating an ecosystem that attracts global AI talent rather than driving it away.

So instead of out-innovating rivals like DeepSeek, so far the U.S. response has been: restrict access, increase regulation, limit open-source AI, under the guise of safety and national security. And while those concerns could be real, they're incredibly convenient for the companies that already control the AI space.

And the result? A widening gap between the open-source AI community, which is getting choked out, and the corporate-backed models that get regulatory preference.

I don't think we really want to say, "Let's freeze AI progress at the level where we control it." I don't think that's a good idea. It feels like an attempt to keep that AI moat alive, which clearly is not going to be a winner. Open source will outperform that.

If all you've got is these models and nothing else you're really offering, then you should be responding with better, faster innovation. But it feels more like they're trying to make AI progress a walled garden, when it's a global competition. By stifling open-source AI and foreign competition, the U.S. might actually be accelerating its own decline in AI leadership. China, the EU, independent researchers, they're all iterating fast.

And if you think about it, we've seen this in other industries before. The U.S. dominated semiconductors, but now Taiwan's TSMC and South Korea's Samsung lead. The U.S. had an early lead in 5G, but Huawei? Well, they surged ahead while the U.S. focused on sanctions. And in EVs and batteries, China scaled production while the U.S. lagged behind.

So if this trend continues in AI, DeepSeek and others could outperform simply by outpacing. Instead of banning them, a smarter move would be to supercharge domestic innovation: more funding, better infrastructure, fewer roadblocks for AI startups and open-source projects.

But if they're just trying to preserve the moat instead of building a faster boat, it's only a matter of time before somebody else sails past.

So there's a lot of concern about this, but you have to understand that these same concerns were raised with U.S. companies when they were breached and their data was put out. And what was the response? Well, OpenAI obviously locked ChatGPT down more. They responded to it. DeepSeek has responded to the breach they had; that exposed data isn't available anymore, but it was there. So they did respond.

Security incidents are part of putting out cutting-edge, fast-paced products like this, and what matters is responding to them appropriately. And as a user, reading a privacy policy, I know sometimes they're really long, but understanding what data is collected, where it goes, how it's used, how it's sold, and where it's being stored should direct you to the question: what do I want to put into this model?

So if I'm running an open-source model locally on my laptop, a small one, right? And I have no internet connection, or I do, but it's open source and I know where the data is going and how it's connecting, then maybe what I put into it is going to be completely different from what I put into any model on the internet that could potentially get breached.
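Here is what that local setup could look like in practice, as a minimal sketch. It assumes Ollama (https://ollama.com) is running on its default local endpoint and that a small open-source model has already been pulled; the model name below is a placeholder, so substitute whatever you actually have installed.

```python
import requests

# Ollama's default local endpoint; prompts sent here stay on your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to a locally hosted open-source model."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Because inference stays on this laptop, the privacy calculus changes:
    # what you're willing to type here can differ from a hosted chatbot.
    print(ask_local("Explain what a data breach is in one sentence."))
```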

I like to say in a lot of my videos and podcasts: assume a breach mentality, because it's going to get breached. So everything that you're putting out there, your emails, your usernames, your passwords, what you're saying on social media, just assume it's going to get out there. It's going to get breached. Someone's going to find it. Someone's going to make a video on it. Who knows? Whatever it is, you need to be semi-comfortable with that, because it will eventually, or at least potentially, be out there.

So when it comes to using DeepSeek, would I, as a security professional, say: don't even touch that, don't load it, don't go to the website, oh my God, it's terrible? No. That is exactly how you fall behind, especially in the tech industry, where understanding things matters.

Would I say: read the privacy policy, and make sure whatever you're putting into it, regardless, is something you'd be okay with going public? Yes. I've read the privacy policy. They're going to collect everything they can possibly collect on you, from your browser and from what you're putting in. They're going to collect all of your chats, everything that you say, and use that for their training models. And sure, you can delete your account, but they've already said they'll probably keep that data much longer for legal and regulatory reasons. And all the data is stored not in the U.S., it's stored in the People's Republic of China.

So if you're okay with that, like you just want to test it out and see what it's like on the web because you don't want to run it locally, and you're just having chats about public stuff you don't care about, then whatever. That's fine, but just be aware of it, right?

I would never put proprietary information into a model like that, or have a conversation with it so personal and private that I'd be completely embarrassed if it ended up on the front page of a news story. I wouldn't put stuff in like that. And if you have that mentality, then you're ahead of the game.

And then you can see what these open-source models are doing. You look at one and say: yeah, it's not as good as the current models that people are paying hundreds of millions or billions of dollars to train, but it's pretty darn good. And then you wonder, what's your real advantage here?

Because consumers, whether they're individuals or small businesses, are going to go for whatever makes sense, gets the job done, and is cheaper. And if there's an open-source model that's better, that gets the job done, that's cheaper, that they can run locally or in a hosted environment, then they'll go for that. Because at the end of the day, it's the answers and the output that matter, not how much money you're paying.

So these other companies shouldn't be putting up walls and trying to keep a moat around some proprietary smart model. They should be asking: what are the services and benefits we offer beyond the model that make this something you have to come here to get?

And I'll tell you right now, having looked at both DeepSeek and ChatGPT: DeepSeek is pretty good. ChatGPT, with its various models, is really good. If I'm just doing some writing and stuff like that, I would go for the free model. However, there are things in ChatGPT I really like. I really like the canvas, the ability to upload files, and switching models in between. I like the advanced voice chat. I like the dictation. I like building custom GPTs. There's a lot that OpenAI is offering as part of their package that makes paying that $20 a month seem reasonable to me, as opposed to just using a really good open-source model to get answers, or a cheap way to make API calls to get answers.
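For the "cheap API calls" option, many providers expose OpenAI-compatible endpoints, so the same client code can point at different backends. As a sketch: DeepSeek documents an OpenAI-compatible endpoint at https://api.deepseek.com with a "deepseek-chat" model, but treat the exact base URL and model name as assumptions to verify against current docs before relying on them.

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY_HERE",          # never hard-code a real key in source
    base_url="https://api.deepseek.com",  # swap backends by changing this URL
)

completion = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        # Assume-a-breach mentality applies to API traffic too: only send
        # content you'd be comfortable seeing leak.
        {"role": "user", "content": "Give me three blog post ideas about AI security."}
    ],
)
print(completion.choices[0].message.content)
```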

Maybe I would use both in different circumstances or situations. One could be a really cheap way to get data and make an app work, and the other would be: hey, I actually want to get real work done and I want a much higher level of service.

So if OpenAI can keep innovating and offering more features and services built inside their product, and not focus on keeping a moat, then I think they could be a very good winner, because it is a better product overall. It isn't just about the model; it's about how the tools interact with the model, what you can get done from a productivity standpoint, and what you can build. If you're just looking at a model alone, then building that moat is not a good idea. What else do you offer? And I think that's really what it's going to come down to.

But all I see on LinkedIn most of the time, or anywhere else I go, is how bad it is, and China bad, and it misses the point. It misses how important AI security and AI governance are from a worldwide perspective. Because if you're just going to try to out-regulate it, or not allow people or companies here to use it, that doesn't make you a winner.

I mean, you've already seen this with other countries: they basically block ChatGPT because it doesn't fit their GDPR regulations, or they don't want DeepSeek, or they don't want whatever. But those countries are never the ones innovating. They're never on the cutting edge, at the forefront. They're always using government leverage to try to slow down the pace of innovation. But at the end of the day, it doesn't really do that. It just puts you behind.

And my concern is more that these companies and these tech CEOs are having conversations with the government not because they're worried about our safety or want to make things better for everybody. It's because they want to make sure they have the contracts, the government contracts to use their services and their models, and charge what they want. Right? You make that a law, and then the government can't pick another product. You can't do a bake-off or choose best in class. You have to use these certain things, and then they can control the price. It doesn't mean it's the best. And that's just not great.

So AI, it's interesting to watch where this is going to go. And if it really is being compared to a Sputnik moment, I don't know that that's a great comparison, but it is what it is. Think about what the United States did during Sputnik as opposed to what we're doing now. They didn't try to regulate it and stop it. They innovated the heck out of it until they were better than the competition. They used it as a drive to be better, to offer more. As opposed to: hey, let's just block this and put out a narrative that it's terrible and nobody should ever use it. You'll have to be the judge of that.

Do keep an eye on this space, because I don't think the real issue is where the AI comes from, that is, China. I think it's about AI security. I think it's about contracts. I think it's about money, and trying to use sanctions and leverage to lock in models and products here, for things beyond our control.

Open source is good when it comes to this. Open source is good for a lot of things, and limiting it is not good, for a lot of reasons. So keep an eye on the space. Send a message if you agree or disagree; you can find my contact information. I'd be really curious to hear what you think about what DeepSeek has done, and about these types of models.

There have been other open-source models out there, but they really didn't worry the competition. "Eh, that model's not that good." "It's super expensive to train." "No one's ever going to use that." "It doesn't give very good answers, so we're not worried about it." But suddenly something comes out that you are worried about. Then it's like, oh. And if you know they already came out with this, and then they came out with a reasoning model, you know they're going to keep coming out with more.

So how are you going to keep innovating to make sure you're at the top of the game, so that people around the world want to use your products as opposed to others, rather than just using sanctions and regulations and laws to block the competition?

Give that some thought. And if you have a comment, find me in the show notes. Love to hear from you.

Support the Podcast with a Tip

If you're enjoying Byte-Sized Security and finding these practical tips useful, please consider supporting the podcast with a small contribution. It costs $17 per month just to cover podcast hosting fees, and your support helps offset the costs of producing this security resource and keeping episodes free. Even a tip of $1-5 per month from loyal listeners adds up and allows me to continue providing great cybersecurity info. Please consider a donation. I appreciate you helping sustain Byte-Sized Security! Now back to the security tips...
Support the Podcast

About the Podcast

Byte Sized Security
Snackable advice on cyber security best practices tailored for professionals on the go
In a world where cyberattacks are becoming more commonplace, we all need to be vigilant about protecting our digital lives, whether at home or at work. Byte Sized Security is the podcast that provides snackable advice on cybersecurity best practices tailored for professionals on the go.

Hosted by information security expert Marc David, each 15-20 minute episode provides actionable guidance to help listeners safeguard their devices, data, and organizations against online threats. With new episodes released every Monday, Byte Sized Security covers topics like social engineering, password management, multi-factor authentication, security awareness training, regulatory compliance, incident response, and more.

Whether you're an IT professional, small business owner, developer, or just someone interested in learning more about cybersecurity, Byte Sized Security is the quick, easy way to pick up useful tips and insights you can immediately put into practice. The clear, jargon-free advice is perfect for listening on your commute, during a lunch break, or working out.

Visit bytesizedsecurity.com to access episodes and show notes with key takeaways and links to useful resources mentioned in each episode. Don't let cybercriminals catch you off guard - get smart, fast with Byte Sized Security! Tune in to boost your cybersecurity knowledge and help secure your part of cyberspace.
Support This Show

About your host


Marc David

Marc David is a Certified Information Systems Security Professional (CISSP) and the host of the cybersecurity podcast, Byte-Sized Security. He has over 15 years of experience in the information security field, specializing in network security, cloud security, and security awareness training. Marc is an engaging speaker and teacher with a passion for demystifying complex security topics. He got his start in security as a software developer for encrypted messaging platforms. Over his career, Marc has held security leadership roles at tech companies like Radius Networks and Vanco Payment Solutions. He now runs his own cybersecurity consulting and training firm helping businesses and individuals implement practical security controls. When he’s not hosting his popular security podcast, you can find Marc speaking at industry conferences or volunteering to teach kids cyber safety. Marc lives with his family outside of Boston where he also enjoys running, reading, and hiking.