How Many Forms of ID Do I Need to Buy This Gift Card?

https://cisoseries.com/how-many-forms-of-id-do-i-need-to-buy-this-gift-card/

Tricking someone into purchasing gift cards is a popular theft technique. Because the technique is so well known, many online sites have put up additional barriers to purchasing gift cards, and buying them legitimately has become increasingly difficult.

This week’s episode is hosted by David Spark (@dspark), producer of CISO Series, and Andy Ellis (@csoandy), operating partner, YL Ventures. Our guest is Ariel Weintraub (@securitymermaid), CISO, MassMutual.

Full transcript (from cisoseries.com)

[Voiceover] Ten-second security tip, go!

[Ariel Weintraub] The key to cyber resiliency is agility. The threat actors that we’re up against are changing their techniques faster than we’re changing our own controls. So, why are we so afraid to change the projects and implementations that we’ve already started? Our threat actors aren’t afraid to change theirs.

[Voiceover] It’s time to begin the CISO Series Podcast.

[David Spark] Welcome to the CISO Series Podcast. My name is David Spark. I’m the producer of the CISO Series. Joining me, my co-host for this very episode is Andy Ellis, who is also the operating partner over at YL Ventures. Many of you know Andy because his voice sounds exactly like…

[Andy Ellis] This.

[David Spark] Exactly like that. You’ll hear a lot more of it later in the show. I just want to mention that we are available at CISOSeries.com, and our sponsor for today’s episode is PlexTrac. PlexTrac is a phenomenal company that allows you to create a proactive security environment by sort of orchestrating your purple teaming efforts, if you will. They have been a phenomenal sponsor, and we’re so thrilled that they’re sponsoring yet again.

Now, Andy, this episode is going to drop in July, and what we’re talking about now will be old history, and that is I am excited because two of my favorite teams in the NBA are going to the Western and Eastern Conference Finals. Should they both win and go to the Finals, it will be a real Sophie’s Choice for me in terms of who to root for. Because I lived in the Bay Area, I’m a huge Warriors fan, but I’m originally Bostonian, and I’m a huge Celtics fan. My two favorite teams. Do not know who to root for. So, those listening will know either… Because, by the way, this happened once before, and both teams lost and never even made it…

[Andy Ellis] Yeah, didn’t do the Finals.

[David Spark] …to the Finals.

[Andy Ellis] You’re like, “Oh. Why did I waste air space on that?”

[David Spark] I know. I did too. So, this may be that situation again. God willing, both of them will make it to the Finals. You live in Boston, but you’re not a fan of the Celtics, yet…

[Andy Ellis] That’s correct.

[David Spark] Because you’re originally a fan of the Lakers, who are the sworn enemy.

[Andy Ellis] Right. I grew up in LA in the ’70s and ’80s, so I have a lot of disdain for the Celtics, so my attitude…

[David Spark] And I, as a true Celtics fan, have disdain for the Lakers.

[Andy Ellis] Yeah. My attitude about the Celtics is if they win and do well, then a Boston sports team did well, and I’m happy because this is the new title town. And if they do badly, then I’m happy because the Celtics suck. And if they do mediocre, it’s like, “That was a waste of energy.”

[David Spark] Let’s hope not. I am hoping for a Celtics-Warriors Finals, and we will see…

[Andy Ellis] That’ll be hard because I don’t actually really like the Warriors. I don’t know why; they’ve just never really grown on me, and I’d hate to have to root for them in the Finals.

[David Spark] Well, I lived in San Francisco for 20-some-odd years, and I grew up in Boston.

[Andy Ellis] Yeah. But there’s a reason that my household is diehard Patriots fans. Because my wife, who grew up in Boston, and I, who grew up in LA, realized very early that we should mostly just ignore the fact that basketball exists.

[David Spark] I am not that way. I’m a huge basketball fan. But that is not the point of this show. This show is not about sports. It’s about cybersecurity. Although, and we’ve talked about this before, sports and games in general actually do conceptually help with cybersecurity. But let’s leave that for another show because we’ve done this before. Let’s introduce our guest. Thrilled to have her onboard. She’s the CISO of MassMutual, Ariel Weintraub. Ariel, thank you so much for joining us.

[Ariel Weintraub] Thanks for having me.

Well, that didn’t work out the way we expected.

3:48.025

[David Spark] When we can, it’s nice to keep security hidden so it doesn’t interfere with people’s jobs. But sometimes we need non-security people to be security aware, and we want them to participate in the security user experience, or SUX, as you called it, Andy, in your opinion piece on CSO Online. You talked about the experience of reporting phishing and how there’s always a form response to the user, who never gets any real follow-up as to what actually happened with that phish. To the user, that email just went into a black hole. But you also looked, and this was the part that really surprised me about your piece, at the experience of trying to legitimately buy gift cards, a very common phishing ruse. And honestly, I didn’t realize how difficult it is and how often legitimate purchases of gift cards get rejected, which is what you experienced.

[Andy Ellis] Yep.

[David Spark] So, two questions. Were you surprised by how often your legitimate purchases were getting rejected? And besides better follow-up on phishes, what are areas we should focus on improving the security user experience for non-security people?

[Andy Ellis] I was really surprised by how inconsistent the experience felt. Once I’d reverse engineered it, it was, “Oh, you can’t buy multiple gift cards at the same time,” or “You can’t buy gift cards from a retailer you’ve never had a relationship with before,” both of those online. Those are sort of the big red flags. But I’d buy a gift card and it’d be fine, and then I’d go to buy two gift cards, and they would just disappear from my cart. Or I would come back later, and my password had been reset, and they said, “Your password has been compromised,” and I’m thinking, “Really? I don’t think so.” And then later they canceled my account. So, I have to call Amazon and say, “Please unlock my account so I can do stuff.” And the guy just said, “Yeah, stop buying gift cards.” That’s the worst advice ever! What if I like giving gift cards to people? When I went to CVS and that was just…

[David Spark] Which, by the way, is the world’s easiest gift to buy for anybody.

[Andy Ellis] It really is. But my favorite was when I went to CVS and bought two gift cards. I’d figured out that two was the issue, and I physically scanned them in at the self-checkout, and the machine locks up and waits for a manager to come over.

[David Spark] So, this is not just online, this is in person as well?

[Andy Ellis] No. In person. So, it won’t let me check out until a manager comes over and clears the warning. And I’m so excited. This is the moment where the manager comes over and says, “Hey, we see you’re buying multiple gift cards. We just want to make sure you’re not the victim of a scam, and this isn’t somebody claiming to be the IRS.” A 15-second engagement would have been fine. Nope. Walked over, scanned their card, walked away.

[David Spark] Because they don’t care.

[Andy Ellis] They don’t care. And I’m like, “So, why are we having this experience moment where security is so clearly disconnected from the experience of the user and everybody who’s going to have to operate it?” We’re just not connected, and that’s a problem.

[David Spark] All right. I want to throw this to you, Ariel. Sometimes, you’ve got to get into the system. You’ve got to understand why gift cards are getting rejected, why I’m submitting this phish to you, and whether I should get to know if it was a valid phish or not. So, at some point, we do want to hear back. When does that work, when does it not, and are we doing a better job, or can we do a better job?

[Ariel Weintraub] I mean, on the phishing side, what you’re describing is more of a historical process. I can tell you we do provide feedback to the sender. We tell them, “Hey, this was legitimate. Here’s your email back,” or “Thank you for reporting this. This was actually part of a broader campaign that we know about,” or “Thank you for reporting this. This is actually something new that we haven’t seen before, and we thank you for contributing to improving our controls.” So, maybe just not everyone’s caught up to that type of feedback loop, but absolutely, that’s key in terms of a phishing program.

On the gift card side, I think maybe that’s just where we have machine learning implementations that are overzealous in trying to automate some of the controls, and I’m not sure what the feedback loop is there. But absolutely, I think we have to be more transparent back to the end user to explain why the friction was added. Ultimately, we want to have security for a purpose. And so in the off chance that we do add a little bit of friction, as long as we can explain it to the end user, I think they’ll know they’re in good hands.

[Andy Ellis] And I think that’s a great point, Ariel: if we add friction, we have to get back to the user. Too often, we add these systems assuming that we’ll, of course, do good customer support, but we don’t plan for it. I’m reminded of a retailer that I used to work with who had this great system. They had a set of users who were high net worth individuals, had online accounts with the retailer, didn’t change their passwords, were really poor at security, and were compromised on a regular basis. And the retailer would detect in real time that somebody had come in, logged into the person’s account, and put in some fictitious buying activity, and they didn’t stop the person.

What they did, because there was an off chance that they might be wrong, was have customer service call the person, because they had the person’s phone number, and they made it white glove service. They said, “Oh, we see you just ordered some things online. We wanted to let you know we had a special shipping deal,” or whatever it was going to be. And the person would be like, “I didn’t order anything,” and they’re like, “Oh, we think you might have had an account compromise.” So, they social engineered the people it had happened to. And if it was a legitimate purchase, the people loved it. They’re like, “This is amazing.” And so what should have been a moment of friction, someone calling up and saying, “Hi, I’m with the anti-fraud team. Did you really do this?”, became a moment of customer support.

Hey, you’re a CISO. What’s your take on this?

9:32.849

[David Spark] Does it get easier at the top? That’s what one redditor on the cybersecurity subreddit wants to know. From their viewpoint, it does, but from the commenter’s response, it all depends on the organization. So, I’m going to start with you, Ariel. Was it tougher for both of you in middle management before you became CISOs, or is it the other way around? And what factors do you think result in the workload being tougher or easier for a CISO? Ariel?

[Ariel Weintraub] I think it’s just different. I think easier or tougher are sort of subjective. So, the type of work is certainly very different. I would say I probably don’t do any work anymore. I go to meetings, I talk to people, maybe make some decisions. My hands aren’t on the keyboard doing the actual work, right? The individual contributors, the engineers, even sometimes the middle management, right? They’re doing the work. So, I think it’s relative and subjective a bit.

And it depends on the person. I mean, I’m super detail-oriented. I want to know all the things, which my team loves – not really. But I want to be informed, and when I get asked a question, I want to know the answer. I don’t want to have to go back and go to the team and ask them the question. So, for me, it’s gotten easier because it’s just even more information that I want to know about. I want to know what the latest techniques that the threat actors are using, I want to know the status of the programs we’re implementing, I want to know what’s working.

So, I think it’s the work is just different. For me, it’s about consuming a lot of information, parsing it out, ensuring that it aligns with our strategy, communicating horizontally, vertically, making sure all of our stakeholders are informed, making sure our executive team feels that they have the information they need, preparing for the board. So, I don’t think it’s easier, it’s just a little different.

[David Spark] All right. Andy, what say you? You went up the ranks when you were over at Akamai. Easier, different, harder, what do you think?

[Andy Ellis] Different is really the right word there. I think that the closer you are to being an individual contributor, and I often include middle management in that category, the more you can directly affect a number of things, and there are a lot of things that matter that you don’t have to worry about. Like, you might stress about them, “Oh, why is nobody fixing that thing over there?” But it’s not your problem to fix.

As you go higher and higher in the ranks, you start to realize that all those things that nobody is doing, you’re the one who has to stress about them. So there’s a lot more stress about whether prioritization is happening correctly and what’s really going on. At the same time, you have a lot more leverage to get things done, because while you don’t do the work yourself (though I think leadership is its own serious, heavy job), you have this amazing arsenal of people who get a lot of work done if you only set them up correctly to do work that will be effective for the business. So, you have a lot of stress there.

And of course, whenever you have mismatched expectations between you, your organization, and then the outside organization, obviously that’s a huge tension point, a huge place where risk comes in. And the bigger your organization is, the bigger your remit is, the more likely that is to go south at some point. And so you always carry that stress and that worry of, “Will I and the executive team continue to get along next year?”

Sponsor – PlexTrac

12:58.940

[Steve Prentice] PlexTrac likes to position itself as the purple teaming platform. It understands both sides of the pen testing battle, as well as the pressures and the tedium that come with each. As Dan DeCloss, founder and CEO of PlexTrac, explains, this helps security professionals focus on the right things.

[Dan DeCloss] When you’re focusing on the proactive side, you can really focus on that collaboration between the different teams and the different people. You start to build empathy for the different sides of the house, right? For the people doing the attacking, it’s hard work trying to solve a puzzle: how do I break in? On the flip side, the red team, the people attacking, can really see how hard it is for the defenders to know what to be looking for and how to stop it; the defenders have all the different holes to fill, whereas the attackers only need one. So, you build that empathy and that collaboration, and that definitely helps build the morale of the team. We know, in some instances, what to expect if we were to get hit by ransomware. So, you feel like you have a little more control.

[Steve Prentice] It also helps pen testers deal with all that paperwork.

[Dan DeCloss] We make it so much easier to write reports, and then you get to stay focused on the real work. Eliminating some of that mundane work really boosts morale, like, “Hey, I was actually able to focus on the right things and just continue to improve our security posture,” rather than always being in this reactive responsive mode where you just feel like you’re getting pummeled with data and pummeled with alerts and never feel like you’re making progress or catching up.

[Steve Prentice] For more information, visit plextrac.com.

It’s time to play “What’s Worse?”

14:28.270

[David Spark] All right. We’ve come to the game “What’s Worse?” The title says it all. Ariel, essentially, two bad scenarios. Which one is worse, truly as a risk management exercise? You’re not going to like either one, but one has to be claimed as worse than the other. I always make Andy answer first. If you agree with Andy, he wins. If you disagree with Andy, I win. Here it is. It comes from Jason Dance of Greenwich Associates, who’s given us a number of great “What’s Worse?” scenarios. Here we go. Your incident response team, whose skills are more on premise centric, tells you that your cloud instance is pwned. Or your cloud incident response team tells you that your on premise private cloud is pwned. Which one is worse, Andy?

[Andy Ellis] So, I’ve got a compromised system, I’m being told about a compromised system that is either in my cloud environment, and I’m being told by my on premise security team…

[David Spark] Right.

[Andy Ellis] …or it’s in my hybrid environment on premise, but I’m being…

[David Spark] It didn’t say; it says your on premise private cloud. Yeah, I guess we could call it… But it’s an on premise private cloud. Isn’t that the same darn thing as just being on premise at that point?

[Andy Ellis] I think so. I was trying to figure out if there was something about hybrid, but let’s just assume hybrid doesn’t matter. That’s a really interesting one because I don’t think either one of these is bad.

[David Spark] Well, you’ve got a problem. That’s the bad part.

[Andy Ellis] I mean, I’ve got a problem but this…

[David Spark] I don’t think you’re going to make any friends with answering either of these. Because on the one hand, you’re going to say, “Oh, well, you know. It’s worse that your cloud team finds something on prem because they shouldn’t know about all those legacy technologies that have been around for 40 years.” But then you’re going to frustrate the mainframe operator who’s been doing that for 20 years.

[Andy Ellis] Yeah.

[David Spark] So, I think either way, you’re going to frustrate part of your team with answering that.

[Andy Ellis] So, I actually think that both of these are good scenarios because it means I have teams that look outside their purview.

[David Spark] Mm-hmm.

[Andy Ellis] But I most like that my on premise team is learning about cloud and is able to do things, because I’m going to have to retrain and reskill most of them over the next five years to move to cloud, so that’s a great sign. So, I like both of these, but I’ll go with the slightly worse one being that my cloud people have to cover for my on premise people, who apparently don’t exist.

[David Spark] All right. Ariel, I’m throwing it to you. Do you agree or disagree?

[Ariel Weintraub] Agree.

[David Spark] For the same reason or any other reason?

[Ariel Weintraub] For the same. No, that’s absolutely the challenge. I mean, upskilling team members. It’s probably less likely that you’d train a cloud engineer to learn how to operate a mainframe, right?

[David Spark] By the way, let me ask this question. Has it ever happened in the history of computing?

[Andy Ellis] Oh, absolutely. You often…

[David Spark] They have to go back to mainframes?

[Ariel Weintraub] Well, on the mainframe side – I’m sorry, I’m probably going to upset a lot of our mainframe operators – but they’re all retired. So, now, actually, you do need to upskill others to learn how to support the mainframe. Because unfortunately, in many organizations, mainframe’s not going away. So, if all the originals are gone, you do have to upskill some of the younger ones.

[Andy Ellis] Yep.

[David Spark] Do they get upskilled kicking and screaming?

[Ariel Weintraub] Yeah. I mean, everyone wants to learn cloud. I’ve never heard of someone’s development plan saying, “Number one: Learn the mainframe.”

[Andy Ellis] “I would like to learn the AS/400 at this point in my career.” Yeah. Not. Not so much anymore. Not in this millennium.

[Ariel Weintraub] Maybe it’s an opportunity.

[Andy Ellis] Well, you do have the people on the threat research side who love going and finding dead technologies that are still in use, learning them so they can break them. But that’s a different sort of problem entirely. And maybe that’s what my cloud team was doing: they were compromising my on premise system to point out problems so that they could get rid of it faster. I don’t know.

[Ariel Weintraub] Well, now, you’re just thinking like a threat actor.

[Andy Ellis] I’ve always done that in my career.

It’s time to measure the risk.

18:18.813

[David Spark] Can companies be more secure if they’re radically transparent? That’s what Jamil Farshchi, CISO over at Equifax, argues. Equifax released its 2021 Security Annual Report, which outlines the company’s cybersecurity investments and provides details about its policies and procedures. In an article by Mary Pratt on CSO Online, Farshchi said, “If you’re a customer or an investor, it shouldn’t take a breach for you to find out a given company’s security posture.” It’s a fair argument, and the SEC has proposed a rule on this very issue. So, how can radical transparency help, and where can it backfire? Should we have been doing this all along, and all our fears are unfounded? Or have some of these fears been realized, which is why some policies and procedures must stay hidden? Andy, what do you think?

[Andy Ellis] So, I love Jamil, and I think what he’s doing is fantastic for Equifax, but we should take the applicability everywhere else with a grain of salt. And I don’t think this is a binary situation where you have to be completely transparent or completely opaque, though it’s sort of being posed that way. I think Equifax needs to be radically transparent because they have an interesting business model that is based on accumulating open source knowledge and making judgments about people that a lot of people are uncomfortable with. And so maybe some radical transparency about how they’re going to protect all of that information is helpful to their continued existence.

[David Spark] Has anyone ever looked at their credit report and gone, “Oh, yeah. That’s me exactly”?

[Andy Ellis] Yeah, it’s weird. But for many of us, I do think that often we are overly opaque about things that don’t matter when we should be transparent. Like, for most companies, if you’re building toward the BeyondCorp zero-trust model for your enterprise, don’t go it alone. Talk to your peers, understand what other people are doing, share your lessons learned, because you might end up down paths that are ugly and painful.

We implemented 802.1X for our wired connections when I was at Akamai because we were worried about things like the Pwn Plug, if people remember that. And at the end of the day, we ended up turning it off because we got rid of wired connections entirely. So, we spent like three years rolling out a security technology that three years later was basically DOA. And maybe if we’d been more transparent about what we were doing, we’d have come to that conclusion sooner.

I’m a fan of more transparency, especially around things that aren’t key to your crown jewels but sit around them. I think you do have to consider whether being transparent about your security is going to give attackers a roadmap or not. But I’m sort of on the fence on this one.

[David Spark] Well, a lot of people are. Ariel, where do you stand?

[Ariel Weintraub] I’m going to take the fence too, but I would separate the what from the how. Policies are the what you’re doing, and procedures are the how you’re doing it. I think it’s always important as a company to share what data you’re protecting, what your mission statement is, what your vision is, and cybersecurity’s a component of that. So, transparency around the overall what of a cybersecurity program is absolutely an opportunity, and we’ve seen it most recently in ESG programs as well, with companies being more transparent about what they’re doing and disclosing.

But you shouldn’t disclose the how you’re doing it, because that’s where we’re trying to be more innovative in our controls, trying to do things differently from what our threat actors know we’re doing, right? I keep coming back to knowing what the threat actors are doing, because we don’t want to be explicit about which control they need to circumvent. So, I think you have to separate the two. But it is important to gain trust from our customers and from others that are looking at us and what we’re doing. So, I’m still taking the fence here, but I think there is a line that we can cross a little bit in terms of sharing more.

[Andy Ellis] And I think there’s a difference between the business-to-consumer market, where I think more transparency is probably better because the consumers have so little leverage, and the business-to-business market, where one or two hostile actors inside your customer (and I don’t mean that they’re threat actors, they’re just obstreperous and painful) can create issues. I’ve seen places where your radical transparency, “Oh, here’s our philosophy about this type of security,” derails an entire sales call because the security person on the other side wants to argue with your choices about what your source of entropy is. And so you have to be aware that sometimes you have customers who have very deep, religiously-held opinions about what policies you should have, and when you disagree with them, sometimes it’s better not to expose that disagreement by shoving your policies in their face.

[Ariel Weintraub] That’s a good point. There’s not just one way to execute a control, right? I think this is also happening with the convergence of different types of technology, suites of controls. For example, it used to be kind of a commodity to always have antivirus on your machine. But antivirus is kind of dead, right? Still, there are some who ask, “Do you have antivirus on your machine?” Well, maybe I don’t have something that specifically falls in the antivirus category, because now I have something in EDR, which is a little bit different, right? But if you’re looking at it as a check-the-box exercise, nobody cares specifically about the implementation. Ultimately, it’s the what that we’re protecting.

There’s got to be a better way to handle this.

23:49.426

[David Spark] How do you protect your machine learning algorithms and AI from absorbing poisoned data? A malicious attacker could poison shared public data. During the learning process, a security tool could ingest purposely mislabeled data that throws off its training, making the results wrong, doubtful, or completely misdirected. It doesn’t take much. Researchers from Taipei showed that you can bypass defenses by poisoning less than 0.7% of the data, as reported by Tim Culpan of Bloomberg in the Washington Post. That means the mere use of public data could nullify the validity of your ML, causing your security controls to send poor alerts and leaving your defenses vulnerable. So, Ariel, what can we do to avoid poisoned systems, and how can we tell if our systems have been poisoned?

[Ariel Weintraub] This is a really interesting and significant problem, and the one thing that comes to mind for me is testing your own controls. Most companies have some sort of purple team or red teaming capability. So, if you simulate activity where you have an expectation of what the output should look like, you can validate that the models themselves are working correctly. That can be really difficult given the statistic here, around 0.7%; you have to be really precise with the activity that you’re simulating. So, I think it’s test often and monitor the drift of the alerts that you’re seeing. The more metrics the security operations center can develop around what the day-to-day looks like, the better. If you start to see even the smallest drift, given the small amount of input poisoning that can occur here, that’s really the only way you can catch it.

But I think what this means is that we have to shift where we’re putting the protection of our controls and where we’re investing our humans. Historically, from a security operations center perspective, you’re investing your humans in monitoring the alerts that generate out of your SIEM. Well, with more machine learning, maybe we don’t have to put as many humans toward those alerts. Maybe we need some data scientists who are actually looking at the input and the output, and whether the output looks like what it should be. But I don’t have a perfect answer to this. I think this is an evolving problem, and certainly something we should watch.
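To make the drift monitoring Ariel describes concrete, here is a minimal sketch. It assumes only that a SOC can export a daily alert count per detection rule; the rule names, counts, and three-sigma threshold below are invented for illustration and don’t reflect any particular SIEM’s API.

```python
# Minimal sketch of drift monitoring on SIEM alert volumes.
# Assumption: each detection rule emits a daily alert count, and we flag
# any rule whose latest count sits far outside its own recent history.
from statistics import mean, stdev

def drifted(history, latest, z_threshold=3.0):
    """Return True if `latest` deviates more than z_threshold
    standard deviations from the historical daily counts."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical daily alert counts per detection rule over the past week.
baseline = {
    "phishing_url_click": [41, 38, 44, 40, 39, 42, 43],
    "impossible_travel":  [5, 6, 4, 5, 7, 5, 6],
}
today = {"phishing_url_click": 40, "impossible_travel": 19}

for rule, history in baseline.items():
    if drifted(history, today[rule]):
        print(f"DRIFT: {rule} at {today[rule]} vs. baseline {history}")
```

A real deployment would track richer day-to-day metrics than raw counts, but even this crude baseline surfaces the “smallest drift” signal Ariel mentions: the jump in impossible_travel alerts gets flagged, while normal variation does not.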

[David Spark] Andy, what are your thoughts?

[Andy Ellis] So, first I want to point out that 0.7% is actually a huge volume of the data. Like, it sounds small to the human in us, but when you think about the size of a training dataset, 0.7% is pretty significant. You’ve made a lot of changes to it. So, first let’s just address, like, this is a core problem of machine learning. You basically get garbage if your training data is garbage itself. And that extends not just in security but just in real life.

Consider the case that’s been bandied about of the hand dryers with an optical sensor that can see Caucasian skin really nicely but can’t see Black skin, right? That’s just bad training data that propagated through. Or practices you’ve seen in the justice system, where the machine learning algorithms that try to predict who’s going to be a recidivist take in a lot of factors, but often all those factors have a high correlation with race or income level. And so what the machine learning model is really doing is taking a quick shorthand, because it says, “Oh, there are like 75 things in common, plus this one thing.” And that one thing isn’t what matters, but it’s the easiest thing to key on.

So, let’s recognize that machine learning is very problematic when it has inputs that include any form of bias already in them, and this is just an example of researchers creating an artificial bias in some fashion to make adversaries look legitimate rather than what’s often the case of making legitimate people look like adversaries. So, I think Ariel has some great ideas here that everybody should understand. But at the core, I think the biggest challenge is that we often use machine learning to try to do things that our humans can’t do yet rather than using machine learning to replicate what our humans have been doing to let our humans start to push the boundaries. And that’s where I think machine learning is far more valuable, is eliminating the tedious work that humans are currently doing. And look, the machines will get it wrong as often as the humans do, and they’ll just get it repeatedly wrong fast, instead of the humans getting it repeatedly wrong at high cost and slowly.
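To illustrate the scale Andy describes, here is a toy sketch of label-flipping poisoning. Everything in it is invented: a nearest-centroid classifier stands in for the real models (the research Culpan reported on attacked far more sophisticated defenses), and the data is a synthetic two-class set. Relabeling just 0.7% of the training samples, 7 of 1,000 “malicious” points marked “benign,” shifts the decision boundary enough that a borderline malicious sample now slips through as benign, exactly the “make adversaries look legitimate” failure mode.

```python
# Toy label-flipping demo: poisoning 0.7% of training labels moves a
# nearest-centroid decision boundary enough to flip a borderline sample.

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(p, centroids):
    # Assign p the label of the nearest class centroid (squared distance).
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(p, centroids[label]))

# 1,000 clean samples per class, spread along a line near x=0 and x=4.
clean = {
    "benign":    [(0.0 + i * 0.001, 0.0) for i in range(1000)],
    "malicious": [(4.0 - i * 0.001, 0.0) for i in range(1000)],
}

# Poison 0.7% of the data: relabel 7 malicious samples as benign.
poisoned = {
    "benign":    clean["benign"] + clean["malicious"][:7],
    "malicious": clean["malicious"][7:],
}

borderline = (2.005, 0.0)  # a sample sitting just past the clean boundary
for name, data in (("clean", clean), ("poisoned", poisoned)):
    cents = {label: centroid(pts) for label, pts in data.items()}
    print(f"{name} model classifies borderline sample as:",
          classify(borderline, cents))
# clean model    -> malicious
# poisoned model -> benign
```

The boundary only moves by about 0.01 here, which is Ariel’s point about how precise your simulated activity and drift metrics have to be to catch poisoning at this scale.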

[David Spark] Go ahead, Ariel.

[Ariel Weintraub] I agree with you, but back to something you said in the beginning, I think there’s a difference depending on whether you have control over the inputs. When you’re designing something that needs to be trained on a set of data, and you have control over that dataset, then you certainly have an opportunity to ensure that the input isn’t poisoned. But in many cases, someone may be able to interact with your dataset directly, you know, by inputting information in a website form, something like that. There’s no way to validate that there’s nothing invalid about that input data. So, I think it depends on the use case as well.

[Andy Ellis] Yep. Everyone remembers Microsoft Tay.

[David Spark] Yes. That was an enormous failure. And by the way, speaking of that, we have an amazing episode of Defense in Depth specifically about machine learning failures with the very wise Davi Ottenheimer of Inrupt, and I highly recommend people go check out that episode as well.

Closing

29:19.611

[David Spark] Well, that brings us to the very end of the show. Thank you very much, Ariel. Thank you very much, Andy. Ariel, I let you have the very last word on the show and, by the way, I always ask our guests are you hiring, so make sure you have an answer to that. I do want to mention our sponsor again, PlexTrac. Thank you so much, PlexTrac, for sponsoring this very episode of the show and for being a phenomenal continuing sponsor of the CISO Series. Andy, any last words on our topics today?

[Andy Ellis] Well, since we started with basketball, I’m just going to say the men’s football season is about to begin. The women’s football season just wrapped up, and I’m really and truly hoping that the Renegades successfully defended their title. But I’ll find out by the time this airs.

[David Spark] Mm-hmm. And Ariel, any last words on any of our topics, and are you hiring over at MassMutual?

[Ariel Weintraub] I have to say I was a little bit quicker and wittier before I had COVID, so my response time is not as good here. But on the hiring side, we’re always hiring. So, we look for intellectual curiosity at all times whether we have an open position or not, always hiring and looking to bring in top talent.

[David Spark] All right. So, if someone’s looking to connect with you or to find jobs, how would they connect with you?

[Ariel Weintraub] Find me on LinkedIn.

[David Spark] Best way to do it. All right. Well, thank you very much, Ariel. Thank you very much, Andy. And thank you to our audience, as always. We greatly appreciate your contributions, as always. Thank you so much for supporting the CISO Series Podcast.

[Voiceover] That wraps up another episode. If you haven’t subscribed to the podcast, please do. We have lots more shows on our website, CISOSeries.com. Please join us on Fridays for our live shows – Super Cyber Friday, our Virtual Meetup, and Cybersecurity Headlines Week in Review. This show thrives on your input. Go to the Participate menu on our site for plenty of ways to get involved, including recording a question or a comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly at David@CISOSeries.com. Thank you for listening to the CISO Series Podcast.

