May 14, 2021

Software Security Gurus Episode #17: Mike Shema

Welcome to episode 17 of the Software Security Gurus webcast.

In this interview, Matias Madou chats with Mike Shema, Product Security at Square. They discuss his take on proactive security, and how the relationship between developers and the security team impacts this approach. They also unpack the collaborative role these teams can play, and the secret ingredient to a successful shift left.

Have you got a topic idea in mind, or want to nominate a guru? Get in touch!

Introduction: 00:20

Proactive security, and how it impacts developers and the security team: 01:35

Should you teach developers "how to hack"?: 09:24

Successful security practices for developers: 12:50

Listen to the podcast version:

Read the transcription:

Matias Madou:
Welcome to today's Software Security Gurus Webcast. With me, Mike Shema. Welcome, Mike.

Mike Shema:
Hi Matias. Thank you.

Matias Madou:
Of course. Mike, do you mind saying a few words about yourself?

Mike Shema:
Sure. I think perhaps most relevant to security, my journey through the industry has touched on a couple of areas, from consulting and pen testing, to building a security product, to now working on product security teams. And I've enjoyed that because it's given me a pretty good perspective, or a bunch of different perspectives, I guess I should say, about: What does security mean? How do different people approach it? And honestly, what do users need? Users being enterprises, customers, the population out there, everybody who has a web browser, or even developers for that matter, the people who are building software.

Matias Madou:
Do you consider yourself an engineer that moved into security?

Mike Shema:
That's an interesting question. I think, most honestly, I'd have to say I'm a security person who dabbled in engineering, or perhaps dove into engineering for a while, and then pulled back to security. Security is fun; it's a lot of neat, creative thinking, pulling things apart, and that's the part that appeals to me.

Matias Madou:
Okay. So Mike, I have two topics in mind, if that's fine with you, and I hope they are near and dear to your heart. The first one is a little bit more around proactive security, and especially the relationship, hence my question, between developers and security. Maybe you can talk a little bit about Square too, if that is possible. But did you one day walk up to your engineers and say, "Hey, today is the day that we shift left, go"? I guess not. So in your experience, what is the secret ingredient to getting developers and security to collaborate effectively?

Mike Shema:
Yeah, I was laughing because, yeah, we don't just walk up to them and say, "Shift left. Time to go." We give them 24 hours' warning, and then we're like, "Tomorrow, we're going to shift left." No, I think the two key points you put in that question were collaboration and DevOps and security, or developers just in general and security, working together. And so one of the ways we approach it is the idea that, developers, you're writing the code, so it's probably good that you be responsible for a lot of the security. But you don't just throw that mantle of responsibility at someone and say, "Cool. Now, go."

Mike Shema:
And we could come up with some analogies: you don't push somebody out of a plane and say, "Figure out how to work the parachute," or throw somebody into the deep end and say, "I hope you remember how to swim." You have to give them the tools first. And so the way we've approached that is to build that collaborative aspect through threat modeling, basically. What are you building? Let's talk with the developers and say, "Okay. Great. This is what you're building. What can go wrong?"

Mike Shema:
And that's where we start to have a lot more of those conversations. It's not exactly a Trojan horse way of approaching the responsibility, but it's planting that seed, or planting that way of thinking: you're building code, you're building features, but have you been thinking about the ways they could be abused? And it's that abuse aspect that I think developers aren't necessarily always thinking about, and that's where security can come in, help with those conversations, and help that idea of shift left.

Mike Shema:
And if you notice, what I'm saying here is very much an engineering-focused approach in terms of thinking through the software, as opposed to saying, "Here's how you're going to use this source code scanner, or this web app scanner, or this dynamic scanner, or this other tool." I don't think it's successful just to say, "Here's a bunch of tools. I'll teach you how to use them."

Matias Madou:
Oh, I completely agree. I think there's a lot of human aspect to building code, and we don't want to transform our developers into robots that only take orders and are controlled by other machines. So what do you do then from a cultural perspective, security versus developers? If just handing over tools is not going to work, how do you collaborate together?

Mike Shema:
Yeah. Very much, as you were just saying, a lot of it is just framing it, saying, "We're here to help. We're here to work with you to find out what could go wrong, and what we can do about it." And the reason that culture has been really easy where I'm working now is that it's a very engineering-heavy company. There are a lot of engineers working on important problems, and they're working with other engineers through design documents. They're saying, "Here's a broad plan for how we're going to build this feature, or here are some broad goals for this particular product." They reason through a couple of those aspects, and then, really helpfully, they have a section that says, "Here are some security and privacy considerations."

Mike Shema:
And sometimes that will be one or two sentences, or sometimes I'll be tagged in, and some will say, "Hey, Mike, can you help us out here?" And other times, honestly, some of the engineers who have been exposed to more security conversations have two or three paragraphs in there that say, "Well, we have some sensitive data that we're handling, so we'll make sure we encrypt it in these areas, we just want to double-check it. Or here are some authentication decisions we want to talk about. What makes the best sense? Or what is the 'industry best practice' around this?"

Mike Shema:
And so it's that great start within a document, being invited in for commentary. And being invited in honestly makes my job easier, because I don't have to convince people that I'm here to help or to have a good conversation.

Matias Madou:
So now I'm wondering what your ratio is, security people versus developers, because it sounds manageable. In a lot of organizations, you see very few security or AppSec people helping out developers, and somehow it does seem manageable for you. So either you are a large group for the number of developers, or you have figured something out.

Mike Shema:
Well, possibly I figured out how to focus first on the parts that work best, and I sort of skipped over the parts that point to what you're alluding to: maybe it doesn't quite scale as much. I can't think of the exact numbers offhand, but I want to say our ratio of AppSec, or sort of my role, to engineers is probably at least one to 50, maybe one to 100, so that's not great.

Matias Madou:
No.

Mike Shema:
And so what that means ... Yeah. Exactly. So I'm invited into some documents for commentary, and I can have good relationships with many of the engineers. But that definitely sets up areas where my attention is over here, and I'm either implicitly or explicitly ignoring some other areas because I don't have the time. Or honestly, maybe I just haven't been paying attention and didn't realize there's actually some risk over there that really should be addressed.

Matias Madou:
It's hard to find talented people and improve that ratio. Maybe steal some developers over to the security side, maybe.

Mike Shema:
Well, and that's totally the way to do it. I try to avoid ... There's the idea of teaching someone to fish rather than being the fisherman for them. I messed up the analogy there. But what I try to do, with that threat modeling approach of "What are you building? What could go wrong?", is just plant that way of thinking. And so hopefully the developers, over time, are thinking, "Oh, I recall this conversation we had about, not even something as simple as cross-site scripting or input validation, but things about our business, our workflow, the business logic that we're building. We made these assumptions; maybe these assumptions could break." They have that sort of adversarial thinking about what they're building.

Matias Madou:
And actually, that leads into my second topic, if you don't mind, where the first one was really about being proactive. But then I also hear aspects of breaking. And to be honest, I'm not a big fan of teaching your developers how to hack and telling them, "You just have to think like the adversary." The analogy that I like to use is: if I'm going to make a meal later today, my wife cannot just tell me, "Think like a Michelin-star chef, and we will have a great meal this evening." It takes practice and skill, and maybe it's a little bit overkill to go through all that skill development if I'm only cooking once a month.

Matias Madou:
Do you really want to teach them how to hack? Or how do you see that? Because it seems like you're doing something in that area, but a little bit different as well.

Mike Shema:
Yeah. And I do agree, because if you just say, "Think like a hacker," that throws away a lot of context. What kind of hacker? Do they have a particular goal? Are they trying to get at a particular secret? Is it a hacker who is just looking for some quick bug bounty results, just some simple cross-site scripting, SQL injection, things you can just automatically scan for?

Mike Shema:
So, a bit of a callback to a comment you made earlier: I would by all means avoid saying, "Think like a security scanner," because why bother having somebody just do that? Just run the scanner. Instead, have them think like a developer with, I will say, that adversarial mentality in the sense of: How can you be creative about the ways that your app could be abused or misused? Appeal to, call it a computer science aspect, or appeal to their engineering thinking, and say, "You've got a state machine."

Mike Shema:
Okay. What about state transitions? Could you execute any unexpected ones? Are there any undesired state transitions? What if you just broke a state transition or skipped one? Could you even bypass a particular state transition? In a theoretically perfect state machine, you shouldn't be able to bypass a transition. But the code we write is imperfect, it has flaws, and so that's how I would start to frame the discussion. And then, rather than "think like a hacker," just say, "How might somebody abuse this? Or what would be the benefit to someone abusing this?" And think through those benefits: well, somebody just wants to gain access to accounts, or they want to DDoS us.
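To make that state-transition framing concrete, here is a minimal sketch in Python, using a purely hypothetical order workflow rather than anything from Square: the allowed transitions are written down explicitly, so an unexpected or skipped transition is rejected instead of silently accepted.

```python
# Minimal sketch of a state machine that rejects unexpected transitions.
# States and transitions are hypothetical, purely for illustration.

ALLOWED_TRANSITIONS = {
    "cart":            {"payment_pending"},
    "payment_pending": {"paid", "cancelled"},
    "paid":            {"shipped", "refunded"},
    "shipped":         {"delivered"},
}

class OrderStateMachine:
    def __init__(self):
        self.state = "cart"

    def transition(self, new_state):
        # Refuse anything not explicitly allowed -- for example, jumping
        # straight from "cart" to "shipped" without ever paying.
        if new_state not in ALLOWED_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

order = OrderStateMachine()
order.transition("payment_pending")
order.transition("paid")
# order.transition("cart") would raise, because that path was never allowed.
```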

Mike Shema:
Or, working in the fintech space: they have a credit card, and they basically want to monetize it. How do I monetize this credit card? Or I have a bunch of stolen credit cards, how can I use them and get past fraud detection? Things like that. So focusing more on the context of what app is being built is, I think, a lot more helpful and a lot more useful advice than just a general "think like a hacker," because, as I said, I don't really think we need to turn every developer into a pen tester. But we should expose them to some of the interesting ways that these attacks can happen.

Mike Shema:
It's always kind of fun to pull off a cross-site scripting attack. It's neat to do a drop tables or something with a SQL injection. But you only really need to do that once to give an appreciation for what the flaw is. You don't need to give them the detailed jargon: name three types of cross-site scripting, explain how you do blind inference SQL injection through a timing channel attack using MySQL version testing. All of that doesn't really matter. It may be fun for pen testing in obscure areas. But as you're building code, what part of that code is creaky? And honestly, hey developer, how would you break your own app?

Matias Madou:
Love that. So it sounds to me like you're putting most of your effort into proactive security, doing threat modeling.

Mike Shema:
Yeah.

Matias Madou:
And also educating your developers in the fastest way possible, so that they don't have to know all the attack vectors. If they know one, and they know how to prevent an entire category, or build it into the framework, that is the fastest way forward. So it seems like you're doing a lot of work upfront. How ... Go ahead.

Mike Shema:
Yeah. I was going to say, we're doing quite a bit of that in the sense of: How can we get rid of a particular attack class? Rather than having developers worry about, these are the Content Security Policy headers and we've got to retrofit them onto our app and figure things out, what can we be smarter about to deal with cross-site scripting? Things like that. And I have avoided mentioning specific tools, mostly because we're still building up maturity in different areas of capability. But that's not to say I would avoid the use of linters, basically finding those easy code smells or those easy problems, especially as we have a lot of developers working within cloud environments, AWS.
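For context, a Content Security Policy is just an HTTP response header the server sets. As a rough sketch of why applying it centrally beats retrofitting it per app, the example below uses only Python's standard library and a made-up policy value to attach the header in one shared middleware; it is illustrative, not a recommended policy.

```python
# Sketch: a WSGI middleware that attaches a Content-Security-Policy header
# to every response, so individual apps don't each retrofit it by hand.
from wsgiref.simple_server import make_server

CSP = "default-src 'self'; script-src 'self'"  # hypothetical policy value

def csp_middleware(app):
    def wrapped(environ, start_response):
        def start_with_csp(status, headers, exc_info=None):
            # Add the policy header on top of whatever the app set.
            return start_response(status, headers + [("Content-Security-Policy", CSP)], exc_info)
        return app(environ, start_with_csp)
    return wrapped

def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [b"<h1>hello</h1>"]

if __name__ == "__main__":
    # Serve on localhost:8000 with the CSP applied centrally.
    make_server("", 8000, csp_middleware(hello_app)).serve_forever()
```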

Mike Shema:
So if we can do things within Terraform and just say, "Here's our infrastructure as code, here's what your environment looks like," then rather than trying to also teach a developer to be a pen tester and a hacker, and now also to have really good infrastructure and network design skills, we'll say, "Well, let's model this within Terraform, and we'll work in partnership with some experts in AWS, experts in cloud security, to figure it out." What does a good paved road look like? What does that good environment look like? And then we'll just give them really immediate feedback that says, "Yeah, you're deploying your product in a great way. Or here are some ways that you're driving off of this paved road. Did you want to do that?"
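As a sketch of what that paved-road feedback could look like, and assuming, purely for illustration, a plan exported with `terraform show -json` and the common `planned_values` layout rather than any actual tooling at Square, a small check could flag S3 buckets whose ACL opens them to the public.

```python
# Sketch: scan a `terraform show -json` plan for S3 buckets with a public ACL.
# The JSON layout varies by Terraform/provider version; this assumes the
# common planned_values.root_module.resources shape and the legacy `acl`
# attribute, purely for illustration.
import json
import sys

PUBLIC_ACLS = {"public-read", "public-read-write"}

def find_public_buckets(plan):
    offenders = []
    resources = (
        plan.get("planned_values", {})
            .get("root_module", {})
            .get("resources", [])
    )
    for res in resources:
        if res.get("type") == "aws_s3_bucket":
            acl = (res.get("values") or {}).get("acl")
            if acl in PUBLIC_ACLS:
                offenders.append(res.get("address", res.get("name", "?")))
    return offenders

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:  # e.g. plan.json from `terraform show -json`
        plan = json.load(fh)
    for address in find_public_buckets(plan):
        print(f"off the paved road: {address} has a public ACL -- did you want that?")
```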

Mike Shema:
And the answer could be, "Oops, we didn't realize that. We're going to go tweak a setting, lock things down, make sure those S3 buckets don't get opened to the public," very, very common mistakes. Or, in that sense of being collaborative, maybe they did want to drive off road. Maybe they want to use Lambdas now, and because we don't have a good story around Lambdas yet, we have to work with them to figure it out. Okay, how do we do good secrets management? Does the way that we drop secrets into Lambda functions change some of our threat models? Meaning our threat models as a security team: how well do we understand this new area?

Mike Shema:
And I think just trying to answer that question, building confidence in how well we understand an area, also helps us talk with the developers and say, "Go forth and conquer. Write the code that you need to write to build a function, to build a product."

Matias Madou:
I like your approach of just telling the developers what they're doing. It's not necessarily wrong. You tell them, "Hey, by the way, this is what you've done. Is that right or wrong? It can be right, but we just wanted to inform you that there's another way." And by the way, I think there's definitely going to be a revolution in giving real-time feedback to developers through linters and other tools, so that developers get instant feedback: hey, this is what you're doing, it can be okay, it can be totally wrong, but know that these are your other options. So, love that. Love that you're working on that too.

Mike Shema:
Yeah. I think it also helps with the idea that there are some obvious things we can say are wrong, and those are really easy to point out. If you're doing string concatenation for SQL queries, that's obviously wrong. That's also pretty old-school thinking. And one of the things I like to push on the security community is: Are we giving advice that's actually relevant? I think that's part of why DevOps is doing so well, and why, if we want to play around with semantics, I'm using the term DevOps and not DevSecOps, because I think good application security just comes out of well-written applications that are following good processes.
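To ground that string-concatenation point, here is a minimal sketch using Python's standard-library sqlite3 module, with made-up table and column names: the concatenated query is injectable, while the parameterized query lets the driver bind the value safely.

```python
# Minimal sketch of string concatenation vs. a parameterized query.
# Table and column names are hypothetical; sqlite3 is used because it's stdlib.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Obviously wrong: the input becomes part of the SQL text itself.
injectable = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(injectable).fetchall())   # returns every row

# Parameterized: the driver binds the value, so it can't change the query.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows
```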

Mike Shema:
And those processes include having a great build environment where, end to end, you have a way of verifying the code has been committed, it's been reviewed, and your artifact has been built and deployed mostly through automation, or through services interacting together with very restricted privileges. And you have linters, as I just said, giving that immediate feedback to developers, or you have a security tool in the developer's IDE, where the developer is living. So one of the ways I'd put this is: let's bring security to where the developers are, because the developers are the ones building the software. As for the security industry, we can just point to the OWASP Top 10 and see that cross-site scripting is still up there at number one, two, or three. The security industry keeps seeing what the problem is, but it hasn't necessarily built anything to defeat that problem.

Matias Madou:
So maybe one final question to finish this off. I looked at your LinkedIn page, and the Voight-Kampff test really drew my attention. So what interests you there? Do you think the universe is constructed, and we're all ... What is going on there?

Mike Shema:
So that's basically the Easter egg that I threw in there, I think possibly when I first created my LinkedIn profile. And I think only two recruiters and one interviewer have ever commented on it, so thank you for noticing. I quite appreciate it. But I don't know that I have a good story there to tie into security, other than being a fan of the aesthetics of Blade Runner, the music behind Blade Runner.

Mike Shema:
But I will say, on the topic of movies and pop culture, Tron would be my go-to movie for hackers and hacking. And the reason I pick Tron is that tagline of "fight for the users." I think that's the really cool message for doing computer security: rather than pulling off the really cool hack, or showing off the really cool tool that we've created, or I've created, something like that, which I acknowledge involves some really fun engineering and some neat stuff.

Mike Shema:
But at the end of the day, what we're trying to do is build a better browsing experience, build a better mobile app experience, build a better app that protects the privacy and the security of your data. And so that's where I think pulling in the user experience and fighting for the user is a great thing to go for. The other thing I'll try to tie back in, as I'm going on a stream of consciousness here, is the idea of "Why Johnny Can't Encrypt," the commentary on how GPG as a command-line utility isn't the easiest thing to use.

Mike Shema:
I read something the other day where somebody pointed out that the GPG man page has, I think, a higher word count than Fahrenheit 451. To me, Fahrenheit 451 is a far more interesting read, so I would rather give somebody a book and keep them entertained than put the burden and responsibility on an end user to figure out: here is the key-signing ceremony, here are your public and private keys, here is what the key sizes mean, here's the difference between RSA and ECC, all those things. Let's wave all of that away and figure out: How can we communicate securely, whether you and I are talking over video chat, just talking over text chat, or using a web application, whether it's for a bank, an online game, or something else? What is the security aspect of that? And how are we making it better? So I realize that was perhaps a long answer to the Voight-Kampff test, but that's what you get.

Matias Madou:
It really drew my attention. And I barely watch movies, but thanks for the tip on Tron; I'll actually check it out. Mike, thank you very much for agreeing to be the 17th guru on the Software Security Gurus Webcast. It was a fantastic chat. Thank you very much.

Mike Shema:
Thanks for having me. It was really fun to chat with you, Matias.

Matias Madou:
Thanks, Mike.

Never want to miss an episode? Get in touch and subscribe!