July 14, 2020

Software Security Gurus Webcast Episode #7: Clint Gibler

Welcome to episode 7 of Software Security Gurus, with Matias Madou. In this interview, he chats with Clint Gibler, security consultant and owner of the TL;DR Sec blog.

They discuss his love/hate relationship with static analysis and the available solutions, as well as what he learned from attending 50 conference talks. Also tune in for deep dives into threat modeling as code, and good examples of secure defaults.

Introduction: 00:00-02:01
A love/hate relationship with static analysis: 02:01-03:40
High-level findings from fifty conference talks: 03:40-10:48
Scaling threat modeling: 10:48-17:40
Secure defaults and threat modeling as code: 17:40-23:26

Listen to the podcast version:

Read the transcription:

Matias Madou:

Welcome to the Software Security Gurus webcast. I'm your host, Matias Madou, CTO and co-founder of Secure Code Warrior. This webcast is co-sponsored by Secure Code Warrior. For more information, visit www.softwaresecuritygurus.com. This is the seventh in a series of interviews with security gurus, and I'm super pleased to have with me today Clint Gibler. Welcome, Clint.

Clint Gibler:

Hey Matias. Glad to be here.

Matias Madou:

Thanks. Hey Clint, do you mind actually sharing a few words about yourself?

Clint Gibler:

Sure. My name is Clint Gibler. I'm the head of security research at a small startup called r2c. We're building sort of a next-generation, lightweight static analysis tool that's fast, easy to customize, and a pleasure to use. Before that, I was a research director and technical director at NCC Group, a global consulting firm. And before that I was an indentured servant, I mean, a grad student at UC Davis. So yeah, I've spent a little bit of time on a lot of stuff.

Matias Madou:

And actually, I think you've forgotten a very important one. You also did an internship back in the day at Fortify, right around the time when we actually got acquired by HP and I was there too. So that was a lot of fun, right?

Clint Gibler:

Yeah. That was a ton of fun, actually, a number of my friends and sort of mentors to this day, actually all come from that brief period, like Flee, Jonathan Carter. I chatted with Jacob West recently. Yeah, I think for such a short amount of time, it's one of the densest, most useful, awesome professional experiences I ever had and very fun.

Matias Madou:

Yeah, absolutely. It was a really good crowd that we had. So I'm not surprised that you're still in touch with a lot of people from back in the day, because they're everywhere in the Bay Area, right?

Clint Gibler:

Yeah, the "Fortify mafia," I think people call it.

A love/hate relationship with static analysis

Matias Madou:

Yes. You worked on static analysis, and it's quite funny because if I look at what you're producing lately, I think you would say you have some sort of a love-hate relationship with static analysis. On the one hand, you're all about automation, but at the same time, I hear you say, hey, people do not get the most value out of static analysis solutions. Can you dive a little bit deeper into this contradiction?

Clint Gibler:

Yeah. Love-hate is a good way to describe it. I did a number of projects related to static analysis in grad school, and then at Fortify I obviously did some as well. It was interesting to see the theory: oh, this seems like a very promising approach to leveling up security at scale, because you don't need to manually look through all the code. And then at NCC Group, I was brought into so many companies who were like, "Hey, we're paying a lot of money for this expensive tool, but we're not getting a lot of value from it because it's just drowning us in false positives."

Clint Gibler:

I think there's been a shift in how companies approach security, at least that I've seen in the Bay Area, where they're de-emphasizing the focus on identifying vulnerabilities and instead focusing more on secure defaults and proving the absence of vulnerabilities. So it's sort of the same problem, looked at from the other way: rather than, can you find all the bugs, can you just set up an environment, an ecosystem, with strong, secure defaults that make a vulnerability impossible, so you don't have to find them?

Matias Madou:

If I'm not mistaken, you analyzed like 50 conference talks in the last two years, something like that. It was a crazy amount, right? And dozens of blog posts, which you summarized and put into one slide deck. And there, if I may say, you were fortunate to look into presentations that cover the latest and greatest on secure defaults. So do you mind elaborating a little on what that means? And can you give a good example of a secure default that everybody should adopt?

High-level findings from fifty conference talks

Clint Gibler:

Yeah, that's a good question. So just to give you a little bit of background: I gave a couple of talks at AppSec Cali, RSA, and BSidesSF. They were basically a conglomeration of probably 50 to 75 talks, dozens of blog posts and tools, and even more conversations I had with security professionals at many companies, just sort of over drinks or at meetups in the city.

Clint Gibler:

I started on this path by accident, to be honest; I didn't intend to do it. I was having drinks with a friend and he was like, "Hey, here's some cool, interesting, novel stuff that we're doing at my company that we haven't really talked about publicly." And there was some stuff that I'd never heard of. And I was like, "Oh man! That's awesome." And then maybe a couple of days later, I was hanging out with a different friend at a different company, and they said something different from what the first person said, with a lot of overlap but some differences. And I just kept going to different friends of mine and asking, "Hey, what are your coolest tips and hacks?" And I collected all these insights into one talk. I didn't intend to do it; I just chatted with a bunch of people and thought, "Oh, people should know about this."

Matias Madou:

Yeah, no, it's a very clever idea, because if you go through your presentation, you actually get a synopsis of what happened in the last two years in application security. I actually highly recommend watching it, because there's a ton of useful information in there.

Clint Gibler:

Yeah. Thanks. We can share a link to it perhaps below.

Matias Madou:

I think we should.

Clint Gibler:

The goal is, there's so much material out there and you don't have much time to absorb it. Most people aren't willing to spend hundreds of hours absorbing various things, because they have a life, or friends, or something more meaningful. But I was kind of like, let's just save people time.

Clint Gibler:

But to get to your question, what's a secure default? I think one of the most impactful things I've seen at a couple of companies is they choose a vulnerability class and then they try to eradicate it. And there are some compounding wins that happen when you do that, because you no longer have to spend time educating developers about it, or triaging bug bounty or pen test findings, or making sure that the fixes don't regress.

Clint Gibler:

There are like five to seven types of activity for that vulnerability class you just don't have to do anymore. Because you're like, "Okay, we've solved that. Cool." So I think that's one insight that's interesting. Rather than stomping out a few instances of, say, cross-site scripting, if you just categorically eliminate it, there are compounding effects, because then you can focus that time on the next vulnerability class, becoming more and more highly leveraged as a security team. So I think that insight is important.

Clint Gibler:

Say, for example, you've had a number of XXE issues in the past: parsing XML, you allow external DTDs to be referenced. But let's say you're like, "Okay, I'm an AppSec engineer. I want to make sure that doesn't happen anymore." So what you can do is create a secure wrapper library, where basically it configures the parsers such that none of the bad things can happen. And then you say, "Hey, developers, I built this thing for you. Please use this." Then you check for all the places in the code where someone is not doing that. Then ideally you have lightweight checks in CI/CD itself to make sure that new places doing the wrong thing are not introduced again. And then basically you don't have to worry about that issue anymore. So it's basically like ... Oh yeah, keep going.
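A secure wrapper library like the one Clint describes can be very small. Here's a minimal Python sketch; the names `safe_parse_xml` and `InsecureXMLError` are hypothetical, and a production library (defusedxml, for instance) is far more thorough than this simple DOCTYPE check:

```python
import xml.etree.ElementTree as ET


class InsecureXMLError(ValueError):
    """Raised when a document uses XML features the wrapper forbids."""


def safe_parse_xml(text: str) -> ET.Element:
    # Crude but effective hardening: refuse any document that declares a
    # DTD or custom entity at all, which rules out XXE before parsing begins.
    if "<!DOCTYPE" in text or "<!ENTITY" in text:
        raise InsecureXMLError("DTDs and custom entities are not allowed")
    return ET.fromstring(text)
```

Developers call `safe_parse_xml` everywhere instead of the raw parser, and a lightweight CI/CD check (even a grep for direct `ET.fromstring` calls outside the wrapper) keeps new violations from creeping back in.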

Matias Madou:

No, no, no. I actually agree with everything that you said, except I'm sometimes questioning how practical that is. It highly depends on the organization and company, because there are people with tons and tons of legacy code where this is very good from a theoretical perspective, but from a practical perspective they're like, "Well, our code runs, so we'll leave it like that." With newer types of companies, you can do something like secure defaults, which is the recommended way, but I'm not sure how practical that is. From your experience, how often can you do that?

Clint Gibler:

Yeah, that's a very good point, especially for big, monstrous, complex legacy systems that perform very impactful, high-throughput work, where if it goes down you're losing millions of dollars an hour or a day. So one way to do it is to make the secure wrapper version basically an API one-to-one match with the previous version, so it's easy to fix. It's almost like a search-and-replace type thing, rather than having to rearchitect the code locally. And for certain classes of issues, you can't do that. This isn't like, oh, you can always do this, and if you're not doing it, it's just because you're lazy. There are some cases where it's just fundamentally very hard, but I think there are some cases where you can make localized changes and still get a lot of the value without necessarily having to understand overall program flow and global state and things like that.

Clint Gibler:

But yeah, it does require rigorous testing to ensure that you haven't broken anything, depending on whether the code might rely on some of the side effects that you've taken away by rolling out the secure wrapper library. And some things, I think, are bigger changes than others. For example, killing cross-site scripting by porting everything to React rather than jQuery or whatever you're using now, that's a massive engineering undertaking because you're rewriting basically everything. But some other things, like parameterizing all SQL queries or using an XML secure wrapper library, obviously it depends on your language and framework, but those seem to me more localized, at least on average. But yeah, I totally agree. It's a great approach, but it can be hard in practice. I guess it really depends.
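Parameterizing SQL queries, mentioned here as one of the more localized fixes, looks like this with Python's built-in sqlite3 module (the schema and the `find_user` helper are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")


def find_user(name: str):
    # BAD (injectable): "SELECT ... WHERE name = '%s'" % name
    # GOOD: a ? placeholder; the driver binds the value as data,
    # so attacker input can never change the query's structure.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With the placeholder, a classic payload like `find_user("alice' OR '1'='1")` simply matches no rows instead of dumping the table.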

Matias Madou:

Yeah. If we could all start from scratch today, we would not have that problem. But unfortunately, we're always running on a ton of legacy code. What quite often happens is that a project was not intended to be big. You start working on something, and there always has to be a market fit first. Once there's a market fit, then you start to think about security, and by then you've already produced a ton of legacy code.

Clint Gibler:

Yeah. Let's just prototype it and then we'll rewrite it later. And then that prototype is in prod forever.

Scaling threat modeling

Matias Madou:

Exactly. Exactly. We see that all too often. Another thing that you dive into in all the talks that you've seen in the last couple of years is scaling threat modeling, and you actually talked to a lot of top-notch people in this field too, people who are doing threat modeling on a day-to-day basis or have built a career on it. What have you learned, and how can you scale threat modeling today?

Clint Gibler:

Yeah, that's a good question. So just to take a step back: threat modeling is, how do we get an understanding of the big picture of security risks to a system? What are the threat actors? What sort of big-picture things should we worry about more? Sort of the whiteboard-diagram kind of thing, rather than a specific code vulnerability. I don't know if listeners maybe care about the context.

Clint Gibler:

It used to be that in a waterfall development method, you could say, oh, before the project we'll threat model everything it's going to do. But now things are happening very quickly. Typically AppSec teams are small; they can't sit in every meeting. So how do you get reasonable threat modeling coverage, given that you just can't possibly threat model every story, or even every epic, because you have 500 developers and like two AppSec engineers?

Clint Gibler:

So I've seen a couple of main approaches, I would say: having security questionnaires, integrating threat modeling into the SDLC, having developers do it. And then there's this idea of threat modeling as code. I'll talk about each one of these in a little more detail, or you can just ask about whichever one-

Matias Madou:

No, I think the last one sounds very interesting, threat modeling as code, would love to hear about that, but feel free to go through all three.

Secure defaults and threat modeling as code

Clint Gibler:

Yeah, so just super quickly. Security questionnaires are basically acknowledging that, as an AppSec team, we can't threat model everything. So how do we focus our time on the things that matter the most? The idea is having like a mini web app, or maybe even a Slack bot, that says, "Hey, you're building a new thing. Answer this series of questions." And then, depending on how high risk it looks, the AppSec team will get involved.

Clint Gibler:

So they might ask things like: are you building a new service versus extending an existing one? Are you touching PII? If so, how sensitive? What language are you using? What framework are you using? Are you parsing XML? Are you talking to the PCI environment? Basically, as a security engineer, what would you ask a developer to gauge how much you care about this? And then at the end, it computes some sort of customized risk score, like low, medium, or high. The AppSec team then does a more detailed manual review of the projects or initiatives that are higher risk. So that's the security questionnaire.
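The scoring idea can be sketched in a few lines of Python. The questions, weights, and thresholds below are invented for illustration, not taken from any of the questionnaires discussed:

```python
# Hypothetical question weights: higher means riskier if answered "yes".
QUESTIONS = {
    "new_service": 3,      # brand-new service vs. extending an existing one
    "handles_pii": 4,      # touches personally identifiable information
    "parses_xml": 2,       # XML parsing has a history of XXE issues
    "pci_environment": 5,  # talks to the PCI cardholder-data environment
}


def risk_level(answers: dict) -> str:
    """Map yes/no answers to low/medium/high; 'high' triggers AppSec review."""
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
    if score >= 7:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

A project touching PII inside the PCI environment scores "high" and gets a manual threat model, while a small, low-risk change proceeds without blocking the AppSec team.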

Matias Madou:

Is there a good security questionnaire out there? Like is there a default one that you can recommend?

Clint Gibler:

There's not a default one, but there are several public ones. I think Autodesk has a couple of their security questionnaires in a GitHub repo. Mozilla has what they call a Rapid Risk Assessment, which is basically the same thing. And Slack gave a talk at AppSec USA a couple of years ago and released their tool, goSDL, which is sort of how they do it. So there are at least three, probably more. They can give you some example questions, but I think what most organizations do is figure out what makes sense for them and then take bits and pieces from what's out there. It seems very customized. I don't see a centralized questionnaire, at least yet.

Matias Madou:

Yep. Okay.

Clint Gibler:

And then integrating into the SDLC is just having developers, as they're discussing functional requirements, say, "Hey, how could this feature be abused, and how can we prevent that?" Something very simple, something that maybe adds two minutes to every sprint planning meeting, that at least gets them part of the way there and has them start thinking as an adversary. And then threat modeling as code, which you cared about the whole time.

Matias Madou:

Yes. I was waiting for that one.

Clint Gibler:

Actually, I would say there are several ideas under this. One thing I found is that different people mean different things by threat modeling as code. So one is including annotations in code, based on your assumptions of where attacker input could come in; basically, you're encoding, in the code itself, your sort of threat model. I'll send you a link to it. I'm not enthused about this idea, because I feel like requiring code changes is probably unlikely to be worth it at scale.

Clint Gibler:

Typically threat models are drawn in graphical programs, which are not great in that it's hard to put them in source control and it's not easy to see how they change over time. So, I think this is from Autodesk: they built pytm, I believe, where you can basically write a mini Python program that says, here are the nodes and edges in our threat model. Here are the actors, here are the systems. And you can check this file into version control and see how it changes over time.
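To show the shape of such a checked-in threat model, here's a toy, self-contained sketch in the spirit of pytm. pytm itself provides real classes like `TM`, `Actor`, `Server`, and `Dataflow`; everything below is a hypothetical stand-in, not pytm's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class Element:
    """An actor or system in the model (a node in the diagram)."""
    name: str


@dataclass
class Dataflow:
    """An edge between two elements, with security-relevant attributes."""
    source: Element
    sink: Element
    label: str
    encrypted: bool = False


@dataclass
class ThreatModel:
    name: str
    flows: list = field(default_factory=list)

    def add_flow(self, flow: Dataflow) -> None:
        self.flows.append(flow)

    def findings(self) -> list:
        # One trivial "rule": flag any unencrypted dataflow.
        return [
            f"{f.label}: {f.source.name} -> {f.sink.name} is unencrypted"
            for f in self.flows
            if not f.encrypted
        ]


# The model is a plain Python file in version control, so code review
# and diffs show exactly how the threat model evolves over time.
tm = ThreatModel("Web app")
user, web, db = Element("User"), Element("Web server"), Element("Database")
tm.add_flow(Dataflow(user, web, "login", encrypted=True))
tm.add_flow(Dataflow(web, db, "query"))
```

Running the rules over the model surfaces the unencrypted web-to-database flow as a finding, the same way pytm generates findings and diagrams from its model file.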

Clint Gibler:

But the one that I think maps most closely to our mental model of what threat modeling as code should be is work by Abhay Bhargav of we45, where basically he and his team have built a series of security automation frameworks in which you write security integration tests, ideally reusable between services. You say, okay, because this application takes logins, I expect that it starts rate limiting if someone tries to brute force it, or something like that. And you codify this threat model expectation of how your system should behave using Selenium or Robot Framework type tests, which you then run continuously in CI/CD.
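The rate-limiting expectation Clint gives can be sketched as a reusable check. In a real setup, the test would drive the live service through Selenium or Robot Framework; here a hypothetical in-memory login stub stands in for the application under test:

```python
class LoginService:
    """Stand-in application: locks an account after too many failures."""

    def __init__(self, limit: int = 5):
        self.limit = limit
        self.failures = {}

    def login(self, user: str, password: str) -> str:
        if self.failures.get(user, 0) >= self.limit:
            return "locked"
        if password != "correct-horse":  # the only valid password here
            self.failures[user] = self.failures.get(user, 0) + 1
            return "denied"
        return "ok"


def check_rate_limiting(service, user: str, attempts: int = 20) -> bool:
    """Reusable security test: brute forcing must eventually be throttled.

    Codifies the threat-model expectation 'this app takes logins, so it
    should rate limit' as a check that can run continuously in CI/CD.
    """
    results = [service.login(user, f"guess-{i}") for i in range(attempts)]
    return "locked" in results
```

The same `check_rate_limiting` module could then be applied to every service that exposes a login, which is the library-of-reusable-threat-tests idea described next.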

Matias Madou:

So you're trying to make that the same for all your applications? How does that work in practice?

Clint Gibler:

I haven't played around with it in practice. It's more that I've watched a couple of talks he's given about it and chatted with people about it. I think the idea is that over time, you're building up a library of, given what your application does, here are the classes of threats we want to handle. And then you reuse whichever modules are applicable per application. So it's not necessarily that every app gets all the same tests, but you have this library of threats that you then test on your various applications. The idea is that rather than having a point-in-time threat model, you're continuously testing it in code, not just what we think it does.

Matias Madou:

It's definitely one of the harder problems. Threat modeling as such is already difficult. Trying to scale threat modeling is just an enormous project, and one where we may not have made enough progress in the last couple of years. We're trying different things and figuring out what works and what doesn't. So it's good to see that there are a lot of ideas out there on how we can scale threat modeling.

Clint Gibler:

Yeah. It seems like a lot of people are working on it. It seems like a common pain point. I asked some friends of mine, and they're all like, "I'm spending hours and hours threat modeling and there's always more to do." Yeah. It's a challenge many people are having.

Matias Madou:

In a previous company where I did a little bit of consulting, we tried out train-the-trainer with threat modeling: training developers on how to think about threat modeling, who then went to their teams and tried to train their people on how to do it. That worked a little bit if you make it simple and try to standardize certain things, but it is a very hard problem.

Clint Gibler:

Yeah. I'm curious. Did you have like a series of questions you had people ask? I'm curious how you taught people to think about threat modeling?

Matias Madou:

Yeah. We mainly used Adam Shostack's book, essentially. We took a lot of ideas out of that book and worked with swim lanes and drawing diagrams. And we tried to get some structure around that, how they could approach their projects, and put that into a particular model. That's how we did it, but it was a hard problem. Don't get me wrong, it was a very hard problem.

Clint Gibler:

It was super hard. Did you play at all with his card game? I think it's called Elevation of Privilege?

Matias Madou:

Yes, we did. We actually did. I think we finished the training session off by playing that game. Maybe we didn't follow all the rules, but we found the questions on the cards very interesting.

Clint Gibler:

Did developers like it?

Matias Madou:

The developers liked it. We actually talked to a lot of people, so we were talking to people who were already doing more AppSec than really developers, and then those people went to the developers. They liked it, because there are a lot of things in there that you would not think of. It's a good set of cards with a good set of questions, and they vary a lot. So that's a good part.

Clint Gibler:

Yeah. I think it's nice to have a base framework of things just outside what you would obviously think of, to spur you to be a bit more creative.

Matias Madou:

Yeah. Maybe last question for you, Clint. I watched what you've done in the last two years looking into all these talks and trying to summarize them. And I was wondering like, how many hours of sleep do you get or is there some secret group of people in the back helping you out?

Clint Gibler:

I wish there was a group of people helping me.

Matias Madou:

So there's no group? It's really you?

Clint Gibler:

No, it's really just me.

Matias Madou:

Wow!

Clint Gibler:

Yeah. So I do actually have some mild insomnia in terms of sometimes I'll wake up in the middle of the night and I can't go back to bed. So I'm like, "Oh, I might as well like do some stuff." To be honest, it's just a fair amount of like nights and weekends. Just like carving out little bits of time here and there.

Matias Madou:

Well, given what you're doing and what you're producing, you must be extremely efficient with your time to be able to do all that.

Clint Gibler:

I've gotten a lot better. I think the thing you're referring to is the mega blog post where I summarized all 44 AppSec Cali 2019 talks, which took me probably 200 to 400 hours, maybe more. The first couple of summaries were very slow, but by the end I was a little more efficient. I haven't really told that many people this, but the last couple of weeks before I released that blog post, I was really cranking, because I delayed and didn't space it out as I should have. So I was doing one to three summaries a day for two or three weeks, and it was brutal.

Matias Madou:

That is brutal. But you know, the result is just fantastic, and I highly recommend it. Whether people are new to the field or senior in it, there's stuff in there for every level of person. So that's really good.

Clint Gibler:

Yeah. Thanks. You can go check it out at tldrsec.com. It's just freely out there, along with the weekly newsletter.

Matias Madou:

Yep. Clint, thank you very, very much for accepting the invitation and being our seventh guru on the Software Security Gurus webcast. And it was a fantastic chat. Thank you very much.

Clint Gibler:

Yeah. Thanks so much for having me. Take care.

Matias Madou:

Thanks.

Never want to miss an episode?
Sign up for our newsletter.
