May 26, 2020

Software Security Gurus Webcast: Episode #1 - Dr. Gary McGraw

Welcome to the Software Security Gurus webcast with Matias Madou.

In this inaugural episode, Matias interviews Dr. Gary McGraw, one of the godfathers of software security and founder of the Berryville Institute of Machine Learning. They discuss the history, present, and future of software security, as well as how these principles may apply to the new frontier of machine learning and AI.

Introduction: 0:00-4:15
The history of secure coding - have we made progress?: 4:45-18:00
Machine learning and AI: 19:00-22:40
Wrap-up: 22:40-26:10

Read the transcription:

An introduction to Dr. Gary McGraw, our first guru.

Matias Madou (00:08):

Welcome to the Software Security Gurus webcast. I'm your host Matias Madou, CTO and co-founder of Secure Code Warrior. This webcast is cosponsored by Secure Code Warrior. For more information, see www.softwaresecuritygurus.com. This is the first in a series of interviews with security gurus and I'm super pleased to have with me today one of the founding fathers of the software security field, Dr. Gary McGraw. Gary is the co-founder of the Berryville Institute of Machine Learning.

He is a globally recognized authority on software security and the author of eight bestselling books on this topic. His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, Exploiting Online Games, and six other books. He is editor of the Addison-Wesley Software Security series.

Dr. McGraw has also written over a hundred peer-reviewed scientific publications. Gary serves on the advisory board of Secure Code Warrior. He has also served as a board member of Cigital and Codiscope, which were acquired by Synopsys, and as advisor to Black Duck, Fortify Software (acquired by HP), and Invotas (acquired by FireEye). Gary produced the monthly Silver Bullet Security Podcast. His dual PhD is in cognitive science and computer science from Indiana University, where he serves on the Dean's Advisory Council for the Luddy School of Informatics, Computing, and Engineering. What an impressive bio!

Gary McGraw (01:56):

It seems a little long to me. I should stop doing stuff. It wasn't built overnight. You're making me feel old. Thanks a lot.

Matias Madou (02:10):

Reading out your bio, I was actually super surprised, because a key component is not in there. You've actually been retired for over a year now, and if you'll allow me, I'll read out the definition of retirement: retirement is the withdrawal from one's position or occupation, or from one's active working life. From what I see and hear of what you've been doing in the last year, that does not match the definition, right?

Gary McGraw (02:38):

I think I am super bad at retirement. I mean, either that or there's a second definition, or maybe a third. There you go. Yeah, no, I've been working on machine learning stuff for the last year, and it's been an awful lot of fun thinking about how to take the philosophy of building security in and apply that to machine learning and AI, and what's going on in deep learning.

Matias Madou (03:04):

Let's come back to that point towards the end of this conversation; we'd love to talk about machine learning and see if it has a place in our software security field as well. Sure. So, all kidding aside, congrats on the retirement. I think it's a turning point in life. You've had a fantastic career contributing to software security, and I'd actually like to take that as a theme for today: let's talk about the past, where we are today, and, if you allow me, the future. A future of software security sounds like a good plan. Okay. So I think you've done so many things, from building up a community to writing books, and from being a technologist to building a business. You've contributed in many, many areas over the last 10 years, and you've seen a lot. So where do you feel that in the last 20 years we have not made enough progress? Where do you think we got stuck in a particular area of software security?

Software security isn't new, but there are still many areas with room for improvement. Where haven't we made progress in the last 20 years?

Gary McGraw (04:06):

Yeah, that's a really good question. In my view, we haven't made enough progress on architectural risk analysis, sometimes called threat modeling. Basically, though we know how to do that process, and there are a number of people who are very good at it, we haven't figured out very well how to automate it. So it's not scaling the way that, say, dynamic testing of web apps or static analysis of code are scaling. Anything that you can build an automated tool for is, in some sense, doing much better than the stuff you can't. You'll recall from Software Security, the book I wrote in 2006, that there are seven touchpoints. Of all those touchpoints, I think we've made more progress in everything except architectural risk analysis.

Matias Madou (05:05):

Does it have anything to do with the proactive side? We're still too reactive and we're trying to move towards proactive. Or does it have nothing to do with that?

Gary McGraw (05:16):

Maybe a little bit, I think. I think less than you would want. I think our real excuse is that it's just hard. In order to teach someone how to do architecture risk analysis, believe me, because I've done it 20 times or so, it's just a process of showing them, kind of like an apprenticeship, and apprenticeship is not a method that scales very well. So we can try to train people how to do architecture risk analysis, but it still turns out to be very hard, and many of the people we try to train don't turn out to be very good at it. And we're not really sure why. It's not that they're any different; it just takes a certain mindset to be able to do architecture risk analysis well. Now, that said, we do know what we should be doing: we should be looking at tech stacks that are commonly used and figuring out the risks that are often associated with those tech stacks.

Gary McGraw (06:16):

So we can kind of use that as a way to start a risk analysis, looking for risks that we know might be there. And those sorts of activities can help us in the automation of architectural risk analysis. You know what's funny, in a funny-peculiar way, is that my work on architectural risk analysis of machine learning has rendered a whole bunch of risks that we can now look for. But of course, I'm starting with kind of the hardest thing in machine learning. It's an interesting thing to do, but I guess that's just the way I've always approached everything.

Matias Madou (06:55):

But so there's definitely a challenge in transitioning that knowledge from one person to another. There's definitely an opportunity there, not only in training, but also in automation.

Gary McGraw (07:06):

Yeah, that's right. Both of those things are challenges, and scaling up the practice turns out to be very difficult. I mean, look, even at Microsoft, where they came up with Shostack's kind of threat modeling approach, there is a problem with the scalability of that particular practice. It just turns out to be very hard. And just because it's hard doesn't mean we can sweep it under the rug. If you look at the mad dash towards DevSecOps or SecDevOps or whatever the thing is called, what you see is that we've spent a lot more time thinking about testing and automation and static analysis than we have on things like threat modeling and architectural risk analysis, just because the former are automated. Anything that we can automate, we can stick into that kind of feedback loop, and anything that we can't, we're ignoring. Ignoring is a bad thing to do.

Matias Madou (08:06):

Okay, so that's something we've not made enough progress on in the last 10 or 20 years. So let's transition to today. If I look at AppSec today: well, actually 10 years ago it was a separate department, completely detached from developers. Over the last 10 years we've seen satellites, people within engineering who are picking up software security, trying to write secure code, or security champions. Do we have sufficient companies that embrace that kind of idea, that it's in the development organization, or is it just the really top organizations that embrace it? Go ahead.

Gary McGraw (08:52):

Still at the top half, probably. I mean, if we think that there are something like, I don't know, 8 million developers out there on Earth, and that's just a wild-ass guess, I think that we've probably gotten to about a million of them. So there are lots more developers we need to get to. But in terms of developers who are working for large organizations producing important software, like the Microsoft guys or the Google guys or the Facebook people, what you find is that forward-looking organizations, those who really understand that their software has got to behave, are doing a reasonably good job with software security. I also don't think there was this massive transition from application security people in the network security group to software security. I never saw that. In my own career, I saw much more of a bloom of software security groups that were involved directly with dev the whole time.

Gary McGraw (09:53):

So, you know, I've spent a lot of time with groups that call it what it really is: software security, not application security. And even that nomenclature tells you the difference in approach, because those people who call this application security, generally speaking, came from a network security background. They just marched up the OSI stack from layer one to layer seven, and eventually they got to layer seven, and guess what it's called: the application layer. And they're like, oh, we've got to do application security too. But those were network people, and we cannot solve this problem with any number of network people. In fact, no large number is large enough. We have got to instead teach the people writing code how to do it better. And that's why I'm excited about what Secure Code Warrior is doing, because we have to teach the people who code how to do it right. Good news: they want to learn. Bad news: they still have to learn.

Matias Madou (10:59):

But so one thing you've mentioned with application security is that it's network people who went up the stack. At the same time, when I think about application security, I think more about people moving papers: they do a scan and they hand it to the engineers. Whereas with software security, you are with the engineers, helping them and trying to write secure code. For me, there's more of a difference along those lines than the network security people moving up the stack.

Gary McGraw (11:32):

I think that's a reasonable view. I'm okay with that. And it certainly goes along with what's happening in DevSecOps. You know, I can make fun of the name, but generally speaking, if you're going to join a DevOps team as the security person, you'd better be able to code, because somebody is going to go, great, we've got a new team member, here's a keyboard, make some stuff. And if you're just going to print out four or five reports, then no, you're going to be relegated to the closet where you belong. So I sort of agree that the days of bureaucracy-as-software-security, kind of box checking, are numbered. And that is a good thing. You know, when I started my early career, I used to have big arguments with people like Watts Humphrey. Watts would say process is super important.

Gary McGraw (12:31):

And I would say, yeah, whatever, I'm just a kid, just show me your code. I don't care if your code gets written by goats, or the goats are sacrificed on Wednesday nights at a full moon. Just show me your code. That's all I care about. And it turns out that Watts Humphrey was right and I was wrong. You have got to teach people how to do this stuff using processes that produce better code, and tools that produce better code, and ideas that produce better code, and architecture that produces better code. You can't just look only at the code. You know, 25 years ago I was like, yeah, whatever, show me your code; I was working on static analysis at the time. Now I know better than that.

Matias Madou (13:18):

There's still a lot of opportunity, right? And there's a lot of stuff that needs to be done. You know, I know that 10-plus years ago, you had a quite politically incorrect t-shirt at your previous organization that said, "We have jobs because you can't code." I believe John Steven was responsible for that particular t-shirt, and he did get in big trouble, because he was talking about our customers.

Matias Madou (13:48):

Yeah, I'll do that. No.

Gary McGraw (13:50):

And we were like, you know what, John Steven, that is very, very funny.

Matias Madou (13:58):

But so it is. So if we look into the future, we still have jobs. There's actually plenty of opportunity for people who want to write secure code. Why are you saying we have jobs because nobody can code? Because that's what you just said. No, I think there's plenty of opportunity. That's, I think, what I said. No, opportunity.

Gary McGraw (14:19):

Good mumbo-jumbo, middle-management speak. I do think we've made progress as a field, and maybe we're, you know, halfway there. We certainly have issues of scalability to work through. There's plenty of work to be done. And there are plenty of companies who have never heard of software security, but there are just as many that have, and just as many organizations that are pretty excited about doing software security right.

Matias Madou (14:53):

Let's transition to the future, if you don't mind.

Gary McGraw (14:57):

No problem. You can hear that my dogs agree over there. They're in agreement.

Matias Madou (15:02):

Hopefully the listeners agree too. Who knows?

Gary McGraw (15:06):

Maybe that's the listeners barking.

Matias Madou (15:10):

Never mind. So, the future of software security, and more specifically, let's talk about writing secure code. The way I think about it personally is that, going forward, and also today, we need to build as much robustness as possible into the frameworks, but I don't think we're quite there yet. An analogy I like to make is that we're still training doctors to do routine procedures, while at the same time we know that robots can do routine procedures better; but, you know, they still need to learn everything, and we need to take it from there. So, you're working on machine learning. What do you think software security will be like 10 years from now? Will machine learning take over everything? Will there be an abundance of software security specialists? Where are we moving towards?

Gary McGraw (16:00):

I think we're going backwards. You know, I hate to say it. If you look at what's happening in programming languages, I think that's the most important place to look right now. In the early days, when I was really helping to found the field of software security, we focused a lot of attention on C and C++, because those languages are a disaster from a security perspective. There are tens of thousands of things you can do wrong in C code that will lead to really serious security problems: the buffer overflow, stack-based buffer overflows, and so on. Those were real problems. And so when we moved to Java, even though I got my start kind of saying Java is insecure, Java had a lot of very important characteristics as a programming language. Type safety was one. Some dynamic analysis in real time, where you can do runtime comparisons of things, was another.

Gary McGraw (16:59):

The security manager was another, and so on. Those were all forward progress. And if you look at what's happening now with dynamic languages that use the construct of assembly, Node.js and everything else, what you find is that we're falling behind in our capability to look for bugs in some of those languages, the dynamically bound languages. Because if the code's not there, it turns out you can't check it. And if you are waiting around to fetch the code and build an assembly right before you run it, just in time, you're going to find yourself in trouble, because you can't look for the bugs until the very last minute. I'm worried about that. It reminds me a lot of how leveraged our economy is in the world, with regard to debt. If you think about this, we've been spending many, many years since the eighties squeezing the value chain as much as possible.

Gary McGraw (18:01):

We say we don't want to store inventory; we want to just have those parts arrive right before we stick them on the machine, because storing the inventory costs us money. So we're just going to do it just in time. We've been squeezing value chains since the eighties, and we got our economy so incredibly leveraged. Corporations weren't holding cash; everybody was waiting on the inventory. It was a world supply chain that turned out to be very, very fragile. And during this COVID thing that we're all experiencing now, we're talking on Skype, and so is everybody else, at least on Skype and Zoom and video conferencing, and we're not allowed to meet in person anymore for a while, until this pandemic goes away. What we learned the hard way is just how leveraged our economy was, and it's going to take us a while to reconstruct those supply chains and get the economy back on its feet.

Gary McGraw (18:59):

Frankly, our move towards dynamic languages is the exact same thing, by analogy. And so I think we need to really learn a lesson from economics. Not a simple lesson like, you know, let's spend money earlier, but a hard lesson, which is: if we invest time and money upfront, it's going to save us in the long run. In some sense, we have to have some inventory of security and good thinking and reliability and availability, all of the software 'ilities', so that we don't find ourselves caught with our pants down. Because as far as I can tell, everybody doing these video calls: no pants.
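An editorial aside for readers newer to the field: the stack-based buffer overflows Gary mentions above come from unbounded copies into fixed-size buffers. Here is a minimal C sketch, not from the interview itself, contrasting the classic unsafe pattern with a bounded alternative (the function names are illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy copies until the NUL terminator with no bounds check.
   If src is longer than dst, the copy runs off the end of the buffer
   and overwrites adjacent stack memory: the classic stack smash. */
void copy_unsafe(char *dst, const char *src) {
    strcpy(dst, src);
}

/* Bounded: snprintf writes at most size - 1 characters plus a NUL,
   truncating oversized input instead of corrupting the stack. */
void copy_bounded(char *dst, size_t size, const char *src) {
    snprintf(dst, size, "%s", src);
}
```

With an 8-byte buffer, copy_bounded truncates a 16-character input to 7 characters plus the terminating NUL, while the same call through copy_unsafe would be undefined behavior.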

Matias Madou (19:43):

I do not have a follow-up question for that one. You're not going to say "stand up, please." You sit down; you've learned something over the years.

Machine learning, AI and software security: Where do they converge?

Gary McGraw (19:59):

Now, if you want, I can talk a little bit about machine learning security too, but I don't think that's the future of software security. I think that's just another important kind of technology wave that's sweeping the planet. And if we get in front of it, we can do a lot better job from a security perspective than if we just wait around until machine learning is everywhere and then go, oh no, we're in deep trouble.

Who would have guessed?

Matias Madou (20:24):

So is it correct to state that, with software security, there's not even a need for machine learning, because we're not quite there yet? We don't even have the fundamentals right.

Gary McGraw (20:34):

Yeah. And I want to correct an assumption that is often made about my work. I am not trying to apply machine learning to software security at all. I don't really care about using machine learning to do security. I don't care; you can do that all you want. What I do care about is the opposite: securing machine learning. That is, the very technology that we're building to do all this stuff, including security stuff, is itself insecure. So we have to spend some time thinking about the way that we're building machine learning systems, the kinds of datasets that we're relying on, the learning algorithms that we're using, all of those things. And so at BIML, which is what we call the Berryville Institute of Machine Learning for short, we spend a lot of time thinking about architectural risks that are built right into machine learning systems.

Gary McGraw (21:37):

And we published an important study on January 13th about that, which recognizes, points out, and describes 78 particular risks in machine learning systems, and in some sense provides the kind of information that an engineer or a technologist using machine learning, even applying machine learning or developing new kinds of machine learning, can think about to build a better system while they're doing that activity. That's why we did it. And you know, you've been to Berryville. In fact, you've even helped to oil my donkey, which is a hilarious thing. The donkey ate some stuff that was bad, and so we had to pour oil down its throat. And Madou actually helped me; he's a pretty good donkey holder. He was holding its head up while I was shoving oil down its throat. And the donkey lives.

Gary McGraw (22:41):

He is still alive. It worked. But Berryville, generally speaking, has more cows than people, so it's kind of fun to have a Berryville Institute of Machine Learning. And the good news is the work that we've done has made a big splash already. I've had calls with the guys at Google and Microsoft and Amazon and other places, OpenAI, that are using machine learning all the time. And they're really excited to think about security in this way that I've been doing for 25 years: building security in, architectural risk analysis, and so on. So that's been very heartening. It's been really fun to work on for a year, just for fun. And the fact that the work turned out to be great and people are super interested in it is very gratifying. And I suck at retirement.

Matias Madou (23:35):

Let's go back to your garden for the last question that I have. I know you love gardening, and you live on a fantastic farm. I also know you love well-crafted cocktails.

Gary McGraw (23:49):

I do. Yes indeed.

Matias Madou (23:50):

What is the latest cocktail you've made with something out of your garden?

Gary McGraw (23:55):

Oh man. Well, there's a drink that I made up a couple of summers ago called the bourbon mint fizz. As the name implies, it uses mint. It's got lemon and bourbon and ginger beer and mint in it, and maybe something else, some kind of bitters. Grapefruit bitters, I think; I can't exactly remember. But that's quite a beautiful drink in the summertime, because it's effervescent and lemony and minty and fresh, and it's got that little bourbon curl to it. So that's a drink that I make pretty often when the mint is up. Now, the mint's not even in over here in Virginia; I just planted some plants last week, and they're the kind of plants that can survive in the cold, because our first planting date really doesn't happen 'til May. So getting stuff in in March was crazy, but you know, global warming. So that's another good thing. Next pandemic, here we come.

Matias Madou (25:01):

So it sounds like a very delicious cocktail, in the direction of the tiki kind of stuff that I like. Although it could be more rum-ish.

Gary McGraw (25:10):

No rum, and there's no tiki; there's no, you know, falernum or anything like that.

Matias Madou (25:17):

Once this whole corona thing is over, you know what? I'll jump on a plane, and I'm happy to try it out and see if we can make it a tiki drink.

Gary McGraw (25:25):

That sounds good. You know, we can add lots of crushed ice and maybe a little tiny umbrella.

Matias Madou (25:31):

Sounds good. Thank you very, very much for accepting to be the first guru on the Software Security Gurus webcast. This was really insightful. Thank you very, very much.

Gary McGraw (25:42):

My pleasure to be here. Thanks for doing this. I'm really looking forward to the whole series and to seeing what everybody has to say about our field.

Never want to miss an episode?
Sign up for our newsletter.
