AppSec Builders
Content is provided by Datadog. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Datadog or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ru.player.fm/legal.
AppSec Builders features practical and actionable conversations with application security experts and practitioners. Topics range from understanding and solving classes of vulnerability, to building protections that scale efficiently with your business, to core best practices that strengthen your security posture. AppSec Builders is hosted by Jb Aviat, AppSec staff engineer at Datadog, former CTO and co-founder at Sqreen, and former Apple Red Team member. Contact us at appsecbuilders@datadoghq.com
7 episodes
All episodes
In this episode of AppSec Builders, Jb is joined by security professional Jim Manico, founder of Manicode Security, to discuss application security, developers, and why they should be trained to build secure applications. About Jim: Linkedin: https://www.linkedin.com/in/jmanico Twitter: https://twitter.com/manicode Jim Manico is the founder of Manicode Security where he trains software developers on secure coding and security engineering. He is also the co-founder of the LocoMoco Security Conference and is an investor/advisor for Nucleus Security, BitDiscovery, Secure Circle and Inspectiv. Jim is a frequent speaker on secure software practices and is a member of the JavaOne Rockstar speaker community. He is the author of "Iron-Clad Java: Building Secure Web Applications" from McGraw-Hill. Transcript Intro / Outro: [00:00:02] Welcome to AppSec Builders, the podcast for practitioners building modern AppSec, hosted by JB Aviat. JB Aviat: [00:00:14] Welcome to this episode of AppSec Builders. I am JB Aviat, and I am honored to welcome Jim Manico, who, on top of being a famous, opinionated security professional, is also the founder of Manicode Security, where he trains software developers in secure coding and security engineering. He is also an investor and advisor for many companies, a frequent speaker on secure coding practices, and the author of "Iron-Clad Java: Building Secure Web Applications". Jim, why don't you introduce yourself as well? Jim Manico: [00:00:50] Jean-Baptiste, it's a pleasure to be on your podcast and your show. And like you said, I'm an opinionated application security professional. I just hope that my opinions are helpful to you and your audience. JB Aviat: [00:01:04] Opinions are always helpful, especially when they are held by smart people. So, yes, definitely. And I'm looking forward to having you share a bit more about that with our listeners. So, Jim, thanks a lot for joining us today. Anyone familiar with your work can notice that your primary focus is developers. You train them, you write books to educate them, you contribute to a lot of OWASP resources around education. Why that focus on developers? Jim Manico: [00:01:40] I believe that the application security industry traditionally has primarily been about security testing and DevOps and all these different pieces that are about assessment of the security of an application. And I do not believe that you can achieve security through testing. I believe that the only way to truly do application security is to get developers to build secure software, and to utilize tools and techniques and processes that will help developers author secure software. And I believe that our industry places very little focus on that important specialty, because it's hard to sell an idea: the idea that you must change your process, you must change your engineering capabilities and similar. It's not something that sells in the marketplace. It's education, which is not a very big part of our industry. So that's why I focus on that, because it's my specialty and it's also my belief: the way you really do application security is to enable developers' capabilities around security in some way. JB Aviat: [00:02:54] And so you've been doing that for a while. What are the big changes that you have witnessed over the past years? Jim Manico: [00:03:01] I think the acceleration of DevOps is very interesting. Now, DevOps has been around for 20 years.
This is about automation around the building, testing, deploying and other aspects of the SDLC. And we were doing that in the late 90s through a lot of custom scripts and similar. And I think that today there are extremely modern tool sets like Jenkins, GitHub Actions and similar, where I can build a significant security-centric DevOps pipeline in a really short amount of time now, especially if I'm using GitHub Actions. Click, click, click. And I've got Dependabot, I've got static analysis and Semgrep, I've got dynamic testing and similar testing tools, really rapidly in terms of setup. And I believe that when we're using GitHub and similar repositories, the advanced security testing that we see today will be natural and automatic in just a few years. Another trend that I have seen, which is more intellectual, is the migration away from traditional session management and the movement to stateless session management using JSON Web Tokens and the OAuth 2 and OpenID Connect protocols and other JWT-centric standards. This is a very big departure and change around how secure Web and API applications are built. I also see a lot of new changes in HTTP response headers: Content Security Policy, the response header to delete site data at logout time, the ways I can configure Referrer-Policy are very granular now, the advancement of cross-origin resource sharing and those capabilities being available in almost every browser. I think that all those response headers have changed dramatically in just the last couple of years to give developers more security capability in modern browsers. JB Aviat: [00:05:07] I definitely agree. Don't get me started on JSON Web Tokens... but you already did. And I'm a huge fan of them, because I think that's an interesting evolution in the world of security. It's a bit complex to configure, right? There are several things you don't want to forget. And to me, that's a tool that has very interesting properties, but that is a bit hard to give as-is to developers: they specifically need training and education in order not to misuse it. Right? Jim Manico: [00:05:38] I've got to be honest with you. Using JSON Web Tokens is radically more difficult. You now have key management, secrets management, and the implementation of logout and idle timeout, which was easy in session management, is a challenge with JSON Web Tokens. But you can certainly achieve more scale, and if you're using microservices, you kind of need to use a JSON Web Token, because it's difficult to tie sessions together among many small services in a scalable, performance-friendly way. So, Jean-Baptiste, I really like the backend-for-frontend pattern, the BFF pattern, where I have more of a traditional session between my JavaScript client or Web client and the main API that serves as a reverse proxy to a fleet of microservices that sits back in my private infrastructure. That way I can still benefit from the statelessness of microservices and performance, but still have a traditional session between the JavaScript client and either a gateway or a reverse proxy API. So all the mess of JSON Web Tokens and microservices is behind the scenes. So I actually agree with you. I really don't like JSON Web Tokens when you push them all the way to the client. I try to keep high-powered access tokens and JSON Web Tokens out of the client in particular.
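The configuration pitfalls Jb alludes to here are concrete. A minimal sketch of careful JWT verification, using the `jose` library as an assumed choice (the episode names no library): pinning the algorithm and checking issuer, audience and expiry are exactly the things that are easy to forget.

```typescript
// Hypothetical sketch of defensive JWT verification with the `jose` library.
// The issuer and audience values are illustrative placeholders.
import { jwtVerify } from "jose";

const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

export async function verifyToken(token: string) {
  // Pinning the algorithm prevents "alg: none" and key-confusion attacks;
  // issuer, audience and expiry checks are the easy-to-forget parts.
  const { payload } = await jwtVerify(token, secret, {
    algorithms: ["HS256"],
    issuer: "https://auth.example.com",
    audience: "my-api",
    clockTolerance: 5, // seconds of allowed clock skew
  });
  return payload; // throws before reaching here if any check fails
}
```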
I mean, like a browser Web client, because there's no good place to store sensitive data long term in a Web client. It's just not mature enough for that yet. So if you're going to use JSON Web Tokens, or if you're forced to because of the use of microservices, again, the pattern is called the BFF, the backend-for-frontend pattern, which basically takes the mess of microservices and JSON Web Tokens and pushes it back into your private infrastructure. JB Aviat: [00:07:37] Aligned with your lines, yes. So I think it's a very interesting time to be in security today, because of all these things evolving. You mentioned headers, Content Security Policy. So, yes, we have a lot of new tools that are evolving and changing the security capabilities that are in the hands of the developers. That's a necessary step in the journey towards being more secure, there is no question about that. Though those tools and those security primitives are still extremely complex to implement. If you take a look at the latest security headers that are centered around the Spectre and Meltdown protections, the complexity of those is really insane. And I feel either we need weeks of training for the developers that will have to use that properly and follow the mess of, like, the cross-origin headers, etc., or we need something in between, like a layer that will automatically configure the application. And I think this is where there is a balance between what you can teach the developers, because no one has infinite time to teach developers, and what the tools should be responsible for. And so you said one very interesting thing in your introduction, Jim: that you teach the developers to use the right tools. And I think that's a big part of the business, to help them find the right tools. Jim Manico: [00:09:07] So let's talk about Spectre and Meltdown briefly. Spectre and Meltdown are the ability to read data out of a CPU cache. Right. And in my world, this is mostly a problem with the Web browser. Like, I'm worried about Chrome and Firefox and similar having an attack with malicious JavaScript, cross-site scripting primarily, or a malicious third-party library that allows malicious JavaScript to read data out of a CPU cache. And we saw a demonstration from Google, who provided a JavaScript demo around exactly how this could be done. Now, how is this problem solved? The problem is usually not solved by a Web developer building a website or an API. The problem is solved by a browser developer, and I do not teach browser developers. I teach the masses of Web and API developers how to build secure Web and API applications. But the defenses that really stop Spectre and Meltdown, to my understanding, Jean-Baptiste, are things like site isolation and similar that are built into browser technology; the use of an HttpOnly cookie will also help in stopping this class of attacks. And I do agree that teaching a browser developer how to be more resistant to CPU attacks like Spectre and Meltdown, that is extremely sophisticated knowledge. But as long as Web developers and API developers are using basic security principles, they're doing OK. And the other thing is, if someone is using a very old browser, there's very little a developer can do to stop these classes of attacks. So I also recommend that developers, as much as they can, use JavaScript detection to understand what browser is being used by their customers, and as best as they can, do not allow older browsers to use their sites and APIs.
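A minimal sketch of the backend-for-frontend pattern Jim describes, assuming an Express gateway (the episode names no stack): the browser holds only an HttpOnly session cookie, while the JWTs used by the microservice fleet never leave private infrastructure.

```typescript
// Hypothetical BFF sketch: the browser only ever sees an HttpOnly session
// cookie; JWTs for the microservice fleet stay behind the gateway.
import express from "express";
import session from "express-session";

const app = express();

app.use(session({
  secret: process.env.SESSION_SECRET!,
  resave: false,
  saveUninitialized: false,
  // Traditional session cookie between browser and gateway, as Jim suggests.
  cookie: { httpOnly: true, secure: true, sameSite: "strict" },
}));

// Placeholder internal URL; in this pattern the fleet is reachable only
// from the BFF, never directly from the browser.
app.get("/api/orders", async (req, res) => {
  const jwt = (req.session as any).serviceToken; // minted server-side at login
  const upstream = await fetch("http://orders.internal/v1/orders", {
    headers: { authorization: `Bearer ${jwt}` },
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```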
So luckily, it depends on which developer you're trying to influence. For Spectre and Meltdown I need to influence the browser developer, and for the standard Web or API developer, I'd like them to use HttpOnly and other cookie protections, and to also make sure that they're only allowing as modern a suite of browsers as they can possibly get away with. JB Aviat: [00:11:38] Yeah, I would even say we have to influence the CPU designers on top of the browser vendors. But I think one of the flaws here was that, no matter how safe your browser, getting JavaScript execution in one page, which is not so hard if you think about XSS kinds of attacks on third parties, for instance, would allow you to read any part of the memory. And that's something that doesn't depend on the browser, actually, because it's just running JavaScript. And I think that was the big point of the Google demonstration, making Spectre exploitation mainstream. So one of the ways to counter that was to carry very complex ideas such as CORS, etc. And I think that is the same story for about any kind of interface or API that is very complex from a security standpoint. And you have examples, for instance, of tools that made the life of security protocols easier. One example that I like is NaCl, for instance, the crypto library that took a very opinionated stance on what kind of helpers it will offer to the developers. So it's much less flexible than a regular crypto API, but you don't need to be a security expert in order to use it. What do you think about those initiatives? Jim Manico: [00:12:58] First of all, you mentioned NaCl. So this is a cryptographic library conventionally known as libsodium, I believe. That is correct, yes. It does good crypto in a library, and I think that is a great idea. They are very opinionated in the decisions they made there, decisions that have stood the test of time and that are very good. I believe in similar libraries for crypto, like Google: Google has a library called Tink, which is also exceptional in the very opinionated decisions they made. And so I like that idea, because libsodium and Google Tink are really usable utilities. Even though they're opinionated, they're not so opinionated that they're not usable; they're very usable and very straightforward to use. So if you're being very opinionated but still providing usability for developers to author software, I do like that idea as well. And I do not think that the suggested defenses around stopping Spectre and Meltdown are reasonable. I don't think that's the way the problem is solved. I think that, again, the problem is solved in the browser itself, and Firefox and the Google team and other teams who are building browsers are actively working on making those protections automatic, by doing various types of browser isolation and other types of defenses. But to your point, if I do get JavaScript execution in a website, yes, I can likely read the CPU, but I can also do request forgery. Jim Manico: [00:14:29] I've got cross-site scripting, I could modify content. The point I'm trying to make, Jean-Baptiste, is that any kind of cross-site scripting event is game over if the attack executes. Yes. And to your point, I don't think the real issue is Spectre and Meltdown. I think the real issue is that cross-site scripting, or a better name would be content injection, in a web application is madly destructive, and there's no simple way to stop that.
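For reference, the opinionated API Jim praises looks like this in practice; a sketch using the `libsodium-wrappers` package, one of several libsodium bindings. One secretbox call gives authenticated encryption with no cipher, mode or padding choices to get wrong.

```typescript
// Sketch of libsodium's opinionated API via libsodium-wrappers.
import sodium from "libsodium-wrappers";

async function demo() {
  await sodium.ready; // the wasm module loads asynchronously

  const key = sodium.crypto_secretbox_keygen();
  const nonce = sodium.randombytes_buf(sodium.crypto_secretbox_NONCEBYTES);

  // Authenticated encryption in one call; no algorithm selection exposed.
  const ciphertext = sodium.crypto_secretbox_easy("attack at dawn", nonce, key);

  // Decryption throws if the ciphertext was tampered with.
  const plaintext = sodium.crypto_secretbox_open_easy(ciphertext, nonce, key);
  console.log(sodium.to_string(plaintext)); // "attack at dawn"
}

demo();
```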
The common developer who's using Backbone and Angular and 30 other JavaScript libraries struggles to keep them up to date. Massive amounts of JavaScript code. They're almost always going to have cross-site scripting, and even if you're using the latest version of React with best practices, it is still easy to make a mistake in a variety of different ways that will bypass React security. And so I think that's the bigger problem, Jean-Baptiste: not Spectre and Meltdown or how to approach them, but just how to build a secure user interface on the Web, period. And we see that Content Security Policy 3, which allows for 'strict-dynamic', and I'd also say the Trusted Types standard, are helping a lot in providing that capability. The only problem with Content Security Policy 3 is that it's not supported in IE 11 or, even worse, it's not supported at all in Safari. And IE 11 support... Jim Manico: [00:16:03] Only some of my customers need that. Most of my customers don't use IE 11 anymore. And according to the W3C browser statistics, IE 11's global use is statistically zero percent at this point, as of last month. So we see IE 11 finally starting to go away. But Content Security Policy 3: I was looking at the Safari Technology Preview, and within the last couple of weeks I see that they are building Content Security Policy 3 support into Safari. So now, if we can get developers to implement CSP 3, and I like a nonce-based 'strict-dynamic' policy, per Spagnuolo and Weichselbaum from Google's research, and I limit what browsers I allow my customers to use, I can build some extremely rigorous security today. And I think that when we go ahead a year or two or three, and I have CSP 3 everywhere, and techniques to limit browsers, and a little more awareness about third-party libraries, the capability of developers to build a secure application without XSS is going to be more realistic. That's my hope, Jb. At least all we have is hope. But I do agree with your conjecture: cross-site scripting in complicated Web applications is really hard to avoid, even with talented, security-centric developers. And that's a problem with web development in a big way. JB Aviat: [00:17:40] I agree. It's still easier to avoid today than, I think, 20 years ago, when you weren't using, like, templating engines server side. Yeah, these things were, on one hand, common, and on the other hand, pretty neglected, even at the beginning, by security people. So, yeah, I think things moved in a very, very positive way. And we can only thank Google here, who has been really moving the W3C for a while and leading implementations with Chromium. And Trusted Types was born from an initiative to solve the XSS problem at the Google scale, internally. And it's insane to see how well they managed to solve it internally. It's not like 100 percent solved, it's like 90 percent solved. And sharing that with the broader audience is amazing. As you said, it's not trivial to implement that with your customers. What are the strategies that you see to actually implement this kind of initiative? It's complex at the scale of a company. Jim Manico: [00:18:46] It's extremely difficult. It requires, at least today, it does require educating developers, which is not a very scalable activity. I realize that's difficult.
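A sketch of the nonce-based 'strict-dynamic' policy Jim refers to, in the style of Spagnuolo and Weichselbaum's published guidance, expressed here as hypothetical Express middleware: CSP 3 browsers honor the nonce and 'strict-dynamic', while the `https:` and `'unsafe-inline'` entries are ignored by them and serve only as fallbacks for CSP 1/2 browsers.

```typescript
// Hypothetical middleware emitting a nonce-based CSP Level 3 policy.
import express from "express";
import { randomBytes } from "node:crypto";

const app = express();

app.use((req, res, next) => {
  // A fresh nonce per response; templates must stamp it on <script> tags.
  const nonce = randomBytes(16).toString("base64");
  res.locals.cspNonce = nonce;
  res.setHeader(
    "Content-Security-Policy",
    `script-src 'nonce-${nonce}' 'strict-dynamic' https: 'unsafe-inline'; ` +
      `object-src 'none'; base-uri 'none'`
  );
  next();
});
```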
So my goal is usually to educate the lead security champions of each developer team around Content Security Policy, using the Spagnuolo and Weichselbaum methodologies, on top of libraries that use Trusted Types. So who are these rock stars we're talking about? Krzysztof Kotowicz from Google is the author of Trusted Types, and Michele Spagnuolo and Lukas Weichselbaum have been giving talks on the conference circuit about how they roll out Content Security Policy using CSP Level 3. I'm a big fan of their methodologies. Jean-Baptiste, in any hero's journey you have helpers. I'm on a journey to learn about secure coding. Even though I am a teacher by trade, my real profession is being a student, so I can learn these technologies enough that I may teach them properly, and on my own journey of learning about this, these three individuals have helped me the most. Again, I want you to look these people up and look at their work: Michele Spagnuolo, Lukas Weichselbaum and Krzysztof Kotowicz. Those are the three top defenders against XSS on the planet, with the kind of knowledge and work that they are doing. And I'll credit Mike West as well from the W3C, who has done a lot of work with Content Security Policy at the standard level. Google is no perfect company, no one is, but a lot of the engineers at Google have led the charge in providing good security standards so we can build secure web applications. Jim Manico: [00:20:34] And the way that I work with companies to achieve this knowledge at scale is not to influence every developer. I can't do that realistically in training. But I can influence the main security champions that reside in each team, who are dedicated and responsible for secure software. So I try to influence those leaders so the knowledge trickles down to other members of the team. And that, of course, assumes that a company even has security champions embedded with their developers in the first place. But that's the best way, I think, that this kind of knowledge will trickle into large companies, because as a side…
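For readers unfamiliar with Trusted Types, a browser-side sketch of the idea: once the response carries the CSP directive `require-trusted-types-for 'script'`, raw strings can no longer reach DOM sinks like `innerHTML`; every value must pass through a named policy such as the one below. DOMPurify is an assumed sanitizer here, not one named in the episode.

```typescript
// Browser-side Trusted Types sketch; pairs with the response header:
//   Content-Security-Policy: require-trusted-types-for 'script'; trusted-types app-policy
import DOMPurify from "dompurify";

const policy = window.trustedTypes?.createPolicy("app-policy", {
  // All HTML bound for the DOM is sanitized in one auditable place.
  createHTML: (input: string) => DOMPurify.sanitize(input),
});

declare const userComment: string; // untrusted input, for illustration

const el = document.querySelector("#comments")!;
if (policy) {
  // Without the policy, this assignment would throw under the CSP above.
  el.innerHTML = policy.createHTML(userComment) as unknown as string;
}
```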
In this episode of AppSec Builders, Jb is joined by Security Architect Sarah Young to discuss cloud security, its evolution, and its increased presence within cloud vendor solutions and platforms. About Sarah: Linkedin: https://www.linkedin.com/in/sarahyo16/ Twitter: https://twitter.com/_sarahyo Sarah Young is a security architect based in Melbourne, Australia who has previously worked in New Zealand and Europe and has a wealth of experience in technology working across a range of industry sectors. With a background in network and infrastructure engineering, Sarah brings deep technical knowledge to her work. She also has a penchant for cloud native technologies. Sarah is an experienced public speaker and has presented on a range of IT security and technology topics at industry events both nationally and internationally (BSides Las Vegas, The Diana Initiative, Kiwicon, PyCon AU, Container Camp AU/London, BSides Ottawa, BSides Perth, DevSecCon Boston, CHCon, KubeCon, BSides San Francisco). She is an active supporter of both local and international security and cloud native communities. Resources: Cloud Native Computing Foundation Transcript [00:00:02] Welcome to AppSec Builders, the podcast for practitioners building modern AppSec, hosted by Jb Aviat. Jb Aviat: [00:00:14] Welcome to this episode of AppSec Builders. I'm Jb Aviat, and today I'm thankful to welcome Sarah Young, who is a senior program manager in Azure security. Sarah, you're very prolific in the security space, with conferences, the Azure Security Podcast; you're also a CNCF (Cloud Native Computing Foundation) ambassador. Sarah, I'd love to hear more about this. Sarah Young: [00:00:38] Thanks! And thank you for having me. Yeah! So many things I could say. So, yeah, I work for Microsoft, so of course every day I work with Azure and do Azure security, as one would expect. But I've been working in security, like specifically focusing on security, for the last eight or nine years now. Before I joined Microsoft, I worked with other clouds, and so I got a fair bit of experience there. But with regards to the CNCF, I am, as you said, an ambassador, and although I'm certainly not a developer, I certainly find the security aspect of cloud native stuff really, really interesting. And that's what I enjoy talking to people about. Jb Aviat: [00:01:20] Alright. And so one thing you seem to be prolific about is Kubernetes, and Kubernetes is definitely something that has gained amazing popularity over the past years, and also got a lot of security exposure, because it's notoriously complex and difficult to use in a secure way. Do you have any specific thoughts about that? Sarah Young: [00:01:42] Yeah, there are lots of specifics we could go into here. And I guess watching Kubernetes over the past two or three years has been really interesting, because obviously there are new releases, and every time there's a new release there are updates and improvements made to it. Obviously I focus more on the security side of it; that's what interests me. But it's really interesting, if you go from the early days of Kubernetes through to now, how much it's improved. I mean, what are we on now? I think we're on 1.20 or 1.21 or something like that; I forget the exact version we're up to for releases at the moment. But if you go back to the early days, or two, three years ago, there were some major, major security holes in Kubernetes. So there were things, I mean, it didn't support RBAC, or role-based access control.
So if you don't have role-based access control, you literally can't give people permissions, like everyone just has everything, which is a security person's nightmare. So it's been really good to actually see how it's developed over the years and how the community has addressed those things. Sarah Young: [00:02:46] Now, I'm not saying it's perfect yet, because to be honest with you, let's be honest, like no software, no hardware, nothing is perfect security-wise. And that's partly why I have a job, because whenever people create things, there will be security holes or things that it doesn't do ideally. So it's been really good to see how the community has really focused in on security more in the last few years, because I think in the super, super early days, Kubernetes was just being built more from a traditional developer perspective. People were thinking about the features and what it could do, and not the potential security gaps. But now that's changed a lot. There are some great people out there in the community who are doing security work. The week we're recording this, it is KubeCon EU, and KubeCon's now got a Cloud Native Security Day. And there's also the special interest group in the community for security. So certainly it's been really great to see how that has grown over the past few years, because there'll always be things to address, for sure. Jb Aviat: [00:03:50] Of course. Of course. And that is very interesting. And given that it's a community-driven project, how is the decision to prioritize security features made over the decision to prioritize the thousands of other features that are in there? Sarah Young: [00:04:08] I would say it's an interesting question, because this comes back to the endless battle that security professionals have: when you are developing any kind of system, not just Kubernetes, any kind of system or product in IT, the main priority, of course, is to have the functionality it needs to fulfill whatever business need or functional need the product serves. And security is great, but security will never win out as a priority over cost, delivery date and functionality. And there are different trains of thought on this. But I think, having worked in delivery as well before I moved into a more purely security-focused role: when you're trying to deliver something and get something running, you know, you're building a new application, you're building a new microservice, whatever, if you've got a deadline and a budget, you have to meet that, because probably your business is paying for it, your project is paying for it, whatever. Security is great, and I think that most devs and security people want to do it. But security is never going to win out over those competing priorities, pretty much never. Now, I'm sure there might be some exceptions out there. Sarah Young: [00:05:27] So really, what we've needed to do in security is make security easier, because if it's not made easier to do, and ideally built into a product, it won't win out over other priorities. And there are some security people who just want to try and really push people, saying, no, you know, you've just got to prioritize it. But the fact is that it won't win out over delivery and budget and things like that. So we have to make security easier and more straightforward. And I think it's great that the community has embraced that.
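For context, the RBAC objects Sarah refers to look like this: a minimal Role and RoleBinding, written here as TypeScript objects that mirror the YAML manifests a cluster admin would apply. Names and the namespace are illustrative, not from the episode.

```typescript
// Minimal Kubernetes RBAC sketch: without objects like these, as Sarah
// notes, everyone effectively has every permission.
const role = {
  apiVersion: "rbac.authorization.k8s.io/v1",
  kind: "Role",
  metadata: { name: "pod-reader", namespace: "team-a" },
  rules: [
    // Read-only access to pods in this namespace, nothing more.
    { apiGroups: [""], resources: ["pods"], verbs: ["get", "list", "watch"] },
  ],
};

const binding = {
  apiVersion: "rbac.authorization.k8s.io/v1",
  kind: "RoleBinding",
  metadata: { name: "read-pods", namespace: "team-a" },
  // Grants the role above to a single (hypothetical) user.
  subjects: [{ kind: "User", name: "jane", apiGroup: "rbac.authorization.k8s.io" }],
  roleRef: { kind: "Role", name: "pod-reader", apiGroup: "rbac.authorization.k8s.io" },
};
```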
And that's why, if we take Kubernetes, it's now got a lot more inbuilt security features. Rather than you having to use a third-party add-on to integrate, say, role-based access control or key storage or whatever, a lot of those things have been fixed. So when you start up the product, that security issue is already largely taken care of; all you do is a tiny bit of configuration. And so it's great that the community has actually addressed that, because, yeah, as I said, I think there's been more focus on it because, of course, if you have a security breach, or something is known as being insecure, like a piece of software, people don't want to use it. Sarah Young: [00:06:41] But as I said, as a business, there are other priorities. But another great thing an old boss of mine told me a few jobs ago, and I really, really like this, is: we're not competitors when it comes to security. Now, what that means, because I was working for a financial services organization at the time, is that when we talk about security, if there's a vulnerability in something that's widely used, it's worth fixing. And even if you're fixing it for your company and it helps your competitor, then that's OK. Because at the end of the day, if you look at the cost of security breaches: say you're an organization and your main competitor gets owned, you might be like, yeah, that's amazing. But it's not really, because at the end of the day, we all lose out on security breaches, always. So it's within everyone's interest to work together to make the overall environment more secure. And of course there are different ways of doing that. But I really strongly believe in that phrase that my old boss taught me, which was: yeah, we're not competitors when it comes to security. And so we should help each other out. Jb Aviat: [00:07:55] Yeah, that's an interesting point of view. And it's true that each time there is a breach, the overall trust is touched and impacted, and so that can indeed be hurtful for the overall space or industry. Interesting, yes. And to get back to Kubernetes and the way it has evolved from a security standpoint over the years: were all the security efforts pushed by the community, or is there some kind of more global governance done by, maybe, the CNCF? Sarah Young: [00:08:30] Well, there is the special interest group, SIG Security, and that sort of drives a lot of the security discussions in the CNCF. And there are some fabulous, fabulous people in there who really know their stuff, because if you take Ian Coldwater, for example, they are a really, really, really talented penetration tester. And they are absolutely, yeah, I have a lot of respect, because I am not a penetration tester. I understand the principles of it, but I know that they have really, really done some great work, found some really interesting vulnerabilities. There are also people like Liz Rice, who's been a huge cornerstone of the CNCF security scene for a long time. There are so many names; I'll just chuck a couple of names out there. But there are some amazing individuals who are very talented, really know what they're doing, who've been driving that for a number of years. And it's really, really good to see. Jb Aviat: [00:09:30] Yes, this is super interesting, thanks for these considerations on Kubernetes.
And so, since you know this area very well: what are the main evolutions that you've seen in Kubernetes over the past years from the offensive standpoint and security research? I've seen lots of interesting articles and tools around everything from the operator and Kubernetes implementer standpoint. Do you really think that the situation is much better today out of the box than it was maybe 10 years, well, just five years ago? Sarah Young: [00:10:05] Yeah, I don't know if you've seen this, but it makes me think of the job adverts where people have said you've got to have 10 years' experience in Kubernetes. Someone posted one of those on Twitter a while ago; it made me laugh anyway. Oh, there's no doubt that it has improved massively since the early days. I mean, there's no doubt. Like I said, some of the really gaping holes that I can think of: things like having no role-based access control. One that people may remember is the admin page, the admin console of Kubernetes, used to be accessible with no authentication. So as long as you knew the URL, you could go to it and do things, and you don't have to be a security expert to know that is not good. And so, I mean, that's the one that I always think of. And there were a couple of relatively high-profile hacks and breaches around that at the time. I also tried that myself, actually, in an experiment to see if I could get someone to own it. But I don't know, maybe mine looked too obvious. It looked like a honeypot. And for those of you who don't know what a honeypot is, that's just basically trying to attract people to attack your thing. But no one ever attacked it, which I was really surprised about. Or I didn't pick it up; could have gone either way, I guess. So there's no doubt it's improved hugely over the last few years. Sarah Young: [00:11:36] Absolutely. But as is the case with everything, you still need to know what you're doing. But we're getting loads better at that. So obviously, as Kubernetes has been around for longer, the general skill level has risen: there are more people available who are skilled in it and understand what's going on. Also, we've got things like the CIS standard, the Center for Internet Security benchmark, that people can work through. There are also a lot of managed services out there now. I'm not shouting out to anyone in particular; there are quite a few providers offering managed Kubernetes clusters. And I'm a big fan of this: if you're not super comfortable with them, or it's something you're still learning, then there's nothing wrong with going to a managed cluster, because then a degree of the configuration element, whether it's security or something else, is taken away, because that will be done by the provider. And again, if we look at it from a pure security professional perspective, you know, you want to look at reducing your risk and reducing the likelihood that something happens. And if you don't have the in-house skills yet, or you're still building them up, but you want to use Kubernetes, that is a good way to go. There are also other advantages, particularly around integration, because all the major cloud providers offer a managed Kubernetes service. And, you know, depending on who you've thrown your lot in with, it might make sense just from an easier integration perspective as well.
Jb Aviat: [00:13:02] Of course, I definitely agree here, which is a nice transition to my next question, Sarah. So, yes, using managed services puts a lot of the security burden away. What are the other tools that you would recommend, from a security standpoint, to people using the cloud? I know that's a broad question. Over the past years the security offering of the cloud vendors grew, and maybe grew more than many other parts of their offering. And so I'd be super interested to know how you would choose within this growth, and what are the flagship products that you would recommend to anyone in the cloud. Sarah Young: [00:13:45] Yeah, so it's a really tricky one, because as you said, there are many, many products out there, so many products, and it can be difficult to know where to start. I think particularly for the many organizations that have decided to go cloud-first. So they're like, OK, I'm going to put everything in cloud now. Although, having said that, a lot of organizations will always have a bit of an on-premise footprint; unless you were born in cloud, say, in the last five years, it's actually quite hard to put everything purely in the cloud, for various different reasons. So that's not realistic. So I always look at it, and what I've been advising people, because there are so many things out there: you need to start right at the very beginning, more from a capability perspective. What I mean is, rather than immediately picking a specific product that you like, look at it more from: I need this capability, I need this capability. And you may say: I need this capability and I need it to run across, say, two commercial clouds and on-premise. And so that starts to help you narrow down what tools you actually need. So, how I look at it, and I mean, this is what I do every day, so this is what I love to talk about. Sarah Young: [00:15:04] But you need a SIEM, or 'sim', or 'seem': it depends where you're from as to how you pronounce it, but it is SIEM, which is security information and event management. Now, it's not a new technology; it's been around for a while. But now, of course, it is moving into cloud. So you have on-prem offerings and you have cloud. What I've found, and this is from me working more closely in cloud for about the last four or five years, is that organizations seem to struggle to integrate cloud with SIEM products. Now that's changing, in that a lot of the more modern cloud-based SIEMs are much easier to integrate. But the traditional on-premise ones have always been quite tricky, for various different reasons. And again, I'm not even talking about a particular product or a particular type of cloud; it's a problem I've seen across multiple different platforms. So what we see is people start putting things in cloud, but they're not monitoring it, because the integration of the logs is tricky. And so we might have an organization that has got everything on-premise monitored, but the cloud isn't monitored. And obviously that's a huge big black hole. So for sure, your visibility: if there's one thing you need to do, make sure you've got some visibility of what's going on. Sarah Young: [00:16:26] And I think that's one of the most important things. So the other one is EDR, or endpoint detection and response. So of course, I think everybody knows about antivirus, and antivirus is still important. You should definitely have antivirus. But antivirus is very static.
It just looks for signatures on things. It will look for signatures on files and things like that, and if it sees a match, it will give you an alert. Now, we know that antivirus has been around a long time, and attackers know how to get around it nowadays. And so EDR is more about looking at general, overall behaviors on an endpoint. And by an endpoint I do mean, of course, a desktop or laptop or whatever, but you can also use this on your server infrastructure as well, your VMs, if you're still using VMs. And the fact is a lot of people still are. I know we've been talking a lot about cloud native, but the fact is people still have VMs, and EDR is much smarter at being able to pick up patterns of behavior, as opposed to just a static signature. And so I really think it's important that people have a look at having some kind of EDR capability, and of course that can feed into your monitoring. Sarah Young: [00:17:39] Then, I guess more specifically, I'll finish on two more, actually, for Kubernetes and containerized environments; I could go on forever, to be fair, but I'll leave it at these two. So, if you're using Kubernetes or any other orchestrator, of course you need some tools to be able to monitor the behavior of your orchestrator and your containers. Now, that one's trickier, because traditional security tools don't always understand the containerized…
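A vendor-agnostic sketch of the visibility Sarah insists on: shipping structured security events from a cloud workload to a SIEM's HTTP collector, so the cloud side isn't a monitoring black hole. The endpoint, token and event shape are all placeholders; every SIEM has its own ingestion API.

```typescript
// Hypothetical event shipper; real SIEMs each define their own schema.
interface SecurityEvent {
  timestamp: string;
  source: string;
  action: string;
  outcome: "success" | "failure";
  principal?: string;
}

export async function shipEvent(event: SecurityEvent): Promise<void> {
  await fetch("https://siem-collector.example.com/ingest", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${process.env.SIEM_TOKEN}`,
    },
    body: JSON.stringify(event),
  });
}

// Example: record a failed login from a cloud workload.
shipEvent({
  timestamp: new Date().toISOString(),
  source: "payments-api",
  action: "login",
  outcome: "failure",
  principal: "user-123",
});
```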
In this episode of AppSec Builders, Jb is joined by security expert John Steven to discuss his BSIMM study findings, the fundamental shifts in AppSec, software-defined security governance, and much more. About John: Linkedin: https://www.linkedin.com/in/m1splacedsoul/ Twitter: https://twitter.com/m1splacedsoul Through his firm Aedify, John advises innovative security product firms as well as maturing security initiatives. John leads one such firm, ZeroNorth, as CTO. For two decades, John led technical direction at Cigital, where he rose to the position of co-CTO. He founded spin-off Codiscope as CTO in 2015. When both Cigital and Codiscope were acquired by Synopsys in 2016, John transitioned to the role of Senior Director of Security Technology and Applied Research. His expertise runs the gamut of software security—from managing security initiatives, to cloud security, to threat modeling and security architecture, to static analysis, as well as risk-based security orchestration and testing. John is keenly interested in software-defined security governance at the cadence of modern development. As a trusted adviser to security executives, he uses his unparalleled experience to build, measure, and mature security programs. He co-authors the BSIMM study and serves as co-editor of the Building Security In department of IEEE Security & Privacy magazine. John is regularly invited to speak and keynote. Resources: Latest BSIMM Aedify Security Concourse Labs Transcript [00:00:02] Welcome to AppSec Builders, the podcast for practitioners building modern AppSec, hosted by Jb Aviat. Jb Aviat: [00:00:14] So welcome to this episode of AppSec Builders. Today I'm proud to interview John Steven. John is the founding principal at Aedify, where he advises security product firms. John, before that you led ZeroNorth as CTO, and before that you were co-CTO at Cigital. Welcome, John. John Steven: [00:00:36] Hello, how are you? Thanks for having me. Jb Aviat: [00:00:38] I'm great, thanks for joining. So John, another thing that you've done is that you co-authored the BSIMM, so could you let us know what it is and how it can be a useful tool to AppSec builders? John Steven: [00:00:50] Yeah, it's worth clarifying, because it's frequently misunderstood. The BSIMM is the Building Security In Maturity Model, an observational study. We went out, and over a period of 11 years we've studied over two hundred firms, and asked the question: what do you actually do to build your security initiative and to secure your software? And it doesn't prescribe what to do, but you can use it to look at what firms that are within your vertical, or that look similar to you in terms of maturity, are doing with their time and money, and decide whether or not you want to replicate those behaviours or cut your own. Jb Aviat: [00:01:29] So you are interviewing CISOs, application security practitioners, developers, like every actor of the security game? John Steven: [00:01:38] Yes. Historically, the list has looked like what you described. What was interesting to us about the last two years of this study is that when we began talking with the CISO, they'd say, oh, you need to talk to the VP of Cloud on this, or actually you need to talk to the SREs and to delivery or to the VP of engineering. The people we had to talk to fundamentally changed over the last two years.
And that was a key finding that we wrote about this year: that the people doing the work of security were shifting from the security group to the engineering, digital transformation and cloud groups. John Steven: [00:02:20] And that's a big deal, right, because there have been these phrases that we've held dear for 10 years or more. You know, 'building security in' is something that we've said for two decades. Me and a colleague argue as to who said 'shift left' first, and we've narrowed it to around November of 2001, when we first said it. It was a long time ago. The other thing we say is that security is everybody's responsibility. Every developer, every engineer, every operator needs to think about it. And we've been harping on those things forever. And what we see is, now that engineers, now that SREs, now that operators are taking a really first-class citizen role in security, people are taking that 'security is everybody's responsibility' to heart. And in fact, who makes up a security initiative has now changed. And that's a really big deal. Jb Aviat: [00:03:08] Yes, it is. And so a trend that we have seen over the past two years is that QA testing moved from dedicated teams towards the hands of developers, and they are now writing their own tests and then monitoring their own deploys, rolling back if necessary. And so what you describe about security is following the same trend, right? So the teams are now starting to own security by themselves. John Steven: [00:03:35] Yeah, and we see what we call engineering-led security initiatives, where engineers are not only acting as security champions and participants in a program, but the owners of practice areas and the drivers of the program. So it's not uncommon in some organisations, particularly ISVs that are more mature, for them to have a Product Security Lead or a Chief Architect who has full purview and responsibility for security, and for those people to do the things that you'd expect the security group to do prior: pick defect discovery tools, tune those tools and drive to a secure coding standard, you know, generate and administer a training program associated with those standards and those tools, you know, and build security blueprints and so on and so forth. Jb Aviat: [00:04:22] And so you mentioned shift left. So now, what I understand is that you are not advertising shift left anymore, due to this change in the industry, now that security is meant to be done by the people that are actually conceiving and building the things. John Steven: [00:04:41] With the benefit of time, anything will look wrong, I suppose. So, you know, when we talked about shift left, we were thinking about all of those organisations that use spiral or iterative development, or even worse, waterfall. And, you know, we would talk about: look, we can pentest your software, you can apply testing to your software, but wouldn't it be better if you moved earlier in the lifecycle and found those bugs as you were developing them, so that they were easier to remediate? And that was the basis of shift left, and everybody cited the Rational study, and it's cheaper to fix things earlier, and yada, yada. You can see why that's a valuable precept. But think about how orchestration platforms and how software delivery have changed over the last five to seven years. We're using Kubernetes, you know, we've changed the way virtualization happens. We're layering on top of Kubernetes things like Istio.
More and more of the way we deliver software is becoming code, with the whole infrastructure-as-code movement and the whole delivery and pipeline orchestration movements. What that means is that more and more of the stovepipes between build, test, deliver and operate are being broken down, so that a DevOps engineer can shepherd a greater percentage of the software lifecycle in self-service mode. I don't have to hand something over a wall to you; I can walk it further down the lifecycle pipeline myself. And even the bridge between dev and prod is becoming a softer wall than it has historically been. John Steven: [00:06:26] Cloud, open-source, all of these are self-service technology stacks that allow you, again, further control over a larger percentage of the lifecycle. And so what that means is that code is creeping right in the lifecycle. When you use Kubernetes and Istio configuration files, when you use infrastructure-as-code, cloud service provider configuration, what you're doing is you're driving that code right in the lifecycle, and saying more and more of the way I build, package, deliver and operate is going to be software-defined. So more accurate than shift left is maybe 'shift to where the code is'. And what we're seeing is that the code is shifting right. You know, my keynote at the BSIMM conference two years ago was 'shift right to do security everywhere'. And it was extremely aggravating to the attendees, because after two decades of moving to the left and trying to get closer to design and requirements... I mean, Laurie Williams out of North Carolina State has published a study that says that as much as 10, 11 percent of your code may be infrastructure-as-code, and that 30 percent of the churn, month to month, in your code bases is that code. So there's data-based evidence that says that that code is moving right. John Steven: [00:07:42] And so we must move right with it if we want to get earlier. And so this is really 'never rest': your security initiative needs to follow the trends in technology and respond with the same principles, get earlier, re-evaluate how those principles apply to the new tech stack. Does that make sense? Jb Aviat: [00:08:04] That's fascinating. So shift left, shift right, or shift to where the code is. But there is not only code, right? At some point we need to go beyond, because as careful as you are when you design a system, when you write your code, there may still be vulnerabilities left, or flaws of any kind. So monitoring the code is not enough. John Steven: [00:08:28] That's right. So when you talk about shift left or shift everywhere, you're talking about proactive, 'building security in' telemetry. The capability you're trying to build for your organization is to deliver better software with fewer security flaws. But to your point, it turns out, I know this is going to shock everybody: you're not going to deliver perfect software. Not the first time, not the tenth time. I think we can all conclude that software will have flaws in it. And so some organizations are saying: rather than infinitely iterating my security practices, taking on cost and taking on complexity, maybe I listen to those people in my organization who are focusing on speed of delivery and agility, and apply some of that same concept to my security initiative. What if, instead of slowing people down to build better software, I participate in their desire to deliver software faster and build resiliency into my security capability?
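An illustration of John's point that the code is shifting right: with infrastructure-as-code, the deployment environment itself is authored, reviewed and tested as code. Pulumi is an assumed example here, the episode names no specific tool; the resource and its options are illustrative.

```typescript
// Sketch of security posture expressed as infrastructure-as-code:
// a private, versioned, encrypted S3 bucket, declared like any other code.
import * as aws from "@pulumi/aws";

const auditLogs = new aws.s3.Bucket("audit-logs", {
  acl: "private",
  versioning: { enabled: true }, // tampering leaves a trail
  serverSideEncryptionConfiguration: {
    rule: {
      applyServerSideEncryptionByDefault: { sseAlgorithm: "aws:kms" },
    },
  },
});

// A misconfiguration here is a code bug: catchable in review or CI,
// which is the "get earlier by shifting right" idea John describes.
export const bucketName = auditLogs.id;
```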
And that speaks to what you're saying: you not only have to proactively find defects and fix them, you have to observe potentially malicious or vulnerable behavior and do something that will make you resilient against that exposure. John Steven: [00:09:48] So people are saying: if I can combine a 'building security in' capability with a resiliency capability, I'm going to have a much more robust security program. And instead of my costs becoming infinite on the 'building security in' side, I'll have a balanced approach, where I will do the best job I can to deliver, and I will have a very confident ability to respond when I get risk telemetry based on behavior and operations, and I'll pick where I'm going to solve a problem. Because, you know, when we cite that cost model, we're sort of oversimplifying. There are definitely problems that are cheaper to find based on their production shadow than in requirements. And so having resilience, where you can redeploy in 30 minutes based on what you observe, is terrific. And what this has driven for security initiatives is a technology challenge: how do I combine my 'building security in' telemetry, which comes from legacy tools like static analysis, dynamic analysis, composition analysis tools, with my observational tools that are post-deployment, operational? John Steven: [00:10:58] And most importantly, how do I inventory the bits and bytes that are running and map them to the bits and bytes that created them, both in terms of the artifacts that go into creating them and the pipelines that create them? How do I tie together the people responsible for operating these pieces of the infrastructure and the people developing and delivering those same pieces of infrastructure? And then, how do I know what my code looks like in my service mesh, in my network, and what those things and identity look like? And so there's been a whole set of technologies, and this is a space you guys play in during the day, where firms are trying to help organizations understand how to tie those disparate pieces of telemetry together, so they can see the full picture and then choose how they're going to decide to respond to risk. Are they going to look at operational data? Are they going to look at data from the software development lifecycle? Or are they going to combine pieces of telemetry and take an action based on that? Jb Aviat: [00:12:00] Yes. So I'm aware of tools that do that to the left, on the code part, so I'm sure you have GitHub or GitLab in mind, when we mention gathering data at the code level or at the CI level. So those companies, GitHub, GitLab, they are, every day, more and more like security vendors, because they offer more and more amazing security features, and are very well placed to do so. But on the other hand, for the monitoring, the runtime, for the production path, do you see tools that manage to aggregate this information from the left path and from the right path? John Steven: [00:12:35] So GitHub and GitLab are definitely the 800-pound gorillas in this space. Both of them, in my opinion, are doing a great job defining the bones of a security framework for these engineering-led initiatives. They're saying: you need defect discovery capabilities, so we'll help you plug those things into your pipeline, we'll route the vulnerability or defect data to the right developers, we'll track those change requests and track DORA metrics like time to fix. They're doing a great job on that platform and the scaffolding. Those platforms coalesce around code, right?
They're SCM platforms, right? So they're always going to do a better job on the builder side. Some of them are introducing features that speak to proto-operational stuff, like security research in and out, like Bugcrowd: security advisories go out, crowdsourced defect or vulnerability data comes in. They're not credible yet, in my opinion, on the operations monitoring telemetry side. You know, obviously there's a bunch of vendors that handle that. I mean, there are vendors that handle it from an aggregate telemetry perspective, the ZeroNorths of the world; there are people that handle it from the testing perspective, like IAST vendors. John Steven: [00:13:46] There are certainly people like Sqreen that handle it from the RASP perspective and the instrumentation protection side. You know, what I have said about that, to investors, to buyers, is that a lot of these technologies that are doing the aggregation, that act as sort of competitive peers to GitHub and GitLab on the aggregation side, are pretty early days. Right. And the challenge with that is: how many security tools are there? You know, there's massive fragmentation on a legacy stack. Gartner thinks you need 10 to 30 tools on the cloud stack to get a clear picture, and that's essentially just one cloud. You know, there isn't any sort of de facto standard reporting format for these vendors to aggregate on. Right now, people are spending a lot of time either using Series A, Series B maturity technologies and making progress in that space, or building their own. And in fact, in an adjunct study that I did, which was not published, but there was a compendium to the BSIMM called the DevSecOps study, we found that a third of BSIMM firms have built their own aggregator and have tried to plug it into their particular Frankenstein tech stack of GitHub or whatever. And shockingly, those firms that have done that have spent on average eight to ten million dollars building defect and vulnerability management slash aggregation technologies. So this is a really interesting space, it's sort of at the nexus and fulcrum of your capability to provide resilience, but it's a space right now where you have to pick a vendor. You could build your own, I don't think that's the right move, potentially, but you have to pick a vendor and kind of co-evolve with them. Jb Aviat: [00:15:28] So sometimes you have new vendors. We did Y Combinator in 2018, and we interviewed like 150 different companies during our batch. And we had an idea, and we found that so many companies had built it already that we had a doubt, like: is everyone building that in-house? And that's how we pushed our playbook capability. But that's something that was so widespread, and it feels so far away from the core business of those companies, that it was really hard for us to believe that so many companies had built it already. So I perfectly understand what you mean. And each time, as a security vendor, when we are exploring a new feature, we always find customers that have built it already; no matter what the complexity, you have some specific needs that are not addressed today in the industry, and a lot of companies are rolling their own. So, a third: that fits with what I've seen. Intuitively, I would say the same thing. John Steven: [00:16:26] At Synopsys, one of the things that I did was sort of manage the acquisitions of the portfolio a bit.
And one of the spaces I looked at pretty aggressively was RASP. And of course, at the time, three years ago, RASP and IAST were more intertwined than they are today. And still, vendors intertwine those notions. One of the things I'll say about this DevOps dichotomy is that you do have a specific and hard choice to make, and you see people struggle when they try to hedge it: it does fall on one side or the other. When you build telemetry technology - and you guys probably know this - you have a choice as to whether to make it fast and robust, or to make it thorough and give you good telemetry on provenance and other things that help you debug. In other words, you can build a developer tool that provides visibility, or you can provide an operator tool that hardens and provides audit capability. It's extremely hard to build something that does all of those well. You have to pick: fast and hard, or slow and visible. And trends like IAST or similar that try to drive, you know, sort of visibility into production, I think are challenged because, one, you know, they don't end up fulfilling...…
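For readers who want to picture the "operator tool that hardens" side of this trade-off, here is a deliberately naive sketch in Python. It is purely illustrative - a toy, not how Sqreen or any real RASP works - showing a WSGI middleware that observes every request and blocks an obviously suspicious query string.

```python
# Toy illustration of in-app instrumentation and protection: a WSGI
# middleware that inspects each request before the application sees it.
# Real RASP products hook much deeper (database drivers, templates,
# deserialization, etc.); the pattern and names here are hypothetical.
import re

SUSPICIOUS = re.compile(r"(union\s+select|<script)", re.IGNORECASE)

class NaiveProtectionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if SUSPICIOUS.search(query):
            # The "fast and hard" path: block and emit telemetry,
            # instead of tracing provenance for a developer.
            print(f"blocked {environ.get('REMOTE_ADDR')}: {query}")
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked"]
        return self.app(environ, start_response)

# Usage: wrap any WSGI application, e.g.
# app.wsgi_app = NaiveProtectionMiddleware(app.wsgi_app)
```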
In this episode of AppSec Builders, I'm joined by New Relic Principal Engineer and AWS Serverless Hero, Erica Windisch. Erica has decades of experience building developer and operational tooling for serverless applications. We discuss all things serverless, including why you should care about serverless security, designing app security when migrating to a serverless environment, how to scale your app security with serverless, and much more. About Erica: Erica is a Principal Engineer at New Relic and previously a founder at IOpipe. Erica has extensive experience in building developer and operational tooling for serverless applications. Erica also has more than 17 years of experience designing and building cloud infrastructure management solutions. She was an early and longtime contributor to OpenStack and a maintainer of the Docker project. Follow Erica on Twitter and Linkedin at the below links: Twitter Linkedin Resources: Transcript for Serverless Security with Erica Windisch

[00:00:02] Welcome to AppSec Builders, the podcast for Practitioners Building Modern AppSec hosted by JB Aviat.

Jb Aviat: [00:00:14] Welcome to this episode of AppSec Builders. Today I'm proud to receive Erica Windisch; we will discuss serverless and serverless security. Welcome, Erica.

Erica Windisch: [00:00:24] Hi.

Jb Aviat: [00:00:26] So Erica, you are an architect and principal engineer at New Relic, you are also an AWS Serverless Hero, previously you were a founder at IOpipe, and before that you were a security engineer at Docker. Right?

Erica Windisch: [00:00:41] Ah, correct, yeah.

Jb Aviat: [00:00:42] So thank you so much for joining us today, Erica. I'm really excited to have you as a guest today.

Erica Windisch: [00:00:50] Thank you for having me.

Jb Aviat: [00:00:51] So, Erica, as an AWS Serverless Hero, I guess you know almost everything about what's happening in the serverless world. Before we dive into some AWS specificities, maybe you could remind us what serverless is and how it differs from the traditional world, especially from a security standpoint?

Erica Windisch: [00:01:14] Absolutely. So, I mean, my background, it's not just Docker, it's building OpenStack, it's building web hosting services. And, you know, this is an evolving ecosystem that, I mean, in the 2000s was, you know, as simple or as hard as taking your content and uploading it to a remote server and running your application, to as complex as running your own servers. Right. And these, of course, are options that are available to you now. But increasingly, developers are moving towards DevOps. They're using containers. They are finding that CI/CD and deployments and all of these things are useful tools for organizations to move quickly, rather than operating physical machines as pets, as we would call it, versus cattle - which, as a vegan, is probably not the best metaphor. But, you know, over this time, we've been increasingly going higher level and operating and deploying and building at higher-level layers. And serverless is that highest layer, in a sense, where rather than building a microservice - shipping a service that runs on a VM, in a container, on a host that you have to manage and operate, even if that's part of a larger Kubernetes cluster -

Erica Windisch: [00:02:33] instead, you just take your application and you give it to your cloud provider, and your cloud provider runs it for you. There's a lot of advantages to this, largely that the platform is fully managed for you, to a large degree.
You know, you don't have to maintain operating system patches. You don't have to maintain kernels. You don't have to do anything other than operate your application. And really, the biggest disadvantage to this is that you do lose control of managing some of these pieces. But for most users, there's a benefit and a gain to not having to operate components that are not mission critical. Or, I mean, arguably they're mission critical, because your applications are not going to run without a kernel of some sort; however, that kernel can be tuned, it can be optimized, it can be hardened, and it can be done by Amazon rather than having to make that your problem, because you and your organization often may not have the expertise or the time to invest in having the same level of security that Amazon can provide out of the box.

Jb Aviat: [00:03:36] Yes. So that's the ability for users to focus more on what they know, their business strategy, rather than their infrastructure, rather than their server configuration. From this point of view, you focus on what you do best, and the cloud provider does what it knows best. Right. So that's a lot of advantages from a security standpoint, because, as you said, everything that is maintenance, like security updates, et cetera, is delegated to the cloud provider, and it's not your responsibility anymore. So is that the best thing, from a security standpoint, migrating to serverless?

Erica Windisch: [00:04:14] So I will add an additional caveat here, which is that, I mean, serverless is a concept. There are multiple products that provide serverless capabilities, AWS Lambda being one of the most popular, and S3 arguably being one of the first serverless products - and many users are already using S3. So from a certain perspective, you are already using serverless services. And S3 has a minimal attack surface, but there are also large attack vectors: potentially, you could leave your buckets open.

Erica Windisch: [00:04:46] I think that actually just today there's big news about this app called Parler, this alternative to Facebook run by right-wing conservatives. And what happened there is that they left S3 buckets open, apparently, and they were in the middle of a shutdown as well, and their services were compromised. One of the things they've done there is having misconfiguration of their applications. They rely a lot on other serverless services, such as Okta, which they were apparently running a free trial of, and they were removed from that service, and then they were in a situation where people were compromising their services because they didn't have those services available. Now, this is a particular case where they were denied service over acceptable use policies, for what I consider pretty reasonable reasons. But the point kind of stands, in a way: here is a company that was relying a lot on some of these serverless services, and they found themselves still at the mercy of security vulnerabilities despite doing that. And in some ways, it opened them up more to being disconnected - having Twilio disconnect them, having all these other point solutions that were arguably serverless services shutting them down - because they relied heavily on platforms which they were no longer allowed to use.

Jb Aviat: [00:06:06] So your point is that using serverless puts you at risk of the solution provider?

Erica Windisch: [00:06:11] No, not necessarily.
No, actually, that's not the point I'm trying to make, so much as: they were hacked before they were shut down, before these services were removed. They were using serverless services and they still got hacked. Right. So the point is more that serverless itself doesn't ultimately protect you from application-level compromises. Right? It does protect you from some of the infrastructure-level compromises. It doesn't stop you from other attack vectors. Yes, it is true, it doesn't protect you from being bad people and getting yourself kicked off of services. But it also shows that you can use some of these services that are supposed to provide you third-party security controls, and they can still fail you.

Erica Windisch: [00:06:53] Yes, I guess it's multiple points. Obviously, they made a lot of really critical mistakes, both technologically as well as politically.

Jb Aviat: [00:07:03] So basically, using serverless is not perfect. You can still make configuration mistakes, security mistakes, at various places. You mentioned also application security, which is not prevented by the fact that you are using serverless, because the code you are running is very similar to what you were writing in a regular application.

Erica Windisch: [00:07:26] Exactly. You're still building applications, so application security is still essential, right? If you're relying on something like Okta or Auth0, it's very easy to misconfigure those and to use them incorrectly. You know, it's possible to have Twilio and not have two-factor working correctly, or not have it verify phone numbers. Apparently, you can have S3 and you can leave your buckets open. Right. And that is a large part of my point.

Jb Aviat: [00:07:53] Yes, absolutely. One of the opportunities I would see with serverless is that usually you are starting sometimes from scratch, or at least you need a new CI, you need a lot of new things when you are moving to serverless. So that's also a chance for you to use infrastructure as code, to use higher-level deployment frameworks, for instance. And so that could be a place where you can bake in some security controls, maybe review your Terraform files or your CloudFormation files to ensure that you don't have such issues. Are you familiar with such practices, Erica?

Erica Windisch: [00:08:29] Yeah, there are definitely companies - a lot of the larger companies, actually - that use their own custom serverless application frameworks where they bake in a lot of these constraints and security controls for everybody that is using that framework. I do see that to be a pretty common use case, especially, again, at larger companies. But even with the smaller companies, I think that CI/CD is a place where you can slip in some configuration, whether that's, you know, serverless configuration, or even if it's potentially Kubernetes. I don't think it's strictly related to serverless. I think that with serverless, you have a lot more control over your application via configuration, right? Just because, I mean, there's less infrastructure. So I guess it goes both ways, right? You have less control and more control. Like all the knobs that you can turn in configuration: arguably there's fewer of them, but they're more applicable to your application specifically, rather than knobs that are specific to infrastructure. Like, you're not turning knobs that control your IO in general.
Although on Lambda, you can control how much memory you get, which does control how much IO you get and how much CPU you get. But that becomes more of a billing function. It's: how much am I willing to pay for the service, and how much performance am I going to get out of what I'm paying for? But I think that's a little bit different from the level of control over whether you are running a certain VM, or a different operating system, a different kernel, things like that, which are out of your control with serverless applications.

Jb Aviat: [00:09:58] Yeah. And so, to me, I'm actually not sure that serverless means less ops; as you said, it's a different kind of control. Because if you are a developer, before, you were doing zero ops: all the orchestration you were doing was at, I don't know, the API or microservice level, maybe the application level. If you move towards serverless, you might suddenly start to use things such as Step Functions that will orchestrate how your functions communicate together. And this is ops that a developer starts doing that they weren't doing previously. So that's also something that is kind of new.

Erica Windisch: [00:10:33] I think that moving away from infrastructure operations to application operations - not operating the hardware - gives you more time to focus on operating your application: making sure your application is working, getting your application tests to work, building out more functionality in your application. All of this means that you're using your tools more for application support rather than for infrastructure support.

Jb Aviat: [00:10:58] Yes, I agree. And, you know, there is the typical Venn diagram where you see security, operations, and developers. And to me, if we consider serverless, things are getting more intricate, because you have actually a very different kind of ops when you are moving to serverless. And so some of the things that could previously have been the responsibility of operations could now be falling into the hands of the developers. So, for instance, who is responsible for defining the privileges that a given function should have in terms of IAM and cloud permissions? Is it the developers, who know exactly what the code does and are writing, I don't know, one function or several functions per day, or the ops, who actually are not aware of the business logic? I don't know if you see something similar.

Erica Windisch: [00:11:48] Yeah, I see a lot of organizations creating roles and policies organizationally and providing those to developers, and the developers that need these policies configure things this way. And for a lot of organizations that works. It does create some challenges around the CI/CD platform. And it can create barriers sometimes, because if you want to deploy serverless applications and nobody has yet built your serverless role or authorized it for use - for Lambda in particular, if they don't create the necessary roles and they don't allow you to create those functions with the right roles and permissions - it becomes a barrier to adoption within your organization. That said, there are advantages to locking down things like that organizationally. And I think that a balance has to be struck between, you know, enabling innovation in your company and this top-down, operations-level security that happens, again, in a lot of companies. And it's a balance. It's not necessarily an easy balance to make.
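To make the least-privilege roles Erica mentions concrete, here is a minimal sketch in Python with boto3. The function's purpose, the table ARN, and the policy name are hypothetical, not from the episode, and in practice this would more likely live in Terraform or CloudFormation, as discussed earlier; Erica's answer continues below.

```python
# Minimal sketch: a least-privilege IAM policy scoped to what a single
# Lambda function actually needs (read/write one DynamoDB table),
# instead of a broad role shared by every function. All names and the
# ARN are hypothetical.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="orders-fn-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```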
I think that a lot of organizations are very set in their ways because they're not expecting serverless. It is more and more common - like, I know at New Relic it's something that more and more teams are looking at using - but it's still something that is challenging to adopt as well, just because you need to have your CI/CD system set up correctly, and you need to have team members who are familiar with building things serverlessly. It is a different paradigm, and it's a challenge, especially, again, for larger organizations, or depending on how you structure your operations.

Jb Aviat: [00:13:28] Yes, there is a balance between security and usability, so it's not a new thing. Obviously, from a security standpoint, you would think that the principle of least privilege is super important and that's something that you should keep in your Lambda, but probably not to the point of having one IAM role per serverless function, because I guess that makes the whole thing super hard to scale, and I don't think IAM is a good way to manage hundreds of roles for your serverless deployment.

Erica Windisch: [00:13:57] Yeah, I think it becomes challenging, though, because a lot of serverless applications do not have really great input validation. That, of course, varies according to each language and according to each developer. But most of the code written for serverless, or Lambda in particular, is Node and Python, and these are dynamic languages; they are not statically typed. Minimal input validation is often given for these functions. So, you know, having open IAM permissions does also potentially mean having invalid input passed to these functions, which means that you probably want better input validation, depending on how open your IAM permissions are. I mean, there is a good argument that you should have good input validation and strict IAM, but we also live in the real world, and we recognize that doesn't always happen.

Jb Aviat: [00:14:47] Yeah, too much complexity is also an enemy of decent security. But that's a good point you're touching on, because of the scale you have when you deploy serverless: instead of managing one code base, you are managing maybe ten or fifty code bases. And so there is a difference in terms of scale that you didn't have previously.

Erica Windisch: [00:15:09] So, you know, I would say that serverless enables you to build scalable applications, and what is good about this is that rather than your application falling over, it will scale - and it will also charge you. So it does open up some potential for denial attacks. Serverless tends to be very inexpensive, so it's not usually a large bill, but it is possible to force a serverless application to scale, right, almost like a denial-of-service attack. But instead of denying the service, you are causing a denial of wallet, because you're consuming so many resources that you're just racking up their billing: the service is going to scale, it's going to support your requests, it's just going to keep charging more and more. S3 has the same problem. Right.

Jb Aviat: [00:15:57] Denial of wallet issue. I like it.

Erica Windisch: [00:16:00] Yeah, but I did forget the original question.

Jb Aviat: [00:16:04] So it was about the scale. And I think challenges such as, I don't know, vulnerable dependencies, for instance, are tractable when you have a few code bases.
But if you multiply those code bases by 20 or 50, that's much harder to track at that scale.

Erica Windisch: [00:16:20] So I think the challenge for me is not necessarily the code bases, but the deployments, because each serverless function is a deployment of code, and each of those deployments is an immutable artifact of that code and a snapshot in time. If you are building your application and you don't have good CI/CD, that code could be out of sync with what is in Git. You might have code or applications that are working well for you. And here is, I think, a big difference between traditional applications and serverless: if you had a microservice that was serving, say, 15 REST endpoints and you replace it with 15 serverless functions serving one REST endpoint each, you now have 15 deployed services. And if one of those REST endpoints doesn't need updates in a year, it might fall behind the other code bases just because it's not getting those updates. So what some organizations do is they force deployments. You know, they might do minor repairs and…
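As a sketch of how one might surface the drift Erica describes, here is a hypothetical audit script using boto3 that lists deployed Lambda functions with their runtime and last-deployed date. The staleness threshold is arbitrary, and the approach is illustrative, not something discussed in the episode.

```python
# Minimal sketch: flag Lambda functions that have not been redeployed
# recently, since those are the ones whose runtimes and dependencies
# silently fall behind. The 180-day threshold is arbitrary.
from datetime import datetime, timedelta, timezone
import boto3

STALE_AFTER = timedelta(days=180)
client = boto3.client("lambda")
now = datetime.now(timezone.utc)

for page in client.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        # LastModified is a string like "2021-01-10T12:34:56.000+0000"
        modified = datetime.strptime(fn["LastModified"], "%Y-%m-%dT%H:%M:%S.%f%z")
        if now - modified > STALE_AFTER:
            print(f"{fn['FunctionName']}: runtime={fn.get('Runtime')}, "
                  f"last deployed {modified.date()}")
```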
In this episode of AppSec Builders, I'm joined by Datadog CISO, Emilio Escobar. Emilio's extensive experience at Hulu and Sony Interactive and his contributions to Ettercap all provide a unique perspective on team maturity, managing complex systems across the enterprise, leadership insights, security ownership, and becoming the CISO of a public company. Follow Emilio on Twitter and Linkedin at the below links: https://twitter.com/eaescob?lang=en https://www.linkedin.com/in/emilioesc/ Resources Ettercap: https://www.ettercap-project.org/ https://github.com/Ettercap/ettercap Book Recs: Grit: the Power of Passion and Perseverance How Finance Works How to Win Friends and Influence People Episode 3 Transcript

Jb: [00:00:02] Welcome to AppSec Builders, the podcast for Practitioners Building Modern AppSec hosted by JB Aviat.

Jb: [00:00:14] Welcome to the third episode of AppSec Builders. Today I'm proud to receive Emilio Escobar, who's CISO at Datadog. Welcome and good morning, Emilio.

Emilio: [00:00:24] Good morning. Excited to be here. Thanks for having me.

Jb: [00:00:24] Thanks a lot for joining us. So you recently joined Datadog as CISO, but you have broad experience as a security leader: at Datadog today, but before that, Hulu and Sony. And I think you are also a maintainer of a famous tool for security geeks like us, which is Ettercap, right?

Emilio: [00:00:48] Yeah, that is correct. I'm one of the three main maintainers of it, and we've been doing it for about nine years already.

Jb: [00:00:56] Do you want to share a bit about what Ettercap is? I used it regularly in pentests; that's an amazing tool.

Emilio: [00:01:02] Sure. Ettercap has been around for a long, long time, I think since 2006, and it had slowly died down around, like, maybe two thousand eight, two thousand nine. But it is a man-in-the-middle attack tool. It's leveraged by a lot of pentesters for doing man-in-the-middle attacks for their customers and trying to obtain credentials for services like SSH, Telnet, and what have you. How I got started with it was that when I worked at Accuvant Labs, I was a pentester; one of my colleagues was using it, or trying to use it, for an engagement that he was working on. And he was running into some bugs. And he reached out to me and asked me if I knew how to code in C. I said yes. And he's like, I'll give you five hundred dollars for each of these two bugs that I'm running into, if you solve them. So, looking at the code, I was able to fix the issues that he was running into. I never got that thousand dollars, by the way. But what that started was the conversation between him and me - this is Eric Milam, who I believe is at BlackBerry now - about, like, hey, should we actually resume the support for Ettercap? We wanted it to work well on macOS. We wanted IPv6 support. We wanted all these new features that it wasn't supporting. And we reached out to ALoR and NaGA, the original authors, and they were gracious enough to allow us to run with it as long as we kept it open source. Right. And that was the commitment that we gave them. So fast forward nine years: we've added a few versions. Now, I'm less involved in the coding because I just don't have the time for it, but I'm surrounded by two people who are active. So feel free to check it out on GitHub and submit pull requests, issues, or use it and give us feedback.

Jb: [00:02:51] Amazing. Yes, great tool, I used it a lot. So after being a pentester, you went to Sony, Hulu. So two companies in the entertainment world.
Emilio: [00:03:34] Yeah. Yeah. So I actually met PlayStation during my consulting days, right, through some engagements that we did with them, and a few years later they reached out to me and said, hey, we're looking to grow the team, we're looking to grow the application and product security side of the house. So I joined as employee number two for that discipline. And we were able to grow it to a pretty significant team. We were able to build capabilities also out of the Tokyo office, out of Europe. So it was a pretty good program. The team is still growing, is still active. And it was a lot of fun. But it was the first time that I was on the receiving end of attacks from groups like Lizard Squad and Anonymous. Right. So PlayStation is a big target, and things like fraud and fame and all those things were a lot of the factors that we had to solve for. So a really, really interesting set of challenges that gaming faces, right? Uptime is everything, and we have a very opinionated customer base. Right. Like, gamers care and they will let you know pretty quickly, I guess.

Jb: [00:04:38] And yes, Sony has been through a couple of major leaks. Were you in the company when that happened? It must be insane to live through that from the inside.

Emilio: [00:04:46] I wasn't part of PlayStation during their big outage; I supported them as a consultant and joined after as an employee. And Sony Pictures, they're a separate entity, right? So we collaborate, but for something like what happened to them, it's a thanks-but-no-thanks kind of approach from them. Right. And rightfully so. And I think they had the right support from the FBI and everyone else involved in their investigation. So we only supported them in building a discipline and a practice, but otherwise it was: step out of the way and let us do what we do, because they have a pretty good team there as well.

Jb: [00:05:16] Yes, OK, interesting. And so then it was Hulu. When we first met, Emilio, you were at Hulu, and I guess that there you had very distributed architectures, right? Would you mind sharing a bit about the context at Hulu?

Emilio: [00:05:32] Yeah, certainly, yes. I joined Hulu to grow and build a security practice there, with a very heavy emphasis on product development, so SDLC security. How do we enable velocity? Time to market is everything, you know, obviously, for a streaming platform. When I joined Hulu, we were working on the live TV product, so uptime became even more of a concern. Right. Video-on-demand, if you can't watch a video now, you might try in an hour. But live TV, if it's the Super Bowl or the World Cup or what have you, you want to watch it when it happens and not sometime later, unless you purposely record it because you can't watch it when it's live. So uptime was a big concern. So joining Hulu, I discovered the complexity of the architecture, right. It was a complete microservice environment. At PlayStation, they were working towards microservices and segmenting things into smaller types of workloads; Hulu had that built. So dealing with that complexity was something that I wasn't faced with at PlayStation, and it just required a different approach to security, right. Everything was automated. Hulu had a platform-as-a-service framework built by Hulu, which was really interesting, where developers, with a git push, can push to production and the containers will get built out and everything. So I thought all the right things were in place. We just had to get security in them to make sure that things were done appropriately.
But we had to rethink the whole legacy approach to security, of being a gate, doing code reviews, and, you know, how do you do static analysis? How do you do dependency scans and all those things? Because, you know, a developer can git push any time, and they were doing over three hundred deploys a day to production. Right. So it was a lot to catch up to.

Jb: [00:07:14] And could you give us some numbers so we can see the scale of that, like how many developers, applications, repositories, if you have that in mind?

Emilio: [00:07:23] Yeah, yeah. If I remember correctly, and I'm sure it has changed since, I think that towards the end of my Hulu tenure we had over 600 developers, and I believe the number was around twenty-three hundred microservices. Now, whether that's the right number or not, that's a separate conversation. Right. But that was what we were dealing with, and languages and frameworks were all over the place, right. We wanted developers to be creative and effective in whatever language they felt the most comfortable with. So we had to support JavaScript, Python, Golang; I believe we had some Scala and Node.js and what have you. So it wasn't a centrally standardized environment where everyone was coding Java

Emilio: [00:08:05] and using the Spring framework and all those things where you can get a little bit more commodity out of them - we had to scramble a little bit.

Jb: [00:08:12] So, I understand. And as a CTO, it's a tough balance to give a lot of autonomy to people, but also you need to keep a certain degree of consistency in your deployments.

Jb: [00:08:23] So I'm curious to understand: OK, a lot of different languages, but I guess this also means a lot of different frameworks, a lot of different coding styles and practices, right? That's a nightmare for a security owner.

Emilio: [00:08:37] Yes, it is. Yeah, so I think, you know, we had to rely on the developers being strong at what they're good at - coding, right - so we had to leverage that partnership. You know, all these frameworks obviously have different attack surfaces, right. So we had to find ways to put security in place in a manner that wasn't disruptive, that didn't impact production, that was easily adoptable, right. So, starting with the "why", making security the default, right: I always tell teams that if you have a developer choosing between defaults and security, the default is always going to win. So why not make security the default? So we had to chip away at that mindset and approach, right. We had to leverage as much of CI/CD as we could, do things as infrastructure as code, leverage security controls that you can load through a library or through infrastructure as code or some sort of automation. So, a lot of self-serve: we wanted developers and teams to serve themselves security, and we had to build paved roads for them to have that enabled for them. But then, on the back end, to your point of how do you maintain some level of consistency and priority towards quality and security: we made big strides and efforts into tying security in as a quality entity.

Emilio: [00:09:53] Right. A lot of times you see security and quality being two separate worlds, and they approach, using different processes and different language, what I consider to be the same problem, right. If I'm a consumer of a service, whether it's a functional bug or a security bug, it still impacts my experience, right.
So, I united them to the point that we were reporting security issues to the executives and stakeholders as part of the quality conversation, right. And we used the same language, as in escaped defects, recurrent defects, and tracked those, because we wanted to leverage the already established interruption process QA had with developers for security concerns as well. And that got us a lot of wins there, where we're not just saying, hey, we want to do this because of security; it's like, here's a quality element to it that everyone cares about. As a developer, you don't want to be the reason why there's a bug in production that people complain about on Reddit or whatever. You have pride in the work that you do. So I think leveraging that helped us a lot with security.

Jb: [00:10:55] Super interesting. But I guess when you have a bug, it could be impacting the customer experience, like, I don't know, they can't start a movie, or it could be a security issue. In the end you want both to be fixed, but the available developer time is still limited. How did you prioritize security versus quality? I guess you still have to make that call somehow?

Emilio: [00:11:17] Right, yeah, and that's exactly why I thought combining those two problems into the same conversation helped, because then we can actually do the trade-off conversations in one forum, versus having silos for security or quality issues and not being able to combine the two of them. So, yes, we have to be very pragmatic: if it's a security issue, how easy is it to exploit? How likely is it to be exploited? What's the impact of exploitation? Right. And Hulu being very strict about the quality of the product, even if it was a security issue that would lead to a bad experience for a consumer - whether they couldn't start a movie or a show, they couldn't save something to DVR, or whatever core functionality the product has - we would still treat it as equally important as a functional issue, right. So how the bug manifests itself became less important than the impact of the bug on consumers, right. So that put, again, security and quality in the same conversation, and then we would have the trade-off talks. If it was a functional bug that was being seen by 68 percent of the consumer base and a security bug that was only being presented to 3 percent of the consumer base, then that was a no-brainer, right: we would choose the functional bug over the security bug. So that's where pragmatism comes to play.

Jb: [00:12:36] Right, makes sense, makes sense. And so with such a large distributed architecture, you have a lot of simple, small pieces, but the overall complexity is insane, I guess. How did you manage to cope with that? Did anyone have a holistic vision of the system? How did you, like, enumerate two thousand services?

Emilio: [00:12:57] Yeah, yeah. It was definitely a lot of tribal knowledge, for sure. And that was a problem, right. Because, well, I think one thing is also to admit to the fact that security will never have the same level of understanding and visibility as the developers have of their own software and services. So this goes back to the mindset of why security is there, right. Security is there to help developers write secure code and secure and stable services.
But if you try and spend energy on security being able to see and understand one hundred percent of what's there, then I think you're burning a lot of candles on something that maybe is not going to drive a lot of results. It's good to have an understanding, but is it good to have one hundred percent understanding? I don't think so, because you can rely on the developer community of your company to give you that understanding, and empower them to make those decisions; just measure what security looks like for them. Right. So one example is around abuse of services. One of the things that we did was empower development teams to be able to block what they thought was malicious traffic. And the reason for that was, like, the security team was getting paged, let's say, at 4:00 in the morning, because some IPs were hitting a few services pretty hard. Right. And the question that we were always getting from developers is: is this a security concern or not? Is this attack traffic or not? And it always put us in a weird position, because we don't necessarily know how the service gets called. Like, yes, we have an idea, but we don't know it better than the developers who built that service know, right?

Emilio: [00:14:32] So we would always turn the question around to them and say, hey, based on the use cases that you've built into the service, and what you see - what P99 or normal patterns look like for you - what do you think? Right. And the answer would always come back: yeah, this looks like they're trying something weird that is not part of the normal flow. So the question then became: you block them, versus us blocking them for you. So we actually built those capabilities for them. And one of the team members on the Hulu security team built a service, because now we had to deal with the erroneous blocking of somebody who is a human doing something that was just a mistake. So my team built a service called "IsitblockedbytheWAF.hulu.com" that customer service could access internally and say, hey, this person is complaining; here's a description of what they were trying to do; are they actually being blocked? And they can actually unblock them from there. So we enabled the unblocking part as well. But ultimately, what that led to was teams making more informed decisions for the things that they fully own, and therefore reducing the need for security to be able to know one hundred percent of everything that's happening, because that's just unrealistic for a dynamic environment like the microservice cloud environments that Hulu is, and so is Datadog. So we're not here to cover all the ground. We're here to make sure that people can cover their own ground.

Jb: [00:15:53] Super interesting! And I guess as security teams, we are always looking to get a stronger connection to the developers and to the other teams. So the fact of giving them the power and ownership, choosing who to block, is amazing in that sense. But as I see it, I guess the teams were already owning the operations of the service, the availability, the performance, etc., right?

Emilio: [00:16:15] Yes.

Jb: [00:16:16] So you already need a pretty distributed model to make that work?

Emilio: [00:16:19] Yes, absolutely. Yes. That only works if your company has the philosophy of an "if you build it, you own it" type of mindset. Right.
So if the developers are just there to write code, and they push it and some other team is then responsible for the operational aspects of the service and uptime, then again, you're just creating silos of knowledge. I don't see how a developer can be a successful software engineer if the performance aspects of whatever that developer is working on are sort of like...…
In this episode I'm joined by Ksenia Peguero, Sr. Research Lead at Synopsys, for a discussion around frameworks and the foundational effect they have on the security of your application. We'll share concrete tips for upgrading your security through your framework, choosing the best framework for app security, performing a framework migration, and how to spot and fix security blind spots in your frameworks. Resources: About Ksenia Ksenia Peguero is a Sr. Research Engineer within Synopsys Software Integrity Group, where she leads a team of researchers and engineers working on static analysis and security of different technologies, frameworks, and languages, including JavaScript, Java, Python, and others. Before diving into research, Ksenia had a consulting career in a variety of software security practices such as penetration testing, threat modeling, code review, and static analysis tool design, customization, and deployment. During her decade in application security, she performed numerous engagements for clients in the financial services, entertainment, telecommunications, and enterprise security industries. Throughout her journey, Ksenia has established and evolved secure coding guidance for many different firms, developed and delivered numerous software security trainings, and presented at conferences around the world, such as BSides Security, Nullcon, RSA, OWASP AppSec Global, TheWebConf, and LocoMocoSec. She has also served on review boards of OWASP AppSec USA, EU, and Global conferences. https://www.linkedin.com/in/kseniadmitrieva/ https://twitter.com/kseniadmitrieva Ksenia Presentations: https://www.youtube.com/watch?v=Ku8mPXmX7-M https://www.slideshare.net/kseniadmitrieva/how-do-javascript-frameworks-impact-the-security-of-applications Additional Resources: Passport, Flask-Login http://www.passportjs.org/ https://flask-login.readthedocs.io/en/latest/ Sails CSRF protection https://sailsjs.com/documentation/concepts/security/csrf Express CSRF plugin https://github.com/expressjs/csurf Django / Rails security pages https://docs.djangoproject.com/en/3.1/topics/security/ https://guides.rubyonrails.org/security.html Ksenia's Angular linting rules https://github.com/synopsys-sig/tslint-angular-security W3C security WG https://www.w3.org/2011/webappsec/ Levels of vulnerability mitigation: https://image.slidesharecdn.com/javascriptframeworksecurity-amsterdam-191008173330/95/how-do-javascript-frameworks-impact-the-security-of-applications-7-638.jpg?cb=1570556143 Episode 2 Transcript:

[00:00:02] Welcome to AppSec Builders, the podcast for practitioners building modern AppSec, hosted by Jb Aviat.

Jb: [00:00:10] Hello Ksenia, nice to meet you.

Ksenia: [00:00:14] Hi, Jb, how are you doing?

Jb: [00:00:20] I'm great, thank you. So, Ksenia, you're a senior research engineer at Synopsys.

Jb: [00:00:24] You lead a team of researchers and engineers working on static analysis. Before Synopsys, you had a consulting career where you did penetration testing, threat modeling, and code review, and you are also a seasoned speaker at various app security conferences across the world, such as the famous OWASP AppSec. So could you tell us a bit more about you and what you enjoy in the AppSec field?

Ksenia: [00:00:49] Sure. Well, I come from an engineering background. I was an application developer in the gaming industry for about five years. And then I came to the United States to do my master's.
Ksenia: [00:01:01] And in the last year I got an internship with Cigital, a consulting company, as a security intern. And I never went back to development. That was an absolutely fascinating career, because as a consultant, as a security person, you always need to learn new things. So I did consulting for about seven years and kind of went up through the ranks to principal consultant. And then I pivoted and started to dig more into research, security research. And around the same time Cigital was acquired by Synopsys. So now I work at Synopsys, so pretty much with the same company, with the same people, with a different name, but now as part of the security research lab, as a security engineer.

Jb: [00:01:47] Super cool. You mentioned the gaming industry, Ksenia. So did you develop anything popular, famous?

Ksenia: [00:01:55] Well, that was many, many years ago, and I was developing games in Flash. Adobe Flash.

Jb: [00:02:03] Oh, my God.

Ksenia: [00:02:04] So, you know, the little match-three, Tetris-type games for housewives and people at work.

Jb: [00:02:13] All right. That could be a nice introduction: yes, that's how I got into security, by hacking Flash in the browser. But maybe not.

Jb: [00:02:24] OK, and so you are actually in the middle of a PhD thesis, right?

Ksenia: [00:02:29] That's correct. Hopefully closer to the end. But yes, in parallel with my full-time job, I'm also doing a PhD, and I'm working on, guess what, security research, and on framework security specifically.

Jb: [00:02:45] And so how do you feel academic research is helping application security move forward today?

Ksenia: [00:02:51] It's interesting. I feel like - because I have a lot of experience in the practical field, I hope - I feel that I bring a different perspective into academia, because a lot of the research that there is in academia, at least in the last 10 years, was focusing on exploits, on finding vulnerabilities. Which is great, because people in academia spend a lot of time, you know, finding those new vulnerabilities, new types of attacks, especially on more complex concepts, like crypto attacks, for example.

Ksenia: [00:03:25] But until about now, academia wasn't focused much on fixing the problems that they find. So with my background in security consulting, where we not only find the issues but we help developers to fix the issues, that's what I'm trying to bring into my research: how do we actually get rid of the bugs, and not just find the bugs?

Jb: [00:03:47] Yes, so basically more shifting left, right?

Ksenia: [00:03:51] Exactly. Yeah, exactly.

Jb: [00:03:53] So helping academia shift left, too - great outcomes here. So this extensive research that you've done on frameworks, you presented it recently at AppSec Cali. In your research, you found that some frameworks made it easier than others to introduce certain categories of vulnerabilities. So would you mind telling us a bit more about that?

Ksenia: [00:04:15] Sure. As part of my research, I was focusing on JavaScript, and I started with the client-side JavaScript frameworks, or template engines, and then I switched to server-side JavaScript frameworks, and I looked at different vulnerabilities.

Ksenia: [00:04:30] And the hypothesis was that if the framework actually has security controls or mitigations built in, then the applications would be more secure than if it doesn't. So it's kind of a naive idea.
But with the help of the categorization framework developed by John Steven, I divided the places where the mitigation can exist into different levels. So we start with a level zero, where there is no mitigation and code is vulnerable; oftentimes that happens when there is no framework in use at all, so everything is written by developers from scratch. Then we go into the next level of a custom function that the developer has written, then into a third-party library that the developer is using, then into a framework plugin, so something that works very tightly with a framework, and then the next level, where the mitigation is built into the framework. And then actually there is another level that I discovered throughout my work, which is when the mitigation is built into the programming language or platform itself. And of course, as you go closer to the framework, or closer to the architecture level, those vulnerabilities will be fixed, and it's less likely that they will actually appear in the applications. But we also need to remember another important thing that, again, I discovered by comparing the applications and running different security tools on them: it's not just the built-in mitigations, but also the defaults that are important. If something is built into the framework but not enabled by default, then developers may not even know it exists, or may not enable it, or they may disable it in a test environment and then it never gets enabled when the application transitions to production.

Jb: [00:06:21] Yes. And you actually proved, by analyzing actual applications, that - I think it was CSRF protections - were not enabled by default.

Ksenia: [00:06:32] Yeah, exactly. So I took several server-side JavaScript frameworks - Express, Koa, Hapi, Sails - and looked at which level each of these frameworks has the Cross-Site Request Forgery protection at. For example, Express and Koa have plugins, so it's an extra step: developers need to go find the plugin, turn it on, and enable it correctly with the correct settings. Versus Sails, for example, has it built in, but it wasn't enabled by default. So when I tested about 500 applications on GitHub and compared them based on the framework, I actually could see that the number of applications that have Cross-Site Request Forgery in Express, for example, is the same as in Sails, which wasn't what I expected. But when I was digging deeper, most often it was the case that in Sails that protection was not enabled; it was just set to false by default.

Jb: [00:07:34] So that's an interesting outcome, and our data at Sqreen concurs with that. One thing we have seen amongst Sqreen customers is that applications without frameworks are 7 times more likely to have vulnerabilities than applications with a framework. I'm a former pentester, and at the beginning of my career I witnessed how Ruby on Rails grew in popularity and helped popularize development best practices across the industry. So it really was a game-changer at the time, as Rails popularized MVC, templating engines, database migrations, object-relational mappers, convention over configuration. It wasn't perfect, but it was such a huge step forward that we really witnessed the quality of web applications changing. And so did you experience the same thing, like some frameworks drastically improving the security of some applications?

Ksenia: [00:08:33] Yeah, yeah, it's fascinating.
If we look at the OWASP Top 10 - I was researching Cross-Site Request Forgery specifically - if we look at the OWASP Top 10 in 2003 and 2009, CSRF was in the Top 10, right, higher up, like fourth place, then seventh place. And then it started to gradually go down, and in the recent one, it's not even there; it's not present. And the reason for that is that a lot of frameworks have CSRF protection enabled by default. And sometimes it's not that it's some sort of security feature that they built in - I mean, it is kind of on purpose, but it's just the way the framework is built. So, for example, if we look at .NET, ASP.NET, they have a view state that they save for every page; it's like a signature of the page, right. So if the content changes, for example if an attacker is trying to inject a request that doesn't have a CSRF token in it, then the request will not be accepted, just because the page was crafted by an attacker and it looks slightly different. So basically it's a CSRF protection, as long as that view state is signed so that the attacker cannot fake it as well. But yeah, some of the big frameworks, Spring Security, for example, have that enabled by default for all POST and DELETE requests. So since it's enabled by default, developers don't need to think about it, and it just stops being an issue.

Jb: [00:10:09] Yes, yes. The view state is famous indeed; it reminds me that there was actually a vulnerability in the view state implementation. You said it's signed, and I remember back in the day there was a padding oracle vulnerability in, I think it was Liferay, one of the .NET frameworks. And so basically you could generate or recover the signature for anything, and you had remote code execution just by managing to fake the actual state. That was a small one but a fun one. Padding oracle attacks are amazing in theory, and when you have one that works in practice, it's always a good achievement. So, yes, very good example. And you mentioned the levels of vulnerability mitigation by John Steven. That's a concept that I didn't know and that I discovered in your AppSec Cali presentation - really interesting - so I will share an illustration in the episode resources. I think it can help us categorize the frameworks, because you have frameworks that tend to be very simple, very modular, such as Express, Flask, or Sinatra. They have very little out-of-the-box protection for common threats, because they give a lot of freedom to the developer, and it's up to the developer to choose what they want to use and how they want to use it.

Jb: [00:11:32] So, amazing performance, because they do very little out of the box. And on the other hand, you have much more elaborate frameworks, such as Sails, Django, Ruby on Rails, that have much more out of the box. And so there are several ways to add security controls: relying on the team, so the team would push their own library to add their own controls - that would be level 1 according to the classification; a very well-known library, like Cerberus in Python or Joi in Node - level 2; or a framework plugin - level 3, etc. And your research showed that the closer to the framework the mitigation is, the ultimate being having it built into the framework itself, the better the level of security achieved.
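As an illustration of these levels, here is a minimal sketch in Python using Flask and the Flask-WTF extension, a plugin-level mitigation in the classification above; the route and settings are hypothetical, chosen only to show how one opt-in line protects every state-changing request. The conversation continues below.

```python
# Minimal sketch: enabling CSRF protection app-wide in Flask via the
# Flask-WTF extension. Assumes: pip install flask flask-wtf.
from flask import Flask, render_template_string, request
from flask_wtf.csrf import CSRFProtect

app = Flask(__name__)
app.config["SECRET_KEY"] = "change-me"  # used to sign CSRF tokens

# One line opts the whole app in: every POST/PUT/PATCH/DELETE request
# now requires a valid token, instead of each view opting in manually.
csrf = CSRFProtect(app)

@app.route("/transfer", methods=["GET", "POST"])
def transfer():
    if request.method == "POST":
        # Reaching this point means the token was present and valid;
        # a forged cross-site POST is rejected with a 400 before this runs.
        return "transfer accepted"
    # The form embeds the token that the extension will verify.
    return render_template_string(
        '<form method="post">'
        '<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">'
        '<button>Transfer</button></form>'
    )
```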
So if we assume that someone wants to pick a framework for a project: usually security isn't the main driver in deciding what piece of software you want to use. Security is only one dimension amongst others when you evaluate a framework. So how would you recommend evaluating the security of a framework?

Ksenia: [00:12:42] Yes, I wish security was important for developers when they pick it, from the start. Right. But of course, I mean, we choose a framework...

Jb: [00:12:50] I wish security was more important for framework developers, Ksenia. But to be fair, it's more and more true.

Ksenia: [00:12:58] Right. But yes, when we choose the framework, we look at performance, at functionality: does it actually solve our problem, is it an MVC framework, is it a REST framework - like, what is the problem we are trying to solve - and then, is it popular, is there documentation? And then somebody will say: oh, what about the security of the framework? And I actually have a story about that.

Ksenia: [00:13:19] When I was a consultant, we had a client, a big financial industry organization, and oftentimes such companies are not quick in accepting new technologies; they like things that are proven and tested. So they would use .NET and Java, with JSP for the front end. I mean, that was a few years ago, quite a few years ago. And the front-end developers wanted to switch to using Angular, and management was like, well, what is the security impact if we're switching all our front-end development to Angular? So they hired us to answer that question. And being the security-minded person, I dug into Angular and found a bunch of ways that you could exploit it, and different security vulnerabilities. And frankly, Angular is a very secure framework, right, so there are not many ways compared to other things. But of course I did my best and came up with this presentation showing all the ways Angular can be hacked.

Ksenia: [00:14:21] And the management were all very frightened; they were like, oh my God, this is so insecure, we should have new security protocols, new manual code review steps, or anything else if we want to introduce that. And actually, no. Right? It's still a front-end framework. It still has the same issues as your JSP or another templating engine; it's still going to be vulnerable to cross-site scripting and other things like iframe bypasses, etc. So from the protocols, from the policy standpoint, it's no different. But actually Angular is a pretty secure framework, because if you look at the documentation, A, they have a security page that's separate in the framework documentation - not many front-end frameworks have a security section in them - and they made an effort to mitigate as many vulnerabilities as possible.

Ksenia: [00:15:19] So, for example, Angular has the contextually aware escaping that is built into the framework; it has a way to enable CSRF protection, if it's also enabled on the server side, and have the server and the client talk to each other with the tokens, et cetera. So, yes, it's great to look at the security of the framework. And as a developer, I mean, of course, maybe you cannot go and actually test the framework and evaluate what the security issues with it are. But you can definitely look into the documentation and see if there is a security section in it. You can look at the release notes and see what kind of bugs were fixed in the last few versions of the framework.
Like were there things that were fixed that have to do with security? If it's an open-source framework, you can go to their GitHub and look at the issues. What kind of issues were reported? Are there a bunch of security issues that were reported and...…
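As a sketch of that last kind of due diligence, here is a hypothetical Python script that lists recently closed security-labeled issues of an open-source framework through the public GitHub REST API. The repository and label are illustrative, and many projects track this in security advisories or release notes instead of labeled issues.

```python
# Minimal sketch: list closed issues labeled "security" for a repo.
# Note that not every project uses this label, so an empty result
# does not mean the framework has no security fixes.
import requests

def security_issues(owner: str, repo: str, label: str = "security"):
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    params = {"labels": label, "state": "closed", "per_page": 20}
    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    # This endpoint also returns pull requests; keep only real issues.
    return [i for i in resp.json() if "pull_request" not in i]

if __name__ == "__main__":
    for issue in security_issues("expressjs", "express"):
        print(issue["number"], issue["title"])
```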
In our inaugural episode, we sit down with Tanya Janca, founder of WeHackPurple, to discuss her expertise in solving for race condition vulnerabilities during her career as both a software engineer and application security professional. We spend some time talking through the most common types of race conditions, review a few real-world hacks and vulnerabilities, and present actionable tips security and technology teams can take to solve this class of vulnerability. About our Guest: Tanya Janca, also known as SheHacksPurple, is the author of 'Alice and Bob Learn Application Security'. She is also the founder of We Hack Purple, an online learning academy, community and weekly podcast that revolves around teaching everyone to create secure software. Tanya has been coding and working in IT for over twenty years, won numerous awards, and has been everywhere from startups to public service to tech giants (Microsoft, Adobe, & Nokia). She has worn many hats: startup founder, pentester, CISO, AppSec engineer, and software developer. She is an award-winning public speaker, active blogger & streamer, and has delivered hundreds of talks and trainings on 6 continents. She values diversity, inclusion and kindness, which shines through in her countless initiatives. Founder: We Hack Purple (Academy, Community and Podcast), WoSEC International (Women of Security), OWASP DevSlop, OWASP Victoria, #CyberMentoringMonday Resources: About the vulnerabilities discussed: The Starbucks infinite credit race condition: https://www.schneier.com/blog/archives/2015/05/race_condition_.html The Gitlab 'merge any pull request' race condition: https://www.cvedetails.com/cve/CVE-2019-11546/ The Dirty Cow vulnerability: https://dirtycow.ninja/ with the research paper: http://www.iiisci.org/journal/CV$/sci/pdfs/SA025BU17.pdf The Spurious DB race condition, impacting all major operating systems: https://www.triplefault.io/2018/05/spurious-db-exceptions-with-pop-ss.html Tools discussed: Safe Rust race condition guarantees: https://doc.rust-lang.org/nomicon/races.html#data-races-and-race-conditions GoLang race detector: https://blog.golang.org/race-detector Testing race conditions on REST APIs: https://github.com/TheHackerDev/race-the-web Links for Tanya: Tanya's book Alice and Bob Learn Application Security: https://www.amazon.com/dp/1119687357/ https://shehackspurple.ca https://twitter.com/shehackspurple https://www.youtube.com/shehackspurple https://dev.to/shehackspurple https://medium.com/@shehackspurple https://www.twitch.tv/shehackspurple https://www.linkedin.com/in/tanya-janca https://github.com/shehackspurple/ https://www.slideshare.net/TanyaJanca/ Tanya mentioned she's also a professional musician; you can find her amazing rock band here! https://www.youtube.com/watch?v=zI6Mh2-E_CQ Links for We Hack Purple: https://wehackpurple.com https://twitter.com/wehackpurple https://www.youtube.com/wehackpurple https://linkedin.com/company/wehackpurple https://newsletter.wehackpurple.com Tanya also shared https://www.clouddefense.ai/, their new company just coming out of stealth. Transcript:

[00:00:02] Welcome to AppSec Builders, the podcast for Practitioners Building Modern AppSec hosted by JB Aviat.

Jb: [00:00:14] Welcome to the first episode of AppSec Builders. I'm Jb Aviat, and today I'm proud to welcome Tanya Janca to discuss race conditions. Race conditions are a common class of vulnerabilities in APIs or applications with business logic, which are not very well known.
For instance, they aren't part of the OWASP top 10. Tanya will tell us more about the application security book she just finished writing, and about her company that just came out of stealth mode.

Jb: [00:00:44] Our guest today is Tanya Janca, also known as SheHacksPurple. She's the founder of We Hack Purple, an online learning academy dedicated to teaching security, DevSecOps, and cloud security. Tanya is devoting a lot of her energy to democratizing security. [00:01:00] She is also the host of an amazing podcast where inclusion and diversity shine through. You have experience working at several software companies such as Microsoft, Adobe and Nokia, and have had varying roles across security and engineering throughout your career: as pentester, CISO, AppSec engineer and software developer. So Tanya, I think you wrote a book recently. Would you like to tell us a bit about it?

Tanya: [00:01:27] Yes. So my book is called Alice and Bob Learn Application Security. Do you remember the characters of Alice and Bob, from when encryption was first explained?

Jb: [00:01:38] Of course, who doesn't?

Tanya: [00:01:41] Yeah, so whenever I would give examples, I would always say, you know, it's not Alice's fault or Bob's fault.

Tanya: [00:01:48] It's that a safeguard was broken, and that's how this happened to Alice; or there was a security header missing, and that's how it happened to Bob. And so I would always weave them into things. And when I was trying to decide what to name [00:02:00] my book, which was all about application security, I thought: well, all my examples will be Alice and Bob, so maybe it should be called Alice and Bob Learn. And it's a textbook written in casual language to try to make it really easy to understand. And it's basically the very beginning of AppSec: how to do security requirements, how to design a secure web app, what is secure coding and what are all the things I need to do, what does that security header mean and why do I have to have it, all the way up to, you know, all the different types of security testing, all the different types of tools or activities that exist, how to build an AppSec program.

Tanya: [00:02:38] Basically, I was like, I'm going to take my brain and put it into a book. With jokes.

Jb: [00:02:44] Who's the ideal reader of your book: developers interested in security, AppSec engineers?

Tanya: [00:02:50] I would say definitely an AppSec engineer would want to read the book, or any software developer. I would say that people in other areas of IT who want to [00:03:00] know about security should read most of it. So there's a bunch of chapters like "How to secure your own digital privacy" and things like that, and the ideas of what is secure design, what are all these security concepts, what do they mean and how do I apply them. And I would say at least half of the book, almost anyone in IT could easily read and understand. But then there are two chapters that are just like, here's a lot of code. I'm getting really nerdy and I can't help it.

Jb: [00:03:31] So you sent me the table of contents of the book and I really enjoyed reading it.

Jb: [00:03:37] So, way before I planned this podcast, I bought three copies of the book to share with the team. We have a team of thirty-five engineers, so scaling security is something that is really on my mind. And so I'm super excited about receiving that book, because I think it will be the perfect introduction to that.
Knowing your background as a software developer [00:04:00] for ten years, if I'm correct...

Tanya: [00:04:02] 17 - I'm older than I look.

Tanya: [00:04:05] Yeah, I started coding in the mid-nineties and then got my first job in nineteen ninety-seven, and I was like, oh my God, I'm a professional.

Jb: [00:04:14] Congrats to you. But you've been a developer specifically for ten years, right?

Tanya: [00:04:21] More than that.

Tanya: [00:04:22] Yeah. Mostly I did software development for 17 years, and then I've done security for six or seven years now.

Jb: [00:04:29] All right. OK. And so from my point of view, that's the best way to get into AppSec, because the people you're working with, basically, are software developers. And so being a former software developer is just amazing for being able to understand them and understand the software, I believe.

Tanya: [00:04:47] I think that the best way to make application security engineers is to just find the software developers that are super interested in security and then feed that interest: like, [00:05:00] oh, here's a book for you. Oh, listen to this podcast. Oh, I'm going to go do this, do you want to come? Until eventually, you hire them onto the security team.

Tanya: [00:05:11] My plan.

Jb: [00:05:14] Smart, smart.

Jb: [00:05:16] We will try that internally, maybe. And I've seen that one part of your book is on several different vulnerabilities. Are race conditions one of them?

Tanya: [00:05:27] Yes, they are, Jb.

Jb: [00:05:30] Unbelievable.

Jb: [00:05:33] So I'd like to introduce the race condition vulnerability with an analogy, and after that we can definitely jump on one of your favorite examples, Tanya. So let's assume that you have $50 remaining in your bank account, and you go to an ATM to empty your bank account - something you can only do once. But if you arrange with a friend to withdraw the money simultaneously from [00:06:00] two different ATMs, will it work? If it does, it means that you have successfully exploited a race condition. Congrats.

Race conditions are a category of vulnerability that often doesn't come from using a specific library; it comes from using shared resources and forgetting that those resources are shared. And shared resources are legion, in particular in web programming, with databases or caches. As opposed to, for instance, injection bugs, source code can be 100 percent bug-free when you look at it, but still present some race conditions. Race condition bugs require software engineers to think of the code with one more dimension in mind, and this dimension is time. So if you're into security, you can think of it as an adversarial context: what happens if a part of the code is executed in parallel? What happens if any shared resource changes [00:07:00] state at any point of this function's execution? This is what makes this class of bugs extremely hard to detect in most setups. So Tanya, you suggested race conditions as a subject for this episode, so you seem to have some particular history or examples that you like with this class of vulnerability, right?

Tanya: [00:07:22] Yes, in my first programming job, I had a race condition with my boss.

Jb: [00:07:29] The human Race Condition!

Tanya: [00:07:32] Yeah, we would design these bill-of-materials types of applications. And basically I figured out that if I didn't lock the resources, the other humans programming could come and steal the resources that I was trying to use.
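To make Jb's ATM analogy concrete, here is a minimal Go sketch. It is illustrative only, not code from the episode: the account type, the amounts, and the Sleep used to widen the race window are all invented. Two concurrent withdrawals can both pass the balance check before either one subtracts, which is exactly the check-then-act gap Jb describes.

package main

import (
	"fmt"
	"sync"
	"time"
)

// account holds a balance shared between the two "ATMs" (goroutines).
type account struct {
	mu      sync.Mutex
	balance int
}

// withdraw is the vulnerable check-then-act: two concurrent calls can
// both see balance >= amount before either performs the subtraction.
func (a *account) withdraw(amount int) bool {
	if a.balance >= amount { // check
		time.Sleep(time.Millisecond) // widen the race window for the demo
		a.balance -= amount // act
		return true
	}
	return false
}

// withdrawSafe holds a lock across the whole check-then-act, making the
// two steps atomic with respect to other withdrawals.
func (a *account) withdrawSafe(amount int) bool {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.balance >= amount {
		a.balance -= amount
		return true
	}
	return false
}

func main() {
	acct := &account{balance: 50}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // you and your friend, at two ATMs at once
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("withdrew 50:", acct.withdraw(50))
		}()
	}
	wg.Wait()
	// With withdraw, this often prints -50: both withdrawals succeeded.
	// With withdrawSafe, exactly one succeeds and the balance is 0.
	fmt.Println("final balance:", acct.balance)
}

Running the demo under Go's race detector (go run -race), one of the tools linked in the resources above, flags the unsynchronized access to balance even on runs where the final number happens to look right.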
And so we made custom software for all these different manufacturing plants. So it wasn't beautiful GUI stuff like in web applications; it was all that backend stuff where you just see [00:08:00] a terminal. And I remember saying, like, Bill, you stole my resource. And he said, oh, you have to put a lock on it. And then we had a discussion about race conditions, and I said, oh, that's interesting, I never knew that. Same with if someone goes and edits a file in a shared folder and then you try to open it: Microsoft Word, for instance, will say no, that's in use, right? Because it's a race condition. And then, you know, as my career moved on, I got to use different types of code repositories to save my code.

Tanya: [00:08:37] I had to work in bigger teams, because I started at startups. And then I was like, well, what if we both want to edit this giant program? You can't just lock the whole program. So we had to learn about merging, which was also good. And so when I was writing the book, everyone kept asking, are you going to cover the OWASP top 10? And I was just like, it's such a tired list. Like everyone knows it, and they're like, [00:09:00] someone reading your book might not know it; you can't not cover the thing everyone knows. And so I decided I would do a whole chapter about all sorts of common mishaps, the common issues that you find. So I included the OWASP top 10, but I didn't want it to just be the top ten. So I covered all sorts of things that are issues that you as a pentester might find, and issues that you as a software developer might inadvertently create. And one is race conditions. And so, the example I used in the book.

Tanya: [00:09:29] So I really like Starbucks.

Jb: [00:09:35] So I do as well.

Jb: [00:09:36] I've been their best customer for the last three months, because our coffee machine was forbidden for health and safety reasons.

Tanya: [00:09:44] So, speaking of Starbucks: I actually had a vendor ask me to advertise their event recently, and they tried to bribe me with a Starbucks card and stuff like that. But [00:10:00] in the vulnerability, basically, a security researcher realised: oh, well, if I load money onto one of the cards and then transfer the money to a bunch of different cards at the exact same time... I can put five dollars on card number one, but cards number two, three, four, five, six, I can put that five dollars onto all of them at the exact same time, so that my five dollars became twenty-five dollars. And he just kept doing this circle with the cards, and then eventually he's like, OK, so I can reproduce this. Boom. I have a race condition. And so then he submitted a report. And so, you shared an article with me that Starbucks didn't have a responsible disclosure program, but the article that I quoted in the book was actually how he did it as part of their bug bounty program. So I'm not sure what happened, but they did give him a reward. I have to say, like, that person's really honest, because I really [00:11:00] like their mocha beverages.

Jb: [00:11:03] And so that's a great example on a real-life application. And another example I found that is similar is GitLab. And this bug was publicly reported, because GitLab is open source. So a user would be able to approve a merge request multiple times, and so potentially reach the approval count required to merge it.
So here we can simply guess that the code checks whether the user is authorized to approve the request, then records the approval. And if several requests are executed at the same time, several will complete the check more or less at the same time and perform the updates concurrently, leading to this check bypass, and to the pull request being authorized. So it's a pretty common class of vulnerability. And as you mentioned, Tanya, it's pretty easy to slip into programs, because I'm not sure that's something you are really taught at school, or not extensively; you need to experience that kind of bug.

Tanya: [00:11:59] I feel in school [00:12:00] or at most universities, they just don't really teach security thoroughly enough to ensure that they're making secure software. Like, I haven't seen a school that teaches race conditions. I mean, I haven't checked out all of the schools, and I would love to be corrected, but there are some universities where I live, and I have lots of their students come and say, I wish they taught this in my school. I'm like, they can: I wrote a book! But I feel like race conditions, and all the top 10, and all the other things that I know you're going to talk about from episode to episode on this awesome podcast you're starting... like, I wish that there was a way we could get the word out to all the software developers, so that if they know it exists, they know to try to watch out for it, right?

Jb: [00:12:45] Yes, I'm sure they would. But you have a lot of things you need to learn as a developer, and security is one amongst many.

Jb: [00:12:54] And yes, I think if we wanted to perfectly train software engineers, it would take [00:13:00] another two to five years of training; it would be much more. And there are so many things that you learn when you start actually working at a company that you never touched at school, that I don't think we could ever have a complete developer training, just because the field is too broad. And so something we will touch on after is how tools could help developers actually write code without race conditions.

Tanya: [00:13:28] In that case, do you think maybe they should all just subscribe to our podcast? Do you think that could help?

Jb: [00:13:36] So maybe the…
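The GitLab bug Jb describes follows the same check-then-act shape one layer up, in application logic. Here is a hypothetical Go sketch of the pattern he guesses at (GitLab itself is a Ruby application; the mergeRequest type, the user name, and the approval policy below are invented for illustration): the approval path checks for a duplicate, then records the approval, and nothing makes the two steps atomic.

package main

import (
	"fmt"
	"sync"
)

// mergeRequest approximates the vulnerable pattern: approvals is shared
// state, and approve performs a check followed by an update, with
// nothing making the pair atomic.
type mergeRequest struct {
	approvals []string // users who have approved
	required  int      // approvals needed to merge
}

// approve is vulnerable: concurrent requests from the same user can all
// pass the duplicate check before any of them records an approval.
func (mr *mergeRequest) approve(user string) {
	for _, u := range mr.approvals {
		if u == user { // check: has this user already approved?
			return
		}
	}
	mr.approvals = append(mr.approvals, user) // act: record the approval
}

func (mr *mergeRequest) canMerge() bool {
	return len(mr.approvals) >= mr.required
}

func main() {
	mr := &mergeRequest{required: 3}
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ { // five simultaneous requests from one user
		wg.Add(1)
		go func() {
			defer wg.Done()
			mr.approve("mallory")
		}()
	}
	wg.Wait()
	// Often prints "approvals: 5 can merge: true": a single user has
	// satisfied a three-approval policy. Running this file with
	// `go run -race` also reports the data race on mr.approvals.
	fmt.Println("approvals:", len(mr.approvals), "can merge:", mr.canMerge())
}

In a real web backend the fix has the same shape as a lock: make the check and the write one atomic step, for instance by performing both inside a single database transaction with row locking, or by letting a unique constraint on the (merge request, user) pair reject duplicate approvals. This is also the kind of bug the Go race detector from the resources section is designed to surface during testing, which ties into Jb's point above about tooling helping developers write code without race conditions.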