This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-06-23
Channels
- # ask-the-speaker-track-1 (171)
- # ask-the-speaker-track-2 (401)
- # ask-the-speaker-track-3 (250)
- # ask-the-speaker-track-4 (194)
- # bof-arch-engineering-ops (3)
- # bof-covid-19-lessons (9)
- # bof-cust-biz-tech-divide (8)
- # bof-leadership-culture-learning (19)
- # bof-next-gen-ops (46)
- # bof-overcoming-old-wow (8)
- # bof-project-to-product (10)
- # bof-sec-audit-compliance-grc (9)
- # bof-transformation-journeys (52)
- # bof-working-with-data (33)
- # discussion-connect-february (885)
- # games (335)
- # happy-hour (129)
- # help (411)
- # hiring (43)
- # lean-coffee (17)
- # networking (8)
- # project-to-product (1)
- # snack-club (44)
- # sponsors (77)
- # summit-info (437)
- # xpo-datadog (2)
- # xpo-digitalai-accelerates-software-delivery (28)
- # xpo-github-for-enterprises (25)
- # xpo-gitlab-the-one-devops-platform (25)
- # xpo-itrevolution (3)
- # xpo-launchdarkly (3)
- # xpo-pagerduty-always-on (1)
- # xpo-planview-tasktop (6)
- # xpo-slack-does-devops (13)
- # xpo-snyk (5)
- # xpo-sonatype (12)
- # z-do-not-post-here-old-ask-the-speaker (176)
Hello everyone, I hope our journey at Admiral helps you in yours. I'd love to speak to anyone who would like more information.
I wonder how many people have their office printers on their threat models!
I guess that IoT would have to include printers and the like, so there is no reason why you should not take the threat against any networked device seriously https://iottechnews.com/news/2020/feb/21/seven-10-companies-know-hacks-against-their-iot-devices-research-finds/
I don't know if this would help you at this stage but have you come across Trivy open source vulnerability scanner? Can scan git repos / filesystems as well as container images https://github.com/aquasecurity/trivy
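For anyone who hasn't tried it, the CLI is straightforward. A few typical invocations (a sketch only; flag names may vary by release, so check `trivy --help` for your version):

```shell
# Scan a container image; exit non-zero if HIGH/CRITICAL issues are found,
# which makes it easy to wire into a CI gate
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest

# Scan a checked-out project directory (filesystem mode)
trivy fs .

# Scan a remote git repository
trivy repo https://github.com/aquasecurity/trivy-ci-test
```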
Thanks for the talk @kevin.foley Vulnerable dependencies are such a big issue for enterprises to get on top of
Hi Jessica, thanks for attending the talk, any thoughts or any discussions that you would like to have with regards to automating secure software development with DevSecOps?
Thanks everyone for the kind words, if you do have any questions please drop me a line. Thanks have a great day.
Looking forward to all of your questions for the VendorDome session coming up in 3 minutes. I’ll be moderating the questions here. Don’t be shy.
😢 had to leave - the hold music was too much, especially during a break where I should (and now shall) get away from the screen for a few minutes
The first few minutes are on video, and then we’ll be live on the air! Woot!
Anyone using the IBM CI/CD products? UCD/UCR? Any experience, good or bad, you would like to share as you automate the full DevSecOps lifecycle?
Are you looking into using IBM CI/CD? What is it you are looking to do, maybe something I can help with?
Not looked at the product, but CI and CD can be different pieces of software, not always the same; there are tools that can do one or the other better than one that tries to do it all.
It's not something we've looked at. Surprisingly as it feels like we've looked at everything else that's available 🙂
I agree with @kevin.foley that there are multiple tools and it's about choosing the one that best fits your needs. Historically we've tried to drive a one-tool approach with limited success, which is why we're talking more now about a Platform, and using that to pull in the capabilities of other tools/services as required.
☝️ sounds similar to something I saw previously re: monitoring tools, in a "DevOps vs SRE" discussion: growing your own tools to fit your org's needs vs using an external monitoring capability that may be hard to make fit.
Hi guys, what do you consider the main practices of shifting security to the left?
Personally, I think that it's about early detection: empowering developers to fix security issues as they are introduced, rather than waiting for AppSec teams to find them and then request changes (which involves context switching). This allows AppSec teams to focus on the issues that the developers can't fix, i.e. the hard ones that AppSec are best suited to fix!
Regarding "Inflicting Tools", totally behind not doing that for most capabilities. But there are some (few) capabilities where I believe that standards must be set. For example source code management is one, binary / finished good repository is another one. What would you consider tools that have to be standardized?
These are great examples. I think artifact repositories, caching dependency servers, and CI practices (especially containerized / repeatable builds) are all important standards too. But I also think most developers want these and even if a dev or dev team is resistant, I find they often feel these tools and practices have enhanced their productivity and quality of life once they’re used to them. Sadly, not all security tools get such glowing reviews post-implementation, which I think leads to the feeling that these are more of an imposition. Not that they have to be, just that the risk is there.
How do you get developers to understand just enough security to help this shifting left of security understanding?
We got them trained, then gave them ownership of it rather than leaving it with someone else. It's hard and you will have pushback.
Hi @rradclif, good question. As Kevin rightly says, training is key. But it's also a mindset, and one way is to have intel on security vulnerabilities at every stage of the SDLC. This will help to visualise and inform Devs within their tool environment and prompt them to mitigate sec risks across the SDLC. Let's discuss this further with one of our experts, how's your availability looking? 🙂
One aspect I need to understand is how to get this security understanding for developers that are working in COBOL or PL/I or even z/OS assembler - most training uses examples that don’t seem to apply to those languages, or back end systems.
Hi Rosalind, that is a good point. I'm not sure security is just based around code; it's more the understanding of how your security posture is handled or understood, and thinking security. I'm not sure whether you use open source in these systems, how you protect yourselves from vulnerabilities in it, and whether you have something sitting in front of them that helps protect them.
The problem with much of the security training is the examples are all in areas that don’t sound like they apply, even though they do to a typical z/OS application in COBOL. There is very little open source COBOL, and many of these applications were written a long time ago, so people are mostly adding to them. Including packages is not something they will do generally. So it’s a different perspective, but security matters there as much or more since there is so much sensitive critical data.
We do run code quality scans, static security, and library vulnerabilities on a continuous basis. Sonatype is part of our suite. Currently working on moving Dynamic & Interactive testing to left as well.
Sounds like you’re in a great position. What do you use for code quality & security?
Are there any best practices on which combination of scanning tools would reveal the most vulnerabilities for a technology stack of Java, NodeJS, Docker, and Jenkins CI/CD?
Java is our main target language at Muse, so I’d say to try that (you can try our beta at https://does.muse.dev) For NodeJS there’s ESLint and javascript compilers like typescript and closure (even if you don’t use types in your code these can be useful tools). Muse incorporates ESLint and we’re working on onboarding more Javascript tools. I don’t have as much experience with container scanning, so can’t comment on best tools for Docker. But if you have any more questions about Java / Javascript static analysis tools, feel free to DM me.
@andreas.mueller06 from our perspective, a combination of Clair (Red Hat) to do the OS layer in Docker, plus Nexus Lifecycle for Java & NodeJS, would return the best quality results. If you want to expose your developers to the data and use Nexus Repository to proxy http://npm.org then the npm audit function will provide IQ Server data (proprietary, rather than npm data). Java IDE integration is still a good fit, and the SCM integrations provide assistance with automating the remediation where possible.
One of the challenges we face in trying to shift security left is that not all security tests can be automated, at least given current tool-sets, e.g. broken access controls. Any ideas on how to overcome this constraint?
Not every test can be automated even in a standard test function. So it's the same process: automate what you can, and build in the process to test the other bits in-sprint; making it part of DoD helps.
Thanks Kevin! Agreed, but manual testing is often seen as slowing things down, so there is pressure on the security team to provide sign-offs based on results from automated tools alone. Of course, manual testing can be done out of band, but then there is an unaddressed risk.
I agree, we have the same issues. We are moving the testing into the sprint teams, no hand-off or waiting. It's hard but it's improving slowly.
How do you recommend getting security experts that are "too busy" to work with development teams to embed themselves in to help the team learn how to properly identify security considerations on their own?
I would ask how the security teams are evaluated (is it just things like number of bugs caught/fixed, or measures of audit performance? Or is training a part of their job that is recognized by their management?). I find most people focus primarily on what gets them the most positive feedback from their direct managers.
@james.v.toomey This is a great question and essentially sums up the mission of DevSecOps, right? We actually host a practitioner event called the DevSecOps Leadership Forum - here industry leaders from security, development and devops speak to not only the technical but also cultural challenges around "shifting-left" https://www.sonatype.com/dlf-2020-namerica-ondemand and share best practices on how to achieve this.
We have discovered an interesting problem in our legacy platform. We scan libraries for vulnerabilities, we would upgrade the libraries, but then discover services that load those libraries that have not been restarted across 3 security updates to that library and still used code with all the vulnerabilities in the library. What do you do to scan for processes that need to be restarted because their dependencies had been upgraded?
this is such a great explanation of why it’s useful to have scanning at multiple points in the SDLC, from development through to operations. Scan for vulnerable libraries during development, scan the containers you create in CI, scan the deployed production assets. “Redundant” security checks will help discover these issues.
Containers sort of solve this problem, because you restart everything in container on any change. But if you still have legacy platform where you run servers on which individual services are deployed directly, if those services are not packaged using the OS packaging system or even if they miss a dependency or if the packages do not have restarts defined correctly or … any of this chain is broken, … your service can be very stable, there might be no changes to them, no deployments…
@jiri.klouda we typically scan in the context of the application as part of the SDLC; so “out of the box” we do not provide feedback on “running” services unless they have been scanned prior to deployment, in which case we will have an SBOM which we can continuously monitor for new/zero-day vulnerabilities. In the case of a legacy application it would require that the version in “production” be scanned to identify the libraries that are in use. This could be done by retrieving the application from Nexus Repository Manager (other repository managers are available), then using the scanning tools to generate an SBOM and identify who uses the older vulnerable libraries. It would also be possible to use the scanning CLI to scan applications directly on the production system to perform the same activities and generate the appropriate feedback.
What I ended up implementing myself was something like this: once a day we check processes that have been running for more than a day, list their open files, match those against the packages containing the files, check the version the package is at now versus when the process started, and then find critical CVEs closed between those two versions. Find what service the process belongs to, then create an event to notify that restarting the service would fix <list of CVEs>. I would expect some of the vendors to provide similar functionality, and I sort of wonder what other teams are using to do this.
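A minimal sketch of that decision logic in Python, with the system probes (lsof, package-manager queries, advisory feeds) stubbed out as plain data. Package names, version strings, and CVE ids below are purely hypothetical, and the naive string comparison of versions is a placeholder for a real version comparator:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MappedLibrary:
    """A shared library a running process has mapped into memory."""
    package: str
    version_when_loaded: str  # package version at process start
    version_on_disk: str      # package version installed now

def cves_fixed_between(package, old, new, advisory_db):
    """Collect CVE ids fixed in `package` in versions after `old` up to `new`.

    `advisory_db` maps (package, version) -> CVE ids fixed in that version;
    in a real system this comes from your distro's security feed. Note the
    naive string comparison: real version schemes need a proper comparator.
    """
    fixed = []
    for (pkg, version), cves in advisory_db.items():
        if pkg == package and old < version <= new:
            fixed.extend(cves)
    return fixed

def restart_advisories(process_start, libs, advisory_db,
                       min_age=timedelta(days=1)):
    """Return the CVEs a restart of this process would pick up fixes for."""
    if datetime.now() - process_start < min_age:
        return []  # young process: assume it already loaded current libraries
    cves = []
    for lib in libs:
        if lib.version_when_loaded != lib.version_on_disk:
            cves += cves_fixed_between(lib.package, lib.version_when_loaded,
                                       lib.version_on_disk, advisory_db)
    return cves
```

From there, each flagged process would be mapped back to its owning service and an event raised with the CVE list, as described above.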
Presentation that I mentioned was from DJ Schleen at Rally Health “Blue is the New Green”
Q: There will always be more devs than security engineers in an organisation. How do you establish collaboration between the two roles so the sec engineers are not always playing catchup?
Increasing the ratio of QA engineers to Devs can decrease the quality of the software. Maybe there is a similar effect in security.
We have the ambition to raise the security awareness of our developers by including security tests/scans in our pipelines, and to have the DevSecOps people help them in advance, when design/code can still be fixed cleanly rather than with a last-minute workaround. We expect developers to ramp up in this process. In the end you have as many sec engineers as devs (ok, somehow).
Maybe the question is different, why do you need security engineers? If your devs can do the same job why pass it over?
@nick.jenkins this was the previous DOES talk I mentioned from Mary Lee at Salesforce. She’s definitely outnumbered by developer teams, but has found approaches that work at scale: https://www.youtube.com/watch?v=OGBysCmUk70
we are partnering closely with our InfoSec partners to learn the tools and increase our app teams' security IQ.
what does 'partnering' mean in practice?
The InfoSec partners teach the apps teams?
Some of the reports that tools like Nexus IQ generate need some coaching and training on how to read them and understand what the actions to remediate should be.
Our InfoSec org has Security Engineers that work with us to make sure we have strong security practices (coding and verification).
But does it feed back the other way? Where I work, the devs are using tech the InfoSec guys haven't even seen yet.
@jose_mingorance the key thing is also to get Devs the sec info early on. The sooner they know, the more it feels like simple corrective actions rather than dreaded rework that needs to be prioritized into the current or next sprint.
@nick.jenkins: Great point on the opportunity for devs to educate the security folks! I’ve heard similar stories of some new technology / framework getting a quick ‘no’ from security based on a misunderstanding of what it does.
I've seen these ivory towers before. And the current Sec tools mantra is counter productive. We need tools where devs and infosec work side by side to develop solutions which are secure by design.
OWASP’s WebGoat is a great training tool for developers to learn more about security
Thanks for sharing. I was reading recently about a similar sounding 'vulnerable-by-design' tool specifically for looking at vulnerabilities in infrastructure-as-code: https://bridgecrew.io/blog/terragoat-open-source-infrastructure-code-security-training-project-terraform/
OWASP has many great resources for folks that want to learn more.
Muse report on WebGoat (cheat sheet to some of the issues hidden in there 😉) : https://console.muse.dev/result/smagill/WebGoat/01EBGMAG8NZ7PXC292RTSTQQQN
How do you advise prioritising security fixes vs features on the backlog, as not all security issues are equal?
I think this is where the security team becomes really valuable by suggesting remediation order for security issues on the backlog. Tools can help too though. The Sonatype / MuseDev callflow analysis @brianf mentioned is an example of this: among all the vulnerable libraries you may be using, prioritize fixing the ones where we can show that the application code actually calls into the vulnerable part of the library. I think there’s definitely room to develop other new tools in this space too.
Ideally security team input could feed into automated prioritization processes.
Thanks for answering; this is the usual answer I get. But my question is not about prioritising security issues among themselves, it is about mixing them with feature work. So risk vs value.
Ah, thanks for clarifying! So the question is how do you split dev time between features and security backlog items? Do you reserve a percentage of dev time for security or do you make it more dynamic, based on, say, the risk that the next security TODO poses? Is that right?
That is a very nuanced and interesting question. I guess I would start by asking how you quantify the value of features. But even given that, I think security and features are almost on different scales. They’re in some sense measurable as “expected cost / expected return” (i.e. on a monetary scale). But security issues are low-probability high-cost events and features are high-probability low (incremental) return events. So it’s really hard to compare them. I haven’t heard of anyone really taking a deep approach to putting those concerns on a level playing field and comparing. But maybe others in the channel have stories to share?
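One crude way to even attempt the comparison is classic annualized loss expectancy (probability × impact) set against expected feature return. The numbers below are purely made up for illustration, and as noted above, a single expected value hides exactly the tail-risk asymmetry that makes the comparison hard:

```python
def annualized_loss_expectancy(breach_probability_per_year, breach_cost):
    """ALE: the average yearly 'cost' of carrying a risk."""
    return breach_probability_per_year * breach_cost

def expected_feature_return(success_probability, annual_return):
    """Expected yearly return from shipping a feature."""
    return success_probability * annual_return

# Hypothetical inputs: a 2%/year breach chance costing $5M,
# vs a feature with an 80% chance of returning $150k/year.
risk_cost = annualized_loss_expectancy(0.02, 5_000_000)   # 100000.0
feature_gain = expected_feature_return(0.80, 150_000)     # 120000.0
# Near-identical expected values, yet wildly different variance --
# which is why expected value alone shouldn't settle the backlog order.
```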
It depends on your organisation’s risk profile too
(Though since GDPR etc many orgs risk profiles look more similar than they used to)
Some of what used to be pure security features are now legal/compliance ones
A common story I use from banks I've worked with is asking the delivery team "what is the value the bank delivers?". Invariably I get an answer of "money". To which I reply "Really? So if you have a new feature that will generate considerable revenue if it goes out right now, but you know it has a critical vulnerability, what do you do?" If the bank is in the business of "money", revenue wins. If the bank is in the business of keeping their clients' assets safe, hell no. It comes down to your company's risk profile, your awareness of the risk, the size of the change, and the robustness of your deployment pipeline checks.
Some of the most dangerous recent hacks (resulting in hypervisor access, for example) were the result of multiple low-risk security issues strung together. Prioritization is tough in light of that.
P.S. The bank story is something I tell to get them out of feature factory thinking and starting to at least consider security before hitting go ;-)
+ we should stop saying devs don’t know security, and similarly stop saying security doesn’t know coding; the question is how we can all get better
we need to make sure these practices are not top-down/bolt-on but rather part of our developers' DNA
We get InfoSec engineers embedded for a period of time to work on onboarding our applications into the security tools as well as key security practices.
not in a classroom setting but rather doing it hands on on the real application.
Our infosec created business line aligned support teams so we get some level of consistency.
I’ve also seen practices from Shannon Lietz at Intuit where they have been embedding security for years within development. Definitely worth checking out her talks online. When you have someone sitting next to you that is attacking your code or looking for vulns in it, you can learn fast from them and don’t often repeat the same mistakes. You approach it as a partnership rather than as a security whip.
Do you have resources on how to run a capture the flag game? I’m looking to run a developer hack day on a topic like that
@mboudreau327 @marcovieira check out this Devsecopsdays Austin session from Ell Marquez: https://www.youtube.com/watch?v=WGiCO2u8JCg
How do you quantify the benefits from investing in tools, training, etc. when putting together the business case to get the funding needed? Any advice?
For us it's all around the value the tool can bring and how it helps us improve: maybe man-hours, maybe process time, maybe less rework, etc. There is a lot to look at. Most companies that sell the software can provide a PoV (Proof of Value) and they will help build a case.
@simonw @tfr here’s an example of a Forrester TEI Report tied to DevSecOps Total Economic Impact Of The Sonatype Nexus Platform_Full Version.pdf
There's a few by the looks of it https://www.google.com/search?q=total+economic+impact+forrester+devops&oq=total+economic+impact+forrester+devops&aqs=chrome..69i57j69i64l3j69i60l2.8455j0j7&sourceid=chrome&ie=UTF-8
@tfr would be great to schedule a call to discuss your findings, let me know your availability. 🙂
@jgarzon I would be interested in seeing what you guys are offering, this is an area I need to get much deeper into
@tfr great, let me message you some time slots and see if you are available.
After lunch, welcome speaker @daniel.maher for Q&A!
"When we talk about automation, we're talking about unlocking human potential" < Really like that description
I don’t know who’s doing the live-tweeting on https://twitter.com/ITRevDOES but they’re doing a great job. 👍
@claudia_cardonatesill Frequency is determined by need, time-boxes, and definitions of success. A squad could last a week, a sprint, a quarter—it depends on how and why the squad was formed.
As an aside, an important part of guilds is also big hats. I forgot to mention that. Collars optional tho

@daniel.maher How are SRE and Full stack developers related? Can we assume that a SRE team is staffed with equivalents of Full stack developers to bring the versatility
The term “full stack developer” is tricky, right? Like, how often is a full stack developer writing byte-code for microprocessors, for example? What is the definition of “full stack” you’re asking about in this case?
I am referring to someone who has both Dev & Ops skills especially experience in App dev & infrastructure knowledge
So you mean a generalist—as SREs themselves tend strongly towards being generalists, the relationship is self-fulfilling.
Yes. I agree. We can look at them as generalists in the sense that they are not experts in a single aspect of the value stream. Thanks Dan. Good session.
The answer is “it depends”. I mention this in the presentation actually, hehe.
Again, don’t fall for the dogma here. There is no one true path—not even within a single organisation. And, even worse, what worked two years ago might not be appropriate now, so you have to be willing to evolve your organisational structure along with your business growth.
Yes - specialised skills, economies of scale and other factors. In my view, SRE should be embedded as an integrated way of working as part of the software cycle. It's context that matters; many times people think "once a central function, always a central function". Adopting a continuous improvement mindset is critical.
An SRE’s primary responsibility—certainly as an IC—is ensuring site reliability. 🙂 At a senior / staff level, it’s mentoring others, and the lines blur a bit.
Until now I assumed SRE was more architectural. Now I understand it's the full gamut. Thanks.
@daniel.maher What are some best practices for drawing circles around product-scope for product teams that allow for better alignment to SREs (especially for those who aren’t all micro-servicey)?
It might sound counter-intuitive, but start with the edge—literally whatever surface it is that your customer is actually touching—and work backwards from there.
Gotcha. Sometimes drawing lines for product team scope make sense at the edges but don’t really make sense for the SREs and while they are “assigned” they end up generally working on reactive work as a large pool that loses connection to product teams.
Ultimately the only thing that matters is if your customer can do the thing they expect. Take a look at your SLAs, and then what SLIs form the SLOs in those SLAs.
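As a concrete illustration of the SLA/SLO/SLI arithmetic, the error budget implied by an availability SLO is a one-liner (a generic sketch, not specific to any vendor's setup):

```python
def error_budget_minutes(slo, window_days=30):
    """Minutes of downtime the SLO permits over the window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

# A 99.9% availability SLO over a 30-day window leaves ~43.2 minutes
# of error budget; burn through it and risky work (deploys, experiments)
# should slow down until the budget recovers.
budget = error_budget_minutes(0.999)  # ~43.2
```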
Who at Datadog is woken up at 2 am if the system is down? A developer, the SRE? Or an - unmentioned - support guy/gal?
But in this case, interrupt-level alerts go to a product team and the SRE responsible for the related product or portfolio gets involved.
If the alert meets a certain threshold, I would assume it goes to the person on-call
That’s up to each product team. In reality, some do and some don’t—this is also related to questions of complexity and criticality. Straightforward issues that can be solved without waking somebody up, don’t!
If it’s both complicated and critical enough that an already-awake person can’t deal with it, then yes, somebody gets woken up. This really doesn’t happen very often.
Yes - that's why defining thresholds (critical/complex) is useful. You can define them according to the context.
Thanks everyone who attended our Vendor Dome chat! You posted a great list of questions that I’m still working through (I’ll respond to everyone eventually!). If you want to continue the conversation or go deeper on any of these topics you can go to http://does.muse.dev/ to schedule a chat. I’m around and happy to meet up for 1-on-1s during the conference. Slack DMs work too. Enjoy the talks!
And I posted this in a thread up above, but in case it’s of broader interest: during the Q&A WebGoat came up as a great training aid and I agree. Here are the results that can be found with code scanning: https://console.muse.dev/result/smagill/WebGoat/01EBGMAG8NZ7PXC292RTSTQQQN A good test of developer security training level might be: “do these errors make sense?” e.g. do they know what a CSRF (cross-site request forgery) is? I think there’s way more we can do to help tools do this training too.