This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-05-19
Channels
- # ask-the-speaker-track-1 (220)
- # ask-the-speaker-track-2 (196)
- # ask-the-speaker-track-3 (323)
- # ask-the-speaker-track-4 (212)
- # bof-arch-engineering-ops (1)
- # bof-covid-19-lessons (1)
- # bof-leadership-culture-learning (1)
- # bof-project-to-product (14)
- # bof-sec-audit-compliance-grc (2)
- # demos (7)
- # discussion-main (1192)
- # discussion-more (15)
- # faq (4)
- # games (69)
- # games-self-tracker (2)
- # gather (5)
- # happy-hour (39)
- # help (79)
- # hiring (10)
- # lean-coffee (13)
- # networking (10)
- # project-to-product (12)
- # psychological-safety (1)
- # summit-info (156)
- # summit-stories (3)
- # xpo-anchore-devsecops (5)
- # xpo-cloudbees (4)
- # xpo-copado (1)
- # xpo-epsagon (1)
- # xpo-gitlab-the-one-devops-platform (13)
- # xpo-harness (1)
- # xpo-hcl-software-devops (9)
- # xpo-ibm (4)
- # xpo-itrevolution (16)
- # xpo-launchdarkly (26)
- # xpo-mirantis-devops (10)
- # xpo-pagerduty (11)
- # xpo-planview-tasktop (10)
- # xpo-redgatesoftware-compliant-database-devops (8)
- # xpo-snyk (3)
- # xpo-sonatype (4)
- # xpo-split (25)
- # xpo-synopsys-sig (4)
- # xpo-tricentis-continuous-testing (4)
👋 Hi everyone, my topic is Supply Chain Security in open source. Happy to chat about it or other appsec topics and questions with you
⭐ Welcome, welcome @dave.karow for our next session's Q&A. Thank you https://doesvirtual.com/split!
Thanks @mollyc - Got my tea poured (in honor of London and so to avoid running the coffee grinder at 4:30am for the fam) 🙂
I wasn't able to find the slides in https://github.com/devopsenterprise/2021-virtual-europe
Not just about slowly rolling out but about learning by starting with partial exposure.
if you don’t automate the learning, it’s more hyper-vigilance and toil 😞
…which means you’ll do it less often. That’s bad.
Checking the box… Does it Avoid Downtime, etc…
too busy to improve is a common challenge I hear - limiting WIP so counter-intuitive
@benk691 Yellow means getting the “Benefits” in the left column
@rgorham By ingesting telemetry that’s attributed to the gradual exposure cohort and the not exposed cohort. Perform automated stats comparison.
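To make that concrete, here is a minimal sketch of such an automated stats comparison (the metric, counts, and threshold are all invented for illustration): a two-proportion z-test on an error-rate guardrail metric between the exposed and not-exposed cohorts.

```python
import math

def two_proportion_z(errors_a, total_a, errors_b, total_b):
    """Z-score for the difference in error rates between two cohorts."""
    p_a = errors_a / total_a
    p_b = errors_b / total_b
    p_pool = (errors_a + errors_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Telemetry attributed to each cohort (hypothetical numbers):
z = two_proportion_z(errors_a=120, total_a=5000,   # exposed to the change
                     errors_b=80,  total_b=5000)   # not exposed
# Flag the rollout automatically if the error rate is significantly worse.
alert = z > 1.96
```

In practice a feature-flag platform runs many such comparisons continuously, which is exactly what removes the hyper-vigilance mentioned above.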
super interesting - don't wait for it to happen, can measure on the fly
curious to get everyone's thoughts on "starting with the end in mind" - getting into the backlogs early on, involving Product Ownership/Product Management or similar to start planning these improvements and the capacity for them?
The danger of showing code is that it can be wrong 😉. Looks like you used 'treatment' and then 'feature'.
Great catch. That’s the problem with pseudo-code… it doesn’t get run 🙂
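For what it's worth, a runnable version of that idea with 'treatment' used consistently throughout (the flag name, user id, and treatment values here are made up, and the lookup stands in for a real flag SDK):

```python
def get_treatment(flag_name: str, user_id: str) -> str:
    """Stand-in for a feature-flag SDK call; a real SDK would evaluate
    targeting rules from the server or a locally cached rollout plan."""
    rollout = {"new-checkout": {"user-42": "on"}}
    return rollout.get(flag_name, {}).get(user_id, "off")

treatment = get_treatment("new-checkout", "user-42")
if treatment == "on":
    result = "new checkout flow"
else:
    result = "existing checkout flow"
```

Because this version actually runs, the naming mismatch would have surfaced immediately.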
WIP and Flow are inextricably linked.
Deck is here if you can’t find it on conference site. https://speakerdeck.com/davekarow/does-layered-approach-to-pd-2021
How would feature flags work for mobile apps, where you have to ship a new app release? #ask-the-speaker-track-4
You change the flag settings at the server. The app just updates periodically from that.
We're using the Flow Framework to help visualize how we're tracking to the business outcomes as they make improvements to see if their "experiments" are impacting the flow metrics/outcomes. Seeing that visualized can be eye opening 👀
@manzi.g You can put flags in your mobile apps… it’s how many teams protect recent changes/run cohorts.
Yep! Your change is instantaneous without pushing new code into the store, provided you deployed the feature and hid it behind a flag.
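A toy sketch of that pattern (the flag names, payload, and refresh interval are hypothetical): the mobile client periodically refreshes flag settings, so flipping a flag server-side takes effect on the next refresh with no store release.

```python
import json

class FlagClient:
    """Toy client that refreshes flag settings on an interval; a real
    mobile SDK would fetch over HTTPS and cache for offline use."""
    def __init__(self, refresh_seconds: float = 60.0):
        self.refresh_seconds = refresh_seconds
        self._flags = {}
        self._last_fetch = float("-inf")  # force a fetch on first use

    def _fetch(self) -> dict:
        # Stand-in for an HTTPS call to the flag service.
        return json.loads('{"dark-mode": true, "new-onboarding": false}')

    def is_enabled(self, name: str, now: float) -> bool:
        if now - self._last_fetch >= self.refresh_seconds:
            self._flags = self._fetch()
            self._last_fetch = now
        return self._flags.get(name, False)

client = FlagClient(refresh_seconds=60)
enabled = client.is_enabled("dark-mode", now=0.0)  # first call fetches
```

Defaulting unknown flags to off is the usual safe choice when the app can't reach the flag service.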
(not a fan of toil or hypervigilance)
Feature flags are a very powerful architectural construct! Highly recommend. We even have flags on our mainframe code that help us deploy changes with minimal impact.
The increases in Release On Demand and Reduction in Cycle Time came from many changes. But use of Feature Flags was a big contributor.
@katharine.chajka and @andy.hinton Here’s a 20-minute playlist on feature flags, decoupling deploy from release and the shift to full-on progressive delivery: https://www.split.io/blog/progressive-delivery-safe-at-any-speed-playlist-blogs/
…to determine if they have any impact?
@scott.prugh I’ll bet you don’t miss multi-hour hotfix/rollback exercises.
Reminder: slides are here if you’d like: https://speakerdeck.com/davekarow/does-layered-approach-to-pd-2021
The appearance of a feature but no code behind it.
I have a hypothesis that feature flags are a key architectural construct that increases a developer's understanding of operability...
one thing that has been on my mind with Feature Flags is how far "left" it reaches to become built in - have heard it talked about as part of backlogs, prioritized along with Features by Product Owner/Product management - how does it get built/baked in to the process?
Once you start using feature flags, the devs and teams start asking: How are we going to deploy this? How do we minimize impact/blast area? As opposed to: Here is my binary/container/etc
@katharine.chajka Feature flags by definition have to be built into the code as it’s being committed… they are as shift left as you go. Here’s an example of how a team would have a flag in their story from the very start. https://www.split.io/product/integrations/atlassian/#:~:text=Integrate%20Jira%20Software%20with%20Split,release%20statuses%20in%20both%20platforms.
So the opposite of dark launch, like the Lean Startup thing of buying a Google ad without a real thing behind it?
@rshoup Look into “Pretotyping” by Alberto Savoia https://www.pretotyping.org/
Glad you liked it folks. Deck will be on DOES site shortly but is here as well: https://speakerdeck.com/davekarow/does-layered-approach-to-pd-2021
Great presentation @dave.karow!! I love the four shades slide!!
that is an amazing vision and possibilities. I'm working with our team to reduce branches and start to use feature flags... we have a long way to go
@dave.karow inspiring talk, thank you! Now the tricky part is, as you said, separating signal from noise. For me that includes what to measure (because, in any app, there is a ton of things I could measure). Any tips here? Does Split help with that?
The Limit WIP/Achieve Flow is so powerful.... Queues and Dependencies are a massive problem: Team A waits for Team B and queues up work and they both need to arrive together. With Feature Flags you can remove a dependency and a queue!
Welcome @liran.tal for our next session's Q&A. Thank you https://doesvirtual.com/snyk!
@jakub.holy Great question. Start with the “guardrail” metrics your company cares most about. We are talking specific tech or biz metrics, not massive detail like tailing a log.
@jakub.holy hold a sec and I’ll share a link
https://www.split.io/blog/how-to-choose-the-right-metrics-for-your-experiments/
https://help.split.io/hc/en-us/articles/360031135192-Experimentation-Essentials-101-Metrics
https://www.split.io/blog/how-to-avoid-lying-to-yourself-with-statistics/
Progressive Delivery folks: I’ll be in the Split Channel now at #xpo-split-feature-flags
Hi Dave. I know a fair bit about CD. Could you explain where Progressive Delivery builds on top of CD? The main example given in your talk abstract, of decoupling deploy from launch - that's been in CD since the original 2010 book by Dave and Jez, and it's a practice I've witnessed at companies since 2008 at least. At the moment, I see Progressive Delivery as a synonym for CD, as Jez/Dave/others have defined and demonstrated it. Am I missing something? Thanks!
With the case of NPM packages, I would argue that often we could go with less. Re-invent some of the wheel, instead of pulling down create-react-app or similar, just to make a page with a single button on it 🤔
Great blog post about this topic https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610
How effective do you believe automated tooling will be in helping us detect malicious code/edits to packages in our tool chains, @liran.tal?
Very. I think there's a good level of automation we can do around detecting malicious change sets and tracking them back. There's also work that is sorely needed on the supply chain entities that play a key role here, like package registries: for example, signing packages and other capabilities. Security education and awareness need to go along with that. To put some code behind my words, I wrote a tool some years back that is as relevant today. Developers blindly install packages, and blindly run arbitrary code with the likes of "npx". This small npq CLI steps in before you install a package and examines it to make sure you're not introducing malicious packages. I'd imagine it would've caught most of the typosquatting and malicious packages that had preinstall scripts running on a developer's machine https://github.com/lirantal/npq
If I could build that small tool to be effective against some types of malicious supply chain packages, then there's definitely hope for the bigger issue :)
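As one illustration of the kind of check such a tool can run, here is a hypothetical edit-distance test against a short list of popular package names (real tools like npq also look at preinstall scripts, download counts, package age, and more):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# A tiny stand-in list; a real check would use registry popularity data.
POPULAR = ["react", "express", "lodash", "axios"]

def typosquat_suspects(name: str) -> list:
    """Flag names within one edit of a popular package (but not equal)."""
    return [p for p in POPULAR if 0 < edit_distance(name, p) <= 1]

suspects = typosquat_suspects("expres")  # one letter off "express"
```

Even this naive check catches the classic one-character typosquats, which supports the point that simple automation goes a long way here.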
Thanks for attending. Love to engage in more discussions on this so don't be shy to shoot anything in the channel here or elsewhere. A lot of smart people here for us to debate some tough questions :)
👏 Welcome @paul120 who will be moderating for today's VendorDome Q&A between @nlevine and @swhite941 Thank you https://doesvirtual.com/anchore & https://doesvirtual.com/gitlab!!
Let's do a little poll to see what the audience is doing with DevSecOps!
Do you think that pen testing skills have a valuable place in DevSecOps to flip the security viewpoint on its head and start “thinking like an attacker” and are those skills that more Devs + Ops should start looking at gaining?
Interesting that in the second poll above most people are doing security scans in CI/CD and not at other points.
There is some security checking already taking place in the IDE; sometimes with better support, sometimes less, but often already at some level. This often depends on the IDE. Most tools I've seen (so far) lacked support for Xcode
Some tools for static code analysis are making their way to efficient support in IDE but there is still a lot of stuff that’s not that trivial to run locally.
Hi, @paul120 @nlevine @swhite941 — I’d love to hear how the potential leak of cloud credentials due to codecov issue has affected how orgs do CI/CD and cloud credentials. I was blown away by the scope of the problem that this revealed!
Yeah -- the list of people impacted by CodeCov is mind boggling because it's software vendors -- so you get a fan out from codecov to a bunch of other SW vendors which then impacts many more companies.
It was also the first one I've seen that was around containerized software
CodeCov gave me 2 days of just running over every piece of Terraform in our central codebases 👀 Not fun
It took me days to start getting my head around what the implications of this were…
I opened a lot of these, I still love the required version string for modules supporting older versions 😂
Where did you get this screenshot? Hearing about this vulnerability yesterday, this seems to be a potential next step to take
That's something I wrote for our maintainers from our internal GitLab 😉
https://discuss.hashicorp.com/t/terraform-updates-for-hcsec-2021-12/23570 https://discuss.hashicorp.com/t/hcsec-2021-12-codecov-security-event-and-hashicorp-gpg-key-exposure/23512 Those are the two comms for it
So the new Executive Order in the US is going to be interesting here: they are going to start mandating an SBOM from SW companies that sell to them, I think that will impact all SW companies and the industry at large
UK govt is doing a similar thing too; pretty sure it's going to be an interesting few years with regulation and countries getting concerned about people's WiFi toasters https://www.gov.uk/government/collections/secure-by-design
Effectively it doesn't matter where you sell: everyone is going to have to work with these regulations/standards if they want to operate in the UK/USA, and possibly the EU soon
Yep -- just like GDPR is impacting everyone and US states are now implementing similar rules
Reminds me of something Robert C. Martin said (paraphrasing heavily): "Software will be regulated; it is on us to decide our own ethical code and how we deal with these things, or else someone from politics will"
https://www.youtube.com/watch?v=7EmboKQH8lM&t=53s It's somewhere in the beginning of this IIRC
From the Executive Order:
(vi) maintaining accurate and up-to-date data, provenance (i.e., origin) of software code or components, and controls on internal and third-party software components, tools, and services present in software development processes, and performing audits and enforcement of these controls on a recurring basis;
(vii) providing a purchaser a Software Bill of Materials (SBOM) for each product directly or by publishing it on a public website;
(viii) participating in a vulnerability disclosure program that includes a reporting and disclosure process;
(ix) attesting to conformity with secure software development practices; and
(x) ensuring and attesting, to the extent practicable, to the integrity and provenance of open source software used within any portion of a product.
(f) Within 60 days of the date of this order, the Secretary of Commerce, in coordination with the Assistant Secretary for Communications and Information and the Administrator of the National Telecommunications and Information Administration, shall publish minimum elements for an SBOM.
https://anchore.com/blog/latest-cybersecurity-executive-order-requires-an-sbom/
In working with US Gov and Platform 1, they are definitely worried re insider threats!
This was an interesting survey from Linux Foundation: https://www.techrepublic.com/article/open-source-developers-say-securing-their-code-is-a-soul-withering-waste-of-time/
"The survey, which included questions designed to help researchers understand how contributors allocated their time to FOSS, revealed that respondents spent an average of just 2.27% of their total contribution time on security. Moreover, responses indicated that many respondents had little interest in increasing time and effort on security. One respondent commented that they "find the enterprise of security a soul-withering chore and a subject best left for the lawyers and process freaks," while another said: "I find security an insufferably boring procedural hindrance."
Interesting, but given how big the Linux project is I don't see the developers being daunted by the task at hand. That doesn't mean they aren't fixing the issues; take a look at the Coverity Scan results for the Linux kernel: the developers are actively using the results to fix issues on a regular basis. https://scan.coverity.com/projects/linux
That's always been the issue. People want to do the fun things like adding new features. Few want to muddle through security and documentation.
Part of the issue is false-positives. There are so many vulnerabilities that aren't relevant, that developers end up wasting time explaining why things don't need to be fixed. If you can reduce false positives it becomes easier to make the case to developers.
Here's a quote from a recent interview we did: "People don't hate running vulnerability scans. What they hate is fixing them because they don't actually identify problems"
"So, if my noise is say less than 15-20%, I'm good, but right now noise levels are like 60%. And that is very high. It's a big problem because I spend a lot of time trying to satisfy those things."
We’ve also recently published our DevSecOps Survey at GitLab https://about.gitlab.com/developer-survey/ which will be of interest to most of you; see how 4,300 of your peers view the current state of affairs in DevSecOps
Hm, do you mean scanning the actual production environment, or rather creating a summary of what is deployed in production so that you can always check that specific version of code or application at any time?
I think they are referring to running the scans wherever you can, i.e. scan in the IDE, scan at check-in, scan in CI, scan in CD. Each stage allows different types of scanning and could produce useful results. As they've said, noise or FPs have to be managed, so low FP rates are critical
Don't just shift everything onto the Developer, shift the security tests to everywhere you can and where they make sense
Yes, I agree. My question would be more whether it is more feasible to scan the prod env (if that is possible?) or rather have an overview of what has been installed/deployed and then do regular audits? But I guess there is not “THE ONE” answer
I would agree with your conclusion: no one answer. If scanning sooner could have fixed a P1 issue before a customer was affected, that would have been the better solution
I mean, if you introduce a new scanner at one point - either because you changed vendor or introduced another position in the cycle - there will be code or apps that ran through the process before the introduction. So the question would be whether you go back and check everything (and maybe get a heart attack), or introduce it and only scan everything new?
Ah, I see, you're right: a new tool/technology will mean new defects, and you need an approach and a way to manage those. The tool should support you in creating a baseline for where you're at today, and then you can work to drive improvements on that as the teams get familiar with the tools
Ah, thank you. Got it 🙂
> My question would be more if it is more feasible to scan the prod env (if that is possible?) or rather have an overview of what has been installed/deployed and then do regular audits?
Apologies for the long post... @christian.kullmann It certainly is possible to get some coverage through both of the methods you are referring to.
The cleanest solution is to actually scan the production environment itself. That absolutely is possible, although it typically requires installing the scanning tool inside the containers that need to be scanned. This requirement can be a deal breaker for many organizations.
The other method that you referred to, of compiling a list of assets deployed to production and then regularly scanning those assets, is far less intrusive. For containerized applications, this typically involves gathering a list of running containers and identifying which images were used to start those containers. The downside to this type of scanning is that it does not capture any changes that were made inside the container after it started running. This downside can be mitigated by monitoring the running containers for unwanted changes and alerting when/if those changes do happen. This gives you a reasonable assurance that the images have not been changed in a meaningful way from the time they were originally created, and your scans of the container image are therefore representative of what is actually running.
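As a rough sketch of that second, inventory-based approach (all image names and CVE IDs below are invented): cross-check the set of images actually running in production against the latest scan results, and alert on anything running that was never scanned or has known criticals.

```python
# Images currently running in production, e.g. gathered from the
# container runtime or orchestrator API -- hypothetical names.
running = {"web:1.4.2", "api:2.0.1", "worker:0.9.0"}

# Latest scan results per image: list of critical CVE IDs (made up).
scans = {
    "web:1.4.2": [],
    "api:2.0.1": ["CVE-2021-0001"],
}

# Running images with no scan on record: a coverage gap to close.
unscanned = sorted(img for img in running if img not in scans)

# Running images whose most recent scan found criticals: alert on these.
critical = sorted(img for img, cves in scans.items()
                  if img in running and cves)
```

The same cross-check also catches the case mentioned below, where a new CVE is published for code that was clean when it shipped: re-scanning the inventory flips the image into the critical list without touching production.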
@swhite941 No worries about long answers 🙂 Thank you for clearing that up, that was exactly what i was looking for.
When scanning in production, you may have new vulnerabilities associated with running code. eg there wasn't a vulnerability when it went into prod, but there is now
I think it would be cool to have an index/tool/benchmark for scanning that measures false positives for container images since that's the wave of the future
Hello Everybody, this is Sacha Labourey from CloudBees and I'm with @sanmat.jhanjhari475 and @ben.angell from Nationwide Building Society, happy to be here with y'all!
‼️Welcome @sacha, @sanmat.jhanjhari475 and @ben.angell for our next session's Q&A. Thank you https://doesvirtual.com/cloudbees!
To continue discussion with @paul120 @swhite941, visit #xpo-anchore-devsecops or #xpo-gitlab-all-in-one-devsecops - thanks for tuning in to Anchore / Gitlab Vendordome!
Curious, what do others feel is the biggest challenge in convincing your business to support your software initiatives?
baselining so key. if you don't know where you're at or how fast you're going, how do you know if you're improving? @ben.angell do you use any specific metrics to track progress from business /customer perspective to complement DORA etc?
Hi Patrick, we use BVSSH (better value sooner safer happier) as high level metrics, key indicators for us are delivery time to market (how long to deploy code), MTTR in event of downtime. We are visualising these metrics via tools like thoughtspot, which lands very well with our stakeholders
That's awesome @ben.angell, thanks. Has that led to more meaningful discussions with business stakeholders? Helped overcome the language barrier etc?
Hi Patrick - I think it has. It has actually been a good learning experience for me - the main lesson is to remove CI/CD jargon and talk in terms of efficiency and how that can equate to value. Visualising activity timelines really brought it to life, and got us the buy-in we need
Top stuff. Glad to hear it! Very impressive work, well done. Best of luck as you continue your journey 👊
@ben.angell Loving some of the metrics to measure efficiency: how quickly you can pass code through the SDLC, and how automation can take out time. Infrastructure as code being a priority and delivering the right way.
Thanks Mo - it does really help deliver the message… and we are keen not to “weaponise” the data - it’s used to show where efficiency opportunities may exist!
📣Welcome @gustav.lundsgard1 and @mdahl for our next session's Q&A. Thank you https://doesvirtual.com/synopsys!
Dang it… I love what you did there. We already have other wordings and learning tools in place. Cool stuff.
Hello Christian - in your line of business - this theme certainly must resonate as well.
It so does. I bet at least half our devs would rush to become Cyber Jedis in a heart beat. Sounds way sexier than just Security Champion 😄
Great question, perfect timing. I'll elaborate: in the first round (our pilot program) we had only self-nominations of people wanting to learn more. As we did a bigger rollout, we had engineering managers ask around in their teams, which ended up with 30-ish people. By the rollout after that, the community was up and running and leaders around the engineering communities spoke about the academy, so we got a lot of candidates "for free". Now we're looking at teams without Jedis, and it is actually the various levels of engineering managers that ask for more Jedis, not us (security).
So at what level will Cyber Jedis receive their light sabers? Master?
I think already at Padawan level. When they start using automated security tests and some security checklists 😉
Haven't you seen the light sabres for sale in IKEA stores - next to wifi speakers in the lighting department.
Nice. Although real light sabers would really cut some discussions with non-believers short 😉
There … are … light sabers at the IKEA store? 😮 Shut up and take my money.
Light sabers are out of stock. We only have the Death Star now. https://www.ikea.com/gb/en/p/ikea-ps-2014-pendant-lamp-white-silver-colour-90311494/
Meh. Already got one of those. Turned out to be rather small and doesn’t come with a couple of TIE-Fighter Squadrons
Exactly - stop by the LEGO store on the way home to recruit the crew.
hahahahaha ,,, good idea 😄
That vision is crystal clear and inspirational - for all customers, including internal customers
Lack of starting metrics is a challenge. I am starting to build a PagerDuty community at IBM. We do have a form of starting metrics though. Will be interesting to see if any of them improve.
Yes, we are currently mostly looking at trends as the metric. Meaning that "number X SAST findings" is not interesting; we want to see that the number is moving, either up (meaning we are scanning more) or down (meaning we are resolving issues)
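A toy illustration of tracking the direction of the trend rather than the absolute count (the weekly finding counts are invented):

```python
def trend(counts):
    """Sign of the average week-over-week change in findings:
    +1 rising (scanning more), -1 falling (resolving issues), 0 flat."""
    deltas = [b - a for a, b in zip(counts, counts[1:])]
    avg = sum(deltas) / len(deltas)
    return (avg > 0) - (avg < 0)

weekly_sast_findings = [240, 230, 215, 204]  # hypothetical counts
direction = trend(weekly_sast_findings)      # falling: issues resolved
```

Either direction is a useful signal here; only a flat line with low coverage would be a worry.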
Again - good timing here - Gustav reflecting on the improvement process of the project - but indeed: Metrics just visualize the process underway.
Thank you for the introduction to the Cyber Jedi Academy @mdahl and @gustav.lundsgard1. Loved it.
Great talk thank you, love the community of learning!
You are most welcome - thanks for your attention and time - much appreciated! Please reach out in the "real world" for further dialogue!