2021-10-05
Channels
- # ask-the-speaker-track-1 (316)
- # ask-the-speaker-track-2 (312)
- # ask-the-speaker-track-3 (283)
- # ask-the-speaker-track-4 (309)
- # bof-leadership-culture-learning (3)
- # bof-project-to-product (10)
- # bof-sec-audit-compliance-grc (2)
- # demos (9)
- # discussion-main (1160)
- # faq (14)
- # games (135)
- # games-self-tracker (4)
- # gather (6)
- # happy-hour (50)
- # help (175)
- # hiring (25)
- # lean-coffee (8)
- # networking (26)
- # project-to-product (3)
- # summit-info (219)
- # xpo-adaptavist (5)
- # xpo-anchore-devsecops (12)
- # xpo-aqua-security-k8s (3)
- # xpo-basis-technologies (17)
- # xpo-blameless (4)
- # xpo-bmc-ami-devops (1)
- # xpo-broadcom (2)
- # xpo-cloudbees (5)
- # xpo-codelogic-code-mapping (8)
- # xpo-dynatrace (1)
- # xpo-everbridge (6)
- # xpo-gitlab-the-one-devops-platform (6)
- # xpo-granulate-continuous-optimization (15)
- # xpo-infosys-enterprise-agile-devops (18)
- # xpo-instana (5)
- # xpo-itrevolution (15)
- # xpo-launchdarkly (7)
- # xpo-logdna (3)
- # xpo-pagerduty (8)
- # xpo-planview-tasktop (12)
- # xpo-rollbar (3)
- # xpo-servicenow (4)
- # xpo-shoreline (11)
- # xpo-snyk (6)
- # xpo-sonatype (6)
- # xpo-split (10)
- # xpo-splunk_observability (3)
- # xpo-stackhawk (1)
- # xpo-synopsys-sig (1)
- # xpo-tricentis-continuous-testing (4)
- # xpo-weaveworks-the-gitops-pioneers (4)
Super excited for all the talks coming up at this conference! 😊 🙌
Looking forward to hearing @stephen talk! The Minefield of Open Source: Guidance for Staying Secure
Welcome @keith.puzey and @sujay.solomon from Broadcom's team for our next session's Q&A. Thank you #xpo-broadcom
I love that diagram which shows application as being on a different plane than infrastructure.
Some years ago, when I was at AWS, I was chatting with Suresh Kumar, then the CIO at BNY Mellon. He pointed out that applications last longer than the underlying databases which in turn last longer than underlying OS which in turn last longer than the underlying hardware. It’s important to test accordingly.
My video is at the highest quality setting 720p, but the slides are still a bit difficult to read. Would love it if we could get these slides.
Same here. I would also like to see more of the screen used for the presentation and less for the branding/margin.
Thanks for your notes. The slides will be available in the video library.
My video just cleared up, and I can see everything clearly now. 🎉
I love measuring cycle time and looking at how manual testing affects that negatively and how automated testing affects that positively...
the key for me has been bringing ownership of quality (at all levels of the SDLC) into the dev teams. Automated testing has often been the carrot to get them to take on that ownership 🙂
empowering developers is one thing, but there is something to be said for incentivizing/motivating them to do it as well.
Yes. This is the hardest part. I talk about a failed effort along those lines in How to Misuse and Abuse DORA Metrics after lunch.
i've had a few folks ask about measurable quality/policy gates that can be set to ensure ownership of quality within teams. I haven't really found a good answer for this. Traditionally, code coverage has been used, but that often ends up being a checkmark rather than a true reflection of confidence in the change.
https://www.linkedin.com/feed/update/urn:li:activity:6850773330161758208/
“I can think of lots of examples of measuring the wrong things. At one of my clients, they decided that they could improve the quality of their code by increasing the level of test-coverage. So, they began a project to institute the measurement, collect the data, and adopted a policy to encourage improved test-coverage. They set a target of “80% test coverage”. Then they used that measurement to incentivize their development teams, bonuses were tied to hitting targets in test-coverage. Guess what, they achieved their goal! Sometime later, they analyzed the tests that they had, and found that over 25% of their tests had no assertions in them at all. So, they had paid people on development teams, via their bonuses, to write tests that tested nothing at all.” Excerpt from “Modern Software Engineering” by David Farley
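To make the failure mode in that excerpt concrete: the sketch below is a hypothetical Jest test that executes code (so coverage rises) but asserts nothing, so it can never fail. The module and names are invented for illustration.
```ts
import { processOrder } from "./orders"; // hypothetical module under test

// Runs every line that processOrder touches, so line coverage rises...
test("processes an order", () => {
  processOrder({ id: 1, items: [] });
  // ...but there is no expect() call, so this test passes no matter
  // what processOrder does. Coverage goes up; confidence does not.
});
```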
I saw similar outcomes at Walmart. It was frustrating to warn against it and have management shrug, do it anyway, and then be surprised I was correct.
I'm starting to subscribe to this concept of the testing trophy, which reflects the ROI/confidence gained from different types of testing: https://twitter.com/kentcdodds/status/960723172591992832
the level of confidence gained from unit tests is fairly low ROI according to this model
Yes, we pushed this in the WM Testing Special Interest Group as well.
I’ve always found this to be a better pattern. The key, however, is understanding Kent’s definition of “Integration Test”
We created a glossary to establish a testing vocab.
Thank you Broadcom! A warm welcome to @stephen for our next session's Q&A. Thank you #xpo-sonatype!
Nice presentation! Good reminder of the need to think cross-platform with testing, just like with apps
Yay @stephen! Showing us how afraid we should be!! 😄
It would be interesting to create a pipeline warn gate around this…
@bryan.finster486: around usage? like warn if no one else is using this?
“Be aware, you and 3 other people use this in the world”
Noooo…. “Most popular projects have the most vulnerabilities.” 😆 @stephen
"90% least popular products are the least vulnerable"
don’t take this as advice to go use obscure projects. security through obscurity doesn’t work for software development any more than it does for confidentiality!
If it's popular, it's vulnerable because of the usage; many eyes on breaking and looking for the gaps.
yep! security researchers focus on popular projects (as do “black hats”). but as we’ll see later, the best projects also make sure they’re pushing updates out that remediate these vulnerabilities.
Only 0.3% of vulnerabilities don’t have a patch that remediates them. Whew.
make sure you’re monitoring security feeds / using tools / etc to notice when those need to be updated due to a disclosed vulnerability
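One concrete form that monitoring can take is querying a vulnerability database per dependency. A minimal sketch against the public osv.dev query endpoint (the package and version below are just examples):
```ts
// Ask osv.dev whether a given npm package version has known advisories.
async function knownVulns(name: string, version: string): Promise<unknown[]> {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ package: { name, ecosystem: "npm" }, version }),
  });
  const data = (await res.json()) as { vulns?: unknown[] };
  return data.vulns ?? []; // empty array when no advisories match
}

// Example: run in CI for each entry in the lockfile.
knownVulns("lodash", "4.17.20").then((v) => console.log(v.length, "advisories"));
```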
Canary builds with embedded scanning even if there is no feature change to the component.
In other news, wasn’t there a big Struts vulnerability that affected Confluence users?
just wait for the MTTU graph that comes later — I spent way too much time on that one 🙂
I think this is really interesting. People are, by and large, very focused on remediation. Less awareness of the importance of controlling what comes in.
Every time I run create-react-app I cross my fingers.
And we are wondering why there are vulnerabilities? The ability to have fast access to something like create-react-app is both beautiful and damned scary.
People are really unaware of what they are introducing into their local environments and, later, into shared environments. YIKES!
We had a process to help make this better at Walmart, but I wonder how many people will proxy npm through Nexus or another artifact repository and scan the versions of dependencies people are using within the org.
@bryan.finster486 - I had a similar process at the Commonwealth of Pennsylvania; a 2-step process, so it didn't introduce anything directly to the developer desktop unless it was a sandbox situation.
We were doing it async, but could blacklist something if it failed a scan.
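A minimal sketch of that kind of async check, assuming a hypothetical org-maintained blocklist file and an npm v2+ lockfile to walk:
```ts
import { readFileSync } from "node:fs";

// Hypothetical org blocklist: package name -> versions that failed scan.
const blocklist: Record<string, string[]> = JSON.parse(
  readFileSync("blocklist.json", "utf8"),
);

// Walk package-lock.json (v2+ format) and flag blocklisted versions.
const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
for (const [path, dep] of Object.entries<{ version?: string }>(lock.packages ?? {})) {
  const name = path.split("node_modules/").pop() ?? path;
  if (dep.version && blocklist[name]?.includes(dep.version)) {
    console.error(`BLOCKED: ${name}@${dep.version} failed security scan`);
    process.exitCode = 1; // fail the CI job after reporting everything
  }
}
```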
Being proactive re: dependencies - 1. Pay attention to Quality
PS: I think I had my first code PR accepted earlier this year, contributing a Vega/Vega-Lite upgrade inside a Clojure library — was immensely proud of myself, @stephen. I needed a diagram that could only be done in Vega v5. (@arne.brasseur., it was for the Oz library)
@genek: Popularity as a (misleading) metric makes another appearance in a slide or two.
oh maybe it’s not on here. let me find the graphic with popularity included…
This is great, @stephen!!! 1.8x less likely to be vulnerable! (At any given point in time?) High-quality projects are 8x less likely to have breaking changes.
so more popular projects were more likely to be vulnerable (for all the reasons we already discussed)
Agree on updating dependencies frequently, and I'd offer that we need teams to understand the purpose of the dependency... what does it offer?
yes, teams should carefully evaluate / discuss each time they consider adding a new dependency.
TRUE, and yet different teams are all at different points on their journey, with some not being as knowledgeable... and taking a bit of a black-box mentality
yes. I’m not sure what the best answer is, but there must be some way to help these teams. what is the optimal feedback to provide to developers to help them make these choices?
I tend to think having an architect or lead tech who has an evangelist mentality with a team can make a difference.
That’s a great finding, @stephen! I love that MTTU can guide good component selection!
it turns out http://libraries.io sourcerank includes a lot of popularity-type measures
MTTU improving over time! (Super novel use of graphs, @stephen 🙂)
super encouraging though to see the community-wide improvement in MTTU over time!
Flipping awesome insight to split by year — such an improvement over that MTTU vs MTTR graph we did 2 years ago, which was depressing. OTOH, this is quite hopeful!!
yes! I really want to take a look at other ecosystems now and see if this is widespread or more Java-specific.
Oh my…. is the assertion that people don’t want to jump more than 1 or 2 versions? Is that a valid fear?
yep. I wonder (speculating) whether this is due to fear of breaking changes.
1 or 2 versions forward feels safer, so they go to the closest non-vulnerable version rather than just getting fully up-to-date.
Would be interesting to see by industry vertical or some other interesting slice.
noted and good idea. I’ll see if we can provide some by-industry insights.
“red: those are people that upgraded to vulnerable version.” Oof.
good question. I don’t think we’ve looked specifically yet at what percentage of the population exhibits that behavior. I’ll take a look though — it’s an interesting question!
relentlessly marching forward as new vulnerabilities are discovered
Love this research, @stephen – kudos to you and team for it!!!!
Thanks! As you might expect, Bruce was the superstar pulling so many things together for this 🙂
Was thinking of you when I was editing my interview of Dr. Gail Murphy: https://itrevolution.com/the-idealcast-episode-21/ She and I were discussing whether innovation happens because people are allowed to do lots of breaking changes, or if it happens because you don’t introduce breaking changes. Was absolutely fascinating —
I’ll have to go listen to that. I feel like there’s an inflection point. Early on, breaking things is good. Past some point it slows you down too much if things that should be “settled” are breaking all the time.
I missed @stephen’s talk. But will definitely play this one back later. Looks like I missed a good talk
Welcome @rani who will be moderating for today's VendorDome Q&A between #xpo-anchore-devsecops and #xpo-aqua-security-k8s @nurmi and @rory.mccune! https://devopsenterprisesummitus2021.sched.com/?iframe=yes&w=100%&sidebar=yes&bg=no#
Share any questions here that arise for you during this session and our speakers will address them right now during this LIVE session!
to paraphrase: hard to marry regulation with cloud native development practices.
You mentioned Codecov; why do you think that incident didn't receive larger media coverage?
speakers hope we'll resist the temptation to turn this into a paperwork exercise
Was marveling at the impact of the latest Struts vulnerability — software supply chains are so relevant right now!
64% of enterprises have been impacted by a SW supply chain attack in the last year. Here's the data https://anchore.com/software-supply-chain-security-report/
To what degree are people not upgrading dependencies because of fear of breaking changes?
I’m seriously interested in what it would take for devs to upgrade dependencies, especially when there is a patched component available — now it’s more than detection, it’s actually remediation. I love the quote that the best way to stay secure is to just stay up to date.
Hello y’all. Looking forward to discussion in Slack and questions/feedback you may have!
To some extent I think it’s a visibility issue. It’s harder to make dependency updates a prioritized backlog item with stakeholders when compared with the more visible impact of adding features.
plus, it's hard to plan for a lot of work where we can't show measurable gains - like, how can we quantify prevention?
If we can move beyond automated dependency checking to automated remediation, via tools opening PRs to update dependencies, we will be a lot better positioned. But we have to have robust automated testing to handle the fear, which the speakers touched on earlier, that an upgrade will break something.
couldn’t agree more. dependabot workflows should be a force for good, whether in code dependencies or in automated testing activities!
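For reference, the Dependabot flow mentioned above is driven by a small config file in the repo; this is the standard `.github/dependabot.yml` shape, with the weekly cadence picked arbitrarily:
```yaml
# .github/dependabot.yml: ask Dependabot to open PRs for stale npm deps
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"              # where package.json lives
    schedule:
      interval: "weekly"        # example cadence
    open-pull-requests-limit: 5 # cap concurrent update PRs
```
The PRs only stay cheap to merge if the automated test suite is trustworthy, which is exactly the dependency called out above.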
Great talk. I have a question - how do service organizations balance frequent deployment against clients' expectations of not having frequent changes that may risk service delivery?
A number of successful supply chain attacks have come in through DevOps toolchains.
Thank you! That was a great discussion.
Welcome @dave.karow for our next session's Q&A. Thank you #xpo-split
I’m ready to go… feel free to ask questions during the talk!
as part of Scaled Agile, we work at separating deploys from releases. I’ll admit though, this is the first time I’ve heard “Progressive Delivery”. I’m intrigued
Stay tuned… decoupling is key but there’s much more possible 🙂
In London, I got a question about whether progressive delivery was any different than CD. The foundational idea (decouple deploy from release) was in Jez Humble and Dave Farley’s CD book, but the practices of getting more fine-grained about gradual releases and using data aligned to these gradual releases is where it gets even more interesting.
https://www.split.io/blog/learn-the-four-shades-of-progressive-delivery/
For you code readers… yes, there’s a “bug” in the else if line here… should say “treatment == ”
@dave.karow, what are some examples of "treatment"?
this "flags" are in the same codebase , togheter on the same binary lets say ? what if an application crashess because of one on flag.
@pedro.jordan I missed the second half of your question during the talk. The great thing about flags is that you can toggle the state remotely and instantaneously if an issue arises. That’s how you avoid needing a rollback/roll-forward deployment to resolve an unanticipated issue. Just toggle the flag and within milliseconds it’s off again.
treatment just means code path… could be classic blue button / red button marketing test but could also be back-end code that gets executed.
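Roughly what that pattern looks like in code, sketched with Split's JS SDK (the SDK key, user key, and flag name are placeholders), including a corrected else-if of the kind the slide's typo garbled:
```ts
import { SplitFactory } from "@splitsoftware/splitio";

const factory = SplitFactory({
  core: { authorizationKey: "YOUR_SDK_KEY", key: "user-123" }, // placeholders
});
const client = factory.client();

client.on(client.Event.SDK_READY, () => {
  const treatment = client.getTreatment("new-checkout-flow"); // placeholder flag
  if (treatment === "on") {
    // new code path
  } else if (treatment === "off") {
    // old code path
  } else {
    // "control": the SDK could not evaluate; fall back to the old path
  }
});
```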
i love flags, but you need your test automation on point, for both sides of the flag. That's where I've seen teams struggle in the past
Yes… one binary with multiple possible execution paths, executed based on a runtime decision, a user/session at a time.
Testing strategies are one of the details to work out. Being able to have a test get the desired flag state is the main strategy… the upside is being able to test the new and old experience in the same environment.
One thing our team struggles with is that multiple teams are rolling out features at the same time, and when our business metrics go negative, we can't figure out which change caused the problem. Any suggestions?
I have also used them, and they are cool in some cases, but I just wanted to see a full example of this "decoupled" architecture
@dvancouvering for sure… that’s where tying the metrics to actual flag decisions comes in. Next several slides will introduce the idea.
Love the point on all the ever-changing surroundings and their ability or inability to impact behavior!
Canary by containers exposes the entire release to a segment of network traffic (hopefully sticky). %-based splitting using flags is still a canary of sorts but is at the feature level, not the build/release level.
@dvancouvering When multiple flags are being used, the key is to use a different seed for randomizing each. THAT makes what you are seeing now cancel out other flag influence.
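A toy sketch of why per-flag seeds make rollouts independent: every flag hashes the same user ID with its own seed, so two 50% populations overlap randomly rather than identically, and the other flag's effect averages out. (Real SDKs use murmur-style hashing; this FNV-style hash is only for illustration.)
```ts
// Toy bucketing hash (FNV-1a style); real SDKs use murmur-style hashes.
function bucket(seed: string, userId: string): number {
  let h = 0x811c9dc5; // FNV offset basis
  for (const ch of `${seed}:${userId}`) {
    h = Math.imul(h ^ ch.charCodeAt(0), 0x01000193) >>> 0; // FNV prime
  }
  return h % 100; // deterministic bucket 0..99 for this (seed, user) pair
}

// Distinct seeds: each flag's 50% rollout hits a different random half of users.
const inFlagA = bucket("flag-a-seed", "user-123") < 50;
const inFlagB = bucket("flag-b-seed", "user-123") < 50;
```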
OK, yea I think we actually do that. I guess the other problem is it can take a very long time to get statistically significant results. This means rollout can take a week as we gather the data for each phase of the rollout.
Yep. There is a balancing act between test to learn and test to launch. The latter looks for bigger scarier signals and acts on them quickly. The former holds out longer to get defendable stats outcomes.
I like that "test to learn" vs "test to launch"
Gotta say @dave.karow you’re making it hard for me to pay attention to doing other work in the background 😄
Ha! Just connected the dots that you were the next speaker. I used to live in your world not long ago… BlazeMeter, before that SOASTA, and before that Keynote LoadPro. Went from consultant led big-bang load testing to developer-led continuous testing.
Here is more: https://www.split.io/blog/progressive-delivery-safe-at-any-speed-playlist-blogs/
I’ll hang here for one minute and then head over to #xpo-split for more Q&A
That was very interesting. Not sure it's applicable to my day job... but it does explain a lot about when you hear about a new feature being rolled out on FB or Google or whatever platform.
@dave.karow great presentation and this is in-line with what we're thinking on my team to decouple deploy from release
Will you be able to share the slides from this presentation so I can share with my team as we plan for 2022?
✨Welcome @p.bruce for our next session's Q&A. Many thanks to #xpo-tricentis for their loyal sponsorship!
🖥️ https://devopsenterprisesummitus2021.sched.com/event/mGSj - with @p.bruce There’s no question that enterprises today want to further integrate continuous performance testing into automated pipelines. However, many are finding it difficult to reconcile the mismatched clock-speed of testing with today’s accelerated pace of development/delivery. Tune into Paul Bruce’s session https://devopsenterprisesummitus2021.sched.com/event/mGSj to learn, among other things, the key steps to continuous performance testing in DevOps:
• Gather the right metrics to assess your gaps
• Prioritize, then systematize across your application portfolio
• Plan for acceleration across the whole delivery cycle
• Design concrete measurements with the end in mind
• Pick the right targets to automate
• Make scripting easy for multiple teams
• Develop performance pipelines
• Use dynamic infrastructure for test environments
• Ensure trustworthy go/no-go decisions
⏰ Session is starting now in #here!
Last year, I published a bunch of blogs (and I think they did a bundle/paper) on continuous performance and load testing: https://www.neotys.com/blog/easy-a-key-requirement-for-continuous-performance-testing/
Also, I recently did another presy for EuroSTAR about how to move the performance mindset and practice forward: https://docs.google.com/presentation/d/11CKT5zUX8bFFruvJK7d_8KKkfyHTv9d06eeNit04Ah4/edit?usp=sharing
"Superheroes in IT are single points of failure" This phrasing is just brilliant. Kudos @p.bruce for coining that.
If you like that one, check out some other thoughts I was able to put down a few months ago: https://www.youtube.com/playlist?list=PLFXQmSmq7uXTElSlaOqUHCeVlJi5nGJ85
Who has operational (i.e. performance) requirements in their work planning process?
You may have heard people refer to performance as ‘non-functional’. Sorry, I call it OPERATIONAL. 😄
Right? Who came up with that term--it's going to be non-functional if you don't account for it...
it does seem most non-functional requirements are looked upon as optional; to add on to your comment, operational should not be optional
For anyone who wants to discuss continuous performance engineering more, I’ve bookmarked my calendly for this event at the top of the channel: https://calendly.com/paulsbruce/devops-ent-summit-21-chat
holy crap. i hope i didn’t ‘bend the laws of physics’ too much with my analogy. i fall asleep for weeks every year to ‘entanglement’ by Aczel 🙂
Highlighted exactly why this remains hard and often outside scope of regular work - and the problems that causes. Thanks @p.bruce!
👏🏻 Please welcome @simon540 and Yash Kosaraju for our next session sponsored by #xpo-snyk! 👏🏻
do your security champions self-select? Do you have some kind of selection / vetting process?
Great question - there’s often a mix of approaches from company to company, largely based on whether orgs want full coverage across all teams/BUs or whether folks like to ensure that everyone is there because they want to be there. Both approaches have pros and cons; which one you go for largely depends on your needs.
@dfugleberg Coupa has a similar security champions program. The selection is a combination of nomination and endorsement based on interest and experience. After that there is an annual certification run by the Security team to keep the champions up to date. And there are ceremonies (akin to Agile) around the program like monthly meetups.
One of the most important parts in vetting is being really clear about what skills and knowledge a good champion should have. It’s more common for a mismatch to occur because the role isn’t well defined.
How is dynamic data handled in automated CI/CD in your case? For example, I wrote a test case with one version of the data, but once the data changed, my automated test case needed to change as well. How do you handle this dynamic case in CI/CD?
Apart from STRIDE/DREAD, I'd be curious what people's threat modelling processes look like…
Do you have some threshold (based on severity of found vulns, potential blast radius, certain patterns in IaC…) at which you break the build (possibly even in a pre-commit)? Or do you have a design that limits blast radius and would allow leaving full accountability with the product team, just informing them, with some limit beyond which it should block?
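Not the speakers' answer, but one common shape for such a gate is a severity threshold evaluated over scanner findings. A minimal sketch assuming a generic findings list rather than any particular tool's output format:
```ts
type Severity = "low" | "medium" | "high" | "critical";

interface Finding {
  id: string; // e.g. a CVE identifier
  severity: Severity;
}

const RANK: Record<Severity, number> = { low: 0, medium: 1, high: 2, critical: 3 };

// Report every finding, but only break the build at/above the threshold,
// leaving lower-severity items to the product team's accountability.
function gate(findings: Finding[], threshold: Severity): void {
  const blocking = findings.filter((f) => RANK[f.severity] >= RANK[threshold]);
  for (const f of findings) {
    console.log(`${f.severity.toUpperCase()}: ${f.id}`);
  }
  if (blocking.length > 0) {
    console.error(`${blocking.length} finding(s) at or above '${threshold}', failing build`);
    process.exit(1);
  }
}

gate([{ id: "EXAMPLE-0001", severity: "critical" }], "high");
```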
🌟Welcome @nurmi and @kim.weins for our next session's Q&A! Thank you #xpo-anchore-devsecops for your support of DOES! 🌟
To get the full report and all the data, here's the link: https://get.anchore.com/anchore-2021-software-supply-chain-report/
Interested in learning more about securing the software supply chain? Download the white paper https://get.anchore.com/prevent-software-supply-chain-attacks/
yes, but there is one front runner and others are known to be on their way out / not preferred
“Teams are on average using 6 DevOps tools, involved in the CI/CD pipeline” — and often in enterprises, those 6 tools are all different! 😱
Interestingly, this is something that @lucas.rettig and @levi.geinert500 had to tackle in 2018 at Target — I’ve always interpreted this as a backlash of decades of forced standardization of dev tooling in the prior years. 🙂 https://videos.itrevolution.com/watch/524020857/
Confessions: I have more than a small degree of fear upgrading container base images.
We do see quite a bit of variability within bigger orgs WRT dev/ops tooling, for a lot of reasons. A not-uncommon one is when new teams/tech are brought in through acquisition, where along with the team and tech comes the entire dev infrastructure and tooling as well 🙂
Or siloed enough organizations at different stages of evolution where each silo has its own ecosystem… 😇
My experience is when you ask any IT/Dev person "What do you use for <tool category>?" Answer is inevitably "One of everything"
I saw a Director of Dev Productivity from Cisco talk about this — as a company driven by acquisitions, he was given the task of migrating all the companies/divisions onto a smaller group of tools. It was a fantastic presentation — I’ve been wanting him to share it with DevOps Enterprise Summit, as I thought it would resonate. There is a cost to freedom vs. standardization.
Plus there is tool drift over time, eg our preferred tool from 3 years ago is different than today.
@genek I think we need a track on how to be on the cutting edge of security initiatives. e.g. SBOM, I've asked vendors to support Subresource Integrity (SRI), getting more visibility into their own security practices, breach notification, etc. I often get pushback..... any suggestions from anyone... other than just wait 😛
Can you send me an email at genek@itrevolution.com? Thanks!
We have similar issues when we are asking our vendors to be responsible for the open source they include in their software. A lot of times their legal teams try to remove those words from our contracts with them. Most of the time the reason is "this isn't something we do"
@jonathon.sturdevant precisely, you also start feeling like the tinfoil hat fellow....
Though I just had a vendor let me know we're 1 of only 3 customers who had asked them to support SRI... so there must be other folks out there 🙂
Hopefully there will be a change here - available tooling/tech and renewed focus on these topics are, I think, making the ability to generate the needed data more accessible to software producers
yah I think there's a real challenge with SBOMs, because unless it's literally up to the minute it may very well be out of date…
This is a gap in the industry that I expect will be addressed by the current focus on SBOM, SPDX, and this https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/
"(vii) providing a purchaser a Software Bill of Materials (SBOM) for each product directly or by publishing it on a public website;"
^^^ @jason.cox I imagine you see this, where devs in business units have lots of freedom / autonomy / short attention spans. 😆
Check out the recording of the Cisco session from earlier today @jayson.henkel498. They are using SBOMs.
https://videos.itrevolution.com/watch/621612744/ — for my reference, thank you, @kim.weins
For sure. The US Executive Order might be a kick in the pants for software suppliers, which will probably then influence what other enterprises require from their suppliers + do in their internal dev.
Also the Linux Foundation, CNCF and OpenSSF are starting a major push on OSS projects that will potentially help advance things
Reminder: The plenary sessions are starting again in 5 minutes. Start making your way back to your browser and join us in #ask-the-speaker-plenary to interact live with the speakers and other attendees. https://devopsenterprise.slack.com/files/UATE4LJ94/F01D34MC2KS/image.png