Reminder: Get yourself in front of your browser and into #discussion-plenary for the opening remarks. We’re kicking off the final day of the DevOps Enterprise Summit in 15 minutes at 8:30am CST! https://devopsenterprise.slack.com/files/UATE4LJ94/F04DG604H1C/image.png
Reminder: The final day is starting now – opening remarks and then plenary talks! Join the conversation in #discussion-plenary.
Reminder: Remember all those talks you attended the first two days of the Summit? Please submit your feedback for those! It’s so valuable for us and the speakers. And after all, feedback is a gift and sharing is caring! Enter your feedback for those talks here: https://doesus2022.sched.com/ https://devopsenterprise.slack.com/files/UATE4LJ94/F04DG7DQMSS/image.png
Reminder: The breakout sessions are starting in 5 minutes. Get in front of your browser and start navigating your way to whichever session you’re attending. https://devopsenterprise.slack.com/files/UATE4LJ94/F04DG604H1C/image.png
Let's welcome @gunther.lenz, here to present: "Digital Transformation 'Phoenix' Style in Healthcare"
Thank you, @annp and @dtzorko. I am super excited to share our journey. If there are any questions, just shout them out or DM me!
You keep mentioning improving smoke test pass rate - was there a special emphasis on smoke testing vs e2e testing? did you have to shift to get the focus there or was that something you guys started with?
Firstly, the business wants commitment on the release of a new product 2-3 years in advance. So the typical agile problem is how we can commit to that if we adjust along the way.
In gaming it's mostly for marketing beats. Keeping a good-sized player base over time is important in live-service games, regardless of business model (subscription, microtransactions, etc.)
We try to shift the focus to customer value rather than features, so we have the ability to adjust the depth and breadth of what we deliver but can still show that we're solving customer problems.
Let's welcome @bryan.finster486, here to present, "Minimum Viable Continuous Delivery"
@ibathazi is one of the core team who’s here as well.
We do run smoke and e2e tests. We try to get a better test pyramid shape. We also made sure smoke tests are always passing and can be run very fast.
We also changed to a keyword-driven regression test framework and are focusing on defining more API-based tests. We have too many fragile UI tests.
I know no one else has ever seen this happen. 🙂
It's 5 o'clock somewhere... 😉
It's my very first DOES and I've missed out on the Dockside Bar... probably out of shyness... :face_with_peeking_eye:
No worries @jonathan.mailhot. I struggled sometimes too. This conference is very open and supportive. We strive to break down the "velvet rope" between veterans, speakers and newcomers as much as possible.
https://app.gather.town/app/kATASYhmxzaVMLO8/DevOps%20Enterprise%20Summit
Reading https://minimumcd.org/minimumcd/ci/#what-code-coverage-percentage-should-we-set-as-a-standard-for-all-teams... ❤️ What code coverage percentage should we set as a standard for all teams? We shouldn’t. Code coverage mandates incentivize meaningless tests that hide the fact that code is not tested. It is better to have no tests than to have tests you do not trust. See the Dojo Consortium's guidance on this metric: https://dojoconsortium.org/metrics/code-coverage/
Challenge accepted @bryan.finster486. I'll head to the Dockside Bar and start working on a 5-point Maturity Model for this... 😉
For CI I personally prefer to have short-lived feature branches that are frequently merged from main (ideally auto, but opt-in), PR review, with changes only merged to main once the branch has merged from main, the tests pass on the merged result, and the change is approved. Have a resource that might change my mind?
That's exactly what we're doing in my team too... super open to anything that could be deemed better.
I also don't want to just push the problem downstream to feature flags. Sure I delivered to production but if my code is feature flagged off for a month, did that accomplish anything?
My view has been that all changes are not equal, so a broken build for a low value change should not block a high value change. Same for the forced context shifts from mobbing a problem when the team breaks the build - I'd prefer to choose when that context shift happens. In my context I usually have same-day merge, but some last a few days for some teams (which I'm working on).
It’s better to have it flagged off in prod for a month than in a branch for a month.
Still doesn't seem like what we want. If my feature has been flagged off for a month, I won't dare turn it on in production. How does one get confidence we can safely start rolling out a feature flagged feature?
I love feature flags and especially dark launches. The DevOps Handbook 2e had some excellent notes around that.
Oh, definitely don’t want it off for a month. I’d start asking why that’s happening and what needs to be fixed. It obviously wasn’t something that needed to be built right then. That’s really the main point. CD exposes these problems and makes us find solutions to improve how we work so we can live better lives.
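On the "how do we get confidence to roll out" question upthread: one common pattern is a percentage-based rollout, where each user hashes to a stable bucket so you can ramp gradually (10% -> 50% -> 100%) while the same early users stay enabled. A hypothetical sketch, with invented names rather than any particular flag vendor's API:

```python
import hashlib

FLAG_ROLLOUT = {"new-checkout": 10}  # feature -> percent of users enabled

def bucket(user_id: str, feature: str) -> int:
    """Stable bucket in [0, 100) for this (user, feature) pair."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(feature: str, user_id: str) -> bool:
    return bucket(user_id, feature) < FLAG_ROLLOUT.get(feature, 0)

# Raising the percentage only adds users; it never flips early users off,
# so confidence builds gradually instead of one big cut-over.
enabled = sum(is_enabled("new-checkout", f"user-{i}") for i in range(1000))
print(f"{enabled} of 1000 users enabled at a 10% rollout")
```

The dark-launch variant is the same gate wrapped around "run the new code path but discard its output", which lets you watch errors and latency in production before any user sees the feature.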
For the original question - are you aware of any resources that might shift my thinking? This is the one area where I disagreed with DevOps Handbook, and I'd welcome being wrong
“so a broken build for a low value change should not block a high value change.” - Why was the low value change first??
Multiple people working on things in the same repo. One low impact, one high impact. I wouldn't want a low-impact, high-complexity change that breaks the build to block a high-impact, low-complexity change. That seems to conflict with an obsession with time to value and optimizing for cost of delay.
The other piece is that I'm not convinced that breaking main/trunk is appropriate. I'd prefer to have the branch merge from main, run the tests and security checks after that merge, and only allow merging to main once the tests pass. That reduces the blast radius while allowing main to be deployable.
Gotcha. First, both of those should be very small changes; a few hours of work at most. Next, the goal of CI is to verify that all of our code integrates cleanly. If the other change is blocking something critical, revert the change. Roll forward isn't always the answer, but the pipeline must be green. We cannot use workarounds for a broken build. The point is to harden our processes so that we can reliably and safely deliver the next high-priority change.
On your other point, CI and TBD don’t mean you cannot branch. It means today’s branch integrates today.
A common struggle is people defaulting to “I merge when the feature is complete”. That’s a behavior that needs to be improved to enable CD.
Tons of good info here https://m.youtube.com/c/ContinuousDelivery
Yeah, I need to work on some of the culture. I've been gradually getting people to break things into smaller units. What I'm evaluating is the tradeoff between merging to main every day vs merging from main every day. For an individual branch the resulting code is the same; the only difference is whether main is broken or the branch is, and I'd choose a broken branch in that case. The benefits would have to come from having multiple branches merged in daily, versus having multiple longer-lived branches that race to be merged, making the conflict someone else's problem. That still sounds like the benefit comes from small feature branches, though, not from the merge to main.
If everyone is pulling from the trunk, then there is no integration.
You and I are both pulling from main but main isn’t changing. Then we resolve merge conflicts at the end. 🙂
CI requires that we merge. It also requires we are merging code that is tested and not breaking. It doesn’t require complete features.
https://martinfowler.com/articles/continuousIntegration.html
Something I've encountered on multiple teams: people clinging to manual testing as the ultimate OK. How have folks convinced colleagues to let go of that and let the pipeline decide?
Wait, was that the answer? 😂 I can get theatrical with the tears if that works 😁
Manual regression testing or exploratory testing?
Manual regression testing means they need to test every single path that has ever been tested before and do it 100% accurately every time. I dare them.
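That "every path, 100% accurately, every time" burden is exactly what a table of regression cases automates. A hypothetical sketch (domain and values invented) of turning previously-verified behavior into data that runs on every build instead of living in someone's memory:

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Toy function under test: base fee plus per-kg rate, doubled for express."""
    base = 5.0 + 2.0 * weight_kg
    return base * 2 if express else base

# Every path a human ever manually verified, captured as (args, expected):
REGRESSION_CASES = [
    ((0.0, False), 5.0),
    ((1.0, False), 7.0),
    ((1.0, True), 14.0),
    ((2.5, True), 20.0),
]

failures = [(args, want, shipping_cost(*args))
            for args, want in REGRESSION_CASES
            if shipping_cost(*args) != want]
print(f"{len(REGRESSION_CASES) - len(failures)}/{len(REGRESSION_CASES)} passed")
```

In a real suite this would be `pytest.mark.parametrize` over the same table; the point is that adding a newly-discovered path is one line of data, and it gets re-checked forever with zero human accuracy required.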
Right. I agree that regression testing should be automated. Exploratory testing can be useful though for uncovering new abuse cases. Not sure it should be a gate to every release though...
Depends on risk appetite, I suppose... #context
They are adding automated testing, so it's less the manual regression testing stuff... but I think it's going to be hard for some teams to believe the pipeline will catch "all" errors. Which of course it won't.
I can normally convince them that they aren't testing 100% of every path, and they say: yeah, OK, it's just a happy-path sanity type of regression testing. But that's so arbitrary. How do you choose what should or should not fall in that bucket?
I wonder if there's just some psychological comfort in having a human sign off.
https://medium.com/rise-and-fall-of-devops/5-minute-devops-emotional-testing-f8f6bf18364c
Serializing the flow in a VSM so often blows minds when they think they know what all goes on in a delivery flow...
Five languages is pretty amazing to me. Maybe it's due to my lack of visibility to the greater industry, but that seems an impressive feat in a short amount of time.
@bryan.finster486 is a strong leader for this. Don't let him fool you ... it wouldn't be remotely where it is without him specifically!
Really liked that presentation, @bryan.finster486; it triggered something on the "how" and the "why" of certain things we are doing. I will definitely look up http://minimumcd.org !
People have found value. We’ve received feedback that people are using this as a roadmap.
Hopefully people using that will share their stories and progress
Thank you @bryan.finster486 for sharing the great work on http://MinimumCD.org!!
We will be at Dockside if anyone wants to chat.
Reminder: The breakout sessions are starting again in 5 minutes. Get in front of your browser and start navigating your way to whichever session you’re attending. https://devopsenterprise.slack.com/files/UATE4LJ94/F04DG604H1C/image.png
Welcome @carl.chesser, here to present, "Navigating Change with Communities of Practice"
Hi Everyone 👋 I'm excited to be joining you here today! Looking forward to chatting and answering any questions you have.
I felt this became very true in the last few years, where you have to continually put energy into externalizing what challenges teams are encountering. When we were in the office all the time, it was easier to see and hear more ad hoc exchanges when things weren't working...
Was so excited that you submitted this story, @carl.chesser !
I love these stories about how to better span silo boundaries — this is the area of study that Spear and I have been spending so much time exploring, including in our presentation later today!
About this: how do you maximize knowledge sharing without spreading too much information across too many media (emails, direct messages, newsletters, etc.)?
This is similar to making content that's easy to "grok", in a predictable format on a predictable schedule.
There was a fascinating treatment of this in the Stack Overflow talks — I was joking about how searching Slack is a pretty poor way to share knowledge, but a daily reality for me. :)
Just the other day I had a build problem I hadn't seen before. Searched Slack and found the solution... in a post from two years ago.
Yes, one thing we have found valuable is ensuring certain CoP members have time to communicate topics in existing team meetings with leadership. This helps recognize content and topics with peers, and ensures leaders in their space can see value.
Any recommendations for management communities of practice? From the ones I've been in, it seems hard to get everyone on the same page.
This is difficult when those interested have many responsibilities. What I have found valuable is having two different ways of engaging: with direct teams in one part of the organization (e.g., every other week), then rolling those up into a larger session (monthly). Then try to make sure there is an easy engagement pattern outside of these sessions. For example, that can be a lightweight RFC process (e.g., GitHub issues) that helps surface discussions.
Anyone else losing the video every few minutes? Or just me? (I suspect it's me)
I have too, but it's gotten 10x worse in the past half hr. Probably local...
Well, this was going to be on my list of talks to re-watch anyway, so 🙂
Hmm. Husband just texted me with "I was paging through a large postgres query result earlier, is it better now?"
So I think the issue may have been on my side. So far John's talk has only had one blip, instead of one per minute
For what it's worth... buffering started happening again around 10 minutes in, so I switched to the prerecorded version of the same video and moved to the same point, it's been ok so far.
Fascinating, all the ways that certain conversations stop happening as groups get larger. I'm really appreciating how you're pointing this out and describing countermeasures. What is the biggest evidence you've seen that people value these novel mechanisms?
Just based on my own observations in the larger group sessions, I was typically only getting feedback from a routine set of people in that larger group, which was inhibiting how much was getting shared. Once you get into a forum like a small group, people at least start to share, and the opportunity to speak increases (it's hard to be quiet in a small group).
What have you heard from other people who were taking part? I'm so curious to what extent they appreciate all this effort to create these obviously important mechanisms that otherwise normally wouldn't exist!
Something that I do as part of these sessions is bake in time at the end for a survey to get feedback from the larger group. I found that surveys just sent out via email were not getting many responses, but when filling them out was baked into the meeting time, I would get a higher percentage of feedback. In these surveys, I simply ask: what feedback do you have on doing break-out sessions? I have consistently heard that people want to continue them; they value being able to opt into topics upfront.
Another part I think has been helpful is connecting what was shared in the small groups back to the larger group session, and what actions occur as a result. I have seen people become less engaged in the smaller group sessions when they don't understand how this connects or what actions are taken afterwards.
Thank you!! I’ll be reaching out with some more questions!! Awesome!
I found it really valuable to find ways to have these smaller group sessions, because otherwise you can end up with echo chambers, or fail to get a diverse set of opinions shared.
I guess it helps avoid always having the same people talk or express their opinions?
Yes, the more you can mix who is sharing (which of course requires engaging with people early about speaking), the better. This is often easier when you help reduce the burden of sharing content (short amount of time, no need to build a new PPT).
Any guidance on getting leadership buy-in to support internal communities, including setting aside time and $ for people to attend?
We've had "lunch and learns" but at the moment that's considered your own time, and it's understandably difficult to get people to spend their own precious time in these sessions.
They can start as a bottom-up / grassroots effort, but can quickly start highlighting value when you get quotes from participants about the value they're getting. So it can almost feel like applying product management approaches to your group in how you advertise it back to leadership. Leaders sometimes feel compelled to participate if you share how this is working in one part of the organization and ask if they want their members to join too.
I also found it helpful to share my thanks back to those who present, e.g. a platform team advertising a new change they are looking for adoption / feedback on. In that thank-you I include both their manager and their executive, with some concrete reasons why it was helpful. In several cases, we would see them engage more as a result.
While it might seem very basic, here are some of the notes that I follow in communicating this back to ensure contributors get further recognition and getting awareness with their leadership: https://che55er.io/posts/thank-you-emails/
Excellent, thank you for your insight! Great talk too!
Usually I do these from our larger group sessions, which are roughly monthly. They don't take much time if you do them immediately after the event; otherwise, I find I don't do them consistently. Sometimes I draft them as people are presenting, so the note goes out close to the event.
Let's welcome, @botchagalupe (John Willis) here to present, "Out of the Cyber Crisis - What Would Deming Do?"
Get your free copy of the Investments Unlimited e-book, while supplies last! https://members.itrevolution.com/free-ebooks
I have the audio book version and it is great. I highly recommend the book
Do you think the initiatives in OKRs (how we plan to achieve the objective) are the better thing to advertise, given that "the why" is extremely important for people to stay connected with the work, and that methods may get refined while the objective stays true?
I have found it valuable when we discover an initiative isn't as effective once we measure it against the objective; it then takes a change, but the method is continually refined within the context of the objective (which carries the why).
It's also interesting because managers, especially at higher levels, have to manage people who are much better at figuring out the method than they are. That's where I can see OKR really being powerful.
Right. I have seen it where an initiative of "what we are doing" is the only thing, and when that thing is done, it is viewed as success, regardless of whether the outcome was truly achieved (or even measured).