This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # ask-the-speaker-track-1 (705)
- # ask-the-speaker-track-2 (287)
- # ask-the-speaker-track-3 (195)
- # ask-the-speaker-track-4 (356)
- # bof-american-airlines (68)
- # bof-arch-engineering-ops (28)
- # bof-covid-19-lessons (4)
- # bof-cust-biz-tech-divide (2)
- # bof-leadership-culture-learning (5)
- # bof-next-gen-ops (10)
- # bof-overcoming-old-wow (7)
- # bof-project-to-product (5)
- # bof-sec-audit-compliance-grc (5)
- # bof-transformation-journeys (6)
- # bof-working-with-data (3)
- # burnout (31)
- # demos (72)
- # games (114)
- # genecon (1193)
- # genecon-help (197)
- # happy-hour (252)
- # hiring (25)
- # lean-coffee (30)
- # networking (20)
- # project-to-product (21)
- # psychological-safety (9)
- # summit-info (798)
- # summit-stories (4)
- # xpo-atlassian (10)
- # xpo-datadog (6)
- # xpo-delphix (32)
- # xpo-digitalai-accelerates-software-delivery (9)
- # xpo-gitlab-the-one-devops-platform (5)
- # xpo-harness (3)
- # xpo-hcl-software-devops (9)
- # xpo-infosys-enterprise-agile-devops (10)
- # xpo-instana (8)
- # xpo-itmethods-manageddevopssaas (9)
- # xpo-itrevolution (20)
- # xpo-launchdarkly (6)
- # xpo-logdna (12)
- # xpo-logzio (2)
- # xpo-moogsoft (4)
- # xpo-muse (7)
- # xpo-nowsecure-mobile-devsecops (6)
- # xpo-opsani (16)
- # xpo-optimizely (3)
- # xpo-pagerduty (10)
- # xpo-pc-devops-qualifications (9)
- # xpo-planview-tasktop (14)
- # xpo-plutora-vsm (10)
- # xpo-redgatesoftware-compliant-database-devops (6)
- # xpo-servicenow (18)
- # xpo-snyk (11)
- # xpo-sonatype (43)
- # xpo-split (34)
- # xpo-sysdig (29)
- # xpo-teamform-teamops-at-scale (20)
- # xpo-transposit (11)
- # xpo-tricentis-continuous-testing (5)
Hello! @stephen and I are so happy to present the results of our second year studying software supply chains, with our friends from Sonatype — it is such an amazing data set to be able to study!!!
Hi! Great to be here @genek101 and excited to share these findings with everyone!
The report is available here: https://sscr.muse.dev/ if you want to follow along!
Of course, those friends are the folks at Sonatype, who run Maven Central, which every Java programmer and anyone who runs on the JVM benefits from every day!
It was super cool to look at a different aspect of the software supply chain — last year it was the components. This year it's the consumer side of the equation!
Bonus fact: About 1/3 of projects in Maven Central were not part of the “software supply chain”. They were isolated projects (not used by anyone and not using open source libraries themselves).
"Projects that release frequently have better outcomes and are more secure" - Sooner, Safer and Happier are interrelated and co-dependent.
Isn't a fast-changing UX and customer journey through applications leading to end-user frustration?
I just ran create-react-app. 1625 packages for "hello world". Doesn't include transitive dependencies.
when you're using JVM or .NET dependencies it's easy. what about Flutter, Erlang and other "strange" dependencies? @stephen
do not run snyk against a react app. It will take a long time 😞
I heard react-native 100x worse. And MUCH faster moving. I heard, “if you don’t update dependencies for 2 months, you’re basically sunk. You’ll spend a week getting builds going again.” 😱😱😱
good question, @capiedra! In more niche languages you usually have less choice of libraries but I find there are also typically shorter dependency chains — smaller community = more standardization around libraries.
"Projects with more dependencies stay more up to date" - super interesting result
I think the first couple of times we ran the analysis, we thought we had “reversed the polarity” and gotten it backwards. Remember that, @stephen? 😂😂😂
yes! same thing happened with a question about “how many internal forks of open source projects do you maintain?” the companies most involved in open source scored the highest! we were meaning to ask about long-lived forks that diverge from upstream and thought we had the polarity wrong, then realized that — of course! — to contribute changes back to an OSS project, you have to fork it :face_palm:
is the result of hypothesis 3 due to accumulated network effects?
Could be for sure. larger teams, larger networks, more code gets pulled in.
Does popularity make things resistant to change? Maybe that is why the hesitancy to update
I look at time and frequency of updates to determine if i want to incorporate a package.
^^ @robert.cuddy Actually, the most popular projects have highest release frequency. hang on. Looking for graph…
I'd love to see vulnerability spread (contagion) on this dataset on a per-language basis.
Vulnerability spread meaning how they propagate through transitive dependency chains?
That’s the “fast release” zone — they’re all more popular than the rest.
Thank you @genek101 - was trying to correlate that to the comments around Hypothesis 4 and the comments around updates. So is the real issue there changing the project but not paying enough attention to the dependencies underneath?
That’s right, the blue dots over on the left of that diagram are staying up to date as they release. The dots that aren’t blue on the left are releasing frequently but not using that release velocity to keep dependencies up to date.
I remember the flight from PDX to Orlando working on getting these “arc diagrams” working in Vega.
So... that gap between the humps... is that due to your connection in DFW? :rolling_on_the_floor_laughing:
My leadership still doesn't like going to x.0 versions. They want to go to x.1 versions, as if those versions don't have new and different bugs.
Love this point, @bryan.finster! We had to throw out more data than I expected to because so many projects don’t even follow standard versioning practices.
Why do people waste creativity on things not related to function?
For those who want to make an arc diagram in Vega, here’s my simple example: https://gist.github.com/realgenekim/8612fedf7f26e2513d02dafa01fdf4c3
I like the YYYY.MM way to version because it shows it is meaningless.
I know you will need more points in that method Bryan...
Holy cow. Making those diagrams made me realize how bad people are at following good version numbers!! cc @stephen
I wonder how all of this affects the barrier for entry to try new packages and understand well enough to keep building new features.
Had to write a ton of special cases to convert version numbers so they could be sorted.
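A rough sketch of what such a version-sorting shim might look like (this is an illustration, not the normalization actually used for the report):

```python
import re

def version_key(version):
    """Split a version string into comparable chunks: numeric chunks
    compare as integers, everything else as lowercase text, so that
    "1.10.0" correctly sorts after "1.2.0" (a plain string sort gets
    this wrong)."""
    key = []
    for part in re.split(r"[.\-_+]", version):
        if part.isdigit():
            key.append((0, int(part), ""))
        else:
            key.append((1, 0, part.lower()))
    return key

versions = ["1.10.0", "2.0.0", "1.2.0", "1.9.1"]
print(sorted(versions, key=version_key))
# → ['1.2.0', '1.9.1', '1.10.0', '2.0.0']
```

Real-world data needs far more special cases than this (date-based versions, vendor suffixes, pre-release tags), which is presumably where the "ton of special cases" came in.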
^^^ @bwilliams4 Totally agree!!! (Because semantic versioning, as we all know, is pretty much useless.)
What are some good resources to start building towards the High Performers - DevSecOps teams?
What are the actual factors or ingredients which "push" DevSecOps?
@brad.kirchmann ^^ wow, @stephen just made a high contrast version for you!!! Nice!!!
These are the main differentiating factors, so good places to start in transforming practices, @scott.2.thompson and @michael.baca.
Paying focused attention right now....😉
Did you see any distinction in terms of the centralized tools used to scan artifacts? Were some tools better than others?
Thanks @stephen. I'm highly encouraged that I'm on the right track! We have Jenkins script integrating with Veracode and Blackduck with thresholds on fails or passes.
Those are great practices, @scott.2.thompson. @michael.baca, we didn’t ask about or compare specific tools. Just whether tools from particular classes (SCA, SAST, etc.) were used.
"Security being integrated into developers' daily work" - @genek101 Making security a natural part of what developers are already doing is paramount!
More reason to limit and carefully choose dependencies. It preserves developer flexibility.
Yeah, sometimes a new library is just the right fit and more than makes up for the added complexity.
I've seen leadership run with this idea and take it to "all teams will use exactly the same tools." Variance adds costs and standardization can inhibit improvement. Need balance.
oh yeah, standardizing across so many teams seems destined for failure. definitely a balance.
I think driving standards on dependencies for things like the website is a win though. Reining in some of the cowboy coding there would really help.
You don’t like my favorite library for formatting dates? I wrote it myself because every other date / time library gets the abstractions all wrong…
Worse. "I didn't bother to look if we had an application already for this solution, so let me demo this thing we are releasing next week to my area" that duplicates 4 others and should be in Platform's domain anyway.
It's getting better rapidly. Proud to say that I'm helping that improve by ignoring whatever "scope" I'm supposed to have. My job is to help teams discover how to deliver better. Anything that constrains that is in my wheelhouse, as far as I'm concerned.
I've seen examples of stopping the pipeline if a certain level of CVEs is found, or open source licenses that are not approved. We've built a compliance gatekeeper that won't allow deployment unless you have certain mandatory checks and attestations.
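A minimal sketch of what such a compliance gate might look like, in Python. The severity ranks, the approved-license set, and the shape of the findings are all illustrative assumptions, not any particular scanner's output format:

```python
# Hypothetical pipeline gate: fail if the scan report contains findings
# above a severity threshold, or components with unapproved licenses.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
APPROVED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # example policy

def gate(findings, max_severity="medium"):
    """findings: list of dicts like
    {"id": "CVE-...", "severity": "high", "license": "GPL-3.0"}.
    Returns (ok, reasons) so the CI job can print why it blocked."""
    threshold = SEVERITY_RANK[max_severity]
    reasons = []
    for f in findings:
        if SEVERITY_RANK.get(f.get("severity", "low"), 0) > threshold:
            reasons.append(f"{f['id']}: severity {f['severity']}")
        lic = f.get("license")
        if lic and lic not in APPROVED_LICENSES:
            reasons.append(f"{f['id']}: unapproved license {lic}")
    return (not reasons, reasons)
```

In a CI job this would run right after the scan step and exit non-zero when `ok` is false, which is what blocks the deployment.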
in CI = as part of dev / build / test. we also asked about centralized scanning outside of dev, but within CI was more effective.
The closer the effort is to prod, the more expensive it is to fix. Security has to be part of design not just develop/code.
I agree. I have documented some patterns for how that can work in "Sooner Safer Happier" (book releases 10th Nov). These have been proven at scale in a large bank.
Also Jon's talk here https://www.youtube.com/watch?v=XRMf9QjUwlI
Yes, I want to know on my desktop. Anything after that is increasingly less optimal
PS: @robert.cuddy Some amazing experience reports showing this on plenary stage. Tomorrow morning, Dwayne Holmes from a large hotel company, and GitHub upgrading from Rails 2 to Rails 5 @eileencodes (closing Keynote). Such amazing talks showing how people operationalize this! (And the consequences of not doing so!!)
love the contour map and interesting on High Performance beating security first.
'beating' if dev productivity is more important...lagging if risk management is the priority. All depends on what you prioritize. In any event, they are complementary, not competing.
Love the graphs. So many ideas how to use them in a related context (e.g. update behaviour of customers)
I use Newtonsoft JSON and CsvHelper a lot because I don't have to be an expert in coding that functionality. In case you were not already sold on OSS.
Yesterday, I read about an amazing .NET library for parsing durations and times that I was jealous of.
I need to know the name of that one, time is a hard problem.
Any fav scan tools for detecting vulnerabilities in open source ?
Wouldn't call it a favorite, but I use BlackDuck. It works, but lacks good reporting, but our Infosec dept likes it. :)
Chris I would suggest visiting the sponsor channel and checking out the vendors there as well. Lots of great choices.
That upcoming talk from @eileencodes is UNFREAKINGBELIEVABLE!!!! Last talk on Day 3.
I loved the one you shared on Twitter a few weeks ago.
Upcoming one is even better, IMHO, because she added a section directed at tech leaders. So good!!! It’s so powerful.
One of the characteristics of open source components we've really put a lot more work into has been licenses. There are tons of licenses out there and not all of them are enterprise friendly
@stephen I was hoping for your daughter to wave again!
Haha, thanks! I’ll have to rope in one of the other kids next time so they don’t feel left out!
Thanks @genek101 @stephen always terrifying to see this material.
Hi @stephen - I was curious if you could describe the differences between Dijkstra’s, A*, and Jump point path finding algorithms?
@blakee! Good to see you! Actually, there are similarities between A* and various program analysis techniques like symbolic execution or static analysis. The jump point optimization is similar to optimizations in graph traversal that are used in program analysis and model checking. Not sure if you were serious though 🙂
And A* is typically used in online planning whereas Dijkstra’s is more common in off-line / batch contexts (I believe) — but I’m not a planning / optimization expert 🙂
Sounds good, thanks.
np. I have to give a tongue-in-cheek answer too given the topic of the talk with @genek101. The difference is… 3 dependencies vs. 0 🙂 https://www.npmjs.com/package/dijkstrajs https://www.npmjs.com/package/a-star
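For anyone curious, the two algorithms really are the same search with different priorities; here's a dependency-free sketch where a zero heuristic gives Dijkstra's and an admissible heuristic gives A* (the toy graph is invented for illustration):

```python
import heapq

def a_star(start, goal, neighbors, heuristic=lambda n: 0):
    """Best-first search over a graph given as a neighbors() function
    returning (next_node, step_cost) pairs. With heuristic == 0 this is
    Dijkstra's algorithm; with an admissible heuristic it is A*."""
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in neighbors(node):
            if nxt not in seen:
                heapq.heappush(
                    frontier,
                    (cost + step + heuristic(nxt), cost + step, nxt, path + [nxt]),
                )
    return None

# Toy graph: from integer n you can move to n+1 or n+2, each at cost 1.
neighbors = lambda n: [(n + 1, 1), (n + 2, 1)]
print(a_star(0, 5, neighbors))                                  # Dijkstra's
print(a_star(0, 5, neighbors, heuristic=lambda n: abs(5 - n) / 2))  # A*
```

Both find the same optimal cost; the heuristic just lets A* expand fewer nodes, which is the whole "online planning" appeal mentioned above.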
@genek101 it would sure be helpful to have a buffer between sessions (somewhere between 5-15 minutes)
That feeling when you need to go somewhere, but get drawn into an amazing talk! @arun.infy @useidel This is awesome!!! “Silos!!!”
I love these highly produced videos that teach us about companies, but especially so when they focus on the tech org!!!
Continue the conversation with speakers at #ask-the-speaker-more!
Muse Dev is super handy. I hope everyone gets to check it out. Great to see an update on the survey. Good segue to dev journey talk.
How do you measure your inner source ecosystem? @arun.infy
We measure the number of pull requests that come from developers outside the project (or who don't have access to the repository)
Is there tooling you have created for that? It's the automation of that internal vs external people that is subtle.
We use various dashboards (splunk/elk) to be able to define this. Our team structures are very well defined in active directory so it makes it slightly easier to generate this
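The core of that metric is simple once you have a team roster; here's a minimal sketch (the data shapes are invented, not their Splunk/ELK pipeline):

```python
# Hypothetical inner-source metric: share of merged pull requests whose
# author is outside the team that owns the repository. Team membership
# would come from a directory dump (e.g. Active Directory, as mentioned).
def external_pr_ratio(pull_requests, team_members):
    """pull_requests: iterable of {"author": ..., "merged": bool} dicts.
    team_members: set of usernames owning the repository."""
    merged = [pr for pr in pull_requests if pr.get("merged")]
    if not merged:
        return 0.0
    external = sum(1 for pr in merged if pr["author"] not in team_members)
    return external / len(merged)

prs = [
    {"author": "alice", "merged": True},   # on the team
    {"author": "dave", "merged": True},    # outside contributor
    {"author": "erin", "merged": False},   # not merged, ignored
]
print(external_pr_ratio(prs, {"alice", "bob"}))  # → 0.5
```

The subtle part, as noted above, is really the roster: keeping the `team_members` set accurate across reorgs is what the well-maintained directory buys you.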
@arun.infy @useidel How would you foster inner sourcing in an environment where teams are working on different products, using different tools, and are worldwide?
We use communities for that, with webcasts, events, training and of course the same tools for collaboration and code management
Worldwide isn't a problem as such for Inner Source, as the techniques are asynchronous and generally use the written form
To add to Udo - In general our teams are global, many teams have matrix reporting, this is in the culture of the company where teams have to contribute cross geographies. To aid innersource - we do market our products internally and make it exciting for others to contribute 🙂
is serverless really necessary? we create a dependency on the cloud provider and it's not cost-effective for high-volume applications
@arun.infy I believe Camilo means that if you integrate with AWS, you are slowly vendor-locking your software, and then migrating to something else less expensive or of another type when it's time poses a lot of challenges.
that's right @ian.silverwood. we can use AWS, but we prefer to use EKS or other services... because if we use only serverless, the vendor lock-in is so complex and expensive
There are internal products on our side that have started to rise, which aim to provide developers with wrappers around vendor APIs/methods, to avoid this vendor lock-in issue and enable easy switching. However, it has yet to be proven as the products are not live yet.
It's a tough question to answer 🙂 Will try to do my best. It's a small subset that is using serverless on the cloud, and they sure are vendor-locked. But the rest of the platform that we are building is completely cloud-agnostic and easily portable across providers (public or private)
@andyweldon Adoption of Bitbucket has been very successful. We were on legacy source code tools and the migration to BB has been very swift. We also use GitHub for open source contributions. The GitLab vs. Bitbucket vs. GitHub debate still rages on between the developers though 🙂
@useidel @arun.narayanaswamy How does your SecOps team plug into your toolchain?
Via several methods: first of all they are part of the wider engineering community; secondly we have developed a quite strong security mindset in the different teams, hence the SecOps people aren't seen as aliens, and they are part of the tool selection/approval process
Thanks for answering. So, given your scale, have you automated appsec with integration with your DevOps toolchains?
@arun.narayanaswamy I was impressed to see your daily builds on your Jenkins. May I ask whether this is running on a public cloud utilizing k8s with their service, or did you set it up another way?
It's running on both private and public cloud. It's set up using both images and containers, depending on the internal customer's needs
@useidel We're connected on LinkedIn (since 2015) - I will reach out to you on LinkedIn
@arun.narayanaswamy You mention having fun in your last slide. What all are you doing for fun to help foster the culture (a super important part of digital transformation, vs. companies that ad hoc rename teams as DevOps - cue the SMH Oprah moment - "You're now DevOps! And you, you're now DevOps! And you, and you!")? Like hackathons, etc.?
Hackathons are one; also contests, ideathons, incentivizing automation, enabling gradual learning, enabling internal and external trainings, making it easy to share info, providing opportunities to speak at/attend conferences, t-shirts, stickers, etc. 🙂 @blakee do you have any ideas to make it more fun?
Welcome our next speakers @claire.vo and @lawrence.bruhmuller!
I’m here because from the description I’m guessing there were some conversations involved…. 😄
Proof @lawrence.bruhmuller and I are real people. There was a bit of a hiccup with how our video files got compiled! 😭
@lawrence.bruhmuller @claire.vo I love it! Feel free to send a new video to @jessicam and me, and we can replace it in the library. And thank you for this talk! Great interaction around chronic problems between Engineering vs. Product!
@claire.vo @lawrence.bruhmuller: how do you create that culture of trust?
It's one of our cultural values at Optimizely (it's the "T" in OPTIFY) so it's an overall company value
That’s a fine definition of trust… however how do you get a culture of trust in practice? I ask because I know lots of companies that have Trust as a stated cultural value, but if you ask the people in the company they don’t feel a sense of trust across teams.
Yeah it is pretty core at Optimizely. But it's also just about leaders being willing to spend the time together, investing in a close partnership.
@jtf my company is in early stages here, but we're experimenting with measuring psychological safety and Transformational Leadership, using Google's Project Aristotle example and the ITRev "Transformational Leadership Quick Start" whitepaper as sources. That is at least helping us start to measure where we are. Those are some of the behaviors we are trying to shift toward with this mindset.
Having each other's back. Showing vulnerability. Taking every feedback given with positive intentions and a "single team" mentality. Etc. 🙂
No information or power hoarding. Admitting mistakes and celebrating failure. No drama triangles (make sure feedback is shared directly and privately.)
@nickeggleston See https://rework.withgoogle.com/guides/understanding-team-effectiveness/steps/foster-psychological-safety/ Has seven questions you can ask via a Likert Scale. We were able to experiment with this easily at a team level by re-creating via an MS Forms survey (for those who use Office 365).
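If you rebuild that survey in a forms tool, the scoring is easy to automate. A sketch below; note that which items are reverse-keyed is an assumption here (check the actual question wording in the rework guide before keying):

```python
# Hypothetical scoring for a 7-item Likert (1-5) psychological-safety survey.
# Some items are negatively worded (e.g. "mistakes are held against you")
# and must be reverse-scored; the indices below are illustrative only.
REVERSE_KEYED = {0, 2}

def team_score(responses):
    """responses: one list of seven 1-5 answers per person.
    Returns the team mean on the 1-5 scale; higher = safer."""
    person_means = []
    for answers in responses:
        adjusted = [(6 - a) if i in REVERSE_KEYED else a
                    for i, a in enumerate(answers)]
        person_means.append(sum(adjusted) / len(adjusted))
    return sum(person_means) / len(person_means)
```

Tracking this per team over time (rather than comparing teams to each other) is usually the less politically fraught way to use it.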
@lawrence.bruhmuller @claire.vo: can you give an example of productive conflict you’ve shared? (love the phrase productive conflict btw)
The classic example (and I think we talk about this later) is the level of investment in tech debt. There's a real debate we have to face during planning and other times about how much engineering time do we put into things that are not customer facing. I think we both usually come into those conversations with a different point of view. We talk that out in the open. It's important for teams to see strong discussions with real debates.
But ultimately we end up coming together on something. Or know who is the decision maker and disagree and commit. But we have the discussion.
Sounds like it would be a virtuous cycle where the conversation to make the decision helps build the trust for future discussions.
Another good example is around what amount of rigor and analysis is required for a given project ... balancing "small rocks" that teams can pump out quickly with bigger efforts that require a lot more thought, both product/customer related and also technically. Great push/pull discussions here.
I feel like that one of five... something-somethings... :thinking_face: :unicorn_face: 😉
My favorite from the business: "This isn't what we talked about."
@claire.vo and @lawrence.bruhmuller how do we overcome the initial thought that this arrangement requires very large IT teams? There would be lots of projects/efforts going on at once.. it would be interesting to know your thoughts..
I'm a big believer in fully staffed (meaning, able to execute independently) but relatively small teams. And limiting work in flight! We have a pretty big team at Optimizely but we also do a lot (probably too much!)
@pavan.kristipati did we get to the question? Or are you saying that our approach only works once you have a larger team, and how would it work with a smaller team?
@lawrence.bruhmuller @claire.vo this discussion is amazing. I keep trying to type questions but you bring them up before I get them sent over 🙂
@lawrence.bruhmuller @claire.vo + DOES Scenius: Any ideas on key, measurable outcomes for re-platform?
I'm trying to advocate some measurable outcomes around things like Availability, Reliability, Performance, Security. Looking for a simple outcomes framework to overlay as a guide so we collectively talk about these things as we hyper-focus on feature parity with our legacy platform.
Depends on the goals. Developer velocity is one common one. Another would be more direct leverage for future projects ... "since we have this new platform / API / component this new project is considerably easier"
@lawrence.bruhmuller As we know, it's tough to make those major rearchitecture investments 🙂
100%. Lots of the other things we talked about don't matter if we're trying to accomplish different things and our teams pull in different directions. One other benefit of an agile mindset is that this type of dissonance can't fester ... it's clear very quickly if you're misaligned and need to get back on track.
Each team owns their CI process, do they own the CI infrastructure too?
Depends. We provide a common CI infrastructure that teams can use. Larger teams provide their own infrastructure
what is the central CI toolset provided as an internal service by your team?
The current central set is GitLab (SCM), Jenkins (CI/CD), and Artifactory for artifact management
nice. Any reason why you are not using all the features in GitLab instead? Avoiding vendor lock-in?
Partially due to name recognition and what teams were comfortable with. That being said we are seeing an increase in GitLab CI/CD usage.
Hi! How do you manage the mono repo across many teams (trunk-based development?), considering the streams of different customers' needs?
How can you guarantee governance with so many toolchains? For example, reports for audit
Compliance is a key business requirement for Intel. We established standard security and compliance practices that every team is required to meet before the software is released out the door - we also have key infrastructure teams within IT that are responsible for security compliance of infrastructure
interesting, but how can you standardize and drive internal knowledge sharing and innersourcing without standardizing at least some parts of the toolchain?
We established a source code mirror required for supporting triage and debug of issues - there are also branching guidelines to support different customer needs
Are you using feature flags in your DevOps process? If so, are you using a system that you built yourself or a third party platform?
Is this because of the type of software that intel releases or it hasn't been a high enough priority to add into your workflow or some other reason?
Most likely. Much of the software that we release through this process is drivers and firmware, where it would be difficult or impossible to enable a feature after it has been deployed to a running system. If you have thoughts or recommendations on feature flags for this type of SW, I would love to hear them
That's good information and something I suspected, which is why I was excited to hear this talk to learn more. I'm not an engineer, but I'm going to reach out to our team and discuss further, and if I have a solution to help I will be in touch. My company, Split, has a feature delivery and experimentation platform developed by ex-LinkedIn, Google and Salesforce engineers, and our big focus is on the experimentation piece to ensure you get value from the feature releases. You are correct that if you can't kill a feature then it limits the value, but it's an interesting topic to discuss. My ears really perked up when Madhu said "What cannot be measured cannot be improved" because that is a big reason why we've been so successful. 😀
Hi @peter.g.tiegs I spoke with a few engineers on our team and, as you mentioned, unless there is connectivity to the internet to enable or disable features, FF may not be possible. But I am assuming there are other business units that develop client-facing SW or internal tools? You mentioned that Intel employs 15k+ engineers; are the majority releasing drivers and firmware, or are there teams that release other types of software? Split has been used to help migrate services and test internal tools, so that is an area where FF might be helpful, but most likely handled by another team.
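One pattern that comes up for connectivity-constrained software is a flag check with baked-in defaults, so the code degrades to a fixed behavior when the flag service is unreachable. A minimal sketch (names and API are invented, not any vendor's SDK):

```python
# Hypothetical flag client: consult a remote flag service if reachable,
# otherwise fall back to defaults compiled into the build.
DEFAULTS = {"new-renderer": False, "verbose-telemetry": False}

def is_enabled(flag, fetch_remote=None):
    """fetch_remote: optional callable returning a dict of flag states,
    or raising OSError on network failure."""
    if fetch_remote is not None:
        try:
            remote = fetch_remote()
            if flag in remote:
                return bool(remote[flag])
        except OSError:
            pass  # offline: fall through to the baked-in default
    return DEFAULTS.get(flag, False)
```

For drivers and firmware this only helps where there is at least occasional connectivity; fully offline targets are closer to build-time configuration than runtime flags.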
I did not hear you specifically talk about consistency in Architecture and Design fundamentals. Should I assume that is included in the engineering piece?
@fred.ghahramani Given the complexity of the system-level software that each team delivers - the architecture and design guidelines are established and are localized to that specific module. The common interfaces are defined and agreed upon, which leads to seamless integration
@fred.ghahramani Yes, we have not specifically tied any rules about Architecture and Design Fundamentals into our DevOps pipeline. One of the three teams that was mentioned (the Systems Engineering team) provides some coaching in that area. Generally it is up to the individual upstream IP SW teams
3-5 years? With the speed of today and the amount of vendor-owned and open source tools popping up, isn't that too long? I need to set up a similar cadence on my end, but I am planning on a 6-month evaluation cycle and 1-year adoption. Is that too small of a cycle?
3-5 years was a challenge for us to communicate to our management. Their original asks were around 5 to 7 years. I agree accelerating this eval period would be great. It can sometimes take over a year to roll out a new tool across the organization
yes, I know where you're coming from. Our own Enterprise Architecture and Application Portfolio Management cycles in P&G work on an exact 3-5 year cadence (something we call the Domain Master Plan), similar to what you have. I do however believe that in this space we need to be more agile and be ready for changes here and there to keep up with innovation and avoid vendor lock-in
@emgomez Our mono repo is a repository that acts as an index to various other repositories. A manifest file in that repo has a unique set of references to the various IP SW teams at each revision
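The "index repo" idea can be pictured as a manifest that pins each component repository to an exact revision; the checkout tool then assembles the full tree from those pins. A sketch (the manifest format here is invented for illustration, not Intel's actual file):

```python
# Toy manifest: one component per line as "name  repo-url  revision".
MANIFEST = """\
gpu-driver   https://git.example.com/gpu-driver.git   3f2a1bc
audio-fw     https://git.example.com/audio-fw.git     9d04e77
"""

def parse_manifest(text):
    """Return {component: (repo_url, pinned_revision)}."""
    pins = {}
    for line in text.splitlines():
        if not line.strip() or line.startswith("#"):
            continue
        name, url, rev = line.split()
        pins[name] = (url, rev)
    return pins

print(parse_manifest(MANIFEST)["gpu-driver"][1])  # → 3f2a1bc
```

Updating the manifest in a single commit is what gives the "mono repo" property: every consumer sees a consistent, reproducible set of component revisions.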
Thanks for a very informative talk and for taking the time to answer our questions!
thanks for the session! One question: was that session supposed to be shorter? I see that the next item in the agenda comes only at 45 past the hour, in 20 min from now
Hey Eduardo! I have the same question and I'm concluding that presenters just finished ahead of time.
The video in the library is 27 minutes long, so they just finished ahead of schedule it seems.
How are you allowing the "low level" software teams (device drivers, etc.) to do CI on in-development hardware?
great question @rob.parkhill524 - the hardware goes through a long process of development and is extremely distributed in nature (across the globe) - we have a combination of localized validation, where each team does CI on their own hardware, and a centralized CI system that we showed in the third foil, which is doing a global CI
So for the long hardware dev process - are you following a more waterfall method (strict requirements, solid systems engineering principles, etc.) up until the hardware is "good enough" for the teams to move into a more agile/CI process? Or have you figured out how to do hardware dev in an agile way?
Adding to what Madhu said, we deploy early HW and SW simulations of HW to our SW teams for use as part of their CI pipelines. One area we are exploring is creating a hybrid cloud of this HW
I ask because we are struggling with this right now - when developing new hardware, we follow a rigid waterfall model even for the software component, but then new SW capabilities after that initial development are much more agile.
So simulators/emulators for in-development hardware?
I would say the HW is still developed largely in the Waterfall model, but we have started to reach across the aisle to our HW design colleagues to see how we can apply CI to what they do.