<!channel> does anyone have any experience establishing automated governance/security/policy/quality manual-review triggers within their pipelines and workflows? I am looking for experiences from highly regulated industries or government to help understand the art of the possible, so we can improve the Federal gov't Authority To Operate review process and enable delivery at scale. Thank you!
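For context on the kind of gate being asked about here: in policy-as-code setups, the pipeline collects evidence about a build and a policy decides whether it auto-promotes, fails outright, or gets routed to a human reviewer. A minimal sketch, with hypothetical rule names and thresholds (nothing here is from a real program's policy):

```python
# Hypothetical policy gate: decide whether a build auto-promotes, fails,
# or is routed to a manual-review queue. Rules and thresholds are
# illustrative only.
from dataclasses import dataclass


@dataclass
class BuildEvidence:
    """Evidence the pipeline collects for one build artifact."""
    critical_vulns: int           # from the security scan
    test_coverage: float          # 0.0 - 1.0
    signed_provenance: bool       # build attestation present and verified
    touched_security_paths: bool  # e.g. auth/ or crypto/ modules changed


def evaluate_gate(evidence: BuildEvidence) -> tuple[str, list[str]]:
    """Return ("pass" | "manual-review" | "fail", reasons)."""
    # Hard failures: never promote, no human override inside the pipeline.
    if evidence.critical_vulns > 0:
        return "fail", [f"{evidence.critical_vulns} critical vulnerabilities"]

    # Conditions that trigger a human reviewer instead of auto-approval.
    reasons = []
    if not evidence.signed_provenance:
        reasons.append("missing signed provenance")
    if evidence.test_coverage < 0.80:
        reasons.append(f"coverage {evidence.test_coverage:.0%} below 80%")
    if evidence.touched_security_paths:
        reasons.append("security-sensitive paths changed")

    return ("manual-review", reasons) if reasons else ("pass", reasons)
```

The point of the structure is that the manual-review queue only receives builds the policy could not clear automatically, which is what lets the review process scale.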
Are you talking Automated Supply Chain and Governance? We are getting right in the middle of it with a number of our customers. (Not trying to plug) We started an open source project to show some of this off, happy to get you talking or collaborating with someone from our team that has way more experience than me: https://github.com/liatrio/rode
Hi Mike - we did a lot of work on this at Barclays. Most of those who led on this have moved on - I’m now at Saxo Bank in Denmark but Jon Smart and Myles Ogilvie are currently writing a lot of this up for a forthcoming IT Revolution Press book. I can get you in contact with them if it would be useful. Some of this was the basis of Jon’s DOES talk last year https://youtu.be/XRMf9QjUwlI
The other good reference is Topo Pal’s work with Capital One - https://youtu.be/Fs_uYIbxrw8
In particular, that is what I see as an absolute reference-class automated pipeline.
I'm working with several of the Dept of Defense "software factories," and this is a challenge we are trying to tackle: scaling their governance automation as their product portfolios expand. Right now the largest organization has 30 applications accredited (per federal requirements), but the current processes will not allow it to scale beyond, say, 50.
How many thousands of applications exist across the DoD? That's what we have to think about scaling to, just so we can appropriately kill many of them off 🙂
https://www.dropbox.com/s/ynxtxf1kqxapko7/software-delivery-clean-room.pdf?dl=0
Are you working with PlatformOne in the DoD? They are the team in the federal gov't most focused on developing pipelines to provide that continuous authority to operate. They also have a lot of good info on how they certify containers and their overall DevSecOps approach to continuous ATO at http://software.af.mil/dsop/documents. We've been exploring this space a lot lately, so DM me if it would be helpful to discuss.
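The container-certification side of a continuous-ATO pipeline usually amounts to checking an image's scan summary against acceptance criteria before it can be promoted to an approved registry. A minimal sketch, assuming a hypothetical scan-summary format and made-up approved-image names (this is not Platform One's actual tooling):

```python
# Illustrative container promotion check: the scan summary format, the
# approved base-image names, and the acceptance criteria are all
# hypothetical, sketched for discussion only.

# Hypothetical allowlist of hardened base images.
APPROVED_BASE_IMAGES = {
    "registry.example/hardened/ubi8",
    "registry.example/hardened/ubi9",
}


def image_is_promotable(scan: dict) -> tuple[bool, list[str]]:
    """Return (promotable, findings) for one image's scan summary."""
    findings = []

    if scan["base_image"] not in APPROVED_BASE_IMAGES:
        findings.append(f"base image {scan['base_image']} not on approved list")

    high = sum(1 for v in scan["vulnerabilities"]
               if v["severity"] in ("HIGH", "CRITICAL"))
    if high:
        findings.append(f"{high} HIGH/CRITICAL findings unresolved")

    if not scan.get("hardening_checks_passed", False):
        findings.append("hardening checks not passed")

    return (not findings, findings)
```

In practice each finding would carry a justification/waiver workflow rather than a flat rejection, but the pass/fail skeleton is the piece the pipeline automates.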
Thanks @andre.rodrigues I am tied in with P1, Kessel Run, BESPIN, and KM. This question originated with a convo between myself, KR and BESPIN.
@masnyder2506 no worries at all - that's a slide from @tapabrata.pal's presentation that I have pinned up on our whiteboard as what we need to aspire to.
If you could help get me in touch with them as they're creating the new book, I would love to hear more. There is a growing community within the DoD that is interested in hearing about these lessons and applying them in their mission areas as well.
For sure - I think @jonathansmart1 should still be around on this Slack channel. I will nudge him elsewhere if he doesn’t respond 😄
I have done some of this and have a lot of thoughts on how this could work
Hey @masnyder2506 - I'm the creator of rode
and I would love to hear more about your use case...this type of feedback would be super valuable as we continue building out the roadmap of the project.
I might rephrase the issue slightly differently. Boeing had several "moments of test" for which they weren't prepared, and Starliner is one of those. The key question is not only "what went wrong?" The other part is "why did it go wrong?"

Given that the start of every developmental experience is characterized by having too little useful wisdom and skill, and that the successful end of any developmental experience is wisdom and skill sufficient for the test ahead, why were they building wisdom and skills (aka 'capability') too little and too slowly to succeed? My go-to answer is that they were managing these incredibly complex processes, meant to design high-functioning, incredibly complex things, in such a way that feedback was too little and too weak for sufficient learning loops to occur.

So, before jumping to the conclusion that the fix is to shed this person and bring in that person as a replacement, a really relevant question is: can the new person cultivate and exploit a learning dynamic in discovery, design, etc. that is more productive than the existing one?