2020-10-14
Channels
- # ask-the-speaker-track-1 (411)
- # ask-the-speaker-track-2 (347)
- # ask-the-speaker-track-3 (540)
- # ask-the-speaker-track-4 (399)
- # bof-american-airlines (2)
- # bof-arch-engineering-ops (10)
- # bof-covid-19-lessons (1)
- # bof-cust-biz-tech-divide (10)
- # bof-leadership-culture-learning (4)
- # bof-next-gen-ops (8)
- # bof-overcoming-old-wow (3)
- # bof-project-to-product (2)
- # bof-sec-audit-compliance-grc (37)
- # bof-transformation-journeys (1)
- # bof-working-with-data (2)
- # demos (78)
- # discussion-main (1226)
- # games (43)
- # happy-hour (195)
- # help (76)
- # hiring (20)
- # lean-coffee (47)
- # networking (17)
- # project-to-product (1)
- # psychological-safety (10)
- # summit-info (249)
- # summit-stories (23)
- # xpo-delphix (25)
- # xpo-digitalai-accelerates-software-delivery (3)
- # xpo-harness (1)
- # xpo-hcl-software-devops (5)
- # xpo-infosys-enterprise-agile-devops (6)
- # xpo-instana (4)
- # xpo-itmethods-manageddevopssaas (1)
- # xpo-itrevolution (18)
- # xpo-launchdarkly (6)
- # xpo-logdna (2)
- # xpo-moogsoft (4)
- # xpo-muse (2)
- # xpo-nowsecure-mobile-devsecops (6)
- # xpo-opsani (7)
- # xpo-pagerduty (19)
- # xpo-pc-devops-qualifications (3)
- # xpo-planview-tasktop (43)
- # xpo-plutora-vsm (3)
- # xpo-redgatesoftware-compliant-database-devops (6)
- # xpo-servicenow (14)
- # xpo-snyk (3)
- # xpo-sonatype (7)
- # xpo-split (2)
- # xpo-sysdig (15)
- # xpo-teamform-teamops-at-scale (4)
- # xpo-transposit (11)
- # xpo-tricentis-continuous-testing (1)
Happy day 2 folks! We hope you'll join us for our Co-Founder & CTO's session "Optimizing @ Scale" (2:05pm, Track 4). He will dive into how using ML to optimize all your applications across your service delivery platform is a seamless and painless process that will save your cloud budget and meet your performance goals. https://pages.opsani.com/does-2020-live-demo-sign-up
Day two in virtual Vegas! Great to see so many awesome talks yesterday and great engagement. Make sure to tune in to track 4 at 11:05am PDT today to @grant.fritchey and @stainsworth331 as they talk about the Challenges of Implementing Database DevOps and get your questions answered!
Warning: @grant.fritchey edited this, so any place that makes me sound like I don't know what I'm talking about, blame him
Welcome @stainsworth331 and @grant.fritchey for our next session's Q&A! Thank you to #xpo-redgatesoftware-compliant-database-devops!
http://scarydba.com... as if dba's were not scary enough :D
@stainsworth331 do you have users using DBUnit to test in your DevOps toolchain?
Unfortunately no; I think our dev and QA folks are still very new to database testing. Being on the release end of the pipeline, i've focused more on getting the releases standardized.
Would you recommend that? That's what I'm having our DB team explore to integrate DB testing (i.e., SELECT validations) and add this to our Pipelines.
From my Release Management days I learned this: the one horrible thing about a database is that it contains data.
Yes, changes are open-ended in terms of how much time they take
I'm just going to add this index as part of the upgrade [........] how long is this going to take?
It really is the hard part. I mean, we could deploy databases so fast if we didn't have to keep the data.
@stainsworth331 How do you ensure that the quality of each deployment is correct? There are a lot of different kinds of scripts that we can deploy: stored procedures, triggers, scripts, etc.
there is a schema migration library for just about every language. they can be integrated with your code repo and made part of your deploys
The hard work should all take place in the testing prior to production. Any script, regardless of what it's doing, should not run for the first time in a production environment.
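For anyone who hasn't used one of these migration libraries, here is a minimal sketch of the idea they all share, assuming a hypothetical layout of versioned SQL files and a tracking table (the names and the SQLite backend are illustrative, not any particular tool):

```python
import sqlite3
from pathlib import Path

# Hypothetical layout: migrations/V001__create_orders.sql, migrations/V002__add_index.sql, ...
MIGRATIONS_DIR = Path("migrations")

def migrate(conn: sqlite3.Connection) -> None:
    """Apply any versioned SQL files that have not yet been recorded in the tracking table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version "
        "(version TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    for script in sorted(MIGRATIONS_DIR.glob("V*.sql")):
        version = script.name.split("__")[0]          # e.g. "V001"
        if version in applied:
            continue
        conn.executescript(script.read_text())        # run the migration itself
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
        conn.commit()                                 # record it so it never runs twice

if __name__ == "__main__":
    migrate(sqlite3.connect("app.db"))                # the same runner works in CI and at deploy time
```

Because the runner is just code in the repo, the same scripts get exercised in CI and staging before they ever touch production, which is the point being made above.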
@stainsworth331 I'm curious to know if you've used or evaluated database virtualization technologies?
We've evaluated a few over the last several years; part of the struggle of being a BU for a corporation is that the infrastructure is often a compromise. Our focus has really been on just getting the CI pipeline in place.
Every time we have started the conversation about automated database rollback, it always ended up with needing complete backend refactoring.
Why refactoring here? Rollbacks can be simple for code (procs/functions/packages/etc.). For tables, this is really a pipeline challenge.
And it's gonna depend on the rate of change in your databases. High transaction dbs are very different than data warehouses, for example.
I am as puzzled as you are. But there always has been something in the database-related code which made the automated rollback something unpredictable. ¯\_(ツ)_/¯
There could be. Doesn't have to be, but I have seen some strange db environments that make me scratch my head and require special handling.
Rollbacks are exceedingly hard. I generally argue against them. Rollforwards are safer and easier to implement.
rollbacks are part of a good schema migration tool, they just require engineers to build migrations for both directions
If you are going to do database rollbacks, you have to test them as thoroughly as you do the actual deployment.
roll forward is never the same thing tho, you're just replacing with new vs going back to previous known
and how can we do a homogeneous process against different database vendors?
we always plan for a migration or roll-forward. We make sure our changes are small so we can keep the revision plan tight.
True. The issue is, was the problem found immediately, in which case rollback is easy. Or was the problem found 3, 5, 18 days later and now there is 18 days worth of data that you can't chuck, depending on what's needed for the rollback.
What does a matured DB DevOps environment look like? What sort of tools and processes are present, and how do they look as compared to a "normal" DevOps pipeline?
Grant yup! Ideally in those cases you do your schema first so you know you don't have to roll that part back, then do code later; that way 18 days later there still is no data there
DDL Source control (Liquibase, db Maestro), DDL Testing (Liquibase), Data version control (Delphix),
It's all about dealing with the database in the same ways as you deal with code. Mostly, the tools that support the code, support the database. However, you need one special tool. The thing that gets the code in and out of source control and enables a deployment to a database, that's unique.
Also you make the Patrick Stewart tool, Make it So / SQL Compare
There are some cases where you want to separate schema from code repos, but yes you are correct, you can still deploy schema first and code second having them all in 1 repo. this would be two pull requests/merges
what would be the case where you want to separate schema from code repos?
As much as possible, I want the code and the db to be on exactly the same version. No deviation.
yes, we are saying the same thing, I was just offering you a strategy for your rollback issue
vaidik: they're all edge cases and should be avoided at almost any cost, usually when a schema is shared between more than one source
I tend to want the db separate, because often there are multiple apps (ETL, reports, etc.) that query the db. I prefer the db as a first class citizen, not a dependency of an app. However, when you deploy code, often there is a need to have some versioning in the db that allows an app to be sure it works with vXX+. If db v3 adds a column and db v3.2 adds a second one, my app might need to be sure the db is at v3.2 or turn off feature flags.
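A tiny sketch of the version check described above, assuming a hypothetical schema_version table that stores versions like "3.2"; the flag name is made up for illustration:

```python
import sqlite3

MIN_DB_VERSION = (3, 2)   # hypothetical: this app needs the column that shipped in db v3.2

def db_version(conn: sqlite3.Connection) -> tuple:
    """Read the version the database reports, assuming a schema_version table storing values like '3.2'."""
    row = conn.execute(
        "SELECT version FROM schema_version ORDER BY applied_at DESC LIMIT 1"
    ).fetchone()
    return tuple(int(part) for part in row[0].split(".")) if row else (0,)

def feature_flags(conn: sqlite3.Connection) -> dict:
    """Only enable features the current database version can actually support."""
    return {"second_column_report": db_version(conn) >= MIN_DB_VERSION}
```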
ah right. sharing schema with more than one app just feels like a nightmare - too hard to control
but yeah in that case, you would want the schema in a separate repo with its own workflow
super relatable. when i joined my current company, we had a database that was shared between multiple services. but there was no regard for process to make changes to that database. after a lot of brain racking, we managed to put a sane change management process in place by putting the schema in a git repo. after that, we started calling that database "mother DB". always respect the mother.
One thing we consistently hear at Redgate is that companies need data on demand for development in the lower environments. Is anyone struggling with that?
You not only need to mask the data, you also need it immediately?
Yes, random prod-looking data (but definitely NOT production data) en masse in a random new environment would have saved many hours of debugging.
"immeditately" might be defined differently in different industries. But yes, normally people are trying to reduce the lag
I wonder what database types (MS SQL Server, Oracle, DB2 LUW, PostgreSQL, ...) Redgate supports?
Redgate supports SQL Server, Oracle, and these others: PostgreSQL, MySQL, DB2, Aurora MySQL, MariaDB, Percona XtraDB Cluster, Aurora PostgreSQL, Redshift, CockroachDB, SAP HANA, Sybase ASE, Informix, H2, HSQLDB, Derby, Snowflake, SQLite, Firebird. Hope this helps!
We've got tools for just about any database you can think of. Between Redgate Deploy and Flyway, we really do cover them all.
I've mostly been working with PostgreSQL lately, in both AWS and Azure, using Azure DevOps Pipelines or AWS DeveloperTools, along with Flyway. Works a treat.
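In a pipeline, that Flyway step often reduces to a thin wrapper around the CLI. A hedged sketch, with connection values pulled from pipeline secrets (the environment variable names are assumptions):

```python
import os
import subprocess

def run_flyway(command: str = "migrate") -> None:
    """Invoke the Flyway CLI against the target database; credentials come from the pipeline's secret store."""
    subprocess.run(
        [
            "flyway",
            f"-url={os.environ['DB_JDBC_URL']}",        # e.g. jdbc:postgresql://host:5432/app
            f"-user={os.environ['DB_USER']}",
            f"-password={os.environ['DB_PASSWORD']}",
            "-locations=filesystem:sql",                # versioned V__*.sql scripts kept in the repo
            command,
        ],
        check=True,
    )

if __name__ == "__main__":
    run_flyway("info")      # report pending migrations
    run_flyway("migrate")   # apply them
```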
stored procedures are problematic since you have code that doesn't live in source control.
I won't allow my engineers to use them unless they have no other viable option (which is rare)
Does depend. I'm in favor of ORM tools, but, not all code can be generated. Some is still going to be traditional, 3-5% max.
ORMs generalize all your things at the cost of performance for the non-standard things
True. However, 90-95% of the work an ORM tool does perfectly fine. It's all about identifying that last bit and making adjustments. Most ORMs support procedures.
you start making really odd/special business logic to get around some shitty join or query the orm is building
Mine too. It's a big part of why, as a DBA, I've embraced ORMs and supported the developers in their use. It makes it easier for me to say, "All good, except over here."
you give them a query that performs 10x better but the orm will make your life hell actually trying to run it
my personal preference is for a library that helps map the db records to objects, and then most of the queries are built from simple db libs
I was referring to simple DSLs, where you can easily use the db language if needed and then object-map the results
that was picking up 5 years ago with Dapper on .NET and jOOQ on Java
ahh yeah the problem with tools that abstract the db underneath is they usually perform poorly under load.
they also, like ORMs, try to generalize things between dbs, and usually performance is much better when you specialize
Very poorly. However, like any other tool, you can use them in ways that work and in ways that hurt, a lot.
How are you seeing database work changing as companies move to cross-functional product teams?
oh that's an awesome question! I really see a separation of responsibilities that are now traditionally combined in the role of DBA. Server admin (backups, configuration, security) and development (performance tuning, etc).
Testing is hard frankly. I like to break the testing in half. Tests with data and tests without. Tests without are primarily for Continuous Integration, fast, light, not thorough, but catches the easy stuff. Then, tests with data, as much like production as possible, are what QA & pre-prod/staging are all about.
Has anyone worked with the Robot library for Python, or used Groovy, to test the changes?
How are rollbacks handled? Do the tools have the ability to generate rollback scripts based on the actual code that is going to be implemented?
How many different pipelines do you have for database deployments?
right now, we're using Octopus Deploy. we have one pipeline, with variables.
We have not started our database DevOps journey yet. I would like to learn more about your experiences
What different database platforms are you doing DB DevOps on? Oracle, SQL Server, MongoDB?
what is the best way to talk to you? One of my colleagues may join
Grant was supposed to edit that to make me sound much more intelligent.
Some tools can do rollbacks, yes. However, most generated rollbacks don't take into account data protection. Frequently, those scripts need manual tweaks.
Any tips for managing references across different data domains for QA work? We want product teams to own their own data and the management of it, but for QA environments integration across domains is still necessary, so consistency and availability are key.
My number one recommendation is to have curated data sets for all work. This can be sanitized prod, or (preferred) a small known set of domain cases that you maintain, just like you maintain tests. This is the best investment I think you can make for db work.
mm, a big deal. What we do is a test data management practice, and we develop our own extensions to achieve that: selecting a catalog for the pipeline, and a final task to decommission that data
You can also use masking with distributed Referential Integrity to translate production data in a way that cross-app data references remain consistent.
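One way to keep those cross-app references consistent is deterministic masking: the same source value always produces the same masked value, so foreign keys still join across databases. A minimal sketch, with the key handling and output format as assumptions:

```python
import hashlib
import hmac

MASKING_KEY = b"rotate-me-outside-source-control"   # assumption: shared secret held only by the masking job

def mask_id(value: str) -> str:
    """Deterministically pseudonymize an identifier so references stay consistent across databases."""
    digest = hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"CUST-{digest[:12].upper()}"

# The same customer number masks identically in the orders db and the billing db,
# so joins across domains still work in QA.
assert mask_id("100042") == mask_id("100042")
```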
are there any tools to manage test data sets? I've looked into this without success
it would be good to have something as easy to use as the repositories for containers and libraries
we had to develop our own tool for that (yep, we have that thing called iSeries/DB2)
but it's just a Java client that moves data from the "universe data" to the environment, and it has a table to track the data that we have to delete when the pipeline ends
would love to be able to rebuild a datastore from a dataset at any time... the references between databases and domains, where those are owned by different teams, is where that becomes hard
that's interesting, but I feel there is a simpler gap to fill, which is just a tool to manage the set of domain cases to be able to do different tests as @stephen referred to
There are masking tools, Redgate makes one, others are out there. I haven't seen good subsetting tools, which is something I wish Redgate or other vendors would work on because these are important. Some of my customers that have had success will pull rows from a prod db and store them in a "testtemplate" db, masking as needed. They then use this as the base image for data virtualization, restores in qa, containers, etc. This is the basis for all work. Over time, they continually add new edge /corner cases from prod as they need to. As an example, one customer has a multi-TB db that is templated back to about 10GB. This is regular work for devs to keep the template up to date. Disclosure: I work for Redgate.
Sure, you keep a db around that is essentially your template. You can handle this in a few ways, but essentially think about having a regular process that: 1. backs up the template db 2. restores it in QA/dev/etc. Every time we hydrate a new environment for a developer, a QA env, a CI build, we use this same db. This way everyone always has the same basis for starting work. Think of this as a new branch off main for a C#/Java dev. They always have a known starting point. Since dbs grow and change, as we get a ticket or bug and realize that we don't have data to match it, say the "Steve Jones account from prod", we copy this data from prod -> template, sanitize it if needed, and then the next time the process above runs, we have a new db that allows us to work. Ideally, that process is both scheduled and demand driven. These days, I would expect customers to do this with data virtualization or containers. Having a known starting place for every bit of work that is updated as we deploy changes and grow. This isn't simple, because changes to the prod db (deployments) need to be run against the template db as well, so that developers can keep up to date. Likewise, anything you pull from VCS wrt db code needs to be applied. This is where migration-based frameworks or tooling, like FlywayDB, ensure that you automatically upgrade your copy of the template db when you start work. If devs have updated the VCS with other code, it gets applied to the copy of the template db.
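A rough sketch of that hydrate-from-template loop, assuming PostgreSQL tooling and Flyway for the catch-up migrations (database names, paths, and connection details are placeholders):

```python
import subprocess

TEMPLATE_DB = "app_template"          # assumption: the curated, sanitized template database
BACKUP_FILE = "/tmp/app_template.dump"

def refresh_environment(target_db: str) -> None:
    """Hydrate a dev/QA/CI database from the shared template, then bring it up to date from VCS."""
    # 1. Back up the template (both scheduled and on demand).
    subprocess.run(["pg_dump", "--format=custom", f"--file={BACKUP_FILE}", TEMPLATE_DB], check=True)
    # 2. Restore it into the target environment so everyone starts from the same known state.
    subprocess.run(["pg_restore", "--clean", "--if-exists", f"--dbname={target_db}", BACKUP_FILE], check=True)
    # 3. Apply any migrations committed since the template was last updated, e.g. via Flyway,
    #    so the copy matches what is in version control (credentials supplied via Flyway config).
    subprocess.run(
        ["flyway", f"-url=jdbc:postgresql://localhost/{target_db}", "-locations=filesystem:sql", "migrate"],
        check=True,
    )

if __name__ == "__main__":
    refresh_environment("app_dev_steve")   # hypothetical per-developer copy
```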
This isn't simple, and it's a change, but it's somewhat like the change for C# devs that needed to start writing unit tests. A PIA to start, but over time, it becomes a part of work. Maintaining test data is the same thing, albeit with a different process.
makes sense, thank you. I've done something similar with reference data sets, wondering if you've seen any kind of procedural data creation to get to these states or to make them easy to maintain? I find the approach of clojure.spec interesting on this side of things but I've not used it in the real world.
There is tooling for non-transactional data, so lists of countries, statuses, postal codes, etc. This is easier to maintain because this is really the same in all environments. For transactional data, it's harder. There are masking tools, such as Data Masker for SQL Server or Oracle (https://www.red-gate.com/products/dba/data-masker/), that can make this a repeatable process, but this doesn't help with the scale of transactional data. Masking all data in a 10GB db is easy, and works well on dev laptops and CI build machines. This is less useful on a 1TB db, both for time and resources. Instead, we need subsetting tools. I have customers asking about this, and I've tried to get Redgate to invest in this, but so far, no luck. I do need more customers, or even prospects, to make this a priority to justify investment (https://www.red-gate.com/blog/database-devops/database-subsetting-wed-love-hear), because this will dramatically change agility in the world. There are some other solutions, like random generation, but these don't necessarily cover the cases against which we need to test. Most of the solutions I see from Oracle, IBM, MS, etc. are really putting a large load on users. Companies that develop a process usually love it, but it's a small percentage (< 5%) that build a process to maintain this.
Usually it's a one-time project, which starts to fall apart over time as requirements change.
Feel free to ask more questions, or join the RG happy hour tonight. This is one of those things that is actually not that hard, but it is complex. Lots of protocol and process to implement and stick to.
The only way to deal with the vagaries of the weirdness of a production environment is to emulate/simulate them in the non-production environments as best you can. Shift left.
you eventually will need to snapshot prod (or most of it) to another env to test against
I will not say it's easy. It's work. Lots of work. However, it's all about breaking it down into component parts and then building them out as needed.
So I'm not hearing exactly how you do DB testing in your DevOps pipelines. Can you summarize the tools you're using to accomplish that?
Seriously, though, we use the SQL Change Automation tools, and a lot of the Red Gate tools to do comparisons, and it does a lot of basic sniff tests
when it comes to functional unit testing, I'm not exactly sure what QA and our dev teams use. I will check though. I'm mostly focused on what's changing, because I'm interested in the stability every time.
Does it make sense to test the database outside of testing the application itself?
@nick.kritsky sometimes. It depends on the change made because it may not be surfaced in the UI.
@occasl we've got both approaches of the State based model and Migrations based approach
Cool...thx for sharing. We're using Katalon, so hoping we can integrate DBUnit into that.
for example your alter might take an hour on the production data set but takes 10 seconds in stage/preprod
how do you structure in slack time in your teams? @stainsworth331
I'm not a manager, just a nerd. However, as a nerd, I ask the boss to allow me time for research & learning. Has to be part of the job.
I think i just answered that, but we can talk more about that if you want!
so we use personal OKRs as a way to sort of have certain personal goals decided over a longer period of time that helps an individual grow holistically (tech skills, leadership, communication, project management, etc.)
however often certain individuals have a hard time achieving those goals
for some reason even with their goals clearly laid out, they have a hard time executing
I love the idea of the personal OKR approach, but again for me, it goes back to the abstract versus checklist approach
I'd be curious if your abstract thinkers are the ones that have a tough time with goals (or is it vice versa)
both actually. and most of them are old timers, from when we didn't have structures and processes like these
so the people i am talking about come from a time of being super unstructured and constantly being in "war mode"
Shift left is tricky. To get the data as much like reality as possible, thankfully, I can use Redgate tools like SQL Data Masker which lets me create realistic looking data, but protects production data. Then, we also use SQL Clone, which lets us create tiny copies of the database, well, tiny on the dev machines, but actually full-sized in reality. All through disk virtualization.
Welcome @laksh.ranganathan for our next session's Q&A! Thank you also to #xpo-tasktop
Cheers! If anyone wants to, swing by the Redgate channel to continue the conversation. I'll get out of the way now.
Hi everyone, thanks so much for coming to listen to my talk on measuring what matters to the business. Please reach out if you have any questions!
"Love that quote - Not everything that can be counted counts."
"Outcomes over Outputs" is a topic at Lean Coffee today- hosted by @stephen. To join, look for zoom link in #lean-coffee channel at 2:30 PM PT
Big assumption that optimizing individual elements will optimize the system. (And of course, what are we optimizing for?)
@jackvinson so true. the local optimization vs. global optimization problem has been around since the days of Eliyahu Goldratt
Is it right to say that the metrics that matter to the business come from outside the business?
Yes.... and there are a lot of internal customers. And of course, the internal processes end up having an external customer impact eventually.
Exactly... much like Dr. Spear was talking about this morning: knowing what problem they (the business) are trying to solve and how we fit in helps hone in on the individual customer, internal or external
In team coaching the purpose and the team's 'job' is derived from asking about and meeting the needs of all stakeholders, internal and external!
That's such a great point @jackvinson - the whole doesn't always equal the sum of the parts, and you have to look at it with a different lens.
THIS --> "diff metrics for diff contexts"!
@laksh.ranganathan, how do you normalize flow metrics across teams or departments or value streams, if you'd like to identify high-performing teams (and their practices) and promote them?
This is such a great question! When looking at value streams, which tend to constitute multiple teams, it's better to normalise the work into key types of work. We abstract that out to Feature, Defect, Risk and Debt at a top level and apply that across teams, and use this level for Flow Metrics; data from Application Management Tools helps hone in on bottlenecks and identify where improvements will provide the most impact on value. We have seen a lot of value in having an Enterprise Flow Team that has larger visibility across value streams and can augment the learnings by providing the systems thinking. This is where Ways of Working teams can help cross-pollinate great ideas. Not easy - it's a learning process, through experimentation.
Welcome @bradgeesaman and @pawan.shankar for our next session's Q&A! And thank you also to #xpo-sysdig
Hello everyone! Thanks for coming to our session that is about to start. @pawan.shankar and I are looking forward to answering any questions you may have as best as we can.
Thanks for joining the talk! Please feel free to drop me a note with any thoughts or stories from within your organizations!
Coming up at 12:35pm PT, we have a CTO (@brianf) and CEO (@stephen) taking your questions LIVE on DevSecOps in the VendorDome session.
Ask your questions here and I'll share them live on air with Brian and Stephen
yes, Sysdig is also focusing on Calico for vanilla k8s, GKE, Rancher, EKS etc and OpenShift OVS if you're running on that
These are the main reasons why we shouldn't run our containers as root user...
that is exactly correct @prasad.gamini we did some customer usage analysis, and found 58% of containers are running as root in prod https://sysdig.com/blog/sysdig-2020-container-security-snapshot/
You have to push images to a registry to scan - Sysdig's inline scanning scans directly in the customer env and only sends the results back to Sysdig. You don't have to share images/registry credentials with a 3rd party tool
With Aqua - periodic rescanning required -- Sysdig continuously evaluates all running images for new vulnerabilities
B/c of their LD_PRELOAD approach, Aqua has no support for Go-based apps (doesn't support static binaries)
sysdig k8s integration is very rich - can tell you a specific vulnerability maps back to a particular service/namespace/cluster etc
In k8s, the secrets are just Base64 encoded, nothing apart from that... do these secrets bring any other security advantage over a configmap?
Secrets and configmaps are near identical in mechanics, but they afford greater separation possibilities via RBAC
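To make the "just Base64" point concrete, here is a small sketch using the official Kubernetes Python client; the secret and namespace names are placeholders. The value comes back merely encoded, not encrypted, so the advantage over a configmap is the RBAC separation mentioned above:

```python
import base64
from kubernetes import client, config   # pip install kubernetes

config.load_kube_config()                # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

# Reading a secret: every value in .data is plain Base64, not encrypted at this layer.
secret = core.read_namespaced_secret("db-credentials", "payments")   # placeholder names
password = base64.b64decode(secret.data["password"]).decode()

# The benefit over a configmap is separation: an RBAC Role can grant
# get/list on resources=["configmaps"] while simply omitting "secrets",
# so the same service account can read config but not credentials.
```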
This looks interesting for our env: https://rancher.com/blog/2020/runtime-security-with-falco/ a la @pawan.shankar
https://www.youtube.com/watch?v=u409G5PsO1w this goes into more detail @occasl
checking on this one specifically... we have API integrations for many of the CI/CD tools - bamboo, circle, gitlab etc
REMINDER to stay here... Coming up at 12:35pm PT, we have a CTO (@brianf) and CEO (@stephen) taking your questions LIVE on DevSecOps in the VendorDome session. YES LIVE. Ask your questions here and I'll share them live on air with Brian and Stephen
@bradgeesaman & @pawan.shankar are happy to continue the conversations in #xpo-sysdig or answer any additional questions that you might have! Thank you all for attending our session and asking some great questions. Want a LIVE demo or to dig deeper into the Sysdig Secure DevOps platform? Join our next demo in 20 min at 1:00pm PDT here: https://sysdig.zoom.us/j/92710825508?pwd=K2UvNjhPKzF3VzJIT2xOamxPM2dyQT09 @eric.magnus will be hosting & happy to show you anything you'd like to see in more detail
Welcome @weeks who will be moderating for today's VendorDome Q&A between @brianf and @stephen!
If you have questions, share them here for Brian and Stephen
DevSecOps question --> How do you get InfoSec to help fund joint projects like WAF and other crossover items that DevOps wants to use, however it is an additional cost...
Don't miss your chance to learn how to save on your cloud budget AND meet your performance goals. Sign up for Opsani Co-Founder & CTO Peter's speaking session Wednesday October 14 @ 2:05pm https://sched.co/ehCL
We're live: send me your questions for @brianf and @stephen
@weeks Question - Many of the latest DevSecOps tools work poorly or not at all with legacy systems for example cobol on a mainframe. At the same time, these systems run critical applications we all use every day. Do you see this as somewhat of an enforced strangler pattern, or do you think devsecops tools will fill in the gap and keep the legacy systems going for years to come?
So my understanding from the replies is that it becomes a strangler pattern due to lack of support.
Is data privacy/data sec/classification becoming more important to your orgs/customers?
@weeks Question - How do you see organizations addressing the need to codify the governance (audit/risk) aspect of things? Many organizations have controls clearly called out, but the proof/alignment is not easily codify-able. Any ideas how to codify the governance?
Not sure if we fully covered this, but automated governance is definitely a trend I hear more and more about at each DOES conference. Also "policy as code". Talk to @tapabrata.pal and @john_z_rzeszotarski if you haven't already.
Security work usually gets pushed to development; I don't know if that is due to culture or other reasons, but how do you turn that around? Change the culture or change the tools?
when you say "security", do you mean the tools? the team? the practice?
@rradclif was who I was referencing in my answer; look up her talks for DevOps-meets-mainframe wisdom!
@stephen Thanks. I heard @rradclif speak a few times at past conferences. I actually don't think it is a bad thing to eventually strangle off old systems and technologies due to lack of support. The same argument can be made for finding people with the skills to maintain these old systems - eventually the lack of a talent pool will strangle them if nothing else.
Which book were you discussing regarding the mainframe? I do not see us ever getting rid of ours. Too stable and reliable
@robyn.talbert, you can find her talks at prior DOES conferences on YouTube. Here is one example: https://www.youtube.com/watch?v=TwkJvsmZpF4
Secure development should be guaranteed from the start of development. But in "real life" it can be different; is it correct to implement different static/dynamic tools to identify all possible vulnerabilities earlier, even if that increases, or tends to increase, the technical debt? @weeks
Are there some tools to secure 3rd party integration to our DevOps pipelines? We have been using VPNs to act like submarine doors where both sides can close off each side and limit blast radius. Are there better ways?
One approach is to adopt inline scanning in the pipeline/registry. This means you scan directly in your env and only send the results back to a 3rd party tool like Sysdig. You don't have to share images/registry credentials with the tool either. This gives you better security by keeping you in full control of your PII data/image contents @denver.martin
I like this approach, @pawan.shankar. And I'm curious about what the submarine doors look like in your process @denver.martin. I'm going to DM about this!
@stephen we put a Fortinet virtual appliance on our side in AWS, then the 3rd party does as well; they can use Fortinet or any other vendor (Cisco, Pulse, Checkpoint, etc) and we then connect via IPSec tunnel. this means we both can close the connection if needed. The other option was to connect via AWS peering, but that is much more open and is controlled only by SG rules. You could do ACLs but then it becomes more management. Like a submarine, both sides can close the door if there are issues.
More about the Ops side than the Dev side of things.
anyone having any luck with a combination of GitOps/DiffOps and ContinuousCompliance as a means of eliminating manual change approval/review bottlenecks (like eCABs), in conjunction with docs-like-code and/or pull requests with infrastructure-as-code and even enterprise architecture (with the EA equivalent of 'food-critic')???
A peer review of code before merging to trunk works for both code and IaC. When it has been through two pairs of eyes, it can be considered formally accepted. So, that's how we do it pretty much like you are saying. There is a trial of annotating the repositories so that all CMDB-things get updated accordingly.
The idea is changes to the infrastructure architecture or enterprise architecture (I suppose even operating-model) could be done in a configuration/architecture-as-code like manner, with pull-requests (DiffOps) and help from automated scanning tools (like tech debt, anti-patterns, etc.) to assist the process and streamline "hard/slow" formal change reviews.
You don't need change reviews if anything those are supposed to do becomes transparent results of a pipeline.
Check out ServiceNow's speaking session TODAY at 1:35 PM: From velocity to value - scaling the DevOps impact. @eric.ledyard will be in #ask-the-speaker-track-4 for Live Q&A during the session!
@ferrix True - you might not need them. Tho in the form of pull requests they can be useful for triggering other things (both fully and partially automated) which can provide alerts/flags (like SonarQube does for tech debt, only this would be a security/vulnerability/EA equivalent of that for a scanning tool. Some of which exist - yes, Verity/Verily, even some OSS ones)
Yeah, we have SonarQube, ?AST, license checking and all kinds of things in the pipeline.
Definitely agree with PRs as a key point to surface code scanning issues (if you have quiet tools that don't generate too many false positives). It's the main point of integration for Muse.
@stephen your last comment sounded a little like pointing to a security-driven (TDD/BDD-like) approach (if I understood you correctly). That makes a lot of sense
@brad499: Yes, focus on having processes that can flag results in the moment on each code change. And if there's a gap (issues you can't flag that way) see if you can add a tool / write a rule to automate that.
@brianf Yup - you just nailed it (or at least what I was trying to convey!)
Question: Is there any concern with GitOps and auditing? The machines and changes can't be inspected live, only as a config file. Any issues with how auditors can then verify that actions are correct?
Test cases for expected behaviour
Clean commit messages showing intent.
Test results? Or testing the config with a new deployment? I'm thinking here of an auditor coming back to check on Jan 1, but asking about a vulnerability or a patch from the previous Oct 1. How can they verify that changes were actually implemented? Certainly checking a VCS might work, but I'm curious if auditors have issues with this.
Log the installed package list after a patching activity and store with other proof
yeah - I was thinking it's like config-as-code using TDD, CI (with scanning tools), refactoring, with PRs built in to the flow (and some config rules to determine what outputs are treated as warnings, vs must-review, vs etc.)
@steve.jones If you are so lucky as to produce a positive from a vulnerability analysis, you can prove it in staging and even production. So, pretty much the same pattern as coming across a bug and proving it gone with a test.
Any luck with ChatOps, like automating end-to-end pipelines by integrating chat bots or any AI/ML?
Here's that Google paper: https://cacm.acm.org/magazines/2018/4/226371-lessons-from-building-static-analysis-tools-at-google/fulltext
@steve.jones we have our GitOps create a config capture then put that in a change ticket, then make the change, do another config capture after the change, have that added to the notes of the ticket, then close the change. This way before and after are captured and then we have moved automated changes to std changes so they do not go directly to CAB for approval. For things to be automated they have to be done in non-prod prior to moving to prod.
we also made PRs and commits part of the approved change management process...
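A rough sketch of that before/after capture pattern, assuming the ServiceNow Table API and placeholder credentials; the config capture here is just the environment repo's Git revision, a stand-in for a fuller export:

```python
import subprocess
import requests

SNOW = "https://example.service-now.com"      # placeholder instance URL
AUTH = ("svc_gitops", "********")              # placeholder credentials from a secret store

def capture_config() -> str:
    """Stand-in for a fuller config export: record the Git revision the environment is at."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()

def automated_change(apply_change) -> None:
    """Open a change with the 'before' capture, run the change, attach the 'after' capture."""
    before = capture_config()
    chg = requests.post(
        f"{SNOW}/api/now/table/change_request",
        auth=AUTH,
        json={"type": "standard",
              "short_description": "GitOps automated change",
              "work_notes": f"Config capture before: {before}"},
        timeout=30,
    ).json()["result"]

    apply_change()                             # the actual deployment / GitOps sync step

    after = capture_config()
    requests.patch(
        f"{SNOW}/api/now/table/change_request/{chg['sys_id']}",
        auth=AUTH,
        json={"work_notes": f"Config capture after: {after}"},
        timeout=30,
    )
    # Closing the change would be one more PATCH, using whatever state model your instance defines.
```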
@srujit.biradawada -- ooh! great followup question. (chatops in conjunction with or as an extension of gitops/diffops)
I'm waiting for the future of IT where you stay in one application and trigger automation or resolve an issue without the need to move between different applications to trigger or troubleshoot. I think a chatbot with AI/ML integration will be that one application to stay in, or the chatbot just takes care of it. Maybe there will be teams integrating things in the backend, but end-to-end it is handled from one application, so we don't have to leave the existing application - just like Steve Jobs envisioned for in-app ads: bring up the browser to show the ad without leaving the app, then close the browser inside the app.
here's an example of chatops + code scanning: https://github.com/curl/curl/pull/5971#discussion_r490253770
first step toward what you're describing perhaps (and we're working on better and more full-featured interactivity)
@brianf I think the comments about protecting environments, malicious attacks, and AI/ML sound like an upcoming SyFy movie (or maybe Disney/Pixar) - maybe starring "the Rock" and Kevin Hart (Central Intelligence 2: DevSecOpsIntelligence (with AI+ML))
It's a Jetsons future! George pushed a button for 4hrs a day!
I did R&D on ChatOps by integrating with ServiceNow to create records (RITM/INC), and then the chatbot was made to make REST calls to different applications and get the responses based on that. But since the chatbot needed AI/ML for more complex things I had to put the project aside. I still agree on "one step at a time"; we'll see complete ChatOps in no more than 10 years
@denver.martin TRONminator 2 - The MatrixOps (Neo resurrected via AIChat with AlanK and Arnold 'the guvernator')
@weeks Maybe a summary comment from each tying the topic back to the human/empathy side of DevOps? (in the face of AIutomation)
Really enjoyed listening. Sounds like complexity is definitely a problem
Welcome @eric.ledyard and @richard.hawes for our next session's Q&A. And thank you to #xpo-servicenow!
Great session from Sonatype! ServiceNow team is online if you have any questions for Eric and Benโs session thatโs running now.
Thanks @weeks for moderating! I wanted to emphasize Derek's last point on metrics. Don't measure the number of tools you're running; measure their impact and outcomes! Thanks everyone!
For those who are interested in seeing some code scanning bugs fixed live on a DOES-community open source project, @ncohen will be live-streaming some remediation work on Hygieia and Concord after the next networking break.
The DevOps data model does not work with all CI/CD tools; are you limited to those, or do you have some way to integrate?
no, we have a framework that allows you (or partners) to write your own integrations. What tools do you need to integrate with?
To add to Laurent's comment - out of the box integrations include ADO, Jenkins, GitLab, plus the ability to use our integration model to connect to others. We're adding more over time too.
(That's CI/CD; we also connect to Git repos, planning tools like Jira, etc.)
you can gather data from your pipeline and other dev tools as well as from the ServiceNow platform (like incidents, outages, ...) to assess your risk dynamically (as opposed to a pre-approved standard change). Then based on those metrics you can auto-approve (or reject) your change, or decide to make a decision manually (by a human or CAB)
ServiceNow is great because of its flexibility. I was able to create a self-healing application (based on ServiceNow Orchestration): as soon as something is wrong with a server, my automation goes into the server, fixes the issue, and lets the users know that the issue is fixed.
Great use case. Now with AIOps, metrics, logs, and observable data alongside Flow Designer and Integration Hub, it's easier than ever to have more automated ops
curious about your tech stack around feature flags and canary deployments. Are you using in-house tools for these feature releases or third-party platforms?
@scott.dedoes is that question for ServiceNow or for @srujit.biradawada?
Hi Scott, great question. (Laurent, I think he's referring to what Ben said). We're working with pretty much any tools that are holding, configuring and using configuration data. The management piece Ben is talking about means we keep a repository of all the configuration information and add access control to that data (instead of having people work directly with the configuration information).
So a dev working on something moving to production will use our product to make configuration changes and we'll apply intelligent policy to that before passing the change on to the actual deployment tools.
Thanks for the follow up and apologies as I'm not an engineer but very interested in learning. So the ServiceNow product doesn't actually deploy the features for customers? Another part of feature flags that my question pertains to is the feature releases for ServiceNow's products. Does the company deploy these using their own feature flag platform or with third party tools? My company is working on an integration with ServiceNow for a mutual customer and just trying to understand the ServiceNow products and release process better. Thanks!
If by ServiceNow products you mean ServiceNow scoped applications and update sets (deployment between ServiceNow instances) we have a CI/CD solution for that which utilizes integrations with some 3rd party tools - there are more details here: https://docs.servicenow.com/bundle/orlando-servicenow-platform/page/administer/integrationhub-store-spokes/concept/cicd-spoke.html
There is a deep dive YouTube video on it here: https://www.youtube.com/watch?v=I9BRmKjc_8s
Stay on track 4 for the next session, "Optimizing @ Scale", and learn how to take your optimization from manual and reactive to autonomous and continuous! Attendees will also get a mask and mug from #xpo-opsani!
Welcome @peter118 for our next session's Q&A! Thank you to #xpo-opsani!
@peter118 Does Opsani integrate with Kubernetes easily? If so, can you outline the process?
@amir Yes. Opsani specifically targets Kubernetes. You install a small controller which discovers applications on the cluster and tags them for optimization. The optimization is done through our SaaS service (so you don't have to worry about all the ML loads)
> discovers applications
All of them that are on Kubernetes? How do I prevent certain apps from being optimized?
@amir we support annotations, so apps can opt-in or opt-out by attaching a simple annotation to the deployment object. You can also define the service level objective you want an app to be optimized for (e.g., latency should be below 30 msec)
I believe I understand. If I do not annotate, then there is no Opsani optimization; that's the way to exclude certain apps / namespaces. Correct?
@amir You got it! When installing Opsani on a cluster, you can define whether you want applications to be onboarded by default (use annotation to opt-out) or require annotation to opt-in.
(obviously, you get more optimized by using explicit opt-out only for apps that should be kept fixed)
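A sketch of what opting a workload in or out via annotation could look like with the Kubernetes Python client; the annotation keys and values here are purely illustrative, not Opsani's documented ones:

```python
from kubernetes import client, config   # pip install kubernetes

config.load_kube_config()
apps = client.AppsV1Api()

def set_optimization(deployment: str, namespace: str, enabled: bool) -> None:
    """Opt a Deployment in or out by patching an annotation onto it (key names are hypothetical)."""
    patch = {
        "metadata": {
            "annotations": {
                "optimization.example.com/enabled": "true" if enabled else "false",
                # A service level objective could ride along the same way, e.g.
                # "optimization.example.com/slo-latency-ms": "30",
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

set_optimization("checkout", "shop", enabled=False)   # keep this placeholder app fixed
```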
If you have any additional questions or want to learn more about Continuous Optimization as a Service, here's how you can get in touch: • https://pages.opsani.com/does-2020-live-demo-sign-up • https://pages.opsani.com/does-2020-happyhour (I'll be around as well for another 10-15 minutes here -- then at the Opsani booth)
Happening now! Opsani's live demo! Learn about autonomous workload tuning @scale! https://us02web.zoom.us/j/4975940985 Join and enter our raffle to win a $200 Giftly gift card