#ask-the-speaker-track-4
2020-10-14
Katrina Sison15:10:12

Happy day 2 folks! We hope you'll join us for our Co-Founder & CTO's session "Optimizing @ Scale" (2:05pm, Track 4). He will dive into how using ML to optimize all your applications across your service delivery platform is a seamless and painless process that will save your cloud budget and meet your performance goals. https://pages.opsani.com/does-2020-live-demo-sign-up

๐Ÿ™Œ 1
Cecilia Judmann - Redgate (she/her)15:10:01

Day two in virtual Vegas! 🥂🎉 Great to see so many awesome talks yesterday and great engagement. 😎 Make sure to tune in to track 4 at 11:05am PDT today to @grant.fritchey and @stainsworth331 as they talk about the Challenges of Implementing Database DevOps and get your ❓ answered!

๐Ÿ‘ 1
Grant Fritchey17:10:29

OK. Let's do this!

๐Ÿ’ฏ 4
๐ŸŽ‰ 1
Stuart R Ainsworth17:10:47

Warning: @grant.fritchey edited this, so any place that makes me sound like I don't know what I'm talking about, blame him

๐Ÿ˜‚ 3
Grant Fritchey17:10:15

He's not wrong.... this time.

๐Ÿ‘ 2
Molly Coyne (Sponsorship Director / ITREV)17:10:53

Welcome @stainsworth331 and @grant.fritchey for our next session's Q&A! Thank you to #xpo-redgatesoftware-compliant-database-devops!

๐Ÿ‘‹ 4
Stuart R Ainsworth18:10:03

Totally should do a blooper reel next time

Grant Fritchey18:10:02

All criminal evidence has been removed from my laptop.

๐Ÿ˜‚ 2
Javier Magaña - Walmart18:10:34

http://scarydba.com... as if dba's were not scary enough :D

๐Ÿ˜‚ 1
๐Ÿ‘ 1
Stuart R Ainsworth18:10:36

Hi, I'm Stuart Ainsworth, and I hate the sound of my own voice

๐Ÿ‘ 1
Grant Fritchey18:10:47

I'm not actually scary though.

Grant Fritchey18:10:26

Here comes the moment I messed up the screen.

๐Ÿ‘€ 1
๐Ÿ‘ 2
Lou Sacco18:10:21

@stainsworth331 do you have users using DBUnit to test in your DevOps toolchain?

๐Ÿ‘ 1
Stuart R Ainsworth18:10:12

Unfortunately no; I think our dev and QA folks are still very new to database testing. Being on the release end of the pipeline, i've focused more on getting the releases standardized.

Lou Sacco18:10:56

Would you recommend that? That's what I'm having our DB team explore to integrate DB testing (i.e., SELECT validations) and add this to our Pipelines.

Stuart R Ainsworth18:10:42

BTW, I tweet at @codegumbo if y'all want to follow me after this.

๐Ÿ‘ 3
Stuart R Ainsworth18:10:29

I also suck at typing.

Nick - developer at BNPP18:10:34

From my Release Management days I got this. One horrible thing about databases is that they contain data.

๐Ÿ‘ 2
โ˜๏ธ 2
Stuart R Ainsworth18:10:03

abso-frikking-lutely

Stuart R Ainsworth18:10:12

and in some cases it's terabytes of data

Rikard Ottosson - Psychological Safety (People Not Tech Ltd)18:10:32

Yes, changes are open-ended in terms of how much time they take

Rikard Ottosson - Psychological Safety (People Not Tech Ltd)18:10:17

I'm just going to add this index as part of the upgrade [........] how long is this going to take? 🤷

Grant Fritchey18:10:14

It really is the hard part. I mean, we could deploy databases so fast if we didn't have to keep the data.

Santiago Cardona18:10:20

@stainsworth331 How do you ensure that the quality of each deployment is correct? There are a lot of different kinds of scripts that we can deploy: stored procedures, triggers, scripts, etc.

Rylan Hazelton18:10:43

There is a schema migration library for just about every language. They can be integrated with your code repo and made part of your deploys.
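
As a minimal sketch of what "integrated with your code repo and part of your deploys" can look like, assuming Python and Alembic (the migration files live in the repo and the deploy job upgrades the database before the app rolls out); paths and URLs below are placeholders:

```python
# Minimal sketch, assuming Python + Alembic; connection details are placeholders.
from alembic import command
from alembic.config import Config

def migrate_database(database_url: str) -> None:
    """Apply any pending migrations checked into the repo before the app deploys."""
    cfg = Config("alembic.ini")                        # migration config lives alongside the code
    cfg.set_main_option("sqlalchemy.url", database_url)
    command.upgrade(cfg, "head")                       # run all migrations up to the latest revision

if __name__ == "__main__":
    migrate_database("postgresql://app:app@db.example.internal/app")
```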

Grant Fritchey18:10:05

The hard work should all take place in the testing prior to production. Any script, regardless of what it's doing, should not run for the first time in a production environment.

Jason MacZura18:10:08

@stainsworth331 I'm curious to know if you've used or evaluated database virtualization technologies?

Stuart R Ainsworth18:10:13

We've evaluated a few over the last several years; part of the struggle of being a BU for a corporation is that the infrastructure is often a compromise. Our focus has really been on just getting the CI pipeline in place.

Nick - developer at BNPP18:10:43

Every time we have started the conversation about automated database rollback, it always ended up with needing complete backend refactoring.

Steve Jones - He/Him18:10:56

Why refactoring here? Rollbacks can be simple for code (procs/functions/packages/etc.). For tables, this is really a pipeline challenge.

Stuart R Ainsworth18:10:40

And it's gonna depend on the rate of change in your databases. High-transaction dbs are very different from data warehouses, for example.

Nick - developer at BNPP18:10:38

I am as puzzled as you are. But there has always been something in the database-related code which made automated rollback unpredictable. ¯\_(ツ)_/¯

Steve Jones - He/Him18:10:29

There could be. Doesn't have to be, but I have seen some strange db environments that make me scratch my head and require special handling.

Sagan Zavelo Redgate18:10:45

Great to have you on the channel Jeff!

Grant Fritchey18:10:23

Rollbacks are exceedingly hard. I generally argue against them. Rollforwards are safer and easier to implement.

โœ… 2
Rylan Hazelton18:10:30

Rollbacks are part of a good schema migration tool; they just require engineers to build migrations for both directions.
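
A hedged sketch of what "migrations for both directions" looks like in an Alembic-style migration file; the table, column, and revision names are invented for illustration:

```python
"""Add loyalty_points to customer (illustrative table/column names)."""
from alembic import op
import sqlalchemy as sa

revision = "20201014_add_loyalty_points"   # placeholder revision ids
down_revision = "20201001_baseline"

def upgrade():
    # forward migration: additive, so older application code keeps working
    op.add_column(
        "customer",
        sa.Column("loyalty_points", sa.Integer(), nullable=False, server_default="0"),
    )

def downgrade():
    # reverse migration: the rollback the engineer has to write (and test) up front
    op.drop_column("customer", "loyalty_points")
```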

Grant Fritchey18:10:40

If you are going to do database rollbacks, you have to test them as thoroughly as you do the actual deployment.

๐Ÿ’ฏ 1
๐Ÿ‘‹ 1
Rylan Hazelton18:10:07

roll forward is never the same thing tho, you're just replacing with new vs going back to previous known

Santiago Cardona18:10:57

@stainsworth331 How do you roll back data when you've changed the schema?

EmanuelMedina - Bancolombia18:10:10

and how can we do a homogeneous process against different database vendors?

Stuart R Ainsworth18:10:38

we always plan for a migration or roll-forward. We make sure our changes are small so we can keep the revision plan tight.

Grant Fritchey18:10:59

True. The issue is whether the problem was found immediately, in which case rollback is easy, or 3, 5, 18 days later, when there are 18 days' worth of data that you can't chuck, depending on what's needed for the rollback.

Woody Evans18:10:25

may I suggest you use Delphix

Sara Gramling18:10:47

What does a matured DB DevOps environment look like? What sort of tools and processes are present, and how do they look as compared to a "normal" DevOps pipeline?

Lou Sacco18:10:16

Who's furiously clicking a mouse?

๐Ÿ˜ฌ 1
Stuart R Ainsworth18:10:37

I blame everything on @grant.fritchey

๐Ÿ˜‚ 1
๐Ÿ˜ 1
Lou Sacco18:10:14

Need to get him a trackpad. ๐Ÿ˜„

Grant Fritchey18:10:57

It was probably me.

Lou Sacco18:10:27

np...just ended up turning down the volume.

Rylan Hazelton18:10:43

Grant, yup! Ideally in those cases you do your schema first so you know you don't have to roll that part back, then do code later; that way 18 days later there still is no data there.

Rasika V18:10:07

what tools did you use to deploy DB changes via the pipeline?

โž• 1
Woody Evans18:10:25

DDL Source control (Liquibase, db Maestro), DDL Testing (Liquibase), Data version control (Delphix),

๐Ÿ‘ 1
Rylan Hazelton18:10:29

depends on your language choice

Grant Fritchey18:10:41

It's all about dealing with the database in the same ways as you deal with code. Mostly, the tools that support the code, support the database. However, you need one special tool. The thing that gets the code in and out of source control and enables a deployment to a database, that's unique.

๐Ÿ‘ 4
Grant Fritchey18:10:53

My company, Redgate Software, makes tools for exactly that.

๐Ÿ‘ 2
Rikard Ottosson - Psychological Safety (People Not Tech Ltd)18:10:24

Also you make the Patrick Stewart tool, Make it So / SQL Compare

Grant Fritchey18:10:09

Ha! Yep. That's the foundation for a lot of the other tools, too.

Woody Evans18:10:59

@grant.fritchey exactly. Data Version Control.

Grant Fritchey18:10:30

Or, Redgate Deploy, Flyway. @woody.evans

Rylan Hazelton18:10:59

There are some cases where you want to separate schema from code repos, but yes you are correct, you can still deploy schema first and code second having them all in 1 repo. This would be two pull requests/merges.

Vaidik Kapoor (Speaker) - Technology Consultant18:10:27

what would be the case where you want to separate schema from code repos?

Grant Fritchey18:10:55

As much as possible, I want the code and the db to be on exactly the same version. No deviation.

Rylan Hazelton18:10:44

yes, we are saying the same thing, I was just offering you a strategy for your rollback issue

Rylan Hazelton18:10:49

vaidik: they're all edge cases and should be avoided at most costs, usually when schema is shared between more than one source.

Steve Jones - He/Him18:10:08

I tend to want the db separate, because often there are multiple apps (ETL, reports, etc.) that query the db. I prefer the db as a first class citizen, not a dependency of an app. However, when you deploy code, often there is a need to have some versioning in the db that allows an app to be sure it works with vXX+. If db v3 adds a column and db v3.2 adds a second one, my app might need to be sure the db is at v3.2 or turn off feature flags.
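
A rough sketch of the versioning idea described here, assuming PostgreSQL, psycopg2, and a hypothetical schema_version table maintained by the deployment tooling (table name, columns, and connection string are all assumptions):

```python
# Illustrative only: the app checks the deployed schema version before enabling a feature.
import psycopg2

MIN_VERSION_FOR_SECOND_COLUMN = (3, 2)   # the "db v3.2" from the discussion above

def current_schema_version(conn):
    with conn.cursor() as cur:
        cur.execute("SELECT major, minor FROM schema_version "
                    "ORDER BY applied_at DESC LIMIT 1")
        row = cur.fetchone()
        return (row[0], row[1])

def feature_flags(conn):
    # Keep the feature off unless the database is at least at the required version.
    return {"use_second_column": current_schema_version(conn) >= MIN_VERSION_FOR_SECOND_COLUMN}

conn = psycopg2.connect("dbname=app user=app")   # placeholder connection details
flags = feature_flags(conn)
```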

Rylan Hazelton18:10:54

yeah there is no one size fits all for every system

Rylan Hazelton18:10:25

but a schema migration tool might help with the version stuff

Vaidik Kapoor (Speaker) - Technology Consultant18:10:19

ah right. sharing schema with more than one app just feels like a nightmare - too hard to control

Vaidik Kapoor (Speaker) - Technology Consultant18:10:38

but yeah in that case, you would want the schema in a separate repo with its own workflow

Rylan Hazelton18:10:06

or feature flags are your friends

๐Ÿ‘ 1
Vaidik Kapoor (Speaker) - Technology Consultant18:10:08

super relatable. when i joined my current company, we had a database that was shared between multiple services. but there was no regard for process when making changes to that database. after a lot of brain-racking, we managed to put a sane change management process in place by putting the schema in a git repo. after that, we started calling that database "mother DB". always respect the mother.

Stuart R Ainsworth18:10:55

YES. SOURCE CONTROL is TRUTH!

Woody Evans18:10:20

Is anyone using a Data Services Layer to service all of their data needs?

Sagan Zavelo Redgate18:10:42

One thing we consistently hear at Redgate is that companies need data on demand for development in the lower environments. Is anyone struggling with that?

2
1
Nick - developer at BNPP18:10:16

absolutely. the endless battle of data refresh and data masking

Sagan Zavelo Redgate18:10:49

Exactly, that's one of the very real issues that continues to come up

Sagan Zavelo Redgate18:10:18

You not only need to mask the data but you also need it immediately?

Rikard Ottosson - Psychological Safety (People Not Tech Ltd)18:10:47

Yes, random prod-looking data (but definitely NOT production data) en masse in a random new environment would have saved many hours of debugging.

Nick - developer at BNPP18:10:23

"immeditately" might be defined differently in different industries. But yes, normally people are trying to reduce the lag

Sagan Zavelo Redgate18:10:17

We've got tools to address both those issues.

Sagan Zavelo Redgate18:10:29

@scottd.harris and @rasika.vaidya I'll send you a PM

Pavan Kristipati18:10:10

I wonder what database types (MS SQL Server, Oracle, DB2 LUW, PostgreSQL ..) Redgate supports?

Jesus Avila18:10:55

Redgate supports SQL Server and Oracle, plus these others: PostgreSQL, MySQL, DB2, Aurora MySQL, MariaDB, Percona XtraDB Cluster, Aurora PostgreSQL, Redshift, CockroachDB, SAP HANA, Sybase ASE, Informix, H2, HSQLDB, Derby, Snowflake, SQLite, Firebird. Hope this helps!

Grant Fritchey18:10:56

We've got tools for just about any database you can think of. Between Redgate Deploy and Flyway, we really do cover them all.

๐Ÿ‘ 3
Matt Masuda - Quicken Loans18:10:40

@grant.fritchey How about Progress?

Santiago Cardona18:10:42

@grant.fritchey How do you test the databases changes?

Grant Fritchey18:10:02

I've mostly been working with PostgreSQL lately, in both AWS and Azure, using Azure DevOps Pipelines or AWS DeveloperTools, along with Flyway. Works a treat.
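
As a rough sketch of the kind of pipeline step this describes (not Grant's actual setup), a deploy job could shell out to the Flyway CLI against a PostgreSQL target; the JDBC URL, credentials, and migrations directory below are placeholders:

```python
# Hedged sketch of a pipeline step that applies repo-managed migrations with Flyway.
import os
import subprocess

def flyway_migrate(jdbc_url: str, user: str, password: str, migrations_dir: str = "sql") -> None:
    subprocess.run(
        [
            "flyway",
            f"-url={jdbc_url}",
            f"-user={user}",
            f"-password={password}",
            f"-locations=filesystem:{migrations_dir}",  # versioned SQL checked into the repo
            "migrate",
        ],
        check=True,  # fail the pipeline step if any migration fails
    )

if __name__ == "__main__":
    flyway_migrate(
        "jdbc:postgresql://db.example.internal:5432/app",   # placeholder target
        os.environ["DB_USER"],
        os.environ["DB_PASSWORD"],
    )
```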

Rylan Hazelton18:10:25

Stored procedures are problematic since you have code that doesn't live in source control.

Stuart R Ainsworth18:10:05

yep; that's a cultural problem you have to overcome.

Rylan Hazelton18:10:58

I won't allow my engineers to use them unless they have no other viable option (which is rare)

Grant Fritchey18:10:18

Does depend. I'm in favor of ORM tools, but, not all code can be generated. Some is still going to be traditional, 3-5% max.

Rylan Hazelton18:10:34

ask a dba how they feel about orms

Grant Fritchey18:10:49

I'm, technically, a dba. I love ORM tools.

Rylan Hazelton18:10:55

🙂 ORMs generalize all your things at the cost of performance for the non-standard things

Rylan Hazelton18:10:00

explain an ORM join query 😉

Rylan Hazelton18:10:25

not all ORMs are equal here, but they usually fall into the same traps

Grant Fritchey18:10:00

True. However, 90-95% of the work an ORM tool does is perfectly fine. It's all about identifying that last bit and making adjustments. Most ORMs support procedures.

Rylan Hazelton18:10:25

it's that 5% that ends up costing you a lot later, in most of my experience

Rylan Hazelton18:10:48

you start making really odd/special business logic to get around some shitty join or query the ORM is building

Grant Fritchey18:10:04

Mine too. It's a big part of why, as a DBA, I've embraced ORMs and supported the developers in their use. It makes it easier for me to say, "All good, except over here."

Rylan Hazelton18:10:34

as a DBA, engineers just say "sorry, that's how my ORM does it" 🙂

Rylan Hazelton18:10:13

you give them a query that performs 10x better but the orm will make your life hell actually trying to run it

Rylan Hazelton18:10:21

there's no one answer here.

João Acabado - Principal Engineer - Sky UK18:10:56

did people abandon the idea of micro ORMs?

Rylan Hazelton18:10:41

you just mean object mappers, right? I'm not sure about the micro ORM term

Rylan Hazelton18:10:38

my personal preference is for a library that helps with mapping the db records to objects, and then most of the queries are built from simple db libs

João Acabado - Principal Engineer - Sky UK18:10:56

it referred to simple DSLs, where you could easily use db language if needed, and then object mapping the results

João Acabado - Principal Engineer - Sky UK18:10:13

it was picking up 5 years ago with Dapper on .Net and JOOq on Java

Rylan Hazelton18:10:47

ahh yeah the problem with tools that abstract the db underneath is they usually perform poorly under load.

Rylan Hazelton18:10:36

they also, like ORMs, try to generalize things between DBs, and usually performance is much better when you specialize

Grant Fritchey18:10:48

Very poorly. However, like any other tool, you can use them in ways that work and in ways that hurt, a lot.

Chris Vogel - Edward Jones18:10:04

How are you seeing database work changing as companies move to cross-functional product teams?

๐Ÿ‘ 1
Stuart R Ainsworth18:10:07

oh that's an awesome question! I really see a separation of responsibilities that are traditionally combined in the role of DBA: server admin (backups, configuration, security) and development (performance tuning, etc.).

Grant Fritchey18:10:21

Testing is hard, frankly. I like to break the testing in half: tests with data and tests without. Tests without are primarily for Continuous Integration - fast, light, not thorough, but they catch the easy stuff. Then, tests with data, as much like production as possible, are what QA & pre-prod/staging are all about.

๐Ÿ‘ 2
โœ… 1
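
A small sketch of that split, assuming pytest and PostgreSQL; the marker, table names, and connection string are illustrative, not anyone's actual suite:

```python
import psycopg2
import pytest

@pytest.fixture
def conn():
    # Placeholder connection string; point this at the freshly deployed CI database.
    return psycopg2.connect("dbname=ci_build user=ci")

def test_expected_columns_exist(conn):
    # Fast, data-free check: suitable for every Continuous Integration run.
    with conn.cursor() as cur:
        cur.execute("SELECT column_name FROM information_schema.columns "
                    "WHERE table_name = 'customer'")
        columns = {row[0] for row in cur.fetchall()}
    assert {"id", "email"} <= columns        # assumed schema, for illustration

@pytest.mark.data  # custom marker: run only in QA/staging with production-like data loaded
def test_order_totals_are_consistent(conn):
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM orders WHERE total < 0")
        assert cur.fetchone()[0] == 0
```
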
Santiago Cardona18:10:02

Has anyone worked with the Robot Framework Python library, or used Groovy, to test the changes?

Pavan Kristipati18:10:01

How are rollbacks handled? Do the tools have the ability to generate rollback scripts based on the actual code that is going to be implemented?

Vaishali Deshmukh, Team Lead - Database Applications, Edward Jones18:10:14

How many different pipelines do you have for database deployments?

Stuart R Ainsworth18:10:19

right now, we're using Octopus Deploy. We have one pipeline, with variables.

Vaishali Deshmukh, Team Lead - Database Applications, Edward Jones18:10:37

We have not started our database DevOps journey. I would like to learn more about your experiences.

Vaishali Deshmukh, Team Lead - Database Applications, Edward Jones18:10:48

What different database platforms are you doing DB DevOps on? Oracle, SQL Server, MongoDB?

Vaishali Deshmukh, Team Lead - Database Applications, Edward Jones19:10:23

What is the best way to talk to you? One of my colleagues may join.

Stuart R Ainsworth18:10:00

Grant was supposed to edit that to make me sound much more intelligent.

๐Ÿ˜‚ 4
Grant Fritchey18:10:22

Some tools can do rollbacks, yes. However, most generated rollbacks don't take into account data protection. Frequently, those scripts need manual tweaks.

๐Ÿ‘ 2
Scott Harris18:10:43

Any tips for managing references across different data domains for QA work? We want product teams to own their own data and the management of it, but for QA environments integration across domains is still necessary, and so consistency and availability are key.

๐Ÿ‘ 1
Steve Jones - He/Him18:10:04

My number one recommendation is to have curated data sets for all work. This can be sanitized prod, or (preferred) a small known set of domain cases that you maintain, just like you maintain tests. This is the best investment I think you can make for db work.

EmanuelMedina - Bancolombia18:10:13

mm, a big deal. What we do is a test data management practice, and we develop our own extensions to achieve that: selecting a catalog for the pipeline, and a final task to decommission that data

Woody Evans18:10:10

You can also use masking with distributed referential integrity to translate production data in a way that keeps cross-app data references consistent.

João Acabado - Principal Engineer - Sky UK18:10:24

are there any tools to manage test data sets? I've looked into this without success

João Acabado - Principal Engineer - Sky UK18:10:23

it would be good to have something as easy to use as the repositories for containers and libraries

EmanuelMedina - Bancolombia18:10:03

we had to develop our own tool for that (yep, we have that thing called iSeries/DB2)

EmanuelMedina - Bancolombia18:10:58

but it's just a Java client that moves data from the 'universe data' to the environment, and we have a table to control the data that we have to delete when the pipeline ends

Scott Harris18:10:33

would love to be able to rebuild a datastore from a dataset at any time…the references between databases and domains, where those are owned by different teams, are where that becomes hard

João Acabado - Principal Engineer - Sky UK18:10:34

that's interesting but I feel there is a simpler gap to fill which is just a tool to manage the set of domain cases to be able to do different tests as @stephen referred

Scott Harris18:10:09

thx…will check that out

Scott Harris18:10:27

that concept does feel like it aligns

Steve Jones - He/Him18:10:22

There are masking tools, Redgate makes one, others are out there. I haven't seen good subsetting tools, which is something I wish Redgate or other vendors would work on because these are important. Some of my customers that have had success will pull rows from a prod db and store them in a "testtemplate" db, masking as needed. They then use this as the base image for data virtualization, restores in qa, containers, etc. This is the basis for all work. Over time, they continually add new edge /corner cases from prod as they need to. As an example, one customer has a multi-TB db that is templated back to about 10GB. This is regular work for devs to keep the template up to date. Disclosure: I work for Redgate.

João Acabado - Principal Engineer - Sky UK18:10:44

Steve can you elaborate on the templated bit?

Steve Jones - He/Him19:10:23

Sure, you keep a db around that is essentially your template. You can handle this in a few ways, but essentially think about having a regular process that: 1. backs up the template db 2. restores this in QA/dev/etc. Every time we hydrate a new environment for a developer, a QA env, a CI build, we use this same db. This way everyone always has the same basis for starting work. Think of this as a new branch off main for a C#/Java dev. They always have a known starting point. Since dbs grow and change, as we get a ticket or bug and realize that we don't have data to match this, say the "Steve Jones account from prod", we copy this data from prod -> template, sanitize it if needed, and then the next time the process above runs, we have a new db that allows us to work. Ideally, that process is both scheduled and demand driven. These days, I would expect customers to do this with data virtualization or containers. Having a known starting place for every bit of work that is updated as we deploy changes and grow. This isn't simple, because changes to the prod db (deployments) need to be run against the template db as well, so that developers can keep up to date. Likewise, anything you pull from VCS wrt db code needs to be applied. This is where migration-based frameworks or tooling, like FlywayDB, ensure that you automatically upgrade your copy of the template db when you start work. If devs have updated the VCS with other code, it gets applied to the copy of the template db.
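
A condensed sketch of that hydrate-then-upgrade loop, assuming PostgreSQL (where a database can be cloned from a template) and Flyway for the catch-up migrations; all database names and connection details are placeholders:

```python
# Hedged sketch: clone the curated template database, then bring it up to date
# with whatever migrations are in version control.
import subprocess
import psycopg2

def hydrate_dev_database(template: str = "testtemplate", target: str = "dev_copy") -> None:
    conn = psycopg2.connect("dbname=postgres user=postgres")  # placeholder admin connection
    conn.autocommit = True                                    # CREATE DATABASE cannot run in a transaction
    with conn.cursor() as cur:
        cur.execute(f'DROP DATABASE IF EXISTS "{target}"')
        cur.execute(f'CREATE DATABASE "{target}" TEMPLATE "{template}"')
    conn.close()

    # Apply any migrations committed since the template was last refreshed.
    subprocess.run(
        ["flyway", f"-url=jdbc:postgresql://localhost/{target}",
         "-locations=filesystem:sql", "migrate"],
        check=True,
    )

hydrate_dev_database()
```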

Steve Jones - He/Him19:10:00

This isn't simple, and it's a change, but it's somewhat like the change for C# devs that needed to start writing unit tests. A PIA to start, but over time, it becomes a part of work. Maintaining test data is the same thing, albeit with a different process.

João Acabado - Principal Engineer - Sky UK19:10:40

makes sense, thank you. I've done something similar with reference data sets; wondering if you've seen any kind of procedural data creation to get to these states or to make them easy to maintain? I find the approach in clojure.spec interesting on this side of things but I've not used it in the real world.

Steve Jones - He/Him19:10:34

There is tooling for non-transactional data, so lists of countries, statuses, postal codes, etc. This is easier to maintain because this is really the same in all environments. For transactional data, it's harder. There are masking tools, such as Data Masker for SQL Server or Oracle (https://www.red-gate.com/products/dba/data-masker/), that can make this a repeatable process, but this doesn't help with the scale of transactional data. Masking all data in a 10GB db is easy, and works well on dev laptops and CI build machines. This is less useful on a 1TB db, both for time and resources. Instead, we need subsetting tools. I have customers asking about this, and I've tried to get Redgate to invest in this, but so far, no luck. I do need more customers to make this a priority, or even prospects, to justify investment (https://www.red-gate.com/blog/database-devops/database-subsetting-wed-love-hear), because this will dramatically change agility in the world. There are some other solutions, like random generation, but these don't necessarily cover the cases against which we need to test. Most of the solutions I see from Oracle, IBM, MS, etc. are really putting a large load on users. Companies that develop a process usually love it, but it's a small percentage (< 5%) that build a process to maintain this.

Steve Jones - He/Him19:10:46

Usually it's a one-time project, which starts to fall apart over time as requirements change.

Steve Jones - He/Him19:10:31

Feel free to ask more questions, or join the RG happy hour tonight. This is one of those things that is actually not that hard, but it is complex. Lots of protocol and process to implement and stick to.

João Acabado - Principal Engineer - Sky UK19:10:11

thanks I will bring some if they arise

Grant Fritchey18:10:10

Yikes. Oh, look at the time. Have to go....

๐Ÿ˜† 5
2
Sagan Zavelo Redgate18:10:36

Don't leave us Grant!

Grant Fritchey18:10:59

The only way to deal with the vagaries of the weirdness of a production environment is to emulate/simulate them in the non-production environments as best you can. Shift left.

Rylan Hazelton18:10:30

you eventually will need to snapshot prod (or most of it) to another env to test against

Grant Fritchey18:10:31

I will not say it's easy. It's work. Lots of work. However, it's all about breaking it down into component parts and then building them out as needed.

Stuart R Ainsworth18:10:34

when i think, i don't look at the camera.

Lou Sacco18:10:44

So I'm not hearing exactly how you do DB testing in your DevOps pipelines. Can you summarize the tools you're using to accomplish that?

Stuart R Ainsworth18:10:13

QA is a magic cloud for me 🙂

Santiago Cardona18:10:57

@occasl We're testing the Robot Framework Python library, Groovy, and DbUnit

๐Ÿ‘ 1
Stuart R Ainsworth18:10:03

Seriously, though, we use the SQL Change Automation tools, and a lot of the Redgate tools to do comparisons, and they do a lot of basic sniff tests

๐Ÿ‘ 1
Lou Sacco18:10:26

@santiaca which Python lib (link?)

Stuart R Ainsworth18:10:28

when it comes to functional unit testing, I'm not exactly sure what QA and our dev teams use. I will check though. I'm mostly focused on what's changing, because I'm interested in the stability every time.

Lou Sacco18:10:45

@stainsworth331 sure, I'd love to know

Nick - developer at BNPP18:10:54

Does it make sense to test the database outside of testing the application itself?

Santiago Cardona18:10:09

@occasl Robot Framework. You can do different kinds of asserts

Lou Sacco18:10:45

@nick.kritsky sometimes. It depends on the change made because it may not be surfaced in the UI.

Lou Sacco18:10:18

@santiaca Is that built on Selenium?

Sagan Zavelo Redgate18:10:31

@occasl we've got both approaches: the state-based model and the migrations-based approach

Santiago Cardona18:10:54

@occasl No, but you can use it

Lou Sacco18:10:53

Cool...thx for sharing. We're using Katalon, so hoping we can integrate DBUnit into that.

Lou Sacco18:10:02

It's Java/Groovy based.

Rylan Hazelton18:10:59

for example, your ALTER might take an hour on the production data set but takes 10 seconds in stage/preprod

Grant Fritchey18:10:06

tSQLt is the one I use the most. It's based on nUnit.

Vaidik Kapoor (Speaker) - Technology Consultant18:10:25

how do you structure in slack time in your teams? @stainsworth331

Lou Sacco18:10:00

Free Fridays! (4 hours dedicated to a project)

Grant Fritchey18:10:48

I'm not a manager, just a nerd. However, as a nerd, I ask the boss to allow me time for research & learning. Has to be part of the job.

Stuart R Ainsworth18:10:23

I think i just answered that, but we can talk more about that if you want!

Stuart R Ainsworth18:10:33

or, that is, the other me on the screen

Vaidik Kapoor (Speaker) - Technology Consultant18:10:12

so we use personal OKRs as a way to sort of have certain personal goals decided over a longer period of time that helps an individual grow holistically (tech skills, leadership, communication, project management, etc.)

๐Ÿ‘ 1
Vaidik Kapoor (Speaker) - Technology Consultant18:10:29

and these goals are set with mutual consent

Vaidik Kapoor (Speaker) - Technology Consultant18:10:46

however often certain individuals have a hard time achieving those goals

Vaidik Kapoor (Speaker) - Technology Consultant18:10:17

for some reason even with their goals clearly laid out, they have a hard time executing

Stuart R Ainsworth18:10:44

I think every manager has 🙂

Stuart R Ainsworth18:10:09

I love the idea of the personal OKR approach, but again for me, it goes back to the abstract versus checklist approach

Stuart R Ainsworth18:10:32

I'd be curious if your abstract thinkers are the ones that have a tough time with goals (or is it vice versa)

Stuart R Ainsworth18:10:46

For me, i have to have squishy goals for my abstract people

Vaidik Kapoor (Speaker) - Technology Consultant18:10:30

both actually. and most of them are old-timers, from when we didn't have structures and processes like these

Vaidik Kapoor (Speaker) - Technology Consultant18:10:52

so the people i am talking about come from a time of being super unstructured and constantly being in "war mode"

Grant Fritchey18:10:26

Shift left is tricky. To get the data as much like reality as possible, thankfully, I can use Redgate tools like SQL Data Masker which lets me create realistic looking data, but protects production data. Then, we also use SQL Clone, which lets us create tiny copies of the database, well, tiny on the dev machines, but actually full-sized in reality. All through disk virtualization.

๐Ÿ‘ 5
Sagan Zavelo Redgate18:10:17

Love to hear more about the feeling of a successful pipeline!

Grant Fritchey18:10:07

WHOOP!

๐ŸŽ‰ 3
Jeff Meade -Redgate18:10:15

Great job guys!!

๐Ÿ™Œ 4
Molly Coyne (Sponsorship Director / ITREV)18:10:27

Welcome @laksh.ranganathan for our next session's Q&A! Thank you also to #xpo-tasktop

โค๏ธ 1
Grant Fritchey18:10:41

Cheers! If anyone wants to, swing by the Redgate channel to continue the conversation. I'll get out of the way now.

๐Ÿ‘ 5
Nick - developer at BNPP18:10:56

Thanks @grant.fritchey and @stainsworth331!

๐Ÿ‘ 5
Laksh Ranganathan (Tasktop)18:10:29

Hi everyone, thanks so much for coming to listen to my talk on measuring what matters to the business. Please reach out if you have any questions!

4
Dominica DeGrandis, Author - Making Work Visible, Principal Flow Advisor18:10:14

"Love that quote - Not everything that can be counted counts."

๐Ÿ‘ 2
โค๏ธ 6
Jack Vinson - flow18:10:32

Measure what matters. And know why it matters.

๐Ÿ‘ 3
โœ”๏ธ 2
Dominica DeGrandis, Author - Making Work Visible, Principal Flow Advisor18:10:38

"Outcomes over Outputs" is a topic at Lean Coffee today- hosted by @stephen. To join, look for zoom link in #lean-coffee channel at 2:30 PM PT

Jack Vinson - flow18:10:45

Big assumption that optimizing individual elements will optimize the system. (And of course, what are we optimizing for?)

๐Ÿ‘ 1
Woody Evans18:10:48

@jackvinson so true. The local optimization vs. global optimization problem has been around since the days of Eliyahu Goldratt 😉

Woody Evans18:10:51

Is it right to say that the metrics that matter to the business come from outside the business?

Jack Vinson - flow18:10:04

Yes.... and there are a lot of internal customers. And of course, the internal processes end up having an external customer impact eventually.

Laksh Ranganathan (Tasktop)18:10:43

Exactly…much like Dr. Spear was talking about this morning: knowing what problem they (the business) are trying to solve and how we fit in helps hone in on the individual customer, internal or external

Ffion Jones (Partner, PeopleNotTech)19:10:03

In team coaching the purpose and the team's 'job' is derived from asking about and meeting the needs of all stakeholders, internal and external!

๐Ÿ‘ 1
Laksh Ranganathan (Tasktop)18:10:07

That's such a great point @jackvinson - the whole doesn't always equal the sum of the parts, and you have to look at it with a different lens.

Jack Vinson - flow18:10:42

(just as the stream starts talking about flow metrics)

Dominica DeGrandis, Author - Making Work Visible, Principal Flow Advisor18:10:30

THIS --> "diff metrics for diff contexts"!

๐ŸŽฏ 4
Rajat Sud (DevOps Evangelist - SBPASC, an affiliate of CareFirst) (Speaker)19:10:24

@laksh.ranganathan,how do you normalize flow metrics across teams or departments or value streams? If you'd like to identify high performing teams (and their practices) and promote them?

๐Ÿ‘ 1
Laksh Ranganathan (Tasktop)19:10:19

This is such a great question! When looking at value streams, which tend to constitute multiple teams, it's better to normalise the work into key types of work. We abstract that out to Feature, Defect, Risk and Debt at a top level and apply that across teams. We use this level for Flow Metrics, and data from Application Management Tools helps hone in on bottlenecks and identify where improvements will provide the most impact on value. We have seen a lot of value in having an Enterprise Flow Team that has larger visibility across value streams and can augment the learnings by providing the systems thinking. This is where Ways of Working teams can help cross-pollinate great ideas. Not easy - it's a learning process, through experimentation.

Jack Vinson - flow19:10:36

"There is no 1 metric to rule them all"

๐Ÿ’ฏ 2
Carmen19:10:29

great talk @laksh.ranganathan 🙂

๐Ÿ‘ 4
๐Ÿ™ 1
Molly Coyne (Sponsorship Director / ITREV)19:10:21

Welcome @bradgeesaman and @pawan.shankar for our next session's Q&A! And thank you also to #xpo-sysdig

๐ŸŽ‰ 4
Lauren Hernandez - Sysdig19:10:57

the anticipation is killing me!

Brad Geesaman19:10:22

Hello everyone! Thanks for coming to our session that is about to start. @pawan.shankar and I are looking forward to answering any questions you may have as best as we can.

๐Ÿ”ฅ 1
Marc Boudreau (Enterprise Architect)19:10:44

Is there an issue with the video?

Prasad Gamini19:10:02

yup, thought of asking the same...

Laksh Ranganathan (Tasktop)19:10:28

Thanks for joining the talk! Please feel free to drop me a note with any thoughts or stories from within your organizations!

๐Ÿ‘ 1
Derek Weeks, Sonatype / All Day DevOps19:10:40

Coming up at 12:35pm PT, we have a CTO (@brianf) and CEO (@stephen) taking your questions LIVE on DevSecOps in the VendorDome session.

Derek Weeks, Sonatype / All Day DevOps19:10:25

Ask your questions here and I'll share them live on air with Brian and Stephen

Lou Sacco19:10:53

What do you recommend for your CNI to protect the network?

Brad Geesaman19:10:34

Calico or Cilium are great choices

Lou Sacco19:10:16

We're using Canal, which kinda combines Calico and Flannel.

๐Ÿ‘ 1
Pawan Shankar19:10:31

yes, Sysdig also is focusing on Calico for vanilla k8s, GKE, Rancher, EKS etc., and OpenShift OVS if you're running on that

Lou Sacco19:10:37

We're on Rancher...which makes K8s a lot easier 🙂

Prasad Gamini19:10:14

These are the main reasons why we shouldn't run our containers as root user...

๐Ÿ‘ 3
โž• 1
Pawan Shankar19:10:56

that is exactly correct @prasad.gamini - we did some customer usage analysis, and found 58% of containers are running as root in prod https://sysdig.com/blog/sysdig-2020-container-security-snapshot/

๐Ÿ˜ฌ 1
Lou Sacco19:10:55

Are you guys fans of JFrog's X-ray?

Matt Cobby (DevEx, InnerSource)19:10:33

So-so. Not as good as other products.

Lou Sacco19:10:23

@matthew.cobby I have the same feeling. Their UI is unusable for a lot of images.

Lou Sacco19:10:29

Still waiting for them to fix that.

Pawan Shankar19:10:51

we have our own image scanning feature that supports Artifactory

Lou Sacco19:10:03

I've been looking at Aqua too for more active scanning controls.

Pawan Shankar19:10:17

yeah our image scanning has a few differences vs Aqua - threading

Pawan Shankar19:10:30

With Aqua, you have to push images to a registry to scan - Sysdig's inline scanning scans directly in the customer env and only sends the results back to Sysdig. You don't have to share images/registry credentials with a 3rd party tool

๐Ÿ‘ 1
๐Ÿ”ฅ 1
Pawan Shankar19:10:15

With Aqua - periodic rescanning required -- Sysdig continuously evaluates all running images for new vulnerabilities

Pawan Shankar19:10:34

B/c of their LD_PRELOAD approach, Aqua has no support for Go-based apps (doesn't support static binaries)

Pawan Shankar19:10:18

Sysdig's k8s integration is very rich - it can tell you a specific vulnerability maps back to a particular service/namespace/cluster etc.

Prasad Gamini19:10:48

In k8s, the secrets are just Base64 encoded, nothing apart from that... do these secrets bring any other security advantage over a ConfigMap?

Brad Geesaman19:10:04

Secrets and configmaps are near identical in mechanics, but they afford greater separation possibilities via RBAC

Brad Geesaman19:10:43

Fewer built-in roles allow "get/list secrets"

๐Ÿ‘ 1
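
A minimal sketch of that RBAC separation using the official Kubernetes Python client: a namespaced Role that lets an app read ConfigMaps but deliberately omits get/list on Secrets. The namespace and role name are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()                       # or load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="app-config-reader", namespace="demo"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],                    # core API group
            resources=["configmaps"],           # note: "secrets" is intentionally not listed
            verbs=["get", "list"],
        )
    ],
)
rbac.create_namespaced_role(namespace="demo", body=role)
```
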
Lou Sacco19:10:26

This looks interesting for our env: https://rancher.com/blog/2020/runtime-security-with-falco/ a la @pawan.shankar

โœ… 2
โค๏ธ 2
๐Ÿ”ฅ 2
Pawan Shankar19:10:49

https://www.youtube.com/watch?v=u409G5PsO1w this goes into more detail @occasl

๐Ÿ‘ 1
Lou Sacco19:10:44

Do you guys have a plugin for TeamCity (CI/CD)?

Pawan Shankar19:10:01

checking on this one specifically... we have API integrations for many of the CI/CD tools - bamboo, circle, gitlab etc

Pawan Shankar19:10:15

the inline scanner or a direct call to the API will work for TeamCity @occasl

๐Ÿ‘ 1
Derek Weeks, Sonatype / All Day DevOps19:10:31

REMINDER to stay here… Coming up at 12:35pm PT, we have a CTO (@brianf) and CEO (@stephen) taking your questions LIVE on DevSecOps in the VendorDome session. YES LIVE. Ask your questions here and I'll share them live on air with Brian and Stephen

Lauren Hernandez - Sysdig19:10:25

@bradgeesaman & @pawan.shankar are happy to continue the conversations in #xpo-sysdig or answer any additional questions that you might have! Thank you all for attending our session and asking some great questions. 🎉 Want a LIVE demo or to dig deeper into the Sysdig Secure DevOps platform? Join our next demo in 20 min at 1:00pm PDT here: https://sysdig.zoom.us/j/92710825508?pwd=K2UvNjhPKzF3VzJIT2xOamxPM2dyQT09 @eric.magnus will be hosting & happy to show you anything you'd like to see in more detail 😎

Lou Sacco19:10:32

great talk guys!

๐Ÿ”ฅ 1
Brad Geesaman19:10:44

Appreciate that, thank you.

Molly Coyne (Sponsorship Director / ITREV)19:10:35

Welcome @weeks who will be moderating for today's VendorDome Q&A between @brianf and @stephen!

Nick - developer at BNPP19:10:14

Vendordome - is that a Mad Max reference?

Derek Weeks, Sonatype / All Day DevOps19:10:42

If you have questions, share them here for Brian and Stephen

Denver Martin - Sr. Mgr Cloud Ops Infrastructure19:10:04

DevSecOps question --> How do you get InfoSec to help fund joint projects like WAF and other crossover items that DevOps wants to use but are an additional cost…

๐Ÿ‘ 2
Katrina Sison19:10:40

Don't miss your chance to learn how to save on your cloud budget AND meet your performance goals 🤯 Sign up for Opsani Co-Founder & CTO Peter's speaking session Wednesday October 14 @ 2:05pm 💥 https://sched.co/ehCL

Derek Weeks, Sonatype / All Day DevOps19:10:00

We're live: send me your questions for @brianf and @stephen

Derek Weeks, Sonatype / All Day DevOps19:10:18

We're starting with Denver's question

Mark Fuller19:10:19

@weeks Question - Many of the latest DevSecOps tools work poorly or not at all with legacy systems for example cobol on a mainframe. At the same time, these systems run critical applications we all use every day. Do you see this as somewhat of an enforced strangler pattern, or do you think devsecops tools will fill in the gap and keep the legacy systems going for years to come?

1
๐Ÿ‘ 5
Mark Fuller19:10:21

So my understanding from the replies is that it becomes a strangler pattern due to lack of support.

Steve Jones - He/Him19:10:22

Is data privacy/data sec/classification becoming more important to your orgs/customers?

๐Ÿ’ฏ 1
๐Ÿ‘ 1
Pavan Kristipati19:10:17

@weeks Question - How do you see organizations addressing the need to codify the governance (audit/risk) aspect of things? Many organizations have controls clearly called out but the proof / alignment is not easily codify-able.. Any ideas how to codify the governance?

3
Stephen Magill [Sonatype]20:10:06

Not sure if we fully covered this, but automated governance is definitely a trend I hear more and more about at each DOES conference. Also "policy as code". Talk to @tapabrata.pal and @john_z_rzeszotarski if you haven't already.

Pavan Kristipati20:10:47

I appreciate it! Thank you.. will have a chat with these folks.

Aras Kaleda/Change Manager19:10:02

Security is usually going to development; I don't know if that is due to culture or due to other reasons, but how do you turn that around? Change the culture or change the tools?

Derek Weeks, Sonatype / All Day DevOps19:10:55

when you say "security", do you mean the tools? the team? the practice?

Brian Fox Cofounder/CTO Sonatype19:10:30

what did you mean by going to development @arak?

Stephen Magill [Sonatype]19:10:45

@rradclif was who I was referencing in my answer - look up her talks for DevOps meets mainframe wisdom!

Mark Fuller20:10:39

@stephen Thanks. I heard @rradclif speak a few times at past conferences. I actually don't think it is a bad thing to eventually strangle off old systems and technologies due to lack of support. The same argument can be made for finding people with the skills to maintain these old systems - eventually the lack of a talent pool will strangle them if nothing else.

Robyn Talbert, American Airlines20:10:44

Which book were you discussing regarding the mainframe? I do not see us ever getting rid of ours. Too stable and reliable 🙂

Mark Fuller20:10:01

@robyn.talbert, you can find her talks at prior DOES conferences on YouTube. Here is one example: https://www.youtube.com/watch?v=TwkJvsmZpF4

๐Ÿ‘ 1
Santiago Cardona19:10:32

Security should be guaranteed from the start of development. But in "real life" it can be different; is it correct to implement different tools to identify all possible vulnerabilities earlier, statically/dynamically, even if that increases the technical debt? @weeks

Denver Martin - Sr. Mgr Cloud Ops Infrastructure19:10:15

Are there some tools to secure 3rd party integration to our DevOps pipelines? We have been using VPNs to act like submarine doors where both sides can close off each side and limit blast radius. Are there better ways?

๐Ÿ’ฏ 2
Pawan Shankar20:10:08

One approach is to adopt inline scanning in the pipeline/registry. This means you scan directly in your env and only send the results back to a 3rd party tool like Sysdig. You don't have to share images/registry credentials with the tool either. This gives you better security by keeping you in full control of your PII data/image contents @denver.martin

Stephen Magill [Sonatype]20:10:03

I like this approach, @pawan.shankar. And I'm curious about what the submarine doors look like in your process @denver.martin. I'm going to DM about this!

Denver Martin - Sr. Mgr Cloud Ops Infrastructure14:10:17

@stephen we put a Fortinet virtual appliance on our side in AWS, then the 3rd party does as well - they can use Fortinet or any other vendor (Cisco, Pulse, Checkpoint, etc.) - and we then connect via an IPSec tunnel. This means we both can close the connection if needed. The other option was to connect via AWS peering, but that is a lot more open and is controlled by only SG rules. You could do ACLs but then it becomes more management. Like a submarine, both sides can close the door if there are issues.

Denver Martin - Sr. Mgr Cloud Ops Infrastructure14:10:40

More about the Ops side than the Dev side of things.

Stephen Magill [Sonatype]14:10:37

Cool! - that makes sense. Thanks for the info.

Brad Appleton20:10:30

anyone having any luck with a combination of GitOps/DiffOps and Continuous Compliance as a means of eliminating manual change approval/review bottlenecks (like eCABs), in conjunction with docs-like-code and/or pull requests with infrastructure-as-code and even enterprise architecture (with the EA equivalent of 'food-critic')???

Ferrix Hovi - Principal Engineering Avocado - SOK (S Group)20:10:12

A peer review of code before merging to trunk works for both code and IaC. When it has been through two pairs of eyes, it can be considered formally accepted. So, that's how we do it pretty much like you are saying. There is a trial of annotating the repositories so that all CMDB-things get updated accordingly.

Brad Appleton20:10:17

The idea is changes to the infrastructure architecture or enterprise architecture (I suppose even operating-model) could be done in a configuration/architecture-as-code like manner, with pull-requests (DiffOps) and help from automated scanning tools (like tech debt, anti-patterns, etc.) to assist the process and streamline "hard/slow" formal change reviews.

Ferrix Hovi - Principal Engineering Avocado - SOK (S Group)20:10:47

You don't need change reviews if everything those are supposed to do becomes a transparent result of a pipeline.

Laurent Rochette - ServiceNow DevOps20:10:58

Check out ServiceNow's speaking session TODAY at 1:35 PM: From velocity to value – scaling the DevOps impact 📣 @eric.ledyard will be in #ask-the-speaker-track-4 for Live Q&A during the session!

Brad Appleton20:10:13

@ferrix True - you might not need them. Though in the form of pull requests they can be useful for triggering other things (both fully and partially automated) which can provide alerts/flags (like SONARQUBE does for tech debt, only this would be a security/vulnerability/EA equivalent of that for a scanning tool. Some of which exist - yes, Verity/Verily, even some OSS ones)

Ferrix Hovi - Principal Engineering Avocado - SOK (S Group)20:10:38

Yeah, we have SonarQube, ?AST, license checking and all kinds of things in the pipeline.

Stephen Magill [Sonatype]20:10:32

Definitely agree with PRs as a key point to surface code scanning issues (if you have quiet tools that don't generate too many false positives). It's the main point of integration for Muse.

Brad Appleton20:10:44

@stephen your last comment sounded a little like pointing to a security-driven (TDD/BDD-like) approach (if I understood you correctly). That makes a lot of sense

Brad Appleton20:10:42

Thanks @weeks You translated my query very well.

Stephen Magill [Sonatype]20:10:23

@brad499: Yes, focus on having processes that can flag results in the moment on each code change. And if there's a gap (issues you can't flag that way) see if you can add a tool / write a rule to automate that.

Brad Appleton20:10:33

@brianf Yup - you just nailed it (or at least what I was trying to convey!)

โœ… 1
Steve Jones - He/Him20:10:34

Question: Is there any concern with GitOps and auditing? The machines and changes can't be inspected live, only as a config file. Any issues with how auditors can then verify that actions are correct?

๐Ÿ‘ 1
Steve Jones - He/Him20:10:12

Test results? Or testing the config with a new deployment? I'm thinking here of an auditor coming back to check on Jan 1, but asking about a vulnerability or a patch from the previous Oct 1. How can they verify that changes were actually implemented? Certainly checking a VCS might work, but I'm curious if auditors have issues with this.

Ferrix Hovi - Principal Engineering Avocado - SOK (S Group)20:10:56

Log the installed package list after a patching activity and store with other proof

Brad Appleton20:10:01

yeah - I was thinking it's like config-as-code using TDD, CI (with scanning tools), refactoring, with PRs built into the flow (and some config rules to determine what outputs are treated as warnings, vs must-review, etc.)

Ferrix Hovi - Principal Engineering Avocado - SOK (S Group)20:10:53

@steve.jones If you are lucky enough to produce a positive from a vulnerability analysis, you can prove it in staging and even production. So, pretty much the same pattern as coming across a bug and proving it gone with a test.

srujit biradawada20:10:23

Any luck with ChatOps, like automating end-to-end pipelines by integrating chat bots or any AI/ML?

Denver Martin - Sr. Mgr Cloud Ops Infrastructure20:10:33

@steve.jones we have our GitOps create a config capture then put that in a change ticket, then make the change, do another config capture after the change, have that added to the notes of the ticket, then close the change. This way before and after are captured and then we have moved automated changes to std changes so they do not go directly to CAB for approval. For things to be automated they have to be done in non-prod prior to moving to prod.

๐Ÿ‘ 2
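
A rough sketch of that capture/change/capture flow, assuming kubectl for the config snapshots; the attach_to_change_ticket helper and the change number are hypothetical stand-ins for the ticketing integration:

```python
import subprocess

def capture_config(deployment: str, namespace: str) -> str:
    # Snapshot the live configuration as YAML (the "before"/"after" evidence).
    result = subprocess.run(
        ["kubectl", "get", "deployment", deployment, "-n", namespace, "-o", "yaml"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

def attach_to_change_ticket(ticket_id: str, label: str, payload: str) -> None:
    # Hypothetical helper: in practice this would call your ITSM tool's API.
    print(f"[{ticket_id}] attached {label} ({len(payload)} bytes)")

ticket = "CHG0012345"                              # placeholder change number
before = capture_config("web", "prod")
attach_to_change_ticket(ticket, "config-before", before)
# ... apply the automated change here (e.g. kubectl apply / pipeline step) ...
after = capture_config("web", "prod")
attach_to_change_ticket(ticket, "config-after", after)
```
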
Denver Martin - Sr. Mgr Cloud Ops Infrastructure20:10:49

we also made PRs and commits part of the approved change management process…

๐Ÿ‘ 1
Curtis Yanko - Sonatype20:10:46

I had no idea there were 146 tools out there!

Brad Appleton20:10:02

@srujit.biradawada -- ooh! great followup question. (chatops in conjunction with or as an extension of gitops/diffops)

โค๏ธ 1
srujit biradawada20:10:28

I'm waiting for the future where IT can stay in one application and trigger automation or resolve an issue without the need to move between different applications to trigger or troubleshoot. I think the chatbot will be that one application to stay in, with AI/ML integration, or the chat bot will take care of it. Maybe there will be teams to integrate in the backend, but end-to-end is taken care of from one application so we don't have to leave the existing application - just like Steve Jobs envisioned for ads in the application: bringing in the browser to show the ad without leaving the app, then closing the browser in the app.

Stephen Magill [Sonatype]21:10:49

first step toward what you're describing perhaps (and we're working on better and more full-featured interactivity)

Brad Appleton20:10:57

@brianf I think that comment about protecting the environment, and malicious attacks, and AI/ML sounds like an upcoming SyFy movie (or maybe Disney/Pixar) -- maybe starring "the Rock" and Kevin Hart (Central Intelligence 2: DevSecOpsIntelligence (with AI+ML)) 🙂

2
Denver Martin - Sr. Mgr Cloud Ops Infrastructure20:10:20

@brad499 maybe call it TRON…

๐Ÿ˜Ž 2
Curtis Yanko - Sonatype20:10:04

It's a Jetsons future! George pushed a button for 4hrs a day!

๐Ÿ˜‚ 3
srujit biradawada20:10:42

I did R&D on ChatOps by integrating with ServiceNow to create records (RITM/INC), and then the chatbot makes REST calls to different applications and gets the responses based on that. But since the chatbot needed AI/ML for more complex things, I had to put the project aside. I still agree on "one step at a time"; we'll see complete ChatOps in no longer than 10 years

srujit biradawada20:10:48

fingers crossed

๐Ÿ‘ 1
Derek Weeks, Sonatype / All Day DevOps20:10:36

any more questions for Brian and Stephen?

Derek Weeks, Sonatype / All Day DevOps20:10:58

4 minutes left for this live DevSecOps session

Brad Appleton20:10:00

@denver.martin TRONminator 2 - The MatrixOps (Neo resurrected via AIChat with AlanK and Arnold 'the guvernator')

Derek Weeks, Sonatype / All Day DevOps20:10:07

I'll try to squeeze it in

Brad Appleton20:10:43

@weeks Maybe a summary comment from each tying the topic back to the human/empathy side of DevOps? (in the face of AIutomation)

Frotz Faatuai (Cisco IT - he/him)20:10:53

Excellent conversation! Thank you!

srujit biradawada20:10:07

Thanks for the great info.....Great Conversation 😊

Nes Cohen, MuseDev20:10:18

Really enjoyed listening. Sounds like complexity is definitely a problem 😛

Molly Coyne (Sponsorship Director / ITREV)20:10:10

Welcome @eric.ledyard and @richard.hawes for our next session's Q&A. And thank you to #xpo-servicenow!

Derek Weeks, Sonatype / All Day DevOps20:10:35

Big thanks @brad499 for the questions

๐Ÿ˜Š 1
Richard Hawes - ServiceNow DevOps20:10:46

Great session from Sonatype! ServiceNow team is online if you have any questions for Eric and Ben's session that's running now.

Stephen Magill [Sonatype]20:10:59

Thanks @weeks for moderating! I wanted to emphasize Derek's last point on metrics. Don't measure the number of tools you're running - measure their impact and outcomes! Thanks everyone!

๐Ÿ‘ 2
Lou Sacco20:10:23

Shoot nightly builds? How about Continuous Delivery? 😉

Stephen Magill [Sonatype]20:10:48

Haha, indeed @occasl - baby steps 🙂

Stephen Magill [Sonatype]20:10:08

For those who are interested in seeing some code scanning bugs fixed live on a DOES-community open source project, @ncohen will be live-streaming some remediation work on Hygieia and Concord after the next networking break.

๐Ÿ˜Ž 1
Aras Kaleda/Change Manager20:10:35

The DevOps data model does not work with all CI/CD tools - are you limited to those, or do you have some way to integrate?

Laurent Rochette - ServiceNow DevOps20:10:14

no, we have a framework that allows you (or partners) to write your own integrations. What tools do you need to integrate with?

Laurent Rochette - ServiceNow DevOps20:10:20

Jenkins is supported out of the box

Aras Kaleda/Change Manager20:10:17

how is change automated, such as risk assessments?

Richard Hawes - ServiceNow DevOps20:10:46

To add to Laurent's comment - out of the box integrations include ADO, Jenkins, GitLab plus the ability to use our integration model to connect to others. We're adding more over time too.

Richard Hawes - ServiceNow DevOps20:10:30

(That's CI/CD; we also connect to Git repos, planning tools like Jira, etc.)

Laurent Rochette - ServiceNow DevOps20:10:49

you can gather data from your pipeline and other dev tools, as well as from the ServiceNow platform (like incidents, outages, ...), to assess your risk dynamically (as opposed to a pre-approved standard change). Then based on those metrics you can auto-approve (or reject) your change, or decide to make a decision manually (by a human or CAB)
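
A toy sketch of that dynamic risk assessment; all metric names, weights, and thresholds below are invented for illustration (a real implementation would pull them from the pipeline and the platform):

```python
# Hypothetical risk scoring: combine signals from the pipeline and ops history,
# auto-approve low-risk changes, send the rest to a human/CAB.
def risk_score(test_pass_rate: float, recent_incidents: int, change_size_files: int) -> float:
    score = 0.0
    score += (1.0 - test_pass_rate) * 50            # failing tests raise risk sharply
    score += min(recent_incidents, 5) * 8           # recent incidents in the service raise risk
    score += min(change_size_files / 10, 1.0) * 20  # bigger changes are riskier
    return score

def decide(score: float) -> str:
    if score < 20:
        return "auto-approve"
    if score < 50:
        return "manual-review"
    return "reject"

print(decide(risk_score(test_pass_rate=0.98, recent_incidents=0, change_size_files=4)))
```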

srujit biradawada20:10:12

ServiceNow is great because of its flexibility. I was able to create a self-healing application (based on ServiceNow Orchestration): as soon as something is wrong with a server, my automation goes into the server, fixes the issue and lets the users know that the issue is fixed. 😊

๐Ÿ™‚ 1
โค๏ธ 1
christian malone20:10:30

Great use case. Now with AIOps, metrics, logs, and observable data alongside Flow Designer and Integration Hub, it's easier than ever to have more automated ops

โค๏ธ 1
Scott Dedoes20:10:18

curious about your tech stack around feature flags and for canary deployments. Are you using in-house tools for these feature releases or third-party platforms?

Laurent Rochette - ServiceNow DevOps20:10:12

@scott.dedoes is that question for ServiceNow or for @srujit.biradawada?

Richard Hawes - ServiceNow DevOps20:10:44

Hi Scott, great question. (Laurent, I think he's referring to what Ben said). We're working with pretty much any tools that are holding, configuring and using configuration data. The management piece Ben is talking about means we keep a repository of all the configuration information and add access control to that data (instead of having people work directly with the configuration information).

๐Ÿ‘ 1
Richard Hawes - ServiceNow DevOps20:10:56

So a dev working on something moving to production will use our product to make configuration changes and we'll apply intelligent policy to that before passing the change on to the actual deployment tools.

Scott Dedoes21:10:02

Thanks for the follow up and apologies as I'm not an engineer but very interested in learning. So the ServiceNow product doesn't actually deploy the features for customers? Another part of feature flags that my question pertains to is the feature releases for ServiceNow's products. Does the company deploy these using their own feature flag platform or with third party tools? My company is working on an integration with ServiceNow for a mutual customer and just trying to understand the ServiceNow products and release process better. Thanks!

Danny Smith21:10:11

If by ServiceNow products you mean ServiceNow scoped applications and update sets (deployment between ServiceNow instances) we have a CI/CD solution for that which utilizes integrations with some 3rd party tools - there are more details here: https://docs.servicenow.com/bundle/orlando-servicenow-platform/page/administer/integrationhub-store-spokes/concept/cicd-spoke.html

๐Ÿ™ 1
Danny Smith21:10:20

There is a deep dive YouTube video on it here: https://www.youtube.com/watch?v=I9BRmKjc_8s

๐Ÿ™ 1
Katrina Sison20:10:48

👋 Stay on track 4️⃣ for the next session: "Optimizing @ Scale" and learn how to take your optimization from manual and reactive to autonomous and continuous! Attendees will also get a mask and mug from #xpo-opsani! ☕🎭

Molly Coyne (Sponsorship Director / ITREV)21:10:28

Welcome @peter118 for our next session's Q&A! Thank you to #xpo-opsani!

Simple Poll21:10:18

Manual workload tuning is too complex for humans?

Peter Nickolov21:10:29

@mollyc thank you - looking forward to an interesting discussion

2
๐ŸŽ‰ 1
Scott Dedoes21:10:35

Thanks for the great talk @eric.ledyard and Ben Riley!

Simple Poll21:10:08

How often does your organization optimize your application stack?

Simple Poll21:10:51

What is your biggest painpoint with manual tuning?

Simple Poll21:10:02

Top Priorities in your cloud optimization strategy:

Amir Sharif21:10:23

@peter118 Does Opsani integrate with Kubernetes easily? If so, can you outline the process?

Peter Nickolov21:10:35

Thank you everyone for joining this session, please send questions my way

Peter Nickolov21:10:41

@amir Yes. Opsani specifically targets Kubernetes. You install a small controller which discovers applications on the cluster and tags them for optimization. The optimization is done through our SaaS service (so you don't have to worry about all the ML loads 🙂 )

Peter Nickolov21:10:13

Think of it as an additional capability of your cluster

Amir Sharif21:10:50

> discovers applications
All of them that are on Kubernetes? How do I prevent certain apps from being optimized?

Peter Nickolov21:10:57

@amir we support annotations, so apps can opt-in or opt-out by attaching a simple annotation to the deployment object. You can also define the service level objective you want an app to be optimized for (e.g., latency should be below 30 msec)
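
A hedged sketch of the opt-in/opt-out idea using the Kubernetes Python client; the annotation key and value below are purely hypothetical (the real key is defined by the Opsani controller), and the deployment/namespace names are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Hypothetical annotation key/value; check the vendor docs for the actual one.
patch = {"metadata": {"annotations": {"example.com/optimize": "false"}}}

# Explicitly opt this deployment out of optimization.
apps.patch_namespaced_deployment(name="payments", namespace="prod", body=patch)
```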

Cassidy Bodley21:10:02

are cloud governance tools and cost optimization tools autonomous?

Amir Sharif21:10:25

I believe I understand. If I do not annotate, then there is no Opsani optimization; that's the way to exclude certain apps / namespaces. Correct?

Peter Nickolov21:10:50

@cassidy cloud cost governance tools send you reports

Peter Nickolov21:10:48

@amir You got it! When installing Opsani on a cluster, you can define whether you want applications to be onboarded by default (use annotation to opt-out) or require annotation to opt-in.

Peter Nickolov21:10:26

(obviously, you get more optimized by using explicit opt-out only for apps that should be kept fixed 🙂 )

Peter Nickolov21:10:55

Having the opt-in mechanism allows teams to try it out in small steps

Peter Nickolov21:10:53

If you have any additional questions or want to learn more about Continuous Optimization as a Service, here's how you can get in touch: • https://pages.opsani.com/does-2020-live-demo-sign-up • https://pages.opsani.com/does-2020-happyhour (I'll be around as well for another 10-15 minutes here -- then at the Opsani booth)

Katrina Sison22:10:23

Happening now! Opsani's live demo! Learn about autonomous workload tuning @scale! https://us02web.zoom.us/j/4975940985 Join and enter our raffle to win a $200 Giftly gift card 🙂