Wooo! Welcome everyone! We have such an amazing lineup of technology leaders presenting this morning! 🎉🎉🎉🎉 PS: I had so much fun in the Vibe Coding for Leaders workshop with so many of y’all yesterday with @steve.yegge — Feel free to post your creations in the channel! 🎉
Vibe coding workshop output
Prompt: Write a JavaScript program that shows a cube with a colored light source; create slider bars that can change orientation of the polygon.
Follow up Prompt (fix the bugs):
The cube is not rendering in the browser. There are several errors in the developer console:
WebGL: INVALID_OPERATION: uniformMatrix4fv: location is not from the associated program
drawScene @ 3d-cube-with-lighting.html:504
render @ 3d-cube-with-lighting.html:417
requestAnimationFrame
main @ 3d-cube-with-lighting.html:420
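(For anyone hitting the same WebGL error: it usually means the uniform location was looked up on a different shader program object than the one bound with useProgram. A sketch of the usual fix — helper and uniform names here are illustrative, yours may differ:)
```
// INVALID_OPERATION from uniformMatrix4fv() usually means the location
// came from a *different* program object than the one currently in use.
// Fix: look up locations on the program you will draw with, and bind it
// before setting uniforms. (initShaderProgram / uniform names are from
// my version of the generated file -- adjust to yours.)
const program = initShaderProgram(gl, vsSource, fsSource);

// Fetch uniform locations once, from THIS linked program.
const uProjection = gl.getUniformLocation(program, 'uProjectionMatrix');
const uModelView = gl.getUniformLocation(program, 'uModelViewMatrix');

function drawScene(projectionMatrix, modelViewMatrix) {
  gl.useProgram(program); // must be bound before any uniform* call
  gl.uniformMatrix4fv(uProjection, false, projectionMatrix);
  gl.uniformMatrix4fv(uModelView, false, modelViewMatrix);
  // ...bind attribute buffers and gl.drawElements(...) as before
}
```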
Result - 3D Cube with Lighting and Controls
Excited to be back for another action-packed conference and great community! Let's go!!! 🚀 🚀 🚀
This is by far my favorite conference each year, excited to see what the vibes are this year! 🦾 🏆
First time at this conference. So excited to be part of it.
Australia :flag-au: present! Excited to be here!
This conference and community always feels like a family reunion. And I have already met so many new fellow travelers this year. Welcome to the family! :hugging_face:
First time here but would love to meet you @jason.cox . I was in the Vibe Coding workshop yesterday.
👋 Happy to be here from Lisbon, Portugal :flag-pt:
New Zealand :flag-nz: present too! First time attendee and very excited to be here!
Beat me to it Martin!!! Great to see you back here Team Canada sound off!!
Here again, must be the 23rd time assuming Geoff can count :rolling_on_the_floor_laughing:
Germany: 🇩🇪 Present! Happy to be back again!
I'm in! Excited to be back this year!!!
Let’s do a quick pulse check with ETLS (formerly DOES) alumni. Drop a number emoji to show how many times you’ve attended the conference.
First time folks be warned. This community is inclusive and addictive. Once you start engaging it’s hard to stop. And you’ll likely end up presenting a future talk or writing a book.
Flashback to when I worked with Jeff at Excella and we created these stickers and tshirts for AWS re:Invent in 2019 :unicorn_face:
250K merge requests per year; code base 25+ years old. Bruno Passos, Group Product Manager, Developer Experience, http://Booking.com; Laura Tacho, CTO, DX
"Devs focus way too much on fixing bugs" — we need them spend more time on innovating.
@passosbruno would love to hear more about what this is and how you measure it when you get a moment
DX can measure innovation ratio through either Self-Reported or System-Based data. It is the percentage of time allocated towards new capabilities. Stop by the DX booth and we can show you live!!
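And for anyone wanting to see the arithmetic: a toy sketch of the system-based flavor (category names and hours are made up, not DX's actual taxonomy):
```
// Innovation ratio = share of engineering time going to new capabilities.
// Self-reported = survey data; system-based = tagged work items, as below.
// Category names and hours are illustrative only.
const hoursByCategory = {
  newCapabilities: 420,
  bugFixes: 310,
  maintenance: 180,
  incidents: 90,
};

const total = Object.values(hoursByCategory).reduce((a, b) => a + b, 0);
const innovationRatio = hoursByCategory.newCapabilities / total;
console.log(`Innovation ratio: ${(innovationRatio * 100).toFixed(1)}%`); // 42.0%
```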
Ship code 2x faster, with better quality, and 80% innovation time? Anyone think these goals are too lofty or not aggressive enough?
http://Booking.com has an amazing history of experimentation — I read somewhere that it is one of the largest buyers of Google Ads, over $1B per quarter. From a quick Google search: yes, Booking Holdings (the parent company of http://Booking.com) has historically been one of Google's largest advertising buyers, with its performance marketing spend heavily concentrated on Google to attract travelers to its platforms.
How did Booking decide if a legacy app should be replatformed or some of its features incorporated into another modern/strategic app and the legacy decommissioned. Interested in how you thought about the tradeoffs.
Along this line would love to know what processes and practices you use to keep the tech debt addressed in a healthy way that keeps devs focused on innovation instead of bug fixes
@passosbruno observation on need for training was a huge aha moment when I heard him talk this summer — AI is still so janky, it does take some orientation to show how it can be useful, even though initial first attempts may be (wildly) disappointing.
IMHO: The notion that senior engineers asked AI to do something complex and immediately dismissed it permanently was one of the landmark observations of @steve.yegge's first Death of the Junior Developer.
Completely agree as a former senior holdout until recently. Still a common concern of many of my peers
Link to the research of September 2025 https://download.ssrn.com/2025/9/1/5425555.pdf
How do you deal with devs being able to get a better AI for 20 bucks than the internal AI?
Don’t try to replace AI offerings that have entire companies working on them if you’re only dedicating a few people to it. Buy or use a pretrained model that’s optimized for your topic, and use the larger-parameter models vs smaller ones — if the cost/value isn’t there for self-hosting, then find external ones that meet your compliance needs, or ones with self-hosted options that still make sense
Would love to know what metrics you found useful in measuring this stuff in more detail
e.g. it can be hard to get % of AI as not all tools provide it, so good estimation seems key to get accurate data.
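Agreed — one cheap floor estimate we've played with (heuristic only; it assumes your tooling writes a Co-authored-by trailer, which many tools don't):
```
// Heuristic floor for "% of commits with AI involvement": count commits
// whose message carries an AI co-author trailer. Tools that add no
// trailer are invisible here, so treat the result as a lower bound.
const { execSync } = require('node:child_process');

const since = '90 days ago';
const total = Number(
  execSync(`git rev-list --count --since="${since}" HEAD`).toString()
);
const aiCommits = execSync(
  `git log --since="${since}" --oneline --grep="Co-authored-by: .*[Cc]opilot"`
).toString().split('\n').filter(Boolean).length;

console.log(
  `${aiCommits}/${total} commits (~${((aiCommits / total) * 100).toFixed(1)}%) carry an AI trailer`
);
```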
Check out https://newsletter.pragmaticengineer.com/p/how-tech-companies-measure-the-impact-of-ai Laura wrote! In it, she shares the exact metrics that Booking and other top tech companies use to measure the impact of AI
3-5 days!!!! Amazing — and this is what I've heard from many of you yesterday!
A focus on experiential learning has been so impactful for our own AI adoption. Show, don't tell. Example prompts to try aligned with employees' jobs to be done. A sample daily prompt to try just to get people into the AI tools and make them feel less intimidating.
Agree!!! The biggest hurdle is going from never to the first time — and then making a meaningful change to a daily habit
Optionality! ("Bring as many developers to work on one business problem (in parallel)" — choose the best and most promising approach(es)!)
“Leaders can’t sponsor something they don’t understand” – importance of IT people being good communicators, else so many wasted opportunities
Where do you think your most consequential constraint lies?
😬 <- Focus, value & clarity
👍 <- Planning & shaping, prioritization
🙌 <- Development
🔥 <- Review & Testing
❤️ <- Deployment, Release & Validation
🎯 <- Quality
One pattern we’ve found useful is a “2 sprint” program. Sprint 1, cut it in half and do targeted work on AI coding. Sprint 2, consciously apply AI to all your work. We support each team with our passionate early adopters and then provide community support.
Really like this idea Ian. Curious what this looks like for you at the "the boots on the ground" level? AI coaches embedding in teams for a month at a time? Something else?
We’ve done about 10% of our teams. This was centrally run - some coaching, consolidating some material, identifying use cases. We had managers choose use cases of interest and then helped the team work on them during the first week. We moved responsibility to the manager in the 2nd week. For our scaling, we’re making this something managers own - decide when to do it, follow a playbook, reflect on results. So we’re currently doing a train the trainers both for managers (how to help your team) and for growing our body of experts across our larger orgs (we have about 120 scrum teams company-wide).
Back-of-the-napkin math: 250K merges across 3000+ devs is ~80 pull requests per dev per year, 1-2 per week per dev. Is that evenly distributed? Or are the AI-enabled devs doing an order of magnitude more?
Love that they are mentioning quality as something measure — saving time or more code alone are not great metrics Metrics should be in context (multi dimensional)…
So close! LOC is a vanity metric, but gasp so is dev productivity! If you are at a company, it’s all about the $$$! This is setting up my talk nicely :)
More filler faster is just the fast food of software… eventually we’ll get indigestion and our health will suffer
Some call it workslop, some call it “shift right”, some call it “legacy on day 1” but whatever you call it, if it isn’t tied to $$ it won’t last :)
Super interested to learn how they overcame the procurement challenges (as someone who has procurement as a responsibility 😬). We are often stuck between wanting to experiment quickly with different tools and making sure we're okay with vendors' TOS and our data security.
I’m curious to learn more about how centralizing procurement, legal, compliance and security helped remove barriers.
Our learning: repeatable patterns and ownership. We’re typically 45-60 days even for POCs, but I got Cursor through in 10. The actual work across all the teams was < a day. So I started building cover letters, sitting at the middle and helping the vendor prepare for reviews particularly focused on security, privacy, model risk, and legal. Once our owners know what these groups are looking for, we can one shot the reviews. Second, get some global priority so your reviewers know at the company level which ones to do first.
We have mega procurement processes at our company too but have been relatively AI-forward, with specific guardrails for approved tools. My colleague Shiva would have more deets on what code you can feed to which systems… we also use our own GenAI systems for internal use
They are pushing more, but how are the change failure rates?
Ha! @laura507 just answered my question in the talk. Good to know the change failure rate is lower for AI users
We had an interesting finding earlier this month: our overall MR velocity per dev is mostly unchanged (we’re 60%+ AI DAU), but heavy AI users appear to generate LARGER MRs and code reviews are starting to become a bottleneck.
I see that all around, these are good initial metrics, but most of them aren’t AI-native and don’t reflect the fact that LOC have become so cheap
More merged PRs, more innovation, 22.4% lower change failure percentage! 🎉
@passosbruno @laura507 how are Innovation Ratio and Effectiveness defined?
Innovation = new code? Vs fixing issues. They mentioned something earlier in the talk.
You can read more about those metrics https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/?utm_source=app! Innovation Ratio = % time spent on new features Effectiveness = DXI (developer experience index)
I wonder how much this reflects learning curves. Easier projects are easier to get gains from; harder projects may show less gain — but it may also just be that we haven’t yet found the patterns that will help AI solve our harder cases.
The holy grail — can AI help us improve/modernize code bases that are very old and haven’t been touched in a while, where the institutional knowledge has left the company?
I wonder how critical getting legacy context added into legacy apps will be to making that better
ah! this speaks directly to most of my engineers' experience. 99% of our work is already in-flight. adding new features to existing products.
I talked to someone recently who was trying to use it to refactor a legacy code base
LLM training datasets are skewed toward more popular languages => quality of output is different
It’s the context problem for complex applications or problems — plenty of research shows a threshold where performance gets worse and worse as more context is needed for the problem it’s working on. Augmented looks to have tried to solve that by breaking work into lower-context, limited tasks, but that’s a general challenge for AI that humans handle fairly naturally
@steve.yegge talked yesterday about how important modularity is to making AI effective. Not the dominant feature in our legacy code bases!
@eslick 100% — there are workarounds to handle that: RAG that indexes the code, and creating agents that are each in charge of a specific functional part of the monolith, even if it’s all in the same file
The pattern I’ve seen the most (in convos, my direct experience is limited) is using AI & indexing to help document and transcribe the use case and modularize as you go vs. working natively in the original code base (where the assembly of context remains challenging for both humans and AI).
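To make the RAG-over-the-monolith idea concrete, a toy sketch — plain keyword scoring instead of embeddings so it stays self-contained, and monolith.js is a stand-in for your real file:
```
// Toy version of the idea above: index a monolith file by function so an
// agent only sees the slices relevant to one functionality, even when
// everything lives in one file. Real setups use embeddings; keyword
// scoring keeps the sketch self-contained.
const fs = require('node:fs');

function indexByFunction(path) {
  const src = fs.readFileSync(path, 'utf8');
  // Naive chunking: split on top-level function declarations.
  return src.split(/^(?=function\s)/m).map((chunk, i) => ({ id: i, chunk }));
}

function retrieve(chunks, query, k = 3) {
  const terms = query.toLowerCase().split(/\s+/);
  return chunks
    .map((c) => ({
      ...c,
      score: terms.filter((t) => c.chunk.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k); // only these chunks go into the agent's context
}

const chunks = indexByFunction('monolith.js');
console.log(retrieve(chunks, 'billing invoice total'));
```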
I feel like "adoption of GenAI" is a bit too broad to be super useful unless you talk about the form factor(s) that people are using. If your engineers are only using code completions, they probably shouldn't even count at this point. If they're using Chat, then they're solidly on top of last year's form factor, which is... ok. But how many are using agentic coding?
Yah, granularity as to what mode your folks are in matters for sure; we try to focus on where they are using it, on what problem centers. If you have super-advanced agentic devs improving… comment coverage, you probably don’t get the value
We’ve been working on a fluency model to differentiate this exact kind of nuance (and show what’s next/possible).
I'm really interested in how you secured your AI tools and workloads in your organization while also allowing engineers to rapidly try tools and find a path forward.
Small PRs: easier review, and merging with successful tests leads to quick success
How do you deal with inability for customers to consume the pace of changes?
“Ship a app that listens to my mic and posts to slack every few mins with the key learnings and quotable quotes from these amazing talks, use as many emoji’s as possible” whoops wrong window sorry :rolling_on_the_floor_laughing:
How are ai PRs compared to non ai PRs? Lines of code, files touched. Were PRs of similar scope? Are stories written different for teams using ai. Are there any differences in automated testing?
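Those are exactly the cuts worth pulling — a sketch of the raw size stats (lines changed, files touched) per commit; labeling the AI vs non-AI cohorts is left to however you tag them (author, branch prefix, trailer):
```
// Sketch: size profile (lines added/deleted, files touched) per commit,
// so two cohorts of PRs/commits can be compared. Cohort labeling
// (AI vs not) is up to you.
const { execSync } = require('node:child_process');

const log = execSync('git log --numstat --format=@@@%h', { encoding: 'utf8' });

const commits = [];
for (const block of log.split('@@@').filter(Boolean)) {
  const lines = block.trim().split('\n');
  const hash = lines[0];
  let added = 0, deleted = 0, files = 0;
  for (const l of lines.slice(1)) {
    const [a, d] = l.split('\t');
    if (!a) continue;                       // blank separator line
    if (a !== '-') { added += +a; deleted += +d; } // '-' = binary file
    files += 1;
  }
  commits.push({ hash, added, deleted, files });
}

const avg = (k) => commits.reduce((s, c) => s + c[k], 0) / commits.length;
console.log(
  `avg +${avg('added').toFixed(0)}/-${avg('deleted').toFixed(0)} across ${avg('files').toFixed(1)} files per commit`
);
```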
✈️ Next up is @gnuyoga, Technology Director (Innovation & Products) at Travelopia and @ccondo, Client Partner, Equal Experts, here to present: Navigating the AI Evolution at Travelopia
2.5 years ago: started experiments in customer support; but LLMs weren't that great... entering the Trough of Disillusionment. But it wasn't the market cycle — it was a personal journey.
It's great we are acknowledging barriers to adoption in AI here.. procurement, compliance, security, data.. these are all real things in scaling AI use cases out successfully 🙌
"95% of AI projects fail" was consoling. But what if we picked the most important project, where the CEO would call if it went wrong.
Just like scientific research...90+% of what you try doesn't work. The important bit is perseverance and learning until you get to a workable solution
What differentiates the successful from the failed? What lessons can we pull from that to support success before we start?
It really is an emotional journey of creating anything great! For everyone! Thanks for sharing Sree.
Absolutely yes, the tools change hourly. I love tech-enabled: that means they’re focused where they should be!
“Just vibe code and ship it to production” Me 😆 We test human created code, we gotta test the machine created code too! Always be testing friends
Can’t truly “test” non-deterministic outputs; use “evals” or “evaluations” to differentiate testing approaches that anticipate non-deterministic variability, always be evaluating :)
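A minimal notion of an eval, for anyone who hasn't built one — run the same case N times and track a pass rate against cheap assertions (callModel is a stub for whatever client you use):
```
// Minimal eval loop: because output is non-deterministic, run the same
// case N times and track a pass *rate* against cheap assertions instead
// of expecting one exact answer.
const callModel = async (prompt) => '...'; // stub -- swap in your real LLM client

async function evalCase({ prompt, checks, runs = 10 }) {
  let passes = 0;
  for (let i = 0; i < runs; i++) {
    const output = await callModel(prompt);
    if (checks.every((check) => check(output))) passes += 1;
  }
  return passes / runs; // e.g. 0.8 = passed 8 of 10 runs
}

// Assert properties of the answer, not its exact text.
evalCase({
  prompt: 'Summarize this incident report in under 50 words: ...',
  checks: [
    (out) => out.split(/\s+/).length <= 50, // respects the length limit
    (out) => !/as an ai/i.test(out),        // no boilerplate disclaimers
  ],
}).then((rate) => console.log(`pass rate: ${(rate * 100).toFixed(0)}%`));
```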
I think the fact behind the "95% fail" was actually that only 5% of AI experiments made it into production and had a measurable positive impact on revenue — that's a pretty high bar for learning!
This theme of innovation speed needing to meet the adoption speed of users is something we all need to pay attention to. (Progressive Delivery)
Perhaps it's so easy to develop with AI now, that it's less expensive to fail? i.e. Fail fast?
I love how Sree (Travelopia) shared the fears along with the excitement… and also how Bruno mentioned that Booking leaders gave devs (hopefully everyone) a place to talk about issues (including frustrations!) … AI is bringing big change, and big emotions…in humans all over the business… devs, in leaders, in attorneys, in hr humans…so much excitement, geeking out, realistic caution… and also fear and friction.
If I recall correctly: one of their early experiments was making a change to a Drupal application — no Drupal engineers in-house anymore, everyone afraid to touch it, hadn't been changed in years. What would have been a months-long project of finding a consultant... became a change made in a day, by a developer who had never worked on Drupal.
Tech-enabled versus tech company — like that @gnuyoga. I feel those groups aren't mutually exclusive for most of us; we are a mix
You need tools to manage all the AI. Then you need tools to manage the tools.
One of his conclusions: the need for functional experts (Drupal, databases) is much less now.
I think about what you are saying like this: No knowledge -> sufficient knowledge -> expert knowledge AI can take people with no knowledge of Drupal to sufficient knowledge, it might not totally replace expert, but you usually don’t need expert knowledge of a tool to do most changes!
Like so many industry shifts before. Automotive, Farming, any mechanization really. Net new jobs of a completely different nature.
When it comes to adoption, it’s “One person at a time. One team at a time. One brand at a time. There is no secret sauce.”
We’re all going back to school! There are at least 4-5 CS classes worth of knowledge and practice we’re all producing/discovering somewhat independently. Is @steve.yegge and @genek’s new book the first textbook of the AI era? 🙂
"Champions drive adoption" - specifically champions who attend ETLS!
The test they ran as an A/B: 2 devs vs 2 devs + AI, working on the same problem. Six weeks in, on the same Jira ticket: 1st team: 2 weeks; 2nd team: 2 hours... Creates a sense of wonder and joy. "Are you sure?" "We're done!" 🤯
This level of acceleration puts strain on every part of the current value stream (and requires holistic change).
What a result! It's so awesome to have a solid A/B test.
Devs who are equivalent of "designers who insist on pixel-perfect." 😂
(Winslow sneaks a slice of leftover cold pizza)
Platform tech offerings typically enable dev work with speed through guardrails, standards, and consistency. GenAI coding tools are essentially a pseudo developer and pseudo platform offering. What else needs to shift down into the platform?
A platform needs to allow other people to edit the range of possible contexts the code runs in. Like the behavior of a web app built with AI might be fine for people with steady connection, but untenable for people w/ intermittent connection. Is it possible to check and tune that in the platform, or will it require a full re-prompt?
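It does feel checkable at the platform/test layer rather than by re-prompting — e.g. a sketch assuming Playwright (the URL and the "offline" banner text are placeholders):
```
// Sketch: treat "works on intermittent connections" as a platform-level
// check instead of a re-prompt. Playwright can toggle the browser offline.
const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto('https://example.com/app'); // the AI-built web app

  await context.setOffline(true);       // simulate the connection dropping
  await page.reload().catch(() => {});  // a graceful app should still render
  const hasOfflineState = (await page.locator('text=offline').count()) > 0;
  console.log(hasOfflineState ? 'app shows an offline state' : 'no offline handling');

  await browser.close();
})();
```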
But the pixel perfect engineers are the ones that keep the users happy… otherwise, we end up with UI’s that reflect the complexity of the code.
Yup, both skills/approaches needed: be careful to not heap all the glory on the experimenters, polishers are so crucial to build trust!!
"For the first time in my career, I'm asking the business whether they can push to production — and with an evil grin,I know they'll say 'no!" 😂
There's nothing more satisfying to an engineering team than delivering their stuff so fast that the business owners who asked for it are caught completely unprepared. 🏃
yeah the moment you are pushing product and sales to get more product requirements because you've eaten up the whole backlog is probably the closest thing to engineering nirvana I can imagine
There's often an identity challenge here - part of the adoption journey - how do we convince "non-technical" people they can be technical now!
This feels like the next “full stack engineer” - product + engineer enabled by AI for a scope?
XP and DDD allowed engineers to learn some of the business domain. AI will allow them to become something closer to experts in the domain, and to break down those business problems into small contexts for AI to code
Aspiration: in one business line, 5x revenue with engineering headcount fixed at 200
Have to set a timeframe on this — you can do this now with three years of sales and a small team of 10 with a decent-margin product
For software engineers and product leaders, AI will lead the way to deliver amazing things… start small, prove it works for you, and then scale. Don’t rush too much and don’t wait forever!!
Not a draconian boardroom meeting jamming revenue targets down people's throats — instead, it was generated internally by the team, with excitement and joy.
If product people are shifting right to learn tech and tech people are shifting left to learn product. Who is going to make sure we deeply understand the customer, value we are going to create and help make the right strategic bets? This strategic muscle isn’t strong in some orgs and leaders… a gap?
This question right here!!
My hope is that this would lead to cross-pollination and better understanding, not confusion. That said, hope doesn’t usually lead to outcomes 😎.
Do "engr shifting left" mean understanding product mgmt practices or understanding customer needs/pains, understanding more of the strategic context (the "why") behind the work, etc. If the latter, I think that addresses some of the customer / value concerns.
Fostering organizational culture to address this issue is definitely a good discussion topic
Anyone have engineers resisting agentic coding tools as “just fancy autocompleters that replace good coding with a bunch of AI slop”? :thinking_face:
we did - until we started applying AI discovery against a massive legacy codebase (mainframe COBOL). Once engineers started to see how it could be applied to our ancient brownfield, the genuine sentiment changed.
Yes. Requires a new way of thinking for many. Describing outcomes and constraints clearly.
We have seen an interesting trend. The Principal Engineers are leaning in, the Jr Engineers are leaning in and learning. It’s the Sr Engineers that we hear that from a lot; my theory is it’s because they only recently developed their skills and now those skills are obsolete.
Yes - we're hearing that sentiment in pockets at all levels of our engineering staff. (Juniors, Seniors, Staff) We typically operate in high cost of failure systems, so sometimes warranted - but still - it's an easy excuse to make to "just let me keep doing it the way I did yesterday" Curious to hear specific ways how folks are tackling this!
They are “just fancy autocompleters that replace good coding with a bunch of AI slop” if you don’t adjust methodology, metrics, and tools to support coding agents. It’s our place as engineering leaders to create the culture and set practical examples. Creating a layer of tech leads / managers who know how to manage AI-hybrid teams is key IMO
I’ve had different experiences at the same company depending on the devs and the projects:
• Tech lead working on largely complex and poorly written legacy code who hated it, because it never did what he wanted — it always provided weird ways of doing things that didn’t meet modern expectations for service design, etc.
• Junior engineer who used it to explain our codebase for a more or less well-written frontend
• Tech lead on DevOps who started writing his own MCP servers because he loves it so much — but he’s in a well-written codebase with great modularity
I think it’s hugely important to not try to apply it generically and expect the same results — the first tech lead isn’t getting any real value out of it, so he doesn’t trust it — and that’s valid; it’s not going to be a good place to start if you need nuanced, non-standard implementations.
Experience Based Accelerators/Flashbuilds/Dojos. All just hands-on enablement mechanisms. No shortcut around getting in person and working one-on-one to teach a new way of thinking and working
I do think low-stakes workshops help: building things with it with some good guardrails, exploring where it works well and where it doesn’t, and how to interact with it differently than you would with Google — and being very clear about the distinction between agentic coding assistants vs copy-pasting into ChatGPT
We did a workshop where we live vibe coded a chat site together and it was probably the best thing we ever did - lots of great questions, concerns, and peers offering insights from adopters to skeptics, not just a manager asking people to adopt
We have also seen skeptic to champion transformations by doing workshops where they see peers actually producing outcomes within minutes. Hands on with peer encouragement. Seems to make a big difference.
We’ve seen this repeatedly across companies who have been using Unblocked. There are early adopters who are keen (and counter to common beliefs, this isn’t limited to early-in-career folks… sometimes these are very senior folks) who “drag along” a tool into an organization. When that tool unexpectedly provides value to a naysayer, the tide begins to shift. Companies that started with large cohorts of engineers with strong “anti-AI” sentiment have shifted, and the key is to find the scenarios / pain points they care about to kick off that shift in perspective.
“It’s no longer incremental, it’s exponential…”
🌟 Please welcome @ajdomeier, Sr Director of Technology, and Scott Brons, Principal Engineer for AI from SPS Commerce, here to present: Architecting for AI: How Platform, Data, and Cloud Strategies Unlock Enterprise AI Success
We'll also publish the full deck at the end of the conference (if not sooner).
"As leaders in this room, we have enormous obligation to our stakeholders in time of incredible change, stress, fear of being replaced...". @ajdomeier and Scott talking about the reasons for so much fear around AI
Folks may no longer feel like authors, but I think using AI means you become a maestro. How can you use the tools to create a symphony?
Scott Brons: Reason for switching from manager to principal engineer: I spent all my time on customer problems, instead of the health of people in our teams. ❤️
It's always people and culture! One thing I often do: if someone shows me something cool they built, I ask if they used AI for it. Often the answer is "no, of course not, I did that all by myself".
"Rewrite this email like Ted Lasso speaks." C'mon, you know you all did something like this when you started!
Yup. I rewrote some of my travel blog posts in the style of an 18th century aristocrat on a grand tour 😅
Early days, there was a noticeable increase in quality if you added "if this code is wrong, kittens will die" to your prompt.
Totally did something similar when I first started playing with Gen-AI.
Is AI creating opportunities for managers to switch to an AI-based software engineer role? What do you think?
And vice versa, does it free your engineers from toil and allow them to take on a broader, leadership role?
I believe yes, when it comes to business hypothesis validation and innovation. For production-ready code, however, there are more subtleties.
A good technologist knows what to do. A great technologist knows which MCP server to call.
One change required 10 changes in other services -> 2 weeks; with AI: 2 days. Didn't change the architecture: but order of magnitude reduction in coordination cost.
It's such a game changer when you turn on all the security scanners. Toil goes to more than half an engineer's schedule and AI tools with great testing strategies brings it back to manageable
FAAFO! Faster, Ambitious, Autonomous/Alone, Fun, Optionality!
When AI can be used to do repetitive but similar tasks, does it change code reuse through libraries to code reuse through copy/paste? Then you could control the complexity of changing code in a way that is difficult once libraries become highly integrated and stable
A guy on my last team was telling it to use https://context7.com/ for any implementation references so it would use libraries consistently
It’s specifically there for giving consistent context but you could hypothetically do the same thing for your own implementations
Has anyone been doing anything with shared config for AI coding context? Like rules the agents should be following consistently across groups? I.e. “Always be using style A. We will always default to AWS services only. Follow this DDD pattern. Here are the current domains for our project” or context like that to keep agentic delivery consistency?
One of our PEs has started creating a context repository. We feel it is crucial if AI is ever going to make sense of our legacy codebase.
We are experimenting with it now for prompt instructions plus knowledge base curation
We had a guy using https://context7.com/ but that was for general frameworks - I’ve been wondering if having a context file for repos might need to start being a best practice or something
We have a set of teams (3-5 scrum teams) doing something like this. They've used the AI to help them document their patterns/practices/styles, and use the tools' instruction files (e.g. GitHub Copilot instructions) to tell it to read the project documentation as the agent helps them write code. Happy to chat about it sometime if helpful!
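To make the shape of this concrete, a sketch of one shared, versioned rules file prepended to every agent prompt — paths and rule text are just examples; Cursor rules or Copilot instruction files play the same role:
```
// Sketch: one shared rules file, versioned alongside the code and
// prepended to every agent prompt so all teams get the same guardrails.
// The path and rule text are illustrative.
const fs = require('node:fs');

const sharedRules = [
  'Always use style A (see docs/style-a.md).',
  'Default to AWS managed services only.',
  'Follow our DDD patterns; current domains: billing, catalog, fulfillment.',
].join('\n');

fs.writeFileSync('.ai/rules.md', sharedRules); // checked into the repo

function buildPrompt(task) {
  const rules = fs.readFileSync('.ai/rules.md', 'utf8');
  return `${rules}\n\n---\n\nTask:\n${task}`;
}

console.log(buildPrompt('Add a credit-note endpoint to the billing service.'));
```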
Yes! We have a spec-driven workflow that sets up commands, agents, rules. npx installable, versioned, and targets multiple tools and issue-trackers. WIP
This is a use case that a lot of our customers use Unblocked for - surfacing organizational context to coding agents so they have this type of tribal knowledge while they generate code (full disclosure, I’m the founder/CEO of Unblocked - this isn’t meant to be a “plug” though!) My talk at 11:50 actually has a demo of a similar scenario
@michael.s.winslow
One of our PEs has started creating a context repository. We feel it is crucial if AI is ever going to make sense of our legacy codebase.
💯 to this
Built all this before MCP — that's life in AI. Life moves fast! 😂
Testimonial: "When we partnered with the engineering team to implement our agen, we didnt antoipate the leed ol success we'd active. Almost immediately, we saw a noticeable reduction in tickets from our Add-on leam supporting legacy dients. Even belter, the Sciets that id come through included richer troubleshooting details, allowing us to resolve issues taster and more elechely. This eliciency freed our team lo look on product and process improvements. Today, Sparky has become the very first step in our troubleshoding guider
I haven't used it yet. Really need to try something with it... Curious if there are specific use cases you find more helpful than others?
It creates better decks than I do. I just feed it an outline and out pops a great deck
Testimonial: ""We built a CFO-grade ROl model and launched an interactive customer app in one week using AI. Normally, this would take 4-6 months, 500+ hours, and $100K+ of resources. That's a 75-80% reduction in time and cost." - Matthew Brolsma, Product Marketing
Battle cards -- not exactly sure what these are, but they sound awesome!!!
Single-page pieces of content, usually made for sales teams to address competitive talking points
For B2B sales environments, help provide key differentiators vs competition, traps to plant, etc. for sales and/or sales engineering.
Confidence drives behavior, behavior repeated over time drives culture...
Speaking of the power of AI: with inspiration from Gene and Steve at the Vibe Coding workshop yesterday I woke up early this morning and designed a conference prioritization and agenda tracker app using flutter and dart and had it running on my iphone16 within 2 hours. I'm going to use it to track the rest of the talks and events I attend this week and at future conferences. Try it yourself or collaborate here: https://github.com/Josh-thephillipsequation/conferange-agenda-tracker Thank you for reminding this former engineer and current people leader of the love of individually building! So excited to be a part of this community and vibing with you all this week!
We are at the front-edge of "I want something that does that"... and it appears.
Truly, it's really reigniting what drove me to software engineering in the first place: this is the closest feeling to being a wizard you can realistically experience. We say magic words and reality changes! :male_mage:
🔆 Next, we have @stuart.pearce, Portfolio Chief Technology Officer, Hg, who will present: Reinventing your SDLC for the GenAI Era
I really think a post from @mvk842 on how these intro songs get selected would be a popular read. Just putting that out there... 😁
I want something from K-pop demon hunters or Katseye…. Hopefully @mvk842 is looking out for me.
Golden thread of value... Progressive Delivery of that value to customers when they are ready for it... "Golden" by k-pop demon hunters! Boom! 🎤
When one of their portfolio CEOs discovers a 6-month-old startup is (potentially) going after their customers, in what was thought to be a highly defensible, niche, sticky market.
"long tail of customers --- typically thought to be a moat"
What is size of prize: 20-30%? Or 10x? (Love that he called "stuck at 20-30% improvement")
Very funny: "under normal times, one should be happy with the 20-30% gains, on top of the 10% gains they already had. But when you see 10x potential, we needed to dig in deeper."
20 to 30% productivity improvement matches a lot of the research around the impacts of AI coding assistants. You need to break out of the IDE for bigger multiples.
Approaching AI as a "faster horse" is limiting... Harkens back to the invention of the car.
Question for us is —- Is it a completely different way of doing things or a better way (improvement) of doing the thing
His observed minimum requisites: the same enablers that let humans deliver software well — fast feedback loops, independence of action.
There was an amazing post by the founder of Veracode: comparing what % of pull requests have security vulnerabilities by tech stack. If I remember correctly, best was JavaScript, worst was Java -- I think it was 50%! Hypothesis: there's lots of old, insecure Java code that LLMs were trained on.
Makes sense given the age of each - engineers learn what/not to do but it takes time.
"I have a well documented SDLC" sounds like something someone says when they have people who are capable of deducing all the missing context. That's not a luxury you get with AI.
Codify your standards and process - (e.g. make sure your documented SDLC "actually describes" what your engineers have to do), and your AI adoption plan for exponential returns is solid.
💯 there tends to be a huge gap between what people think happens vs what actually happens when you talk to the folks doing the work. @laura507 talked about this as a finding in her team’s research work
"If I turn the people side, which I think is the most important side..." <---
The most amazing moment for me yesterday during the vibe coding workshop: I was struggling to articulate how to do data visualization, hinting at tech stack. Someone said, "I want to visualize the data on port 3005". Brilliant example of saying what you want, and delegating the rest.
Asking engineers to think strategically-> we have to be better about keeping ICs in the loop on our strategy and targets - can’t do this if there’s not transparency and high clarity on the vision and strategy we use to achieve that vision
"Focus shifts to task breakdown..., etc" so we are all project managers in this scenario! (feeling good about my job security 😉)
“The key driver of our incredible exponential revenue growth is driven solely by our SDLC” was that the quote or did I just enjoy that fantastical idea for a moment? 😂
How do we teach the agentic approach to junior engineers with much less experience from a senior engineer who has the battle scars of solving architectural problems?
Theory: The 'mindset shift' is that, the junior engineer that is able to experiment (e.g. an environment with enough guardrails to fail, but not do massive damage) will allow experience to be the best teacher. Senior engineers can help build those environments.
OTOH, I failed to mention for people trying to build an iOS app: "use React Native and Expo" (leaving out Expo put people down a track that wasn't as magical as the one I stumbled on)
💥 I think I may be in trouble. From the Vibe Coding workshop yesterday to the sessions this morning, my brain already feels “full.” Like, dangerously close-to-exploding full. And we’re only on day two… Thursday may require a helmet. Loving every bit of it so far—amazing content, great people, and just the right mix of inspiration + overload. Wouldn’t want it any other way.
@cmshinkle - pretty sure we're all thinking that, bro. Like Jeff said, you're not alone.
"People were once some of our top performers are struggling... this is a very tough place to be."
We habitually miss a piece of the process when we end with “Deploy to Production”. This always feels final and disconnected from the full life cycle. Observability, monitoring, maintenance (updates), security, kubernetes controls, is it used, decommission, and all the ops side of devops. I realize this is assumed and keeps the slide simple, but is it?
Ops...the most often overlooked, but actually the most costly part of the SDLC
Absolutely - I couldn’t get a real process with full complexity to fit in the slide though! Ops, experiment feedback loops and customer feedback all critical
Yes, for sure this is the art of the slide and readability. I leveraged your slide as a point that ops is often forgotten, but that wasn't the point of your slide. So I get it. Yet too often I see it end with those words. Great talk Stuart and thank you for inspiring my thoughts!
One of the challenges with junior developers is that we expect them to excel at managerial-like skills when “leading” coding agents, things like task definition, team communication, and code challenging. Making juniors into managers is usually a bad idea, but the bigger issue is that we rarely provide the right coaching to prepare them for this new reality
No one talking about how as an industry we are actually terrible at coaching people from IC to manager roles
Stuart: Marveling that one engineer delivered 80 PRs in one day — marveling implies that these were valuable, not "AI slop".
I'm very curious how the team keeps up with the velocity of 80 PRs on a daily basis.
How do junior devs learn effectively when good engineering “judgement” comes from past mistakes + AI suggests a lot of these decisions now
Is it that much different than making mistakes and asking for suggestions from other engineers, or desperately searching Internet forums for other people who have solved the problem before?
I feel like we will all find a way to make mistakes even with all these jazzy tools. If 95% of AI projects fail… it still leaves room for junior devs to build the prototype quickly but the time spent making sure it works can provide this training
I love the 80 PRs in a day comment. But also, activity != output != outcomes
This is so true... Was thinking the exact same thing outcomes > outputs. Felt like I was channeling my friend Jeff Patton
It's easy to get hooked on the endorphin rush of activity and output. Moving PRs, moving cards to done, etc. And lose sight of whether we're making an impact for our customers and our company.
So true. I have a whole parallel program that focuses on building the right product versus building the product right (and faster)
✨ Please welcome @topo.pal, Vice President, Architecture, Fidelity Investments, here to present: From Concept to Prototype: Leveraging Generative AI for Rapid Development in an Enterprise
Whenever I think of documenting the SDLC at a very detailed level, I can't help but think of "gathering requirements". We in Tech have to drink our own kool-aid now and get to the same level of detail and specificity that we have been asking our business partners for, for years. And, like I tell my teams all the time, words are hard but critical. I have always seen my role as partially a translator, and this fits right in. 😄
I'm surprised Topo isn't smoking a cigarette, hand shaking, eyes staring off into the distance. "I've seen some things..."
I love this.
I’ve run production and carried a pager. I’ve blown up production.
Battle scars
I can't even count the number of times I've recommended Investments Unlimited 😂
Really!? Can you give me a 2-3 sentence about why you recommended this book in particular?
It paints a picture of a developer friendly & auditor friendly governance process! in my experience, a lot of people think those 2 things are in conflict, so i love pointing people to a novella so they can see the light!
The statement about the artisanal engineer — a word of advice on balance here: please recognize that you can’t remove the need for human innovation, but it should be applied intentionally. It’s an absolutely true point that you’ll have people who want engineering to be an artform instead of a delivery mechanism and aren’t leveraging AI and agents as much as they should be — but there are areas where you need more human innovation, not just tech enablement. So don’t adopt this message so generically that it demonizes human-centric engineering; it’s more about making sure it’s applied where it makes sense IMHO.
You can’t measure the guy using AI to deliver simple iterative changes against the guy writing a novel algorithm that doesn’t exist yet — but do make sure we aren’t artisanal-engineering something incredibly standard like a new form component.
Some areas where AI (in my experience) still has big limitations are data relationship design and, subsequently, the APIs around it — it will not design optimized data structures for what you are trying to build. It’s not “thinking” about systems design optimization even if you ask it to, and even then it doesn’t have enough context — it will design ones that are generic, or common, or statistically more likely given the context. So make sure your expectations as a leader aren’t unrealistic in how you measure performance and adoption, or where you are asking people to apply it.
I make this point to execs by showing two ~1000-line code bases that each drove 100's of billions in value: the original Transformer model & the original PageRank algorithm. Each took years to create, yet each is roughly the amount of code a typical developer (with AI) produces in a day.
I have a theory of leaning into how AI can replace some of the people around the artisanal engineer by giving this engineer (through AI) sufficient skills in UX, Product, Testing, etc… Basically have them focus on using AI to replace the work they don’t want to do so that you are not asking them to initially replace what they love.
> …using AI to replace the work they don’t want to do
An oldie but goodie… eliminate all but the most interesting problems with ~automation~ AI.
Great point. It’s a question of focus - where does that engineering expertise make the biggest difference
SBOM 👏👏 AI code gen is a bit like a cooking show. You talk about the cake, dump in some stuff and mix, and then all of a sudden you have a whole cake! SBOM to me is the detailed shopping list and recipe behind the cooking-show hand wave
NeoDash had so much potential not yet realized: so many questions, some easy — who is using log4j, which business unit, which app manager — but the use cases could be so much broader, with so many more constituencies... so many aspirations!
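The "who is using log4j" query really is pleasantly small once the SBOMs exist — a sketch against CycloneDX JSON (file names and the business-unit map are made up; the real NeoDash setup hangs far more metadata off each component):
```
// Sketch of the "who is using log4j?" query against CycloneDX SBOMs.
// Assumes one SBOM JSON per app plus a simple app -> business-unit map;
// both are illustrative.
const fs = require('node:fs');

const owners = { 'payments-api': 'Retail', 'risk-engine': 'Institutional' };

for (const file of ['payments-api.sbom.json', 'risk-engine.sbom.json']) {
  const sbom = JSON.parse(fs.readFileSync(file, 'utf8'));
  const app = sbom.metadata?.component?.name ?? file;
  const hits = (sbom.components ?? []).filter((c) => /log4j/i.test(c.name));
  for (const hit of hits) {
    console.log(`${app} (${owners[app] ?? 'unknown BU'}) uses ${hit.name}@${hit.version}`);
  }
}
```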
How much of the software we work with fits this? "It works, it's slow, it's clunky, doesn't meet all our needs."
I have an entire team just focused on building the exact same SBOM solution at a company Topo formerly worked at ha, I'm going to have to grab a coffee with him this week to crib some notes ☕ 😂
"it's a lot of work, but it's what you need to do" <-- mantra of the FAAFO life
This sort of AI usage has a lot of value and use cases. Something like SBOM management is often assumed to be a business capability that isn't the sort of thing that will have a whole team, product, and hundreds/thousands of manhours allocated to it. Just barely good enough (JBGE) is very different for internal tooling and capabilities than business-centric deliverables to customers.
without AI one could easily spend 5 days just creating good 🧜♂️ diagrams. 🙂
"Zero linting and code quality issues" — presumably very important in this project, because these are the standards he holds other Fidelity teams to...
I mean, come on! It's so clear that this is the application he's dreamed of and wanted his team to build for years!!!! So cool to see the story of how he brought it to life.
It's so funny — I stopped counting lines of code in June. It just doesn't seem to be all that important any more. 😂
It will be when you have 300 lines of code to maintain instead of 30, so I wouldn’t stop measuring it entirely, just might need a different attribution to its meaning :face_with_monocle:
Our director of design (not a developer) vibe coded a web app to help his design team. Our challenge is helping him figure out where to host it, who maintains it, implementing controls around changes... (we still haven't figured it out)
This is the unlock. How does a team operationally support 10-100x the code and functionality they used to?
Generating code is now easy -- deploying, operating, debugging, and evolving needs similar treatment.
Going from written PRDs to some sample UI to a prototype in code is a big step forward in app iteration cycles… there’s still a whole lot of hard questions that need to be answered before prod
Tech leaders writing code with the goal of getting it into production — that is going to be a new and common dynamic!!!
Love to learn how GenAI is used by design teams… One risk that I see is that all of our GenAI apps have the same UI — reminds me of the days when all apps looked like Bootstrap. Also, will JS eat the world?
First break I have someone meeting me. Lunch ?
Developer + Tester is making a comeback 🙂. (still like creating a TDD loop though..)
I keep hearing this message over and over again...
Diversity in the ecosystem, just like in biology. Something, something... hybrid vigor
5 days to write the code. 15 days to get to production. Reinforces that we need to optimise the E2E flow of value. We can make engineers deliver better sooner safer. But if we don’t optimise the enterprise constraints we will erode the value!
This one is huge - my last company could deliver code to production within 5 minutes of it finishing automated tests - but they didn’t automate or optimize any of the approval process or end to end tests so we were waiting for 24-48 hours to get anything to production and ended up doing weekly when we could have been doing multiple a day - this is where I love DORA metrics as a starting point because it highlights the whole pipeline not just writing code or running unit tests or merging it
Model A writing the code and Model B writing the test cases solves the same problems as developers writing their own tests — love it
AI-driven training and education is a huge space. If upskilling and training is part of an organization's staffing strategy then reducing the often-quoted 10k hours required to be an expert becomes critical.
I was asking in the context of junior engineers who may not have the experience or context to be the human in the loop.
Yeah. How to develop that instinct. Pairing with more senior engineers is one way. Pair-programming with AI and both people. Trio programming?! 🙂
@chuck I think good practices matter more than ever. TDD, juniors and seniors pairing, mentoring for juniors, all that good stuff. Happy to talk over some experiences - just come and find me I'm the tall one
One thing I'm so interested in is knowing when AI has taken you as far as it can with an app - anyone have examples of this?
I’m not sure there is a limit, per-se, more than there is for human-written software. I think it’s more about recognizing when the AI has painted itself into a temporary corner and how to refactor to get out of it.
e.g. accumulation of cruft, loss of modularity, duplication of functionality, costs of testing, etc.
The closest I’ve seen is using a code gen tool (like Cursor) to fix an issue/bug or implement a feature that it can’t get quite right. It will bounce back and forth between “solutions” trying to accomplish the task, ultimately failing.. a form of AI-slop.
git worktrees with multiple agents working the same problem independently also. Choose the best solution (or combine parts of each into one).
@dennis - finding something very similar when I use Cursor - glad to know this is an experience others have too. Excited to see what it might look like in ~6 months time too!
The breakout sessions are starting in 5 minutes. Start navigating your way to whichever session you’re attending. https://devopsenterprise.slack.com/files/U06GCH026KT/F09GAC5FBEJ/timer.png
There must be someone out in the world who is thinking about a GAN approach between the "coding" and the "reviewing" models? (Just like Deep Mind trained AlphaGo by having it play itself over and over) Seems so obvious, but maybe I'm missing something fundamental?
We’ve done this a lot at the prompt/context level. Definitely a useful pattern with some deeper benefits.
As my (computer scientist) Dad used to say, "When you have an exponential monster, you need to get out your logarithmic hammer" Feels like AI creating code is that "exponential monster", and so we need to position another AI on the other side to be the "logarithmic hammer" 🙂
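In miniature the loop looks something like this — adversarial in spirit only, since unlike a real GAN nothing is co-trained; one model proposes, the other critiques, iterate until the critic approves (generate/review are stubs for two different models):
```
// Coder/critic loop: one model writes, another reviews, iterate until the
// reviewer approves or we give up. Adversarial in spirit, but unlike a GAN
// nothing is co-trained -- the pressure is all at the prompt level.
const generate = async (prompt) => '...';           // stub -- coding model client
const review = async (prompt) => 'no issues found'; // stub -- reviewing model client

async function coderCriticLoop(task, { maxRounds = 5 } = {}) {
  let code = await generate(`Write code for: ${task}`);
  for (let round = 0; round < maxRounds; round++) {
    const critique = await review(`Find bugs or risks in:\n${code}`);
    if (/no issues found/i.test(critique)) return { code, rounds: round };
    code = await generate(`Revise to address:\n${critique}\n\nCode:\n${code}`);
  }
  return { code, rounds: maxRounds, note: 'critic never fully satisfied' };
}

coderCriticLoop('parse RFC 3339 timestamps').then(console.log);
```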
@wilsonsd considering the history of the OWASP Top 10 not really changing over the years as people keep re-learning the mistakes of the past, do you feel like an AI agent version of OWASP is likely to have the same outcomes?
Lots to unpack in that question. While there is some evolution of the original OWASP T10, I know the original authors (having spoken to one of them about it) do feel a sense of surprise that people are making the same mistakes. That being said, the list is still super useful as an educational tool. I'm also surprised we haven't made more progress on prompt injections, hallucinations, etc. I think they're constants that will be with us for a while, so getting an intuitive sense of what they mean and how to plan for them is important. I think our work continues to promote that.
All this conference leaves me wanting to do is go hole up in a cabin with a fast internet connection and vibe code for a few weeks!
The breakout sessions are starting again in 5 minutes. Start navigating your way to whichever session you’re attending. https://devopsenterprise.slack.com/files/U06GCH026KT/F09GAC5FBEJ/timer.png
The plenary sessions are starting again in 5 minutes. Start making your way back to the Royal Ballroom. https://devopsenterprise.slack.com/files/U06GCH026KT/F09GAC5FBEJ/timer.png
🌟 Please welcome @barbara.arnst, VP Transformation, and Johan Morel, VP Billing Experience at Telenet, here to present: From Agile 2.0 to "Agile 3.0 +AI": Telenet's Unfiltered Journey
“I’ve been DevOps for like 17 years man… I don’t write my code anymore… I just kick it from AI…”
"Based on Spotify" — what was being sold by consultants everywhere
I'm excited about this presentation, because @barbara.arnst and Johan Morel talk explicitly about how important structure is — in fact, I was just talking with the esteemed Dr. Carliss Baldwin (Harvard Business School) about exactly this! Structure dictates performance! (Dev vs. Ops << DevOps)
Conway's Law in action. Another way of looking at it: if your strategy cuts against the org's structure then it'll ultimately fail
"Guarantee an effortless, digital first billing experience — I have a transparent way to correct bill, secure and easy to pay."
at Datavant, we're also seeing success with driving E2E accountability over billing journeys.
Oh, missed the early results! Will post those shortly! (Cc @annp :)
Challenge: thinking ‘as is’, rather than deeper end to end refactoring to achieve more value.
Antipattern: a thin veneer on top of existing business processes
More and more, the shift to using AI requires culture change. It’s clear that in order to succeed, teams need ownership of their domain.
Interestingly, @jbeutler (OpenAI) is presenting on Day 3, and mentions the T-Mobile customer support case study, which their CEO presented on some months ago.
Interesting: for these high-stakes projects, lots of people have opinions on philosophies, scope, etc., slowing down decision making.
“Not a project with an end date, a product with a lifecycle and clear outcomes”
This approach is all very aligned with Dan North's http://goalwards.co ...
• Transformation as a Product
• Align efforts / organization to Business Goals
• Metrics to measure
This is different from bigdata, cloud, or crypto: we have the c-suite’s rapt attention: we all need to capitalize and drive positive change towards healthier more thoughtful orgs. Use some of the “AI productivity dividend” to make things better!
How many times has a product had organic growth to the point it all came crashing down? That's a recurring outcome when there isn't time put into being deliberate
For companies that have had rapid growth, it's more common than not for them to "run into a wall" where they need to stop and retool:
• Amazon
• LinkedIn
• eBay
• etc.
Simplification -> partitioning: modularizing and linearizing
Just like the reverse Conway maneuver helped organizations in the past… could AI be used in the same way to enable the change to happier, high-performance teams? :thinking_face:
"We amplify... not only what works but what doesn't work." So important! You either succeed or you learn.
“It’s all about being able to experiment, learn and adapt”
⭐ Next, we have @emily.rosengren, Sr Director - Product Engineering, and @lphinz, VP and Group Product Manager at Grainger, here to present: Building High-Performance Product Teams: Grainger's Journey with KeepStock
"The ones who get it done" — frontline healthcare workers, facilitates maintenance managers, safety managers. "Not meant to be seen — if you see them, there is something wrong."
High variety, extremely low lead times ("I need it tomorrow!")
Sometimes even tomorrow is too late if someone needs a part that is stopping work today.
"For the ones who get it done" -- An old adage in IT: the best IT groups appear like they aren't doing much. If they are running around with their hair on fire then you should be very concerned
I find it very fun that we have two talks today about very complicated supply chains — SPS Commerce (between retailers and suppliers) and Grainger KeepStock (between the "doers" and their suppliers) — each with extremely high variety
Feature chasing creates integrators and order takers, not co-creators and real problem solvers.
Grainger representing working backwards 🙌 🙌 🙌
"If you can't actually deliver, it doesn't matter how good your ideas are"
If you can’t reliably build things right, you won’t have the trust to talk about building the right things.
So true! If you can't deliver, it doesn't matter how new and shiny your designs are!
This is why some argue that reliability is a product's most important feature (or at the very least a critical feature)
A bit triggering: when you have a lot of downtime, it's hard to get stakeholders to trust you — even to improve the system.
Deployment frequency is the same as incident frequency. Coincidence? :thinking_face:
When order entry systems are down, people bypass the application and go directly into SAP. 😱
I love the polite language: "There was an, umm, particular piece of technology that wasn't going to serve us long term". (It would be fun to know what they call this tech behind closed doors. 😂 )
It is fascinating that these field engineers serving their customers were the glue that kept the service together — bypassing problems, overcoming tech limitations, sometimes having to stay longer at customer sites. They had to be highly skilled, and there weren't enough of them to go around!
"Clear vision and concepts" -- I'm a big fan of Kotter's stuff in that arena. If you can't state your vision in 5 minutes and generate some amount of engagement or excitement then you need to keep working on it
Or bail out: some ideas just sorta sound good in your head but fizzle in contact with reality.
"Teach business stakeholders how to talk with engineering". 😂
Reminds me of the Office Space scene with the two Bobs. “I’m a people person” :rolling_on_the_floor_laughing:
And equally, teach engineers how to talk to Business Stakeholders. One team, one goal! Two different communication styles and approaches.
@shoup.randy do you have any more info on that amazon course? sounds interesting and valuable.
Just heard about it from ex-Amazonians at Thrive. Let’s ask current Amazonians like @michael.s.winslow .
The 12 week course for directors for that purpose is news to me. We have great onboarding with a program called "Escape Velocity" for new Directors. And a follow-up program after 1 year called "Day 1 Director". But specifically for talking to engineers is news to me.
Seriously: I can only imagine all the trained behaviors among business stakeholders after years and years of frustration: "email them, and make sure to cc their boss, and their boss's boss. Quickly escalate any errors — you've got to catch them in the act." 😢
So many classic fundamental practices represented here! “Not particularly novel” so often means ‘tried, tested, reliable, measurable, operational’ 👏
Complex solutions can sometimes be viewed as experts stroking their own egos. The simple stuff is what actually helps; it's what people value, and it gets a lot more recognition.
"Rotating Engineers"- Curious of other organizations that have measured success of this approach. Are there Key indicators of 'what healthy rotation' looks like? Are there measurements of engineer sentiment that this is beneficial/deterimental?
I'm happy to chat about this, Ben. In summary, we change or rotate team members roughly every 2-3 years, and it's great for both projects and team members.
In my time at Pivotal Labs, we used to encourage rotating engineers. We went so far as rotating pairs (we did XP) as often as every day. It forces you to develop good engineering and communication practices. Without them, you can’t rotate. Treat it like a forcing function. Happy to chat about it.
A general rule of thumb I've heard is no faster than 6-months. Teams need time to gel as they work through the forming/storming/norming/performing thing.
🚙 Please say hello to @kevin.odell, Director of Engineering, and @lisa.a.frey, Agile Services Manager at Toyota Connected. They are here to present: Transforming Chaos into Resilience: The Journey of Toyota's Telematics Safety Platform
It’s heartwarming to hear the stories about how technology helps make a real difference for users when they REALLY need it. 🤍 Thanks @lisa.a.frey and @kevin.odell
I read somewhere that Toyota originally used a service from SiriusXM — a service comparable to the GM OnStar service. A decision was made to bring this capability in-house.
Subtle note slid in there: Engineering responsibility for both building and supporting 🙌
I mentioned to @kevin.odell and @lisa.a.frey that I've talked with so many people at automobile manufacturers; software is so often viewed in a hierarchy:
• engine
• powertrain
• body
• tires
• steering wheel
• buttons on the steering wheel
• :
• :
• :
• floor mats
• software
Makes me wonder about establishing a de facto 911 standard to simplify integration.
I can’t help but think of the hardware constraints imposed by vehicle engineering and the really long term lifecycle of a vehicle (by tech standards).
That last diagram of all the functional specialities required to create/deliver this service is amazing — just shows how the organization must allow the high-bandwidth interfaces between those groups.
(Having a simulator/emulator/device in-house beats shipping a bag of USB drives around. Maybe to different countries, waiting months for test feedback!)
At John Deere, our embedded teams similarly have a wall of display simulators as well as a full tractor cab (just the tractor cab, sitting in the office next to desks) if people want to test out their software in a more real-life environment. It's so cool!
The challenge with defining "right" — everyone has their own opinion. As with many things, it's a people and culture problem more than a technical one, and it often requires a lot of difficult conversations.
“88% of employees at Toyota Connected are Engineers” - yes!! 💪
I love Jidoka. We've been promoting the concept of autonomation on Dev Interrupted as a way to approach AI adoption in a healthy way.
“every sprint review teams/engineers now listen to customer calls” << love this