I'm really proud of this one. And it might ring begrudgingly true for many of you https://leaddev.com/technical-direction/why-developers-and-their-bosses-disagree-over-generative-ai
I found that GitHub Copilot helped my productivity initially, but it quickly became the most annoying pair programmer possible, constantly interrupting my thought process. I also found myself waiting for the auto-completion, which slowed me down. The suggestions were often extremely wrong, so overall it became a net negative for me, and now I only use it for very repetitive tasks. Some of the surveys I've seen are extremely biased, since they ask how much it has improved my productivity with no option for it hurting my productivity or happiness. I currently use it to clear blockers faster when learning a new language or framework. It lets me get moving faster and get some early wins, then take a step back to review the documentation and best practices. That last step is critical, since some of the things I learn from genAI are wrong or harmful, and also because I won't remember the things I "learned" unless I put some work into it. It's also very helpful for troubleshooting issues since I can have a conversation with it, though I've had it gaslight me many times. The note about "how developers think" is interesting. I've found that I am less patient after I've used genAI, which affects the way I approach problem solving even when I'm not using it. I'm concerned that it will lead to weaker critical thinking while also adding to tech debt.
That's honestly something I hadn't thought of before: that of course, by being right there where you are working, it could be interrupting your natural brainstorms. And wow. Brilliant thoughts Chris. @laura507 @nicolefv Not being great with data, just with analyzing it, I hadn't really thought before about how most studies look for productivity gains, not losses. Although DORA is measuring that loss.
Similarly, I have been disenchanted with Gen AI but thought perhaps I simply was not using it correctly. This thread helped me see it differently... I was recently handed an initiative to evaluate how AI coding agents like OpenAI's Codex can improve corporate-standard development processes. But based on this thread, I have redefined "improve" to "influence," since there are potentially negative, if unintended, side effects.
It can be used well, but chat-based ones are basically worthless to me except for some troubleshooting. I see them as potential solution generators that need strict guard rails and automated checks/evals to review answers before I see them. This is a bit like stochastic gradient descent with a different function to check the fit. I've had decent luck with Claude Code if I use a canned set of base guidance (I have docs for our CI/CD pipeline, operational readiness checks, iterative development with tidy/refactor/functional changes kept separate, code styles, testing requirements, etc.). I then use /init to build the current understanding of a project and edit the CLAUDE.md to correct issues. After that's in place I use plan mode (shift-tab) and discuss the problem I want to solve, then when I'm happy with the design I'll let it make changes. Using languages with strict checks helps a lot, e.g. I work with Go a lot and I have it run go vet, staticcheck, and govulncheck after every code change, along with running the tests (rough sketch of those checks below). After a larger batch of changes I have it run CodeQL locally. It's also useful for automating toil, e.g. I give it a list of gcloud commands it can run to check things in the environment, including pulling logs. I also have it update CLAUDE.md or linked files with what it learns from each session, with my review of course. This helps others on my team. Overall I am faster and better than it for projects I know well, and I'm better at making things that are maintainable and operationally ready. For disposable one-time tools in Go it does very well with those guard rails. For things that will last, I'm better off doing it myself.
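To make the "strict checks after every change" idea concrete, here is a minimal sketch of a guard-rail runner in Go. It assumes staticcheck and govulncheck are already installed and on PATH; the file name, check list, and output messages are my own illustration, not taken from the workflow described above.

```go
// verify.go: a minimal sketch of "run every check after each change" guard rails.
// Assumes staticcheck (honnef.co/go/tools) and govulncheck (golang.org/x/vuln)
// are installed and on PATH; adjust the check list to your own project.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Each entry is one check the coding agent must pass before its change is accepted.
	checks := [][]string{
		{"go", "vet", "./..."},
		{"staticcheck", "./..."},
		{"govulncheck", "./..."},
		{"go", "test", "./..."},
	}
	for _, c := range checks {
		fmt.Println("running:", c)
		cmd := exec.Command(c[0], c[1:]...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("check failed:", c, err)
			os.Exit(1) // stop at the first failing guard rail
		}
	}
	fmt.Println("all checks passed")
}
```

The point of a single runner like this is that the agent (or a pre-commit hook) only has one command to invoke, so the checks can't be skipped piecemeal.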

Also, from the security side:
https://www.bleepingcomputer.com/news/security/ai-hallucinated-code-dependencies-become-new-supply-chain-risk/
https://www.linkedin.com/posts/georgzoeller_how-stupidly-easy-is-it-to-put-a-persistent-activity-7348770387016507394-qP-i
https://www.infosecurity-magazine.com/news/atlassian-ai-agent-mcp-attack/
https://invariantlabs.ai/blog/mcp-github-vulnerability
https://www.csoonline.com/article/4005965/first-ever-zero-click-attack-targets-microsoft-365-copilot.html
https://www.linkedin.com/posts/georgzoeller_open-ai-might-have-just-killed-1000-ai-agent-activity-7305597397454176256-GPhe/
And misc issues:
https://www.linkedin.com/feed/update/urn:li:activity:7294811958384435200/
https://www.linkedin.com/posts/georgzoeller_pretty-brutal-assessment-of-the-capacity-activity-7338237929342779392-9W0f/
If I may say, the METR study brings me a bit of joy as it is a big FU to companies doing preemptive layoffs, planning to replace devs with AI
Ugh. I saw something about that and the use of AI to automate some of that work. I stopped paying a dime to Amazon in my personal life a while ago, and am glad to be migrating off AWS for my platform at work.
Next AWS incident report after an outage: FAFO wasn't a great plan after all.