The Dark Side, ACIs and Too Many Layers
As I prepare to take some time off and soak up some ☀️ in Greece, I’m sharing a few thoughts from the week.
The Dark Side of Software Development

Over the past few weeks, I’ve covered some topics related to team structures, AI agents, and productivity. The focus has been on supporting fast flow within teams: structuring teams so that they are not over-encumbered and individuals don’t burn out. I’ve touched on DORA metrics in the past, and within peer groups and related publications, the key signals DORA highlights appear to align well with fast flow, Kanban, and increased agility within teams. That is why I was quite surprised that Dr Junade Ali CEng FIET, principal investigator of “The Dark Side of Software Development”, concluded that
“Given that software engineers place priorities elsewhere than speed and given risk tolerance differs by industry; the use of the DORA “Four Key Metrics” as a blanket measure of software delivery performance should be discontinued, instead using metrics which are suitable for the risk/reward appetite in a given environment”
This study also investigates perceptions of wrongdoing and whistleblowing among software engineers in the UK. It reveals a stark fear of retaliation from management and colleagues, while highlighting the continued use of warranty clauses in settlement agreements to work around whistleblower protection laws, even in industries where regulators have banned such practices.
The full study is available here: https://engprax.s3.eu-west-2.amazonaws.com/The+Dark+Side+of+Software+Development.pdf
Fractal Patterns in Agent-Computer-Interfaces (ACIs)
Through webinars, conversations with peers, and insights from thought leaders at The Pragmatic Engineer, I’m pondering the similarities between how we organise human teams and the structure AI-assisted development tools take.

In my article on “Team Topologies,” I discussed how the book by Matthew Skelton and Manuel Pais outlines three distinct team types: Stream-aligned, Enabling, and Complicated Subsystem. Each type serves a specific purpose in delivering value, providing specialised knowledge, or handling complex system components. Similarly, in my piece on “Topologies of Generative AI,” I explored Michael Spiteri’s three-layered AI solution: Assistants, Researchers, and Librarians, which work together to provide accurate and relevant information to users.
What’s striking is how closely the ACI of “SWE-agent”, described by Orosz and Nilsson, mimics this AI agent-librarian approach. The ACI acts as an Assistant, relying on underlying models and data (Researchers and Librarians) to interact with development environments, browse and edit files, run tests, and submit solutions.
This fractal pattern, the repetition of similar structures at different scales, suggests that effective principles of team organisation and collaboration apply both to human teams and to the design and implementation of coding agents and ACI systems.
Knowledge Black Holes
I accept that we build technology in layers. If you play any survival game, it’s quite fun to see how quickly you can progress through a technology tree, in some cases mixing old tech with new, and in others just building out until you’re producing so many boxes of screws per second that you need entire factory floors to store them (that game is Satisfactory, in case you’d missed it).
In the real world, though, we don’t progress as fast, nor can we simplify knowledge into button clicks or recipes; it takes whole careers to perfect. With the increasing sophistication and autonomy of developer tools, I am starting to feel there may be “too many developing layers” in our technology tree, which raises concerns about potential “knowledge black holes” within software engineering teams.
How many of your devs know how to dump a production database from a container environment? Or, even better, how many know how to build their own server from a bare-metal OS install?
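For those wondering what that first task even looks like, here is a minimal sketch for a Postgres database running in Docker. The container name (“app-db”), database name (“app”), and user are placeholder assumptions, not a recipe for any specific setup; it is shown as a dry run that prints the command rather than executing it, so you can check it against your own environment first.

```shell
#!/bin/sh
# Hypothetical example: dump a Postgres database out of a Docker container.
# CONTAINER, DB_NAME and the "postgres" user are placeholders for this sketch.
CONTAINER="app-db"
DB_NAME="app"
OUT_FILE="backup_$(date +%Y%m%d).sql"

# Streaming the dump to the host (via > on the host side) means the
# backup survives even if the container is later removed.
# Dry run: remove the leading "echo" to actually execute the dump.
echo docker exec "$CONTAINER" pg_dump -U postgres "$DB_NAME" ">" "$OUT_FILE"
```

The key design point is that `pg_dump` runs inside the container (where the client tools and socket live), but the redirect happens on the host shell, so the file lands outside the container’s ephemeral filesystem.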
What technologies need to phase themselves out fast for newer ones to take over, and which have become so convoluted that it’s time to just say, nope … ☕
