<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Dragan Stepanović</title>
    <description>Should we be having this problem in the first place?</description>
    <link>https://draganstepanovic.com/</link>
    <atom:link href="https://draganstepanovic.com/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Fri, 30 Jan 2026 19:25:00 +0000</pubDate>
    <lastBuildDate>Fri, 30 Jan 2026 19:25:00 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
      <item>
        <title>More code per unit of time? More frequent integration</title>
        <description>&lt;p&gt;The practice of Continuous Integration has traditionally meant integrating code into main at least once per day.&lt;/p&gt;

&lt;p&gt;If you’re generating an order of magnitude more code with AI than before, that definition is no longer accurate - partly because it focused on cadence instead of the inventory of unintegrated code, which I think is much more aligned with CI’s actual goal: flow.&lt;/p&gt;

&lt;p&gt;So, depending on how much more code you’re generating per day, you need to integrate into main proportionally more often in order to still be doing Continuous Integration.&lt;/p&gt;

&lt;p&gt;Otherwise, there’s going to be a lot more pain from merge conflicts as bigger batches of code land on main, increasing the rework rate and rendering any upstream “AI acceleration” useless. It doesn’t matter how much more code you’re able to generate, or how much of it in parallel: all of it needs to land in the same codebase, on a single main branch.&lt;/p&gt;

&lt;p&gt;A good visual metaphor: inflating the elephant while it travels through the boa constrictor. The bad news is that change lead time goes up as soon as you start doing that.&lt;/p&gt;

&lt;p&gt;Also note that this reasoning applies to all stages downstream of integration (deployment, release, customer adoption, customer acceptance of value, etc.).&lt;/p&gt;

&lt;p&gt;&lt;img width=&quot;1200&quot; height=&quot;576&quot; alt=&quot;image&quot; src=&quot;https://github.com/user-attachments/assets/c7a25279-f0b8-440f-a59f-62294a2f31ec&quot; /&gt;&lt;/p&gt;
</description>
        <pubDate>Fri, 30 Jan 2026 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2026/01/30/more-frequent-integration.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2026/01/30/more-frequent-integration.html</guid>
        
        
      </item>
    
      <item>
        <title>Production doesn&apos;t care</title>
        <description>&lt;p&gt;how fast the code has been written.&lt;/p&gt;

&lt;p&gt;Deploying 1000 LoC at once is still proportionally - often exponentially - riskier than deploying 100 LoC, regardless of who wrote it.&lt;/p&gt;

&lt;p&gt;The cost of producing big batches may have dropped with AI, but that doesn’t change the risk inherent in shipping them.&lt;/p&gt;

&lt;p&gt;In fact, improving the efficiency of delivering big batches will likely lead to even bigger batches, as the system compensates for the lower cost.&lt;/p&gt;

&lt;p&gt;And I don’t even want to speculate on the number of latent failure modes that will get activated by the interactions between more, bigger batches deployed in a short time frame. Popcorn time 🍿&lt;/p&gt;

&lt;p&gt;I was trying to find an image for the change in dynamics that will likely follow from the plummeting cost of generating more code with AI, and “elephant travelling through a boa constrictor” nails it.&lt;/p&gt;

&lt;p&gt;Essentially, the elephant just got bigger.
&lt;img width=&quot;1200&quot; height=&quot;576&quot; alt=&quot;image&quot; src=&quot;https://github.com/user-attachments/assets/1083dd75-bf9e-4990-9a7c-10d9f8c3c50e&quot; /&gt;&lt;/p&gt;
</description>
        <pubDate>Sat, 24 Jan 2026 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2026/01/24/production-does-not-care.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2026/01/24/production-does-not-care.html</guid>
        
        
      </item>
    
      <item>
        <title>Heavy AI machinery and design</title>
        <description>&lt;p&gt;When coding, I find AI especially valuable for achieving consistency across the codebase once I have a directional change in design. The “look at this example of how I did it and apply it to all the other cases in the codebase” approach, where AI does all the heavy lifting, drastically cuts the time to implement a change.&lt;/p&gt;

&lt;p&gt;But then, when I take a deeper dive into the design, I often find that I was actually too late in addressing a signal the design was sending me, one I didn’t listen to.
E.g. an already too-high fan-in to a method/class - it’s used in too many places, and a missing domain concept should’ve sat in between to reduce it. Had I had that concept, I wouldn’t have needed the heavy lifting, because the change would’ve been more contained than it was.&lt;/p&gt;

&lt;p&gt;So, my current guiding question is: “Any time you recognize big value in AI doing some heavy-lifting work for you, ask yourself: is there something about the design that you should’ve addressed earlier, before it got cemented to the point where you needed heavy machinery to address it?”&lt;/p&gt;

&lt;p&gt;Reducing the cost of this kind of heavy lifting makes it less likely that the design will be addressed in a timely manner. And that’s something I’m trying to stay aware of as much as I can.&lt;/p&gt;
</description>
        <pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2025/12/14/heavy-ai-machinery.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2025/12/14/heavy-ai-machinery.html</guid>
        
        
      </item>
    
      <item>
        <title>There&apos;s a huge difference between</title>
        <description>&lt;p&gt;optimizing for making bigger changes faster&lt;/p&gt;

&lt;p&gt;and&lt;/p&gt;

&lt;p&gt;optimizing for making smaller changes more frequently.&lt;/p&gt;

&lt;p&gt;Lean towards the latter, even though most of the industry is trying very hard to find ways to do the former.&lt;/p&gt;
</description>
        <pubDate>Sun, 09 Nov 2025 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2025/11/09/there-is-a-huge-difference.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2025/11/09/there-is-a-huge-difference.html</guid>
        
        
      </item>
    
      <item>
        <title>If you can&apos;t change design cheaply</title>
        <description>&lt;p&gt;If you can’t change design cheaply because refactoring skills are lacking, you’re less likely to end up with a design suitable for the problem at hand - a design that emerges from the insights you get as a byproduct of refactoring.&lt;/p&gt;

&lt;p&gt;That’s to say, teams that struggle with refactoring skills most likely struggle with design skills too, because tapping into the rich pool of domain and design insights generated by refactoring in small, safe steps is out of their reach.&lt;/p&gt;
</description>
        <pubDate>Sat, 25 Oct 2025 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2025/10/25/if-you-cant-change-design-cheaply.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2025/10/25/if-you-cant-change-design-cheaply.html</guid>
        
        
      </item>
    
      <item>
        <title>It&apos;s not about reducing the costs per se</title>
        <description>&lt;p&gt;I often ask myself whether a technology, besides lowering costs where lowering them is beneficial, is also lowering costs where lowering them is detrimental.&lt;/p&gt;

&lt;p&gt;And my experience tells me that a lot of the latter fly under the radar in many teams and orgs, deferring painful problems for far longer than they should be deferred.&lt;/p&gt;

&lt;p&gt;In a bunch of cases, the high cost of change serves a purpose. It points to a problem, and addressing that problem reduces the cost that was pointing at it.&lt;/p&gt;

&lt;p&gt;So, the point is not reducing the cost per se. It’s fixing the problem that the high cost of change pointed at, which in turn dissolves the problem of the high cost.&lt;/p&gt;
</description>
        <pubDate>Sat, 27 Sep 2025 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2025/09/27/reducing-costs.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2025/09/27/reducing-costs.html</guid>
        
        
      </item>
    
      <item>
        <title>The dynamics between transaction cost and batch size are simple, but often counterintuitive</title>
        <description>&lt;p&gt;In short, the system compensates for the increase in transaction cost per batch size by increasing the batch size.&lt;/p&gt;

&lt;p&gt;If an e-commerce shop charges a delivery fee, on average you won’t see people ordering stuff that costs less than, say, 2x the fee. Why? Because it doesn’t make economic sense. People won’t order, or they’ll wait until they need to order more/other things and batch them into a single shipment, to compensate for the high delivery fee relative to order value.&lt;/p&gt;

&lt;p&gt;That’s also why, on average, you’ll observe an increase in PR size when the review delay (transaction cost) grows relative to the size of the PR, i.e. the time invested to write it (batch size). Why? To compensate for that increase in transaction cost per batch size.&lt;/p&gt;

&lt;p&gt;In the same way, if tests are slow and/or flaky, and/or deployment takes a long time per amount of invested work (time or number of changes), you’ll see an increase in the number of changes deployed at once (batch size). We also know how this ends…&lt;/p&gt;

&lt;p&gt;“Write smaller PRs!” or “Deploy smaller changes!” sounds like sensible advice, but it doesn’t work when the transaction cost per batch size is high.&lt;/p&gt;

&lt;p&gt;And when Work in Progress goes up - if people in a team are not working together (pair/mob), WIP is already too high - delays start kicking in, because team members become less responsive to each other’s requests as too many things compete for their attention.&lt;/p&gt;

&lt;p&gt;Those delays drive the transaction cost per batch size up, which soon gets compensated by an increase in the average size of the batch (bigger PRs and bigger deployments).&lt;/p&gt;

&lt;p&gt;That’s a balancing feedback loop kicking in to reduce back the ratio between the transaction cost and the batch size.&lt;/p&gt;

&lt;p&gt;Instead, a far better intervention is to strictly protect the responsiveness of the system by keeping WIP low, with people working together. That’s why work produced by teams working together is often of higher quality.&lt;/p&gt;

&lt;p&gt;They are able to make smaller changes and validate them sooner, thus cutting off wrong paths sooner and reducing rework, which then translates into more value-added, productive time.&lt;/p&gt;

&lt;p&gt;Working together works.&lt;/p&gt;
</description>
        <pubDate>Thu, 19 Jun 2025 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2025/06/19/working-together-works.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2025/06/19/working-together-works.html</guid>
        
        
      </item>
    
      <item>
        <title>Managing interdependence over decoupling</title>
        <description>&lt;p&gt;The idea of managing (inherent) interdependence in systems is at least as important as the idea of decoupling (independence) that the software development industry has been so obsessed with since the field’s inception.&lt;/p&gt;

&lt;p&gt;What’s even worse is that putting most of your effort into the latter guarantees reducing a complex reality to something it essentially isn’t.&lt;/p&gt;

&lt;p&gt;And that’s when the reality will painfully surprise you, because it doesn’t care what you think it should be.
It just is.&lt;/p&gt;

&lt;p&gt;Your mental model of it is the one that needs to adjust instead.&lt;/p&gt;

&lt;p&gt;Realizing that you simply cannot decouple things that are inherently interdependent is often one of those adjustments - and so is realizing that you’ll be far better off shifting your interventions toward managing that inherent interdependence rather than trying to decouple it.&lt;/p&gt;
</description>
        <pubDate>Thu, 03 Apr 2025 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2025/04/03/interdependence-over-independence.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2025/04/03/interdependence-over-independence.html</guid>
        
        
      </item>
    
      <item>
        <title>Working together (pair/mob/ensemble) helps with fixing inefficiencies in the system</title>
        <description>&lt;p&gt;in a way that amplifies the pain every single individual felt when working in isolation. Because everyone was suffering alone, the pain wasn’t perceived as high enough to address.&lt;/p&gt;

&lt;p&gt;When working together, the pain of that slow deployment pipeline, or flaky integration tests, or a slow test suite, or waiting for a PR review, etc. becomes very hard to ignore. The reason is that you now have a whole group of people waiting, so the perceived cost of the inefficiencies in the process goes up, making them more likely to be addressed.&lt;/p&gt;

&lt;p&gt;When you address the inefficiency, everyone gets to benefit from it from that point on.&lt;/p&gt;

&lt;p&gt;Also, think about how much cumulative value you get from the improvements in cycle time: not only because you’ve cut the wait times by working together (everyone needed is available in the group), but also because you’ve cut the processing/touch time by fixing the inefficiencies.&lt;/p&gt;

&lt;p&gt;All that translates to shorter feedback loops, less rework, accelerated learning cadence, and value sooner.&lt;/p&gt;

&lt;p&gt;Corollary point: If you want to fix problems, get people to experience them as a group working together.&lt;/p&gt;
</description>
        <pubDate>Tue, 04 Mar 2025 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2025/03/04/pain-together.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2025/03/04/pain-together.html</guid>
        
        
      </item>
    
      <item>
        <title>If an incident report</title>
        <description>&lt;p&gt;contains more action items about adding more gates before a change reaches customers than about reducing the size of the change, you’ll likely end up having to write even more incident reports.&lt;/p&gt;
</description>
        <pubDate>Tue, 19 Nov 2024 00:00:00 +0000</pubDate>
        <link>https://draganstepanovic.com/2024/11/19/if-incident-report.html</link>
        <guid isPermaLink="true">https://draganstepanovic.com/2024/11/19/if-incident-report.html</guid>
        
        
      </item>
    
  </channel>
</rss>
