There has been a lot of discussion about AI use in software development, its general usefulness, productivity boosts and safety concerns.
These are, of course, topics that we also discuss at farbenmeer and I want to share some of our thoughts and insights on this topic.
Why now, what happened?
While we, as a bunch of naturally curious software developers, have been exploring the potential of AI in our daily work for quite some time, the current AI tools (or, to be specific, Claude) do seem to have reached an inflection point in late 2025 where they can finally produce code that is on par with human output for some tasks.
What does this mean for us?
While wild claims about gigantic productivity boosts are definitely debatable, we think there is no question that AI can help and will be part of software development workflows in the future. It has already been present in autocomplete suggestions for years. Agentic workflows (where AI gets instructions and executes them autonomously across files, tool calls etc.) will become more and more common as well.
We see both chances and risks in this development. While no one honestly thinks that AI will replace human software developers entirely any time soon, there is a wide range of scenarios we can imagine for where this development is going. I will discuss some of them in the next section.
Scenarios
The Centaur
AI helps boost productivity. We write no more boilerplate code. Instead we derive concrete requirements for an application and design an architecture in dialogue with an AI tool. We automate writing most code as well as design grunt-work. An AI tool takes care of cramming consistent buttons into layout grids as well as implementing the <div>s, <span>s and database calls to fill those grids with information. AI becomes an abstraction just like a framework. And just like with a framework such as Next.js, our job becomes coding high-level things and diving in (through layers of frameworks down to the HTTP layer if necessary) when the AI gets stuck.
In this case AI makes Developers and Designers more productive and helps them spend more of their time on the most interesting aspects of their work.
Takeaways? I hate searching for that missing semicolon and if AI can do that for me that's probably a good thing.
The Reverse-Centaur
Based on this idea by Cory Doctorow.
AI helps boost productivity. We spend our days frantically switching between multiple terminal sessions as an army of AI agents pings us for the human input they need every once in a while to continue their work. Companies need to somehow 'consume' large quantities of senior developers, who work themselves into burnout while losing all their actual development skill in the process of feeding the AI. When they are burned out, the companies dispose of them with a bag of cash, and off they go to do something with wood.
Takeaways? If this actually goes the way AI companies want investors to think it will go then the future sounds horrible.
The big dumb-down
People increasingly use AI. AI writes all the content on the internet. Hallucinations (as well as thoughts implanted in the models by governments, AI companies and other potentially bad actors) become indistinguishable from reality. No one knows anything for sure anymore. Everything is eventually run by AI. This is obviously a dystopian future and an issue that we as a software company with a commitment to social sustainability want to actively work to prevent.
Takeaways? When integrating AI features in digital products it might be a very good idea to mark AI-generated content as such and encourage users to proofread / double check it.
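As a minimal sketch of what that could look like in practice (all names here are hypothetical and not from any specific library), AI output can carry its provenance through the application so the UI layer can render a visible label and a review hint:

```typescript
// Hypothetical sketch: tag content with its source so the rendering layer
// can mark AI-generated text as such and nudge the user to double-check it.
type ContentSource = "human" | "ai";

interface LabeledContent {
  text: string;
  source: ContentSource;
  disclaimer?: string;
}

// Wrap raw model output in a labeled container instead of passing a bare string.
function labelAiContent(text: string): LabeledContent {
  return {
    text,
    source: "ai",
    disclaimer:
      "This text was generated by AI. Please double-check it before relying on it.",
  };
}

// The renderer decides how to surface the label; human-written text passes through.
function renderPlainText(content: LabeledContent): string {
  return content.source === "ai"
    ? `${content.text}\n\n[AI-generated] ${content.disclaimer ?? ""}`.trim()
    : content.text;
}
```

The point of the wrapper type is that the label cannot silently get lost between the model call and the screen: anything that wants to display the text has to go through a function that knows where it came from.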
The democratization
Everyone can build software products now, as all you need is to write a couple of paragraphs about the application you want. There is just no need for software development. There are types of products where this is already true. You need a simple, single-file landing page with a bit of fixed content? You need a tool that does one very specific thing that is annoying in your current workflow? Claude Code performs very well on this kind of task, and it can also handle the maintenance for simple pages. This can be a real advantage for non-technical people working on such things, as their iteration cycles turn from
open issue with agency
wait for a developer to take it into the next sprint
wait for implementation
validate if it works as intended
feedback
wait again
repeat...
into
write prompt
validate if it works as intended
write next prompt
repeat...
Takeaways? Democratization is a good thing. We (as a software agency) want to come into play when these things turn too large to handle without technical knowledge. This means the things we work on will become more specialized, more technical, more complex. And it probably means that Vibe-Code-Cleanup will become a part of our daily business.
The bubble
This is all really cool and a bunch of somewhat useful tools but it's just not as big as it looks right now. AI can still not produce correct software at scale. AI code factories produce millions of lines of code really fast but it turns out the code itself is actually a liability, not an asset.
We can now create mockups and prototypes much faster which is really cool because we need to spend less work to figure out which parts of the product we are building are actually worth putting in the work.
We keep the autocomplete, that's just undeniably cool.
We use AI for busywork. It helps us find annoying bugs. It helps us keep track of what memory to free at the end of which function. It helps us with things that are annoying for humans but easy for machines to do.
But actually writing critical code in large projects is still a human task, because in large projects the abstractions, the design patterns and the boilerplate already exist, and the real work is in figuring out what to build; the code itself is just an artifact of that process.
This is the scenario I personally believe in until proven otherwise.
Other issues
AI is obviously problematic in various other ways.
It consumes ridiculous amounts of energy burning our planet in the process.
It hallucinates. It always has and always will; the chance of seeing hallucinations will just get smaller over time. This is a probabilistic tool, and the consequence is that we can never fully trust it.
There is no real mitigation for prompt injection (so far). Using AI is always a risk. Running Claude Code with access to a personal computer is a risk. Committing AI code without human review is a risk. And it's not an abstract risk: AI sometimes does dumb and destructive things if you let it.
AI tools send everything they process to the servers running the model. For the most advanced models, these servers are currently all located in the USA. This problem will probably solve itself over time, as open models and providers in the rest of the world appear to be catching up quite fast and will probably be sufficient for pretty much all purposes in the future. Code is one thing, but as soon as we deal with confidential data, US servers become a problem and we need to work with European or even self-hosted solutions.
Strategy
So after all of these assessments, let's talk about how we as a software agency want to deal with AI use in the future.
We do not want to become an AI code factory. We will not try to maximise output by pushing humans out of the loop in the foreseeable future. We believe that this approach produces liabilities, not assets.
We will use AI tools for various purposes such as exploration, experimentation, documentation, code review and other busy work of all kinds.
We will make sure that AI-generated designs and code are checked through human review to prevent AI from becoming an Accountability Sink.
We will review AI use not only in terms of productivity gains but also in terms of ethical factors such as environmental, mental and social impact.
We will give our customers choice and control over to what degree they want to lean in or hold back on AI use.
Wrap up
AI is here, it is useful, and it is risky. The most likely near-term reality is that it becomes a powerful tool for busy-work and prototyping while serious, large-scale software development remains a fundamentally human endeavour. We embrace that role: using AI where it genuinely helps, keeping humans in the loop where it matters, and being honest with ourselves and our customers about what this technology can and cannot do.
Any questions or comments on this topic? Reach out to Michel 😊
