Monthly Archives: January 2026

The Hidden Crisis: Who Teaches AI After Stack Overflow?

About a year ago there was a post about the demise of Stack Overflow, the portal used by many techies for solving common (and not so common) issues that arise from the use of technology. n00bs and experts all mixed together with the common goal of solving issues caused by gaps in (unread) documentation. The platform had been in decline since 2018, with a nice COVID resurgence, and the launch of ChatGPT resulted in a swift drop to almost zero.

The reason is simple: LLMs give answers in your context, whereas in the past I needed a number of Stack Overflow posts, some blog posts, and the product’s own documentation to solve my issues. LLMs combine all this information for me and spit out good-enough answers that let me solve the issues I face. They are an enormous time and energy saver. However, future AI cannot make use of the content from Stack Overflow, as the generation of that content has virtually stopped.

I suggested that better documentation, written by the same tools that finished off Stack Overflow, might fill the hole of content creation and teach future LLMs how to solve problems. Now, hope is not a strategy, so I went on a fact-finding mission. I run about 45 Docker containers divided into about 30 stacks on my home server. I dislike writing documentation just as much as the next person, so I never bothered to document any of it. How nice would it be to have an LLM write it for me?

Recently I subscribed to Claude.ai, and it has an integration with Chrome/Edge that might do the trick. So I navigated to my DocMost wiki and fed it my infrastructure Docker Compose files (think reverse proxy, IAM, Redis, DBs, etc.). In no time it spat out some descriptions of the stack. Nothing too fancy, just okayish content. The real magic happened when I asked it to create a Mermaid diagram: it really understood how everything worked together and produced a fine diagram.

As it looked really promising, I then asked it to generate some content around individual Docker stacks, list dependencies, and describe how to reach the actual apps. It again created beautiful Mermaid context diagrams, showing how each stack worked together with other Docker containers. It struggled horribly when I asked it to create links under the addresses where the apps were reachable. It completely died on me when I asked it to create a summary of all the ports used on the Docker host in one table. As generation slowed down, I tried spinning up a second instance to have two sessions generating content, but the second one gave up a lot faster than the first. It was just not as dedicated and determined, and quite frankly, highly disappointing.
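For what it’s worth, the port-summary table that killed the session is deterministic enough that it doesn’t need an LLM at all. A minimal sketch, assuming compose files named `docker-compose.yml` in one folder per stack and the short `host:container` port syntax (the layout is my guess at a stacks-per-folder setup):

```python
import re
from pathlib import Path

# Matches short-syntax port mappings like "8080:80" or "127.0.0.1:8443:443/tcp"
PORT_RE = re.compile(r'^\s*-\s*"?(?:[\d.]+:)?(\d+):(\d+)(?:/(tcp|udp))?"?\s*$')

def collect_ports(root: str) -> list[tuple[str, str, str]]:
    """Walk compose files under root, return (stack, host_port, container_port) rows."""
    rows = []
    for compose in sorted(Path(root).rglob("docker-compose.yml")):
        for line in compose.read_text().splitlines():
            m = PORT_RE.match(line)
            if m:
                rows.append((compose.parent.name, m.group(1), m.group(2)))
    return rows

def as_table(rows: list[tuple[str, str, str]]) -> str:
    """Render the rows as a fixed-width text table."""
    header = f"{'STACK':<20}{'HOST':>8}{'CONTAINER':>12}"
    body = "\n".join(f"{s:<20}{h:>8}{c:>12}" for s, h, c in rows)
    return header + "\n" + body
```

Pasting the output of `as_table` into the wiki keeps the flaky part (prose) with the LLM and the exact part (ports) with a script.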

Learning points

  • Claude was not strong at determining what is important to document; I had to direct it on what to focus on.
  • Diagramming using code was always my favorite, and obviously, Claude feels the same way. The Mermaid diagrams were brilliantly done.
  • Accessibility for non-MCP interfaces is really important (DocMost fails in that regard). Claude for Chrome/Edge was burning through tokens (usage limits) faster than I have ever seen (I use the Max plan), and still failed to select text in a table to add a simple hyperlink. It was generating and analyzing screenshot after screenshot, trying to select text and add rows to existing tables.

Conclusion
My experience is that, when solving the issues that arise from running :latest versions (auto-updated via Watchtower), I find myself consulting release notes again. This is where LLMs can really help by writing comprehensive documentation, but not without proper supervision. I think, therefore, that there is still some space for a Stack Overflow-type site, but expect few ‘in-person’ views and lots of LLMs/agents looking around for answers to questions. The big question is going to be: what business model will float these websites, as the eyeballs of consumers are unlikely to return?

Next steps:

Install Notion as a wiki; it has native MCP and should therefore be much easier for Claude to communicate with.

Secure Document Automation Made Simple: Ollama + N8N

As it feels like everybody has jumped on the local AI bandwagon by now, I felt a bit left behind. So it was time to dip my toes in the water and avoid my year-end admin chores, the most mind-numbing of which is figuring out the tax mess, which in the Netherlands is almost entirely a snail-mail affair (yes, Dutchies, I am talking about business taxes). The number of blue envelopes shipped to my home address is staggering and messy. A classic ‘unstructured data’ problem, ripe for the plucking with the current state of local AI. It’s time to sort things out using some local LLMs and N8N, keeping things private, secure, and as digital as possible.

First things first: I bought a Canon scanner (MF657CdW) that can scan documents double-sided and store them fully OCR’ed on disk or in the cloud. Next: add some RAM to my 5-year-old desktop so it will happily run gpt-oss:120b, update some broken Docker containers, and finally figure out why my port forwarding never worked (it sent the replies through my VPN). Port forwarding had to work to get OAuth flows running in N8N; the next blocker was my Authelia IAM, which needed some exceptions for callbacks to N8N.
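For anyone hitting the same Authelia blocker: what worked for me amounts to a bypass rule for N8N’s OAuth callback path. A sketch of the relevant `access_control` fragment, where the domain is a placeholder and `/rest/oauth2-credential/callback` is the path N8N uses for OAuth redirects:

```yaml
access_control:
  default_policy: deny
  rules:
    # Let OAuth providers reach the N8N callback without authenticating
    - domain: "n8n.example.com"
      resources:
        - "^/rest/oauth2-credential/callback.*$"
      policy: bypass
    # Everything else on the N8N host still goes through Authelia
    - domain: "n8n.example.com"
      policy: two_factor
```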

Selecting the platform for running LLMs was easy: Ollama, as it can expose the LLM through an API. Installing N8N for agent/workflow tasks was also pretty simple (I already had Postgres and Redis installed). Creating the workflow and connecting all the steps was a breeze: I successfully mailed my bookkeeper and uploaded the documents to their respective folders and, where necessary, to the bookkeeping software.
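The Ollama side can be as small as one HTTP call. A sketch of roughly what the workflow does, assuming Ollama on its default port; the prompt and categories are my own invention, but `model`, `prompt`, `stream`, and `format` are standard fields of Ollama’s `/api/generate` endpoint:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, document_text: str) -> dict:
    """Build a non-streaming generate request that classifies a scanned letter."""
    prompt = (
        "Classify the following scanned letter as one of: invoice, receipt, "
        'tax-notice, other. Reply with JSON: {"category": ..., "summary": ...}.\n\n'
        + document_text
    )
    # format="json" asks Ollama to constrain the reply to valid JSON
    return {"model": model, "prompt": prompt, "stream": False, "format": "json"}

def classify(document_text: str, model: str = "gpt-oss:120b") -> dict:
    """Send the OCR'ed text to the local model and parse its JSON verdict."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, document_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(json.load(resp)["response"])
```

In N8N the same call lives in an HTTP Request node; the function form just makes the payload easy to eyeball.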

Lessons learned:

  • N8N has some strange quirks around OAuth setup: first create the credential, and if the connection fails, try again via an ‘in-private’ browser window.
  • Filename changes when uploading documents are ignored, but using a bit of code to change the filename of the binary before uploading does work … Must be a n00b thing; don’t hesitate to give pointers.
  • Let the LLM worry about its own shortcomings by feeding it its own excrement and referring back to the previously given spec. It took about 5 iterations before it created a prompt verbose enough to survive all the content (yes, even the butterfly my daughter drew for me, which ended up in the snail-mail stack).
  • Old hardware is still good enough for some fun, and when buying new, never skimp on specs, especially RAM.
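The “feed it its own excrement” loop from the third lesson can be sketched as plain code. Here `run_model` and `critique` are hypothetical stand-ins for Ollama calls (my actual loop was partly manual); the idea is just: run the prompt over the sample documents, fold any critique back into the prompt, repeat.

```python
def refine_prompt(run_model, critique, prompt: str, samples: list[str],
                  max_rounds: int = 5) -> str:
    """Iteratively tighten an extraction prompt: run it over sample documents,
    collect the model's critique of failures against the spec, and fold that
    critique back into the prompt until a round passes cleanly."""
    for _ in range(max_rounds):
        # A non-empty critique string means this document was mishandled
        failures = [doc for doc in samples
                    if critique(prompt, run_model(prompt, doc))]
        if not failures:
            return prompt  # the prompt survived every sample
        # Append the critique of the first failure so the next round handles it
        prompt += "\n# Also handle: " + critique(prompt, run_model(prompt, failures[0]))
    return prompt
```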

Next steps:

  1. When the document is a receipt or invoice, create a mutation in the right category in e-boekhouden.nl.
  2. Identify any actions and create Vikunja (my todo-list) tasks for them.
  3. Add all the data to a RAG DB in the hope the LLMs will get clever enough to actually help me optimize my tax strategies in the future (no, we’re not there yet). For now, I will hire Marieke for that.

But first: buy a lot more VRAM to speed up the larger models (gpt-oss:120b currently runs at 8 tokens/s). That is an acceptable speed for background jobs, but when debugging I need things to run a lot faster.