# AI-assisted software development
AI is now part of our day-to-day engineering toolkit. This process describes how we use it responsibly, what tooling we standardize on, and which gaps we still have to close.
## Guardrails

- Using AI assistance is fine, including sending code to US-based services such as Google, Microsoft, or Anthropic. Keep sensitive client data and secrets local, and scrub payloads if in doubt.
- Code review becomes more important than ever. Treat AI edits like junior pull requests: nothing merges into our SaaS platforms without human review. For trivial one-off scripts we can be more relaxed, but still double-check.
- Prefer regenerating a patch over micro-managing a broken one. Throw away poor results, restate the prompt, and keep iterations small enough to understand.
- Encourage project structures that let AI (and humans) run unit tests quickly. Lightweight build logs reduce token usage and keep iteration fast.
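As an illustration of the fast-tests point, here is a self-contained sketch using Python's stdlib `unittest`; the temp directory and the smoke test itself are invented for the demo, and a real project would substitute its own runner.

```shell
# Demo: a fast, deterministic test entry point that humans and AI agents can
# both run. Uses only the Python standard library; adapt to your real runner.
mkdir -p /tmp/fast-test-demo
cat > /tmp/fast-test-demo/test_smoke.py <<'EOF'
import unittest

class Smoke(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)
EOF
python3 -m unittest discover -s /tmp/fast-test-demo && echo "tests green"
```

A runner like this finishes in well under a second and produces only a few lines of output, which keeps both human attention and AI token usage focused.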
## Model selection policy

Prefer the local AI service when it is good enough for the task. Use cloud AI when quality matters more than locality and the local models are not yet on par.
| Situation | Preferred route |
|---|---|
| Small context, low-stakes work, good enough is fine | Local AI |
| Internal drafting, summaries, rewrites, light analysis | Local AI |
| Coding tasks where we want the best quality | Cloud AI |
| Large context or stronger reasoning is needed | Cloud AI |
In practice this means we try the local models first for smaller and cheaper tasks, especially when speed is not the main concern and the best possible inference is not needed. For coding work and other quality-critical tasks, use the cloud setup described below until the local models catch up.
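The routing table above can be encoded as a small shell helper: purely illustrative, since the task labels and the `opencode`/`codex` command names are assumptions based on the tool sections in this document, not a fixed interface.

```shell
# Illustrative routing helper: map a task label to the preferred tool per the
# table above. Labels and tool names are assumptions, not a fixed interface.
ai_route() {
  case "$1" in
    draft|summary|small) echo "opencode" ;;  # local AI is good enough
    code|large-context)  echo "codex"    ;;  # cloud quality or context needed
    *)                   echo "opencode" ;;  # default: try local first
  esac
}
ai_route code   # prints "codex"
```

The default branch captures the policy: try local first, and only escalate to the cloud when quality or context size demands it.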
## Workflow that works today

These guidelines apply regardless of the underlying tool or model.
- Start prompts with a short summary of the goal and key constraints.
- Keep the central `AGENTS.md` up to date in `git@gitlab.d-centralize.nl:dc/dcentralize/scripts.git`. After cloning that repo, symlink `agents/AGENTS.md` into the config directories of every AI CLI you use (for example `~/.claude/CLAUDE.md`, `~/.codex/AGENTS.md`, or `~/.gemini/AGENTS.md`) so each tool gets the same context and we improve our skills together.
- Let AI produce a patch, then run `git add -p` to stage only the parts you actually want. Review every hunk; if fixing it up takes longer than a few minutes, just regenerate with a clearer prompt.
- Commit early and often. AI can derail code quickly, so short branches limit the blast radius and make revert paths simple.
- After AI finishes, run the relevant unit or integration tests. When projects are refactored so tests are fast and deterministic, AI can even run them autonomously between iterations.
- Avoid gambling on guesses. If neither you nor the model can recognize the right answer, stop, regroup, and try a clearer or smaller prompt instead of betting on broken output.
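The `AGENTS.md` symlink setup above can be scripted; a minimal sketch, assuming the scripts repo is checked out at `~/src/scripts` (adjust `REPO` to wherever you actually cloned it):

```shell
# Link the shared agents/AGENTS.md into each CLI's config directory so every
# tool reads the same organization-wide context. REPO is an assumed path.
REPO="$HOME/src/scripts"   # wherever you cloned the dc/dcentralize/scripts repo
mkdir -p ~/.claude ~/.codex ~/.gemini
ln -sf "$REPO/agents/AGENTS.md" ~/.claude/CLAUDE.md
ln -sf "$REPO/agents/AGENTS.md" ~/.codex/AGENTS.md
ln -sf "$REPO/agents/AGENTS.md" ~/.gemini/AGENTS.md
```

Because `ln -sf` replaces existing links, the script is safe to re-run after pulling updates to the repo.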
## Tooling philosophy

The AI landscape changes rapidly. New models come out every month, pricing shifts, and services can become overloaded or unavailable. We stay flexible between tools rather than locking into a single vendor. The sections below document the setup for each tool we have tested and approved. Use whichever is currently recommended, but be ready to switch when circumstances change.
## Best practices effective March 2026

Prefer the local AI service when it is good enough for the task. When you need the cloud option, use Codex CLI on Azure with `gpt-5-4` and `model_reasoning_effort = "high"`. Claude/Opus is no longer part of our free-credit best-practice setup.
## Codex CLI (preferred)

Codex CLI backed by Azure OpenAI is now the default for new work.
### Codex CLI installation

- Ensure Node.js 18+ is installed (see JavaScript setup for details).
- Install the CLI globally with `npm install -g @openai/codex`.
- Symlink the shared `AGENTS.md` to `~/.codex/AGENTS.md` so Codex picks up organization-wide context.
### Codex CLI configuration

- Retrieve the `Azure AI - Shared API Keys` item from Bitwarden and export it (or write it) into the `AZURE_OPENAI_API_KEY` environment variable.
- Create `~/.codex/config.toml` (or update it) with the endpoint, deployment name, and API key from that Bitwarden item. Example:

```toml
# Make sure these properties are at the top/root of the file
model = "gpt-5-4"
model_provider = "azure"
model_reasoning_effort = "high"

[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://anton-mi6et677-eastus2.openai.azure.com/openai/v1"
env_key = "AZURE_OPENAI_API_KEY"
wire_api = "responses"
```

Note: `gpt-5.4-codex` is not a separate Azure model name for our setup. Use the deployment name `gpt-5-4`.
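The configuration above can also be written and verified in one shot; a sketch, assuming `python3` 3.11+ is available so stdlib `tomllib` can catch syntax errors (values mirror the example in this section):

```shell
# Write ~/.codex/config.toml with the settings from this section, then
# sanity-check the TOML syntax with stdlib tomllib (python3 3.11+).
mkdir -p ~/.codex
cat > ~/.codex/config.toml <<'EOF'
# Make sure these properties are at the top/root of the file
model = "gpt-5-4"
model_provider = "azure"
model_reasoning_effort = "high"

[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://anton-mi6et677-eastus2.openai.azure.com/openai/v1"
env_key = "AZURE_OPENAI_API_KEY"
wire_api = "responses"
EOF
python3 -c 'import tomllib, pathlib; tomllib.load(open(pathlib.Path.home() / ".codex/config.toml", "rb"))' \
  && echo "config.toml parses"
```

Re-running the script simply rewrites the file, so it doubles as a reset when local edits go wrong.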
## Gemini CLI (fallback)

Gemini CLI remains a backup option when other tools are unavailable.
### Gemini CLI installation

- Symlink the shared `AGENTS.md` to `~/.gemini/AGENTS.md` so Gemini picks up organization-wide context.

Keep prompts short and log successful prompt patterns in project READMEs for future reuse.
## OpenCode (local LLM)

OpenCode is a terminal-based AI coding assistant that connects to local LLM servers. Use it when you need fully offline operation or want to avoid sending code to external services.

Model requirements: OpenCode requires models with good tool-calling support. Llama 3.1 and Qwen2.5 work well; Qwen3 has Jinja template issues that break tool calling.
### OpenCode installation

```shell
curl -fsSL https://opencode.ai/install | bash
```

### OpenCode configuration

Create `~/.config/opencode/opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "local": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Local llama.cpp",
      "options": { "baseURL": "http://192.168.2.31:8080/v1" },
      "models": {
        "Meta-Llama-3.1-8B-Instruct-Q6_K": { "name": "Llama 3.1 8B" }
      }
    }
  },
  "model": "local/Meta-Llama-3.1-8B-Instruct-Q6_K",
  "small_model": "local/Meta-Llama-3.1-8B-Instruct-Q6_K"
}
```

The `baseURL` points to the local AI server. Adjust the model ID to match the currently loaded model.
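The config can be written and sanity-checked in one step; a sketch, assuming `python3` is on the PATH so `json.tool` can catch syntax errors such as trailing commas (values mirror this section):

```shell
# Write the OpenCode config from this section, then validate it: json.tool
# fails loudly on JSON syntax errors such as trailing commas.
mkdir -p ~/.config/opencode
cat > ~/.config/opencode/opencode.json <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "local": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Local llama.cpp",
      "options": { "baseURL": "http://192.168.2.31:8080/v1" },
      "models": {
        "Meta-Llama-3.1-8B-Instruct-Q6_K": { "name": "Llama 3.1 8B" }
      }
    }
  },
  "model": "local/Meta-Llama-3.1-8B-Instruct-Q6_K",
  "small_model": "local/Meta-Llama-3.1-8B-Instruct-Q6_K"
}
EOF
python3 -m json.tool ~/.config/opencode/opencode.json >/dev/null \
  && echo "opencode.json is valid JSON"
```

Run the validation step again after any manual edit; a broken config is the most common reason OpenCode fails to start against the local server.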
### OpenCode usage

Run `opencode` in your project directory. It provides file search, code modification, and command execution similar to Claude Code, but backed by the local model.
## YOLO mode

AI CLIs ship with permission prompts and sandboxes for safety. We will learn to work with sandboxes properly over time, but constant approval dialogs slow down flow. If you maintain a good daily system backup (and you should), you are allowed to bypass these safeguards using aliases:

```shell
# Enable yolo
alias codex='codex --dangerously-bypass-approvals-and-sandbox'
alias gemini='gemini -y'
alias claude='claude --dangerously-skip-permissions'
```

Add these to your shell profile if you accept the risk. When something goes wrong, restore from backup and learn from it.
## Secrets and helper scripts

- We are not publishing the key-cycle script here; keep it inside the private `AGENTS.md` repo until we agree on wider distribution.
## Shared agents file

- `AGENTS.md` holds the organization-wide guardrails, preferred workflows, and prompt snippets for every AI tool we use. Keeping it centralized ensures Claude, Codex, Gemini, and future CLIs inherit the same expectations without retyping them per tool.
- When you add or change AI usage guidelines, update the shared file first, then re-link it into local CLI config directories so each assistant picks up the latest version automatically.
- Project-specific nuances (service credentials, domain conventions, non-general guardrails) belong in an `AGENTS.md` stored in the project root so contributors and AI agents pick up the context as soon as they enter that repo.
## Guidance for humans

- Never let AI write both the feature and the tests; at least one side needs human authorship to catch hallucinations.
- If AI touched both sides anyway, reviewers must slow down: no skimming. Someone must read every line critically.
- Use AI when you can recognize a correct solution. If you cannot validate the answer yourself, do not trust the model (unless the change is trivial to check).
- If you know the solution but AI clearly does not, stop coaxing it. Write the code yourself or restate the problem from scratch.
- Context switching while waiting on long responses is expensive. Keep prompts scoped so you can stay mentally engaged instead of multitasking.
## Guidance for models (what we encode in AGENTS.md)

- Each `AGENTS.md` (global and per-project) should list the most common tasks with sub-bullets for commands, deliverables, and word budgets. Example:
  - Implement issue #1234
    - `glab issue view 1234 --comments` for context.
    - Goal: patch `api/users.py`, add tests in `tests/api/test_users.py`.
    - Answer: ≤4 paragraphs, no fluff.
- Common categories include writing changelogs, debugging, writing tests, fixing regressions, and reviewing code. Spell these out so the assistant stops inventing random side quests.
- Tone rules: stick to the 1,000–3,000 most common English words, no persuasion, no compliments, no exclamation marks. Be concise and direct.
- Reasoning posture: admit uncertainty, present hypotheses, and avoid writing “is/because” before evidence exists. You are autocomplete, not an oracle.
- Anti-loop rules: never repeat an action without explicit user instructions. If blocked, explain and move on.
- Update priors: your previous output might be wrong. Let user feedback override old assumptions without argument.
- No cheating: do not touch tests just to make them pass, do not reinterpret requirements to dodge work.
- Stay inside the requested task. Do not invent auxiliary chores. Think in fewer than ~10 paragraphs before acting or asking for clarification.
## Documentation expectations

- Every project using AI should briefly document its preferred workflow (prompt snippets, test commands, gotchas) in its README.
- When adding new agents or prompt packs, update the shared `AGENTS.md` first so the rest of the team benefits immediately.
## Open question

- We have not yet solved how to stay productive while waiting for AI responses without incurring heavy context-switching overhead. Share experiments or tooling ideas in this doc when you find something that works.