# AI-assisted software development
AI is now part of our day-to-day engineering toolkit. This process describes how we use it responsibly, what tooling we standardize on, and which gaps we still have to close.
## Guardrails

- Using AI assistance is fine, including sending code to US-based services such as Google, Microsoft, or Anthropic. Keep sensitive client data and secrets local, and scrub payloads if in doubt.
- Code review becomes more important than ever. Treat AI edits like junior pull requests: nothing merges into our SaaS platforms without human review. For trivial one-off scripts we can be more relaxed, but still double-check.
- Prefer regenerating a patch over micro-managing a broken one. Throw away poor results, restate the prompt, and keep iterations small enough to understand.
- Encourage project structures that let AI (and humans) run unit tests quickly. Lightweight build logs reduce token usage and keep iteration fast.
## Workflow that works today

Universal workflow guidelines, regardless of the underlying tool or model:
- Start prompts with a short summary of the goal and key constraints.
- Keep the central `AGENTS.md` up to date in `git@gitlab.d-centralize.nl:dc/dcentralize/scripts.git`. After cloning that repo, symlink `agents/AGENTS.md` into the config directories of every AI CLI you use (for example `~/.claude/CLAUDE.md`, `~/.codex/AGENTS.md`, or `~/.gemini/AGENTS.md`) so each tool gets the same context and we improve our skills together.
- Let AI produce a patch, then run `git add -p` to stage only the parts you actually want. Review every hunk, and if it takes longer than a few minutes to fix up, just regenerate with a clearer prompt.
- Commit early and often. AI can derail code quickly, so short branches limit the blast radius and make revert paths simple.
- After AI finishes, run the relevant unit or integration tests. When projects are refactored so tests are fast and deterministic, AI can even run them autonomously between iterations.
- Avoid gambling on guesses. If neither you nor the model can recognize the right answer, stop, regroup, and try a clearer or smaller prompt instead of betting on broken output.
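The symlink step above can be scripted. A minimal sketch, assuming the scripts repo is cloned to `~/scripts` (adjust `repo` to your actual checkout):

```bash
# Link the shared AGENTS.md into each AI CLI's config directory.
repo="$HOME/scripts"  # assumed clone location of the scripts repo
for target in "$HOME/.claude/CLAUDE.md" "$HOME/.codex/AGENTS.md" "$HOME/.gemini/AGENTS.md"; do
  mkdir -p "$(dirname "$target")"            # config dir may not exist yet
  ln -sf "$repo/agents/AGENTS.md" "$target"  # -f replaces any stale link
done
```

Re-run it after pulling the repo to a new location; `ln -sf` overwrites stale links.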
## Tooling philosophy

The AI landscape changes rapidly. New models come out every month, pricing shifts, and services can become overloaded or unavailable. We stay flexible between tools rather than locking into a single vendor. The sections below document setup for each tool we have tested and approved. Use whichever is currently recommended, but be ready to switch when circumstances change.
## Best practices (effective December 2025)

We have set up private models on Azure for both Claude (Opus 4.5) and Codex (GPT-5.1-codex), backed by a decent amount of free credits through 22 July 2026. At the moment, Opus 4.5 is regarded as the best model for programming, so prefer Claude Code for new work.
## Claude Code (preferred)

Claude Code is Anthropic’s official CLI for Claude models.
### Claude Code installation

- Ensure Node.js 18+ is installed (see the JavaScript setup for details).
- Install the CLI globally with `npm install -g @anthropic-ai/claude-code`.
- Symlink the shared `AGENTS.md` to `~/.claude/CLAUDE.md` so Claude picks up organization-wide context.
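As commands, the steps above look roughly like this (flag names are stable, but verify with `claude --help` if your version differs):

```bash
node --version                            # should print v18 or newer
npm install -g @anthropic-ai/claude-code
claude --version                          # confirms the CLI is on your PATH
```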
### Claude Code configuration

- Retrieve the `Anthropic claude d-centralize credentials on Azure` item from Bitwarden. It provides four environment variables that need to be defined (e.g. in your `.bashrc`):
  - `ANTHROPIC_FOUNDRY_API_KEY`
  - `CLAUDE_CODE_USE_FOUNDRY`
  - `ANTHROPIC_FOUNDRY_RESOURCE`
  - `ANTHROPIC_DEFAULT_OPUS_MODEL`
- Create `~/.claude/settings.json` with the following content:

```json
{
  "model": "claude-opus-4-5",
  "alwaysThinkingEnabled": false
}
```
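A `.bashrc` sketch with placeholder values only; copy the real values from the Bitwarden item:

```bash
# Claude Code via the Azure-hosted deployment; all values come from Bitwarden.
export ANTHROPIC_FOUNDRY_API_KEY="<api key from Bitwarden>"
export CLAUDE_CODE_USE_FOUNDRY="<flag value from Bitwarden>"
export ANTHROPIC_FOUNDRY_RESOURCE="<resource name from Bitwarden>"
export ANTHROPIC_DEFAULT_OPUS_MODEL="<model id from Bitwarden>"
```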
### Claude Code usage

Run `claude` in your project directory to start a session. Claude Code supports agentic workflows, file editing, and shell command execution out of the box.
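For example (the `-p`/`--print` flag is Claude Code’s one-shot, non-interactive mode; double-check `claude --help`, since flags can change between releases):

```bash
cd ~/projects/my-service    # hypothetical project path
claude                      # interactive session in the repo root
claude -p "Summarize the failing tests under tests/"   # prints one answer and exits
```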
## Codex CLI (fallback)

Codex CLI backed by Azure OpenAI was previously the default. It remains available as a fallback when Claude is unavailable or for projects already using it.
### Codex CLI installation

- Ensure Node.js 18+ is installed (see the JavaScript setup for details).
- Install the CLI globally with `npm install -g @openai/codex`.
- Symlink the shared `AGENTS.md` to `~/.codex/AGENTS.md` so Codex picks up organization-wide context.
### Codex CLI configuration

- Retrieve the `Azure openai key` item from Bitwarden and export it (or write it) into the `AZURE_OPENAI_API_KEY` environment variable.
- Create `~/.codex/config.toml` (or update it) with the endpoint, deployment name, and API key from that Bitwarden item. Example:

```toml
model = "gpt-5.1-codex-2"  # Replace with your actual Azure deployment name
model_provider = "azure"
model_reasoning_effort = "medium"

[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://anton-mi6et677-eastus2.cognitiveservices.azure.com/openai/v1"
env_key = "AZURE_OPENAI_API_KEY"
wire_api = "responses"
```

## Gemini CLI (fallback)
Remains a backup option when other tools are unavailable.
### Gemini CLI installation

- Symlink the shared `AGENTS.md` to `~/.gemini/AGENTS.md` so Gemini picks up organization-wide context.

Keep prompts short and log successful prompt patterns in project READMEs for future reuse.
## YOLO mode

AI CLIs ship with permission prompts and sandboxes for safety. We will learn to work with sandboxes properly over time, but constant approval dialogs slow down flow. If you maintain a good daily system backup (and you should), you are allowed to bypass these safeguards using aliases:

```bash
# Enable yolo
alias codex='codex --dangerously-bypass-approvals-and-sandbox'
alias gemini='gemini -y'
alias claude='claude --dangerously-skip-permissions'
```

Add these to your shell profile if you accept the risk. When something goes wrong, restore from backup and learn from it.
## Secrets and helper scripts

- We are not publishing the key-cycle script here; keep it inside the private `AGENTS.md` repo until we agree on wider distribution.
## Shared agents file

- `AGENTS.md` holds the organization-wide guardrails, preferred workflows, and prompt snippets for every AI tool we use. Keeping it centralized ensures Claude, Codex, Gemini, and future CLIs inherit the same expectations without retyping them per tool.
- When you add or change AI usage guidelines, update the shared file first, then re-link it into local CLI config directories so each assistant picks up the latest version automatically.
- Project-specific nuances (service credentials, domain conventions, non-general guardrails) belong in an `AGENTS.md` stored in the project root so contributors and AI agents pick up the context as soon as they enter that repo.
## Guidance for humans

- Never let AI write both the feature and the tests; at least one side needs human authorship to catch hallucinations.
- If AI touched both sides anyway, reviewers must slow down: no skimming. Someone must read every line critically.
- Use AI when you can recognize a correct solution. If you cannot validate the answer yourself, do not trust the model (unless the change is trivial to check).
- If you know the solution but AI clearly does not, stop coaxing it. Write the code yourself or restate the problem from scratch.
- Context switching while waiting on long responses is expensive. Keep prompts scoped so you can stay mentally engaged instead of multitasking.
## Guidance for models (what we encode in AGENTS.md)

- Each `AGENTS.md` (global and per-project) should list the most common tasks with sub-bullets for commands, deliverables, and word budgets. Example:
  - Implement issue #1234
    - Run `glab issue view 1234 --comments` for context.
    - Goal: patch `api/users.py`, add tests in `tests/api/test_users.py`.
    - Answer: ≤4 paragraphs, no fluff.
- Common categories include writing changelogs, debugging, writing tests, fixing regressions, and reviewing code. Spell these out so the assistant stops inventing random side quests.
- Tone rules: stick to the 1,000–3,000 most common English words, no persuasion, no compliments, no exclamation marks. Be concise and direct.
- Reasoning posture: admit uncertainty, present hypotheses, and avoid writing “is/because” before evidence exists. You are autocomplete, not an oracle.
- Anti-loop rules: never repeat an action without explicit user instructions. If blocked, explain and move on.
- Update priors: your previous output might be wrong. Let user feedback override old assumptions without argument.
- No cheating: do not touch tests just to make them pass, do not reinterpret requirements to dodge work.
- Stay inside the requested task. Do not invent auxiliary chores. Think in fewer than ~10 paragraphs before acting or asking for clarification.
## Documentation expectations

- Every project using AI should briefly document its preferred workflow (prompt snippets, test commands, gotchas) in its README.
- When adding new agents or prompt packs, update the shared `AGENTS.md` first so the rest of the team benefits immediately.
## Open question

- We have not yet solved how to stay productive while waiting for AI responses without incurring heavy context-switching overhead. Share experiments or tooling ideas in this doc when you find something that works.