Artifacts – Mar ’26 Edition
Engineer's Placeholder, AI Notes, Industry Signals
Hello Engineers 👋🏽
Welcome to the March 2026 edition. First of all, thank you all for the great support!
Let’s dive in
Being Pragmatic at Work
Early in my career, as I transitioned away from junior engineering roles, I often felt a strong responsibility to solve problems the right way. If there was a problem, my instinct was to design the most correct and elegant solution possible. As engineers, we are trained to think in terms of craftsmanship: clean systems, automation, and eliminating manual work wherever possible.
But over time, I realized something important was missing from that mindset. Not every problem needs the perfect solution. Sometimes the real questions are:
What does the business actually need right now?
What can we realistically achieve within the timeline?
How often does this problem really occur?
I remember situations where I spent time designing automation for tasks that happened only once a quarter. From a pure engineering perspective, the solution was great. From a practical perspective, it didn’t always create the most value.
Being pragmatic doesn’t mean lowering standards or ignoring good engineering practices. It means understanding context, balancing craftsmanship with impact, timelines, and business priorities.
Sometimes the right solution is automation.
Sometimes the right solution is a quick script.
And sometimes, the right solution is simply doing the task manually once and moving on.
And learning that balance is one of the most valuable lessons in a career.
AGENTS.md
A simple, open format for guiding coding agents. Think of it as a README for AI coding agents: while README.md explains the project to humans, AGENTS.md gives structured instructions to AI tools about how the codebase works, such as setup commands, code style, testing rules, and project conventions.
AGENTS.md acts as a guardrail.
Example: a repository with the following structure:
project/
├── README.md
├── AGENTS.md
├── src/
├── tests/
└── package.json

When an AI coding agent reads the repo, it can follow this workflow:
Step 1: Read AGENTS.md for instructions.
Step 2: Understand the Project Rules
Example AGENTS.md:
# AGENTS.md
## Project Setup
Run the following command to install dependencies:
npm install
Start development server:
npm run dev
## Testing
Run tests using:
npm run test
All new features must include tests in /tests.
## Coding Style
- Use TypeScript
- Use functional React components
- Do not introduce new dependencies unless necessary
## Safe Modification Areas
Agents may modify:
/src/components
/src/utils
Do not modify:
/infra
/config

Step 3: Agent Executes Tasks Safely
Now, when the agent receives a request like “Add a new dashboard component”, it already knows where to add code, how to run tests, and what coding style is expected. This reduces mistakes and improves reliability.
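To make this concrete, here is a minimal sketch of how a tool might fold AGENTS.md into an agent's prompt before acting on a task. The function name and prompt wording are illustrative assumptions, not the internals of any specific tool:

```typescript
// Hypothetical sketch: prepend the project's AGENTS.md rules to the
// task so the model sees them as binding instructions. Not the actual
// implementation of Cursor, Copilot, or Claude Code.
function buildAgentPrompt(agentsMd: string, task: string): string {
  return [
    "You are a coding agent. Follow these project rules exactly:",
    agentsMd.trim(),
    `Task: ${task}`,
  ].join("\n\n");
}

const rules = `## Coding Style
- Use TypeScript
- Use functional React components`;

const prompt = buildAgentPrompt(rules, "Add a new dashboard component");
console.log(prompt.includes("functional React components")); // true
```

The point of the design is that the rules travel with every request, so the agent cannot "forget" project conventions between tasks.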
Tools like Cursor, Claude Code, and GitHub Copilot are all leveraging project-level instruction layers for coding agents. Cursor already documents automatic pickup of AGENTS.md, the GitHub Copilot coding agent now supports it, and Claude Code uses the closely related CLAUDE.md model for persistent project instructions.
tropes.fyi
A website that catalogs recurring writing patterns commonly produced by AI models. It acts as a directory of stylistic “tells” that often appear in AI-generated prose, helping readers, editors, and writers identify when text may be overly formulaic or machine-generated.
tropes.md: a single file containing all cataloged AI writing tropes.
I tested my recent blog post using the tropes.fyi AI Vetter.
Result: human-leaning writing, only one trope detected.
📣 Recap
Sharing a few of my recent posts 📚 in case you missed them.
The Rise of Agent Orchestration
Tags: Agent Orchestration, PaperClip, AI Coworker
Building agents is no longer the differentiator. Low-code and no-code platforms have made it easy for almost anyone to create an AI agent. The real challenge now is making those agents sustainable, scalable, and aligned with broader operational goals. What's emerging in 2026 is a layer above the agents themselves: orchestration. How do you coordinate multiple agents, give them goals, manage costs, enforce governance, and keep humans appropriately in the loop?
(Source: FinancialContent) AI agents are shifting from simple automation to autonomous digital coworkers, with 80% of enterprise apps expected to embed agents by 2026, driven by 46% CAGR growth.
Paperclip - Interesting Open-Source Project in This Space
An Org Chart for AI Agents
Paperclip is an open-source orchestration platform built around a bold premise: running entire companies with little to no human involvement. It lets you structure AI agents into a proper organization with an org chart, roles, reporting lines, budgets, and governance.
You define a company goal (say, “reach $1M MRR with an AI note-taking app”), hire AI agents into roles like CEO, CTO, or Content Writer, approve the strategy, and let the system run. Agents wake on scheduled heartbeats, pick up tasks, delegate work up and down the hierarchy, and report back through a ticket system with full audit logs.
What sets it apart is the governance layer. You sit as the board: agents can’t hire other agents or execute strategy without your approval. Every agent has a hard monthly budget. When it’s hit, they stop. No runaway token costs.
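A hard budget gate like the one described could be sketched roughly as follows. This is a hypothetical illustration of the idea, not Paperclip's actual code, and the `Agent` shape and `canRun` name are my own assumptions:

```typescript
// Hypothetical sketch of a hard monthly budget gate for an agent.
interface Agent {
  name: string;
  monthlyBudgetUsd: number; // hard cap set by the human "board"
  spentUsd: number;         // spend accumulated this month
}

// Refuse any task whose estimated cost would push the agent past
// its cap, so token costs can never run away.
function canRun(agent: Agent, estimatedCostUsd: number): boolean {
  return agent.spentUsd + estimatedCostUsd <= agent.monthlyBudgetUsd;
}

const writer: Agent = { name: "Content Writer", monthlyBudgetUsd: 50, spentUsd: 48 };
console.log(canRun(writer, 1)); // true
console.log(canRun(writer, 5)); // false
```

The design choice worth noting is that the check happens before the spend, which is what turns a budget from a report into a guardrail.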
It’s self-hosted, MIT-licensed, and works with any agent runtime, including Claude, Codex, Cursor, or even a plain bash script. One deployment can run multiple isolated companies in parallel.
AWS brings cloud-native AI agents with memory, tools, and orchestration
Tags: Amazon Bedrock, AWS, OpenAI
Amazon Web Services (AWS) and OpenAI recently announced a strategic partnership that will bring OpenAI’s GPT models and agent capabilities directly into Amazon Bedrock, making it easier for developers to build production-ready AI applications. The collaboration introduces a Stateful Runtime Environment, allowing AI agents to maintain memory, context, and continuity across tasks, tools, and data sources, something that has been difficult with stateless LLM APIs.
In most current AI applications, LLMs are accessed through stateless APIs.
“Stateless” means the model does not remember anything between requests.
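To see why that matters, here is a minimal sketch of the stateless pattern: the caller has to resend the full conversation with every request, because the model keeps nothing between calls. `callModel` is a stand-in for any chat-completion endpoint; it is an assumption for illustration, not a real AWS or OpenAI API:

```typescript
// Minimal sketch of why stateless APIs push memory onto the caller.
type Message = { role: "user" | "assistant"; content: string };

// Stand-in for a stateless chat endpoint: it only "knows" what is in
// the history array passed with this single request.
function callModel(history: Message[]): string {
  return `model saw ${history.length} messages`;
}

const history: Message[] = [];
history.push({ role: "user", content: "My name is KK." });
history.push({ role: "assistant", content: callModel(history) });
history.push({ role: "user", content: "What is my name?" });

// The full history must travel with every request, or the model
// has no idea the name was ever mentioned.
console.log(callModel(history)); // "model saw 3 messages"
```

A stateful runtime moves that bookkeeping server-side, so the agent itself carries memory across tasks instead of the caller replaying it.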
In practice, this means developers on AWS will be able to build AI agents that persist context, coordinate workflows, and operate across enterprise systems at scale.
Thanks for reading! I’d love to hear your thoughts or feedback 🙂
KK