Introduction
This is the structure I’ve been using in my projects and it’s been working well. It’s not the only way to do it, and probably not the best for every type of project. What works for a SaaS application might not make sense for an embedded system or a data science project.
That said, if you're building fullstack applications with AI assistance, especially with tools like Claude, Cursor, or Windsurf, this approach might save you a lot of headaches. The idea is to share what I learned by making mistakes (and by getting things right) so you can adapt it to your context.
The central point is simple: bad structure for humans is bad structure for AI. But with an aggravating factor: when AI gets lost, it doesn’t ask questions. It invents. And then you discover three commits later that half the code is following one pattern and the other half a completely different one.
Subagents: The Foundation of Everything
Before talking about folder structure, monorepos, or anything like that, we need to talk about subagents. This is the most crucial phase of the project: if you mess it up here, you'll pay the price later in massive refactoring, absurd token consumption, and a project that slowly comes apart at the seams.
Subagents are features present in Claude (Projects with custom instructions), but also in other tools like Cursor (rules), Windsurf (cascade rules), and even ChatGPT (custom GPTs). The naming varies, but the concept is the same: you define specific instructions that AI should follow in the context of that project.
Why is this crucial?
Remember the article about guard rails? This is where you define that. Naming, architecture, code patterns, folder structure, commit conventions. Everything you don’t define now, AI will invent. And it’s too creative for our own good.
The problem isn't AI making bad choices. The problem is it making different choices with each interaction. One day it creates components in PascalCase, the next in kebab-case. One moment it uses named exports, the next default exports. And by the time you realize it, you have an architectural Frankenstein on your hands.
What to put in subagents?
Technical guard rails:
- Languages and versions (TypeScript 5.x, Node 20+)
- Frameworks and their conventions (Next.js App Router, React Server Components)
- Required libraries (Zod for validation, Prisma for database, etc.)
- Naming patterns (camelCase for variables, PascalCase for components)
Architecture:
- Expected folder structure
- Separation of responsibilities (where business logic goes, where UI goes)
- Import patterns (absolute paths vs relative)
- Organization of types and interfaces
Code conventions:
- Syntax preferences (arrow functions vs function declarations)
- Error handling
- Logging and debugging
- Comments (when and how)
Practical examples:
```markdown
# Frontend Agent

## Stack
- Next.js 14 (App Router)
- TypeScript 5.x
- Tailwind CSS
- shadcn/ui

## Conventions
- Components in PascalCase
- Component files with .tsx extension
- Server Components by default, 'use client' only when necessary
- Props always typed with interface (never type)

## Structure
- `/app` for routes
- `/components` for reusable components
- `/lib` for utilities
- Absolute imports using @/

# Backend Agent

## Stack
- Node.js 20+
- Fastify
- Prisma ORM
- PostgreSQL

## Conventions
- Routes follow the REST pattern
- Validation with Zod on all routes
- Errors always in the format { error: string, details?: unknown }
- Structured logs with pino

## Structure
- `/routes` for route definitions
- `/services` for business logic
- `/repositories` for data access
- Never mix business logic into routes
```
The cost of not doing this right
I’ve worked on projects where I skipped this step thinking “I’ll define as I go”. Result: by the third feature, AI was creating structures completely different from the first two. I had to do a refactoring that took two days and about 500k tokens just to standardize what already existed.
Rework isn’t just wasted time. It’s lost context, it’s bug risk, it’s mental confusion. And with AI, it’s literally money coming out of your pocket in tokens.
Instructions that save rework
Besides guard rails, there’s a category of instructions that make a brutal difference in generated code quality: tool and library preferences.
AI has a natural tendency to reinvent the wheel. It’ll create a modal component from scratch when you already have shadcn/ui installed. It’ll write SQL migrations manually when your framework has specific commands for that. It’ll implement JWT authentication by hand when you could use NextAuth.
Framework commands:
If you’re using Rails, Django, Laravel, or any framework with robust CLI, make this explicit:
```markdown
## File creation

**ALWAYS use framework commands:**
- Models: `rails generate model User name:string email:string`
- Controllers: `rails generate controller Users index show`
- Migrations: `rails generate migration AddRoleToUsers role:string`

**NEVER create these files manually**
```
For Next.js with Prisma:
```markdown
## Database

- Migrations: always use `npx prisma migrate dev --name <descriptive_name>`
- NEVER edit generated migration files
- NEVER create SQL files manually
- Schema changes always go in `schema.prisma`, followed by a migration
```
Specialized libraries:
Make clear which problems already have ready solutions:
```markdown
## Required libraries

- **Validation**: Zod (never manual validation)
- **Dates**: date-fns (never moment.js, never manual manipulation)
- **Forms**: react-hook-form + zod resolver
- **Tables**: TanStack Table (never a custom implementation)
- **Modals/Dialogs**: shadcn/ui Dialog
- **Toast notifications**: sonner
- **Global state**: Zustand (only when necessary, prefer server state)

**If the task involves any of these areas, USE the library. Don't reimplement.**
```
Why this matters:
Without these instructions, AI goes for the path that seems most direct at the moment. Creating a .rb file manually seems simple, but you lose automatic validations, framework conventions, and integration with other tools.
Worse: when you need to add a field later, AI won’t know it has to generate a migration. It’ll just edit the file, and you discover the problem in production.
How to structure these instructions
Create specific sections in your subagent:
```markdown
# Backend Agent

## Stack
[...]

## Conventions
[...]

## Commands and tools

### Database
- ORM: Prisma
- Migrations: `npx prisma migrate dev`
- Seed: `npx prisma db seed`

### File creation
- Routes: create in `/routes/<resource>.ts`
- Services: create in `/services/<resource>.service.ts`
- NEVER use automatic boilerplate generators

### Libraries for common problems
- Validation: Zod
- Logging: pino
- Testing: vitest + supertest
- Dates: date-fns
- Slugs: slugify
- UUIDs: crypto.randomUUID() (native in Node 20+)

**Golden rule: if there's a consolidated library, use it. Don't reimplement.**
```
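One concrete instance of the golden rule, since it costs nothing to show: Node 20+ ships `crypto.randomUUID()` natively, so any hand-rolled UUID code is pure liability. (The wrapper name below is my own.)

```typescript
import { randomUUID } from "node:crypto";

// Illustrative wrapper: the point is delegating to the platform,
// not reimplementing RFC 4122 by hand.
function newEntityId(): string {
  return randomUUID(); // e.g. "3b241101-e2bb-4255-8caf-4136c566a962"
}
```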
The investment of 30 minutes defining this at the beginning will save you days of refactoring later.
PRD for Agents: It's Not Documentation, It's Instruction
A traditional PRD (Product Requirements Document) is written for humans. It has business context, justifications, UX considerations. That's important, but it's not what AI needs.
For agents, a PRD is an executable technical specification. The more specific, the better. Abstraction and flexibility are enemies here.
Difference in practice
Traditional PRD:
```markdown
## Feature: Notification System

### Objective
Improve user engagement through contextual notifications.

### Requirements
- Users should receive relevant notifications
- System must be scalable
- Non-intrusive UX
```
PRD for AI:
````markdown
## Feature: Notification System

### Stack
- Implement using Server-Sent Events (SSE)
- Backend: route `/api/notifications/stream`
- Frontend: hook `useNotifications()` in `/lib/hooks/use-notifications.ts`
- Persistence: `notifications` table in the Prisma schema

### Schema
```prisma
model Notification {
  id        String           @id @default(cuid())
  userId    String
  type      NotificationType
  title     String
  message   String
  read      Boolean          @default(false)
  createdAt DateTime         @default(now())

  user User @relation(fields: [userId], references: [id])

  @@index([userId, read])
}

enum NotificationType {
  INFO
  WARNING
  SUCCESS
  ERROR
}
```

### Endpoints

**GET /api/notifications/stream**
- SSE endpoint
- Authentication required via session
- Returns events in the format: `data: {"id":"...","type":"INFO","title":"...","message":"..."}\n\n`
- Heartbeat every 30s

**GET /api/notifications**
- Lists the user's notifications
- Query params: `?limit=20&offset=0&unreadOnly=false`
- Response: `{ notifications: Notification[], total: number }`

**PATCH /api/notifications/:id/read**
- Marks a notification as read
- Body: `{ read: boolean }`
- Response: `{ success: boolean }`

### Frontend

**Hook useNotifications()**
```typescript
interface UseNotificationsReturn {
  notifications: Notification[];
  unreadCount: number;
  markAsRead: (id: string) => Promise<void>;
  isConnected: boolean;
}
```

**Component NotificationBell**
- Location: `/components/notifications/notification-bell.tsx`
- Props: none (uses the hook internally)
- Behavior: badge with unread counter, dropdown with the list on click

### Business rules
- Notifications expire after 30 days (implement a cron job)
- Maximum 100 unread notifications per user
- When the limit is reached, automatically delete the oldest
````
See the difference? The second one leaves no room for interpretation. AI knows exactly what to create, where to create it, and how to implement it.
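For example, the SSE event format in the specific PRD is precise enough to code against directly. A minimal frame parser, as a sketch (the function and type names are mine, not part of the PRD):

```typescript
// Event format from the PRD:
// data: {"id":"...","type":"INFO","title":"...","message":"..."}\n\n
type NotificationType = "INFO" | "WARNING" | "SUCCESS" | "ERROR";

interface NotificationEvent {
  id: string;
  type: NotificationType;
  title: string;
  message: string;
}

// Parses one SSE frame; returns null for heartbeats/comments that carry no data line.
function parseSseFrame(frame: string): NotificationEvent | null {
  const dataLine = frame.split("\n").find((line) => line.startsWith("data: "));
  if (!dataLine) return null;
  return JSON.parse(dataLine.slice("data: ".length)) as NotificationEvent;
}
```

With the vague PRD, AI would have to guess this format; with the specific one, both the backend emitter and the frontend hook can be generated against the same contract.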
Where to store the PRD
This is where folder structure comes in. I use a .ai/ folder at the project root:
```
my-project/
├── .ai/
│   ├── prd/
│   │   ├── notifications.md
│   │   ├── authentication.md
│   │   └── dashboard.md
│   ├── agents/
│   │   ├── frontend.md
│   │   ├── backend.md
│   │   └── database.md
│   └── context/
│       ├── architecture.md
│       ├── conventions.md
│       └── decisions.md
├── apps/
│   ├── web/
│   └── api/
├── packages/
└── ...
```
Why .ai/ and not /docs/?
Practical reasons:
- `.ai/` stays at the top when sorting alphabetically (Unix dot-folder convention)
- It makes clear that this is working material for AI, not user documentation
- It can be easily ignored in builds without extra configuration
Some prefer /docs/ai/, which also works. The important thing is to be consistent.
Structure inside .ai/
/prd/ - One markdown file per feature
- Descriptive names: `notifications.md`, not `feature-1.md`
- Always include the data schema, endpoints, and business rules
- Update it as the feature evolves
/agents/ - Domain-specific instructions
- `frontend.md` - frontend guard rails
- `backend.md` - backend guard rails
- `database.md` - schema and migration conventions
/context/ - Architectural decisions
- `architecture.md` - system overview
- `conventions.md` - general project patterns
- `decisions.md` - summarized ADRs (Architecture Decision Records)
How AI uses this
When you’re working on a feature, the prompt becomes:
```
Read .ai/agents/frontend.md and .ai/prd/notifications.md

Implement the NotificationBell component as specified in the PRD.
```
AI has complete context: technical guard rails + feature specification. Zero ambiguity.
PRD maintenance
PRDs aren’t static. Features change, requirements evolve. Update the PRD before asking AI for changes.
Correct flow:
1. Decide to change feature X
2. Update `.ai/prd/feature-x.md`
3. Ask AI to implement according to the new PRD
Wrong flow:
1. Ask AI to change feature X
2. AI implements based on old context
3. You explain the change in the prompt
4. The PRD stays outdated
5. On the next change, AI doesn't know which version is current
The PRD is the source of truth. Treat it like code.
Monorepo: Shared Context
Why does this matter in the context of AI-assisted development?
Shared context is power.
When frontend and backend are in separate repositories, AI works blind. It’s implementing an API in the backend without knowing exactly how the frontend will consume it. Or it’s creating a component in the frontend without seeing the real types the API returns.
Result: you become the manual integrator. Adjusting types, fixing contracts, synchronizing changes.
Structure that works
```
my-project/
├── .ai/                    # Instructions and PRDs
├── apps/
│   ├── web/                # Next.js frontend
│   │   ├── app/
│   │   ├── components/
│   │   └── lib/
│   └── api/                # Fastify backend
│       ├── routes/
│       ├── services/
│       └── repositories/
├── packages/
│   ├── types/              # Shared types
│   │   ├── api.ts
│   │   ├── database.ts
│   │   └── index.ts
│   ├── config/             # Shared configs
│   │   ├── eslint/
│   │   └── typescript/
│   └── utils/              # Shared utilities
│       ├── validation/
│       └── formatting/
├── prisma/
│   ├── schema.prisma
│   └── migrations/
├── package.json            # Root package
└── turbo.json              # Turborepo config
```
Why this helps AI
1. Shared types
File packages/types/api.ts:
```typescript
export interface User {
  id: string;
  email: string;
  name: string;
  createdAt: Date;
}

export interface CreateUserRequest {
  email: string;
  name: string;
  password: string;
}

export interface CreateUserResponse {
  user: User;
  token: string;
}
```
Backend uses:
```typescript
import { CreateUserRequest, CreateUserResponse } from '@repo/types'

app.post<{ Body: CreateUserRequest, Reply: CreateUserResponse }>('/users', ...)
```
Frontend uses:
```typescript
import { CreateUserResponse } from '@repo/types'

const response = await fetch('/api/users', ...)
const data: CreateUserResponse = await response.json()
```
AI sees both sides. When you ask it to add a role field to the user:
1. It updates `packages/types/api.ts`
2. The backend updates automatically (it already imports from there)
3. The frontend updates automatically (it already imports from there)
4. It updates the Prisma schema
Everything in one interaction. Zero desynchronization.
2. Shared validation
packages/utils/validation/user.ts:
```typescript
import { z } from "zod";

export const createUserSchema = z.object({
  email: z.string().email(),
  name: z.string().min(2).max(100),
  password: z.string().min(8),
});

export type CreateUserInput = z.infer<typeof createUserSchema>;
```
Backend validates:
```typescript
import { createUserSchema } from "@repo/utils/validation";

const body = createUserSchema.parse(request.body);
```
Frontend validates (react-hook-form):
```typescript
import { createUserSchema } from "@repo/utils/validation";
import { zodResolver } from "@hookform/resolvers/zod";

const form = useForm({
  resolver: zodResolver(createUserSchema),
});
```
Same rules, zero duplication, zero chance of divergence.
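Zod is an external dependency, so the sketch below uses a dependency-free stand-in just to show the sharing mechanics; in the actual repo, the exported `createUserSchema` plays this role for both sides.

```typescript
// Stand-in for the shared Zod schema: one module owns the rules,
// and frontend and backend both import the same function, so rules cannot drift.
interface CreateUserInput {
  email: string;
  name: string;
  password: string;
}

function isCreateUserInput(value: unknown): value is CreateUserInput {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.email === "string" && v.email.includes("@") &&
    typeof v.name === "string" && v.name.length >= 2 && v.name.length <= 100 &&
    typeof v.password === "string" && v.password.length >= 8
  );
}
```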
3. AI understands the complete flow
Prompt:
```
Add an optional `phoneNumber` field to user registration.
```
AI:
1. Reads `packages/utils/validation/user.ts` and adds the field to the Zod schema
2. Reads `packages/types/api.ts` and adds it to the types
3. Reads `prisma/schema.prisma` and adds it to the User model
4. Generates the migration
5. Updates the backend route
6. Updates the frontend form
All because it has visibility of the entire project.
Tools
I use Turborepo, but Nx works equally well. The important thing is to have:
- A shared workspace (`pnpm` or `npm` workspaces)
- Build orchestration (Turbo or Nx)
- Build caching
Basic turbo.json configuration:
```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    },
    "lint": {
      "dependsOn": ["^lint"]
    }
  }
}
```
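For completeness, the piece that makes the `@repo/*` imports resolve is just the workspace declaration in the root `package.json`. A minimal sketch, assuming npm workspaces and Turborepo 1.x (which uses the `pipeline` key shown above):

```json
{
  "name": "my-project",
  "private": true,
  "workspaces": ["apps/*", "packages/*"],
  "devDependencies": {
    "turbo": "^1.13.0"
  }
}
```

With pnpm, the same globs go in `pnpm-workspace.yaml` instead.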
When monorepo complicates
Honesty: monorepo isn’t a silver bullet.
Don’t use if:
- Completely separate teams (frontend and backend in different companies)
- Very different deploy rhythms (monthly mobile release vs continuous backend deploy)
- Very heterogeneous technologies (a monorepo with a Go backend + React Native + Python ML rarely makes sense)
Where it can complicate things:
- Initial CI/CD (need to configure affected builds)
- Permissions (if you need granular access per team)
- Onboarding (project seems bigger than it is)
But for solo development or small teams doing fullstack with AI? Totally worth it.
Dealing with Large Context
Projects grow. Instructions increase. Eventually you hit AI’s context limit, and then what? There are better strategies than “pray it fits”.
The problem
You start with a 200-line frontend.md file. Add more conventions, more examples, more edge cases. Six months later you have 2000 lines of instructions. AI starts to:
- Ignore parts at the end
- Confuse instructions from different sections
- Consume too much context, leaving little for code
- Get slow to process everything
Strategy 1: Splitting by domain
Instead of one giant frontend.md, break into specific responsibilities:
```
.ai/
├── agents/
│   ├── frontend/
│   │   ├── components.md   # Component conventions
│   │   ├── routing.md      # Next.js App Router
│   │   ├── state.md        # Zustand, server state
│   │   ├── forms.md        # react-hook-form + zod
│   │   └── styling.md      # Tailwind, shadcn/ui
│   ├── backend/
│   │   ├── api.md          # REST conventions
│   │   ├── auth.md         # NextAuth config
│   │   ├── database.md     # Prisma, migrations
│   │   └── validation.md   # Zod schemas
│   └── shared/
│       ├── types.md        # TypeScript conventions
│       └── testing.md      # Vitest, Playwright
```
In the prompt, you only reference what you need:
```
Read .ai/agents/frontend/components.md and .ai/agents/frontend/forms.md

Create a product registration form according to the PRD.
```
AI doesn’t process 2000 lines. It processes 400 from relevant sections.
Strategy 2: Instruction hierarchy
Organize in levels of specificity:
.ai/agents/core.md - Rules that apply to the WHOLE project
```markdown
# Core Conventions

- TypeScript strict mode always active
- Never use `any`; always type correctly
- Absolute imports with @/ (Next.js) or @repo/ (monorepo)
- Comments in English, only when complex logic requires them
- Error handling: always catch and log structured errors
```
.ai/agents/frontend/base.md - General frontend rules
```markdown
# Frontend Base

- Server Components by default
- 'use client' only when necessary (interactivity, hooks, context)
- Async/await in Server Components, never useEffect for fetching
```
.ai/agents/frontend/forms.md - Form-specific
```markdown
# Forms

- react-hook-form + zod resolver
- Client-side and server-side validation (same schema)
- Loading states during submit
- Success/error toasts (sonner)
```
In prompt:
```
Context:
- .ai/agents/core.md (always)
- .ai/agents/frontend/base.md
- .ai/agents/frontend/forms.md

Task: implement the login form
```
You only load what’s necessary, but maintain consistency via core.md.
Strategy 3: Dynamic instructions per feature
For large features, create specific temporary instructions:
```
.ai/
├── agents/
├── prd/
│   └── notifications/
│       ├── spec.md            # Main PRD
│       ├── implementation.md  # Technical details
│       └── sse-guide.md       # SSE guide (temporary)
```
sse-guide.md only exists during the notifications implementation. It later becomes part of backend/realtime.md if you end up using SSE in other features.
Strategy 4: References instead of duplication
Avoid repeating the same information in multiple files:
❌ Wrong:
```markdown
# frontend/components.md
[500 lines about shadcn/ui]

# frontend/forms.md
[the same 500 lines about shadcn/ui]
```
✅ Right:
```markdown
# frontend/components.md
[500 lines about shadcn/ui]

# frontend/forms.md
For UI component conventions, see components.md

## Form-specific
- Always use Form, FormField, FormItem from shadcn/ui
- [...]
```
AI is good at following references. You can say "see X.md for details on Y" and it will look them up.
Strategy 5: Examples in separate files
Code examples can take up a lot of space. Extract them:
```
.ai/
├── agents/
│   └── frontend/
│       ├── components.md
│       └── examples/
│           ├── server-component.tsx
│           ├── client-component.tsx
│           └── form-example.tsx
```
In components.md:
```markdown
# Components

## Server Components
For a complete example, see examples/server-component.tsx

## Client Components
For a complete example, see examples/client-component.tsx
```
You reference examples only when needed.
Strategy 6: Instruction versioning
Projects pivot. Architecture changes. Don’t delete old instructions, version them:
```
.ai/
├── agents/
│   └── frontend/
│       ├── state.md                # Current version
│       └── archive/
│           └── state-v1-redux.md   # When we used Redux
```
If you need to reference old decisions or understand migrations, it’s there.
Golden rule
If an instruction hasn’t been referenced in the last 10 prompts, it probably doesn’t need to be in the default context.
Review periodically. Extract what’s not being used. Keep only the essential in “core”, and the rest on demand.
Practical Workflows: From Prompt to Implementation
Theory is nice, but how does this work day to day? I’ll show real workflows I use.
Workflow 1: New feature from scratch
Context: Add favorites system to posts
Step 1: Update PRD
Create or update `.ai/prd/favorites.md`:
````markdown
# Feature: Favorites System

## Database Schema
```prisma
model Favorite {
  id        String   @id @default(cuid())
  userId    String
  postId    String
  createdAt DateTime @default(now())

  user User @relation(fields: [userId], references: [id])
  post Post @relation(fields: [postId], references: [id])

  @@unique([userId, postId])
  @@index([userId])
}
```

## API Endpoints

**POST /api/posts/:postId/favorite**
- Toggles the favorite (adds if it doesn't exist, removes if it exists)
- Auth required
- Response: `{ favorited: boolean }`

**GET /api/users/me/favorites**
- Lists the user's favorited posts
- Query: `?limit=20&offset=0`
- Response: `{ posts: Post[], total: number }`

## Frontend

**Hook useFavorite(postId: string)**
```typescript
interface UseFavoriteReturn {
  isFavorited: boolean;
  toggleFavorite: () => Promise<void>;
  isLoading: boolean;
}
```

**Component FavoriteButton**
- Props: `{ postId: string, size?: 'sm' | 'md' | 'lg' }`
- Icon: star outline when not favorited, star filled when favorited
- Optimistic update (changes the UI before the response)
- Error toast if it fails
````
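The optimistic-update line in that PRD is the part AI most often fumbles, so it is worth being explicit about the state transition. A pure-function sketch (the names are mine; the real `useFavorite` would wrap this in React state plus the API call):

```typescript
interface FavoriteState {
  isFavorited: boolean;
  count: number;
}

// Optimistic toggle: apply the UI change immediately and keep the
// previous state so the caller can roll back if the request fails.
function applyOptimisticToggle(state: FavoriteState): { next: FavoriteState; rollback: FavoriteState } {
  const next: FavoriteState = {
    isFavorited: !state.isFavorited,
    count: state.count + (state.isFavorited ? -1 : 1),
  };
  return { next, rollback: state };
}
```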
Step 2: Implementation - Database
Prompt:
```
Context:
- .ai/agents/backend/database.md
- .ai/prd/favorites.md

Task: Add the Favorite model to the Prisma schema and generate the migration
```
AI executes:
1. Adds the model in `prisma/schema.prisma`
2. Adds the relation in `User` and `Post`
3. Runs `npx prisma migrate dev --name add-favorites`
Step 3: Implementation - Backend
Prompt:
```
Context:
- .ai/agents/backend/api.md
- .ai/agents/backend/validation.md
- .ai/prd/favorites.md

Task: Implement the favorites endpoints according to the PRD
```
AI creates:
- `apps/api/routes/favorites.ts`
- `apps/api/services/favorites.service.ts`
- Zod validation where needed
- Basic tests
Step 4: Implementation - Frontend
Prompt:
```
Context:
- .ai/agents/frontend/base.md
- .ai/agents/frontend/components.md
- .ai/prd/favorites.md

Task: Implement the useFavorite hook and the FavoriteButton component
```
AI creates:
- `apps/web/lib/hooks/use-favorite.ts` (with SWR or TanStack Query)
- `apps/web/components/posts/favorite-button.tsx`
- Optimistic updates
- Error handling
Step 5: Integration
Prompt:
```
Add FavoriteButton to the PostCard component.
Location: apps/web/components/posts/post-card.tsx
```
AI adds the button in the right place and passes the correct props.
Step 6: Manual testing
You test it. You find a bug: favoriting a post duplicates it in the favorites list.
Step 7: Debug
Prompt:
```
Bug: when favoriting a post, it appears duplicated in /favorites

Relevant context:
- apps/web/app/favorites/page.tsx
- apps/web/lib/hooks/use-favorite.ts

Investigate and fix.
```
AI identifies the cause: the optimistic update isn't checking whether the post already exists in the list. It fixes it.
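The shape of that fix, as a dependency-free sketch (the helper name is hypothetical, but the guard is the point):

```typescript
interface Post {
  id: string;
  title: string;
}

// The fix: only prepend the post if it isn't already in the list,
// so an optimistic insert can't duplicate an item the server already returned.
function addToFavoritesList(list: Post[], post: Post): Post[] {
  if (list.some((p) => p.id === post.id)) return list;
  return [post, ...list];
}
```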
Workflow 2: Guided refactoring
Context: Migrate from Redux to Zustand
Step 1: Document decision
Create `.ai/context/decisions.md` or add an entry to it:
```markdown
## 2025-02-08: Redux → Zustand Migration

### Reason
- Redux too verbose for simple cases
- Zustand is lighter and easier to maintain
- Server state already uses TanStack Query; Redux only holds small UI state

### Strategy
- Migrate store by store
- Start with `userPreferences` (smallest)
- Then `sidebar`, `modals`
- `cart` last (most complex)
```
Step 2: Update agent instructions
Move .ai/agents/frontend/state.md to .ai/agents/frontend/archive/state-v1-redux.md
Create new .ai/agents/frontend/state.md:
```markdown
# State Management

## Client State
- Zustand for UI state
- Stores in `/lib/stores/<feature>.ts`
- Always type state and actions

## Server State
- TanStack Query
- Queries in `/lib/queries/<feature>.ts`
- Co-located mutations
```
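As a concrete instance of those conventions, a typed `userPreferences` store might look like the sketch below. Zustand is an external dependency, so a tiny hand-rolled `createStore` stands in for Zustand's `create` here; the shape (typed state plus actions in one object) is what the convention prescribes, while the specific fields are my assumption.

```typescript
// Typed state and actions together, as the state.md conventions require.
interface UserPreferencesState {
  theme: "light" | "dark";
  language: string;
  setTheme: (theme: "light" | "dark") => void;
}

// Minimal stand-in for Zustand's create(): holds state, exposes getState().
function createStore<T>(init: (set: (partial: Partial<T>) => void) => T) {
  let state: T;
  const set = (partial: Partial<T>) => {
    state = { ...state, ...partial };
  };
  state = init(set);
  return { getState: () => state };
}

// Would live in /lib/stores/user-preferences.ts per the conventions above.
const userPreferencesStore = createStore<UserPreferencesState>((set) => ({
  theme: "light",
  language: "en",
  setTheme: (theme) => set({ theme }),
}));
```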
Step 3: Incremental migration
Prompt:
```
Context:
- .ai/agents/frontend/state.md
- .ai/context/decisions.md (Redux migration section)

Task: Migrate the userPreferences store from Redux to Zustand

Current files:
- apps/web/store/slices/user-preferences.ts (Redux)
- apps/web/hooks/use-preferences.ts (selector hook)

Create:
- apps/web/lib/stores/user-preferences.ts (Zustand)
- Update all consumers
```
AI does migration, you test, commit.
Repeat for each store until complete.
Workflow 3: Implementation with subagent
Context: Complex task, integrated backend + frontend
Setup: Use Projects in Claude with custom instructions already configured
Initial prompt:
```
Feature: Analytics dashboard
PRD: .ai/prd/analytics-dashboard.md

I'll divide this into 3 stages:
1. Backend (metrics API)
2. Frontend (components and queries)
3. Integration and polish

Confirm you have read the PRD and understood the scope before we start.
```
AI confirms, you proceed step by step in the same chat. Context maintained, consistent decisions.
Workflow 4: Feature flagging with AI
Context: New but uncertain feature, needs flag
Step 1: PRD with flag
.ai/prd/new-editor.md:
````markdown
# Feature: New WYSIWYG Editor

**Feature Flag: `new_editor_enabled`**

## Implementation
- Feature flag in `.env`: `NEXT_PUBLIC_NEW_EDITOR_ENABLED`
- Component: `EditorV2`
- Fallback: `EditorV1` (current)

## Frontend
```typescript
const EditorSwitch = () => {
  const enabled = process.env.NEXT_PUBLIC_NEW_EDITOR_ENABLED === 'true'
  return enabled ? <EditorV2 /> : <EditorV1 />
}
```
````
Prompt:
```
Context: .ai/prd/new-editor.md

Implement EditorV2 with the feature flag according to the PRD.
Ensure that without the flag active, EditorV1 continues working.
```
AI implements it behind the flag; you test incrementally.
Workflow 5: Pair programming with AI
When to use: Ambiguous task, needs exploration
You don't have a complete PRD. Just an idea.
Prompt:
```
I want to implement "automatic tag suggestions" when creating a post.
Idea: based on the title and content, suggest relevant tags.

Before implementing, help me define:
1. Should this be client-side or server-side?
2. Do we use external AI (OpenAI) or something simpler?
3. How do we store existing tags for suggestions?
4. UX: suggest while typing or only at the end?

Consider our stack (Next.js, Prisma, etc).
```
AI discusses options, you decide, update PRD with decisions, then implement.
Workflow 6: Maintenance and evolution
Context: Old feature needs adjustments
Prompt:
```
Feature: Notifications (implemented 2 months ago)

Needed change: add a "MENTION" category for when a user is mentioned.

Files involved:
- .ai/prd/notifications.md (update)
- packages/types/api.ts (add to enum)
- prisma/schema.prisma (update enum)
- backend and frontend (apply the change)

Execute in order:
1. Show a diff of what will change in the PRD
2. Wait for approval
3. Implement the changes
4. Generate the migration
```
AI shows the diff, you approve, and it executes. The PRD stays up to date.
General tips
1. Always start with context
Don’t throw loose prompts. Reference relevant .ai/ files.
2. Split large tasks
AI works better with atomic tasks. "Implement complete authentication" becomes:
- Implement User schema
- Implement password hash
- Implement auth routes
- Implement session middleware
- Implement login UI
- Integrate everything
3. Ask for confirmation at critical points
Before migrations, before deleting code, before structural changes.
4. Keep history
Don't delete old prompts. If something goes wrong, you can go back and see what you asked for.
5. Use Artifacts/Code Blocks
Ask AI to show the code before applying it. Review, then confirm.
6. Commit frequently
After each successful step. If AI messes up the next one, you revert easily.
7. Test before proceeding
Don't stack five tasks without testing any of them. Fail early, fix early.
Conclusion
Structuring projects with AI in mind isn’t about serving the tools. It’s about creating systems that both humans and machines can navigate without getting lost.
The difference between a project that scales well with AI and one that becomes a refactoring nightmare is in the first 30 minutes. In the guard rails you define, the PRD you write, the folder structure you choose.
What works:
- Well-configured subagents from the start
- Specific PRDs, not abstract ones
- Monorepo when it makes sense (integrated fullstack)
- Organized context on demand
- Well-defined iterative workflows
What doesn’t work:
- Throwing prompts and hoping for the best
- Letting AI invent patterns
- Vague PRDs full of “must be scalable”
- Giant instructions that nobody (not even AI) reads completely
- Accumulating technical debt thinking “I’ll fix it later”
You’ll make mistakes. I made many until I got to this structure. I still make them. But now when I mess up, I know exactly where to look to fix it. Is the PRD outdated? Are the subagent instructions conflicting? Is the context too large?
Next steps if you’re starting:
- Create the `.ai/` folder today
- Write the PRD of the next feature before asking AI to implement
- Test simple workflows first (one small feature end to end)
- Evolve as the project grows
You don’t need to implement everything at once. Start small, feel what works for your context, adjust.
The future of development isn’t AI doing everything alone. It’s you orchestrating complex systems with clarity, using AI as a force multiplier. But force multiplier only works if the direction is clear.
Structure is direction.