Below is a practical "Week 0-1" game-plan that gets three developers productive fast, followed by a short note on *which* specification I'm quoting when I say "the spec."

---

## 1 – Agree on stack & working conventions (½ day)

| Topic | Quick decision | Why it unblocks everything |
| --- | --- | --- |
| **Monorepo vs multi-repo** | Monorepo (e.g. pnpm workspaces) | Single PR flow across server, client, worker. |
| **Languages** | TypeScript everywhere (Next.js + tRPC or NestJS API / BullMQ worker) | One language lowers cognitive load for a 3-person team. |
| **Database** | PostgreSQL 15 via Prisma | Matches the spec §14 schema and gives us migrations. |
| **Queues** | Redis + BullMQ | Satisfies idempotent, retryable background jobs (§65). |
| **Containers** | Docker dev-container; Docker Compose locally | Mirrors the spec's multi-stage image (§87) and lets DevOps start on day 1. |

---

## 2 – Divide early ownership

| Dev | Primary focus (Week 0-1) | Key tasks |
| --- | --- | --- |
| **A – Backend/API** | Auth, quota, DB migrations | • Google OAuth callback & JWT issue (§18-21) • `/api/batch` & `/api/batch/{id}/status` skeleton (§73-74) • Quota decrement logic (§56-58) |
| **B – Worker & Vision** | Queue, ClamAV, Vision stub | • BullMQ queue scaffolding (§31, 65) • ClamAV scan + unit test (§62) • "Fake" Vision job that returns static tags so the FE can integrate |
| **C – Frontend** | Dashboard drop-zone & review table | • Drag-and-drop with quota gate (§26-29) • Shimmer placeholders (§70) • WebSocket progress feed (§77) |

> **Why this split?** Each dev can ship a vertical slice that the others can integrate against by the end of week 1 (upload → placeholder vision → table render).

---

## 3 – Bootstrapping sequence

1. **Init repo & CI**
   * ESLint, Prettier, Vitest/Jest, and Cypress pipelines (§88).
2. **Docker Compose `dev.yml`**
   * Services: postgres, redis, maildev (for later), minio (S3-compatible object store).
3. **Database schema & migrations**
   * Tables `users`, `batches`, `images` exactly as in §78-80.
   * Seed script that creates a test user on the "Max" plan for QA.
4. **Auth flow**
   * Google OAuth 2.0 "email" scope only (§18).
   * On callback, find or insert the row in `users` and issue an access JWT (stateless sessions, §64).
5. **Stub endpoints & types**
   * Define the tRPC or OpenAPI contract for the 6 endpoints (§73-77).
   * Return dummy data so the FE can already render the progress bar (§67) and quota numbers.
6. **Local object storage & file upload**
   * Use Multer (Node) or Next.js's built-in upload handling; save to the MinIO bucket path `/{batchUuid}/(unknown)` (§30).
7. **Queue a dummy worker job**
   * Push a job ID, mark status `PROCESSING`, then after 2 s mark `DONE` to exercise WebSockets.
8. **Frontend drop-zone & WebSocket listener**
   * Accept up to quota-remaining files, compute SHA-256 client-side (§26-28).
   * Display a shimmer placeholder until status is `DONE`.

At the end of this checklist you can **drag files, see quota enforced, watch progress live, and land on a review table populated with fake but correctly shaped data.** That demo covers more than 10 spec requirements and proves the pipeline end-to-end.

---

## 4 – Artifacts to commit in Week 1

* `/docs/ARCHITECTURE.md` – a one-page diagram of client ↔ API ↔ worker ↔ object store.
* `docker-compose.dev.yml` & `Dockerfile` stages (install → build → prod) (§87).
* Prisma schema + first migration.
* Postman (or Hoppscotch) collection for the 6 REST endpoints.
* GitHub Actions workflow running ESLint + tests (§88).
* Storybook story for the drag-and-drop component (UX spec §68-69).
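The quota gate that both Dev A (quota decrement, §56-58) and Dev C (drop-zone, §26-29) touch can be stubbed as one pure function shared by both slices. A minimal sketch, assuming hypothetical names (`gateFilesByQuota`, `QuotaResult`) that are not from the spec:

```typescript
// Hypothetical quota gate shared by the API and the drop-zone.
// Names and shape are illustrative, not taken from the spec.
interface QuotaResult {
  accepted: string[];  // files that fit within the remaining quota
  rejected: string[];  // overflow files the UI should refuse up front
  remaining: number;   // quota left after accepting
}

function gateFilesByQuota(files: string[], remaining: number): QuotaResult {
  // Take at most `remaining` files, in the order the user dropped them.
  const accepted = files.slice(0, Math.max(0, remaining));
  const rejected = files.slice(accepted.length);
  return { accepted, rejected, remaining: remaining - accepted.length };
}
```

Keeping this as a pure function means the frontend can enforce the gate optimistically while the backend re-runs the exact same logic authoritatively before decrementing the quota row.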
Those assets give every team-mate and future contributor a reproducible environment and living documentation.

---

## 5 – What "the spec" refers to

All numbered requirements I cite (e.g. "§26") come from the *Comprehensive, LLM-Testable Specification for the "AI Bulk Image Renamer" SaaS* you provided in `spec.txt` – the single source of truth for features, quotas, API contracts, and non-functional targets.

---

### TL;DR

1. **Pick a TS-first stack + set up CI/Docker.**
2. **Split into Backend (A), Worker (B), and Frontend (C) vertical slices.**
3. **Stub auth, upload, queue, and WebSocket so the whole loop runs with dummy data in Week 1.**

Hit those, and every later requirement becomes an incremental refinement instead of a new system.
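As a closing sketch, the dummy worker job from step 7 of the bootstrapping sequence (mark `PROCESSING`, then `DONE`, with each transition pushed to whatever is listening on the WebSocket) could look like this; `runDummyJob`, `StatusEvent`, and the `notify` callback are illustrative assumptions, not spec names:

```typescript
// Week-1 dummy job: exercises the status machine and the push channel
// without any real Vision work. All names here are illustrative.
type JobStatus = "QUEUED" | "PROCESSING" | "DONE";

interface StatusEvent {
  jobId: string;
  status: JobStatus;
}

async function runDummyJob(
  jobId: string,
  notify: (e: StatusEvent) => void, // stand-in for the WebSocket broadcast
  delayMs = 2000,                   // spec step says ~2 s of fake "work"
): Promise<void> {
  notify({ jobId, status: "PROCESSING" });
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  notify({ jobId, status: "DONE" });
}
```

Swapping `notify` for a real WebSocket emit (and the timeout for the actual BullMQ processor) later should not change the frontend at all, which is the point of stubbing the loop first.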