Launching a Claude Code Project: Design First, Then Build

Person drafting on an architectural blueprint

I used Claude’s desktop interface for iterative design, then handed off to Claude Code for implementation.


After deciding to revive my Hazelcast Microservices Framework (MSF) project, with Claude AI doing much of the heavy lifting, the next question was how to actually go about it. I had no playbook for it. Nobody does, really — we’re all making this up as we go.

I wanted to be transparent about my use of Claude, and at the same time I think the development process is interesting enough to be worthy of discussion. (Heck, maybe it’s more interesting than the framework blog posts I set out to write.) So I expect to end up with a dual series of blog posts: the framework posts — started by Claude, co-edited together, and given a final polish by me — interleaved with my observations on how the collaboration worked.

This first “behind the scenes” post covers the design phase: going from a vague idea to a set of design documents and an implementation plan, all before writing a single line of code.

Starting the Conversation

Here was my original prompt to Claude:

I want to use Claude Code to help me finish a demonstration project I started some time ago to show how to implement microservices using Hazelcast. (The main value of Hazelcast is to create materialized views of domain objects to maintain in-memory current state.) If it’s more effective, we can restart with a blank sheet rather than modify the existing project.

I’d really like to iterate over the design several times before any coding starts — is that best done in Claude Code, or using this desktop interface? Ideally, creating various specifications or design documents before any coding starts would be perfect, if Claude can use these various documents as a guide to the coding process.

How do we start?

Claude immediately suggested splitting the work across two interfaces: use the desktop/web interface for design discussions and document creation, then move to Claude Code for implementation. Made sense to me — the conversational interface is better for back-and-forth design iteration, while Claude Code excels at multi-file code generation with direct access to the project directory.

This turned out to be excellent advice. The design phase involved a lot of “what about this?” and “actually, let’s reorganize that” — the kind of exploratory conversation that works much better in a chat interface than in a code-focused tool. I tried doing some design work in Claude Code early on and it was noticeably worse — like trying to brainstorm on a whiteboard that keeps trying to compile your diagrams.

The Design Phase: A Roadmap in Nine Documents

What followed was an extended design conversation that produced nine documents over the course of a single session. I’m not going to walk through every one in detail — you can follow the links if you’re curious — but a few of them are worth talking about because of what they reveal about the collaboration process.

Getting Started: Template and Domain

Claude’s first move was to produce a comprehensive design document template covering everything from executive summary to demonstration scenarios. We never actually completed it — the conversation quickly moved in a more specific direction — but it served its purpose as a structural starting point. The architectural equivalent of a napkin sketch: useful for getting the conversation going, not meant to survive contact with reality.

Before we could fill in any template, though, we needed to pick a domain for the demonstration. Claude laid out a comparison between eCommerce and Financial Services, and we settled on a hybrid approach: start with eCommerce (universally understood, clear event flows, and I had existing code to reference) but design the framework to be domain-agnostic so other domains could be plugged in later. We also simplified from four services down to three: Account, Inventory, and Order. (A fourth service, Payment, showed up later when we built out the saga patterns. Scope creep, but the useful kind.)

That decision led to the eCommerce design document — a detailed Phase 1 design covering all three services, their APIs, events, and materialized views. Three view patterns came out of it: denormalized views (joining customer, product, and order data), aggregation views (pre-computing order statistics), and real-time status views (current inventory levels). If you’ve read the previous posts in this series, you’ll recognize these as exactly the kind of thing that makes Event Sourcing + CQRS worth the effort.
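To make the view patterns concrete, here’s a minimal sketch of the aggregation pattern: per-customer order statistics pre-computed as events arrive, rather than recalculated from normalized tables on every read. This is illustrative only — a plain `ConcurrentHashMap` stands in for a Hazelcast IMap, and the `OrderStats` record and method names are hypothetical, not the framework’s actual types.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a ConcurrentHashMap stands in for a Hazelcast IMap.
// The event and record shapes here are hypothetical, not the framework's own.
public class OrderStatsView {

    // Aggregation view entry: statistics maintained incrementally per customer.
    public record OrderStats(long orderCount, double totalSpent) {}

    private final Map<String, OrderStats> statsByCustomer = new ConcurrentHashMap<>();

    // Called once per OrderCreated-style event coming off the event stream.
    public void onOrderCreated(String customerId, double orderTotal) {
        statsByCustomer.merge(customerId,
                new OrderStats(1, orderTotal),
                (old, inc) -> new OrderStats(old.orderCount() + 1,
                                             old.totalSpent() + inc.totalSpent()));
    }

    // Reads are cheap: the aggregate is already computed.
    public OrderStats statsFor(String customerId) {
        return statsByCustomer.getOrDefault(customerId, new OrderStats(0, 0.0));
    }
}
```

The denormalized and real-time-status patterns follow the same shape — an event handler keeps a read-optimized map current so queries never touch the write model.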

Where I Pushed Back

The conversation then turned to longer-term goals. I had ideas for observability dashboards, microbenchmarking, pluggable implementations, saga patterns, and more — far beyond what could fit in a Phase 1. Claude organized all of this into a phased requirements document spanning five phases.

We iterated over this several times, adding and reorganizing. The most significant change I made was moving Event Sourcing from Phase 2 to Phase 1. Claude had initially positioned it as an advanced feature, but I saw it as the fundamental organizing principle of the entire framework — events are the source of truth, not database rows. Once I explained my existing Hazelcast Jet pipeline architecture (where handleEvent() writes to a PendingEvents map, which triggers a Jet pipeline that persists to the EventStore, updates materialized views, and publishes to the event bus), Claude immediately agreed and restructured the phases accordingly.

This was one of the more interesting moments in the collaboration. Claude had made a reasonable default assumption about complexity ordering, but I had domain-specific knowledge about how the architecture should actually work. The back-and-forth was natural — I explained my reasoning, Claude incorporated it, and the result was better for it. If I’d just accepted the initial phasing without pushing back, the entire project would have been organized around a less coherent architecture. And honestly, I almost did just accept it. It looked reasonable. Sometimes the most important contribution you make is going “wait, actually…” when the first answer seems fine.

Other additions during this iteration:

  • Vector Store integration (Phase 3, optional) for product similarity search
  • An MCP Server (Phase 3) to let AI assistants query and operate the system
  • Open source mandate — everything in Phases 1-2 must run on Hazelcast Community Edition
  • Blog post series structure — features developed in blog-post-sized chunks

Architecture, Code Review, and the Rewrite Decision

The next few documents came quickly. The Event Sourcing discussion led to a dedicated architecture document detailing the Jet pipeline design — based heavily on my existing implementation, but now formally documented with all six pipeline stages, the EventStore design, and how event replay would work.

Then I uploaded several key source files from the original project for Claude to review: the EventSourcingController, DomainObject, SourcedEvent (later renamed to DomainEvent), EventStore, and EventSourcingPipeline. Claude produced a thorough code review comparing the existing code against the design documents. The verdict was encouraging — the core implementation was solid and matched the Phase 1 design almost perfectly. Claude recommended incremental enhancement: add correlation IDs, framework abstractions, observability, and tests on top of what was already there.

I went the other way. After thinking about the package naming, dependency versions, and scope of changes needed, I decided on a clean reimplementation using the existing code as a blueprint. This let us start with the right project structure, package names (com.theyawns.framework.*), and dependency versions (Spring Boot 3.2.x, Hazelcast 5.6.0) from the beginning rather than refactoring them in later. Sometimes — as I’d noted in the previous post — the right move is to stop patching the old cabinets and start fresh.

I won’t pretend this was a purely rational decision. Part of it was just wanting that clean-slate feeling — new project, new structure, no legacy cruft staring at me from the imports. Developers love a greenfield. We can’t help it.

The Implementation Plan

Once the architecture was validated and we’d agreed on the approach, Claude created a detailed Phase 1 implementation plan — a three-week, day-by-day schedule with code templates, success criteria, and task checklists:

  • Week 1: Framework core — Maven multi-module setup, core abstractions, event sourcing controller, Jet pipeline
  • Week 2: Three eCommerce services — Account, Inventory, Order with REST APIs and materialized views
  • Week 3: Integration, Docker Compose, documentation, demo scenarios

We made a few tweaks (updating Hazelcast from 5.4.0 to 5.6.0, for instance), and then it was time to move to code.

The Handoff to Claude Code

Claude provided specific instructions for transitioning to Claude Code, including a context block to paste when starting the session:

I'm building a Hazelcast-based event sourcing microservices framework.

Project location: hazelcast-microservices-framework/
Current state: Design documents complete, ready for implementation

Key decisions:
- Clean reimplementation (no existing code to port)
- Spring Boot 3.2.x + Hazelcast 5.6.0 Community Edition
- Package: com.theyawns.framework.*
- Three services: Account, Inventory, Order (eCommerce domain)
- Event sourcing with Hazelcast Jet pipeline
- REST APIs only

Implementation plan: docs/implementation/phase1-implementation-plan.md

Starting with Day 1: Maven project setup + core abstractions

Please read the implementation plan and let's begin.

That’s the whole point of the “design first” approach: you’re not asking the AI to guess at your architecture. You’re handing it a blueprint. The more detailed the blueprint, the less time you spend arguing about load-bearing walls later.

Documents 7-9: Claude Code Configuration

Before making the jump, I asked Claude about setup suggestions for Claude Code. This produced three more documents:

CLAUDE.md (originally called .clinerules — I’m still not sure where that name came from) is the main configuration file that Claude Code reads automatically. It defines code standards, patterns, pitfalls to avoid, and documentation requirements. This file evolved a lot over the course of the project; looking at the commit history gives a good sense of how the “rules” grew and adapted as we ran into new situations. (More on that in a future post — it turned out to be one of the more interesting aspects of the whole process.)
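For anyone who hasn’t seen one, a CLAUDE.md file is just markdown that Claude Code reads at session start. Here’s a hypothetical excerpt showing the general shape — the entries below are illustrative, grounded in the project decisions mentioned in this post, not copied from the project’s actual file:

```markdown
# CLAUDE.md (illustrative excerpt — not the project's actual file)

## Project
- Spring Boot 3.2.x + Hazelcast 5.6.0 Community Edition
- Root package: com.theyawns.framework.*
- Services: Account, Inventory, Order (REST APIs only)

## Code standards
- Events are the source of truth; never write state that bypasses the EventStore.
- Framework code goes under com.theyawns.framework.*, not in service packages.

## Documentation
- Update docs/ whenever a public API or pipeline stage changes.
```

The value isn’t any one rule; it’s that the file accumulates hard-won conventions so you don’t re-litigate them every session.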

claude-code-agents.md defined eight specialized agent personas — Framework Developer, Service Developer, Test Writer, Documentation Writer, Pipeline Specialist, and others — each with specific rules, code patterns, and checklists. The idea was to switch between personas depending on the task at hand (e.g., “Switch to Test Writer agent. Write comprehensive tests for EventSourcingController.”). Whether this actually helped or was just a placebo is something I’m still not sure about, honestly.

A docs organization guide rounded out the set, providing a recommended directory structure for keeping all the documentation organized as the project grew.

What Came Next

The resulting project grew well beyond the original three-week Phase 1 plan. At 150 commits, it now includes four microservices (Payment was added for saga demonstrations), an API Gateway, an MCP server for AI integration, choreographed and orchestrated saga patterns, PostgreSQL persistence, Grafana dashboards, and more. The three-week plan took considerably longer than three weeks. So it goes.

But all of that implementation work — and the interesting stories about how human-AI collaboration played out during coding — is material for future posts.

What I’d Do Differently (And What I’d Do Again)

If you’re thinking about using AI for a non-trivial coding project, here’s what I took away from the design phase.

Use the right tool for each phase. The conversational interface is great for the messy, exploratory work of figuring out what you’re actually building. Claude Code is great for building it.

Iterate on design before you write code. We went through multiple rounds of revision on the requirements and architecture documents. Each round caught issues or surfaced priorities (like Event Sourcing belonging in Phase 1) that would have been much more expensive to discover during implementation. Measure twice, cut once. The carpenter’s rule exists for a reason.

Bring your domain knowledge — and don’t be shy about pushing back. Claude made strong default recommendations, but the most valuable moments came when I disagreed based on my understanding of Hazelcast and the architecture I wanted. The AI is a powerful collaborator, but it doesn’t know what you know. If something feels wrong, say so. That’s where the real value of the collaboration happens.

And document everything. I mean it. The design documents weren’t just planning artifacts — they became living reference material that Claude Code used throughout implementation. The CLAUDE.md file in particular became a continuously evolving guide that shaped code quality across the entire project. Every hour spent on documentation saved multiples in “no, that’s not what I meant” corrections later. I’ve never been great about documentation discipline, so having an AI that actually reads and follows the docs was a surprisingly effective motivator to keep them current.


The Hazelcast Microservices Framework is open source under the Apache 2.0 license. You can find it at github.com/myawnhc/hazelcast-microservices-framework.

Next up: what happened when we actually started coding. Spoiler: the plan did not survive intact.
