Most agencies sell discovery, then a deck, then a spec, then a build, then a handover. Each step waits on the previous one. The FluxCo Method collapses discovery and prototyping into a single working session: real users in the room, code shipped to a real URL on day one. The deliverable from session one is software, not a document. Validation, prototyping, and production are phases on the same codebase, not three separate engagements.
How we work · The FluxCo Method
Validate, prototype, production.
One operating principle: validate with a real user, then ship. Three phases on the same codebase. No discovery decks, no throwaway prototypes, no rewrite at the end.
The principle
Validate, then ship. A working thing in front of a real user beats a written spec in front of a stakeholder, every time. AI-assisted development compresses the feedback-to-live loop from weeks to hours, which is what lets this method run at the speed it does. Each phase exists to feed the next: Validate produces what to build; Prototype proves the idea works; Production hardens it for daily use.
The codebase is the same throughout: same repository, same architecture, same deployment URL. We write production-shaped code from session one, not because the spec is settled on day one, but because we know we will not be rewriting later.
Phase 1 — Validate
Sit with the people who will actually use the software. Watch the workflow as it exists today. Find what only software can fix.
Real users in the room. Not a manager, not a sponsor — the people who do the work today. End users describe problems differently than the people approving the budget. We need both, but the work starts with users.
Map the workflow before the solution. What does the day-to-day look like? Where does it break? What gets done on paper, in a spreadsheet, or in someone's head? The problem statement comes out of the conversation, not out of a document we wrote in advance.
Define success in one paragraph. Not a 12-point KPI list. One paragraph describing what the world looks like when this works, and a single test we can run later to know if we got there.
Phase 2 — Prototype
A working demo on a real URL, against real data, within days. Iterate with users while the problem is still fresh.
Live URL from session one. Not Figma, not a sandbox — a real deployed URL that the user can open on their own machine. AI-assisted development compresses boilerplate so the first version exists in hours, not weeks.
Real data where possible. A prototype against synthetic data tells you the prototype works. A prototype against real data tells you the idea works. We use representative slices of real data wherever security and consent allow.
Same-day iteration. The user clicks through, hits a friction point, and says it out loud. We adjust the same day. The short feedback loop is the whole point: sprint-planning ceremonies and two-week demo waits would erase the speed.
Phase 3 — Production
Same codebase, hardened. Auth, persistence, monitoring, performance, security. The thing that validated the idea is the thing that ships.
No throwaway prototypes. The codebase that proved the idea is the codebase that goes to production. We write for performance, security, and maintainability from session one — that is what makes this possible.
Production from the start, hardening at the end. Authentication, data persistence, monitoring, error handling, and integrations get layered on once the workflow is validated. We do not re-architect — we add the missing layers.
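To make "add the missing layers" concrete, here is a minimal sketch of the idea. All names (handle_convert, require_auth, catch_errors) are hypothetical, not FluxCo code: the validated prototype stays a plain handler, and production hardening wraps it rather than rewriting it.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("app")

def handle_convert(request: dict) -> dict:
    """The prototype logic from session one -- unchanged in production."""
    return {"status": "ok", "converted": request["payload"].upper()}

def require_auth(handler):
    """Hardening layer added after validation: reject unauthenticated calls."""
    @functools.wraps(handler)
    def wrapped(request):
        if request.get("token") != "secret":  # stand-in for a real auth check
            return {"status": "error", "reason": "unauthorized"}
        return handler(request)
    return wrapped

def catch_errors(handler):
    """Hardening layer: turn unexpected exceptions into clean error responses."""
    @functools.wraps(handler)
    def wrapped(request):
        try:
            return handler(request)
        except Exception:
            log.exception("handler failed")
            return {"status": "error", "reason": "internal"}
    return wrapped

# The production endpoint is the same prototype code, with layers added around it.
production_handler = catch_errors(require_auth(handle_convert))
```

The prototype function never changes; authentication and error handling are wrappers added once the workflow is validated, which is the sense in which nothing gets re-architected.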
You own the codebase. Source code, deployment configuration, and handover documentation, all yours at the end. Run it yourself, hand it to your internal team, or stay on a light retainer for the first 30–60 days. No forced lock-in.
What this method is not
Naming what we avoid is as important as naming what we do.
- Not waterfall with a fresh coat of paint. This is not a discovery phase that produces a 40-page spec, a build phase that executes it, and a handover phase that fixes everything wrong with both. We compress those into one continuous codebase.
- Not throwaway prototyping. "Build a quick prototype, then we'll rebuild it properly" is a tax we refuse to charge. AI-assisted development eliminates the rationale for it — production-shaped code from session one is achievable now.
- Not a deck-first engagement. If the deliverable from a working session is a slide deck, we have skipped the part that matters. The deliverable should be software the user can click on, every time.
- Not a fixed-scope contract negotiated against a spec doc. Real software discovers its own requirements as users use it. We scope around outcomes (the success criteria from Validate), not against feature lists.
The method, in one example
Arch Convert is the cleanest case study of the method running end to end.
FAQ
What if Validate shows the idea is wrong?
That is a successful Validate phase. Finding out in 3 days that the original idea has a fundamental flaw beats finding out 6 months in, after a full build. We have had sessions where the original idea shifted entirely based on what real users said when they actually clicked through something. Pivoting at the Validate or Prototype stage costs days, not months. We are happy to walk away if the work is wrong; the alternative is shipping the wrong thing.
How long does each phase take?
Validate: 1 to 5 working days, depending on how scattered the user base is. Prototype: 3 days to 2 weeks for a working demo on a real URL. Production: 2 to 6 weeks, varying with scope (auth, integrations, scale, compliance). End to end, most engagements run 4 to 8 weeks from first session to a production-grade system. Arch Convert went from problem statement to working prototype in a single night, and from there to daily use at two architecture firms within 2 weeks; that is the fast end of the range, not typical.
What happens to the prototype code?
We reuse it. The codebase that validates the idea is the codebase that ships. We write for performance, security, and maintainability from session one, which means there is no throwaway-prototype-then-rewrite step. AI-assisted development is what makes this possible: the boilerplate compresses, so the time saved goes into producing code that is already production-shaped, not into a rewrite later.
Who should be in the session, and what should we bring?
The people who will actually use the tool. Not the manager who approved the budget, not a project sponsor: the end users. Bring a problem statement (one paragraph, plain English), not a spec document. We will work out the spec together in the session. Optional but useful: example data, screenshots of the current workflow, and any failed attempts at solving the problem with off-the-shelf tools.
Have a problem worth validating?
Tell us what you're trying to solve. We'll tell you whether we can validate it in a week, and what it would take to ship.
Book a call