TryScribe V2 | The story behind the hook analyzer
Build Notes
This case study captures TryScribe in its in-between stage. V1 had already been retired, and V2 did not exist beyond sketches, notes, and a waitlist page. Nothing was coded. No scoring logic was running. The only thing that was real at this point was the clarity forming around a much sharper creator problem and the decision to validate the idea before writing any backend. This is the documentation of that phase.
1. Origin
TryScribe did not begin as a creator tool. It started from a quiet observation I kept making whenever I watched creators work. They spent hours recording and editing, but their confidence in the intro never felt stable. The first two seconds carried more pressure than the entire video. Some openings worked. Some didn’t. And no one could explain why.
Creators repeated the same line again and again, searching for a version that “felt right.” That moment stayed in my mind long after the video ended. I didn’t plan to build a product around it. But the thought kept returning, and eventually, I stopped pushing it away. I started taking notes to pinpoint exactly where the failure happened.
Early observation notes narrowing down the problem space to a single focus point
2. Early Shape
The first shape of TryScribe had nothing to do with creators. It was a bundle of tools stitched together because each one was useful on its own. A PDF chat tool. A video summariser. An image compressor. A few more utilities built out of curiosity. People used them once and disappeared. Nothing connected these tools into a product with purpose. They solved isolated tasks. They did not solve a continuing problem.
At some point, the analytics didn’t matter because the product itself told the truth. There was no central idea holding anything together. TryScribe V1 was a toolbox, not a product, and toolboxes do not create habits.
3. Limits I Had
This transition phase did not happen in an empty space. I was handling freelance work, college pressure, content creation, and my personal expectations all at the same time. My time was split, and my ability to think deeply came in short windows.
Because my time was fragmented, I couldn't afford to build a complex monolith that required days of continuous coding to understand. The system had to be modular by necessity. If I only had 90 minutes on a Tuesday night, I needed to work on one tiny, isolated piece of the logic without breaking the rest of the application. These constraints defined the architecture before I wrote a single line of code.
4. The Promise
The shift toward creators happened slowly. While analyzing viral content patterns, I kept running into a brutal reality. The decision for a viewer to commit isn't made over 10 seconds; it's made between frame 0 and frame 60. If the stimulus doesn't land in that tiny window, the rest of the content is irrelevant.
The promise of V2 had to be obsessively focused on measuring that critical window.
Defining the critical window. The product's entire value rests in the first few seconds
The promise became simple and honest: help creators see the real strength of their hook before posting. This clarity was enough to let the old version go.
5. How I Plan to Build the System (Not Yet Built)
Since no backend exists yet, this section describes intent rather than implementation. The analyzer needs to understand structure, pacing, clarity, emotional tone, and viewer expectations. This requires a more thoughtful system than a single API call wrapped inside a UI.
While a simple Next.js API route could handle a basic wrapper, a true scoring engine needs more robust pre-processing. By choosing Python (FastAPI) for the backend, I gain access to mature NLP libraries before the data even hits an LLM. This allows for cheaper, faster initial checks without wasting expensive tokens on obvious failures.
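To make "cheaper, faster initial checks" concrete, here is a sketch of the kind of gate that could run before any LLM call. Everything in it is an assumption for illustration: the function name, the thresholds, and the rejection reasons are placeholders, not the real scoring logic.

```python
# Hypothetical pre-LLM gate: reject obviously unusable hooks before
# spending tokens. Thresholds are illustrative, not tuned values.

def cheap_precheck(hook: str) -> list[str]:
    """Return a list of failure reasons; an empty list means 'send to the LLM'."""
    issues = []
    stripped = hook.strip()
    words = stripped.split()

    if not stripped:
        issues.append("empty_input")
    elif len(words) > 40:
        # A 40-word "hook" cannot land inside the frame-0-to-60 window.
        issues.append("too_long_for_window")

    if stripped and stripped == stripped.upper() and any(c.isalpha() for c in stripped):
        issues.append("all_caps_shouting")

    return issues
```

In the planned FastAPI route, a non-empty result would short-circuit the request with structured feedback instead of an expensive LLM round trip.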
The planned decoupled architecture, separating the Next.js UI from the Python processing layer
This architectural outline exists only on paper for now. But it feels aligned with the nature of the problem.
The Monorepo Structure
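Since the repository doesn't exist yet, here is one possible layout, assuming a single repo holding both halves of the decoupled architecture. Every path below is a placeholder sketch, not a committed structure.

```text
tryscribe/
├── apps/
│   └── web/          # Next.js UI: input field, score display, waitlist page
├── services/
│   └── analyzer/     # Python (FastAPI) scoring engine and NLP pre-checks
├── packages/
│   └── shared/       # Shared types such as the HookScore response shape
└── docs/             # Build notes like this one
```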
6. Designing the Experience
Creators judge tools by the speed and clarity of the first interaction. I wanted the experience to feel like a conversation, not a process. So the design direction focuses on a single clean input field, immediate scoring, and clear feedback.
Anything that introduced friction was removed early. There is no heavy onboarding, no multi-step flow, and no decorative dashboard. The best version of TryScribe is the one that feels invisible.
7. Problems I Had to Solve
Even without code, some challenges appeared early in planning. The scoring system needs to feel consistent or creators won’t trust it. Hook phrasing can change slightly while meaning stays the same, so the pipeline has to detect subtle shifts rather than treat each input as unrelated.
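One way to handle those subtle phrasing shifts is to normalise each hook before anything else and compare variants with a similarity ratio, so trivial edits don't register as brand-new inputs. This is a stdlib-only sketch of the idea; the real pipeline would likely use proper embeddings, and the 0.9 threshold is an assumption.

```python
import difflib
import re

def normalise(hook: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so trivial
    edits don't look like brand-new inputs."""
    cleaned = re.sub(r"[^\w\s]", "", hook.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

def same_hook(a: str, b: str, threshold: float = 0.9) -> bool:
    """Treat two phrasings as the same hook if their normalised forms
    are nearly identical."""
    ratio = difflib.SequenceMatcher(None, normalise(a), normalise(b)).ratio()
    return ratio >= threshold
```

Caching scores keyed by the normalised form would then keep repeated small tweaks from producing drifting, contradictory feedback.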
```typescript
// The planned response shape for the Hook Analyzer
// This ensures the frontend always receives structured feedback
interface HookScore {
  score: number; // 0-100
  retention_prediction: {
    frame_0: number;
    frame_60: number; // The critical drop-off point I identified
  };
  feedback: {
    clarity: string;
    pacing_score: "too_slow" | "optimal" | "rushed";
    emotional_tone: string[];
  };
}
```
Latency is another major concern. A slow feedback loop destroys the creative flow. The system must return insights quickly, which means the backend architecture will need to be optimised from day one.
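One tactic for that optimisation is to run the independent evaluation modules concurrently and time-box the whole analysis. The sketch below uses stubbed check functions; the names and the 2-second budget are assumptions for illustration, not measured targets.

```python
import asyncio

# Stubbed evaluators standing in for the real modular checks.
async def check_clarity(hook: str) -> str:
    await asyncio.sleep(0.05)  # stand-in for a model or NLP call
    return "clear"

async def check_pacing(hook: str) -> str:
    await asyncio.sleep(0.05)
    return "optimal"

async def analyse(hook: str, budget_s: float = 2.0) -> dict:
    """Run independent checks concurrently under a hard latency budget."""
    clarity, pacing = await asyncio.wait_for(
        asyncio.gather(check_clarity(hook), check_pacing(hook)),
        timeout=budget_s,
    )
    return {"clarity": clarity, "pacing": pacing}

result = asyncio.run(analyse("Stop scrolling right now"))
```

Running checks in parallel means total latency tracks the slowest module rather than the sum of all of them, which is what keeps the feedback loop conversational.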
There is also the tone of the feedback. Early prompt tests felt mechanical and stiff. The goal is to produce insights that feel practical, honest, and human, not like instructions from a rigid model. This will require multiple iterations once development begins.
8. Dead Ends
Before committing to V2, I explored several directions that didn't survive. One idea was building a dashboard that combined multiple creator tools into one workspace. It looked interesting, but it did not remove any friction. It added more than it solved. Another dead end was trying to rely on a single large prompt to evaluate hooks. It produced inconsistent and vague feedback, which pushed me toward the modular evaluation plan.
These dead ends clarified the shape of V2 more than the successful attempts did.
9. What I Know Now
TryScribe taught me that a product needs one central idea strong enough to carry everything else. Without it, even polished features feel empty. I learned that creators value clarity over complexity, and that building something useful often requires removing more than adding. Most importantly, I realised that real progress often happens quietly, in documents, notes, and decisions made long before a single line of code is written.
10. Where It Stands Today
TryScribe V2 is still in the validation stage. There is no scoring engine, no backend, and no internal dashboard yet. But the foundation is clearer than anything I built during V1.
While the backend is still in the design phase described above, the public-facing vision is clear. The waitlist page isn't just a form; it's the first test of the product's positioning.
The current public face of V2, used for validating demand before engineering begins
The real work has not begun, but the direction finally feels honest.