The Interview I Bombed (And What I'd Do Differently)


About a year ago I had a final-round technical interview for a senior iOS position at a major e-commerce company. I'd cleared every previous round. The role was a good fit on paper. I had two decades of experience, a portfolio that spoke for itself, and enough interview reps that the format shouldn't have rattled me.

I spent the next hour drawing disconnected boxes on a digital whiteboard and saying "I don't know."

The setup

The interviewer described a product feature: LLM-powered product search. Users describe what they're looking for in natural language and the app surfaces relevant results. Not traditional keyword matching. Conversational. AI-driven.

Then he opened a shared whiteboard and asked me to architect it.

The unraveling

I knew the pieces. I'd built search interfaces, worked with APIs, designed client-server architectures, thought about latency and fallback behavior in production systems. I had the vocabulary for this conversation. None of that mattered, because I never gave myself a chance to access any of it.

What I did instead was stare at a blank whiteboard and try to produce a finished architecture out of thin air. No scoping. No questions. No conversation with the interviewer about what "conversational search" even meant in their context. I just started drawing things: a box labeled "client," a box labeled "API," an arrow between them. And then I sat there. Staring at these shapes I'd drawn. Realizing I had no idea what to draw next because I hadn't defined the problem I was trying to solve.

The interviewer gave me openings. He asked follow-up questions that were clearly designed to get me talking, to pull me into the kind of collaborative design conversation the session was supposed to be. I didn't take any of them. I was too far into my own head by that point, stuck in a loop of "I should know this" and "why can't I think right now," and that loop left no room for actually thinking.

I said "I don't know" more in that hour than in the previous decade of interviews combined. The interviewer was patient about it, which, if I'm being honest, almost made it worse.

What he was actually looking for

This is the part that bothers me the most. Because I know what that session was supposed to be. I have run sessions exactly like it.

He didn't want a complete system design. Nobody expects a finished LLM search architecture in 60 minutes. What he wanted was to watch me work through ambiguity. He wanted me to ask things like: How big is the catalog? Is the LLM self-hosted or accessed through a third-party API? Is this replacing the existing search or augmenting it? Are we talking multi-turn conversation or single-query interpretation? What are the latency constraints?

Each of those questions would have narrowed the problem. Each one would have given me something concrete to draw. And each one would have turned the whiteboard into what it was supposed to be: a conversation. Not an exam.

And the actual interesting architecture problems, the ones that would have demonstrated senior-level thinking, were sitting right there waiting for me to find them. How do you handle the 2-3 second latency of an LLM response in a mobile UI that users expect to feel instant? How do you structure prompts to return output you can map to a real product catalog instead of hallucinated results? How do you blend LLM results with traditional search so the feature degrades gracefully when the model is slow or wrong? How do you stream partial responses into a native iOS interface using Swift's async/await?
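That last thread, streaming partial output into a native interface, is the kind of thing I could have sketched in a couple of minutes. Here's a minimal illustration, assuming a hypothetical `tokenStream` helper standing in for a streaming network response, of how partial LLM output might accumulate into UI-facing text instead of blocking on the full reply:

```swift
import Foundation

// Hypothetical source of partial LLM output. In production this would
// wrap a streaming network response (e.g. URLSession.bytes(for:)).
func tokenStream(_ tokens: [String]) -> AsyncStream<String> {
    AsyncStream { continuation in
        for token in tokens {
            continuation.yield(token)
        }
        continuation.finish()
    }
}

// Accumulate partial responses as they arrive, calling `onUpdate` with
// the text so far, so the UI can render incrementally instead of
// stalling for the full 2-3 second response.
func streamResponse(
    from stream: AsyncStream<String>,
    onUpdate: (String) -> Void
) async -> String {
    var text = ""
    for await token in stream {
        text += token
        onUpdate(text)  // e.g. update an @Published property on the main actor
    }
    return text
}
```

In a real app, `onUpdate` would hop to the main actor and drive a SwiftUI view; the point is the user sees something within the first few hundred milliseconds rather than staring at a spinner.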

I never reached any of those questions. I was too busy panicking about not having an answer to notice that nobody was asking me for one yet.

The gap between knowing and doing

I can write that list of questions now without effort. I could have written it the day after the interview. The knowledge was never missing. What failed was the process of accessing it under pressure.

There's a version of this story where I tell you I went home, reflected calmly, extracted a clean lesson, and moved on with my life. That is not what happened. I replayed that interview for weeks. I'd be in the shower, or driving, or trying to fall asleep, and the whole thing would just unspool again in my head. The blank whiteboard. The silence. His patience. I was embarrassed in a way that surprised me, not because I'd failed a technical screen, but because I'd failed at the thing I'm supposed to be good at. I help other engineers prepare for exactly these conversations. I evaluate candidates in sessions exactly like this one. And when it was my turn, when it actually mattered, I froze.

The honest lesson, and I've sat with this for a long time now, is that there is a real and meaningful gap between understanding the right approach and being able to execute it when your nervous system is telling you you're failing in real time. Interview prep that only addresses the knowledge side of that equation is incomplete. You also have to practice the meta-skill of slowing down when your instinct is to speed up, and asking questions when every part of you wants to just perform.

What I'd do differently

I've thought about this more than I'd like to admit. If I were back in that room, here is how the first ten minutes would go.

I'd start by saying the problem back to the interviewer in my own words. Not to stall. To give myself a foothold, something concrete to stand on. "So we're building a feature where users describe what they want in natural language and the app returns relevant products using an LLM. Let me ask a few things before I sketch anything."

Then I'd ask questions until the problem had edges. Every answer the interviewer gave me would give me something to draw. "It's a third-party API" means I don't need to think about model serving infrastructure. "It's augmenting existing search" means I already have a fallback path built in. The architecture starts emerging from the constraints, not from my head. And that's the whole point.

Then I'd draw the simple version first. Client, backend service, LLM call, catalog mapping, response. Five boxes. From there the conversation opens up naturally: latency handling on iOS, prompt structure on the backend, caching and rate limiting at the system level, monitoring for a feature where "correct" is inherently subjective. Each of those is a thread the interviewer can pull on, and each one is a chance to show real depth.
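One of those threads, degrading gracefully when the model is slow, is concrete enough to sketch. A minimal version, assuming hypothetical `primary` (LLM) and `fallback` (keyword search) closures rather than any real API: race the LLM call against a deadline, and fall back to traditional search if it loses.

```swift
import Foundation

// Race the primary (LLM) search against a deadline. If it's too slow
// or throws, fall back to the traditional keyword path. Both paths
// must return the same result type.
func searchWithFallback<Result: Sendable>(
    primary: @escaping @Sendable () async throws -> Result,
    fallback: @escaping @Sendable () async throws -> Result,
    timeout: Duration = .seconds(2)
) async throws -> Result {
    do {
        return try await withThrowingTaskGroup(of: Result.self) { group in
            group.addTask { try await primary() }
            group.addTask {
                // Deadline task: if it finishes first, treat as a failure.
                try await Task.sleep(for: timeout)
                throw CancellationError()
            }
            let first = try await group.next()!  // whichever completes first
            group.cancelAll()
            return first
        }
    } catch {
        return try await fallback()
    }
}
```

The design point is that the fallback path isn't an afterthought bolted on later; because the feature augments existing search, the degradation story is built into the first five boxes.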

That's the session he was trying to have with me. I just wasn't present enough to have it.

Why this post exists

I have other posts in this series where I talk about interview questions and how to answer them well. I stand by all of that advice. But I would be dishonest if I pretended I always follow it myself. This was a case where I didn't, and the cost was a role I wanted at a company I respected.

If you've ever walked out of an interview knowing the gap between what you showed and what you're actually capable of... I don't have a clean takeaway for you. I just know the feeling. And I know that the next time I sit down at a whiteboard, I'm going to start by asking a question instead of drawing a box.