Hiring in the Age of AI: How to Stop Interviewing a Chatbot by Accident
Or: I spent an hour today asking questions that ChatGPT answers better than the candidate, and somehow I was the one who felt embarrassed
I did an interview today. I walked out of it with two things: a page of notes and the specific flavour of headache you only get from an hour of polite interrogation that neither party particularly enjoyed. Because frontend hiring in 2026 has evolved into a beautifully absurd ritual where two grown adults sit on a video call and carefully pretend the third person in the room does not exist.
You know the one. The one with the blinking cursor in the other tab. The one typing faster than both of us combined.
The candidate pretends they are not using it. I pretend I believe them. We both pretend this is a meaningful signal about whether they can ship a feature without setting the build pipeline on fire. Everybody leaves the call feeling vaguely unclean. Ten out of ten, no notes, would not recommend to a friend.
Except I do have notes. A lot of them. I also have opinions, because I have been stewing on this all afternoon and the coffee has not helped. So here we are. This post is for two people: the person who has to run the interview and does not know where to point the flashlight anymore, and the person on the other end of that call who read three LinkedIn threads about “how to beat AI-powered interviews” and somehow came out more confused than when they went in. Both of you are going to get roasted, gently. It is for your own good.
The CV that ChatGPT wrote and nobody proofread
Let us start before the call even happens. Let us start with the CV.
For about a year now I have been seeing documents that look less like CVs and more like character sheets from a game I never agreed to play. “React 85%. Angular 90%. TypeScript 80%. Node 75%.” Clean, round, symmetrical, suspiciously confident, every one ending in five or zero as if the candidate’s skills were rounded for cashier convenience. It is the kind of precision nobody in the history of software has ever felt about their own abilities. I have been doing frontend for nearly two decades and I still could not tell you whether I am “87% Angular” or “63% Angular” or just “a guy who mostly remembers where the semicolons go and when to stop arguing about tabs.”
Nobody writes about themselves in percentages. It is not how humans self-assess. It is, however, exactly how an LLM fills in a table when you politely ask it to “make my CV look more professional.” And the LLM, bless it, does exactly what you asked, with the quiet competence of a kid who really wants that gold star.
And I am not alone in noticing. The numbers on this are actually pretty stark: around two-thirds of job candidates now use AI somewhere in their application process, 90% of hiring managers report a spike in low-effort spammy applications, and 78% of companies actively check for AI-generated content. One Resume Now survey of 925 HR workers found that 62% of hiring managers say AI-generated resumes without personalization lead to rejections. So when I tell you “I can smell an AI-written CV from orbit,” I am not being dramatic, I am describing the average Tuesday for anyone doing frontend hiring in 2026.
To be completely clear: I am a huge fan of numbers on a CV. I love a good SMART bullet point. “Cut cold start from 2.1s to 380ms on the pricing page.” “Migrated 140 components from AngularJS to Angular 17 over eight months.” “Replaced our home-grown form library with Reactive Forms and removed 4,000 lines of dead code.” These are gorgeous. I want to hear the story behind every single one. Give me more of those.
What breaks the spell is not the presence of numbers. It is the too-good-to-be-true energy of the whole document. You know the vibe:
“Increased application performance by 47%.”
“Improved conversion rate by 23%.”
“Contributed to revenue growth of 3.5M.”
“Reduced bundle size by 61% and improved user retention by 18%.”
Four bullets. Every one a banger. No context, no baseline, no attribution, no “we tried three things and this was the one that worked.” Just a highlight reel of round numbers that all happen to land on the good side of zero. It reads less like a career and more like a pitch deck that an LLM generated after being asked to “make it sound impactful.”
This is not a trap question. It is honestly not even the candidate’s fault half the time, because the resume-builder tool literally prompted them with “add measurable impact here.” But the second you ask “walk me through the 18% retention number, what was the baseline, what was the measurement window, what else changed in that sprint, who owned that metric”, one of two things happens. Either a real story unfolds (and you relax, because the candidate was just over-polishing something real) or the whole thing deflates in about fifteen seconds. Both outcomes are useful. That is the point.
The tell, again, is not the numbers themselves. It is the pattern. When every single bullet sparkles and not one of them admits a trade-off, a failure, a “it worked but we spent three weeks on the wrong thing first”, what you are reading is not a career summary. It is a rendered document, and the person who rendered it may not even remember what went in. That is the thing to probe.
What I actually look for in a CV now:
Projects described as projects, not as a vague cloud of “modern frameworks and best practices”
Technologies attached to actual work, not floating in a separate Skills section like product labels in a supermarket
Numbers that mean something (“cut initial render from 2.1s to 380ms”, “migrated 140 components from AngularJS to Angular 17 over eight months”) instead of numbers that mean nothing (“JavaScript 92%, soft skills 88%”)
The ability to expand every single bullet into a five-minute story. Because if you cannot, it is not your CV. It is a CV-shaped object that arrived in your inbox.
That last one is the only real rule. The CV is not a document, it is a contract for the first 30 minutes of our conversation. Everything on it is fair game. If a line is too dangerous to defend, take it out before you hit send. Radical concept, I know.
“Is the candidate cheating?” Wrong question.
Now the elephant. Remote interviews. Second monitor. Cursor open in the background. Claude whispering sweet nothings into a hidden AirPod. The whole pantomime.
Here is my confession. I used to care about this a lot. Like, embarrassingly a lot. The instinct is to build traps. Stricter challenges, trickier questions, increasingly baroque live-coding exercises, the whole interrogation kit. At my lowest point I was basically one step away from asking candidates to solve a LeetCode hard while balancing a glass of water on their head, just to prove they were really there. It does not work. It is miserable for everyone including me, good cheaters are still good at cheating, bad cheaters fail anyway, and at some point you catch your own reflection in the webcam during a 45-minute technical screen and realise you have become a TSA agent with a subscription to Pluralsight. Not a great look.
So I dropped it. Not because I stopped caring, but because I was asking the wrong question. The real question is not “how do I stop the candidate from using AI?” It is:
Why am I asking anything that AI answers better than a human in the first place?
Because if your interview question can be solved by a well-prompted LLM in three seconds, congratulations, you have just verified that your candidate knows how to type into a text box. That is a skill. It is just not the skill you are hiring for. Probably.
“What is a closure?” AI wins. “Explain useEffect.” AI wins. “Difference between let, const, and var?” AI wins, and also I am going to bed.
These were great questions in 2015 when Google was the cheat code and we all pretended it was not. They are now the equivalent of asking a driver to recite the dictionary definition of “steering wheel.”
So I moved the whole interview somewhere the model cannot follow.
Interview the human, not the stack
The questions that work now (and work precisely because they are AI-proof, not by accident but by design) go something like this:
“Tell me about the worst bug you shipped in the last year. What was it, how did you find it, and what did you change so it would not happen again?”
“Describe a moment when you knew your team was making a bad architectural call, and you let it happen anyway. Why? What would you do differently today?”
“Tell me about a technical decision you are proud of. Now tell me about one you regret.”
“What is the hardest code review you have ever given or received? Not technically hard, emotionally hard.”
“When did you last change your mind about a pattern or a practice you used to defend? What convinced you?”
None of these can be prompted. Not because they are clever, but because the answer requires specific humans, specific code, specific hallway conversations, specific 2 a.m. Slack messages. You can try to bluff, but you cannot get past the third follow-up question. There is always a third follow-up question. That is the trick. It is not a magic trick. It is just “keep asking ‘and then what’” until either a real memory surfaces or the whole thing collapses.
And here is the counterintuitive bit. The candidates I trust most are the ones who get things wrong in a very specific, human way. The ones who say “actually, I thought X for years and then I got bitten by it.” The ones who hesitate, reconsider, correct themselves. That hesitation is the sound of someone who has actually been in the room. Candidates who answer everything smoothly and confidently usually fall into one of two buckets: they have an earpiece, or they have never shipped anything that hurt them. I do not know which is worse.
Actually I do. The second one is worse. The first one at least owns a microphone.
Trade-offs, opinions, and the dying art of having a take
I still ask about patterns. I just do not ask what they are. I ask when you would not use them.
“When are Angular signals actually better than RxJS, and when are they worse? Give me a concrete example of each.”
“Page-Feature Composition vs. classic smart/dumb components: where does PFC give you leverage, and where does it fall apart?”
“OnPush, trackBy, defer, lazy loading. Which of these actually move TTI, and which are just code hygiene pretending to be performance?”
“Last project where you consciously chose NOT to use a state management library even though there was state. Walk me through the reasoning.”
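To make the signals-versus-RxJS question concrete, here is the mental model I want candidates to reason about: a signal is a synchronous, pull-based value you can read at any moment, which is exactly why it beats a stream for derived view state and loses for events spread out over time. The sketch below is a toy illustration of that contract, not Angular’s actual implementation (the `signal` and `computed` here are hand-rolled stand-ins):

```typescript
// Toy signal/computed pair -- an illustration of the synchronous
// pull-based contract, NOT Angular's real implementation.
function signal<T>(initial: T) {
  let value = initial;
  const read = () => value;
  // Attach a setter to the read function to mimic the call syntax.
  return Object.assign(read, { set: (v: T) => { value = v; } });
}

// A computed here simply re-runs on every read; real implementations
// memoize and track dependencies, but the synchronous pull is the point.
function computed<T>(fn: () => T): () => T {
  return fn;
}

const price = signal(100);
const qty = signal(3);
const total = computed(() => price() * qty());

console.log(total()); // 300
price.set(120);
console.log(total()); // 360: the derived value is correct the instant
// you read it -- no subscription, no async hop, no teardown.
```

The RxJS side of the trade-off is the inverse: the moment the value is a sequence of things happening over time (debounced input, retries, cancellation), the pull model above stops being enough and a stream earns its complexity. A candidate who can articulate that boundary has the opinion I am fishing for.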
I am not looking for correct answers. I am looking for an opinion that the candidate can defend without outsourcing. They can disagree with me. I want them to disagree with me. What I cannot work with is the verbal equivalent of a shrug wrapped in jargon. “It depends” is not an opinion, it is a survival strategy, and I am running out of patience for it.
The developers I want on my team have taste. Taste is the one thing the model cannot give them. It also happens to be the first thing that atrophies when you let the model drive every decision.
The AI literacy test: not how you use it, why you think it works
This is where it gets fun. I ask every candidate about AI now. Not to catch them, but to find out how deeply they actually understand the thing they are putting in their workflow.
Because I see two failure modes, and honestly both of them keep me up at night in different ways.
Type 1: The Dabbler. Uses ChatGPT “sometimes, for small things.” Does not know which model is behind the chat. Has not heard of context windows. Genuinely thinks MCP stands for something Marvel-adjacent. For the Dabbler, AI is “a slightly faster Stack Overflow” and any attempt to discuss agents, tool use, or RAG produces the facial expression of someone being handed a wine list in a language they do not speak. A year from now the Dabbler will be lapped by anyone who took this seriously, and they will not even notice until a junior colleague reviews their PR and politely asks why they are still writing boilerplate by hand like it is 2022.
Type 2: The Full Vibe Coder. Oh buddy. If you know, you know, and if you do not, I already wrote a whole post about this one called Stop Vibing, Start Understanding, where I got most of the yelling out of my system, so I will try to be brief here. The Vibe Coder prompts “build me a login component” and accepts whatever falls out of the machine like a gumball from a supermarket dispenser. They are fast. Terrifyingly fast. Right up until something breaks, at which point they stare at their own codebase with the haunted expression of a person trying to read Linear A after a long weekend, because they did not actually write the code, they commissioned it, and you cannot debug a commission. They are a load-bearing incident waiting to happen. The incident will happen on a Friday. It is always a Friday. I do not make the rules.
Both of these people will tell you, with total sincerity and zero self-awareness, that they “use AI a lot.” Both are wrong in opposite directions. Neither is who you want on call at 2 a.m. when the error dashboard turns the colour of a traffic cone.
The candidate I want sits in the middle and sounds like this:
“Yeah, my AI hallucinates sometimes. Here is how I usually catch it.”
“I keep my prompts short because I know the model loses track past a certain context length, and I would rather feed it one focused chunk than my whole repo.”
“I let it write boilerplate and unit tests. I do not let it near anything security-adjacent without reading every line.”
“The other day Claude confidently invented a method on RxJS that does not exist. I only caught it because the types screamed. If I had been vibe coding, that would have shipped.”
“I use Cursor for inline, Claude Code for bigger refactors, and I keep a list of things I never delegate. Yes, I have thought about this.”
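That “types screamed” save is easy to demonstrate. The snippet below swaps the candidate’s RxJS example for a plain array method, and the hallucinated name is invented for illustration, but the mechanism is identical: TypeScript refuses to compile a call to a method that was never real, so the lie dies before it ships.

```typescript
const prices: number[] = [3, 1, 2, 1];

// What the model "remembered" (a hypothetical hallucinated API):
//   const unique = prices.distinctSorted();
// tsc: Property 'distinctSorted' does not exist on type 'number[]'.
// The types scream at compile time, long before production does.

// What actually exists in the language:
const unique = [...new Set(prices)].sort((a, b) => a - b);
console.log(unique); // [ 1, 2, 3 ]
```

This is also the practical argument for keeping strict typing on in an AI-assisted codebase: the compiler is the one reviewer that never gets tired of checking whether an API exists.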
I do not need a lecture on transformer architecture. I need operational understanding. The same way I expect a frontend dev to know roughly what the event loop is doing without having read the V8 source code. Curious people ask themselves these questions unprompted. Incurious people treat the tool as a black box, and black boxes, as anyone who has been on-call can confirm, always explode at the worst possible moment.
If a candidate uses AI every day and cannot tell me why it sometimes lies, that is not a skill gap. That is an attitude problem. And attitude scales across a codebase faster than any framework.
The checklist, because you are going to skim anyway
Fine. Here, have it in one place:
The CV is theirs. Every bullet expands into a five-minute story. If it does not, it is not theirs.
Decisions, not definitions. “Why did you choose this” beats “what is this” every time.
Trade-offs with a take. “We picked X because Y, but today I would go with Z because...” That is the sound of a senior.
Mistakes and the lessons. No mistakes means either they are lying or they never shipped anything that mattered.
Tool understanding, AI included. How, why, where it breaks, what it cannot give them.
Thinking out loud. I give a small problem, not to see the solution but to see the path. A solution without a path is a thing the LLM produced.
Translation skills. “Explain this piece of your project to me like I am a PM.” Because the team is not just devs, and if you cannot talk to the people who pay you, you are a very expensive rubber duck.
If you are on the other side of the table
Okay, switching sides. Hello. Statistically, some of you reading this are the people being interviewed, not the ones doing the interviewing. This section is for you, and I am going to be a little more direct than usual, because I think the advice going around the internet right now is making things worse.
Let me start with the single most important thing, and then we will get tactical.
“I do not know” is a complete sentence
I cannot stress this enough. The strongest thing you can say on a technical interview is “I do not know, but here is how I would find out.” The second you try to ad-lib your way through a question you do not understand, everyone on the call hears it. It is obvious. It is obvious the way a bad lie from a child is obvious. And the tragedy is that the question you were bluffing through was probably something I would have happily explained, moved past, and not held against you at all.
Seniors say “I do not know” constantly. It is one of the ways you tell a senior from a mid. Juniors say “I do not know” and then apologise for 30 seconds. Mids say “well, it kind of depends on, um, several factors, and in some contexts you might consider...” until the clock runs out. Be the senior, even if you are not one yet. The shortcut is available.
Do not scheme
I know the temptation. Everybody is whispering about earpieces, second monitors, AI overlays that read the screen and feed you answers, entire YouTube tutorials on “how to pass a technical interview with Cursor in the background.” Some of it even works, for about ten minutes. Here is what you need to understand: the hiring manager on the other side has done this hundreds of times and is specifically trained to notice the smell of a bluff. You are not as subtle as you think you are. Nobody is.
But more importantly, even if it works, what have you won? A job where every day is a new test you are going to fail because you lied yourself into it. A team that thinks you are a senior when you are a junior who is great at prompting. Six months of sweating every stand-up. At some point the mask slips, and then the fall is much harder than if you had just been honest in the first hour.
The market is tight right now, and I know how it feels to need the job. I am not lecturing you. I am telling you the scheme has a worse ROI than just being yourself. Truly.
The behavioural stuff nobody writes about because it feels beneath them
These feel embarrassingly basic to even list, which is exactly why I have to list them. Nobody writes about this stuff, so everybody gets it wrong, and then they go home and blame their rejection on “the vibes.” The vibes are downstream of the basics. Here are the basics.
Look at the camera, not at your own face. We have all been guilty of this one. Your own face is fascinating. It is right there, in the corner of the Zoom window, doing expressions you did not know it could do. But if your eyes are constantly darting sideways or down-and-to-the-right, we notice, and we notice in a very specific “is this person reading off something” way. If you need a moment to think, it is fine to look up or away briefly, the way people do in actual conversations. What sets off alarms is the rhythmic sideways scan of a person subtitling their own life in real time. Close the other tabs. Close the AI overlay you told yourself you would not actually use. You know the one. Yes, that one.
Let silence exist. When I ask a hard question, take five seconds. Take ten. Say “give me a moment to think about that” and then actually think. I love hearing that sentence. It is the opposite of a red flag, it is basically a heart emoji. Rushing to fill silence is what gets you in trouble, because the words that come out under panic are almost always wrong, and then you have to defend them, and now you are in a bar fight with yourself over an opinion you did not even have ninety seconds ago.
Do not memorise answers. Memorised answers have a specific cadence and we can all hear it. They sound like a LinkedIn post being read aloud by a person who used to do local radio, and nobody wants to hire a LinkedIn post. Your real experience, told messily, with detours and “wait no, that was the other project” corrections, is worth ten polished fakes. The mess is the credential.
Disagree when you disagree. Politely, but actually disagree. If I suggest an approach and you think it is wrong, say so. This is not a trap. It is the single biggest signal that I want to work with you. The candidates I still remember years later are the ones who pushed back on something I said and were right. The ones I have already forgotten are the ones who nodded along and called everything I said “a great point.”
Bring real questions. “What does a normal day look like” is fine, forgettable, and makes me think you downloaded a list. “What is the worst part of this codebase and why” is the kind of question that makes me want to hire you before we have even finished the call, because it is the question of a person who has already mentally moved in and is checking for damp patches. You are allowed to interview us back. You are encouraged to. If the company cannot handle being interviewed, run.
Do not oversell AI fluency you do not have. If you have barely touched Cursor, do not pretend you live inside it like a monk in a cave. If Copilot is basically your whole AI story, say so, and then tell me what you got out of it and the one time it steered you off a cliff. Honest beats impressive every single time in this specific conversation, because I will find out in two follow-up questions either way, and the difference between “honest and learning” and “bluffing and caught” is the entire interview.
The CV and prep bit
Since we are already here:
Read your CV out loud the day before. Every line should produce a memory, not a shrug. If a line makes you sweat, either cut it or prepare the real story behind it. You are not obligated to keep anything an AI tool put there “for impact.”
Have two or three war stories in your pocket. Context, problem, decision you made, trade-off you accepted, outcome, what you would change. Do not recite them. Know them well enough that you can pull the relevant one out on demand when a question lands nearby. This is the thing seniors actually do that juniors think is improvisation. It is not. It is preparation that looks like improvisation.
Know your own AI workflow, in your own words. Not “I use AI a lot.” That phrase is meaningless now. What specifically, for what, where do you not let it in, what was the last time it burned you. One minute of honest, specific answer is worth an hour of buzzwords.
Be ready to talk about what the model cannot do. Not NeurIPS level. “Here is a time it lied to me and here is how I noticed” level. This is where you prove you are a pilot, not a passenger.
Have opinions. Yours. Not mine, not Twitter’s, not the top answer on Stack Overflow. If I say something that genuinely lands, update your view in real time and tell me so. That is not weakness. That is the single most senior move in the entire interview.
And one last thing, which is going to sound obvious until you realise how often people skip it: be kind to yourself about the interview you just bombed. Everybody bombs them. I bomb them. The person interviewing you has bombed them. The market is brutal right now and a rejection is usually not a referendum on your worth, it is a referendum on a one-hour conversation between two tired humans on a Tuesday afternoon. Take the notes, find the one thing you want to do better next time, and move on. That is the loop. Welcome to it.
And if you are on my side of the table, one last word before we part: please, for the love of everything, stop asking questions whose answers live on the first page of a Google search. That was a decent filter in 2015 when the filter was the candidate’s memory. The memory now lives in a data centre in Virginia. You have to be the filter yourself. Sorry. That is the job now.
Wrapping up before I start ranting again
Hiring in the age of AI is not harder because candidates cheat. It is harder because almost everything we used to test for has quietly become free. Memorised APIs. Encyclopedic knowledge of Array methods. The ability to recite the Gang of Four without giggling at “Abstract Factory.” All of it, two tokens and a small monthly subscription.
The value of a candidate has moved. I wrote about this in “You’re Not a Coder Anymore. You’re a Solver” and I stand by every word: the job has shifted toward the things models are worst at. Context. Judgement. Taste. Hard-won scars. The ability to explain to a non-technical stakeholder why we are about to spend two sprints on something that, to them, “already works.” If you can read those qualities off a person in an hour, you will build a good team. If you are still opening your interviews with “what is a Promise”, you will build a team that looks great on paper and turns into a small-but-reliable fire the first time production has a bad Tuesday.
So, hiring managers: stop outsourcing the filter to questions that Google solved a decade ago and ChatGPT now solves in three seconds while also writing you a haiku about it.
Candidates: stop scheming, start saying “I do not know”, and for the love of the event loop, close the other tab. We can tell.
And everybody else: welcome to 2026, where the most important interview skill is being a recognisable human being, which is, objectively, the most ridiculous sentence I have ever had to write on a professional blog. I blame the robots. It is definitely their fault.