In a landscape where artificial intelligence skills are at a premium, organizations are facing an unexpected challenge: distinguishing candidates who genuinely possess AI expertise from those who merely leverage AI tools to fake competence. Beth Glenfield, speaking at DevDay, offers a refreshing perspective on this growing phenomenon. Her insights reveal a complex reality in which technical assessment processes need urgent recalibration to identify authentic talent in an increasingly AI-augmented world.
Traditional technical assessments have become fundamentally broken as candidates increasingly use AI to generate code, complete take-home challenges, and even "ghost-solve" problems during live interviews.
The most successful hiring processes now incorporate multidimensional evaluations that test not just coding ability, but problem decomposition, edge case identification, and implementation skills that AI tools still struggle to replicate.
Organizations should focus on evaluating how candidates think about problems rather than simply what they produce, with special attention to their understanding of model limitations and trade-offs.
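To make that concrete, here is a minimal sketch of the kind of exercise that shifts the evaluation from code production to edge-case reasoning. It is entirely hypothetical rather than drawn from Glenfield's talk: the candidate is handed working-looking code and asked what breaks it, not asked to write more of it.

```python
# Hypothetical screening exercise (illustrative only): the candidate is shown this
# snippet and asked to enumerate the inputs that break it or silently distort it,
# then to explain how they would handle each case. Producing code is not the task;
# articulating the failure modes is.

def average_response_time(samples: list[float]) -> float:
    """Return the mean response time in milliseconds for a batch of requests."""
    return sum(samples) / len(samples)

# Signals an interviewer might listen for in the discussion:
#   - the empty list (ZeroDivisionError), and whether 0.0, None, or an exception is the right answer
#   - negative values or NaN entries, which pass silently and skew the result
#   - a single extreme outlier, opening a mean-versus-median trade-off conversation
#   - whether the type hint is actually enforced anywhere upstream
```

An AI assistant will happily generate a "fixed" version of this function on request; what it will not do is volunteer which of these failure modes matter for the candidate's actual system, which is precisely the judgment the exercise is meant to surface.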
The most compelling aspect of Glenfield's talk is her observation that we're witnessing an "authenticity crisis" in technical hiring. This isn't merely about candidates being dishonest; it's about the blurring line between human- and AI-generated work, which creates fundamental challenges for identifying true expertise.
What makes this particularly significant is the broader context: as AI tools become ubiquitous in professional settings, the ability to distinguish between AI-assisted work and genuine understanding becomes crucial not just for hiring decisions but for organizational success. Companies that fail to develop this discernment risk building teams with surface-level capabilities rather than deep expertise—a distinction that becomes painfully apparent when facing novel, complex challenges.
What Glenfield's talk doesn't fully explore is the spectrum of AI-human collaboration that's rapidly evolving. Organizations need to recognize that we're moving toward a future where the question isn't binary (human OR AI) but rather about the effectiveness of the human+AI combination.
Consider Shopify's engineering team, which recently updated its assessment process to explicitly allow AI tool usage, but with a critical twist: candidates must explain their prompting strategy and defend their choices.