In a world where artificial intelligence seemingly advances by the hour, a curious contradiction has emerged: our AI applications remain frustratingly difficult to use. This disconnect between AI's theoretical capabilities and its practical implementation is the focus of Ethan Mollick's recent video, where he dissects why today's AI apps fail users and offers a roadmap for improvement.
The most compelling argument Mollick presents is that we're experiencing a "last mile" problem with AI. The technology itself has made remarkable strides, but the interfaces connecting humans to these powerful systems remain clunky, unintuitive, and often frustrating.
This matters tremendously because AI's potential impact depends entirely on widespread adoption. When systems require specialized knowledge or extensive trial-and-error to use effectively, they remain trapped in expert domains rather than transforming everyday work. The parallel to early computing is striking – personal computers only revolutionized society after graphical interfaces made their power accessible to non-programmers.
One aspect Mollick's analysis doesn't fully explore is the emerging success stories in AI user experience. Companies like Notion and Otter.ai have demonstrated how AI can disappear into the background of familiar tools, enhancing productivity without requiring users to think in terms of "prompts" or "completions."
Take Notion's AI writing assistant, which presents contextually relevant suggestions based on the document you're already working on. There's no need to craft complex prompts – the system observes what you're trying to accomplish and offers assistance when appropriate. Similarly, transcription tool Otter.ai doesn't just convert speech to text; it automatically identifies speakers, highlights key points, and generates summaries without requiring special commands.
These examples point toward a future where AI becomes ambient – present throughout our digital experiences but rarely demanding our explicit attention or effort.