Smart Picks
AI Technology May 8, 2026

Uber Expands OpenAI-Powered Assistant and Voice Booking

The Uber OpenAI partnership has moved well past the planning stage. Uber is now running an AI-powered driver assistant in an active U.S. beta and preparing a voice-based ride booking feature built on OpenAI's Realtime API, changes that put large language models directly in front of both sides of its marketplace.

How Uber Assistant Turns Marketplace Data Into Driver Guidance

Uber's platform generates a constant stream of operational signals — demand shifts, earnings patterns, surge zones, and city-specific conditions across 40 million daily trips. The challenge has always been converting that volume of data into something a driver can act on mid-shift. Uber Assistant is the product built to close that gap.

The tool handles questions drivers ask throughout a working day: which zone to head to, whether a long airport run makes financial sense right now, or why last week's earnings came in lower than expected. Responses draw from live heatmaps and trend data, then surface as plain-language guidance inside the driver app.

Dharmin Parikh, Uber's Director of Product Management, said the company's goal is to give drivers a clearer picture of marketplace conditions so they can make smarter decisions on their own terms. Early results showed the assistant significantly shortening the learning curve for new drivers, a process that previously took hundreds of trips to work through. Notably, experienced drivers kept returning to the tool as well, which Uber treats as evidence that it has utility beyond onboarding.

Hundreds of thousands of U.S. drivers are now inside the beta, with Uber reporting strong repeat engagement and measurable gains in productive time on the platform.

The Technical Stack Keeping It Accurate and Fast

Uber built the assistant on a multi-agent architecture that matches each incoming query to the most appropriate model tier. Lightweight requests, such as simple lookups and fast classifications, go to smaller, faster models. More complex reasoning tasks route to larger frontier models. This approach keeps response times competitive with what a mobile user expects from a live app.
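The routing idea can be sketched in a few lines. This is a minimal illustration, not Uber's implementation: the complexity heuristic, keyword list, and tier names below are all assumptions made for the example.

```python
# Sketch of tiered model routing: cheap heuristics decide whether a
# query goes to a small, fast model or a larger frontier model.
# All names and thresholds here are illustrative, not Uber's.
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    target_latency_ms: int

SMALL = ModelTier("small-fast", 300)        # lookups, classifications
LARGE = ModelTier("large-frontier", 2000)   # multi-step reasoning

# Hypothetical markers of a lightweight lookup-style question.
SIMPLE_KEYWORDS = ("where", "status", "eta", "surge")

def route(query: str) -> ModelTier:
    """Send short, lookup-style queries to the small tier; everything
    else (earnings analysis, multi-part reasoning) to the large tier."""
    q = query.lower()
    if len(q.split()) <= 8 and any(k in q for k in SIMPLE_KEYWORDS):
        return SMALL
    return LARGE
```

In a real system the classifier would itself likely be a small model rather than a keyword check, but the shape is the same: a cheap decision up front so that only queries that need frontier-scale reasoning pay frontier-scale latency.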

Sitting across the entire system is AI Guard, an internal layer Uber developed to review both inputs and outputs before they reach drivers. It handles policy enforcement, privacy screening, hallucination reduction, and consistency checks. Parikh described trust as the core variable: when the assistant gives useful, accurate answers, drivers return. When it doesn't, they stop.
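A guard layer of this kind typically screens traffic in both directions. The sketch below is in the spirit of AI Guard as described above; the specific checks, patterns, and function names are assumptions for illustration only.

```python
# Minimal sketch of an input/output guard layer: policy screening on
# the way in, PII-style redaction on the way out. The checks here are
# illustrative assumptions, not Uber's actual AI Guard rules.
import re

# Hypothetical PII-like patterns to redact from model output.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-shaped numbers
    re.compile(r"\b\d{16}\b"),              # card-number-shaped digits
]

# Hypothetical policy topics the assistant should refuse to discuss.
BLOCKED_TOPICS = ("off-platform payment",)

def screen_input(text: str) -> bool:
    """Return False for prompts that touch blocked policy topics."""
    t = text.lower()
    return not any(topic in t for topic in BLOCKED_TOPICS)

def screen_output(text: str) -> str:
    """Redact PII-like spans before an answer reaches the driver."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[redacted]", text)
    return text
```

Production guard layers also run consistency and hallucination checks against ground-truth data, which a regex sketch cannot capture, but the wrap-both-sides structure is the key point.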

What Voice Booking Changes for Riders

On the rider side, Uber is using OpenAI's Realtime API to power a new voice booking experience rolling out over the coming weeks. A microphone tap in the app's search bar lets riders describe what they need in natural speech, including multi-part requests that would be cumbersome to tap through manually.

The system reads saved locations, interprets intent, and syncs spoken responses with visual cues inside the app. A request for a large vehicle to handle extra luggage, for example, would surface UberXL automatically. Uber also sees the feature as an accessibility improvement for riders who find touch navigation difficult.
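The UberXL example implies a mapping from interpreted intent to a ride product. Here is a toy version of that last step; the product names mirror Uber's lineup, but the parsing logic and keyword lists are guesses for illustration, and the real system would work from a structured intent produced by the speech model rather than raw text.

```python
# Illustrative intent-to-product mapping: once the voice request is
# interpreted, pick the ride product that fits it. Keyword matching
# here is a stand-in for real intent extraction.
def pick_product(request: str) -> str:
    """Map a natural-language ride request to a product tier."""
    r = request.lower()
    if any(k in r for k in ("luggage", "suitcase", "large vehicle", "six people")):
        return "UberXL"      # extra space for bags or bigger groups
    if any(k in r for k in ("premium", "black car")):
        return "Uber Black"  # higher-end option
    return "UberX"           # sensible default
```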

From driver earnings optimization to rider booking, both tools represent a shift from AI running quietly in the background to AI that drivers and riders interact with directly. Uber says this reflects a wider internal change, with product, legal, and engineering teams now building AI features collaboratively rather than routing everything through a central AI group.

Full details on the rollout, quotes, and product architecture are available in OpenAI's official case study on the Uber partnership.