Google’s offline AI app has changed the way I look at on-device AI


Someone holding the Pixel 10 Pro showing the back of the phone.

Joe Maring / Android Authority

AI. AI. AI. I’m sure you’ve grown tired of hearing this word over the past few years. It’s everywhere: in every feature, every app, and on every keynote slide, whether it actually makes your phone better or not. Honestly, I felt the same way. Sure, some features are pretty useful, but I find most AI features on Android to be gimmicks that you try once and forget about.

One such gimmick, at least in my opinion, has always been on-device AI. Most AI features on our phones still depend on the cloud (or a hybrid architecture), and I’ve long believed that smartphones can’t match the processing power of AI data centers. That’s why on-device AI has never felt that useful, at least not to the extent that companies claim.

However, that was until I tried one of Google’s AI apps on a lesser-known device, which made me rethink that position a bit.


Google AI Edge Gallery is a hidden gem that I didn’t know about

The AI Edge Gallery app running on a phone.

Sanuj Bhatia / Android Authority

Google’s AI Edge Gallery app isn’t exactly new. It actually started about a year ago as an experimental app, but what recently brought it back into focus for me is that Google updated it to support Gemma 4, its latest and best open-source AI models. That update finally gave me a reason to give the app a proper shot.

The app is available on both Android and iOS. I’ve tested it myself on an iPhone Air, my Google Pixel 10 Pro, and even an Oppo Find X9 Ultra. There are a few differences between the platforms, but the basic idea remains the same: you download these open-source AI models directly to your device and then use them for various tasks.

The app has predefined use cases, like chatting with a generic chatbot, transcribing audio, asking questions about an image, or even trying out some agent-style tasks. I’d always assumed on-device AI models wouldn’t be very useful in real life, especially given their limited capabilities. But that opinion changed a bit on my last trip, where I actually found the tool surprisingly useful.

Asking questions on the go

The AI Chat feature in the AI Edge Gallery app.

Sanuj Bhatia / Android Authority

The best use case, even if it’s still not perfect, is having access to an offline, on-device AI chatbot. The Google AI Edge Gallery app gives you an AI Chat experience similar to other chatbots like Gemini.

It’s pretty simple: you enter a prompt and wait for the model to respond. It’s multimodal, so you can use text, audio, or even images, and it takes all of it into context. It’s slower than something like ChatGPT or Gemini, but the fact that it runs entirely on the device, without any internet, makes it stand out.

Using AI on my phone at 32,000 feet without internet finally made on-device AI feel real.

I recently used it on a flight to Thailand, where I asked it basic things like phrases I needed to know, and even asked about a few movie titles on the in-flight system to get recommendations and approximate IMDb ratings.

The model notes that it can’t access the internet and relies on its training data, but if you frame your questions accordingly, it gives you useful answers even when you’re completely offline.

Offline translation

The audio transcription and translation tool in the AI Edge Gallery app.

Sanuj Bhatia / Android Authority

This is probably the best use case for this app. I usually travel with an eSIM when I’m abroad, but there are always places where the connection isn’t good, and this really helped. Thanks to Gemma 4’s multimodal capabilities, you can use AI Edge Gallery as a proper offline translator.

The app includes a dedicated audio recording tool that can transcribe speech and also translate it on the go. On phones that can use the hardware properly, translation is surprisingly fast, almost as fast as dedicated translation apps, and works reliably even without internet.

Asking questions about pictures

Asking questions about an image in the AI Edge Gallery app.

Sanuj Bhatia / Android Authority

Another feature that builds on this is the ability to ask questions about images using an offline model. You can add a picture and ask anything about it. I didn’t think I’d use it that much, but it’s been really useful when I’m traveling, especially for translating menus or understanding signs in different languages.

Plus, it helps save my precious mobile data, because everything happens on the device and you’re not uploading images to a server and waiting for a response.


There are a few things the AI Edge Gallery Android app needs to fix

Overall, I’ve enjoyed using the Google AI Edge Gallery app as an offline AI tool, but it’s not without its drawbacks. My biggest complaint is that the chats aren’t saved.

For example, with something like Gemini or ChatGPT, each conversation is saved as a thread so you can return to it and pick up where you left off. I understand that offline models have limits when it comes to context length due to hardware limitations, but there should at least be an option to continue a conversation until that limit is reached.

My bigger problem is that Google still doesn’t take full advantage of the hardware on Android. The AI Edge Gallery app is available on both Android and iOS, and I’ve tested it on multiple devices. On iPhones, the app uses the GPU for processing, which is generally faster for AI tasks.

On Android, it’s a mixed experience. Phones with high-end chips like the Snapdragon 8 Elite Gen 5 can access the GPU and run models more efficiently. But on Google’s own Pixel 10 Pro, the app doesn’t take full advantage of the Tensor GPU or even the chip’s NPU.

The app notes that the AICore-based experience, which can access the NPU, is currently limited to beta testers. For users like me who aren’t part of that program, it falls back to CPU processing instead of the accelerated path that powers on-device models like Gemini Nano, which slows it down noticeably.

The AICore app icon.

Robert Triggs / Android Authority

To put that into perspective, the iPhone Air took less than a second to respond to the same voice input, while the Pixel 10 Pro took more than 10 seconds for the same task. This kind of gap really affects the overall experience.

I’m sure I’m in the minority here, caring this much about a niche on-device AI app with limited capabilities and, to be honest, a pretty small user base. But given how much Google is pushing on-device AI, it’s a bit frustrating to see it not making full use of its Tensor hardware in this app. It just feels like an oversight, and I really hope Google fixes it sooner rather than later.

That said, being able to run a Gemma model directly on my phone and get an answer 32,000 feet in the air was honestly an eye-opener. This is probably the first time I’ve seen a real, practical use case for on-device AI.



