How Hybrid AI Will Enable Far Greater AI Use

November 17, 2023

By Rob Enderle – Let's talk about some of the benefits of a hybrid approach to AI implementation on the desktop and in smartphones.

Lower Costs and Server Loading

As we ramp up large language models (LLMs), we are discovering that they consume enough resources to create capacity issues even though the use of these AIs is still comparatively small. These models increasingly exceed 13 billion parameters and require a massive amount of processing power.

When large numbers of users hit these models at once, they not only consume massive amounts of bandwidth but also create new and painful bottlenecks, potentially bringing already heavily loaded cloud servers to their knees. By offloading these servers and reserving the centralized resource for data updates, maintenance, and the far smaller set of occasions where a localized AI won't do, you cut both the cost and the loading of centralized infrastructure while giving users performance that doesn't depend on the system being connected.
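The local-first routing described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any vendor's actual implementation; `run_local`, `run_cloud`, and `answer` are hypothetical names, and the "model" calls are stand-ins.

```python
from typing import Optional

def run_local(prompt: str) -> Optional[str]:
    """Stand-in for an on-device model: handles simple requests,
    declines ones it judges too complex (here, just by length)."""
    if len(prompt) <= 200:
        return f"[local] {prompt[:40]}"
    return None  # signal that the cloud is needed

def run_cloud(prompt: str) -> str:
    """Stand-in for a centralized cloud endpoint, used only as a fallback."""
    return f"[cloud] {prompt[:40]}"

def answer(prompt: str, online: bool) -> str:
    """Prefer the local model; touch the cloud only when unavoidable."""
    local = run_local(prompt)
    if local is not None:
        return local  # no network round trip, no server load
    if online:
        return run_cloud(prompt)
    raise RuntimeError("Request needs the cloud, but the device is offline")
```

The design point is that the common case never leaves the device, which is exactly what cuts server loading and keeps the feature working when the network does not.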

That last point is critical for things like smartphone navigation, which may need to work offline, and for getting critical information during network outages caused by natural or human-made catastrophes, which are arriving with ever greater frequency. In short, users just want this technology to work, and wireless networks aren't as available as we'd like.

In addition, as companies like Microsoft bring to market user-focused productivity tools like Copilot (which is expected to move to general availability in a few short weeks), users' need to have these tools always available becomes even more critical.

Smartphones

Smartphones may be even more critical for hybrid AI than laptops, based on some of their key usage models. With the parallel release of multi-channel Bluetooth, events like the one I'm attending are expected to move from human translation to AI-driven translation. Moving multiple sound feeds over the same wireless network, on top of the data needs of attendees and presenters, isn't workable today, which is why focused radio solutions are used instead of the centralized AI-driven translators that are also available.

By moving the load to the smartphones, you not only reduce the shrinkage (loss and theft) of the dedicated playback radios that would be obsolesced, but you also enable support for more languages and a far better experience for attendees. In addition, personal devices offer more advanced noise cancellation than the typical translation radio, and digital sound modification can address hearing challenges better than those older devices can, leading to higher comprehension and better audience engagement.

Finally, while the old radios provided only inbound translation, smartphone-based tools can provide bidirectional translation, letting an attendee more easily ask questions in their native language, further improving engagement and reducing the chances for avoidable misunderstandings. This is on top of advantages like better image capture and on-phone image editing, which can enrich emails or papers written to cover an event or improve the quality of any location-based communication describing it.
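The inbound-versus-bidirectional distinction above can be made concrete with a toy session model. Everything here is hypothetical: `translate` is a placeholder for an on-device translation model, and `TranslationSession` is an illustrative structure, not a real API.

```python
def translate(text: str, src: str, dst: str) -> str:
    """Placeholder for an on-device translation model call."""
    if src == dst:
        return text
    return f"[{src}->{dst}] {text}"

class TranslationSession:
    """Routes event audio both ways between presenter and attendee languages."""

    def __init__(self, presenter_lang: str):
        self.presenter_lang = presenter_lang

    def inbound(self, speech: str, attendee_lang: str) -> str:
        """Presenter audio -> attendee's language (all the old radios did)."""
        return translate(speech, self.presenter_lang, attendee_lang)

    def outbound(self, question: str, attendee_lang: str) -> str:
        """Attendee question -> presenter's language (the new direction)."""
        return translate(question, attendee_lang, self.presenter_lang)
```

Because each attendee's phone runs its own session, adding a language is just another `attendee_lang` value rather than another radio channel, which is where the scaling advantage comes from.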

In short, AI turns the smartphone into a far more capable device, and OEMs are already starting to wonder whether they can dispense with PCs and use smartphones alone, a potential trend driven by Generation Z's needs.

Wrapping Up

The move to hybrid AI is one of the big changes coming to PCs and smartphones next year, addressing excessive cloud and on-premises loading while providing a better user experience. The recurring costs of this approach, both in resources and hard cash, are expected to be far lower, and the user experience significantly better, particularly for desktop and personal activities with little tolerance for latency or downtime, like translation or productivity apps.

In the end, pulling much of the existing load from the cloud onto more personal devices is the future anticipated by the major processor providers and Microsoft. There are security issues that will need to be better addressed should one of these devices be lost, but solutions are in the works. By the end of 2025, I expect not only that most of us will find we can't live without our new AI tools, but also that these tools will mostly be running locally.
