“Go small…or stay home?”

There’s been a lot discussed at events such as AWE and CSUN about the use of large language models (LLMs) in assistive technology wearables. In the past three years there have been some significant commercial releases using this technology. Given their pricing, they are proving popular with targeted groups such as people with a vision impairment or blindness. When one option provides “similar” functions to a specialist device costing more than ten times as much, which would you choose?

However, there is more than cost to consider with these solutions that use an LLM as an integral part. LLMs only exist because of an expansive network of resources providing the computing power they require. With this need to access such a network, for any device you need to be considering:

  • How much computing power do you need?
  • How much energy do you need to consume?
  • What internet connectivity is essential?
  • What compromises are you prepared to make to ensure the privacy of your data?

Whilst I frequently demonstrate wearable technologies that use LLMs to people with a vision impairment, one of the first aspects I broach is the issue of privacy. Yes, said device can give you a reasonably accurate text-to-speech (TTS) rendering of the material you are trying to read, but you need to be conscious that any data it processes may become part of the LLM’s ongoing development.

That’s why I’m particularly interested in what is happening in the Small Language Model (SLM) space in comparison.

[Image: Photo of a microscope focussed on a water sample of algae.]

So, firstly, what is an SLM? According to IBM:

Small language models (SLMs) are artificial intelligence (AI) models capable of processing, understanding and generating natural language content. As their name implies, SLMs are smaller in scale and scope than large language models (LLMs).

https://www.ibm.com/think/topics/small-language-models

When you think of wearables, particularly when used as assistive technology to access otherwise inaccessible visual content, SLMs have several advantages:

  • They are more efficient in terms of power consumption.
  • For targeted tasks, their performance is comparable to LLMs.
  • They offer greater privacy and security control: think processing your data on your device rather than via the public cloud.
  • They have lower latency: fewer processing steps mean a quicker response.
  • They are more environmentally sustainable because they require fewer resources.
  • They cost less.
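To put the “smaller in scale” point in perspective, here’s a back-of-envelope sketch of why an SLM can fit on a wearable while an LLM can’t. The parameter counts and precisions below are illustrative assumptions, not the specs of any particular model:

```python
# Back-of-envelope memory footprint for holding a model's weights in RAM.
# All figures below are illustrative assumptions, not real product specs.

def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate gigabytes needed just to store the model weights."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A hypothetical 70-billion-parameter LLM at 16-bit (2-byte) precision:
llm_gb = model_memory_gb(70, 2)    # 140 GB -- needs cloud GPUs
# A hypothetical 1-billion-parameter SLM quantised to 4 bits (0.5 bytes):
slm_gb = model_memory_gb(1, 0.5)   # 0.5 GB -- plausible on a small device

print(f"LLM: ~{llm_gb:.0f} GB, SLM: ~{slm_gb:.1f} GB")
```

The weights are only part of the picture (inference also needs working memory), but the orders of magnitude show why one class of model lives in a data centre and the other can live in a pair of glasses.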

Using SLMs, therefore, can mean a smaller physical device achieving the same aim as one relying on LLMs. Which is why I get excited to see where this is going, especially in relation to wearables.

In Australia we don’t have the world’s fastest mobile internet speeds (Speedtest Global Index): according to the latest report, our download rate of 115.66 megabits per second (Mbps) ranks us 37th in the world. Mind you, at least it’s faster than our fixed broadband speeds, which the same report puts at 85.63 Mbps. The more you need to process via an LLM, the faster the mobile speed you need to receive a timely, meaningful response. That is, of course, if you can get access at all.
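The connectivity cost is easy to estimate from the speeds quoted above. This sketch works out how long it takes just to move an image to a cloud LLM before any processing begins; the 3 MB photo size is an assumption, and real upload speeds are typically slower than the download figures used here, so these numbers are optimistic:

```python
# Rough transfer time for sending data to a cloud-hosted LLM, using the
# Speedtest Global Index figures quoted above. The 3 MB photo size is an
# illustrative assumption; upload speeds are usually slower than download.

def transfer_seconds(size_mb: float, speed_mbps: float) -> float:
    """Seconds to move size_mb megabytes over a speed_mbps link."""
    return size_mb * 8 / speed_mbps  # 8 bits per byte

photo_mb = 3  # hypothetical photo from a wearable's camera
mobile = transfer_seconds(photo_mb, 115.66)  # Australian mobile
fixed = transfer_seconds(photo_mb, 85.63)    # Australian fixed broadband

print(f"mobile: {mobile:.2f} s, fixed broadband: {fixed:.2f} s")
# A local SLM skips this round trip (and the return leg) entirely.
```

A fraction of a second each way may sound tolerable, but it assumes full-rate coverage; on a congested or patchy connection the delay grows, while on-device processing stays constant.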

So if a wearable can use an SLM and thereby minimise its reliance on internet speeds, we’re heading in the right direction. At AWE US 2025 Qualcomm announced their AR1+ Gen 1 chipset, which they purport, in comparison to previous chipsets, consumes less power, has a lower profile and allows AI models to run locally on the device. As Skarred Ghost puts it in his review of the announcement:

This means that, in contrast to Ray-Ban Meta, that to speak with you has to contact Meta services, these glasses can provide some AI features completely locally, without your request leaving the glasses. This is huge for privacy: if your data never leaves the device, then your data is safe and not shared with some data-harvesting company. Of course, we are not talking about huge Large Language Models, which still require tons of GPUs on the cloud to work, but about Small Language Models, which still provide decent answers in some context. It is very interesting nonetheless, it is a good start.

https://skarredghost.com/2025/06/11/snap-spectacles-qualcomm-ar1-plus/

I’m curious to see this rolled out in the next generation of AI wearables.
