new post
@@ -30,7 +30,7 @@ As for more smartwatch-replacing products (always-on unobtrusive information ove
There are claims that the "killer app" is LLM integration, but I'm not convinced: LLM agent systems remain too dumb to be workable by themselves, and the lines of research producing them seem focused on having you interact using natural language as you might delegate something to a human, which works about as well over audio. To be very useful, the AI has to have a unified set of interfaces to helpful tools for you - nobody seems interested in doing that integration work, and AIs cannot yet operate the human interfaces directly[^4]. [This proposal](https://federicorcassarino.substack.com/p/ar-glasses-much-more-than-you-wanted) for contextually aware applications is compelling, but much harder than merely having the hardware, and with user modelling of this quality I think you could substitute other sensors and I/O devices effectively. It *is* possible that people will want ubiquitous AI companionship even without significant practical applications.
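
To make "unified set of interfaces" concrete, here is a minimal sketch of the shape that integration work might take (Python; the `Tool` record and both example tools are hypothetical, not any existing framework's API):

```python
from dataclasses import dataclass
from typing import Callable

# One uniform signature for every personal tool, so an agent (or any
# other frontend) can drive all of them the same way.
@dataclass
class Tool:
    name: str
    description: str           # what an agent reads when picking a tool
    run: Callable[[str], str]  # query in, answer out

# Hypothetical examples; real ones would wrap a calendar, notes, sensors, etc.
TOOLS = {
    t.name: t
    for t in [
        Tool("calendar", "look up upcoming events", lambda q: f"no events match {q!r}"),
        Tool("notes", "search personal notes", lambda q: f"0 notes mention {q!r}"),
    ]
}

def dispatch(tool_name: str, query: str) -> str:
    """The single entry point an agent calls, instead of operating human UIs."""
    return TOOLS[tool_name].run(query)

print(dispatch("notes", "AR glasses"))  # -> 0 notes mention 'AR glasses'
```
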
Why do I want AR glasses despite this, other than the fact that they are in science fiction and therefore required? I think there are useful applications for [relatively non-flashy](https://kguttag.com/2019/10/07/fov-obsession/) systems if you are willing to customize heavily and write lots of personalized software, which I generally am. Old work like [Northpaw](https://web.archive.org/web/20240216092219/https://sensebridge.net/projects/northpaw/) has demonstrated the utility of always-on inputs for integrating things into your cognitive maps, and modern electronics gives us a ridiculous array of sensors available at very little cost. It might also be possible to scan [rough semantics of text](https://gwern.net/idea#deep-learning) into your mind more effectively than via normal reading, but this hardly needs the glasses. With good enough input devices, there are other handy applications like having GNU `units`[^6] on tap everywhere.
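
As a sketch of what that might look like in practice (assuming GNU `units` is installed; the `ask_units` helper is hypothetical, but `-t` is units' real terse-output flag):

```python
import subprocess

def ask_units(have: str, want: str) -> str:
    """One-line conversion by shelling out to GNU units (-t = terse output)."""
    result = subprocess.run(
        ["units", "-t", have, want],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Whatever input device is "on tap" would feed it queries like:
print(ask_units("1 furlong/fortnight", "m/s"))  # ~0.000166 m/s
```

Wiring something like this to a wearable input device is the kind of heavily personalized software meant above.
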
Mass-market appeal probably requires reduced costs, better batteries and onboard compute, and, most importantly, a new reason to use them.
@@ -44,5 +44,4 @@ Mass-market appeal probably requires reduced costs, better batteries and onboard
[^5]: Wirelessly offloading many operations to a nearby phone may be possible, but streaming lots of video back and forth is also costly. For 3D graphics, there still has to be a fast onboard GPU to keep latency low when the head moves.
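
    As a rough illustration (assumed numbers, not measurements): the angular misregistration is roughly head rotation speed times motion-to-photon latency.

    ```python
    # drift ~= head speed * latency; illustrative numbers only
    head_speed_deg_s = 60            # a moderate head turn (assumed)
    for latency_ms in (5, 20, 50):   # onboard GPU vs. wireless round trip (assumed)
        drift_deg = head_speed_deg_s * latency_ms / 1000
        print(f"{latency_ms:>2} ms -> {drift_deg:.1f} degrees of drift")
    ```
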
[^6]: [https://en.wikipedia.org/wiki/GNU_Units](https://en.wikipedia.org/wiki/GNU_Units)