Thursday, February 21, 2008

Pervasive, transparent search and inferencing services

This amazing mobile device mock-up (I'd love one for my birthday tomorrow!) is described by Hard Geek as having an "advanced search function". Is that how I would describe it, or how the average user would describe it? Rather, by the time this level of hardware technology is available, the concept of "search" will (should?) have disappeared (to the user at least), and devices should instead have a seamless understanding of the world around them, including an intimate semantic understanding of their user's short- and long-term goals.

No (or very, very few) explicit search boxes; instead, devices will be extremely context-aware, where context includes: geography, orientation, weather, user history, user voice conversation, user goal(s), interactions with other users' similar (trusted and untrusted) devices, specific user inquiries, etc.
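As a minimal sketch of what "context-aware instead of search-box" might mean computationally: candidate suggestions could be ranked by fusing weighted context signals rather than matching a typed query. Everything here (the signal names, the weights, the toy scenario) is hypothetical illustration, not any real device API.

```python
# Hypothetical sketch: rank candidate suggestions by fusing context
# signals, instead of answering an explicit search query.
# All signal names, weights, and values are illustrative only.

def score_suggestion(suggestion, context, weights):
    """Combine context signals into a single relevance score.
    Each signal scorer returns a value in [0, 1] for this suggestion."""
    return sum(weight * context[signal](suggestion)
               for signal, weight in weights.items())

def rank(suggestions, context, weights):
    """Order suggestions from most to least contextually relevant."""
    return sorted(suggestions,
                  key=lambda s: score_suggestion(s, context, weights),
                  reverse=True)

# Toy context: the user is walking near a cafe around lunchtime.
context = {
    "geography":    lambda s: 1.0 if s == "nearby cafe" else 0.2,
    "user_history": lambda s: 0.9 if s == "nearby cafe" else 0.5,
    "current_goal": lambda s: 0.8 if s == "nearby cafe" else 0.1,
}
weights = {"geography": 0.5, "user_history": 0.2, "current_goal": 0.3}

suggestions = ["nearby cafe", "movie showtimes"]
print(rank(suggestions, context, weights)[0])  # "nearby cafe" ranks first
```

The point of the sketch is only that the query disappears: the device proposes, the context disposes.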

Devices such as this one would be giant (for the most part transparent) mashups, deriving their suggestions and answers from a huge number of possible data, search, and inferencing services. Yes, inferencing services. I believe that there will soon be inferencing services that can take large, complex semantic networks and inference over them -- themselves drawing on data, search, and inferencing services -- to render complex, explainable answers to users' situations and inquiries.
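To make "inference over semantic networks, with explainable answers" concrete, here is a tiny hypothetical sketch: forward-chaining over (subject, predicate, object) triples, recording which rule and which fact produced each derived triple so every answer carries an explanation trail. The rule and the facts are toy examples, not any particular inferencing service.

```python
# Hypothetical sketch: forward-chaining inference over a small semantic
# network of (subject, predicate, object) triples, keeping an
# explanation trail so derived answers are "explainable".

def infer(facts, rules):
    """Apply rules to the fact set until no new triples are derived.
    Returns the closed fact set and, for each derived triple, the
    (rule_name, source_triple) pair that produced it."""
    facts = set(facts)
    explanations = {}
    changed = True
    while changed:
        changed = False
        for rule_name, rule in rules:
            for triple in list(facts):
                for new in rule(triple, facts):
                    if new not in facts:
                        facts.add(new)
                        explanations[new] = (rule_name, triple)
                        changed = True
    return facts, explanations

# Example rule: "is_a" is transitive (A is_a B, B is_a C => A is_a C).
def transitivity(triple, facts):
    s, p, o = triple
    if p != "is_a":
        return []
    return [(s, "is_a", o2)
            for (s2, p2, o2) in facts if p2 == "is_a" and s2 == o]

facts = [("espresso", "is_a", "coffee"), ("coffee", "is_a", "beverage")]
derived, why = infer(facts, [("transitivity", transitivity)])
print(("espresso", "is_a", "beverage") in derived)  # True
```

A real service would of course need scalable storage, richer rule languages, and trust handling across sources; the sketch only shows how a derived answer can point back at the rule and fact that justify it.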

Related: Microsoft Live Lab's Photosynth Project: BBC, Wikipedia, Photosynth on PBS

Update 2008 03 04: The "identify-what-I-am-looking-at" technology needed for this mock-up can be seen at least partially demonstrated in "Cyber Goggles: High-tech memory aid", if perhaps not as elegantly or simply...

1 comment:

Andre Vellino said...

I "worked" (more like "volunteered") for a wireless startup in 2001. We had a not-too-dissimilar vision, but constrained to the then-current limitations of 3G cell phones. In the "vision" (or was it a mirage?) we saw people's personal profiles / search histories / preferences etc. in the network, an "inferencing service" (including collaborative filtering for recommending music and movies) and a multi-modal (voice-text) interface to the inferencing server (a "conversational genie"). The whole thing was to be connected with location-based services (for recommending nearby things like restaurants and commerce sites.)

Actually, we built a prototype in about 4 months, which was pretty impressive, all things considered. But it was 2001 so it ended up not going anywhere.

In any event, I've lost faith in such visions - several times burned, several times shy.