friends.praxeme


New UIs need a semantic approach and Open Business Concepts (posted by Fabien Villard)

In the past months I’ve seen amazing demos of future UI devices being studied and prototyped in labs. The latest one was very impressive because it mimics almost exactly the UI seen in the movie Minority Report, which has become a must-have for a lot of geeks *and* non-geeks.

http://www.ted.com/talks/john_underkoffler_drive_3d_data_with_a_gesture.html

This one is frightening, well, until you see the man in the wheelchair:

http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html

What do all these demos have in common?

Well, the first thing is the technical excellence, the creativity, and the imagination that led to the devices and software we can see in action, and that we can expect to reach the mainstream in the near future.

Then I noticed that all these new ideas about UIs focus on handling objects, either with physical gestures or in 3D spaces, sometimes both. Objects. It’s all about objects. This is clear when you note that the demos all work on videos and pictures. Manipulating pictures, or augmenting reality by drawing pictures around physical objects. This seems fantastic. Yes, but apart from videos and pictures, what are the objects we have in our Information Systems? UI elements containing text and numbers. And widgets with trees (today all relations between information elements *are* trees, aren’t they?). That’s all. Think of a front-end application for traders in a bank. Or an application to manage books in a library: where are the books in the application? Nowhere. They do not exist in the application, which only refers to real books by a collection of fixed and immutable attributes.

When those amazing technologies arrive in our toolkits, what are we going to do with them? Nothing exciting, I guess. We will see new ways of dealing with incredible nonsense about files and folders and weird things like icons with the “PDF” acronym on them, and once launched, applications will continue to present boring tables of insignificant lists of text and numbers, with relations between objects hidden behind cryptic internal identifiers, and kaleidoscopic colors to render some sort of classification (remember, trees :-). Oh yes, we will be able to reorganize lists by moving our fingers, or even our eyes, in a special way, but the result won’t be amazing at all.

That’s where we can do something for the good of future devices and technologies: prepare the objects, make objects available in apps so they are ready to be handled with awesome gloves or fashionable ultra-tech glasses. Create an application with books for a library, not references to books. Show books in the GUI, and the relations between them, and the places they are stored in, and the concepts they are about. Add the transformations they can undergo (a book can be replaced by a new version, archived, censored perhaps; it can start its life as only an article about the real book, it can be augmented by a second volume…). Then start to handle these objects with current devices: mouse, laser pointers, touchscreens.
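To make the idea concrete, here is a minimal sketch (in Python, with hypothetical names — the post does not prescribe any implementation) of what modeling the book itself could look like: the book as a first-class object, with its location, the concepts it is about, its relation to other volumes, and the lifecycle transformations listed above, instead of a flat row of immutable attributes:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class BookState(Enum):
    ANNOUNCED = auto()   # starts life as only an article about the real book
    AVAILABLE = auto()   # the book itself is in the library
    ARCHIVED = auto()
    CENSORED = auto()

# Allowed lifecycle transitions, taken from the examples in the post.
TRANSITIONS = {
    BookState.ANNOUNCED: {BookState.AVAILABLE},
    BookState.AVAILABLE: {BookState.ARCHIVED, BookState.CENSORED},
    BookState.ARCHIVED: set(),
    BookState.CENSORED: set(),
}

@dataclass
class Book:
    """A book as a first-class object: relations, location, and lifecycle."""
    title: str
    shelf: str                                  # the place it is stored in
    concepts: set = field(default_factory=set)  # the concepts it is about
    volumes: list = field(default_factory=list) # e.g. an added second volume
    state: BookState = BookState.ANNOUNCED

    def transition(self, new_state: BookState) -> None:
        """Apply a lifecycle transformation, refusing impossible ones."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(
                f"cannot go from {self.state.name} to {new_state.name}")
        self.state = new_state

    def add_volume(self, volume: "Book") -> None:
        """Augment the book with another volume (a relation, not an ID)."""
        self.volumes.append(volume)
```

A GUI built on such a model can show the book, its shelf, and its related volumes directly, whatever the input device — mouse today, gloves tomorrow.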

And well, to do that, start with a semantic model of the reality. And use OBCs, or contribute to them a posteriori.

