03
Jul 13

A Sad Milestone In Technology’s History

I was saddened today to hear that Doug Engelbart had died. We all owe him a great debt – he had the vision of what computers could become decades before it became obvious to the Jobses and Thiels and Gateses, much less the rest of us, and did a tremendous amount of the work to make what we do today on a day-to-day basis possible. In his brief remembrance, The Death of Computing Pioneer Doug Engelbart | MIT Technology Review, Brian Bergstein says:

…what really got the soft-spoken Engelbart to light up was the idea that computing could elevate mankind by making it possible for people to collaborate from afar. He had articulated this vision since the 1950s and built key technologies for collaboration in the 1960s. Later he saw these ideas become tangible for everyday people with the advent of the Internet and the World Wide Web, but still in the 2000s he was hoping to see computing’s promise fully realized, with the boosting of our “collective IQ.” Of course in that grand sweep, the mouse was just one small tool.

Everyone is familiar with Engelbart’s “Mother of All Demos” – which is amazing, of course – but he continued his work on human augmentation nearly up to his death. I particularly remember a talk he gave at the 2004 Accelerating Change Conference at Stanford, which I heard as a podcast via IT Conversations. The first Accelerating Change Conferences were organized by Ray Kurzweil, before he became a household name in his own right, and it seemed fitting to me that Engelbart, the inventor of so much we take for granted today, was still on the front lines of what we might be doing in the future.

Link: The Death of Computing Pioneer Doug Engelbart | MIT Technology Review


24
May 13

Accelerating Change In Your Face

Exponential Change: A Handy Mnemonic

We all know change is accelerating, but, as Peter Diamandis points out, things can be deceptively slow before the exponentials kick in. In this video, in less than three minutes Diamandis shares what he calls the “Six D’s Of Exponentials” – things to look out for as the world changes. Nowadays this change usually starts as Digitization, his first D, and it starts out Deceptively slowly, his second D. I’ll let him share the rest of the D’s.

The most significant of the D’s, from the standpoint of product managers, is the fifth one – Demonetization. These transformations continually make goods and services cheaper, particularly if they can be delivered or implemented digitally. This suggests, at a minimum, that our markets have shorter and shorter lifespans.

Agile Everywhere

On the topic of change, I ran across this second video from Bruce Feiler yesterday, from a TEDx conference last year. It describes how the speaker took the concepts of agile development and applied them to managing his family – chores, bickering, choosing vacation spots – to great success. This is a different kind of change, a change of thinking, and it’s the type of thing we’re going to need more of in our future. Many of our beliefs and much of our common knowledge about the world turn out to be outmoded, ready to be replaced by new ways of thinking. Whether it comes to family dynamics, or teaching, or government, or the economy itself, old ideas are rapidly running out of steam, failing to produce as much value – both societal and economic – as we need.

Have you run across new ideas or examples of accelerating change in unusual places?


27
Jun 12

Accelerating Change – The Last Ten Years And The Next Ten Years

As I sit here in a coffee shop writing on my laptop, much of what I observe around me would have been here ten years ago. People sitting and chatting, young people doing homework, someone knitting, all with their mochas and lattes and cups of tea. But I also see people typing on their laptops, connected to the internet via wifi, or talking on their smart phones, perhaps planning where to eat dinner using Yelp, getting advice from Siri, or getting travel guidance using Google Maps. They could be catching up on TV shows they missed last week or last year via Netflix or Hulu. Or simply catching up with their friends via Facebook.

In the last ten years we had the following new technologies reach wide market adoption, even if some had been under development for much longer:

  • Smart phones
  • Web 2.0 and 3.0
  • Mobile computing
  • The ubiquity of HDTV
  • The rise of 3D TV
  • Skype
  • Facebook
  • GPS
  • Siri
  • The app-ification of everything
  • Netflix
  • FaceTime
  • Stuxnet
  • The iPad
  • Wi-Fi
  • Wikipedia (started more than 10 years ago, but grew explosively during that time)
  • The disappearance of cameras – everyone uses their phone now
  • The Cloud

This is just some of what has changed in technology over the past ten years. Many other technology changes are less obvious: most cars now deliver much more power, from smaller and more fuel-efficient engines, than in the past, thanks to advances in digital design and manufacturing; our laptops are lighter and run longer on a single charge, due to improvements in battery technology; and their internal storage, while huge, is augmented by unlimited storage in the cloud.

But technology advances ever faster, so the magnitude of the changes in the next ten years will dwarf the changes of the last ten years. Over the course of that period, processing power is likely to grow by a factor of 40 or 50. This is less important to your laptop than it is to devices that don’t exist yet, like an implantable super-computer.
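As a sanity check on that factor-of-40-to-50 figure, here is a quick sketch; the doubling periods are assumptions for illustration, not measurements. A ten-year gain of 40–50× works out to processing power doubling roughly every 21 months, comfortably within the historical Moore’s-Law range.

```python
# Back-of-envelope: total growth over ten years for a given doubling period.
# The doubling periods tried below are assumptions, not measurements.
def growth_factor(years, doubling_months):
    return 2 ** (years * 12 / doubling_months)

for months in (18, 21, 24):
    print(f"doubling every {months} months -> {growth_factor(10, months):.0f}x in 10 years")
```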

Right now an iPhone 4S rivals supercomputers of a decade ago in processing power. In ten years, that amount of processing power will be available in a device only 2% of the size of an iPhone – a good size, perhaps, to be implanted in a human brain, or certainly in a prosthetic arm (or eye).

The storage available in a laptop will be a nearly unimaginable size, but the really interesting place for storage might be on our eyeglasses, connected to a camera that can record every waking moment of our lives and store it in the space of the temple of the glasses. What you might do with all that information is a very good question, but it’s always been the case that we’ve managed to use up all the storage we can get. Perhaps there’s a limit to how much we can use, but we’re far from reaching that point, and we certainly won’t reach it in ten years (since after all, we still won’t be at “brain-level” storage capacity then). Or perhaps that storage is in a microrobot that swims in your bloodstream, recording your levels of nutrients and drugs, detecting the signatures of illnesses and conditions early, constantly monitoring, and reporting back on how you are doing.
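For a sense of scale on the “record every waking moment” idea, here is a rough estimate. All the parameters – video bitrate, waking hours per day – are assumptions chosen for illustration:

```python
# Rough estimate of the storage needed to record every waking moment as video.
# Bitrate and waking hours are illustrative assumptions, not data.
def lifelog_storage_tb_per_year(mbps=2.0, waking_hours_per_day=16):
    bytes_per_day = mbps * 1e6 / 8 * waking_hours_per_day * 3600
    return bytes_per_day * 365 / 1e12  # terabytes per year

print(f"{lifelog_storage_tb_per_year():.1f} TB/year")
```

At a modest 2 Mbps, a year of waking life is on the order of five terabytes – large, but not absurd on the storage curves of the next decade.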

What other changes are in store? Well, just as today almost 1/3 of the people of the earth have access to mobile phones, and about 10% have smart phones, in ten years not only will the fraction be bigger, but the mobile phones themselves will be transformed. Even the poorest communities in the world will have access to supercomputer-level processing power, and richer communities will have commensurately more powerful capabilities. And the connectivity to go with it. Imagine the outcome of such a widespread availability of such open communication platforms. Of course, they may not remain open, but even if full openness is limited for some, everyone will have access to more information and knowledge and training and collaboration than has ever been possible. The innovation and transformation that will be driven by making the tools of the 21st century available to half of the poorest people in the world is likely to result in changes that are not only different in scale and scope than we’ve experienced before, but different in kind as well.

So far, I’ve only talked about technologies related to high-tech electronics and digitization. There are three significant areas that will also have significant impacts on our lives over the next decade, even if their major fruits are still farther in the future:

  • Nanotechnology and new materials – already today new advances in production and use of carbon nanotubes and new composite materials are announced nearly every day from the research labs of the world
  • New energy systems and energy efficiency – every day MIT and Rice University and Caltech and all the other research universities around the world announce energy-related discoveries and breakthroughs, from stacking nanotubes to make transparent infrared-sensitive solar cells, to new catalysts for small, affordable fuel cells, to breakthroughs in energy density for all these devices, among others.
  • Biotechnology, genomics, and related fields like proteomics – soon desktop bioreactors, that can make living cells from basic ingredients, will be commonplace in laboratories, and then it’s a small step to a Bill Gates-type creating the new world in his or her bedroom.

These technologies are all in very early days today, but in ten years at least some will have matured into commercial viability, and new changes will be upon us.

I’m looking forward to having these new technologies at our fingertips, and to helping bring some of them to market myself. It’s the brave new world for product people like me, full of promise and exciting challenges, taking discoveries and insights from labs and turning them into products that make a difference in people’s lives and potentially in our planet’s survival.

What are you thinking about the future? What technologies excite you? What “miracle” are you looking forward to? I’d love to hear your thoughts and comments on this post and the series about future tech.


26
Jun 12

Are You Ready For The Accelerating Tech Miracles of The Next 10 Years?

How much change is likely to happen in five years, in ten years? This is one of my favorite topics – the acceleration of technology. Many day-to-day things we take for granted today were miracles ten years ago, and the same will be the case ten years hence. Here are some miracles today that we will likely have in five to ten years:

  • A babelfish – a universal real-time translator, as described in the masterpiece The Hitchhiker’s Guide to the Galaxy
  • Full time no-glasses 3d on TV (almost here, really)
  • Full time heads up display and associated apps and capabilities – cameras, etc. – Google Glass is just the beginning!
  • Self-driving cars. No crashes. Far more densely used roadways.
  • Immersive work from home – telecommuting is just like being in the office (if you want)
  • Real cybernetics – an artificial limb that has a sense of touch, for example. Or an artificial eye implant that restores decent sight to a blind person. With a camera built in, so that the blind person is the first person to have a bionic eye. Or with interchangeable sensors so the formerly blind person can see a wider range of information than we normals can.
  • Vat-grown organ replacements – nearly here now, as discussed in this TED Talk on growing organs
  • Human performance enhancements – pharmaceutical, prosthetic, neurological – that enable us to perform, physically and mentally, two to ten times faster and stronger.

Yesterday I read about a new research project and demonstration at MIT of the T(ether), a device/system that enables capture and replay of 3D gestures and actions, using an iPad and some additional sensors that track where the iPad is in 3D space. Earlier this week I saw a video demonstration of the new Leap Motion device providing high-resolution, Kinect-like 3D motion capture for a laptop, enabling at least half of the then-miraculous interactions that were created using special effects in the movies Minority Report and The Avengers. These two devices are not the final story in how people will interact with a virtual world in real-world 3D, but they do illustrate just how amazing our technology has become in the last few years. And the Leap device is shipping now to early adopters and developers!

When the Xbox and its cousins were originally released, the idea of using your body as a controller seemed ridiculously far-fetched, if anyone was even thinking about it at all outside the game labs and the special effects departments of movie studios. But then these capabilities gradually emerged, and soon they were available on all the consoles, enabled by the creation of new sensors and massive processing power. Ten years ago they were unimaginable except in science fiction stories – today they are in nearly every teenager’s bedroom.

Tomorrow I’ll continue this series about technological acceleration, look at a whole lot of innovations we’ve experienced over the past ten years, and see if we can come up with some predictions for the next ten years.


12
Jun 12

What Comes After Google Glass? (Part 2)

Yesterday I posted about how reading might change in a world where all our interactions are mediated by an augmented reality device like Google Glass. But I also mentioned some potential pitfalls, and in this post I’ll discuss them, as well as some ideas for avoiding them.

Reading “Out There”

I see a number of potential problems with reading using augmented reality. This is based on my own behavior while reading and while doing other activities and considering how they might be combined. In particular, in a world where your book is floating “out there” in front of you, you’re going to have problems with safety, with attention, and with distraction. If your reading device is the same thing you use to interact with the rest of the world, you’re going to be hard-pressed to just sit down, with nothing to do with your hands, and stare off into space (that is, into the book floating in front of you) and pay attention to the book. This is especially difficult if the rest of the world is kind of faintly visible through the book.

In fact, the great thing about a book and about the Kindle device is that when you’re using it, you can’t do anything else. It captures and focuses your attention. You can reach for a cup of coffee or something, but you really can’t drive, you can’t walk effectively. You have to commit to reading.

Fundamentally, there is a haptic – tactile feedback – aspect of reading, even on the Kindle or iPad, that’s important to keeping you engaged. It gives you something to do with at least one of your hands, and that engagement with the hand is the cue to your consciousness that you need to pay attention to what you’re doing. These haptics also extend to the all-important question of navigating the book. Again, with a real book, or with a Kindle or an iPad, you have a physical gesture on the item to turn the page, find the table of contents, and so on. And if you want to highlight a passage, or share it, or go back a few pages to reread that last part, you need a way to do all those things. When you’re interacting with the air, this becomes a disembodied gesture at best, and I suspect you’re not going to be able to do it with just your eyes. And of course both books and iPads are opaque – the rest of the world may appear around the book, but not through it.

In the interview with Charlie Rose, referred to in my earlier post, Sebastian Thrun showed an interaction of reaching up to the Google Glasses to push a button. But I don’t think that’s really going to work in the end. Not only is the gesture clumsy, because you can’t see your own hand at that point, but it’s also very conspicuous, where you might want the ability to be more subtle. And it’s only a single button – can you really fit all the necessary interactions into a single button? Steve Jobs couldn’t – that’s why Apple developed multi-touch for the iOS devices. Note that the most successful devices of all time – including the pencil, the book, and the iPhone – require visual engagement; they can’t be operated simply by touch.

But even the iOS devices have a problem – no tactile feedback to your actions. This is a big problem for me when I’m using the iPad or iPhone as an input device, for example. I’m a touch typist, but on the iPad there’s no way to feel whether I’m hitting the right keys, so I have to use my eyes, which slows me down.

A Proposal – A Smart Slate

Assuming my concerns are rational, how might you address this issue in a Google Glass era? You’re going to want to have something with which to interact, that has some physical presence, and that perhaps can even react to your touch. What I’m imagining is a “smart slate” type of device, on which the Google Glass device, or other devices like it, “project” the images for items that need a physical presence to be most useful, such as books, keyboards, “Minority Report”-like displays, and touch interfaces.

The glasses would keep track of the location of the slate, and always make sure the images are projected correctly for the current orientation of the device. If the slate is moved, the images are moved at the same time. If the slate is tilted away, the image tilts. If the user swipes the slate, the page turns, or the table of contents is loaded, depending on where the swipe occurred. The slate could be instrumented to tell the glasses about the swipe, or the glasses could use a Kinect-like capability to detect the swipe visually. In a more advanced version of the slate, it could provide haptic feedback, using one of several technologies that are becoming available for programmatically changing the texture of a surface, such as this technology from Senseg which may appear in Samsung smartphones soon.
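To make the idea concrete, here is a minimal sketch of the tracking-and-projection loop described above. The function name, coordinate conventions, and numbers are all hypothetical – this only illustrates mapping a page from slate-local coordinates into the glasses’ view space, given the slate’s tracked pose:

```python
import numpy as np

def project_page(corners_slate, rotation, translation):
    """Map page corners from slate-local coordinates into the glasses'
    view space, given the slate's tracked rotation and position."""
    return (rotation @ corners_slate.T).T + translation

# A 20 cm x 30 cm page on the slate, in slate-local coordinates (meters).
page = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.3, 0], [0, 0.3, 0]])

# Slate tilted 30 degrees about the x-axis, held 0.5 m in front of the eyes;
# each frame the glasses would re-run this with the latest tracked pose.
theta = np.radians(30)
tilt = np.array([[1, 0, 0],
                 [0, np.cos(theta), -np.sin(theta)],
                 [0, np.sin(theta),  np.cos(theta)]])
print(project_page(page, tilt, np.array([0, 0, 0.5])))
```

Tilt the slate and the rotation matrix changes; move it and the translation changes; either way the projected page follows, which is the whole trick.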

This is an example of something I’ve called “rematerialization” – a play on Daniel Burrus’s recognition of “dematerialization” as a central driver in the future. With digital technology we have dematerialized books, but in reality they’ve been rematerialized as Kindles and iPads. Because we humans exist in “meat space,” we still need our “stuff” to exist in meat space, even if it’s not in quite the same form as it used to be. And while our books may dematerialize even more, out of Kindles and iPads into Google Glasses, there’s still going to be a need for a meat-space interface for us to interact with them.

That’s What I Think – Now It’s Your Turn

What do you think? Are you looking forward to reading books floating in the air, or do you think there will still be a physical device when all is said and done?


11
Jun 12

What Comes After Google Glass? (Part 1)

Some Thoughts On The Future of Reading

Like any good technologist, I’m very interested in the future. I’m waiting anxiously to hear what Tim Cook will say this week at the Apple Worldwide Developer Conference, of course, but for this post I’m going to look a bit farther in the future.

Google Glass has been in the news lately, including an interview with Sebastian Thrun of the Google X research lab on the Charlie Rose show, where he showed the Google Glass device and even kind of demonstrated it. The fundamental idea of Google Glass, apparently, is to put a virtual layer over the real world to enable people to do cool stuff without interacting directly with a traditional device like a mobile phone, tablet, or laptop.

A question that occurred to me was how will people read in a Google Glass world? And by “read” I mean long form items like books, magazine articles, and so on – not the messages that come up on a phone screen.

Humans Remain Humans (At Least For A While)

Even in the future, there will be some “fixed points” of human behavior. For example, people will still be talking to each other. And they will still be reading books and long-form articles, and a smaller group will still be writing. But just as today we have many more ways for people to talk to each other than our ancestors did 100 years ago (e.g., the phone, text messages, email, video chat, and so on, in addition to the basics of speaking face-to-face), in the future there will be even more new modes of talking.

The same is true for reading. We’ve gone from hand-written books before Gutenberg, to hard-cover books for several centuries, to broadsheets, to paperbacks (as an addition to hardcover books), and lately to Kindles and e-readers (again, in addition to the physical books), as well as audio books.

In the activity of talking to people, we’ve essentially dematerialized the conversation – you don’t have to be in the same place as your interlocutor, and you don’t even have to speak.

Do We Need A Device? Not Technically

Today, we still need a physical artifact – a book, or a magazine, or a device – to read (versus listen to) a book. In the future, is there any reason that people need to read books on a device at all? Or on a device that’s so ubiquitous that it’s not really a device, such as Google Glass or its tenth generation descendant (Google Contacts)?

Well, there’s no question that this will be possible, and it will definitely be the way we consume some reading material, just as today I do occasionally read books on my iPhone, when my iPad or Kindle is not handy.

In fact, there’s no intrinsic reason it has to be a device. It could be a reengineered brain that hijacks the visual signal and adds something on it, like a book.

This is one vision of “the brave new world” of augmented reality, and as I say, there’s no technical reason this can’t happen.

Pitfalls Ahead?

But while I think this is an awesome and cool vision, there are some potential practical pitfalls, which I’ll talk about tomorrow in a follow-on post. In the meantime, what are your “visions” for the future of reading, and for anything else related to Google Glass?


07
Aug 11

Warning: One Of You Will Probably Drop $5k On A Pair Of These

Vuzix STAR 1200 Augmented Reality Glasses

OK, I want a pair of these, I truly do! I wonder if they work over regular glasses?

Originally saw this on the Technology Review site, in an article from May:

At the 2011 Consumer Electronics Show in Las Vegas, hardware company Vuzix revealed the first clear AR glasses for consumers. The glasses, called Raptyr, use holographic optics instead of video screens to make digital objects appear in mid-air. The approach is challenging, not least because the interface has to compensate for (or compete with) natural light. For this reason the lenses can electronically darken to compensate for brighter or darker environments.

Here’s the actual product page for the device. It’s now called the Star 1200 (Raptyr was maybe too game-y?), and it costs $5,000 right now (in pre-order). And it looks pretty clunky. But that it can even be done is totally amazing. And if its price curve follows Moore’s Law – and there’s no reason to think it wouldn’t – we should be seeing it in the $500 range, with a much better, less clunky design, in three to four years. That is, unless people start walking into buses while wearing them!
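A quick back-of-envelope check on that price curve. This is just a sketch assuming the price halves on a fixed schedule – the halving periods are my assumptions, not data – and it shows that a three-to-four-year path from $5,000 to $500 implies the price halving roughly every year:

```python
import math

# If the price halves every `halving_years` years (an assumed Moore's-Law-style
# schedule, not data), how long until `start` dollars falls to `target`?
def years_to_price(start, target, halving_years):
    return halving_years * math.log2(start / target)

for h in (1.0, 1.5, 2.0):
    print(f"halving every {h} yr -> $500 in {years_to_price(5000, 500, h):.1f} years")
```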

For more on what this all means in the next 15-20 years, take a quick look at Rainbows End by Vernor Vinge. Not his best-written book, but likely a prescient view of what augmented reality is going to become. (And of course, I didn’t read it as a book, but on my iPad, acting as a Kindle. The future is already here!)

They say they’ll be shipping this month (August 2011). You can pre-order now for a down payment of $2,000, with the remaining $3,000 payable on product release. Free shipping, though, in the U.S.!

So, I know you want a pair of these – but what are you willing to pay, and what do they have to look like before you’d be willing to wear them?