Technology in the Long Run

When do we realize things have changed?

Amara’s Law

Several decades back, the late futurist Roy Amara made a statement about technology. Today, it’s referred to as Amara’s Law.

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

Roy Amara

I’ve always loved this. But rereading it recently, I tried to reflect on my childhood and teenage years, on the technological leaps I witnessed, and to come up with a single instance in which I saw something new for the first time and thought, “Whoa, this is going to change the world.”

And try as I might, I couldn’t think of a single case prior to ~2011, when I got my first software engineering job and became immersed in tech innovation news.

If Amara was right, though, shouldn’t all of the exciting developments of the 90s and 00s have led to over-optimism? I suppose they did conceptually, for the public at large, but not for me personally. Consider the availability of wifi, or cell phones, or GPS. In those instances (and many more), years passed before I, and most others, realized we’d reached an inflection point that dramatically altered our lives.

Again, society and the media might have known it. But me, and possibly you? We were too busy living day-to-day.

So perhaps Amara was wrong. Perhaps, for the lay person, the law should read: “Most people tend to underestimate the effect of a technology in the short run and in the long run.”

Adoption Curves

You might expect that a lay person such as myself (prior to 2011) was simply too early in the adoption curve of these technologies to appreciate how significant they were. But in fact, the opposite is true. Virtually all of the things you learn about are, probabilistically, somewhere in the middle of their adoption curve. The reason? If they were earlier in their adoption curve, the chances of you hearing about them would be very low.

The yellow curve represents the percent of adoption, but can also be thought of as the percent of people who know about and understand the technology.
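
To make that concrete, here’s a minimal simulation of the idea. All of the numbers are illustrative assumptions of mine: adoption follows a logistic S-curve, and your chance of first hearing about a technology in any given month is proportional to how many people have already adopted it.

```python
import math
import random

# A minimal sketch, assuming adoption follows a logistic S-curve and
# that the chance of first hearing about a technology in any month is
# proportional to current adoption. All numbers are illustrative.

def adoption(month, midpoint=60, steepness=0.1):
    """Fraction of the population that has adopted by a given month."""
    return 1 / (1 + math.exp(-steepness * (month - midpoint)))

def first_hearing_month(hear_rate=0.1, horizon=240):
    """Simulate the month you first hear about the technology."""
    for month in range(1, horizon + 1):
        if random.random() < hear_rate * adoption(month):
            return month
    return horizon  # effectively everyone has heard by the horizon

samples = [first_hearing_month() for _ in range(10_000)]
median = sorted(samples)[len(samples) // 2]
print(f"Median first-hearing month: {median} "
      f"(adoption at that point: {adoption(median):.0%})")
```

Under these (admittedly hand-picked) assumptions, the median first hearing lands near the curve’s midpoint: by the time a technology is likely to cross your radar, roughly half of its eventual adopters are already on board.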

Take wifi, for instance. I remember exactly where I was the first time I connected to a wifi network. And I don’t remember it because it was life-changing. I remember it because of the cognitive dissonance that arose from realizing my computer was somehow able to communicate with the internet “through the air.” This was circa 2005, when I was a student at Cornell University, and a sign in the library informed me that I could connect my computer to the “wireless internet,” without plugging in, and continue browsing the web while outside. How weird.

Millions of Americans already had access to wifi by that point. And I should have paused right then and said, “Hold up. Let me extrapolate this a bit. If I can do that here, might it mean that I will one day be able to do it anywhere? Might this technology become cheap enough that internet access will be commonplace no matter where you are? Is it possible that plugging my computer into an ethernet port will become a thing of the past?”

But I didn’t do that. That thought didn’t occur to me until years later, when I was packing for a move and realized that the dozens of ethernet cables I’d saved over the years were now obsolete.

Compounding Changes

There are two main reasons why, in the moment, we don’t grasp the impact a technology is about to have on our lives, even though opportunities for these realizations arise every year in myriad ways.

Reason #1: The changes take place incrementally

Most technological leaps are not leaps at all. They’re tiny changes that unfold along long exponential curves. That means they will eventually grow astronomically (hence Amara’s Law about the long run). But day by day, they seem to make little to no progress.

The best example? Moore’s Law, first articulated by Gordon Moore in 1965, which holds that the number of transistors on a chip (and with it, computing power) doubles roughly every two years. That prediction held astonishingly well until very recently. But on a daily, weekly, or monthly basis, consumers didn’t see their electronic devices making massive leaps forward. They saw incremental change at best. And here’s the thing about exponential change viewed over a short timeline: it looks like linear change.
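
Here’s a minimal sketch of that last point, with a doubling period and window length I picked purely for illustration:

```python
# Exponential growth viewed over a short window is nearly
# indistinguishable from a straight line. The doubling period and
# window length below are assumed, purely for illustration.

DOUBLING_MONTHS = 24  # Moore's-Law-style doubling every two years
WINDOW = 6            # look at just six months of progress

months = range(WINDOW + 1)
exponential = [2 ** (m / DOUBLING_MONTHS) for m in months]

# The straight line through the window's two endpoints
slope = (exponential[-1] - exponential[0]) / WINDOW
linear = [exponential[0] + slope * m for m in months]

for m, e, l in zip(months, exponential, linear):
    print(f"month {m}: exponential={e:.4f}  linear={l:.4f}  gap={abs(e - l):.4f}")
```

Over those six months, the exponential and the straight line never differ by more than about half a percent. Extend the same curve out a few years, though, and the line falls hopelessly behind.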

Reason #2: Humans are horrible at extrapolating exponential curves

And that brings me to the second reason: human beings, myself included, are typically psychologically incapable of grasping exponential change. Moore’s Law is only one example. I’ve written before about the effects of compounding, and how we underestimate the returns it can yield. (This is why the best financial advice young people get is to save early and earn compound interest; the inability to see the change happen in real time is also why that advice is rarely followed.)
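
As a minimal sketch of that gap, here’s what linear intuition expects versus what compounding actually delivers, using a principal and return rate I made up for illustration:

```python
# Compare what linear intuition expects with what compounding
# actually delivers. The principal and annual return are assumed,
# purely for illustration.

principal = 10_000
rate = 0.07  # a hypothetical 7% annual return

print(f"{'years':>5} {'linear':>12} {'compound':>12}")
for years in (1, 5, 10, 20, 30, 40):
    linear = principal * (1 + rate * years)      # intuition: a steady drip
    compound = principal * (1 + rate) ** years   # reality: growth on growth
    print(f"{years:>5} {linear:>12,.0f} {compound:>12,.0f}")
```

After one year the two columns are identical. After forty, compounding has delivered roughly four times what the linear estimate suggests.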

What About Now?

Now that I’m in the tech world, I see new technologies, and their potential impact, much earlier in their adoption curves. I’ve also gotten better (although not great) at that extrapolation exercise we humans suck at. And so today, looking at the hype around AI, I’m torn. On the one hand, many of the applications people are excited about will probably become footnotes in history books at best. But not all of them.

We are seeing a revolution in the making in fields that will have a dramatic impact on how we live. Three that I believe wholeheartedly are on the cusp of something major: 1) healthcare; 2) transportation; and 3) education.

And it’s that last one, education, that I’m particularly excited about, and where I’m working on something new myself. More to come…

P.S.

The recent episode of Acquired about the history of Microsoft explores at length how very few people in the 1970s understood what Moore’s Law would mean for the future. Those who did understand it built one of the most valuable companies of all time. It’s a fascinating story and one I highly recommend (despite being a 4.5-hour listen, and that’s only part one!).