Sotto Voce.

"Qui plume a, guerre a." — Voltaire

Keyboards and the Future of Computers

At Apple’s recent special event announcing its latest MacBook Pro lineup, SVP Phil Schiller introduced the new Touch Bar feature by explaining that it was designed to provide a dynamic and adaptive replacement for the row of physical function keys that has accompanied computer keyboards since the early 1970s. Why, he asked, should interface design be constrained by the legacy of a 45-year-old technology?

Yet, just to the south of the new Touch Bar on this sleek, ultra-modern device sits a nearly 145-year-old technology that continues to artificially constrain computer interface design — one that I believe is way overdue for a radical reimagining:

The physical keyboard.

You’d probably think that, as a guy who makes his living herding words, I’d be the one yelling the loudest that you can have my keyboard when you pry it from my cold, dead hands. But before I can explain why I believe the future of writing absolutely demands the disappearance of the physical keyboard, first I need to go off on a highly pedantic tangent for just a moment.

The Way I Want to Touch You
The near-concurrent release of Microsoft’s impressive Surface Studio has inspired a lot of discussion about the divergent approaches that Microsoft and Apple are taking to interface design. The most common form of the argument is that, when it comes to their full-featured devices, Microsoft is cool with you touching the screen and Apple isn’t.

I don’t think that’s the real difference, though.

Now here’s the part where I get pedantic. macOS, Apple’s flagship operating system, has always been touch-enabled (as has Windows, for that matter). You move the mouse with your hand and press keys with your fingers, and these actions make things happen on the screen. (And this is the part where you groan and roll your eyes.)

The difference is that these are intermediated inputs. Your hand is moving a device over here that makes changes appear over there. When you put your finger directly on the screen where you want something to happen, you are disintermediating the input from the output.

What Apple seems to be saying is that it defines the boundaries of its OS ecosystems not in terms of the devices they are used on or the things people use them for, but in terms of whether the method of input is intermediated or disintermediated from the output. On an iPhone and iPad, the input and output are primarily taking place on the same plane. On a laptop or desktop, the input is taking place on a separate device from the output.

This is a perfectly valid design philosophy. Whether it will end up winning the survival-of-the-fittest competition of the marketplace to become the industry standard remains to be seen. But either way, I think it offers the potential for a coherent, consistent worldview.

The problem is, as demonstrated on the new MacBook Pros, in order for this input philosophy to have a fair shot at success, it has to ditch the physical keyboard — which, along with the monitor, is one of the things that traditionally defines what a computer is.

Invisible Touch

As it stands now, Apple’s flagship laptop has two context-sensitive touch interfaces on the horizontal slab that allow users to interact with the system using a vocabulary of gestures that are both universal to the entire system and particular to the application or even to a specific function within the application.

And between these two sophisticated, modern, highly adaptive inputs sit four rows of single-purpose, permanently affixed mechanical switches.

Apple’s demonstration of the new Touch Bar featured a parade of people using it to do all kinds of magical things. We had a photographer using the trackpad and Touch Bar to edit a photograph, a filmmaker using them to splice a video, and a musician using them to lay down tracks.

Conspicuously, none of them ever touched the alphanumeric keys that took up most of the horizontal real estate.

What if that space could have been used as a light table for the photographer, a digital moviola for the filmmaker, and a full-on mixing board and platters for the musician? And then, when it came time to name a file or type a caption, the alphanumeric keyboard could appear just long enough to type it, then vanish, replaced once again by a suite of controls appropriate to the particular application?

And when you’re using office-y applications, the keyboard of your choice — QWERTY, Dvorak, scientific calculator, adding machine, mathematical formula, shorthand, editing symbols — would be the context-appropriate interface that appears on the horizontal input surface, just when you need it.

Apple is already exploring this philosophy on its mobile devices. Want your QWERTY keys to travel and click like they do on a physical keyboard? Haptics have you covered. Need a trackpad? Touch the on-screen keyboard with two fingers and it instantly transforms into one, then snaps back to keyboard mode when you’re done. So not only is it doable, it already exists within the Apple design philosophy.

Touch the Sky

The QWERTY keyboard is just one of many touch-based input methods that we use to create, manipulate, save, and share what we make. Requiring that it take up permanent residence on a device in the form of physical, single-purpose mechanical switches just looks and feels increasingly antiquated. It forces designers to add modern interface tools around its periphery just as, 45 years ago, computer designers had to add keys for functions, commands, and navigation around the basic, and familiar, typewriter keyboard.

The more things change in computer interface design, in other words, the more they stay the same — as long as those physical QWERTY keys have to be there.

So if Apple and others decide to get rid of the physical keyboard in order to realize the full potential of intermediated input, what would that mean for writers?

As far as I can tell, nothing.

Hey, if you really need or want or can’t do without a physical keyboard, get a wireless one or plug one in. Otherwise, type away on your haptic-glass input panel. If it’s done really well — which is what will make or break the idea — typing on virtual keyboards will be a perfectly normal experience for most users, while also offering options that are impossible with physical keyboards — adjustable key spacing, variable sensitivity, repositioning for maximum comfort, customizable haptic responses, multiple alphabets, accuracy adjustments based on typing speed, user-customizable keys, and tons of other things I can’t even begin to imagine.
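As a sketch of how little machinery some of those options would actually require, here’s a toy Python example of just one of them, user-adjustable key spacing and repositioning. Every name, unit, and number here is hypothetical and purely illustrative; no such API exists on these devices today:

```python
# Toy sketch of user-adjustable key zones on a virtual keyboard.
# Everything here is hypothetical -- the function name, units, and
# numbers are illustrative, not any real platform's API.
def layout_row(letters, key_width, gap, x_offset, y_offset):
    """Return {letter: (x, y, width)} touch zones for one row of keys,
    spaced and positioned according to the user's preferences."""
    zones = {}
    for i, letter in enumerate(letters):
        # Each key starts one key-width plus one gap after the last.
        x = x_offset + i * (key_width + gap)
        zones[letter] = (x, y_offset, key_width)
    return zones

# A roomier layout for a user who wants more separation between keys:
roomy = layout_row("asdf", key_width=1.0, gap=0.5, x_offset=0.0, y_offset=2.0)
print(roomy["s"])  # → (1.5, 2.0, 1.0)
```

On a physical keyboard, changing any of those parameters means buying new hardware; on glass, each one is just a settings slider.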

Another way to interpret Apple’s dictum that it doesn’t want you touching the screen in macOS is that it doesn’t want you touching the output screen. Apple’s laptops now provide two touch screens on the input side: the trackpad and the Touch Bar. The next step is to unify them — along with the alphanumeric keys that sit between them. Thursday’s MacBook Pro event demonstrated pretty clearly that the future of intermediated input is adaptive, context-sensitive, and virtual. And that means doing away with the mechanical QWERTY keyboard.

The keyboard is dead. Long live the keyboard.



  1. TCWriter says:

    I wonder if you aren’t underestimating the difficulties of touch-typing on a flat surface. The keys on keyboards are cupped and marked with raised bumps for a reason (namely, sight-free typing).

    Haptic feedback offers some promise, but helping people position their fingers at the start — and on the fly — seems like a tall order.

    • sottovoce says:

      Accurate and consistent finger positioning is certainly the hurdle, but I’m not convinced it’s insurmountable.

      I think we’re going to get hung up if we focus on trying to recreate a digital analogue to a physical keyboard. As you say, the advantage of physical keys is that they allow us to discern the locations and boundaries of keys by touch.

      So why can’t we invent other, non-mechanical ways to discern the boundaries of keys besides physical edges?

      For example, perhaps there could be a way to charge the glass with localized spots of electrostatic resistance that make them feel “sticky” when your finger glides over them. It wouldn’t have to mimic the exact feel of a physical key so much as provide some sort of equivalent sensory cue that indicates the boundaries of the key zones.

I think if we decouple what it is about keys that works, and why it works, from the keys themselves, we might come up with some surprising options we haven’t thought of before.

  2. Richard P says:

    My concern is similar to TCWriter’s. As it stands, I can position my fingers with just a glimpse at the keyboard (or even none, since the F and J keys on my MacBook have little ridges to guide my fingers). Then I touch-type in a smooth and comfortable way. The keyboard works very well for me as an interface. In contrast, on a flat touch-screen I have to look at the visual keyboard as I peck out my words. It’s very inefficient and inaccurate (and then there is the three-row keyboard to deal with). The inaccuracies are partly overcome by autocorrect, which has its own drawbacks and feels intrusive.

    Maybe if I just put down my hands anywhere on a flat surface, the software could be clever enough to recognize what keys I intended to be activating just by the motions of my fingers. But for people who can’t touch-type, that won’t work.

    Your idea about electrostatic resistance might be a good solution.

    • sottovoce says:

      I too have found typing on glass to be less efficient, but I haven’t had as much of a problem with inaccuracy, which has probably unduly influenced my opinion as to just how straightforward the transition from physical to virtual will be! 😀

      I like your idea about clever software that learns to adapt to your finger placement. I can see that happening; it would require some combination of predictive text, letter-frequency analysis, and learned patterns to make it work reliably. All of those things already exist in one form or another.
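      As a toy sketch of how those pieces might combine, here is some Python that scores a touch point against each nearby key by distance, then biases the result by how plausible that letter is after the previous one. The key positions and bigram weights below are invented for illustration; a real system would calibrate the positions to the user’s hands and learn the weights from a corpus and from the user’s own typing.

```python
import math

# Toy key centers for part of a QWERTY layout (arbitrary units;
# a real system would calibrate these to the user's resting hands).
KEY_CENTERS = {
    "t": (4.0, 0.0), "y": (5.0, 0.0), "u": (6.0, 0.0),
    "g": (4.3, 1.0), "h": (5.3, 1.0), "j": (6.3, 1.0),
}

# Toy language weights: how plausible `second` is after `first`.
# Invented numbers; a real model would be learned, not hand-coded.
BIGRAM_WEIGHT = {
    ("t", "h"): 0.9,   # "th" is very common in English
    ("t", "y"): 0.2,
    ("t", "g"): 0.05,
}

def guess_key(touch_xy, prev_char):
    """Score each key by proximity to the touch point, then bias the
    score by how plausible the letter is after prev_char."""
    tx, ty = touch_xy
    best_key, best_score = None, float("-inf")
    for key, (kx, ky) in KEY_CENTERS.items():
        dist = math.hypot(tx - kx, ty - ky)
        proximity = math.exp(-dist * dist)          # closer -> higher
        language = BIGRAM_WEIGHT.get((prev_char, key), 0.1)
        score = proximity * language
        if score > best_score:
            best_key, best_score = key, score
    return best_key

# A touch that lands between "y" and "h" after a "t" resolves to "h",
# because "th" is far more plausible than "ty".
print(guess_key((5.3, 0.6), "t"))  # prints "h"
```

      The point of the sketch is just that none of the ingredients is exotic: distance scoring, a simple language model, and per-user calibration are all well-understood pieces that already ship in one form or another.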

      I wonder if anyone is working on tactile-responsive touch screens. It would be the next frontier…

  3. TCWriter says:

If flatscreen input is really the future, I’d suggest the standard QWERTY alphanumeric key layout will simply sink from view.

    Instead of trying to adapt the QWERTY keyboard, it’s probably better to simply replace it with a combination of predictive chording and voice input, probably using the adaptive key placement tech mentioned above.

In other words, adapting a keyboard designed to slow a touch-typist to flatscreens — a technology notoriously difficult to touch-type on — is probably possible, but is it desirable?

    • sottovoce says:

      I think that’s a logical prediction, and I for one welcome our new touch-screen overlords — and I say that as a proficient keyboarder and typewriter fan. This old dog thinks it’s time to learn some new tricks.

The QWERTY configuration was a fine solution to a problem that effectively disappeared with the advent of the IBM Selectric and digital computers 40+ years ago. It has stuck around since then because it works, of course, but also because it has a lot of inertia behind it, which is going to be hard to nudge in a new direction.

      Plus, if we abandon QWERTY we’re probably looking at a protracted period of experimentation akin to the early days of typewriters before we find the new standard, but with the potential for much more disruption since QWERTY keyboards are ubiquitous today. What kind of effect will that have on communications?

We’re talking about a fundamental sea change with unpredictable consequences. Who out there is willing to take such an enormous risk? Will it start at the top with MS or Apple, or with one of the hardware manufacturers like Dell or Lenovo, or will it be a startup coming out of nowhere with nothing to lose? No way to know yet. But someone has to go first and say “come on in, the water’s fine.” Otherwise, we’re going to remain where we are.

      Whatever ends up happening, though, the nice thing about a virtual keyboard in such a scenario is that you can do all that experimentation without having to buy new hardware every time something new comes out. Just load the new keyboard software and start typing, like you can do on a mobile device now.

  4. sottovoce says:

    Phil Schiller discusses intermediated touch in an interview with The Independent that was published today: “Apple’s Philip Schiller talks computers, touchscreens and voice on the new MacBook Pro.”

    The money quotes:

    It’s part of our thinking about where to take the notebook next. Others are trying to turn the notebook into the tablet. The new MacBook Pro is a product that celebrates that it is a notebook, this shape that has been with us for the last 25 years is probably going to be with us for another 25 years because there’s something eternal about the basic notebook form factor.

    You have a surface that you type down on with your hands, with a screen facing you vertically. That basic orientation, that L shape makes perfect sense and won’t go away. The team came up with this idea that you can create a multi-touch surface that’s coplanar with the keyboard and the trackpad but brings a whole new experience into it, one that’s more interactive, with multi-touch.


    We’re steadfast in our belief that there are fundamentally two different products to make for customers and they’re both important. There’s iPhone and iPad which are single pieces of glass, they’re direct-manipulation, multi-touch and tend towards full-screen applications. And that’s that experience. And we want to make those the best in that direction anyone can imagine. We have a long road ahead of us on that.

Then there’s the Mac experience, dominated by our notebooks and that’s about indirect manipulation and cursors and menus. We want to make this the best experience we can dream of in this direction.

  5. sottovoce says:

People seem to be latching on to the distinction between intermediated and disintermediated input as a way to figure out where the Mac fits into the Apple lineup. Check out this post by market analyst Horace Dediu, for example. He gets a little gushy, but he sees the defining distinction between Macs and mobile as being about input technique. Plus, he introduces the much less clunky terms direct and indirect input, which I like. The whole article is worth reading, but here’s the crux:

    The key to the Mac therefore becomes that which the iPad/iPhone isn’t: an indirect input device. The keyboard and mouse/trackpad are what define the Mac. The operating system, the apps, the UX, are all oriented around the indirect input method. The iPhone’s capacitive touch brought about the direct input method, a third pivot in input methods (first was mouse, second trackpad/scroll wheel). Each pivot launched a new set of platforms and the Mac is the legacy of the second.

    It’s not obsolete but it is a decreasing share of engagement. Alternate ways of doing the jobs it does well with direct input are emerging on the third pivot but they are not yet good enough. …

    The management thus has to focus on how to make the keyboard/trackpad interface better while still saying and believing that the future is touch.

    In this context the newest MacBooks Pro are a logical extension of the second wave of computing while avoiding cramming them into the third wave. They are defined by their constraints. Seen thusly, the move from keyboard/trackpad to keyboard/touchbar/trackpad is pure genius.

    Even so, it may seem that Apple is pulling punches. The product could have evolved into the full-touch, dual screens, pen input, hybrid model of Windows. But that only makes sense if you don’t have a mobile product that is promising the same and tearing up the world at the same time.

    If I’m reading it right, though, that last paragraph reveals a blind spot: Dediu seems to take it for granted that when it comes to direct and indirect input, a 100% touch-based interface is only appropriate for direct-input devices. Hybrid inputs mean that the device is still safely, recognizably computer-ish; full-bore touch is, by definition or assumption, reserved for mobile devices.

    My argument is that full-bore touch can be just as appropriate for indirect input as it is for direct. Touch is not about what you touch, it’s about what you do using touch.

    Mobile devices have one surface for both input and output. Desktops and laptops have two coplanar surfaces (to use Phil Schiller’s term), one for input and one for output. Why can’t input be touch-based on both kinds of devices? Particularly if it provides meaningful tactile feedback that allows sight-free use, as TCWriter and Richard P. mentioned above for typing, and which other types of inputs also need. After all, since indirect touch is almost by definition not line-of-sight, to really work it needs to provide some kind of sensory feedback that isn’t eyesight dependent. Let’s make touch truly about touch.

The MBP Touch Bar is new and it has its physical limitations, but I suspect it’s going to reveal to people that a touch-based screen that doesn’t have to function as an output display opens up new ways of thinking about touch.

    What I mean by that: the real power of a keyboard (QWERTY or Dvorak or whatever) is that it is a control panel. It does not have to be designed to accommodate what you’re seeing on the screen. The design of mobile touch — that is, unified input/output screen touch — has to take into account that it’s doing its actions on the same playing field where the user is seeing the results. But a control panel is like a menu of choices that you manipulate to assemble something over there.

    I’m not articulating the ideas in that last paragraph very well and I’m not happy with how it reads, but I’m putting it down in its raw form so that I can go back and revisit the idea over time. And hopefully to get some feedback on it too. Whaddya think?