Keyboards and the Future of Computers
At Apple’s recent special event announcing its latest MacBook Pro lineup, SVP Phil Schiller introduced the new Touch Bar feature by explaining that it was designed to provide a dynamic and adaptive replacement for the row of physical function keys that has accompanied computer keyboards since the early 1970s. Why, he asked, should interface design be constrained by the legacy of a 45-year-old technology?
Yet, just to the south of the new Touch Bar on this sleek, ultra-modern device sits a nearly 145-year-old technology that continues to artificially constrain computer interface design — one that I believe is way overdue for a radical reimagining:
The physical keyboard.
You’d probably think that, as a guy who makes his living herding words, I’d be the one yelling the loudest that you can have my keyboard when you pry it from my cold, dead hands. But before I can explain why I believe the future of writing absolutely demands the disappearance of the physical keyboard, I need to go off on a highly pedantic tangent for just a moment.
The Way I Want to Touch You
The near-concurrent release of Microsoft’s impressive Surface Studio has inspired a lot of discussion about the divergent approaches that Microsoft and Apple are taking to interface design. The most common form of the argument is that, when it comes to their full-featured devices, Microsoft is cool with you touching the screen and Apple isn’t.
I don’t think that’s the real difference, though.
Now here’s the part where I get pedantic. macOS, Apple’s flagship operating system, has always been touch-enabled (as has Windows, for that matter). You move the mouse with your hand and press keys with your fingers, and these actions make things happen on the screen. (And this is the part where you groan and roll your eyes.)
The difference is that these are intermediated inputs. Your hand is moving a device over here that makes changes appear over there. When you put your finger directly on the screen where you want something to happen, you are disintermediating the input from the output.
What Apple seems to be saying is that it defines the boundaries of its OS ecosystems not in terms of the devices they are used on or the things people use them for, but in terms of whether the method of input is intermediated or disintermediated from the output. On an iPhone and iPad, the input and output are primarily taking place on the same plane. On a laptop or desktop, the input is taking place on a separate device from the output.
This is a perfectly valid design philosophy. Whether it will end up winning the survival-of-the-fittest competition of the marketplace to become the industry standard remains to be seen. But either way, I think it offers the potential for a coherent, consistent worldview.
The problem is that, as the new MacBook Pros demonstrate, for this input philosophy to have a fair shot at success it has to ditch the physical keyboard — which, along with the monitor, is one of the things that traditionally defines what a computer is.
Invisible Touch
As it stands now, Apple’s flagship laptop has two context-sensitive touch interfaces on the horizontal slab that allow users to interact with the system using a vocabulary of gestures that are both universal to the entire system and particular to the application or even to a specific function within the application.
And between these two sophisticated, modern, highly adaptive inputs sit four rows of single-purpose, permanently affixed mechanical switches.
Apple’s demonstration of the new Touch Bar featured a parade of people using it to do all kinds of magical things. We had a photographer using the trackpad and Touch Bar to edit a photograph, a filmmaker using them to splice a video, and a musician using them to lay down tracks.
Conspicuously, none of them ever touched the alphanumeric keys that took up most of the horizontal real estate.
What if that space could have been used as a light table for the photographer, a digital moviola for the filmmaker, and a full-on mixing board and platters for the musician? And then, when it came time to name a file or type a caption, have the alphanumeric keyboard appear for just long enough for them to type it, then disappear again to be replaced once again by a suite of controls appropriate for the particular application?
And when you’re using office-y applications, the keyboard of your choice — QWERTY, Dvorak, scientific calculator, adding machine, mathematical formula, shorthand, editing symbols — would be the context-appropriate interface that appears on the horizontal input surface, just when you need it.
Apple is exploring this philosophy on its mobile devices. Want your QWERTY keys to travel and click like they do on a physical keyboard? Haptics have you covered. Don’t need a mouse trackpad? It disappears from the screen while typing until you touch it with two fingers, at which time the keyboard instantly transforms into a trackpad and then snaps back to keyboard mode when you’re done. So not only is it doable, it already exists within the Apple design philosophy.
Touch the Sky
The QWERTY keyboard is just one of many touch-based input methods that we use to create, manipulate, save, and share what we make. Requiring that it take up permanent residence on a device in the form of physical, single-purpose mechanical switches just looks and feels increasingly antiquated. It forces designers to add modern interface tools around its periphery just as, 45 years ago, computer designers had to add keys for functions, commands, and navigation around the basic, and familiar, typewriter keyboard.
The more things change in computer interface design, in other words, the more they stay the same — as long as those physical QWERTY keys have to be there.
So if Apple and others decide to get rid of the physical keyboard in order to realize the full potential of intermediated input, what would that mean for writers?
As far as I can tell, nothing.
Hey, if you really need or want or can’t do without a physical keyboard, get a wireless one or plug one in. Otherwise, type away on your haptic-glass input panel. If it’s done really well — which is what will make or break the idea — typing on virtual keyboards will be a perfectly normal experience for most users, while also offering options that are impossible with physical keyboards — adjustable key spacing, variable sensitivity, repositioning for maximum comfort, customizable haptic responses, multiple alphabets, accuracy adjustments based on typing speed, user-customizable keys, and tons of other things I can’t even begin to imagine.
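Just to make that concrete, here’s a minimal sketch in Swift of what a user-tunable keyboard profile might look like. Every type and property name here is hypothetical (no such API exists), but each of these knobs is trivial in software and impossible with physical switches:

```swift
import Foundation

// Hypothetical sketch: none of these types exist in any shipping API.
// Each property below is something a haptic-glass keyboard could let
// users adjust that a physical keyboard never could.

enum KeyboardLayout: String, Codable {
    case qwerty, dvorak, shorthand, calculator
}

struct HapticResponse: Codable {
    var clickIntensity: Double  // 0.0 (silent glass) ... 1.0 (typewriter thunk)
    var travelIllusion: Double  // simulated key travel, in millimeters
}

struct VirtualKeyboardProfile: Codable {
    var layout: KeyboardLayout = .qwerty
    var keySpacing: Double = 19.0   // millimeters between key centers
    var splitAngle: Double = 0.0    // ergonomic split/tilt of the two halves
    var sensitivity: Double = 0.5   // touch pressure needed to register a press
    var haptics = HapticResponse(clickIntensity: 0.7, travelIllusion: 1.5)

    // "Accuracy adjustments based on typing speed": loosen the hit
    // tolerance as speed rises, since fast typists drift off center.
    func hitTolerance(wordsPerMinute: Double) -> Double {
        let base = keySpacing * 0.45
        return base * (1.0 + min(wordsPerMinute, 120) / 240)
    }
}

let myProfile = VirtualKeyboardProfile(layout: .dvorak, keySpacing: 20.5)
print("Hit tolerance at 80 wpm: \(myProfile.hitTolerance(wordsPerMinute: 80)) mm")
```

And because a profile like this is just data, it could follow you from device to device the way keyboard settings already do on mobile.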
Another way to interpret Apple’s dictum that it doesn’t want you touching the screen in macOS is that it doesn’t want you touching the output screen. Apple’s laptops now provide two touch screens on the input side: the trackpad and the Touch Bar. The next step is to unify them — along with the alphanumeric keys that sit between them. Thursday’s MacBook Pro event demonstrated pretty clearly that the future of intermediated input is adaptive, context-sensitive, and virtual. And that means doing away with the mechanical QWERTY keyboard.
The keyboard is dead. Long live the keyboard.
UPDATE: 3/17/18: Yes, a year and a half later and I still haven’t given up on this quixotic vision, and now I’ve seen a glimmer of hope that I may not be alone: “Apple invention involves a Mac laptop with a ‘keyless keyboard’,” AppleWorld.Today, 3/15/18.
You know what else I haven’t given up on? Wred and the basic concepts behind it. But that’s another dog for another fight.
UPDATE: 6/3/18: Mashable’s Michael Nuñez: “It’s time for Apple to end the tyranny of the physical keyboard altogether and build a laptop with a full touchscreen keyboard.” (“It’s time for Apple’s next act of courage: Kill the MacBook keyboard”)
UPDATE: 6/20/18: Apple World Today’s Dennis Sellers approaches the same idea from a different angle: “Why doesn’t some company make overlays that show different types of keypads on the trackpad without interfering with the touch surface? Let’s take it to the next level: offer such a product, with software, that can tell where and when someone touches designated areas?” (“Invention idea: keypad overlays for Apple’s Magic Trackpad”)
UPDATE: 10/5/19: According to the redoubtable Apple World Today, Apple has filed a patent for “static pattern electrostatic haptic electrodes” embedded below a touchscreen that “may produce a variable friction between a conductive object and the insulating material as the conductive object moves across the insulating material” as “part of a display device that is configured to display a virtual key of a virtual keyboard in the area.” So maybe my idea isn’t that crazy after all…
(“Apple patent filing hints at improved virtual keyboards for iOS, iPadOS, macOS devices”)
UPDATE: 5/19/20: The patent hunters at Apple World Today have found another interesting one, but this time instead of glass it’s aluminum, with tiny little perforations that “may be selectively illuminated based on a gesture performed on the contact portion.” No indication of whether it would include any kind of haptics. As shown in the patent drawings (which are not always accurate representations of the intended use, remember), this “dynamic input surface” takes the place of the traditional trackpad and leaves the physical keyboard in place, but the description appears to suggest it’s more of a multi-purpose controller. I think we’re getting closer. (“Apple patent filing hints at Mac laptops with virtual, not physical, keyboards”)
I wonder if you aren’t underestimating the difficulties of touch-typing on a flat surface. The keys on keyboards are cupped and marked with raised bumps for a reason (namely, sight-free typing).
Haptic feedback offers some promise, but helping people position their fingers at the start — and on the fly — seems like a tall order.
Accurate and consistent finger positioning is certainly the hurdle, but I’m not convinced it’s insurmountable.
I think we’re going to get hung up if we focus on trying to recreate a digital analogue to a physical keyboard. As you say, the advantage of physical keys is that they allow us to discern the locations and boundaries of keys by touch.
So why can’t we invent other, non-mechanical ways to discern the boundaries of keys besides physical edges?
For example, perhaps there could be a way to charge the glass with localized spots of electrostatic resistance that make them feel “sticky” when your finger glides over them. It wouldn’t have to mimic the exact feel of a physical key so much as provide some sort of equivalent sensory cue that indicates the boundaries of the key zones.
I think if we decouple what it is about keys that works, and why it works, from the keys themselves, we might come up with some surprising options we haven’t thought of before.
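Purely as a thought experiment, here’s a rough Swift sketch of the software side of that electrostatic idea: map each key to a zone on the glass, then compute a friction level that peaks in a narrow ridge around the zone’s edges, so a gliding finger feels the boundary without looking. The hardware is imaginary, and so is every name in the sketch:

```swift
import Foundation

// Hypothetical: assumes hardware that can vary electrostatic friction at
// arbitrary points under the glass. This only models the mapping from a
// touch position to a friction level; no such API actually exists.

struct KeyZone {
    let label: String
    let frame: CGRect
}

/// Friction from 0 (smooth glass) to 1 (maximum stickiness), peaking in
/// a band of width `ridgeWidth` around the edges of the key's rectangle.
func friction(at point: CGPoint, for key: KeyZone, ridgeWidth: CGFloat = 1.5) -> CGFloat {
    guard key.frame.insetBy(dx: -ridgeWidth, dy: -ridgeWidth).contains(point) else {
        return 0  // the finger is nowhere near this key
    }
    // Distance from the touch point to the nearest edge of the rectangle.
    let dx = min(abs(point.x - key.frame.minX), abs(point.x - key.frame.maxX))
    let dy = min(abs(point.y - key.frame.minY), abs(point.y - key.frame.maxY))
    let edgeDistance = min(dx, dy)
    // Fully sticky on the edge itself, falling off linearly to smooth.
    return max(0, 1 - edgeDistance / ridgeWidth)
}

let fKey = KeyZone(label: "F", frame: CGRect(x: 80, y: 40, width: 18, height: 18))
print(friction(at: CGPoint(x: 80.5, y: 49), for: fKey))  // near left edge: sticky
print(friction(at: CGPoint(x: 89, y: 49), for: fKey))    // dead center: smooth
```

The sticky ridges would do for fingertips what cupped keycaps and the F/J bumps do now: tell you where you are without your eyes.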
My concern is similar to TCWriter’s. As it stands, I can position my fingers with just a glimpse at the keyboard (or even none, since the F and J keys on my MacBook have little ridges to guide my fingers). Then I touch-type in a smooth and comfortable way. The keyboard works very well for me as an interface. In contrast, on a flat touch-screen I have to look at the visual keyboard as I peck out my words. It’s very inefficient and inaccurate (and then there is the three-row keyboard to deal with). The inaccuracies are partly overcome by autocorrect, which has its own drawbacks and feels intrusive.
Maybe if I just put down my hands anywhere on a flat surface, the software could be clever enough to recognize what keys I intended to be activating just by the motions of my fingers. But for people who can’t touch-type, that won’t work.
Your idea about electrostatic resistance might be a good solution.
I too have found typing on glass to be less efficient, but I haven’t had as much of a problem with inaccuracy, which has probably unduly influenced my opinion as to just how straightforward the transition from physical to virtual will be! 😀
I like your idea about clever software that learns to adapt to your finger placement. I can see that happening; it would require some combination of predictive text, letter-frequency analysis, and learned patterns to make it work reliably. All of those things already exist in one form or another.
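Something along these lines, maybe: a toy Swift sketch that scores each candidate key by combining how close the touch landed to the key’s center with a bigram prior for which letter tends to follow the previous one. Every number here is made up for illustration; a real system would learn them from the user’s own typing:

```swift
import Foundation

// Toy key coordinates (millimeters) for part of the top row.
let keyCenters: [Character: (x: Double, y: Double)] = [
    "q": (0, 0), "w": (19, 0), "e": (38, 0), "r": (57, 0), "t": (76, 0),
]

// Toy bigram model: probability of a letter given the previous letter.
let bigram: [String: Double] = [
    "th": 0.30, "te": 0.12, "tr": 0.08, "tt": 0.05, "tw": 0.02, "tq": 0.001,
]

/// Gaussian-ish likelihood that a touch at `touch` was aimed at `center`.
func touchLikelihood(touch: (x: Double, y: Double),
                     center: (x: Double, y: Double),
                     spread: Double = 8.0) -> Double {
    let d2 = pow(touch.x - center.x, 2) + pow(touch.y - center.y, 2)
    return exp(-d2 / (2 * spread * spread))
}

/// Combined score: where the finger landed, weighted by language context.
func score(key: Character, center: (x: Double, y: Double),
           touch: (x: Double, y: Double), previous: Character) -> Double {
    let prior = bigram["\(previous)\(key)"] ?? 0.01  // floor for unseen pairs
    return touchLikelihood(touch: touch, center: center) * prior
}

/// The most probable intended key for a touch, given the previous letter.
func intendedKey(touch: (x: Double, y: Double), previous: Character) -> Character? {
    keyCenters.max { a, b in
        score(key: a.key, center: a.value, touch: touch, previous: previous) <
        score(key: b.key, center: b.value, touch: touch, previous: previous)
    }?.key
}

// A sloppy touch landing between W and E after typing "t": the bigram
// prior pulls the guess toward "e", even though "w" is slightly closer.
print(intendedKey(touch: (x: 28, y: 2), previous: "t") ?? "?")  // "e"
```

Autocorrect already does a version of this after the fact; the difference is doing it at the keystroke level, quietly, so the keyboard adapts to your hands instead of the other way around.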
I wonder if anyone is working on tactile-responsive touch screens. It would be the next frontier…
If flatscreen input is really the future, I’d suggest the standard, QWERTY alphanumeric key layout will simply sink from view.
Instead of trying to adapt the QWERTY keyboard, it’s probably better to simply replace it with a combination of predictive chording and voice input, probably using the adaptive key placement tech mentioned above.
In other words, adapting a keyboard designed to slow a touch-typist to flatscreens — a technology notoriously difficult to touch-type on — is probably possible, but is it desirable?
I think that’s a logical prediction, and I for one welcome our new touch-screen overlords — and I say that as a proficient keyboarder and typewriter fan. This old dog thinks it’s time to learn some new tricks.
The QWERTY configuration was a fine solution to a problem that effectively disappeared with the advent of the IBM Selectric and digital computers 40+ years ago. It has stuck around since then because it works, of course, but also because it has a lot of inertia behind it, which is going to be hard to nudge in a new direction.
Plus, if we abandon QWERTY we’re probably looking at a protracted period of experimentation akin to the early days of typewriters before we find the new standard, but with the potential for much more disruption since QWERTY keyboards are ubiquitous today. What kind of effect will that have on communications?
We’re talking about a fundamental sea change with unpredictable consequences. Who out there is willing to take such an enormous risk? Will it start at the top with MS or Apple, or with one of the hardware manufacturers like Dell or Lenovo, or will it be a startup coming out of nowhere with nothing to lose? No way to know yet. But someone has to go first and say “come on in, the water’s fine.” Otherwise, we’re going to remain where we are.
Whatever ends up happening, though, the nice thing about a virtual keyboard in such a scenario is that you can do all that experimentation without having to buy new hardware every time something new comes out. Just load the new keyboard software and start typing, like you can do on a mobile device now.
Phil Schiller discusses intermediated touch in an interview with The Independent that was published today: “Apple’s Philip Schiller talks computers, touchscreens and voice on the new MacBook Pro.”
People seem to be latching on to the distinction between intermediated and disintermediated input as a way to figure out where the Mac fits into the Apple lineup. Check out this post by market analyst Horace Dediu, for example. He gets a little gushy, but he sees the defining distinction between Macs and mobile as being about input technique. Plus, he introduces the much less clunky terms direct and indirect input, which I like. The whole article is worth reading.
If I’m reading him right, though, his argument reveals a blind spot: Dediu seems to take it for granted that when it comes to direct and indirect input, a 100% touch-based interface is only appropriate for direct-input devices. Hybrid inputs mean that the device is still safely, recognizably computer-ish; full-bore touch is, by definition or assumption, reserved for mobile devices.
My argument is that full-bore touch can be just as appropriate for indirect input as it is for direct. Touch is not about what you touch, it’s about what you do using touch.
Mobile devices have one surface for both input and output. Desktops and laptops have two coplanar surfaces (to use Phil Schiller’s term), one for input and one for output. Why can’t input be touch-based on both kinds of devices? Particularly if it provides meaningful tactile feedback that allows sight-free use, as TCWriter and Richard P. mentioned above for typing, and which other types of inputs also need. After all, since indirect touch is almost by definition not line-of-sight, to really work it needs to provide some kind of sensory feedback that isn’t eyesight dependent. Let’s make touch truly about touch.
The MBP Touch Bar is new and has its physical limitations, but I suspect it will show people that a touch screen that doesn’t have to function as an output display opens up new ways of thinking about touch.
What I mean by that: the real power of a keyboard (QWERTY or Dvorak or whatever) is that it is a control panel. It does not have to be designed to accommodate what you’re seeing on the screen. The design of mobile touch — that is, unified input/output screen touch — has to take into account that it’s doing its actions on the same playing field where the user is seeing the results. But a control panel is like a menu of choices that you manipulate to assemble something over there.
I’m not articulating the ideas in that last paragraph very well and I’m not happy with how it reads, but I’m putting it down in its raw form so that I can go back and revisit the idea over time. And hopefully to get some feedback on it too. Whaddya think?
# # #
EDIT: May 27, 2017 (because comments have been closed for a while)
It looks like the idea may be gaining (or possibly regaining) traction: