How today’s touchscreen tech put the world at our fingertips

Welcome back to our three-part series on touchscreen technology. Last time, Florence Ion walked you through the technology’s past, from the invention of the first touchscreens in the 1960s all the way up through the mid-2000s. During this period, different versions of the technology appeared in everything from PCs to early cell phones to personal digital assistants like Apple’s Newton and the Palm Pilot. But all of these gadgets proved to be little more than a tease, a prelude to the main event. In this second part of our series, we’ll be talking about touchscreens in the here and now.
When you think about touchscreens today, you probably think about smartphones and tablets, and for good reason. The 2007 introduction of the iPhone kicked off a transformation that turned a couple of niche products—smartphones and tablets—into billion-dollar industries. The current fierce competition among software platforms like Android and Windows Phone (as well as hardware makers like Samsung and a host of others) means that new products are being introduced at a frantic pace.
The screens themselves are just one of the driving forces that make these devices possible (and successful). Ever-smaller, ever-faster chips allow a phone to do things only a heavy-duty desktop could do just a decade or so ago, something we’ve discussed in detail elsewhere. The software that powers these devices is more important, though. Where older tablets and PDAs required a stylus, a cramped physical keyboard, or a trackball, mobile software has adapted to be better suited to humans’ native pointing device—the larger, clumsier, but much more convenient finger.

The foundation: capacitive multitouch

Many layers come together to form a delicious touchscreen sandwich.

Most successful touch devices in the last five or so years have had one thing in common: a capacitive touchscreen capable of detecting multiple inputs at once. In this way, interacting with a brand-new phone like Samsung’s Galaxy S 4 is the same as interacting with the original 2007-model iPhone. The list of differences between the two is otherwise about as long as your arm, but the two are built upon that same foundation.
We discussed some early capacitive touchscreens in our last piece, but the modern capacitive touchscreen as used in your phone or tablet is a bit different in its construction. It is composed of several layers: on the top, you’ve got a layer of plastic or glass meant to cover up the rest of the assembly. This layer is normally made out of something thin and scratch-resistant, like Corning’s Gorilla Glass, to help your phone survive a ride in your pocket with your keys and come out unscathed. Underneath this is a capacitive layer that conducts a very small amount of electricity, which is layered on top of another, thinner layer of glass. Underneath all of this is the LCD panel itself. When your finger, a natural electrical conductor, touches the screen, it interferes with the capacitive layer’s electrical field. That data is passed to a controller chip that registers the location (and, often, pressure) of the touch and tells the operating system to respond accordingly.
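At its simplest, the controller’s job boils down to comparing each sensed value against a no-touch baseline. Here’s a minimal sketch of that decision in Python—the names and numbers are ours, purely for illustration, not any real controller’s firmware:

```python
# Hypothetical values: a controller knows roughly what the sensor reads
# when nothing is touching it, and flags any big-enough deviation.
BASELINE = 100.0   # reading with no finger present (made-up units)
THRESHOLD = 15.0   # minimum deviation from baseline to count as a touch

def is_touch(reading: float) -> bool:
    """A finger disturbs the electrical field, pulling the reading away
    from its resting baseline; small wobbles are just noise."""
    return abs(reading - BASELINE) > THRESHOLD

print(is_touch(100.4))  # tiny fluctuation, just noise -> False
print(is_touch(82.0))   # large drop, finger present  -> True
```

Real controllers do far more (filtering, baseline drift compensation, palm rejection), but this threshold test is the kernel of the idea.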
This arrangement by itself can only accurately detect one touch point at a time—try to touch the screen in two different locations and the controller will interpret the location of the touch incorrectly or not at all. To register multiple distinct touch points, the capacitive layer needs to include two separate layers—one using “transmitter” electrodes and one using “receiver” electrodes. These lines of electrodes run perpendicular to each other and form a grid over the device’s screen. When your finger touches the screen, it interferes with the electric signal between the transmitter and receiver electrodes.
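The scan described above can be sketched as code. In this toy model (the grid size, signal levels, and threshold are all invented for illustration), the controller has one reading per transmitter/receiver intersection, and a finger shows up as a dip in the signal at the intersections it covers:

```python
# Hypothetical no-touch signal level and touch threshold (made-up units).
BASELINE = 100
THRESHOLD = 20

# Simulated readings from a tiny 4x4 electrode grid, with fingers
# resting over intersections (1, 1) and (3, 2).
readings = [
    [100, 101,  99, 100],
    [100,  70, 100, 100],   # finger over TX row 1, RX column 1
    [ 99, 100, 100, 101],
    [100, 100,  72, 100],   # finger over TX row 3, RX column 2
]

def find_touches(grid):
    """Return (row, col) for every intersection whose signal dropped
    far enough below baseline -- i.e., where a finger interfered with
    the transmitter-to-receiver signal."""
    return [
        (r, c)
        for r, row in enumerate(grid)
        for c, value in enumerate(row)
        if BASELINE - value > THRESHOLD
    ]

print(find_touches(readings))  # -> [(1, 1), (3, 2)]
```

Because each touch is tied to a specific intersection rather than a whole row or column, two fingers produce two unambiguous coordinates instead of confusing the controller.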

When your finger, a conductor of electricity, touches the screen, it interferes with the electric field that the transmitter electrodes are sending to the receiver electrodes, which registers to the device as a “touch.”

Because of the grid arrangement, the controller can accurately place more than one touch input at once—most phones and tablets today support between two and ten simultaneous points of contact. The screen’s multitouch surface allows for more complex gestures like pinching to zoom or rotating an image. Navigating through a mobile operating system is something we take for granted now, but it wouldn’t be possible without the screen’s ability to recognize multiple simultaneous touches.
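To see why two simultaneous points matter, consider pinch-to-zoom. One plausible way to derive a zoom factor—a sketch of the general idea, not any particular operating system’s gesture code—is to compare how far apart the two fingers are now versus when the gesture began:

```python
import math

def distance(p1, p2):
    """Straight-line distance between two touch points (x, y) in pixels."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pinch_scale(start_touches, current_touches):
    """Zoom factor implied by two fingers spreading apart (> 1.0)
    or pinching together (< 1.0)."""
    return distance(*current_touches) / distance(*start_touches)

# Fingers start 100 px apart and spread to 150 px apart -> zoom in 1.5x.
print(pinch_scale([(100, 200), (200, 200)], [(75, 200), (225, 200)]))  # -> 1.5
```

Without two distinct, correctly located touch points, there is no finger separation to measure, and the gesture simply can’t exist.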
These basic building blocks are still at the foundation of smartphones, tablets, and touch-enabled PCs now, but the technology has evolved and improved steadily since the first modern smartphones were introduced. Special screen coatings, sometimes called “oleophobic” (or, literally, afraid of oil), have been added to the top glass layer to help screens resist fingerprints and smudges. These even make the smudges that do blight your screen a bit easier to wipe off. Corning has released two new updates to its original Gorilla Glass concept that have made the glass layer thinner while increasing its scratch-resistance. Finally, “in-cell” technology has embedded the capacitive touch layer in the LCD itself, further reducing the overall thickness and complexity of the screens.

Using the coordinates from this grid of electrodes, the device can accurately detect the location of multiple touches at once.

None of these changes have been as fundamentally important as the original multipoint capacitive touchscreen, but they’ve enabled thinner, lighter phones with more room for batteries and other internal components.
Full Story: Ars Technica
