The ability to point-and-click our way into the future
What are Touchscreens?
There’s a joke that books were the first computer tablets: highly intuitive, battery-free, and entirely touch-based. It’s a joke intended to satirize companies that market touch-based technology as something new and revolutionary.
Touch integration is nothing new: touchscreen technology has been around for over 50 years. One of the earliest descriptions appeared in 1965, when E.A. Johnson published his work on capacitive touchscreens. In the early 1970s, engineers at CERN began developing their own touchscreens, and CERN had a working model by 1973.
At their core, touchscreens are nothing more than visual displays that allow users to interact with information onscreen through simple, or multi-touch, gestures.
How do Touchscreens work?
When it comes to touchscreen technology, there’s a need to distinguish between the multiple types of touch-sensitive panels. Most touchscreens work by way of electrical currents. The input – the “touch” – triggers an electrical reaction sent to a controller module that then determines an output. However, different panels translate inputs in different ways.
The earliest touchscreen – described by E.A. Johnson and later built at CERN – was capacitive. Capacitive panels work by way of electrical conductivity. An insulator, like glass, is coated with a transparent electrical conductor; touching the panel with a bare finger causes a local change in capacitance, which the controller translates into an output.
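The controller logic can be pictured with a small sketch. This is purely illustrative, not real driver code: the electrode layout, baseline readings, and noise threshold are all invented. A finger near an electrode shifts its measured capacitance, and the controller looks for the electrode with the largest shift.

```python
def locate_touch(baseline, measured, threshold=5):
    """Return the index of the electrode whose capacitance changed most,
    or None if no change exceeds the noise threshold."""
    deltas = [abs(m - b) for b, m in zip(baseline, measured)]
    peak = max(range(len(deltas)), key=deltas.__getitem__)
    return peak if deltas[peak] >= threshold else None

# One row of electrodes, in arbitrary capacitance units.
baseline = [100, 100, 100, 100, 100]
measured = [100, 101, 112, 103, 100]   # finger near electrode 2

print(locate_touch(baseline, measured))  # -> 2
print(locate_touch(baseline, baseline))  # -> None (no touch)
```

Real controllers interpolate between neighboring electrodes for sub-electrode precision, but the principle is the same: find where the capacitance changed.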
In 1975, the earliest resistive touchscreen was patented by G. Samuel Hurst. Unlike capacitive panels, resistive touchscreens use two layers separated by a small gap. When a finger or stylus presses the screen, the upper layer makes contact with the lower one, and the controller reads the contact point and processes it into an output.
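In a common resistive design, the pressed point acts as a voltage divider: the controller drives one layer and reads a voltage proportional to the contact position along that axis. A hedged sketch, with illustrative ADC resolution and screen size:

```python
def adc_to_position(adc_value, adc_max=1023, screen_size=320):
    """Map a raw ADC reading (0..adc_max) from a resistive panel's
    voltage divider to a pixel coordinate along one axis."""
    return round(adc_value / adc_max * screen_size)

print(adc_to_position(0))     # -> 0    (edge of the screen)
print(adc_to_position(512))   # -> 160  (roughly mid-screen)
print(adc_to_position(1023))  # -> 320  (opposite edge)
```

Doing this once per axis, with the drive and sense roles swapped, yields the full (x, y) position.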
Many of us think that touchscreens work like arrays of physical buttons – we tap the screen and the machine reacts to the button we pressed. In reality, the screen only reports where it was tapped, and software decides which action to execute based on that location. The buttons developers draw on various parts of the screen are just graphics that match our mental model of pressing a button. The truth is that touchscreens don’t execute commands based on buttons, but on the location of the tap.
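This location-based dispatch is easy to sketch. The button names and geometry below are invented for illustration; the point is that the hardware hands software a coordinate, and software does a simple hit test:

```python
# Hypothetical on-screen "buttons": name -> (left, top, width, height).
BUTTONS = {
    "play":  (0,   0, 100, 50),
    "pause": (110, 0, 100, 50),
}

def hit_test(x, y):
    """Return the name of the button containing (x, y), or None."""
    for name, (left, top, w, h) in BUTTONS.items():
        if left <= x < left + w and top <= y < top + h:
            return name
    return None

print(hit_test(150, 25))  # -> 'pause'
print(hit_test(300, 25))  # -> None (tap missed every button)
```

From the machine’s perspective there are no buttons at all, only rectangles of screen coordinates that software has agreed to treat as buttons.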
An important concept in the world of touchscreens is “multi-touch.” Before the current technological revolution, tapping a screen was akin to clicking with a mouse. In the early 1980s, however, researchers began exploring the use of more than one finger to carry out commands. Multi-touch can be incorporated into both capacitive and resistive screens, but it works far more naturally on capacitive surfaces.
The idea is simple: one finger taps, a held finger can open menus, while two or more fingers can scroll, pinch-to-zoom, rotate, pan, and so on.
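Pinch-to-zoom, for instance, falls out of tracking two touch points at once: the ratio of the current finger spacing to the starting spacing gives a zoom factor. A minimal sketch, with made-up coordinates:

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

start = [(100, 100), (200, 100)]   # two fingers land 100 px apart
now   = [(50, 100),  (250, 100)]   # fingers spread to 200 px apart

zoom = distance(*now) / distance(*start)
print(zoom)  # -> 2.0 (show the content at twice the scale)
```

Rotation works the same way with the angle between the two points instead of the distance.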
Why are Touchscreens important?
Touchscreens are both more user-friendly and far more intuitive than traditional input devices. During the demonstration of the first-generation iPhone, then-Apple CEO Steve Jobs remarked that it feels more natural to point and tap with our fingers than to use a mouse or keyboard. There is a significant amount of truth in this statement.
When we interact with the physical world, everything is available to us through our senses. To touch is to experience, and it’s a deeply human desire to want to know what something feels like in our hands and fingers. Touchscreens capture human curiosity, but they also make it easier for people to use otherwise complicated technologies. After all, what’s easier, commanding a machine through a series of clicks and menus, or simply tapping and having a machine instinctively know what to do?
What is the future of Touchscreens?
I made the point earlier that most touchscreens rely on electrical conductance to generate a desired output. However, since a touchscreen really only needs to know the location of an input, other technologies can do the same job. Surface acoustic waves, infrared beam grids, and even optical imaging can all be used to determine where a touch landed.
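An infrared grid is the easiest of these to sketch. LEDs and sensors along the bezel form horizontal and vertical light beams across the screen; a finger blocks one beam on each axis, and the intersection is the touch point. Beam indices below are invented for illustration:

```python
def find_touch(blocked_x, blocked_y):
    """Given the lists of interrupted beams on each axis, return the
    (column, row) of the touch, or None if no beam is blocked."""
    if not blocked_x or not blocked_y:
        return None
    return (blocked_x[0], blocked_y[0])

print(find_touch([3], [7]))  # -> (3, 7): finger blocks column 3, row 7
print(find_touch([], []))    # -> None: nothing on the screen
```

No electrical contact with the finger is needed at all, which is why gloved hands and ordinary styluses work fine on such screens.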
As always, I’m excited for the absurd possibilities.

