Human-Technology Interface: Benefits, Challenges and Futuristic Solutions
Humans began to exercise control over their environment through the use of tools and (primitive) technology at a very early point in our history (Tool Use by Early Humans Started Much Earlier). Such tools have played a central role in the development of human society and in humanity's success at thriving on the planet (Human Evolution: The Origin of Tool Use). Technology has extended the reach and capability of human achievement in ways that early peoples probably could not have imagined, and this is certainly the case in the modern world today. Controlling machines, tools, devices and technology expands human abilities well beyond the limitations imposed by biology and evolution. Yet the same technology that allows us to exceed our inherent limits is also a "weak link in the chain," where slippages at the human-technology interface can have disastrous consequences.
Complex tasks achievable only through the human-technology interface range from "super" human accomplishments (e.g. piloting a modern jetliner, nuclear submarine or spacecraft) to those so routine that we take the human-technology interface for granted (e.g. operating a microwave oven, word processing on a computer or driving a car). Everyday activities such as driving a car require an often-overlooked but very significant amount of human-technology interaction. (See Human Factors in Transportation Accidents; Human factors in the causation of road traffic crashes; and Human error accounts for 90% of road accidents.) Therefore, on one hand the human-technology interface partnership enables people to do more, and more complex, tasks than would be possible without the technological tools. On the other hand, the human side of the equation is far too often the "weak link" of this partnership, where human frailties or limits are the key "breakdown" factors resulting in negative consequences. The solution to this paradox seemingly lies along one of two opposite paths. One path would be to sacrifice the advantages and benefits of "expanded" human potential and return to a simpler, less technologically dependent existence. Such a return to a pre-technological society (i.e. the "Luddite" solution) does not seem economically or socially viable. The other path leads to advancing further technological applications that compensate and autocorrect for human factor deficiencies, in some sense minimizing or removing human factors from the consequence equation.
Take the "simple" example of driving an automobile. Transportation technology advanced slowly for thousands of years after the invention of the wheel. Pushing and pulling simple carts at slow speeds (and harnessing animals to them) led to wagons, carriages and, eventually, in the late 1800s and early 1900s, to the motorized "automobile." The early automobiles were technologically sophisticated, complex machines at the time of their introduction. Although viewed as simplistic with historical hindsight, they created their own human-technology interface issues. While they allowed people to move faster on land, and with less effort, than had ever been possible before, they also created new risks of high-speed impacts and tested the limits of human perceptual and behavioral skills. Human factors and human-technology interaction, in terms of speed, skill, perception, reaction, attentiveness, distraction, etc., were problems from the very start. The first confirmed automobile fatality occurred as early as 1869. (The first auto-related fatality in the USA occurred in 1891.) Automobile technology, safety technology included, has continued to improve. Human experience and driving skills have also improved since those early days. Nonetheless, today automobile "accidents" are the leading cause of injury-related deaths worldwide. So, as a society we either accept this as necessary collateral damage or try to minimize the human (dysfunctional) element.
In 2012, Google unveiled its Google Self-Driving Car project. This approach would seemingly remove the human driver from vehicle operations altogether, and it is one way to minimize human factor errors and accidents in motor vehicles. Coming at the same problem with a different technological solution, Jaguar Land Rover (JLR) is researching applied technology already used by NASA to measure and assist pilots' concentration skills and by the US bobsleigh team to enhance competitors' concentration and focus (Jaguar, NASA Team for Car That Can Read Your Mind). The project, called "Mind Sense," monitors a driver's brainwaves in order to recognize when the driver is not fully attentive, is falling asleep or is distracted while operating the vehicle. The goal is to reduce accidents that occur due to "driver error" when drivers are stressed, distracted and failing to concentrate on the road ahead.
“’Mind Sense,’ which was derived from a NASA tech used to enhanced (sic) a pilot’s concentration skills. Mind Sense aims to read your brain waves (amplified and filtered by software) using sensors embedded in the steering wheel. An on-board computer will then assess whether you’re alert enough to commandeer a vehicle weighing thousands of pounds. The steering wheel could be programmed to vibrate or the computer could issue a warning sound, in case you’re daydreaming or starting to fall asleep.” Jaguar, NASA Team for Car That Can Read Your Mind
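The warning logic the quote outlines can be pictured as a simple monitoring loop. The sketch below is a hypothetical illustration only, not JLR's actual system: the function names, the plain averaging used as a stand-in for real brainwave filtering, and the 0.6 threshold are all assumptions introduced here.

```python
# Hypothetical sketch of the Mind Sense-style monitoring loop described above.
# All names and values are illustrative assumptions, not JLR's implementation.

ALERTNESS_THRESHOLD = 0.6  # assumed normalized attention index, 0.0 to 1.0


def filtered_attention_index(raw_samples):
    """Collapse brainwave-derived samples into one attention score.

    A real system would band-pass filter the EEG signal and weight the
    frequency bands associated with alertness; here we simply average.
    """
    return sum(raw_samples) / len(raw_samples)


def monitor_step(raw_samples, vibrate_wheel, sound_warning):
    """One pass of the loop: score the driver, warn if drowsy or distracted."""
    score = filtered_attention_index(raw_samples)
    if score < ALERTNESS_THRESHOLD:
        vibrate_wheel()   # haptic alert through the steering wheel
        sound_warning()   # audible fallback warning
        return "warned"
    return "ok"
```

Passing the warning actions in as callables keeps the decision logic separate from the hardware it would drive, which is how one would test such a loop on a bench before wiring it to an actual steering wheel.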
The ultimate application of these technologies might improve driver performance with human-technology interface tools, at least for now in automobile driving tasks. Noah Joseph commented on the JLR press release about "Mind Sense" in an Autoblog post:
“Jaguar is installing brainwave sensors adapted from NASA into the steering wheel of an XJ sedan, along with medical-grade heart and respiratory sensors in the seat. Together, the sensors would determine if the driver is focused on the road, dozing off, merely thinking about something else, or if the driver’s stress level suddenly peaks. The system would enable the vehicle to better prepare for an emergency, or for a future autonomous vehicle to hand off control to a better-prepared driver where needed….at the same time, JLR is also working on an enhanced infotainment system designed to reduce the amount of time the driver’s hands are off the wheel and their eyes are off the road. The system determines which control they’re reaching for on the display and engages it while their finger is still in mid-air, deploying an ultrasonic pulse to provide artificial haptic feedback without actually having to touch anything…finally, a new haptic accelerator pedal is under development that could alert the driver to respond to an impending situation without overloading the senses with chimes and beeps”
Putting aside for the moment philosophical questions about personal privacy and "big brother" issues, the challenges of human factors, complex systems and the human-technology interface recur in emergency, disaster and crisis management situations, where human factors breakdowns include those of human-technology interaction. In routine situations, human inattention, stress, distraction and attention-perception breakdowns can result in negative consequences. In complex extreme environments, as well as during high- and hyper-stress situations, the breakdown between user and technology tools is even more critical. The interesting question is how soon similar "Mind Sense" style applications might become more widespread, including those that could play a significant role in preventing, mitigating, responding to, managing and recovering from disasters, emergencies and crisis incidents of all sizes and categories.
In addition to the examples already mentioned about improving human-technology interaction for operating automobiles, there are a number of other interesting potential applications. Brain-computer interface (BCI) research points to applications of cybernetic feedback and its performance-enhancing potential in the areas of attention, adaptive workloads, performance capacity and mental state monitoring. Some research suggests that incorporating electroencephalogram (EEG) workload indices into a real-time human-technology interface could greatly enhance performance of complex tasks, transforming traditionally passive human-system transactions into an interactional exchange where physiological indicators adjust the interaction to suit a user's engagement level. Other research has investigated sleep deprivation tools that have the potential to measure accumulated sleep debt, monitor its impact on performance and help optimize rest and recovery steps. There are many potential applications of these emerging tools for preventing, mitigating, responding to, managing and recovering from disasters, emergencies and crisis incidents of all sizes and categories.
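The adaptive exchange described above, where a physiological indicator adjusts the interaction to suit a user's engagement level, can be sketched as a small decision rule. The workload scale, thresholds and task counts below are illustrative assumptions introduced here, not values taken from the cited research.

```python
# Hypothetical sketch of adaptive automation driven by an EEG workload index:
# when measured workload rises, the interface sheds or defers tasks; when it
# falls, it restores them. Scale and bounds are assumptions for illustration.

def adapt_task_load(workload_index, pending_tasks,
                    high=0.75, low=0.35, max_active=5):
    """Return how many pending tasks the interface should present.

    workload_index: assumed normalized EEG-derived score in [0, 1].
    pending_tasks: list of tasks waiting for the operator's attention.
    """
    if workload_index >= high:
        return 1  # overload: present only the single highest-priority task
    if workload_index <= low:
        # spare capacity: present as many tasks as the cap allows
        return min(max_active, len(pending_tasks))
    return min(3, len(pending_tasks))  # moderate load: a middle setting
```

In a live system this rule would run on each update of the workload index, turning a static task display into the kind of interactional exchange the research envisions.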
Potential future applications of this approach might include:
- Security tasks
  - Baggage/x-ray screening chores
  - Monitoring surveillance work
    - CCTV public safety images
    - Dashboard and personal cameras (e.g. law enforcement or first responders)
- Measurements on critical processes – industrial, petro-chemical, electrical
- Law enforcement tasks (routine and extreme situations)
- Military (especially extreme condition contexts)
- Air Traffic Control
- Health care/Medical (including patient care)
- Critical IT work
- Heavy Equipment and vehicle operations
- Critical systems monitoring
- Crisis management dashboards
- Inbound critical communication/information processing
- Incident Command Team
- Emergency Operations Center (including prolonged and extreme conditions)
- Search and rescue
- Disaster recovery
- Disaster restoration tasks
On one hand, the human-technology interface partnership enables people to do more, and more complex, tasks than would be possible without the technological tools. On the other hand, the human side of the equation is far too often the "weak link" of this partnership, where human frailties or limits are the key "breakdown" factors resulting in negative consequences. Soon, new cybernetic control technologies may become more widespread, including those that could play a significant role in preventing, mitigating, responding to, managing and recovering from disasters, emergencies and crisis incidents of all sizes and categories. Until those are in place, we should remain diligent in anticipating, recognizing, detecting, mitigating and managing human factor errors, particularly those that arise in the unique contexts of human-technology interface points.