How InfoSec in 2015 is Like the Airline Industry in 1977
I have been flying in some form or another for nearly 30 years, but I have been in the tech space for a few years longer. As a flying professional, I was responsible not only for my own life in the air, but also for the lives of those around me in the aircraft, those on the ground should there be an accident, and those who had a financial interest in the aircraft that day. It is mind-numbing to think that an aluminum tube carrying a few hundred people can be flung through the air, full of highly flammable fuel, with two or more controlled fires out the back providing the thrust to push said vessel to an altitude at which humans cannot survive for more than a few seconds. It takes a certain skillset of knowledge, responsibility, aptitude, foresight, and coordination for an airplane crew to successfully make a controlled crash on a runway (commonly referred to as a “landing”) and call the flight a “success”.
But it wasn’t always that way. People died. People died in the air as well as on the ground. It was tragic and sad for the families left behind. It was especially difficult to accept because the pilot was most likely the cause of an accident. When the pilot was not at fault, a maintainer, a controller, or an engineer was at fault for some incorrect action or for the lack of some correct action. Seldom did an aircraft just give up on flying. At its heart, an airplane wants to fly; human actions typically cause crashes. Whenever technology increased speed limits, altitude capabilities, or navigation complexity, the human was the weak link.
This was especially true when technology grew at exponential rates during key phases of our last 100+ years of powered flight. When Orville and Wilbur first started, there were very few accidents. They were taking a risk in thinking that mankind was capable of flying, but they knew the risks and took care where they felt it necessary. As flying expanded to more enthusiasts, hobbyists, and idiots, you would expect air accidents to increase, and they did, right up to the beginning of World War II. At that point, there was a significant leap in technology thanks to the global war machine pumping huge amounts of dollars and pounds into every performance edge that could be achieved at the time. Technology increased and so did accident rates. We learned more about the human body… weaknesses that had always existed, but that we never realized until they were tested. Why would a pilot suddenly black out while flying? Hypoxia at super-high altitudes was only beginning to be understood, and high g-forces (the force exerted on an airplane and pilot when making a turn) were exceeding the limits of the human body version 1.0. The Aircraft Crashes Record Office, based out of Geneva, publishes the “Accident Rate per Year”1 and shows these accident rates spiking during the war. Even though there was a sharp decrease following the end of hostilities, accident rates remained slightly elevated from pre-war levels into the 1960s.
During this decade, the commercial aviation business began to gain significant momentum. The De Havilland Comet launched the jet-powered commercial industry in the mid-50s, but with the advent of major air carriers and the affordability of travel, more airliners were streaking from coast to coast. Unfortunately, you could predict that the new technology of larger airliners, more complex navigational systems, and the grind of corporate profits would result in misperceptions, distractions, and personality conflicts. Accidents followed the trend and yes, people died.
This upward trend continued into the 1970s, with two accidents in particular showing how a safety feature and other new technologies actually contributed to accidents. In December 1972, Eastern Airlines Flight 401 crashed in a Florida swamp while the crew was troubleshooting a faulty landing gear indicator bulb. Seriously… as crazy as this may sound, a $2 bulb *contributed* to the accident. I will deviate from some standard designations in the aviation community and not say that it caused the accident, because the pilot and crew, distracted by the bulb, failed to properly maintain altitude, resulting in the crash; the light bulb didn’t do anything wrong. The technology simply failed, the system was too complex for the crew to diagnose accurately at that time of night and under those conditions, and there was no clear delineation of who was assigned to fly the aircraft. The autopilot kicked offline and the aircraft began a slow descent toward alligator-infested waters at nearly 200 mph.
Fast forward to 1978 in Portland, Oregon. United Airlines Flight 173 was making an approach to the runway when another landing gear malfunction forced the crew into a holding pattern to evaluate the cause and the possible courses of action. Tighter fuel planning, longer flights, more complex systems, and inevitable system malfunctions put the crew in a difficult position. There was a breakdown in managing the aircraft while the crew troubleshot the problem, and the aircraft eventually ran out of fuel short of the runway.
That was the airline industry in 1978. It would be another three years before United implemented the first “Crew Resource Management” (CRM)2 program for its entire fleet of pilots and engineers. CRM helps manage the crew through pre-mission planning and the duration of a flight. It provides a framework for delegating roles and responsibilities, as well as an orderly flow of information and techniques for confirming critical safety steps. As the industry adopted and refined CRM, the number of accidents stayed at or below the 1981 rate, and there has been a significant decrease in annual accidents since, even with the advent of ultra-high-tech aircraft seeing more daily operations. The annual accident count has been under 150 for the past few years, almost half of the 345 in 1978.
Source: Aircraft Crashes Record Office “Accident Rate per Year”1
So why is InfoSec like the airline industry in 1978? Since the dot-com bubble of the 1990s, there has been a significant increase in information technology, which expanded even further in scope in the 2010s thanks to the onslaught of social media. Before anyone cared about listening to your hourly playlist status or seeing a photograph of what you had for lunch, the technology existed along with the inherent flaws of its chipsets, operating systems, applications, network protocols, and encryption (if there was any at all). As people became more digitally connected, the ability to exploit functional flaws in design, or the human-nature flaws prevalent in social engineering attacks, gave the bad guys and some good guys more attack avenues into your personal life or your personal information stored on someone else’s IT infrastructure.
Each day, we see more and more information leaked from the Office of Personnel Management, Adult Friend Finder, Ashley Madison, Sony Pictures, Anthem, Premera, and the list goes on and on. Will next year be worse than 2015 in the number of hacks or the depth of data stolen? We just discovered a critical hack in certain Chrysler models through an unsecured network feature. How many people would have bought the vehicle if they had known the car was constantly connected to a monitoring network with no documentation showing exactly how it worked? Isn’t technology growing faster than we, the users (pilots) of technology, can effectively and safely use it? Are we fed up with the loss of information to data thieves? Will we eventually develop the techniques to effectively manage our digital information passengers?
My answer is a definite “maybe”.
The airline industry implemented many programs during this time, but none was as successful as CRM. They didn’t know it at the time, and it wasn’t until the data showed a few years’ worth of improvement that the rest of the industry fully adopted the idea. A couple of decades later, the paying public can rest assured that flying is safer than it has ever been. That does not mean that accidents cannot or will not happen—flying is inherently dangerous and we should never take it for granted. There is little room for error, but pilots are paid to accurately manage the risk, and they do so in part through a formal Crew Resource Management program.
But I do not think we have reached InfoSec’s 1978 yet. We are still in 1977 or earlier because we, as an information-sharing community, have not developed the tools, techniques, and mindset to accurately address IT data risks going forward. The concept that the user is the weakest link should be apparent to everyone in the design and security world, but end users continue to make mistakes just as often as software developers, network engineers, and executive management teams. Also, the world continues to see exponential growth in technology and networking. Each new social media site, each new smartphone, and each new networking service presents a measurable increase in threat, but since we do not fully grasp the consequences of new threats, nor do we have an absolute framework to address them, major data breaches will continue to increase. With the coming wave of the “Internet of Things”, I predict even stranger information system hacks will flood the tabloid headlines as smart network refrigerators and mesh-network lightbulbs show up in more and more homes. We are close to rounding the corner into 1978 because the inflammatory headlines are starting to show disgust and, hopefully, a contraction of the free sharing of personal information, along with the need for better security.
When people have finally had enough and the risk-return profile no longer favors corporate profits, you will see a fundamental change in the way we address InfoSec, and I think we already know the first step of that process. It all begins with asking ourselves, “Who is in charge of crisis management in the next hacking attempt?” Security people are starting to forge relationships with management and design teams. Those groups are seeing the results of poor design and poor implementation. I smile when I hear that system features are built around a solid foundation of security, because I think the best preventative measure is to not let the threat happen in the first place. It is also more difficult to bolt security onto a software suite after it is built than to design the software with security in mind. With more security professionals in the development process and more company executives embracing security, if not already security professionals themselves, I think that will be the natural path to lowering the number of security accidents and incidents. But we will not know the results of our mitigation for many years. Until then, we have to expect more massive data breaches.
Crew Resource Management is not the answer, and neither are compliance frameworks, Federal programs, or the U.S. Digital Service. Oversight does little except add complexity and drive management and engineers to do the minimum amount required. Broader ideas like better user interface design and wider use of secure transmission and security protocols will help the case for protecting data. But you will never hear about data protection successes, and you will always hear about the failures, because this is a thankless industry, just like the airlines. Every flight is expected to be on time, in smooth air, and at exactly the perfect time of day. In the security space, client data is expected to be protected, its use authorized, and its users authenticated.
One thing is certain though: security leaks will happen. People may not die, but information will be lost that can never be recovered. Let’s just hope the numbers go down sooner rather than later.
(1) “Accident Rate per Year” retrieved 27 July 2015 from
(2) “Crew Resource Management” retrieved 27 July 2015 from
This document may be distributed and reproduced provided the entire document is released unaltered, without overprint, and with full attribution to the author.