Cybercommunications 101: How to deploy an effective cybercommunications program as part of an emergency, disaster recovery, and business continuity effort. As more common, daily-use devices become automated, the risk of cybersabotage and cyberattacks increases, so planners must take measures to prevent harm to their efforts, personnel, agencies, and organizations.
“The victorious make many calculations prior to the fight, for every battle is won before it is ever fought.”
–Sun Tzu, The Art of War, ca. 5th century B.C.
As emergency personnel face a growing number of threats (including the shutdown or takeover of their communication tools by hackers) and as companies reach farther into global markets where cybercommunications are critical, everyone must navigate a much broader spectrum of barriers – for example, international regulations (particularly the Foreign Account Tax Compliance Act, privacy laws, and cybersecurity laws). To understand how to develop and deploy a cybercommunications plan, one must appreciate its importance, its distinct attributes, and its complexity.
Crisis Communication vs. Cybercommunications

From the devastating explosion of a chemical plant in West, Texas, to wildfires in California or an ice storm in Montreal, Canada, emergency personnel are well-versed in crisis communications. Yet, given the speed and scale at which cybercommunications replicate, it is important to study, learn, plan, and adapt to this newer, critical tool that can easily be turned into a weapon.
On 20 May 2015, the Office of the Director of National Intelligence released more than 100 declassified documents that were captured during the U.S. raid, ordered by the president, that killed Osama bin Laden. Those documents revealed that food security was considered an essential good for survival by terrorist groups like al-Qaida. Similarly, cybercommunications must be treated as a critical element of any emergency planning, response, and recovery effort. To ensure the survival of all parties involved, emergency personnel must consider the exponentially more efficient qualities of cybercommunications as well as its dual nature.
Unlike standard crisis communications, in which there is a specific order to what gets communicated to certain communities, cybercommunications are distributed universally and, when intercepted, can spread decentralized news, information, instructions, and orders. When developing cybercommunications programs, planners should consider the vast scale, speed, tight coupling, decentralized and multilayered technological architectures, wide distribution, and opacity of cybercommunications.
The Fragility & Impact of Cybercommunications

As cyberbullying is on the rise – with one in three teenagers now tormented, humiliated, threatened, and in some cases driven to suicide by targeted and hacked cybercommunications – it is important to understand the potentially destructive and disabling nature of this pervasive, modern communications approach. For example, in a recent case in Leander, Texas, a 9-year-old cyberbully named Jaide was able to take over the Instagram, Facebook, and cellphone text communications of her targets, as well as gain access to all Internet communications from a hacked Time Warner box (including access to all security cameras and digital transmissions). To rid themselves of the death threats, compromised cybercommunications, and defamatory digital defacing, the families of the targeted cybervictims had to replace all of their cybercommunication devices and terminate their social media profiles.
On a grander international scale, cybercommunications are now often hacked to disable larger communities through chaos. Since the initial disruptive attacks by the Morris worm in 1988 (resulting in the first cybercommunication convictions under the U.S. Computer Fraud and Abuse Act), disruption of cybercommunications has become a standard preemptive method of massive debilitation prior to physically violent manmade attacks. Famous examples of this cybercommunications phenomenon include:
The 2007 attacks in Estonia, which disrupted government services, halted online banking, and disrupted cybercommunications;
A March 2015 blackout attributed to Iranian cyberhackers, which paralyzed much of Turkey; and
Recent cyberhacking pledges over the Deep Web, which report plans of future cybercommunication blackouts by the Islamic State group.
In fact, in its published history of cyberattacks, the North Atlantic Treaty Organization (NATO) has reported more than 18 incidents in which countries such as Canada, China, Israel, South Korea, and the United States were left without cybercommunications prior to acts of war.
Cybercommunications – Helping or Impairing Emergency Personnel

As people and resources around the world can receive information within nanoseconds, cybercommunications can clearly empower users. Modern societies are moving toward the “Internet of Things,” which is defined as a network of physical objects or things embedded with electronics, software, sensors, and connectivity that enable these objects to achieve greater value and service by exchanging data with the manufacturer, operator, and/or other connected devices.
In addition, communications are being anonymized via the Deep Web (aka Deepnet, Invisible Web, or Hidden Web), which is defined as the 96-percent portion of World Wide Web content that is not indexed by standard search engines and cannot be accessed through common browsers like Microsoft Internet Explorer or Firefox. This highlights an important point: most emergency personnel rely on the remaining 4 percent – the most common and most frequently hacked portion of the Internet – for their cybercommunications. As a result, simple devices like employee cars, microwaves, or modern refrigerators with computerized management and cybercommunication capabilities can now be disabled, or turned into lethal weapons.
As time and safety are of the essence in emergency services, having an emergency vehicle suddenly stop moving, or even explode, as a result of hacked cybercommunications would be an eye opener to many. All digitally enabled devices, by design of their network layers and transport protocols, can now be used to disable, counter, or even terminate efforts, critical personnel, and organizations.
The Good, the Bad & the Ugly

In 2010, a shooting took place at the University of Texas. Confidential security communications from emergency personnel were intercepted by cyber pundits and redistributed to the media for mass distribution. As the university police personnel communicated their limited capacity to counter the attack, they asked for assistance from the Austin Police Department and other related agencies. Colton Tooley, a 19-year-old mathematics major, was firing AK-47 shots from an upper floor of the Perry-Castañeda Library, but the guns used by the university police did not have the necessary range to disable the shooter.
Unfortunately, the confidential communications were breached, sensationalized, and embellished when redistributed. Journalists asked the public to come to the University of Texas campus and fire on the vaguely disclosed source of the shooting with their own personal firearms. As a result, hundreds of concealed-handgun license carriers showed up and began shooting freely. With bullets ricocheting off various buildings, emergency personnel had to deal with an army of unmanaged shooters trying to be good citizens and save the day. Although the propagation of cybercommunications was not as pervasive as it is today, the bad attributes of cybercommunications rapidly overtook the positive ones – even with well-intentioned citizens.
The automation of intelligent devices through potentially breached or manipulated cybercommunications is very likely to add a multiplying factor to both good and bad cybercommunications.
Preventing or Limiting Misuse of Cybercommunications

Although the topic of cybercommunications could fill several books, a good start is to include cyberethics as part of any cybercommunications program. Cyberethics is a modern discipline that encompasses: digital user behaviors; the tasks that computerized devices are programmed to perform; and the effects of both on individuals and society.
Just as research ethics emerged from the WWII Nuremberg Doctors’ Trial in Germany – resulting in the 1947 Nuremberg Code (which requires scientific experiments conducted on humans to adhere to a code of ethics) as well as the 1964 Declaration of Helsinki – cyberethics emerged with the deployment of the National Research Act of 1974. This suite of developments gave birth to the 1979 Belmont Report and the 2012 Menlo Report. Through a framework called the “Common Rule,” the 2012 Menlo Report is now commonly used to govern and assist decision makers with ethical standards in information and communication technology.
As much as these two reports focus on computer security research – for example, how to stop botnets that affect millions of people by damaging systems and data in foreseeable, yet nearly unpredictable, patterns – the following cyberethics focus areas should be applied to cybercommunications during any emergency, disaster recovery, and/or business continuity effort:
Understanding the different characteristics of cybercommunications versus standard communication methods;
Ensuring respect of persons (identification of stakeholders – primary, secondary, and key participants, as well as securing informed consent);
Fostering beneficence (distribution of risks, benefits, and burdens, starting with identification of potential harms versus benefits, including: integrity risks, availability risks, confidentiality risks, necessary secrecy, and transparency, plus mitigation of realized harms);
Promoting justice (fairness and equity); and
Respecting law and public interest (compliance, transparency, and accountability).
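For planners who track these focus areas programmatically, the list above can be expressed as a simple planning checklist. This is a minimal, hypothetical sketch – the area names, function, and structure are illustrative, not part of the Menlo Report itself:

```python
# Hypothetical sketch of the cyberethics focus areas above as a planning
# checklist. Names and structure are illustrative assumptions only.
MENLO_FOCUS_AREAS = [
    "Characterize cybercommunications vs. standard communication methods",
    "Respect for persons: identify stakeholders and secure informed consent",
    "Beneficence: identify and mitigate integrity/availability/confidentiality risks",
    "Justice: fairness and equity in distributing risks and benefits",
    "Respect for law and public interest: compliance, transparency, accountability",
]

def outstanding_items(completed: set) -> list:
    """Return the focus areas a plan has not yet addressed."""
    return [area for area in MENLO_FOCUS_AREAS if area not in completed]

# Example: a draft plan that has addressed only the first two areas.
done = set(MENLO_FOCUS_AREAS[:2])
for area in outstanding_items(done):
    print("Still to address:", area)
```

A checklist like this can be reviewed at each phase of an emergency, disaster recovery, or business continuity exercise to confirm no focus area was skipped.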
U.S. Laws Governing Cybercommunications

Although compliance does not ensure security and privacy, some of the U.S. laws that corporate executives, board members, and emergency, disaster recovery, and business continuity professionals may want to familiarize themselves with when planning, designing, implementing, and testing a cybercommunication program include:
The Telecommunications Act of 1996, Protection of Customer Proprietary Network Information, 47 U.S.C. § 222
The Electronic Communications Privacy Act – Wiretap Act, 18 U.S.C. §§ 2510-22
The Stored Communications Act, 18 U.S.C. §§ 2701-12
The Pen Register & Trap/Trace Act, 18 U.S.C. §§ 3121-27
The Telephone Records and Privacy Protection Act of 2006, 18 U.S.C. § 1039
The Family Educational Rights and Privacy Act (FERPA), 20 U.S.C. § 1232g(a)(4)(A)
The Health Insurance Portability and Accountability Act of 1996, Pub. Law 104-191
The Privacy Act of 1974, 5 U.S.C. § 552a(a)(5), (a)(4)
Ensuring the availability, integrity, security, resilience, and reliability of an organization’s cybercommunications infrastructure and activities can be intimidating and challenging. U.S. and foreign organizations are invited to review the tools provided by the U.S. Office of Cybersecurity and Communications (CS&C), within the National Protection and Programs Directorate. With the primary objective of protecting federal communication networks since 2006, CS&C carries out its mission by engaging the private sector in a collaborative manner through its five divisions.
A Final Key Element: Contextualization of Cybercommunications

Cybercommunications are already catapulting forward and “taking over the world.” More comprehensive cybercommunication programs are required that include extensive cyberethics from conceptualization through deployment and testing. Another important topic of cybercommunications planning is contextualization, which is defined as the intelligent aggregation of independent data, signals, a