Monday, May 28, 2007

Intelligence Challenges in Tracking Terrorist Internet Fund Transfer Activities

Author: Thomas Winston
DOI: 10.1080/08850600600829833

Published in: International Journal of Intelligence and CounterIntelligence, Volume 20, Issue 2, June 2007, pages 327-343


The thought of cyber intelligence collection conjures images of electronic listening and watching devices in every corner of the world, reporting what they see and hear back to some "centralized intelligence computer system." Current literature indicates that an over-reliance on signals intelligence (SIGINT) in the 1990s created shortcomings in the intelligence collection process. But open source intelligence (OSINT), now a mainstay of many intelligence agencies, is widely available as a result of the proliferation of the Internet. Anybody anywhere who can read Arabic now has access to Al-Jazeera, as well as to jihadi Websites. Among others available are a Russian separatists' Webpage, a Greek Socialist newspaper, and a North Korean polemic against the West. Yet the existence and availability of such vast amounts of information do not necessarily reflect its utility for intelligence collection. Rather, OSINT serves as a tool of corroboration for agents and assets in the field. The old adage, "You can't always believe what you read," has never been more true: the validity of information on foreign Websites, and the degree to which it has been politically or governmentally filtered (or deliberately released), cannot be judged simply because the page loads. Therefore, the need for corroboration has never been greater. If nothing else, the Internet provides a convenient conduit to produce, read, and critique vast amounts of information, all from anyone's PDA or GPS phone.


The post-11 September 2001 world is vastly different from that of the Cold War era.1 Nation-states no longer battle against each other in the quest for global domination. As New York Times columnist Thomas Friedman put it:

There are no longer superpowers. There are instead super-empowered individuals who strive to topple the leaders of globalization and democratization. These individuals (terrorists) fear that the globalized world will, with extreme prejudice, leave them behind. It is this strife that has brought about the age of distributed threats coming from non-state sponsored actors, from anywhere in the world at any time.2

Friedman was describing a paradigm shift with which the George W. Bush administration was grappling. A technical report from the Army War College states:
In the past, the intelligence community's primary job was to know the Soviet Union. With the loss of the Soviet paradigm, other security issues have moved up in relative priority, and the built-in excuse for not concentrating on them is gone.3

This shift has had a deleterious effect on the ways in which intelligence assets are recruited, and on how intelligence itself is collected and disseminated. Since terror groups are typically well-funded, the typical Cold War-era method of recruiting local assets with small, repetitive cash payments has become less effective. Even in countries where terrorist organizations are actively recruiting, and unemployment rates are high among single males, dogmatic hatred will likely prevent those who have joined or are going to join terrorist organizations from committing treason. The misinterpretation of Islam frequently gives jobless, single, 20-something males the meaning they seek in their lives, thus raising the barriers to asset recruitment even higher for clandestine service officers (CSOs).4 Former Central Intelligence Agency (CIA) official Frederick P. Hitz puts a fine point on this problem:
By definition spies are liars, law-breakers and traitors. They frequently violate the laws of their home countries, and in spite of the best efforts of the CIA's classified specifications for risk assessment in choosing assets, some of them are simply unsavory characters who will kill their spymasters with the same conscience by which they commit treason to their homeland.5

The dissemination of any information among the various members of the Intelligence Community (IC) is also mired in historical one-upmanship and a complex, multidimensional bureaucracy. Ronald Kessler, in his work Inside the CIA, described a historic battle between the Federal Bureau of Investigation (FBI) and the CIA that still raged at the time of publication in 1995. This battle started as a result of the way in which the Intelligence Community grew during and after World War II. Much of it is a result of strict jurisdictional measures placed on the CIA and the FBI, with the Agency allowed to gather intelligence only overseas, and the Bureau in charge of domestic intelligence efforts. President Harry S Truman created this division of labor out of an aversion to having a secret "Gestapo-like" agency operating in the United States.6 History, of course, changed this. By the end of the Cold War, counterintelligence operations had brought the two agencies closer, but not completely together.


Hundreds of digital online payment systems now exist.7 Payment systems like the Russia-based WebMoney allow for anonymous transfers of money, or transfers to and from anonymous accounts. Middle Eastern countries such as Qatar see these systems as havens for money-laundering activities, and want to control or stop such activities. In an article dated 1 December 2004, a Qatari newspaper described a system that:

combines the world's latest features in this field, such as a dynamic on-line message filtering system, integrating SWIFT8 Alliance Access, ability to compare the origin of all messages to the official FBI-OFAC list and QNB's own list and other lists identifying suspicious activities.9
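The screening the article describes, comparing the parties to a message against the FBI-OFAC list and a bank's own lists, can be sketched as a simple watchlist match. The function names, message fields, and list entries below are hypothetical illustrations, not real OFAC data:

```python
# Minimal sketch of watchlist screening like the system described above.
# All names and list contents are hypothetical, not real sanctions data.

def normalize(name):
    """Case-fold and strip punctuation so trivial variations still match."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def screen_message(message, watchlists):
    """Return the names of watchlists on which either party appears."""
    parties = {normalize(message["originator"]), normalize(message["beneficiary"])}
    hits = []
    for list_name, entries in watchlists.items():
        if parties & {normalize(e) for e in entries}:
            hits.append(list_name)
    return hits

watchlists = {
    "OFAC-SDN": {"Example Front Co"},            # hypothetical entry
    "bank-internal": {"Suspicious Trading Ltd"},  # hypothetical entry
}
msg = {"originator": "example front co.", "beneficiary": "Acme Imports"}
print(screen_message(msg, watchlists))  # ['OFAC-SDN']
```

Real systems add fuzzy matching and transliteration; exact matching is shown only to make the filtering idea concrete.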

Services like WebMoney could easily be rife with criminal money laundering or terrorist financing activities. Since these services ensure anonymity, it is counterintuitive to believe that such organizations would cooperate with law enforcement agencies. Laws regarding the transparency of financial transactions of Internet companies in Russia are unclear. But Russia's Federal Security Service (FSB) is frequently able to surreptitiously monitor Internet traffic:
Authorities continued to infringe on citizens' privacy rights. Government technical regulations that require Internet service providers and telecommunications companies to invest in equipment that enables the FSB to monitor Internet traffic, telephone calls, and pagers without judicial approval caused serious concern. However, in response to a challenge by a St. Petersburg journalist, the Supreme Court ruled in September that the FSB is required to obtain and show court approval to telecommunications companies before it can proceed to initiate surveillance.10

Such businesses generally operate within a set of rules defined by the country in which they are located. In the United States, such companies must report every transaction. Every bit and byte of data is subject to review by the Securities and Exchange Commission (SEC) and various other government agencies. In a like manner, such businesses operating in the U.S. cooperate with law enforcement officials and other investigative agencies in fraud investigations. In Russia, however, the governance process is unclear and many judges are corrupt.
Low salaries and lack of prestige make it difficult to attract talented new judges and contribute to the vulnerability of existing judges to bribery and corruption. Judges have received some incremental salary increases aimed at improving the quality of judges recruited and raising the retention rate. Although judges' pay has improved, working conditions remain poor, and support personnel continue to be underpaid.11

But even if monitoring is occurring, it may be ignored. Occasionally bribes can make the important e-paper trail disappear.

Perhaps the misuse of these systems can be attributed to globalization. The ability to get anything, anytime, anywhere may sometimes work against the investigative paradigm, in that there is no clear path to forensics. X does not necessarily precede Y, and connections among the relevant data can be many and disparate. Thomas Friedman has described the Internet as the tool for globalization. Unfortunately, terrorists have discovered a way to use this democratization tool against the West.12 Friedman further proclaims that countries and cultures that are unable to obtain the tools of technology are going to fall behind in the race toward globalization. Mark Rupert of Syracuse University states:

[o]ngoing transnational dialogues among activist groups, non-governmental organizations, and other elements of an emerging global civil society have generated some remarkable proposals for a more sustainable, egalitarian, and democratic world.13

Others argue for a more self-managed form of governance on the Internet, contending that such governance will benefit all Internet users equally. Of course, this too is problematic. Before weighing arguments for or against the role of globalization in terrorists' usage of the Internet, law enforcement and intelligence agencies need to be able to track the transnational criminal activities of terrorists using the Internet for money laundering and funding. In this sense, globalization acts as an enabler of such activities.


Tracking hackers via electronic or other means is no longer a simple task. Everything from e-mail headers to source addresses can be masked or spoofed. The protocols that drive the Internet were originally designed to allow for maximum connectivity and sharing among components.14 Furthermore, the concept of a non-circuit-switched network was the focal point of the initial research.15 The packet-switched network concept, as it came to be known, at once allowed for maximum interoperability and a distributed threat-base for cyber-based attacks.

Today, packet-switched networks interoperate with circuit-switched networks, creating myriad links in the communications chain, all of which can be broken, spied upon, manipulated, or taken offline. The fact that everything connected to the Internet is uniquely addressed provides robustness for the protocol and guaranteed delivery, as well as a reporting mechanism for delivery failures.16 Unique addressing, guaranteed delivery, and failure reporting are the three primary issues of concern for tracking hackers.

An overview of the primary protocols in use on the Internet today reveals three common elements: (1) encapsulation, i.e., the taking of different protocol packets and encapsulating, or enclosing, them within other packets; (2) error reporting and resending mechanisms; and (3) guaranteed delivery and ensured notification of delivery failure.
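The encapsulation element can be illustrated with a toy sketch: each layer wraps the data it receives in its own header, and the receiving side strips headers in reverse order. The header strings below are simplified stand-ins, not real frame, packet, or segment formats:

```python
# Toy illustration of protocol encapsulation: an application payload is
# wrapped in a TCP-like segment, then an IP-like packet, then a link-layer
# frame. The "headers" are simplified stand-ins for the real formats.

def encapsulate(payload):
    tcp_segment = b"TCP|" + payload      # transport header + data
    ip_packet = b"IP|" + tcp_segment     # network header + segment
    frame = b"ETH|" + ip_packet          # link header + packet
    return frame

def decapsulate(frame):
    # Each layer strips its own header and hands the rest upward.
    for header in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(header), "unexpected framing"
        frame = frame[len(header):]
    return frame

wire = encapsulate(b"GET /index.html")
print(decapsulate(wire))  # b'GET /index.html'
```

The same nesting is what lets intermediate devices forward traffic without understanding its contents, which is precisely why inspection for illicit payloads is hard.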


TCP/IP is the primary protocol suite that enables the smooth functioning of today's Internet. TCP (Transmission Control Protocol) ensures smooth delivery and retransmission of packets.17 IP (Internet Protocol) defines the format of packets and the addressing scheme. Encapsulation is a mechanism for transporting IP packets between networks using different protocols and for transporting packets across diverse network links.18 These aspects create the paradox between the "easy to use, anywhere in the world" nature of the Internet and the distributed threats that the Internet permits. The header information that describes source and destination addressing can be manipulated and modified; this is a more sophisticated form of attack. Spoofing, as this process is known, allows the attacker (or the attack) to appear to originate from a masked or falsified location. This attack will foil stateful packet inspection mechanisms, in that the packet will appear to be from somewhere it is not. Furthermore, it will foil any security measures designed to accept only packets from certain addresses.19 Worms, viruses, and trojans play an equally important role in the quagmire of Internet investigations.
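To see why spoofing is possible, note that the IPv4 source address is just a four-byte field at a fixed offset in the packet header; nothing in the format prevents a sender with raw-socket access from writing an arbitrary value there. This sketch builds a minimal 20-byte IPv4 header with Python's standard struct module and overwrites the source field (the addresses are illustrative, and the checksum is left unset):

```python
import struct

# Build a minimal 20-byte IPv4 header, then "spoof" the source address by
# overwriting bytes 12-15. Addresses come from documentation ranges; the
# checksum is left zero in this sketch.

def ipv4_header(src, dst):
    to_bytes = lambda ip: bytes(int(p) for p in ip.split("."))
    return struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,   # version 4, header length 5 words
        0,              # type of service
        20,             # total length (header only, no payload)
        0, 0,           # identification, flags/fragment offset
        64,             # TTL
        6,              # protocol = TCP
        0,              # checksum (not computed here)
        to_bytes(src),
        to_bytes(dst),
    )

hdr = ipv4_header("203.0.113.7", "198.51.100.1")        # real source
spoofed = hdr[:12] + bytes([192, 0, 2, 99]) + hdr[16:]  # forged source field
src = ".".join(str(b) for b in spoofed[12:16])
print(src)  # 192.0.2.99
```

Any receiver, or any stateful inspection device along the path, sees only the forged bytes: the protocol carries no proof of who actually wrote them.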

Once the tools of only the most skilled hackers, viruses, worms, and trojans (VWT) are now freely and easily accessible on the Internet. (At the time of writing, I was able to download a virus and infect my home network in less than twenty seconds.) Utilizing modern programming languages like C++ and Java, it is possible to create "smart viruses" that appear to be "smart agents." Smart agents are programs that perform specified tasks, such as moving funds.20 State inspection mechanisms placed in networks will have to dynamically adjust to separate the real programs traversing the Web21 from the programs that appear to be real but are in fact viruses. Such dynamically adjusting systems require a highly developed artificial intelligence (AI) mechanism. AI is the area of research that focuses on real-time computerized perception systems. Put simply, this is not just having a computer provide an easily definable binary decision to a simple question like "How are you doing today?" A finite number of answers to this question are possible, and all are easily programmable; they can even appear to be perception based if the frequency and sequencing of the possible responses are randomized. A truly successful AI machine would make decisions based on a variety of random variables interacting in a randomized sequence. More importantly, such a system would react differently, based on different combinations of the variables, at different times and in different places. A large and well-established body of research known as cybernetics and stochastic systems already relates to this field.22 In order to properly track and trace illegal or illicit online Internet financial transactions, the system designers would require an extensive knowledge of the praxis and design of such transactions. Such a system and its concomitant "smart agents" would need to do all this, learn to recognize new methods, and be able to detect deliberately disguised transactions.
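The point about canned answers masquerading as perception can be made concrete in a few lines. The question, responses, and matching rule below are hypothetical:

```python
import random

# Sketch of the point above: a finite, pre-programmed answer set can look
# perception-based simply by randomizing which response is chosen. No
# actual perception or learning is involved.

CANNED = [
    "I'm doing well, thanks for asking.",
    "Fine - busy day so far.",
    "Can't complain. And you?",
]

def answer(question, rng):
    if question.strip().rstrip("?").lower() == "how are you doing today":
        return rng.choice(CANNED)
    return "I don't understand the question."

print(answer("How are you doing today?", random.Random(0)))
```

A system like this passes for responsive only within its fixed repertoire; anything outside it exposes the absence of genuine perception, which is the gap a true AI mechanism would have to close.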
Current technology is able to detect unusual activity, such as exorbitantly large sums of money electronically traversing international boundaries, but cannot determine in real time whether a charitable organization or a new non-governmental organization (NGO) is electronically funneling money to terrorist organizations. Forensic capabilities can also link "brick and mortar" activities with electronic activities, but only once a party is already under suspicion. And although the USA Patriot Act went a long way in attempting to thwart such activities, it acted more like a sledgehammer applied to a problem that requires careful chiseling.

Internet routers are devices that move Internet packets from one network to another, keeping track of incoming and outgoing connections. Part of this recordkeeping involves storing a network hardware address, based on the connections of incoming and outgoing packets. This storage is only temporary, though, and reveals little about the actual contents of the datagram.23 Most forensic investigations take place after a machine is already under suspicion. Unfortunately, proactively tracking electronic terrorist financing requires real-time, perception-based monitoring. To be effective, almost every device on the Internet would have to be sagacious about all packets coming and going. These same devices would then have to share knowledge of illicit activities and make decisions in real time, based upon the presence or absence of trespass. Finally, the devices would have to reroute suspect packets to a collection point for analysts to review. Recent works have described the idea of distributed "micro-firewalls" or intrusion detection systems (IDS) which could detect intrusions and other malevolent activities on the Internet. Perhaps by combining this with earlier research on Case-Based Reasoning (CBR), such a system could be designed. The rationale of CBR is that "new problems are often similar to previously encountered problems."24 Using CBR on distributed IDS, combined with research done in anomaly detection and data mining, may come close to the specification outlined above. But, untrained, the system will still lack the data relevant to detecting and reporting terrorist-based money laundering via the Internet. Nearly a decade ago, a system was developed to detect credit card fraud. This system combined advanced data mining techniques and neural network algorithms.
Their model used a modified version of the CBR system, which combined "like events," coded them, and weighted them to establish the statistical significance (or lack thereof) of a given event. Further complicating the creation of such a system is the fact that financial institutions do not share their data - fraudulent or legitimate.25 Fraud detection and anomaly detection research do not appropriately address all aspects of this problem, because terrorists using digital payment systems do essentially nothing different during a transaction than a legitimate user does; the transaction therefore appears neither fraudulent nor anomalous.
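The CBR rationale, that new problems resemble previously encountered ones, can be sketched as a nearest-neighbor lookup over a base of labeled past transactions. The features (amount in thousands of dollars, jurisdictional hops, hour of day), labels, and case base below are hypothetical illustrations, not a real fraud model:

```python
import math

# Sketch of case-based reasoning over transactions: score a new case by the
# labels of its most similar stored cases. Feature vectors and labels are
# hypothetical: (amount in USD thousands, jurisdictional hops, hour of day).

CASE_BASE = [
    ((5.0, 1, 14), "legitimate"),
    ((7.5, 1, 10), "legitimate"),
    ((250.0, 4, 3), "suspicious"),
    ((180.0, 5, 2), "suspicious"),
]

def classify(case, k=3):
    """Majority label among the k nearest previously seen cases."""
    nearest = sorted(CASE_BASE, key=lambda cb: math.dist(cb[0], case))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify((200.0, 4, 1)))  # suspicious
```

The sketch also shows the limitation the text identifies: a terrorist transfer engineered to look like the legitimate cases would land among the "legitimate" neighbors and pass unflagged.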

Any computer system can, remotely or locally, monitor activities at any time of day anywhere in the world. This information serves as a growing body of "real-time" intelligence that ultimately needs parsing and substantial "signal-to-noise" processing. Unfortunately, this leads to an ever-growing information overload, which requires more thoughtful and interactive parsing and analysis.26 The information is available, but there is so much of it that it is not manageable, even with current OCR and language-processing technologies. Such issues now plague intelligence agencies worldwide. The threats posed by Internet terrorism financing are as distributed as the Internet itself. With the burgeoning wireless infrastructure worldwide, this threat has become mobile, and thus even more widely distributed. Tracking terrorists has proven to be a difficult challenge for governments, as their cells are widely distributed between remote areas and urban centers. Understanding the nature of terrorists' Internet usage habits is linked to their "social network behavior." This poses a unique challenge for intelligence agencies in that these social networks are difficult to penetrate, making an appropriate understanding of their functionality hard to gain. Beyond the myriad recruiting, training, and retention issues that intelligence agencies face today, the specific challenges created by terrorists' Internet usage are only now becoming known. The current model of human intelligence (HUMINT), which relies upon an agent-asset relationship, goes a long way toward getting pertinent information from foreign nationals regarding current events. Signals intelligence (SIGINT) is able to monitor and watch terrorists from afar, but the data analysis techniques are still unable to keep up with the speed of the terrorists' activities. HUMINT efforts have not completely failed to track down terrorists or their Internet usage.
But the weakness here is that some assets in key locations work both sides of the street and provide false information. Assets need motivation to become traitors to their country, and this motivation is nearly always related to money. Typically, the "agent" meets with the asset and pays the asset some amount of hard currency (British pounds, American dollars, Euros, etc.). For this sum, the asset reports on some event(s), based upon the needs of the "agent" and the "office" in a particular location. Money was enticing to potential assets during the Cold War era, as hard currency was, in many places, difficult, if not impossible, to obtain. Hard currency also could be used to purchase consumer goods previously unavailable to a given asset on his or her fixed, state-provided salary. Obviously, therefore, assets can be, and often are, unscrupulous, despite classified guidelines regarding asset recruitment. In order to track terrorists, assets who are terrorists must be recruited. Using money to recruit assets is less effective against terrorist groups because their members are usually rather well compensated. SIGINT is very capable of pinpointing locations of deployments (weapons, people, etc.), but is generally unable to effectively answer questions about specific terrorist cells and their plans. To develop the argument that the IC is climbing a steep and slippery slope in its efforts to effectively track the financial transactions (good and bad) of terrorists, or even of suspected terrorists, one must understand that the IC is historically known for its failures rather than its successes. Richard Betts explained in 1978 that no normative or positive theory of intelligence has been fully developed, adding that "negative or descriptive theories about how intelligence fails abound."27 Any success that the IC has had in tracking such transactions should be, and usually is, kept classified, and is therefore unknown to the general population and media.

Social and Political Challenges

Financial institutions, for many reasons, simply cannot share information regarding financial transactions. Whereas intelligence and law enforcement agencies can subpoena such records for use in investigations, they are not able to proactively monitor "the next greatest financial threat."28 Revamping the ways in which both HUMINT and SIGINT are collected, analyzed, and disseminated is under investigation today. 9/11 was not so much an intelligence failure as an indication of the need for an "intelligence paradigm shift" that may or may not have predated that catastrophe.29 Journalist Thomas L. Friedman in 2000 discussed the shift from the Cold War economy to a globalized economy, wherein nation-state influences are replaced by terrorists most often not affiliated with a nation-state.30 This process largely started when the Cold War ended. Intelligence procedures may not have been able to catch up to this new world order model, or perhaps they became too mired in domestic or institutional politics to progress beyond the middle 1990s. Recently, the world witnessed what many consider to be the largest "intelligence failure" in history, regarding the presence or absence of weapons of mass destruction in Iraq. But this could arguably have been more an instance of the politicization of information, which impelled intelligence officials to act in particular ways.

In any case, the ways and means by which relevant information regarding terrorists' Internet activities is collected, like every other aspect of intelligence collection, need revamping. The technological barriers, combined with HUMINT challenges, pose a great challenge for the IC in the twenty-first century. Specifically, analysts once trained in IR theory and economics must now be fluent in technology, and take an even more innovative, multidisciplinary approach to investigating and proactively preventing terrorists' use of the Internet to send or receive funds. Another issue is the dissemination of any information about these activities once received. Author Ronald Kessler a decade ago described the historic roots of the enmity between law-enforcement (FBI) and intelligence (CIA) agencies.31 To a large extent that enmity has been reduced in the aftermath of 9/11 and the Intelligence Reform Act of 2004, but not fully.

The primary difference between what law enforcement agencies and intelligence agencies collect is evidence versus information. Law enforcement agencies have long preferred evidence to information, and have used information only to obtain evidence. Intelligence agencies continue to work with information - sometimes substantiated, sometimes not. These differences notwithstanding, the greatest challenges facing law enforcement agencies relate to jurisdictional and legal issues. Tracking institutional financial transactions or Internet transactions frequently requires crossing national and international boundaries. Such investigations require highly trained personnel and cross-jurisdictional capabilities, which in the U.S. demand a high degree of cooperation among local police departments and federal investigative agencies (FBI, DHS, etc.).

Cyberspace Considerations

But what are the jurisdictional and legal frameworks in cyberspace? David R. Johnson and David G. Post have investigated these notions and ask the question: "How will Cyberspace be governed, and by what right?"32 E. Lyons Longworth has proposed that cyberspace is not only multi-jurisdictional but, more appropriately, a-jurisdictional.33 The Internet is not constrained by geographic boundaries. In 1996, Henry H. Perritt Jr. noted the "lack of congruence between cyberspace's global, transnational character and the national geographically imposed limitations of the courts."34 A further complication is the fact that users can be anonymous on the Internet, and can easily avoid or subvert the rules and laws of a particular jurisdiction. Therefore, even if evidence were easily obtainable in an investigation, the questions of who has access rights to the evidence, and to what extent, if any, the evidence could be used to build a case, remain unanswered. Johnson and Post clearly delineate the legal issues related to the jurisdictional nature (or lack thereof) of the Internet in their discussion of "decentralized, emergent law."35 Since users have a great deal of control over their own computing environments, and since users are mobile and can evade most hierarchically based controls, a centralized governing authority is not really feasible.36 This loose form of governance encourages self-regulation, based "upon the voluntary acceptance of technical protocols and standards."37 Lawrence Lessig describes the "four structures of control" which provide a framework for how various behaviors can and do affect cyberspace law:

1. Direct effect of the law. Particular behaviors will suffer sanctions or penalties if they violate laws; examples are laws regarding copyright, defamation, and obscenity.
2. Social norms. Society or community threatens consequences for behavior in cyberspace that violates a social norm; studies on "netiquette" relate well to this constraint.
3. Cyberspace market regulation. Concerns factors such as connectivity, bandwidth, congestion, and access charges for certain specific services on the Web.
4. Real space code of the Internet. The architecture of the Internet itself acts as a limiting factor on the range of possibilities available on the Internet.38
Lessig's constraint-based framework goes a long way toward defining a process of "decentralized, emergent law." Its elegance rests in the conception that the Internet itself possesses the necessary answers to the unique challenges it places on existing legal systems. Lessig's constraints suggest that the Internet's complexity is self-managing and self-maintained, with the developers of its standards and protocols effectively creating constraints on the usage and capabilities of the network. Post's lex mercatoria ("merchant law") describes an example of unregulated and unconstrained rule making in the absence of nation-state control.39 Such an approach places a heavy burden on a culture's societal norms to create a mutually acceptable form of online governance. But this method does not overtly address ways of dealing with Internet usage that is patently "non-rule-violating" but is nevertheless directed at nefarious purposes. Johnson and Post assert that no objective criteria now exist by which to measure whether any particular rule-set is optimal.40 Terrorists using the Internet to transfer or launder funds take great measures to mask or conceal their activities by making them appear ordinary. The protocols and architectural designs inherent to the Internet are not sufficient for managing such usage in a way consistent with societal norms. Direct effect of the law is also ineffective here, because evidence is needed to build a case and ultimately prove illegality, and ordinary Internet usage does not provide that evidence. Terrorist activities involving the Internet are deliberately kept within the scope of normal usage policies and local laws governing Internet usage, thereby negating the usefulness of lex mercatoria.

Managing Internet Jurisdictions

Researchers like Menthe suggest that cyberspace should be treated like outer space, with the choice of law based on and derived from nationality, not territoriality.41 But there is a potential difficulty with this. Currently, outer space is governed by only those nation-states that can afford to send people there. Traveling to or utilizing outer space is simply not possible for most of the world's population. Aside from potential nefarious purposes, contemporary terrorists have little or no interest in space; the very definition of terrorism precludes it. They do, however, have a great interest in cyberspace, and use it for their purposes. All this creates a paradox. Current research indicates that a centralized form of control - a centralized Internet law-making body - is just not feasible. Instead, Lessig's four structures of control will manage the Internet. Unfortunately, terrorists are able to use these same structures against the Internet. Because the protocols that make the Internet function allow for anonymity, they can cloud what should be transparent activities, like the terrorists' transferral of money for acts of terrorism. In this way, terrorists are able to "hijack" the Internet to commit criminal acts, just as the 19 Islamic radicals did with airplanes on 9/11. Thomas L. Friedman concurs with this notion, but takes it to a much higher level by claiming that those terrorists hijacked the "American way of life."42 Creating a cyberspace governance council would be a gargantuan task, requiring multilateral agreement from every nation; it would at best produce a broad overview of general laws. David R. Johnson and David G. Post continue with this notion: "[T]he bottleneck characteristics of any centralized law-making machinery and the natural frailties of the law-making process make centralized systems unsuitable for tackling a diverse, rapidly changing, large scale set of problems, such as those posed by the net."43

The Internet desperately needs what it can never have - a centralized form of government and lawmaking that is adaptable to its ever-changing needs. Johnson and Post assert that a centralized "cyberspace agency" would not be appropriate either:

"There would be problems balancing power in such an agencydivergent views regarding democracy, centralized authority and even defining 'fairness' would become an issue. A 'bill of rights' for the Internet would at best only deal with the most fundamental problemsthen there would be the issue of creating laws that were context free."44

Perhaps, in the context of terrorism, this discussion about laws is futile, since the behaviors of terrorists do not conform to any local, regional, national, or international laws. The issue of whether or not cyber terrorism exists must nevertheless be considered.


The essential difficulty Habermas would see with this dichotomy of Internet governance theory is its neglect of input from the public sphere. His classic discussion of the public sphere was based on the idea that equal access to equal information promotes an equal (i.e., democratic) society. According to Dennis Gaynor,

In the public sphere, Habermas says, discourse becomes democratic through the "non-coercively unifying, consensus building force of a discourse in which participants overcome their at first subjectively biased views in favor of a rationally motivated agreement."45

Practical and rational discourse among individuals is the way toward a more democratic web.46 Terrorist behavior is antithetical to Habermas's views on rational governance. It likewise counters Amitai Etzioni's communitarian views, which hold that the Internet is used to reinforce connections with family and friends, and that Internet users join and form communities.47


Cyber terrorists are known in today's vernacular as "hackers." Hackers and terrorists are similar in that their actions are intended to disrupt the daily operations of a given system. Hackers focus on telephones, computers, and now wireless devices such as PDAs.48 They do not necessarily have political or religious motivations, although evidence is growing that this may be changing.49 Hackers prefer to attack inanimate objects (machines). Generally, this is not as obviously damaging and disruptive as a suicide bomber in an Israeli disco, but as the world's dependency on information and communications technologies (ICTs) grows, serious damage of a different kind may result. Dorothy Denning concurs in her essay "Is Cyber Terrorism Next?," stating:

Although cyber terrorism is certainly a real possibility, for a terrorist, digital attacks have several drawbacks. Systems are complex, so controlling an attack and achieving a desired level of damage may be harder than using physical weapons. Unless people are killed or badly injured, there is also less drama and emotional appeal.

Terrorists tend to use mechanisms that evoke the maximum emotive reaction. Losing Internet connectivity, although irritating, does not have the same media impact as the imagery of 9/11. Denning's research nevertheless indicates a growing number of politically motivated hacking groups in the post-9/11 world. The Ohio-based group YIHAT (Young Intelligent Hackers Against Terror) has defaced many terrorist Websites; indeed, its stated mission is to stop the money sources of terrorism. YIHAT issued a plea on its Website for corporations to make their networks available to the group's members for the purpose of providing the "electronic equivalent to terrorist training camps."50 A study conducted at the Naval Postgraduate School, Monterey, California, sums this up best:
the barrier to entry for anything beyond annoying hacks is quite high and […] terrorists generally lack the wherewithal and human capital needed to mount a meaningful operation. Cyber terrorism, they argued, was a thing of the future, although it might be pursued as an ancillary tool.51

The Center's report was issued in 1999; whether 9/11 has changed the outcome of its prediction is not clear. But in the days following 9/11, hacking groups emerged on both sides of the conflict: YIHAT's hacks on Iranian and Afghan systems were almost certainly countered by Pakistani-based groups like G-Force. More likely, no matter how damaging a cyber attack might be, it will come in the middle of the night, through regular channels, and will not seem out of the ordinary.52

1. 9/11 did not necessarily change the world; the world was changing already, before 9/11. The date simply serves as a convenient focal point for massive policy shifts, particularly in the United States.
2. See Thomas L. Friedman, Longitudes and Attitudes (New York: Alfred A. Knopf, 2003).
3. See K. E. Scott, Paradigm Shift: U.S. Strategic Intelligence in the 1990s, Study Project.
5. K. E. Scott, Paradigm Shift: U.S. Strategic Intelligence in the 1990s, pp. 390–391.
6. Ibid., p. 391, para. 4.
7. See the Website:
8. SWIFT is the financial industry-owned cooperative supplying secure, standardized messaging services and interface software to 7,650 financial institutions in over 200 countries. SWIFT's worldwide community includes banks, broker/dealers, and investment managers, as well as their market infrastructures in payments, securities, treasury, and trade.
9. From "The Peninsula," 12/01/04, at =Business_News&month=December2004&file=Business_News2004120181326.xml
10. Taken from: =877
11. Ibid., Section e, para. 3.
12. See Thomas L. Friedman, The Lexus and the Olive Tree (New York: Farrar, Straus and Giroux, 1999).
13. See Mark Rupert's anti-Friedman pages:
14. The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. Taken from
15. Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, a major step along the path toward computer networking. Taken from
16. Robustness here refers to the transnational nature of the Internet: the protocols used work across geopolitical borders and across socioeconomic and ethnic boundaries.
17. TCP is typically used by applications that require guaranteed delivery. It is a sliding window protocol that provides handling for both timeouts and retransmissions. Taken from:
18. Encapsulation is suggested as a means to alter the normal IP routing for datagrams, by delivering them to an intermediate destination that would otherwise not be selected based on the (network part of the) IP Destination Address field in the original IP header. Once the encapsulated datagram arrives at this intermediate destination node, it is decapsulated, yielding the original IP datagram, which is then delivered to the destination indicated by the original Destination Address field. This use of encapsulation and decapsulation of a datagram is frequently referred to as "tunneling" the datagram, and the encapsulator and decapsulator are then considered to be the "endpoints" of the tunnel. Taken from:
20. "The agents […] are revolutionizing the world of artificial intelligence—from e-business and information management to warfare, telecommunications, and robotics," says Chief Investigator on the project, Professor Leon Sterling (Computer Science and Software Engineering).
21. Data traverse the Web in packets and frames, represented as 0s and 1s (binary code) and transmitted as light waves or electrical impulses, depending on the physical medium.
22. Chris Lucas (1999) states: "So far we have looked at deterministic systems, where every option was either chosen or not. Now we can move on to a more realistic mode, and this is where each option has a probability of being chosen (e.g., a coin toss has a 50% probability of being heads and a 50% chance of being tails). These systems are generally avoided by human designers (as they are less predictable and slower to operate) but are ubiquitous in nature and society. They are exemplified by what are called Markov Chains. Here the transition table or transformation is made up of a matrix of probabilities, therefore the trajectory of the system no longer follows one determinate path towards the attractor but can take one of many, reversing direction or going sideways as it changes. Thus the time to settle to a stable equilibrium state is longer and more uncertain. The main feature of these systems is that the probabilities are fixed, so that over a long time (or over multiple instances) the behaviour of the system can be analysed and predicted statistically. We can see such things for example in the proportion of males and females in the population—although we can't determine the sex of any child at conception, we can predict that about 50% of the total children will be female." See Chris Lucas, "Cybernetics and Stochastic Systems," June 1999, at
23. I am using datagram, packet, and frame interchangeably in this paper.
24. Taken from Stuart Aitken, "An Introduction to Case Based Reasoning," at:
25. Angelika I. Kokkinaki, "On Atypical Database Transactions: Identification of Probable Frauds Using Machine Learning for User Profiling," IEEE Knowledge and Data Engineering Exchange Workshop, 1997.
26. Post-9/11 analyses have shown that hundreds of documents (in Arabic) were obtained by taps and traces but could not be translated in time to be of any use to intelligence professionals.
27. Strategic Intelligence (Los Angeles, CA: Roxbury Publishing, 2004).
28. I use this phrase in the same sense as "the next killer app," a phrase prevalent in Internet literature today.
29. Thomas S. Kuhn, in The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1970), describes this as "a change from one way of thinking to another. It's a revolution, a transformation, a sort of metamorphosis. It just does not happen, but rather it is driven by agents of change."
30. Thomas L. Friedman, The Lexus and the Olive Tree.
31. Ronald Kessler, Inside the CIA (New York: Pocket Books, 1994).
32. David R. Johnson and David G. Post, "A Meditation on the Relative Virtues of Decentralized, Emergent Law." Taken from
33. E. Lyons Longworth, "The Possibilities for a Legal Framework for Cyberspace" (Burlington, VT: Ashgate, 2000).
34. Henry H. Perritt Jr., "Jurisdiction in Cyberspace: The Role of Intermediaries," p. 1.
35. David R. Johnson and David G. Post, "And How Shall the Net Be Governed?" p. 4.
36. E. Lyons Longworth, "The Possibilities for a Legal Framework for Cyberspace," p. 18.
37. Ibid.
38. Lawrence Lessig, "The Law of the Horse: What Cyberlaw Might Teach," Working Paper, Stanford Law Review (1997), p. 3.
39. David G. Post, "Anarchy, State and the Internet," op. cit., para. 26.
40. Johnson, "And How Shall the Net Be Governed?" op. cit., p. 8.
41. See Menthe, "Jurisdiction in Cyberspace."
42. See Thomas L. Friedman, The Lexus and the Olive Tree.
43. Johnson and Post, "And How Shall the Net Be Governed?" op. cit., p. 5.
44. Ibid.
45. See Denis Gaynor, "Democracy in the Age of Information: A Reconception of the Public Sphere."
46. Ibid.
47. See Amitai Etzioni, at
48. There is much literature on hackers and hacking. Some terms: 2600 (the frequency in Hz of the original access tone for telephone systems worldwide), phreaking (telephony hacking), warez (illegally copied software and licenses), and crackz (illegal fixes that unlock or license software).
49. Two groups in particular come to mind: "The Cult of the Dead Cow" and "Legions of Doom" claim to be Satanists, and their Webpages clearly reflect this.
51. See Center for the Study of Terrorism and Irregular Warfare, Naval Postgraduate School, Monterey, "Cyberterror: Prospects and Implications."
52. There is a growing threat of malware (malicious software) delivered through ordinary Web browsing or e-mail reading; such software is also called spyware.
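The "tunneling" described in note 18, which is one of the protocol features that can obscure where a datagram is really headed, can be sketched in a few lines. This is a toy model, not a real protocol implementation: the Datagram class, its fields, and the addresses are illustrative assumptions, whereas real IP-in-IP encapsulation operates on binary packet headers.

```python
from dataclasses import dataclass

@dataclass
class Datagram:
    src: str         # source address
    dst: str         # destination address
    payload: object  # data, or an inner Datagram when tunneled

def encapsulate(inner: Datagram, tunnel_entry: str, tunnel_exit: str) -> Datagram:
    """Wrap a datagram in an outer header so routers forward it to the
    tunnel exit instead of its original destination."""
    return Datagram(src=tunnel_entry, dst=tunnel_exit, payload=inner)

def decapsulate(outer: Datagram) -> Datagram:
    """At the tunnel exit, strip the outer header and recover the
    original datagram for normal delivery."""
    assert isinstance(outer.payload, Datagram), "not a tunneled datagram"
    return outer.payload

# A datagram bound for 10.0.0.9 is detoured through a tunnel:
original = Datagram("192.0.2.1", "10.0.0.9", b"hello")
tunneled = encapsulate(original, "198.51.100.1", "198.51.100.2")
print(tunneled.dst)                # prints 198.51.100.2 (the tunnel exit)
print(decapsulate(tunneled).dst)   # prints 10.0.0.9 (original destination)
```

Anyone observing only the outer header sees traffic between the two tunnel endpoints; the true endpoints are visible only after decapsulation, which is one reason such indirection complicates tracing.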
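Lucas's point in note 22, that a system with fixed transition probabilities is unpredictable step by step yet statistically predictable over many steps, can be illustrated with a small Markov chain simulation. The two-state chain and its transition matrix below are invented for illustration; for this matrix the long-run share of time in state A works out analytically to 5/6.

```python
import random

# Fixed transition probabilities between two states, A and B:
# from A: stay in A with p=0.9, move to B with p=0.1
# from B: move to A with p=0.5, stay in B with p=0.5
TRANSITIONS = {
    "A": [("A", 0.9), ("B", 0.1)],
    "B": [("A", 0.5), ("B", 0.5)],
}

def step(state, rng):
    """Choose the next state according to the fixed probabilities."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state]:
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(n_steps, rng):
    """Run the chain and return the fraction of time spent in state A."""
    state, in_a = "A", 0
    for _ in range(n_steps):
        state = step(state, rng)
        in_a += state == "A"
    return in_a / n_steps

rng = random.Random(7)  # fixed seed so the run is repeatable
share = simulate(100_000, rng)
# No single step can be predicted, but the long-run share of time in A
# converges to the stationary value 5/6 (about 0.833).
print(round(share, 3))
```

This is exactly the property Lucas describes: the trajectory wanders, but because the probabilities are fixed, behavior over many instances can be analyzed and predicted statistically.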
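The idea behind Kokkinaki's work cited in note 25, flagging database transactions that deviate from a learned user profile, bears directly on tracking terrorist fund transfers and can be sketched with a simple statistical profile. The z-score rule, the threshold, and the transaction amounts below are stand-ins of my own; the paper itself uses machine-learning methods for the profiling.

```python
from statistics import mean, stdev

def build_profile(history):
    """Learn a simple per-user profile from past transaction amounts."""
    return {"mean": mean(history), "stdev": stdev(history)}

def is_atypical(profile, amount, threshold=3.0):
    """Flag a transaction whose amount lies more than `threshold`
    standard deviations from the user's historical mean."""
    z = abs(amount - profile["mean"]) / profile["stdev"]
    return z > threshold

# Invented history: a user who normally wires small, regular sums.
history = [120, 95, 110, 130, 105, 90, 115, 125]
profile = build_profile(history)

print(is_atypical(profile, 118))    # prints False (typical amount)
print(is_atypical(profile, 9500))   # prints True (sudden large transfer)
```

A real system would profile many features besides amount (counterparty, timing, destination country), but the principle is the same: the profile defines "normal," and analysts are alerted only to the departures from it.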