This page is part of Notes on Communication, a free ebook.

Appendix 4

Digital Communications

Some of the advantages of recoding information which is already in a coded form were discussed in Appendix 2. One very important advantage of such recoding is the possibility of changing a message into a form which is easy to transmit over a distance. Early examples of such transmission included alarm fires, smoke signals and message sticks.

More recent examples of the transmission of recoded messages over a distance have included signal flags and semaphore. In the case of signal flags, each flag often stood for a short but complete message. With semaphore, each position of the two arms (either the arms of a person, or those of a semaphore machine) often stood for a letter of the alphabet.

Signal flags took some time to hoist, but each flag gave quite a lot of information. The semaphore arms gave less information at any one time, but their positions could be changed more quickly – and, given sufficient time, there was no limit to the length or complexity of the message.

Morse code, which employs sequenced short and long pulses of any convenient stimulus, was a further useful development. It had the advantage of being independent of the mode of transmission. For example, a hand which intermittently hides a candle flame can send Morse code – but so can a machine employing the most advanced technology in existence.

From Morse code, it is only a small conceptual step to other machine-friendly methods of communication based on the repetitive use of one or more convenient stimuli. In fact, all current methods of digital information management are based on repeated instances of the presence or absence of a single stimulus, thus creating the binary numbers explained below.

Although the ramifications of this small step are already so complex that no single human being thoroughly understands more than a small part of the whole field,[1] the basic principles are very simple. As mentioned above, a numerical system with a base of 2 was chosen for the purpose. With only two options (0 and 1) at any position in such "binary" numbers, it is relatively easy to design a machine that can work with them.

In a computer, for example, the two options are typically represented by the absence (indicating 0) or presence (indicating 1) of a small voltage in an electronic circuit. If the base 10 numbers which we use in everyday activities had been employed for this purpose, the circuit would need to distinguish nine different responses, as well as the absence of any response.
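The idea can be illustrated with a short sketch (in Python, purely for illustration – the original text contains no code): every ordinary number can be rewritten using only the two binary symbols 0 and 1, and read back again.

```python
# Everyday base-10 numbers rewritten with only the symbols 0 and 1.
for n in range(6):
    print(n, "->", format(n, "b"))   # e.g. 5 -> 101

# A binary string is turned back into an ordinary number by
# interpreting it with base 2.
assert int("101", 2) == 5
```

Each position in the binary string corresponds to one "present or absent" signal of the kind described above.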

A binary "word" – as a machine-sized group of binary digits is often called – can be a very long string of 0s and 1s, so it is often more convenient to use numbers with a higher base, usually "hexadecimal" numbers (with a base of 16), when writing programs. However, everything can (and must) be translated back into binary code before it communicates with the hardware components in a typical computer.
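The convenience of hexadecimal comes from a simple correspondence: each group of four binary digits maps onto exactly one hexadecimal digit. A brief sketch (again in Python, for illustration only):

```python
# A 16-bit binary word, written out in full and then regrouped
# into hexadecimal: four bits per hex digit.
word = 0b1101011111000010           # binary: 1101 0111 1100 0010
print(format(word, "016b"))         # 1101011111000010
print(format(word, "04x"))          # d7c2

# Both notations describe exactly the same value.
assert int("d7c2", 16) == 0b1101011111000010
```

The hexadecimal form is merely a more readable spelling of the same binary word; nothing is lost in translating between the two.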

Having information in this digital form makes it possible for computers to deal with it in many useful ways. Storage of vast amounts of information in small physical volumes, rapid search and retrieval, easy analysis of content, editing, copying, printing, rapid transmission, and relatively secure encryption are among the commonest of these useful operations. It is therefore hardly surprising that digital technology has revolutionised the management of information.

This revolution, however, has not been entirely bloodless. For example, if editing or copying is done illegally, the speed and ease with which it proceeds can be a decided disadvantage! The possibility that an unknown person, anywhere in the world, might access, copy, alter or destroy vital information, perhaps without the knowledge of its owners, is a very disconcerting aspect of modern information technology.

In some cases, unauthorised manipulation of files is achieved by direct access to the computer on which they are stored. In many cases, however, it results from communication between two or more computers.[2] Fast and convenient communication between different computers is thus a feature of the advantages and the risks of the information management revolution.

Connecting computers together is not difficult – it just requires a protocol (an agreed method) for output and input, and the necessary devices to implement that protocol, plus a connection which might be literally anything that can carry a signal. In practice, wires, optical fibres and radio waves are the most common signal carriers, at the time of writing.
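A protocol in this sense is simply a rule both ends agree to follow. The toy sketch below (a hypothetical example in Python, not drawn from the original text) uses a pair of connected sockets to stand in for any signal carrier, and adopts one very simple rule: every message is sent as a 4-byte length prefix followed by that many bytes of payload.

```python
import socket
import struct

# Toy protocol: 4-byte big-endian length prefix, then the payload.
# Both ends must agree on this rule for the exchange to work.

def send_msg(sock, payload: bytes) -> None:
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!I", sock.recv(4))
    data = b""
    while len(data) < length:          # keep reading until complete
        data += sock.recv(length - len(data))
    return data

# A connected socket pair stands in for wire, fibre or radio.
a, b = socket.socketpair()
send_msg(a, b"hello over the wire")
print(recv_msg(b))                     # b'hello over the wire'
a.close(); b.close()
```

Real network protocols are vastly more elaborate, but the principle – an agreed format plus any medium that can carry the signal – is the same.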

Connection to any available computer in the world is now a simple matter, as the global network of computers called the Internet[3] can be accessed very easily via "Internet service provider" (ISP) companies which operate in most parts of the world. Much of the information available on computers connected to the Internet is fully indexed by advanced "search engines", making it readily accessible to any user.

Assuming that everything is working properly, connections between computers have no overall effect on the information exchanged between them. However, the transmission process itself involves various layers of recoding of information, to facilitate operations such as compression, encryption, authentication and compliance with transmission protocols.
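Two of these recoding layers can be sketched in miniature (an illustrative Python example, with compression and an integrity digest standing in for the much more elaborate stacks real systems use):

```python
import zlib
import hashlib

message = b"Digital data is recoded in layers before transmission. " * 20

# Layer 1: compression shrinks the data for transmission.
compressed = zlib.compress(message)
assert len(compressed) < len(message)

# Layer 2: a digest travels with the data so the receiver can
# detect unwanted modification in transit.
digest = hashlib.sha256(compressed).hexdigest()

# The receiver checks the digest, then reverses the compression,
# recovering the original message exactly.
assert hashlib.sha256(compressed).hexdigest() == digest
assert zlib.decompress(compressed) == message
```

The essential point is that each layer of recoding is fully reversible, so the information exchanged is unchanged overall.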

Fortunately, the enormous recoding tasks involved in these layers are done by the computers themselves. They could not possibly be done by humans, as the number of operations involved is astronomical. Unwanted modification of the data during recoding is carefully guarded against, and in the case of modern communication systems is extremely rare. Unwanted modification of the data by a human intruder is also carefully guarded against, but it still cannot always be prevented.

Indeed, the vulnerability of digital information to unauthorised access has spawned a gigantic security industry. There are many good "antivirus" and "antispyware" applications which, when combined with the latest "firewall" technology, provide considerable protection against most of the methods used to gain illicit access. However, that protection is never absolute.

Similarly, the latest encryption technology can usually keep the content of a document secret – though it might still be copied or deleted. On the other hand, relying on antivirus, antispyware, firewall or encryption technology a few years old (or virus or spyware definitions even a few days old) can leave both data and system dangerously exposed.

Security can also be breached very easily by carelessness, or by a criminal act – either of which can defeat even the latest and best technology. Some security breaches, such as those involving modification of a computer operating system's kernel code, can go undetected for quite some time, potentially providing the perpetrators with full access to the system.

Despite these dangers, good security practices do keep information management systems pretty secure most of the time. However, it seems likely that the overall problem of unauthorised access to data is here to stay. Preventive measures get more sophisticated almost by the week, but so, unfortunately, do the methods employed to defeat them.


[1] Some software engineers specialise in a single process, such as "opening" a digital file so that it can be accessed in an application. This is analogous to medical specialisation in a restricted field such as surgery of the hand.

[2] The act of connecting computers is called networking, and can involve any number of computers, from two up to the billion or more currently thought to be connected to the Internet.

[3] Originally created (as the ARPANET) for US military communications, but now in the public domain and readily accessible for private or business use.


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 Australia License



