Model of information transmission through technical channels. Summary of the lesson on the topic “Information transfer. Integer representation”

Summary of the lesson on the topic "Transfer of information"

Lesson objectives

Educational:

    Study and consolidate knowledge;

    Update prior knowledge;

    Introduce the concepts of ways of transmitting information, information transmission channels, and channel capacity.

    Consider technical systems of information transmission.

Developing:

    develop students' cognitive interest and creative activity;

    develop friendly, businesslike communication among students working together.

Educational:

    foster interest in the subject, attentiveness, discipline.

Lesson type: learning new material and the primary consolidation of knowledge.
Equipment: PC, projector, screen, presentation "Information transfer".
Types of work: heuristic conversation, lecture-demonstration, independent work of students.

Lesson steps:

    Organizational moment.

    Knowledge update.

    Setting the goal of the lesson.

    Learning new material.

    Summing up the lesson.

    Homework setting.

During the classes

Hello, everyone, please sit down. I am very glad to see you. Today we continue to study the chapter "Information processes in systems".

II. Knowledge update

You know from the basic course:

The transfer of information occurs from the source to the recipient (receiver) of the information. The source of information can be anything: any object or phenomenon of animate or inanimate nature. The process of transferring information takes place in a certain material environment that separates the source and the recipient, which is called the information transmission channel. Information is transmitted through the channel in the form of a sequence of signals, symbols, and signs, which is called a message. The recipient of information is an object that receives the message and whose state changes as a result. All of the above is shown schematically in the figure.

A person receives information from everything around him through the senses: hearing, sight, smell, touch, and taste. The greatest amount of information comes through hearing and sight. Sound messages are perceived by the ear as acoustic signals in a continuous medium (most often air). Vision perceives light signals that carry the images of objects.

An information channel can be either natural (atmospheric air through which sound waves travel, sunlight reflected from observed objects) or artificially created. Artificially created channels are the technical means of communication.

And so, the topic of our lesson is "Transfer of information" (Slide 1).

III. Lesson goal setting

Let's start learning the new material. Write the topic of the lesson in your notebook.
Today in the lesson we will become acquainted with technical information transmission systems, learn how the process of transferring information is carried out, and solve practical problems.

IV. Learning new material.

The first technical means of transmitting information over a distance was the telegraph, invented in 1837 by the American Samuel Morse. In 1876 the American A. Bell invented the telephone. Based on the discovery of electromagnetic waves by the German physicist Heinrich Hertz (1886), radio was invented by A. S. Popov in Russia in 1895 and, almost simultaneously, by G. Marconi in Italy in 1896. Television and the Internet appeared in the 20th century. (Slide 2)

K. Shannon's information transfer model

All of these methods of information communication are based on the transmission of a physical (electrical or electromagnetic) signal over a distance and obey certain general laws. These laws are studied by communication theory, which emerged in the 1920s. Its mathematical apparatus, the mathematical theory of communication, was developed by the American scientist Claude Shannon. (Slide 3)

Model of information transmission through technical communication channels

Claude Shannon proposed a model of the process of transmitting information through technical communication channels. Coding here means any transformation of information coming from a source into a form suitable for its transmission over a communication channel. Decoding is the reverse transformation of the signal sequence.

The operation of such a scheme can be explained using the familiar process of talking on the phone. The source of information is the speaking person. The encoder is the microphone of the telephone handset, with the help of which sound waves (speech) are converted into electrical signals. The communication channel is the telephone network (wires, switches of telephone nodes through which the signal passes). The decoding device is the handset (earpiece) of the listening person, the receiver of information. Here the incoming electrical signal is turned into sound.

Modern computer systems for transmitting information, computer networks, work on the same principle. There is an encoding process that converts the computer's binary code into a physical signal of the type that is transmitted over the communication channel. Decoding consists in converting the transmitted signal back into computer code. For example, when telephone lines are used in computer networks, the encoding-decoding functions are performed by a device called a modem.

Channel bandwidth and information transfer rate

Developers of technical information transmission systems have to solve two interrelated problems: how to ensure the highest speed of information transmission and how to reduce the loss of information during transmission. K. Shannon was the first scientist to take up these problems; he created a new science for that time, information theory. Shannon defined a way of measuring the amount of information transmitted over communication channels. He introduced the concept of the channel's bandwidth as the maximum possible data transfer rate. This speed is measured in bits per second (as well as kilobits per second and megabits per second).

The bandwidth of a communication channel depends on its technical implementation. For example, the following communication means are used in computer networks:

    telephone lines;

    electrical cable communication;

    fiber-optic cable communication;

    radio communication.

The throughput of telephone lines is tens to hundreds of Kbit/s; the throughput of fiber-optic lines and radio communication lines is measured in tens and hundreds of Mbit/s.

However, there is a problem denoted by the word "noise".

Noise, noise protection

The term "noise" refers to all kinds of interference that distorts the transmitted signal and leads to loss of information. Such interference is primarily due to technical reasons such aspoor quality of communication lines , insecurity from each other of various streams of information transmitted through the same channels. Sometimes, when talking on the phone, we hear noise, crackling, interfering with understanding the interlocutor, or the conversation of other people is superimposed on our conversation.

The presence of noise leads to the loss of transmitted information. In such cases noise protection is required. For this, first of all, technical methods of protecting communication channels from the effects of noise are used. Such methods vary, from simple to very complex: for example, using shielded cable instead of bare wire, or using various kinds of filters that separate the useful signal from the noise.

Shannon developed a special coding theory that provides methods for dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line should be redundant. Due to this, the loss of some part of the information during transmission can be compensated. For example, if you can hardly be heard while talking on the phone, then by repeating each word twice you have a better chance that the other person will understand you correctly.

Redundancy of the code is the multiple repetition of the transmitted data.
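As an illustration, here is a minimal sketch of the simplest redundant code, a repetition code: every bit is sent three times and the receiver takes a majority vote, so a single corrupted bit per triple is corrected. (This toy scheme is assumed here for illustration only; Shannon's theory covers far more efficient codes.)

```python
# Sketch of the simplest redundant code: a rate-1/3 repetition code.
# Each bit is sent three times; the receiver takes a majority vote per
# triple, so a single corrupted bit in any triple is corrected.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    triples = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]

message = [1, 0, 1, 1]
sent = encode(message)           # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                      # noise flips one bit of the second triple
assert decode(sent) == message   # the error is corrected by majority vote
```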

However, the redundancy must not be made too large. That would lead to delays and higher communication costs. Coding theory makes it possible to obtain a code that is optimal: the redundancy of the transmitted information is the minimum possible, and the reliability of the received information is the maximum.

A great contribution to the scientific theory of communication was made by the famous Soviet scientist Vladimir Aleksandrovich Kotelnikov. In the 1940s-1950s he obtained fundamental scientific results on the noise immunity of information transmission systems.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The entire message is split into chunks, blocks. For each block a checksum is calculated (the sum of the binary digits), which is transmitted along with the block. At the receiving point the checksum of the received block is recalculated, and if it does not match the original sum, the transmission of this block is repeated. This continues until the original and final checksums match.
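A minimal sketch of the block-checksum retransmission scheme just described; the checksum function (the sum of the binary digits, as in the text) and the toy noisy channel are illustrative assumptions, not a real protocol.

```python
# Sketch: a block is re-sent until the checksum of the received copy
# matches the checksum transmitted with it. Here the checksum is the
# sum of the block's binary digits, as described in the text.

def checksum(block: bytes) -> int:
    return sum(bin(byte).count("1") for byte in block)  # number of 1-bits

def send_until_ok(block: bytes, channel) -> bytes:
    expected = checksum(block)
    while True:
        received = channel(block)           # the channel may corrupt data
        if checksum(received) == expected:
            return received                 # checksums match: accept block

# Example: a toy channel that corrupts only the first attempt.
attempts = []
def noisy_channel(data: bytes) -> bytes:
    attempts.append(1)
    return bytes([data[0] ^ 0x01]) + data[1:] if len(attempts) == 1 else data

print(send_until_ok(b"hello", noisy_channel))   # b'hello' after one retry
```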

Independent work. Students have cards with assignments to complete.

Self-study assignments

    The bandwidth of a communication channel is 100 Mbit/s. The channel is not affected by noise (e.g., a fiber-optic line). Determine how long it will take to transmit over this channel a text whose information volume is 100 Kb.

    The bandwidth of a communication channel is 10 Mbit/s. The channel is affected by noise, so the transmission code redundancy is 20%. Determine how long it will take to transmit over this channel a text whose information volume is 100 Kb. (A solution sketch follows these assignments.)
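A possible solution sketch for both assignments, assuming the common textbook conventions 1 Kb = 1024 bytes and 1 Mbit = 2^20 bits (adjust the constants if decimal prefixes are intended):

```python
# Solution sketch, assuming 1 Kb = 1024 bytes and 1 Mbit = 2**20 bits.

KBYTE = 1024 * 8          # bits in one kilobyte
MBIT = 2 ** 20            # bits in one megabit

# Task 1: noise-free channel, 100 Mbit/s, text of 100 Kb.
volume = 100 * KBYTE                  # 819 200 bits
t1 = volume / (100 * MBIT)            # = 0.0078125 s

# Task 2: 10 Mbit/s, 20% redundancy: 1.2 times more data is sent.
t2 = (volume * 1.2) / (10 * MBIT)     # = 0.09375 s

print(f"Task 1: {t1} s, Task 2: {t2} s")
```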

V. Lesson summary

Our lesson has come to an end. What new things did you learn in this lesson, and what have you learned to do?

VI. Reflection.

I suggest that you evaluate yourselves (I will read out the correct answers). Grades for the lesson.

VII. Homework setting

We represent the process of transferring information by means of a model in the form of the diagram shown in Fig. 3.

Fig. 3. Generalized model of an information transmission system

Consider the main elements that make up this model, as well as the information transformations that occur in it.

1. A source of information or messages (SI) is a material object or subject capable of accumulating, storing, transforming, and issuing information in the form of messages or signals of various physical natures. It can be a computer keyboard, a person, the analog output of a video camera, etc.

We will consider two types of information sources: if in a finite time interval the source creates a finite set of messages, it is discrete; otherwise it is continuous. We will dwell on sources in more detail in the next lesson.

Information in the form of an initial message passes from the output of the information source to the input of the encoder, which includes the source encoder (SE) and the channel encoder (CE).

2. Coder.

2.1. The source encoder converts a message into a primary signal: a set of elementary symbols.

Note that a code is a universal way of representing information during its storage, transmission, and processing: a system of unambiguous correspondences between the elements of messages and the signals with whose help these elements can be recorded. Encoding can always be reduced to an unambiguous conversion of characters of one alphabet into characters of another. Moreover, a code is a rule, a law, an algorithm by which this transformation is carried out.

A code is the complete set of all possible combinations of symbols of the secondary alphabet built according to this law. Combinations of characters belonging to a given code are called code words. In each specific case, all or only some of the code words belonging to the given code may be used. Moreover, there are codes of such high cardinality that it is practically impossible to list all their combinations. Therefore, by the word "code" we mean, first of all, the law by which the transformation is carried out, as a result of which we obtain code words whose full set belongs to the given code and not to some other code constructed according to a different law.

The symbols of the secondary alphabet, regardless of the base of the code, are only carriers of messages. Here a message is a letter of the primary alphabet, regardless of the specific physical or semantic content it reflects.

Thus, the goal of the source encoder is to present information in the most compact form. This is necessary in order to use the resources of the communication channel or storage device efficiently. Source coding is discussed in more detail in topic 3.

2.2. Channel encoder. When information is transmitted over a communication channel with interference, errors may appear in the received data. If such errors are small or occur rarely, the information can still be used by the consumer. With a large number of errors, the received information cannot be used.

Channel coding, or error-correcting (anti-jamming) coding, is a method of processing the transmitted data that reduces the number of errors arising during transmission over a noisy channel.

As a result, a sequence of code symbols is formed at the output of the channel encoder, called the code sequence. Channel coding is discussed in more detail in topic 5, as well as in the course "Theory of electrical communication".

It should be noted that neither error-correcting coding nor data compression is a mandatory operation in the transmission of information. These procedures (and the corresponding blocks in the structural diagram) may be absent. However, this can lead to very significant losses in the noise immunity of the system, a significant decrease in the transmission rate, and lower quality of information transmission. Therefore, practically all modern systems (with the possible exception of the simplest ones) include effective and noise-immune data coding.

3. Modulator. To transmit messages, specific physical qualitative characteristics are assigned to the symbols of the secondary alphabet. The process of acting on an encoded message in order to turn it into a signal is called modulation. The functions of the modulator are to match the source messages, or the code sequences generated by the encoder, with the properties of the communication line, and to enable the simultaneous transmission of a large number of messages over a common communication channel.

Therefore, the modulator must convert the source messages, or the corresponding code sequences, into signals (superimpose the messages onto signals) whose properties enable their efficient transmission over existing communication channels. At the same time, the signals belonging to the many information transmission systems operating, for example, in a common radio channel must be such that independent transmission of messages from all sources to all recipients is ensured. Different modulation methods are studied in detail in the course "Theory of electrical communication".

We can say that the purpose of the encoder and the modulator is to match the information source with the communication line.

4. A communication line is the medium in which the signals carrying information propagate. The communication channel and the communication line should not be confused: a communication channel is a set of technical means designed to transfer information from a source to a recipient.

Depending on the propagation medium, there are radio, wire, fiber-optic, acoustic, and other channels. There are many models describing communication channels with a greater or lesser degree of detail; in the general case, however, a signal passing through a communication channel is attenuated, acquires a certain time delay (or phase shift), and is contaminated with noise.

To increase the throughput of communication lines, messages from several sources can be transmitted over them simultaneously. This technique is called multiplexing. In this case, the messages of each source are transmitted through their own communication channel, although they share a common communication line.

Mathematical models of communication channels will be considered in the course "Theory of electrical communication". The information characteristics of communication channels will be discussed in detail within our discipline in topic 4.

5. Demodulator. Because of interference, the received (reproduced) message generally differs from the one sent. We will call the received message the estimate (meaning the estimate of the message).

To reproduce the estimate of the message, the receiver of the system must first, from the received oscillation and taking into account information about the signal form and modulation method used in transmission, obtain an estimate of the code sequence, called the received sequence. This procedure is called demodulation, detection, or signal reception. Demodulation should be performed so that the received sequence differs as little as possible from the transmitted code sequence. The issues of optimal signal reception in radio engineering systems are the subject of the TPP course.

6. Decoder.

6.1. Channel decoder. In general, the received sequences may differ from the transmitted code words, that is, they may contain errors. The number of such errors depends on the level of interference in the communication channel, on the transmission rate, on the chosen signal and modulation method, and also on the reception (demodulation) method. The task of the channel decoder is to detect and, if possible, correct these errors. The procedure for detecting and correcting errors in the received sequence is called channel decoding. The result of decoding is an estimate of the information sequence. The error-correcting code, the coding method, and the decoding method should be chosen so that as few uncorrected errors as possible remain at the output of the channel decoder.

Exceptional attention is currently paid to error-correcting coding and decoding in information transmission (and storage) systems, since this technique can significantly improve the quality of transmission. In many cases, when the requirements for the reliability of the received information are very high (in computer data transmission networks, in remote control systems, etc.), transmission without error-correcting coding is simply impossible.

6.2. Source decoder. Since the source information was encoded during transmission in order to represent it more compactly (or more conveniently) (data compression, economical coding, source coding), it must be restored to its original (or almost original) form from the received sequence. The recovery procedure is called source decoding; it can either be simply the inverse of the encoding operation (lossless encoding/decoding) or restore an approximate value of the original information. The restoration operation also includes, if necessary, reconstructing a continuous function from a set of discrete estimate values.

It must be said that economical coding has recently taken an increasingly prominent place in information transmission systems since, together with error-correcting coding, it has proved to be the most effective way to increase the speed and quality of transmission.

7. A recipient of information is a material object or subject that perceives information in all forms of its manifestation for the purpose of its further processing and use.

Recipients of information can be both people and technical means that accumulate, store, transform, transmit or receive information.

The first technical means of transmitting information over a distance was the telegraph, invented in 1837 by the American Samuel Morse. In 1876 the American A. Bell invented the telephone. Based on the discovery of electromagnetic waves by the German physicist Heinrich Hertz (1886), radio was invented by A. S. Popov in Russia in 1895 and, almost simultaneously, by G. Marconi in Italy in 1896. Television and the Internet appeared in the 20th century.

All of the listed technical methods of information communication are based on the transmission of a physical (electrical or electromagnetic) signal over a distance and are subject to certain general laws. These laws are studied by communication theory, which arose in the 1920s. Its mathematical apparatus, the mathematical theory of communication, was developed by the American scientist Claude Shannon.

Claude Elwood Shannon (1916-2001), USA

Claude Shannon proposed a model of the process of transferring information through technical communication channels, represented by a diagram.

Technical information transmission system

Coding here means any transformation of information coming from a source into a form suitable for its transmission over a communication channel. Decoding is the reverse transformation of the signal sequence.

The operation of such a scheme can be explained using the familiar process of talking on the phone. The source of information is the speaking person. The encoder is a telephone handset microphone, with the help of which sound waves (speech) are converted into electrical signals. The communication channel is the telephone network (wires, switches of telephone nodes through which the signal passes). The decoding device is the handset (earpiece) of the listening person - the receiver of information. Here, the incoming electrical signal turns into sound.

Modern computer systems for transmitting information - computer networks - work on the same principle. There is an encoding process that converts a binary computer code into a physical signal of the type that is transmitted over a communication channel. Decoding consists in converting the transmitted signal back into computer code. For example, when using telephone lines in computer networks, the encoding-decoding functions are performed by a device called a modem.



Channel bandwidth and information transfer rate

Developers of technical information transmission systems have to solve two interrelated problems: how to ensure the highest speed of information transmission and how to reduce the loss of information during transmission. Claude Shannon was the first scientist to tackle these problems and create a new science for that time - information theory.

K. Shannon defined a method for measuring the amount of information transmitted through communication channels. He introduced the concept of channel bandwidth as the maximum possible speed of information transfer. This speed is measured in bits per second (as well as kilobits per second and megabits per second).

The bandwidth of a communication channel depends on its technical implementation. For example, the following communication means are used in computer networks:

Telephone lines,

Electrical cable communication,

Fiber optic cable communication,

Radio communication.

The throughput of telephone lines is tens to hundreds of Kbit/s; the throughput of fiber-optic lines and radio communication lines is measured in tens and hundreds of Mbit/s.

Noise, noise protection

The term “noise” refers to all kinds of interference that distort the transmitted signal and lead to loss of information. Such interference arises primarily for technical reasons: poor quality of communication lines and poor isolation between different streams of information transmitted over the same channels. Sometimes, talking on the phone, we hear noise and crackling that make it hard to understand the interlocutor, or a conversation of completely different people is superimposed on ours.

The presence of noise leads to the loss of transmitted information. In such cases, noise protection is required.

First of all, technical methods of protecting communication channels from the effects of noise are used. For example, using shielded cable instead of bare wire; the use of various kinds of filters that separate the useful signal from noise, etc.

Claude Shannon developed a coding theory that provides methods of dealing with noise. One of the important ideas of this theory is that the code transmitted over the communication line should be redundant. Due to this, the loss of some part of the information during transmission can be compensated. For example, if you can hardly be heard when talking on the phone, then by repeating each word twice you have a better chance that the other person will understand you correctly.

However, the redundancy must not be made too large. That would lead to delays and higher communication costs. Coding theory allows one to obtain a code that is optimal: the redundancy of the transmitted information will be the minimum possible, and the reliability of the received information will be maximum.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The whole message is split into chunks, packets. For each packet a checksum is computed (the sum of binary digits) and sent with the packet. At the receiving point, the checksum of the received packet is recalculated, and if it does not match the initial sum, the transmission of the packet is repeated. This continues until the initial and final checksums match.

When considering the transfer of information in propaedeutic and basic computer science courses, this topic should first of all be discussed from the position of a person as a recipient of information. The ability to receive information from the surrounding world is the most important condition for human existence. The human senses are the information channels of the human body that connect a person with the external environment. On this basis, information is divided into visual, sound, olfactory, tactile, and gustatory. The rationale for the fact that taste, smell, and touch carry information to a person is as follows: we remember the smells of familiar objects and the taste of familiar food, and we recognize familiar objects by touch. And the contents of our memory are stored information.

Students should be told that in the animal world the informational role of the sense organs differs from that in humans. The sense of smell performs an important information function for animals. The heightened sense of smell of service dogs is used by law enforcement agencies to search for criminals, detect drugs, etc. The visual and sound perception of animals also differs from that of humans. For example, bats are known to hear ultrasound, while cats see in the dark (from a human perspective).

Within this topic, students should be able to give specific examples of the process of transferring information and to determine, for these examples, the source, the receiver of information, and the channels used for transferring it.

When studying computer science in high school, students should be introduced to the basic provisions of technical communication theory: the concepts of coding, decoding, information transfer rate, channel capacity, noise, noise protection. These issues can be considered within the framework of the topic “Technical means of computer networks”.

Representation of numbers

Numbers in mathematics

Number is the most important concept of mathematics, which took shape and developed over a long period of human history. People began working with numbers in primitive times. Initially, people operated only with positive integers, called natural numbers: 1, 2, 3, 4, ... For a long time it was believed that there is a largest number, "more than which the human mind cannot comprehend" (as Old Slavonic mathematical treatises put it).

The development of mathematics led to the conclusion that there is no largest number. From the mathematical point of view, the series of natural numbers is infinite, i.e., unbounded. With the appearance of the concept of a negative number in mathematics (R. Descartes in Europe in the 17th century; in India much earlier), it turned out that the set of integers is unbounded both "on the left" and "on the right". The mathematical set of integers is discrete and unlimited (infinite).

Isaac Newton introduced the concept of a real number into mathematics in the 18th century. Mathematically, the set of real numbers is infinite and continuous. It includes all the integers plus an infinite number of non-integers. Between any two points on the number axis lies an infinite set of real numbers. The concept of a real number is associated with the idea of a continuous number axis, any point of which corresponds to a real number.

Integer representation

In computer memory, numbers are stored in the binary number system (see "Number systems" 2). There are two forms of representing integers in a computer: unsigned integers and signed integers.

Unsigned integers are the set of non-negative numbers in the range [0, 2^k − 1], where k is the bit width of the memory cell allocated for the number. For example, if a 16-bit (2-byte) memory cell is allocated for an integer, the largest number stored in it will be 1111111111111111₂ (sixteen ones).

In the decimal number system this corresponds to: 2^16 − 1 = 65 535.

If all bits of the cell are zeros, the number is zero. Thus, a 16-bit cell can hold 2^16 = 65 536 different integers.

Signed integers are the set of positive and negative numbers in the range [−2^(k−1), 2^(k−1) − 1]. For example, for k = 16 the range of representation of integers is [−32 768, 32 767]. The most significant bit of the memory cell stores the sign of the number: 0 for a positive number, 1 for a negative one. The largest positive number, 32 767, has the representation 0111 1111 1111 1111.

For example, the decimal number 255, after being converted to binary and written into a 16-bit memory cell, has the internal representation 0000 0000 1111 1111.

Negative integers are represented in two's complement code. The two's complement code of a positive number N is the binary representation which, when added to the code of the number N, gives the value 2^k, where k is the number of bits in the memory cell. For example, the two's complement code for 255 is 1111 1111 0000 0001.

This is the representation of the negative number −255. Let's add the codes of the numbers 255 and −255: 0000 0000 1111 1111 + 1111 1111 0000 0001 = 1 0000 0000 0000 0000.

The one in the most significant bit "dropped out" of the 16-bit cell, so the sum became zero. But this is as it should be: N + (−N) = 0. The computer's processor performs subtraction as addition with the two's complement code of the subtrahend. Cell overflow (exceeding the limit values) does not interrupt program execution, so the programmer must know about this circumstance and take it into account!
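A short sketch checking this arithmetic for a 16-bit cell (k = 16): the two's complement code of −N is 2^k − N, and adding the codes of N and −N overflows to zero.

```python
# Two's complement in a 16-bit cell (k = 16): the code of -N is 2**k - N,
# and adding the codes of N and -N gives 2**k, whose leading 1 "drops out".

k = 16
N = 255
code_pos = N                          # 0000 0000 1111 1111
code_neg = (1 << k) - N               # 1111 1111 0000 0001, the code of -255
print(f"{code_neg:016b}")
assert (code_pos + code_neg) % (1 << k) == 0   # N + (-N) = 0

# The same code via the invert-and-add-one rule:
assert code_neg == ((~N & ((1 << k) - 1)) + 1)
```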

The format for representing real numbers in a computer is called the floating-point format. A real number R is represented as the product of a mantissa m and the base of the number system n raised to some power p, called the order: R = m × n^p.

Floating point representation is ambiguous. For example, for the decimal number 25.324, the following equalities hold:

25.324 = 2.5324 × 10^1 = 0.0025324 × 10^4 = 2532.4 × 10^−2, etc.

To avoid ambiguity, it was agreed to use the normalized floating-point representation in the computer. The mantissa in the normalized representation must satisfy the condition 0.1ₙ ≤ m < 1ₙ. In other words, the mantissa is less than one, and its first significant digit is not zero. In some cases the normalization condition is taken as 1ₙ ≤ m < 10ₙ.

In computer memory the mantissa is stored as an integer containing only the significant digits (the leading 0 and the point are not stored). Thus, the internal representation of a real number reduces to representing a pair of integers: the mantissa and the order.

Different types of computers use different variants of floating-point representation. Let us consider one variant of the internal representation of a real number in a four-byte memory cell.

The cell should contain the following information about the number: the sign of the number, the order and the significant digits of the mantissa.

The most significant bit of the first byte stores the sign of the number: 0 means plus, 1 means minus. The remaining 7 bits of the first byte contain the machine order. The next three bytes store the significant digits of the mantissa (24 bits).

Seven binary digits can hold binary numbers in the range from 0000000 to 1111111. This means that the machine order ranges from 0 to 127 (in decimal notation), 128 values in total. The order can obviously be either positive or negative, so it is reasonable to divide these 128 values equally between positive and negative orders: from −64 to 63.

The machine order is biased relative to the mathematical order and takes only positive values. The bias is chosen so that the minimum mathematical value of the order corresponds to zero.

The relationship between the machine order (Mp) and the mathematical order (p) is in this case expressed by the formula: Mp = p + 64.

This formula is written in the decimal system. In binary it is: Mp₂ = p₂ + 100 0000₂.

To write the internal representation of a real number, you must:

1) translate the modulus of the given number into the binary number system with 24 significant digits;

2) normalize the binary number;

3) find the machine order in the binary number system;

4) taking into account the sign of the number, write out its representation in a four-byte machine word.

Example. Write the internal floating-point representation of 250.1875.

Solution

1. Translate it into the binary number system with 24 significant digits:

250.1875₁₀ = 11111010.0011000000000000₂.

2. Write it in the form of a normalized binary floating-point number:

0.111110100011000000000000 × 10₂^1000.

Here the mantissa, the base of the number system (2₁₀ = 10₂), and the order (8₁₀ = 1000₂) are all written in binary.

3. Calculate the machine order in the binary number system:

Mp₂ = 1000₂ + 100 0000₂ = 100 1000₂.

4. Write the representation of the number in a four-byte memory cell, taking into account its sign:

0100 1000 1111 1010 0011 0000 0000 0000

Hexadecimal form: 48FA3000.
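The worked example can be verified with a small sketch of this educational 4-byte format (sign bit, 7-bit machine order with bias 64, 24-bit mantissa); note that this is the textbook format described above, not IEEE 754, and rounding edge cases are ignored.

```python
# Sketch of the educational 4-byte format described above: sign bit,
# 7-bit machine order with bias 64 (Mp = p + 64), 24-bit mantissa.
# Not IEEE 754; rounding edge cases and x == 0 are not handled.

def internal_representation(x: float) -> str:
    sign = 0 if x >= 0 else 1
    x = abs(x)
    p = 0                             # mathematical order
    while x >= 1:                     # normalize to 0.5 <= x < 1,
        x /= 2                        # i.e. a mantissa of the form 0.1...
        p += 1
    while x < 0.5:
        x *= 2
        p -= 1
    mantissa = round(x * 2 ** 24)     # 24 significant binary digits
    machine_order = p + 64            # Mp = p + 64
    word = (sign << 31) | (machine_order << 24) | mantissa
    return f"{word:08X}"

print(internal_representation(250.1875))   # 48FA3000
```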

The range of real numbers is much wider than the range of integers. Positive and negative numbers are located symmetrically about zero, so the maximum and minimum representable numbers are equal in absolute value.

The smallest number in absolute value is zero. The largest floating-point number in absolute value is the number with the largest mantissa and the largest order.

For a four-byte machine word, this number would be:

0.111111111111111111111111 × 10₂^111111.

After converting to the decimal number system, we get:

MAX = (1 − 2^−24) × 2^63 ≈ 10^19.

If, during calculations with real numbers, the result goes outside the permissible range, program execution is interrupted. This happens, for example, when dividing by zero or by a very small number close to zero.

Real numbers whose mantissa has more digits than the number of bits allocated for the mantissa in the memory cell are represented in the computer approximately (with a "cut-off" mantissa). For example, the rational decimal number 0.1 is represented in a computer approximately (rounded), because in the binary system its mantissa has an infinite number of digits. The consequence of this approximation is the error of machine calculations with real numbers.

The computer performs calculations with real numbers approximately. The error of such calculations is called machine rounding error.

The set of real numbers exactly representable in computer memory in floating-point form is limited and discrete. Discreteness is a consequence of the limited number of digits of the mantissa, as mentioned above.

The number of real numbers that can be exactly represented in computer memory can be calculated by the formula N = 2^t × (U − L + 1) + 1. Here t is the number of binary digits of the mantissa; U is the maximum value of the mathematical order; L is the minimum order value. For the representation considered above (t = 24, U = 63, L = −64) this gives N = 2 147 483 649.

The topic of representing numerical information in a computer is present in the standard both for basic school and for the senior grades.

In basic school (the basic course) it is enough to consider the representation of integers in a computer. This question can be studied only after the topic "Number systems". In addition, from the principles of computer architecture, students should know that the computer works with the binary number system.

When considering the representation of integers, the main attention should be paid to the limited range of integers and to the relationship between this range and the capacity k of the allocated memory cell: for positive (unsigned) numbers, [0, 2^k − 1]; for positive and negative (signed) numbers, [−2^(k−1), 2^(k−1) − 1].

Obtaining the internal representation of numbers should be worked through with examples. After that, students should solve similar problems independently, by analogy.

Example 1. Obtain the internal representation of the integer 1607, in signed format, in a 2-byte memory cell.

Solution

1) Convert the number to the binary number system: 1607₁₀ = 11001000111₂.

2) Padding with zeros on the left up to 16 digits, we obtain the internal representation of this number in the cell: 0000 0110 0100 0111.

It is worth showing how the hexadecimal form is used for concise notation of this code; it is obtained by replacing each group of four binary digits with one hexadecimal digit: 0647 (see "Number systems" 2).
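The same representation can be checked in a couple of lines (a sketch using Python format strings):

```python
# Internal representation of 1607 in a 2-byte cell, checked in Python.
n = 1607
print(f"{n:016b}")   # 0000011001000111  (binary, padded to 16 bits)
print(f"{n:04X}")    # 0647              (the same code in hexadecimal)
```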

More difficult is the problem of obtaining the internal representation of a negative integer (−N), its two's complement code. Show students the algorithm for this procedure:

1) get the internal representation of a positive number N;

2) obtain the inverse code of this number by replacing 0 with 1 and 1 with 0;

3) add 1 to the resulting number.

Example 2. Obtain the internal representation of the negative integer −1607 in a 2-byte memory cell.

Solution

1) The internal representation of the positive number 1607: 0000 0110 0100 0111.

2) The inverse code: 1111 1001 1011 1000.

3) Adding 1 gives the internal representation of −1607: 1111 1001 1011 1001 (hexadecimal form: F9B9).

It is helpful to show students what the internal representation of the smallest negative number looks like. In a two-byte cell it is −32 768.

1) It is easy to translate 32 768 into the binary number system, since 32 768 = 2^15. Hence, in binary it is: 1000 0000 0000 0000.

2) Write the inverse code: 0111 1111 1111 1111.

3) Add one to this binary number, obtaining: 1000 0000 0000 0000.

The one in the first bit denotes the minus sign. One should not think that the resulting code is minus zero: it is −32 768 in two's complement form. Such are the rules of machine representation of integers.

Having shown this example, invite the students to prove for themselves that adding the codes of the numbers 32 767 + (−32 768) results in the code of the number −1.
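A sketch of this check: in a 16-bit cell, the codes of 32 767 and −32 768 sum to sixteen ones, which is exactly the two's complement code of −1.

```python
# Check: adding the 16-bit codes of 32767 and -32768 yields the code of -1.
MASK = 0xFFFF                      # a 16-bit cell

code_a = 32767                     # 0111 1111 1111 1111
code_b = 32768                     # 1000 0000 0000 0000, i.e. -32768
s = (code_a + code_b) & MASK
print(f"{s:016b}")                 # 1111111111111111
assert s == 0xFFFF                 # the two's complement code of -1
```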

According to the standard, the representation of real numbers should be studied in high school. When studying computer science in grades 10-11 at the basic level, it is enough to tell students qualitatively about the main features of computer arithmetic with real numbers: the limited range and the interruption of the program when it is exceeded; the error of machine calculations with real numbers; and the fact that a computer performs calculations with real numbers more slowly than with integers.

Study at the specialized level requires a detailed analysis of the ways of representing real numbers in floating-point format and of the peculiarities of computer calculations with real numbers. A very important problem here is estimating the calculation error and preventing loss of significance and program interruption. Detailed material on these issues is available in the study guide.

Number systems

A number system is a way of writing numbers and the corresponding rules for operating on them. The various number systems that existed in the past and that are used today can be divided into non-positional and positional. The signs used to write numbers are called digits.

In non-positional number systems, the value of a digit does not depend on its position in the number.

An example of a non-positional number system is the Roman system (Roman numerals). In the Roman system, Latin letters are used as digits: I (1), V (5), X (10), L (50), C (100), D (500), M (1000).

Example 1. The number CCXXXII is the sum of two hundreds, three tens, and two units, and equals two hundred thirty-two.

In Roman numerals, digits are written from left to right in decreasing order, and their values are added. If a smaller digit is written to the left of a larger one, its value is subtracted. (A conversion sketch in Python follows the examples below.)

Example 2.

VI = 5 + 1 = 6; IV = 5 - 1 = 4.

Example 3.

MCMXCVIII = 1000 + (−100 + 1000) + (−10 + 100) + 5 + 1 + 1 + 1 = 1998.
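A sketch of these rules in code, using the digit values listed above (values are added, and a smaller digit standing before a larger one is subtracted):

```python
# Sketch of reading a Roman numeral: values are added, but a smaller
# digit standing before a larger one is subtracted.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        # subtract if a larger digit follows, otherwise add
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

assert roman_to_int("CCXXXII") == 232
assert roman_to_int("MCMXCVIII") == 1998
```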

In positional number systems, the value denoted by a digit in the notation of a number depends on its position. The number of digits used is called the base of the positional number system.

The number system used in modern mathematics is the positional decimal system. Its base is ten, because all numbers are written using ten digits:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9.

The positional nature of this system is easy to understand using the example of any multi-digit number. For example, in the number 333 the first three means three hundreds, the second three tens, and the third three units.

To write numbers in a positional system with base n, one needs an alphabet of n digits. Usually, for n < 10 the first n Arabic numerals are used, and for n > 10 letters are added to the ten Arabic numerals. Here are examples of the alphabets of several systems: binary (0, 1), octal (0-7), hexadecimal (0-9, A-F).

If it is necessary to indicate the base of the system to which a number belongs, it is written as a subscript to that number. For example:

101101₂, 3671₈, 3B8F₁₆.

In the base-q (q-ary) number system, the place values are successive powers of the number q: q units of any digit position form one unit of the next position. Writing numbers in the q-ary system requires q different signs (digits) representing the numbers 0, 1, ..., q − 1. The number q itself is written in the q-ary system as 10.
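A sketch of writing a non-negative integer in an arbitrary base q, with letters serving as digits above 9, as described above:

```python
# Sketch: writing a non-negative integer in an arbitrary base q (2..36),
# using letters for digits above 9 as described above.
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base(n: int, q: int) -> str:
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, q)
        out.append(DIGITS[r])
    return "".join(reversed(out))

assert to_base(45, 2) == "101101"      # 101101 in base 2
assert to_base(1977, 8) == "3671"      # 3671 in base 8
assert to_base(15247, 16) == "3B8F"    # 3B8F in base 16
assert to_base(5, 5) == "10"           # the number q is always written as 10
```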


Transfer of information through technical communication channels

Shannon's scheme

American scientist, one of the founders of information theory, Claude Shannon proposed a diagram of the process of transmitting information through technical communication channels (Fig. 1.3).

Fig. 1.3. Diagram of a technical information transmission system

The operation of such a scheme can be explained using the familiar process of talking on the phone. The source of information is the speaking person. The encoder is the microphone of the handset, with the help of which sound waves (speech) are converted into electrical signals. The communication channel is the telephone network (wires, switches of telephone nodes through which the signal passes). The decoder is the telephone receiver (earpiece) of the listening person, the receiver of information. Here the incoming electrical signal is turned into sound.

Here, information is transmitted in the form of a continuous electrical signal. This is analog communication.

Encoding and decoding information

Coding is understood as any transformation of information coming from a source into a form suitable for its transmission over a communication channel.

At the dawn of the era of radio communication, Morse code was used. The text was transformed into a sequence of dots and dashes (short and long signals) and broadcast. A person receiving such a transmission by ear had to be able to decode it back into text. Even earlier, Morse code was used in telegraph communication. Transmitting information by means of Morse code is an example of discrete communication.
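For illustration, a minimal sketch of Morse encoding; the tiny code-table fragment here is an assumption for the example, not the full Morse alphabet:

```python
# Sketch of Morse encoding for a few letters (only a fragment of the
# code table is included here, for illustration).
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}

def to_morse(text: str) -> str:
    return " ".join(MORSE[ch] for ch in text.upper())

print(to_morse("SOS"))   # ... --- ...
```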

Currently, digital communication is widely used, in which the transmitted information is encoded in binary form (0 and 1 being the binary digits) and then decoded into text, images, or sound. Digital communication is obviously also discrete.

Noise and noise protection. Shannon's coding theory

Information is transmitted through communication channels by means of signals of various physical natures: electrical, electromagnetic, light, acoustic. The information content of a signal consists in the value, or in the change of value, of some physical quantity of it (current strength, brightness of light, etc.). The term "noise" refers to various kinds of interference that distort the transmitted signal and lead to loss of information. Such interference arises primarily for technical reasons: poor quality of communication lines, poor isolation between different streams of information transmitted over the same channels. Often, when talking on the phone, we hear noise and crackling that make it difficult to understand the interlocutor, or other people's conversation is superimposed on ours. In such cases noise protection is required.

First of all, technical methods of protecting communication channels from the effects of noise are applied. Such methods vary, from simple to very complex: for example, using shielded cable instead of bare wire, or using various kinds of filters that separate the useful signal from the noise.

K. Shannon developed a special coding theory that provides methods for dealing with noise. One of its important ideas is that the code transmitted over the communication line must be redundant. Due to this, the loss of some part of the information during transmission can be compensated. For example, if you can hardly be heard when talking on the phone, then by repeating each word twice you have a better chance that the other person will understand you correctly.

However, the redundancy must not be made too large. That would lead to delays and higher communication costs. Shannon's coding theory allows one to obtain a code that is optimal: the redundancy of the transmitted information will be the minimum possible, and the reliability of the received information will be maximum.

In modern digital communication systems, the following technique is often used to combat the loss of information during transmission. The whole message is split into portions, packets. For each packet a checksum is calculated (the sum of binary digits), which is transmitted along with the packet. At the place of reception, the checksum of the received packet is recalculated, and if it does not coincide with the original, the transmission of the packet is repeated. This happens until the original and final checksums match.

The main points in brief

Any technical information transmission system consists of a source, a receiver, encoding and decoding devices, and a communication channel.

Coding is the transformation of information coming from a source into a form suitable for its transmission over a communication channel. Decoding is the inverse transformation.

Noise is interference that leads to the loss of information.

Coding theory has developed methods of representing transmitted information that reduce its loss under the influence of noise.

Questions and tasks

1. Name the main elements of the information transfer scheme proposed by K. Shannon.

2. What is encoding and decoding when transmitting information?

3. What is noise? What are its implications for the transmission of information?

4. What are the ways to deal with noise?


Today information spreads so quickly that there is not always enough time to comprehend it. Most people rarely think about how and by what means it is transmitted, much less imagine the scheme of information transmission.

Basic concepts

Information transfer is the physical process of moving data (signs and symbols) in space. From the standpoint of data transmission, it is a pre-planned, technically supported process of moving units of information in a set time from a source to a receiver over an information channel, or data transmission channel.

A data transmission channel is a collection of means (or a medium) for the dissemination of data. In other words, it is the part of the information transmission scheme that ensures the movement of information from the source to the recipient and, under certain conditions, in the opposite direction.

There are many classifications of data transmission channels. The main ones are radio channels, optical, acoustic (wireless), and wired channels.

Technical channels of information transmission

Technical data transmission channels proper include radio channels, fiber-optic channels, and cable. A cable can be coaxial or twisted pair: the former is an electrical cable with a copper wire inside, while the latter consists of pairwise-insulated twisted pairs of copper wires in a dielectric sheath. These cables are quite flexible and easy to use. Optical fiber consists of fiber-optic strands that transmit light signals by reflection.

The main characteristics of channels are throughput and noise immunity. Throughput is customarily understood as the amount of information that can be transmitted over the channel in a given time. Noise immunity is the channel's resistance to external interference (noise).

Understanding Data Transfer

If no particular application area is specified, the general scheme of information transmission looks simple: it includes three components, a "source", a "receiver", and a "transmission channel".

Shannon's scheme

Claude Shannon, an American mathematician and engineer, stood at the origins of information theory. He proposed a scheme for transmitting information through technical communication channels.

This scheme is not difficult to understand, especially if you imagine its elements as familiar objects and phenomena. For example, the source of information is a person on the phone. The handset microphone is the encoder, converting speech (sound waves) into electrical signals. The data transmission channel in this case is the communication nodes, in general the whole telephone network leading from one telephone set to another. The receiving subscriber's handset acts as the decoding device, converting the electrical signal back into sound, that is, into speech.

In this scheme of the information transfer process, the data are represented as a continuous electrical signal. Such communication is called analog.

Coding concept

Coding is the transformation of information sent by a source into a form suitable for transmission over the communication channel used. The clearest example of coding is Morse code, in which information is converted into a sequence of dots and dashes, that is, short and long signals. The receiving party must decode this sequence.

Modern technologies use digital communication, in which information is converted (encoded) into binary data, that is, 0s and 1s, forming a binary alphabet. Such communication is discrete.

Interference in information channels

The data transmission scheme also includes noise. The concept of "noise" here means interference that distorts the signal and, as a result, causes its loss. Interference has various causes; for example, information channels may be poorly protected from one another. To prevent interference, various technical protection methods are used: filters, shielding, etc.

K. Shannon developed and proposed using coding theory to combat noise. The idea is that, since noise causes loss of information, the transmitted data must be redundant, but not so redundant as to reduce the transmission rate.

In digital communication channels, information is divided into parts, packets, for each of which a checksum is calculated. This sum is sent with each packet. The receiver of information recalculates the sum and accepts the packet only if it matches the original one. Otherwise the packet is sent again, and so on until the sent and received checksums match.