Shcherbakova S.M., Krupina T.A. Basic concepts for Internet users on modern interconnected computer networks.

"[Exam in Computer Science] [Ticket # 22]

Local and global computer networks. Addressing in networks.

A computer network is a collection of computers and various devices, providing information exchange between computers in the network without using any intermediate storage media.

Computer networks arose from the practical need of users whose computers are remote from each other to work with the same information. Networks give users the opportunity not only to exchange information quickly, but also to share printers and other peripheral devices, and even to work on documents simultaneously.

All the variety of computer networks can be classified according to a group of features:

  • Territorial prevalence;
  • Departmental affiliation;
  • Information transfer rate;
  • Type of transmission medium;

In terms of territorial distribution, networks can be local, global, and regional.

By affiliation, departmental and state networks are distinguished. Departmental networks belong to a single organization and are located on its premises.

According to the speed of information transfer, computer networks are divided into low-, medium- and high-speed.

By the type of transmission medium, networks are divided into coaxial, twisted-pair, and fiber-optic networks, as well as networks that transmit information over radio channels or in the infrared range.

Local computer networks.

A local network unites computers installed in one room (for example, a school computer class, consisting of 8-12 computers) or in one building (for example, in a school building, several dozen computers installed in various subject rooms can be connected into a local network).

In small local networks, all computers are usually equal, that is, users independently decide which resources of their computer (disks, directories, files) to make public over the network. These networks are called peer-to-peer networks.

If more than ten computers are connected to a local network, a peer-to-peer network may not be productive enough. To increase performance, as well as to provide greater reliability when storing information on the network, some computers are specially allocated for storing files or application programs. These computers are called servers, and the local area network is called a server-based network.
Each computer connected to a local network must have a special card (network adapter). Computers (network adapters) are connected to each other using cables.

Network topology.

The general scheme for connecting computers in local networks is called network topology. Network topologies can be different.

Ethernet networks can be in both bus and star topologies. In the first case, all computers are connected to one common cable (bus), in the second, there is a special central device (hub), from which "rays" go to each computer, i.e. each computer is connected to its own cable.

The bus structure is simpler and more economical, as it does not require an additional device and uses less cable. But it is very sensitive to cabling faults: if the cable is damaged in even one place, the entire network has problems, and the location of the fault is difficult to find.

In this sense, the "star" is more robust. A damaged cable is a problem only for the one computer it connects and does not affect the operation of the network as a whole, and the fault is easy to isolate.

In a network with a "ring" structure, information is transmitted between stations around the ring, being relayed by each network controller. Relaying is performed through buffers built on random-access memory, so if one network controller fails, the operation of the entire ring may be disrupted.
The advantage of the ring structure is the ease of implementation of the devices, and the disadvantage is low reliability.

Regional computer networks.

Local networks cannot provide shared access to information for users located, for example, in different parts of a city. Regional networks, which unite computers within one region (a city, a country, a continent), come to the rescue.

Corporate computer networks.

Many organizations interested in protecting information from unauthorized access (for example, military, banking, etc.) create their own, so-called corporate networks. A corporate network can unite thousands and tens of thousands of computers located in different countries and cities (for example, the Microsoft Corporation network, MSN).

Global computer network Internet.

In 1969, the ARPAnet computer network was created in the United States, uniting the computer centers of the Department of Defense and a number of academic organizations. This network was designed for a narrow purpose: mainly to learn how to maintain communication in the event of a nuclear attack and to help researchers exchange information. As this network grew, many other networks were created and developed. Even before the advent of the personal computer era, the creators of ARPAnet began developing the Internetting Project. The success of this project led to the following results. First, the largest internet network in the United States (with a lowercase i) was created. Second, various options for the interaction of this network with a number of other US networks were tested. This created the prerequisites for the successful integration of many networks into a single world network. Such a "network of networks" is now everywhere called the Internet (in Russian publications, the Russian-language spelling of the word is also widely used).

Currently, tens of millions of computers connected to the Internet store a huge amount of information (hundreds of millions of files, documents, etc.) and hundreds of millions of people use the information services of the global network.

The Internet is a global computer network that unites many local, regional and corporate networks and includes tens of millions of computers.

Every local or corporate network usually has at least one computer that has a persistent connection to the Internet using a high-bandwidth communication line (Internet server).

The reliability of the global network is ensured by the redundancy of communication lines: as a rule, servers have more than two communication lines connecting them to the Internet.

The Internet is based on more than one hundred million servers that are constantly connected to the network.

Hundreds of millions of users connect to Internet servers through local networks or dial-up telephone lines.

Internet addressing

In order to communicate with some computer on the Internet, you need to know its unique Internet address. There are two equivalent address formats that differ only in their form: IP address and DNS address.

IP address

An IP address is made up of four blocks of numbers, separated by periods. It may look like this:
84.42.63.1

Each block can contain a number from 0 to 255. Thanks to this organization, you can get over four billion possible addresses. But since some addresses are reserved for special purposes, and blocks are configured depending on the type of network, the actual number of possible addresses is slightly less. Nevertheless, it is more than enough for the future expansion of the Internet.

The concept of a "host" is closely related to the concept of an IP address. A host is any device that uses the TCP/IP protocol to communicate with other equipment. This can be not only a computer, but also a router, a hub, etc. All these devices connected to the network must have their own unique IP addresses.
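
To make the structure described above concrete, here is a minimal Python sketch (not part of the original text; the helper name parse_ipv4 is made up) that splits a dotted-decimal IP address into its four blocks and checks that each lies in the range 0-255:

def parse_ipv4(address):
    # Split a dotted-decimal IPv4 address into its four numeric blocks.
    blocks = address.split(".")
    if len(blocks) != 4:
        raise ValueError("an IPv4 address must have exactly four blocks")
    numbers = [int(block) for block in blocks]
    if any(n < 0 or n > 255 for n in numbers):
        raise ValueError("each block must be a number from 0 to 255")
    return numbers

print(parse_ipv4("84.42.63.1"))   # [84, 42, 63, 1]
print(256 ** 4)                   # 4294967296 - over four billion possible addresses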

DNS address

The IP address has a numeric form, since computers use it in their work. But it is very difficult to remember, so the domain name system, DNS, was developed. A DNS address consists of more user-friendly letter abbreviations, which are also separated by periods into separate information blocks (domains), for example: www.klyaksa.net.

If you enter a DNS address, it is first sent to a so-called name server, which translates it into a 32-bit machine-readable IP address.
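
As a small illustration of this translation (a sketch, not part of the original text; the host name is just an example), the Python standard library can ask the configured name server to resolve a DNS name:

import socket

# The name server translates the letter-based DNS name into a numeric IP address.
ip_address = socket.gethostbyname("www.example.com")
print(ip_address)   # prints the dotted-decimal IP address returned by the name server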

Domain names

A DNS address usually has three components (although there can be any number of them).

The domain name system has a hierarchical structure: top-level domains - second-level domains, and so on. Top-level domains are of two types: geographic (two-letter - each country has its own code) and administrative (three-letter).

Russia owns a geographic domain ru.

The Klyaks@.net portal has registered the second-level domain klyaksa in the administrative top-level domain net.

The names of computers that are servers on the Internet include the fully qualified domain name and the actual computer name. So the full address of the Klyaks@.net portal looks like www.klyaksa.net.

gov - government agency or organization
mil - military institution
com - commercial organization
net - network organization
org - an organization that does not belong to one of the above

Among the frequently used country-identifier domains, the following can be distinguished:

at - Austria
au - Australia
ca - Canada
ch - Switzerland
de - Germany
dk - Denmark
es - Spain
fi - Finland
fr - France
it - Italy
jp - Japan
nl - Netherlands
no - Norway
nz - New Zealand
ru - Russia
se - Sweden
uk - United Kingdom
za - South Africa

E-mail address

Using an IP address or DNS address, you can access any computer you need on the Internet. If you want to send a message by e-mail, however, specifying only these addresses is not enough, since the message must reach not only the required computer, but also a specific user of that system.

A special protocol, SMTP (Simple Mail Transfer Protocol), is used to deliver and receive e-mail messages. The computer through which e-mail messages are transmitted to the Internet is called an SMTP server. Messages are delivered by e-mail to the computer specified in the address, which is responsible for further delivery. The username and the name of the corresponding SMTP server are therefore separated by the "@" sign. This sign is called a "commercial at" (in Russian jargon, a "dog"). Thus, you address your message to a specific user of a specific computer. For example:
ivanov@klyaksa.net. Here ivanov is the user for whom the message is intended, and klyaksa.net is the SMTP server on which his mailbox is located. The mailbox stores messages sent to that address.
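
A minimal sketch of how such an address breaks apart (the address below is purely illustrative, not a real mailbox):

# Everything before "@" names the mailbox owner; everything after it names
# the SMTP server responsible for delivering mail to that mailbox.
address = "ivanov@klyaksa.net"
user, server = address.split("@", 1)
print(user)    # ivanov - the user for whom the message is intended
print(server)  # klyaksa.net - the server on which the mailbox is located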

A URL (Uniform Resource Locator) is the address of some information on the Internet. It has the following format:
resource type://node address/other information
The following types of resources are considered the most common:

ftp:// - FTP server
gopher:// - Gopher menu
http:// - WWW (Web) address
mailto: - e-mail address
news: - Usenet newsgroup
telnet:// - a computer that can be logged into using telnet

The resource-type portion of a URL always ends with a colon and, usually, two forward slashes. What follows is the address of the site you want to visit, followed by a slash as a delimiter. In principle, this is quite enough. But if you want to view a specific document on a given site and know its exact location, you can include its path in the URL. Below are some URLs and their meanings:

http://www.klyaksa.net - the main page of the information and educational portal Klyaks@.net

ftp://ftp.microsoft.com/dirmap.txt - the file named dirmap.txt on the FTP server of the Microsoft company
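
The same decomposition can be demonstrated with the Python standard library (an illustrative sketch using the Microsoft FTP example above):

from urllib.parse import urlparse

url = urlparse("ftp://ftp.microsoft.com/dirmap.txt")
print(url.scheme)   # ftp - the resource type
print(url.netloc)   # ftp.microsoft.com - the node (server) address
print(url.path)     # /dirmap.txt - the location of the document on that server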

So, the following types of addresses are used on the Internet: IP addresses, DNS (domain) addresses, e-mail addresses, and URLs.

A computer network (data transmission network) is a communication system of computers and/or computer equipment (servers, routers and other devices). Various physical phenomena can be used to transmit information, as a rule various types of electrical signals or electromagnetic radiation.

By purpose, computer networks are divided into:

1. computing
2. information
3. mixed (information and computing)

Computing networks are intended mainly for solving users' tasks with the exchange of data between their subscribers. Information networks are focused mainly on providing information services to users. Mixed networks combine the functions of the first two.

Classification

Various characteristics are used for classifying computer networks; choosing them means selecting, from the existing variety, those that give the classification scheme the following mandatory qualities:

  • the ability to classify all, both existing and future, computer networks;
  • differentiation of substantially different networks;
  • unambiguous classification of any computer network;
  • clarity, simplicity and practical feasibility of the classification scheme.

A certain conflict between these requirements makes the task of choosing a rational classification scheme for computer networks rather complicated; it has not yet found an unambiguous solution. Computer networks are mainly classified by the characteristics of their structural and functional organization.

By the size of the covered territory

  • Personal Area Network (PAN)
  • Local Area Network (LAN)
    • HomePNA
  • Campus Area Network (CAN) - combining several buildings
  • Metropolitan Area Network (MAN)
  • Wide Area Network (WAN)

By type of functional interaction

  • Client-server
  • Mixed network
  • Peer-to-peer network
  • Multi-rank networks

By type of network topology

  • Star
  • Ring
  • Lattice
  • Mixed topology
  • Fully connected topology

By functional purpose

  • Storage area networks
  • Server farms
  • Process control networks
  • SOHO networks

By network OS

Types of networks: peer-to-peer and server-based. Advantages and disadvantages.

In a peer-to-peer network, all computers are equal: there is no hierarchy among computers and there is no dedicated server. Typically, each computer functions as both a client and a server; in other words, there is no separate computer responsible for administering the entire network. All users independently decide what data on their computer to make publicly available over the network.

Peer-to-peer networks are also called workgroups. A workgroup is a small team, therefore, in peer-to-peer networks, most often there are no more than 30 computers. Peer-to-peer networks are relatively simple. Since each computer is both a client and a server, there is no need for a powerful central server or other components required for more complex networks. Peer-to-peer networks are generally cheaper than server-based networks, but require more powerful (and more expensive) computers. Peer-to-peer networking tends to have lower performance and security requirements for network software than dedicated server networks. Dedicated servers function exclusively as servers, not clients or workstations. We will talk about this below. Operating systems such as Microsoft Windows NT Workstation, Microsoft Windows 9X, Microsoft Windows 2000 / XP have built-in support for peer-to-peer networks. Therefore, to install a peer-to-peer network, no additional software is required.

Implementation

Peer-to-peer is characterized by a number of standard solutions:

  • computers are located on users' desktops;
  • users themselves act as administrators and ensure the protection of information;
  • a simple cabling system is used to connect computers to a network.

Feasibility of application

Peer-to-peer networking is fine where:

  • the number of users does not exceed 30 people;
  • users are located compactly;
  • data protection issues are not critical;
  • no significant expansion of the firm and therefore of the network is expected in the foreseeable future.

If these conditions are met, then most likely the choice of the peer-to-peer network will be correct. Since in a peer-to-peer network, each computer functions as both a client and a server, users must have a sufficient level of knowledge to work as both users and administrators of their computer.

Server-based networks

If more than 30 users are connected to the network, then a peer-to-peer network, in which computers act as both clients and servers, may not perform well enough. Therefore, most networks use dedicated servers. A dedicated server is one that functions only as a server (without acting as a client or workstation). Dedicated servers are specially optimized for fast processing of requests from network clients and for managing the protection of files and directories. Server-based networking has become the industry standard, and such networks will usually be cited as examples.

As the size of the network and the volume of network traffic increase, the number of servers must increase. Spreading tasks across multiple servers ensures that each task is performed in the most efficient way possible.

In peer-to-peer networks, each computer functions as a client and as a server. For a small group of users, such networks easily provide separation of data and peripherals. However, because peer-to-peer administration is decentralized, advanced data protection is difficult to achieve.

Server-based networks are most effective when large amounts of resources and data are shared. The administrator can manage data protection by observing the functioning of the network. In such networks, there can be one or several servers, depending on the volume of network traffic, the number of peripheral devices, etc. There are also combined networks that combine the properties of both types of networks. Such networks are quite popular, although for effective work they require more careful planning, therefore, user training should be higher.

The main requirement for a network is that it perform its main function: providing users with the potential ability to access the shared resources of all computers connected to the network. All other requirements - performance, reliability, compatibility, manageability, security, extensibility and scalability - relate to the quality with which this core task is performed.

While all of these requirements are very important, the concept of "quality of service" (QoS) of a computer network is often interpreted more narrowly: it covers only the two most important characteristics of the network, performance and reliability.

Regardless of the chosen indicator of network quality of service, there are two approaches to ensuring it. The first approach will obviously seem the most natural from the point of view of a network user. It consists in the network (more precisely, the personnel servicing it) guaranteeing the user a certain numerical value of the service-quality indicator. For example, the network can guarantee user A that any of his packets sent to user B will be delayed by the network by no more than 150 ms, or that the average bandwidth between users A and B will not be lower than 5 Mbit/s while allowing traffic bursts of 10 Mbit/s for intervals of no more than 2 seconds. Frame relay and ATM technologies make it possible to build networks that guarantee quality of service in terms of performance.

The second approach is that the network serves users according to their priorities. That is, the quality of service depends on the degree of privilege of the user or the group of users to which he belongs. The quality of service is not guaranteed in this case, but only the user's privilege level is guaranteed. This service is called best effort service. The network tries to serve the user as well as possible, but does not guarantee anything. For example, local networks built on switches with frame prioritization work according to this principle.

Performance

Potentially high performance is one of the main properties of distributed systems, which include computer networks. This property is provided by the ability to parallelize work between several computers on the network. Unfortunately, this opportunity is not always realized.

There are several main characteristics of network performance:

  • response time;
  • throughput;
  • transmission delay and delay variation.

Network response time is an integral measure of network performance from the user's point of view. It is this characteristic that the user has in mind when he says, "Today the network is slow."

In general, response time is defined as the time interval between the occurrence of a user request for a network service and the receipt of a response to this request.

Obviously, the value of this indicator depends on the type of service being accessed, on which user is accessing which server, and on the current state of the network elements: the load on the segments, switches and routers through which the request passes, the load on the server, and so on.

Therefore, it makes sense to also use a weighted average estimate of the network response time, averaging this indicator across users, servers and time of day (on which network load largely depends).
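
A hedged sketch of such a weighted-average estimate (the figures below are invented for illustration): each measured response time is weighted by the number of requests it represents:

# (average response time in ms, number of requests it covers)
measurements = [
    (120, 500),   # daytime, heavily used file server
    (40, 200),    # evening, lightly loaded server
    (300, 50),    # peak hour, overloaded segment
]
total_requests = sum(count for _, count in measurements)
weighted_average = sum(time * count for time, count in measurements) / total_requests
print(round(weighted_average, 1), "ms")   # about 110.7 ms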

Bandwidth reflects the amount of data transmitted by the network or part of it per unit of time. Bandwidth is no longer a user characteristic, since it speaks about the speed of performing internal network operations - the transfer of data packets between network nodes through various communication devices. But it directly characterizes the quality of performance of the main function of the network - transporting messages - and therefore is more often used in analyzing network performance than response time.

Throughput is measured either in bits per second or in packets per second. Throughput can be instantaneous, maximum or average.

The average throughput is calculated by dividing the total volume of transmitted data by the time of their transmission, and a sufficiently long period of time is chosen - an hour, a day or a week.

The instantaneous throughput differs from the average in that a very small time interval is selected for averaging - for example, 10 ms or 1 s.

Maximum throughput is the highest instantaneous throughput recorded during the observation period.
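
The three figures can be computed from the same traffic counters; the following sketch (with invented per-second byte counts) shows the difference between them:

# Bytes transferred in each one-second interval of an observation period.
bytes_per_second = [1_200_000, 900_000, 2_500_000, 300_000, 1_800_000]

average = sum(bytes_per_second) / len(bytes_per_second)   # over the whole period
instantaneous = bytes_per_second[-1]                      # over the last 1 s interval
maximum = max(bytes_per_second)                           # highest instantaneous value observed

for name, value in [("average", average), ("instantaneous", instantaneous), ("maximum", maximum)]:
    print(name, round(value * 8 / 1_000_000, 2), "Mbit/s")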

Bandwidth can be measured between any two nodes or points on the network, for example, between a client computer and a server, between the ingress and egress ports of a router. To analyze and configure a network, it is very useful to know the data on the throughput of individual network elements.

Transmission delay is defined as the delay between the moment a packet arrives at the input of a network device or part of the network and the moment it appears at the output of that device. This performance parameter is close in meaning to the network response time, but differs in that it always characterizes only the network stages of data processing, without the processing delays of the network's computers. Typically, the quality of the network is characterized by the values of the maximum transmission delay and the delay variation. Not all types of traffic are sensitive to transmission delays, at least not to the delays typical for computer networks, which usually do not exceed hundreds of milliseconds and only rarely reach several seconds. Delays of this order in packets generated by a file service, e-mail service or print service have little impact on the quality of those services from the point of view of the network user. On the other hand, the same delays in packets carrying voice or video data can lead to a significant decrease in the quality of the information provided to the user: the appearance of an "echo" effect, the inability to understand some words, image jitter, etc.

Throughput and transmission delays are independent parameters, so a network can have, for example, high throughput but introduce significant delays in the transmission of each packet. An example of such a situation is a communication channel formed by a geostationary satellite. The throughput of this channel can be very high, for example 2 Mbit/s, while the transmission delay is always at least 0.24 s, which is determined by the signal propagation speed (about 300,000 km/s) and the channel length (72,000 km).
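
The delay in this example follows directly from the figures given:

# Propagation delay of the satellite channel = channel length / signal speed.
channel_length_km = 72_000       # path via the geostationary satellite
signal_speed_km_per_s = 300_000  # roughly the speed of light
delay_s = channel_length_km / signal_speed_km_per_s
print(delay_s)                   # 0.24 s, regardless of the channel's bandwidth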

Reliability and safety

One of the initial goals of creating distributed systems, which include computer networks, was to achieve greater reliability compared to individual computers.

It is important to distinguish between several aspects of reliability. For technical devices, reliability indicators such as mean time between failures, probability of failure and failure rate are used. However, these indicators are suitable for assessing the reliability of simple elements and devices that can be in only two states, operable or inoperative. Complex systems consisting of many elements can, in addition to the operable and inoperative states, also have other, intermediate states, which these characteristics do not take into account. For this reason, a different set of characteristics is used to assess the reliability of complex systems.

Availability, or readiness, refers to the fraction of time during which the system can be used. Availability can be improved by introducing redundancy into the structure of the system: the key elements of the system must exist in several copies, so that if one of them fails, the others will ensure the functioning of the system.
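
A simple illustrative calculation of this effect (not from the original text; it assumes independent failures and an availability of 0.95 for a single element): keeping n copies of a key element raises overall availability to 1 - (1 - a)**n.

a = 0.95   # assumed availability of one copy of a key element
for n in (1, 2, 3):
    system_availability = 1 - (1 - a) ** n
    print(n, "copy/copies:", round(system_availability, 6))
# 1 copy: 0.95, 2 copies: 0.9975, 3 copies: 0.999875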

For a system to be considered highly reliable, it must at least have high availability, but that is not enough. It is also necessary to ensure the safety of the data and to protect it from distortion. In addition, data consistency must be maintained: for example, if several copies of the data are stored on several file servers to improve reliability, their identity must be constantly ensured.

Since the network operates on the basis of a mechanism for transmitting packets between end nodes, one of the characteristic indicators of reliability is the probability of delivering a packet to the destination node without distortion. Along with this characteristic, other indicators can be used: the probability of packet loss (for any reason - a router buffer overflow, a checksum mismatch, the absence of a usable path to the destination node, etc.), the probability of distortion of a single bit of transmitted data, and the ratio of lost packets to delivered ones.

Another aspect of overall reliability is security, that is, the ability of the system to protect data from unauthorized access. This is much more difficult to achieve in a distributed system than in a centralized one. In networks, messages are transmitted over communication lines, often passing through public premises in which wiretapping devices can be installed. Another vulnerability is personal computers left unattended. In addition, there is always a potential threat that the network's protection will be breached by unauthorized users if the network has access to global public networks.

Another characteristic of reliability is fault tolerance. In networks, fault tolerance refers to the ability of a system to hide the failure of its individual elements from the user. For example, if copies of a database table are stored simultaneously on several file servers, users may simply not notice the failure of one of them. In a fault-tolerant system, the failure of one of its elements leads to some decrease in the quality of its work (degradation), not to a complete shutdown. Thus, if one of the file servers in the previous example fails, only the access time to the database increases, due to the reduced degree of query parallelization, but on the whole the system will continue to perform its functions.

Extensibility and scalability

The terms extensibility and scalability are sometimes used interchangeably, but this is incorrect - each of them has a well-defined independent meaning.

Extensibility means the ability to add individual network elements (users, computers, applications, services) relatively easily, to increase the length of network segments, and to replace existing equipment with more powerful equipment. It is fundamentally important that this ease of expansion can sometimes be ensured only within very limited bounds. For example, an Ethernet LAN built on a single segment of thick coaxial cable is easily extensible in the sense that new stations can be connected without difficulty. However, such a network has a limit on the number of stations: it should not exceed 30-40. Although the network physically allows a larger number of stations (up to 100) to be connected to a segment, doing so often drastically degrades performance. The presence of such a limitation is a sign of poor scalability in a system that is otherwise easily extensible.

Scalability means that the network can grow the number of nodes and the length of the links over a very wide range, while the performance of the network does not degrade. To ensure the scalability of the network, it is necessary to use additional communication equipment and structure the network in a special way. For example, a multi-segment network built using switches and routers and having a hierarchical structure of links has good scalability. Such a network can include several thousand computers and at the same time provide each network user with the desired quality of service.

Transparency

Transparency of a network is achieved when the network is presented to users not as a set of individual computers interconnected by a complex system of cables, but as a single traditional computer with a time-sharing system. The famous Sun Microsystems slogan, "The Network is the Computer", speaks of just such a transparent network.

Transparency can be achieved at two different levels: the user level and the programmer level. At the user level, transparency means that to work with remote resources the user employs the same commands and familiar procedures as for working with local resources. At the program level, transparency means that an application needs the same calls to access remote resources as to access local ones. Transparency at the user level is easier to achieve, since all the peculiarities of the procedures associated with the distributed nature of the system are masked from the user by the programmer who creates the application. Transparency at the application level requires that all the details of distribution be hidden by the network operating system.

The network must hide all the peculiarities of operating systems and differences in computer types. A Macintosh user should be able to access resources supported by a UNIX system, and a UNIX user should be able to share information with Windows 95 users. The vast majority of users do not want to know anything about internal file formats or the syntax of UNIX commands.

A user of an IBM 3270 terminal should be able to exchange messages with users of a network of personal computers without having to delve into the secrets of hard-to-remember addresses.

The concept of transparency can be applied to various aspects of the network.

For example, location transparency means that the user is not required to know where software and hardware resources such as processors, printers, files and databases are located. The name of a resource must not include information about its location, so names like mashinel:prog.c or \\ftp_serv\pub are not transparent. Similarly, migration transparency means that resources must be free to move from one computer to another without changing their names. Another possible aspect of transparency is parallelism transparency, which means that computations are parallelized automatically, without the programmer's participation, while the system itself distributes the parallel branches of the application among the processors and computers of the network. At present it cannot be said that the property of transparency is fully inherent in many computer networks; it is rather a goal towards which the developers of modern networks are striving.

Support for different types of traffic

Computer networks were originally designed to share user access to computer resources: files, printers, etc. The traffic generated by these traditional computer network services has its own characteristics and is significantly different from message traffic in telephone networks or, for example, in cable TV networks. However, the 1990s saw the penetration of digital media traffic into computer networks, representing speech and video.

Computer networks began to be used for organizing video conferencing, training and entertainment based on video films, etc. Naturally, dynamic transmission of multimedia traffic requires different algorithms and protocols and, accordingly, other equipment. Although the share of multimedia traffic is still small, it has already begun its penetration into both global and local networks, and this process, obviously, will continue at an increasing speed.

The main feature of the traffic generated during the dynamic transmission of voice or image is the presence of strict requirements for the synchronization of the transmitted messages. For high-quality reproduction of continuous processes, which are sound vibrations or changes in light intensity in a video image, it is necessary to obtain measured and encoded signal amplitudes with the same frequency with which they were measured on the transmitting side. If the messages are delayed, there will be distortions.

At the same time, computer data traffic is characterized by an extremely uneven intensity of messages entering the network, in the absence of strict requirements on the synchronization of their delivery. For example, when a user works with text stored on a remote disk, a random flow of messages is generated between the remote and local computers, depending on the user's editing actions, and delivery delays within certain (and, from a computer's point of view, rather broad) limits have little effect on the quality of service for the network user. All computer communication algorithms, the corresponding protocols and communication equipment were designed for exactly this "bursty" traffic pattern, so the need to transmit multimedia traffic requires fundamental changes to both protocols and equipment. Today, almost all new protocols provide support for multimedia traffic to one degree or another.

Combining traditional computer traffic and multimedia traffic in one network is especially difficult. Transmitting exclusively multimedia traffic over a computer network, although associated with certain difficulties, causes fewer problems. But the coexistence of two types of traffic with opposite quality-of-service requirements is a much harder problem. Usually, the protocols and equipment of computer networks treat multimedia traffic as optional, so the quality of its service is poor. Great efforts are being spent today on creating networks that do not infringe on the interests of either type of traffic. Closest to this goal are networks based on ATM technology, whose developers took into account from the start the case of different types of traffic coexisting in one network.

Controllability

Network manageability implies the ability to centrally monitor the state of the main elements of the network, identify and resolve problems that arise during the operation of the network, perform performance analysis and plan the development of the network. Ideally, network management is a system that monitors, controls, and manages every element of the network, from the simplest to the most sophisticated devices, while treating the network as a whole, rather than as a disparate collection of separate devices.

A good management system monitors the network and, upon detecting a problem, initiates an action, corrects the situation, and notifies the administrator of what happened and what steps were taken. At the same time, the management system must accumulate data on the basis of which the development of the network can be planned. Finally, the management system must be manufacturer-independent and have a user-friendly interface that allows all actions to be performed from a single console.

In tactical tasks, administrators and technicians face the daily challenges of keeping the network up and running.

These tasks require quick solutions: the network staff must respond promptly to fault messages coming from users or from automatic network monitoring tools. Gradually, more general problems of performance, network configuration, fault handling and data security become noticeable; they require a strategic approach, that is, network planning. Planning also includes forecasting changes in user requirements for the network, questions of the use of new applications, new network technologies, and so on.

The usefulness of the management system is especially pronounced in large networks: corporate or public global. Without a control system, such networks require the presence of qualified maintenance specialists in every building in every city where the network equipment is installed, which ultimately leads to the need to maintain a huge staff of maintenance personnel.

Currently, there are many unsolved problems in the field of network management systems. There is a clear shortage of truly convenient, compact and multi-protocol network management tools. Most existing tools do not manage the network at all, but only monitor its operation: they watch the network but do not take active measures if something has happened or is about to happen to it. There are few scalable systems capable of serving both department-scale and enterprise-scale networks; very many systems manage only individual network elements and do not analyze the network's ability to perform high-quality data transfer between its end users.

Compatibility

Compatibility, or integrability, means that a network can include a wide variety of software and hardware; that is, different operating systems supporting different communication protocol stacks can coexist in it, along with hardware and applications from different manufacturers. A network consisting of elements of different types is called heterogeneous, and if a heterogeneous network works without problems, it is integrated. The main way to build integrated networks is to use modules made in accordance with open standards and specifications.

1.2. ISO/OSI Model

From the fact that a protocol is an agreement adopted by two interacting entities, in this case two computers operating on a network, it does not at all follow that it is necessarily a standard. But in practice, when implementing networks, standard protocols tend to be used. These can be proprietary, national or international standards.

The International Organization for Standardization (ISO) has developed a model that clearly defines the various layers of interaction between systems, gives them standard names, and specifies what work each layer should do. This model is called the Open Systems Interconnection (OSI) model, or the ISO/OSI model.

The OSI model divides communication into seven levels, or layers (Figure 1.1). Each layer deals with one specific aspect of interaction. Thus, the interaction problem is decomposed into seven particular problems, each of which can be solved independently of the others. Each layer supports interfaces with the layers above and below it.

Fig. 1.1. The ISO/OSI Open Systems Interconnection model

The OSI model describes only system communications, not end-user applications. Applications implement their own communication protocols by accessing system tools. It should be borne in mind that an application can take over the functions of some of the upper layers of the OSI model; in that case, when internetworking is needed, it directly accesses the system tools that perform the functions of the remaining lower layers of the OSI model.

An end-user application can use system communication tools not only to organize a dialogue with another application running on another machine, but also simply to obtain services of a particular network service, for example, accessing remote files, receiving mail, or printing on a shared printer.

So, let's say an application makes a request to an application-layer service, such as a file service. Based on this request, the application-layer software generates a message in a standard format, into which it places service information (a header) and, possibly, the data to be transmitted. This message is then passed to the presentation layer. The presentation layer adds its own header to the message and passes the result down to the session layer, which in turn adds its own header, and so on. Some protocol implementations provide not only a header but also a trailer in the message. Finally, the message reaches the lowest, physical layer, which actually carries it over the communication lines.
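
A toy sketch of this encapsulation (layer names only, not real protocol formats; the message text is invented):

layers = ["application", "presentation", "session", "transport", "network", "data link"]

message = "file request"                        # data handed to the application layer
for layer in layers:
    message = "[" + layer + " hdr]" + message   # each layer prepends its own header
print(message)   # what the physical layer finally transmits bit by bit over the line

# On the receiving machine the process runs in reverse: each layer strips and
# processes "its" header and passes the remainder up to the layer above.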

When a message arrives over the network at another machine, it moves sequentially up from layer to layer. Each layer analyzes, processes and removes the header of its own layer, performs the functions corresponding to that layer, and passes the message to the layer above.

In addition to the term "message", network professionals use other names for a unit of data exchange. ISO standards use the term Protocol Data Unit (PDU) for the protocols of any layer. In addition, the names frame, packet and datagram are often used.

ISO / OSI Model Layer Functions

Physical layer. This layer deals with the transmission of bits over physical channels such as coaxial cable, twisted pair cable, or fiber optic cable. This level is related to the characteristics of physical data transmission media, such as bandwidth, noise immunity, characteristic impedance, and others. At the same level, the characteristics of electrical signals are determined, such as requirements for pulse edges, voltage or current levels of the transmitted signal, the type of coding, and the signal transmission rate. In addition, the types of connectors and the purpose of each contact are standardized here.

Physical layer functions are implemented in all devices connected to the network. From the computer side, the physical layer functions are performed by the network adapter or serial port.

An example of a physical layer protocol is the 10Base-T Ethernet specification, which defines unshielded category 3 twisted-pair cable as the transmission medium, with a characteristic impedance of 100 ohms, an RJ-45 connector, a maximum physical segment length of 100 meters, Manchester coding for representing data on the cable, and other characteristics of the medium and electrical signals.

Link layer. The physical layer simply transfers bits. It does not take into account that in some networks, where communication lines are used (shared) alternately by several pairs of interacting computers, the physical transmission medium may be busy. Therefore, one task of the link layer is to check the availability of the transmission medium. Another task of the link layer is to implement error detection and correction mechanisms. To do this, the link layer groups bits into sets called frames. The link layer ensures that each frame is transmitted correctly by placing a special bit sequence at the beginning and end of each frame to mark it, and it also computes a checksum by summing all the bytes of the frame in a specific way and appending the checksum to the frame. When a frame arrives, the receiver computes the checksum of the received data again and compares the result with the checksum in the frame. If they match, the frame is considered correct and is accepted; if the checksums do not match, an error is recorded.
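
A simplified sketch of this framing idea (real link-layer protocols use CRCs and bit-level delimiters; the flag byte and the one-byte sum below are only an illustration):

FLAG = b"\x7e"   # marker placed at the beginning and end of every frame

def make_frame(payload):
    checksum = sum(payload) % 256               # one-byte sum of all payload bytes
    return FLAG + payload + bytes([checksum]) + FLAG

def frame_is_correct(frame):
    payload, checksum = frame[1:-2], frame[-2]
    return sum(payload) % 256 == checksum       # receiver recomputes and compares

frame = make_frame(b"hello")
print(frame_is_correct(frame))                        # True - frame accepted
print(frame_is_correct(frame[:-2] + b"\x00" + FLAG))  # False - checksum mismatch, error recorded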

The link-layer protocols used in local networks have a certain structure of connections between computers and ways of addressing them. Although the link layer ensures the delivery of a frame between any two nodes of the local network, it does this only in a network with a completely defined topology of links, exactly the topology for which it was designed. Common bus, ring, and star are typical topologies supported by LAN data-link protocols. Examples of link layer protocols are Ethernet, Token Ring, FDDI, 100VG-AnyLAN.

In local area networks, link-layer protocols are used by computers, bridges, switches and routers. In computers, link layer functions are implemented jointly by network adapters and their drivers.

In wide area networks, which rarely have a regular topology, the data link layer provides the exchange of messages between two neighboring computers connected by a single communication line. Examples of point-to-point protocols (as such protocols are often called) are the widely used PPP and LAP-B protocols.

Network layer. This layer serves to form a unified transport system that unites several networks with different principles of information transfer between end nodes. Let's consider the functions of the network layer using local networks as an example. The link-layer protocol of a local network ensures data delivery between any nodes only in a network with the appropriate typical topology. This is a very severe limitation that does not allow building networks with a developed structure, for example networks that combine several enterprise networks into a single network, or highly reliable networks with redundant connections between nodes. In order, on the one hand, to preserve the simplicity of data transfer procedures for typical topologies and, on the other hand, to allow the use of arbitrary topologies, an additional network layer is used. At this layer the concept of a "network" is introduced. In this case, a network is understood as a set of computers interconnected in accordance with one of the standard typical topologies and using one of the link-layer protocols defined for that topology for data transmission.

Thus, within the network, data delivery is regulated by the link layer, while the network layer is responsible for delivering data between networks.

Network layer messages are usually called packets. When organizing packet delivery at the network layer, the concept of a network number is used. In this case, the recipient's address consists of a network number and the number of a computer within that network.
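
A short sketch of how an address splits into these two parts (a /24 network mask is assumed purely for illustration):

import ipaddress

interface = ipaddress.ip_interface("84.42.63.1/24")
print(interface.network)         # 84.42.63.0/24 - the network number
print(int(interface.ip) & 0xFF)  # 1 - the number of the computer within that network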

Networks are interconnected by special devices called routers. A router is a device that collects information about the topology of the interconnections and, based on it, forwards network-layer packets to the destination network. In order to transfer a message from a sender located in one network to a recipient located in another network, a number of transits (hops) between networks must be made, each time choosing a suitable route. Thus, a route is a sequence of routers through which a packet passes.

The problem of choosing the best path is called routing, and solving it is the main task of the network layer. The problem is complicated by the fact that the shortest path is not always the best. Often the criterion for choosing a route is the transmission time along that route; it depends on the bandwidth of the communication channels and the traffic intensity, which may change over time. Some routing algorithms try to adapt to load changes, while others make decisions based on long-term averages. A route can also be selected by other criteria, for example transmission reliability.

At the network layer, two kinds of protocols are defined. The first kind defines the rules for transferring end-node data packets from a node to a router and between routers. These are the protocols usually meant when people talk about network-layer protocols. The network layer also includes another kind of protocol, called routing information exchange protocols. Routers use these protocols to collect information about the topology of the interconnections. Network-layer protocols are implemented by operating-system software modules, as well as by the software and hardware of routers.

Examples of network layer protocols are the IP internetworking protocol of the TCP/IP stack and the IPX internetworking protocol of the Novell stack.

Transport layer. On the way from sender to receiver, packets can be garbled or lost. While some applications have their own error handling facilities, there are some that prefer to deal with a reliable connection straight away. The job of the transport layer is to ensure that applications or the upper layers of the stack — application and session — can transfer data with the degree of reliability they require. The OSI model defines five classes of service provided by the transport layer. These types of services are distinguished by the quality of the services provided: urgency, the ability to restore an interrupted connection, the availability of multiplexing means for multiple connections between different application protocols via a common transport protocol, and most importantly, the ability to detect and correct transmission errors such as distortion, loss and duplication of packets.

The choice of transport-layer service class is determined, on the one hand, by the extent to which the problem of reliability is already solved by the applications themselves and by the protocols above the transport layer, and on the other hand by how reliable the data transport system in the network as a whole is. So, for example, if the quality of the communication channels is very high and the probability of errors not detected by lower-layer protocols is small, it is reasonable to use one of the lightweight transport-layer services that are not burdened with numerous checks, acknowledgements and other means of increasing reliability. If the underlying transport facilities are initially very unreliable, it is advisable to turn to the most developed transport-layer service, which uses the maximum means of detecting and eliminating errors: preliminary establishment of a logical connection, monitoring of message delivery using checksums and cyclic packet numbering, setting delivery timeouts, and so on.

As a rule, all protocols from the transport layer and above are implemented by the software of the network's end nodes, that is, by components of their network operating systems. Examples of transport protocols include the TCP and UDP protocols of the TCP/IP stack and the SPX protocol of the Novell stack.
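
The difference between a reliable and a lightweight transport service is visible even in the standard socket API; a minimal sketch (not tied to any particular application):

import socket

tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP: reliable, connection-oriented
udp_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: lightweight, no delivery guarantees

print(tcp_socket.type, udp_socket.type)
tcp_socket.close()
udp_socket.close()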

Session layer. The session layer provides dialogue control in order to record which side is currently active, and it also provides synchronization facilities. The latter allow checkpoints to be inserted into long transfers, so that in case of a failure you can go back to the last checkpoint instead of starting over. In practice, few applications use the session layer, and it is rarely implemented.

Presentation layer. This layer ensures that the information conveyed by the application layer will be understood by the application layer on another system. If necessary, the presentation layer converts data formats into some common presentation format and, on reception, performs the reverse conversion. In this way, application layers can overcome, for example, syntactic differences in data representation. Data encryption and decryption can also be performed at this layer, thanks to which the secrecy of data exchange is ensured for all application services at once. An example of a protocol operating at the presentation layer is Secure Sockets Layer (SSL), which provides secure messaging for the application-layer protocols of the TCP/IP stack.

Application layer. The application layer is really just a collection of various protocols with which network users access shared resources such as files, printers or hypertext Web pages, and also organize their joint work, for example by means of an e-mail protocol. The unit of data that the application layer operates on is usually called a message.

There is a very wide variety of application layer protocols. Let's take as examples a few of the most common file service implementations: NCP in the Novell NetWare operating system, SMB in Microsoft Windows NT, and NFS, FTP and TFTP, which are part of the TCP/IP stack.

The OSI model, although very important, is only one of many communication models. These models and their associated protocol stacks can differ in the number of layers, their functions, message formats, services provided at the upper layers, and other parameters.

Standard communication protocol stacks

Modules that implement the protocols of neighboring layers and are located in the same node also interact with each other in accordance with well-defined rules and using standardized message formats. These rules are called an interface. An interface defines the set of services that a given layer provides to the adjacent layer.

The means of each layer must implement, firstly, its own protocol and, secondly, interfaces with the neighboring layers.

A hierarchically organized set of protocols sufficient for organizing the interaction of nodes in a network is called a communication protocol stack.

Communication protocols can be implemented both in software and hardware. Lower-layer protocols are often implemented by a combination of software and hardware, while upper-layer protocols are usually purely software.

Protocols are implemented not only by computers, but also by other network devices - hubs, bridges, switches, routers, etc. Depending on the type of device, it must have built-in tools that implement a particular set of protocols.

Network operating system structure

A network operating system is the backbone of any computer network. Each computer in a network is largely autonomous, therefore a network operating system in a broad sense is understood as a set of operating systems of individual computers interacting with the purpose of exchanging messages and sharing resources according to uniform rules - protocols. In a narrow sense, a network operating system is the operating system of an individual computer that provides it with the ability to work on a network.

Fig. 1.1. Network OS structure

In the network operating system of an individual machine, several parts can be distinguished (Figure 1.1):

  • Tools for managing local computer resources: functions for distributing RAM between processes, process scheduling and dispatching, control of processors in multiprocessor machines, management of peripheral devices and other functions of local OS resource management.
  • Means of providing the computer's own resources and services for general use - the server part of the OS (the server). These tools provide, for example, the locking of files and records, which is necessary for sharing them; maintaining directories of network resource names; processing requests for remote access to the computer's own file system and databases; and managing queues of remote users' requests to its peripheral devices.
  • Means of requesting access to remote resources and services and of using them - the client part of the OS (the redirector). This part recognizes requests to remote resources from applications and users and redirects them to the network; the request comes from the application in a local form and is transmitted over the network in a different form that meets the server's requirements. The client part also receives responses from servers and converts them to the local format, so that local and remote requests are indistinguishable to the application.
  • Communication means of the OS, with the help of which messages are exchanged on the network. This part provides addressing and buffering of messages, selection of a message transmission route over the network, transmission reliability, etc., that is, it is a means of transporting messages.

Depending on the functions assigned to a particular computer, either the client or the server part may be absent in its operating system.

Figure 1.2 shows the interaction of network components. Here computer 1 plays the role of a "pure" client and computer 2 the role of a "pure" server; accordingly, the first machine has no server part and the second has no client part. The figure separately shows the client-part component, the redirector. It is the redirector that intercepts all requests from applications and analyzes them. If a request is issued for a resource of this computer, it is forwarded to the corresponding subsystem of the local OS; if it is a request to a remote resource, it is forwarded to the network. In this case, the client part converts the request from the local form to the network format and passes it to the transport subsystem, which is responsible for delivering messages to the specified server. The server part of the operating system of computer 2 receives the request, transforms it and passes it for execution to its local OS. After the result is obtained, the server turns to the transport subsystem and sends a response to the client that issued the request. The client part converts the result into the appropriate format and passes it to the application that issued the request.

Fig. 1.2. Interaction of network operating system components when computers communicate
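
The redirector's decision logic described above can be illustrated with a minimal sketch; all names here (Redirector, is_local, the request dictionary format) are hypothetical and only outline the idea:

```python
# Minimal sketch of a redirector: decide whether a request targets a local
# or a remote resource and dispatch it accordingly. All names are illustrative.

class Redirector:
    def __init__(self, local_fs, transport):
        self.local_fs = local_fs        # local OS file subsystem (assumed object)
        self.transport = transport      # communication (transport) subsystem (assumed object)

    def open(self, path):
        if self.is_local(path):
            # Local resource: hand the request to the local OS subsystem.
            return self.local_fs.open(path)
        # Remote resource: convert the request to a network format and
        # let the transport subsystem deliver it to the server.
        server, remote_path = self.parse_remote(path)
        request = {"op": "open", "path": remote_path}
        response = self.transport.send(server, request)
        # Convert the server's reply back to the local format expected by the
        # application, so local and remote requests look the same to it.
        return response["handle"]

    @staticmethod
    def is_local(path):
        return not path.startswith("\\\\")   # UNC-style paths go to the network

    @staticmethod
    def parse_remote(path):
        server, _, remote_path = path.lstrip("\\").partition("\\")
        return server, remote_path
```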

In practice, there are several approaches to building network operating systems (Figure 1.3).

Fig. 1.3. Options for building network operating systems

The first network operating systems were a combination of an existing local operating system and a network shell built on top of it. A minimum of the network functions needed by the network shell, which performed the main network work, was built into the local OS. An example of this approach is the use of the MS-DOS operating system on every machine in the network (starting with its third version, MS-DOS has built-in functions such as file and record locking, which are needed for file sharing). The principle of building a network OS as a network shell over a local OS is also used in modern operating systems such as LANtastic or Personal Ware.

However, a more efficient approach is to develop operating systems intended from the outset for work in a network. The network functions of such an OS are deeply embedded in the main modules of the system, which ensures their logical consistency, ease of operation and modification, and high performance. An example of such an OS is Microsoft's Windows NT, which, thanks to its built-in network facilities, provides higher performance and information security than the LAN Manager network OS from the same company (a joint development with IBM), which is an add-on over the local OS/2 operating system.

The main functions of network operating systems include:

  • directory and file management;
  • resource management;
  • communication functions;
  • protection against unauthorized access;
  • ensuring fault tolerance;
  • network management.

Currently, the three most widespread network operating systems are UNIX, Windows NT and Novell NetWare.
UNIX is used mainly in large corporate networks, since it is characterized by high reliability and easy network scalability. UNIX provides a number of commands, and programs supporting them, for networking. First, there are the ftp and telnet commands, which implement file exchange and remote-host emulation on the basis of the TCP/IP protocols. Second, there are the UUCP protocol, commands and programs, designed for asynchronous modem communication over telephone lines between remote UNIX nodes in corporate and local networks.
Windows NT includes a server part (Windows NT Server) and a client part (Windows NT Workstation) and thus supports client/server networks. Windows NT is typically used in medium-sized networks.
Novell NetWare consists of a server part and shells hosted on client nodes. It gives users the ability to share files, printers and other hardware, and it contains a directory service, a shared distributed database of users and network resources. This OS is more often used in small networks.

When choosing network software, the following factors should be considered first:

  • what kind of network it supports: peer-to-peer, server-based, or both;
  • what is the maximum number of users allowed (it is better to take with a margin of at least 20%);
  • how many servers can be included and what types of servers are possible;
  • what is the compatibility with different operating systems and different computers, as well as with other network facilities;
  • what is the level of software performance in different modes of operation;
  • what is the degree of reliability of work, what are the permitted access modes and the degree of data protection;
  • and perhaps most importantly, what is the cost of the software.

Department networks

Department networks are networks used by a relatively small group of employees working in one department of an enterprise. These employees perform common tasks such as accounting or marketing. It is believed that a department can have up to 100-150 employees.

The main purpose of a department network is the sharing of local resources such as applications, data, laser printers and modems. Typically, department networks have one or two file servers and no more than thirty users (Figure 8.3), and they are not divided into subnets. Most of the enterprise's traffic is localized within these networks. Department networks are usually built on a single network technology - Ethernet or Token Ring - and most often use one, or at most two, types of operating system. The small number of users allows peer-to-peer network operating systems, such as Windows 98, to be used in department networks.

Network management tasks at the department level are relatively simple: adding new users, fixing simple failures, installing new nodes and installing new versions of software. Such a network can be managed by an employee who devotes only part of his time to the duties of an administrator. Most often, the department's network administrator has no special training; he is simply the person in the department who understands computers best, so network administration naturally falls to him.

There is another type of network close to department networks - workgroup networks. These are very small networks of up to 10-20 computers. The characteristics of workgroup networks do not differ much from those of the department networks described above. Properties such as simplicity and homogeneity are most pronounced here, while department networks can in some cases approach the next largest type of network - campus networks.

Campus networks

Campus networks get their name from the English word campus, a university grounds. It was on the territory of university campuses that several small networks often had to be combined into one large network. Now the name is no longer tied to student campuses but is used for the networks of any enterprise or organization.

Campus networks (Fig. 8.4) unite many networks of different departments of the same enterprise within a single building or within one territory covering an area of several square kilometers. Global connections, however, are not used in campus networks. The services of such a network include interaction between department networks, access to shared enterprise databases, and access to shared fax servers, high-speed modems and high-speed printers. As a result, the employees of each department gain access to some of the files and resources of the networks of other departments. Campus networks provide access to corporate databases regardless of the types of computers on which they reside.

It is at the campus network level that the problems of integrating heterogeneous hardware and software arise. The types of computers, network operating systems and network hardware may differ in each department; hence the complexity of managing campus networks. Their administrators must be more highly qualified, and the tools for day-to-day network management must be more effective.

Enterprise networks

Corporate networks are also called enterprise-wide networks, which corresponds to the literal translation of the term "enterprise-wide networks" used in English-language literature for this type of network. Enterprise (corporate) networks unite a large number of computers over the entire territory of an individual enterprise. They can be intricately interconnected and can cover a city, a region or even a continent. The number of users and computers can be measured in thousands, and the number of servers in hundreds; the distances between the networks of individual territories are such that global connections have to be used (Fig. 8.5). To connect remote local networks and individual computers in a corporate network, various telecommunication means are used, including telephone channels, radio channels and satellite communication. A corporate network can be pictured as "islands of local networks" floating in a telecommunications environment.

An indispensable attribute of such a complex and large-scale network is a high degree of heterogeneity: it is impossible to satisfy the needs of thousands of users with a single type of software and hardware. A corporate network will necessarily use different types of computers - from mainframes to personal computers - several types of operating systems and many different applications. The heterogeneous parts of the corporate network must work as a whole, giving users the most convenient and simple access to all the resources they need.

Enterprise (corporate) networks unite a large number of computers across all the territories of an individual enterprise. A corporate network is characterized by:

  • scale - thousands of user computers, hundreds of servers, huge volumes of data stored and transmitted over communication lines, a variety of applications;
  • a high degree of heterogeneity - different types of computers, communication equipment, operating systems and applications;
  • use of global connections - the networks of branches are connected using telecommunication means, including telephone channels, radio channels, satellite communications.

The emergence of corporate networks is a good illustration of the well-known postulate about the transition from quantity to quality. When the separate networks of a large enterprise with branches in different cities and even countries are connected into a single network, many quantitative characteristics of the united network cross a certain critical threshold beyond which a new quality begins. Under these conditions, the existing methods and approaches to solving the traditional problems of smaller networks turned out to be unsuitable for corporate networks. Tasks and problems came to the fore that in the networks of workgroups, departments and even campuses were either of secondary importance or did not appear at all. An example is the simplest task (for small networks) of maintaining account information about network users.

Windows NT was the continuation of the OS/2 project that Microsoft undertook after it parted ways with IBM. David Cutler, who had extensive experience in operating-system development at DEC (the VAX VMS OS), was invited to lead the Windows NT project.

From the very beginning, Windows NT was planned as an OS designed to serve as a server. Windows NT is a fully 32-bit, object-oriented operating system built on top of a microkernel. The latter circumstance made it possible to make the OS available on a large number of hardware platforms with CISC and RISC processors, including symmetric multiprocessor architectures. In later versions, however, support narrowed mainly to the Intel x86 (Pentium) architecture. The OS architecture is shown in Figure 5.1.

The implementation of the microkernel concept in Windows NT is that the OS consists of server processes, running in user mode, which directly service user processes, and a kernel-mode part of the system, which performs low-level and critical operations at the request of those server processes.

The kernel schedules the processor's work and synchronizes the operation of processes and threads. The kernel is resident and non-preemptible. It is object-based, that is, it provides a low-level basis for certain OS objects that can be used by higher-level components. Kernel objects are divided into two groups: control objects and dispatch objects. The main control object is the process, which consists of an address space, a set of objects accessible to the process, and a set of threads of control. Some other control objects are the interrupt, the asynchronous procedure call, the deferred procedure call, and so on. Dispatch objects are characterized by signal states and control the scheduling and synchronization of operations. Examples of dispatch objects: thread, semaphore, event, mutual exclusion (mutex for user mode and mutant for kernel mode), and others.

The kernel implements the basic process and thread scheduling policy (although it can be changed by the subsystem servers). In total, Windows NT has 32 priority levels, divided into four classes. At startup, a process receives the default priority level assigned to its class (these defaults are also summarized in the sketch after the list):

  • for a real-time class - level 24;
  • for the high class - level 13;
  • for a normal class, level 9 for an interactive process, or level 7 for a background process;
  • for a deferred class - level 4.
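
A minimal sketch of the default base priorities quoted in the list above; the class names follow the text rather than the exact Win32 constants:

```python
# Default base priority levels per class, as quoted in the text above.
# (Illustrative only; the real Win32 priority-class constants map to
# these base levels internally.)
DEFAULT_BASE_PRIORITY = {
    "real-time": 24,
    "high": 13,
    "normal (interactive)": 9,
    "normal (background)": 7,
    "deferred": 4,
}

def base_priority(process_class):
    """Return the default base priority for a process class, or raise KeyError."""
    return DEFAULT_BASE_PRIORITY[process_class]

print(base_priority("high"))   # -> 13
```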

The executive subsystem is the upper layer of the kernel; it presents kernel services to the environment subsystems and other servers. The components of the executive subsystem are listed below.

Types of computer networks

A computer network is a combination of two or more computers through data transmission channels. The purpose of any network is the shared, efficient use of hardware and software resources and access to information resources. Computer networks occupy an increasingly important place in the life of mankind. Networks can combine the information resources of both small enterprises and large organizations whose premises are far from each other, sometimes even located in different countries. This determines the way computers are connected to each other and, accordingly, the type of network.

By the nature of their use on the network, computers are divided into servers and workstations.

A server is a specially dedicated computer intended for file sharing, remote launching of applications, processing of requests for information from databases, and providing communication with common external devices: printers, modems, CD-ROM drives.

A workstation (or client) is a personal computer that uses the services provided by the servers.

There are two main approaches (Fig. 1) to the interaction of computers in a network.

First approach: the bulk of the computing and information processing is transferred to the server, and the client is responsible only for a small part of the work that does not require large resources. This approach is the basis of centralized information processing.

Second approach: the bulk of the information processing is done on the workstations, and the server acts as a repository of information. This approach is the basis of decentralized (distributed) information processing.


Fig. 1. Options for building a network
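
The first, centralized (client-server) approach can be sketched with standard Python sockets; the port number and the trivial "processing" (upper-casing a string) are arbitrary choices for the example:

```python
# Minimal client-server sketch: the server does the "work" (here, just
# upper-casing a string); the client only sends a request and shows the result.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007            # arbitrary local address for the example

srv = socket.create_server((HOST, PORT))   # bind and listen before the client connects

def serve_once():
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)             # receive the client's request
        conn.sendall(data.upper())         # "process" it and return the result

threading.Thread(target=serve_once, daemon=True).start()

with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"hello network")
    print(client.recv(1024).decode())      # -> HELLO NETWORK
srv.close()
```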

There are two main types of computer networks (Fig. 2): local and global.


Fig. 2. Types of networks

A local network consists of interconnected computers located at small distances from each other.

Typically, a local network connects computers located within one building, at distances of about 50-100 meters; 90% of the information circulating in such a network is information of the local organization. For example, in an office located in a single building, employees have access to the same internal sources of information for preparing various reports and for scheduling and planning the general activities of the enterprise. Special networking software allows meetings to be scheduled automatically, choosing the time most suitable for everyone; the boss can check whether the instructions he sent out over the network have been carried out, and so on.

A local network can include not only computers, but also printers and other shared external devices. Different types of cables are used to organize a local network.

By the way computers are connected in a local network, peer-to-peer networks and networks with a dedicated server are distinguished. Peer-to-peer networks (Figure 3) use peer-to-peer technology: any computer can use the resources of any other computer connected to it. In other words, any computer can act both as a server and as a client. This imposes certain restrictions on the composition and performance of such networks, namely:

  • the number of computers should be within 10-30, depending on the intensity of message flows in the network;

  • it is not customary to use personal computers as application servers, but only as file servers or as machines for sharing printers;

  • the performance of applications on a computer deteriorates when its resources are being used by other computers on the network.

Networks with a dedicated server (Fig. 4) are, in this sense, much more stable and productive.


Fig. 3. Peer-to-peer local area network

If computers are located far from each other, other communication channels are used for data transmission: telephone lines, satellite communication, fiber-optic lines. Such channels can connect computers located both in neighboring houses and on different continents; distance is not a hindrance here. Networks that provide remote connection of computers and local networks are called global.


Fig. 4. Local network with a dedicated server

Historically, the prototype of global networks were regional networks, which provided communication between computers within a single region.

A well-known example of a global network is the Internet, a worldwide computer network: a set of technical means, software, standards and conventions for maintaining communication between the various computer networks of the world.

Currently, uniform rules for ensuring communication in global networks have been developed; they are called Internet technology. These rules establish a single way of connecting an individual computer or a local network to the global network, unified data transfer rules, and a unified system for identifying a computer on the network (assigning it a network address). When this technology was created, several goals were pursued, but one of the main ones was to build a network resistant to partial damage. One way of achieving this goal is the technology of decentralized processing of information on the network.

Decentralization of information processing is achieved by the presence of a huge number of servers located in different parts of the world and by a variety of routes for reaching these servers. This means that if some node of the network (a server computer) fails, the functioning of all the other computers is preserved thanks to other, serviceable servers. Data, organized into packets, moves through the network to the computer with the desired address, and if one of the computers crashes, it is automatically redirected along another route. For the recipient it does not matter at all by which route a particular packet reaches him; at the destination the packets are joined together. Thus packets can reach the destination even by workarounds.
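
The automatic choice of a detour when a node fails can be sketched on a toy graph; the network map and the breadth-first route() helper are invented purely for illustration:

```python
# Sketch: packets are rerouted automatically when a node fails.
# The "network" is a toy graph; route() finds a path with breadth-first search.
from collections import deque

network = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def route(graph, src, dst, failed=frozenset()):
    """Return a path from src to dst avoiding failed nodes, or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph[node]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route(network, "A", "D"))                 # normal path, e.g. ['A', 'B', 'D']
print(route(network, "A", "D", failed={"B"}))   # detour when B fails: ['A', 'C', 'D']
```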

A global network is a union of computers located at remote distances from each other, for the shared use of world information resources.

Today, computer networks continue to develop, and quite rapidly. The gap between local and global networks is constantly narrowing, largely due to the emergence of high-speed territorial communication channels that are not inferior in quality to the cable systems of local networks. In global networks, resource access services appear that are just as convenient and transparent as local network services. There are many such examples in the most popular network, the Internet.

Local networks are also changing. Along with the passive cable connecting the computers, communication equipment such as switches, routers and gateways has appeared. Thanks to this, it has become possible to create large corporate networks that include thousands of computers and have a complex structure.

Let us recall how train tickets are ordered. At the request of any cashier, the operator's monitor displays information about the availability of seats, the cost of tickets, and so on. At the passenger's instruction, the cashier sends a request over the network to the central computer to purchase a ticket and completes the sale. After purchase, the paid seat is withdrawn from further sale. This is an example of centralized processing of information on a network. Tickets for the same routes can be sold in many cities, so such a network can no longer be called local. It serves to process the information of one firm or association of firms and is therefore called corporate (from the word corporation, meaning association).

A corporate network is a combination of local networks within one corporation for solving common problems.

Corporate networks can connect branches of one corporation that are geographically distant from each other. Corporate networks are characterized by the combination of centralized information processing with remote connection of computers. Information may be changed by the employees who have access to it. The networks described above can have access to other external networks, for example, in order to obtain information from remote databases of global importance, to forward e-mail messages to another network, or to send a fax.

Currently, a new technology, the intranet, has been developed for connecting computers in corporate networks. The intranet uses the experience of distributed environments and is built on client-server technology with centralized processing of some of the information.

Intranet technology has the following benefits:

  • lightweight centralized management of servers and client workplaces;
  • a distributed corporate information system that uses Internet protocols and technologies;
  • the server need not be a fixed machine: it can be any computer of the corporation, as well as any Internet server;
  • the intranet makes it possible to combine dissimilar software solutions;
  • the intranet uses open standards.

Thus intranet technology flexibly combines the capabilities of centralized and distributed information processing and allows the greatest work efficiency to be achieved.

Computer network hardware

The main purpose of creating any computer network (local or global) is to ensure the exchange of information between its objects (servers and clients). Obviously, for this the computers must be connected to each other. Therefore, the mandatory components of any network are communication channels of all kinds (wired and wireless), for which different physical media are used. Accordingly, there are such communication channels as electrical cable, telephone and fiber-optic lines, radio communication, and space (satellite) communication.

To transmit information through communication channels, computer signals must be converted into the signals of the physical media, that is, made suitable for transfer over electrical, optical or telephone lines. Network adapters are used for this.

Network adapters (network cards) are technical devices that perform the functions of interfacing computers with communication channels.

Network adapters must match the communication channels: every kind of channel requires its own type of network adapter. An adapter is inserted into a free slot of the computer's motherboard and connected by cable to the network adapter of another computer. Network cards carry the addresses of the computers on the network, without which transmission is impossible. When information circulates through the network, each networked computer selects from it only what is addressed to it; this is determined by the computer's address.
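
The idea that every station on a shared medium "hears" all frames but keeps only those addressed to it can be sketched as follows; the addresses and the frame format are made up for the example:

```python
# Sketch: on a shared medium every adapter "hears" every frame, but accepts
# only frames whose destination address matches its own (or broadcast).
BROADCAST = "FF:FF:FF:FF:FF:FF"

def accepts(adapter_address, frame):
    return frame["dst"] in (adapter_address, BROADCAST)

frames = [
    {"dst": "AA:00:00:00:00:01", "payload": "for station 1"},
    {"dst": "AA:00:00:00:00:02", "payload": "for station 2"},
    {"dst": BROADCAST,           "payload": "for everyone"},
]

my_address = "AA:00:00:00:00:02"
for frame in frames:
    if accepts(my_address, frame):
        print("accepted:", frame["payload"])
# -> accepted: for station 2
# -> accepted: for everyone
```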

Telephone lines are widely used for communication between remote computers. To transfer information over a telephone line, a special device, a modem, must be installed on the computer. The modem converts (modulates) the information stored in the computer in binary digital code into analog signals that can be transmitted over a telephone line. At the other end of the connection they are received by another modem and converted (demodulated) from analog signals back into the computer's digital signals.
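
A toy sketch of modulation and demodulation: bits are mapped to two tone frequencies and back, roughly as in simple FSK modems; real modems are far more sophisticated, and the frequency values here are only illustrative:

```python
# Toy sketch of modulation/demodulation: bits are mapped to two audio
# frequencies and back again. The frequencies are illustrative values only.
FREQ_FOR_BIT = {"0": 2200, "1": 1200}          # Hz
BIT_FOR_FREQ = {f: b for b, f in FREQ_FOR_BIT.items()}

def modulate(bits: str):
    """Convert a bit string into a sequence of tone frequencies."""
    return [FREQ_FOR_BIT[b] for b in bits]

def demodulate(tones):
    """Convert a sequence of tone frequencies back into a bit string."""
    return "".join(BIT_FOR_FREQ[f] for f in tones)

data = "1011001"
tones = modulate(data)            # what "travels" over the telephone line
assert demodulate(tones) == data
print(tones)
```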

So that information transmitted by one computer is understood by another computer after it is received, it was necessary to develop uniform rules of data transmission on a network, called protocols. In developing them, all communication problems were taken into account and standard algorithms for the delivery of information were worked out.

A transfer protocol establishes an agreement between interacting computers. For a connection between computers to be established, their addresses must be specified. The rules for forming computer addresses in the global network must be exactly the same, even though the computers in the network can be heterogeneous and use different operating systems.

The concept of the Internet is usually associated with two major features of this hypernet:

  • the packet method of information transfer;

  • the use of the international TCP/IP protocol, which provides for the transfer of information between networks of various types.

According to the TCP protocol, the data to be sent is divided into packets; each packet is marked in such a way that it contains the information necessary to assemble the document correctly on the recipient's computer. The TCP protocol is also responsible for the reliability of the transmission of large amounts of information and for handling and correcting network disruptions.

The IP protocol provides routing (delivery to the address) of network packets. The essence of this protocol is that each network participant must have its own unique address, called an IP address. The structure of an IP address is such that every computer through which a TCP packet passes can determine where the packet should be forwarded. When forwarding a packet, it is not geographic proximity that is taken into account, but the conditions of communication and the bandwidth of the line.

The TCP and IP protocols are closely related, and they are often combined by saying that the basic protocol of the Internet is TCP/IP.
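
The splitting of a message into numbered packets and its order-independent reassembly, which is what TCP does conceptually, can be sketched like this; the packet format is invented for the example:

```python
# Sketch: split a message into numbered packets, "deliver" them out of
# order, and reassemble them by sequence number, as TCP does conceptually.
import random

def split_into_packets(data: bytes, size: int):
    return [{"seq": i, "payload": data[offset:offset + size]}
            for i, offset in enumerate(range(0, len(data), size))]

def reassemble(packets):
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Packets may arrive by different routes and out of order."
packets = split_into_packets(message, size=8)
random.shuffle(packets)                  # simulate out-of-order arrival
assert reassemble(packets) == message
print(reassemble(packets).decode())
```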

Computer network software

To ensure the operation of the network, it is necessary to have not only the hardware but also the related software.

Network software includes, first of all, the operating system, which provides for setting up and configuring the network. Modern versions of the Windows operating system allow work to be configured both in a local and in a global network.

There are also software tools that expose the network addresses of computers. A large number of special networked system shells have emerged. These add-ons make it possible to determine the addresses of computers, to register the required number of network users if the network is limited in the number of clients, to make directories or hardware resources of a computer available to other clients on the network while assigning them certain rights, and so on. Such programs also make it possible to protect information. Some directories can be opened read-only, others for reading and writing, and some can be made entirely inaccessible (hidden); in the latter case only part of the server's information is visible. Network programs make it possible to assign different access rights to different users. This necessary measure leads to the safe storage of information and the observance of its confidentiality.

On computers whose users want to access the resources of other computers, special software modules must be added to the operating system; these modules generate requests for access to remote resources and transfer them to the right computer. Such modules are usually called client software. The adapters and communication channels merely transmit messages with requests and responses from one computer to another, while the main work of organizing the sharing of resources is performed by the client and server parts of the operating systems.

A program requesting certain information services is called a client, and the program responding to this request is a server. A pair of client-server modules allows users to share a specific type of resource, such as files; in this case the user is said to be dealing with a file service. Usually a network operating system supports several types of network services for its users: a file service, a print service, an e-mail service, a remote access service, and others.

There is a wide variety of e-mail client programs. These include, for example, Microsoft Outlook Express, which is included in the Windows 98 operating system as standard. A more powerful program, which integrates other office tools in addition to e-mail support, is Microsoft Outlook. The programs The Bat and Eudora Pro are also popular.

In order to send information by e-mail, the user must have his own e-mail address. E-mail addresses have a uniform structure:

mailbox_name@mail_server_name

This entry, like a regular mailing address, has two parts - to whom and where - separated by the "@" sign. The right side uniquely identifies the name of the mail server, and together with the left side it identifies the user's mailbox. Clearly, this address must be unique.

A domain is an area that groups computers by a territorial or thematic feature. There are domains of several levels. The top-level domain in an e-mail address is indicated on the right. For example, ru is the domain of Russia, ua of Ukraine, ca of Canada; com denotes commercial organizations, org non-commercial organizations, edu educational ones. The domain system has a hierarchical structure, so the domains of the next levels are indicated in the address according to the degree of nesting, from right to left, separated by periods. Usually the second-level domain is the name of an organization or a city. For example, the second-level domain spb means Saint Petersburg, and microsoft the Microsoft company.
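
Splitting an address into its mailbox and mail-server parts, and the domain into its levels, takes only a few lines; the address used is a made-up example:

```python
# Sketch: take an e-mail address apart into mailbox and domain,
# then list the domain levels from the top level (rightmost) downwards.
address = "user_name@mail.spb.ru"        # made-up example address

mailbox, _, domain = address.partition("@")
levels = domain.split(".")[::-1]         # right to left: top level first

print("mailbox:", mailbox)               # -> user_name
print("domain :", domain)                # -> mail.spb.ru
print("levels :", levels)                # -> ['ru', 'spb', 'mail']
```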

Today the Internet is used in various fields: science, technology, development, the economy. The first thing an Internet user encounters is a huge amount of information. Most of the documents available on Internet servers are in hypertext format.

Hypertext is a document containing links to other documents, organized according to certain rules.

The Internet service governing the transmission of such documents is called the World Wide Web (WWW). The same term, or the WWW environment, also refers to the set of documents between which hypertext links exist.

Special programs called browsers (or navigators) are used to view Web pages. The most common navigators are Internet Explorer from Microsoft and Netscape Navigator from Netscape.

Addresses of Web documents have the same structure, for example:

http://dogovor.desk.ru

http://www.piter.com

http://officeupdate.microsoft.com
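
The common structure of such addresses can be examined with the standard urllib.parse module, using the addresses quoted above:

```python
# Sketch: parse the Web addresses quoted above into scheme and host parts.
from urllib.parse import urlparse

for address in ("http://www.piter.com", "http://officeupdate.microsoft.com"):
    parts = urlparse(address)
    print(parts.scheme, parts.netloc)
# -> http www.piter.com
# -> http officeupdate.microsoft.com
```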

To exchange files in a peer-to-peer local area network, each computer must share some of its folders. Files are transferred to and from shared folders using the Network Neighborhood program.

What is a network administrator?

Issues related to organizing, setting up and managing a local or corporate network are handled by specially trained people - network administrators. They install and configure the equipment, configure the software, and determine each user's access rights on the network. An ordinary user does not need to delve into the intricacies of the network's design; everyone must do their own job. Therefore, in order to use the services of the network, an ordinary user only needs to know the type of network on which he works, his network address, the software that provides access to other computers and the Internet, and his access rights.

Aygazieva Saltanat, Afanasyeva Svetlana, Kutepova Natalia

Group research topic

Computer networks

Problem question (research question)

What kind of networks are there?

Research hypothesis

Computer networks are: Local, Global, Regional

Research Objectives

Study the structure of a computer network; Learn basic networking tools.

Research progress

What is a computer network? Network - a group of computers and / or other devices connected in any way to exchange information and share programs, data files and peripheral devices.

Server - a computer with a server operating system installed on it, which provides its software and hardware resources to network users.

A peer is an equal participant in a network that provides services to other participants in a peer-to-peer network and uses their services himself.

A backbone is a transmission channel between two points - nodes or switches.

The Internet is a collection of networks connected to each other by telecommunication infrastructure.

A domain is a group of computers under uniform control that have a common segment in their Internet address.

An IP address is the 32-bit Internet Protocol address assigned to a host. An IP address contains two components: a node number and a network number.

A modem is a device that converts digital signals into analog signals for transmission over a telephone line, and also converts incoming analog signals into digital signals for processing in a computer.

A router is network equipment that operates at the network level and establishes communication between different networks.

A network switch (from the English "switch") is a device designed to connect several nodes of a computer network within one or more network segments.

A network hub (from the English "hub", a center of activity) is a network device designed to combine several Ethernet devices into a common network segment.

A protocol is a standard that describes the rules for the interaction of functional blocks during data transmission.

A packet is a formatted block of data transmitted over a network.

What kind of networks are there? Computer networks can be classified as follows.

By territorial prevalence:
  • PAN (Personal Area Network) - a personal network designed for interaction between devices belonging to one owner;
  • LAN (Local Area Network) - local networks with a closed infrastructure up to the point of reaching service providers; access to local networks is allowed only to a limited number of users;
  • CAN (Campus Area Network) - a campus network that connects the local networks of closely located buildings;
  • MAN (Metropolitan Area Network) - metropolitan networks between institutions within one or several cities;
  • WAN (Wide Area Network) - a global network covering large geographic regions and including both local networks and other telecommunication networks and devices.

By the type of functional interaction:
  • client-server - a computing or network architecture in which jobs or the network load are distributed between service providers, called servers, and service customers, called clients;
  • mixed network - an architecture in which a number of servers form a peer-to-peer network among themselves, while end users each connect to their own server according to the client-server scheme;
  • peer-to-peer network - an overlay computer network based on the equality of participants; in such a network there are no dedicated servers, and each node (peer) is both a client and a server;
  • multi-rank network - a network in which there is a master controller that coordinates the work of slave controllers, which in turn actually control one or more points of passage.

By network topology, the basic topologies are:
  • bus - a common cable (called a bus or backbone) to which all workstations are connected; terminators at the ends of the cable prevent signal reflection;
  • ring - a topology in which each computer is connected by communication lines to only two others: from one it only receives information, and to the other it only transmits; only one transmitter and one receiver operate on each communication line;
  • star - a topology in which all computers on the network are connected to a central node (usually a switch), forming a physical network segment.

Derivative topologies include:
  • double ring - a topology built on two rings; the first ring is the main path for data transmission, the second is a backup path that duplicates the main one;
  • mesh - a fully connected topology in which each workstation on the network is connected to several other workstations of the same network;
  • lattice - a topology in which the nodes form a regular multidimensional lattice; each edge of the lattice is parallel to one of its axes and connects two adjacent nodes along this axis;
  • tree - a more developed configuration of the bus type, in which several simple buses are connected to a common backbone bus through active repeaters or passive multipliers;
  • fat tree - a topology used for supercomputers, in which the links between nodes get higher bandwidth at each level as they approach the root of the tree.

By the type of transmission medium:
  • wired (telephone wire, coaxial cable, twisted pair, fiber-optic cable);
  • wireless (transmission of information over radio waves in a certain frequency range).

By functional purpose:
  • storage networks - an architectural solution for connecting external data-storage devices so that the operating system recognizes the connected resources as local;
  • server farms - groups of servers interconnected by a data network and operating as a single unit;
  • process control networks - computerized systems capable of converting information and performing calculations and logical operations using various types of computer networks and modern information technologies;
  • SOHO and home networks - local networks laid within one building or uniting several nearby buildings.

By transmission speed:
  • low-speed (up to 10 Mbps);
  • medium-speed (up to 100 Mbps);
  • high-speed (over 100 Mbps).

By network OS: based on Windows, based on UNIX, based on NetWare.

By whether a constant connection must be maintained: packet networks, such as Fidonet and UUCP, and online networks, such as the Internet and GSM.

How do computers on the network find each other? Each computer on a local network has its own unique address, just as a person has a mailing address, and it is at these addresses that computers find each other on the network. There must not be two identical addresses on the same network. The format of the address is standard and is defined by the IP protocol. A computer's IP address is recorded in 32 bits (4 octets). Each octet contains a decimal number from 0 to 255 (in binary form the entry is a sequence of 0s and 1s). An IP address is written as four numbers separated by periods, for example 192.168.3.24. The total number of IP addresses is about 4.2 billion, and all addresses are unique.

An IP address can be assigned not only to a computer but also to other network devices, such as a print server or a router; therefore all devices on a network are usually called nodes or hosts. One and the same physical device (a computer or something else) can have several IP addresses: for example, if a computer has several network adapters, each adapter must have its own unique IP address. Such computers are used to connect several local networks and are called routers.

A large IP network is divided into several subnets, each of which is assigned its own address. Subnets are separate, independently functioning parts of a network that have their own identifier. Space for the subnet address is taken from the host part of the IP address. The subnet mask is used to determine the network and subnet address; its recording format is the same as that of the IP address, with four fields separated by periods. Thus, the subnet mask must be specified together with the IP address of a computer.
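
Deriving the network address from an IP address and a subnet mask can be shown with Python's standard ipaddress module; the address and mask are example values:

```python
# Sketch: combine an example IP address with a subnet mask and report
# the resulting network, using the standard-library ipaddress module.
import ipaddress

interface = ipaddress.ip_interface("192.168.3.24/255.255.255.0")   # example values
print(interface.ip)             # -> 192.168.3.24
print(interface.netmask)        # -> 255.255.255.0
print(interface.network)        # -> 192.168.3.0/24
```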

Conclusions

A computer network is a combination of several computers for the joint solution of information, computational, educational and other tasks.

All computer networks, without exception, have one purpose - to provide shared access to shared resources. Resources are of three types: hardware, software, information. By the way they are organized, networks are subdivided into real and artificial. According to the speed of information transfer, computer networks are divided into low-, medium- and high-speed. In terms of territorial distribution, networks can be local, global, regional and urban.

A local computer network is a collection of computers connected by communication lines that provide network users with the potential to share the resources of all computers. A local network is created for the rational use of computer equipment and the efficient work of employees.

A global network (WAN, Wide Area Network) is a network that connects computers geographically remote from each other at large distances. It differs from a local network in having more extended communication lines (satellite, cable, etc.). A global network connects local networks.

The Internet is a global computer network covering the entire world. It is a constantly developing network that still has everything ahead of it; let us hope that our country will not lag behind progress.

BASIC CONCEPTS FOR INTERNET USERS ON MODERN UNITED COMPUTER NETWORKS

Shcherbakova Svetlana Mikhailovna 1, Krupina Tatiana Aleksandrovna 1
1 Moscow Pedagogical State University, undergraduate student of the Department of Applied Mathematics and IT


Annotation
This article was written based on the results of the master's study. It is devoted to an overview of the basic concepts of modern computer networks. Thus, global networks today unite intranets and extranets.

BASIC CONCEPTS FOR INTERNET USERS ON CONTEMPORARY UNITED COMPUTER NETWORKS

Shcherbakova Svetlana Mikhailovna 1, Krupina Tatiana Aleksandrovna 1
1 Moscow State Pedagogical University, Graduate of the Department of Applied Mathematics and IT


Abstract
This article is written as a result of master's studies. It provides an overview of the basic concepts of modern computer networks. Thus, global networks now bring together intranets and extranets.

Bibliographic link to the article:
Shcherbakova S.M., Krupina T.A. Basic concepts for Internet users on modern united computer networks // Modern equipment and technologies. 2016. No. 10 [Electronic resource] (accessed 02.2019).

In connection with the yearly increase in services provided to the population through computers and the global Internet (for work, education, recreation), there is a growing need for communication between network specialists and ordinary users, which involves the use of certain network terms. Therefore, the time has come for a certain amount of basic education on computer networks for ordinary, more or less active Internet users.

A computer network is an interconnected group of computers in the same room or on different continents, exchanging information via special wired or wireless communication channels. In terms of scale, computer networks can be classified as follows: personal, local, corporate, city and global.

Network administrators, unlike ordinary users, have access to configure the network and to troubleshoot problems associated with achieving the desired level of operability and performance.

By the method of connecting computers and other network devices, computer networks can be classified according to the hardware technology used to connect individual network devices when the network is created: for example, fiber-optic, twisted pair (Ethernet), wireless (wireless LAN), or telephone-line based. In addition to computers, the network contains network devices such as hubs, switches, bridges, routers and gateways. Wireless LAN technology uses radio frequencies to connect devices on a network.

According to the functional connection between computers, networks can be classified as client-server (where there is a dedicated master computer and slaves) and peer-to-peer (all computers are the same in rank). By topology, computer networks can be classified according to the logical connection of all network devices, for example, bus, star, ring, tree, hierarchical topologies, etc. Network topology refers to the way that smart devices on a network see logical connections to each other. That is, the network topology is independent of the “physical” location of the network. Even if the network computers are physically in a linear arrangement, if they are connected through a hub, the network has a star topology, not a bus topology. Therefore, the logical network topology does not necessarily match the physical location.

The rules for transferring information between computers on a network are called a protocol. The most commonly used protocols are TCP and IP.

Let's consider the main types of networks. A personal area network (PAN) is a computer network between computers and devices owned by the same person. Some examples of devices that can be used on a personal network are printers, faxes, telephones, PDAs, or scanners. The PAN is usually at home within about 5-12 meters.

A local area network (LAN) covers a small area: a separate room, a home, an office or a business center. Modern local networks are based on Ethernet technology. The defining characteristics of local networks, in contrast to global ones, are their higher data transfer rates, smaller size and the absence of any need to rent communication lines. Ethernet and other IEEE 802.3 LAN technologies operate at data transfer rates of up to 10 Gbps, and the IEEE also has standards for up to 100 Gbps.

A wide area network (WAN) is a data network that spans a relatively wide geographic area (i.e., entire cities and countries) and often uses telephone companies' lines to transmit data. WAN technologies typically operate at the bottom three layers of the OSI reference model: the physical, data link and network layers.

Global area network (GAN). A specification for the global area network (GAN) is being developed by several groups, and there is no single definition. In general, a GAN is a model for supporting mobile communication across an arbitrary number of wireless LANs, satellite coverage areas, and so on. The main challenge in mobile communication is "handing over" the user's communications from one local coverage area to the next.

An interconnected network consists of two or more networks or network segments connected by devices that operate at layer 3 (the network layer) of the OSI reference model, such as routers. Any interconnection between public, private, commercial, industrial or government networks can be termed an interconnected network.

In modern practice of interconnected networks, there are at least three types of networks, depending on who controls and who participates in them. These are: 1. Intranet 2. Extranet 3. Internet. Intranets and extranets may or may not have an Internet connection. When connected to the Internet, an intranet or extranet is usually protected from unauthorized access from the Internet without special permission. The Internet is not considered part of an intranet or extranet, although it can serve as a portal to access parts of an extranet.

An intranet is a collection of interconnected networks that use the Internet Protocol and tools such as Web browsers and FTP, and that are under the control of a single administrator. The administrator closes and opens the intranet to the rest of the world and gives access only to certain users. Most often, an intranet is the internal network of a company or other enterprise. A large intranet usually has its own web server to provide users with more information.

An extranet is a network that is limited to one organization or legal entity but also has limited connections to the networks of one or more trusted organizations or legal entities (for example, a company's customers may be given access to some part of its intranet, thereby creating an extranet, while at the same time those clients cannot be considered "insiders" from the point of view of security). From a technical point of view, an extranet can also be classified as a corporate type of network, although by definition an extranet cannot consist of a single local network; it must have at least one connection to an external network.

The Internet is made up of interconnected worldwide governmental, scientific, public and private networks. Members on the Internet, or their service providers, use IP addresses obtained from address registrars.

