The concept of computer technology. Personal computers

The term “computer technology” came into use relatively recently, and initially it did not cover all the aspects it includes today. Unfortunately, many people still treat “computer” and “computer technology” as synonyms, which is clearly a mistake.

Computer technology: the meaning of the word

The meaning of this term can be interpreted in quite different ways, and different dictionaries define it differently.

However, with some generalization we can safely say that computer technology is a set of technical devices together with the mathematical tools, techniques and methods used to automate (or at least mechanize) the processing of information and computational processes, or to describe one phenomenon or another (physical, mechanical, and so on).

What is this in a broad sense?

Computing devices have been known to mankind for a long time. The most primitive of them appeared hundreds of years BC, for example the Chinese counting frame or the Roman abacus. In the second half of the second millennium, devices such as Napier's scales (rods), the Schickard calculating machine and other calculators appeared. Judge for yourself: today's calculators can likewise be counted among the varieties of computer technology.

Nevertheless, the interpretation of this term acquired a more expanded meaning with the advent of the first computers. This happened in 1946, when the first computer was created in the USA, denoted by the abbreviation ENIAC (in the USSR, such a device was created in 1950 and was called MESM).

Today, the interpretation has expanded even further. Thus, at the present stage of technology development, it can be defined that computer technology is:

  • computer systems and network management tools;
  • automated control systems and data (information) processing;
  • automated design, modeling and forecasting tools;
  • software development systems, etc.

Computing Tools

Now let's see what computing tools are. The basis of any process is information or, as it is now called, data. But the concept of information is rather subjective: the same message may carry meaning for one person and not for another. To unify data, therefore, a universal form of representation was developed (binary code) that any machine can perceive and that is now the most widely used form for data processing.

Among the tools themselves, one can distinguish hardware (processors, memory, input/output devices) and software, without which all this "hardware" is completely useless. It is worth noting that a computing system has a number of characteristic features, such as integrity, organization, connectivity and interactivity. There are also computing complexes, multiprocessor systems that provide reliability and levels of performance unattainable with conventional single-processor systems. Only the combination of hardware and software taken together can be called the main means of computing. Naturally, we could also add the methods that provide a mathematical description of a particular process, but that would take quite a long time.

The structure of modern computers

Based on all these definitions, we can describe the operation of modern computers. As mentioned above, they combine hardware and software, and one cannot function without the other.

Thus, a modern computer (a piece of computing technology) is a set of technical devices that ensures the functioning of a software environment for performing certain tasks, and vice versa (a set of programs that makes the hardware work). The first formulation is the more accurate one, because ultimately this whole set is needed specifically to process incoming information and to output the result.

The hardware (computer equipment) includes several basic components without which no system can function: motherboards, processors, hard drives, RAM, monitors, keyboards, mice, peripherals (printers, scanners, etc.), disk drives and so on. On the software side, operating systems and drivers come first: operating systems run application programs, and drivers ensure the correct functioning of all hardware devices.

A few words about classification

Modern computing systems can be classified according to several criteria:

  • principle of operation (digital, analog, hybrid);
  • generations (stages of creation);
  • purpose (problem-oriented, basic, household, dedicated, specialized, universal);
  • capabilities and sizes (super-large, super-small, single- or multi-user);
  • conditions of use (home, office, industrial);
  • other characteristics (number of processors, architecture, performance, consumer properties).

As is already clear, it is impossible to draw clear boundaries in defining classes. In principle, any division of modern systems into groups still looks purely conditional.

Methods for organizing software and hardware in automated workplace complexes should be determined in the general context of the considered processes of operational production management (OPM) of industrial enterprises, the objective function of which is to minimize the costs of all types of resources for the production of the established range of items of labor.

The synthesis of methods and models for organizing software and hardware, when the AS EUP is presented as a set of automated workplace complexes of self-supporting production teams, must pass through two stages: determining the rational composition of the computer hardware, and allocating the resources of the computer system of the automated workplace complexes among its end users.

Technical (hardware) compatibility of new VT (computing) equipment with the customer's existing VT fleet and with the fleet planned for future acquisition. Practice shows that this is one of the most important indicators taken into account when choosing VT. The tendency to purchase VT equipment that is hardware-compatible with the existing fleet has many objective and subjective causes, not least the customer's psychology and sense of confidence in the success of using this particular class of hardware. Software compatibility is determined by the compatibility of the hardware-implemented instruction set, of data presentation formats, of translators, DBMS, and so on. The significant impact of this indicator on resource consumption is explained by the presence of large volumes of previously prepared regulatory, archival and statistical data, as well as by the specialization of the enterprise's trained personnel, who have experience with specific basic software tools.

Operational compatibility within the purchased complex of computer hardware, which allows, in the event of failure of individual workstation modules, either prompt replacement of the failed module or reassignment of the devices among specific workstations within the computing resources of all the complexes (within a workshop complex, an inter-shop complex, or the enterprise-wide system).

Reliability of VT equipment according to its technical specifications and its suitability for the specific operating conditions: vibration, oxidation, dust, gas contamination, power surges and the like may require additional protective equipment.

The total speed of solving the functional problems of each type of automated workstation complex, that is, the speed of processing the existing volumes of data in the various operating modes. Usually, to determine the value of this indicator it is not enough to know only the volume of the information base of a particular workstation, its passport characteristics and the computing resources provided.

Therefore, for an approximate (order-of-magnitude) assessment of this indicator, what matters is either operating experience with VT installations of a similar class, or results obtained from simulation models whose databases correspond in volume and structure to the real ones. Extrapolating data obtained from test examples can lead to results that differ by an order of magnitude from the estimates later obtained during actual operation of the system. The most frequent source of error is the ambiguous behaviour of operating algorithms, operating system utilities, communication protocols, drivers and basic language tools when the systems run in multi-user, multi-tasking mode at the limit of the computing system's resources or data volumes. In this case, direct calculation from the performance characteristics of processors, intra-machine communication channels, network communication channels, and the data access speeds of the various external devices cannot be used effectively. At present, the capacity of many processors and of the language tools implemented for them does not allow the entire potential set of tasks of the application software packages (PPP) of the computing system to be provided with the required computational accuracy. Therefore, when determining the value of this indicator, it is necessary to break it down by task classes of the specific types of automated workstations, with reference to the particular combination of VT hardware and basic software.

The cost of implementing a “friendly interface” includes training programs and the opportunity to receive information while working on the workstation about ways to continue or end the dialogue.

Possibility of changing the composition and content of functions implemented at specific workstations, including redistribution between personnel.

Ensuring requirements for protection against unauthorized access for knowledge bases and databases, as well as ensuring their “transparency” if necessary.


When considering computers, it is common to distinguish between their architecture and structure.

What computer characteristics are standardized to implement the open architecture principle?

Only the description of the operating principle of a computer and its configuration (a certain set of hardware and connections between them) are regulated and standardized. Thus, the computer can be assembled from individual components and parts designed and manufactured by independent manufacturers. The computer is easily expanded and upgraded due to the presence of internal expansion slots into which the user can insert a variety of devices, and thereby set the configuration of his machine in accordance with his personal preferences.

What are the distinctive features of the classical ("von Neumann") architecture?

Von Neumann architecture: one arithmetic logic unit (ALU), through which the data flow passes, and one control unit (CU), through which the command flow, the program, passes. This is a single-processor computer. This type also includes the architecture of a personal computer with a common bus: all functional blocks are interconnected by a common bus, also called the system bus.

Physically, the bus (backbone) is a multi-wire line with sockets for connecting electronic circuits. Its wires are divided into separate groups: the address bus, the data bus and the control bus.

Peripheral devices (printer, etc.) are connected to the computer hardware through special controllers - peripheral device control devices.

A controller is a device that connects peripheral equipment or communication channels to the central processor, relieving the processor of direct control of that equipment.

Name the advantages of standard and non-standard computer architectures.

Standard architectures are focused on solving a wide range of different problems. At the same time, the advantage in performance of multiprocessor and multi-machine computing systems over single-processor ones is obvious. When solving some specific problems, a non-standard architecture allows for greater performance.

Name the most typical areas of application of standard and non-standard computer architectures

1. Classical architecture. This is a single-processor computer. This type of architecture also includes the architecture of a personal computer with a common bus. Peripheral devices (printer, etc.) are connected to the computer hardware through special controllers - peripheral device control devices.

2. Multiprocessor architecture. The presence of several processors in a computer means that many data streams and many command streams can be organized in parallel. Thus, several fragments of one task can be executed in parallel.

3. Multi-machine computing system. Here, several processors included in a computing system do not have a common random access memory, but each has its own (local). Each computer in a multi-machine system has a classical architecture, and such a system is used quite widely. The effect of using such a computing system can only be obtained by solving problems that have a very special structure: it must be divided into as many loosely coupled subtasks as there are computers in the system. The speed advantage of multiprocessor and multi-machine computing systems over single-processor ones is obvious.

4. Architecture with parallel processors. Here, several ALUs operate under the control of one control unit. This means that a lot of data can be processed by a single program, that is, by one stream of commands. High performance of such an architecture can only be achieved on tasks in which the same computational operations are performed simultaneously on different data sets of the same type. Modern machines often contain elements of various types of architectural solutions. There are also architectural solutions that are radically different from those discussed above.
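To illustrate the idea common to the multiprocessor and multi-machine variants above, here is a minimal Python sketch (the task, its splitting into fragments, and the worker count are invented purely for illustration) that executes several loosely coupled fragments of one task in parallel and then combines the partial results:

```python
# Minimal sketch: splitting one task into loosely coupled fragments
# and executing them in parallel, as a multiprocessor system would.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(fragment):
    """One fragment of the overall task: sum a slice of the data."""
    return sum(fragment)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the data into four loosely coupled subtasks.
    fragments = [data[i::4] for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partial_results = list(pool.map(partial_sum, fragments))
    total = sum(partial_results)   # combine the partial results
    print(total)                   # same answer as sum(data)
```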

State the merits of open and closed computer architectures

Advantages of open architecture:

Competition between manufacturers has led to cheaper computer components, and therefore the computers themselves.

The emergence of a large amount of computer equipment gave customers a wider choice, which also contributed to lower component prices and higher quality.

The modular structure of the computer and ease of assembly allowed users to independently select the devices they needed and install them with ease; it also became possible to assemble and upgrade their computer at home without much difficulty.

The ability to upgrade led to the fact that users were able to choose a computer based on their actual needs and the thickness of their pocket, which again contributed to the increasing popularity of personal computers.

Advantages of closed architecture:

The closed architecture does not allow other manufacturers to release additional external devices for computers; therefore, there is no problem of compatibility of devices from different manufacturers.

Why are computer hardware and software configurations considered separately?


Basic hardware configuration of a personal computer



Questions for self-testing

Describe the functions of the processor. Indicate the main characteristics of the processor and their typical values.

Main processor functions:

  • fetching (reading) the commands to be executed;
  • inputting (reading) data from memory or an input/output device;
  • outputting (writing) data to memory or input/output devices;
  • processing data (operands), including performing arithmetic operations on them;
  • addressing memory, that is, specifying the memory address with which an exchange will be carried out;
  • handling interrupts and the direct memory access mode.

Main processor characteristics:

  • the width (number of bits) of the data bus;
  • the width (number of bits) of the address bus;
  • the number of control signals in the control bus.

The width of the data bus determines the speed of the system. The address bus width determines the permissible complexity of the system. The number of control lines determines the variety of exchange modes and the efficiency of processor exchange with other system devices.
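As a simple illustration of the role of the address bus width, the following sketch (a simplification that ignores segmentation, paging and memory-mapped devices) shows how the directly addressable memory grows with each additional address line:

```python
# Simplified illustration: address bus width vs. directly addressable memory.
def addressable_bytes(address_bus_bits):
    """Each extra address line doubles the addressable space."""
    return 2 ** address_bus_bits

for bits in (16, 20, 32, 64):
    print(f"{bits}-bit address bus -> {addressable_bytes(bits):,} addressable bytes")
# 16 -> 65,536; 20 -> 1,048,576 (1 MB); 32 -> 4,294,967,296 (4 GB); and so on.
```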

In addition to the pins for the signals of the three main buses, the processor always has a pin (or two pins) for connecting an external clock signal or a quartz resonator (CLK), since the processor is always a clocked device. The higher the processor clock frequency, the faster it works, that is, the faster it executes commands. However, the performance of a processor is determined not only by the clock frequency but also by the features of its structure. Modern processors execute most instructions in one clock cycle and have facilities for executing several instructions in parallel. The clock frequency of the processor is not directly and strictly related to the transfer rate on the bus, since that rate is limited by signal propagation delays and signal distortion on the bus. In other words, the clock frequency of the processor determines only its internal performance, not its external one. The processor clock frequency usually has a lower and an upper limit. If the upper limit is exceeded, the processor may overheat and failures may occur; most unpleasantly, such failures are not always reproducible.

Initial signal reset RESET. When the power is turned on, in an emergency or when the processor freezes, the supply of this signal leads to the initialization of the processor and forces it to begin executing the initial startup program. An emergency situation can be caused by interference in the power and ground circuits, memory failures, external ionizing radiation and many other reasons. As a result, the processor may lose control of the executing program and stop at some address. To exit this state, the initial reset signal is used. This same initial reset input can be used to notify the processor that the supply voltage has dropped below a specified limit. In this case, the processor proceeds to execute the program for storing important data. Essentially, this input is a special type of radial interrupt.

Sometimes the processor chip has one or two more radial interrupt inputs to handle special situations (for example, for an interrupt from an external timer).

The power bus of a modern processor usually has one supply voltage (+5 V or +3.3 V) and a common wire (ground). Early processors often required several supply voltages. Some processors have a low-power mode. In general, modern processor chips, especially those with high clock frequencies, consume quite a lot of power. As a result, in order to maintain a normal case temperature, it is often necessary to fit them with heatsinks, fans, or even special micro-refrigeration units.

To connect the processor to the bus, buffer chips are used, providing, where necessary, demultiplexing of signals and electrical buffering of bus signals. Sometimes the exchange protocols on the system bus and on the processor buses do not coincide, in which case the buffer chips also reconcile these protocols with each other. Sometimes a microprocessor system uses several buses (system and local); each bus then has its own buffer node. This structure is typical, for example, of personal computers.

After power-on, the processor goes to the first address of the startup program and executes it. This program is pre-recorded in permanent (non-volatile) memory. After the initial startup program completes, the processor begins executing the main program located in permanent memory or RAM, fetching its commands one after another. The processor may be diverted from this program by external interrupts or DMA requests. The processor fetches instructions from memory using read cycles on the bus. When necessary, it writes data to memory or I/O devices using write cycles, or reads data from memory or I/O devices using read cycles.
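To make the cycle just described more concrete, here is a deliberately tiny Python model of a processor; the instruction set (LOAD, ADD, STORE, HALT) and the memory layout are invented for illustration only and do not correspond to any real processor:

```python
# Toy model of the fetch-decode-execute cycle.
# Program and data share one memory (a von Neumann machine in miniature).
memory = [
    ("LOAD", 8),     # 0: accumulator <- memory[8]
    ("ADD", 9),      # 1: accumulator <- accumulator + memory[9]
    ("STORE", 10),   # 2: memory[10] <- accumulator
    ("HALT", None),  # 3: stop
    None, None, None, None,
    2, 3, 0,         # cells 8, 9: operands; cell 10: result
]

pc, acc = 0, 0                     # program counter and accumulator
while True:
    opcode, operand = memory[pc]   # fetch the next command
    pc += 1
    if opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break

print(memory[10])  # -> 5
```

The essential point is the loop itself: fetch the command at the program counter, advance the counter, execute the command, and repeat until HALT.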

What underlies the division of computer memory into internal and external? What does the internal memory include?

The computer's internal memory is designed to store the programs and data that the processor works with directly while the computer is turned on. In modern computers, internal memory elements are manufactured as microcircuits. External computer memory is designed for long-term storage of large amounts of information; turning off the computer's power does not result in loss of the data held in external memory. The internal memory consists of RAM, cache memory and special-purpose memory.

Describe the functions of RAM. Indicate the main characteristics of RAM and their typical values.

Random access memory (RAM) is a fast storage device of relatively small capacity, directly connected to the processor and designed for writing, reading and storing the programs being executed and the data processed by those programs.

RAM is used only for temporary storage of data and programs, since when the machine is turned off, everything that was in the RAM is lost. Access to RAM elements is direct - this means that each byte of memory has its own individual address.

The amount of RAM usually ranges from 32 to 512 MB. For simple administrative tasks, 32 MB of RAM is sufficient, but complex computer design tasks may require 512 MB to 2 GB of RAM.

Typically, RAM is built from SDRAM (synchronous dynamic RAM) integrated circuits. Each bit of information in SDRAM is stored as an electric charge on a tiny capacitor formed in the structure of the semiconductor crystal. Because of leakage currents, such capacitors quickly discharge and are periodically (approximately every 2 milliseconds) recharged by special circuits. This process is called memory refresh (regeneration). SDRAM chips have capacities of 16-256 Mbit or more. They are mounted in packages and assembled into memory modules.

What is the purpose of external memory? List the types of external memory devices.

External memory (external storage) is designed for long-term storage of programs and data, and the integrity of its contents does not depend on whether the computer is on or off. Unlike RAM, external memory has no direct connection with the processor.

The computer's external memory includes:

  • hard disk drives;
  • floppy disk drives;
  • CD drives;
  • magneto-optical disk drives;
  • magnetic tape drives (streamers), etc.

Describe the operating principle of a hard drive. Indicate the main characteristics of hard drives and their typical values.

A hard disk drive (HDD), or hard drive, is the most widespread high-capacity storage device, in which the information carriers are round aluminium plates (platters) whose surfaces are covered with a layer of magnetic material. It is used for permanent storage of information: programs and data.

As with a floppy disk, the working surfaces of the platters are divided into circular concentric tracks, and the tracks into sectors. The read-write heads, together with their supporting structure and the disks, are enclosed in a hermetically sealed housing called a data module. When a data module is installed on a disk drive, it is automatically connected to a system that supplies purified, cooled air. A platter's surface has a magnetic coating only 1.1 microns thick, as well as a layer of lubricant to protect the heads from damage when they are lowered and raised during operation. When the platters rotate, an air layer forms above them, providing an air cushion on which the heads hover 0.5 microns above the disk surface.

Winchester drives have a very large capacity: from 10 to 100 GB. In modern models, the spindle speed (rotating shaft) is usually 7200 rpm, the average data search time is 9 ms, and the average data transfer speed is up to 60 MB/s. Unlike a floppy disk, HDD rotates continuously. All modern drives are equipped with a built-in cache (usually 2 MB), which significantly increases their performance. The hard drive is connected to the processor through the hard drive controller.
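Using the typical figures quoted above, a rough back-of-the-envelope estimate (ignoring caching and request queuing) shows how spindle speed, seek time and transfer rate combine into an overall access time:

```python
# Rough estimate of hard disk access time from the typical figures above.
rpm = 7200                 # spindle speed, revolutions per minute
seek_ms = 9.0              # average seek time, ms
transfer_mb_s = 60.0       # average sustained transfer rate, MB/s

rotation_ms = 60_000 / rpm                 # one full revolution, ms (~8.33)
latency_ms = rotation_ms / 2               # average rotational latency (~4.17)
read_1mb_ms = 1.0 / transfer_mb_s * 1000   # time to read 1 MB (~16.7)

total_ms = seek_ms + latency_ms + read_1mb_ms
print(f"Average time to read 1 MB: about {total_ms:.1f} ms")
```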

What are device ports? Describe the main types of ports.

3. Computer technology
3.1 History of the development of computer technology
3.2 Methods for classifying computers
3.3 Other types of computer classification
3.4 Composition of the computing system
3.4.1 Hardware
3.4.2 Software
3.5 Classification of application software
3.6 Classification of utility software
3.7 The concept of information and mathematical support for computer systems
3.8 Summing up

  1. Computer Engineering

    1. History of the development of computer technology

Computing system, computer

Finding means and methods for the mechanization and automation of work is one of the main tasks of the technical disciplines. Automating work with data has its own characteristics and differs from automating other kinds of work. For this class of tasks, special types of devices are used, most of which are electronic. A set of devices designed for automatic or automated data processing is called computer technology. A specific set of interacting devices and programs designed to serve one area of work is called a computing system. The central device of most computing systems is the computer.

A computer is an electronic device designed to automate the creation, storage, processing and transportation of data.

How the computer works

In defining a computer as a device, we indicated the defining feature - electronic. However, automatic calculations were not always performed by electronic devices. Mechanical devices are also known that can perform calculations automatically.

When analyzing the early history of computer technology, some foreign researchers name a mechanical calculating device, the abacus, as the ancient predecessor of the computer. This "from the abacus" approach reflects a deep methodological misconception, since the abacus lacks the ability to perform calculations automatically, and for a computer that ability is decisive.

The abacus is the earliest mechanical counting device, originally a clay plate with grooves in which stones representing numbers were placed. Its appearance dates back to the fourth millennium BC, and its place of origin is considered to be Asia. In medieval Europe the abacus was replaced by ruled tables; calculations using them were called counting on the lines. In Russia in the 16th-17th centuries a much more advanced device appeared, which is still used today: the Russian abacus.

At the same time, we are all familiar with another device that can perform calculations automatically: the clock. Regardless of the operating principle, all types of clocks (hourglass, water clock, mechanical, electric, electronic, etc.) can generate movements or signals at regular intervals and record the resulting changes, that is, perform automatic summation of signals or movements. This principle can be seen even in the sundial, which contains only a recording device (the role of the generator is played by the Earth-Sun system).

A mechanical clock is a device consisting of a mechanism that automatically performs movements at regular specified intervals and a device for recording those movements. The place where the first mechanical clocks appeared is unknown; the earliest examples date back to the 14th century and belonged to monasteries (tower clocks).

At the heart of any modern computer, as in an electronic watch, lies a clock generator that produces electrical signals at regular intervals; these signals are used to drive all the devices of the computer system. Controlling a computer actually comes down to controlling the distribution of signals between devices. Such control can be carried out automatically (in which case we speak of program control) or manually, using external controls: buttons, switches, jumpers and the like (in early models). In modern computers, external control is largely automated with the help of special hardware-logic interfaces to which control and data input devices (keyboard, mouse, joystick and others) are connected. In contrast to program control, such control is called interactive.

Mechanical sources

The world's first automatic device for performing the addition operation was created on the basis of a mechanical watch. In 1623, it was developed by Wilhelm Schickard, a professor at the Department of Oriental Languages ​​at the University of Tübingen (Germany). Nowadays, a working model of the device has been reproduced from the drawings and has confirmed its functionality. The inventor himself called the machine a “summing clock” in his letters.

In 1642, the French mechanic Blaise Pascal (1623-1662) developed a more compact adding device, which became the world's first mass-produced mechanical calculator (mainly for the needs of Parisian moneylenders and money changers). In 1673, the German mathematician and philosopher G. W. Leibniz (1646-1716) created a mechanical calculator that could perform multiplication and division by repeated addition and subtraction.

During the 18th century, known as the Age of Enlightenment, new and more advanced models appeared, but the principle of mechanical control of computing operations remained the same. The idea of programming computational operations also came from the clock industry: the ancient monastery tower clock was set so that at a specified time it would switch on the mechanism connected to the bell system. Such programming was rigid: the same operation was performed at the same time.

The idea of ​​flexible programming of mechanical devices using perforated paper tape was first implemented in 1804 in the Jacquard loom, after which it was only one step to programmatic control of computing operations.

This step was taken by the outstanding English mathematician and inventor Charles Babbage (1792-1871) in his Analytical Engine, which, unfortunately, was never fully built by the inventor during his lifetime, but was reproduced in our days according to his drawings, so that today we have the right to talk about the Analytical Engine as a really existing device. A special feature of the Analytical Engine was that it was the first to implement the principle of dividing information into commands and data. The analytical engine contained two large units - a “warehouse” and a “mill”. Data was entered into the mechanical memory of the "warehouse" by installing blocks of gears, and then processed in the "mill" using commands that were entered from perforated cards (as in a Jacquard loom).

Researchers of Charles Babbage's work certainly note the special role of Countess Augusta Ada Lovelace (1815-1852), daughter of the famous poet Lord Byron, in the development of the Analytical Engine project. It was she who came up with the idea of ​​using perforated cards for programming computational operations (1843). In particular, in one of her letters she wrote: “The Analytical Engine weaves algebraic patterns in the same way as a loom reproduces flowers and leaves.” Lady Ada can rightfully be called the world's first programmer. Today one of the famous programming languages ​​is named after her.

Charles Babbage's idea of treating commands and data separately turned out to be extraordinarily fruitful. In the 20th century it was developed in the principles of John von Neumann (1946), and today in computing the principle of treating programs and data separately remains very important. It is taken into account both when developing the architectures of modern computers and when developing computer programs.

Mathematical sources

If we think about what objects the first mechanical predecessors of the modern electronic computer worked with, we must admit that numbers were represented either as linear movements of chain and rack mechanisms or as angular movements of gear and lever mechanisms. In both cases these were movements, which inevitably affected the size of the devices and the speed of their operation. Only the transition from recording movements to recording signals made it possible to reduce dimensions significantly and increase performance. However, on the way to this achievement several more important principles and concepts had to be introduced.

Leibniz's binary system. In mechanical devices, gears can have quite a lot of fixed and, most importantly, mutually distinguishable positions; the number of such positions is at least equal to the number of gear teeth. In electrical and electronic devices, what is registered is not the position of structural elements but the state of the device's elements, and there are only two states that are stable and easily distinguishable: on and off; open and closed; charged and discharged, and so on. Therefore, the traditional decimal system used in mechanical calculators is inconvenient for electronic computing devices.

The possibility of representing any numbers (and not only numbers) with binary digits was first proposed by Gottfried Wilhelm Leibniz in 1666. He came to the binary number system while researching the philosophical concept of unity and the struggle of opposites. An attempt to imagine the universe in the form of a continuous interaction of two principles (“black” and “white”, male and female, good and evil) and to apply the methods of “pure” mathematics to its study prompted Leibniz to study the properties of the binary representation of data. It must be said that Leibniz had already thought about the possibility of using a binary system in a computing device, but since there was no need for this for mechanical devices, he did not use the principles of the binary system in his calculator (1673).
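As a small illustration of what such a representation looks like in practice, the following sketch converts a few numbers to binary by repeated division by two and checks the result against Python's built-in conversion:

```python
# Binary representation of numbers, in the spirit of Leibniz's proposal.
def to_binary(n):
    """Repeatedly divide by two, collecting remainders (least significant first)."""
    digits = []
    while n > 0:
        digits.append(str(n % 2))
        n //= 2
    return "".join(reversed(digits)) or "0"

for value in (0, 5, 42, 255):
    assert to_binary(value) == format(value, "b")   # matches the built-in
    print(value, "->", to_binary(value))
# 42 -> 101010, 255 -> 11111111, and so on.
```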

The mathematical logic of George Boole. Speaking about the work of George Boole, researchers of the history of computer technology invariably emphasize that this outstanding English scientist of the 19th century was self-taught. Perhaps it was precisely the absence of a "classical" (as then understood) education that allowed George Boole to introduce revolutionary changes into logic as a science.

While studying the laws of thought, he applied in logic a system of formal notation and rules close to the mathematical one. This system was later called the algebra of logic, or Boolean algebra. Its rules are applicable to a wide variety of objects and their groups (sets, in the author's terminology). The main purpose of the system, as conceived by J. Boole, was to encode logical statements and reduce the structure of logical inferences to simple expressions close in form to mathematical formulas. The result of the formal evaluation of a logical expression is one of two logical values: true or false.

The importance of the algebra of logic was ignored for a long time, since its techniques and methods offered no practical benefit to the science and technology of the time. However, when the fundamental possibility of creating computing technology on an electronic basis arose, the operations introduced by Boole turned out to be very useful. They are inherently oriented toward working with only two entities: true and false. It is easy to see how useful they became for working with binary code, which in modern computers is also represented by just two signals: zero and one.

Not all of George Boole's system (nor all the logical operations he proposed) was used in the creation of electronic computers, but four main operations: AND (intersection), OR (union), NOT (negation) and EXCLUSIVE OR - form the basis of the operation of all types of processors in modern computers.

Fig. 3.1. Basic operations of logical algebra
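The same four operations can be summarized by a small truth-table sketch (here 0 stands for false and 1 for true):

```python
# Truth tables for the four basic operations of the algebra of logic
# (0 stands for false, 1 for true).
print(" A  B | AND OR XOR | NOT A")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a}  {b} |  {a & b}   {a | b}   {a ^ b}  |   {1 - a}")
```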

Composition of the computing system. A computing system is considered in terms of its hardware and software configurations. The interfaces of any computing system can be divided into serial and parallel. The system level of software is transitional: it ensures the interaction of the other programs of the computing system both with basic-level programs and directly with the hardware, in particular with the central processor.




Lecture 4. History of the development of computer technology. Classification of computers. Composition of the computing system. Hardware and software. Classification of utility and application software

History of the development of computer technology

The first calculating devices were mechanical. In 1642, the French mechanic Blaise Pascal developed a compact adding device, a mechanical calculator.

In 1673, the German mathematician and philosopher Leibniz improved it by adding multiplication and division operations. Throughout the 18th century, ever more advanced, but still mechanical, computing devices were developed based on gear, rack-and-pinion, lever and other mechanisms.

The idea of programming computational operations came from the clock-making industry. Such programming was rigid: the same operation was performed at the same time (compare a machine tool operating from a template).

The idea of flexible programming of computing operations was put forward by the English mathematician Charles Babbage in 1836-1848. A feature of his Analytical Engine was the principle of dividing information into commands and data. However, the project was not implemented.

Programs for computing on Babbage's machine, compiled by the poet Byron's daughter Ada Lovelace (1815-1852), are very similar to the programs later written for the first computers. This remarkable woman has been called the first programmer in the world.

When the registration of the positions of a mechanical device gave way to the registration of the states of the elements of an electronic device, the decimal system became inconvenient, because each element has only two states: on and off.

The possibility of representing any numbers in binary form was first proposed by Leibniz in 1666.

The idea of ​​encoding logical statements into mathematical expressions:

  • true (True) or false (False);
  • in binary code 0 or 1,

was realized by the English mathematician George Boole (1815-1864) in the middle of the 19th century.

However, the algebra of logic he developed, “Boole algebra,” found application only in the next century, when a mathematical apparatus was needed to design computer circuits using the binary number system. The American scientist Claude Shannon “connected” mathematical logic with the binary number system and electrical circuits in his famous dissertation (1936).

In the algebra of logic, computer design relies on basically four operations:

  • AND (intersection, or conjunction: A ∧ B);
  • OR (union, or disjunction: A ∨ B);
  • NOT (inversion: ¬A);
  • EXCLUSIVE OR: (A ∧ ¬B) ∨ (¬A ∧ B).
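A quick check (a minimal Python sketch) confirms that EXCLUSIVE OR is indeed expressible through AND, OR and NOT as (A ∧ ¬B) ∨ (¬A ∧ B):

```python
# EXCLUSIVE OR expressed through AND, OR and NOT.
for A in (False, True):
    for B in (False, True):
        xor_direct = A != B                            # A XOR B
        xor_composed = (A and not B) or (not A and B)  # (A AND NOT B) OR (NOT A AND B)
        assert xor_direct == xor_composed
        print(A, B, "->", xor_direct)
```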

In 1936, the English mathematician A. Turing and, independently of him, E. Post put forward and developed the concept of an abstract computing machine. They proved the fundamental possibility of solving any problem with an automatic machine, provided that the problem can be algorithmized.

In 1946, John von Neumann, Goldstine and Burks (Princeton Institute for Advanced Study) produced a report containing a detailed description of the principles for constructing digital computers, principles that are still in use today.

  1. John von Neumann's computer architecture includes:
    1. the CPU, consisting of a control unit (CU) and an arithmetic logic unit (ALU);
    2. memory: main (RAM) and external;
    3. input devices;
    4. output devices.
  2. The principles of computer operation proposed by von Neumann:
    1. homogeneity of memory;
    2. program control;
    3. addressability.
  3. We can distinguish the main generations of computers and their characteristics:

  • 1955-60: basic element - vacuum tube; computer example - IBM 701 (1952); speed - 8,000 op./s; RAM capacity - 20,480 bytes; note - Shannon, von Neumann, Norbert Wiener.
  • 1960-65: basic element - transistor; computer example - IBM 360-40 (1964); speed - 246,000 op./s; RAM capacity - 256,000 bytes; note - the FORTRAN, COBOL and ALGOL languages.
  • 1965-70: basic element - integrated circuit (about 1,400 elements); computer example - IBM 370-145 (1970); speed - 1,230,000 op./s; RAM capacity - 512,000 bytes; note - minicomputers, MS DOS, Unix OS, networks.
  • 1970-90: basic element - large-scale integrated circuit (tens of thousands of elements); computer example - IBM 370-168 (1972); speed - 7,700,000 op./s; RAM capacity - 8,200,000 bytes; note - the PC, graphical operating systems, the Internet.
  • From 1990 to the present: basic element - large-scale integrated circuit (millions of elements); computer example - IBM z990 server (2003); speed - 9·10^9 op./s; RAM capacity - 256·10^9 bytes; note - artificial intelligence, speech recognition, laser technology.

The rapid development of computing systems began in the 1960s with the abandonment of vacuum tubes and the development of semiconductor and, later, laser technology.

The efficiency of mainframes grew significantly in the 1970s with the development of processors based on integrated circuits.

A qualitative leap in the development of computers occurred in the 1980s with the invention of the personal computer and the development of the global information network, the Internet.

Classification of computers

  1. By purpose:
    • supercomputers;
    • servers;
    • embedded computers (microprocessors);
    • personal computers (PCs).

Supercomputers - computing centers - are created to solve extremely complex computing problems (modeling complex phenomena, processing extremely large amounts of information, making forecasts, etc.).

Servers (from the English serve: to serve, to manage) are computers that serve a local or global network; they specialize in providing information services and serving the computers of large enterprises, banks, educational institutions, etc.

Embedded computers (microprocessors) have become widespread in manufacturing and in household appliances, wherever control can be reduced to executing a limited sequence of commands (robots on a conveyor, on-board computers, controllers built into household appliances, and so on).

Personal computers (PCs) are designed for the work of a single person and are therefore used everywhere. Their birth date is considered to be August 12, 1981, when IBM introduced its first model. The PC brought about a computer revolution in the lives of millions of people and had a huge impact on the development of human society.

PCs are divided into mass-market, business, portable, entertainment models and workstations.

PC Standards:

  • Consumer PC (mass-market);
  • Office PC (business);
  • Entertainment PC (entertainment);
  • Workstation PC (workstation);
  • Mobile PC (portable).

Most PCs are mass-market models.

Business (office) PCs contain professional programs but have minimal requirements for graphics and sound reproduction.

Entertainment PCs offer a wide range of multimedia facilities.

Workstations have increased data storage requirements.

For portable devices, it is mandatory to have access to a computer network.

  2. By level of specialization:
    • universal;
    • specialized (examples: file server, Web server, print server, etc.).
  3. By standard size:
    • desktop;
    • portable (notebook, iPad);
    • pocket (palmtop);
    • mobile computing devices (PDA, personal digital assistant), combining the functions of a palmtop and a cell phone.
  4. By hardware compatibility:
    • IBM PC;
    • Apple Macintosh.
  5. By processor type:
    • Intel (in IBM personal computers);
    • Motorola (in Apple Macintosh personal computers).

Composition of the computing system

Consider the hardware and software configuration, since often the solution to the same problems can be provided by both hardware and software. The criterion in each case is operational efficiency.

It is believed that increasing operational efficiency through the development of hardware is on average more expensive, but implementing solutions using software requires highly qualified personnel.

Hardware

The hardware of a computing system comprises its devices and instruments (a block-modular design is used).

Based on how devices are placed relative to the central processing unit, internal and external devices are distinguished. External devices are the input/output (peripheral) devices and additional devices intended for long-term data storage.

Coordination between individual blocks and nodes is carried out using transitional hardware-logical devices - hardware interfaces operating in accordance with approved standards.

The interfaces of any computer system can be divided into serial and parallel.

Parallel interfaces are more complex and require synchronization of the transmitting and receiving devices, but they have higher performance, which is measured in bytes per second (byte/s, KB/s, MB/s). They are used (rarely nowadays) for connecting printers.

Serial interfaces are simpler and slower; they are called asynchronous interfaces. Because the transmissions are not synchronized, the useful data is preceded and followed by service data (1-3 service bits per byte), and performance is measured in bits per second (bit/s, Kbit/s, Mbit/s).

They are used to connect input, output and storage devices: mice, keyboards, flash memory, sensors, voice recorders, video cameras, communication devices, printers, etc.
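The cost of those service bits is easy to estimate; the sketch below assumes the common framing of one start bit and one stop bit per 8-bit byte (other framings add parity or extra stop bits):

```python
# Effective throughput of an asynchronous serial link:
# each 8-bit byte is framed by service bits (here 1 start + 1 stop bit).
def effective_bytes_per_second(bit_rate, service_bits=2):
    bits_per_byte = 8 + service_bits
    return bit_rate / bits_per_byte

for rate in (9_600, 115_200, 1_000_000):
    print(f"{rate} bit/s -> about {effective_bytes_per_second(rate):,.0f} byte/s")
# 9,600 bit/s carries roughly 960 useful bytes per second.
```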

The standards for hardware interfaces in computing are called protocols. A protocol is a set of technical conditions that must be met by hardware developers in order for devices to work together successfully.

Software

Software (the software configuration) consists of programs: ordered sequences of commands. There is a relationship between programs: some rely on others (at a lower level), so we can speak of an inter-program interface.

  1. Basic level (BIOS): the lowest level. The basic software is responsible for interaction with the basic hardware. It is stored in a read-only memory (ROM) chip.

If the parameters of the basic tools need to be changed during operation, a reprogrammable chip, an erasable programmable read-only memory (EPROM), is used. This is implemented with a "non-volatile memory" (CMOS) chip, which also operates while the computer boots up.

  2. System level: a transitional level that ensures the interaction of the other programs of the computer system both with basic-level programs and directly with the hardware, in particular with the central processor.

The system level includes:

  • device drivers: programs that ensure the interaction of the computer with specific devices;
  • program installation tools;
  • standard user interface tools, which ensure effective interaction with the user, data entry into the system and retrieval of results.

The set of system-level programs forms the core of the PC's operating system.

If a computer is equipped with system-level software, it is already prepared:

  • for the interaction of its software with the equipment;
  • for the installation of higher-level programs;
  • and, most importantly, for interaction with the user.

This is a mandatory and, in most cases, sufficient condition for a person to be able to work on the computer.

  3. Service level. Service-level software works with both basic-level and system-level programs. The main purpose of utility programs (utilities) is to automate checking, setting up and configuring the PC. In addition, they are used to extend and improve the functions of system programs. Some utility-level programs are included in the operating system from the start as standard ones.

There are two alternative directions in the development and operation of utility programs: integration with the operating system and autonomous operation.

In the second case, they provide the user with more options to personalize their interaction with the hardware and software.

  4. Application level: a set of application programs with which specific tasks are performed at a given workplace. Their range is very wide (from production to entertainment).

The availability of application software and the breadth of a PC's functionality depend directly on the operating system used, that is, on which system tools its kernel contains and, therefore, on how it ensures the interaction of the triad: human, programs, hardware.

Classification of utility software

  1. File managers. Used to copy, move and rename files, create directories, delete files and directories, search for files and navigate the file structure (for example, Windows Explorer).
  2. Archivers: file compression tools.
  3. Viewing and playback tools: simple, universal tools that do not provide editing but allow you to view (play back) documents of various types.
  4. Diagnostic tools: automate the processes of diagnosing software and hardware. They are used not only to troubleshoot problems but also to optimize computer performance.
  5. Monitoring tools (monitors): allow you to observe the processes occurring in the computer. Two modes are used: real-time monitoring and monitoring with recording of the results in a log file (used when monitoring must be carried out automatically and remotely).
  6. Installation monitors: provide control over software installation, monitor the state of the surrounding software environment, and allow you to restore links lost as a result of removing previously installed programs.

The simplest monitors are usually part of the operating system and are located at the system level.

  7. Communication tools (communication programs): establish connections with remote computers, handle the transmission of e-mail messages, and so on.
  8. Computer security tools (active and passive). Passive protection means are backup programs; antivirus software is used for active protection.
  9. Electronic digital signature (EDS) tools.

Classification of application programs

  1. Text editors (Notepad, WordPad, Lexicon, the Norton Commander editor, etc.).
  2. Word processors: allow you not only to enter and edit text but also to format it, i.e. to design documents. The tools of word processors therefore include means of combining text, graphics and tables, as well as tools for automating formatting (Word).
  3. Graphic editors. These include raster (bitmap) editors, vector editors and tools for creating three-dimensional graphics (3D editors).

In raster editors (Paint), a graphic object is represented as a combination of points, each of which has brightness and colour properties. This option is effective when the image has many halftones and information about the colour of the object's elements is more important than information about their shape. Raster editors are widely used for retouching images and creating photo effects, but they are not always convenient for creating new images and are uneconomical, because the images contain a lot of redundancy.

In vector editors (CorelDraw), the elementary object of an image is not a point but a line. This approach is typical for drafting and graphic work, where the shape of the lines is more important than information about the colour of the individual points that make them up. This representation is much more compact than the raster one. Vector editors are convenient for creating images but are practically never used for processing finished drawings.
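A rough size comparison (the figures are illustrative assumptions, not measurements) shows why a raster description is redundant compared with a vector one for simple line drawings:

```python
# Rough comparison of raster and vector descriptions of one 800x600 drawing.
width, height, bytes_per_pixel = 800, 600, 3      # 24-bit colour raster image
raster_bytes = width * height * bytes_per_pixel   # every point is stored

# A vector description stores only the primitives, e.g. 200 lines,
# each defined by two endpoints and a colour (illustrative estimate).
lines, bytes_per_line = 200, 2 * 2 * 4 + 3        # two (x, y) pairs + RGB
vector_bytes = lines * bytes_per_line

print(f"Raster: {raster_bytes:,} bytes")   # 1,440,000 bytes
print(f"Vector: {vector_bytes:,} bytes")   # 3,800 bytes
```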

Three-dimensional graphics editors allow you to flexibly control the interaction of object surface properties with the properties of light sources, as well as to create three-dimensional animation, which is why they are also called 3D animators.

  4. Database management systems (DBMS). Their main functions are:
  • creating an empty database;
  • providing tools for filling it in and for importing data from the tables of another database;
  • providing access to the data, together with search and filtering tools.
  5. Spreadsheets. These are complex tools for storing and processing data (Excel). They provide a wide range of methods for working with numerical data.
  6. Computer-aided design (CAD) systems. Designed to automate design and drafting work; they can also perform basic calculations and select structural elements from databases.
  7. Desktop publishing systems. Designed to automate the layout of printed publications. They occupy an intermediate position between word processors and computer-aided design systems; typically they are applied to documents that have already been processed in word processors and graphic editors.
  8. Expert systems (analysis of data contained in knowledge bases). Their characteristic feature is the ability to develop themselves: when necessary, the system generates a sufficient set of questions for the expert and automatically improves its own quality.
  9. Web editors. They combine the properties of text and graphic editors and are intended for creating and editing Web documents.
  10. Browsers (viewers for Web documents).
  11. Integrated office management systems. Their main functions are editing and formatting simple documents, centralizing e-mail, fax and telephone communications, and dispatching and monitoring enterprise documents.
  12. Accounting systems. They combine the functions of text and spreadsheet editors, automate the preparation and recording of primary documents, the maintenance of accounts in the chart of accounts, and the preparation of regular reports.
  13. Financial analysis systems. Used in banking and stock exchange institutions; they allow you to monitor and forecast the situation on financial, stock and commodity markets, perform analyses, and prepare reports.
  14. Geographic information systems (GIS). Designed to automate cartographic and geodetic work.
  15. Video editing systems: for processing video materials.
  16. Educational, developmental, reference and entertainment programs. Their distinguishing feature is increased requirements for multimedia (musical compositions, graphic animation and video materials).

In addition to hardware and software, there is information support (spell checkers, dictionaries, thesauri, etc.).

In specialized computer systems (on-board), the set of software and information support is called mathematical software.


