
What is the von Neumann architecture?

The von Neumann architecture, also known as the Princeton architecture, is a memory organization in which program instructions and data are stored in the same memory. Instruction addresses and data addresses point to different locations within that single memory, so program instructions and data have the same width. For example, the program instructions and data of Intel's 8086 CPU are both 16 bits wide.

Structure introduction:

Any discussion of the development of computers has to mention the American scientist John von Neumann. From the beginning of the 20th century, scientists in physics and electronics debated what structure a machine for numerical calculation should use. Many were attached to the decimal system, the counting method familiar to humans, so at the time the call for developing analog computers was the louder one. In the mid-1940s, von Neumann boldly proposed abandoning decimal and using binary as the basis of the digital computer. He also proposed that the calculation program be prepared in advance, so that the computer would carry out numerical calculations in the prescribed order: the stored-program concept.

This theory came to be called the von Neumann architecture. Everything from EDVAC to the most advanced computers of today has adopted it, which is why von Neumann is called the father of the digital computer.

Electronic computer systems designed on this concept and principle are called "von Neumann architecture" computers. Processors of the von Neumann architecture use a single memory for instructions and data and transfer both over the same bus.


Characteristics

A von Neumann architecture processor has the following characteristics:

1. There must be a memory;

2. There must be a controller;

3. There must be an arithmetic unit to perform arithmetic and logical operations;

4. There must be input and output devices for human-machine communication.

In addition, programs and data are stored in a unified way, and the machine works automatically under the control of the program.

Functions

A computer based on von Neumann architecture must have the following functions:

It can accept the required programs and data.

It can remember programs, data, intermediate results, and final results over long periods.

It can perform arithmetic operations, logical operations, data transfers, and other data processing.

It can output processing results to the user as required.

In order to accomplish the above functions, a computer must have five basic components.

These are:

an input device for entering data and programs;

a memory for storing programs and data;

an arithmetic unit for processing data;

a controller for controlling program execution;

an output device for outputting processing results.
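The five components can be sketched as a toy simulation. In the sketch below, a single Python list plays the memory and holds both the program and its data (the defining von Neumann trait), a loop plays the controller, and an accumulator stands in for the arithmetic unit. The three-instruction ISA is invented purely for illustration; it is not any real machine's instruction set.

```python
# Minimal sketch of a von Neumann machine: ONE memory array holds both the
# program and its data, and a hypothetical three-instruction ISA
# (LOAD addr, ADD addr, STORE addr, HALT) drives a fetch-decode-execute loop.

def run(memory):
    """Execute instructions starting at address 0; HALT stops the machine."""
    pc = 0        # program counter: an address into the SAME memory as data
    acc = 0       # accumulator register (our stand-in arithmetic unit)
    while True:
        op, arg = memory[pc]          # fetch: the instruction comes from memory
        pc += 1
        if op == "LOAD":              # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return acc

# The program occupies cells 0-4; the data lives in cells 5-7 of the SAME memory.
memory = [
    ("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT", 0), None,   # code
    2, 3, 0,                                                    # data
]
print(run(memory), memory[7])   # both print 5: the program computed 2 + 3
```

Because instructions and data share one address space, the program could in principle read or even overwrite its own instructions, which is exactly what "stored program" means.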

Bottleneck

Separating the CPU from memory is not without cost: it leads to the so-called von Neumann bottleneck. The bandwidth (data transfer rate) between CPU and memory is quite small compared with the capacity of the memory, and in modern computers it is also very small compared with the speed of the CPU. In some cases (when the CPU must execute simple instructions over huge amounts of data), the data flow becomes a serious constraint on overall throughput, and the CPU sits idle while data moves into or out of memory. Because CPU speed and memory capacity have grown much faster than the bandwidth between them, the bottleneck problem keeps getting worse. The term "von Neumann bottleneck" first appeared in 1977, in John Backus's ACM Turing Award lecture. According to Backus:

"... there is indeed a way to change the storage device, which is more advanced than circulating a large amount of data through the von Neumann bottleneck. The word bottleneck is not only a description of the data flow of the problem itself, but more importantly, it is also an intelligent bottleneck, which limits our way of thinking to the mode of' one character at a time'. This makes us afraid to think about broader concepts. So programming becomes a kind of character data stream that plans and refines the bottleneck of von Neumann, and most of the problems are not the characteristics of data, but how to find data. "

Caches between the CPU and memory ease the efficiency problem of the von Neumann bottleneck, and branch-prediction algorithms also help. The "intellectual bottleneck" Backus discussed in 1977, however, has seen less obvious progress. Modern functional programming and object-oriented programming rarely perform the wholesale "move large numbers of values in and out of memory" operations of early Fortran, but in all fairness those operations still account for a large share of a computer's execution time.
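The bottleneck described above can be captured in a back-of-the-envelope cost model: a computation can finish no faster than its operands can cross the single CPU-memory bus, so past a certain point a faster CPU buys nothing. All the numbers below are illustrative, not measurements of any real hardware.

```python
# Toy cost model for the von Neumann bottleneck: total cycles are bounded by
# whichever is slower, the compute time or the memory traffic over one bus.

def cycles(n_ops, words_moved, cpu_ops_per_cycle, bus_words_per_cycle):
    compute = n_ops / cpu_ops_per_cycle          # time if memory were free
    traffic = words_moved / bus_words_per_cycle  # time to move the data
    return max(compute, traffic)                 # the slower side dominates

# Adding 1 to each of a million array elements: 1M ops, but 2M words of
# traffic, since every element is read from memory and written back.
fast   = cycles(1_000_000, 2_000_000, cpu_ops_per_cycle=8,  bus_words_per_cycle=1)
faster = cycles(1_000_000, 2_000_000, cpu_ops_per_cycle=64, bus_words_per_cycle=1)
print(fast, faster)   # both 2000000.0: an 8x faster CPU gains nothing here
```

This is exactly the "simple instructions on huge data" case from the text: the workload is bandwidth-bound, which is why caches (which reduce `words_moved` seen by the bus) help where raw CPU speed does not.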

CPU architectures can be divided into the von Neumann architecture and the Harvard architecture.

Structure

Many central processors and microcontrollers use the von Neumann structure. Besides Intel's 8086 mentioned above, Intel's other CPUs, ARM's ARM7, and MIPS processors also adopt the von Neumann structure.

In 1945, von Neumann first put forward the "stored program" concept and the binary principle. Electronic computer systems designed with this concept and principle later came to be called "von Neumann architecture" computers. Processors of the von Neumann architecture use the same memory for instructions and data and transfer both over the same bus.

A von Neumann architecture processor has the following characteristics: it must have a memory; it must have a controller; it must have an arithmetic unit to perform arithmetic and logical operations; and it must have input and output devices for human-machine communication.

Harvard structure

The Harvard structure is a memory organization that separates program instruction storage from data storage. The CPU first reads an instruction from program memory, decodes it to obtain a data address, and then reads the data from the corresponding data memory for the next step (usually execution). Because program storage and data storage are separate, instructions and data can have different widths. For example, the program instructions of Microchip's PIC16 chips are 14 bits wide, while the data is 8 bits wide.
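The separation just described can be sketched in code: two distinct memories, one for instructions and one for data, which may even use different word widths. The sketch below loosely mirrors the PIC16 example (14-bit instructions, 8-bit data), but the opcode encoding is invented for illustration and is not the real PIC instruction format.

```python
# Sketch of a Harvard-style machine: instructions and data live in SEPARATE
# memories with different word widths (14-bit instructions vs 8-bit data).

INSTR_WIDTH, DATA_WIDTH = 14, 8

def encode(op, addr):
    """Pack a hypothetical 6-bit opcode and 8-bit address into 14 bits."""
    return ((op << 8) | addr) & ((1 << INSTR_WIDTH) - 1)

def step(program, data, pc, acc):
    word = program[pc]                  # fetch from the INSTRUCTION memory
    op, addr = word >> 8, word & 0xFF   # decode: opcode + data address
    if op == 1:                         # LOAD: read from the DATA memory
        acc = data[addr]
    elif op == 2:                       # ADD, wrapping at the 8-bit data width
        acc = (acc + data[addr]) & ((1 << DATA_WIDTH) - 1)
    elif op == 3:                       # STORE: write to the DATA memory
        data[addr] = acc
    return pc + 1, acc

program = [encode(1, 0), encode(2, 1), encode(3, 2)]   # instruction memory
data = [200, 100, 0]                                   # data memory
pc, acc = 0, 0
for _ in program:
    pc, acc = step(program, data, pc, acc)
print(data[2])   # 44: (200 + 100) mod 256, because data words are 8 bits
```

Note that a data address and an instruction address here index entirely different arrays, so the widths never have to match, which is precisely what the unified memory of a von Neumann machine rules out.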

Microprocessors with the Harvard structure usually have high execution efficiency: because instructions and data are organized and stored separately, the next instruction can be fetched in advance while the current one executes. Many central processing units and microcontrollers use the Harvard architecture. Besides Microchip's PIC series mentioned above, there are Motorola's MC68 series, Zilog's Z8 series, ATMEL's AVR series, and the ARM9, ARM10, and ARM11.

The Harvard structure refers to an architecture with independent program and data spaces, aimed at reducing the memory-access bottleneck while a program runs.

Consider the common convolution operation: one instruction fetches two operands at once, and pipelining adds an instruction fetch on top of that. If programs and data are accessed through a single bus, instruction fetches and operand fetches conflict, which is very unfavorable for computation-heavy loops. The Harvard structure largely resolves the conflict between instruction fetch and data fetch. Accessing the second operand, however, requires an enhanced Harvard structure: either split the data space again and add another set of buses, as TI does, or use an instruction cache, as Analog Devices does, so that some data can be stored in the instruction region.

In DSP algorithms, one of the biggest tasks is exchanging information with memory: sampled input data, filter coefficients, and program instructions. For example, multiplying two numbers stored in memory requires fetching three binary words: the two numbers to be multiplied and one program instruction describing what to do with them. The internal structure of a DSP is generally Harvard, with at least four sets of buses on chip: a program data bus, a program address bus, a data data bus, and a data address bus. Separating the program bus from the data bus allows an instruction word (from program memory) and an operand (from data memory) to be fetched at the same time without interference, meaning an instruction and its operand can both be ready within one machine cycle. Some DSP chips contain additional buses, such as a DMA bus, to complete even more work per cycle. This multi-bus structure is like laying an expressway network inside the DSP, ensuring that the computing units get the data they need in time and raising the computing speed. For a DSP, then, internal buses are a resource: the more buses, the more the chip can do per cycle. The Super Harvard architecture (SHARC) adds an instruction cache and a dedicated I/O controller to the Harvard architecture.
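The multiply example above (one instruction word plus two operands per operation) can be turned into a rough cycle count: with a single shared bus the three memory accesses are serialized, while extra buses let them overlap. The model below is deliberately crude, assuming every access takes one bus cycle and ignoring caches and pipelining; the numbers are illustrative, not from any real chip.

```python
# Rough cycle-count sketch for a stream of multiply-accumulate operations:
# each MAC needs 3 memory accesses (instruction word + two operands).

def mac_cycles(n_macs, buses):
    accesses_per_mac = 3                      # instruction + two operands
    # Per cycle, at most `buses` memory accesses can proceed in parallel,
    # so each MAC takes ceil(3 / buses) cycles of memory traffic.
    per_mac = -(-accesses_per_mac // buses)   # ceiling division
    return n_macs * per_mac

print(mac_cycles(1000, buses=1))   # 3000: von Neumann, one shared bus
print(mac_cycles(1000, buses=2))   # 2000: plain Harvard, program + data bus
print(mac_cycles(1000, buses=3))   # 1000: SHARC-like, all three fetches overlap
```

The jump from one bus to three is exactly the "expressway" effect the text describes: the arithmetic unit is kept fed, so throughput approaches one MAC per cycle.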

A Harvard architecture processor has two distinctive characteristics: two independent memory modules store instructions and data respectively, and neither module allows instructions and data to coexist; and two independent buses serve as dedicated paths between the CPU and each memory, with no connection between the two buses.

The improved Harvard structure has the following characteristics: to enable parallel processing, it has an independent address bus and an independent data bus; the common address bus is used to access the two storage modules (the program storage module and the data storage module), and the common data bus completes data transfers between the program or data storage module and the CPU.