At the beginning of this article, let me answer the question of how computer architecture differs from computer organization.
The answer is simple: computer architecture covers the elements that matter to the programmer, i.e. those that affect how programs are written. Examples of such parameters are the data types, the memory addressing modes, and the instruction set. Computer organization covers the operational units and their interconnections that carry out program execution but do not affect the program logic and are invisible to the programmer. Examples of such units are the memory technology, the processor clock speed, and the control signals.
We will look at the basic classifications of computer architectures:
The organization of memory:
- Von Neumann architecture – data and instructions are stored in the same memory space and coded in the same way.
In this architecture the computer consists of four main components:
- a memory in which each cell has its own address
- a control unit that directs the operation of the processor
- an arithmetic logic unit that performs arithmetic and bitwise operations
- input/output devices responsible for interaction with the operator
- Harvard architecture – very similar to the von Neumann architecture, except that data memory and instruction memory are physically separate
- Modified Harvard architecture – combines features of the von Neumann and Harvard architectures: data memory and instruction memory are separate, but they share common data and address pathways
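The defining idea of the von Neumann architecture can be sketched in code. Below is a toy fetch-decode-execute loop (an illustrative mini-machine, not a real ISA): note that instructions and data live side by side in the same addressable memory.

```python
# Toy von Neumann machine: instructions and data share one memory.
memory = [
    ("LOAD", 6),     # addr 0: load memory[6] into the accumulator
    ("ADD", 7),      # addr 1: add memory[7] to the accumulator
    ("STORE", 8),    # addr 2: store the accumulator at memory[8]
    ("HALT", None),  # addr 3: stop
    None, None,      # addrs 4-5: unused
    2,               # addr 6: data
    3,               # addr 7: data
    0,               # addr 8: the result will be written here
]

acc, pc = 0, 0  # accumulator and program counter
while True:
    op, arg = memory[pc]  # fetch: instructions come from the same memory as data
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break

print(memory[8])  # 5
```

In a Harvard machine, by contrast, the `memory` above would be split into two separate arrays, one for the program and one for the data, each with its own address space.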
The number of concurrent instruction (or control) streams and data streams available in the architecture (Flynn's taxonomy):
- SISD – single instruction stream, single data stream – a sequential computer; examples of SISD architecture are traditional uniprocessor machines such as older personal computers (most current PCs have multiple cores) and mainframe computers
- SIMD – single instruction stream, multiple data streams – all parallel units share the same instruction but apply it to different data elements; used in array and vector processors (for example, in GPUs)
- MISD – multiple instruction streams, single data stream – an uncommon architecture in which many functional units perform different operations on the same data; used in fault-tolerant computers or, for example, in weather forecasting
- MIMD – multiple instruction streams, multiple data streams – multiple autonomous, independent processors simultaneously execute different instructions on different data, which allows full parallelism; most parallel computers are of this type
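The SISD/SIMD distinction can be sketched in plain Python. This is conceptual only: real SIMD happens in hardware (vector registers, GPU lanes), but the contrast between "one add per step" and "one logical add over all elements" is the same.

```python
# SISD vs SIMD, sketched conceptually in plain Python.
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# SISD: a single instruction stream processes one data element per step.
sisd_result = []
for i in range(len(a)):
    sisd_result.append(a[i] + b[i])  # one scalar add at a time

# SIMD: conceptually, a single "add" operation is applied to all
# elements (lanes) at once; libraries such as NumPy map this style
# of code onto real vector hardware.
simd_result = [x + y for x, y in zip(a, b)]  # one logical operation over all lanes

print(sisd_result == simd_result)  # True
```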
The division of work and access to memory by CPUs:
- SMP – Symmetric Multiprocessing – two or more identical processors are connected to a single, shared main memory; currently most multiprocessor systems use this architecture
- ASMP – Asymmetric Multiprocessing – individual processors are not treated equally; they can be dedicated to specific tasks at specific times
- NUMA – Non-Uniform Memory Access – a memory design in which the access time depends on the memory location relative to the processor; a processor can access its own local memory faster than the local memory of another processor or memory shared between processors
- AMP – Asynchronous Multiprocessing – like SMP, but the processors are not identical; in particular, different processors can have different clock timings or run different operating systems
- MPP – Massively Parallel Processors – a computer that uses a large number of processors, each forming its own subsystem with its own memory and operating system
Another common use of the term “computer architecture” is to mean the type of processor together with its set of instructions. A more accurate term in this case is instruction set architecture (ISA).