Saturday, 15 July 2017

Performance measures of a computer

Computer education point
PERFORMANCE MEASURES: In this section, we consider the important issue of assessing the performance of a computer. In particular, we focus our discussion on a number of performance measures that are used to assess computers. Let us admit at the outset that there are various facets to the performance of a computer. For example, a user of a computer measures its performance based on the time taken to execute a given job (program). On the other hand, a laboratory engineer measures the performance of his system by the total amount of work done in a given time. While the user considers the program execution time a measure for performance, the laboratory engineer considers the throughput a more important measure for performance. A metric for assessing the performance of a computer helps comparing alternative designs.
Performance analysis should help answer questions such as: how fast can a program be executed on a given computer? In order to answer such a question,
we need to determine the time taken by a computer to execute a given job. We define the clock cycle time as the time between two consecutive rising (trailing) edges of a periodic clock signal (Fig. 1.1). Clock cycles allow counting unit computations, because the storage of computation results is synchronized with rising (trailing) clock edges. The time required to execute a job by a computer is often expressed in terms of clock cycles.
We denote the number of CPU clock cycles for executing a job as the cycle count (CC), the cycle time by CT, and the clock frequency by f = 1/CT. The time taken by the CPU to execute a job can be expressed as

CPU time = CC × CT = CC / f
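The relation above can be checked with a short sketch. The cycle count and clock rate below are made-up illustrative values, not taken from the text.

```python
def cpu_time(cycle_count, clock_rate_hz):
    """CPU time = CC * CT = CC / f."""
    return cycle_count / clock_rate_hz

# Example: 2 million clock cycles on a 200 MHz processor.
# Cycle time CT = 1/f = 5 ns, so CPU time = 2e6 * 5e-9 = 0.01 s.
cc = 2_000_000
f = 200e6
print(cpu_time(cc, f))  # 0.01 (seconds)
```

Note that doubling the clock rate halves the CPU time only if the cycle count stays the same, which is why CC and f must always be considered together.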
It may be easier to count the number of instructions executed in a given program than to count the number of CPU clock cycles needed to execute that
program. Therefore, the average number of clock cycles per instruction (CPI) has been used as an alternative performance measure. The following equation shows how to compute the CPI:

CPI = CPU clock cycles for the program / Instruction count
It is known that the instruction set of a given machine consists of a number of instruction categories: ALU (simple assignment and arithmetic and logic instructions), load, store, branch, and so on. In the case that the CPI for each instruction category is known, the overall CPI can be computed as

CPI = Σi (CPIi × Ii) / Instruction count

where Ii is the number of times an instruction of type i is executed in the program and CPIi is the average number of clock cycles needed to execute such an instruction.
Example Consider computing the overall CPI for a machine A for which the following performance measures were recorded when executing a set of benchmark programs. Assume that the clock rate of the CPU is 200 MHz.
Assuming the execution of 100 instructions, the overall CPI can be computed as
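The original table of recorded measures and the worked computation are not reproduced here. As a stand-in, the following sketch applies the weighted-CPI formula to a hypothetical mix of 100 instructions; the category percentages and per-category CPIs are assumptions for illustration only.

```python
# Hypothetical instruction mix over 100 executed instructions.
# (Counts and per-category CPIs are invented, not from the example.)
mix = {
    "ALU":    {"count": 40, "cpi": 1},
    "load":   {"count": 20, "cpi": 3},
    "store":  {"count": 15, "cpi": 2},
    "branch": {"count": 25, "cpi": 2},
}

total_instructions = sum(c["count"] for c in mix.values())          # 100
total_cycles = sum(c["count"] * c["cpi"] for c in mix.values())     # 180
overall_cpi = total_cycles / total_instructions
print(overall_cpi)  # 1.8
```

The same computation works for any mix: weight each category's CPI by how often its instructions execute, then divide by the total instruction count.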
It should be noted that the CPI reflects the organization and the instruction set architecture of the processor while the instruction count reflects the instruction set architecture and compiler technology used. This shows the degree of interdependence between the two performance parameters. Therefore, it is imperative that both the CPI and the instruction count are considered in assessing the merits of a given computer or equivalently in comparing the performance of two machines.
A different performance measure that has been given a lot of attention in recent years is MIPS (million instructions per second, the rate of instruction execution per unit time), which is defined as

MIPS = Instruction count / (Execution time × 10^6) = Clock rate / (CPI × 10^6)
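The MIPS definition can be sketched directly from the clock rate and overall CPI; the 200 MHz rate and CPI of 2 below are illustrative values.

```python
def mips(clock_rate_hz, cpi):
    """MIPS = f / (CPI * 10**6)."""
    return clock_rate_hz / (cpi * 1e6)

# A 200 MHz clock with an overall CPI of 2 yields 100 MIPS.
print(mips(200e6, 2))  # 100.0
```

A caveat worth remembering: MIPS depends on the instruction set, so comparing two machines with different instruction sets by MIPS alone can be misleading.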

Saturday, 8 July 2017

Addressing mode


Autodecrement Mode Similar to the autoincrement, the autodecrement mode uses a register to hold the address of the operand. However, in this case the content of the autodecrement register is first decremented and the new content is used as the effective address of the operand. In order to reflect the fact that the content of the autodecrement register is decremented before accessing the operand, a minus sign (−) is included before the indirection parentheses. Consider, for example, the instruction LOAD −(Rauto), Ri. This instruction decrements the content of the register Rauto and then uses the new content as the effective address of the operand that is to be loaded into register Ri. Figure 2.11 illustrates the autodecrement addressing mode.
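The decrement-then-access order can be made concrete with a small simulation. The register names, addresses, and memory contents below are invented for illustration.

```python
# Minimal sketch of autodecrement addressing (hypothetical machine state).
memory = {99: 7, 100: 13}          # address -> operand
registers = {"Rauto": 100, "Ri": 0}

def load_autodecrement(addr_reg, dest_reg):
    """LOAD -(Rauto), Ri: decrement Rauto first, then use the new
    content as the effective address of the operand."""
    registers[addr_reg] -= 1                  # pre-decrement
    effective_address = registers[addr_reg]   # new content is the EA
    registers[dest_reg] = memory[effective_address]

load_autodecrement("Rauto", "Ri")
print(registers["Ri"], registers["Rauto"])    # 7 99
```

Contrast this with autoincrement, where the register's old content is used as the effective address and the increment happens after the access.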
The seven addressing modes presented above are summarized in Table 2.2. In each case, the table shows the name of the addressing mode, its definition, and a generic example illustrating the use of such mode.
In presenting the different addressing modes we have used the load instruction
for illustration. However, it should be understood that there are other types of instructions in a given machine. In the following section we elaborate on the different types of instructions that typically constitute the instruction set of a given
machine.










Computer structure

The computer interacts in some fashion with its external environment. In general, all of its linkages to the external environment can be classified as peripheral devices or communication lines. We will have something to say about both types of linkages.
But of greater concern in this book is the internal structure of the computer itself, which is shown in Figure 1.4. There are four main structural components:
  • Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions; often simply referred to as processor.
  • Main memory: Stores data.
  • I/O: Moves data between the computer and its external environment.
  • System interconnection: Some mechanism that provides for communication among CPU, main memory, and I/O. A common example of system interconnection is by means of a system bus, consisting of a number of conducting wires to which all the other components attach.
There may be one or more of each of the aforementioned components. Traditionally, there has been just a single processor. In recent years, there has been increasing use of multiple processors in a single computer. Some design issues relating to multiple processors crop up and are discussed as the text proceeds; Part Five focuses on such computers.
Each of these components will be examined in some detail in Part Two. However, for our purposes, the most interesting and in some ways the most complex component is the CPU. Its major structural components are as follows:
  • Control unit: Controls the operation of the CPU and hence the computer
  • Arithmetic and logic unit (ALU): Performs the computer’s data processing functions
  • Registers: Provides storage internal to the CPU
  • CPU interconnection: Some mechanism that provides for communication among the control unit, ALU, and registers
Each of these components will be examined in some detail in Part Three, where we will see that complexity is added by the use of parallel and pipelined organizational techniques. Finally, there are several approaches to the implementation of the control unit; one common approach is a microprogrammed implementation. In essence, a microprogrammed control unit operates by executing microinstructions that define the functionality of the control unit. With this approach, the structure of the control unit can be depicted as in Figure 1.4. This structure will be examined in Part Four.








Wednesday, 5 July 2017

Von Neumann model of computer architecture

VON NEUMANN MODEL: The von Neumann model of computer architecture was first described in 1946 in the famous paper by Burks, Goldstine, and von Neumann (1946). A number of very early computers or computer-like devices had been built, starting with the work of Charles Babbage, but the simple structure of a stored-program computer was first described in this landmark paper. The authors pointed out that instructions and data consist of bits with no distinguishing characteristics. Thus a common memory can be used to store both instructions and data. The differentiation between the two is made by the accessing mechanism and context: the program counter accesses instructions, while the effective address register accesses data. If by some chance, such as a programming error, instructions and data are exchanged in memory, the behavior of the program is indeterminate.

Before von Neumann posited the single-address-space architecture, a number of computers were built that had disjoint instruction and data memories. One of these machines was built by Howard Aiken at Harvard University, leading to this design style being called a Harvard architecture.

A variation on the von Neumann architecture that is widely used for implementing calculators today is called a tagged architecture. With these machines, each data type in memory has an associated tag that describes the data type: instruction, floating-point value (engineering notation), integer, etc. When the calculator is commanded to add a floating-point number to an integer, the tags are compared; the integer is converted to floating point, the addition is performed, and the result is displayed in floating point. You can try this yourself with your scientific calculator.

Memory: The computer will have memory that can hold both data and the program processing that data. In modern computers this memory is RAM.
Control Unit: The control unit will manage the process of moving data and program into and out of memory and also deal with carrying out (executing) program instructions, one at a time. This includes the idea of a 'register' to hold intermediate values. In the illustration above, the 'accumulator' is one such register.

Input-Output: This architecture allows for the idea that a person needs to interact with the machine. Whatever values are passed back and forth are stored once again in some internal registers.

Arithmetic Logic Unit: This part of the architecture is solely involved with carrying out calculations upon the data. All the usual Add, Multiply, Divide and Subtract calculations will be available, but also data comparisons such as 'Greater Than', 'Less Than' and 'Equal To'.

Bus: Notice the arrows between components? This implies that information should flow between the various parts of the computer. In a modern computer built to the von Neumann architecture, information passes back and forth along a 'bus'. There are buses to identify locations in memory: an 'address bus'.
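The components described above can be sketched as a tiny stored-program machine: one memory holds both instructions and data, a program counter fetches instructions one at a time, and an accumulator register holds intermediate values. The instruction set and memory layout below are invented for illustration.

```python
# Minimal von Neumann-style machine: instructions and data share one memory.
memory = [
    ("LOAD", 5),   # 0: ACC <- mem[5]
    ("ADD", 6),    # 1: ACC <- ACC + mem[6]
    ("STORE", 7),  # 2: mem[7] <- ACC
    ("HALT", 0),   # 3: stop
    0,             # 4: (unused)
    10,            # 5: data
    32,            # 6: data
    0,             # 7: result goes here
]

pc, acc = 0, 0                  # program counter and accumulator
while True:
    op, addr = memory[pc]       # fetch from the same memory as the data
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[7])  # 42
```

Because instructions and data live in one memory, nothing but the program counter's context distinguishes them, which is exactly the point made about the von Neumann model above.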
