

CS 353: Architecture and Compilers - Midterm Exam Review




About the exam




Wednesday 20 October 2008 in the normal lecture room (Ford 204) at the normal lecture time (12:40pm)




study resources: lecture notes, online text, lab assignments




the exam will be one and a half hours long, closed book/closed notes




typical format: some T/F, some short answer, a few longer "conceptual" questions




Numeral systems: binary, decimal, hexadecimal, etc.




basic concepts: bases, places, valuation




numerals can be viewed as the coefficients of a polynomial - the base is the "unknown"




what is the highest digit used in base n? how many digits are used in base n?




how many things can we count/represent with n digits in base b? what is the highest number we can represent?
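These counting questions have a closed form: n digits in base b give b^n distinct numerals, and since counting starts at zero the highest representable number is b^n - 1. A small Python sketch (the function name is just for illustration):

```python
def count_and_max(b, n):
    """n digits in base b: b**n distinct values, highest value b**n - 1."""
    return b**n, b**n - 1

print(count_and_max(2, 8))   # 8 bits: (256, 255)
print(count_and_max(10, 3))  # 3 decimal digits: (1000, 999)
print(count_and_max(16, 2))  # 2 hex digits: (256, 255)
```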




ordering of the numerals: odometer style




fast conversion between bases (Horner's left-to-right accumulating technique; the role of div and mod)




(for each digit, starting with a total of zero: multiply current total by base and add next digit)
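Both directions of the conversion can be sketched in a few lines of Python: Horner's rule going from digits to a number, and repeated div/mod going back (mod peels off the low digit, integer division shifts the rest right):

```python
def to_int(digits, base):
    """Horner's left-to-right rule: total = total * base + next digit."""
    total = 0
    for d in digits:
        total = total * base + d
    return total

def to_digits(n, base):
    """Inverse direction: mod extracts the low digit, div drops it."""
    digits = []
    while True:
        digits.append(n % base)
        n //= base
        if n == 0:
            break
    return digits[::-1]

print(to_int([1, 1, 0, 1], 2))   # 13
print(to_digits(13, 2))          # [1, 1, 0, 1]
```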




addition algorithms on strings of bits (or other digits): carries, etc.
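The carry-propagating addition algorithm can be sketched in Python on bit strings (MSB first, equal lengths assumed); this mirrors the ripple-carry hardware discussed later:

```python
def add_bits(a, b):
    """Add two equal-length bit strings, propagating carries right to left."""
    carry = 0
    out = []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        out.append(str(s % 2))   # sum bit for this position
        carry = s // 2           # carry into the next position
    if carry:
        out.append('1')          # final carry-out widens the result
    return ''.join(reversed(out))

print(add_bits('0110', '0111'))  # 6 + 7 = 13 -> '1101'
```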




Data Representation




natural numbers: based on the numeral systems above




integers (including negative): standard signed/magnitude representation (with sign bit)




two's complement: conversion to and from, use in addition, checking for overflow




high-order bit always tells sign (no negative zero)




asymmetric coverage: the highest-magnitude negative number has magnitude one greater than the highest positive (e.g., -128 vs. 127 in 8 bits)




for negative numbers, read zeros as ones and vice versa, then add one (the flip-the-bits-and-add-one rule)
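A minimal Python sketch of two's complement at a fixed width (8 bits assumed here), covering conversion in both directions and the sign-based overflow check:

```python
BITS = 8

def to_twos(n, bits=BITS):
    """Encode an int as a fixed-width two's-complement bit pattern."""
    return n & ((1 << bits) - 1)

def from_twos(pattern, bits=BITS):
    """Decode: if the high-order (sign) bit is set, subtract 2**bits."""
    if pattern & (1 << (bits - 1)):
        return pattern - (1 << bits)
    return pattern

def add_overflows(x, y, bits=BITS):
    """Overflow: operands share a sign but the result's sign differs."""
    r = from_twos(to_twos(x + y, bits), bits)
    return (x >= 0) == (y >= 0) and (r >= 0) != (x >= 0)

print(bin(to_twos(-5)))          # 0b11111011
print(from_twos(0b11111011))     # -5
print(add_overflows(100, 100))   # True: 200 doesn't fit in 8 signed bits
```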




letters, symbols, etc.: ASCII code (7 bits, plus maybe one parity bit) versus Unicode (complex, multi-byte, "encoded codes")




some properties of ASCII codes for letters: each set in order, upper and lower case separated by 32
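The separation by 32 means upper and lower case differ in exactly one bit (bit 5, value 32), so case conversion is a single bitwise operation:

```python
# 'A' is 65 (0b1000001) and 'a' is 97 (0b1100001): same code except bit 5.
print(ord('a') - ord('A'))        # 32
print(chr(ord('A') | 0b100000))   # 'a': setting bit 5 lower-cases
print(chr(ord('a') & ~0b100000))  # 'A': clearing bit 5 upper-cases
```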




Boolean Logic




we consider functions (and expressions) over a two-valued (boolean) domain of truth values, 0/1 or T/F




how many combinations are there of a certain number of arguments?




how many functions (with a given number of arguments) are there?
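Both counting questions reduce to powers of two: n arguments give 2^n input combinations (one truth-table row each), and since each row can independently map to 0 or 1, there are 2^(2^n) distinct functions of n arguments:

```python
for n in range(4):
    rows = 2 ** n          # input combinations of n boolean arguments
    funcs = 2 ** rows      # each row maps freely to 0 or 1
    print(n, rows, funcs)  # e.g. n=2: 4 rows, 16 possible functions
```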




standard operators: AND, OR, NAND, NOR, XOR, XNOR




use of truth tables to determine the valuation of expressions




use binary numeral "odometer" ordering to cover the possible values of variables




to write a formula for a truth table, use disjunction of conjunctions technique
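The disjunction-of-conjunctions (sum-of-products) technique can be sketched in Python: for every row of the table that yields 1, AND together each variable (negated where the row has a 0), then OR those conjunctions. The function and variable names here are illustrative, not from the course materials:

```python
from itertools import product

def dnf(table, names):
    """Build a disjunction-of-conjunctions formula from a truth table.

    `table` maps each input tuple (in odometer order) to 0 or 1.
    """
    terms = []
    for row in product((0, 1), repeat=len(names)):
        if table[row]:
            lits = [v if bit else 'NOT ' + v
                    for v, bit in zip(names, row)]
            terms.append('(' + ' AND '.join(lits) + ')')
    return ' OR '.join(terms)

# XOR as an example: true exactly when the inputs differ.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(dnf(xor, ['x', 'y']))
# (NOT x AND y) OR (x AND NOT y)
```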




Gates and Circuits




represent 0/false as either no voltage or lower voltage, and 1/true as higher voltage




a transistor allows electrical control of "switching" a signal: the control voltage turns it ON to enable flow or OFF to prevent flow




can build basic gates (AND, OR, etc.) from a few transistors (e.g., wired in series or in parallel)




we build circuits by connecting gates with wires, according to the hierarchical structure of the expression




real physical implementation requires some attention to timing, amplification, etc.




combinatorial versus sequential circuits (the latter may include feedback "loops")




some simple combinatorial circuits (as in lab):




add, shift, count, increment, selection, etc.
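One of these, the incrementer, is small enough to simulate in a few lines of Python using only the gate operations a half adder needs (XOR for the sum bit, AND for the carry), chained from the low-order bit upward. This is a behavioral sketch, not the lab's circuit notation:

```python
def increment(bits):
    """Incrementer as a chain of half adders on an MSB-first bit list."""
    carry = 1                  # the "+1" enters at the low-order bit
    out = []
    for b in reversed(bits):   # process LSB first
        out.append(b ^ carry)  # half-adder sum bit (XOR gate)
        carry = b & carry      # half-adder carry bit (AND gate)
    return list(reversed(out))

print(increment([0, 1, 1, 1]))  # 0111 + 1 = 1000
```

Note that the final carry is dropped, so incrementing all-ones wraps around to zero, just as a fixed-width register would.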




sequential circuits can represent memory storage over time




there may be multiple consistent states, which depend on previous inputs




Computer Organization




combinations of circuits are used to build memory, control, ALU




components are connected by busses: groups of wires (under some communication protocol) with control lines to "direct traffic"




memory: a hierarchy from registers to cache to RAM to disk (network can be seen as especially slow-but-large external storage)




(from closer/fewer/faster/more expensive to farther/more/slower/cheaper)




the control unit decodes instructions, selects an operation, and dispatches arguments/results




ALU (arithmetic-logic unit) performs actual data transformation (e.g., ADD, SHIFT)




realistic modern architectures include many more features:




pipelining: execute several instructions in parallel in stages (fetch, execute, write)




hardware-based stacks (for expression evaluation, method calls)




Instruction Set Architecture and Assembly Programming




program versus data storage: both in RAM, but conceptually separate




issues of word size (tradeoff with richness of instruction set)




typically choose some fixed-width binary "word size" (e.g., 8, 12, 16, or 32 bits)




addressing modes: how are operands specified?




implicit: the location of the data is part of the specification of the opcode (e.g. DATA)




immediate: data is encoded in the instruction itself (e.g. SET)




direct: number of register or address of RAM is encoded in the instruction (e.g. ADD, JUMP)




indirect: a register or RAM location holds the address of the actual data (e.g. LOAD, JPIF)




indexed: some architectures allow an "array address" and an index to be combined (not in PC231)




assembler directives: not actual instructions on the machine, just to the assembler program (e.g., CONST on the PC231)




macro assemblers: allow abbreviation of instruction sequences as a single "pseudo-instruction"




issue: how do you avoid overuse of registers? (those used in the macro and those in the calling context)




issue: multi-line macros may change line numbering (labels thus become crucial)




issue: how are "arguments"/parameters to macros specified?




issue: can one macro "call" another? (i.e. recursive macros)




Machine code and Assembly Language Programming Techniques




basic problems have to do with limited availability of space (e.g. registers) and movement of data




sometimes need to simulate operations which are not directly available




examples: maximum of two numbers, comparison of numbers, multiplication
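Two of these simulations can be sketched in Python at the level of the operations a simple machine does provide (add, shift, subtract, sign test); the actual lab versions would be sequences of assembly instructions, not Python:

```python
def multiply(a, b):
    """Multiplication from shifts and adds (b assumed non-negative):
    for each set low bit of b, add the correspondingly shifted a."""
    result = 0
    while b > 0:
        if b & 1:
            result += a
        a <<= 1     # shift left doubles a
        b >>= 1     # move on to the next bit of b
    return result

def maximum(a, b):
    """Max via comparison expressed as subtraction plus a sign test."""
    return a if a - b >= 0 else b

print(multiply(6, 7))   # 42
print(maximum(3, 9))    # 9
```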




approaches to subroutines ("functions" or "methods")




use macros and simply expand the code inline (faster)




use a jump to a secondary set of instructions in memory




how do we get back to the calling address?




how do we communicate parameters? (in registers)




do we "save" the values in other registers for the caller, or only use a special subset of registers?


