F-CPU project
terms and abbreviations

Associative Memory
An associative memory does not use addresses. Its behaviour is like that of environment variables or Perl hashes: data is accessed through a key selected by the user.
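A minimal C sketch of this lookup-by-key behaviour (the keys, values and function names below are purely illustrative, not part of F-CPU):

    #include <stdio.h>
    #include <string.h>

    /* An associative memory returns data by matching a key, not by
     * decoding an address; here every stored entry is compared. */
    struct entry { const char *key; int value; };

    static struct entry table[] = { { "foo", 17 }, { "bar", 42 } };

    static int lookup(const char *key, int *value)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].key, key) == 0) {
                *value = table[i].value;
                return 1;               /* key found */
            }
        return 0;                       /* no entry with that key */
    }

    int main(void)
    {
        int v;
        if (lookup("bar", &v))
            printf("bar -> %d\n", v);
        return 0;
    }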
Bus
Data path connecting multiple devices in parallel to exchange data. Disadvantage: only one device can send at a time.
Cache
A fast memory containing a copy of frequently accessed data, located near the place where the data is used. It is faster than the main memory and can have a wider bus if located on the CPU. Problem: keeping the data in the cache consistent with the data in main memory.
Cache hit
The requested data can be fetched from the cache, which is much faster than fetching it from main memory.
Cache miss
The requested data is not present in the cache and must be fetched from main memory. Usually the missed data is then stored in the cache so that it can be accessed faster the next time.
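A minimal C sketch of the hit/miss decision for a direct-mapped cache (line count, line size and field names are invented for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LINES 256                  /* illustrative: 256 lines of 64 bytes */

    struct line { bool valid; uint32_t tag; /* data bytes omitted */ };
    static struct line cache[LINES];

    /* Returns true on a cache hit; on a miss the line is (re)filled so
     * that the next access to the same address hits. */
    static bool access_cache(uint32_t addr)
    {
        uint32_t index = (addr >> 6) % LINES;  /* selects the cache line  */
        uint32_t tag   = addr >> 14;           /* identifies the block    */

        if (cache[index].valid && cache[index].tag == tag)
            return true;                       /* cache hit               */

        cache[index].valid = true;             /* cache miss: fetch block */
        cache[index].tag   = tag;              /* from main memory        */
        return false;
    }

    int main(void)
    {
        printf("%s\n", access_cache(0x1234) ? "hit" : "miss");  /* miss */
        printf("%s\n", access_cache(0x1234) ? "hit" : "miss");  /* hit  */
        return 0;
    }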
CMB
Context Memory Block
  • The CMB holds the state of a task in such a way that the task can be stopped and resumed later.
  • The CMB holds the access rights and the most important protection information.
  • The CMB holds the pointer to the task's page table (when paging is enabled); a hypothetical layout is sketched below.
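A hypothetical C sketch of such a block; the field names, sizes and register count are invented here and are not the actual F-CPU layout:

    #include <stdint.h>

    /* Hypothetical context memory block (illustration only). */
    struct cmb {
        uint64_t regs[64];        /* backed-up register file of the task */
        uint64_t pc;              /* where to resume execution           */
        uint32_t access_rights;   /* protection / privilege information  */
        uint64_t page_table_ptr;  /* page table base (if paging enabled) */
    };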
Context switch
One processor runs many -> threads on a time-sharing basis. Each thread assumes it is alone on the CPU, so the context switch must happen transparently to the individual threads.
CPI
(Clock) Cycles Per Instruction: the average number of clock cycles needed to execute one instruction. Lower is better.
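For example (made-up numbers): a program that executes 1,000,000 instructions in 1,500,000 clock cycles has a CPI of 1,500,000 / 1,000,000 = 1.5.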
Cross bar
The cross bar (Xbar) is the interconnect between -> Registers, -> Load / Store Unit (LSU) and the data processing engines.
ILP
Instruction Level Parallelism


IPC
  • Inter Process Communication
  • Instructions Per Cycle, the inverse of -> CPI
ISA
  • Instruction Set Architecture
  • Industry Standard Architecture: The famous old ISA Bus
ISS
Instruction Set Simulator


LFS

Load / Store Unit (LSU)
The interface between the -> crossbar and the data cache / memory
LSU
Abbreviation for -> Load / Store Unit
M2M
Memory to Memory architecture
OOO
Out Of Order
PFQ

Register
A variable directly stored on the CPU. Provides fastest data access.
RISC
Reduced Instruction Set Computer
ROP2
Unit performing binary logic operations (N)AND, (N)OR ... , also with inverted inputs.






Scoreboard
Bookkeeping mechanism for register access.
  • Is a register waiting to be written to? If yes, an instruction that wants to read from it has to wait (see the sketch below).
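A minimal C sketch of this bookkeeping (one pending-write flag per register; register count and function names are illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    #define NREGS 64
    static bool pending_write[NREGS];   /* set while a result is in flight */

    /* An instruction that reads register r must stall while a write
     * to r is still pending. */
    static bool can_read(int r)        { return !pending_write[r]; }
    static void issue_write(int r)     { pending_write[r] = true;  }
    static void complete_write(int r)  { pending_write[r] = false; }

    int main(void)
    {
        issue_write(3);                 /* some instruction targets r3 */
        printf("read r3: %s\n", can_read(3) ? "ok" : "stall");
        complete_write(3);              /* the result has been written */
        printf("read r3: %s\n", can_read(3) ? "ok" : "stall");
        return 0;
    }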

SHL
Bit Scrambling Unit
Smooth Register Backup
On a -> context switch, the registers must be backed up before the new context can write to them. The old context assumes that nothing else has modified its register contents, so it can only continue once the registers it reads have been restored. The backup and restore process takes place out of order, i.e. the first register written to by the new context is backed up first.
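A rough C analogy of the lazy, out-of-order backup (register count, flags and the save area are invented for illustration; the real SRB is a hardware mechanism):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NREGS 64
    static uint64_t regs[NREGS];       /* live register file             */
    static uint64_t backup[NREGS];     /* old context's save area (CMB)  */
    static bool     saved[NREGS];      /* register already backed up?    */

    /* When the new context writes a register for the first time after
     * the switch, the old value is saved first ("backup on demand"). */
    static void write_reg(int r, uint64_t value)
    {
        if (!saved[r]) {
            backup[r] = regs[r];
            saved[r]  = true;
        }
        regs[r] = value;
    }

    int main(void)
    {
        regs[5] = 0xAAAA;              /* value owned by the old context */
        write_reg(5, 0xBBBB);          /* new context overwrites r5      */
        printf("backup of r5: 0x%llx\n", (unsigned long long)backup[5]);
        return 0;
    }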
SMP
Symmetric MultiProcessing
SMT
Simultaneous MultiThreading  -> thread
SRB
Abbreviation for -> Smooth Register Backup
Thread
A sequence of instructions. A program consists of one or more threads.
TLB
Translation Lookaside Buffer. A -> cache for mapping virtual to physical addresses.
see http://www.memorymanagement.org/glossary/t.html
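A minimal C sketch of a fully associative TLB lookup (entry count, page size and field names are invented for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ENTRIES   16
    #define PAGE_BITS 12               /* 4 KiB pages, for illustration */

    struct tlb_entry { bool valid; uint64_t vpn, pfn; };
    static struct tlb_entry tlb[ENTRIES];

    /* Translate a virtual address; returns true on a TLB hit.  On a
     * miss the page tables in memory would have to be walked (not
     * shown here). */
    static bool translate(uint64_t vaddr, uint64_t *paddr)
    {
        uint64_t vpn = vaddr >> PAGE_BITS;
        for (int i = 0; i < ENTRIES; i++)
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *paddr = (tlb[i].pfn << PAGE_BITS)
                       | (vaddr & ((1u << PAGE_BITS) - 1));
                return true;           /* TLB hit  */
            }
        return false;                  /* TLB miss */
    }

    int main(void)
    {
        tlb[0] = (struct tlb_entry){ true, 0x12345, 0x00042 };
        uint64_t pa;
        if (translate(0x12345678, &pa))
            printf("physical address: 0x%llx\n", (unsigned long long)pa);
        return 0;
    }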
TTA
Transport Triggered Architecture
TTL
  • Transistor-Transistor Logic
  • Time To Live
VHDL
VHSIC Hardware Description Language (VHSIC: Very High Speed Integrated Circuit)
VLIW
Very Long Instruction Word
Xbar
Abbreviation for -> cross bar
Well, here are some definitions and facts.

CISC - Complex Instruction Set Computer, like Intel's x86 processors;
uses elaborate decoding and a memory-to-memory instruction set
architecture.

RISC - Reduced Instruction Set Computer. A class of register-register
designs that simplify the hardware by making the compiler schedule the
instruction stream.  Modern "RISC" processors are getting very
complicated, with register renaming and dynamic scheduling, which
increases the number of machine states and the complexity of the
hardware, thereby reducing the clock rate in order to gain a better CPI.

VLIW - Very Long Instruction Word, a kind of processor with no decode,
usually targeted toward very wide implementations without absorbing the
huge penalty of run-time checking.  The compiler explicitly states in
one instruction word what can and cannot be issued in a given clock
cycle.

Multi-threading - a technique in which a single processor or integrated
set of processors is given many tasks to handle at once in order to fill
general or pipeline stalls, thereby keeping the hardware busy, even with
branching code.

There, that's all you need to know about this discussion.  I would
recommend P&H HW/SW Interface.  It's everything you ever wanted to know
about microprocessor design and more.  If you have any specific
questions, I'm sure that anyone on this list would be more than happy to
take them.

And DEFINITELY, don't be afraid to offer suggestions.  Good ideas come
from new people who don't yet know what is impossible.

--Maxx