Embedded System Design


Tuesday, July 31, 2007

The basics of programming embedded processors: Part 2

By Wayne Wolf, Embedded.com
Jul 31 2007 (0:15 AM)
URL: http://www.embedded.com/showArticle.jhtml?articleID=201201938


Part 1 of this series covered the basics of program design and analysis and the usefulness of design patterns. In this part, we develop models for programs that are more general than source code.


Why not use the source code directly? First, there are many different types of source code - assembly languages, C code, and so on - but we can use a single model to describe all of them. Second, once we have such a model, we can perform many useful analyses on the model more easily than we could on the source code.

Our fundamental model for programs is the control/data flow graph (CDFG). (We can also model hardware behavior with the CDFG.) As the name implies, the CDFG has constructs that model both data operations (arithmetic and other computations) and control operations (conditionals). Part of the power of the CDFG comes from its combination of control and data constructs. To understand the CDFG, we start with pure data descriptions and then extend the model to control.

Data Flow Graphs
A data flow graph is a model of a program with no conditionals. In a high-level programming language, a code segment with no conditionals—more precisely, with only one entry and exit point—is known as a basic block. Figure 5-3 below shows a simple basic block. As the C code is executed, we would enter this basic block at the beginning and execute all the statements.

w = a + b;
x = a - c;
y = x + d;
x = a + c;
z = y + e;

Figure 5-3. A basic block in C


Before we are able to draw the data flow graph for this code we need to modify it slightly. There are two assignments to the variable x - it appears twice on the left side of an assignment. We need to rewrite the code in single-assignment form, in which a variable appears only once on the left side.


Since our specification is C code, we assume that the statements are executed sequentially, so that any use of a variable refers to its latest assigned value. In this case, x is not reused in this block (presumably it is used elsewhere), so we just have to eliminate the multiple assignment to x . The result is shown in Figure 5-4 below, where we have used the names x1 and x2 to distinguish the separate uses of x.

w = a + b;
x1 = a - c;
y = x1 + d;
x2 = a + c;
z = y + e;

Figure 5-4. The basic block in single-assignment form.

The single-assignment form is important because it allows us to identify a unique location in the code where each named location is computed. As an introduction to the data flow graph, we use two types of nodes in the graph—round nodes denote operators and square nodes represent values. The value nodes may be either inputs to the basic block, such as a and b, or variables assigned to within the block, such as w and x1. The data flow graph for our single-assignment code is shown in Figure 5-5 below.


The single-assignment form means that the data flow graph is acyclic—if we assigned to x multiple times, then the second assignment would form a cycle in the graph including x and the operators used to compute x .


Keeping the data flow graph acyclic is important in many types of analyses we want to do on the graph. (Of course, it is important to know whether the source code actually assigns to a variable multiple times, because some of those assignments may be mistakes. We consider the analysis of source code for proper use of assignments later in this series.)


Figure 5-5. An extended data flow graph for our sample block

The data flow graph is generally drawn in the form shown in Figure 5-6 below. Here, the variables are not explicitly represented by nodes. Instead, the edges are labeled with the variables they represent. As a result, a variable can be represented by more than one edge. However, the edges are directed and all the edges for a variable must come from a single source. We use this form for its simplicity and compactness.



Figure 5-6. Standard data flow graph for sample basic block

The data flow graph for the code makes the order in which the operations are performed in the C code much less obvious. This is one of the advantages of the data flow graph. We can use it to determine feasible reorderings of the operations, which may help us to reduce pipeline or cache conflicts.


We can also use it when the exact order of operations simply doesn't matter. The data flow graph defines a partial ordering of the operations in the basic block. We must ensure that a value is computed before it is used, but generally there are several possible orderings of evaluating expressions that satisfy this requirement.
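As a sketch of this flexibility, the following C fragment (our own helper names and sample inputs, not from the text) evaluates the Figure 5-4 block in two different orders that both respect the partial ordering, so the results must agree:

```c
/* Evaluate the Figure 5-4 basic block in two orders that both respect
 * the data flow graph's partial order; each returns z. */
int eval_order1(int a, int b, int c, int d, int e) {
    int w  = a + b;   /* no later statement consumes w */
    int x1 = a - c;
    int y  = x1 + d;  /* must follow x1 */
    int x2 = a + c;   /* independent of x1 and y */
    (void)w; (void)x2;
    return y + e;     /* z */
}

int eval_order2(int a, int b, int c, int d, int e) {
    int x2 = a + c;   /* legally moved first: nothing feeds it */
    int x1 = a - c;
    int y  = x1 + d;
    int z  = y + e;
    int w  = a + b;   /* legally moved last: nothing consumes it */
    (void)w; (void)x2;
    return z;
}
```

Only the producer-before-consumer edges (x1 before y, y before z) constrain the schedule; w and x2 can float anywhere.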

Control/Data Flow Graphs
A CDFG uses a data flow graph as an element, adding constructs to describe control. In a basic CDFG, we have two types of nodes: decision nodes and data flow nodes.

A data flow node encapsulates a complete data flow graph to represent a basic block. We can use one type of decision node to describe all the types of control in a sequential program. (The jump/branch is, after all, the way we implement all those high-level control constructs.)

Figure 5-7 below shows a bit of C code with control constructs and the CDFG constructed from it. The rectangular nodes in the graph represent the basic blocks. The basic blocks in the C code have been represented by function calls for simplicity. The diamond-shaped nodes represent the conditionals. The node's condition is given by the label, and the edges are labeled with the possible outcomes of evaluating the condition.


Building a CDFG for a while loop is straightforward, as shown in Figure 5-8 below. The while loop consists of both a test and a loop body, each of which we know how to represent in a CDFG. We can represent for loops by remembering that, in C, a for loop is defined in terms of a while loop. The following for loop

for (i = 0; i < N; i++) {
loop_body();
}

is equivalent to the while loop shown in Figure 5-8 below.

if (cond1)
    basic_block_1();
else
    basic_block_2();
basic_block_3();
switch (test1) {
    case c1: basic_block_4(); break;
    case c2: basic_block_5(); break;
    case c3: basic_block_6(); break;
}
Figure 5-7. C code (above) and its CDFG (below)







--------------------------------------------------------------------------------

i = 0;
while (i < N) {
    loop_body();
    i++;
}

Figure 5-8. C code (above) and its CDFG (below) for a while loop.
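The for/while equivalence is easy to check mechanically. In the sketch below (our own helper names; the loop body is replaced by a summation so the result is observable) both loops compute the same value for any N:

```c
/* Sum 0..n-1 once with a for loop and once with the equivalent
 * while loop; the two must agree for every n. */
int sum_for(int n) {
    int s = 0;
    for (int i = 0; i < n; i++) {
        s += i;              /* stands in for loop_body() */
    }
    return s;
}

int sum_while(int n) {
    int s = 0;
    int i = 0;               /* initialization moved before the loop */
    while (i < n) {
        s += i;              /* stands in for loop_body() */
        i++;                 /* update moved to the end of the body */
    }
    return s;
}
```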



Hierarchical representation. For a complete CDFG model, we can use a data flow graph to model each data flow node. Thus, the CDFG is a hierarchical representation—a data flow CDFG can be expanded to reveal a complete data flow graph. An execution model for a CDFG is very much like the execution of the program it represents. The CDFG does not require explicit declaration of variables, but we assume that the implementation has sufficient memory for all the variables.

We can define a state variable that represents a program counter in a CPU. (When studying a drawing of a CDFG, a finger works well for keeping track of the program counter state.) As we execute the program, we either execute the data flow node or compute the decision in the decision node and follow the appropriate edge, depending on the type of node the program counter points to.


Even though the data flow nodes may specify only a partial ordering on the data flow computations, the CDFG is a sequential representation of the program. There is only one program counter in our execution model of the CDFG, and operations are not executed in parallel.
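This single-program-counter execution model can be sketched directly in C. The node encoding below is invented for illustration (the text does not prescribe an in-memory format): one pc walks the graph, running a basic block at each data flow node and branching at each decision node.

```c
#include <stdbool.h>

enum node_kind { DATA, DECISION, EXIT_NODE };

struct cdfg_node {
    enum node_kind kind;
    void (*block)(int *x);        /* DATA: run the basic block on state x */
    bool (*cond)(const int *x);   /* DECISION: evaluate the condition */
    int next;                     /* DATA: sole successor */
    int next_true, next_false;    /* DECISION: two successors */
};

/* Execute from node 0 until an exit node is reached. */
void cdfg_run(const struct cdfg_node *g, int *x) {
    int pc = 0;                               /* the one program counter */
    while (g[pc].kind != EXIT_NODE) {
        if (g[pc].kind == DATA) {
            g[pc].block(x);
            pc = g[pc].next;
        } else {
            pc = g[pc].cond(x) ? g[pc].next_true : g[pc].next_false;
        }
    }
}

static void incr(int *x) { (*x)++; }
static bool lt5(const int *x) { return *x < 5; }

/* Demo: encode "while (x < 5) x++;" as a test node, a body node, an exit. */
int cdfg_demo(int start) {
    struct cdfg_node g[3] = {
        { .kind = DECISION, .cond = lt5, .next_true = 1, .next_false = 2 },
        { .kind = DATA, .block = incr, .next = 0 },   /* back to the test */
        { .kind = EXIT_NODE },
    };
    int x = start;
    cdfg_run(g, &x);
    return x;
}
```

Note that the executor is strictly sequential: exactly one node is active at a time, matching the single program counter of the model.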

The CDFG is not necessarily tied to high-level language control structures. We can also build a CDFG for an assembly language program. A jump instruction corresponds to a nonlocal edge in the CDFG. Some architectures, such as ARM and many VLIW processors, support predicated execution of instructions, which may be represented by special constructs in the CDFG.

Assembly and linking
Assembly and linking are the last steps in the compilation process—they turn a list of instructions into an image of the program's bits in memory. In this section, we survey the basic techniques required for assembly and linking to help us understand the complete compilation process.

Figure 5-9 below highlights the role of assemblers and linkers in the compilation process. This process is often hidden from us by compilation commands that do everything required to generate an executable program. As the figure shows, most compilers do not directly generate machine code, but instead create the instruction-level program in the form of human-readable assembly language.


Generating assembly language rather than binary instructions frees the compiler writer from details extraneous to the compilation process, which include the instruction format as well as the exact addresses of instructions and data. The assembler's job is to translate symbolic assembly language statements into bit-level representations of instructions known as object code.

The assembler takes care of instruction formats and does part of the job of translating labels into addresses. However, since the program may be built from many files, the final steps in determining the addresses of instructions and data are performed by the linker, which produces an executable binary file. That file may not necessarily be located in the CPU's memory, however, unless the linker happens to create the executable directly in RAM. The program that brings the program into memory for execution is called a loader.


Figure 5-9. Program generation from compilation through loading.

The simplest form of the assembler assumes that the starting address of the assembly language program has been specified by the programmer. The addresses in such a program are known as absolute addresses.


However, in many cases, particularly when we are creating an executable out of several component files, we do not want to specify the starting addresses for all the modules before assembly.


If we did, we would have to determine before assembly not only the length of each program in memory but also the order in which they would be linked into the program.


Most assemblers therefore allow us to use relative addresses by specifying at the start of the file that the origin of the assembly language module is to be computed later. Addresses within the module are then computed relative to the start of the module. The linker is then responsible for translating relative addresses into absolute addresses.

Assemblers. When translating assembly code into object code, the assembler must translate opcodes and format the bits in each instruction, and translate labels into addresses. In this section, we review the translation of assembly language into binary.

Labels make the assembly process more complex, but they are the most important abstraction provided by the assembler. Labels let the programmer (a human programmer or a compiler generating assembly code) avoid worrying about the absolute locations of instructions and data. Label processing requires making two passes through the assembly source code as follows:

1. The first pass scans the code to determine the address of each label.
2. The second pass assembles the instructions using the label values computed in the first pass.

As shown in Figure 5-10 below, the name of each symbol and its address is stored in a symbol table that is built during the first pass. The symbol table is built by scanning from the first instruction to the last. (For the moment, we assume that we know the absolute address of the first instruction in the program; we consider the general case later in this series).


During scanning, the current location in memory is kept in a program location counter (PLC). Despite the similarity in name to a program counter, the PLC is not used to execute the program, only to assign memory locations to labels. In particular, the PLC makes exactly one pass through the program, whereas the program counter makes many passes over code in a loop.


Thus, at the start of the first pass, the PLC is set to the program's starting address and the assembler looks at the first line. After examining the line, the assembler updates the PLC to the next location (since ARM instructions are four bytes long, the PLC would be incremented by four) and looks at the next instruction.


If the instruction begins with a label, a new entry is made in the symbol table, which includes the label name and its value. The value of the label is equal to the current value of the PLC. At the end of the first pass, the assembler rewinds to the beginning of the assembly language file to make the second pass. During the second pass, when a label name is found, the label is looked up in the symbol table and its value substituted into the appropriate place in the instruction.
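The two-pass scheme can be sketched in C. The line format below is hypothetical (a label plus a size in bytes, rather than real assembly text): pass one walks the PLC over the lines and records each label's address, and a lookup helper stands in for the second pass's substitution step.

```c
#include <string.h>

struct asm_line { const char *label; int size; };  /* label may be NULL */
struct symbol   { const char *name; int value; };

/* Pass one: scan n lines starting at `origin`, filling the symbol
 * table; return the number of symbols recorded. */
int pass_one(const struct asm_line *lines, int n, int origin,
             struct symbol *syms) {
    int plc = origin;                  /* program location counter */
    int nsyms = 0;
    for (int i = 0; i < n; i++) {
        if (lines[i].label) {          /* label gets the current PLC */
            syms[nsyms].name = lines[i].label;
            syms[nsyms].value = plc;
            nsyms++;
        }
        plc += lines[i].size;          /* advance past this line */
    }
    return nsyms;
}

/* Pass-two helper: look a label up in the symbol table (-1 if absent). */
int sym_lookup(const struct symbol *syms, int nsyms, const char *name) {
    for (int i = 0; i < nsyms; i++)
        if (strcmp(syms[i].name, name) == 0)
            return syms[i].value;
    return -1;
}

/* Demo: the Example 5-1 layout (ORG 100, five 4-byte ARM instructions,
 * labels on the first, third, and fifth). */
int example_label_addr(int which) {
    const struct asm_line lines[5] = {
        { "label1", 4 }, { 0, 4 }, { "label2", 4 }, { 0, 4 }, { "label3", 4 }
    };
    struct symbol syms[5];
    int n = pass_one(lines, 5, 100, syms);
    const char *names[3] = { "label1", "label2", "label3" };
    if (which < 0 || which > 2) return -1;
    return sym_lookup(syms, n, names[which]);
}
```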


Figure 5-10. Symbol table processing during assembly.

But how do we know the starting value of the PLC? The simplest case is absolute addressing. In this case, one of the first statements in the assembly language program is a pseudo-op that specifies the origin of the program, that is, the location of the first address in the program. A common name for this pseudo-op (e.g., the one used for the ARM) is the ORG statement

ORG 2000

which puts the start of the program at location 2000. This pseudo-op accomplishes this by setting the PLC's value to its argument's value, 2000 in this case. Assemblers generally allow a program to have many ORG statements in case instructions or data must be spread around various spots in memory. Example 5-1 below illustrates the use of the PLC in generating the symbol table.

Example 5-1: Generating a Symbol Table
Let's use the following simple example of ARM assembly code:
        ORG 100
label1  ADR r4,c
        LDR r0,[r4]
label2  ADR r4,d
        LDR r1,[r4]
label3  SUB r0,r0,r1
The initial ORG statement tells us the starting address of the program. To begin, let's initialize the symbol table to an empty state and put the PLC at the initial ORG statement.



The PLC value shown is at the beginning of this step, before we have processed the ORG statement. The ORG tells us to set the PLC value to 100.



To process the next statement, we move the PLC to point to the next statement. But because the last statement was a pseudo-op that generates no memory values, the PLC value remains at 100.



Because there is a label in this statement, we add it to the symbol table, taking its value from the current PLC value.



To process the next statement, we advance the PLC to point to the next line of the program and increment its value by the length in memory of the last line, namely, 4.



We continue this process as we scan the program until we reach the end, at which point the state of the PLC and symbol table are as shown below.



Assemblers allow labels to be added to the symbol table without occupying space in the program memory. A typical name of this pseudo-op is EQU for equate. For example, in the code

        ADD r0,r1,r2
FOO     EQU 5
BAZ     SUB r3,r4,#FOO
the EQU pseudo-op adds a label named FOO with the value 5 to the symbol table. The value of the BAZ label is the same as if the EQU pseudo-op were not present, since EQU does not advance the PLC. The new label is used in the subsequent SUB instruction as the name for a constant. EQUs can be used to define symbolic values to help make the assembly code more structured.
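A minimal sketch of this difference, using the instruction sizes implied by the fragment above (4 bytes per ARM instruction, 0 bytes for EQU), shows why BAZ's address is unaffected:

```c
/* BAZ's address depends only on the origin and the instructions that
 * actually occupy memory; the EQU contributes nothing to the PLC. */
int baz_addr(int origin) {
    int plc = origin;
    plc += 4;        /* ADD r0,r1,r2 occupies one 4-byte instruction */
    plc += 0;        /* FOO EQU 5: symbol table entry only, PLC unchanged */
    return plc;      /* BAZ labels the SUB at origin + 4 */
}
```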

The ARM assembler supports one pseudo-op that is particular to the ARM instruction set. In other architectures, an address would be loaded into a register (e.g., for an indirect access) by reading it from a memory location. ARM does not have an instruction that can load an effective address, so the assembler supplies the ADR pseudo-op to create the address in the register.


It does so by using ADD or SUB instructions to generate the address. The address to be loaded can be register relative, program relative, or numeric, but it must assemble to a single instruction. More complicated address calculations must be explicitly programmed.

The assembler produces an object file that describes the instructions and data in binary format. A commonly used object file format, originally developed for Unix but now used in other environments as well, is known as COFF (common object file format). The object file must describe the instructions, data, and any addressing information and also usually carries along the symbol table for later use in debugging.

Generating relative code rather than absolute code introduces some new challenges to the assembly language process. Rather than using an ORG statement to provide the starting address, the assembly code uses a pseudo-op to indicate that the code is in fact relocatable. (Relative code is the default for both the ARM and SHARC assemblers.)


Similarly, we must mark the output object file as being relative code. We can initialize the PLC to 0 to denote that addresses are relative to the start of the file. However, when we generate code that makes use of those labels, we must be careful, since we do not yet know the actual value that must be put into the bits.


We must instead generate relocatable code. We use extra bits in the object file format to mark the relevant fields as relocatable and then insert the label's relative value into the field.


The linker must therefore modify the generated code - when it finds a field marked as relative, it uses the absolute addresses that it has generated to replace the relative value with a correct, absolute value for the address. To understand the details of turning relocatable code into absolute executable code, we must understand the linking process described in the next section.

Linking. Many assembly language programs are written as several smaller pieces rather than as a single large file. Breaking a large program into smaller files helps delineate program modularity. If the program uses library routines, those will already be preassembled, and assembly language source code for the libraries may not be available for purchase.


A linker allows a program to be stitched together out of several smaller pieces. The linker operates on the object files created by the assembler and modifies the assembled code to make the necessary links between files.

Some labels will be both defined and used in the same file. Other labels will be defined in a single file but used elsewhere as illustrated in Figure 5-11 below. The place in the file where a label is defined is known as an entry point. The place in the file where the label is used is called an external reference.


The main job of the linker is to resolve external references based on available entry points. As a result of the need to know how definitions and references connect, the assembler passes to the linker not only the object file but also the symbol table. Even if the entire symbol table is not kept for later debugging purposes, it must at least pass the entry points. External references are identified in the object code by their relative symbol identifiers.


Figure 5-11. External references and entry points.


The linker proceeds in two phases. First, it determines the absolute address of the start of each object file. The order in which object files are to be loaded is given by the user, either by specifying parameters when the linker is run or by creating a load map file that gives the order in which files are to be placed in memory.


Given the order in which files are to be placed in memory and the length of each object file, it is easy to compute the absolute starting address of each file. At the start of the second phase, the linker merges all symbol tables from the object files into a single, large table. It then edits the object files to change relative addresses into absolute addresses.


This is typically performed by having the assembler write extra bits into the object file to identify the instructions and fields that refer to labels. If a label cannot be found in the merged symbol table, it is undefined and an error message is sent to the user.
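A minimal sketch of the patching step, over an invented object format (an array of 32-bit words plus a list of indices the assembler marked as relocatable), might look like this:

```c
#include <stdint.h>

/* For each word the assembler flagged as holding a module-relative
 * address, add the module's absolute load address. */
void relocate(uint32_t *code, const int *reloc_idx, int nrelocs,
              uint32_t load_addr) {
    for (int i = 0; i < nrelocs; i++) {
        code[reloc_idx[i]] += load_addr;   /* relative -> absolute */
    }
}

/* Demo: word 1 holds the relative address 8; loading the module at
 * 0x1000 must turn it into 0x1008 and leave the other words alone. */
uint32_t relocate_demo(void) {
    uint32_t code[3] = { 0xE5900000u, 8u, 0xE0800001u };
    int relocs[1] = { 1 };
    relocate(code, relocs, 1, 0x1000u);
    return code[1];
}
```

A real object format such as COFF carries this information as typed relocation entries rather than a bare index list, but the arithmetic the linker performs is essentially this.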

Controlling where code modules are loaded into memory is important in embedded systems. Some data structures and instructions, such as those used to manage interrupts, must be put at precise memory locations for them to work. In other cases, different types of memory may be installed at different address ranges. For example, if we have EPROM in some locations and DRAM in others, we want to make sure that locations to be written are put in the DRAM locations.

Workstations and PCs provide dynamically linked libraries, and certain sophisticated embedded computing environments may provide them as well. Rather than link a separate copy of commonly used routines such as I/O to every executable program on the system, dynamically linked libraries allow them to be linked in at the start of program execution.


A brief linking process is run just before execution of the program begins; the dynamic linker uses code libraries to link in the required routines. This not only saves storage space but also allows programs that use those libraries to be easily updated. However, it does introduce a delay before the program starts executing.


Next, in Part 3: Basic compilation techniques.
To read Part 1, go to "Program design and analysis."


Used with the permission of the publisher, Newnes/Elsevier, this series of six articles is based on copyrighted material from "Computers as Components: Principles of Embedded Computer System Design" by Wayne Wolf. The book can be purchased on line.

Wayne Wolf is currently the Georgia Research Alliance Eminent Scholar holding the Rhesa "Ray" S. Farmer, Jr., Distinguished Chair in Embedded Computer Systems at Georgia Tech's School of Electrical and Computer Engineering (ECE). Previously a professor of electrical engineering at Princeton University, he worked at AT&T Bell Laboratories. He has served as editor in chief of the ACM Transactions on Embedded Computing and of Design Automation for Embedded Systems.


References:
[Dou99] Bruce Powell Douglass, Doing Hard Time: Developing Real Time Systems with UML. Addison-Wesley, 1999.
[Chi94] M. Chiodo et al., "Hardware/Software Co-design of Embedded Systems," IEEE Micro, 1994.


Copyright 2005 © CMP Media LLC

The basics of programming embedded processors: Part 1

http://www.embedded.com/showArticle.jhtml?articleID=201200638


Wayne Wolf, Embedded.com Jul 24 2007 (1:00 AM)

Designing and implementing embedded programs is different and more challenging than writing typical workstation or PC programs. Embedded code must not only provide rich functionality, it must also often run at a required rate to meet system deadlines, fit into the allowed amount of memory, and meet power consumption requirements.
Designing code that simultaneously meets multiple design constraints is a considerable challenge, but luckily there are techniques and tools that we can use to help us through the design process. Making sure that the program works is also a challenge, but once again methods and tools come to our aid.
In this series of six articles we will concentrate on high-level programming languages, specifically the C language. High-level languages were once shunned as too inefficient for embedded microcontrollers, but better compilers, more compiler-friendly architectures, and faster processors and memory have made high-level language programs common.
Some sections of a program may still need to be written in assembly language if the compiler doesn't give sufficiently good results, but even when coding in assembly language it is often helpful to think about the program's functionality in high-level form. Many of the analysis and optimization techniques that we study in this chapter are equally applicable to programs written in assembly language.
Future parts in this series will discuss (1) the control/data flow graph as a model for high-level language programs (which can also be applied to programs written originally in assembly language) with a particular focus on design patterns; (2) the assembly and linking process; (3) the basic steps in compilation; (4) optimization techniques specific to embedded computing for energy consumption, performance and size.

Design Patterns
A design pattern is a generalized description of a way to solve a certain class of problems. As a simple example, we could write C code for one implementation of a linked list, but that code would set in concrete the data items available in the list, actions on errors, and so on. A design pattern describing the list mechanism would capture the essential components and behaviors of the list without adding unnecessary detail. A design pattern can be described in the Unified Modeling Language (UML); it usually takes the form of a collaboration diagram, which shows how classes work together to perform a given function.
Figure 5-1 below shows a simple description of a design pattern as a UML class diagram. The diagram defines two classes: List to describe the entire list and List-element for one of the items in the list. The List class defines the basic operations that you want to do on a list.
The details of what goes in the list and so forth can be easily added into this design pattern. A design pattern can be parameterized so that it can be customized to the needs of a particular application. A more complete description of the pattern might include
state diagrams to describe behavior and
sequence diagrams to show how classes interact.
Figure 5-1. A simple description of a design pattern
Design patterns are primarily intended to help solve midlevel design challenges. A design pattern may include only a single class, but it usually describes a handful of classes.
Design patterns rarely include more than a few dozen classes. A design pattern probably will not provide you with the complete architecture of your system, but it can provide you with the architectures for many subsystems in your design:
By stitching together and specializing existing design patterns, you may be able to quickly create a large part of your system architecture.
Design patterns are meant to be used in ways similar to how engineers in other disciplines work. A designer can consult catalogs of design patterns to find patterns that seem to fit a particular design problem.
The designer can then choose parameters suited to the application and see what that implies for the implementation of the design pattern. The designer can then choose the design pattern that seems to be the best match for the design, parameterize it, and instantiate it.
Design patterns can be of many different types. A few are listed below.
The digital filter is easily described as a design pattern.
Data structures and their associated actions can be described as design patterns.
A reactive system that reacts to external stimuli can be described as a design pattern, leaving the exact state transition diagram as a parameter.
Douglass [Dou99] describes a policy class that describes a protocol that can be used to implement a variety of policies.
Design Patterns for Embedded Systems
In this section, we consider design patterns for two very different styles of programs: the state machine and the circular buffer. State machines are well suited to reactive systems such as user interfaces, and circular buffers are useful in digital signal processing.
State machine style. When inputs appear intermittently rather than as periodic samples, it is often convenient to think of the system as reacting to those inputs. The reaction of most systems can be characterized in terms of the input received and the current state of the system. This leads naturally to a finite-state machine style of describing the reactive system's behavior.
Moreover, if the behavior is specified in that way, it is natural to write the program implementing that behavior in a state machine style. The state machine style of programming is also an efficient implementation of such computations.
Finite-state machines are usually first encountered in the context of hardware design. The programming example described below shows how to write a finite-state machine in a high-level programming language.
Programming Example: A state machine in C
The behavior we want to implement is a simple seat belt controller [Chi94]. The controller's job is to turn on a buzzer if a person sits in a seat and does not fasten the seat belt within a fixed amount of time. This system has three inputs and one output.
The inputs are a sensor for the seat to know when a person has sat down, a seat belt sensor that tells when the belt is fastened, and a timer that goes off when the required time interval has elapsed. The output is the buzzer. Shown below is a state diagram that describes the seat belt controller's behavior.
The idle state is in force when there is no person in the seat. When the person sits down, the machine goes into the seated state and turns on the timer. If the timer goes off before the seat belt is fastened, the machine goes into the buzzer state. If the seat belt goes on first, it enters the belted state. When the person leaves the seat, the machine goes back to idle.
To write this behavior in C, we will assume that we have loaded the current values of all three inputs (seat, belt, timer) into variables and will similarly hold the outputs in variables temporarily (timer_on, buzzer_on). We will use a variable named state to hold the current state of the machine and a switch statement to determine what action to take in each state. The code follows:

#define IDLE 0
#define SEATED 1
#define BELTED 2
#define BUZZER 3
This code takes advantage of the fact that the state will remain the same unless explicitly changed; this makes self-loops back to the same state easy to implement.
This state machine may be executed forever in a while(TRUE) loop or periodically called by some other code. In either case, the code must be executed regularly so that it can check on the current value of the inputs and, if necessary, go into a new state.
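The switch body itself is not shown above. The sketch below is our reconstruction from the state diagram's description, recast as a pure next-state function so it is easy to test; the transitions out of BELTED and BUZZER other than "person leaves the seat" are assumptions, not the book's exact code.

```c
/* State encodings repeated here so the sketch is self-contained. */
#define IDLE 0
#define SEATED 1
#define BELTED 2
#define BUZZER 3

/* Return the next state given the current state and the three inputs. */
int seatbelt_next(int state, int seat, int belt, int timer) {
    switch (state) {
    case IDLE:
        if (seat) return SEATED;     /* person sits; timer is started */
        break;
    case SEATED:
        if (!seat) return IDLE;      /* person leaves the seat */
        if (belt) return BELTED;     /* belt fastened in time */
        if (timer) return BUZZER;    /* timer expired before belt */
        break;
    case BELTED:
        if (!seat) return IDLE;      /* person leaves the seat */
        break;
    case BUZZER:
        if (!seat) return IDLE;      /* person leaves the seat */
        if (belt) return BELTED;     /* belt finally fastened (assumed) */
        break;
    }
    return state;                    /* self-loop: state unchanged */
}
```

The final `return state` implements the self-loops mentioned above: unless an input forces a transition, the state is left as it was.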
Data stream style. The data stream style makes sense for data that comes in regularly and must be processed on the fly. The FIR filter of the example shown above is a classic example of stream-oriented processing. For each sample, the filter must emit one output that depends on the values of the last n inputs.
In a typical workstation application, we would process the samples over a given interval by reading them all in from a file and then computing the results all at once in a batch process. In an embedded system we must not only emit outputs in real time, but we must also do so using a minimum amount of memory.
The circular buffer is a data structure that lets us handle streaming data in an efficient way. Figure 5-2 below illustrates how a circular buffer stores a subset of the data stream. At each point in time, the algorithm needs a subset of the data stream that forms a window into the stream.
The window slides with time as we throw out old values no longer needed and add new values. Since the size of the window does not change, we can use a fixed-size buffer to hold the current data.
Figure 5-2. A circular buffer for streaming data
To avoid constantly copying data within the buffer, we will move the head of the buffer in time. The buffer points to the location at which the next sample will be placed; every time we add a sample, we automatically overwrite the oldest sample, which is the one that needs to be thrown out.
When the pointer gets to the end of the buffer, it wraps around to the top. Described below is an example of an efficient implementation of a circular buffer.
Programming Example: A circular buffer for an FIR filter
Appearing below are the declarations for the circular buffer and filter coefficients, assuming that N, the number of taps in the filter, has been previously defined.

int circ_buffer[N]; /* circular buffer for data */
int circ_buffer_head = 0; /* current head of the buffer */
int c[N]; /* filter coefficients (constants) */
To write C code for a circular buffer-based FIR filter, we need to modify the original loop slightly. Because the 0th element of data may not be in the 0th element of the circular buffer, we have to change the way in which we access the data. One of the implications of this is that we need separate loop indices for the circular buffer and coefficients.
The above code assumes that some other code, such as an interrupt handler, is replacing the last element of the circular buffer at the appropriate times. The statement

ibuff = (ibuff == (N - 1)) ? 0 : ibuff + 1;

is a shorthand C way of incrementing ibuff such that it returns to 0 after reaching the end of the circular buffer array.
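Putting the pieces together, a complete, testable sketch might look like the following. Here N is fixed at 4 and the coefficient values are chosen arbitrarily, since the original leaves both open; fir_step combines the sample-insertion step with the filter loop, using separate indices for the coefficients and the circular buffer.

```c
#define N 4                  /* number of taps; 4 only for this sketch */

int circ_buffer[N];          /* circular buffer for data */
int circ_buffer_head = 0;    /* location where the next sample goes */
int c[N] = { 1, 2, 3, 4 };   /* example coefficients, not from the text */

/* Insert one sample (overwriting the oldest) and return the output
 * sum of c[i] * x[n-i], walking the buffer from newest to oldest. */
int fir_step(int sample) {
    circ_buffer[circ_buffer_head] = sample;        /* overwrite oldest */
    int ibuff = circ_buffer_head;                  /* newest sample */
    circ_buffer_head = (circ_buffer_head + 1) % N; /* wrap the head */
    int y = 0;
    for (int i = 0; i < N; i++) {                  /* i indexes c[] */
        y += c[i] * circ_buffer[ibuff];
        ibuff = (ibuff == 0) ? N - 1 : ibuff - 1;  /* step back, wrap */
    }
    return y;
}
```

Feeding a constant stream of 1s shows the window filling up: the output climbs to the sum of the coefficients (10 here) and then stays there, since only the last N samples contribute.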
Next in Part 2: Models of programs.
References:
[Dou99] Bruce Powell Douglass, Doing Hard Time: Developing Real Time Systems with UML. Addison-Wesley, 1999.
[Chi94] M. Chiodo et al., "Hardware/Software Co-design of Embedded Systems," IEEE Micro, 1994.

Thursday, July 19, 2007

The Smurfit-Stone Building in Chicago from Millennium Park




Smurfit-Stone Building
Designed by: A. Epstein and Sons
Construction Completed: 1984
Area: The Loop
Post Code: 60601
City: Chicago, Illinois
One of Chicago's signature skyscrapers, the Smurfit-Stone Building more than makes up in style for what it lacks in height.

How to become an Embedded Geek

http://www.ganssle.com/startinges.pdf

Basics of ADCs and DACs, part 1

By Walt Kester and James Bryant, Analog Devices, Courtesy of DSP DesignLine
Jul 19 2007 (3:00 AM)

Introduction
Prior to the actual analog-to-digital conversion, the analog signal usually passes through some sort of signal conditioning circuitry, which performs such functions as amplification, attenuation, and filtering. The low-pass/band-pass filter is required to remove unwanted signals outside the bandwidth of interest and to prevent aliasing.
http://www.embedded.com/showArticle.jhtml?articleID=201002320

[Part 2 explains how ADCs and DACs introduce noise through quantization errors, offset errors, and other "DC" errors. It will be published Thursday, July 26.]

802.11 -1999 IEEE Standard

http://standards.ieee.org/getieee802/download/802.11-1999.pdf
