Embedded Microprocessor Systems: Real World Design (Embedded Technology)



That said, company or customer policy sometimes requires detailed software specifications.

For example, defense projects usually require extensive documentation detailing every function that the software performs. The mechanical and electrical requirements are typically testable. Stresses, tolerance stackups, and power dissipation can be verified mathematically. With software, it is more difficult to prove that the requirements are correct and that the flowcharts really will produce code that does what was intended.

The more complex and detailed the software requirements are, the less likely it is that you can prove the requirements to be correct. For this reason, the software requirements document is likely to be less detailed or even to be omitted entirely from the design process. In a simple design, the software definition, like the hardware definition, may describe the software for a single board. In a complex multiprocessor system, there may be an overall software document, which I consider to be part of the system engineering specification.

The software specifications should include the following: A statement of the requirements, including the requirements definition, engineering specifications, and hardware definition, as appropriate. The communication protocol to any other software, whether to another processor or to another piece of the software for this processor.

A description of how the design was implemented, using flowcharts, pseudocode, or other methods. Chapter 3 describes these in more detail. Because software can be broken down more flexibly than hardware, it is difficult to pin down a single software definition format that works for everybody all the time. The key is to define any interfaces that other engineers need to know about and identify the design details that engineers in the future might need to know.


This discussion assumes that the hardware and software are fairly independent. In a simple system like the pool timer, that is a good model. The hardware is designed, and then the software is written. While the actual design implementations may proceed in parallel, the software engineer basically writes code around the available hardware.

In a more complex system, the process may be iterative. For example, the software and hardware engineers may have a meeting at which they jointly decide what hardware is required to perform the function. Large amounts of memory may be required for data buffers, or the software group may request a specific peripheral IC because an interface library already has been developed for it. There are tradeoffs in this game between ease of software development and cost or complexity of hardware.

Hardware/Software Partitioning

Once, while having lunch with a group of engineers, I jokingly made the statement that my design philosophy was to put everything under software control.

That way, bugs in the design were by definition the fault of the software engineer. This flippant conversation touches on a real problem in any embedded system: Which functions should be performed in hardware and which should be performed in software?

An example of this can be found in the pool timer. As we will see in the next chapter, the pool timer displays time information on four seven-segment LED displays. There are display decoder ICs that accept a four-bit input and produce the signals necessary to drive the display.

This design takes a different approach and drives the display segments directly from a register, which is under software control.


When the software wants to display a number, it must convert the number to the seven-segment pattern and write that pattern to a register. The savings was a single IC in the design. While this decision saved an IC, it had three costs: ROM space was needed for the lookup table, extra code had to be included for the hex-to-seven-segment conversion, and the software needed extra time to perform the translation. Given the simplicity of this function, none of these was a serious problem. The table was 16 bytes long, so the code took a few more bytes and needed only a few microseconds to execute.
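For illustration, a minimal C sketch of such a conversion is shown below. The segment bit assignments (bit 0 = segment a through bit 6 = segment g) and the register address are assumptions chosen for the example, not the pool timer's actual hardware map:

    #include <stdint.h>

    /* Hypothetical address of the write-only segment register. */
    #define SEG_REG (*(volatile uint8_t *)0x4000)

    /* Segment patterns for hex digits 0-F; bit 0 = segment a ... bit 6 = segment g. */
    static const uint8_t seven_seg[16] = {
        0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07,
        0x7F, 0x6F, 0x77, 0x7C, 0x39, 0x5E, 0x79, 0x71
    };

    /* Convert a 4-bit value to its segment pattern and write it to the register. */
    void display_digit(uint8_t value)
    {
        SEG_REG = seven_seg[value & 0x0F];   /* mask to a single hex digit */
    }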

The more functions that can be pushed into software, the lower will be the product cost, up to the point where a faster processor or more memory is required to implement the added functionality.


As the saying goes, there is no such thing as a free lunch. Pushing functionality into the software increases software complexity, development time, and debug time. However, given the increasing speed and power of microprocessors, I expect to see an ever-increasing trend toward including as much functionality in the software as possible. In a more complex system, these tradeoffs can create heated discussion.

Should the software handle regular timer interrupts at a high rate and count them to time low-rate events, or should an external timer be added that can be programmed to interrupt the software when it times out? Should the software drive the stepper motor directly, or should an external stepper controller be used? If the software drives the motor, should protection logic be included to prevent damage to the motor drive transistors if the software turns on the wrong pair?

And if the processor runs out of throughput halfway through the project, did the design place too much of a burden on the software, or did the software engineer write inefficient code? The answers to these questions depend on your design. If you stay in this field very long, be prepared to get into one of these discussions. While doing everything in software increases development costs, moving functionality to the hardware increases product cost, and these costs are incurred with every unit built.

In an extremely cost-sensitive design, such as a low-cost consumer product, functions that cannot be performed in software may simply be left out. Here, we summarize the tradeoffs involved in choosing a multiprocessor architecture. A distributed processor system might have a single CPU that communicates with a host computer and distributes commands and data to lower-level processors that control motors, collect data from sensors, or perform some other, simpler task.

Distributed processor systems have the following advantages: The actual processing hardware can be located near the device being controlled or monitored. In large equipment, this may be a real advantage. If some of the functionality is optional, the cost of the processor that controls the option can be added or removed with the option. In a distributed processor system, each of the distributed CPUs usually can be a lower-performance, cheaper part than would be required for one central CPU. A distributed system can be designed with a better match between the CPU and the task it must perform.

In a single-CPU system, the CPU must be fast enough and have enough memory and so forth to perform all the tasks, whether they are simple or complex. The code for any given CPU in a distributed system usually is simpler. It is easier to determine whether the CPU power is adequate in a distributed system because fewer tasks are being swapped in and out and there is less interaction among the various processing that must be performed. For example, you need not worry about how the motor control function affects the serial interface throughput if the tasks are handled by separate processors.

Debug of distributed systems can be simpler since each processor performs a limited set of tasks. The advantages of a single-CPU system are: Synchronization, when needed, is easier. For instance, it is easier for a single-CPU system to synchronize motor startup to limit current surge simply by communication between tasks or by scheduling. All the data is in the same place, making communication with a host or other systems easier.

Fewer communication protocols are required to pass data around. On the other hand, a faster processor may be required, operating at a higher frequency and generating a lot of EMI. If the design changes so that intertask communication must be added, such as for motor synchronization, a distributed design may require that interfaces be added to each distributed CPU. In a single-CPU design, such a change is likely to be only to the software.

It is easier to download or update code in a single-CPU system. Debug of a single-CPU system may be easier since all the functions are in a single place and all the interactions can be examined. Of course, these interactions as well as the task switching and general complexity of the code can complicate debug as well. Fewer development tools are needed since there is only one processor. In a distributed system, the same thing can be achieved by using only one type of CPU; however, this defeats the ability to match the CPU to the task.

However, a more complex, more expensive RTOS may be required. With increasing processor power at decreasing cost, I think more single-CPU designs are to be expected. Some designs will take advantage of increased CPU horsepower to add new functions, such as real-time signal processing. But motors and other electromechanical devices are getting no faster, so systems that interact with these devices probably will use fewer, more powerful processors.

Complex systems that use a single Pentium-class CPU and a few 8-bit microcontrollers as smart sensors would not be surprising.

The requirements document describes: the requirements, restated from engineering or requirements documents; how the hardware implements the functionality; and the software interfaces to the hardware. Software specifications describe: the requirements; interfaces to other software; and how the software implements the requirements.

A Requirements Document Outline

The following is an outline for a requirements document that will fit most products.

Related internal documents, such as product specifications, environmental specifications, and the like. The following items are typical of the sort of thing that might be listed, and obviously all of these items will not apply to all products. This section is the core of the document and may run to dozens of pages.

External interfaces: interfaces to other systems, to a controlling host system, or to a slave subsystem.

Available options: may be lengthy if several options need to be described.

Input power: list input voltages, frequencies, and current; include international requirements.

Export restrictions and requirements: applies if using controlled technology; also, requirements for the product to be marketed in certain countries may limit the technology that can be used.

Input requirements: What size bottles does it use? What sizes of paper can it handle? How big or how small can the block of steel be that goes into the input hopper?

Capacity: How many blocks of steel or bottles or pieces of paper can it handle at a time?

Error handling: What happens if the operator puts in too many bottles or a block of steel that is too heavy? What happens if power goes off halfway through the process?

Weight: usually applies only to large or portable products.

Size: Does it have to fit through a standard door or on a standard elevator? In a standard briefcase?

Safety requirements: Does it have to operate in standing water with no danger of electrocution? Does it need a safety mat to stop the robotic arm when a person steps inside the fence? Are there rotating mechanisms that must be covered or stopped when a door is opened? Must the operator be protected from high temperatures?

External interfaces: interfaces to external systems, like a Base-T Ethernet interface to a computer network or an IrDA interface to transfer data to and from a PC.

Note that there may be other requirements as well, such as media requirements, customer versus field engineer maintenance items, and the like. However, since we are concentrating on embedded systems, these requirements are outside the scope of this outline. Finally, there is an additional type of requirement that deserves mention but that is outside the scope of this book.

Like all requirements, these are sometimes hard to quantify, but they do filter down to the design requirements at some point. In a complex design, it is often useful to include, with each requirement, a description of what drives that requirement. A requirement for an RS serial interface may be needed because the product must interface to product XYZ. If product XYZ becomes obsolete, or if another interface is used instead, that requirement can be deleted. Similarly, if someone suggests that the interface be removed, the original requirement to include it can be traced back to its source, and you can determine whether the requirement is still valid.

The connection between requirements and their source can be documented in an appendix. As mentioned earlier, this can be beneficial in finding the real requirements.

Communication

Specifications are important to any system design; ideally, they would capture everything anyone needs to know about the system. That is an ideal target, rarely realized in practice.

However, even if your system specifications are that good, you cannot eliminate the need for face-to-face communication on any design that involves more than one person. Of course, you will document the design to make life easy for the software engineers, right?

Embedded microprocessors fall into two broad categories: single-chip microcontrollers and multichip designs. Note that they are basically the same except that the single-chip design has everything inside the chip (inside the dashed line) and the multichip design has everything except the processor itself outside.

Most also have internal timers, serial interfaces, or other peripherals. The actual design of single-IC systems is straightforward. A single-IC design often requires an external timebase. What you use depends on your cost requirements and how accurate the timebase needs to be. If you are using an external clock source, such as a packaged oscillator, make sure it meets the voltage and capacitance drive specifications of the processor. Some microcontrollers, such as the Atmel ATtiny series, have internal R-C oscillators and do not need any external clock.

However, the R-C oscillators are not as stable or as accurate as a crystal or ceramic resonator. Some microcontrollers with internal oscillators improve clock accuracy by providing a means to calibrate the frequency, but the result is still not as stable as a crystal. The architectures are similar, but in the real world, a multichip design usually is more complex. A single-chip microcontroller may not be suitable for a design for many reasons. However, once a decision has been made to go to a multichip implementation, you take a quantum step in complexity.

A multichip design usually has most or all of its major functions, such as memory and peripherals, implemented as separate components. The benefits are more flexibility, more expandability, and usually more processing power. There are several types of microprocessor bus cycles, but all do the same basic things. If the cycle is a read cycle, the processor supplies a signal to tell the peripheral to drive its data onto the tristate data bus for the processor to capture.

If the cycle is a write cycle, the processor drives the write data onto the data bus and generates a signal indicating that the peripheral should capture the data. The bus cycles described here cover several processor families: Intel, Microchip, Zilog, Motorola, and Hitachi. The speed of the signals varies greatly from one processor to the next, but the basic waveform is the same for processors within a given family. Other manufacturers, such as Philips, also make variations on the 8x31 family that use Intel-type timing to access external memory.

In the Intel scheme, the data bus is multiplexed with the address bus. If the data bus is 16 bits, then all 16 data bits are multiplexed with the lower 16 address bits. A multiplexed scheme requires only 16 pins for address and data. During the first part of the machine cycle (labeled A on the diagram), the microprocessor places the address on the data bus; it must be captured by an external latch such as a 74AC. For a read cycle, -RD is driven low, indicating to the peripheral device that it should drive read data onto the bus, which the processor will leave in the tristated condition.

For a write cycle, -WR indicates that write data is available for the peripheral, and the processor will drive the data onto the data bus. The second waveform in the figure shows the Microchip timing. The basic waveform is nearly identical to the Intel timing, with one significant difference: during a write cycle, the Microchip part places write data on the data bus prior to the leading (falling) edge of the -WR strobe. With the Intel timing, write data is guaranteed to be stable only prior to the trailing (rising) edge of the -WR strobe.

The third waveform in the figure shows a different scheme. The data bus is still multiplexed with the address, but the address strobe (-AS) is true when low instead of when high, and there are no separate strobes for read and write. The fourth waveform in the figure shows parts that have separate address and data buses.

The address strobe is not used to latch the address but to indicate that a valid address is present on the bus. Similarly, the data strobe is used to indicate that valid write data is present on the data bus (write cycle) or that the peripheral should place read data on the bus (read cycle). These parts also use a -DTACK (data transfer acknowledge) signal from the addressed device to indicate the end of the data transfer cycle.

The processor will leave the data, address, and control signals active until a -DTACK is received from the peripheral device. The last timing diagram in Figure 2. These parts use an address strobe -AS to indicate a valid address but do not need an ALE signal, as there are separate pins for address and data. The diagram in Figure 2. The timing sequences shown in Figure 2. Some other memory access schemes exist. The C parts also have a demultiplexed mode that eliminates the external address latch.

Since the address is demultiplexed inside the chip, this mode requires an additional 16 pins for the address signals. The ALE signal is still generated to indicate a valid address, but external address latches are not required. The Zilog microprocessors, not shown in the figure, use similar bus cycles. However, these parts do not multiplex the address lines with the data lines, so there is no need for an ALE signal to latch the address. There are dedicated address pins on the part, and the address is stable throughout the bus cycle.

The part also provides an indication, similar to the ALE signal, when a bus cycle starts, for designs requiring that information. Some ARM7 processors use a nonmultiplexed version of the Intel timing. The address is latched so that when the multiplexed bus switches to data, the address is still available for the peripherals to use. In the circuit shown in the figure, the output enable signal to the 74AC latch is grounded. This enables the outputs, and therefore the address bus, all the time.

The latching circuit need not be a duplicate of the one shown in the figure. It could be implemented in a programmable logic device (PLD) or other logic. So far, we have discussed only 16-bit address buses, which allow access to 64K of memory. Many processors can address more than this.

Wait States

In many cases, a fast microprocessor must interface with a much slower peripheral. In this case, the normal timing of the microprocessor read, write, or data strobes is much too fast for the peripheral. For example, the processor may generate an -RD strobe that is shorter than the peripheral's output enable time.

In these cases, the usual solution is to add wait states to the bus cycles when the CPU accesses that peripheral. A wait state extends the microprocessor read or write cycle by an integral number of processor clock cycles. Not all microprocessors support wait states; for example, most single-chip processors, such as the PIC17C4x, do not have a provision for wait states. However, most processors designed for multichip applications support wait states.

Internal Wait States

Some processors have internal logic that can insert wait states. Such a processor may have several outputs that can be programmed to generate chip selects at specific address ranges.

For each output, an internal wait state generator can be programmed to automatically insert up to three wait states. They can also be programmed to either accept or ignore wait requests from the external wait signals.

Wait State Timing

When the processor starts a bus cycle and detects that the wait line is active, it will extend the cycle, leaving the -RD, -WR, or -DS signal active and sampling the wait line once per clock.

Once the wait signal has gone inactive, indicating that the peripheral is ready, the processor will complete the bus cycle. The wait input is conceptually straightforward, but the details can cause problems. The most common problem is the timing of the wait-state assertion, which requires careful study of the data sheets. If the wait logic is delayed too much, the request will occur too late and the processor will ignore it. Other processors have different quirks that must be taken into account. Some peripheral ICs include integral wait-state generators.

If you use one of these, be sure that the timing will work with the processor. Some peripheral ICs assert the wait request too late in the cycle for some processors to recognize it. Many processors use a normally-ready bus: they do not use a bus-acknowledge signal, and they default to no wait states. In other words, the input (usually READY) that causes wait states to be inserted in the cycle normally is pulled to the ready (no wait state) condition.

If the external logic does not drive the input to generate wait states, the processor generates the access cycle and continues on, regardless of whether the peripheral was really ready. Processors like the Motorola family use a normally-not-ready bus. In this scheme, each peripheral must return an ACK signal to indicate that it has completed the data transfer (accepted the write data or generated the read data). Normally-not-ready timing means that the default operation of the processor is to wait until the peripheral responds, which may be forever if the peripheral does not acknowledge the transfer.

In theory, access to nonexistent memory or a nonresponding peripheral will cause a permanent wait state. In practical systems, a timeout circuit usually generates an ACK (or, more specifically, an error signal) if the peripheral does not. The ready input is then driven active, permitting the processor to complete the cycle. In normally-not-ready systems, the peripheral must generate an ACK to indicate that the transfer is complete. In actual systems, the peripheral itself usually does not introduce the wait states. This is normally done by the logic that controls access to the peripheral device, which times wait or ACK assertions and makes sure that they are asserted only when the correct peripheral is accessed. Some peripherals (particularly those designed for the family) generate ACK internally and need no external logic for this function.
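As a rough illustration of how the number of wait states might be estimated, the sketch below (in C, with hypothetical names; the numbers in the closing comment are assumptions, not taken from any data sheet) simply rounds the extra time a slow peripheral needs up to whole clock periods:

    #include <stdint.h>

    /* Wait states needed = extra time required, rounded up to whole clock periods.
     * avail_ns : time the normal bus cycle allows the peripheral (worst case)
     * acc_ns   : worst-case access or output enable time of the peripheral
     * clk_ns   : processor clock period
     */
    unsigned wait_states(uint32_t avail_ns, uint32_t acc_ns, uint32_t clk_ns)
    {
        if (acc_ns <= avail_ns)
            return 0;                                      /* already fast enough */
        return (acc_ns - avail_ns + clk_ns - 1u) / clk_ns; /* round up */
    }

    /* Example with assumed numbers: a cycle that allows 120 ns, a peripheral with
     * a 200 ns access time, and a 50 ns clock give wait_states(120, 200, 50) = 2. */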

Memory

Processors with multiplexed buses need to capture the address in a latch because EPROMs, RAM, and most other peripheral devices need a stable address input during the external bus cycle. The code to be executed is programmed into the memory device, and it is read out by the microprocessor. One-time programmable (OTP) parts can be programmed once, but because there is no erasure window, they cannot be erased and reprogrammed.

Another type of memory is flash memory. Flash memory can be erased electrically, which means it can be reprogrammed without taking it out of the microprocessor circuit. Flash memory often is used when the product requires that the firmware be upgraded in the field. Early flash memories were expensive compared to EPROM, but the pricing is such that nearly all new designs are flash based.

The advantage of flash memory is that it can be programmed in-circuit, usually by the microprocessor that uses it. The programming procedure requires that the memory first be erased.


This can present a problem: if the code to program the flash memory resides in the flash memory itself, how do you reprogram it? This was often a real problem for designers using early flash memories. One way to fix the problem is to move the programming code into RAM and execute it from there. Another approach is to use a newer block type of flash memory. These devices do not require that the entire memory be erased, instead permitting the memory to be erased in blocks.
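A rough sketch of the copy-to-RAM approach is shown below. The symbol and buffer names are hypothetical, the routine being copied must be position-independent (or linked to run at the RAM address), and on some architectures the cast from a data buffer to a function pointer needs additional care (alignment, instruction cache):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical linker symbols marking the flash-resident programming routine. */
    extern uint8_t prog_code_start[];
    extern uint8_t prog_code_end[];

    typedef void (*flash_prog_fn)(uint32_t addr, const uint8_t *data, uint32_t len);

    static uint8_t ram_buffer[512];   /* RAM area assumed large enough for the routine */

    void program_flash_from_ram(uint32_t addr, const uint8_t *data, uint32_t len)
    {
        /* Copy the programming routine out of flash into RAM. */
        memcpy(ram_buffer, prog_code_start,
               (size_t)(prog_code_end - prog_code_start));

        /* Call it at its new address; interrupts would normally be disabled here,
         * since handlers still resident in flash cannot run while the flash is
         * being erased and rewritten. */
        flash_prog_fn run_from_ram = (flash_prog_fn)(void *)ram_buffer;
        run_from_ram(addr, data, len);
    }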

So the programming code can reside in a section of memory that is not erased, while the operating code resides in another part of memory that is erased and reprogrammed as needed. Two versions are available, one with the boot block at the bottom of the memory and one with the boot block at the top of memory, starting at FC000.

This allows you to reprogram the rest of the memory to update the firmware. Programming flash memories typically requires a specified sequence of writes to specific locations. The Atmel AT49F parts, for example, initiate the erase cycle with such a command sequence. Most flash devices use -DATA polling when programming.

This allows the processor to poll the device by attempting to read the location just programmed. The flash memory returns the complement of the data that was written until the internally timed programming cycle is complete. Not all block-organized flash memories have a single small boot block and a larger main block. Some have multiple boot blocks, and some divide the memory into a few large blocks.
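Putting the command-sequence and -DATA polling ideas together, a byte-program routine might look roughly like the following C sketch. The base address and the 0x5555/0x2AAA unlock sequence are typical of JEDEC-style parts but are assumptions here; the actual sequence comes from the data sheet of the device you use:

    #include <stdint.h>

    #define FLASH_BASE ((volatile uint8_t *)0x80000)   /* hypothetical mapping */

    /* Program one byte, then use -DATA polling to wait for completion. */
    void flash_program_byte(uint32_t offset, uint8_t value)
    {
        FLASH_BASE[0x5555] = 0xAA;     /* unlock cycle 1 (JEDEC-style, assumed) */
        FLASH_BASE[0x2AAA] = 0x55;     /* unlock cycle 2 */
        FLASH_BASE[0x5555] = 0xA0;     /* program command */
        FLASH_BASE[offset] = value;    /* write the data itself */

        /* -DATA polling: reads return the complement of bit 7 until the
         * internally timed programming operation finishes. A real routine
         * would also include a timeout. */
        while ((FLASH_BASE[offset] ^ value) & 0x80)
            ;
    }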

Most modern flash devices can be programmed using only the normal supply voltage (5V or less). Internal charge pumps generate the higher voltage needed for programming (typically 12V). Flash devices also can be programmed in a PROM programmer, which usually allows the boot block erase lockout to be overridden. Some microcontrollers with internal flash memory require an external programming voltage. The flash memory will have an additional input that controls writing of the memory array and which is inactive during reading.

Flash memory devices also have a means to read the device manufacturer and ID code. This is useful for device programmers, but it also is often needed for in-circuit programming. Different manufacturers have different algorithms for erasing and programming flash memory. If you want to have multiple sources for the flash memory in your design, your software will need to read the flash to determine which device is installed so it can determine which programming algorithm to use.
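A sketch of how firmware might branch on the ID is shown below; the ID values, function names, and the existence of a flash_read_id() helper are all hypothetical and stand in for whatever the real data sheets specify. As the next paragraph notes, each supported part then needs its own programming algorithm retained in memory:

    #include <stdint.h>

    /* Hypothetical IDs and programming routines for two interchangeable parts. */
    #define VENDOR_A_ID  0x1F34
    #define VENDOR_B_ID  0x0120

    extern uint16_t flash_read_id(void);   /* issues the device's ID command sequence */
    extern void program_alg_vendor_a(uint32_t addr, const uint8_t *data, uint32_t len);
    extern void program_alg_vendor_b(uint32_t addr, const uint8_t *data, uint32_t len);

    int flash_update(uint32_t addr, const uint8_t *data, uint32_t len)
    {
        switch (flash_read_id()) {
        case VENDOR_A_ID:
            program_alg_vendor_a(addr, data, len);
            return 0;
        case VENDOR_B_ID:
            program_alg_vendor_b(addr, data, len);
            return 0;
        default:
            return -1;   /* unknown part: refuse to program rather than guess */
        }
    }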

You also will need to retain multiple programming algorithms in memory, one for each type of device you can substitute into the system. When flash devices were programmed in a device programmer, these programming differences were taken care of by the programmer. As flash memories have grown in density, this has become impractical, since they no longer fit in a dual inline package (DIP). Early flash parts were available in PLCC (plastic leaded chip carrier) packages, which could be socketed, but many newer parts are only available in packages such as TSOP (thin small outline package) or BGA (ball grid array) that are difficult or impossible to put in a socket.

The parts are soldered on the board. The result is that in many designs, the only way to program the flash memory is in-circuit, using the microprocessor itself. This is fine if you want to program a flash memory to update an existing program. But how does the program get into the memory in the first place? Flash memories still can be programmed by a programmer, using a special socket, before they are installed on the board. But what happens if the vendor or your own manufacturing department inadvertently skips the programming step? Or what if you get a batch of boards with the wrong program in the flash?

Do you scrap the entire lot? Some designs are intended to have the flash memory programmed by an in-circuit programmer when the boards are tested. This is common in high-volume designs. In a design that does not use this technique, it is a good idea to provide a means to program the flash using an external fixture. To do this, the microprocessor must tristate its address and data lines so the external circuit can get to the flash. Many processors will tristate their buses if they are held in reset.

If buffers or latches are used for the signals, or if the signals pass through a PLD before they reach the flash, they can be tristated there. Once the processor has released the bus, some means must be provided to access the flash memory. This can be accomplished with a connector that brings out the address/data buses and control signals. If there is no room for that, a matrix of pads on the PCB, accessed with spring-loaded test pins, can be used instead.

Finally, an alternative to directly programming the flash is to provide a means, such as a header, to install a daughterboard containing a small flash memory that replaces the system flash. By mapping part of the system flash to a different location in memory when the daughterboard is installed, the boot portion of the system flash can be reprogrammed.

Such remapping can be accomplished with a jumper on the main board, or it could be automatically activated when the daughterboard is installed. The boot portion of the system flash, of course, would be programmed with code that permits the remaining flash memory to be programmed.

As mentioned in Chapter 1, masked ROM is memory programmed by the IC manufacturer using a mask. ROM normally is used only in very high-volume applications where the code is not expected to change over the life of the product. The inputs to a ROM or EPROM are the address inputs, which can be up to 18 bits; a chip select; and an output enable. The only outputs are the 8 or 16 data bits back to the microprocessor.

Until the access time has elapsed, the output data is undefined. After the access time has elapsed, the output data for the addressed location is available. The output enable signal turns on the tristate EPROM outputs, driving the data onto the microprocessor data bus. The chip select signal comes from the address decoding logic connected to the microprocessor address bus.

On processors that generate chip selects internally, the chip select signal can come directly from the microprocessor itself. If the address changes during the cycle, the outputs also change as the EPROM attempts to access the data at the new address. EPROM access times are specified as a maximum; an EPROM with a specified maximum access time requires no more than that time, measured from when the address is stable and chip select is low, to generate a valid output. Most of these EPROMs will be faster than the maximum time specified, which gets a lot of designers into trouble.

If you do not take into account the worst-case numbers, the design will work until the purchasing department buys a batch of EPROMs that happen to be a little slower than the ones you used in engineering debug. The procedure is as follows: Calculate the time from when the microprocessor provides a stable address until it requires stable data. Subtract any delays, such as the address latch propagation delay.

There are three clock cycles from when the microprocessor outputs the address until it requires stable data. However, due to internal delays in the microprocessor IC itself, the address is not available until some time (Tad on the diagram) after the first clock edge. Then, the processor needs the EPROM data stable some time before the clock edge that captures the data because the internal data latch has a finite setup time.

This is time Tsu on the diagram. You can see that the effect of any propagation delay in the address or data path is to shorten the available access time by the sum of all the delays. A processor with a nonmultiplexed data bus will have different timing from that shown in the figure. The processor will assert the address some delay after a clock edge, a control strobe will be generated some delay after another clock edge, and the processor will want data to be stable on the rising edge of the control strobe or on the clock edge preceding it.
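Returning to the three-clock example above, a worked calculation with assumed numbers (illustrations only, not values from any particular data sheet): suppose the clock period is 50 ns, so three clocks give 150 ns; Tad is 30 ns, Tsu is 20 ns, and the address latch adds 10 ns. The available access time is then

    150 ns - 30 ns (Tad) - 20 ns (Tsu) - 10 ns (latch delay) = 90 ns

so the EPROM must have a worst-case address access time of 90 ns or less, and the chip select path must be checked the same way with its decode delay subtracted.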

The EPROM must be fast enough to produce data in the time from when the address is stable to when the processor needs the data, minus any delays in the data or address paths. For most EPROMs, the access time from chip select is the same, or nearly the same, as the access time from the address. Referring again to the figure, the procedure for calculating the chip select access time is the same as for the address access time except that the delay through the address decode logic must be subtracted from the total time available.

If the upper address bits are latched and then decoded to generate the chip select, both the latch delay and the decoder delay must be subtracted from the total time. After the address and chip select access times are calculated, the EPROM speed required is the smaller of the two numbers. The next parameter is the output enable time: the time from when the microprocessor asserts the -RD strobe (or the equivalent signal) to when it needs stable data available. This usually is not the limiting factor; however, it should be checked. Calculating the output enable time is similar to calculating the access time: calculate the time from when the microprocessor asserts the -RD signal until it requires stable data.

Subtract any delays, such as the data bus transceivers. Another parameter is the time from when the output enable (OE) signal goes high until the EPROM actually stops driving (tristates) its pins, sometimes called the data bus release time. This time is important because if the EPROM is still driving the data bus when the processor starts the next cycle, there will be bus contention and the wrong address can be latched.

In most cases, selecting an EPROM that is fast enough for the processor also results in the data hold time being fast enough. If the calculated hold time is a problem, the solution is to use a data buffer (more about that later) or go to a faster EPROM. Calculating the timing for flash memories is the same as for EPROMs except that you also must take into account the write timing.

In this respect, flash memory timing is similar to a RAM, which we will discuss next. Static RAM (SRAM) means that the memory cells do not change unless they are rewritten or the power is removed. A static RAM consists of an array of flip-flops that are selected by a decoding array inside the chip. Static RAM usually comes in x8 configurations, but there are some x16 devices.

Battery-backed (nonvolatile) RAM consists of a special low-power RAM chip packaged with a battery (usually lithium). The combination also includes power-switching circuitry that operates the RAM from system power when available and from the battery when system power is removed. The switching logic also protects the RAM from inadvertent writes when the power is below a certain threshold, usually when the system power is coming on or going off.

Dynamic RAM (DRAM) stores information as charge on a tiny capacitor, one per data bit. Because the capacitor charge bleeds off, the data must be refreshed periodically. DRAM multiplexes the address pins into row and column addresses. The row address is latched in with a signal called RAS (row address strobe), and the column address is latched in with a signal called CAS (column address strobe). The need to multiplex the addresses, generate the strobes, and refresh the part makes DRAM more difficult to design with.

Note that the connections are identical to those for an EPROM with the exception of the added write enable signal, which is connected to the microprocessor -WR signal. For a write cycle, additional factors must be considered. First, the data and control setup and hold times must be calculated. Several additional timing parameters must be taken into consideration with a RAM. One is the address setup time: if the minimum setup time is not met, the address decoding logic inside the RAM still may be changing when -WR is asserted, and the wrong address or multiple addresses may be changed.

Note that the address setup time applies to the leading edge of the -WR strobe. To guarantee that the correct data are written to the selected location, the data must be stable before the trailing edge of the -WR signal. The data and address must each be held for some specific time after the trailing edge of -WR. The price for not meeting these parameters is intermittent RAM problems: locations that seem to change at random or data that are incorrectly written.

Like EPROM access time problems, the symptoms may occur only with specific brands of parts or only when the temperature reaches a certain point. Calculating the address setup time is as follows: Using the microprocessor data sheets, calculate the time that the address is stable before assertion of the -WR signal (remember to use the worst-case numbers). Subtract address latch propagation delays. The result must be greater than the address setup time specified for the RAM chip to be used. If it is not, you must either select a faster RAM or delay the assertion of -WR using external logic.

The formula for this is: address setup time at the RAM = (time from stable address to -WR assertion at the CPU) - (address latch propagation delay). Note that delays in the -WR path do not affect address setup time. In fact, a delay in the -WR path improves address setup time because it gives the address more time to stabilize at the RAM before the -WR signal arrives. However, this is not a free lunch: delays in the -WR signal path can cause a data hold time problem, which we'll look at later.
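As a brief worked example with assumed numbers: if the processor provides a stable address 40 ns before asserting -WR and the address latch adds 12 ns, the address setup time seen by the RAM is

    40 ns - 12 ns = 28 ns

which must be greater than the RAM's specified minimum address setup time; otherwise a faster RAM or a delayed -WR is needed.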

Data setup time is calculated in much the same way as address setup time. Calculate the time from when the microprocessor asserts the data until the trailing edge of the -WR signal. Subtract any data bus buffer delays. Your RAM must have a data setup time that is less than the calculated value. Data and address hold time are calculated by determining how long the microprocessor holds the address and data after the trailing edge of -WR.

If you use address latches for all address lines, address hold time usually will not be a problem since the address will remain stable until the start of the next cycle. If you have data bus buffers, add the minimum propagation delay, if known, to the microprocessor data hold time. If the minimum is not known, do not add the buffer delay. The RAM must have a smaller hold time requirement than the calculated result. The preceding information is based on the assumption that your microprocessor generates separate -RD and -WR signals. For processors that instead provide a single data strobe and a direction signal, one option is to use a RAM with more than one chip select input; one of the chip select signals is then connected to the data strobe from the processor.

The disadvantage to this is that the output enable time becomes the chip select access time, which may require that a faster device be used. The second option for these processors is to generate the read and write strobes from the microprocessor data strobe and direction signals. It has the same timing parameters as SRAM and is interfaced in the same way. The disadvantages of DRAMs are that interfacing is more difficult and that the parts must be refreshed periodically. A typical DRAM has half as many address lines as are needed to access the entire memory array.

The lines are multiplexed with the row address presented first and the column address presented on the same pins. The actual DRAM would have 11 address lines. A DRAM has several important parameters:

Row address setup time: The time that the row address must be stable on the address inputs before -RAS is driven low.

Row address hold time: The time that the row address must be stable after the falling edge of -RAS.

Column address setup time: The time that the column address must be stable on the address inputs before -CAS is driven low.

Column address hold time: The time that the column address must be stable after the falling edge of -CAS.

Access time from -RAS: The maximum time from the falling edge of -RAS to output data available.

Access time from -CAS: The maximum time from the falling edge of -CAS to output data available.

Looking at a DRAM data sheet reveals many more timing parameters than those listed here, but these are the key ones.

Note that two access times are listed: one from -RAS and one from -CAS. The actual access time is determined by the circuit. In a slower circuit, where the row address will be internally decoded by the time -CAS occurs, the access time will be governed by when -CAS falls. This example is x8 for simplicity. At the start of the cycle (see the figure), the row address is presented to the DRAM and -RAS is driven low, latching the row address. After the column address setup time is met, -CAS is pulsed, latching the column address.

This is called a read-modify-write (rmw) cycle. Few embedded processors execute rmw cycles. The reason this timing is important is that you need to avoid bus contention for processors where the write signal may be later than -CAS. Note, however, that the data is latched and must be stable before -WE or -CAS, whichever occurs later. After Delay 1, which is the row address hold time, the select signal to the multiplexer changes states, which switches the DRAM address inputs from the row to the column address. After Delay 2, which is the column address setup time, -CAS is driven low. The delays in the figure can be generated with delay lines or with clocked logic.

In either case, you must make sure that the inactive state of -DS has propagated through all delays before the next cycle starts. This circuit is simplified since it does not include a provision for separate refresh, but it shows the timing principles involved.


The example in the figure may require the addition of wait states, depending on processor and DRAM speed. In some designs, you can start the cycle early. With a Motorola-type bus, the address strobe can be used to start the cycle. In both cases, the address decoding must be fast enough to ensure that the RAM is not falsely selected. Also, the address multiplexer adds an additional level of delay that must be taken into account; the row address must be stable prior to the leading edge of -RAS.

Refresh

Dynamic RAM must be refreshed. The storage capacitor loses its charge fairly quickly, typically in 15 milliseconds (ms) or less. Refresh is accomplished by accessing each row in the DRAM. Internal logic in the DRAM restores the charge on the capacitor. Note that accessing any row refreshes all columns in that row. Any read or write cycle refreshes the entire row, but the catch is that all rows (that is, all row addresses) must be refreshed within the refresh interval.
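To see where the per-row refresh period mentioned next comes from, assume (purely for illustration) a part with 1024 rows and a 15 ms refresh interval:

    15 ms / 1024 rows is roughly 15 microseconds per row

so, on average, one row must be refreshed about every 15 microseconds to keep the whole array alive.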

Unless refresh was accomplished with an actual data read, early DRAMs required that the user generate a refresh address and a -RAS signal every 15 microseconds (µs) or so. This scheme required an external counter and a way to multiplex the count onto the address lines. The timing logic had to recognize a refresh request and generate a refresh cycle, arbitrating it with processor cycles. Newer DRAMs include an internal refresh address counter and support a special refresh cycle. Each time the DRAM is refreshed using this special cycle, the DRAM recognizes the condition, refreshes the internal row, and then increments the refresh counter to the next address.

The data bus is not driven during the refresh cycle. While an external counter is not required for the internal refresh cycle, refresh still poses some problems. First, an external timer must generate a request for refresh at regular intervals. Second, the interface logic must interleave the refresh cycles with the processor access cycles. There are several ways to handle the conflict between processor and refresh cycles. One method is to give refresh priority: if the processor wants to use the DRAM, it must wait until the current refresh cycle is completed.

This probably is the most common method of handling refresh. A second method is to synchronize refresh to the processor, allowing refresh to occur only for cycles that do not use the DRAM. This can be dangerous if the processor is executing code from the DRAM, which may never permit refresh to occur. However, if the DRAM will be used only for data, this approach may be feasible.

A slow processor may permit the entire refresh cycle to be performed without affecting normal operation, such as during the ALE time. A third method is to use the direct memory access (DMA) capability of the processor. DMA can be used for refresh by allowing the refresh logic to request a hold and do the refresh cycle when the processor acknowledges the hold request. The disadvantage of this is that it usually takes a few clocks for the processor to get in and out of hold.

Many microprocessors, such as some versions of the 80C, have built-in refresh logic. This consists of an internal timer that generates refresh requests at regular intervals. Processors that generate refresh requests internally also provide the refresh row address, so that -RAS-only refresh cycles may be performed. The required delays may be generated using delay lines or a clock. Either way, all the DRAM timing constraints must be met. Some DRAMs also provide a self-refresh mode. This function handles all the timing, addressing, and control necessary to refresh the memory. The only drawback is that the CPU cannot access the memory while refreshing, and the CPU must command the self-refresh to begin.

The TC59LM has two control bits that select self-refresh and other modes of operation. However, the decision to use DRAM implies a considerable increase in the cost and complexity of a design; you should consider it carefully to determine whether it is necessary. This section has been a lengthy discussion of connecting memory to a microprocessor and calculating the worst-case timings, but it is important because the timing of all other peripherals is calculated in the same way.

The foregoing information is based on the assumption that the designer will use worst-case numbers. Some manufacturers provide a table or other information that indicates the memory speed needed for a specific clock rate. However, if it is not specified in that way, assume the worst-case timing scenario eventually will happen. One last note about timing calculations: They are straightforward to do with a calculator, but a number of timing analysis programs for PCs will do the calculations, display the resulting waveform on the screen, and even highlight problem areas in red.

An example is Timing Diagrammer Pro from SynaptiCAD. These programs typically include libraries of microprocessors and other parts, including the timing parameters, so you need not even look up the worst-case parameters on the data sheets. The program does all the calculations for you, and you can print out a timing diagram that can be included in the board specifications or other documentation.

The entire point of an embedded microprocessor is to monitor or control some real-world event. Some microprocessor ICs have built-in ADCs, but the processor itself still works with digital values.

When the microprocessor performs a read to the address of the 74AC, the decoding logic produces a read strobe, and the 74AC outputs are enabled onto the microprocessor data bus. Similarly, a write to the address of the 74AC generates a write strobe that clocks the data bus value into the 74AC. The input bits to the 74AC could be switch contacts, a temperature sensor, comparator outputs, or any other digital information. The decoding logic should produce output strobes that follow the microprocessor -RD and -WR signals.
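From the software side, such memory-mapped input and output ports are simply reads and writes of decoded addresses. The addresses in this C sketch are hypothetical; they depend entirely on the decoding logic in the design:

    #include <stdint.h>

    /* Hypothetical addresses decoded for the input buffer and output register. */
    #define INPUT_PORT   (*(volatile uint8_t *)0x8000)
    #define OUTPUT_PORT  (*(volatile uint8_t *)0x8001)

    /* Reading the location generates the read strobe that enables the buffer onto
     * the data bus; writing generates the strobe that clocks the output register. */
    uint8_t read_switches(void)
    {
        return INPUT_PORT;
    }

    void set_outputs(uint8_t value)
    {
        OUTPUT_PORT = value;
    }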

Of course, a wider address bus or a need to decode to greater resolution will require a wider NAND circuit. The second circuit in the figure produces output strobes that follow the data strobe from the microprocessor and are suitable for clocking data into a register or for enabling a buffer. The unused enable is pulled up. As indicated in the figure, one drawback to this circuit is that each strobe goes active for either a read or a write.

If a write strobe were just an address decode, not gated with -WR, the register would not get a clock until after the data were gone. If the read strobe were not gated with -RD, an output buffer would be enabled too long, and there may be bus contention at the end of the bus cycle when the next one starts and the microprocessor tries to drive the data bus.


A second reason for gating the strobes is that while the address is changing at the start of a bus cycle, the address lines may not all change at the same time. Consequently, the wrong address may momentarily appear on the address lines, and the wrong device could be selected. The decoding logic could produce a short pulse on a write strobe signal, clocking garbage data into a register. Gating read and write strobes with the control signals makes sure the strobes go active only when address and data signals are stable.

The last circuit in the figure generates separate read and write strobes: read strobes are generated at two addresses, and write strobes at three. This simple latch-and-buffer approach has some drawbacks. The latch outputs cannot be read to determine whether a particular bit is set. The inputs are always inputs; the outputs are always outputs. If you need nine inputs instead of eight, but only seven outputs, you cannot use a latch output as an input; you must add another 74AC buffer. PC board real estate is another cost: each new set of eight inputs or outputs requires another IC and another output from the decoding logic. Finally, a design often requires other functions, such as a timer, serial interface, or ADC, which cannot be implemented with simple latches.

Peripheral ICs

Most microprocessors intended for multichip designs have peripheral ICs as part of the product family. A few examples are described here.

Timers

A timer peripheral consists of a counter that decrements or increments at some clock rate. The processor can read the count, and the timer may generate an interrupt or pulse an output pin when the count rolls over to zero.

Some timer ICs allow one timer to be cascaded from another for long delays. The timer output varies with the particular IC used; many have outputs that can be programmed for a square wave, single pulse on output, or variable duty cycle. Typical uses for a timer IC are to generate a delay, usually for scheduling some real-time event, controlling motors (DC PWM or stepper), and generating a regular timekeeping interrupt. Some port ICs include hardware handshaking that permits a port to be used for interprocessor communication in multiprocessor systems.
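As an illustration of the regular timekeeping interrupt (and of counting fast ticks to time low-rate events, as raised earlier in the hardware/software partitioning discussion), here is a hedged C sketch; the tick rate, the names, and the way the interrupt vector is attached are assumptions that vary with the compiler and processor:

    #include <stdint.h>

    /* Assume a hardware timer has been programmed to interrupt every 1 ms; how the
     * interrupt vector is attached is compiler- and processor-specific. */

    static volatile uint32_t ms_ticks;      /* free-running millisecond count    */
    static volatile uint8_t  second_flag;   /* set once per second for slow work */

    void timer_isr(void)                    /* called from the timer interrupt   */
    {
        ms_ticks++;
        if ((ms_ticks % 1000u) == 0u)       /* 1000 fast ticks = one slow event  */
            second_flag = 1u;
    }

    /* The main loop polls the flag and does the once-per-second work outside the ISR. */
    void poll_timekeeping(void)
    {
        if (second_flag) {
            second_flag = 0u;
            /* update the displayed time, kick a watchdog, and so on */
        }
    }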