Frequently Asked Questions

This document was last updated on 6 July, 2007. If the answer to your question is not here, and you are reading this offline, check to see if the online FAQ is more current. Or search the help files. If you still can't find the answer, ask us.

Is NoICE a simulator?

The answer to this question used to be no. However, due to popular demand, we have added simulators for the ARM, MSP430, 68HC12, 68HC08, and 8051.

Except for these, NoICE is a remote debugger. To execute your program, NoICE must be connected to physical target hardware. For the HC12 or MC9S08, the connection can be BDM. For the HC08, it can be MON08. For the ARM or MSP430, it can be JTAG. For other targets, the connection is usually via serial interface, and the target must contain a small monitor program.

For evaluation purposes, you can configure NoICE to use a "dummy target". This simulates target memory, so that you can download programs and experiment with most NoICE commands, including the disassembler, source viewer, memory editing and watch, symbols, etc. However, the dummy target will not execute programs.

What's the matter with simulators?

Personal bias. I have been an embedded systems programmer for over 25 years. I wrote NoICE for my own use, and I prefer to debug on real hardware whenever possible. Most embedded processors are used to control specialized external hardware, which is often difficult or impossible to simulate exactly.

Instruction set simulation is quite straightforward, and that is what the NoICE simulator supports. However, simulating the UARTs, timers, and other peripherals found on current microprocessors is a very complex task - at least, if you want a good (i.e., useful) simulation. The cycle-by-cycle operation of these peripherals is seldom publicly documented, and anything less than a cycle-by-cycle simulation will mask the problems which occur in real systems.

One often sees the claim that simulation is useful "until the hardware is ready". In my experience, the first-cut hardware is (or can be) ready long before marketing gets done arguing about the software feature set - and certainly before you get done doing your initial design and documentation.

If all you need is instruction execution, buy a single-board computer (SBC) and use a debugger. You won't have to worry about whether or not the author of the simulator got the carry flag setting wrong on one obscure instruction.

For real debugging, either wait for the first cut of real hardware, or tack some interfaces onto your SBC.

If you really want a simulator, there are some very good ones available, shareware and otherwise, which do cycle-by-cycle simulations of on-chip peripherals. And no, I can't recommend one.

What is a remote debugger?

A remote debugger consists of a host program running on your PC, which provides the debugger user interface, symbol tables, etc., and a (usually) small monitor program running on the actual target processor. The two programs communicate via RS-232 or some other medium.

"Remote" in this case usually means only a few feet of cable connecting the two devices. The term was coined to distinguish two-computer debugging from the case where the debugger runs on the same computer as the program being debugged.

The target monitor usually implements only a few primitive commands such as memory read and write, register read and write, and execute.

The advantage of this approach is that the target monitor can be kept simple and small, because it doesn't need to contain disassembler tables, ASCII command parsers and the like. Small size also leaves as much of the processor address space as possible for your application program.
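The "few primitive commands" idea can be sketched in a few lines. The Python below is a hypothetical illustration of such a dispatch loop, not the real NoICE protocol (whose opcodes and framing differ, and whose execute command is omitted here):

```python
# Hypothetical sketch of a target monitor's primitive command set:
# memory read/write and register read/write.  The real NoICE wire
# protocol uses its own binary framing; this only shows the shape.

memory = bytearray(0x10000)            # simulated target address space
registers = {"PC": 0x0000, "SP": 0xFF}

def handle(cmd, args):
    """Dispatch one primitive debugger request."""
    if cmd == "READ_MEM":              # args = (addr, count)
        addr, count = args
        return bytes(memory[addr:addr + count])
    if cmd == "WRITE_MEM":             # args = (addr, data)
        addr, data = args
        memory[addr:addr + len(data)] = data
        return b"OK"
    if cmd == "READ_REGS":
        return dict(registers)
    if cmd == "WRITE_REGS":            # args = {name: value, ...}
        registers.update(args)
        return b"OK"
    raise ValueError("unknown command: " + cmd)
```

Everything fancier (disassembly, symbols, source display) lives in the host, which is why the monitor stays small.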

Some newer processors have built-in debugging resources that you may be able to use instead of a target monitor. See 68HC12 with BDM, 68HC08 with MON08 or MC9S08 with BDM, ARM with JTAG, or MSP430 with JTAG.

The host program can grow as fancy as required, usually without affecting the target program. For example, the DOS and Windows versions of NoICE both work with the same target program, even though the Windows version offers many new features.

The resources used by the NoICE monitor are similar to those required by "classic" hex debug monitors such as BUFFALO for the 68HC11, or Steve Kemplin's MONPLUS for the 8051:

More about the NoICE monitor

Customizing the NoICE monitor

Wouldn't it be better to use an In Circuit Emulator?

In Circuit Emulators are very nice devices, and they have several advantages over NoICE or any monitor-based debugger:

Let's look at these:

The major downside to In Circuit Emulators is their expense. This often leads to the purchase of one ICE, which must be shared by all the software engineers. This isn't fun. With remote debuggers, every developer can have their own setup.

That said, NoICE could be modified to use an In Circuit Emulator instead of a target monitor. Please contact us for details.

Why is it called NoICE?

NoICE provides you with most of the features of an In Circuit Emulator - but with "No ICE". OK, OK, I'm a software engineer, not a marketing person/comedian: the program needed a name.

NoICE Debugger is not associated with

What is source-level debugging?

You can debug at several levels:

disassembly: NoICE will disassemble memory and show the result as assembly mnemonics. Addresses and other arguments will be shown in hexadecimal.

symbolic: As above, except that addresses and other arguments will be displayed using names, labels, and equates defined in your program whenever possible. This requires support from your assembler/compiler, or the use of a symbol processing utility to pass symbol definitions to NoICE. Once the symbols are defined, you can also use them in NoICE expressions instead of decimal or hexadecimal numbers.

source-level: As above, except that NoICE will display your C or assembly source code rather than disassembled memory whenever possible. This requires support from your assembler/compiler, or the use of a symbol processing utility to pass the address of each source line to NoICE.

How do I get source-level debug with assembler/compiler X?

NoICE provides explicit support for a number of assemblers and compilers.

NoICE can load symbol and line information along with memory contents from ImageCraft DBG files, Elf/Dwarf/Stabs files, and IEEE-695 files. ImageCraft's format is proprietary, and few freeware or shareware compilers or assemblers support IEEE-695. Elf/Dwarf/Stabs is quite popular, used by GNU/GCC and most commercial compilers.

For assembly debugging, some symbol information may also be loaded from Extended Tektronix or Intel Hex format files. However, the line number information necessary for source-level debugging is generally not available in these formats, and most freeware assemblers do not generate symbol information at all.

In many cases, line number and symbol information for assembly programs can be extracted from listing and map files by small utility programs. The output of these utilities is a NoICE command file which may then be PLAYed by NoICE to define the symbols and line numbers. A new assembler or linker may be supported simply by writing the appropriate utility.
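As a flavor of what such a utility does, here is a hypothetical Python sketch. Both the map-file format it parses and the "DEF" command it emits are assumptions for illustration only - check the NoICE help files for your assembler's actual utility and the real symbol-definition command syntax.

```python
import re

# Hypothetical map-file-to-NoICE converter.  Assumes map lines of the
# form "name  hexaddr" and that symbols are defined with a "DEF"
# command; both are illustrative assumptions, not NoICE documentation.

def map_to_noi(map_text):
    """Turn lines like 'Start  2000' into a PLAY-able command file."""
    commands = []
    for line in map_text.splitlines():
        m = re.match(r"\s*(\w+)\s+([0-9A-Fa-f]{1,4})\s*$", line)
        if m:
            name, addr = m.group(1), int(m.group(2), 16)
            commands.append("DEF %s 0x%04X" % (name, addr))
    return "\n".join(commands)
```

A real utility for your toolchain would do the same thing with the actual listing/map layout and the actual NoICE commands.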

The source code for most of these utilities is available to licensed users of NoICE, in case you wish to customize them, or write a similar utility for use with another assembler.

Support for C compilers is more difficult, and requires some degree of support from the compiler vendor. This is because much of the information, including correlation of addresses to C source lines, data types, and stack-relative automatic variables, is not available to the assembler or linker unless explicitly passed by the compiler.

For detailed information about the symbol processing utilities and how to use them, please refer to the help files.

I followed the instructions to get source-level debug, but when I download, I just see disassembly. Why?

The answer is part geek and part philosophy: main is the first C code of your program, but it is not the first instruction executed by the processor. That honor rests with the C startup code, which initializes the stack pointer, clears memory, and does assorted I/O initialization. After all that, it calls main.

Some debuggers hide all the low-level stuff from you and execute til main before showing you anything. My opinion is that hiding low-level stuff is not a good thing when working on embedded systems: there are times when you want to step through the startup code, or set a breakpoint in the midst of it.

So, NoICE loads the code and sets the PC to the first instruction of the startup code. If you were running without NoICE, this is where the reset vector would point.

If you want to skip over this, you could set a breakpoint at main, and then execute til there:

    B main

This should stop at main and show you the source code there.

If you are loading Elf/Dwarf, Imagecraft DBG, or IEEE-695 files, NoICE can do this automatically for you. Click on the "Run" menu, and select "Go until main after LOAD".

Why do I sometimes get "overlap" errors with 8051 breakpoints and single-step?

Breakpoints operate by inserting a special instruction at the breakpoint location. Execution of the instruction causes entry into the monitor. Ideally this instruction is a single byte software interrupt, such as the 68HC11's SWI or the Z80's RSTnn.

Unfortunately, the 8051 has no single-byte op-code suitable for use as a breakpoint instruction. Thus, NoICE must use a three byte "LCALL" (hex 12) instruction. This has the effect that breakpoints cannot be placed more closely than three bytes apart, as the inserted breakpoint instructions would overlap. NoICE will prevent the insertion of a breakpoint which would overlap another.
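The overlap rule amounts to a simple distance check. This Python sketch (an illustration, not NoICE's actual code) rejects a new breakpoint whose three LCALL bytes would intersect an existing one:

```python
BP_LEN = 3   # the 8051 LCALL breakpoint instruction occupies 3 bytes

def overlaps(existing, new_addr):
    """True if a 3-byte breakpoint at new_addr would overlap any
    breakpoint already set at the addresses in 'existing'."""
    return any(abs(new_addr - bp) < BP_LEN for bp in existing)
```

For example, with a breakpoint at 0x2000, a new one at 0x2002 is rejected, while 0x2003 is the nearest allowed address.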

Hey, you with the 8051-core in an FPGA: why not add a one-byte breakpoint instruction and eliminate all this foolishness? Just make one of the unused 8051 op-codes perform a subroutine call to a fixed address - the NoICE monitor's breakpoint handler. Hmmm - it seems that there is only one unused op-code: 0xA5.

Since single-step is implemented using breakpoints, there may be instructions, such as short forward or backward branches, which cannot be stepped due to overlapped breakpoints.

Note, however, that there are some unsafe breakpoint locations which NoICE cannot detect. Consider the following 8051 instruction sequence:

    mov a,foobar ;2 bytes
    inc a  ;1 byte
    mov foobar,a ;2 bytes

If you place a breakpoint (lcall) on the first instruction, its three bytes will cover both the first "mov" and the "inc". In most cases, this will not cause a problem: when you hit the breakpoint, the breakpoint instruction is removed and the original bytes restored. You can then continue from the breakpoint and execute the original instructions.

Suppose, however, that the program branches to the "inc a" while the breakpoint instruction is in place. Then instead of executing "inc a", the processor will execute the third byte of the breakpoint's "lcall" instruction, with unpredictable consequences.

Since NoICE has no way of knowing which locations may be jumped to from elsewhere, the only way for NoICE to prevent this would be to allow breakpoints only on three-byte instructions. Since most 8051 instructions are only one or two bytes, this would be too restrictive. Thus, you must be careful not to insert 8051 breakpoints whose second or third bytes would overlap branch, jump, or call destinations. (Note: version 6 and above of NoICE will not allow the insertion of breakpoints that overlap symbol addresses.)

This scenario does not usually occur during single-step, because the automatic breakpoint instructions used by single-step are only inserted during the execution of a single instruction.

Why don't you support the PIC/AVR/etc.?

We would like to support the Microchip PIC and the Atmel AVR, but the architecture of both pretty much precludes the use of a monitor-based debugger. For the PIC in particular: code memory is wider than 8 bits, the processor cannot write into code memory, most members of the family have only a two level stack, and there is no way to push or pull data from the stack except for call and return. The AVR has a similar set of problems. If you can think of a way around these problems, please contact us.

Additional processors may be supported in the future, as time and interest allow. Some likely candidates are listed below. If you are interested in one of these, or wish to suggest another target, please contact us.

The DOS version of NoICE also supported the Z8, 6801/6301, 6805, 8096, 6809, MELPS740, TMS370, and (beta)H8/300. A version of NoICE for Windows for any of these processors could be done if there were sufficient interest. Please contact us to lobby for your favorite.

How can I make NoICE reset my target hardware?

If you are using BDM with the 68HC12, or MON08 with the 68HC08, you can use the RESET command to reset your target hardware. However, this command is not yet supported by the NoICE serial protocol.

Some people using the serial protocol reset their hardware by connecting one of the RS-232 control lines to their target's RESET signal.

NoICE can control both RTS and DTR using the RTS and DTR commands. When your PC starts up, it probably sets both of these off (negative voltage). When NoICE opens the serial port, it sets RTS and DTR to the states you specify in the Target Communications dialog. So, using RTS as an example:

  1. Decide whether you want RTS off or RTS on to reset the target, and connect your hardware as appropriate. Since most RS-232 receivers invert, and most processor reset lines are active low, you may find RTS on = reset to be the easiest connection. Depending on what else drives the reset line, you may need a blocking diode or a gate between the RS-232 receiver and the reset signal.
  2. Configure the NoICE serial dialog to turn RTS off at startup (otherwise your target won't respond during startup).
  3. Reset the target by typing the commands
        RTS 1
        RTS 0

You may find it convenient to create a file called RST.NOI containing:

    RTS 1
    WAIT 500
    RTS 0

Then you can just type "RST" as a command and it will invoke the file, resetting the target for 500 msec.

At least one user has done a similar trick using a pulse-width detector on the Receive Data line, using the BRK command to generate a break condition long enough to reset the target. This adds hardware, but eliminates the need for a control line in your serial cable.

I use ImageCraft ICC11/12 version 6. ICCNOI doesn't seem to work right. What's up?

ImageCraft changed some aspects of their debug format. Please download the latest version of NoICE, which can load ImageCraft DBG files directly.

When I try to run NoICE, I get a dialog box saying "NComBDM.dll version is not compatible with NoICE.dll". What does this mean?

When NoICE starts, it verifies that the necessary NoICE DLLs are of proper versions to work with the EXE. If the major and minor versions don't match, you will get an error message. This is usually caused by improperly copying NoICE DLLs.

The best way to eliminate the message is to get the proper DLL or EXE. You can always download the latest version of NoICE from

Why doesn't NoICE like code in #include files?

Most people only put equates, structs, and function prototypes in #include files, but some folks use them as a poor man's substitute for macros.

However, if you have a source file filename.asm and it includes a file called (same name, different extension) that contains code:

           org       $2000
   Start   lds       #$FF
           bsr       Sub

you will have problems with source-level debugging.

The problem is that NoICE for Windows does not allow more than one FILE that contains source line information to have the same name. In the example, filename.asm and the include file have the same name, differing only in the extension.

You can avoid this problem by changing the name of the include file. Note that you only have to do this if the include file contains code - otherwise the include files may have any name.

Why do I get errors the first time I run NoICE430 or NoICE12 after booting, but it works after that?

P&E Micro ( tells us: "We have had reports over the last 6 months of people seeing initial communications problems between PCs running Windows XP and some of our parallel port interface cables which then disappear after a short period of time."

"It seems that newer versions of Windows XP run an autodetect sequence on the parallel port where they pulse data out the port every 5 seconds for about a minute or so. It seems that this autodetect may be triggered when the parallel port is first accessed. This would explain why some customers are seeing some sort of sporadic failure which goes away after a short period of time."

The problem can be corrected by changing a Registry key so this auto-scan feature will not run. This auto-scan feature does not exist on previous versions of XP or the other Windows operating systems such as 2000. Here are the steps to change the Registry setting:

  1. Save the winxp2.reg file to your hard drive. This may have been placed in your NoICE\bin directory when you installed NoICE. If it isn't there, create a file called winxp2.reg containing
        Windows Registry Editor Version 5.00
  2. Make sure you are logged in as the system administrator
  3. Run the winxp2.reg file (by double clicking on it)
  4. A question will be displayed "Are you sure you want to add the information in winxp2.reg to the Registry?"
  5. Click the Yes button
  6. A message will display "Information in winxp2.reg has been successfully entered into the Registry."
  7. Restart your computer to apply the new changes

The ParEnableLegacyZip key seems to be the same as having "enable legacy PnP" checked in the Device properties for the LPT.

You can get more information about DisableWarmPoll and a bunch of other parallel port lore at Jan Axelson's Parallel Port FAQ (

Why does my firewall warn about NoICE when using a P&E BDM pod?

If you use a firewall, you may receive a warning when you run NoICE for the first time after selecting a P&E pod. For example, the Windows XP Firewall says

"Windows Firewall has blocked this program from accepting connections from the Internet or a network."

This is because the DLL that NoICE uses for P&E BDM supports all P&E BDM pods including an Ethernet version (Cyclone Pro and Cyclone Max). The DLL runs an enumeration process that listens on a TCP port. Since NoICE does not support the Cyclone pods, you can tell your firewall to block access to the port. This should have no effect on NoICE.

For more information, see FAQ#61 at

Why does Flash burning take so long?

If you are using a P&E BDM pod and the burn progress bar moves at a reasonable rate, but then hangs for 10 or 20 seconds at the end, there is some sort of driver issue. NoICE's installer follows P&E's recommended install procedure, but we have observed that it doesn't always seem to do the trick. Go to P&E's web site and download the latest BDM drivers. Run their install. If that doesn't eliminate the hang at the end of burn, please let us know.

If your target is an HC08 using MON08, the protocol is almost perversely inefficient, requiring many more bytes to be sent and received than would be necessary. Since the basic Flash burning program used by NoICE uses the monitor to send data to be burned to the target, Flash burning can be slow - up to 100 seconds to burn 4K at 9600 baud.

A faster protocol can be used, but it requires knowledge of internal entrypoints in the MON08 ROM. These entrypoints are in many cases not documented (and some of the documented ones are incorrect). However, the speed-up is very impressive: the 100-second burn time mentioned above decreases to 13 seconds using the revised protocol.

Since discovering the internal entrypoints requires disassembly of the MON08 ROM, and each HC08 family member has a slightly different ROM, the high-speed algorithm is only available for targets that we have had access to for disassembly and testing. If you need a fast burner for another family member, please contact us.

I don't have administrator rights on my PC. How can I use NoICE?

You must have administrator rights in order to install NoICE. We also recommend that you run NoICE several times with administrator rights in order to verify proper functionality before you use it with non-administrator rights.

Certain target processors and certain target interfaces may have special needs. If you run into something not listed here, please let us know.

Why is NoICE disassembling my ARM code as Thumb code (or vice versa)?

Processors with multiple instruction sets are a pain for a disassembler - how do you know how to disassemble a given chunk of memory? This isn't a problem at run time - if the processor's T-bit is set, then it executes Thumb instructions. If not, it executes ARM instructions. But you may be disassembling a piece of code far from where the processor is executing, and its Thumbness or ARMity is unrelated to the processor's current state.

NoICE provides four options (on the View menu) to control the disassembler:

If you are always seeing Thumb, you may have selected "as Thumb" at some point - the setting is remembered from run to run.

Why can't I do Page Up in the disassembly?

This would indeed be a nice feature. However, most of the target processors supported by NoICE have variable-length instructions. For example, HC12 instructions range in length from 1 to 6 bytes. So, if the first visible line is at address 0x2000 and you press "up arrow", should NoICE disassemble from address 0x1FFF, 0x1FFE, 0x1FFD...?

Since the second byte of a two-byte instruction might have a value that happens to be a one-byte instruction, there is no reliable way to sanity-check this.

If you are doing source-level debugging in mixed source/disassembly mode, the view can scroll up, moving back to the previous source line in the file. Because NoICE knows the address of each source line, it can be sure that the new position is indeed an instruction boundary.
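The ambiguity can be demonstrated with a toy example. The opcode lengths below are invented, but the point carries over to any variable-length instruction set: decoding the same bytes from two different start addresses yields two different, equally plausible instruction streams.

```python
# Toy illustration of why "Page Up" in a disassembly is unreliable
# with variable-length instructions.  The opcode-to-length table is
# hypothetical; real HC12/8051 tables are just bigger.

LENGTHS = {0x86: 2, 0x3F: 1, 0x7E: 3}   # invented opcode -> length

def boundaries(code, start):
    """Return the instruction start offsets when decoding from 'start'."""
    offs, i = [], start
    while i < len(code):
        offs.append(i)
        i += LENGTHS.get(code[i], 1)
    return offs

code = bytes([0x86, 0x3F, 0x7E, 0x00, 0x10])
```

Decoding from offset 0 treats the byte at offset 1 as an operand; decoding from offset 1 treats that same byte as an instruction. Without outside information (such as source line addresses), a disassembler cannot tell which parse is right.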

The NoICE monitor uses Brand X assembler, but I want to use Brand Y

The NoICE target monitors use a variety of freeware assemblers, based primarily on what we could find when we wrote the code. Some of these, especially PseudoSam, have what one might call "unusual" syntax for assembly pseudo-ops such as equates and data declarations.

If you prefer another assembler, you have two choices:

Given that the default assemblers are free, the latter choice seems to make the most sense.

If you do decide to port the monitor to another assembler, be careful: many free (and some non-free) assemblers are pretty lax in error checking. This is especially true of "Motorola/Freescale style" assemblers, where end-of-line comments do not begin with semi-colon. This can lead to a line such as

    LDA  buffer + 5

being interpreted as

    LDA  buffer (comment)

To verify your conversion, use the default assembler to assemble the monitor as provided, and your assembler to assemble the converted monitor before any other changes are made. Then compare the code produced by both assemblers.

You can use the NoICE CHECKSUM command for the comparison. If the monitor starts at location XXXX, and is YYYY bytes long, give the following NoICE commands for each monitor:

    LOAD monitor.hex

If you have correctly converted the file, both versions should give the same checksum. This will work even with the "dummy target", if you do not yet have a working monitor program.
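Because the Intel HEX record layout is public, a similar comparison can also be done outside NoICE. This Python sketch sums the data bytes of every data record in a hex file; NoICE's actual CHECKSUM algorithm may differ, but identical images must give identical sums either way.

```python
# Sum the data bytes of every Intel HEX data record (type 00).
# Record layout: ':' count(1) address(2) type(1) data(count) checksum(1),
# all as hex pairs.  Non-data records (EOF, extended address) are skipped.

def hex_data_sum(hex_text):
    total = 0
    for line in hex_text.splitlines():
        line = line.strip()
        if not line.startswith(":"):
            continue
        raw = bytes.fromhex(line[1:])
        count, rectype = raw[0], raw[3]
        if rectype == 0x00:                 # data record
            total += sum(raw[4:4 + count])
    return total & 0xFFFF
```

Run it over the hex output of both assemblers; a mismatch means the conversion changed the generated code somewhere.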

I used NoICE for DOS. Can I use my existing target monitor?

Yes. The monitors and target communications protocol are unchanged.

I used NoICE for DOS. What about NOICE.CFG and the command set?

NoICE for Windows supports almost all commands in version 3.2 for DOS, with the exception of TRACE, PRINT, and DOS. TRACE may be supported in a future version if enough users request it.

The file NOICE.CFG is no longer supported. Most of the former contents are now saved in the Registry when you exit NoICE. You can, however, put commands in a file NOICE51.NOI, NOICE11.NOI, etc. which will be PLAYed when NoICE starts up. Unlike the restricted set of commands allowed in NOICE.CFG, the new file can contain any legal NoICE command.

NoICE (tm) Debugger, Copyright 2012 by John Hartman

Using NoICE - Contact us - NoICE Home