

2.2.3 SBST Code Optimization


A larger test code leads to performance overhead due to a higher test code download time [63]. Likewise, a longer execution time escalates the test application time. Therefore, test code optimization, in terms of both execution time and size, is crucial for effective SBST testing of processors [64]. Recent test code optimization techniques [4, 65] have demonstrated redundant instruction elimination methods that maximize test compaction.

However, in these methods, the number of fault simulations required to identify the redundant instructions of a test code is proportional to the test code size. Consequently, optimizing a larger test code consumes a larger number of fault simulations, thereby increasing the computational cost.

The test development technique proposed in [63] demonstrates how low-cost test codes are developed for RISC processor cores. Initially, this method classifies the processor components into functional, control, and hidden components in order to prioritize them for test development. Thereafter, compact loops of test instructions that excite the component operations are developed for each component. In [64], D. Gizopoulos proposes four low-cost, online test development approaches that aim at a small memory footprint, short execution time, and low power consumption. To reduce the CPU execution time of the test codes, these techniques minimize the interaction with instruction and data memory.

In the above methods [63, 64], test compaction was carried out as one of the steps of the test development phase, i.e., the dedicated effort spent on test compaction was small. As a consequence, the reduction in test code size and test execution time achieved by the compaction procedure was limited. In contrast, state-of-the-art SBST compaction techniques, such as [4, 65, 93, 94], employ a dedicated test compaction module to conduct a comprehensive, instruction-by-instruction test compaction procedure.

In [4], two test code compaction methods were introduced. The first method uses a random instruction removal algorithm, called the A0 test compaction algorithm, in which redundant instructions are greedily searched for and removed from the original test code. In A0, a random instruction is selected in each step; if the instruction does not contribute to the overall fault coverage, it is permanently removed from the original test code. However, the remaining test code must still execute and terminate properly, without exceptions.
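
As an illustration, a minimal sketch of such a greedy removal loop is given below. The helpers `fault_simulate` (returning the set of detected faults) and `runs_without_exception` are hypothetical placeholders for the fault simulator and the execution check; the sketch only mirrors the A0 idea described above, not the authors' implementation.

```python
import random

def a0_compaction(test_code, fault_simulate, runs_without_exception):
    """A0-style sketch: pick instructions in random order and permanently drop
    each one whose removal keeps fault coverage intact while the shortened
    code still executes and terminates without exceptions."""
    baseline = fault_simulate(test_code)        # fault set detected by the original code
    order = list(range(len(test_code)))
    random.shuffle(order)                       # random selection of candidate instructions

    kept = set(range(len(test_code)))           # indices (into the original code) still present
    for idx in order:
        trial = [test_code[i] for i in sorted(kept - {idx})]
        if runs_without_exception(trial) and fault_simulate(trial) >= baseline:
            kept.discard(idx)                   # redundant instruction: remove permanently
    return [test_code[i] for i in sorted(kept)]
```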

In the second test code compaction method proposed in [4], a restoration-based algorithm, called the A1xx test compaction algorithm, is employed. In A1xx, the authors construct the optimized test code by removing blocks of instructions from the original test code and subsequently restoring them to identify the redundant instructions within these blocks. Initially, the test code is split into blocks of instructions of equal size. Each block is then selected and removed from the test program one at a time. Following the removal of a block, its instructions are restored one by one until all the faults that would go undetected due to the block removal are detected again. The redundant instructions of each block are thus never restored, yielding a compacted self-test code.
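
The block-removal-and-restoration loop can be sketched as follows, again using a hypothetical `fault_simulate` helper that returns the set of detected faults; it illustrates the A1xx idea rather than the exact procedure of [4].

```python
def a1xx_compaction(test_code, block_size, fault_simulate):
    """A1xx-style sketch: split the test code into equal-size blocks, remove
    one block at a time, and restore its instructions one by one until every
    fault lost by the removal is detected again. Instructions that are never
    restored are treated as redundant and stay removed."""
    baseline = fault_simulate(test_code)                  # fault set of the original code
    code = list(test_code)

    start = 0
    while start < len(code):
        block = code[start:start + block_size]
        prefix, suffix = code[:start], code[start + block_size:]

        restored = []
        # Restore instructions of the removed block until coverage is recovered.
        while not (fault_simulate(prefix + restored + suffix) >= baseline):
            restored.append(block[len(restored)])
        code = prefix + restored + suffix                 # redundant instructions stay removed
        start += len(restored)                            # next block begins after the restored part
    return code
```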

One of the critical issues of the A0 and A1xx compaction techniques [4] is the occurrence of Length Dependent Faults (LDFs). An LDF can be detected only if the test code contains at least a specific number (n) of instructions. If LDFs are present, the test code length and execution time cannot be reduced below n using the conventional compaction techniques. Therefore, the technique proposed in [94] extends the A0 compaction algorithm [4] by inserting a NOP instruction upon the removal of a redundant instruction. This placement of NOP instructions allows the test code to maintain its fault coverage by still detecting the LDFs, which in turn allows the test compaction process to progress.
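
A possible sketch of this NOP-based variant, reusing the hypothetical helpers from the A0 sketch above, is shown below; a redundant instruction is replaced rather than deleted, so the overall test code length is preserved.

```python
def a0_with_nop(test_code, fault_simulate, runs_without_exception, nop="nop"):
    """Sketch of the NOP-based extension of A0: a redundant instruction is
    replaced by a NOP instead of being removed, so that the test length is
    preserved and length-dependent faults (LDFs) remain detected."""
    baseline = fault_simulate(test_code)
    code = list(test_code)
    for idx in range(len(code)):
        trial = code[:idx] + [nop] + code[idx + 1:]       # replace, do not shrink
        if runs_without_exception(trial) and fault_simulate(trial) >= baseline:
            code = trial                                  # instruction is redundant: keep the NOP
    return code
```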

An advanced test compaction technique is introduced by Touati et al. [93], which discovers the smallest set of functional SBST codes that yields high fault coverage with reduced test time. To achieve this, redundant test codes are identified and removed from the set of test codes by comparing their lists of covered faults. In addition, different sequences of test code execution are investigated to find the minimal test execution time for the optimized set of test codes.
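
The following sketch illustrates such a program-level selection and ordering step under simplifying assumptions: `programs`, `fault_sets`, and `exec_times` are hypothetical parallel inputs, and the order search is an exhaustive enumeration used purely for illustration, not the search strategy of [93].

```python
from itertools import permutations

def select_and_order(programs, fault_sets, exec_times):
    """Program-level compaction sketch: drop test programs whose covered
    faults are already covered by the remaining ones, then pick the execution
    order of the survivors that reaches full coverage in the least time."""
    keep = list(range(len(programs)))

    # A program is redundant if the other kept programs together already
    # detect every fault it detects; try to drop the slowest programs first.
    for i in sorted(keep, key=lambda k: exec_times[k], reverse=True):
        others = set().union(*(fault_sets[j] for j in keep if j != i))
        if fault_sets[i] <= others:
            keep.remove(i)

    # Exhaustively try execution orders (illustrative only) and keep the one
    # that detects all remaining faults with the smallest elapsed test time.
    all_faults = set().union(*(fault_sets[j] for j in keep))
    best_order, best_time = None, float("inf")
    for order in permutations(keep):
        covered, elapsed = set(), 0.0
        for j in order:
            elapsed += exec_times[j]
            covered |= fault_sets[j]
            if covered >= all_faults:
                break
        if elapsed < best_time:
            best_order, best_time = order, elapsed
    return best_order, best_time
```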

The ARES (Automated Reordering for Efficient SBST) approach in [65] aims to reduce the test execution time while maintaining the fault coverage. The method operates in two stages. In the first stage, the self-test code is partitioned into non-overlapping groups in all possible sequences, for a target number of groups. Among all grouping arrangements, the best grouping solution is selected with the help of a test-length-based quality metric evaluated using high-level logic simulations. In the second stage, these groups are reordered and fault simulated to discover the group ordering with the minimum test execution time.
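
A compact sketch of this two-stage flow is given below. It assumes contiguous groups and two hypothetical evaluation callbacks, `logic_sim_length` for the stage-one quality metric and `fault_sim_time` for the stage-two fault-simulated execution time; it approximates the ARES idea rather than reproducing the published algorithm.

```python
from itertools import combinations, permutations

def ares_sketch(test_code, num_groups, logic_sim_length, fault_sim_time):
    """Two-stage sketch in the spirit of ARES: (1) enumerate contiguous
    partitions into num_groups groups and keep the one with the best
    test-length metric from a cheap high-level logic simulation;
    (2) reorder the chosen groups and keep the fastest ordering."""
    n = len(test_code)

    # Stage 1: every way of cutting the code into num_groups contiguous groups.
    best_groups, best_metric = None, float("inf")
    for cuts in combinations(range(1, n), num_groups - 1):
        bounds = (0,) + cuts + (n,)
        groups = [test_code[bounds[i]:bounds[i + 1]] for i in range(num_groups)]
        metric = logic_sim_length(groups)            # cheap, high-level estimate
        if metric < best_metric:
            best_groups, best_metric = groups, metric

    # Stage 2: fault simulate group orderings and keep the fastest one.
    best_order, best_time = None, float("inf")
    for order in permutations(best_groups):
        t = fault_sim_time([instr for g in order for instr in g])
        if t < best_time:
            best_order, best_time = list(order), t
    return best_order
```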

The required computational cost for a thorough, instruction-by-instruction test compaction would be very high. The instruction restoration and instruction removal techniques [4] guarantee the elimination of redundant instructions in terms of coverage with a reasonable compaction rate. However, since efficient test compaction needs a large number of fault simulations and a single fault simulation takes a significant amount of time, the overall computational cost of these test compaction techniques is large.

A low-cost, instruction-wise test optimization technique that yields adequate test compaction could be obtained by enhancing the existing A1xx technique [4]. In this possible modification, high-level logic simulation could be used to replace the blocks preceding the block being restored with a smaller set of instructions that yields similar fault coverage. In addition, every group of independent instructions could be examined to identify and remove the redundant groups that do not contribute to the overall fault coverage.
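
A rough sketch of the group-level part of this possible modification is shown below; `high_level_coverage` is a hypothetical stand-in for the high-level logic-simulation coverage estimate, and the sketch only illustrates the intended direction rather than a worked-out algorithm.

```python
def remove_redundant_groups(groups, high_level_coverage):
    """Sketch of the group-level step of the suggested A1xx enhancement: drop
    every group of independent instructions whose removal does not lower the
    fault coverage estimated by a high-level logic simulation."""
    baseline = high_level_coverage([i for g in groups for i in g])
    keep = list(range(len(groups)))                 # indices of groups still kept
    for k in list(keep):
        trial = [i for j in keep if j != k for i in groups[j]]
        if high_level_coverage(trial) >= baseline:
            keep.remove(k)                          # group k is redundant
    return [groups[j] for j in keep]
```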