Joe, aged about 10 months, is the cute one in this shot.

This was in 1995: Joe now has a lot more hair, and I a lot less.

Click on the picture to see a resume.

Click up to go back to the Willett EE homepage.

Click next to see some more family pictures.

** And here are some recent paper titles and abstracts. **

(D. Crouse, M. Guerriero and P. Willett)

We combine concepts from numerous papers to provide a derivation and description of a generalized Probabilistic Multi-Hypothesis Tracker (PMHT) that can track multiple targets in a cluttered environment, utilizing multiple sensors and feature measurements, if available. Additionally, we provide a full derivation of the algorithm, including parts omitted or abbreviated in other work. We also provide an improved analytic solution for the prior target-measurement probabilities (the pi's) conditioned on the number of observations, a simplified method of performing the maximization step of the algorithm when multiple sensors are used, and a consistent covariance approximation for the multiple-sensor case; we further explore the use of deterministic annealing to improve performance and discuss implementation difficulties.

(O. Erdinc, P. Willett and Y. Bar-Shalom)

An algorithm that is capable not only of tracking multiple targets but also of "track management" -- meaning that it does not need to know the number of targets as a user input -- is of considerable interest. In this paper we devise a recursive track-managed filter via a quantized state-space ("bin") model. In the limit, as the discretization implied by the bins becomes as refined as possible (infinitesimal bins), we find that the filter equations are identical to Mahler's probability hypothesis density (PHD) filter, a novel track-managed filtering scheme that is attracting increasing attention. Thus one contribution of this paper is an interpretation of, if not the PHD itself, at least what the PHD is doing. This does offer some intuitive appeal, but has some practical use as well: with this model it is possible to identify the PHD's "target-death" problem, and also the statistical inference structures of the PHD filters. To obviate the target-death problem, PHD originator Mahler developed a new "cardinalized" version of PHD (CPHD). The second contribution of this paper is to extend the "bin-occupancy" model such that the resulting recursive filter is identical to the cardinalized PHD filter.

(W. Blanding, P. Willett, Y. Bar-Shalom and S. Coraluppi)

In active sonar tracking applications, targets frequently undergo fading detection performance, in which the target's detection probability can shift suddenly between high and low values. This characteristic is a function of the undersea environment. Using a multistatic active sonar problem, we examine the performance of track management (confirmation and termination) routines where target detections are based on an underlying Hidden Markov Model (HMM) with high and low detection states. Rule-based track confirmation tests are compared, including M/N rules and rules that differentiate the measurements as to receiver source (M/N from at least C sensors), each of which is suboptimal compared to a fixed-length likelihood ratio test. We show that significant performance improvements (to near-optimal) can be obtained using a composite track confirmation test that combines two or three such rules in a logical OR operation. Track termination tests are next compared, and it is shown that a Bayesian sequential test (the Shiryaev test) yields dramatic performance improvements over a K/N track termination rule and the Page test. The model-based results are validated using simulations of a multistatic tracking scenario. The results of this paper are informed by the multistatic sonar application. However, targets may fade in and out of view in other modalities as well, due to aspect-dependent radar cross-section or occlusion. As such, our suggestions for improved HMM-modulated SNR track-initiation and -termination apply to multi-sensor radar target tracking as well.
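As a concrete illustration of the rule-based confirmation logic above, here is a minimal Python sketch (not the paper's implementation; the data layout and function names are assumptions) of an M/N rule, an "M/N from at least C sensors" rule, and their logical OR:

```python
def m_of_n(history, m, n):
    """Confirm if at least m detections occur in the last n scans.
    history: list of (detected, sensor_id) tuples, one per scan."""
    return sum(1 for det, _ in history[-n:] if det) >= m

def m_of_n_from_c(history, m, n, c):
    """As m_of_n, but detections must come from at least c distinct sensors."""
    dets = [(det, s) for det, s in history[-n:] if det]
    return len(dets) >= m and len({s for _, s in dets}) >= c

def composite_confirm(history, m, n, c):
    """Logical OR of the two rules, as suggested for near-optimal confirmation."""
    return m_of_n(history, m, n) or m_of_n_from_c(history, m, n, c)
```

In the paper's terms, ORing two or three such rules approaches the performance of the fixed-length likelihood ratio test.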

(B. Li, J. Huang, S. Zhou, K. Ball, M. Stojanovic, L. Freitag and P. Willett)

Underwater acoustic channels induce large Doppler drifts that result in intercarrier interference (ICI) for OFDM transmissions. Assuming that after proper Doppler compensation the residual ICI is limited to only direct neighbors, we propose an OFDM signal design that decouples channel estimation and data demodulation. We investigate five receivers that are categorized into three groups: (i) two receivers that ignore the residual ICI, (ii) two receivers that are based on a basis expansion model (BEM) and pursue channel estimation independently along each basis, and (iii) one receiver that is based on a path-based model. The receiver performance is compared based on data from the GLINT experiment conducted south of the island of Elba in the Mediterranean, July 2008, and the SPACE experiment conducted off the coast of Martha's Vineyard, Massachusetts, October 2008. The receiver based on the path-based model and a basis pursuit (BP) algorithm achieves the best performance, followed by the ICI-ignorant and BEM versions of BP. The least-squares channel estimation performs the worst, especially in combination with the BEM. The BEM-based receivers are often inferior to the ICI-ignorant counterparts, except for conditions with very large Doppler spread. This implies that there exists a trade-off between ICI compensation and the estimation accuracy of the much increased number of BEM parameters. In contrast, the path-based channel model facilitates ICI compensation without increasing the number of model parameters, by exploiting the sparse representation in the joint delay-Doppler domain.

(M. Guerriero, P. Willett and J. Glaz)

We introduce a sequential procedure to detect a target with distributed sensors in a two dimensional region. The detection is carried out in a mobile fusion center which successively counts the number of binary decisions reported by local sensors lying inside its moving field of view. This is a two-dimensional scan statistic - an emerging tool from the statistics field that has been applied to a variety of anomaly detection problems such as of epidemics or computer intrusion, but that seems to be unfamiliar to the signal processing community. We show that an optimal size of the field of view exists. We compare the sequential two-dimensional scan statistic test and two other tests. Results for system level detection are presented.
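A two-dimensional scan statistic of the kind used above can be sketched in a few lines: slide a fixed window over the surveillance region and take the maximum count of local-sensor detections inside it. This is an illustrative Python toy (exhaustive grid scan; the data layout is an assumption), not the authors' code:

```python
def scan_statistic(points, region, window, step):
    """Slide a w-by-h window over a W-by-H region and return the maximum
    number of points (local detections) inside any window position."""
    (W, H), (w, h) = region, window
    best = 0
    y = 0.0
    while y + h <= H:
        x = 0.0
        while x + w <= W:
            best = max(best, sum(1 for px, py in points
                                 if x <= px < x + w and y <= py < y + h))
            x += step
        y += step
    return best
```

The paper's point that an optimal field-of-view size exists corresponds here to the choice of the window dimensions.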

(S. Ruan, Y. Zhou, F. Yu, K. Pattipati, P. Willett and A. Patterson-Hine)

In this paper, we consider a model for the Dynamic Multiple Fault Diagnosis (DMFD) problem arising in online monitoring of complex systems and present a solution. This problem involves real-time inference of the most likely set of faults and their time-evolution based on blocks of unreliable test outcomes over time. In the DMFD problem, there is a finite set of mutually independent fault states, and a finite set of sensors (tests) is used to monitor their status. We model the dependency of test outcomes on the fault states via the traditional D-matrix (fault dictionary). The tests are imperfect in the sense that they can have missed detections and false alarms, or may be available asynchronously. Based on the imperfect observations over time, the problem is to identify the most likely evolution of fault states over time. The DMFD problem is an intractable NP-hard combinatorial optimization problem. Consequently, we decompose it into a series of decoupled subproblems, one for each sample epoch. For the single-epoch MFD, we develop a fast and high-quality deterministic simulated annealing method. Based on the sequential inferences, a local search-and-update scheme is applied to further improve the solution. Finally, we discuss how the method can be extended to dependent faults.
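To make the single-epoch structure concrete, the following hypothetical Python sketch scores candidate fault sets against a D-matrix with missed-detection and false-alarm probabilities, by brute force; the paper's deterministic simulated annealing replaces this exhaustive search for large systems. All parameter names are illustrative assumptions:

```python
from itertools import combinations

def mfd_map(d_matrix, outcomes, p_fault, p_d, p_fa):
    """Brute-force MAP search over fault sets for a single epoch.
    d_matrix[i][j] = 1 if fault i is detectable by test j;
    outcomes[j] = 1 if test j failed; p_d / p_fa are the probabilities
    that a test fails given it is / is not covered by an active fault."""
    n_faults, n_tests = len(d_matrix), len(outcomes)
    best, best_lik = set(), 0.0
    for k in range(n_faults + 1):
        for faults in combinations(range(n_faults), k):
            lik = 1.0
            for f in range(n_faults):          # independent fault priors
                lik *= p_fault if f in faults else 1.0 - p_fault
            for j in range(n_tests):           # test-outcome likelihoods
                covered = any(d_matrix[f][j] for f in faults)
                p_fail = p_d if covered else p_fa
                lik *= p_fail if outcomes[j] else 1.0 - p_fail
            if lik > best_lik:
                best, best_lik = set(faults), lik
    return best
```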

(J. Huang, S. Zhou, J. Zhu, and P. Willett)

Using group theory, we analyze cycle GF(2^p) codes that use Cayley graphs as their associated graphs. First, we show that through row and column permutations the parity check matrix H can be put in a concatenation form of row-permuted block-diagonal matrices. Encoding utilizing this form can be performed in linear time and in parallel. Second, we derive a rule to determine the nonzero entries of H and present determinate and semi-determinate codes. Our simulations show that the determinate and semi-determinate codes have better performance than codes with randomly generated nonzero entries for GF(16) and GF(64), and have similar performance for GF(256). The constructed determinate and semi-determinate codes over GF(64) and GF(256) can outperform the binary irregular counterparts of the same block lengths. One distinct advantage for determinate and semi-determinate codes is that they greatly reduce the storage cost of H for decoding. The results in this correspondence are appealing for the implementation of efficient encoders and decoders for this class of promising LDPC codes, especially when the block length is large.

(X. Zhang, P. Willett and Y. Bar-Shalom)

We consider distributed binary detection problems in which the remote sensors of a network implement a censoring strategy to fulfill energy constraints, and the network works under the attack of an eavesdropper. The attacker wants to discover the state of nature scrutinized by the system, but the network implements appropriate countermeasures to make this task hopeless. The goal is to achieve perfect secrecy at the physical layer, making the data available at the eavesdropper useless for its detection task. Adopting certain Ali-Silvey distances as performance metrics, we characterize the detection performance of the system under physical-layer secrecy. Two communication scenarios are addressed: parallel access channels and a multiple access channel. In both cases the optimal operating points from the network perspective are found. The most economical operating solution is shown to lie in the low-energy asymptotic regime. We also investigate how the perfect-secrecy requirement impacts the achievable performance relative to the case with no countermeasures.

(X. Zhang, P. Willett and Y. Bar-Shalom)

When the size of targets is comparable to the range resolution of monopulse radars, these targets should be considered as extended rather than point targets. If several closely-spaced targets fall within the same radar beam and between adjacent matched filter samples in range, the full monopulse information from all of these samples can and should be used to resolve these targets, i.e., estimate the number of targets and their respective angles-of-arrival and ranges. To detect and localize multiple unresolved extended targets, we establish a model for monopulse radar returns from extended objects, and present a maximum likelihood estimator to localize the targets. Rissanen's minimum description length (MDL) criterion will be used to decide the number of existing extended objects. We also derive the upper limit on the number of targets and their scattering centers that can be resolved, and we provide necessary conditions for these targets to be uniquely identified. We compare the new extended target monopulse processing scheme with previously developed point-target monopulse techniques in the simulations.
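The MDL decision on the number of targets mentioned above has a simple generic form: penalize each model's log-likelihood by half its parameter count times the log of the sample size, and pick the minimizer. A hedged Python sketch (the per-target parameter count and the inputs are illustrative assumptions, not the paper's exact criterion):

```python
from math import log

def mdl_order(log_liks, n_samples, params_per_target):
    """Pick the number of targets k minimizing Rissanen's MDL cost:
    -log L(k) + (k * params_per_target / 2) * log(n_samples).
    log_liks[k-1] is the maximized log-likelihood with k targets."""
    best_k, best_cost = None, float("inf")
    for k, ll in enumerate(log_liks, start=1):
        cost = -ll + 0.5 * k * params_per_target * log(n_samples)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```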

(M. Guerriero, S. Marano, V. Matta and P. Willett)

Stochastic Resonance (SR) is a nonlinear phenomenon known in physics that has attracted recent interest in the signal processing literature, and specifically in the context of detection. We investigate the SR effect arising in sequential detectors for shift-in-mean binary hypothesis testing, and characterize the optimal resonance as the solution of specific optimization problems. One particular (and at first glance perhaps counterintuitive) finding is that certain sequential detection procedures can be made more efficient by randomly adding or subtracting a suitable constant value to the data at the input of the detector.
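For reference, the sequential detector in question can be taken as Wald's SPRT; a minimal Python sketch follows (a generic textbook version, not the paper's formulation). The stochastic-resonance finding then amounts to replacing llr(x) with llr(x + c*s), with s = +/-1 chosen at random and c a suitable constant:

```python
def sprt(samples, llr, log_a, log_b):
    """Wald's sequential probability ratio test: accumulate per-sample
    log-likelihood ratios until the sum crosses log_b (accept H1)
    or log_a (accept H0). Returns (decision, stopping index)."""
    s = 0.0
    for k, x in enumerate(samples):
        s += llr(x)
        if s >= log_b:
            return "H1", k
        if s <= log_a:
            return "H0", k
    return None, len(samples)
```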

(S. Singh, H. Tu, W. Donat, K. Pattipati and P. Willett)

The problem of detecting an anomaly (or abnormal event) is one in which the distribution of observations differs before and after an unknown onset time, and the objective is to detect the change by statistically matching the observed pattern with that predicted by a model. In the context of asymmetric threats, the detection of an abnormal situation refers to the discovery of suspicious activities of a hostile nation or group out of noisy, scattered, and partial intelligence data. The problem becomes complex in a low-SNR environment, such as asymmetric threats, because the signal observations are far fewer than the noise observations; further, the signal observations are hidden in the noise. In this paper, we illustrate the capabilities of hidden Markov models (HMMs), combined with feature-aided tracking, for the detection of asymmetric threats. A transaction-based probabilistic model is proposed to combine hidden Markov models and feature-aided tracking. A procedure analogous to Page's test is used for the quickest detection of abnormal events. The simulation results show that our methods are able to detect the modeled pattern of an asymmetric threat with high performance. Performance analysis shows that detection improves as the complexity of the HMM (i.e., its number of states) increases.
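The quickest-detection step can be illustrated with the standard Page (CUSUM) recursion, to which the paper's procedure is analogous; this is a generic textbook sketch in Python, not the paper's transaction-based model:

```python
def page_test(llrs, threshold):
    """Page's (CUSUM) test: accumulate log-likelihood ratios, reset the
    statistic at zero, and raise an alarm when it crosses the threshold.
    Returns the 0-based index of the alarm sample, or None."""
    s = 0.0
    for k, llr in enumerate(llrs):
        s = max(0.0, s + llr)
        if s >= threshold:
            return k
    return None
```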

(J. Huang, S. Zhou and P. Willett)

Recently, multicarrier modulation in the form of orthogonal frequency division multiplexing (OFDM) has been shown feasible for underwater acoustic communications via effective algorithms to handle the channel time-variability. In this paper, we propose to use nonbinary low density parity check (LDPC) codes to address two other main issues in OFDM: (i) plain (or uncoded) OFDM has poor performance in fading channels, and (ii) OFDM transmission has high peak to average power ratio (PAPR). We develop new methods to construct nonbinary regular and irregular LDPC codes that achieve excellent performance, match well with the underlying modulation, and can be encoded in linear time and in a parallel fashion. Based on the fact that the generator matrix of LDPC codes has high density, we further show how to reduce the PAPR considerably with minimal overhead. Experimental results confirm the excellent performance of the proposed nonbinary LDPC codes in multicarrier underwater acoustic communications.

(S. Mason, C. Berger, S. Zhou and P. Willett)

In this paper, we propose a novel method for detection, synchronization and Doppler scale estimation for underwater acoustic communication using orthogonal frequency division multiplex (OFDM) waveforms. This new method involves transmitting two identical OFDM symbols together with a cyclic prefix, while the receiver uses a bank of parallel self-correlators. Each correlator is matched to a different Doppler scaling factor with respect to the waveform dilation or compression. We characterize the receiver operating characteristic in terms of probability of false alarm and probability of detection. We also analyze the impact of Doppler scale estimation accuracy on the data transmission performance. These analytical results provide guidelines for the selection of the detection threshold and Doppler scale resolution. In addition to computer-based simulations, we have tested the proposed method with real data from an experiment at Buzzards Bay, MA, Dec. 15, 2006. Using only one preamble, the proposed method achieves similar performance on the Doppler scale estimation and the bit error rate as an existing method that uses two linearly-frequency-modulated (LFM) waveforms, one as a preamble and the other as a postamble, around each data burst transmission. Compared with the LFM based method, the proposed method works with a constant detection threshold independent of the noise level and is suited to handle the presence of dense multipath channels. More importantly, the proposed approach avoids the need of buffering the whole data packet before data demodulation, which enables online receiver operation for multicarrier underwater acoustic communications.

(D. Crouse, C. Berger, S. Zhou and P. Willett)

We derive optimal memoryless relays for the case of noncoherent modulation over additive white Gaussian noise (AWGN) channels with or without fading. The derivation is flexible, as it can be applied to any binary hypothesis observed at the relay. We investigate several channels, including random phase and fading, and apply different modulation schemes, namely on-off keying (OOK) and orthogonal frequency-shift keying (FSK). We find that at low signal-to-noise ratios (SNR) the relay censors its observation, as it only transmits at non-zero energy if the observations seem reliable. Compared to the known results that optimal memoryless relays for the case of coherent binary phase-shift keying (BPSK) are combinations of soft information and a hard-limiter [1]-[4], the noncoherent relays place considerably less emphasis on soft information and converge much faster to the hard-limiter.

(C. Berger, S. Zhou, Y. Wen, P. Willett and K. Pattipati)

To achieve reliable packet transmission over a wireless link without feedback, we propose a layered coding approach that uses error-correction coding within each packet and erasure-correction coding across the packets. This layered approach is also applicable to an end-to-end data transport over a network where a wireless link is the performance bottleneck. We investigate how to optimally combine the strengths of error- and erasure-correction coding to optimize the system performance with a given resource constraint, or to maximize the resource utilization efficiency subject to a prescribed performance. Our results determine the optimum tradeoff in splitting redundancy between error-correction coding and erasure-correction codes, which depends on the fading statistics and the average signal to noise ratio (SNR) of the wireless channel. For severe fading channels, such as Rayleigh fading channels, the tradeoff leans towards more redundancy on erasure-correction coding across packets, and less so on error-correction coding within each packet. For channels with better fading conditions, more redundancy can be spent on error-correction coding. The analysis has been extended to a limiting case with a large number of packets, and a scenario where only discrete rates are available via a finite number of transmission modes.
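The erasure-correction half of the tradeoff can be made concrete with a toy calculation: for an (n, k) erasure code across packets, the data are recovered exactly when at least k of the n packets survive. A sketch (illustrative; in the paper's setting the per-packet loss probability would itself depend on the within-packet error-correction redundancy and the fading statistics):

```python
from math import comb

def erasure_success(n, k, p_loss):
    """Probability that an (n, k) erasure code decodes, i.e., that at
    least k of n packets arrive when each is lost independently
    with probability p_loss (binomial tail)."""
    return sum(comb(n, r) * (1.0 - p_loss) ** r * p_loss ** (n - r)
               for r in range(k, n + 1))
```

Sweeping p_loss against the rate spent on error-correction inside each packet is the kind of redundancy-splitting tradeoff the paper optimizes.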

(M. Guerriero, S. Marano, V. Matta and P. Willett)

Recently, the DOA (Direction Of Arrival) estimation of an acoustic wavefront has been considered in a setting where the inference task is performed by a Wireless Sensor Network (WSN) made of isotropic (hence individually useless) sensors. The WSN was designed according to the SENMA architecture with a Mobile Agent (MA) that successively queries the sensors lying inside its field of view. In this paper the ideal assumption previously made that the visibility of individual sensors is governed by deterministic laws is relaxed; this yields, interestingly, simpler analytical formulas. Both fast/simple and optimal DOA-estimation schemes are proposed, and an optimization of the MA's observation management is also carried out, with the surprising finding that the MA ought to orient itself at an oblique angle to the expected DOA, rather than directly toward it. The extension to multiple sources is also considered; intriguingly, per-source DOA accuracy is higher when there is more than one source. In all cases, performance is investigated by simulation and compared, when appropriate, with asymptotic bounds; these latter are usually met after a moderate number of MA dwells.

(C. Berger, S. Zhou, Z. Tian and P. Willett)

In this paper we develop a fine synchronization algorithm for multiband OFDM transmission in the presence of frequency selective channels. This algorithm is based on maximum a posteriori (MAP) joint timing and channel estimation that incorporates channel statistical information, leading to considerable performance enhancement relative to existing maximum likelihood (ML) approaches. We carry out a thorough performance analysis of the fine timing algorithm, and link the diversity concept widely used in data communications to the timing performance. Our result reveals that the timing estimate is very much concentrated around the true timing as the signal to noise ratio (SNR) increases. Our simulations confirm the theoretical analysis, and also demonstrate the robustness of the proposed timing algorithm against model mismatches in a realistic UWB indoor channel.

(O. Erdinc, C. Brideau, P. Willett and T. Kirubarajan)

This paper presents an approach to the detection and isolation of component failures in large-scale systems; in the case of sensor reporting rates of 1 Hz or less, the algorithm can be considered real-time. The input is a set of observed test results from multiple sensors, and the algorithm's main task is to deal with sensor errors. The sensors are assumed to be of a threshold-test (pass/fail) type, but to be vulnerable to noise, in that occasionally true failures are missed, and likewise there can be false alarms. These errors are, further, assumed independent conditioned on the system's diagnostic state; their probabilities, of missed detection and of false alarm, are not known a priori, and must be estimated (ideally along with the accuracies of these estimates) online, within the inference engine. Further, recognizing a practical concern in most real systems, a sparsely-instantiated observation vector must not be a problem. The key ingredients of our solution include the Multiple Hypothesis Tracking (MHT) philosophy of complexity management, a Beta prior distribution on the sensor errors, and a quickest-detection overlay to detect changes in these error rates when the prior is violated. We provide results illustrating performance in terms both of computational needs and error rate, and show its application both as a filter (i.e., used to "clean" sensor reports) and as a stand-alone state estimator.
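The Beta prior on the sensor error rates admits a closed-form conjugate update, which is presumably part of its appeal here. A generic Python sketch (illustrative, with assumed inputs: outcomes is a 0/1 list in which 1 marks an observed sensor error):

```python
def beta_error_estimate(alpha0, beta0, outcomes):
    """Conjugate Beta(alpha, beta) update of an unknown error rate:
    each observed error increments alpha, each correct report increments
    beta. Returns (posterior mean, posterior variance)."""
    a = alpha0 + sum(outcomes)
    b = beta0 + len(outcomes) - sum(outcomes)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var
```

The posterior variance gives exactly the "accuracy of the estimate" that the abstract asks to carry along with the estimate itself.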

(C. Berger, S. Zhou, P. Willett and L. Liu)

Underwater acoustic localization usually relies on time of arrival (ToA) measurements, which are then converted into range estimates. However, the water medium is inhomogeneous and the sound speed varies depending on several parameters, e.g., the temperature, pressure and salinity. As a result, sound waves do not necessarily travel in straight lines. Ignoring this stratification effect can lead to considerable bias in the range estimates. We propose a depth-based approach to compensate for the stratification effect for improved underwater ranging. We assume that the sound velocity profile (SVP) is only vertically stratified, the position of the sender is known, and the receiver has a noisy depth estimate via a depth sensor. We find a numerically simple range estimator, based on reconstructing the slanted path using Fermat's Principle and the calculus of variations. This estimator removes the bias and is asymptotically efficient. We compare our solution to the simplistic linear estimator that assumes straight-line propagation in a shallow-water example where the sound speed decreases monotonically with depth. We find that the bias of the linear estimator increases with range and is non-negligible when the ToA measurements have a small variance, while our solution is bias-free and meets the Cramér-Rao lower bound (CRLB).
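The bias being corrected can be seen with a toy calculation: travel time through a vertically stratified SVP differs from what a single nominal sound speed predicts, so a constant-speed range estimate is biased. An illustrative Python sketch (a vertical path through piecewise-constant layers; the paper's estimator reconstructs the full slanted path via Fermat's principle):

```python
def travel_time_vertical(svp_layers, depth):
    """Travel time along a vertical path through stratified layers.
    svp_layers: list of (thickness_m, speed_m_per_s), top to bottom."""
    t, remaining = 0.0, depth
    for thickness, speed in svp_layers:
        d = min(thickness, remaining)
        t += d / speed
        remaining -= d
        if remaining <= 0.0:
            break
    return t

def naive_range(t, nominal_speed):
    """Straight-line range estimate assuming one nominal sound speed."""
    return t * nominal_speed
```

With a speed that decreases with depth, the naive estimate overshoots the true range, mirroring the bias growth the abstract describes.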

(B. Li, S. Zhou, M. Stojanovic, L. Freitag and P. Willett)

Underwater acoustic (UWA) channels are wideband in nature due to the small ratio of the carrier frequency to the signal bandwidth, which introduces frequency-dependent Doppler shifts. In this paper, we treat the channel as having a common Doppler scaling factor on all propagation paths, and propose a two-step approach to mitigating the Doppler effect: (1) non-uniform Doppler compensation via resampling that converts a wideband problem into a narrowband problem; and (2) high-resolution uniform compensation of the residual Doppler. We focus on zero-padded OFDM to minimize the transmission power. Null subcarriers are used to facilitate Doppler compensation, and pilot subcarriers are used for channel estimation. The receiver is based on block-by-block processing, and does not rely on channel dependence across OFDM blocks; thus, it is suitable for fast-varying UWA channels. The data from two shallow water experiments near Woods Hole, MA, are used to demonstrate the receiver performance. Excellent performance results are obtained even when the transmitter and the receiver are moving at a relative speed of up to 10 knots, at which the Doppler shifts are greater than the OFDM subcarrier spacing. These results suggest that OFDM is a viable option for high-rate communications over wideband underwater acoustic channels with nonuniform Doppler shifts.
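Step (1), resampling to undo a common Doppler scaling factor a, can be sketched as follows (illustrative Python with linear interpolation; the function name and signature are assumptions, and a practical receiver would use a proper polyphase resampler):

```python
import numpy as np

def resample_doppler(signal, a, fs):
    """Undo a common Doppler dilation t -> (1+a)t by evaluating the
    received waveform at the rescaled instants t/(1+a), via linear
    interpolation of the uniformly sampled signal."""
    t_old = np.arange(len(signal)) / fs
    return np.interp(t_old / (1.0 + a), t_old, signal)
```

After this step the residual Doppler is roughly uniform across subcarriers, which is what step (2) then estimates and removes at high resolution.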

(O. Gerard, C. Carthel, S. Coraluppi and P. Willett)

This paper presents a method to detect and classify odontocete echolocation clicks as well as to estimate the number of animals that are vocalizing. A transient detector using the Page test [1-3] is used to extract the clicks: the click time, the click duration, the click amplitude and the spectral information of the clicks are extracted. A probability distribution over the species is assigned to each click, based on the spectral information of the click. The estimation of the number of animals is done using feature-aided multihypothesis tracking (MHT) algorithms. The association is based on the assumptions of slowly-varying click amplitude and intra-click timing [4-5]. This work has been done on the dataset provided by the organizers of the 3rd International Workshop on the Detection and Classification of Marine Mammals using Passive Acoustics, Boston, July 2007. This dataset consists of training and test data; the training data includes vocalizations of three species: Blainville's beaked whale (Mesoplodon densirostris), Risso's dolphin (Grampus griseus) and short-finned pilot whale (Globicephala macrorhynchus).

(W. Blanding, P. Willett and Y. Bar-Shalom)

Developed over 15 years ago, the Maximum Likelihood Probabilistic Data Association (ML-PDA) target tracking algorithm has been demonstrated to be effective in tracking Very Low Observable (VLO) targets, where target signal-to-noise ratios (SNR) require very low detection processing thresholds to reliably give target detections. However, this algorithm has had limitations which in many cases preclude its use in real-time tracking applications. In this paper we describe three recent advances in the ML-PDA algorithm which make it suitable for real-time tracking. First we look at two recently reported techniques for finding the ML-PDA track estimate which improve computational efficiency by an order of magnitude. Next we review a method for validating ML-PDA track estimates based on the Neyman-Pearson Lemma which gives improved reliability in track validation over previous methods. As our main contribution, we extend ML-PDA from a single-target tracker to a multi-target tracker and compare its performance to that of the Probabilistic Multi-Hypothesis Tracker (PMHT).

(A. Isaac, P. Willett and Y. Bar-Shalom)

Recent advances have been reported in detecting and estimating the location of more than one target within a single monopulse radar beam. Successful tracking of those targets has been achieved with the aid of nonlinear filters that approximate the conditional pdf of the targets' states, operating directly on the monopulse sum/difference data, i.e., without a measurement-extraction stage. The problem of detecting a target spawn will be tackled in this paper. Particle filters will be employed as nonlinear tracking filters to approximate the posterior probability densities of the targets' states under different hypotheses on the number of targets, which in turn can be used to evaluate the likelihood ratio between two different hypotheses at subsequent time steps. Ultimately, a quickest detection procedure based on sequential processing of the likelihood ratios will be used to decide on a change in the underlying target model as an indication of a newly spawning target. Radar signal processing, data association and target tracking are handled simultaneously.

(Y. Ruan, P. Willett, A. Marrs, S. Marano and F. Palmieri)

Most treatments of decentralized estimation rely on some form of track fusion, in which local track estimates and their associated covariances are communicated. This implies a great deal of communication; it was recently proposed that by intelligently quantizing the measurements directly, the communication needs could be considerably cut. However, several issues were not discussed. The first of these is that estimation with quantized measurements requires an update with a non-Gaussian distribution, reflecting the uncertainty within the quantization bin. In general this would be a difficult task for dynamic estimation, but Markov-Chain Monte-Carlo (MCMC, and specifically here particle filtering) techniques appear quite appropriate, since the resulting system is, in essence, a nonlinear filter. The second issue is that in a realistic sensor network it is to be expected that measurements should arrive out of sequence. Again, a particle filter is appropriate, since the recent literature has reported particle filter modifications that accommodate nonlinear-filter updates based on new past measurements, with the need to re-filter obviated. We show results that indicate a compander/particle-filter combination is a natural fit, and specifically that quite good performance is achievable with only 2-3 bits per dimension per observation. The third issue is that intelligent quantization requires that both sensor and fuser share an understanding of the quantization rule. In dynamic estimation this is a problem, since both quantizer and fuser are working with only partial information; if measurements arrive out of sequence the problem is compounded. We therefore suggest architectures and comment on their suitability for the task; a scheme based on delta modulation appears to be promising.
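The first issue, updating with a quantized measurement, has a clean particle-filter form: each particle is weighted by the probability mass that the measurement noise places on the reported quantization bin. A hedged Python sketch (assumed inputs; h is the measurement function and noise_cdf the measurement-noise CDF):

```python
from math import erf, sqrt

def gaussian_cdf(v, sigma=1.0):
    """Standard Gaussian CDF (helper for the example below)."""
    return 0.5 * (1.0 + erf(v / (sigma * sqrt(2.0))))

def pf_update_quantized(particles, weights, bin_edges, bin_index, h, noise_cdf):
    """Particle weight update for a quantized measurement: the likelihood
    of particle x is P(lo < h(x) + noise <= hi) for the reported bin."""
    lo, hi = bin_edges[bin_index], bin_edges[bin_index + 1]
    new_w = [w * (noise_cdf(hi - h(x)) - noise_cdf(lo - h(x)))
             for x, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w]
```

Particles whose predicted measurement falls near the reported bin keep their weight; the rest are suppressed, exactly the non-Gaussian update the abstract describes.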

(O. Erdinc, C. Brideau, P. Willett and T. Kirubarajan)

The impact of delayed sensor-alarm data upon a diagnostic inference engine appears not to be well appreciated. In this paper we illustrate the effect of sensor latency, and we propose an inference approach to obviate it.

(W. Blanding, P. Willett, Y. Bar-Shalom and R. Lynch)

The Maximum Likelihood Probabilistic Data Association (ML-PDA) tracking algorithm is effective in tracking Very Low Observable targets (i.e., very low signal-to-noise ratio (SNR) targets in a high false alarm environment). However, the computational complexity associated with obtaining the track estimate has in many cases precluded its use in real-time scenarios. Previous ML-PDA implementations used a multi-pass grid search to find the track estimate. In this paper, two alternate methods for finding the track estimate are presented: a genetic search and a newly developed directed subspace (DSS) search algorithm. Each algorithm is tested using active sonar scenarios in which an autonomous underwater vehicle searches for and tracks a target. Within each scenario, the problem parameters are varied to illustrate the relative performance of each search technique. Both the DSS search and the genetic algorithm are shown to be an order of magnitude more computationally efficient than the multi-pass grid search, making real-time implementation possible. In addition, the DSS search is shown to be the most effective technique at tracking a target at the lowest SNR levels; reliable tracking down to 5 dB (post-processing SNR in a resolution cell) using a 5-frame sliding window is demonstrated, this being 6 dB better than the multi-pass grid search.

(S. Marano, V. Matta, L. Tong and P. Willett)

In a wireless sensor network the nodes collect independent observations about a nonrandom parameter theta to be estimated, and deliver information to a fusion center (FC) by transmitting suitable waveforms through a common Multiple Access Channel (MAC). The FC implements an appropriate fusion rule and outputs the final estimate of theta. We introduce a new access/estimation scheme, referred to here as LBMA (Likelihood-Based Multiple Access), and prove it to be asymptotically efficient in the limit of an increasingly large number of sensors n, when the bandwidth W is allowed to scale as n^alpha with alpha exceeding 0.5.

(S. Marano, V. Matta and P. Willett)

Conceptual and practical encoding/decoding, aimed at accurately reproducing remotely collected observations, has been heavily investigated since Shannon's pioneering work on source coding. However, when the goal is not to reproduce the observables but to make inference about an embedded parameter, and the scenario consists of many unconnected remote nodes, the landscape is less certain. We consider a multiterminal system designed to estimate a random parameter efficiently according to the MMSE criterion. The analysis is limited to scalar quantizers followed by a joint entropy encoder, and is performed in the high-resolution regime where the problem is more mathematically tractable. Focus is placed on the peculiarities deriving from the estimation task, as opposed to that of reconstruction, as well as on the multiterminal, as opposed to centralized, character of the inference. The general form of the optimal nonuniform quantizer is derived and examples are given.

(E. Phelps, P. Willett, T. Kirubarajan and C. Brideau)

Prognostics, which refers to the inference of an expected time-to-failure for a system, is made difficult by the need to track and predict the trajectories of real-valued system parameters over essentially unbounded domains, and by the need to prescribe a subset of these domains in which an alarm should be raised. In this paper we propose an approach whereby these problems are avoided: instead of physical system or sensor parameters, a vector corresponding to the failure probabilities of the system's sensors (which of course are bounded within the unit hypercube) is tracked. With the help of a system diagnosis model, the corresponding fault signatures can be identified as terminal states for these probability vectors. To perform the tracking, Kalman filters and interacting multiple model (IMM) estimators are implemented for each sensor. The work completed thus far shows promising results in both large- and small-scale systems, with impending failures being detected quickly and the time until failure predicted accurately.

(P. Willett, W. Dale Blair and X. Zhang)

It has recently been found that by jointly processing multiple (sum, azimuth-difference and elevation-difference) matched filter samples it is possible to extract and localize several (more than two) targets spaced more closely than the classical interpretation of radar resolution would allow. This paper derives the Cramer-Rao lower bound (CRLB) for sampled monopulse radar data. It is worthwhile to know the limits of such procedures; and in addition to its role in delivering the measurement accuracies required by a target tracker, the CRLB reveals an estimator's efficiency. We interrogate the CRLB expressions for cases of interest. Of particular interest are the CRLB's implications for the number of targets localizable: assuming a sampling period equal to a rectangular pulse's length, five targets can be isolated between two matched filter samples provided the targets' SNRs are known. This reduces to three targets when the SNRs are not known, but the number increases back to five (and beyond) when a dithered boresight strategy is used. Further insight into the impact of pulse shape and the benefits of over-sampling is given.

(S. Zhou and P. Willett)

It is well known to active-sonar engineers that the reflected signal from a target can be highly aspect-dependent; hence in many cases only receivers located in a particular zone, determined by the source/target/receiver geometry and the target aspect, can detect the return signal. Thus, submarines can hide well from traditional sonar systems. For these low-visibility targets, we propose a target localization paradigm based on a distributed sensor network consisting of low-complexity sensors that report only binary detection results. Based on the binary outputs and the positions of the sensors, we develop an optimal maximum likelihood estimator and a suboptimal line-fitting-based estimator, and derive the Cramer-Rao lower bound on estimation accuracy. We extend our results from single-source to multi-source settings, both with and without explicit incorporation of a reflection model that links the target orientation to the propagation direction. Our numerical results verify the feasibility of the proposed estimators. We do not rely on continuous quantities such as signal strength, direction of arrival, or time or time-difference of arrival; instead we localize based on discrete detection results that include both false alarms and missed detections.
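A toy version of maximum likelihood localization from binary sensor reports can be sketched as follows; the detection-probability curve, sensor layout, and grid search below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed model: sensor i fires with probability Pd(d_i) that decays with
# distance d_i to the target, and with false-alarm probability Pf otherwise.
Pf = 0.05
def pd(d):
    return Pf + (0.95 - Pf) * np.exp(-(d / 20.0) ** 2)

sensors = rng.uniform(0.0, 100.0, size=(100, 2))   # random sensor field
target = np.array([60.0, 40.0])
dist = np.linalg.norm(sensors - target, axis=1)
hits = rng.random(100) < pd(dist)                  # 0/1 reports, incl. misses
                                                   # and false alarms

def loglik(pos):
    """Log-likelihood of the observed binary vector for a candidate position."""
    p = pd(np.linalg.norm(sensors - pos, axis=1))
    return float(np.sum(np.where(hits, np.log(p), np.log(1.0 - p))))

# Brute-force ML over a coarse grid (a real implementation would refine this).
grid = np.linspace(0.0, 100.0, 101)
best = max(((loglik(np.array([x, y])), x, y) for x in grid for y in grid))
est = np.array(best[1:])
err = float(np.linalg.norm(est - target))
```

Note that no continuous quantity (signal strength, arrival time) enters the likelihood: only the 0/1 pattern and the sensor positions, exactly the information regime the abstract describes.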

(O. Erdinc, P. Willett and S. Coraluppi)

Active sonar tracking using measurements from multistatic sensors has shown promise: there are benefits in terms of robustness, complementarity (covariance-ellipse intersection) and of course simply the increased probability of detection that naturally accrues from a well-designed data fusion system. It is not always clear what placement of the sources and receivers gives the best fused measurement covariance for any target, or at least for any target of interest. In this paper, we investigate the problem as one of global optimization, in which the objective is to maximize the information provided to the tracker. We assume that the number of sensors is given, so that the optimization is performed over a continuous space. The strong variability of target strength as a function of aspect is integral to the cost function we optimize. Doppler information is not discarded when constant-frequency (Doppler-sensitive) waveforms are available. The optimal placements that result are consistent with our intuition, suggesting that our placement strategy may provide a useful tool in more complex scenarios where intuition is challenged.

(A. Isaac, P. Willett and Y. Bar-Shalom)

This paper discusses four techniques to successfully track two closely-spaced and unresolved targets using monopulse radar measurements, the quality of such tracking being a determinant of successful detection of target spawn. It explores statistical estimation techniques based on the maximum likelihood criterion and Gibbs sampling, and addresses concerns about the accuracy of the measurements delivered thereby. In particular, the Gibbs approach can deliver joint measurements (and the associated covariances) from both targets, and it is therefore natural to consider a joint filter. The ideas are compared; and amongst the various strategies discussed, a particle filter that operates directly on the monopulse measurements seems to be best.

(W. Blanding, P. Willett and Y. Bar-Shalom)

We present two procedures for validating candidate target tracks obtained using the Maximum Likelihood Probabilistic Data Association (ML-PDA) algorithm. The ML-PDA, developed for Very Low Observable (VLO) target tracking, always provides a track estimate that must then be validated or rejected by comparing the value of the Log Likelihood Ratio (LLR) at the track estimate to a threshold. Using extreme value theory, we show that in the absence of a target the LLR at the track estimate obeys approximately a Gumbel distribution rather than the Gaussian distribution previously ascribed to it in the literature. The optimal off-line track validation procedure relies on extensive off-line simulations to obtain a set of track validation thresholds that are then used by the tracking system. The real-time procedure, which is suboptimal, uses the data set that produced the track estimate to also determine the track validation threshold. The performance of these two procedures is investigated through simulation of two active sonar tracking scenarios by comparing the false track and true track acceptance probabilities. These techniques have potential for use in a broader class of maximum likelihood estimation problems with similar structure.

(Y. Chen, Q. Zhu and P. Willett)

Optical Doppler tomography (ODT) is a valuable functional extension of optical coherence tomography (OCT) that can be used to study subsurface blood flows of biological tissues. We propose a novel frequency estimation technique that uses an adaptive notch filter (ANF) to track the depth-resolved Doppler frequency. This new technique is a minimal-parameter filter and works in the time domain without the need for Fourier transformation. The algorithm therefore has a computationally efficient structure that may be well suited for implementation in real-time ODT systems. Our simulations and imaging results also demonstrate that this filter has good performance in terms of noise robustness and estimation accuracy compared with existing estimation algorithms.

(C. Berger, S. Zhou, P. Willett and P. Swaszek)

We look at the simple scenario where multiple sensors make conditionally independent observations of a binary source and process the measurement data using a function U(x) before forwarding them to a fusion center via a Gaussian multiaccess channel. Subject to a total power constraint, we obtain the optimal U(x) that maximizes the deflection; the latter can also be interpreted as the output signal-to-noise ratio for an equivalent binary detection problem. The shape of the optimal function only depends on the probability density function of the observation noise, which we assume symmetric around zero, while the height is scaled by the allowed transmission power. We emphasize that the optimal function herein is derived for an arbitrary distribution of the observation noise. It reduces to a tanh function when the observation noise is additive Gaussian, which has been studied in the literature.
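The Gaussian special case mentioned above can be checked numerically. The shape below, (f(x-1)-f(x+1))/(f(x-1)+f(x+1)) for a binary ±1 source in additive noise with symmetric density f, is a standard likelihood-ratio-motivated form offered here as a plausible reading of the tanh claim, not necessarily the paper's exact derivation; for Gaussian f it collapses to a tanh identically.

```python
import numpy as np

sigma = 0.8
x = np.linspace(-4.0, 4.0, 801)

def f(t):
    """Symmetric (here Gaussian) observation-noise pdf."""
    return np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Candidate mapping, up to the power-constrained scale factor:
U = (f(x - 1) - f(x + 1)) / (f(x - 1) + f(x + 1))

# Analytic claim for Gaussian noise: U(x) = tanh(x / sigma^2).
tanh_form = np.tanh(x / sigma**2)
err = float(np.max(np.abs(U - tanh_form)))
```

The identity follows from f(x-1)/f(x+1) = exp(2x/sigma^2) for a zero-mean Gaussian: (r-1)/(r+1) with r = exp(2x/sigma^2) is exactly tanh(x/sigma^2).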

(S. Marano, V. Matta, P. Willett, L. Tong)

A network of sensors polled by a mobile agent (the SENMA paradigm) is used for detection purposes, with both the remote nodes and the mobile agent implementing Wald's sequential tests. When polled, each remote node transmits its local decision (if any) to the agent, and two network/agent communication schemes are considered, one of them designed with specific care for the network's energy consumption. In both cases, collisions over the common communication channel are precluded by the sequential querying of the sensors. The system's performance in terms of average decision time, error probability, and network energy consumption is derived in exact analytical form. A tradeoff exists between the amount and the reliability of the information that the rover may collect: at optimality, the decentralized system outperforms a single supernode by orders of magnitude in terms of decision time, while only 30% of the sensors encountered by the mobile agent spend energy to reveal themselves. The remaining sensors contribute to the detection process by their silence.

(S. Marano, V. Matta and P. Willett)

We investigate the design of simple noncooperative quantizers for distributed estimation of a common random variable. It is assumed that there is a budget of aggregate rate, a criterion of Fisher information and a large population of sensors. It is further assumed that sensor quantizers are uniform, and that rate is determined by the entropy of the outputs of these. The key question asked is whether it is better to quantize a relatively few sensors finely or as many as possible coarsely.
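The "few fine versus many coarse" question can be probed numerically: under a fixed aggregate rate, what matters per sensor is Fisher information divided by output entropy. The Gaussian observation model and the particular bin widths below are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from math import erf

norm_cdf = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / np.sqrt(2.0))))

def cell_probs(theta, delta, span=8.0):
    """Cell probabilities of a uniform quantizer (width delta) applied to
    x = theta + v, v ~ N(0, 1), over a wide finite support."""
    edges = np.arange(-span, span + delta / 2, delta)
    return np.diff(norm_cdf(edges - theta))

def info_per_bit(delta, eps=1e-4):
    """Fisher information about theta per entropy bit, for one sensor."""
    p = cell_probs(0.0, delta)
    dp = (cell_probs(eps, delta) - cell_probs(-eps, delta)) / (2 * eps)
    mask = p > 1e-12
    fisher = float(np.sum(dp[mask] ** 2 / p[mask]))
    entropy = float(-np.sum(p[mask] * np.log2(p[mask])))
    return fisher / entropy

fom_coarse = info_per_bit(2.0)    # few bits per sensor
fom_fine = info_per_bit(0.25)     # many bits per sensor
```

With total rate fixed at n times the per-sensor entropy, total Fisher information is proportional to this figure of merit, so comparing it across bin widths is one way to pose the abstract's key question.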

(A. Isaac, P. Willett and Y. Bar-Shalom)

For the case of a single resolved target, monopulse-based radar sub-beam angle and sub-bin range measurements carry errors that are approximately Gaussian with known covariances, and hence a tracker that uses them can be Kalman-based. However, the errors accruing from extracting measurements for multiple unresolved targets are not Gaussian. We therefore submit that to track such targets it is worth the effort to apply a nonlinear (non-Kalman) filter. Specifically, in this paper we propose a particle filter that operates directly on the monopulse sum/difference data for two unresolved targets. Significant performance improvements are seen versus a scheme in which signal processing (measurement extraction from the monopulse data) and tracking (target state estimation from the extracted measurements) are separated.

(X. Zhang, P. Willett and Y. Bar-Shalom)

Many target tracking subsystems have the ability to schedule their own data rates; essentially they can "order" new information whenever they need it, at a cost in sensor resources. Among the un-managed schemes, uniform sampling, in which a new measurement is requested periodically and regularly, is the most commonly used; deliberately nonuniform schemes are seldom given serious consideration. In this paper, however, we show that such schemes may have been discarded prematurely: nonuniform sampling can have its benefits. Specifically, the nonuniform and uniform sampling schemes are compared for two kinds of trackers: the PDAF, which updates its track based on a single scan of information at a time; and N-D assignment (an optimization-based implementation of the MHT), in which the sliding window involves many scans of observations. We find that, given the ground rule of maintaining the same overall scan rate (i.e., the same sensor effort), uniform sampling is always optimal for the single-scan tracker in the sense of track life. However, nonuniform sampling can outperform uniform sampling if a more sophisticated multi-scan tracker is used, particularly when: (i) the target has a high process noise; and/or (ii) the false alarm density is high; and/or (iii) the probability of detection is high.

(S. Marano, V. Matta, P. Willett and L. Tong)

In a recent paper we showed that a network of unconnected and completely DOA-blind sensors ("beepers") is able, within the SENMA architecture (unlabeled polling performed by a mobile agent), to perform DOA estimation quite effectively. The idea is that the mobile agent collects the periodic emissions of the polled sensors, with the time origin of those emissions being the passage of the acoustic wavefront. Depending on the relative orientation between the acoustic wavefront and the field of view of the mobile agent, the impinging times at different sensors are more or less clustered, and so are the recorded emissions. On this basis the DOA may be inferred. Here two new estimators are proposed. One method (support-based) exploits the maximum spread between recorded times; it is simple to implement, and its performance, measured in terms of mean square error, improves significantly on that of the earlier paper. In fact, the support-based estimator achieves performance close to that of the maximum-likelihood estimator, also investigated here, indicating that the support-based structure is perhaps suitable for tasks that call for cheap, robust designs, such as sea/ground surveillance and sniper location.

(H. Tu, J. Allanach, S. Singh, K. Pattipati and P. Willett)

In a world full of diverse, distributed and uncertain information sources, how to use information to increase analysis efficiency, collaborate more effectively, make better decisions, and respond more quickly to new threats or opportunities has become a key issue in many areas. One such area is counter-terrorism. In this paper, a collaboration scheme for information integration among multiple agencies (and/or various divisions within a single agency) is designed using hierarchical and hybrid Bayesian networks (HHBNs). In this scheme, raw information is represented by transactions (such as communication, travel, financing), and the information entities to be integrated are modeled as random variables (such as an event occurring, an effect existing, or an action being undertaken). Each random variable has certain states with probabilities assigned to them. "Hierarchical" refers to the model structure, and "hybrid" stems from our use of both general Bayesian networks (BNs) and hidden Markov models (HMMs, a special form of dynamic BNs). The general Bayesian networks are adopted in the top (decision) layer to address global assessment for a specific question (e.g., "Is target A under terrorist threat?" in the context of counterterrorism). HMMs function in the bottom (observation) layer to report processed evidence to the upper-layer BN based on the local information available to a particular agency or division. A software tool, termed the adaptive safety analysis and monitoring (ASAM) system, is developed to implement HHBNs for information integration in either a centralized or a distributed fashion. As an example, a terrorist attack scenario gleaned from open sources is modeled and analyzed to illustrate the functionality of the proposed framework.

(X. Zhang, S. Zhou and P. Willett)

We develop optimal probability loading for parallel binary channels, subject to a constraint on the total probability of sending "1"s. The distinctions from water-filling power loading for parallel Gaussian channels, particularly the latter's "dropping" of poor-quality channels, are highlighted. The only binary-input binary-output channel that is never dropped is the Z-channel.

(Z. Wang and P. Willett)

When employed to detect a transient change between known iid populations, Page's test is easy to implement and provides reliable performance. However, its application to unknown transient changes is less clear. A Page test can be thought of as a repeated sequential test, and here we propose that each sequential test use a time-varying threshold. The idea is that short signals are detected quickly, before post-termination data has a chance to refute them, and that evidence for a long signal is allowed to build rather than being summarily discarded too early.
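The repeated-sequential-test view can be sketched with a standard CUSUM recursion whose threshold depends on the age of the current sequential test; the particular threshold shape and parameters below are illustrative choices, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)

# Page's test for a known mean shift 0 -> mu in unit-variance Gaussian data.
# Hypothetical time-varying per-test threshold: low while a sequential test
# is young (short transients fire quickly), rising as it ages so that
# evidence for a long signal can build before a decision is forced.
mu = 1.0

def page_tv(data, h0=2.5, slope=0.15, hmax=8.0):
    s, age = 0.0, 0
    for k, z in enumerate(data):
        s = max(0.0, s + mu * z - mu * mu / 2.0)  # CUSUM / repeated-SPRT statistic
        age = 0 if s == 0.0 else age + 1          # age resets with the test
        if s > min(hmax, h0 + slope * age):
            return k                              # alarm time
    return None

signal = rng.normal(mu, 1.0, 100)   # change present from the start
alarm = page_tv(signal)
```

With the change present, the statistic drifts up at rate mu^2/2 per sample while the threshold rises more slowly, so detection occurs after a short delay; under noise alone the statistic keeps resetting to zero and the threshold stays low but is rarely reached.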

(S. Marano, V. Matta, P. Willett and L. Tong)

Following the SENMA concept, we consider a wireless network of very dumb and cheap sensors, polled by a travelling "rover". Sensors are randomly placed and isotropic: individually they have no ability to resolve the direction of arrival (DOA) of an acoustic wave. However, they do observe the wavefront at different times. We assume that the communication load must be as limited as possible, so that these times cannot be communicated to the rover. Notwithstanding the lack of transmitted arrival times and the individual sensors' lack of DOA resolution, DOA estimation is possible and simple, and asymptotic efficiency is closely approached after a reasonable number of rover snapshots. Key features are the directionality of the rover antenna, the area it surveys, and the average number of sensors inside that area, which follows a Poisson distribution.

(X. Zhang, P. Willett and Y. Bar-Shalom)

Recently, there have been several new results for an old topic, the Cramer-Rao lower bound (CRLB). Specifically, it has been shown that for a wide class of parameter estimation problems (e.g., for objects with deterministic dynamics) the matrix CRLB, with both measurement origin uncertainty (i.e., in the presence of false alarms or random clutter) and measurement noise, is simply that without measurement origin uncertainty times a scalar "information reduction factor" (IRF). Meanwhile, a neat expression has arisen for the CRLB for state estimation of a stochastic nonlinear dynamic system (i.e., objects with stochastic motion); but this is valid only without measurement origin uncertainty. The present paper can be considered a marriage of the two topics: the clever Riccati-like form of the latter is preserved, but it incorporates the IRF of the former. The effects of plant and observation dynamics on the CRLB are explored. Further, the CRLB is compared via simulation to two common target tracking algorithms, the probabilistic data association filter (PDAF) and the multi-frame (N-D) assignment algorithm.
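For a linear-Gaussian special case, the marriage described above can be sketched as the familiar posterior-CRLB information recursion with the measurement-information term scaled by a scalar IRF; the constant-velocity plant, sensor matrices, and IRF value below are illustrative assumptions.

```python
import numpy as np

# Information recursion J_{k+1} = (F J_k^{-1} F' + Q)^{-1} + q * H' R^{-1} H,
# where q in (0, 1] is the information reduction factor (q = 1: no clutter).
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])                       # CV plant
Q = 0.1 * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])  # process noise
H = np.array([[1.0, 0.0]])                                 # position-only sensor
R = np.array([[1.0]])
q = 0.6                                                    # assumed IRF value

def crlb(J0, steps, q):
    J = J0.copy()
    for _ in range(steps):
        J = np.linalg.inv(F @ np.linalg.inv(J) @ F.T + Q) \
            + q * (H.T @ np.linalg.inv(R) @ H)
    return np.linalg.inv(J)            # CRLB = inverse information matrix

P_clean = crlb(np.eye(2), 50, 1.0)     # no origin uncertainty
P_clutter = crlb(np.eye(2), 50, q)     # clutter erodes measurement information
```

Running both recursions to (near) steady state shows the expected effect: the clutter-degraded bound dominates the clean one, since each update contributes only a fraction q of the measurement information.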

(X. Zhang, P. Willett, Y. Bar-Shalom and I. Segall)

Here we discuss intervisibility (the existence of an unobstructed line of sight (LOS) between two points), accounting for the vertical and horizontal errors in the estimated locations of both points as well as elevation errors in the database of the terrain that could obstruct the LOS between them. The errors are first treated simply as a "white" noise sequence: we assume no correlation between the intervisibility at two different times, and the probability of an instantaneous intervisibility event is developed for this case. This is useful; but perhaps of greater concern is whether a target remains visible long enough, and often enough, for its motion to be tracked. Consequently, we present a second treatment in which the errors are stochastic processes of a certain bandwidth, and both the probability density function of an intervisibility interval and the average number of intervisibility intervals over a certain time period are developed.

(Y. Ruan and P. Willett)

Many practical multi-sensor tracking systems are based on some form of track fusion, in which local track estimates and their associated covariances are shared among sensors. Communication load is a significant concern, and the goal of this paper is to propose an architecture for low-bandwidth track fusion. The scheme involves intelligent scalar and vector quantization of the local state estimates and of the associated estimation error covariance matrices. Simulation studies indicate that the communication saving can be quite significant, with only minor degradation of track accuracy.

(X. Zhang, P. Willett and Y. Bar-Shalom)

If several closely-spaced targets fall within the same radar beam and between two adjacent matched filter samples in range, monopulse information from both of these samples can and should be used for estimation, both of angle and of range (i.e., estimation of the range to sub-bin accuracy). Similarly, if several closely-spaced targets fall within the same radar beam and among three matched filter samples in range, monopulse information from all of these samples should be used for the estimation of the angles and ranges of these targets. Here, a model is established and a maximum likelihood (ML) extractor is developed. The limits of the number of targets that can be estimated are given for both case A, where the targets are in a beam and in a range "slot" between the centers of two adjacent resolution cells (that is, from detections in two adjacent matched filter samples), and case B, where the targets are in two or more adjacent slots (among three or more adjacent samples). A minimum description length (MDL) criterion is used to detect the number of targets between the matched filter samples, and simulations support the theory.

(V. Matta, S. Marano and P. Willett)

Quantization for estimation is explored for the case in which it must be performed jointly with data association; that is, the case in which measurements are of uncertain origin. Data association requires some sort of gating of distributed observations, and a censoring strategy is proposed. Several quantization philosophies are explored: uniform quantization, uniform quantization with measurement exchangeability incorporated (the "type" method), and uniform quantization of sorted measurements. The second scheme uses less bandwidth than the third. But it is shown, perhaps surprisingly, that the third preserves more information that may be useful for estimation; and a simple procedure for optimal fused estimation based on this third scheme is given. Interestingly, when compared in terms of the rate-distortion curve, schemes two and three perform similarly; their censored versions offer further improvement in performance owing to the uncertain-origin property of the measurements.

(Y. Ruan and P. Willett)

The PMHT (probabilistic multi-hypothesis tracker) uses "soft" a-posteriori-probability associations between measurements and targets. Its implementation is a straightforward iterative application of a Kalman smoother operating on "synthetic" (i.e., modified) measurements, and of recalculation of these synthetic measurements based on the current track estimate. In this correspondence, we first discuss the basic PMHT and some of the older PMHT variants that have been used to enhance convergence. We then introduce the new turbo PMHT, which is informed by the recent success of turbo decoding in the digital communication context. This new PMHT substantially improves on all previous versions.
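The synthetic-measurement step can be sketched in one dimension: each measurement receives a soft posterior association weight, and each target's synthetic measurement is the weighted centroid, used with a covariance inflated by the inverse weight sum. The positions, priors, and measurements below are toy assumptions chosen only to show the mechanics.

```python
import numpy as np

preds = np.array([0.0, 5.0])     # two targets' current predicted positions
pi = np.array([0.5, 0.5])        # prior association probabilities
R = 1.0                          # measurement variance
meas = np.array([-0.3, 0.4, 4.7, 5.6])

def normal_pdf(z, m, var):
    return np.exp(-(z - m) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# E step: w[n, t] = P(measurement n came from target t | current estimates)
lik = normal_pdf(meas[:, None], preds[None, :], R) * pi[None, :]
w = lik / lik.sum(axis=1, keepdims=True)

# Synthetic measurement and its (inflated) covariance for each target; these
# feed a standard Kalman smoother in the M step.
synth = (w * meas[:, None]).sum(axis=0) / w.sum(axis=0)
synth_R = R / w.sum(axis=0)
```

With well-separated targets the weights are nearly hard assignments, so each synthetic measurement is close to the centroid of "its" measurements; as targets close on one another, the soft weights spread and the synthetic covariances grow, which is the behavior the iterative recalculation exploits.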

(Yanhua Ruan, Peter Willett)

The Probabilistic Multiple Hypothesis Tracker (PMHT) uses the Expectation-Maximization (EM) algorithm to solve the measurement-origin uncertainty problem. Here we explore some of its variants for maneuvering targets and in particular discuss the Multiple Model PMHT. We apply this PMHT to the six "typical" tracking scenarios given in the second benchmark problem from Blair et al. [6]. The manner in which the PMHT is used to track the targets and to manage radar allocation is discussed, and the results are compared to those of the IMM/PDAF and IMM/MHT. The PMHT works well: its performance lies between those of the IMM/PDAF and IMM/MHT both in terms of tracking performance and computational load.

(D. Pham, K. Pattipati, P. Willett and J. Luo)

A new complex sphere decoding algorithm is presented for signal detection in V-BLAST systems, which has a computational cost that is significantly lower than that of the original complex sphere decoder (SD) for a wide range of SNRs. Simulation results on a 64-QAM system with 23 transmit and 23 receive antennas at an SNR per bit of 24 dB show that the new sphere decoding algorithm obtains the ML solution with an average cost that is at least 6 times lower than that of the original complex SD. Further, the new algorithm also shows robustness with respect to the initial choice of sphere radius.

(Y. Sun, P. Willett and R. Lynch)

In active sonar systems, proper selection of the transmitted waveform is critical for target detection and parameter estimation, especially in the presence of clutter (reverberation). Two commonly used waveforms --- constant frequency (CF) and linear frequency modulated (LFM) --- are studied in this paper. Their characteristics are complementary, both with respect to their accuracies and in their sensitivity to the blind zero-Doppler ridge. Several schemes for fusing the two waveforms are explored, and the fusion results are studied both analytically and by simulation. It is concluded that fusing the information from different waveforms can be not only more robust but in some cases outright preferable, in terms of detection probability and estimation accuracy.

(D. Pham, K. Pattipati and P. Willett)

The Probabilistic Data Association (PDA) method for multiuser detection over synchronous CDMA channels is extended to the signal detection problem in V-BLAST systems where it is generalized for the case of complex modulation. Computer simulations show that the algorithm has an error probability that is significantly lower than that of the V-BLAST optimal order detector and has a computational complexity of O(8n_T^3), where n_T is the number of transmit antennas.

(F. Hasegawa, J. Luo, K. Pattipati, P. Willett and D. Pham)

In this paper, we compare the complexity and efficiency of several methods used for multi-user detection in a synchronous CDMA system. Various methods are discussed, including decision feedback (DF) detection, group decision feedback (GDF) detection, coordinate descent, quadratic programming with constraints, space alternating generalized EM (SAGE) detection, TABU search, a Boltzmann machine detector, semi-definite relaxation, Probabilistic Data Association (PDA), branch and bound and the sphere decoding method. The efficiencies of the algorithms, defined as the probability of group detection error divided by the number of floating point computations, are compared under various situations. Of particular interest is the appearance of an "efficient frontier" of algorithms, primarily composed of DF detector, GDF detector, PDA detector, the branch and bound optimal algorithm and the sphere decoding method. The efficient frontier is the convex hull of algorithms as plotted on probability-of-error versus computational-demands axes: algorithms not on this efficient frontier can be considered dominated by those that are.
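The efficient-frontier selection rule itself is easy to sketch: order candidate detectors by computational cost and keep those that strictly improve error probability over everything cheaper (the non-dominated set, a slight relaxation of the convex hull described above). The cost/error numbers below are invented purely to illustrate the rule, not taken from the paper.

```python
# Each detector is summarized as (computational cost in flops, Pe).
algos = {
    "DF":   (1e4, 3e-2),
    "GDF":  (3e4, 1e-2),
    "PDA":  (8e4, 2e-3),
    "TABU": (9e4, 8e-3),   # dominated: costs more than PDA yet errs more
    "B&B":  (5e5, 1e-4),
}

def frontier(algos):
    """Pareto (non-dominated) set, scanned in order of increasing cost;
    the convex-hull frontier is a subset of this."""
    keep, best_err = [], float("inf")
    for name, (cost, err) in sorted(algos.items(), key=lambda kv: kv[1][0]):
        if err < best_err:     # strictly better error than anything cheaper
            keep.append(name)
            best_err = err
    return keep
```

Any detector not returned by `frontier` is dominated: some cheaper alternative already achieves a lower error probability, which is exactly the sense in which the abstract dismisses off-frontier algorithms.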

(Z. Wang and P. Willett)

We present an approach for the joint segmentation and classification of a time series. The segmentation is on the basis of a menu of possible statistical models: each of these must be describable in terms of a sufficient statistic, but there is no need for these sufficient statistics to be the same, and they can be as complex (for example, cepstral features or autoregressive coefficients) as desired. All that is needed is the PDF (probability density function) of each sufficient statistic under its own assumed model --- presumably this comes from training data, and it is particularly appealing that there is no need at all for a *joint* statistical characterization of all the statistics. There is similarly no need for an a-priori specification of the number of sections, as the approach uses an appropriate penalization of an over-zealous segmentation. The scheme has two stages. In stage one, rough segmentations are implemented sequentially using a piecewise GLR (generalized likelihood ratio); in the second stage, the results from the first stage (both forward and backward) are refined. The computational burden is remarkably small, approximately linear in the length of the time series, and the method is nicely accurate both in the discovered number of segments and in segmentation accuracy. A hybrid of the approach with one based on Gibbs sampling is also presented; this combination is somewhat slower but considerably more accurate.

(J. Luo, K. Pattipati, P. Willett and G. Levchuk)

A fast optimal algorithm based on the branch and bound (BBD) method is proposed for the joint detection of binary symbols of K users in a synchronous Code-Division Multiple Access (CDMA) channel with Gaussian noise. Relationships between the proposed algorithms (depth-first BBD and fast BBD) and both the decorrelating decision feedback (DF) detector and the sphere decoding (SD) algorithm are clearly drawn. It turns out that the decorrelating DF detector corresponds to a "one-pass" depth-first BBD; sphere decoding is in fact a type of depth-first BBD, but one that can be improved considerably via tight upper bounds and user ordering as in the fast BBD. A fast "any-time" sub-optimal algorithm is also available by simply picking the "current-best" solution in the BBD method. Theoretical results are given on the computational complexity and the performance of the "current-best" sub-optimal solution.

(Y. Sun, P. Willett and P. Swaszek)

A common model for sonar clutter is that of the transmitted signal convolved with a colored Gaussian process relating to the sea-bottom profile. Rather surprisingly, this noise becomes strongly non-Gaussian if there are multiple realizations and if a realistic random phase is introduced --- the univariate statistics remain Gaussian, but the joint probability density function (pdf) is not. In this short paper we explore this behavior and we develop optimal detection statistics for a "two-look" situation. It turns out that the gains over a naive assumption of Gaussianity can be substantial.

(T. Luginbuhl and P. Willett)

A general frequency modulated (GFM) signal characterizes the vibrations produced by compressors, turbines, propellers, gears and other rotating machines in a dynamic environment. A GFM signal is defined as the composition of a real or complex, periodic or almost-periodic carrier function with a real, differentiable modulation function. A GFM therefore contains sinusoids whose frequencies are (possibly non-integral) multiples of a fundamental; to distinguish a GFM from a set of unrelated sinusoids it is necessary to track them as a group. This paper develops the general frequency modulation tracker (GFMT) for one or more GFM signals in noise using the expectation/conditional maximization (ECM) algorithm which is an extension of the expectation-maximization (EM) algorithm. Three advantages of this approach are that the ratios (harmonic numbers) of the carrier functions do not need to be known a priori, that the parameters of multiple signals are estimated simultaneously, and that the GFMT algorithm exploits knowledge of the noise spectrum so that a separate normalization procedure is not required. Several simulated examples are presented to illustrate the algorithm's performance.

(J. Luo, K. Pattipati and P. Willett)

The Probabilistic Data Association (PDA) method is extended to multiuser detection over symbol-asynchronous Code-Division Multiple Access (CDMA) communication channels. PDA achieves near-optimal performance with O(K^3) computational complexity in synchronous CDMA, where K is the number of users. In this paper, a direct extension of the PDA method to an asynchronous system is first presented by viewing the asynchronous CDMA system as a large synchronous one. Then, a sliding-window processing scheme is proposed to avoid considering the entire data record. Computer simulations show that the probability of group detection error of the proposed PDA method is very close to the performance lower bound provided by an ideal clairvoyant optimal detector. While the computational complexity of the PDA method is O(K^3) in synchronous CDMA, it is only marginally increased to O([h/s]K^3) per time frame in asynchronous CDMA, where h and s are the width and the sliding rate of the processing window, respectively. Due to the outstanding performance of the PDA detector in heavily-overloaded asynchronous systems, it is also proposed to use asynchronous transmission deliberately even when synchronous transmission is possible -- asynchronous is better than synchronous!

(J. Luo, K. Pattipati, P. Willett and F. Hasegawa)

A strategy for user ordering and time labeling for a decision feedback detector in asynchronous Code-Division Multiple Access communications is discussed. Ordering and labeling would at first appear to be of a complexity exponential in K, the number of users. Surprisingly, optimal sequencing requires only O(K^4) operations, and is needed only once per packet: it is thus a cheap way to obtain an often marked improvement in performance, compared to power-ordering and chronological labeling.

(R. Lynch and P. Willett)

In this paper a method of classification referred to as the Bayesian Data Reduction Algorithm is developed. The algorithm is based on the assumption that the discrete symbol probabilities of each class are a priori uniformly Dirichlet distributed, and it employs a "greedy" approach (similar to a backward sequential feature search) for reducing irrelevant features from the training data of each class. Notice that reducing irrelevant features is synonymous here with selecting those features that provide best classification performance; the metric for making data reducing decisions is an analytic formula for the probability of error conditioned on the training data. To illustrate its performance the Bayesian Data Reduction Algorithm is applied both to simulated and to real data, and it is also compared to other classification methods. Further, the algorithm is extended to deal with the problem of missing features in the data. Results demonstrate that the Bayesian Data Reduction Algorithm performs well despite its relative simplicity. This is significant because the Bayesian Data Reduction Algorithm differs from many other classifiers: as opposed to adjusting the model to obtain a "best fit" for the data, the data, through its quantization, is itself adjusted.

(P. Willett, W. Blair and Y. Bar-Shalom)

Many radar systems use the monopulse ratio to extract angle of arrival (AOA) measurements in both azimuth and elevation angles. The accuracy of each such measurement is reasonably well known: each measurement is, conditioned on the sum-signal return, Gaussian-distributed with calculable bias (relative to the true AOA) and variance. However, we note that the two monopulse ratios are functions of basic radar measurements that are not entirely independent, specifically in that the sum signal is common to both. The effect of this is that the monopulse ratios are dependent, and a simple explicit expression is given for their correlation; this is of considerable interest when the measurements are supplied to a tracking algorithm that requires a measurement covariance matrix. The system performance improvement when this is taken into account is quantified: while it makes little difference for a tracking radar with small pointing errors, there are more substantial gains when a target is allowed to stray within the beam, as with a rotating (track-while-scan) radar or when a single radar dwell interrogates two or more targets at different ranges. But in any case, the correct covariance expression is so simple that there is little reason not to use it. We additionally derive the Cramer-Rao lower bound (CRLB) on joint azimuth/elevation angle estimation and discover that it differs only slightly from the covariance matrix corresponding to the individual monopulse ratios. Hence, using the individual monopulse ratios and their simple joint accuracy expression is an adequate and quick approximation of the optimal maximum-likelihood procedure for single resolved targets.

(R. Lynch and P. Willett)

For important classification tasks there may already exist an arsenal of classification tools, representing previous attempts and best efforts at solution. Many times these are useful classifiers; and although the fact that all base their decisions on the same observations implies that their decisions are strongly dependent, there is often some benefit from fusing them into a better corporate decision. However, these classifiers are often of a black-box nature, and there is no precise way to model their joint statistical behavior such that the fused decision can be optimal. Nonetheless one can consider this fusion as building a meta-classifier, based on data vectors whose elements are the individual legacy classifier decisions. The Bayesian data reduction algorithm (BDRA) imposes a uniform prior probability mass function on discrete symbol probabilities, and thereby can predict its own probability of error performance as conditioned upon its training data. It turns out that this prior probability implies a very appropriate penalty on models whose complexity is not supported by the training data: the BDRA can select a best subset of its input features, by which is meant that the fused decision may best use only a subset of legacy classifier outputs. In this paper the BDRA is applied to such decision-fusion, and is compared favorably to a number of other expert-mixing approaches. Parameters varied include the number of relevant legacy classifiers (some may have been poorly designed, and ought to be discounted automatically), the numbers of training data and classes, and the dependence between legacy classifiers -- a fusion approach should reject redundant decisions.

(J. Luo, K. Pattipati, P. Willett and G. Levchuk)

The sequential group detection technique is studied in this paper. The computational complexity of a Group Decision Feedback Detector (GDFD) is exponential in the largest size of the groups; thus, instead of using the partition of users itself as the design parameter, it is more reasonable in practice to choose the "maximum group size". Given the maximum group size, a grouping algorithm is proposed. It is shown that the proposed grouping algorithm maximizes the Asymptotic Symmetric Energy (ASE) of the multiuser detection system. Furthermore, based on a set of lower bounds on the Asymptotic Group Symmetric Energy (AGSE) of the group decision feedback detector, it is shown that the proposed grouping algorithm in fact maximizes the AGSE lower bound for every group of users. Together with a fast computational method based on branch-and-bound, the theoretical analysis of the grouping algorithm enables the offline estimation of the computational cost and the performance of the GDFD. Simulation results on both small and large size problems are presented to verify the theoretical results. All the results in this paper can be applied to the Decision Feedback Detector (DFD) by simply setting the maximum group size to 1.

(Zhen Wang and Peter Willett)

Two new algorithms are presented for the segmentation of a white Gaussian-distributed time series having unknown but piecewise-constant variances. The first "Sequential/MDL" idea includes a rough parsing via the GLR, a penalization of segmentations having too many parts via MDL, and an optional refinement stage. The second "Gibbs Sampling" approach is Bayesian, and develops a Monte Carlo estimator. From simulation it appears that both schemes are very accurate in terms of their segmentation; but that the Sequential/MDL approach is orders of magnitude lower in its computational needs. The Gibbs approach can, however, be useful and efficient as a final post-processing step. Both approaches (and a hybrid) are compared to several algorithms from the literature.
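A minimal single-changepoint version of the GLR parsing used in the Sequential/MDL stage, for the piecewise-constant-variance model above, might look as follows. This is a sketch only: the paper applies the GLR piecewise and sequentially, and adds the MDL penalty and the refinement stage.

```python
import numpy as np

def glr_split(x, guard=8):
    """Single-changepoint GLR for a zero-mean Gaussian series with a
    possible variance change: compare the one-segment likelihood against
    the best two-segment split.  The guard band keeps both segment
    variance estimates well defined."""
    n = len(x)
    s_all = np.mean(x**2)                  # one-segment variance estimate
    best_t, best_glr = None, -np.inf
    for t in range(guard, n - guard):
        s1 = np.mean(x[:t]**2)             # left-segment variance estimate
        s2 = np.mean(x[t:]**2)             # right-segment variance estimate
        glr = n*np.log(s_all) - t*np.log(s1) - (n - t)*np.log(s2)
        if glr > best_glr:
            best_t, best_glr = t, glr
    return best_t, best_glr
```

The GLR statistic is the log of the ratio of maximized likelihoods; a large value at some split point t indicates that two variance regimes fit the data much better than one.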

(Stefano Marano, Peter Willett and Vincenzo Matta)

It is often required to detect a long weak signal in Gaussian noise, and frequently the exact form of that signal is parametrized, but not known. A bank of matched filters provides an appropriate detector. However, in some practical applications there are very many matched filters and most are quite long. The consequent computational needs may render the classical bank-of-filters approach infeasibly expensive. One example, and our original motivation, is the detection of chirp gravitational waves by an earth-based interferometer. In this paper we provide a computational approach to this problem via sequential testing. Since the sequential tests to be used are not for constant signals, we develop the theory in terms of average sample number (ASN) for this case. Specifically, we propose two easily-calculable expressions for the ASN, one a bound and the other an approximation. The sequential approach does yield moderate computational savings. But we find that by pre-processing the data using short/medium FFT's, and an appropriate sorting of these FFT outputs such that the most informative samples are entered into a sequential test first, quite high numerical efficiency can be realized. The idea is simple, but appears to be quite successful: examples are presented in which the computational load is reduced by several orders of magnitude. The FFT is an example of an energy-agglomerating transform, but of course there are many others. The point here is that the transform need not match the sought signal exactly, in the sense that all energy becomes confined to a single sample: it is enough that the energy becomes concentrated, and the more concentrated the better.

(Stefano Marano, Vincenzo Matta and Peter Willett)

To detect a purely harmonic signal it is difficult to beat an FFT. However, when the signal is very long and weak, Parker and White have shown that a sequential probability ratio test (SPRT) operating on magnitude-square FFT data is far more efficient. Indeed, both from a numerical-error perspective and in terms of robustness against a deviation from a precisely tonal signal, the block-FFT/SPRT idea is very appealing. Here the approach is extended to the case that the frequency is unknown, and expressions are developed for performance both in terms of detection and of average sample number. The approach is applicable to a large number of practical problems; but particular attention is paid to the continuous gravitational wave example. The computational savings as compared to a fixed test vary as a function of signal strength, block length, bandwidth and operating point; but gains of a factor of two are easy. That these gains are not more exciting relates mostly to the underlying FFT structure: although it is useful that many SPRT's "end early", it is difficult to take advantage of that with an efficient FFT algorithm.

(D. Pham, J. Luo, K. Pattipati and P. Willett)

The Probabilistic Data Association (PDA) method for multiuser detection in synchronous Code-Division Multiple Access (CDMA) communication channels is extended to asynchronous CDMA, where a Kalman filter or smoother is employed to track the correlated noise arising from the outputs of a decorrelator. The estimates from the tracker, coupled with an iterative PDA, result in impressively low bit error rates. Computer simulations show that the new scheme significantly outperforms the best Decision Feedback detector. The algorithm has the order of K-cubed complexity per time frame, where K is the number of users.

(P. Willett, Y. Ruan and R. Streit)

The PMHT is a target tracking algorithm of considerable theoretical elegance. In practice, its performance turns out to be at best similar to that of the PDAF; and since the implementation of the PDAF is less intense numerically the PMHT has been having a hard time finding acceptance. In this paper the PMHT's problems of nonadaptivity, narcissism, and over-hospitality to clutter are elicited. The PMHT's main selling-point is its flexible and easily-modifiable model, which we use to develop the "homothetic" PMHT; maneuver-based PMHTs, including those with separate and joint homothetic measurement models; a modified PMHT whose measurement/target association model is more similar to that of the PDAF; and PMHTs with eccentric and/or estimated measurement models. Ideally, this paper's "bottom line" would be a version of the PMHT with clear advantages over existing trackers. If the goal is of an accurate (in terms of MSE) track, then there are a number of versions for which this is available. In terms of lost tracks there is no clearly preferable PMHT, although several variants (e.g. convergence-aided via measurement-variance inflation, maneuvering, and homothetic) offer performance similar to that of the PDAF; also, in very adverse tracking situations, the PMHT is preferable. We further hope that our demonstration of the facility by which a new model "idea" is turned into a working algorithm is encouraging to other researchers.

(R. Niu, P. Willett and Y. Bar-Shalom)

The conventional approach for tracking system design is to treat the sensor and tracking subsystems as completely independent units. However, the two subsystems can be designed jointly to improve system (tracking) performance. It is known that different radar signal waveforms result in very different resolution cell shapes (for example, a rectangle versus an eccentric parallelogram) in the range/range-rate space, and that there are corresponding differences in overall tracking performance. In this paper we develop a framework for the analysis of this performance. An imperfect detection process, false alarms, target dynamics, and the matched filter sampling grid are all accounted for, using the Markov chain approach of Li and Bar-Shalom. The role of the grid is stressed, and it is seen that the measurement-extraction process from contiguous radar "hits" is very important. A number of conclusions are given, perhaps the most interesting of which is the corroboration in the new measurement space of Fitzgerald's result for delay-only (i.e. range) measurements, that a linear FM upsweep offers very good tracking performance.

(Y. Sun and P. Willett)

The online detection of a very long and weak chirp signal is studied. The signal has an extremely slowly-decreasing frequency, and is corrupted by white Gaussian noise and possibly also by powerful tones. By exploring and comparing candidate methods, it is found that the Hough transform (HT) detector appears to be most suitable given constraints on computational load and detectability. The analytical and simulated performance of the HT detector is obtained and compared to the analytical performance of the generalized likelihood ratio test (GLRT), which is assumed to be optimal. Applying a suitable threshold for the HT can increase speed dramatically while preserving performance. We have found that both dithering (taking varied frequency shifts for FFTs) and increasing the FFT length can reduce the minimum detectable frequency slope with nearly no additional computation.
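A toy Hough-transform accumulator illustrates why the HT suits a slowly-sweeping chirp: its spectrogram peaks fall near a line f = a*t + b in the time-frequency plane, and each peak votes for the candidate lines passing near it. Parameter names and the vote rule below are illustrative, not the paper's implementation.

```python
import numpy as np

def hough_chirp_votes(peaks, slopes, intercepts, tol=0.5):
    """Hough-transform vote accumulator over candidate lines f = a*t + b
    in the time-frequency plane.  Each detected peak (t, f) votes for
    every (slope, intercept) cell whose line passes within tol of it; a
    long weak chirp emerges as one heavily-voted cell even when no
    single FFT reveals it."""
    acc = np.zeros((len(slopes), len(intercepts)))
    for (t, f) in peaks:
        for i, a in enumerate(slopes):
            for j, b in enumerate(intercepts):
                if abs(a*t + b - f) < tol:
                    acc[i, j] += 1
    return acc
```

The detector then thresholds the maximum accumulator cell; clutter peaks scatter their votes across many cells and so contribute little to any one.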

(D. Abraham and P. Willett)

The use of active sonar in shallow water results in received echoes that may be considerably spread in time compared to the resolution of the transmitted waveform. The duration and structure of the spreading and the time of occurrence of the received echo are unknown without accurate knowledge of the environment and a priori information on the location and reflection properties of the target. A sequential detector based on the Page test is proposed for the detection of time-spread active sonar echoes. The detector also provides estimates of the starting and stopping times of the received echo. This signal segmentation is crucial to allow further processing such as more accurate range and bearing localization, depth localization, or classification. The detector is designed to exploit the time spreading of the received echo and is tuned as a function of range to the expected signal-to-noise ratio (SNR) as determined by the transmitted signal power, transmission loss, approximate target strength, and the estimated noise background level. The theoretical false alarm and detection performance of the proposed detector, the standard Page test, and the conventional thresholded matched filter detector are compared as a function of range, echo duration, SNR, and the mismatch between the actual and assumed SNR. The proposed detector and the standard Page test are seen to perform better than the conventional thresholded matched filter detector when the received echo is even minimally spread in time. The use of the proposed detector and the standard Page test in active sonar is illustrated with reverberation data containing target-like echoes from geological features, where it was seen that the proposed detector was able to suppress reverberation-generated false alarms that were detected by the standard Page test.

(Z. Wang, P. Willett and R. Streit)

Detecting signals that are long, weak, and narrowband is a well known and important problem in signal processing, and has among many applications those in industrial process monitoring, radar and sonar. Such signals may be composed of dense sets of lightly-modulated tones or be true narrowband processes; in fact, which form they have will be shown to be moot as regards detection. An ad hoc scheme is developed: its stages include the DFT, a multiresolution decomposition in the frequency domain, and a GLRT. The computational load is light, and the performance is remarkably good. This is so not just in the original narrowband situation, but also, due to an inherent adaptivity to the data, in the detection of signals that are relatively broadband in nature. Generalizations are given to CFAR operation in both prewhitened and unwhitened cases, and to the detection of multi-band signals. As regards the last, it is discovered that there is little loss from over-estimating the number of bands.

(Z. Wang and P. Willett)

Recently, a power-law statistic operating on DFT data has emerged as a basis for a remarkably robust detector of transient signals having unknown structure, location and strength. In this paper we offer a number of improvements to Nuttall's original power-law detector. Specifically, the power-law detector requires that its data be pre-normalized and spectrally white; a CFAR and self-whitening version is developed and analyzed. Further, it is noted that transient signals tend to be contiguous both in temporal and frequency senses, and consequently new power-law detectors in the frequency and the wavelet domains are given. The resulting detectors offer exceptional performance and are extremely easy to implement. There are no parameters to tune: they may be considered "plug-in" solutions to the transient detection problem, and are "all-purpose" in that they make minimal assumptions on the structure of the transient signal save of some degree of agglomeration of energy in time and/or frequency.
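The core statistic behind these detectors is simple. Here is a sketch of the basic pre-whitened power-law statistic the improvements start from, with the exponent nu = 2.5 as a typical compromise value (an illustrative choice, not the paper's prescription):

```python
import numpy as np

def power_law_stat(x, nu=2.5):
    """Power-law statistic on DFT data: sum over bins of the normalized
    periodogram raised to the power nu.  The exponent interpolates
    between an energy detector (nu = 1) and a max-bin detector
    (nu -> infinity).  This sketch assumes the data are already
    pre-normalized and spectrally white, which is exactly the
    requirement the CFAR/self-whitening variants remove."""
    P = np.abs(np.fft.rfft(x))**2 / len(x)   # normalized periodogram
    return np.sum(P**nu)
```

Raising each bin to a power above one rewards the energy agglomeration characteristic of a transient, while a spectrally flat noise record contributes only modestly from every bin.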

(J. Luo, K. Pattipati, P. Willett and F. Hasegawa )

A Probabilistic Data Association (PDA) method is proposed in this paper for multiuser detection over synchronous Code Division Multiple Access (CDMA) communication channels. PDA models the undecided user signals as binary random variables. By approximating the Inter-User Interference (IUI) as Gaussian noise with an appropriately elevated covariance matrix, the probability associated with each user signal is iteratively updated. Computer simulations show that the system usually converges within 3-4 iterations, and the resulting probability of error is very close to that of the optimal Maximum Likelihood (ML) detector. Further modifications are also presented to significantly reduce the computational cost.
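A minimal sketch of the iterative PDA update for y = Hb + n follows: each user's soft bit is refreshed by treating the residual inter-user interference as Gaussian with a covariance inflated by the other users' current bit uncertainties. All names are illustrative, and the paper's cost-reducing modifications are omitted.

```python
import numpy as np

def pda_detect(y, H, sigma2, iters=4):
    """Iterative PDA sketch for y = Hb + n with b in {-1,+1}^K.  For
    user k the other users' contributions are modeled as Gaussian with
    covariance sigma2*I + sum_j (1 - mu_j^2) h_j h_j^T, where mu_j is
    the current soft bit E[b_j]; the resulting LLR updates mu_k."""
    K = H.shape[1]
    mu = np.zeros(K)                              # soft bits, uninformative start
    for _ in range(iters):
        for k in range(K):
            others = [j for j in range(K) if j != k]
            # interference-plus-noise covariance seen by user k
            C = sigma2*np.eye(len(y))
            for j in others:
                C += (1 - mu[j]**2)*np.outer(H[:, j], H[:, j])
            r = y - H[:, others] @ mu[others]     # cancel current soft estimates
            h = H[:, k]
            llr = 2 * h @ np.linalg.solve(C, r)   # LLR in favor of b_k = +1
            mu[k] = np.tanh(llr/2)                # E[b_k] under that LLR
    return np.sign(mu)
```

As the soft bits saturate toward +/-1, the inflated covariance collapses back to the noise covariance, so the iteration behaves like soft interference cancellation that hardens as confidence grows.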

(R. Niu, P. Willett and Y. Bar-Shalom)

In many estimation situations measurements are of uncertain origin. This is best exemplified by the target-tracking situation in which at each scan (of a radar, sonar, or electro-optical sensor) a number of measurements are obtained, and it is not known which, if any, of these is target-originated. The source of extraneous measurements can be false alarms - especially in low-SNR situations that force the detector at the end of the signal processing chain to operate with a reduced threshold - or spurious targets. In several earlier papers the surprising observation was made that the Cramer-Rao lower bound (CRLB) for the estimation of a fixed parameter vector (e.g., initial position and velocity) that characterizes the target motion, for the special case of multidimensional measurements in the presence of additive white Gaussian noise, is simply a multiple of that for the case with no uncertainty. That is, there is a scalar information-reduction factor; this is particularly useful as it allows comparison in terms of a scalar. In this paper we explore this result to determine how wide the class of such problems is. It turns out to include many non-Gaussian situations. Simulations corroborate the analysis.

(B. Chen and P. Willett)

This paper addresses the quickest detection of superimposed hidden Markov model (HMM) transient signals. It is assumed that a known HMM is always extant but that at an unknown time a second known HMM may also be present, overlapping with the first. Two approaches are proposed. The first treats the superimposed HMMs as a unit with an expanded state space, thus converting the problem of detecting superimposed HMMs into detection of a change in HMM, this being readily solved using a previously-proposed procedure. Such an approach, though excellent in terms of performance, is not suitable for the superposition of multiple HMMs with large state dimensions due to computational complexity. A second detection scheme - based on multiple target tracking ideas - with much lower computational needs but little loss in terms of performance, is therefore developed.

(P. Willett, R. Niu, and Y. Bar-Shalom)

Existing detection systems generally are operated using a fixed threshold, optimized to the Neyman-Pearson criterion. An alternative is Bayes detection, in which the threshold varies according to the ratio of prior probabilities. In a recursive target tracker such as the probabilistic data association filter (PDAF) such priors are available in the form of a predicted location and associated covariance; but the information is not at present made available to the detector. Put another way, in a standard detection/tracking implementation information flows only one way, from detector to tracker. Here we explore the idea of two-way information flow, in which the tracker instructs the detector where to look for a target, and the detector returns what it has found. More specifically, we show that the Bayesian detection threshold is lowered in the vicinity of the predicted measurement, and we explain the appropriate modification to the PDAF. The implementation is simple, and the performance is remarkably good.
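The qualitative effect can be sketched in a single expression: with the tracker's Gaussian predicted location acting as a prior, the log-likelihood-ratio threshold drops near the prediction. The additive form and the names below are illustrative, not the paper's exact modification to the PDAF.

```python
import numpy as np

def local_threshold(z, z_pred, var_pred, nominal_llr_thresh):
    """Position-dependent Bayes threshold sketch (scalar measurement):
    the log-likelihood-ratio threshold is reduced by the log of the
    Gaussian predicted-location density, so cells near the tracker's
    prediction need less evidence to declare a detection."""
    log_prior = -0.5*((z - z_pred)**2/var_pred + np.log(2*np.pi*var_pred))
    return nominal_llr_thresh - log_prior
```

A cell at the predicted measurement thus carries the lowest threshold, and the threshold rises quadratically (in log-likelihood units) with the normalized distance from the prediction.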

(Z. Wang, P. Willett, P. DeAguiar and J. Webster)

An artificial neural network (ANN) approach is proposed for the detection of workpiece "burn", the undesirable change in metallurgical properties of the material produced by overly-aggressive or otherwise inappropriate grinding. The grinding Acoustic Emission (AE) signals for 52100 bearing steel were collected and digested to extract feature vectors which appear to be suitable for ANN processing. Two feature vectors are considered, one concerning the band-power, the kurtosis and the skew, and the other the autoregressive (AR) coefficients. The result (burn or no-burn) of the signals was identified based on hardness and profile tests after grinding. Then 10 burn, 10 no-burn and 10 noise-only signals were used to train a radial basis NN employing either of the feature vectors as the input. The trained NN works remarkably well for burn detection. Other signal processing approaches are also discussed. Among them, the CFAR power-law and the Mean-Value Deviance (MVD) prove useful.

(P. Willett, P. Swaszek and R. Blum)

Most results about quantized detection rely strongly on an assumption of independence among random variables. With this assumption removed, little is known. Thus, in this paper, Bayes-optimal binary quantization for the detection of a shift in mean in a pair of dependent Gaussian random variables is studied. This is arguably the simplest meaningful problem one could consider. If results and rules are to be found, they ought to make themselves plain in this problem. For certain problem parametrizations (meaning: the signals and correlation coefficient) optimal quantization is achievable via a single threshold applied to each observation -- the same as under independence. In other cases one observation is best ignored, or is quantized with two thresholds; neither behavior is seen under independence. Further, and again in distinction from the case of independence, it is seen that in certain situations an XOR fusion rule is optimal, and in these cases the implied decision rule is bizarre. The analysis is extended to the multivariate Gaussian problem.

(B. Chen and P. Willett)

This paper addresses the quickest detection of signals that can be represented as hidden Markov models (HMMs), with application to the detection of transient signals. Relying on the fact that Page's test is equivalent to a repeated Sequential Probability Ratio Test (SPRT), we are able to devise a procedure analogous to Page's test for dependent observations. By using the so-called forward variable of an HMM, such a procedure is applied to the detection of a change in hidden Markov modeled observations, i.e., a switch from one HMM to another. Performance indices of Page's test, the average run length (ARL) under both hypotheses, are approximated and confirmed via simulation. Several important examples are investigated in depth to illustrate the advantages of the proposed scheme.
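For reference, a minimal sketch of the classical i.i.d. Page (CUSUM) recursion that the HMM forward-variable procedure generalizes:

```python
import numpy as np

def page_test(llr_increments, h):
    """Page's (CUSUM) test: accumulate log-likelihood-ratio increments,
    clip the running sum at zero, and alarm when it crosses threshold h.
    This is equivalent to repeated SPRTs with boundaries (0, h); the
    paper replaces the i.i.d. LLR increment with one built from the
    HMM forward variable."""
    s = 0.0
    for n, z in enumerate(llr_increments):
        s = max(0.0, s + z)     # clipping restarts the implicit SPRT
        if s >= h:
            return n            # alarm time (index of crossing sample)
    return None                 # no alarm within the record
```

Before the change the increments have negative mean and the clipped sum hovers near zero; after the change the positive drift carries it to the threshold, and the threshold h trades mean time between false alarms against detection delay.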

(Z. Wang and P. Willett)

We present a simulational study of several different statistics applied to the detection of unknown low-SNR transient signals in white Gaussian noise. The results suggest that relatively-unsophisticated tests based on temporal localization of power, such as the Page test and a test based on a new statistic due to Nuttall, give reliable results.

willett@engr.uconn.edu
