What is an "Independent Observer"? A person or thing that has a different view of a situation than the primary observer.
Football fans know it well. A referee makes a call on the field that the team coach disagrees with and protests. The officials gather around a TV monitor to see what evidence there is to indicate that the call was right or wrong. The cameras have different views of the play—perhaps not as clear as the ref’s eyes, but the different views may show that the call was wrong (or right).
The critical feature of the Independent Observer (in this case, the camera) is that it does not share the same technology or capabilities as the primary “measurement” (the ref’s eyeball). The Independent Observer may be better than the primary sensor, but that is not necessarily, or even often, the case.
In this post, I'll cover several cases where an Independent Observer caught, or should have caught, a serious measurement error.
How does this apply to test data acquisition? We need to ask ourselves: what are the inherent shortcomings of our data acquisition system? The cases below illustrate several candidates: unfiltered excitation hardware, anti-alias filters that hide out-of-band energy, and displays that show only what the system itself sees.
An important feature (or “un-feature”) of the Independent Observer is that, in most cases, it need not be very accurate or powerful. It may be a rough check on the sanity of the measurement. Its objective is to catch significant errors.
So, what are we looking for? Let’s discuss some of the types of errors that should be spotted if a proper Independent Observer is used.
Back in the olden days before I knew better, we were driving a test with a digital signal generator (a digital-to-analog converter, or DAC) that did not have a low-pass filter on its output. We were monitoring the test with a good data system that had an anti-alias (low-pass) filter that cut off the high-frequency energy above the Nyquist frequency. We ran a sine-sweep test and the ANALOG RMS shutdown system tripped us offline, repeatedly at the same point.
The test-system elements are shown in Fig 1:
A Little History: The DUT was the “iron maiden” test model of the future Hubble Space Telescope, constructed and tested at Lockheed in 1973. The objective was to characterize the micro-g modal response of the mirror simulators to excitation from the reaction wheels. MODALAB, in the foreground, was the 256-channel real-time sine-sweep control/acquisition/analysis system developed to dredge out the low-level responses.
So, what was causing the shutdown? Let’s make a model of a simple test system:
Now the reflected images of the excitation are interacting strongly with the gain of the transfer function. It is easy to see why an RMS detector would cause a shutdown that we could not see with the digitized data.
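The mechanism can be sketched numerically. The simulation below is a minimal model, not the actual MODALAB setup: all rates are assumed, and a hypothetical lightly damped mode is placed right on the first DAC image frequency so the interaction is visible.

```python
# Illustrative model of the shutdown: a zero-order-hold (ZOH) DAC with no
# reconstruction filter puts images of the commanded sine at n*fs_dac +/- f.
# All rates and the resonant mode below are assumed for this sketch.
import numpy as np
from scipy import signal

fs_analog = 720_000          # "analog world" simulation rate, Hz
fs_dac    = 10_000           # DAC update rate, Hz
f_drive   = 1_234            # commanded sine frequency, Hz
t = np.arange(0, 0.3, 1 / fs_analog)

# Unfiltered DAC output: staircase (zero-order hold) version of the sine
held_t = np.floor(t * fs_dac) / fs_dac
drive = np.sin(2 * np.pi * f_drive * held_t)

# Hypothetical lightly damped mode sitting on the first image frequency
f_mode = fs_dac - f_drive                      # 8,766 Hz
b, a = signal.iirpeak(f_mode, Q=50, fs=fs_analog)
response = signal.lfilter(b, a, drive)         # what the specimen "feels"

# Acquisition path: 8-pole anti-alias low-pass at 0.4 * fs_dac
sos = signal.butter(8, 0.4 * fs_dac, 'low', fs=fs_analog, output='sos')
acquired = signal.sosfilt(sos, response)       # what the digitized data shows

rms = lambda x: np.sqrt(np.mean(x ** 2))
r_analog, r_acquired = rms(response), rms(acquired)
# The wideband (analog) RMS dwarfs the in-band (acquired) RMS: the analog
# RMS detector trips on energy the digitized data cannot show.
```

The exact numbers depend on the assumed mode and rates, but the pattern is the point: whenever an image lands on a resonance, the analog RMS is many times the acquired RMS.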
We dragged in a spectrum analyzer to look at the raw analog signal. It showed results similar to those shown here. The DAC was producing high-frequency noise that we filtered off in the acquisition system so we couldn't see it. The analog shutdown system (and the specimen) saw all of it. In some cases, we might have broken something.
The fix for the problem was straightforward. The missing component in the excitation system is an analog low-pass filter following the digital-analog converter. Lesson learned.
What was the primary Independent Observer? The RMS detector. It must be an analog calculation so that its bandwidth is not limited. Then, a spectrum analyzer gave us a better view, and understanding, of the problem.
In all of my blogs, I have endlessly discussed the fact that the fundamental hazard of the digital data acquisition process is aliasing. Despite my (and many others') harping, critical tests are still being performed that have significant aliasing errors. One example happened in a major testing lab a few years ago. I will show how it happened and how an Independent Observer should have discovered the problem early in the test sequence, not after tens or hundreds of tests had been performed.
The tests being performed were pyrotechnic (explosive) events that produced a lot of high-frequency energy—frequencies much higher than those of interest for structural damage. However, that energy is sensed by the transducers and must be handled properly by the data acquisition system.
I am going to use the PyroShock Time History that I have used for so many demonstrations (Figure 4). It has lots of energy out to 30 kHz and significant energy to 80 kHz.
Let’s assume that the maximum frequency of interest (potential damage) is 3 kHz. Then, a sample rate of 36 kS/s gives us over 10 points/cycle at the highest frequency (the standard minimum for shock testing). The raw and sampled time histories are shown in Figure 5. In this view, it is easy to see that it is not sampled fast enough. However, the acquired data only shows the red data values. It is not obvious that they are corrupted.
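The trap can be sketched numerically. In the sketch below, band-limited noise stands in for the pyro signal, an assumed 720 kS/s rate stands in for the analog world, and decimation by 20 gives the 36 kS/s acquisition rate; none of these are the lab's actual values.

```python
# Undersampling a wideband event with and without an anti-alias filter.
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs_fast = 720_000
# Broadband stand-in for the pyro signal: noise with energy out to ~80 kHz
wideband = signal.sosfilt(
    signal.butter(8, 80_000, 'low', fs=fs_fast, output='sos'),
    rng.standard_normal(fs_fast))          # 1 s of data

q = 20                                     # 720 kS/s -> 36 kS/s
naive = wideband[::q]                      # no filter: everything folds in-band

# Proper path: low-pass below the new Nyquist frequency, then sample
proper = signal.sosfilt(
    signal.butter(8, 14_400, 'low', fs=fs_fast, output='sos'),
    wideband)[::q]

rms = lambda x: np.sqrt(np.mean(x ** 2))
rms_naive, rms_proper = rms(naive), rms(proper)
# rms_naive retains essentially all the wideband energy, aliased in-band and
# indistinguishable from real low-frequency content; rms_proper keeps only
# what genuinely lies below the filter cutoff.
```

The naively sampled record looks plausible on its own; only the comparison against a wideband reference exposes the folded energy.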
Figure 6 shows the spectra in Fourier and Shock Response forms.
The red curves show the acquired/undersampled results.
What do we need to provide an Independent Observer for this test? A digital oscilloscope/spectrum analyzer with a bandwidth of at least 1 MHz. The in-band level difference might be hard to see, but the analyzer would have shown a lot of high-frequency energy present and unaccounted for in the acquired version.
So, how should the data have been acquired? A solution is an analog anti-alias filter before digitization. Figure 7 shows the results when an 8-pole Butterworth filter at 20 kHz is applied.
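As a sketch of why an 8-pole Butterworth at 20 kHz does the job, here is a quick check of its attenuation at the frequencies in play. A digital design at an assumed 720 kS/s stands in for the analog filter.

```python
# Attenuation of an 8-pole Butterworth low-pass with a 20 kHz cutoff,
# evaluated at the in-band (3 kHz) and out-of-band (30, 80 kHz) frequencies.
import numpy as np
from scipy import signal

fs = 720_000   # assumed emulation rate, standing in for the analog filter
sos = signal.butter(8, 20_000, 'low', fs=fs, output='sos')
w, h = signal.sosfreqz(sos, worN=[3_000, 30_000, 80_000], fs=fs)
atten_db = -20 * np.log10(np.abs(h))
# ~0 dB at 3 kHz (the band of interest is untouched), tens of dB at 30 kHz,
# and well past 90 dB at 80 kHz, where the pyro energy would otherwise fold.
```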
In addition to the need for an Independent Observer, there is one critical point to make: Always calculate the Fourier Spectrum of your data. Many errors are made obvious there.
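A minimal version of that routine check is a few lines. The `amplitude_spectrum` helper below is hypothetical, not a vendor API.

```python
# One-sided, Hann-windowed, amplitude-corrected spectrum of a real signal.
import numpy as np

def amplitude_spectrum(x, fs):
    """Hypothetical helper: one-sided amplitude spectrum of a real signal."""
    w = np.hanning(len(x))
    spec = 2.0 * np.abs(np.fft.rfft(x * w)) / w.sum()
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, spec

# Quick self-check: a unit 1 kHz sine sampled at 36 kS/s for 1 s
fs = 36_000
t = np.arange(0, 1, 1 / fs)
freqs, spec = amplitude_spectrum(np.sin(2 * np.pi * 1_000 * t), fs)
# The peak sits at 1 kHz with amplitude ~1.0. In a real measurement,
# out-of-place peaks or a raised noise floor are exactly the errors
# this plot makes obvious.
```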
One of my prime concerns is whether digital vibration test control systems actually do what they claim. They have beautiful displays that show what they believe is the truth. Is it really?
The objective of the Independent Observer is to confirm (or not) the validity of the process.
Let’s look at the procedure (Figure 8). Simply put, it is a closed-loop control system that is trying to make the exciter (Shaker) produce the desired response at the transducer (T). The desired and response functions may be Sine, Random, Shock…whatever. The algorithm and displays will be appropriate for each of the application processes. Displays might include time history, spectrum, and RMS level indications.
The problem is that the system only acts on and displays what it sees from the transducer and the following analysis. What could possibly go wrong?
The fundamental problem is that our wonderful computer-based systems isolate us from what is really going on at the transducer. The system output may be right...or...wrong. We have no way of telling. The system views the event through a myopic window—the display says you are in control and the result matches the desired. Does it really? It needs to be verified.
An Independent Observer whose indications agree with the system result provides a simple verification.
On the Independent Observer side, we have a transducer (co-located with the control transducer) and signal conditioner followed by relatively simple data-analysis/display systems.
The Independent Observer needs to have one basic characteristic: It must have a bandwidth significantly higher than the test system. Other than that, it does not have to be very sophisticated or accurate. We are only doing a sanity check.
What tests should we do? Emulation of the real test lab activities.
Sine Sweep: Program a slow, constant-level sweep over the full frequency range and monitor the Independent Observer’s RMS and time-history (oscilloscope) displays. The observer’s readings should track the system’s own level indications throughout the sweep; any divergence flags trouble.
Random: Program a constant-level spectrum with lots of averaging over the full frequency range, and monitor the Independent Observer’s RMS and spectral displays. The observer’s spectrum should reproduce the programmed shape, and its RMS should agree with the system’s reading.
Repeat the test with a typical spectrum shape used in the lab.
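The sine-sweep check can be emulated offline. The sketch below uses `scipy.signal.chirp` to stand in for the controller's drive and a block RMS to stand in for the observer's level display; the rate and sweep limits are assumed.

```python
# Slow constant-level sweep plus a sliding block RMS, emulating the
# Independent Observer's RMS display during a sine-sweep test.
import numpy as np
from scipy.signal import chirp

fs = 200_000                                  # assumed observer rate, Hz
t = np.arange(0, 10, 1 / fs)                  # 10 s sweep
drive = chirp(t, f0=20, t1=10, f1=2_000, method='logarithmic')

block = fs // 10                              # 100 ms RMS blocks
rms_blocks = np.array([
    np.sqrt(np.mean(drive[i:i + block] ** 2))
    for i in range(0, len(drive) - block + 1, block)])
# For a unit-amplitude sweep, every block should sit near 1/sqrt(2) ~ 0.707;
# a dip or spike anywhere in the observer's RMS trace is the trouble flag.
```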
In all cases, the most obvious (and simple) indicator of trouble is the RMS measurement. What if the system and monitor indications are significantly different?
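The cross-check itself can be reduced to a one-line pass/fail rule. The helper and its 15% tolerance below are illustrative, not a standard.

```python
# Hypothetical pass/fail helper for the RMS cross-check between the control
# system's reading and the Independent Observer's reading.
def rms_agrees(system_rms: float, observer_rms: float,
               tol: float = 0.15) -> bool:
    """True when the two RMS readings agree within a fractional tolerance."""
    ref = max(abs(system_rms), abs(observer_rms))
    if ref == 0.0:
        return True          # both silent: nothing to disagree about
    return abs(system_rms - observer_rms) / ref <= tol
```

If this check fails, stop and find out which instrument is lying before running another test.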
I am going to close with a copy of the slide that I showed at the beginning and end of my data acquisition short courses.
The obvious point is to not trust any vendor, but it goes beyond that. Don’t trust your present equipment: It sometimes will go bad and/or do weird things.
And, don’t trust yourself: Do all of the checks you can think of to block your own biases and stupidities. This is admittedly toughest when you are under the gun. That is when it is most important!
Most of the vendors that service the data acquisition and control world do a good job and offer a good product. However, it is my experience that they often don’t qualify their system as well as they should.
And then, there are the turkeys. Often, they are wearing a very nice costume (in this case, displays). They need to be rooted out. I hope you don’t already own one.
As users, it is our job to be sure that our tools do what they should, and preferably to do so during the demonstration phase, before we have bought the system. Doing Independent Observer tests is the first, and most critical, qualification step.