QA/QC current measurement (comprehensive)
Unfortunately, many things can go wrong during a deployment. Problems may be related to the environment, configuration settings, the instrument, or other equipment. Quality checks should therefore be carried out to assess whether the collected data are reliable and to determine which post-processing methods are necessary or desired. By taking some measures in advance of deployment, certain problems can be eliminated entirely, while the impact or likelihood of others can be significantly reduced. This article highlights several important considerations for quality assurance and quality control of current data. Since all instruments are different, it is wise to keep an instrument's specific characteristics in mind when evaluating data from it. Technical specifications can be found via Nortek's product site by selecting the instrument of interest. Ocean Contour, Storm, and Surge are Nortek-supplied software made to analyze and process current data.
Amplitude
What is amplitude?
Amplitude is a typical variable used to describe the characteristics of a wave. It indicates the amount of energy transferred by the wave and can be seen as a measure of how strong the wave is. In acoustics, a greater amplitude means a greater intensity, and the wave sounds louder (in the audible frequency range). ADCP instruments emit longitudinal sound waves into the water and measure the echoes. For current measurements, the amplitude (also called signal strength) can be defined as "a measure of the magnitude of the acoustic reflection from water". Since the amplitude is a measure of the echoes, it is a function of the scattering conditions. The amplitude is given in decibels (dB) or in the dimensionless unit counts. The relation between the two units may vary a bit between instruments, but a conversion of 0.5 dB per count is a good estimate.
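As a minimal illustration, the sketch below applies this rule-of-thumb conversion in Python. The exact scaling factor is instrument-specific, so treat the 0.5 dB/count figure as an estimate rather than a definitive value.

```python
import numpy as np

# Rule-of-thumb conversion from raw amplitude counts to dB. The exact
# factor varies slightly between instruments; 0.5 dB/count is an estimate.
DB_PER_COUNT = 0.5

def counts_to_db(amplitude_counts):
    """Approximate conversion of amplitude from counts to dB."""
    return np.asarray(amplitude_counts, dtype=float) * DB_PER_COUNT

# Example: a raw reading of 120 counts corresponds to roughly 60 dB.
print(counts_to_db([120, 80, 40]))  # -> [60. 40. 20.]
```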
Amplitude data can show both spatial (profiler) and temporal (profiler and current meter) variations. Figure 1 shows an example of this from Ocean Contour. Figure 1-a presents the amplitude readings for Beam 2 in space and time. The amplitude along the horizontal dotted line is illustrated in Figure 1-b, which shows how the amplitude changes with time within that specific cell. Figure 1-c shows the amplitude along the vertical dotted line in Figure 1-a and shows how the amplitude changes along the profiling range at that specific time. The amplitude values always refer to the along-beam signal strength and are independent of the chosen velocity coordinate system.
(a) Amplitude readings in space and time
(b) Time variation of amplitude
(c) Spatial variation of amplitude
Figure 1: Amplitude can show both temporal and spatial variation for current profilers. (a) shows both in an example of overall amplitude, (b) shows the temporal variation in one cell, and (c) shows the spatial variation at one time.
Range
Due to spreading and absorption, the signal strength decreases with range, as you can see in Figure 2. When measuring in open waters far from any boundaries, the amplitude profile should follow the Sonar equation and look similar to the left profile. After a gradual decrease in signal strength, the amplitude reaches a constant limit value known as the noise floor. At this point the instrument only measures noise, and no valid velocity measurement can be obtained. All instruments have an individual noise floor due to internal electronic noise. In this way, the amplitude determines the range over which an instrument can measure velocity accurately. The signal-to-noise ratio (SNR) is a common measure for comparing the desired signal strength to the background noise; as the term implies, it is the ratio of the received echo level to the noise floor. Your data will be questionable when the SNR is too low: the signal strength is then very low relative to the noise, and the standard deviation of the velocity estimates increases. As a rule of thumb, we recommend that the signal should be at least 3 dB above the noise floor and that data with lower signal levels be discarded.
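A minimal sketch of this SNR check in Python, assuming the amplitude is already in dB and the noise floor is known for the instrument (the noise floor and amplitude values below are hypothetical):

```python
import numpy as np

# Flag cells whose signal is less than 3 dB above the instrument's noise
# floor, per the rule of thumb above. `amplitude_db` is a (time, cell)
# array of along-beam signal strength in dB; `noise_floor_db` is the
# instrument-specific noise floor.
def snr_mask(amplitude_db, noise_floor_db, threshold_db=3.0):
    """Return a boolean mask that is True where the data pass the SNR check."""
    snr = np.asarray(amplitude_db) - noise_floor_db
    return snr >= threshold_db

amplitude_db = np.array([[60.0, 45.0, 28.0],
                         [58.0, 44.0, 26.5]])
mask = snr_mask(amplitude_db, noise_floor_db=25.0)
print(mask)  # cells less than 3 dB above the noise floor come out False
```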
However, the range depends on several factors. One important element is the particles in the water. To be able to measure any echoes, the water must contain enough suspended particles to reflect the transmitted signal, and the strength of the scattering will have an impact as well. There is normally more scattering material in the upper layer of the water column, which implies that an upward-looking instrument might have a longer range than an instrument measuring from the surface and down. The range is also affected by the instrument frequency, where lower frequencies generally enable longer ranges. The maximum possible range also depends on the cell size, because the transmit pulse is nearly the same length as the cell size; a larger cell hence contains more data points than a smaller one. The configured range will also depend on the chosen blanking distance (start of profile) and the number of cells. A maximum profiling range is given in the technical specifications of every current profiling instrument, but keep in mind that this is a nominal value; the actual range obtained can be longer or shorter.
Figure 2: Typical amplitude behavior along a profiling range. Left: The amplitude gradually decreases and then reaches the noise floor when the values level off. This applies in situations when the instrument does not detect blockages or boundaries. Right: After a gradual decrease in amplitude the signal strength increases as the signal approaches the surface. This increase in amplitude can also be seen when measuring the seabed or other blockages. The area in which the amplitude increases is affected by sidelobe interference.
Amplitude check
An amplitude quality test should be applied to each beam and each cell. Whenever the amplitude profile deviates from the Sonar equation, you should look more closely at the data. If the amplitude increases with distance in one or more beams, it may indicate a solid boundary such as the surface, the bottom, or an obstruction. The right amplitude profile in Figure 2 is taken from an instrument measuring toward the sea surface. Here you can see that the amplitude at first decreases, but before it reaches the noise floor it increases again until the surface is detected at the maximum amplitude value. A single, unusually high amplitude may indicate a passing object (such as a fish) that reflects more strongly than the water. The amplitude data can also reveal other kinds of occurrences, several of which are presented later in this article.
Correlation
Correlation is a statistical measure of similar behavior between two observables, which in our case is how similar the transmitted signal is to itself at a delayed time (when the signal is received after reflection). Correlation is output in %, where 100% means perfect correlation and 0% means no similarity. The magnitude of the correlation is thus a quality measure of the velocity data, and as the correlation decreases, so does the data accuracy. Correlation decreases with distance from the instrument and establishes the maximum usable profiling range. A commonly accepted correlation threshold is 50%, and the range where the correlation drops below it is approximately where the amplitude drops to the noise floor. When that is not the case, take it as a sign that the data need more careful analysis.
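The 50% correlation threshold and the 3 dB SNR rule are often applied together as a simple screening mask. A hedged sketch follows, with illustrative variable names and data; real Nortek exports name these fields differently depending on software and instrument.

```python
import numpy as np

# Combine the 50 % correlation threshold with the 3 dB SNR check into a
# single quality mask for a velocity profile.
def quality_mask(correlation_pct, snr_db,
                 corr_threshold=50.0, snr_threshold=3.0):
    """True where both correlation and SNR pass their thresholds."""
    return (np.asarray(correlation_pct) >= corr_threshold) & \
           (np.asarray(snr_db) >= snr_threshold)

corr = np.array([95.0, 72.0, 48.0, 30.0])       # % per cell (illustrative)
snr = np.array([30.0, 15.0, 4.0, 1.0])          # dB per cell (illustrative)
velocity = np.array([0.52, 0.48, 0.95, -1.30])  # m/s per cell

# Replace failing cells with NaN rather than deleting them, so the
# profile keeps its shape for later plotting and averaging.
clean = np.where(quality_mask(corr, snr), velocity, np.nan)
print(clean)  # -> [ 0.52  0.48   nan   nan]
```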
Velocities
ADCPs measure water current velocities by utilizing the Doppler effect. The output velocities can be given in three coordinate systems: BEAM, XYZ, and ENU. BEAM coordinates provide velocities along the beams, while velocities in ENU and XYZ coordinates are typically split into a horizontal part (the EN- or XY-plane) and a vertical part (along the Up or Z axis). The horizontal velocities are, in turn, often given as one direction and one magnitude. Note that in XYZ coordinates the velocities are truly horizontal and vertical only when the instrument is leveled.
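For ENU data, converting the East and North components into a speed and a "going to" direction is a common first step when checking velocities. A small sketch of that conversion, using the usual oceanographic convention of degrees clockwise from north:

```python
import numpy as np

# Compute horizontal speed and direction from East/North velocity
# components. Direction follows the oceanographic "going to" convention:
# degrees clockwise from north, so a current flowing due east reads 90 deg.
def speed_and_direction(east, north):
    speed = np.hypot(east, north)
    direction = np.degrees(np.arctan2(east, north)) % 360.0
    return speed, direction

east, north = 0.30, 0.30  # m/s (illustrative values)
spd, drn = speed_and_direction(east, north)
print(f"speed = {spd:.2f} m/s, direction = {drn:.0f} deg")  # 0.42 m/s, 45 deg
```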
When deploying instruments, the user often has some idea of what velocities to expect. The magnitude and direction of velocities vary a lot from site to site. The more familiar one is with the measurement area, the better the basis for using velocity readings as part of data quality control, because unexpected values can indicate that something has happened.
The averaged vertical velocities in the ocean are usually close to zero. Whenever this is not the case, it may be because something is disturbing the measurements. Among other things, elevated vertical velocities can be caused by sidelobe interference, flow disturbances from blockages, turbulence, and movement of the instrument itself.
The horizontal velocities can be both low and high, depending on the site. Determining whether a velocity reading is realistic requires knowledge of the area, but looking at velocity profiles can nevertheless provide important information. For instance, if the whole profile shows fairly high speeds except for a few adjacent cells where the velocity is close to zero, something might be interfering in these cells. A stationary blockage would produce a reading of zero at its location even though there are strong currents around it. Sidelobe interference and drag down are two common effects that can produce abnormally high velocities.
Trustworthy heading readings are important for obtaining proper directions in ENU coordinates. If you believe something is wrong with your velocity directions, it may be due to wrong heading data. One example is when the currents appear to go towards the west, but you expect them to go towards the south based on the topography of the area. Any magnetic material around the instrument can disturb the compass. Compass calibration is hence important in order to obtain reliable heading data and, in turn, correct directions in ENU coordinates.
Quality check of the sensors
It is important to check the state of the instrument throughout a measurement series, and the sensors give an indication of this. Always perform a function test before every deployment. This verifies that the instrument works as expected and avoids spending resources on deploying instruments with malfunctioning sensors. Function testing is also a good first step in troubleshooting if something abnormal is discovered in the data, as it may reveal whether a sensor has failed. If nothing is uncovered, the next step may be to look into the measurement conditions and possible external factors.
Sensors to check:
- Pitch
- Roll
- Heading
- Temperature
- Pressure
- Battery
Pitch and roll
WHAT TO CHECK: Look at the values of pitch and roll. Both should always be close to zero (except roll for down-looking Signatures with AHRS) and only show small variations. Any clear and large changes may indicate that something has happened to the instrument during the measurement series.
Pitch and roll, collectively referred to as tilt, are among the first things to check. They tell whether the instrument is level or not. All Nortek instruments have a tilt sensor that has been calibrated during production. Values of pitch and roll are given in degrees; pitch is rotation about the Y-axis, and roll is rotation about the X-axis. For an instrument looking straight upward, pitch and roll are defined to be 0°. When an instrument is turned over and points straight downwards, the pitch will still be 0°, while the roll is 0° for traditional liquid-level and solid-state accelerometer tilt sensors and +/- 180° when an AHRS is installed.
Ideally, instruments should always have 0° (or 180°) of tilt during a deployment. This is often difficult to achieve, especially with buoy-mounted instruments that move with the water. Bottom-mounted frames on uneven ground and strong currents or waves displacing an instrument are also common causes of tilt. To compensate for an uneven sea floor, it may be beneficial to use a gimbal. The gimbal uses gravity to keep the instrument upright, so a counterweight underneath the instrument is necessary. Extra weight on the frame can also, to some extent, reduce the risk of displacement by strong currents or waves. Whatever the cause, tilt leads to several unwanted consequences, such as changed cell depths, a thicker sidelobe interference layer, and changed directions of the measured velocities. To correct depth errors, it is possible to remap the cells in a process called bin mapping; this is an option in Nortek's post-processing software called "Bin mapping" or "Remove tilt effects". Any layer experiencing sidelobe interference should be discarded, as the received signal is contaminated and there is no way to isolate the bias effect. A possible increase in the sidelobe effect near a boundary will further reduce the effective range. Measured velocity directions change with tilt in both BEAM and XYZ coordinates. This does not concern the ENU coordinate system (given valid heading readings), because it is relative to the Earth's magnetic field and not to the instrument itself. Read more about the consequences of tilt in the following FAQ: What happens when my instrument is tilted and what actions should I take?
Values of pitch and roll are used when converting to ENU coordinates and when bin mapping.
Figure 3 shows an example of how a sudden, excessive tilt may appear in the data. Here you can see that the average signal strength is affected by the tilting. The big change in pitch shows that the instrument is almost completely turned upside down. Bad data due to tilt are not always as visually obvious as this, especially when the tilt is less extreme, so you must always take a closer look at the data in periods when pitch and roll are significant and vary a lot. For tilt over 5 degrees, it is recommended to carry out thorough quality control and consider reprocessing the data. When the tilt exceeds 20-25 degrees, the range of the instrument is reduced and the data begin to degrade in ways that are not recoverable. Note, however, that these limits should be considered guidelines; it is the end user's responsibility to apply proper quality control to all data and to consider tilt effects for each individual deployment. If you see high tilt, much variation, or distinct changes in tilt, you should also take the measurement area and conditions into consideration and ask yourself: "What can have caused this?"
Figure 3: An example of how extreme tilt can look in the data for an instrument measuring the sea surface.
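Pitch and roll can be combined into a single tilt angle and screened against the guideline limits above. A minimal sketch, assuming small independent rotations about the X and Y axes; the thresholds are the guidelines from this article, not hard limits:

```python
import numpy as np

# Combine pitch and roll into a single tilt angle and classify it against
# the guideline limits discussed above (5 deg and 20-25 deg).
def combined_tilt(pitch_deg, roll_deg):
    p, r = np.radians(pitch_deg), np.radians(roll_deg)
    return np.degrees(np.arccos(np.cos(p) * np.cos(r)))

def tilt_flag(pitch_deg, roll_deg):
    tilt = combined_tilt(pitch_deg, roll_deg)
    if tilt > 20.0:
        return tilt, "bad: data degrade in ways that are not recoverable"
    if tilt > 5.0:
        return tilt, "suspect: apply thorough QC, consider bin mapping"
    return tilt, "good"

print(tilt_flag(pitch_deg=2.0, roll_deg=3.0))    # ~3.6 deg -> good
print(tilt_flag(pitch_deg=10.0, roll_deg=12.0))  # ~15.6 deg -> suspect
print(tilt_flag(pitch_deg=20.0, roll_deg=15.0))  # ~24.8 deg -> bad
```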
Heading
WHAT TO CHECK: See if the heading seems reasonable, taking into account the measurement conditions and deployment setup.
The heading indicates an instrument's orientation relative to the magnetic north pole. In practice, it works like a traditional mechanical compass with the X-axis as the magnetic needle. Output data is given in degrees and corresponds to the readings on a compass. Together with pitch and roll it tells the overall orientation of an instrument.
There are no "right" values of the heading unless defined by the user. It simply states which direction the X-axis is pointing. The heading can indicate whether there has been much movement during a deployment. Buoy-mounted instruments will naturally have some heading variations, but for a bottom-mounted deployment, the reading should be steady. A distinct change of heading could be a result of something displacing the mount or of magnetic objects nearby interfering. A shift in heading also means that the BEAM and XYZ coordinate systems are shifted; hence, the direction of velocity vectors in these systems will vary within one measurement series. The velocity directions given in ENU coordinates will not be altered by a varying heading, as they are not relative to the orientation of the instrument. (Read more about the different coordinate systems here: What are the different coordinate systems and how are they defined?) But if a change of heading is due to a relocation of the instrument, the measurement area has also changed, and this cannot be corrected. On the other hand, currents vary slowly over horizontal displacements, so data following a small change of position might still be usable - but this must be assessed on a case-by-case basis.
Heading data are used when converting to ENU coordinates, so accurate heading values are necessary if current data are to be represented in this coordinate system. Knowing the heading can also be beneficial if one wants to orient the XYZ velocities relative to axes or reference points in the immediate area. If you intend to use the compass, make sure to perform a compass calibration just before submergence. This should be done with all magnetic materials attached and fixed relative to each other, just as they will be when the measurements are taken. Possible offsets can come from two sources: hard iron and soft iron. Also, be aware that a battery change can shift the magnetic field around an instrument; for this reason, you should perform a compass calibration each time you change batteries. When components disturb the magnetic field near an instrument, the consequence is unrealistic heading values and, in turn, wrong velocity directions in ENU coordinates. For Signature instruments, it is possible to recalibrate the compass in Ocean Contour after deployment if the instrument was rotated 360° while measuring (typically on its way down or up).
Temperature
WHAT TO CHECK: Confirm that the temperature readings are realistic, without too much variation.
Temperature is measured by a thermistor (a temperature sensor) located inside the instrument head and reported in degrees Celsius. The main reason for measuring temperature is to calculate the speed of sound, which in turn is used to convert the time between transmitted and received signals into distances. Measuring the correct temperature is therefore crucial for determining range and cell locations.
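To illustrate why temperature matters, the sketch below estimates the speed of sound with Medwin's (1975) simplified empirical formula; this is a sketch of the sensitivity involved, not necessarily the exact formula a given instrument applies internally.

```python
# Approximate speed of sound in seawater using Medwin's (1975) simplified
# empirical formula, valid for roughly 0-35 deg C, salinity 0-45 ppt, and
# depths below 1000 m.
def sound_speed(T, S=35.0, z=0.0):
    """T: temperature (deg C), S: salinity (ppt), z: depth (m) -> c (m/s)."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# A 1 deg C error near 10 deg C shifts the sound speed, and hence all
# computed ranges and cell locations, by roughly 0.2 %:
print(sound_speed(10.0), sound_speed(11.0))  # ~1490.0 vs ~1493.5 m/s
```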
The water temperature varies with salinity, latitude, and depth. The position of an instrument during deployment therefore determines which temperatures are realistic. There may also be periodic differences throughout the day, seasonal changes, or variations due to external factors. Unrealistic temperature measurements may indicate that something is wrong, either with the sensor itself or with the measurement conditions. A sudden change in temperature readings can be caused by something blocking the sensor, such as organisms, biological material, or rubbish floating in the water. If the blockage eventually disappears, the temperature will return to the expected values because the sensor then measures the water temperature again (and not the object blocking it).
Pressure
WHAT TO CHECK: Confirm that the pressure readings are as expected. If the approximate deployment depth is known, it can be compared to the measured pressure and checked to see if they match, as 1 dBar corresponds to approximately 1 meter of seawater.
Pressure is measured during every deployment and reported in dBar. To measure the hydrostatic pressure at the instrument, that is, the pressure exerted by the water above due to gravity, a pressure offset should be applied just before deployment to correct for the local atmospheric pressure at the surface. Pressure readings are not particularly important for the quality of current measurements, but they provide an estimate of instrument depth.
The pressure depends on the depth and the density of the water: the deeper the instrument and the greater the density, the higher the pressure. In addition, the pressure is proportional to the gravitational acceleration, which differs slightly across the Earth. The tide will produce regular variations of pressure for submerged instruments. A typical cause of irregular or excessive changes of pressure is drag down, which is discussed later.
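As a quick plausibility check, pressure can be converted to an approximate depth. A sketch assuming a typical seawater density; the 1 dBar ≈ 1 m rule from above falls out of this directly.

```python
# Estimate instrument depth from hydrostatic pressure. The assumed density
# (1025 kg/m^3) and gravitational acceleration are typical values; both
# vary slightly with location.
def depth_from_pressure(p_dbar, rho=1025.0, g=9.81):
    """p_dbar: hydrostatic pressure (dBar) -> approximate depth (m).
    1 dBar = 1e4 Pa; depth = P / (rho * g)."""
    return p_dbar * 1.0e4 / (rho * g)

print(depth_from_pressure(50.0))  # ~49.7 m, close to the 1 dBar ~ 1 m rule
```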
Battery
WHAT TO CHECK: Confirm that the battery voltage is above the threshold value stated in the technical specifications.
The instrument won't measure if the supply voltage is too low. This includes the measurements of the other sensors as well, so an insufficient power supply is easily detectable in the rest of the data.
If the power alternates between on/off or above/below the threshold, the data will contain gaps where no measurements have been made. This can be due to cable movement close to the connector at the instrument, or to trouble with the power supply at the other end. Using a Chinese finger trap can be a good measure to reduce the amount of movement close to the connectors. When batteries are drained beyond their capacity for the chosen configuration, the voltage drops below the threshold; the consequence is not gaps in the data set, but that the measurements stop entirely. When external power is supplied through long cables, be sure to consider the voltage drop due to resistance in the cable.
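Power dropouts of this kind show up as gaps in the timestamp series. A minimal sketch for locating them, with a hypothetical averaging interval:

```python
import numpy as np

# Detect gaps in a measurement series caused by an intermittent power
# supply: any spacing between consecutive timestamps that clearly exceeds
# the configured measurement interval indicates missing ensembles.
def find_gaps(timestamps_s, interval_s, tolerance=1.5):
    """Return indices i where the gap between sample i and i+1 exceeds
    `tolerance` times the expected interval."""
    dt = np.diff(np.asarray(timestamps_s, dtype=float))
    return np.where(dt > tolerance * interval_s)[0]

# Hypothetical 600 s averaging interval with one dropout:
t = [0, 600, 1200, 3000, 3600]  # seconds; 1200 -> 3000 is a gap
print(find_gaps(t, interval_s=600))  # -> [2]
```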
Before deployment, check the date stamp on your batteries and be aware that batteries discharge over time. Make sure to store your batteries at the right temperature and humidity. Read more about the storage of batteries in How to store batteries and For how long can batteries be stored before use? You should also do a load test on all batteries before use.
The power supply will not affect the data quality but determines whether a planned measurement is made or not.
Common issues during deployment
There are many things that can happen during a deployment. Some of the most common issues are presented here: beam blockage, tidal burial of transducers, fouling, drag down, acoustic interference, sidelobe interference, low SNR, and unrealistic velocities.
Beam blockage
Any physical object located along one or more beams will affect the data. When a transmitted sound pulse reaches an object and gets reflected, the registered Doppler frequency shift represents the velocity of that object and not of the water. This means that all data influenced by blockages should be discarded. If the blockage only concerns one beam and the data from the other beams are reasonable, one can consider excluding that beam in the post-processing of the data.
There are many things that can block the beams, such as rocks, ropes, buoys, trawl balls, and so on. Keeping the measurement area free of known obstacles is something to be aware of when planning a deployment setup. This includes measures such as placing a release buoy far enough away from the instrument or making sure ropes do not interfere with the transducers.
Figure 4 shows an illustration of beam blockage, where one beam is pointing towards a stone wall and another towards a chain. Data from the measurement cells marked in red are affected by the physical objects and should be discarded. Sound waves don't just stop when they encounter an obstacle; they can propagate past it, and we can get valid velocity measurements further out in the profile. In these situations, it is essential to assess the data quality behind the obstacle and ensure that the SNR and correlation are good enough. If an obstacle reflects or absorbs so much energy that the sound wave's energy further along the beam is substantially reduced, the blockage can also reduce the effective profiling range. Looking at Figure 4 again, we can see that there might be good data beyond the chain, but once the left beam meets the stone wall, the rest of the profile is blocked. One more thing to note from Figure 4: none of the obstructions are located directly above the instrument. The beam angles mean that the measurement area widens with distance from the instrument, so objects to the side of the instrument can still interfere. Keep the beam angles in mind when considering possible blockages.
Figure 4: Illustration of possible beam blockage.
A blockage is visible in the amplitude. An example of this is shown in Figure 5. The amplitude readings in space and time in Figure 5-a reveal a blockage in the first half of the measurement period, which seems to disappear in the second half. Most objects are stronger reflectors than the passive particles we rely on to calculate currents. The amplitude profile will thus deviate from the Sonar equation (Figure 2) and have spikes at the location of the obstacle, as seen in Figure 5-b. It is also possible to notice an obstacle in the velocity measurements. For instance, if the obstacle is stationary, the velocity in the area covered by the obstacle will be zero, even though the velocities before and after it are not. Due to flow disturbances caused by the physical obstruction, the velocity readings near it can also differ from what is expected.
(a) Beam blockage in amplitude reading for space and time
(b) Beam blockage in amplitude profile
Figure 5: A blockage can be discovered in amplitude data. (a) shows how a blockage may be visible in the overall amplitude, while (b) gives an example of what amplitude profiles can look like.
Tidal burial of transducers and fouling are types of blockages that will be discussed in the two next sections.
Tidal burial of transducers
Figure 6: Tidal burial of transducers.
The very long-period waves that move across the ocean due to the gravitational forces exerted on the Earth by the moon and sun can cause tidal burial of the transducers. This is most relevant for bottom-mounted instruments covered by material from the seabed, such as sand and mud. The result is periodic changes in data quality that follow the approximately two high and two low tides throughout the day. Figure 6 shows how this can be seen in the amplitude. For this data set, the signal strength is very low when the sea level rises, which is when the transducers are buried. The sediments covering the transducers act in the same manner as other beam blockages, and the signal strength further out in the profile is severely reduced compared to when the transducers are uncovered.
The extent of the blockage and the resulting data quality determine whether the data can be used or not. There is no way to correct for the effects of tidal burial in post-processing. If this is a recurring problem, an evaluation of the deployment setup is a good place to start. Maybe the solution is to use another frame that holds the instrument higher above the seabed, to replace the frame with a subsurface buoy, or to move the whole mount to another site.
Fouling
When an instrument is deployed for a long time, it is normal for organisms to start growing on it. Growth on the transducers is a form of blockage that gradually increases. It may appear as a gradual decrease in amplitude with time, as presented in Figure 7. It can still be possible to get valid velocity measurements beyond the fouling, at least for a while, before the growth becomes massive and absorbs too much of the transmitted signal. As part of the data analysis, you have to assess the amplitude and/or correlation to determine the data quality. Data from the cells closest to the instrument should not be used if they include the fouling (depending on the blanking distance and fouling thickness), because the instrument then measures the velocity of the fouling and not of the water. The type and prevalence of fouling depend on where in the world the deployment takes place. One way to prevent, or reduce, fouling is to use antifouling patches on the transducers.
Figure 7: Fouling on transducers reduces the signal strength gradually.
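A crude way to screen for fouling is to look for a sustained downward trend in the amplitude of the cell nearest the instrument. A sketch with illustrative thresholds (not Nortek defaults) and synthetic data:

```python
import numpy as np

# Fit a linear trend to the amplitude in the cell nearest the instrument.
# A consistent negative slope over days or weeks (rather than a daily
# cycle) is consistent with growth on the transducers.
def amplitude_trend_db_per_day(time_days, amplitude_db):
    slope, _intercept = np.polyfit(np.asarray(time_days),
                                   np.asarray(amplitude_db), 1)
    return slope

# Synthetic 30-day series: amplitude falling ~0.4 dB/day plus noise.
days = np.arange(0, 30)
amp = 70.0 - 0.4 * days + np.random.default_rng(0).normal(0, 0.5, 30)

slope = amplitude_trend_db_per_day(days, amp)
if slope < -0.1:  # illustrative threshold
    print(f"amplitude falling {slope:.2f} dB/day: inspect for fouling")
```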
Drag down
Drag down is when drag forces, often from waves or strong currents, act on a mooring and result in an instrument being dragged down and away from its original position. Since the instrument is moved downwards in the water column, drag down can be discovered as a pressure spike. It is also common to see higher roll and pitch values at the time of displacement. Figure 8 shows an example of how drag down can look in the data for an instrument mounted on a subsurface buoy. You can see the correspondence between the pressure and tilt spikes and the apparent change in speed. Drag down is often noticed as abnormally high speed readings.
Figure 8: Drag down for an instrument mounted on a subsurface buoy. Drag down is typically seen as pressure spikes, tilt spikes, and changes in velocities.
Current measurements based on the Doppler effect measure the relative movement between the instrument and the water. When an instrument is in a fixed position, the measured current velocities represent the water's motion exclusively, but there is no way to distinguish between the movement of the instrument and that of the water when the instrument itself is moving. The consequence of drag down is hence invalid current data throughout the entire profile.
Drag down typically occurs when nearby currents get so strong that the mooring is unable to withstand the forces. An effective way to reduce the risk of drag down is to minimize the volume that the current can take hold of, for instance by using a thinner rope, or a shorter rope if possible. See the FAQ Mounting Guideline for more.
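Since drag down appears as pressure spikes, a simple running-median filter can flag candidate events for closer inspection. A sketch with site-dependent, illustrative parameters; the slow tidal variation is mostly absorbed by the running median, but should be kept in mind when choosing the threshold:

```python
import numpy as np

# Flag possible drag-down events: ensembles where the pressure rises well
# above its running median, indicating the mooring has been pushed deeper.
def dragdown_flags(pressure_dbar, threshold_dbar=2.0, window=25):
    p = np.asarray(pressure_dbar, dtype=float)
    pad = window // 2
    padded = np.pad(p, pad, mode="edge")
    median = np.array([np.median(padded[i:i + window])
                       for i in range(p.size)])
    return p - median > threshold_dbar

# Synthetic series: steady 20 dBar with a short 5-7.5 dBar excursion.
p = np.concatenate([np.full(50, 20.0), [26.0, 27.5, 25.0], np.full(50, 20.0)])
print(np.where(dragdown_flags(p))[0])  # -> indices of the pressure spike
```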
For some users, the main purpose of ADCP measurements is to determine the maximum speed in an area. This can unfortunately be difficult to achieve if the deployment setup is susceptible to the drag-down effect.
Acoustic interference
Acoustic interference occurs when sound waves from different sources interact. An ADCP can experience acoustic interference by detecting sound waves from external sources, from other instruments on the same rig or nearby, and from reflected signals that originate from the instrument itself. As for the latter, signals can be reflected from all sorts of nearby objects and structures. Even if only one beam's signal is reflected, all beams can experience interference, because the reflected signal is not necessarily reflected back the way it came and can propagate into the measurement range of all beams. Sidelobe interference is a type of acoustic interference that occurs when signals are reflected from boundaries; this phenomenon is discussed in the next section.
The amplitudes of all interacting sound waves combine to form a new wave, according to the principle of superposition. The new wave generally has new properties, such as a changed frequency or amplitude. Any detected frequency that differs from what was emitted is interpreted as a Doppler shift, meaning that the instrument will register it as water motion, which affects the resulting velocity output. Figure 9 shows an example of how acoustic interference, caused by a nearby instrument, can appear in amplitude data.
Figure 9: Acoustic interference.
There is no way in post-processing to separate the bias from acoustic interference, but some measures can be taken to reduce the risk of it. Regarding multiple instruments in an area, one measure is to keep them far enough apart that the measurement areas don't overlap. "How far apart do they need to be to avoid interference?" is, however, a question without an exact answer. It depends on several factors, such as the instrument frequency, beam angle, bandwidth, and scattering conditions. Broadband instruments are, for instance, more sensitive to acoustic interference than narrowband ones. If one broadband instrument and one narrowband instrument measure in the same area, it is possible that the broadband instrument detects signals from the narrowband instrument, but not the other way around. Regardless, the safest option is always to alternate the sampling regimes, which is called staggering. This means that only one instrument measures at a time, while the other instruments are in sleep mode. Since ADCPs usually don't ping constantly, this is possible for many measurement series. One example of staggering is shown in Table 1, which gives the start times for four different instruments (a small script for generating such a schedule is sketched after the table). They start at two-minute intervals, which means that the averaging interval can be up to 120 seconds and still avoid interference. The second profile interval starts 10 minutes after the first, giving all instruments time to complete the first averaging interval in advance (with this setup, instrument 4 can measure for four minutes, or, if all measure for a maximum of two minutes, a fifth instrument could be included without increasing the risk of interference). When it comes to external sources, such as echosounders, hydrophones, sediment sensors, etc., it is wise to get an overview of all known acoustic instruments nearby and preferably stagger them as well. Familiarizing yourself with the measurement area is also beneficial for awareness of obstructions that could reflect signals.
Table 1: An example of how to stagger instruments.
| Instrument ID | Start time (first profile interval) | Start time (second profile interval) |
| --- | --- | --- |
| 1 | 08:00 | 08:10 |
| 2 | 08:02 | 08:12 |
| 3 | 08:04 | 08:14 |
| 4 | 08:06 | 08:16 |
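The sketch below generates a staggered schedule like Table 1; the four instruments, two-minute offset, and ten-minute cycle are the values from the table, not fixed requirements.

```python
from datetime import datetime, timedelta

# Generate a staggered start schedule like Table 1: each instrument starts
# `offset` after the previous one, and the whole pattern repeats every
# `cycle`. Each averaging interval may then last up to `offset` without
# overlapping a neighbor's.
def stagger_schedule(n_instruments, first_start, offset, cycle, n_cycles=2):
    schedule = {}
    for i in range(n_instruments):
        schedule[i + 1] = [first_start + i * offset + c * cycle
                           for c in range(n_cycles)]
    return schedule

start = datetime(2024, 1, 1, 8, 0)  # arbitrary example date
for inst, times in stagger_schedule(4, start, timedelta(minutes=2),
                                    timedelta(minutes=10)).items():
    print(inst, [t.strftime("%H:%M") for t in times])
# 1 ['08:00', '08:10'] ... 4 ['08:06', '08:16'], matching Table 1
```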
Please keep in mind that even one instrument facing down and another facing up may interfere with each other's signals, as the signals may be reflected at boundaries such as the surface and the seafloor. This is the case for the data in Figure 10.
Figure 10: Acoustic interference for two instruments in the same mooring, in which one is measuring upwards and the other downwards.
Sidelobe interference
Sidelobe interference is an interference phenomenon that prevents us from measuring currents close to a remote boundary. Sidelobes can be described as energy that "leaks out" of an acoustic beam. These weak signals are sent in directions other than the main lobe. Sidelobe signals that reach a boundary before the main lobe does can cause sidelobe interference. This is because boundaries provide a much stronger echo than the suspended particles in the water, so the received signals from cells near the boundary are dominated, and hence contaminated, by the sidelobe signals. Figure 11 illustrates the sidelobe interference layer when measuring toward the surface. Other boundaries can be the seabed or physical objects.
Figure 11: Illustration of sidelobe interference for an instrument measuring the sea surface. The effective range (R) is less than the distance to the sea surface (D).
Sidelobe interference can be spotted in amplitude profiles, as shown in Figure 2. When the signal approaches a boundary, the signal strength increases until it reaches a maximum value, which indicates the boundary. Sidelobes interfere in the area of increase. Sidelobe interference will typically result in a bias toward the velocity of the interfering boundary. For the bottom, this is a bias toward zero (unless the bottom is moving). When measuring toward the sea surface, the bias will depend on the sea state and surface wind conditions. Sidelobe interference is often noticed as high velocities in the upper layer. A tip when analyzing data is to check the vertical velocity (Z or Up) extra carefully in this area: it should typically be close to zero, and if not, this might be an effect of interference.
The following is an approximate equation illustrating the constraint of near-boundary contamination and states where the sidelobe interference layer possibly starts.
R = D * cos(α)
All variables in the formula are shown in Figure 11, where R is the effective range (the range without sidelobe interference), D is the distance from the instrument to the boundary, and α is the transducer angle. Roughly speaking, sidelobe interference can affect up to 10% of the velocity profile between the instrument and the boundary for slanted beams. A vertical beam (as on the Signature 500/1000) will not experience sidelobe interference, since it points directly at the surface (α = 0° gives R = D). However, this applies to instruments that are leveled; the impact increases with tilt. If the sidelobe interference extends partly into a cell, the whole cell should be discarded, because we cannot distinguish which part of the cell is affected.
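The formula above translates directly into an estimate of how many cells are free of sidelobe interference. A sketch with a hypothetical setup; the rule of discarding any partly affected cell is implemented via the floor division.

```python
import numpy as np

# Estimate the sidelobe-free range R = D * cos(alpha) and the index of the
# last usable cell. Any cell that the interference layer partly overlaps
# is discarded whole, hence the floor division.
def last_good_cell(distance_to_boundary, beam_angle_deg,
                   blanking, cell_size):
    R = distance_to_boundary * np.cos(np.radians(beam_angle_deg))
    n = int((R - blanking) // cell_size)  # whole cells that fit inside R
    return R, n

# Hypothetical setup: boundary 40 m away, 25 deg slanted beams,
# 0.5 m blanking distance, 1 m cells.
R, n = last_good_cell(40.0, 25.0, 0.5, 1.0)
print(f"effective range {R:.1f} m -> keep cells 1..{n}")  # ~36.2 m, 35 cells
```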
Further, the extent to which sidelobe interference contaminates the velocity measurements is a function of the boundary conditions, the scattering return strength from the water, and the acoustic properties of the transducers. High scattering strength at the boundary combined with low scattering return strength from the water creates a greater potential for the sidelobe reflection to dominate the signal. With strong backscatter from the water, on the other hand, sidelobe interference may be unimportant.
Unfortunately, there is no way in post-processing to separate the bias effect from the sidelobes. However, there are measures that can be taken in advance of a deployment to reduce the impact. One is to move the instrument closer to the boundary (10% of a short profile is less than 10% of a long profile). Reducing the cell size can also help, as it increases the spatial resolution. Also, make sure to keep the instrument as level as possible.
Low SNR
To obtain good data, it is crucial that the received signal is strong enough. As a rule of thumb, we recommend that it be a minimum of 3 dB higher than the instrument's noise floor (a 3 dB SNR threshold). An essential element in achieving this is a sufficient concentration of scattering material in the water. Sometimes the measurement conditions make this difficult to fulfill; for instance, the amount of scattering particles in polar areas during winter is small, which can make it very challenging to obtain good data under these circumstances. There are, however, some measures that can help. One is to use the highest possible power level (High or 0 dB) so that the transmitted pulse contains more energy. This requires higher power consumption and possibly a shorter deployment; a shorter averaging interval or lower measurement load may be considered to compensate. Increasing the cell size will provide more data within each cell, but reduces the spatial resolution. It is also common to see a daily variation in signal strength, because zooplankton follow a movement pattern with a circadian rhythm, swimming up in the water column at night and down during the day. This is called diel vertical migration. An example of how this can appear in amplitude data is given in Figure 12. Whether the lowest signal strength occurs during the night or the day depends on the orientation of the instrument: an instrument pointing up will typically have the best SNR values during the night, and the opposite holds for an instrument looking down. Whether data needs to be discarded during the times of lowest signal strength depends on how low it is, so assessing the SNR is important in these situations. The influence of diel vertical migration varies with the deployment site; areas with many other particles in the water are less dependent on zooplankton for good data. Some measurement series have a combination of seasonal variation and diel vertical migration of zooplankton and can, as a result, have periods where the lower signal strength is sufficient and other times when it is not.
With low SNR, the standard deviation increases as most of the signal is just noise. The velocity readings are not valid and can fluctuate due to the random nature of noise.
Nortek's post-processing software offers easy methods to discard data due to low SNR. In the processing settings, a low SNR threshold can be chosen. 3 dB is the default.
Figure 12: Daily variation in signal strength due to diel vertical migration of zooplankton.
Unrealistic velocities
The reasons for unrealistic velocities can be many, and several are mentioned above. Unexpected velocity values near a boundary can be due to sidelobe interference. A blockage will affect the velocity measurements at its location, but also near it, due to flow disturbances. Drag down leads to motion-induced bias, as the instrument itself is moving. With low signal strength (and hence low SNR), the noise is considerable and the velocities can be all over the place. Tilt also has an impact, as it means that the instrument measures at different places and angles than intended. If the compass was not properly calibrated before deployment, that could be the reason for unrealistic velocity directions in ENU coordinates. Whenever you see unexpected velocities in a data set, it is important to go through all parameters in an attempt to find the cause.