Sunday 6 July 2014

Comparative tank test challenges: Part II - Reducing systemic errors & bias


Part I discussed some of the inherent difficulties with comparing different concepts of wave energy converter using tank tests. Some of these problems we are stuck with; others can be avoided if we know about them in advance.

The appropriate choice of tank, the use of standardised quality procedures, and having the same impartial observer oversee tests and ensure the procedures are implemented will all help reduce errors.

Choosing the right wave tank

An appropriate tank should produce highly repeatable waves that closely approximate the requested spectra. For this reason, it is recommended that force-feedback wave paddles are used. Good tank calibration is also important: it is useful to know what waves you get, rather than what waves you've asked for. The tank of course must be correctly sized for the waves, and this may feed into the decision on model scale. To limit difficulties associated with scaling, it is desirable to have as big a model as the tank allows, whereas to reduce testing costs, it is desirable to have as small a model as the tank allows. The frontal width of the model (with respect to the waves) should be small compared to the tank width to avoid tank blockage effects. These effects can occur even when the model is not impacted by boundary reflections: e.g. for a bottom-hinged pitching flap, the flap width should be less than a fifth of the tank width to keep the error due to blockage below 10%.
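
To make the blockage rule of thumb concrete, here is a minimal Python sketch (my own illustration, not any standard tool); the 1/5 flap-width threshold comes from the figure quoted above, and the example dimensions are hypothetical.

```python
# Minimal blockage-ratio check (illustrative sketch, not a standard tool).
# The 0.2 threshold encodes the rule of thumb above for a bottom-hinged
# flap: flap width < 1/5 of tank width keeps blockage error below ~10%.

def blockage_ratio(model_width_m: float, tank_width_m: float) -> float:
    """Frontal width of the model as a fraction of the tank width."""
    return model_width_m / tank_width_m

def flap_blockage_ok(flap_width_m: float, tank_width_m: float,
                     threshold: float = 0.2) -> bool:
    """True if a bottom-hinged flap satisfies the rule of thumb."""
    return blockage_ratio(flap_width_m, tank_width_m) < threshold

# Hypothetical example: a 2 m wide flap in a 12 m wide tank.
print(flap_blockage_ok(2.0, 12.0))  # True: ratio ~0.17 < 0.2
```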

What counts as an appropriate tank can also depend on the types of models to be tested:
  • If models tested are large enough for boundary reflections to be an issue, a larger tank should be used; if one is not available, a practical mitigation is to line the tank walls without paddles with wave-absorbing surfaces.
  • If models tested have very peaky performance, then a tank capable of generating spectra with bandwidths representative of the proposed site is required. Artificial spectra such as JONSWAP will give over-estimations of power capture, with bigger over-estimations for the most narrow-banded (peakiest) concepts; see the sketch after this list.
  • If the performance of a concept depends on wave incidence, then the tank should be able to generate long-crested waves from different incidences, and short-crested waves with a directional spreading representative of the proposed site. The ability to quickly rotate the model in the tank can be helpful if the directional capability of the tank is limited.
  • If the concept has passive weather-vaning, a tank with both waves and currents is required to test the weather-vaning aspect of performance.
  • For comparison of shallow-water and deep-water concepts, a tank with variable depth would be useful.
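
To illustrate the point about spectral shape, the sketch below implements the standard textbook JONSWAP spectral density in Python and scales it to a target Hm0 (my own illustration, not any tank's software). Setting the peak-enhancement factor gamma to 1 recovers the broader Pierson-Moskowitz shape, so comparing the two curves shows where the over-estimation for narrow-banded devices comes from.

```python
# JONSWAP spectral density, scaled to a target Hm0 (illustrative sketch).
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    """JONSWAP spectral density S(f) [m^2/Hz], scaled so Hm0 = hs.

    f     : array of frequencies [Hz], f > 0
    hs    : target significant wave height Hm0 [m]
    tp    : peak period [s]
    gamma : peak-enhancement factor (1.0 gives the Pierson-Moskowitz shape)
    """
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    # Unscaled shape; constant factors are absorbed by the Hm0 scaling below.
    s = f ** -5.0 * np.exp(-1.25 * (fp / f) ** 4) * gamma ** r
    # Scale so that Hm0 = 4*sqrt(m0) matches the requested hs.
    m0 = np.trapz(s, f)
    return s * (hs / (4.0 * np.sqrt(m0))) ** 2

f = np.linspace(0.02, 1.0, 500)
s_peaky = jonswap(f, hs=2.0, tp=8.0, gamma=3.3)  # standard JONSWAP
s_broad = jonswap(f, hs=2.0, tp=8.0, gamma=1.0)  # broader PM-like shape
```

Integrating a device's narrow-banded response against the gamma = 3.3 curve rather than the broader one makes the source of the over-estimation immediately visible.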

Unforeseen errors and some types of systemic bias can be reduced by testing all competing concepts at the same tank using a standardised test program. Where possible, the same waves, model scale, post-processing and analysis techniques, quality control, and test variables should be used for all competing devices.

Standardised quality procedures

It is necessary to apply the same quality procedures to each of the models being tested. A good way to ensure that this happens is to use a series of check-lists. It is useful to have check-lists for putting the model into the tank, changing model set-ups, removing the model, the start of the day, and the start of each run. There may also be check-lists for the end of the day, or for troubleshooting. The check-lists should have contributions from the technicians who manage the tank, from the engineers and technicians who have built the model, and from the testers who will be analysing the data. Hence they will depend upon the types of tests and the experience of the people involved. Here are examples of the types of quality procedures that will typically be considered:

  • All external sensors and assumed values such as damping coefficients could be tested by the same independent specialist tester.
  • The model position in the tank should be similar for all models if possible, and should allow sufficient distance from wave makers, beaches, and other reflective surfaces.
  • Similar mooring arrangements should be used for all models if possible.
  • Similar procedures should be used for ballasting the model and for checking sea-keeping properties such as the metacentric height.
  • The same daily start-up routine should be followed in a particular order, e.g. water-level check, equipment switched on, model position check, sensor check, wave gauge calibration/check, and running of a test wave, if this is the tank house procedure.
  • Wave calibration: the waves should be measured by wave gauges at the position of the model, in the absence of the model. It is preferable to do this just prior to tests so that significant errors (e.g. a coding bug resulting in a wave that is not at all what it should be) can be spotted, and tank time not wasted; a sketch of such a measured-versus-requested check follows this list.
  • The same quality procedures should be used for analysis of similar types of runs for all models being compared. If it is possible to automate selection of the data to be analysed, then this should be done (both for quality and to eliminate tedium). Synchronisation signals from the wavemakers, or from sensors that are recorded in separate logging files, will improve the quality of data selection.
  • It is necessary to run the wave for long enough to allow selection of good-quality data (e.g. post-transient; in reflective tanks, sinusoidal waves can be selected before reflections reach the model). With regular waves, the selected data should span an integer number of wave periods (the trimming step is sketched after this list). With irregular waves generated by superimposing components dictated by a spectrum, it is important to choose data that spans the full repeat period. The repeat period of the wave can be tailored to some extent to the desired duration of the run.
  • During testing, a set procedure should be used for deciding on experimental variables (such as the choice of PTO damping level).
  • A standardised test program should be used for all models being compared.
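
As an illustration of the wave-calibration and data-selection steps above, here is a minimal Python sketch assuming numpy and scipy are available; the helper names and parameters are hypothetical, not tank-house software.

```python
# Illustrative helpers for two quality steps: trimming a regular-wave
# record to an integer number of periods, and computing the measured Hm0
# from a wave-gauge record for comparison with the requested value.
import numpy as np
from scipy.signal import welch

def trim_integer_periods(eta, fs, wave_period, settle_time):
    """Drop the start-up transient, then keep the largest whole number
    of wave periods that fits in the remaining record.

    eta         : surface elevation samples [m]
    fs          : sampling frequency [Hz]
    wave_period : regular-wave period [s]
    settle_time : transient to discard at the start of the run [s]
    """
    start = int(round(settle_time * fs))
    samples_per_period = wave_period * fs
    n_periods = int((len(eta) - start) / samples_per_period)
    stop = start + int(round(n_periods * samples_per_period))
    return eta[start:stop]

def measured_hm0(eta, fs):
    """Spectral significant wave height Hm0 = 4*sqrt(m0) from a
    wave-gauge elevation record eta [m] sampled at fs [Hz]."""
    f, s = welch(eta, fs=fs, nperseg=min(len(eta), 4096))
    m0 = np.trapz(s, f)
    return 4.0 * np.sqrt(m0)
```

The trimming step keeps the power averages comparable across models; the Hm0 check catches gross wave-generation errors before tank time is wasted.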

While a standardised test program will reduce the impact of errors, by ensuring any errors present apply to all concepts, it will not completely eliminate bias due to these errors. For example, errors due to waves radiated towards reflecting boundaries will result in a bias that favours larger device models. As discussed in Part I, a standardised test program could also introduce other forms of systemic bias: practical restrictions on the choice of tests and metrics could result in some concepts being more favourably represented than others.

Given these inherent difficulties, the best advice I can think of for anyone designing a standardised test program or test guidelines is to consult staff of existing wave energy converter developers who have overseen both tank tests and sea trials. They would be able to give specific advice on which aspects of performance in real seas might be missed in tank testing, and on the specific test requirements for particular types of concept.


Image credit:

'Best dressed raft' in Inverness 2011 raft race by Lifespan Inverness. Blogger's own photo.
