Monday 26 August 2013

Comparative tank test challenges: ...... Part I ...... systemic errors and bias




A US funding body (the Wind and Water Power Technologies Office) recently called for advice on an innovation prize challenge for WECs. The proposed approach was tank-testing of ten short-listed concepts. I was very encouraged by this proposal, and it got me thinking about the practical challenges of comparing ten very different concepts using tank tests. There is a risk that the concept which does best in the tank may not be the concept which does best at sea. The reasons for this include:
  • scale and conditions insufficient to identify full-scale sea-trial problems
  • systemic tank testing errors
  • systemic bias due to choice of test program
  • systemic bias due to practical restrictions on test program
  • practical difficulties associated with direct comparison of different types of device
  • bias due to non-blind testing

Part I (this article) will address all but the last point in this list. Part II will consider ways of reducing systemic errors and bias. Part III will discuss ways of reducing bias due to non-blind testing.


Uncertainties inherent in tank testing


There is no evidence that the tank-testing and numerical models available to academia give a reliable estimate of the cost of energy (CoE) of first arrays. All the anecdotal evidence points to an overestimation of power capture. Even the early economic assessments of devices in the UK based on tank test results [Thorpe 1999] predicted that on-shore oscillating water columns (OWCs) were among the cheapest technologies. Several have now been built, and performance has been significantly lower than expected. Such is the uncertainty of tank test and numerical modelling results that there is still an open debate about what an economic WEC might look like.

Pelamis are widely regarded as the device developer with the most advanced hydrodynamic models and tank testing expertise. In 2004, when they installed their P1 at EMEC, the performance measured was about half that expected, and the extreme motions about double. In Richard Yemm's 2010 Peaks and Troughs lecture (in a nutshell here) he described the sources of the uncertainties in pre-sea-trial modelling: component under-performance [31:10], non-linear behaviour not included in numerical models [31:00], and an industry-standard [37:35] tank-testing method used to estimate peak loads, which was found not to be suitable for wave power [39:10].

It is reasonable to expect that any WEC assessment team starting from scratch might experience similar levels of uncertainty.


Scale and conditions insufficient to identify full-scale component problems


One source of uncertainty in predicting performance of a first full-scale prototype is variability and under-performance of off-the-shelf components. For example, the dampers used in the EMEC P1 [31:10] did not provide the level of damping expected. The level of damping determines power captured, so knowing its exact value is extremely important for efficiency, load management, and control of the ratio of pitch to yaw rotations at each joint.
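
As a rough illustration of how directly the damping level sets the power figure, consider the standard linear result: mean absorbed power is half the damping coefficient times the square of the velocity amplitude. The numbers in this sketch are hypothetical, and in reality the motion amplitude itself also depends on the damping, which is exactly why its value must be known accurately:

    # Mean power absorbed by a linear damper in sinusoidal motion.
    # B and V are hypothetical illustrative values.
    B = 5.0e6                    # damping coefficient, N*m*s/rad
    V = 0.1                      # joint velocity amplitude, rad/s
    mean_power = 0.5 * B * V**2  # time-average of B*v(t)**2
    print(mean_power)            # 25000.0 W, i.e. 25 kW

At fixed motion amplitude, a 20% error in B is a 20% error in the power estimate.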

Problems specific to full scale components will not be identified with tank-tests. Bench-tests of full scale components will identify many problems, but not those specific to their operation at sea in conjunction with the rest of the system. Different types of devices use different components, so this is one source of uncertainty in comparative testing of distinct concepts.


Scale and conditions insufficient to identify all non-linear behaviour


Non-linear behaviour which manifests in conditions that are not tank tested is an inherent source of uncertainty. For example, only a limited number of tanks are able to model both waves and tidal streams, or waves with components from several directions. It is standard testing practice to test with long-crested waves (all components in the same direction) and in the absence of a tidal stream.
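
To give one concrete linear-theory example of what an absent tidal stream hides: a current Doppler-shifts the waves a device encounters. The sketch below (Python, with a hypothetical 2 m/s opposing stream) solves the deep-water dispersion relation with and without the current:

    import numpy as np

    g = 9.81
    T, U = 8.0, 2.0              # 8 s waves meeting a 2 m/s opposing stream
    w = 2 * np.pi / T            # absolute (tank-frame) frequency

    k = w**2 / g                 # no-current wavenumber as first guess
    for _ in range(200):         # fixed-point iteration on (w + k*U)**2 = g*k
        k = (w + k * U)**2 / g

    print(2 * np.pi * g / w**2)  # wavelength with no current (~100 m)
    print(2 * np.pi / k)         # wavelength against the stream (~64 m)

Shorter, steeper waves mean different loads and different power capture, and any non-linear response to that steepening will never appear in a tank that cannot generate a current.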

Richard Yemm identified non-linear behaviour as a source of uncertainty in the modelling of Pelamis prior to their first EMEC trial [31:00]. The Pelamis team were already in possession of a non-linear numerical model of their device at the time. The reason for the performance being lower than expected was presumably some previously unidentified non-linear behaviour.

Again, concepts that are very different in operation could be expected to experience different non-linear behaviour and have different implementation problems when prototyped at sea. This is an unavoidable source of uncertainty when comparing concepts based on tank trials.


Systemic errors in tank testing


There is always the risk of errors in experimental work. Here are some examples that apply to tank-tests of WECs:

  • Methods which are standard in other fields such as oil & gas may prove unsuitable for wave energy. For example, in Richard Yemm's Peaks and Troughs talk [37:35], he identified that the focussed wave technique had resulted in an underestimation of extreme loads.
  • Errors in sensors and estimated damping levels.
  • Repeatability: the less repeatable the results, the more runs of a particular set-up are required to provide a reliable average. If tank time is limited, repeat runs may be sacrificed.
  • Errors in generated waves: if the tank has not been calibrated, and the waves not checked individually, the power content of the waves could be different to that expected. Unless the wavemakers use force-feedback regulation, the presence of a body that diffracts or radiates waves back to the wavemakers will result in errors in the generated waves. (A minimal spot-check on wave calibration is sketched after this list.)
  • Tank reflections: Many tanks have some reflecting boundaries (usually side walls) with no mechanism for absorbing waves, such as beaches or force-feedback wavemakers. Errors can arise if the waves radiated or diffracted by the model are large enough that, when reflected off the boundary and back onto the model, they contain a significant amount of power. This problem arises when the model is of a similar order of magnitude in size to the tank and radiates or diffracts waves that travel towards the reflecting boundaries. Similar problems can arise if the beach is inefficient at absorbing wave components that excite the model, or if the wavemakers do not have wave-absorbing capabilities.
  • Tank hot-spots: When testing in regular (sinusoidal) waves, standing waves can arise, resulting in an uneven spatial power distribution that is dependent on wave period. Errors can arise if the model or wave probes are located at a hot-spot or cool-spot.
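
On the wave-calibration point, here is a minimal sketch (Python/NumPy) of the sort of spot-check that catches gross errors in generated waves; the variable names and the 5% tolerance are hypothetical:

    import numpy as np

    def significant_wave_height(eta, fs):
        """Estimate Hm0 from surface elevation eta (m) sampled at fs (Hz)."""
        eta = np.asarray(eta) - np.mean(eta)           # remove mean water level
        n = len(eta)                                   # even-length record assumed
        spec = np.abs(np.fft.rfft(eta))**2 / (fs * n)  # one-sided variance density
        spec[1:-1] *= 2                                # fold in negative frequencies
        m0 = np.sum(spec) * (fs / n)                   # zeroth moment = variance
        return 4.0 * np.sqrt(m0)                       # Hm0 = 4*sqrt(m0)

    # e.g. flag a run whose measured Hm0 misses the target by more than 5%:
    # if abs(significant_wave_height(eta, fs) - target_hs) > 0.05 * target_hs:
    #     print("possible wave calibration error")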

Some methods have specific implementation requirements. If these are not adhered to, this will affect data quality. For example, during trials where the steady-state behaviour is of interest, the initial transients should be allowed to decay. Wavemakers also experience transients, so the first few waves reaching the model are not representative. When using waves generated from a reverse DFT of a spectrum, the post-processing should use data that is the exact length of the DFT repeat time. Thus it is important to record enough of the run to allow sufficient data for processing after transients have been removed.
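
A minimal sketch of that windowing step, assuming the settling time and DFT repeat time are known (both are choices the experimenter must make):

    def steady_state_window(record, fs, t_settle, t_repeat):
        """Return exactly one repeat-period of samples, taken after transients."""
        start = int(round(t_settle * fs))    # samples discarded as transient
        length = int(round(t_repeat * fs))   # exact DFT repeat length in samples
        if start + length > len(record):
            raise ValueError("record too short: extend the run or reduce t_settle")
        return record[start:start + length]

An FFT of the returned window then places every spectral line exactly on a frequency of the generating DFT; a window of any other length smears power between lines.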


Systemic bias due to choice of test program


The use of one standardised test program for all concepts could introduce a bias. There are some types of tests or evaluation metrics which could favour particular types of device. For example, a device with a narrow peak of performance, with respect to frequency/period, would do better in peaky (narrow bandwidth) spectra, and would perform poorly in double-peaked or broad bandwidth spectra. In many tanks the use of narrow bandwidth spectra is standard practice, and this could result in an overestimation of performance, particularly for devices with narrow bandwidth performance curves.
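
A toy illustration of this bias, with hypothetical bell-shaped curves standing in for the device's power response and the two spectra:

    import numpy as np

    f = np.linspace(0.05, 0.3, 1000)        # frequency axis, Hz
    df = f[1] - f[0]

    def bell(centre, width):
        return np.exp(-0.5 * ((f - centre) / width)**2)

    response = bell(0.10, 0.010)            # narrow-banded device power response
    peaky    = bell(0.10, 0.015)            # narrow bandwidth spectrum
    broad    = bell(0.10, 0.050)            # broad bandwidth spectrum
    peaky   /= np.sum(peaky) * df           # normalise both spectra to the
    broad   /= np.sum(broad) * df           # same total energy

    capture_peaky = np.sum(response * peaky) * df   # linear-theory mean capture
    capture_broad = np.sum(response * broad) * df   # ~ integral of R(f)*S(f)
    print(capture_peaky / capture_broad)    # > 1: the peaky spectrum flatters
                                            # the narrow-banded device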

The directional sensitivity of WEC concepts varies. Some devices may perform well over a range of wave incidences, while others, such as single-DoF bottom-mounted devices, may have a peaky performance curve with respect to angle of wave incidence. Highly directional devices are likely to perform better in long-crested waves (no directional spreading) than in short-crested waves (directional spreading representative of deep water waves). The behaviour of interest is the response to waves with directional spreading and spectral characteristics representative of the intended location. Testing in short-crested waves introduces yet another test parameter, so it is standard practice to test principally in long-crested waves, even in tanks where short-crested waves are available.
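
As a sketch of what is at stake, here is the fraction of spread wave energy a directional device can access under the commonly used cos^2s spreading function (the acceptance angle and spreading exponent below are hypothetical):

    import numpy as np

    theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)   # direction from mean, rad
    dtheta = theta[1] - theta[0]
    s = 10                                             # spreading exponent
    D = np.cos(theta)**(2 * s)                         # cos^2s spreading function
    D /= np.sum(D) * dtheta                            # normalise to unit energy

    half_angle = np.radians(15)                        # device acceptance half-angle
    within = np.abs(theta) <= half_angle
    print(np.sum(D[within]) * dtheta)   # fraction of spread energy the device sees
    # In long-crested waves this fraction is 1 by construction.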


Systemic bias due to practical restrictions


Practical restrictions could bias tests towards particular types of concept. For example, consider the difficulties in testing directional devices. Testing in short-crested waves depends on tank capabilities. In many tanks it is not possible or practical to generate the desired short-crested waves at all the desired principal incidences. For some devices it may be acceptable to test in waves from a narrow range of incidences, while for others such an approach would lead to unrepresentative results.


Practical difficulties with direct comparison of different types of device


In practice it may not be possible to test all concepts in the same conditions. Some devices might be very different in size, so it might be inconvenient to test them all at the same scale. When comparing devices tested at different scales, the waves used would be physically different, the full-scale depth would be different, any scaling issues would be different, and different post-processing techniques would be required to compare results at the same scale. Some devices may be designed to work and cooperate in arrays, and it may be insufficient to test one module in isolation.
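
One of those post-processing steps is scaling itself. Tank results are normally converted to full scale using Froude similarity; a minimal sketch, with a hypothetical 1:25 scale:

    lam = 25.0                          # geometric scale: full/model (hypothetical)

    FROUDE_EXPONENTS = {
        "length": 1.0,                  # L scales as lam
        "time":   0.5,                  # T scales as lam**0.5
        "force":  3.0,                  # F scales as lam**3 (same fluid density)
        "power":  3.5,                  # P scales as lam**3.5
    }

    def to_full_scale(quantity, model_value, lam=lam):
        """Convert a model measurement to full scale under Froude similarity."""
        return model_value * lam ** FROUDE_EXPONENTS[quantity]

    print(to_full_scale("power", 12.0))  # 12 W at 1:25 -> 937.5 kW at full scale

Two devices tested at different scales would need different conversion factors for every quantity before their results could even be laid side by side.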

It would be difficult to compare tests of shallow-water and deep-water devices: the waves would have to be different. As a CoE indicator, it would be more useful to base the comparison on the power in the associated deep water wave, rather than the power in the wave actually tested. This raises the question of whether to use measurements from two sites, or to estimate the shallow water resource from deep water site measurements.
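
A sketch of the quantities involved, using linear theory for a regular wave: the energy flux per metre of crest at the test depth versus in deep water. The height, period, and depths below are hypothetical, and shoaling changes to wave height are ignored:

    import numpy as np

    rho, g = 1025.0, 9.81               # sea water density, gravity

    def wavenumber(T, h):
        """Solve the linear dispersion relation w^2 = g*k*tanh(k*h) for k."""
        w = 2 * np.pi / T
        k = w**2 / g                    # deep-water value as first guess
        for _ in range(100):            # fixed-point iteration (converges)
            k = w**2 / (g * np.tanh(k * h))
        return k

    def wave_power(H, T, h):
        """Energy flux per metre of crest, J = E * cg, for a regular wave."""
        w = 2 * np.pi / T
        k = wavenumber(T, h)
        cg = 0.5 * (w / k) * (1 + 2 * k * h / np.sinh(2 * k * h))  # group velocity
        return rho * g * H**2 / 8.0 * cg                           # W per metre

    print(wave_power(2.0, 10.0, 15.0))   # at a 15 m test depth
    print(wave_power(2.0, 10.0, 500.0))  # effectively deep water for a 10 s wave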


Reference:

'A Brief Review of Wave Energy', Thorpe TW, UK Dept. Trade & Industry, 1999

Image credit:


'Best dressed raft' in Inverness 2011 raft race by Lifespan Inverness. Blogger's own photo.
