Scientists see seafloor with sonar
By Steve Gartner, 20 June 2013
Scientists are exploring new data processing technologies to handle vast streams of acoustic and video data being collected from the seafloor to better manage fisheries and monitor the environment.
Glen Paul: G’day, and welcome to CSIROpod. I’m Glen Paul. The fact that we know more about the Moon’s surface than we do about the seafloor is an anomaly inspiring scientists to further explore the underwater landscape and its biodiversity.
A two-day conference was recently held in Hobart, where scientists from CSIRO, government agencies and universities discussed how to interpret the vast streams of acoustic and video data being collected from the seafloor with new technologies, including multibeam sonar systems. While this sound and video data is opening up new possibilities for monitoring the environment, new data processing technologies are required to make the job faster and easier.
Joining me on the line to discuss this is CSIRO’s Dr Keith Hayes. Keith, firstly, why is there this lack of knowledge about the seafloor?
Dr Hayes: Good question, Glen. You know, it’s a difficult environment to work in, obviously much more difficult than a terrestrial environment. The depth that we’re interested in working in now sort of precludes the traditional methods.
I guess in the past perhaps there hasn’t been as much of a driver, in terms of the management need, to better understand what’s on the seafloor, but I think now that’s changing as we move into a realm of trying to manage and model the processes that might be occurring there. So I guess that’s part of the rationale for it.
Glen Paul: And with the newer technologies now, the acoustic and video technologies, how are they being used to address it?
Dr Hayes: Well, traditionally we’d use some of the acoustic technologies really just to estimate what the depth was. I mean, that’s primarily where that technology started, just to get a better understanding of bathymetry, maybe at a finer resolution. But increasingly now the data that comes back, which tells us what the depth is, also gives us a whole lot of other information which we can interpret, or try and analyse, to understand not just the depth but, you know, was that a hard reef down there, was that soft sediment down there – so the nature of the substrates, as well as their depth.
Understanding the nature of the substrate enables us to create maps of the seafloor, particularly around, for example, the incidence of reef – hard reef, coral reef – and that in turn enables us to get a better understanding of where biodiversity hotspots might be, where some of the more productive benthos might be, and what the nature of the sediments is, and also, for things like natural plumes of oil and gas escaping from the seafloor, where those locations might be.
And so there’s a whole range of management applications that data could potentially be used for.
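To make that concrete, here is a minimal Python sketch of the kind of first-pass substrate call described above: labelling each sounding as hard (reef or rock) or soft (sediment) from its backscatter strength. The -25 dB cut-off and the sample values are hypothetical, for illustration only; real surveys calibrate such thresholds per instrument, depth and incidence angle.

    import numpy as np

    def classify_substrate(backscatter_db: np.ndarray) -> np.ndarray:
        """Label each sounding 'hard' (reef/rock) or 'soft' (sediment).

        Hard substrates generally return stronger echoes, so a simple
        first pass is a threshold on backscatter strength (in dB).
        """
        # -25 dB is an illustrative cut-off, not a calibrated value.
        return np.where(backscatter_db > -25.0, "hard", "soft")

    # Example: five soundings, two of which exceed the hard-return cut-off.
    samples = np.array([-32.1, -18.4, -27.9, -22.3, -35.0])
    print(classify_substrate(samples))   # ['soft' 'hard' 'soft' 'hard' 'soft']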
Glen Paul: And it sounds like you can get to quite a depth. How far down can you use this acoustic equipment?
Dr Hayes: Well, the latest technology will go down kilometres – I think the latest systems now claim around eleven kilometres, and we just don’t have any seafloor deeper than that – so you can really range from a hundred metres, sub-hundred metres, tens of metres, all the way down to the deepest ocean depths on the planet. So depth is no longer a restriction for us in terms of gathering the data – the bigger restriction now is how do we interpret it, how do we analyse it, how do we ground truth it, so when we make a prediction that the seafloor is of a certain type, or that this might be the type of biota on it, how do we actually get some video imagery, for example, or some other type of data to say, “OK, well that prediction was correct.”
Those, along with the vast areas that are still unmapped, are the two key challenges for us now.
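As a rough illustration of that ground-truthing step, the sketch below compares hypothetical acoustic predictions with what the video showed at the same sites and reports the agreement; the classes and data are invented for the example.

    from collections import Counter

    # Hypothetical acoustic predictions vs video observations per site.
    predicted = ["reef", "sediment", "reef", "sediment", "reef"]
    observed  = ["reef", "sediment", "sediment", "sediment", "reef"]

    agreement = sum(p == o for p, o in zip(predicted, observed)) / len(observed)
    confusion = Counter(zip(predicted, observed))

    print(f"agreement: {agreement:.0%}")          # agreement: 80%
    for (pred, obs), n in sorted(confusion.items()):
        print(f"predicted {pred!r}, video shows {obs!r}: {n} site(s)")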
Glen Paul: OK. And with the video equipment, how far are you getting down with that, and what are you seeing?
Dr Hayes: Well, we’ve primarily focused on shelf and slope environments, restricting ourselves to depths beyond diver range, so from about 50 metres down to about 600 metres – that’s where we’ve deployed our video equipment to date. But of course there are platforms which are capable of going beyond that; they’re typically much harder and much more costly to deploy. So at the moment we’ve really been focused on that 50 to 600 metre range.
Glen Paul: So you’re now getting the data, but that brings us to the next question, how are you processing this data? Have you got some new technology that you’re using?
Dr Hayes: There’s certainly a range of emerging algorithms which look at the full nature of the data. In the past, with multibeam sonar data for example, we might try to differentiate substrate types using the returns at a particular angle. The multibeam sonar sends out soundwaves across a swath from 0° to 55° on both the left and right of the ship, so a full 110° of coverage going out, and the returns come back across that same 110°.
Now some of that data tends to be a little bit messed up – the data on the very fringes, out towards plus or minus 55°, tends to be a little bit unreliable. So there are bands of the data which tend to be more reliable, and again that varies by depth and by the type of machinery. Historically we might look at just one of those angles, say a reference angle at 4° or 25°, and use the data at that angle to try and differentiate what the seafloor might be.
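As a small illustration of that angular handling, the sketch below drops the unreliable outer beams and pulls out the return nearest a single reference angle; the ±45° band limit, the synthetic backscatter values and the 25° reference angle are all illustrative choices, not survey settings.

    import numpy as np

    beam_angles = np.linspace(-55, 55, 111)          # degrees off nadir
    backscatter = np.random.default_rng(0).normal(-28, 3, beam_angles.size)

    # Keep only the band of the swath that tends to be reliable.
    reliable = np.abs(beam_angles) <= 45
    angles_ok, bs_ok = beam_angles[reliable], backscatter[reliable]

    # Historical approach: use the return nearest one reference angle.
    ref_angle = 25.0
    ref_idx = np.argmin(np.abs(angles_ok - ref_angle))
    print(f"backscatter at ~{angles_ok[ref_idx]:.0f} deg: {bs_ok[ref_idx]:.1f} dB")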
But there are now emerging techniques which use this data, along with other image analysis of the grey scale of the backscatter, and by combining that with the bathymetry data we can start to ask, “Well, are we able to distinguish categories of seabed beyond the coarse categorisation that we’re comfortable with at the moment; can we distinguish, for example, sand veneer over rock; can we distinguish rock that has large macroinvertebrate communities on it?” These are the algorithms that potentially give us the scope to do that.
But they need to be carefully ground truthed, and that’s one of the challenges: developing best practice in this area for doing this kind of work.
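To illustrate the flavour of those emerging techniques, the sketch below combines a mean-backscatter feature with a bathymetry-derived roughness feature and assigns a new site to the class of its nearest ground-truthed neighbour. The feature values, class labels and 1-nearest-neighbour rule are hypothetical stand-ins for the richer grey-scale texture features and classifiers used in practice.

    import numpy as np

    # Hypothetical ground-truthed sites: [mean backscatter (dB), roughness (m)]
    train_X = np.array([[-18.0, 1.2],   # bare rock
                        [-20.5, 2.8],   # rock with macroinvertebrates
                        [-30.0, 0.3],   # sand
                        [-26.0, 0.4]])  # sand veneer over rock
    train_y = ["rock", "rock+biota", "sand", "veneer"]

    def nearest_class(x: np.ndarray) -> str:
        """Assign x to the class of its nearest neighbour, after
        standardising each feature so neither dominates the distance."""
        mu, sd = train_X.mean(axis=0), train_X.std(axis=0)
        d = np.linalg.norm((train_X - mu) / sd - (x - mu) / sd, axis=1)
        return train_y[int(np.argmin(d))]

    print(nearest_class(np.array([-27.0, 0.5])))   # -> 'veneer'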
Glen Paul: And open us up to a new world. Thank you very much for talking about your research with us today, Keith.
Dr Hayes: Cheers, Glen.
Glen Paul: Dr Keith Hayes. And to find out more about the project, or to follow us on other social media, just visit www.csiro.au.