In recent predictions of technologies that could change the world within five years, IBM claimed that robotic microscopes powered by artificial intelligence could transform researchers’ ability to monitor ocean health – particularly the tiny plankton that form the base of the marine food chain, produce two-thirds of the planet’s oxygen and take up carbon from the air.
“Small autonomous AI microscopes, networked in the cloud and deployed around the world, will continually monitor in real time the health of one of Earth’s most important and threatened resources: water,” Arvind Krishna, head of IBM Research, wrote on a company website.
But Peter Franks, vice chair of the Scripps Institution of Oceanography at the University of California, San Diego, and a professor of biological oceanography, doesn’t buy it, at least not without major advances in plankton-literate AI. Technologies developed over the past decade have brought his field to the point that “gathering images is not very difficult,” he said. However, “doing anything with the images is fairly challenging.” In other words, the field is on the cusp of a revolution, but even IBM may have to wait.
Plankton populations have fallen by around 40 percent since 1950, according to research published in the journal Nature in 2010, possibly because of rising sea-surface temperatures. Local changes in plankton populations can produce a cascading effect, causing toxic algae to grow and coral – home to a quarter of all marine fish species – to die. That, and their key role in global carbon and oxygen cycles, is why a better real-time understanding of how plankton numbers, distribution and species are changing could help scientists studying the ocean’s wide-ranging environmental challenges.
Plankton collection first took a great leap forward in 1948, with the initial deployment of the Continuous Plankton Recorder, an ingenious mechanical contraption towed behind commercial vessels. It consisted of a box containing silk-mesh gauze on a roll, like a paper-towel dispenser. A propeller turned the roll, advancing the gauze, which filtered plankton from the water before winding into a container of preservative. Knowing the ship’s route and speed told researchers where along the voyage each stretch of gauze had collected its plankton.
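That geolocation step amounts to simple dead reckoning. As a rough sketch of the arithmetic – the advance rate, function and variable names here are illustrative assumptions, not the recorder’s actual specification – a position along the silk roll can be mapped back to a point on the ship’s track:

```python
from bisect import bisect_right

# Assumed constant: the recorder advances its silk roughly in
# proportion to distance towed; the exact rate varies by instrument.
GAUZE_CM_PER_NAUTICAL_MILE = 1.0

def sample_position(gauze_cm, track):
    """Map a distance along the silk roll back to a lat/lon on the tow.

    track: list of (cumulative_nautical_miles, lat, lon) waypoints
           from the ship's log, in increasing distance order.
    """
    miles = gauze_cm / GAUZE_CM_PER_NAUTICAL_MILE
    dists = [d for d, _, _ in track]
    i = bisect_right(dists, miles)
    if i == 0:                          # before the first logged fix
        return track[0][1], track[0][2]
    if i == len(track):                 # past the last logged fix
        return track[-1][1], track[-1][2]
    (d0, lat0, lon0), (d1, lat1, lon1) = track[i - 1], track[i]
    f = (miles - d0) / (d1 - d0)        # linear interpolation between fixes
    return lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0)
```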
A more recent advance was satellite imagery, but it isn’t detailed enough to identify species. Cameras, such as those on the Scripps research pier in La Jolla, California, have also been collecting color images of plankton for about a decade. Holographic techniques and new lenses that allow imaging from a distance have come online more recently. In the past couple of years, autonomous underwater vehicles have joined the mission, collecting images and mimicking the way plankton swim to test mathematical theories about how they congregate and form red tides.
The latest technology, from IBM researchers in San Jose, California, is a system of swimming autonomous microscopes that collects 3D images. To track plankton behavior in their environment, twin LED lights cast shadows of passing plankton onto a cellphone imaging chip, creating an image called a shadowgraph. Inventor Tom Zimmerman and theoretical physicist Simone Bianco hope to give the devices onboard artificial intelligence that could assess plankton health from their size, shape and behavior. They imagine the findings could help ecologists understand how plankton respond to environmental disturbances, such as temperature spikes, changes in salinity, agricultural runoff, oil spills and toxic algal blooms.
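IBM hasn’t published the device’s analysis pipeline, but the general approach is easy to sketch. Assuming a grayscale shadowgraph in which organisms appear as dark silhouettes – the threshold and feature choices below are illustrative assumptions, not IBM’s method – a few lines of Python can pull out the size and shape cues Zimmerman and Bianco describe:

```python
import numpy as np
from scipy import ndimage

def silhouette_features(frame, threshold=0.35):
    """Extract crude size/shape features from one shadowgraph frame.

    frame: 2-D float array scaled to [0, 1]; plankton cast dark
    shadows, so pixels below the (assumed) threshold count as organism.
    """
    mask = frame < threshold
    labels, n = ndimage.label(mask)       # group shadow pixels into blobs
    features = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        h = int(np.ptp(ys)) + 1           # bounding-box height in pixels
        w = int(np.ptp(xs)) + 1           # bounding-box width in pixels
        features.append({
            "area_px": len(ys),                   # proxy for size
            "elongation": max(h, w) / min(h, w),  # crude shape cue
        })
    return features
```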
“Using plankton as a reporter for the health of the environment allows you to understand if something is going on that you don’t have a specific test for,” Bianco said. Instead of sampling the water, he said, “if you see the behavior or composition of the sample is changing, it raises a flag.”
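That flag-raising logic maps naturally onto a simple distribution-drift check. A minimal sketch, assuming the device tracks one summary statistic per organism – say, a mean swimming speed – with both the statistic and the threshold as illustrative assumptions:

```python
import statistics

def drift_flag(baseline, current, z_threshold=3.0):
    """Flag when current observations drift from a baseline sample.

    baseline, current: lists of a per-organism statistic (e.g. an
    assumed mean swimming speed). Flags if the current mean sits more
    than z_threshold standard errors from the baseline mean.
    """
    mu = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    se = sd / len(current) ** 0.5         # standard error of current mean
    z = abs(statistics.fmean(current) - mu) / se
    return z > z_threshold
```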
Zimmerman said he envisions hundreds of thousands of the inexpensive devices all around the world. “The AI piece lets you analyze local data,” he said.
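Combined with the sketches above, local analysis might look like the following loop, where `camera` and `uplink` are hypothetical interfaces standing in for the device’s real hardware: raw frames never leave the microscope, and only small summaries and flags go to the cloud.

```python
def device_loop(camera, uplink, baseline_areas):
    """Hypothetical on-device loop: analyze locally, upload summaries.

    camera: iterable yielding shadowgraph frames; uplink: callable that
    ships a small dict to the cloud. Both are assumed interfaces,
    reusing silhouette_features() and drift_flag() from the sketches above.
    """
    for frame in camera:
        feats = silhouette_features(frame)
        areas = [f["area_px"] for f in feats]
        summary = {"count": len(feats)}
        if areas:
            summary["mean_area"] = sum(areas) / len(areas)
            if drift_flag(baseline_areas, areas):
                summary["flag"] = "composition drift"
        uplink(summary)  # raw imagery stays on the device
```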
For Franks – the kind of scientist this technology is meant to help – the AI part will be the challenge.
“It’s a great idea, but it’s going to be harder to do than they’re suggesting,” Franks said. “You have to know what you’re looking at, what the size or shape [of the plankton in the image] depends on and whether you’re looking at different species. That kind of knowledge is super-hard to get.”
That’s because AI is only as good as the data you feed it. Software that analyzes the content of images must be trained on existing data sets, which are currently lacking.
“Giving the AI annotated data sets, where the objects are ID’d so the AI has something to work from, that’s the biggest bottleneck in this type of research,” Franks said. “Matching up the images, the taxonomy and the molecular biology is going to be a major task over the next couple of decades.”
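What Franks is describing is ordinary supervised learning: the model can only label what taxonomists have already labeled. A hedged sketch, assuming each annotated image has been reduced to a feature vector (for instance, the size and shape features above) paired with an expert-supplied name, and using scikit-learn’s random forest purely as an illustrative choice:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_plankton_classifier(feature_vectors, labels):
    """Train and evaluate a classifier on taxonomist-annotated data.

    feature_vectors: list of per-organism feature lists (assumed
    precomputed, e.g. area and elongation from each silhouette).
    labels: taxonomist-supplied species or genus names.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        feature_vectors, labels, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    # Held-out accuracy is only as trustworthy as the annotations.
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    return model
```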
Franks estimates that 10 to 20 scientific groups currently collect images of plankton, but much of the imagery isn’t identified at the species level. For example, the largest collection of phytoplankton images, residing in the lab of Heidi Sosik at Woods Hole Oceanographic Institution in Massachusetts, contains millions of images spanning several thousand species, but they are sorted into just 103 broader, genus-level categories.
“What we really need is a global center to archive the images and a global center of taxonomists to identify and label these images so researchers can go through the data sets that are generated and do science with them. Right now that’s not happening,” Franks said.
Once that bottleneck gets cleared, though, the Mariana Trench is the limit.
“It’s going to utterly revolutionize plankton ecology and biological oceanography,” Franks said. “The kinds of questions you can ask with dense-in-time and dense-in-space data are really amazing and will lead to a better understanding of red tides, of ecosystem changes driven by climate change and how ecosystems respond to physical and biological forcing.”