High-resolution imaging and increasingly advanced sensor technologies are allowing us to monitor the world around us ever more closely. Artificial intelligence and machine learning can now update maps in near real time, so we always have the latest and most accurate data. As the technology becomes more affordable, environmental organisations are using its processing power to generate live flood-risk maps at unprecedented resolution. The hope is to one day have ‘live dashboards’, with computers constantly monitoring changes to help plan land use and issue warnings.
On May 27th, a deluge dumped more than 6 inches of rain in less than three hours on Ellicott City, Maryland, killing one person and transforming Main Street into what looked like Class V river rapids, with cars tossed about like rubber ducks. The National Weather Service put the probability of such a storm at once in 1,000 years. Yet, “it’s the second time it’s happened in the last three years,” says Jeff Allenby, director of conservation technology for Chesapeake Conservancy, an environmental group.
Floods are nothing new in Ellicott City, located where two tributaries join the Patapsco River. But Allenby says the floods are getting worse, as development covers what used to be the “natural sponge of a forest” with paved surfaces, rooftops, and lawns. Just days before the May 27 flood, the US Department of Homeland Security selected Ellicott City—on the basis of its 2016 flood—for a pilot program to deliver better flood warnings to residents via automated sensors.
Recently, Allenby developed another tool to help predict, plan, and prepare for future floods: a first-of-its-kind, high-resolution map showing what’s on the ground—buildings, pavement, trees, lawns—across 100,000 square miles from upstate New York to southern Virginia that drain into Chesapeake Bay. The map, generated from aerial imagery with the help of artificial intelligence, shows objects as small as 3 feet square, roughly 1,000 times more precise than the maps that flood planners previously used. To understand the difference, imagine trying to identify an Uber driver on a crowded city street using a map that can only display objects the size of a Walmart.
Creating the map consumed a year and cost $3.5 million, with help from Microsoft and the University of Vermont. Allenby’s team pored over aerial imagery, road maps, and zoning charts to establish rules, classify objects, and scrub errors. “As soon as we finished the first data set,” Allenby says, “everyone started asking ‘when are you going to do it again?’” to keep the map fresh.
Enter AI. Microsoft helped Allenby’s team train its AI for Earth algorithms to identify objects on their own. Even with a robust data set, training the algorithms wasn’t easy. The effort required regular “pixel peeping”—manually zooming in on objects to verify and amend the automated results. With each pass, the algorithms improved their ability to recognize waterways, trees, fields, roads, and buildings. As relevant new data become available, Chesapeake Conservancy plans to use its AI to refresh the map more frequently and easily than the initial labor-intensive, multi-million-dollar effort.
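The core task the article describes—assigning every pixel of aerial imagery a land-cover label—can be illustrated with a toy classifier. This sketch uses a nearest-centroid rule over invented (red, green, near-infrared) pixel values; it is a stand-in for the idea only, not the Conservancy’s or Microsoft’s actual model, and all class centroids here are hypothetical.

```python
# Toy land-cover labeling: assign each pixel to the class whose spectral
# centroid is nearest. Centroid values are invented for illustration.
CENTROIDS = {
    "water":    (30, 60, 20),    # dark, absorbs near-infrared
    "tree":     (60, 120, 200),  # vegetation reflects NIR strongly
    "pavement": (150, 150, 90),  # grey, moderate NIR
}

def classify_pixel(r, g, nir):
    """Label one pixel by its nearest class centroid in (R, G, NIR) space."""
    def dist2(cls):
        cr, cg, cn = CENTROIDS[cls]
        return (r - cr) ** 2 + (g - cg) ** 2 + (nir - cn) ** 2
    return min(CENTROIDS, key=dist2)

def land_cover_map(pixels):
    """Label every (r, g, nir) pixel in a scene."""
    return [classify_pixel(*p) for p in pixels]

scene = [(28, 55, 25), (65, 118, 190), (140, 155, 95)]
print(land_cover_map(scene))  # ['water', 'tree', 'pavement']
```

The “pixel peeping” the article mentions corresponds to manually checking labels like these against the imagery and adjusting the model—here, that would mean nudging the centroids.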
Now, Microsoft is making the tool available more widely. For $42, anyone can run 200 million aerial images through Microsoft’s AI for Earth platform and generate a high-resolution land-cover map of the entire US in 10 minutes. The results won’t be as precise in other parts of the country where the algorithm has not been trained on local conditions—a redwood tree or saguaro cactus looks nothing like a willow oak.
A map of land use around Ellicott City, Maryland, built with the help of artificial intelligence (left) offers far more detail than its predecessor (right).
To a society obsessed with location and mapping services—where the physical world unfolds in the digital every day—the accomplishment may not seem groundbreaking. Until recently, though, neither the high-resolution data nor the AI smarts existed to make such maps cost-effective for environmental purposes, especially for nonprofit conservation organizations. With Microsoft’s offer, AI on a planetary scale is about to become a commodity.
Detailed, up-to-date information is paramount when it comes to designing stormwater management systems, Allenby says. “Looking at these systems with the power of AI can start to show when a watershed” is more likely to flood, he says. The Center for Watershed Protection, a nonprofit based in Ellicott City, reported in a 2001 study that when 10 percent of natural land gets developed, stream health declines and it begins to lose its ability to manage runoff. At 20 percent, runoff doubles, compared with undeveloped land. Allenby notes that paved surfaces and rooftops in Ellicott City reached 19 percent in recent years.
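The Center for Watershed Protection thresholds quoted above (stream health declines once roughly 10 percent of natural land is developed; runoff roughly doubles at 20 percent) can be expressed as a simple check. The band names below are my own shorthand, not the study’s terminology.

```python
# Map an impervious-cover percentage to the risk bands implied by the
# 2001 Center for Watershed Protection findings cited in the text.
def runoff_risk(impervious_pct):
    """Return a rough risk band for a given impervious-surface percentage."""
    if impervious_pct < 10:
        return "healthy"
    elif impervious_pct < 20:
        return "declining"    # stream health begins to suffer
    else:
        return "high runoff"  # roughly double the runoff of undeveloped land

print(runoff_risk(19))  # 'declining' -- Ellicott City's reported level
print(runoff_risk(21))  # 'high runoff'
```

By this measure, Ellicott City’s 19 percent paved-and-roofed coverage sits just under the threshold at which runoff doubles—which is exactly why Allenby argues that tracking land-use change at high resolution matters.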
Allenby says the more detailed map will enable planners to keep up with land-use changes and plan drainage systems that can accommodate more water. Eventually, the map will offer “live dashboards” and automated alerts to serve as a warning system when new development threatens to overwhelm stormwater management capacity. The Urban Forestry Administration in Washington, DC, has used the new map to determine where to plant trees by searching the district for areas without tree cover where standing water accumulates. Earlier this year, Chesapeake Conservancy began working with conservation groups in Iowa and Arizona to develop training sets for the algorithms specific to those landscapes.
The combination of high-resolution imaging and sensor technologies, AI, and cloud computing is giving conservationists deeper insight into the health of the planet. The result is a near-real-time readout of Earth’s vital signs, firing off alerts and alarms whenever the ailing patient takes a turn for the worse.
Others are applying these techniques around the world. Global Forest Watch (GFW), a conservation project established by the World Resources Institute, began offering monthly and weekly deforestation alerts in 2016, powered by AI algorithms developed by the University of Maryland. The algorithms analyze satellite imagery as it’s refreshed to detect “patterns that may indicate impending deforestation,” according to the organization’s website. Using GFW’s mobile app, Forest Watcher, volunteers and forest rangers take to the trees to verify the automated alerts in places like the Leuser Ecosystem in Indonesia, which calls itself “the last place on Earth where orangutans, rhinos, elephants and tigers are found together in the wild.”
The new conservation formula is also spilling into the oceans. On June 4, Paul Allen Philanthropies revealed a partnership with the Carnegie Institution for Science, the University of Queensland, the Hawaii Institute of Marine Biology, and the private satellite company Planet to map all of the world’s coral reefs by 2020. As Andrew Zolli, a Planet vice president, explains: For the first time in history, “new tools are up to the [planetary] level of the problem.”
By the end of 2017, Planet deployed nearly 200 satellites, forming a necklace around the globe that images the entire Earth every day down to 3-meter resolution. That’s trillions of pixels raining down daily, which could never be transformed into useful maps without AI algorithms trained to interpret them. The partnership leverages the Carnegie Institution’s computer-vision tools and the University of Queensland’s data on local conditions, including coral, algae, sand, and rocks.
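The “trillions of pixels raining down daily” claim is easy to sanity-check: imaging Earth’s entire surface (roughly 510 million square kilometers, a standard reference figure) at 3-meter resolution yields tens of trillions of pixels per pass. This is an order-of-magnitude sketch only.

```python
# Back-of-envelope check: pixels needed to image the whole Earth daily
# at 3-meter resolution.
EARTH_SURFACE_KM2 = 510e6   # ~510 million square kilometers
PIXEL_AREA_M2 = 3 * 3       # one 3 m x 3 m pixel covers 9 square meters

pixels_per_day = EARTH_SURFACE_KM2 * 1e6 / PIXEL_AREA_M2  # km^2 -> m^2
print(f"~{pixels_per_day / 1e12:.0f} trillion pixels per day")
```

At roughly 5.7 × 10¹³ pixels per daily pass, no human team could keep up—hence the need for trained algorithms to interpret the imagery.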
“Today, we have no idea of the geography, rate, and frequency of global bleaching events,” explains Greg Asner, a scientist at Carnegie’s Department of Global Ecology. Based on what is known, scientists project that more than 90 percent of the world’s reefs, which sustain 25 percent of marine life, will be extinct by 2050. Lauren Kickham, impact director for Paul Allen Philanthropies, expects the partnership will bring the world’s coral crisis into clear view and enable scientists to track reef health on a daily basis.
In a separate coral reef project, also being conducted with Planet and the Carnegie Institution, The Nature Conservancy is leveraging Carnegie’s computer vision AI to develop a high-resolution map of the shallow waters of the Caribbean basin. “By learning how these systems live and how they adapt, maybe not our generation, but maybe the next will be able to bring them back,” says Luis Solorzano, The Nature Conservancy’s Caribbean Coral Reef project lead.
Mapping services are hardly new to conservation. Geographic Information Systems have been a staple in the conservation toolkit for years, providing interactive maps to facilitate environmental monitoring, regulatory enforcement, and preservation planning. But, mapping services are only as good as the underlying data, which can be expensive to acquire and maintain. As a result, many conservationists resort to what’s freely available, like the 30-meter-resolution images supplied by the United States Geological Survey.
Ellicott City and the Chesapeake watershed demonstrate the challenges of responding to a changing climate and the impacts of human activity. Since the 1950s, the bay’s oyster reefs have declined by more than 80 percent. Biologists discovered one of the planet’s first marine dead zones in Chesapeake Bay in the 1970s. Blue crab populations plunged in the 1990s. The sea level has risen more than a foot since 1895, and, according to a 2017 National Oceanic and Atmospheric Administration (NOAA) report, may rise as much as 6 feet by the end of this century.
Allenby joined the Chesapeake Conservancy in 2012 when technology companies provided a grant to explore the ways in which technology could help inform conservation. Allenby sought ways to deploy technology to help land managers, like those in Ellicott City, improve upon the dated 30-meter-resolution images that FEMA also uses for flood planning and preparation.
In 2015, Allenby connected with the University of Vermont—nationally recognized experts in generating county-level high-resolution land-cover maps—seeking a partner on a bigger project. They secured funding from a consortium of state and local governments, and nonprofit groups in 2016. The year-long effort involved integrating data from such disparate sources as aerial imagery, road maps, and zoning charts. As the data set came together, a Conservancy board member introduced Allenby to Microsoft, which was eager to demonstrate how its AI and cloud computing could be leveraged to support conservation.
“It’s been the frustration of my life to see what we’re capable of, yet how far behind we are in understanding basic information about the health of our planet,” says Lucas Joppa, Microsoft’s chief environmental scientist, who oversees AI for Earth. “And to see that those individuals on the front line solving society’s problems, like environmental sustainability, are often in organizations with the least resources to take advantage of the technologies that are being put out there.”
The ultimate question, however, is whether the diagnoses offered by these AI-powered land-cover maps will arrive in time to help cure the problems caused by man.