What the complex math of fire modeling tells us about the future of California’s forests

At the height of California’s worst wildfire season on record, Geoff Marshall looked down at his computer and realized that an enormous blaze was about to take firefighters by surprise.

Marshall runs the fire prediction team at the California Department of Forestry and Fire Protection (known as Cal Fire), headquartered in Sacramento, which gives him an increasingly difficult job: anticipating the behavior of wildfires that become less predictable every year.

The problem was obvious from where Marshall sat: California’s forests were caught between a management regime devoted to growing thick stands of trees—and eradicating the low-intensity fire that had once cleared them—and a rapidly warming, increasingly unstable climate.

As a result, more and more fires were crossing a poorly understood threshold from typical wildfires—part of a normal burn cycle for a landscape like California’s—to monstrous, highly destructive blazes. Sometimes called “megafires” (a scientifically meaningless term that loosely refers to fires that burn more than 100,000 acres), these massive blazes are occurring more often around the world, blasting across huge swaths of California, Chile, Australia, the Amazon, and the Mediterranean region.

At that particular moment in California last September, several unprecedented fires were burning simultaneously. Together, they would double the record-setting acreage of the 2018 wildfire season in less than a month. But just as concerning to Marshall as their size was that the biggest fires often behaved in unexpected ways, making it harder to forecast their movements.

To face this new era, Marshall had a new tool at his disposal: Wildfire Analyst, a real-time fire prediction and modeling program that Cal Fire first licensed from a California-based firm called TechnoSylva in 2019.

The work of predicting how fires spread had long been a matter of hand-drawn ellipses and models so slow that analysts set them running before bed and hoped they’d be done by morning. Wildfire Analyst, on the other hand, funnels data from dozens of distinct feeds: weather forecasts, satellite images, and measures of moisture in a given area. Then it projects all that on an elegant graphic overlay of fires burning across California.

A modeling tool called Wildfire Analyst shows how a blaze in California might spread over a period of eight hours. The red objects are buildings.

Every night, while fire crews sleep, Wildfire Analyst seeds those digital forests with millions of test burns, pre-calculating their spread so that human analysts like Marshall can do simulations in a matter of seconds, creating “runs” they can port to Google Maps to show their superiors where the biggest risks are. But this particular risk, Marshall suddenly realized, had slipped past the program.

The display now showed a cluster of bright pink and green polygons creeping over the east flank of the Sierras, near the town of Big Creek. The polygons came from FireGuard, one of the many feeds ported directly into Wildfire Analyst: a real-time product from the US Department of Defense that estimates the current locations of active wildfires. They were spreading, far faster than they should have been, up the Big Creek drainage.

In its calculations, Wildfire Analyst had made a number of assumptions. It “saw,” on the other side of Big Creek, a dense stand of heavy timber. Such stands were traditionally thought to impede the rapid spread of fire, which models attribute largely to fine fuels like pine straw.

But Marshall suddenly realized, as the algorithms driving Wildfire Analyst had not, that the drainage held all the ingredients for a perfect firestorm. That “heavy timber,” he knew, was in fact a huge patch of dead trees weakened by beetles, killed by drought, and baked by two weeks of 100 °F heat into picture-perfect firewood. And the Big Creek valley would focus the wind onto the fire like a bellows. With no weather station at the mouth of the creek, the program couldn’t see all that.

Marshall went back to his computer and re-ran some numbers with the new variables factored in. He watched on his screen as the fire spread at frightening speed across the Sierra. “I went to the operation trailer and told my uppers: I think it’s going to jump the San Joaquin River,” he recalls. “And if it does, it’s going to run big.”

This was, at that moment, a far-fetched claim—no California fire had ever made a nine-mile run in heavy timber, no matter how dry. But in this case, the trees’ combustion created powerful plumes of superheated air that drove the fire on. It jumped the river and raced through the timber to a reservoir known as Mammoth Pool, where a last-minute airlift saved 200 campers from fiery death.

The Creek Fire was a case study in the challenge facing today’s fire analysts, who are trying to predict the movements of fires that are far more severe than those seen just a decade ago. Since we understand so little about how fire works, they’re using mathematical tools built on outdated assumptions, as well as technological platforms that fail to capture the uncertainty in their work. Programs like Wildfire Analyst, while useful, give an impression of precision and accuracy that can be misleading.

Getting ahead of the most destructive fires will require not simply new computational tools but a sweeping change in how forests are managed. Along with climate change, generations of land and environmental management decisions—intended to preserve the forests that many Californians feel a duty to protect—have inadvertently created this new age of hyper-destructive fire.

But if these massive fires continue, California could see the forests of the Sierra erased as thoroughly as those of Australia’s Blue Mountains. Avoiding this nightmare scenario will require a paradigm shift. Residents, fire commanders, and political leaders must switch from a mindset of preventing or controlling wildfire to learning to live with it. That will mean embracing fire management techniques that encourage more frequent burns—and ultimately allowing fires to forever transform the landscapes that they love.

Shaky assumptions

In late October, Marshall shared his screen and took me on a tour in Wildfire Analyst. We watched the fluorescent FireGuard polygons of a new flame “finger” break out from the smoldering August Complex. With a few clicks, he laid four tiny virtual fires along the real fire’s edge, on the far side of the fire line that had blocked its progress. A few seconds later, fire blossomed across the simulated landscape. Under current conditions, the model estimated, a fire that broke out at those points could “blow out” to 8,000 acres—a nearly three-mile run—within 24 hours.

For Marshall and the rest of Cal Fire’s analysts, Wildfire Analyst provides a standardized platform on which to share data from fires they’re watching, projections about the runs they might make, and hacks to make a simulated fire approximate the behavior of a real one. With that information, they try to anticipate where a fire is going to go next, which in theory can drive decisions about where to send crews or which regions to evacuate.      

Like any model, Wildfire Analyst is only as good as the data that feeds it—and that data is only as good as our scientific understanding of the phenomenon in question. When it comes to the mechanics of wildland fire, that understanding is “medieval,” says Mark Finney, director of the US Forest Service’s Missoula Fire Lab.

Our current approach to fire modeling, which powers every real-time analytic platform including TechnoSylva’s Wildfire Analyst, is built on a particular set of equations that a researcher named Richard Rothermel derived at the Fire Lab nearly half a century ago to calculate how fast fire would move, with given wind conditions, through given fuels.

Rothermel’s key assumption—perhaps a necessary one, given the computational tools available at the time, but one we now know to be false—was that fires spread only through radiation as the front of the flame catches fine fuels (pine straw, leaf litter, twigs) on the ground.

That spread, Rothermel found, drove outward in a thin, expanding edge along an ellipse. To figure out how a fire would grow, firefighters in the field used “nomograms”: premade graphs that assigned specific values for wind speed, slope, and fuel conditions to reveal an average speed of spread.

Fire behavior chart (US Department of Agriculture)

In his early days in the field, Finney says, “you would spread your folder of nomograms on the hood of your pickup and make your projections in thick pencil,” charting on a topo map where the fire would be in an hour, or two, or three. Rothermel’s equations allowed analysts to model fire like a game of Go, across homogeneous cells of a two-dimensional landscape.
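To make that picture concrete, here is a minimal sketch in Python of the kind of calculation this family of models performs: a single Rothermel-style rate of spread, scaled by wind and slope, swept cell by cell across a homogeneous gridded landscape. The base rate, the wind and slope factors, and the cell size are illustrative placeholders, not the published Rothermel coefficients, and this is not the algorithm Wildfire Analyst actually runs.

```python
import heapq

import numpy as np


def spread_rate(base_rate, wind_factor, slope_factor):
    """Toy Rothermel-style rate of spread in meters per minute: a no-wind,
    flat-ground rate for the fuel, scaled up by wind and slope, roughly
    R = R0 * (1 + phi_w + phi_s). The coefficients here are placeholders."""
    return base_rate * (1.0 + wind_factor + slope_factor)


def arrival_times(rate_grid, ignition, cell_size=30.0):
    """Minutes until each cell of a gridded landscape ignites, treating the
    fire as a front that crosses one homogeneous cell at a time
    (a Dijkstra-style sweep over the "game of Go" board described above)."""
    rows, cols = rate_grid.shape
    times = np.full(rate_grid.shape, np.inf)
    times[ignition] = 0.0
    heap = [(0.0, ignition)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                crossing = cell_size / rate_grid[nr, nc]  # minutes to cross the next cell
                if t + crossing < times[nr, nc]:
                    times[nr, nc] = t + crossing
                    heapq.heappush(heap, (t + crossing, (nr, nc)))
    return times


# A uniform, grass-like fuel bed with a steady wind and a modest slope.
rate = spread_rate(base_rate=1.0, wind_factor=2.0, slope_factor=0.5)  # 3.5 m/min
rates = np.full((60, 60), rate)
hours = arrival_times(rates, ignition=(30, 30))[30, 45] / 60
print(f"front reaches a cell 450 m east after ~{hours:.1f} hours")
```

The point is only that the landscape is reduced to uniform cells and a handful of scalar inputs; whatever the model does not know about a cell, such as a stand of beetle-killed timber, simply is not there.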

This is where things have stood for decades. Wildfire Analyst and similar tools represent a repackaging of this approach more than a fundamental improvement on it. (TechnoSylva did not respond to multiple interview requests.) What’s needed now is less a technique for real-time prediction than a fundamental reappraisal of how fire works—and a concerted effort to restore California’s landscapes to something approaching a natural equilibrium.

Complications

The problem for products like Wildfire Analyst, and for analysts like Marshall, is easy to state and hard to solve. A fire is not a linear system, proceeding from cause to effect. It is a “coupled” system in which cause and effect are tangled up. Even on the scale of a candle, ignition kicks off a self-sustaining reaction that deforms the environment around it, changing the entire system further—fuel decaying into flame, sucking in more wind, which stokes the fire further and breaks down more fuel.

Such systems are notoriously sensitive to even small changes, which makes them fiendishly difficult to model. A small variance in the starting data can lead, as with the Creek Fire calculations, to an answer that is exponentially wrong. In terms of this kind of nonlinear complexity, fire is a lot like weather—but the computational fluid dynamic models that are used to build forecasts for, say, the National Weather Service require supercomputers. The models that try to capture the complexity of a wildland blaze are typically hundreds of times simpler.
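As a toy illustration of how quickly a small input error compounds in such a system (not how Wildfire Analyst or any operational model actually works), take a projection in which a fire’s area grows by a fixed factor every hour. Misreading the fuel, say treating beetle-killed timber as ordinary heavy timber, nudges that factor only slightly, yet the forecast ends up off by more than an order of magnitude within a day. The growth figures below are invented for the example.

```python
def projected_area(initial_acres, hourly_growth, hours):
    """Toy projection: fire area compounding by a fixed hourly factor.
    Real spread models are far richer, but the compounding is the point:
    an error in the inputs multiplies through every time step."""
    area = initial_acres
    for _ in range(hours):
        area *= hourly_growth
    return area


# Invented numbers: the model "sees" ordinary heavy timber and assumes 20%
# growth per hour; the beetle-killed stand actually supports 35% per hour.
modelled = projected_area(100, 1.20, hours=24)
actual = projected_area(100, 1.35, hours=24)
print(f"modelled: {modelled:,.0f} acres, actual: {actual:,.0f} acres")
# modelled comes out near 8,000 acres; actual exceeds 130,000
```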

Pioneering scientists like Rothermel dealt with this intractable problem by ignoring it. Instead, they searched for factors, such as wind speed and slope, that could help them predict a fire’s next move in real time.

Looking back, Finney says, it’s a miracle that Rothermel’s equations work for wildfires at all. There’s the sheer difference in scale—Rothermel derived his equations from tiny, controlled fires set in 18-inch fuel beds. But there are also more fundamental errors. Most glaring was Rothermel’s assumption that fire spreads only by radiation, instead of through the convection currents that you see when a campfire flickers.

This assumption isn’t true, and yet for some fires, even huge ones like 2017’s Northwest Oklahoma Complex, which burned more than 780,000 acres, Rothermel’s spread equations still seem to work. But at certain scales, and under certain conditions, fire creates a new kind of system that defies any such attempt to describe it.

The Creek Fire in California, for example, didn’t just go big. It created a plume of hot air that pooled under the stratosphere, like steam against the lid of a pressure cooker. Then it popped through to 50,000 feet, sucking in air from below that drove the flames on, creating a storm system—complete with lightning and fire tornadoes—where no storm should have been.

Other huge, destructive fires appear to ricochet off the weather, or each other, in chaotic ways. Fires usually quiet down at night, but in 2020, two of the biggest runs in California broke out at night. Since heat rises, fires usually burn uphill, but in the Bear Fire, two enormous flame heads raced 22 miles downhill, a line of tornadic plumes spinning between them.

Finney says we don’t know if the intensity caused the strange behaviors or vice versa, or if both rose from some deeper dynamic. One measure of our ignorance, in his view, is that we don’t even know when to trust the models we have: “It would be really nice to know when our current models will work and when they won’t,” he says.

Illusions

To Finney and other fire scientists, the danger with products like Wildfire Analyst is not necessarily that they’re inaccurate. All models are. It’s that they hide solutions inside a black box, and—far more important—focus on the wrong problem.

Unlike Wildfire Analyst, the older generation of tools required analysts to know precisely what hedges and assumptions they were making. The new tools leave all that to the computer. Such products play into the field’s obsession with modeling, scientist after scientist told me, despite the fact that no model can predict what fire will do.

“You can always calibrate the system afterward to match your observations,” says Brandon Collins, a wildfire research scientist at UC Berkeley. “But can you predict it beforehand?”

Doing so is a question of science rather than technology: it would require primary research to develop and test a new theory of flame. But such work is expensive, and most wildfire research money is awarded to solve specific technical problems. The Missoula Fire Lab survives on the remnants of a Great Society–era budget; its sister facility, the Macon Fire Lab in Georgia, was shut down in the 1990s.

Collins and Finney are doing what they can with the funds available to them. They’re both part of a public-private fire science working group called Pyregence that’s converting a grain silo into a furnace to see how large logs, like the fallen timber on Big Creek, spread fire.

Meanwhile, Finney’s team at the Missoula Fire Lab is working to develop a data set that answers fundamental questions about fire—a potential basis for new models. They aim to describe how wind on smoldering logs drives new flame fronts; quantify the likelihood that embers cast by a flame will “spot,” or ignite, new fires; and study the role that pine forests seem to play in encouraging their own burning.

The point of those models is less to see where a particular fire will go once it’s broken out, and more to serve as a planning tool to help Californians better manage the fire-prone, fire-suppressed landscape they live in.

Like ecosystems in Chile, Portugal, Greece, and Australia—all regions that have recently seen more megafires—California’s conifer forests evolved over thousands of years in which natural and human-caused fires periodically cleared out excess fuel and created the space and nutrients for new growth.

Before the 19th century, Native Americans are thought to have deliberately burned about as much of California every year as burned there in 2020. Similar practices survived until as recently as the 1970s—ranchers in the Sierra foothills would burn brush to encourage new growth for their animals to eat. Loggers pulled tons of timber from forests groomed to produce huge volumes of it, burning the debris in place.

A controlled burn (Josh Berendes / Unsplash)

Then, as ranchers went bust and sold their land to developers, pastureland became residential communities. Clean-air regulations discouraged the remaining ranchers from burning. And decades of conflict between environmental organizations and logging companies ended, in the 1990s, with loggers deserting the forests they had once clear-cut.

In the Sierra—as in these other regions now prone to huge, destructive fires—a heavily altered landscape that was long ago torn from any natural equilibrium was largely abandoned. Millions of acres of pine grew in, packed and thirsty. Eventually many were killed by drought and bark beetles, piling up into an enormous load of fuel. Fires that could have cleared the land and reset the forest were extinguished by the US Forest Service and Cal Fire, whose primary objective had become wholesale fire suppression.

Breaking free of this legacy won’t be easy. The future Finney is working toward is one where people can compare various models and decide which will work best for a given situation. He and his team hope better data will lead to better planning models that, he says, “could give us the confidence to let some fires burn and do our work for us.”

Still, he says, focusing too much on models risks missing a more important question: “What if we are ignoring the basic aspect of wildfire—that we need more fire, proper fire, so that we don’t let wildfire surprise and destroy us?”

Living with wildfires

In 2014, the King Fire raged across the California Sierra, leaving a burn scar where trees have still not regrown. Instead, says Forest Service silviculturist Dana Walsh, they’ve been replaced by thick mats of chaparral, a fire-prone shrub that has squeezed out the forest’s return.

“People ask what happens if we just let nature take its course after a big fire,” Walsh says. “You get 30,000 acres of chaparral.”

This is the danger that landscapes from the Pyrenees to the California Sierra to Australia’s Blue Mountains now face, says Marc Castellnou, a Catalan fire scientist who is a consultant to TechnoSylva. Over the last two decades, he’s studied the rise of megafires around the world, watching as they smashed records for the length or speed of their runs.

For too long, he says, California’s fire and forest policy has resisted an inevitable change in the landscape. The state doesn’t need flawless predictive tools to see where its forests are headed, he says: “The fuel is building up, the energy is building up, the atmosphere is getting hotter.” The landscape will rebalance itself.

California’s choice—as in Catalonia, where Castellnou is chief scientist for the autonomous region’s 4,000-person fire corps—is to either move with that change and have some chance of influencing it, or be bowled over by megafires.

The goal is less to regenerate native forests in these areas—which Castellnou believes have been made obsolete by climate change—than to work with the landscape to develop a new type of forest where wildfires are less likely to blow out into massive blazes.

In large measure, his approach lies in returning to old land management techniques. Rural people in his region once controlled destructive fires by starting or allowing frequent, low-intensity fires, and using livestock to eat down brush in the interim. They planted stands of fire-resistant hardwood species that stood like sentinels, blocking waves of flame.

For Castellnou, though, this also means making politically difficult choices. In July 2019, just outside of Tivissa, Spain, I watched him explain to a group of rural Catalan mayors and olive farmers why he had let the area around their towns burn.

He’d worried that if crews slowed the Catalan fire, they might cause it to form a pyrocumulonimbus—a violent cloud of fire, thunder, and wind like the one that formed over the Creek Fire. Such a phenomenon could have spurred the fire on until it took the towns anyway. Now, he said, gesturing to the burn scar, the towns had a fire defense in place of a liability. It was another tile in a mosaic landscape of pasture, forest, and old fire scars that could interrupt wildfire.

As tough as planned burns are for many to swallow, letting wildfires burn through towns—even evacuated ones—is an even tougher sell. And replacing pristine Sierra Nevada forests with a landscape able to survive both drought and the most destructive fires—say, open stands of ponderosa pine punctuated by fields of grass, picked over by goats or cattle—might feel like a loss.

Doing any of this well means adopting a change in philosophy as big as any change in predictive tech or science—one that would welcome fire back as a natural part of the environment. “We are not trying to save the landscape,” Castellnou says. “We are trying to help create the next landscape. We are not here to fight flames. We are here to make sure we have a forest tomorrow.”



Police are flying surveillance over Washington. Where were they last week?

As the world watched rioters take over the US Capitol on January 6, the lack of security was chilling. Some police officers stood their ground but were outnumbered and overwhelmed. Video showed another officer appearing to wave members of a pro-Trump mob past a police barrier, and some were even filmed taking selfies with the invaders.

Ahead of the inauguration, however, the government is responding with a show of force that includes ramping up surveillance measures that likely were not in place ahead of the riot.

Multiple surveillance aircraft have been tracked over DC in the last few days, according to data from the flight-tracking websites ADS-B Exchange and Flight Aware monitored by MIT Technology Review. A surveillance plane registered to Lasai Aviation, a contractor of the US Army, and likely equipped with highly sensitive radar was logged circling Capitol airspace in a racetrack pattern for several hours in the middle of the day on January 13. The same type of plane, also registered to Lasai Aviation, was previously spotted in Latvia near the border of Russia and Belarus. The Department of Defense has denied that the plane belongs to the US military.

Screenshot of the surveillance plane from ADS-B Exchange

In addition, two helicopters registered to the US Department of the Interior and operated by the US Park Police have been flying over the city. One has been spotted almost every day since January 10 and another was tracked in the air on January 11-13. The Park Police said the flights were part of routine maintenance, and the helicopters are frequent fliers in the city. There have also been regular reports of DC Metropolitan police helicopters over Washington since January 6.

This is not the first time such vehicles have been deployed in the skies above Congress in the past year. Over the summer, for example, the National Guard used an RC-26B reconnaissance craft carrying infrared and electro-optical cameras to monitor the Black Lives Matter protests in Washington; it had previously been used for reconnaissance in Iraq and Afghanistan. 

Jay Stanley, a senior policy analyst at the ACLU, says that the mob at the Capitol “was an attack on the core functions of our democracy.” From a civil liberties perspective, he says, increased surveillance is “certainly justified” to protect democracy, though transparency and policies around the use of technologies are essential. “We should scrutinize and interrogate the necessity for aerial surveillance in any situation,” he says.

But the level of surveillance and show of force at the Capitol stand in marked contrast to the apparent lack of security in place ahead of January 6. A search by MIT Technology Review found evidence of only one helicopter run by the DC police in the skies at the time of the Capitol mob. Currently, thousands of troops are stationed inside and outside the building, and the situational response is taking on a formality and sophistication akin to a military operation. While Stanley cautions that it is unlikely that increased surveillance would have dramatically changed the course of the assault, the disparity between then and now has left many experts wondering what went wrong before the Capitol riot, and why.

“There just didn’t seem to be any kind of a response,” says Seth Stoughton, an associate professor of criminology at the University of South Carolina. “That looks like a planning, leadership, or command-and-control failure.”

So what should have happened, and what went wrong? 

Advance notice to a heavily funded force

The potential threat on January 6 might have surprised some, but the danger was known and visible to law enforcement. According to the Washington Post, the FBI’s Norfolk Field Office sent a situational awareness report on January 5 about credible threats of violence at the Capitol. Hotels were booked up in the area, and there had been weeks of online discussion about organized violence. A leader of the Proud Boys was arrested in Washington, DC, two days before the rally with high-capacity firearm magazines. And most of all, of course, President Trump had been falsely telling his supporters for months that the election had been stolen from him, and that followers would have to “liberate” states. On the morning of the riot, he addressed the crowd and told them, “You will never take back our country with weakness.”

Despite all this, the US Capitol Police had prepared for a typical free-speech rally with only scattered violence, such as small fights breaking out in large crowds. There are no reports of the standard surveillance measures used ahead of potentially violent large-scale events, such as police videographers or pole-mounted cameras, and only one helicopter, registered to the DC police, performed aerial surveillance. Body cameras were also scarce: Capitol Police officers do not wear them.

Nor were resources an issue. The United States Capitol Police, or USCP, is one of the most well-funded police forces in the country. It is responsible for security across just 0.4 square miles of land, but that area hosts some of the most high-profile events in American politics, including presidential inaugurations, lying-in-state ceremonies, and major protests. The USCP is well-staffed, with 2,300 officers and civilian employees, and its annual budget is at least $460 million—putting it among the top 20 police budgets in the US. In fact, it’s about the size of the Atlanta and Nashville police budgets combined. For comparison, the DC Metropolitan Police Department—which works regularly with the USCP and covers the rest of the District’s 68 square miles—has a budget of $546 million.

The USCP is different from state and local departments in other important ways, too. As a federal agency that has no residents inside its jurisdiction, for example, it answers to a private oversight board and to Congress—and only Congress has the power to change its rules and budgets. Nor is it subject to transparency laws such as the Freedom of Information Act, which makes it even more veiled than the most opaque departments elsewhere in the country. 

All of this means there is little public information about the tools and tactics that were at the USCP’s disposal ahead of the riots. 

But “they have access to some pretty sophisticated stuff if they want to use it,” says Stoughton. That includes the resources of other agencies like the Secret Service, the FBI, the Department of Homeland Security, the Department of the Interior, and the United States military. (“We are working [on technology] on every level with pretty much every agency in the country,” the USCP’s then-chief said in 2015, in a rare acknowledgment of the force’s technical savvy.)

What should have happened

With such resources at its disposal, the Capitol Police would likely have made heavy use of online surveillance ahead of January 6. Such monitoring usually involves not just watching online spaces, but tracking known extremists who had been at other violent events. In this case, that would include the “Unite the Right” rally in Charlottesville, Virginia, in 2017 and the protest against coronavirus restrictions at the Michigan state capitol in 2020. 

Exactly what surveillance was happening before the riots is unclear. The FBI turned down a request for a comment, and the USCP did not respond. “I’d find it very hard to believe, though, that a well-funded, well-staffed agency with a pretty robust history of assisting with responding to crowd control situations in DC didn’t do that type of basic intelligence gathering,” says Stoughton. 

Ed Maguire, professor of criminal justice at Arizona State University, is an expert on protests and policing. He says undercover officers would usually operate in the crowd to monitor any developments, which he says can be the most effective surveillance tool to manage potentially volatile situations—but that would require some preparedness and planning that perhaps was lacking. 

Major events of this kind would usually involve a detailed risk assessment, informed by monitoring efforts and FBI intelligence reports. These assessments determine all security, staffing, and surveillance plans for an event. Stoughton says that what he sees as inconsistency in officers’ decisions to retreat or not, as well as the lack of an evacuation plan and the clear delay in securing backup, point to notable mistakes. 

This supports one of the more obvious explanations for the failure: that the department simply misjudged the risk. 

What seems to have happened

It appears that Capitol Police didn’t coordinate with the Park Police or the Metropolitan Police ahead of the rally—though the Metropolitan Police were staffed at capacity in anticipation of violence. Capitol Police Chief Steven Sund, who announced his resignation in the wake of the riots, also asserts that he requested additional National Guard backup on January 5, though the Pentagon denies this.

The USCP has also been accused of racial bias, along with other police forces. Departments in New York, Seattle, and Philadelphia are among those looking into whether their own officers took part in the assault, and the Capitol Police itself suspended “several” employees and will investigate 10 officers over their role.

But one significant factor might have altered the volatility of the situation, Maguire says: police clashes with the Proud Boys in the weeks and days before the event, including a violent rally in Salem, Oregon, and the arrest of the white supremacist group’s leader, Henry Tarrio, fractured the right wing’s assumption that law enforcement was essentially on its side. On January 5, Maguire had tweeted about hardening rhetoric and threats of violence as this assumption started to fall apart.

 “That fraying of the relationship between the police and the right in the few days leading up to this event, I think, are directly implicated in the use of force against police at the Capitol,” he says. In online comments on video of the confrontation in Oregon, he says, it’s clear there’s “a sense of betrayal” among the Proud Boys.

“A land grab for new powers”

Despite all the problems in the immediate response to the assault, investigations and arrests of the rioters have been taking place. The Capitol grounds are well-fitted with surveillance tools, many of which will be called on as investigations continue. A sea of Wi-Fi networks and cell towers capture mobile-phone data, and an expansive fleet of high-tech cameras covers most of the building. 

As of January 16, the FBI has collected 140,000 pieces of social media through an online portal asking for tips and images from the mob. The agency also has access to facial recognition software from Clearview AI, which reported a spike in use of its tool in the days after the riot.

But the lack of transparency into police tools, tactics, policies, and execution makes any attempt at connecting the dots speculation. Experts have been calling for formal investigations into exactly what happened ahead of January 6, because transparency into the intelligence analysis and operational decisions is the only way to determine the key points of failure. On January 15, the inspectors general of the Departments of Justice, Defense, the Interior and Homeland Security announced a joint investigation into the federal response.

As the inauguration of Joe Biden ramps up, all eyes will be on whether the security professionals are prepared. And their eyes will likely be on us, too, as surveillance continues to increase, though it’s unlikely the public will know about the true nature of that surveillance anytime soon. 

Stanley suggests we should remain vigilant about the impact of the fallout from the Capitol, cautioning that “the people who want various security powers and toys and so forth use an emergency to try and get it. We saw that after 9/11 and I think there’s going to be some of that now too.” 

He echoes the calls for investigation and transparency into the Capitol Police’s actions on January 6, but suggests that people remain skeptical. “Don’t let this become a land grab for new powers and surveillance activities because of law enforcement’s very failures,” he says.



Do your neighbors want to get vaccinated?

As the coronavirus vaccines have rolled out across the US, the process has been confusing and disastrous. States, left by the federal government to fend for themselves, have struggled to get a handle on the logistics of distribution. Many, including Georgia, Virginia, and California, have fallen woefully behind schedule.

But even if there were a perfect supply chain, there’s another obstacle: Not all Americans want the vaccine.

Survey data gathered through Facebook by Carnegie Mellon University’s Delphi Lab, one of the nation’s best flu-forecasting teams, showed that more than a quarter of the country’s population would not get vaccinated if a vaccine were available to them today. How people feel about receiving vaccinations varies widely by state and county. The percentage of respondents who would accept a vaccine falls as low as 48% in Terrebonne Parish, Louisiana, and peaks as high as 92% in Arlington County, Virginia.

The findings are extremely worrying. The fewer people who are vaccinated, the longer the virus will continue to ravage the country, and prevent us from returning to normal. “It’s one of those things that probably shouldn’t have surprised me,” says Alex Reinhart, an assistant teaching professor in statistics & data science, who was part of the research. “But when you look at the map, it’s still surprising to see.”

The good news—and there is some good news—is that this data could also help fight public hesitancy. The Delphi Lab has been helping the CDC to track and understand the spread of covid infections since the beginning of the pandemic. The latest survey will help the agency identify where to perform more targeted education campaigns. The research group is also working with several county-level health departments to inform local outreach.

The Delphi researchers collected the data via a large-scale survey that they have been operating through Facebook since April 2020. The lab works with the social media giant to reach as wide a cross-section of the US population as possible, and surfaces daily questions to a statistically representative sample of Facebook users. An average of 56,000 people participate daily, and the company itself never sees the results.

During the pandemic, the survey has included a variety of questions to understand people’s covid-related behaviors, including mask adherence, social distancing, and their mental health. Some of the results are fed into the lab’s coronavirus forecasting model, while others are summarized and given directly to public health officials and other academic researchers. The questions are regularly updated, and the vaccine acceptance question was added at the start of January—after the first vaccines had been authorized by the US government.

The map visualizes each county’s polling average from January 1 to January 14. For counties with too few daily respondents—fewer than 100—the Delphi researchers grouped the data from neighboring counties. This is reflected in the map, which is why various clusters of counties show up with the same percentage. The researchers also independently verified their results with some of the CDC’s own surveys and Pew Research.
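That pooling rule can be sketched in a few lines of Python. The column names, the neighbor map, and the aggregation details below are hypothetical stand-ins; the Delphi Lab has not published its pipeline in this form.

```python
import pandas as pd


def county_acceptance(responses: pd.DataFrame, neighbors: dict, min_n: int = 100) -> pd.Series:
    """Percent of respondents per county who say they would accept a vaccine.

    Counties with fewer than min_n respondents are pooled with their
    neighbors, and every county in the pool reports the pooled figure,
    which is why clusters of neighboring counties can show identical
    percentages. Assumes columns 'county_fips' and a boolean 'would_accept'.
    """
    counts = responses.groupby("county_fips").agg(
        n=("would_accept", "size"), accept=("would_accept", "mean")
    )
    results = {}
    for fips, row in counts.iterrows():
        if row["n"] >= min_n:
            results[fips] = 100 * row["accept"]
        else:
            pool = [fips] + [f for f in neighbors.get(fips, []) if f in counts.index]
            pooled = responses[responses["county_fips"].isin(pool)]
            results[fips] = 100 * pooled["would_accept"].mean()
    return pd.Series(results, name="pct_accept")
```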

Next, the researchers plan to expand their survey to understand why people are reticent about the vaccine. They’re also exploring questions that could help identify what blocks people from accessing vaccines, especially for at-risk populations.

This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.



Banks need to strike the right balance for digital transformation

Every financial institution is looking to digital transformation to meet rising customer expectations for speed and convenience, lower its operating cost, and fend off competition, including from tech companies moving into financial services. Some are spending over 10% of yearly revenue on technology investments, according to Bloomberg. “This is a huge investment and most financial institutions cannot support this for the long term,” says Michael Fei, SME banking CEO at OneConnect Financial Technology, an associate of Ping An Insurance.

The covid-19 pandemic has revealed how even financial institutions that considered themselves digitally advanced are, in reality, still wedded to analog processes at many points along the chain.

“For many financial institutions, this has been a wake-up call,” says Fei. “In the past, many had thought that if they have an online portal and a mobile application then that’s enough. But now they’ve realized it’s not. Some banks have online portals and mobile apps where you can apply for loans, but they still need to send items to the customer and carry out on-site inspection before they can process the loans, which hasn’t been possible during covid. Banks have had to reshape and redesign the whole process of their lending products.”

Banks have also realized their lack of truly deep customer knowledge, which is crucial to inform responsible and flexible decisions during an economic downturn as customer needs rapidly change.  

“Now that everything is digital, financial institutions are realizing how little they knew their customers,” says Tan Bin Ru, chief executive officer for Southeast Asia at OneConnect Financial Technology. “Customer hyper-personalization tools, to understand what products to offer, have been acknowledged conceptually for a long time but not implemented—now banks are moving towards it and really getting tools to do it.” Traditional banks that were not previously utilizing alternative datasets now want to integrate them more into secure lending, Tan says.

The power of partnerships

Banks have increasingly understood they need outside help to execute their digital transformation agenda. “Banks usually have very rigid systems and procedures,” says Fei. “For instance, if you want to launch a new product you have to follow the process, and it takes at least six months. In the age of digitalization, this doesn’t work, as customers want things immediately. This has put huge pressure on these financial institutions to build agile operations and systems to be able to respond to the needs of their customers.”

But the number of tech companies pushing into financial services can be overwhelming and not all of them have domain expertise, which can lead to misguided attempts to apply new technologies everywhere. Without experience of financial services, tech companies may also underestimate the trade-offs involved in deploying certain digital tools. 

OneConnect combines expertise in digital technology with deep knowledge of banking. Fei, who has past experience working at HSBC China and Bank of Langfang, a Chinese commercial bank, describes one partnership with a Chinese national bank to reimagine its customer service center as an illustration of why banking experience matters in digital reform. The lender was looking to transform a 6,000-person call center toward a more intelligent, AI-enabled approach with greater use of automation. But automating customer services must be done carefully; customers will not appreciate being handed off to a robot for certain sensitive or urgent inquiries where a human counterpart is desired.  

OneConnect built a knowledge map with the bank, to understand and anticipate what problem a customer is trying to solve with a given query, and then understanding when and where to apply automation versus human support. “This required extensive understanding of the business and the industry, which many technology companies do not have,” he says. “You need that, to know when to intervene, what should be done by robotics and what should be a human being. Many tech companies cannot offer this.”

Rather than advocating digital transformation across the board, OneConnect works to get the right balance between customization and integration, and to appreciate that banks are looking for a blend, or omnichannel approach. “Our banking customers, and their customers, want to be offline for certain things, and online for others; they want that flexibility,” says Tan.

A second partnership problem banks face is the sheer number of technology vendors and startups, which can be overwhelming and complicate their digital transformation journey. It is unclear which fintechs will survive and which will not; startups might offer an appealing technology, but if their underlying business model proves unviable, or they cannot raise sufficient funding to support their expansion, or they pivot to a new direction, a bank is exposed.

In many cases, banks take on many different fintechs because no single startup can manage the breadth of their needs, or because the bank wants to diversify its risk. “Since the digital journey is such a long process, a lot of banks feel they need to look at 15 to 20 fintechs to piece together their journey, but the more players they have, the more risk there is,” says Tan.

OneConnect solves both problems—an overly complicated vendor network and the risk of working with fledgling tech companies—by offering a broad sweep of turnkey solutions, with the commercial scale and security that customers can rely on. Typically, a bank will chart its desired journey and up to 80% of those solutions can be provided by OneConnect, says Tan. The company, publicly traded on the New York Stock Exchange, also draws on over 30 years of experience in financial services of its parent company, Ping An, described by The Economist as “a window into the future of finance.” “No other traditional financial-services group in the world comes close to rivaling Ping An’s ability to develop technologies and deploy them at such a scale,” the magazine recently wrote.

OneConnect: The journey so far

OneConnect has built a broad business in China, serving all of its major banks, 99% of its city commercial banks, and 53% of insurance companies. But its footprint is increasingly global, with over 50 international customers in more than 15 markets, including Singapore, Indonesia, Malaysia, Philippines, and Abu Dhabi.

The company has built new technology solutions to enhance pricing accuracy, such as an AI-based credit-scoring model built on alternative data for a credit bureau in Indonesia, and has supported Malaysian banks in developing user-friendly apps, digital portals, and onboarding. It is leveraging image recognition, a core enabler of “insur-tech” that allows insurers to quickly assess damage claims and pay out to eligible beneficiaries. OneConnect has partnered with Swiss Re, the European reinsurer, to develop a digital end-to-end solution for motor claims handling, based on AI-driven image recognition and advanced data analytics. The tool can analyze photos of vehicle damage, identify repair needs and costs within minutes, offer cash payments, and even provide value-added services, like directing drivers to a repair garage.

OneConnect is also helping build the fintech ecosystem by working with governments, regulators, and stakeholders. It is working with Singapore’s blockchain association to build the skills, literacy, and talent pool needed to enable innovation and has partnered with Abu Dhabi Global Market, a financial center in the United Arab Emirates, to support the development of a “digital lab,” a sandbox for fintechs to collaborate and develop their innovations.   

Working closely with its partners at home and abroad, OneConnect is helping the finance industry move swiftly into the digital era by leveraging the right tools at the right time, benefiting customers and finance institutions alike by widening access to services and lowering costs.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.



Worried about your firm’s AI ethics? These startups are here to help.

Rumman Chowdhury’s job used to involve a lot of translation. As the “responsible AI” lead at the consulting firm Accenture, she would work with clients struggling to understand their AI models. How did they know if the models were doing what they were supposed to? The confusion often came about partly because the company’s data scientists, lawyers, and executives seemed to be speaking different languages. Her team would act as the go-between so that all parties could get on the same page. It was inefficient, to say the least: auditing a single model could take months.

So in late 2020, Chowdhury left her post to start her own venture. Called Parity AI, it offers clients a set of tools that seek to shrink the process down to a few weeks. It first helps them identify how they want to audit their model—is it for bias or for legal compliance?—and then provides recommendations for tackling the issue.

Parity is among a growing crop of startups promising organizations ways to develop, monitor, and fix their AI models. They offer a range of products and services from bias-mitigation tools to explainability platforms. Initially most of their clients came from heavily regulated industries like finance and health care. But increased research and media attention on issues of bias, privacy, and transparency have shifted the focus of the conversation. New clients are often simply worried about being responsible, while others want to “future proof” themselves in anticipation of regulation.

“So many companies are really facing this for the first time,” Chowdhury says. “Almost all of them are actually asking for some help.”

From risk to impact

When working with new clients, Chowdhury avoids using the term “responsibility.” The word is too squishy and ill-defined; it leaves too much room for miscommunication. She instead begins with more familiar corporate lingo: the idea of risk. Many companies have risk and compliance arms, and established processes for risk mitigation.

AI risk mitigation is no different. A company should start by considering the different things it worries about. These can include legal risk, the possibility of breaking the law; organizational risk, the possibility of losing employees; or reputational risk, the possibility of suffering a PR disaster. From there, it can work backwards to decide how to audit its AI systems. A finance company, operating under the fair lending laws in the US, would want to check its lending models for bias to mitigate legal risk. A telehealth company, whose systems train on sensitive medical data, might perform privacy audits to mitigate reputational risk.

Parity includes a library of suggested questions to help companies evaluate the risk of their AI models. (Screenshot: Parity)

Parity helps to organize this process. The platform first asks a company to build an internal impact assessment—in essence, a set of open-ended survey questions about how its business and AI systems operate. It can choose to write custom questions or select them from Parity’s library, which has more than 1,000 prompts adapted from AI ethics guidelines and relevant legislation from around the world. Once the assessment is built, employees across the company are encouraged to fill it out based on their job function and knowledge. The platform then runs their free-text responses through a natural-language processing model and translates them with an eye toward the company’s key areas of risk. Parity, in other words, serves as the new go-between in getting data scientists and lawyers on the same page.
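Parity’s models are proprietary, but the general pattern, routing free-text answers toward a fixed set of risk areas, can be sketched simply. Everything below is hypothetical: the risk areas, the seed phrases, and the TF-IDF similarity matching are stand-ins for illustration, not Parity’s actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical risk areas, each described by a handful of seed phrases.
RISK_AREAS = {
    "legal": "regulatory compliance fair lending discrimination law audit",
    "reputational": "privacy breach public trust media backlash customers",
    "organizational": "staff turnover training accountability ownership process",
}


def route_answers(answers):
    """Assign each free-text survey answer to the most similar risk area."""
    areas = list(RISK_AREAS.values())
    vec = TfidfVectorizer().fit(areas + list(answers))
    sims = cosine_similarity(vec.transform(answers), vec.transform(areas))
    labels = list(RISK_AREAS)
    return [labels[int(row.argmax())] for row in sims]


print(route_answers([
    "We have never checked the lending model for discrimination or compliance issues.",
    "A privacy breach of customer data would be a media disaster for our brand.",
]))
# ['legal', 'reputational']
```

A production system would use far richer language models and human review, but the shape is the same: open-ended answers go in, and a ranked view of the company’s risk areas comes out.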

Next, the platform recommends a corresponding set of risk mitigation actions. These could include creating a dashboard to continuously monitor a model’s accuracy, or implementing new documentation procedures to track how a model was trained and fine-tuned at each stage of its development. It also offers a collection of open-source frameworks and tools that might help, like IBM’s AI Fairness 360 for bias monitoring or Google’s Model Cards for documentation.

Chowdhury hopes that if companies can reduce the time it takes to audit their models, they will become more disciplined about doing it regularly and often. Over time, she hopes, this could also open them to thinking beyond risk mitigation. “My sneaky goal is actually to get more companies thinking about impact and not just risk,” she says. “Risk is the language people understand today, and it’s a very valuable language, but risk is often reactive and responsive. Impact is more proactive, and that’s actually the better way to frame what it is that we should be doing.”

A responsibility ecosystem

While Parity focuses on risk management, another startup, Fiddler, focuses on explainability. CEO Krishna Gade began thinking about the need for more transparency in how AI models make decisions while serving as the engineering manager of Facebook’s News Feed team. After the 2016 presidential election, the company made a big internal push to better understand how its algorithms were ranking content. Gade’s team developed an internal tool that later became the basis of the “Why am I seeing this?” feature.

Gade launched Fiddler shortly after that, in October 2018. It helps data science teams track their models’ evolving performance, and creates high-level reports for business executives based on the results. If a model’s accuracy deteriorates over time, or it shows biased behaviors, Fiddler helps debug why that might be happening. Gade sees monitoring models and improving explainability as the first steps to developing and deploying AI more intentionally.

Arthur, founded in 2019, and Weights & Biases, founded in 2017, are two more companies that offer monitoring platforms. Like Fiddler, Arthur emphasizes explainability and bias mitigation, while Weights & Biases tracks machine-learning experiments to improve research reproducibility. All three companies have observed a gradual shift in companies’ top concerns, from legal compliance or model performance to ethics and responsibility.

“The cynical part of me was worried at the beginning that we would see customers come in and think that they could just check a box by associating their brand with someone else doing responsible AI,” says Liz O’Sullivan, Arthur’s VP of responsible AI, who also serves as the technology director of the Surveillance Technology Oversight Project, an activist organization. But many of Arthur’s clients have sought to think beyond just technical fixes to their governance structures and approaches to inclusive design. “It’s been so exciting to see that they really are invested in doing the right thing,” she says.

O’Sullivan and Chowdhury are also both excited to see more startups like theirs coming online. “There isn’t just one tool or one thing that you need to be doing to do responsible AI,” O’Sullivan says. Chowdhury agrees: “It’s going to be an ecosystem.”


