MIT Technology Review


Inside the rise of police department real-time crime centers

At a conference in New Orleans in 2007, Jon Greiner, then the chief of police in Ogden, Utah, heard a presentation by the New York City Police Department about a sophisticated new data hub called a “real time crime center.” Reams of information rendered in red and green splotches, dotted lines, and tiny yellow icons appeared as overlays on an interactive map of New York City: Murders. Shootings. Road closures. You could see the routes of planes landing at LaGuardia and the schedules of container ships arriving at the mouth of the Hudson River. 

In the early 1990s, the NYPD had pioneered a system called CompStat that aimed to discern patterns in crime data, since widely adopted by large police departments around the country. With the real time crime center, the idea was to go a step further: What if dispatchers could use the department’s vast trove of data to inform the police response to incidents as they occurred?

Back in Ogden, population 82,702, the main problem on Greiner’s mind was a stubbornly high rate of vehicle burglaries. As it was, the department’s lone crime analyst was left to look for patterns by plotting addresses on paper maps, or by manually calculating the average time between similar crimes in a given area. The city had recently purchased license-plate readers with money from a federal grant, but it had no way to integrate the resulting archive of images with the rest of the department’s investigations. It was obvious that much more could be made of the data on hand.

“I’m not New York City,” Greiner thought, “but I could scale this down with the right software.” Greiner called a former colleague who’d gone to work for Esri, a large mapping company, and asked what kinds of disparate information he might put on a map. The answer, it turned out, was anything you could put in a spreadsheet: the address history of people on parole—sorting for those with past drug, burglary, or weapons convictions—or the respective locations of car thefts and car recoveries, to see if joyrides tended to end near the joyrider’s home. You could watch police cars and fire trucks move around the city, or plot cell-phone records over time to look back at a suspect’s whereabouts during the hours before and after a crime. 

Eric Young, a 28-year veteran of the department, became Ogden’s chief of police in January.
NIKI CHAN WYLIE

In 2021, it might be simpler to ask what can’t be mapped. Just as Google and social media have enabled each of us to reach into the figurative diaries and desk drawers of anyone we might be curious about, law enforcement agencies today have access to powerful new engines of data processing and association. Ogden is hardly the tip of the spear: police agencies in major cities are already using facial recognition to identify suspects—sometimes falsely—and deploying predictive policing to define patrol routes. 

“That’s not happening here,” Ogden’s current police chief, Eric Young, told me. “We don’t have any kind of machine intelligence.” 

The city council rebuffed Greiner’s first funding request for a real time crime center, in 2007. But the mayor gave his blessing to pursue the project within the existing police budget. Greiner approached Esri and flew down to the company’s headquarters in Redlands, California. He “started up a little friendship” with Esri’s billionaire cofounder, Jack Dangermond, and spoke at the company’s convention, floating a plan to fly a 30-foot camera-equipped blimp over Ogden to monitor emergencies as they developed. (“I got beat up by Jay Leno for that,” Greiner said. The blimp never launched.) Since Ogden already had a subscription to Esri’s flagship product, ArcGIS, which it used for planning and public works, the company offered to build a free test site for a real time crime center (RTCC).

Around the country, the expansion of police technology has followed a similar pattern, driven more by conversations between police agencies and their vendors than between police and the public they serve. The Electronic Frontier Foundation, an advocacy group that tracks the spread of surveillance technology among local law enforcement agencies, currently counts 85 RTCCs in cities as small as Westwego, Louisiana, whose population has yet to crack 10,000. I traveled to Ogden to find answers to a question Greiner phrased this way: “What are we gonna do with this new tool that gets really close to your constitutional rights?” And as federal and state laws take their time to catch up to the wares on offer at conventions like Esri’s, who gets to decide how close is too close?


Ogden grew up in the late 19th century, the junction nearest to the spot where the two halves of the transcontinental railroad were finally stitched together in 1869. Marketed at the time as the “crossroads of the West,” it sits at the seam between two of the region’s defining natural features. On one side, the Wasatch Mountains form the westernmost edge of the Rockies; on the other, the Great Basin extends outward from the shores of the Great Salt Lake. Ogden’s mayor, Mike Caldwell, likes to say the railroad made Ogden “rich at the right time.” But the railroad also brought an unsavory reputation it is still trying to overcome. Local legend has it that Al Capone stepped off a train in the 1920s, did a lap around 25th Street, and declared Ogden too wild a town for him to stay. By the time Jon Greiner took over as police chief in 1995, the main challenges on 25th Street were panhandling and public drunkenness. Still, the city’s leadership sees the real time crime center as a linchpin of efforts to revitalize its downtown.

What’s much harder to evaluate is how the use of surveillance tools affects the relationship between officers and the residents they encounter in their daily rounds.

The RTCC occupies a dim triangular office on the second floor of the city’s public safety building. Much of the light comes from twin monitors on each of six desks that wind their way along the wall, augmented by two rows of wall-mounted displays overhead. There’s a cell-phone extraction machine in the back corner, and several drones stacked in hard cases. 

A team of seven analysts works in staggered shifts, monitoring police-radio traffic and working “requests for information” from detectives and patrol officers. Their supervisor, David Weloth, is a laid-back former detective with a neatly trimmed beard and a silver crew cut. Weloth retired from the Ogden City Police Department (OPD) in 2005, but he came back less than a year later to work as a crime analyst and has stayed on ever since.

When I arrived for a visit in February, OPD detective Heather West was scrolling through a queue of hundreds of photos captured by a new license-plate-reading system called Flock Safety, looking for a distinctive pickup truck—gray with a red camper shell—thought to have been used in a theft. The previous week, Weloth explained, Flock had helped the department recover five stolen vehicles in three days. Since they got it in December 2020, they’d queried the system more than 800 times. On searches without a plate number, though, looking for a particular kind or color of car, the algorithm had a tendency to veer off course. “For some reason, it likes red Mazda 3s,” West said, still looking at her screen.

Weloth introduced the team as Fox News played silently on a TV in the corner. West holds one of two OPD detective positions on the team, which also includes a sheriff’s deputy from surrounding Weber County and four civilian analysts with backgrounds in federal law enforcement. A former US Treasury officer was going through a statewide register of pawned goods, looking for matches with property reported stolen in Ogden.

Weloth had one of the analysts cue up a video from a recent homicide investigation, in which cell-phone records obtained by subpoena helped disprove key parts of a suspect’s story about his whereabouts on the night his girlfriend was murdered. Footage from a city-owned surveillance camera at Ogden’s water treatment plant allowed Weloth’s team to “put him where the phone said he was,” tightening the case for the prosecution. 

This was one of a few greatest hits that came up repeatedly in discussions about how Ogden uses the technology in its real time crime center. In another, in 2018, analysts tapped into a network of city-owned cameras to locate a kidnapping suspect after the woman he’d held managed to flag down an officer and provide a physical description. When officers arrived on scene, the man shot at them; police returned fire and killed him.

If there’s any good reason to deploy invasive technology, surely solving a murder and stopping a violent crime both qualify. What’s much harder to evaluate is how the use of surveillance tools affects the relationship between officers and the residents they encounter in their daily rounds, or how they change the collective understanding of the purpose of policing.

Dave Weloth, a retired police detective, directs the Ogden Police Area Tactical Analysis Center (formerly known as the Real Time Crime Center).
NIKI CHAN WYLIE

Take car theft. Recovering stolen cars has been an early success of the city’s network of license-plate readers. As Greiner recalled, thefts increase in the winter, “because people warm up their cars in the driveway, then go back inside and leave their keys in the ignition.” Today, Weloth told me, “running and unattendeds” still account for about a third of car thefts in the city. This includes an incident last November when a young mother left her 10-month-old in the back seat of her running car, which was stolen. Both the mayor and the chief of police told me the license-plate reader had been instrumental in finding the kid within two hours. But they didn’t mention that two women had found the baby crying on a front porch some miles away—and that the automatic reader had only helped them recover the car.

The police department maintains a web page advising residents on “10 Ways to reduce your vehicle from being stolen” and periodically sends community policing officers out to relay the message. Would a more robust public education program be a better way to reduce car theft than an intrusive citywide license-plate surveillance system? That’s not a question anyone at OPD appears to be asking.


When the RTCC launched, Weloth explained, his goal was to “close the gap between raw data and something that’s actionable.” To do that, he first had to figure out “What have we already paid for?” More than 100 city-owned surveillance cameras, installed by Ogden’s public works department after 9/11, were trained on sites like the parking lot of the fleet and facilities building, or the door to the city’s computer server room. In some places, the cameras could be controlled remotely. Analysts could review footage and pan, tilt, or zoom those cameras in accordance with requests from dispatch or officers in the field. 

This is what had allowed Joshua Terry, who does much of the real time crime center’s mapping work, to follow along during the 2018 kidnapping call, zeroing in on a dark figure on the sidewalk in a Dallas Cowboys jacket seconds before he darted out of view. “That’s the reason we have it on,” Terry told me, playing back the footage of the incident on one of the big screens. The goal is not, he says, to constantly surveil everyone but to use what tools the analysts can to aid active investigations. “We couldn’t care less what people are doing,” he says, even though “people think we sit here watching these cameras.” 

“I’d be bored to death,” a colleague said with a chuckle. 

Besides, Weloth pointed out, the system had accountability: “I can tell exactly who moved what camera, where, when.” 

When the state chapter of the American Civil Liberties Union called a city council member with concerns about the possible use of facial recognition, Weloth explained, he offered a tour of the RTCC. “We’re very cautious about stuff that’s not supported by law,” he said. “One mistake and we’re gonna pay the price.” 

The challenge is that for much of police surveillance technology, the most relevant law is the Fourth Amendment prohibition on “unreasonable searches” of people’s “persons, houses, papers, and effects.” The court system has yet to figure out how this applies to modern surveillance systems. As Justice Sonia Sotomayor wrote in a 2012 Supreme Court opinion, “Awareness that the Government may be watching chills associational and expressive freedoms. And the Government’s unrestrained power to assemble data that reveal private aspects of identity is susceptible to abuse.” 

Utah is one of 16 states with statutes that explicitly address automated license-plate readers; the OPD’s policy calls for two supervisors to sign off before querying a plate number against the database, and plate information can’t be stored for longer than nine months; it’s usually deleted within 30 days. Still, there’s no federal or state law that specifically regulates government use of surveillance cameras, and none of the department’s audits are published.

Sotomayor’s 2012 opinion was nonbinding (but widely cited), and it served mostly to point out that important issues haven’t been addressed in law. As Weloth had said when I first called to plan my visit, “We regulate ourselves extremely well.”


One afternoon, I accompanied Heather West, the detective who’d been perusing gray pickups in the license-plate database, and Josh Terry, the analyst who’d spotted the kidnapper with the Cowboys jacket, to fly a drone over a park abutting a city-owned golf course on the edge of town. West was at the controls; Terry followed the drone’s path in the sky and maintained “situational awareness” for the crew; another detective focused on the iPad showing what the drone was seeing, as opposed to where and how it was flying. 

Of all the gadgets under the hood at the real time crime center, drones may well be the most tightly regulated, subject to safety (but not privacy) regulations and review by the Federal Aviation Administration. In Ogden, neighbor to a large Air Force base, these rules are compounded by flight restrictions covering most of the city. The police department had to obtain waivers to get its drones off the ground; it took two years to develop policies and get the necessary approvals to start making flights. 

Joshua Terry, an analyst who does much of the real time crime center’s mapping work, with a drone.
NIKI CHAN WYLIE

The police department purchased its drones with a mind to managing large public events or complex incidents like hostage situations. But, as Dave Weloth soon found, “the more we use our drones, the more use cases we find.” At the real time crime center, Terry, who has a master’s in geographic information technology, had given me a tour of the city with images gathered on recent drone flights, clicking through to cloud-shaped splotches, assembled from the drone’s composite photographs, that dotted the map of Ogden. 

Above 21st Street and Washington, he zoomed in on the site of a fatal crash caused by a motorcycle running a red light. A bloody sheet covered the driver’s body, legs splayed on the pavement, surrounded by a ring of fire trucks. Within minutes, the drone’s cameras had scanned the scene and created a 3D model accurate to a centimeter, replacing the complex choreography of place markers and fixed cameras on the ground that sometimes leave major intersections closed for hours after a deadly collision.

No one seemed to give much thought to the fact that quietly, people who were homeless had become the sight most frequently captured by the police department’s drone program.

When the region was hit by a powerful windstorm last September, Terry flew a drone over massive piles of downed trees and brush collected by the city. When county officials saw the resulting volumetric analysis—12,938 cubic yards—that would be submitted as part of a claim to the Federal Emergency Management Agency, they asked the police department to perform the same service for two neighboring towns. Ogden drones have also been used to pinpoint hot spots after wildland fires, locate missing persons, and fly “overwatch” for SWAT team raids.

This flight was more routine. When I pulled into the parking lot, two officers from Ogden’s community policing unit looked on as West steered the craft over a dense stand of Gambel oak and then hovered over a triangular log fort on a hillside a couple of hundred yards away. Though they’d never encountered people on drone sweeps through the area, trash and makeshift structures were commonplace. Once the RTCC pinpointed the location of any encampments, the community service officers would go in on foot to get a closer look. “We get a lot of positive feedback from runners, hikers,” one officer explained. After one recent visit to a camp near a pond on 21st Street, he and the county social service workers who accompanied him found housing for two people they’d met there. When clearing camps, police also “try and connect [people] with services they need,” Weloth said. The department recently hired a full-time homeless outreach coordinator to help. “We can’t police ourselves out of this problem,” he said, comparing the department’s efforts to keep new camps from springing up to “pushing water uphill.”

Still, no one seemed to give much thought to the fact that quietly, people who were homeless had become the sight most frequently captured by the police department’s drone program. Of the 137 non-training flights made since May 2019, nearly half—62—were flyovers of homeless encampments, with regular flights over a parkway on the Ogden River and in woods by the railroad, whose owner, Union Pacific, employs its own private security as well. It was easy to see the appeal: if, instead of spending hours clambering through the woods, you could find people in minutes by looking down from on high, why not? 

“We’ve had a lot of homicides come out of those illegal encampments,” Ogden’s mayor, Mike Caldwell, told me. Chief Young cited two incidents to support Caldwell’s claim. The first was the 2018 murder of a homeless man, whose killer told police he considered homeless people a “problem.” The second was a fatal stabbing in an encampment near the railroad tracks, just outside city limits; the suspect arrested in the case was homeless himself. Both incidents were tragic examples of the well-documented vulnerability to violence of people without shelter. But does it follow that drones would be an effective deterrent? 

The idea that police were flying over the city’s open spaces to investigate homicides is also hard to square with the contention that the flights were part of the city’s homeless outreach. Aren’t those different activities, or shouldn’t they be? Either way, Caldwell said, “if it wasn’t the drone, it would be officers climbing over deadfall and going into those places. That keeps our officers safe, and gives us more bandwidth.”

One important function of resource constraints, though—bandwidth, in the mayor’s equation—is that they force governments, and citizens, to consider priorities. One Friday afternoon, I met Doug Young, a 49-year-old who has lived outdoors in Ogden on and off for the last 12 years. He wore a gray poncho and a cowboy hat with a pin in the shape of a cow’s skull. Young said he often saw drones overhead when he camped behind a local Walmart, and he had learned to distinguish police drones by the whirr of their motors. “If it stops violent crime, cool. If it’s for some petty bullshit, leave us the fuck alone,” he said. 

To Mayor Caldwell, this wasn’t a meaningful distinction. Asked whether there were some complaints or alleged crimes that weren’t serious enough to justify use of the RTCC’s most invasive technologies, he said, “I think we should use all the tools … The average everyday person wouldn’t even know that these tools are out there or that anything is being monitored.”

For Betty Sawyer, president of the Ogden chapter of the NAACP, that’s precisely the problem. Sawyer told me she wasn’t aware the city had license-plate readers and remotely monitored surveillance cameras until I called her for an interview. When she asked the department for more information, Chief Young shared a presentation he’d made before the City Council in December—one week before the new license-plate readers were deployed. “How many people are listening to weekly city council meetings?” she asked.  “If no one’s talking about it but it’s here—how, why, what’s the reason for it? Is that the best use of our dollar when we’re down officers? These are things that should be put up front, not after the fact.” 

Betty Sawyer, president of the Ogden NAACP, says the department should do more to engage city residents in conversations about new police technologies.
NIKI CHAN WYLIE

Last summer, as protests flared across the country in response to the police killing of George Floyd in Minneapolis, Sawyer spearheaded a group that held a series of meetings with the mayor and police chief. It was an effort to improve police–community relations in a city where no Black cop serves in a department of 126 sworn officers, and where the police force is less than 10% Hispanic, though Hispanic residents make up more than 30% of Ogden’s population. “Our whole goal is: How do we build in transparency so we can dispel the myths and speak to the truth of what you are doing?” she said. 

One risk for the police department is that the RTCC’s usefulness is, at least for some of the city, ultimately overshadowed by mistrust over cops’ ability to use their new powers with restraint. As Malik Dayo, who organized several Black Lives Matter protests in Ogden last summer, told me, “I can leave my house, drive to the store, and come back, and if [police] wanted to, they can figure out what time I left, what time I came back, and if I made any stops along the way.” Some cities have preempted similar objections with an avalanche of public data: in Southern California, the city of Chula Vista publishes routes and accompanying case numbers for every drone flight its police department conducts. Weloth assured me the checks and balances on Ogden’s license-plate readers would prevent the scenario Dayo described. Dayo was unmoved. “I think it’s gonna be abused,” he said. “I really do.”

The city’s leadership sees the real-time crime center as a linchpin of efforts to revitalize downtown.
NIKI CHAN WYLIE

Police tend to view all the tools at their disposal as part of the same basic continuum—drones and bicycles alike helping “to protect and serve.” After a few days in Ogden, though, I couldn’t help but think that the RTCC’s tools were also functioning as a kind of digital armor for a particular worldview. Was the department’s reliance on technology allowing it to do more with less, or was it letting the city ignore the complexities of its most urgent social problems?

Last August, a covid-19 outbreak at the Lantern House, Ogden’s largest homeless shelter, infected at least 48 residents and killed two. Confirmed cases were quarantined in a separate wing of the shelter, but people soon began to set up tents on the sidewalk outside, where 33rd Street dead-ended by the railroad tracks.

Among them was a man who asked me to use only his first name, Ryan, and said he no longer felt safe sleeping on closely spaced bunks: “You’re within four feet of five people.” Outside, people had to move their stuff twice a week for workers to clear trash, and sometimes human waste, from the area—there were no dumpsters, and no porta-potties—but it felt safer than being indoors. “We were staying so close together it was a health risk,” he said.

The police department set up a trailer with surveillance cameras atop a high pole to record what happened in the new camp. Through the fall, as the group living outside the shelter swelled to some 60 people in about 30 tents, the cameras captured several incidents of violence. A car window was smashed. Someone punched a pizza delivery driver in the face. 

On December 10, a Thursday, a team including police, firefighters, and county social workers cleared the encampment once and for all. “Up to this point, Ogden city has taken a moderated approach during the pandemic. However, the situation has now become untenable,” a city press release read, identifying the encampment as a source of crime and a drain on city resources. 

“Given the potential for the spread of COVID-19 and other communicable diseases often found in camps like these, risks from camp members spread throughout the city.” This was not the approach advocated by the Centers for Disease Control, which recommends that local governments “allow people who are living unsheltered or in encampments to remain where they are,” emphasizing that dispersing encampments increases the potential for disease spread.

According to a report in the local paper, 10 people accepted the city’s offer to go sleep inside the Lantern House, and the rest dispersed. If they found themselves setting up tents along the Ogden River, they’d be spotted soon enough by one of the police department’s drones. 


Paige Berhow, who retired as assistant police chief in the Ogden suburb of Riverdale and now lives in the city, became an officer in the early 1980s, when her on-duty equipment consisted of little more than a uniform and a revolver. Then came tasers and bulletproof vests and computer dashboards in every patrol car. “With every layer of stuff, that’s another layer of detachment from the public, too,” she told me. As Berhow pointed out, much of the expanding footprint of technology in police departments has come in the name of officer safety, though on-duty officer deaths have declined dramatically over the last several decades.

David Weloth hesitated when I asked what would change, 10 years into Ogden’s experiment, if the police department suddenly had to do without the RTCC, since renamed the Area Tactical Analysis Center. “We would have a very difficult time,” he said. “There’s no crime reduction strategy that happens without ATAC.” 

“There’s no crime reduction strategy that happens without ATAC.”

David Weloth

ATAC’s role in the police department’s relationship with the city has steadily expanded over time. The number of “requests for information” completed by the group was up by over 20% last year. The police department now has a say in the city’s master plan for surveillance cameras; the popularity of Amazon Ring’s camera-equipped doorbells, meanwhile, has given analysts a new trove of data to peruse. 

But Ogden releases very little data to shed light on ATAC’s role, beyond confirmation that it’s still growing. In the fall of 2019, when the city launched an expanded network of surveillance cameras that ATAC could monitor remotely, employees accessed them only a handful of times each month. They soon found reasons to peer through the cameras daily. From November 23, 2020, to February 23, 2021 (the most recent three months for which the city provided data), ATAC processed over 27,000 queries, or about 300 each day.

Suresh Venkatasubramanian, a computer scientist at the University of Utah who studies the social implications of algorithmic decision-making, worries that police departments have embraced novel tools without the resources or the expertise to properly evaluate their influence. How might the distribution of surveillance cameras, for instance, affect the department’s understanding of the distribution of crime? How could software like that sold by Palantir (a data analytics firm with roots in the intelligence community) amplify existing biases and distortions in the criminal justice system? “A lot of government agencies who are getting solicited by vendors would like … to scrutinize them properly, but they don’t know how,” he told me. “The idea coming from vendors is that more data is always better. That’s really not the case.”

To their credit, the analysts working at ATAC made good on Weloth’s pledge of openness. They were candid, and willing to explore potential pitfalls in their work. Terry, who did much of the mapping work at ATAC, had spent four years as a contractor with the National Geospatial-Intelligence Agency working on American drone strikes. He told the story of a fellow image analyst who misidentified what he thought was a group of men making IEDs under cover of darkness. On the strength of that analysis, Terry says, “they blew up kids carrying firewood.” When Terry came to Ogden, he was surprised to see that local police departments had access to tools as powerful as Palantir’s. Another analyst swiveled in his chair and chimed in. “The technology is getting better and the cost is coming down,” he said. “At some point will we get access to technology we regret having? Probably.”  

Rowan Moore Gerety is a writer in Phoenix, Arizona.


Inside the rise of police department real-time crime centers 2021/04/19 11:00

NASA has selected SpaceX’s Starship as the lander to take astronauts to the moon

Later this decade, NASA astronauts are expected to touch down on the lunar surface for the first time in decades. When they do, according to an announcement made by the agency, they’ll be riding inside SpaceX’s Starship vehicle.

NASA’s award of a $2.9 billion contract to build Starship, first reported by the Washington Post on April 16 and later confirmed by NASA, is a huge achievement for the space company founded and run by billionaire Elon Musk, as well as a massive blow to the hopes of its rivals. 

The lander: SpaceX bills Starship as a next-generation spacecraft meant to take humans to the moon and, one day, Mars. Measuring around 160 feet tall and 30 feet in diameter, Starship is a reusable vehicle that’s designed to take off and land on the ground vertically. The plan is for it to launch separately and station itself in lunar orbit until NASA astronauts arrive aboard the agency’s Orion crew capsule. Starship would simply ferry astronauts to the moon’s surface and back.

Surprising selection: Last year, NASA awarded three different groups contracts to further develop their own proposals for lunar landers: $135 million to SpaceX, $253 million to defense company Dynetics (which was working with Sierra Nevada Corporation), and $579 million to a four-company team led by Blue Origin (working with Northrop Grumman, Lockheed Martin, and Draper). 

SpaceX didn’t just receive the least amount of money—its proposal also earned the worst technical and management ratings. NASA’s associate administrator (now acting administrator) Steve Jurczyk wrote (pdf) that Starship’s propulsion system was “notably complex and comprised of likewise complex individual subsystems that have yet to be developed, tested, and certified with very little schedule margin to accommodate delays.” The uncertainties were only exacerbated by SpaceX’s notoriously poor track record with meeting deadlines.

What changed: Since then, SpaceX has gone through a number of different flight tests of several full-scale Starship prototypes, including a 10-kilometer high-altitude flight and safe landing in March. (It also exploded a few times.) According to the Washington Post, documents suggest NASA was enamored with Starship’s ability to ferry a lot of cargo to the moon (up to 100 tons), not to mention its $2.9 billion bid for the contract, which was far lower than its rivals’. 

“This innovative human landing system will be a hallmark in spaceflight history,” says Lisa Watson-Morgan, NASA’s program manager for the lunar lander system. “We’re confident in NASA’s partnership with SpaceX.”

What this means: For SpaceX’s rivals, it’s a devastating blow—especially to Blue Origin. The company, founded by Jeff Bezos, had unveiled its Blue Moon lander concept in 2019 and has publicly campaigned for NASA to select it for future lunar missions. Blue Moon was arguably the most well-developed of the three proposals when NASA awarded its first round of contracts.

For SpaceX, it’s a big vote of confidence in Starship as a crucial piece of technology for the next generation of space exploration. It comes less than a year after the company’s Crew Dragon vehicle was certified as the only American spacecraft capable of taking NASA astronauts to space. And it seems to confirm that SpaceX is now NASA’s biggest private partner, supplanting veteran firms like Northrop Grumman and shunting newer ones like Blue Origin further to the sidelines. However, there’s at least one major hurdle: Starship needs to launch using a Super Heavy rocket—a design that SpaceX has yet to fly.

For NASA, the biggest implication is that SpaceX’s vehicles will continue to play an ever-bigger role in Artemis, the lunar exploration program being touted as the successor to Apollo. Former president Donald Trump’s directive for NASA to return astronauts to the moon by 2024 was never actually going to be realized, but the selection of a single human lander concept suggests NASA may not miss that deadline by much. The first Artemis missions will use Orion, and the long-delayed Space Launch System rocket is expected to be ready soon.


NASA has selected SpaceX’s Starship as the lander to take astronauts to the moon 2021/04/16 23:13

Geoffrey Hinton has a hunch about what’s next for AI

Back in November, the computer scientist and cognitive psychologist Geoffrey Hinton had a hunch. After a half-century’s worth of attempts—some wildly successful—he’d arrived at another promising insight into how the brain works and how to replicate its circuitry in a computer.

“It’s my current best bet about how things fit together,” Hinton says from his home office in Toronto, where he’s been sequestered during the pandemic. If his bet pays off, it might spark the next generation of artificial neural networks—mathematical computing systems, loosely inspired by the brain’s neurons and synapses, that are at the core of today’s artificial intelligence. His “honest motivation,” as he puts it, is curiosity. But the practical motivation—and, ideally, the consequence—is more reliable and more trustworthy AI.

A Google engineering fellow and cofounder of the Vector Institute for Artificial Intelligence, Hinton wrote up his hunch in fits and starts, and at the end of February announced via Twitter that he’d posted a 44-page paper on the arXiv preprint server. He began with a disclaimer: “This paper does not describe a working system,” he wrote. Rather, it presents an “imaginary system.” He named it “GLOM.” The term derives from “agglomerate” and the expression “glom together.”

Hinton thinks of GLOM as a way to model human perception in a machine—it offers a new way to process and represent visual information in a neural network. On a technical level, the guts of it involve a glomming together of similar vectors. Vectors are fundamental to neural networks—a vector is an array of numbers that encodes information. The simplest example is the xyz coordinates of a point—three numbers that indicate where the point is in three-dimensional space. A six-dimensional vector contains three more pieces of information—maybe the red-green-blue values for the point’s color. In a neural net, vectors in hundreds or thousands of dimensions represent entire images or words. And dealing in yet higher dimensions, Hinton believes that what goes on in our brains involves “big vectors of neural activity.”
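In code, a vector of this kind is just an array of numbers. A minimal sketch (with made-up values) of the point-plus-color example above:

```python
import numpy as np

# A 3-dimensional vector: the xyz coordinates of a point in space.
point = np.array([1.0, 4.0, -2.5])

# A 6-dimensional vector: the same point plus its red-green-blue color values.
color = np.array([0.8, 0.2, 0.1])
point_with_color = np.concatenate([point, color])

print(point_with_color.shape)  # (6,)
```

In a real neural network the same idea scales up: a single vector with hundreds or thousands of entries stands in for an entire image or word.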

By way of analogy, Hinton likens his glomming together of similar vectors to the dynamic of an echo chamber—the amplification of similar beliefs. “An echo chamber is a complete disaster for politics and society, but for neural nets it’s a great thing,” Hinton says. The notion of echo chambers mapped onto neural networks he calls “islands of identical vectors,” or more colloquially, “islands of agreement”—when vectors agree about the nature of their information, they point in the same direction.

“If neural nets were more like people, at least they can go wrong the same ways as people do, and so we’ll get some insight into what might confuse them.”

Geoffrey Hinton

In spirit, GLOM also gets at the elusive goal of modeling intuition—Hinton thinks of intuition as crucial to perception. He defines intuition as our ability to effortlessly make analogies. From childhood through the course of our lives, we make sense of the world by using analogical reasoning, mapping similarities from one object or idea or concept to another—or, as Hinton puts it, one big vector to another. “Similarities of big vectors explain how neural networks do intuitive analogical reasoning,” he says. More broadly, intuition captures that ineffable way a human brain generates insight. Hinton himself works very intuitively—scientifically, he is guided by intuition and the tool of analogy-making. And his theory of how the brain works is all about intuition. “I’m very consistent,” he says.

Hinton hopes GLOM might be one of several breakthroughs that he reckons are needed before AI is capable of truly nimble problem solving—the kind of human-like thinking that would allow a system to make sense of things never before encountered; to draw upon similarities from past experiences, play around with ideas, generalize, extrapolate, understand. “If neural nets were more like people,” he says, “at least they can go wrong the same ways as people do, and so we’ll get some insight into what might confuse them.”

For the time being, however, GLOM itself is only an intuition—it’s “vaporware,” says Hinton. And he acknowledges that the acronym also nicely matches “Geoff’s Last Original Model.” It is, at the very least, his latest.

Outside the box

Hinton’s devotion to artificial neural networks (a mid-20th century invention) dates to the early 1970s. By 1986 he’d made considerable progress: whereas initially nets comprised only a couple of neuron layers, input and output, Hinton and collaborators came up with a technique for a deeper, multilayered network. But it took 26 years before computing power and data capacity caught up and capitalized on the deep architecture.

In 2012, Hinton gained fame and wealth from a deep learning breakthrough. With two students, he implemented a multilayered neural network that was trained to recognize objects in massive image data sets. The neural net learned to iteratively improve at classifying and identifying various objects—for instance, a mite, a mushroom, a motor scooter, a Madagascar cat. And it performed with unexpectedly spectacular accuracy.

Deep learning set off the latest AI revolution, transforming computer vision and the field as a whole. Hinton believes deep learning should be almost all that’s needed to fully replicate human intelligence.

But despite rapid progress, there are still major challenges. Expose a neural net to an unfamiliar data set or a foreign environment, and it reveals itself to be brittle and inflexible. Self-driving cars and essay-writing language generators impress, but things can go awry. AI visual systems can be easily confused: a coffee mug recognized from the side would be an unknown from above if the system had not been trained on that view; and with the manipulation of a few pixels, a panda can be mistaken for an ostrich, or even a school bus.

GLOM addresses two of the most difficult problems for visual perception systems: understanding a whole scene in terms of objects and their natural parts; and recognizing objects when seen from a new viewpoint. (GLOM’s focus is on vision, but Hinton expects the idea could be applied to language as well.)

An object such as Hinton’s face, for instance, is made up of his lively if dog-tired eyes (too many people asking questions; too little sleep), his mouth and ears, and a prominent nose, all topped by a not-too-untidy tousle of mostly gray. And given his nose, he is easily recognized even on first sight in profile view.

Both of these factors—the part-whole relationship and the viewpoint—are, from Hinton’s perspective, crucial to how humans do vision. “If GLOM ever works,” he says, “it’s going to do perception in a way that’s much more human-like than current neural nets.”

Grouping parts into wholes, however, can be a hard problem for computers, since parts are sometimes ambiguous. A circle could be an eye, or a doughnut, or a wheel. As Hinton explains it, the first generation of AI vision systems tried to recognize objects by relying mostly on the geometry of the part-whole relationship—the spatial orientation among the parts and between the parts and the whole. The second generation instead relied mostly on deep learning—letting the neural net train on large amounts of data. With GLOM, Hinton combines the best aspects of both approaches.

“There’s a certain intellectual humility that I like about it,” says Gary Marcus, founder and CEO of Robust.AI and a well-known critic of the heavy reliance on deep learning. Marcus admires Hinton’s willingness to challenge something that brought him fame, to admit it’s not quite working. “It’s brave,” he says. “And it’s a great corrective to say, ‘I’m trying to think outside the box.’”

The GLOM architecture

In crafting GLOM, Hinton tried to model some of the mental shortcuts—intuitive strategies, or heuristics—that people use in making sense of the world. “GLOM, and indeed much of Geoff’s work, is about looking at heuristics that people seem to have, building neural nets that could themselves have those heuristics, and then showing that the nets do better at vision as a result,” says Nick Frosst, a computer scientist at a language startup in Toronto who worked with Hinton at Google Brain.

With visual perception, one strategy is to parse parts of an object—such as different facial features—and thereby understand the whole. If you see a certain nose, you might recognize it as part of Hinton’s face; it’s a part-whole hierarchy. To build a better vision system, Hinton says, “I have a strong intuition that we need to use part-whole hierarchies.” Human brains understand this part-whole composition by creating what’s called a “parse tree”—a branching diagram demonstrating the hierarchical relationship between the whole, its parts and subparts. The face itself is at the top of the tree, and the component eyes, nose, ears, and mouth form the branches below.
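A parse tree of this kind can be sketched as a simple nested structure. The face example above might look like this (the specific parts and subparts here are illustrative, not taken from Hinton's paper):

```python
# A hypothetical parse tree for a face: the whole at the root,
# parts as branches, subparts as branches of branches.
face_parse_tree = {
    "face": {
        "eyes": {"left_eye": {}, "right_eye": {}},
        "nose": {"bridge": {}, "tip": {}},
        "mouth": {},
        "ears": {},
    }
}

def depth(tree):
    """Depth of the hierarchy: whole -> parts -> subparts."""
    if not tree:
        return 0
    return 1 + max(depth(child) for child in tree.values())

print(depth(face_parse_tree))  # 3: face -> nose -> tip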

One of Hinton’s main goals with GLOM is to replicate the parse tree in a neural net—this would distinguish it from neural nets that came before. For technical reasons, it’s hard to do. “It’s difficult because each individual image would be parsed by a person into a unique parse tree, so we would want a neural net to do the same,” says Frosst. “It’s hard to get something with a static architecture—a neural net—to take on a new structure—a parse tree—for each new image it sees.” Hinton has made various attempts. GLOM is a major revision of his previous attempt in 2017, combined with other related advances in the field.

“I’m part of a nose!”

A grid of GLOM vectors overlaid on an image of Hinton’s face.
MS TECH | EVIATAR BACH VIA WIKIMEDIA

A generalized way of thinking about the GLOM architecture is as follows: The image of interest (say, a photograph of Hinton’s face) is divided into a grid. Each region of the grid is a “location” on the image—one location might contain the iris of an eye, while another might contain the tip of his nose. For each location in the net there are about five layers, or levels. And level by level, the system makes a prediction, with a vector representing the content or information. At a level near the bottom, the vector representing the tip-of-the-nose location might predict: “I’m part of a nose!” And at the next level up, in building a more coherent representation of what it’s seeing, the vector might predict: “I’m part of a face at side-angle view!”

But then the question is, do neighboring vectors at the same level agree? When in agreement, vectors point in the same direction, toward the same conclusion: “Yes, we both belong to the same nose.” Or, further up the parse tree: “Yes, we both belong to the same face.”

Seeking consensus about the nature of an object—about what precisely the object is, ultimately—GLOM’s vectors iteratively average, location by location and layer upon layer, with neighboring vectors at the same level, as well as with predicted vectors from the levels above and below.

However, the net doesn’t “willy-nilly average” with just anything nearby, says Hinton. It averages selectively, with neighboring predictions that display similarities. “This is kind of well-known in America, this is called an echo chamber,” he says. “What you do is you only accept opinions from people who already agree with you; and then what happens is that you get an echo chamber where a whole bunch of people have exactly the same opinion. GLOM actually uses that in a constructive way.” The analogous phenomenon in Hinton’s system is those “islands of agreement.”
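The selective, similarity-weighted averaging Hinton describes can be illustrated in a few lines. This is a sketch of the echo-chamber idea, not the exact update rule from his paper: each vector averages with the others, but with weights that grow sharply with cosine similarity, so it listens mostly to like-minded neighbors.

```python
import numpy as np

def selective_average(vectors, sharpness=5.0):
    """One 'echo chamber' round: each vector moves toward a weighted
    average of all vectors, weighting similar vectors far more heavily,
    so agreement is amplified rather than washed out."""
    # Normalize rows so similarity is just a dot product (cosine similarity).
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ unit.T                      # pairwise cosine similarities
    weights = np.exp(sharpness * sims)        # strongly favor similar vectors
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ vectors

# Two rough clusters of 2-D "opinion" vectors pointing in different directions.
rng = np.random.default_rng(1)
group_a = np.array([1.0, 0.0]) + 0.1 * rng.standard_normal((5, 2))
group_b = np.array([0.0, 1.0]) + 0.1 * rng.standard_normal((5, 2))
vecs = np.vstack([group_a, group_b])

for _ in range(10):
    vecs = selective_average(vecs)
# Each cluster has now converged on its own "island of agreement",
# while the two islands remain distinct.
```

With plain (unweighted) averaging, the two groups would blur into one; the sharpened weights are what let separate islands survive, which is the property GLOM exploits.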

“Geoff is a highly unusual thinker…”

Sue Becker

“Imagine a bunch of people in a room, shouting slight variations of the same idea,” says Frosst—or imagine those people as vectors pointing in slight variations of the same direction. “They would, after a while, converge on the one idea, and they would all feel it stronger, because they had it confirmed by the other people around them.” That’s how GLOM’s vectors reinforce and amplify their collective predictions about an image.

GLOM uses these islands of agreeing vectors to accomplish the trick of representing a parse tree in a neural net. Whereas some recent neural nets use agreement among vectors for activation, GLOM uses agreement for representation—building up representations of things within the net. For instance, when several vectors agree that they all represent part of the nose, their small cluster of agreement collectively represents the nose in the net’s parse tree for the face. Another smallish cluster of agreeing vectors might represent the mouth in the parse tree; and the big cluster at the top of the tree would represent the emergent conclusion that the image as a whole is Hinton’s face. “The way the parse tree is represented here,” Hinton explains, “is that at the object level you have a big island; the parts of the object are smaller islands; the subparts are even smaller islands, and so on.”

Figure 2 from Hinton’s GLOM paper. The islands of identical vectors (arrows of the same color) at the various levels represent a parse tree.
GEOFFREY HINTON

According to Hinton’s long-time friend and collaborator Yoshua Bengio, a computer scientist at the University of Montreal, if GLOM manages to solve the engineering challenge of representing a parse tree in a neural net, it would be a feat—it would be important for making neural nets work properly. “Geoff has produced amazingly powerful intuitions many times in his career, many of which have proven right,” Bengio says. “Hence, I pay attention to them, especially when he feels as strongly about them as he does about GLOM.”

The strength of Hinton’s conviction is rooted not only in the echo chamber analogy, but also in mathematical and biological analogies that inspired and justified some of the design decisions in GLOM’s novel engineering.

“Geoff is a highly unusual thinker in that he is able to draw upon complex mathematical concepts and integrate them with biological constraints to develop theories,” says Sue Becker, a former student of Hinton’s, now a computational cognitive neuroscientist at McMaster University. “Researchers who are more narrowly focused on either the mathematical theory or the neurobiology are much less likely to solve the infinitely compelling puzzle of how both machines and humans might learn and think.”

Turning philosophy into engineering

So far, Hinton’s new idea has been well received, especially in some of the world’s greatest echo chambers. “On Twitter, I got a lot of likes,” he says. And a YouTube tutorial laid claim to the term “MeGLOMania.”

Hinton is the first to admit that at present GLOM is little more than philosophical musing (he spent a year as a philosophy undergrad before switching to experimental psychology). “If an idea sounds good in philosophy, it is good,” he says. “How would you ever have a philosophical idea that just sounds like rubbish, but actually turns out to be true? That wouldn’t pass as a philosophical idea.” Science, by comparison, is “full of things that sound like complete rubbish” but turn out to work remarkably well—for example, neural nets, he says.

GLOM is designed to sound philosophically plausible. But will it work?

Chris Williams, a professor of machine learning in the School of Informatics at the University of Edinburgh, expects that GLOM might well spawn great innovations. However, he says, “the thing that distinguishes AI from philosophy is that we can use computers to test such theories.” It’s possible that a flaw in the idea might be exposed—perhaps also repaired—by such experiments, he says. “At the moment I don’t think we have enough evidence to assess the real significance of the idea, although I believe it has a lot of promise.”

The GLOM test model inputs are ten ellipses that form a sheep or a face.
LAURA CULP

Some of Hinton’s colleagues at Google Research in Toronto are in the very early stages of investigating GLOM experimentally. Laura Culp, a software engineer who implements novel neural net architectures, is using a computer simulation to test whether GLOM can produce Hinton’s islands of agreement in understanding parts and wholes of an object, even when the input parts are ambiguous. In the experiments, the parts are 10 ellipses, ovals of varying sizes, that can be arranged to form either a face or a sheep.

With random inputs of one ellipse or another, the model should be able to make predictions, Culp says, and “deal with the uncertainty of whether or not the ellipse is part of a face or a sheep, and whether it is the leg of a sheep, or the head of a sheep.” Confronted with any perturbations, the model should be able to correct itself as well. A next step is establishing a baseline, indicating whether a standard deep-learning neural net would get befuddled by such a task. As yet, GLOM is highly supervised—Culp creates and labels the data, prompting and pressuring the model to find correct predictions and succeed over time. (The unsupervised version is named GLUM—“It’s a joke,” Hinton says.)

At this preliminary stage, it’s too soon to draw any big conclusions. Culp is waiting for more numbers. Hinton is already impressed nonetheless. “A simple version of GLOM can look at 10 ellipses and see a face and a sheep based on the spatial relationships between the ellipses,” he says. “This is tricky, because an individual ellipse conveys nothing about which type of object it belongs to or which part of that object it is.”

And overall, Hinton is happy with the feedback. “I just wanted to put it out there for the community, so anybody who likes can try it out,” he says. “Or try some sub-combination of these ideas. And then that will turn philosophy into science.”


Geoffrey Hinton has a hunch about what’s next for AI 2021/04/16 12:00

The $1 billion Russian cyber company that the US says hacks for Moscow

The hackers at Positive Technologies are undeniably good at what they do. The Russian cybersecurity firm regularly publishes highly regarded research, looks at cutting-edge computer security flaws, and has spotted vulnerabilities in networking equipment, telephone signals, and electric car technology.

But American intelligence agencies have concluded that this $1 billion company—which is headquartered in Moscow, but has offices around the world—does much more than that.

Positive was one of a number of technology businesses sanctioned by the US on Thursday for supporting Russian intelligence agencies. President Joe Biden declared a national emergency to deal with the threat he says Moscow poses to the United States. But the details of the sanctions released by the Treasury Department only cover a small fraction of what the Americans now believe about Positive’s role in Russia.

MIT Technology Review understands that US officials have privately concluded that the company is a major provider of offensive hacking tools, knowledge, and even operations to Russian spies. Positive is believed to be part of a constellation of private sector firms and cybercriminal groups that support Russia’s geopolitical goals, and which the US increasingly views as a direct threat. 

The public side of Positive is like many cybersecurity companies: staff look at high-tech security, publish research on new threats, and even have cutesy office signs that read “stay positive!” hanging above their desks. The company is open about some of its links to the Russian government, and boasts an 18-year track record of defensive cybersecurity expertise, including a two-decade relationship with the Russian Ministry of Defense. But according to previously unreported US intelligence assessments, it also develops and sells weaponized software exploits to the Russian government.

One area that’s stood out is the firm’s work on SS7 (Signaling System 7), a protocol that’s critical to global telephone networks. In a public demonstration for Forbes, Positive showed how it can bypass encryption by exploiting weaknesses in SS7. Privately, the US has concluded that Positive did not just discover and publicize flaws in the system, but also developed offensive hacking capabilities to exploit security holes that were then used by Russian intelligence in cyber campaigns.

Much of what Positive does for the Russian government’s hacking operations is similar to what American security contractors do for United States agencies. But there are major differences. One former American intelligence official, who requested anonymity because they are not authorized to discuss classified material, described the relationship between companies like Positive and their Russian intelligence counterparts as “complex” and even “abusive.” The pay is relatively low, the demands are one-sided, the power dynamic is skewed, and the implicit threat for non-cooperation can loom large.

Tight working relationship

American intelligence agencies have long concluded that Positive also runs actual hacking operations itself, with a large team allowed to run its own cyber campaigns as long as they are in Russia’s national interest. Such practices are illegal in the Western world: American private military contractors are under the direct, daily management of the agency they’re working for on cyber contracts.

US intelligence has concluded that Positive did not just discover and publicize flaws, but also developed offensive hacking capabilities to exploit security holes that it found

Former US officials say there is a tight working relationship with the Russian intelligence agency FSB that includes exploit discovery, malware development, and even reverse engineering of cyber capabilities used by Western nations like the United States against Russia itself. 

The company’s marquee annual event, Positive Hack Days, was described in recent US sanctions as “recruiting events for the FSB and GRU.” The event has long been famous for being frequented by Russian agents. 

NSA director of cybersecurity Rob Joyce said the companies being sanctioned “provide a range of services to the SVR, from providing the expertise to developing tools, supplying infrastructure and even, sometimes, operationally supporting activities,” Politico reported.

One day after the sanctions announcement, Positive issued a statement denying “the groundless accusations” from the US. It pointed out that there is “no evidence” of wrongdoing and said it provides all vulnerabilities to software vendors “without exception.”

Tit for tat

Thursday’s announcement is not the first time that Russian security companies have come under scrutiny. 

The biggest Russian cybersecurity company, Kaspersky, has been under fire for years over its relationships with the Russian government—eventually being banned from US government networks. Kaspersky has always denied a special relationship with the Russian government.

But one factor that sets Kaspersky apart from Positive, at least in the eyes of American intelligence officials, is that Kaspersky sells antivirus software to western companies and governments. There are few better intelligence collection tools than antivirus software, which is purposely designed to see everything happening on a computer and can even take control of the machines it occupies. US officials believe Russian hackers have used Kaspersky software to spy on Americans, but Positive—a smaller company selling different products and services—has no equivalent.

Recent sanctions are the latest step in a tit for tat between Moscow and Washington over escalating cyber operations, including the Russian-sponsored SolarWinds attack against the US, which led to nine federal agencies being hacked over a long period of time. Earlier this year, the acting head of the US cybersecurity agency said recovering from that attack could take the US at least 18 months.


The $1 billion Russian cyber company that the US says hacks for Moscow 2021/04/15 20:09

Building a high-performance data and AI organization

CxOs and boards recognize that their organization’s ability to generate actionable insights from data, often in real time, is of the highest strategic importance. If there were any doubts on this score, consumers’ accelerated flight to digital in this past crisis year has dispelled them. To help them become data driven, companies are deploying increasingly advanced cloud-based technologies, including analytics tools with machine learning (ML) capabilities. What these tools deliver, however, will be of limited value without abundant, high-quality, and easily accessible data.

In this context, effective data management is one of the foundations of a data-driven organization. But managing data in an enterprise is highly complex. As new data technologies come on stream, the burden of legacy systems and data silos grows, unless they can be integrated or ring-fenced.

Fragmentation of architecture is a headache for many a chief data officer (CDO), due not just to silos but also to the variety of on-premises and cloud-based tools many organizations use. Along with poor data quality, these issues combine to deprive organizations’ data platforms—and the machine learning and analytics models they support—of the speed and scale needed to deliver the desired business results.

To understand how data management and the technologies it relies on are evolving amid such challenges, MIT Technology Review Insights surveyed 351 CDOs, chief analytics officers, chief information officers (CIOs), chief technology officers (CTOs), and other senior technology leaders. We also conducted in-depth interviews with several other senior technology leaders. Here are the key findings:

  • Just 13% of organizations excel at delivering on their data strategy. This select group of “high achievers” delivers measurable business results across the enterprise. They are succeeding thanks to their attention to the foundations of sound data management and architecture, which enable them to “democratize” data and derive value from machine learning.
  • Technology-enabled collaboration is creating a working data culture. The CDOs interviewed for the study ascribe great importance to democratizing analytics and ML capabilities. Pushing these to the edge with advanced data technologies will help end users make more informed business decisions—the hallmark of a strong data culture.
  • ML’s business impact is limited by difficulties managing its end-to-end lifecycle. Scaling ML use cases is exceedingly complex for many organizations. The most significant challenge, according to 55% of respondents, is the lack of a central place to store and discover ML models.
  • Enterprises seek cloud-native platforms that support data management, analytics, and machine learning. Organizations’ top data priorities over the next two years fall into three areas, all supported by wider adoption of cloud platforms: improving data management, enhancing data analytics and ML, and expanding the use of all types of enterprise data, including streaming and unstructured data.
  • Open standards are the top requirements of future data architecture strategies. If respondents could build a new data architecture for their business, the most critical advantage over the existing architecture would be a greater embrace of open-source standards and open data formats.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.


Building a high-performance data and AI organization 2021/04/15 14:53
