



MIT Technology Review

No, coronavirus apps don’t need 60% adoption to be effective

With dozens of digital contact tracing apps already rolled out worldwide, and many more on the way, how many people need to use them for the system to work? One number has come up over and over again: 60%. 

That’s the percentage of the population that many of the public health authorities documented by MIT Technology Review’s Covid Tracing Tracker say they are targeting as they attempt to protect their communities from covid-19. The number is taken from an Oxford University study released in April. But since no nation has reached that level of adoption, many have criticized “exposure notification” technologies as essentially worthless.

But the researchers who produced the original study say their work has been profoundly misunderstood, and that in fact much lower levels of app adoption could still be vitally important for tackling covid-19.

“There’s been a lot of misreporting around efficacy and uptake … suggesting that the app only works at 60%—which is not the case,”  says Andrea Stewart, a spokeswoman for the Oxford team. In fact, she says, “it starts to have a protective effect” at “much lower levels.”

Where it went wrong

Because of the way such digital contact tracing and exposure notification apps work—by notifying users if their phone has been in proximity to the phone of somebody who later gets a diagnosis of covid-19—blanket coverage is preferable. The greater the number of users, the higher the likelihood that the system will help at-risk people self-quarantine before they can infect others.
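The mechanism described above can be sketched in a few lines. This is a toy illustration only, not the actual Google/Apple protocol or any shipping app; the beacon format, function names, and 14-day window are assumptions made for the sake of the example:

```python
from datetime import datetime, timedelta

# Each phone keeps a local log of the anonymous rotating beacons it has heard.
seen_beacons: dict[str, datetime] = {}   # beacon id -> when it was heard

def record_contact(beacon_id: str, heard_at: datetime) -> None:
    """Log a nearby phone's beacon identifier (broadcast over Bluetooth)."""
    seen_beacons[beacon_id] = heard_at

def check_exposure(published: set[str], window_days: int = 14) -> bool:
    """When diagnosed users' beacons are published, check the local log.

    Returns True if any published beacon was heard within the window.
    """
    cutoff = datetime.now() - timedelta(days=window_days)
    return any(b in published and t >= cutoff for b, t in seen_beacons.items())

record_contact("beacon-abc", datetime.now() - timedelta(days=2))
print(check_exposure({"beacon-abc"}))   # True: a match within the window
```

The matching happens on the device, which is why no central authority needs to learn who met whom; and the more phones that participate, the more real contacts show up in these local logs.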

But much of the debate over contact tracing apps has focused on the fact that reaching the 60% target seems almost impossibly difficult—especially because many people (including very young users, older users, and those with older model phones) may be unwilling or unable to download and use the software required.

Many media reports and analyses picked up on one sentence of the report that states: “Our models show we can stop the epidemic if approximately 60% of the population use the app.” 

But they have routinely omitted the second half of the sentence: “Even with lower numbers of app users, we still estimate a reduction in the number of coronavirus cases and deaths.” 

In fact, the Oxford model actually takes into account many of the factors that critics have been concerned about. The paper says that if 80% of all smartphone users download the app—a number that excludes groups less likely to have a smartphone and is equivalent to 56% of the overall population—this would be enough to suppress the pandemic on its own, without any other form of intervention.
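The arithmetic behind those two percentages is worth making explicit. The figures imply roughly 70% smartphone ownership, which is an inference from the article’s numbers rather than a value stated in it:

```python
# 80% uptake among smartphone users, with ~70% of the population owning a
# smartphone (the ownership share is inferred, since 0.56 / 0.80 = 0.70).
smartphone_share = 0.70   # assumed fraction of the population with a smartphone
app_uptake = 0.80         # fraction of smartphone users installing the app

population_coverage = smartphone_share * app_uptake
print(f"{population_coverage:.0%} of the overall population")  # 56%
```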

While lower rates of adoption mean such apps won’t beat the disease on their own, that is not the same as suggesting that lower usage makes the apps ineffective. Instead, if fewer people download the app, say the researchers, other prevention and containment measures will be required. These include social distancing, widespread testing, manual contact tracing, medical treatment, and regional shutdowns—that is, many of the same processes already being used around the world. 

Professor Christophe Fraser, co-lead on the contact tracing program at Oxford University’s Nuffield Department of Medicine and an independent scientific advisor to the UK government’s contact tracing efforts, led the research. He says the 60% figure seems to have taken on a life of its own.

“That goes to show how difficult it is to control the media narrative,” he says.

What level of adoption is needed?

Correcting the 60% assumption is important because the way apps are received can shape the way nations respond to both this pandemic and future disease outbreaks. Widespread belief that any participation below that threshold will result in failure could be a fatal mistake. 

Some countries have reached significant levels of adoption: Iceland has achieved around 40% usage, while others such as Qatar and Turkey have made downloading their apps mandatory.

But even though the researchers know that lower levels of adoption will be useful, they aren’t entirely sure what different ranges will actually mean. Still, every successful notification means a life potentially saved.

Fraser says his team had assumed that lower levels of usage might have very small benefits—but that, in fact, simulations show the upsides are significantly higher than they thought.

“The expectation going in was that app usage wouldn’t be very effective at low levels,” he says. “If you have 10% of people using the app, then the chance of contact between two people being detected is 10% of 10%, which is 1%—a tiny fraction. What we found in the simulation was that that actually isn’t the case. We’ve been working to understand why we actually see benefits of usage accruing.”
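Fraser’s quote is a simple independence argument: a contact is detected only when both parties run the app, so the naive expectation is that detection scales with the square of adoption. A minimal sketch of that naive model (which, as he says, the team’s simulations turned out to beat):

```python
def naive_detection_probability(p: float) -> float:
    """Chance a given contact is between two app users, assuming independence."""
    return p * p

for p in (0.10, 0.20, 0.40, 0.60):
    print(f"adoption {p:.0%} -> contact detected {naive_detection_probability(p):.0%}")
```

At 10% adoption this gives the 1% figure from the quote; the interesting empirical finding is that the realized benefit was larger than this square law predicts.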

Fraser also advocates continuously monitoring and auditing the functioning of the app so that it does what it promises.

And even if it doesn’t quell covid-19 on its own, digital contact tracing will be a part of the strategy against future disease outbreaks, he predicts. The lessons we learn here will pay off if covid-19 takes years to control, and if there are other pandemics in years to come.

“We know that public health is all about building trust,” Fraser says. “So how do we build an environment where people know that the data is being shared for good? People fear misuse of data, which we’ve seen in the digital space. How do we stop misuse while encouraging positive use of data? This is clearly an important area. The power to do good things increases as we share information, but we need frameworks.”

No, coronavirus apps don’t need 60% adoption to be effective 2020/06/05 12:00

The activist dismantling racist police algorithms

Hamid Khan has been a community organizer in Los Angeles for over 35 years, with a consistent focus on police violence and human rights. He talked to us on April 3, 2020, for a forthcoming podcast episode about artificial intelligence and policing. As the world turns its attention to police brutality and institutional racism, we thought our conversation with him about how he believes technology enables racism in policing should be published now.  

Khan is the founder of the Stop LAPD Spying Coalition, which has won many landmark court cases on behalf of the minority communities it fights for. It is perhaps best known for its advocacy against predictive policing. On April 21, a few weeks after this interview, the LAPD announced an end to all predictive policing programs.

Khan is a controversial figure who has turned down partnerships with groups like the Electronic Frontier Foundation (EFF) because of its emphasis on reform. He doesn’t believe reform will work. The interview has been edited for length and clarity. 

Tell us about your work. Why do you care about police surveillance?

The work that we do, particularly looking at the Los Angeles Police Department, examines how surveillance and the gathering, storing, and sharing of information have historically been used to cause real harm, to trace, track, monitor, and stalk particular communities: communities who are poor, who are black and brown, communities who would be considered suspect, and queer and trans bodies. So on various levels, surveillance is a process of social control.

Do you believe there is a role for technology in policing?

The Stop LAPD Spying Coalition has a few guiding values. The first one is that what we are looking at is not a moment in time but a continuation of history. Surveillance has been used for hundreds of years. Some of the earliest surveillance processes go back to lantern laws in New York City in the early 1700s. If you were an enslaved person, a black or an indigenous person, and you were walking out in public without your master, you had to carry an actual literal lantern, with the candle wick and everything, to basically identify yourself as a suspect, as the “other.”

Another guiding value is that there’s always an “other.” Historically speaking, there’s always a “threat to the system.” There’s always a body, an individual, or groups of people that are deemed dangerous. They are deemed suspect. 

The third value is that we are always looking to de-sensationalize the rhetoric of national security. To keep it very simple and straightforward, [we try to show] how the information-gathering and information-sharing environment moves and how it’s a process of keeping an eye on everybody.

“Algorithms have no place in policing.”

And one of our last guiding values is that our fight is rooted in human rights. We are fiercely an abolitionist group, so our goal is to dismantle the system. We don’t engage in reformist work. We also consider any policy development around transparency, accountability, and oversight a template for mission creep. Any time surveillance gets legitimized, then it is open to be expanded over time. Right now, we are fighting to keep the drones grounded in Los Angeles, and we were able to keep them grounded for a few years. And in late March, the Chula Vista Police Department in San Diego announced that they are going to equip their drones with loudspeakers to monitor the movement of unhoused people.

Can you explain the work the Stop LAPD Spying Coalition has been doing on predictive policing? What are the issues with it from your perspective?

PredPol was location-based predictive policing, in which a 500-by-500-foot area was identified as a hot spot. The other companion program, Operation Laser, was person-based predictive policing.

In 2010, we looked at the various ways that these [LAPD surveillance] programs were being instituted. Predictive policing was a key program. We formally launched a campaign in 2016 to understand the impact of predictive policing in Los Angeles, with the goal of dismantling the program, bringing this information to the community, and fighting back.

Person-based predictive policing claimed that for individuals called “persons of interest” or “habitual offenders,” who may have had some history in the past, a risk assessment tool could establish whether they were going to recidivate. So it was a numbers game. If they had any gun possession in the past, they were assigned five points. If they were on parole or probation, they were assigned five points. If they were gang-affiliated, they were assigned five points. If they’d had interactions with the police, like a stop-and-frisk, they would be assigned one point. And so individuals who were on parole or probation, or just minding their own business and rebuilding their lives, were placed in what became known as the Chronic Offender Program, unbeknownst to many of them.
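The point system Khan describes is plain addition. A minimal sketch, in which the field names are hypothetical and only the point values come from the interview:

```python
# Illustrative sketch of the point system described above. Field names are
# hypothetical; only the point values are taken from the interview.
POINTS = {
    "gun_possession": 5,
    "parole_or_probation": 5,
    "gang_affiliated": 5,
}
POINTS_PER_STOP = 1  # each police interaction, e.g. a stop-and-frisk

def chronic_offender_score(history: dict) -> int:
    """Sum the point values for the flags present in a person's record."""
    score = sum(v for k, v in POINTS.items() if history.get(k))
    score += POINTS_PER_STOP * history.get("police_stops", 0)
    return score

# A person on probation with two prior stops scores 5 + 2 = 7.
print(chronic_offender_score({"parole_or_probation": True, "police_stops": 2}))
```

The sketch makes the coalition’s critique concrete: the inputs are themselves products of past policing decisions, so the score compounds whatever bias produced them.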

“So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.”

Then, based on this risk assessment, with Palantir processing all the data, the LAPD created a list. They started releasing bulletins, which were like Most Wanted posters with these individuals’ photos, addresses, and histories, and put them in patrol cars. [They] started deploying license plate readers, the stingray, the IMSI-catcher, CCTV, and various other tech to track their movements, and then creating conditions on the ground to stop, harass, and intimidate them. We built a lot of grassroots power, and in April 2019 Operation Laser was formally dismantled. It was discontinued.

And right now we are going after PredPol and demanding that PredPol be dismantled as well. [LAPD announced an end to PredPol on April 21, 2020.] Our goal for the abolition and dismantlement of this program is not just rooted in garbage in, garbage out; racist data in and racist data out. Our work is really rooted in that it ultimately serves the whole ideological framework of patriarchy and capitalism and white supremacy and settler colonialism.

We released a report, “Before the Bullet Hits the Body,” in May 2018 on predictive policing in Los Angeles, which led to the city of Los Angeles holding public hearings on data-driven policing, which were the first of their kind in the country. We demanded a forensic audit of PredPol by the inspector general. In March 2019, the inspector general released the audit and it said that we cannot even audit PredPol because it’s just not possible. It’s so, so complicated.

Algorithms have no place in policing. I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.

Team leaders of Stop LAPD Spying Coalition Hamid Khan (right), Jamie Garcia (center), and Gen Dogon (left) in the Skid Row neighborhood of LA, where the coalition has its headquarters.

How do you ensure that the public understands these kinds of policing tactics? 

Public records are a really good tool to get information. What is the origin of this program? We want to know: What was the vision? How was it being articulated? What is the purpose for the funding? What is the vocabulary that they’re using? What are the outcomes that they’re presenting to the funder? 

“I’m a human, and I am not here that you just unpack me and just start experimenting on me and then package me.”

They [the LAPD] would deem an area, an apartment building, as hot spots and zones. And people were being stopped at a much faster pace [there]. Every time you stop somebody, that information goes into a database. It became a major data collection program. 

We demanded that they release the secret list that they had of these individuals. LAPD fought back, and we did win that public records lawsuit. So now we have a secret list of 679 individuals, which we’re now looking to reach out to. And these are all young individuals, predominantly about 90% to 95% black and brown. 

Redlining the area creates conditions on the ground for more development, more gentrification, more eviction, more displacement of people. So the police became protectors of private property and protectors of privilege.

What do you say to people who believe technology can help mitigate some of these issues in policing, such as biases, because technology can be objective? 

First of all, technology is not operating by itself. From the design to the production to the deployment to the outcome, there is bias constantly built in. It’s not just the biases of the people themselves; it’s the inherent bias within the system.

There are so many points of influence that, quite frankly, our fight is not for cleaning up the data. Our fight is not for an unbiased algorithm, because we don’t believe that, even mathematically, there could be an unbiased algorithm for policing at all.

What are the human rights considerations when it comes to police technology and surveillance?

The first human right would be to stop being experimented on. I’m a human, and I am not here that you just unpack me and just start experimenting on me and then package me. There’s so much datafication of our lives that has happened. From plantation capitalism to racialized capitalism to now surveillance capitalism as well, we are subject to being bought and sold. Our minds and our thoughts have been commodified. It has a dumbing-down effect as well on our creativity as human beings, as a part of a natural universe. Consent is being manufactured out of us.

With something like coronavirus, we certainly are seeing that some people are willing to give up some of their data and some of their privacy. What do you think about the choice or trade-off between utility and privacy? 

We have to really look at it through a much broader lens.  Going back to one of our guiding values: not a moment in time but a continuation of history. So we have to look at crises in the past, both real and concocted. 

Let’s look at the 1984 Olympics in Los Angeles. That led to the most massive expansion of police powers and militarization of the Los Angeles Police Department and the sheriff’s department under the guise of public safety. The thing was “Well, we want to keep everything safe.” But not only [did] it become a permanent feature and the new normal, but tactics were developed as well. Because streets had to be cleaned up, suspect bodies, unhoused folks, were forcibly removed. Gang sweeps supposedly started happening. So young black and brown youth were being arrested en masse. This is like 1983, leading to 1984.

By 1986-1987 in Los Angeles, gang injunctions became a permanent feature. This resulted in massive gang databases, with children as young as nine months old going into them. That became Operation Hammer, where they had gotten tanks and armored vehicles, used by SWAT for serving warrants on low-level drug offenses, going in and breaking down people’s homes.

Now we are again at a moment. It’s not just the structural expansion of police powers; we have to look at police now increasingly taking on roles as social workers.  It’s been building over the last 10 years. There’s a lot of health and human services dollars attached to that too. For example, in Los Angeles, the city controller came out with an audit about five years ago, and they looked at $100 million for homeless services that the city provides. Well, guess what? Out of that, $87 million was going to LAPD.  

Can you provide a specific example of how police use of technology is impacting community members?

Intelligence-led policing is a concept that comes out of England, out of the Kent Constabulary, and started about 30 years ago in the US. The central theme of intelligence-led policing is behavioral surveillance.  People’s behavior needs to be monitored, and then be processed, and that information needs to be shared. People need to be traced and tracked.  

“There is no such thing as kinder, gentler racism, and these programs have to be dismantled.”

One program, called Suspicious Activity Reporting, came out of 9/11; it lists several activities that are completely constitutionally protected as potentially suspicious. For example: taking photographs in public, using video cameras in public, walking into infrastructure and asking about hours of operation. The standard is “observed behavior reasonably indicative of preoperational planning of criminal and/or terrorist activity.” So you’re observing somebody’s behavior that “reasonably indicates”; there is no probable cause. It creates not a fact but a concern. That speculative, hunch-based policing is real.

We were able to get numbers from LAPD’s See Something, Say Something program. And what we found was that there was a 3:1 disparate impact on the black community. About 70% of these See Something, Say Something reports came from predominantly white communities in Los Angeles. So now a program is being weaponized and becomes a license to racially profile.

The goal is always to be building power toward abolition of these programs, because you can’t reform them. There is no such thing as kinder, gentler racism, and these programs have to be dismantled.

So, you really think that reform won’t allow for use of these technologies in policing?

I can only speak about my own history of 35 years of organizing in LA. It’s not a matter of getting better, it’s a matter of getting worse. And I think technology is furthering that. When you look at the history of reform, we keep on hitting our head against the wall, and it just keeps on coming back to the same old thing. We can’t really operate under the assumption that hearts and minds can change, particularly when somebody has a license to kill.

I’m not a technologist. Our caution is for the technologists: you know, stay in your lane. Follow the community and follow their guidance.

The activist dismantling racist police algorithms 2020/06/05 11:00

This startup is using AI to give workers a “productivity score”

In the last few months, millions of people around the world stopped going into offices and started doing their jobs from home. These workers may be out of sight of managers, but they are not out of mind. The upheaval has been accompanied by a reported spike in the use of surveillance software that lets employers track what their employees are doing and how long they spend doing it.

Companies have asked remote workers to install a whole range of such tools. Hubstaff is software that records users’ keyboard strokes, mouse movements, and the websites that they visit. Time Doctor goes further, taking videos of users’ screens. It can also take a picture via webcam every 10 minutes to check that employees are at their computer. And Isaak, a tool made by UK firm Status Today, monitors interactions between employees to identify who collaborates more, combining this data with information from personnel files to identify individuals who are “change-makers.” 

Now, one firm wants to take things even further. It is developing machine-learning software to measure how quickly employees complete different tasks and suggest ways to speed them up. The tool also gives each person a productivity score, which managers can use to identify those employees who are most worth retaining—and those who are not. 

How you feel about this will depend on how you view the covenant between employer and employee. Is it okay to be spied on by people because they pay you? Do you owe it to your employer to be as productive as possible, above all else?

Critics argue that workplace surveillance undermines trust and damages morale. Workers’ rights groups say that such systems should only be installed after consulting employees. “It can create a massive power imbalance between workers and the management,” says Cori Crider, a UK-based lawyer and cofounder of Foxglove, a nonprofit legal firm that works to stop governments and big companies from misusing technology. “And the workers have less ability to hold management to account.”

Whatever your views, this kind of software is here to stay—in part because remote work is normalizing it. “I think workplace monitoring is going to become mainstream,” says Tommy Weir, CEO of Enaible, the startup based in Boston that is developing the new monitoring software. “In the next six to 12 months it will become so pervasive it disappears.” 

Weir thinks most tools on the market don’t go far enough. “Imagine you’re managing somebody and you could stand and watch them all day long, and give them recommendations on how to do their job better,” says Weir. “That’s what we’re trying to do. That’s what we’ve built.”

Weir founded Enaible in 2018 after coaching CEOs for 20 years. The firm already provides its software to several large organizations around the world, including the Dubai customs agency and Omnicom Media Group, a multinational marketing and corporate communications company. But Weir claims also to be in late-stage talks with Delta Air Lines and CVS Health, a US health-care and pharmacy chain ranked #5 on the Fortune 500 list. Neither company would comment on whether or when it was preparing to deploy the system.

Weir says he has been getting four times as many inquiries since the pandemic closed down offices. “I’ve never seen anything like it,” he says.

Why the sudden uptick in interest? “Bosses have been seeking to wring every last drop of productivity and labor out of their workers since before computers,” says Crider. “But the granularity of the surveillance now available is like nothing we’ve ever seen.”

It’s no surprise that this level of detail is attractive to employers, especially those looking to keep tabs on a newly remote workforce. But Enaible’s software, which it calls the AI Productivity Platform, goes beyond tracking things like email, Slack, Zoom, or web searches. None of that shows a full picture of what a worker is doing, says Weir; it’s just checking whether you are working or not.

Once set up, the software runs in the background all the time, monitoring whatever data trail a company can provide for each of its employees. Using an algorithm called Trigger-Task-Time, the system learns the typical workflow for different workers: what triggers, such as an email or a phone call, lead to what tasks and how long those tasks take to complete.

Once it has learned a typical pattern of behavior for an employee, the software gives that person a “productivity score” between 0 and 100. The AI is agnostic to tasks, says Weir. In theory, workers across a company can still be compared by their scores even if they do different jobs. A productivity score also reflects how your work increases or decreases the productivity of other people on your team. There are obvious limitations to this approach. The system works best with employees who do a lot of repetitive tasks in places like call centers or customer service departments rather than those in more complex or creative roles.
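Enaible has not published how Trigger-Task-Time works internally, so any code can only be a guess. The sketch below assumes the simplest possible reading of the description: learn a typical duration per task type, then score completions against that baseline on a 0-100 scale, with 50 meaning “at baseline”:

```python
# Toy illustration of the Trigger-Task-Time idea. Everything here, from the
# baseline statistic to the 0-100 mapping, is an assumption for illustration;
# it is not Enaible's actual algorithm.
from statistics import median

def baseline(durations_by_task: dict[str, list[float]]) -> dict[str, float]:
    """Typical (median) completion time, in minutes, for each task type."""
    return {task: median(times) for task, times in durations_by_task.items()}

def productivity_score(task: str, minutes: float, base: dict[str, float]) -> float:
    """Map a completion time to 0-100: at-baseline scores 50, faster scores higher."""
    ratio = base[task] / minutes          # >1 means faster than typical
    return max(0.0, min(100.0, 50.0 * ratio))

history = {"reply_to_ticket": [8, 10, 12], "quality_check": [4, 5, 6]}
base = baseline(history)
print(productivity_score("reply_to_ticket", 8, base))   # 62.5: faster than typical
```

Even this toy shows why cross-job comparisons are fraught: the score is only as meaningful as the baseline, and baselines for repetitive tasks are far easier to learn than those for complex or creative work.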

But the idea is that managers can use these scores to see how their employees are getting on, rewarding them if they get quicker at doing their job or checking in with them if performance slips. To help them, Enaible’s software also includes an algorithm called Leadership Recommender, which identifies specific points in an employee’s workflow that could be made more efficient.

For some tasks, that might mean cutting the human out of the loop and automating it. In one example, the tool suggested that automating a 40-second quality-checking task that was performed by customer service workers 186,000 times a year would save them 5,200 hours. This meant that the human employees could devote more attention to more valuable work, improving customer-service response times, suggests Weir.

Business as usual

But talk of cost cutting and time saving has long been doublespeak for laying off staff. As the economy slumps, Enaible is promoting its software as a way for companies to identify the employees who must be retained—“those that are making a big difference in fulfilling company objectives and driving profits”—and keep them motivated and focused as they work from home.

The flipside, of course, is that the software can also be used by managers to choose whom to fire. “Companies will lay people off—they always have,” says Weir. “But you can be objective in how you do that, or subjective.” 

Crider sees it differently. “The thing that’s so insidious about these systems is that there’s a veneer of objectivity about them,” she says. “It’s a number, it’s on a computer—how could there be anything suspect? But you don’t have to scratch the surface very hard to see that behind the vast majority of these systems are values about what is to be prioritized.”

Machine-learning algorithms also encode hidden bias in the data they are trained on. Such bias is even harder to expose when it’s buried inside an automated system. If these algorithms are used to assess an employee’s performance, it can be hard to appeal an unfair review or dismissal. 

In a pitch deck, Enaible claims that the Dubai customs agency is now rolling out its software across the whole organization, with the goal of $75 million in “payroll savings” over the coming two years. “We’ve essentially decoupled our growth rate from our payroll,” the agency’s director general is quoted as saying. Omnicom Media Group is also happy with how Enaible helps it get more out of its employees. “Our global team needs tools that can move the needle when it comes to building our internal capacity without adding to our head count,” says CEO Nadim Samara. In other words, squeezing more out of existing employees.

Crider insists there are better ways to encourage people to work. “What you’re seeing is an effort to turn a human into a machine before the machine replaces them,” she says. “You’ve got to create an environment in which people feel trusted to do their job. You don’t get that by surveilling them.”

This startup is using AI to give workers a “productivity score” 2020/06/04 15:18

Social bubbles may be the best way for societies to emerge from lockdown

The news: Holing up with groups of friends or neighbors or other families during lockdown has given many people, especially those stuck home alone, a way to relieve isolation without spreading covid-19. These groups are known as bubbles, and new computer simulations described in Nature today show they may really work. 

Why this matters: As countries around the world leave or get ready to leave lockdown, we need to come up with ways to mix with other people without causing another spike in covid-19 infections, one that balances public health concerns with our social, psychological, and economic needs to interact. 

How to do that isn’t clear. Medical advisors recommend measures such as sheltering in place, avoiding people outside your household as much as possible, and keeping two meters apart when you do interact. Yet there is little research on the effectiveness of such social distancing. Previous studies have mainly looked at the impact of broad restrictions, such as stopping travel, canceling public gatherings, and closing schools—not the specifics of social interaction at a person-to-person level. 

Who is it safe to see? A team led by Per Block, a sociologist at the University of Oxford and the Leverhulme Centre for Demographic Science in the UK, simulated three different social distancing strategies and found that each gave a way to extend our social circles while keeping transmission of covid-19 relatively low—as long as we still stick to certain rules. 

The first strategy is mixing only among people with something in common, such as those who live in the same neighborhood or are the same age. Grouping employees together this way could reduce the risk of widespread transmission when businesses reopen, the researchers suggest. The second strategy is to stick to groups that already have strong social ties, such as friends who are also friends with each other.

Bubbles are best: The third strategy the team simulated was bubbling, in which a group chooses its own social circle—and then everyone stays within it. All three strategies were more effective at reducing transmission than random social distancing, where people reduce the number of people they see but still come in contact with a few individuals from different groups. But according to the simulations, bubbles are the best of the bunch: they delay peak infection rate by 37%, decrease the height of the peak by 60%, and result in 30% fewer infected individuals overall. The first strategy, sticking to a group of people with something in common, was the second most effective. 
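A deliberately crude toy model makes the contrast between random contacts and strict bubbles easy to see. This is not the Nature study’s model; the network size, contact counts, and transmission probability below are arbitrary illustration values:

```python
# Toy spread model: one seed infection, no recovery, fixed-size bubbles.
import random
random.seed(1)

N = 500            # people in the toy population
BUBBLE_SIZE = 5    # everyone is assigned to a fixed bubble of this size
CONTACTS = 4       # contacts per infected person per step (random strategy)
P_TRANSMIT = 0.2   # chance a single contact transmits the infection
STEPS = 30         # simulation steps

def simulate(bubbled: bool) -> int:
    """Return how many people end up infected, starting from one case."""
    bubbles = [list(range(i, min(i + BUBBLE_SIZE, N)))
               for i in range(0, N, BUBBLE_SIZE)]
    infected = {0}
    for _ in range(STEPS):
        new = set()
        for person in infected:
            if bubbled:
                pool = bubbles[person // BUBBLE_SIZE]    # only your own bubble
            else:
                pool = random.sample(range(N), CONTACTS)  # anyone at all
            for contact in pool:
                if contact not in infected and random.random() < P_TRANSMIT:
                    new.add(contact)
        infected |= new
    return len(infected)

print("random contacts, total infected:", simulate(False))
print("bubbled contacts, total infected:", simulate(True))
```

In this toy, infection can never leave a bubble, which is the mechanism the researchers credit: deliberately chosen, closed circles remove the cross-group links that random mixing leaves open.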

The researchers suggest that bubbles work well because they are built on a deliberate choice about who you will and won’t interact with, rather than depending on less deliberate social or geographic ties, which are more easily broken. 

Will this work for real? Sims are not real life. For a start, the researchers modeled relatively small networks of between 500 and 4,000 people. But the size did not make a significant difference to the effectiveness of the various strategies, which suggests that the results might also hold true for much larger populations. There’s also the question of public messaging: social distancing works best when the guidelines are as simple as possible. Muddying the message with more complicated rules may not work so well in reality.

Social bubbles may be the best way for societies to emerge from lockdown 2020/06/04 14:31

A drug that cools the body’s reaction to covid-19 appears to save lives

In an advance toward conquering covid-19, doctors in Michigan say an antibody drug may sharply cut the chance patients on a ventilator will die.

The problem: The pandemic viral disease is infecting millions, and for those who end up on a ventilator in an ICU, the odds are grim. More than half are dying.

The drug: Doctors at the University of Michigan set out to control the haywire immune reaction that pushes some covid-19 patients into a death spiral. To do it, they gave 78 patients on ventilators the drug tocilizumab, which blocks IL-6, a molecule in the body that sets off a reaction to an infection. (The drug is sold by Roche under the trade name Actemra.)

The result: The doctors say in a preprint that patients who got the drug were 45% less likely to die than those who didn’t. But there’s a big caveat, which is that the doctors knew which patients got the drug and which didn’t. Their picks for the drug-taking group could have been biased—people more likely to improve anyway, for example—so further studies are needed.

Emerging cocktail: In late May, Roche said it would start a trial to combine its IL-6 blocker with remdesivir, an antiviral drug with modest benefits that got emergency approval in the US for treating covid-19. That drug is meant to block the virus from replicating.

By combining the two drugs, doctors may be closing in on a cocktail able to cut the death rate from the virus, a step that would help society return to normal.

A drug that cools the body’s reaction to covid-19 appears to save lives 2020/06/04 01:51
