Four: Where Does Responsibility End?

When trust breaks down in the ‘self-managed’ digital world, who is responsible?

On 20 February 2016, in Kalamazoo, Michigan, a deadly rampage played out. Over the course of five hours, a forty-five-year-old Uber driver named Jason Brian Dalton became a mass killer, shooting six people dead and leaving two seriously injured. In between the separate incidents of bloodshed, Dalton went back to routinely picking up Uber fares.

The shootings began around 5.40 p.m. EST on a Saturday evening and the first victim was twenty-five-year-old Tiana Carruthers.1 Carruthers was crossing a parking lot with five children, including her young daughter, when a silver Chevy Equinox with Dalton at the wheel veered towards her. Dalton rolled down his window. He asked if her name was Maisie (the passenger he was actually circling the estate trying to find and pick up was named Maci). ‘No, I am not that person,’ Carruthers replied.

Something didn’t seem quite right about the driver. She yelled at the children to run. Dalton sped off but then quickly turned the car round and headed directly for the terrified Carruthers. Pulling out a Glock 9mm semi-automatic pistol, he shot her at least ten times, hitting her arms and legs, the last bullet lodging in her liver. Remarkably, she survived.

Until that day, Dalton was by all accounts an ordinary Joe. Neighbours described him as ‘a little odd and awkward’ but generally sociable. He worked as a loss adjuster for a local insurance company. He had been married for almost twenty years and had two children: a fifteen-year-old son and a ten-year-old daughter. Dalton became an Uber driver on 25 January 2016, about a month before the shooting.2 He wanted to earn some extra money to take his family to Disney World. Before that tragic evening, he had clocked up more than a hundred Uber trips and passengers had given him a very good rating after rides: an average of 4.73 out of a possible five stars.

On the day of the shooting, Dalton had run a few errands, including a visit to the local gun store. It wasn’t unusual–he loved guns and owned sixteen firearms. He turned on his Uber app around 4 p.m. Shortly after, he picked up a young man called Matt Mellen. It seemed like a normal ride until Dalton took a call from his son, at which point something abruptly shifted in his mood. He accelerated wildly and took Mellen on a hair-raising ride. ‘We were driving through medians [central reservations], through lawns, speeding along,’ Mellen later told the police.3 Dalton even side-swiped another car but seemed completely unfazed by what he had done. ‘He wouldn’t stop. He just kind of kept looking at me like, “Don’t you want to get to your friend’s house?” and I’m like, “I want to get there alive,”’ said Mellen. When the car finally screeched to a halt, Mellen jumped out.

Both Mellen and a concerned bystander put in calls to 911, describing the car and Dalton. Mellen specifically identified him as an Uber driver. ‘I don’t want somebody to get hurt,’ he told emergency services. They didn’t seem too concerned. Mellen then tried getting hold of someone at Uber, to get the car off the road. Uber didn’t seem to prioritize the call, even though Dalton’s whereabouts could have been easily located through GPS tracking on the app.4 It was the first of many alarm bells that would go unheeded.

Soon after that, Mellen’s fiancée posted Dalton’s Uber photo on Facebook with a lengthy warning: ‘ATTENTION Kzoo peeps!!! This Uber driver named JASON drives a silver Chevy Equinox is NOT a safe ride!’ she wrote. ‘Hoping this man will be arrested or hospitalized soon if he has a medical condition causing his behaviour.’ Instead, Dalton went from his crazy ride with Mellen to shooting Carruthers.

After that first shooting, Dalton swapped his car for his wife’s black Chevrolet. Carole Dalton later told police he had seemed a little ‘troubled’; he had also told her not to go back to work but to fetch the kids, stay inside and lock all the doors. Dalton, meanwhile, went back to picking up regular Uber fares. One passenger, @IamKeithBlack, tweeted that he got a ride with him at 8 p.m., posting a clear screenshot of the driver, and gave Dalton a five out of five star rating after his ride. When he heard about the shootings, he tweeted ‘Lucky to be alive’.5

Others weren’t so lucky. In the lot of a nearby brightly lit car dealership around 10 p.m. that night, seventeen-year-old Tyler Smith and his father, Rich, were looking at vehicles, while Tyler’s girlfriend, Alexis, waited in the car. Dalton drove in, parked and walked up to the father and son, fatally shooting both of them. A terrified Alexis hid until he drove off.

Fifteen minutes later, in a final burst, Dalton killed four older women and critically injured a fourteen-year-old girl. Nothing connected the victims–male and female, white and black, young and old–and there was no discernible reason they were targeted.

Late Saturday night is one of the busiest periods on Uber. Remarkably, after killing six people and going home to change guns, Dalton carried on picking up revellers requesting rides. By now, the shootings were all over the news. Some passengers had heard there was a mass murderer called Dalton, an Uber driver, on the loose in Kalamazoo, yet they continued to use the app to get to where they wanted to go. A passenger named Marc Dunton even asked Dalton, ‘You’re not that guy going around killing people, are you?’

‘Wow,’ Dalton answered. ‘That is crazy. No way–I’m not that guy.’

Only one passenger made the connection and refused the ride. The young woman’s father had texted and called several times to warn her. She requested an Uber around 12.30 a.m. ‘Jason. Chevy Equinox’ came up on the phone as he was the nearest driver. She cancelled the ride. The same thing happened on her next attempt. A few minutes later, Dalton was arrested in the parking lot of a downtown Kalamazoo bar. When police officers asked the shooter to explain his motive, he replied, ‘I don’t think there is a why.’

During his interrogation Dalton said he recognized the Uber logo as being the religious Eastern Star symbol. He said a horned ‘devil figure’ would pop up on his phone through the app and would cast an intoxicating spell over him. ‘It would give you an assignment and it would literally take over your whole body,’ he said.6 When asked why he randomly shot people, he calmly claimed, ‘The Uber app made me do it… It just had a hold of me.’ He has since been charged with six counts of open murder, two counts of attempted murder and eight counts of felony firearm.

The day after the shooting, Uber issued a press release. ‘We are horrified and heartbroken at the senseless violence in Kalamazoo,’ announced Uber’s chief security officer, Joe Sullivan. ‘Our hearts and prayers are with the families of the victims of this devastating crime.’ A few days later, the company asserted that the rampage could not have been predicted. ‘There were no red flags, if you will,’ said Sullivan. ‘Overall his rating was good, 4.73 out of five.’ Perhaps realizing that pointing to Dalton’s trust rating was not the best line of defence, he later added: ‘As this case shows, past behaviour doesn’t always predict how people will behave.’

It’s true, there may have been no reliable way to predict that Dalton would morph from Uber driver to psychotic mass shooter during a single shift. But why didn’t anyone seem to respond with any urgency when Mellen first contacted Uber about an hour before the first murder? Turns out, the complaint wasn’t prioritized by Uber’s customer response team because it wasn’t explicitly about violence. ‘He said the gentleman was driving erratically,’ an Uber spokesperson said, pausing. ‘Remember we’re doing three million rides a day. How do you prioritize that feedback and how do you think about it?’7 Still, killings aside, you would think side-swiping a car, speeding and driving over the central reservation might raise some immediate alarm about a driver’s fitness to be transporting passengers. Perhaps no one could have foreseen such tragic consequences. But whose job is it to respond and act?

The horrific shootings intensified scrutiny on how Uber decides who is fit and safe to be a driver. The company claims it spends tens of millions of dollars every year performing background checks on applicants. San Francisco District Attorney George Gascón and many other critics have called background checks conducted without fingerprints ‘completely worthless’.8 Dalton had passed the checks before he started driving on the platform. But here is the problem: he had no prior criminal history, ever. He came out clean as a whistle.

Uber doesn’t run the fingerprint checks that taxi and limo companies are required to do. But would they help anyway? ‘As the Equal Employment Opportunity Commission (EEOC) has emphasized, background checks have limited predictive value and can have a disparate impact on minority drivers,’ says Brishen Rogers, an associate professor of law at Temple University who is researching the social costs of Uber.9 That said, fingerprinting might have at least revealed that Dalton owned sixteen guns, assuming he held a Federal Firearms License.

While Dalton’s crime was by far the worst an Uber driver has committed, there has been a troubling stream of other serious incidents. Uber drivers from Boston to Los Angeles, Delhi to Sydney, have been arrested for sexual assault, rape, kidnapping, theft and drink-driving while carrying passengers. In April 2016, a driver was arrested for slashing a passenger’s neck. On 23 May 2016, another was accused of strangling a student in a parking lot.10

In February 2017, the company was hit with sexual harassment allegations from former employees, who accused it of fostering a toxic, misogynist culture. Shortly after, a damning video surfaced online of then CEO Travis Kalanick verbally abusing a US driver named Fawzi Kamel, who had complained to him about falling fares and lower pay. ‘People are not trusting you any more,’ said Kamel. ‘I lost ninety-seven thousand dollars because of you. I’m bankrupt because of you.’

‘Bullshit,’ Kalanick angrily retorted, telling the driver his problems were his own fault. ‘Some people don’t like to take responsibility for their own shit. They blame everything in their life on someone else.’11 A fuming Kalanick then slammed the door. He later apologized for his actions.

Five days later, it was revealed that Uber had secretly used a software tool called Greyball, designed to identify city officials attempting sting operations to catch Uber drivers violating local regulations. Then Google’s self-driving car outfit, Waymo, filed a lawsuit accusing Uber of stealing technical trade secrets to fuel its own self-driving car research. Following this string of scandals, Jeff Jones, Uber’s company president, and six other key executives resigned. Kalanick, the notoriously hard-nosed co-founder, resigned as CEO in June 2017.

Clearly, for all its success–at its latest valuation of $68 billion it is the most valuable private start-up in history–Uber has had many serious breaches of trust.12 And yet more than 5 million people every day, myself included, still tap the Uber app, and within minutes get in a car with a total stranger, often without a second thought.13 We have, in a sense, outsourced our capacity to trust to an algorithm, and that trust, perhaps for convenience’s sake, has proved hard to destroy. The question is, where does Uber’s responsibility start and finish?

There are dangerous taxi drivers and, for that matter, dangerous people in any industry. Here’s the difference: in Uber’s terms of service, the company denies any liability for how third-party drivers–whom Uber considers to be ‘independent contractors’–behave on its platform. The company says it is merely facilitating the needs of people who want to drive, and you are getting a ride from them in their car, not a company car. ‘Your day belongs to you,’ Uber enthuses to would-be drivers. Uber takes up to a 25 per cent cut of the total fare, a service fee, to play the role of go-between.14 The company claims it can’t control what drivers do on the job as they are interacting with an automated system delivered primarily via an app. Uber, however, is not a neutral platform, like a phone line, simply matching supply and demand. It controls surge pricing that temporarily raises fares, and drivers can be suspended for not accepting enough rides or for low passenger ratings. Uber has been involved in more than 170 lawsuits in the US alone, from class-action safety complaints to price gouging to data failures to privacy practices to the biggest lawsuits of all, the misclassification of drivers.15

Issues of accountability are incredibly complex in an age when platforms offer branded services without owning any assets or employing the providers. Tom Goodwin, a senior vice president at Havas Media, put it well when he wrote in an article: ‘Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate. Something interesting is happening.’16

When disasters such as the Kalamazoo killings happen, they raise the question of where accountability should lie when things go wrong.

It’s much easier to know who to blame when traditional brands breach trust. Take the Tesco scandal that took place in January 2013.17 More than 10 million hamburgers and other meat products were withdrawn from supermarkets after traces of horsemeat were discovered in some of its beef products. The disclosure sparked a national outcry and it became one of the biggest food scandals of the twenty-first century. Even the then prime minister got involved. David Cameron reassured the British people that everything possible would be done to address a ‘very shocking crime’.18

In the wake of the scandal, Tesco issued an ‘unreserved apology’ and promised to introduce a robust new DNA-testing system to ensure the food customers buy is exactly as the label says. How did the 29 per cent horsemeat in the Tesco burger, falsely labelled ‘pure beef’, get in there? Although Tesco publicly accepted responsibility for the fiasco, it appeared to pin much of the blame on its supplier, Silvercrest, owned by the ABP Food Group, which had ‘breached the company’s trust’.19

When a customer shops at Tesco, their trust clearly lies in the supermarket brand, what they experience in the store and the products they buy. Tesco, the company, has to behave in trustworthy ways so that its shoppers trust it and its products. But where does trust ultimately lie with platforms?

When I get in a car with a stranger, is it the driver I am trusting? Have I placed some faith in Uber, the company, its team? Am I trusting the Uber brand? Perhaps I have confidence in the platform itself, the app, payments, rating system and its mysterious pricing algorithm? Some of the answers lie in the history of trust between people, companies and brands.

There was a time when people lived in tiny communities, hamlets made up of perhaps no more than a hundred people. Everybody knew everybody else and relationships were tight-knit. People’s trustworthiness, or lack of it, was evident to everyone, given the close proximity.

As hamlets turned into villages and small towns, the population tipped well above what has become known as ‘Dunbar’s number’. The famous University of Oxford psychologist and anthropologist Robin Dunbar found that our brains can, on average, maintain only a limited number of people, around 150, in our social group. Yes, you can friend 500, even 5,000 people on your Facebook page, but Dunbar asserts that it is hard for us to maintain stable, meaningful relationships (online and offline) beyond 150. Within that 150, an inner circle of fifteen is the very limited number of people that you turn to for support when you most need it. And this circle of meaningful relationships is fluid: the friend you confide in this week may not be the person you turn to the following month. On the flip side, our brains can handle group sizes of up to 500 at what Dunbar calls the ‘acquaintance level’20–put simply, the people with whom we can put a name to a face.

When people moved into larger towns, with populations way above Dunbar’s number, a close circle of trust, based on direct knowledge of each other, was no longer possible. Our reputations became an essential asset. If the baker offered good bread, people would buy it and others would hear about its quality. Equally, if the local blacksmith did shoddy work or someone failed to pay back a loan, word would get round. That dynamic kept most people up to the mark. In his classic book The Evolution of Cooperation, the political scientist Robert Axelrod called this the ‘shadow of the future’,21 referring to the idea that people behave better or more cooperatively when they know they are likely to meet again (as opposed to a one-off encounter) and might be judged on previous behaviour.22 These local traders knew that how they behaved in the present would shape their future prospects. Just like the Maghribi traders, it’s the promise of benefits from continued cooperation that helps keep us in line.

Even the earliest incarnations of what we now call a ‘brand’ first rode on the back of personal reputation. The agricultural machinery giant John Deere was founded in the 1830s by a young entrepreneurial blacksmith living in Illinois who had invented an innovative plough. The Mars brand empire had humble beginnings in the kitchen of Frank C. Mars’s home in Tacoma, Washington, when he started making and selling butter-cream candy. With these early brands, goods and services for the most part were associated with specific people, a name and face, not large corporations.

In the late 1800s, as cities expanded and goods became mass-produced, trust needed to keep up with the pace and scale of industrialization. As local merchants became massive companies, person-to-person trust was no longer viable. So how were people to know the quality of the goods and services they were buying? Take beer, a product that could easily be watered down.

Established in 1777 by William Bass, the Bass Brewery grew into one of the largest beer companies in England. By the nineteenth century business was booming, and in 1876 the brewery registered its ale’s distinctive red triangle symbol and brand name to assure people of its quality.23 It was the very first trademark to be registered under the United Kingdom’s Trade Marks Registration Act. A new kind of branding was born.

‘Brands arose as a way to compensate for the dehumanizing effects of the Industrial Age,’ writes Douglas Rushkoff, author and professor of media theory at City University of New York’s Queens College. ‘The more people had previously needed to trust the person behind a product, the more important the brand became as a symbol of origin and authenticity.’24

Trust soon became centralized, top-down, opaque, controlled and institutional. Rules, regulations, auditors, market analysts, insurance and independent agencies such as the Better Business Bureau flourished, enabling people to trade beyond their immediate circle of trust. And so, by the mid twentieth century, companies faced a new challenge. With products and services now more or less standardized, how were they to stand out from the crowd?

At first, they relied on developing brand identities, recognizable by a name, logo, packaging and a tagline, which typically represented a promise–this is what this product or service will do for you. Oxo Cubes promised to ‘make cooking so easy’. Lava soap promised to ‘clean like no other soap’. By the 1950s, however, mass manufacturers such as Procter and Gamble, Unilever and General Foods realized these types of practical promises were not enough. The problem was, all washing powders and frozen peas do much the same job.

What would give consumers a reason to choose one product over another? Vanity, neediness, status anxiety, aspiration, nostalgia and hope, among other things, it turned out. Marketers began to tap into a whole new consumer psychology. Grandiose brand propositions were created, mixing functional benefits with emotional values. ‘When I buy or use this brand, I am…’ Coca-Cola wasn’t manufacturing sugary drinks; its product was about making you feel ‘refreshed’. Disney wasn’t making movies; it was celebrating dreams. Nike, named after the winged goddess of victory, didn’t sell trainers; it made you feel inspired. The idea that a brand would enable consumers to express something about themselves, something intangible, was revolutionary at the time. Crucially, and however artificially, it also bred a sense of intimacy and connection between consumer and multinational: ‘Look, they care about me.’ The result was that brand, with its fancy packaging and catchy slogans, developed enormous power and influence in our lives.

With the dawn of social media in the twenty-first century, everything changed. Marketers were hit with a seismic shift in the way trust worked with consumers. Through no-holds-barred comments and feedback, reviews and ratings, photo posts and ‘likes’, people started to share their experiences at scale. The person formerly known as a ‘passive consumer’ became a participant, a social ambassador, one who was not so easily duped and could be brutal when let down. Could Rice Krispies, the breakfast cereal, really ‘help support your child’s immunity’? Could a New Balance ‘toning’ sneaker, with hidden board technology that promised to activate the hamstrings and calves, really help burn calories as claimed?

It became much harder for brands to exaggerate or make false claims, no matter how flashy their ads. Companies had to start delivering authentic experiences and get comfortable with transparency. They had to learn how to listen, enable conversations and respond to customers’ needs in real time. Brands had to let go of an era in which they could produce and control trust centrally.

Fast forward to today, where conventions of how trust is built, managed, lost and repaired are once again being turned upside down. Platforms create systems that act as social facilitators. They match us with goods, rides, dates, trips, recommendations and so on. Customers have become communities, and these communities are themselves platforms that shape the ups and downs of a brand. Indeed, a recent survey conducted by Nielsen revealed that the most credible advertising comes straight from the people we know and trust. More than 80 per cent of respondents say they completely or somewhat trust the recommendations of friends and family. And two-thirds say they trust consumer opinions posted online.25

Compare, say, Marriott hotels to Airbnb. Once upon a time, we trusted the hotel chain; the brand was what made people feel safe to spend the night there. With Airbnb, you need to have confidence in the platform itself and in the connections between hosts and guests. In other words, trust must exist in the platform and between people in the community. This is one of the key dynamics that distinguishes the new era of distributed trust from the old paradigm of institutional trust.

Joe Gebbia is the thirty-six-year-old co-founder and chief product officer of Airbnb. Gebbia is a designer rather than an engineer, and he studied at the Rhode Island School of Design, where he met Airbnb co-founder Brian Chesky. I met him in 2009, when I was writing my first book, What’s Mine is Yours. The marketplace was just starting to become more popular and Airbnb was still far from a billion-dollar idea. The founders were eager to tell the story of how they started the company from a couple of air mattresses in San Francisco; a story they have now told thousands of times.

Gebbia, Brian Chesky and fellow co-founder Nate Blecharczyk began Airbnb with the best of intentions, and had no idea of the magnitude of what they were building. Today, with the company occupying slick 170,000-square-foot offices and, on average, nearly 2 million guests staying in Airbnb rentals every night, the unintended consequences of their success have also mushroomed. One example is the concern over Airbnb’s role in distorting rental prices and creating housing shortages, especially for lower-income residents. As an exercise in trust-building, however, Airbnb is a standout.

Gebbia, though passionate about design and technology and their power to bring people together, is the first to admit that Airbnb is not a technology company but is in the ‘trust business’. ‘We bet our whole company on the hope that, with the right design, people would be willing to overcome the stranger-danger bias,’ Gebbia says in a TED talk on how Airbnb designs for trust.26 ‘What we didn’t realize is just how many people were ready and waiting to put the bias aside.’

He admits there are risks. ‘Obviously, there are times when things don’t work out. Guests have thrown unauthorized parties and trashed homes. Hosts have left guests stranded in the rain,’ he says. ‘In the early days, I was customer service, and those calls came right to my cell phone. I was at the front line of trust-breaking. And there’s nothing worse than those calls. It hurts to even think about them.’

Gebbia thinks of Airbnb as playing the role of the ‘mutual friend’ who introduces you to new friends, new places and experiences. ‘We have to create the conditions for a relationship to form between two people who have never met,’ he tells me.27 ‘And after the introduction has been made, we need to get out of the way.’ The role Airbnb is playing may be different from Uber’s, in that whom to trust is a choice that each host or guest must make for themselves, but what people want from the platform is similar. That is, we want platforms to mitigate the risk of bad things happening, and to be there for us if they do. ‘The number-one thing people want from Airbnb is that if something goes wrong or not as planned, we’ve got their back,’ Gebbia says. ‘If we get that right, we are 80 per cent there.’

Alok Gupta, thirty-three, was formerly a high-frequency trader on Wall Street and a research fellow in mathematics at the University of Oxford. He admits he is a big fan of observing patterns and predicting outcomes in enormous data sets. Three years ago, he joined Airbnb as a data science manager, applying his talents and thinking to a different problem: online-to-offline trust. That is, people using digital tools to meet up face-to-face. ‘I think Airbnb places itself as the company which does the hard work for you, in terms of trusting the individual,’ says Gupta. ‘We know there’s a barrier to trusting people you’ve never met before, but we want to fill that space, and we want to help overcome that barrier for you.’28

Gupta talks about the ‘defensive mechanisms’ Airbnb has developed that reduce uncertainty. For instance, in 2011, after the ‘EJ incident’ when a host infamously got her San Francisco apartment completely trashed, Airbnb introduced a ‘host guarantee’ covering property damage of up to $1 million per booking. In 2013, the company introduced ‘Airbnb Verified ID’, confirming a person’s online identity by matching it to offline ID documentation such as a driving licence or passport.29 ‘There is no place for anonymity in a trusted community,’ the company wrote on its blog announcing the launch. ‘Trust and verification. They just go together.’ The challenge for Gupta and his team is that the spectrum of wrongdoing is vast, ranging from people using places as brothels to old-fashioned discrimination.

In January 2014, researchers from Harvard Business School released a controversial working paper on a study they had conducted. The study revealed that non-black Airbnb hosts could charge approximately 12 per cent more, on average, than black hosts: roughly $144 per night for non-black hosts, versus $107 for black hosts.30 In September 2016, looking across 6,000 listings, the same researchers found that requests from guests with distinctively African-American-sounding names (like Tanisha Jackson) were 16 per cent less likely to be accepted by Airbnb hosts than those with Caucasian-sounding names (like Allison Sullivan). Particularly troubling was that, in some instances, hosts would rather allow their property to remain vacant than rent to a black-identified person.31

In the United States, Title II of the Civil Rights Act of 1964 explicitly prohibits racial discrimination in ‘public accommodations’ such as restaurants, cinemas, motels and hotels.32 The law contains an exemption, however, for someone renting out no more than five rooms in their own home, a category that would seem to include many Airbnb hosts. ‘There have been too many unacceptable instances of people being discriminated against on the Airbnb platform because of who they are or what they look like,’ wrote Laura W. Murphy, a former director of the American Civil Liberties Union’s Washington legislative office, who was hired by Airbnb to compile a report to serve as a blueprint for how the company plans to fight discrimination on the site.33

In the firestorm that followed, Airbnb users started sharing stories on social media with the hashtag #AirbnbWhileBlack about their experience of bookings being denied or cancelled because of their race. For example, one user @MiQL tweeted: ‘My wife & I tried to book w/@Airbnb for a vaycay. Hosts w/listed available rooms responded w/“Unavailable”. White friend got “available”.’34 Personal profiles and photos, which users put together to try to project trustworthiness, have the unintended consequence of facilitating discrimination. Indeed, it seems that distributed trust is not always fairly or evenly distributed.

Ben Edelman, one of the authors of the Harvard study, says Airbnb’s initial response to his findings ‘was kind of denial’.35 Nine months after the discrimination study was conducted, with pressure mounting, Airbnb released its report outlining its non-discrimination policies and its promise to root out bias and bigotry. The company is trying fixes such as minimizing the prominence of user photos and increasing ‘instant bookings’ that don’t require pre-approval from hosts.36

But why hadn’t they noticed the discrimination happening on the platform? They had a blind spot. Brian, Joe and Nate are three young white American men. In other words, they hadn’t personally experienced the type of discrimination many members of the Airbnb community had. ‘Discrimination has no place on Airbnb. It is against what we stand for,’ says Gebbia. In March 2016, Airbnb hired David King III, who had previously held a prominent diversity role at the US State Department, as the company’s first director of diversity and belonging. King was given a team of talented engineers and data scientists, whose job includes identifying patterns of host behaviour and figuring out solutions to create a more inclusive platform.

But how do you go about stamping out those unconscious biases? It is difficult to remedy offline human prejudices that migrate online. You can’t make somebody trust somebody. Like so much else, it’s uncharted online territory. ‘New systems, new structures that haven’t been invented yet are needed to create environments that reduce discrimination or eliminate it,’ Gebbia says.

He takes me back to the early days of the Model T Ford, which revolutionized the take-up of automobiles in the early twentieth century. ‘I think the Model T has a lot of analogies to us,’ says Gebbia. ‘Look at early photos. It didn’t have doors; it didn’t have blinkers; it was missing all of these things that are needed for a safe ride, and that Ford added over the years. And sometimes I think we’re like a Model T, that we haven’t added our blinkers yet.’

In 1865, the British government passed a law called the ‘Locomotive Act’, which required safety precautions to warn pedestrians and horse-drawn traffic of the terrifying approach of a motor vehicle. It stated that any locomotive or automobile must have a crew of three people: the driver, a stoker and a man whose job it was to walk at least fifty-five metres ahead of the vehicle, waving a red flag. The act, which later became known as the ‘Red Flag Law’, capped speeds at two miles per hour in urban areas, severely limiting the usefulness of the new automobile. The car is just one example of how, throughout history, a new technology that enables a trust leap can also introduce a new ‘risky’ behaviour–travelling mechanically at speed–and create a vexing challenge for lawmakers. With no precedents to go by, how do they figure out what kind of policies and restrictions will protect public interests?

For more than a decade, Coye Cheshire, a social psychologist and associate professor at the UC Berkeley School of Information, has been studying how the internet is changing risk and trust. On a brisk autumn afternoon, I meet him in his campus office, a cosy room painted in moss green and filled with dark wood furniture and books piled high on every available surface. He makes me a cup of peppermint tea and gets straight to the heart of the matter. ‘I want to help understand how humans take risks in the presence of uncertainty.’

I’ve long admired Cheshire’s work, including a paper he wrote in 2011 called ‘Online Trust, Trustworthiness, or Assurance?’37 In it, he asserted that there is a difference between interpersonal trust (human to human) and system trust (human to system). Did he still think this distinction between people and technologies held true?

‘Back then, systems meant things like your telephone and computer, but to be blunt, I was taking a simplistic view of technology that didn’t take into account that systems are now capable of betrayal,’ he says. For example, a bot in a chat room that can express feelings and moods is very different from simply cooking food in a microwave. ‘Today, systems embody everything from online platforms that people are using, to autonomous agents that act on behalf of humans in ways that are blurring the line in terms of our awareness around what the machine is doing.’38

Cheshire admits that trust is a far more intricate business these days. ‘We are working with these systems that are using complex algorithms to manage our information and make decisions on our behalf. But they are getting too complex for our brains to understand.’ Consider this: to place a man on the moon in 1969 it took 145,000 lines of code.39 Today, it takes more than 2 billion lines of code to run all of Google’s internet services, dwarfing Facebook, which runs on more than 62 million lines of code.40 ‘I used to think that it was completely ridiculous to compare trust between people with trust in these online platforms,’ says Cheshire. ‘I’m not certain that is the case any more because in some ways we have offloaded some of our cognitive power.’

In the past, engineers would typically work on physical infrastructure projects such as roads, rail lines, gas pipelines and bridges. Today, however, they are designing new kinds of social infrastructure: online bridges that bring friends, families and strangers together. They are trust engineers. And one of the goals of these engineers is to get us to a place where we don’t even think about the risks we are taking. It should feel like magic: you just get the right recommendation, the nearest driver, the best match, no hassle, no dramas. Yet cultivating that kind of blind confidence can also create the opposite problem: too much trust placed in untrustworthy people. Think about Marc Dunton, the Uber passenger who was out with his friends at a bar when he accepted his ride with the homicidal Dalton. Dunton admits he knew there was a mad shooter on the loose and that Dalton and his car fitted the description. But surely someone on a shooting spree wouldn’t still be on the Uber app picking up fares? Surely the genie in the app wouldn’t send you someone bad, right?

Ironically, one of the issues we face today is the speed and ease at which we are trusting. And it’s not just happening on ride-sharing platforms such as Uber. Fancy a date? Download Tinder, Bumble, Happn or Tingle and zip through a fast set-up. When a profile comes up, swipe right on your phone if you like the look of them. And if they swipe right too, it’s a match and you are on your way to meet up with a stranger, anywhere from one to a hundred miles away thanks to the powers of geolocation. It’s accelerated trust based on a few photos and a handful of words: shopping through a catalogue of faces. It’s trust on speed. And when we are in an accelerated mode of trust, we can be impulsive. It requires a conscious gear change to slow down and think twice about our decisions.

Or what about news? Have you ever shared a link without ever reading the article or watching the clip? You are not alone. As a recent study conducted by computer scientists at Columbia University and the French national research institute Inria suggested, many people on Twitter appear to retweet news without even reading it. The researchers found that 59 per cent of links shared on Twitter have never actually been clicked.41 ‘People are more willing to share an article than read it,’ says study co-author Arnaud Legout.42 ‘This is typical of modern information consumption. People form an opinion based on a summary, or a summary of summaries, without making the effort to go deeper.’

On 29 January 2017, the White House Press Secretary Sean Spicer emphatically retweeted on his personal account a video from the satirical site The Onion. ‘@SeanSpicer’s role in the Trump administration will be to provide the American public with robust and clearly articulated misinformation,’ The Onion’s tweet joked. Around an hour later, Spicer retweeted it, adding, ‘You nailed it. Period.’43 The clip included questionable ‘facts’ to know about Spicer, including his former role as a senior correspondent for NPR’s national desk (false), his snowy white pocket square with a quarter inch of clearance on his suit jacket, his ‘defensive’ speaking style and whether he has knowingly lied to the press. Could it have been that Spicer has a quirky sense of humour and was giving a sarcastic response to the video? It’s possible. But it’s also possible that he didn’t bother to watch the video before sharing it, or to read the headline carefully enough to realize he was the punchline. In an age of ‘fake news’ and media propaganda, this phenomenon is more problematic and frightening than ever.

Efficiency can be the enemy of trust. Trust needs a bit of friction. It needs time. It requires investment and effort. ‘Trust doesn’t form at an event in a day. Even bad times shared don’t form trust immediately,’ says author Simon Sinek. It comes from ‘slow, steady consistency and we need to create mechanisms where we allow for those little innocuous interactions to happen’.44 Systems are becoming so seamless that we are not always fully conscious of the risks we are taking or the falsehoods we are sharing.

‘I think the problem comes down to social translucence,’ says Coye Cheshire. ‘How much of the social interaction–our behaviours and the underlying mechanisms that enable interactions–how much of that is visible?’ Not much at present, he maintains. We need to crack open the ‘black boxes’ of the internet giants, to lift the veil on the behind-the-scenes operations of systems with which we interact daily and yet know very little about–and may trust too much.

Here is a simple example of how social translucence works in the physical world. If I have something valuable I want to mail, I will go to a post office rather than just drop it in a box at the end of the road and hope for the best. I will hand the parcel to the person behind the counter and pay for registered mail. A human being hands me back a paper tracking slip with a number that I can use to go online and see where my package is at any given stage of the journey. That’s social translucence–there are lots of visible cues that tell me what’s happening. ‘With online systems, the translucence breaks down,’ says Cheshire. ‘Take putting your credit card information into a site. I pass along the information but it just goes in one direction and I trust the system that the details are safe.’

Online systems can seem as magical as the Wizard of Oz: we do not see the army of human beings, many of them single-minded maths nerds, involved in their operation. We can’t see the ghosts in the systems ranking people, places, objects and ideas, making choices and matches on our behalf. Then again, perhaps some of that ignorance is self-imposed. Many of us would rather not know the extent to which our lives are constantly being massaged by algorithms. We prefer to trust that it’s all above board.

A few years ago, a social psychologist and data scientist named Adam D. I. Kramer designed an experiment using the world’s largest laboratory of human behaviour: Facebook. A team of researchers from Cornell University and Facebook, including Kramer, joined up to study ‘emotional contagion’ on a massive scale. Do the emotions expressed by friends via online social networks influence our moods–in other words, can emotions online be transferred to others and how?

For one week in 2012, the researchers tweaked the algorithm to manipulate the emotional content appearing in the news feeds of 689,003 randomly selected, unwitting users. Posts were identified as either ‘positive’ (awesome!) or ‘negative’ (bummer) based on the words used. In one group, Facebook reduced the positive content of news feeds, and in the other, it reduced the negative content. ‘We did this research because we care about the emotional impact of Facebook and the people that use our product,’ Kramer says. ‘We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.’45
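
To make the mechanics concrete, here is a minimal, purely illustrative sketch of word-list classification and feed filtering of the kind the study describes. The word lists, function names and suppression rate are invented for illustration; the actual experiment reportedly relied on the LIWC word dictionary and Facebook’s own feed-ranking system, not code like this.

```python
import random

# Toy word lists, invented for illustration; the real study used the much
# larger LIWC dictionary to label posts as positive or negative.
POSITIVE = {"awesome", "great", "happy", "love", "wonderful"}
NEGATIVE = {"bummer", "sad", "awful", "hate", "terrible"}

def classify(post):
    """Label a post 'positive', 'negative' or 'neutral' by simple word matching."""
    words = set(post.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def filter_feed(posts, suppress, rate=0.5):
    """Randomly withhold a share (`rate`) of posts whose label matches `suppress`."""
    return [p for p in posts if classify(p) != suppress or random.random() > rate]

feed = ["What an awesome day", "Total bummer of a commute", "Meeting at 3pm"]
print(filter_feed(feed, suppress="positive"))  # one group saw fewer positive posts
print(filter_feed(feed, suppress="negative"))  # the other saw fewer negative posts
```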

Did tinkering with the content change the emotional state of users? Yes, the authors discovered. The exposure led some users to change their own behaviours: the researchers found people who had positive words removed from their feeds made fewer positive posts and more negative ones, and vice versa. It could have been an online version of monkey see, monkey do, or simply a matter of keeping up with the Joneses. ‘The results show emotional contagion,’ Adam Kramer and his co-authors write in the academic paper published in the Proceedings of the National Academy of Sciences in 2014. ‘These results suggest that the emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks.’46

When the study was published, it sparked widespread public uproar. What drove the study into the spotlight weren’t its findings–in fact, the effect, as the authors acknowledge, was quite minimal: an observed change of as little as one-tenth of one per cent. (Given, however, the scale of Facebook, even tiny effects can have large social consequences such as online bullying.) Instead, the outrage centred on ethics. The researchers had failed to get approval from an Institutional Review Board (IRB), the body that oversees ‘human subjects research’, or informed consent from the hundreds of thousands of Facebook users who were subjected to the manipulation. In the blaze that followed, the company argued that its 1.86 billion monthly users give blanket consent to the company’s research on the personal data it collects and stores as a condition of its terms of service. Facebook’s data use policy warns users that Facebook ‘may use the information we receive about you… for internal operations, including troubleshooting, data analysis, testing, research and service improvement’.47 That is your price every time you log in, and the cost may be higher than you think.

Users’ willingness to tick the box labelled ‘Agree’, on this and other platforms, has seen the blithe handover of massive amounts of once private information. ‘It has enabled one of the biggest shifts in power between people and big institutions in the twenty-first century,’ says Zeynep Tufekci, a sociologist at the University of North Carolina and author of Twitter and Tear Gas: The Power and Fragility of Networked Protest. ‘These large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams.’48

The Facebook story is one example, and it became notorious. Shouldn’t Facebook explicitly ask people to ‘check the box’ if they want to be made to feel happier or sadder? Comments poured in across social media. ‘Does everyone who works at Facebook just have the “this is creepy as hell” part of their brain missing?’ tweeted @sarahjeong a few days after the study was published. Similarly, @Tomgara tweeted: ‘Impressive achievement by Facebook to snatch back the title of most dystopian nightmarish tech company.’49 People felt they had been treated like lab rats.

What surprised many academics and researchers, including me, was the level of shock and outrage. Didn’t people realize these platforms are essentially mysterious algorithms that exert immense control over what we see? Facebook is just like other content sites such as BuzzFeed and Upworthy, constantly turning one algorithmic knob and tweaking another to find an ideal ad placement, to get us to read and post more. Consider this: if you are a Facebook user, what is the statistical likelihood you have been a guinea pig in one of its experiments? According to the company, it’s 100 per cent. ‘At any given time, any given Facebook user will be part of ten experiments the company happens to be conducting,’ says Dan Ferrell, a Facebook data scientist.50

Companies conduct split testing all the time, ostensibly to improve user satisfaction.* We trust algorithms to determine our Netflix and Spotify recommendations; deliver the most relevant results to our Google searches; even assess our credit score. So why all the fuss and surprise? ‘The machine appears to be only a neutral go-between,’ writes Cathy O’Neil in her insightful book Weapons of Math Destruction. In 2013, Karrie Karahalios, an associate professor of computer science at the University of Illinois, carried out a survey on Facebook’s algorithm and found that 62 per cent of respondents were unaware the company tinkered with the news feed.51 So of the 1.72 billion people on Facebook, more than 1 billion think the system instantly shares whatever they or their friends post.
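
For readers unfamiliar with the jargon, ‘split testing’ (or A/B testing) simply means silently assigning each user to one of two or more versions of a feature and comparing how the groups behave. Below is a minimal, hypothetical sketch of how such an assignment is commonly done; the identifiers and experiment name are invented, and this is not any platform’s actual code.

```python
import hashlib

def assign_variant(user_id, experiment, variants=("A", "B")):
    """Deterministically bucket a user into one variant of an experiment,
    so the same user always sees the same version of the feature."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Example: which of two hypothetical news-feed ranking variants a user would see.
print(assign_variant("user-12345", "feed_ranking_test"))
```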

The study struck a deep nerve–it was a reminder of how the internet churns, and where the power really lies. It illustrated the power of digital puppet masters, or trust engineers, to constantly manipulate our data and, in different ways, control our lives. And to many users it felt like they had been played; the Facebook study was considered a major betrayal of trust.

Beyond the initial brouhaha, though, the study raised a more profound question: if Facebook can manipulate a person’s moods with a minor tweak of its algorithms, what else can the platform control? ‘About two-thirds of American adults have a profile on Facebook. They spend thirty minutes a day on the site, only four minutes less than they dedicate to face-to-face socializing,’ writes O’Neil. Nearly half of them, according to a Pew Research Center report, count on Facebook to deliver at least some of their news.52 So this leads to the question: how else could Facebook change our minds by tweaking the algorithm? Could it change whom we vote for?

‘Pope Francis has broken with tradition and unequivocally endorsed Donald Trump for President of the United States.’ ‘WikiLeaks CONFIRMS Hillary has sold weapons to ISIS.’ ‘Clinton runs a child-trafficking ring out of a pizzeria.’ Many Facebook readers would have seen these posts and others in their news feed. A few days before the 2016 US election, I made a promise to myself not to check the site for at least a month, after a ludicrous article appeared at the top of my feed. It stated that an FBI agent suspected of involvement in leaking Hillary Clinton’s emails had been found dead after apparently murdering his wife and then turning the gun on himself. Two thoughts in succession ran quickly through my mind. Was this piece of ‘news’ true? And then, whom should I go to, what news source, to find out the real truth? All these stories were of course hoaxes, typifying the fake news and conspiracy theories that plagued the 2016 election.

A recent study by BuzzFeed found that 38 per cent of all posts from three of the largest hyper-partisan right-wing Facebook pages, such as Eagle Rising, contained a mixture of true and false, or mostly false, information, compared to 19 per cent of posts from three hyper-partisan left-wing pages, such as Occupy Democrats. Cumulatively, the audiences of these pages are in the tens of millions. The top five fake news items in the last weeks of the election were all negatives for the Clinton campaign–the Facebook algorithm picked a side. And the spread of fake news is far more common on the right than it is on the left. ‘These findings suggest a troubling conclusion: The best way to attract and grow an audience for political content on the world’s biggest social network is to eschew factual reporting and instead play to partisan biases using false or misleading information that simply tells people what they want to hear,’ writes BuzzFeed.53

It’s an ironic turn of events, given that in May 2016, Facebook came under fire from Republicans and critics for allegedly suppressing conservative-leaning stories in its trending news section, curated by a small team of human editors. Facebook ended up replacing those human editors, accused of party bias, with software, but the plan clearly failed. And, tellingly, the top twenty fabricated election stories on Facebook netted more engagement than factual stories from mainstream news sources.54

In the wake of President Trump’s unexpected victory, many questions were raised about fake news and filter bubbles on Facebook influencing the election results. Mark Zuckerberg initially denied the allegations. ‘Personally, I think the idea that fake news on Facebook–it’s a very small amount of the content–to think it influenced the election in any way is a pretty crazy idea,’ he said a few days after the election.55 A few months later, however, he had significantly shifted his stance from the initial Who, us? shrug reaction. In February 2017, he published a 5,700-word manifesto on his Facebook page.56 It sounded a bit like a grandiose State of the Union address for ‘bringing us closer together’, outlining the immense challenges facing the world today, from terrorism to climate change, pandemics to online safety. ‘Every year, the world got more connected and this was seen as a positive trend. Yet now, across the world there are people left behind by globalization, and movements for withdrawing from global connection,’ he wrote. ‘There are questions about whether we can make a global community that works for everyone, and whether the path ahead is to connect more or reverse course.’

Zuckerberg dedicated approximately 1,000 words to how Facebook has become a hotbed for fake content and why this is leading to increased polarization. ‘If this continues and we lose common understanding, then even if we eliminated all misinformation, people would just emphasize different sets of facts to fit their polarized opinions. That’s why I’m so worried about sensationalism in media.’ The CEO then went on to describe a plan, albeit a vague one, for how Facebook proposes to deal with issues on the platform. But can the internet giant really wage war on disinformation and quash bogus memes?

Moving forward, Facebook will check whether people are reading the articles before sharing. If they are, those stories will get more prominence in the news feed. Users can flag posts they think are fake or suspect, helping Facebook detect the most blatant posts. Artificial intelligence and algorithm analysis will also be used to flag content and detect dangerous falsehoods. But how do you decide what is intentional disinformation and what is, say, a bit of an exaggeration? Who decides what the truth is? If it’s Facebook doing the deciding, we are awarding them even more power to set the agenda.

‘Accuracy of information is very important. We know there is misinformation and even outright hoax content on Facebook,’ Zuckerberg said in the letter. ‘In a free society, it’s important that people have the power to share their opinion, even if others think they’re wrong.’ He added, towards the end: ‘Our approach will focus less on banning misinformation, and more on surfacing additional perspectives and information, including that fact checkers dispute an item’s accuracy.’

To be clear, I do not think that the data scientists, researchers and engineers at Facebook are intentionally gaming the political system. But while Facebook isn’t responsible for untrue stories per se, the company wrote the code that meant bogus news items appeared more prominently than the picture of a friend’s child, a funny video or a genuine news story. Is it up to Facebook to highlight and provide information about the original sources of news articles? Should it do more to contain hoax stories? Is it the fault of a Facebook business model that depends on clickbait? I think yes, to all of the above. But perhaps the betrayal lies in the system itself, a system that makes it easy for almost a third of the world’s population to gossip and gripe, share and like, even if the content is false, and without proper checks and balances or any real redress.

The online landscape is vastly populated and yet, all too often, empty of anyone to take charge or turn to when it counts. It’s rather like when you’re a teenager and you throw a party while your parents are away. At first, the freedom is thrilling but there comes a point, somewhere after the tequila slammers, the smashed windows and the gatecrashers, when you start wishing there was a responsible adult in the house.

Facebook insists it is a neutral technology platform facilitating connections between people, and not a media company. That is a misconceived and dangerous position. Facebook is a media company, one with control over how misinformation spreads and enormous influence in shaping people’s worldviews and whom they believe they can trust.

The questions around Facebook may sound different from the ones raised in the days after the Kalamazoo killings but in fact they are remarkably similar. When it comes to trust in distributed systems, we need to know who will tell the truth about a product, service or piece of news, and who to blame if that trust is broken.

Where does the buck stop? In this new era, people are still working that out.

With traditional institutions, the picture was clearer. If, say, your Barclays bank account was hacked, the bank would reimburse you. But when an online cryptocurrency fund such as the Decentralized Autonomous Organization (DAO) runs into trouble, there is no central ombudsman or traditional institution to turn to. (Instead, pandemonium rules, as we’ll see in chapter 10.) We are in unmapped territory, scrambling and fumbling around for mechanisms that can replace institutional trust, and at the same time looking for ways to improve the old world’s own shortcomings in matters of accountability. Platforms, meanwhile, are trying to figure out their role in it all–mere facilitators in bringing people together, or something more?

Going back to the analogy of the early days of the car, it took decades to create norms like traffic lights, stop signs and even something as simple as the concept of road lanes. ‘We will look back one day and laugh, “Imagine a car without blinkers,”’ Gebbia says. ‘Then we will realize how far we have come in this new era of trust.’57 Even at that point, with some clearer guidelines governing our coexistence in a world of distributed trust, we will still occasionally crash into each other. No system is foolproof. The hope is that it will be a minor collision, not a fatal accident.
