Zucked Summary and Review

by Roger McNamee

Has Zucked by Roger McNamee been sitting on your reading list? Pick up the key ideas in the book with this quick summary.

Facebook is one of the most wildly popular businesses in history. With 2.2 billion users and revenues that exceeded $40 billion in 2017, it is nothing short of a runaway success. But more than being popular – and profitable – Facebook is influential. In less than two decades, it has become a crucial part of the public sphere: the platform on which we not only communicate with our friends, but also read the news, exchange opinions and debate the issues of the day.

But Facebook’s popularity and influence conceal a dark reality: it lacks clear moral or civic values to guide it. And in the absence of effective regulation, it is actively harming our society.

In this book summary, you’ll learn how Facebook uses manipulative techniques to keep you hooked, and how one side effect is the polarization of public debate. You’ll also see how Facebook thrives on surveillance, gathering data on you to keep you on the site and to increase your value to its advertisers. And you’ll come to understand just how easy it has been for external actors like Russia to use Facebook to influence users in the United States.

In this summary of Zucked by Roger McNamee, you’ll learn

  • how much data Facebook holds on you;
  • how Facebook has persistently disregarded the privacy of its users; and
  • why we should have nothing to fear about regulating Facebook and other tech giants.

Zucked Key Idea #1: Technological and economic changes enabled Facebook’s growth and a dangerous internal culture.

Back in the twentieth century, there weren’t many successful Silicon Valley start-ups run by people fresh out of college. Successful computer engineering relied on skill and experience and needed to overcome the constraints of limited computer processing power, storage and memory. The need for serious hardware infrastructure meant that not just anyone could build a start-up – and be an instant success.

Technological developments in the late twentieth and early twenty-first centuries fundamentally changed this. When Mark Zuckerberg started Facebook in 2004, many of these barriers to new companies had simply disappeared. Engineers could create a workable product quickly, thanks to open-source software components like the Mozilla browser. And the emergence of cloud services meant that start-ups could simply pay a monthly fee for their network infrastructure, rather than having to build something costly themselves.

Suddenly, the lean start-up model emerged. Businesses like Facebook no longer needed to work slowly toward perfection before launching a product. They could quickly build something basic, push it out to users and update from there. Facebook’s famous “move fast and break things” philosophy was born.

This also had a profound impact on the culture of companies like Facebook. No longer did an entrepreneur like Zuckerberg need a large and experienced pool of engineers with serious systems expertise to deliver a business plan.

In fact, we know that Zuckerberg didn’t want people with experience. Inexperienced young hires – more often than not men – were not only cheaper, but could also be molded in his image, making the company easier to manage.

In the early years of Facebook, Zuckerberg himself was resolutely confident, not just in his business plan, but in the self-evidently beneficial goal of connecting the world. And as Facebook’s user numbers – and eventually, profitability – skyrocketed, why would anyone on his team question him? And even if they wanted to, Zuckerberg had set up Facebook’s shareholding rules so that he held a “golden vote,” meaning the company would always do what he decided.

To grow as quickly as possible, Facebook did whatever it could to strip out sources of friction: the product would be free and the business would avoid regulation, thus also avoiding a need for transparency in its algorithms that might invite criticism.

Unfortunately, while these were the right conditions for the growth of a global superstar, they were also conditions that bred a disregard for user privacy, safety and civic responsibility.

Zucked Key Idea #2: Facebook aggressively collects data on its users and has shown blatant disregard for user privacy.

Now you know a little bit about Facebook. But how well does Facebook know you?

Facebook holds up to 29,000 data points on each of its users. That’s 29,000 little things it knows about your life, from the fact that you like cat videos to whom you’ve been socializing with recently.

So where does Facebook get that data?

Take Facebook Connect, a service launched in 2008 that allows users to sign in to third-party websites through Facebook. Many users love the simplicity of not needing to remember countless complicated passwords for other sites. What most users don’t realize is that the service doesn’t just log them in. It also enables Facebook to surveil them on any site or application that uses the log-in. Use Connect to log into news websites? Facebook knows exactly what you are reading.

Or take photos. Lots of us love tagging our friends after a fun day or night out. You may think it’s an easy way to share with your friends, but for Facebook, you’re providing a valuable collection of information about your location, your activities and your social connections.

Now, if a business is so greedy for your personal data, you’d at least hope that it would treat that data with care, right? Unfortunately, ever since the earliest days of Facebook, Mark Zuckerberg’s business has shown an apparent disregard for data privacy.

In fact, according to Business Insider, after Zuckerberg gathered his first few thousand users, he messaged a friend to tell them that if they ever wanted information on anyone at their university, they should just ask. He now had thousands of emails, photos and addresses. People had simply submitted them, the young entrepreneur said. They were, in his reported words, “dumb fucks.”

A cavalier attitude toward data privacy at Facebook has persisted ever since. For example, in 2018, journalists revealed that Facebook had sent marketing materials to phone numbers provided by users for two-factor authentication, a security feature, despite having promised not to do so.

And in the same year, it was revealed that Facebook had simply downloaded the phone records – including calls and texts – of those of its users who used Android phones. Again, the users in question had no idea this was happening.

Facebook wants your data for a reason: to make more money by keeping you on the platform for longer and thus making its offer to advertisers more valuable. Let’s take a look at this in more detail.


Zucked Key Idea #3: Facebook uses brain hacking to keep you online as long as possible, and to boost its profits.

For social media platforms, time is money. Specifically, your time is their money. Because the longer you spend on Facebook, Twitter or Instagram, and the more attention you give them, the more advertising they can sell.

As a result, capturing and keeping your attention is at the heart of Facebook’s commercial success. The business has become better than anyone else at getting inside your brain.

Some of its techniques concern how information is displayed: videos that play automatically, for example, and a never-ending feed. These keep you hooked by eliminating the normal cues to disengage. You can reach the end of a newspaper, but never the end of Facebook’s news feed.

Other techniques go a little deeper into human psychology by, for example, exploiting FOMO – the fear of missing out. Try to deactivate a Facebook account, and you’ll be presented not just with a standard confirmation screen, but with the faces of your closest friends – say, Tom and Jane – and the words “Tom and Jane will miss you.”

But the most sophisticated and sinister techniques used by Facebook lie in the decision-making process of its artificial intelligence, which decides what to show you.

When you scroll through Facebook, you might think you are looking at a simple news feed. But you aren’t. You are up against a mammoth artificial intelligence that holds huge quantities of data about you and is feeding you whatever it thinks will keep you engaged with the site for as long as possible. And the bad news for society is that this often means content that appeals to your most basic emotions.

That’s because triggering basic emotions is what keeps you engaged. Joy works, which is why cute cat videos are so common. But what works best? Emotions like fear and anger.

As a result, Facebook tends to nudge us toward content that will get us riled up because riled-up users consume more content and share it more often. So you are less likely to see calm headlines describing events and more likely to see sensational claims in short punchy videos.

And that can become dangerous, particularly when we get stuck in a bubble where our outrage, fears or other emotions are constantly reinforced by people with similar views. That’s the danger of the so-called filter bubble, which we’ll look at next.

Zucked Key Idea #4: Filter bubbles breed polarized views.

Every second you browse Facebook, you are feeding data into its filtering algorithm. And the result is a filter bubble, as Facebook filters out content that it thinks you won’t like, and filters in content that you are more likely to read, like and share.

Eli Pariser, president of the campaigning organization MoveOn, was one of the first to publicize the effect of filter bubbles, in a 2011 TED Talk. Pariser noticed that, although his Facebook friends list was pretty evenly balanced between conservatives and liberals, there was nothing neutral about his news feed. His tendency to like, share or click on liberal content was leading Facebook to give him more of what it thought he wanted, until he never saw any conservative content at all.

As Pariser argued, this is problematic. Many people get their news and information from Facebook, and think they are receiving a balance of content. But in reality, algorithms with huge power but no civic responsibilities are feeding them a biased view of the world.

Even worse problems arise when filter-bubble effects shift users from mainstream to more extreme views. This can happen as a result of algorithms shifting users toward more emotive, outrageous content.

For example, Guillaume Chaslot, a former YouTube employee, wrote software that revealed how YouTube’s recommendation algorithm works. It showed that if a user watched any video about 9/11 on the platform, they would then be recommended 9/11 conspiracy videos.

But even without algorithms, people are often radicalized by social media. And that’s particularly the case when they are members of Facebook groups. There are countless groups on Facebook, and whatever your political preferences, there’s one for you. And they are great for Facebook’s business, as they enable easy targeting for advertisers.

But they can be problematic. Cass Sunstein, the legal scholar and coauthor of Nudge (2008), has shown that when people with similar views discuss issues, their opinions tend to become stronger and more extreme over time.

There’s another problem with groups: they are vulnerable to manipulation. The organization Data for Democracy has shown that just one or two percent of a group’s members can steer its conversation, if they know what they’re doing.

And this is exactly what the Russians did ahead of the 2016 US elections.

Zucked Key Idea #5: Russia used Facebook as a surreptitious but effective way to influence US elections.

Do you really know where the content you read on Facebook comes from? If you were in the United States in 2016, it’s very likely that you read, and maybe even shared, Facebook content that originated with Russian trolls.

Despite mounting evidence, Facebook denied that Russia had used the platform until, in September 2017, it admitted that it had discovered around $100,000 of advertising spending by fake accounts linked to Russia. Facebook would later reveal that Russian interference had reached 126 million users on the platform, and another 20 million on Instagram. Given that 137 million people voted in the election, it’s hard not to believe that Russian interference had some impact.

Russia’s tactics in the 2016 election were to rile up Trump supporters while depressing turnout among potential Democratic voters.

And the truth is, it was easy. Facebook groups offered Russia a ready-made way to target key demographics. For example, Russian operatives ran a number of groups focused on people of color, such as Blacktivist, apparently with the purpose of spreading disinformation that would make users less likely to vote for the Democratic candidate, Hillary Clinton.

Moreover, groups made it easy for content to get shared. We tend to trust our fellow group members – they share our interests and beliefs, after all. So we are often uncritical of where information is coming from, if it’s shared within a group with which we identify.

The author himself noticed that friends of his were sharing deeply misogynistic images of Hillary Clinton that had originated in Facebook groups supporting Bernie Sanders, Clinton’s opponent in the Democratic primaries. It was almost impossible to believe that Sanders’ campaign was behind them, but they were spreading virally.

And Russia’s ability to exert influence through groups was vividly demonstrated by the notorious 2016 Houston mosque protests, when Russian-controlled Facebook events organized simultaneous protests for and against Islam outside the same mosque in Houston, Texas. The manipulation was part of Russia’s broader effort to sow discord and confrontation in the United States by stoking anti-minority and anti-immigrant sentiment, which Russia knew would play into the hands of the Trump campaign.

Four million people who voted for Obama in 2012 did not vote for Clinton in 2016. How many of these four million turned away from the Democrats because of Russian disinformation and lies about the Clinton campaign?

Zucked Key Idea #6: The Cambridge Analytica story blew the lid off Facebook’s cavalier approach to data privacy.

In 2011, Facebook entered into a consent decree with the Federal Trade Commission, the American consumer-protection regulator, that barred it from deceptive data privacy practices. Under the decree, Facebook needed to get explicit, informed consent from users before it could share their data. But the sad reality is that Facebook did nothing of the kind.

In March 2018, a story broke that tied Facebook’s political impact to its disregard for user privacy. Cambridge Analytica, a company providing data analytics to Donald Trump’s election campaign, had harvested and misappropriated almost fifty million Facebook user profiles.

Cambridge Analytica funded a researcher, Aleksandr Kogan, to build a data set of American voters. He created a personality test on Facebook, which 270,000 people took in return for a couple of dollars. The test collected information on their personality traits.

Crucially, it also captured data about the test-takers’ Facebook friends – all 49 million of them collectively – without these friends knowing anything about it, let alone giving consent. Suddenly, the data team for a controversial presidential candidate had a trove of highly detailed personal data for about 49 million people. And while Cambridge Analytica wasn’t allowed, under Facebook’s terms of service, to use the data commercially, it did so anyway.

This was particularly controversial because, according to a whistleblower, Cambridge Analytica was able to match Facebook profiles with 30 million actual voter files. This gave the Trump campaign enormously valuable data on 13 percent of the nation’s voters, allowing it to target propaganda at those voters with incredible precision. Remember that three swing states, won by Trump by a combined margin of just 77,744 votes, gave him victory in the Electoral College. It seems almost impossible that Cambridge Analytica’s targeting, based on Facebook’s data breach, didn’t influence this outcome.

As the story broke, Facebook tried to argue that it had been a victim of Cambridge Analytica’s malpractice. But Facebook’s actions suggest otherwise. When Facebook found out about the data breach, it wrote to Cambridge Analytica asking for copies of the data set to be destroyed. But no audit or inspection was ever carried out; Cambridge Analytica was simply asked to tick a box on a form to confirm compliance. Moreover, Facebook had happily embedded three of its own team members in the Trump campaign’s digital operations at the very time Cambridge Analytica was working for that campaign.

The Cambridge Analytica story was a turning point. Many came to believe that, in the pursuit of growth and profit, Facebook had ignored its moral and societal obligations.

If that’s true, one question remains: What can society do about it?

Zucked Key Idea #7: Facebook and other tech giants should be properly regulated to limit the harm they can do.

As the Russian interference and Cambridge Analytica scandals have shown, Facebook has not taken the need to regulate its own behavior seriously enough. Perhaps, then, the time has come to think about external regulation.

One aspect of this should be economic regulation designed to weaken the overall market power held by Facebook and other tech giants, just like the regulation applied in the past to giants like Microsoft and IBM. One reason Facebook is so powerful is that it has used its financial weight simply to buy up competitors like Instagram and WhatsApp.

This needn’t harm economic growth or overall innovation, as the historical example of the phone operator AT&T shows. In 1956, AT&T reached a settlement with the government designed to rein in the company’s spiraling power. It would limit itself to the landline telephone business and would license its patents at no cost so that others could use them.

This turned out to be seriously good news for the US economy because, by making AT&T’s crucial patented invention – the transistor – freely available, the antitrust settlement essentially gave birth to Silicon Valley. Computers, video games, smartphones and the internet – all of it came from the transistor.

And crucially, the case also worked out for AT&T. Confined to its core business, it nonetheless became so successful that it faced another monopoly case, which broke the company up in 1984. Applying the same kind of logic to the likes of Facebook and Google would still allow them to thrive, but it would limit their market power and encourage more competition.

Economic regulation is one thing. But if we are truly to tackle the damaging impact of Facebook on society, we also need regulation that gets to the heart of its harmfulness.

One place to start would be to mandate the option of an unfiltered Facebook news feed. With the click of a button, you could toggle your feed from “your view” – based on Facebook’s artificial-intelligence judgments of what will keep you interested the longest – to a more neutral or balanced view of what’s happening in the world.

Another positive step would be to regulate algorithms and artificial intelligence. In the US, this could be done via an equivalent of the Food and Drug Administration for technology, with responsibility for ensuring that algorithms serve, rather than exploit, humans. Mandated third-party auditing of algorithms would create enough transparency to avoid the worst cases of filter bubbles and manipulation.

We accept and value regulation in many industries, using it to strike the right balance between public interest and economic freedom. At present, when it comes to tech, that balance is not being properly struck. It’s time for change.

Final summary

The key message in this book summary:

Facebook has become a catastrophe: keeping people hooked to their screens, pushing us toward more extreme views, riding roughshod over personal privacy and influencing elections. It’s time to fight back, and stop treating Facebook’s negative impacts on individuals and society as acceptable.

Actionable advice:

Change the physical appearance of your devices to reduce their impact on your health.

Two changes to the appearance of your digital devices can make a big difference. First, switching your device to night-shift mode will reduce the blue light in the display, which lowers eye strain and makes it easier to get to sleep. Second, putting your smartphone in monochrome mode reduces its visual intensity, and therefore the dopamine hit you get from looking at it.
