
Just Say NO to Facebook

What would happen if Facebook was turned off?

Quote

Imagine a world without the social network

THERE HAS never been such an agglomeration of humanity as Facebook. Some 2.3bn people, 30% of the world’s population, engage with the network each month. Economists reckon it may yield trillions of dollars’ worth of value for its users. But Facebook is also blamed for all sorts of social horrors: from addiction and bullying to the erosion of fact-based political discourse and the enabling of genocide. New research—and there is more all the time—suggests such accusations are not entirely without merit. It may be time to consider what life without Facebook would be like.

To begin to imagine such a world, suppose that researchers could kick a sample of people off Facebook and observe the results. In fact, several teams of scholars have done just that. In January Hunt Allcott, of New York University, and Luca Braghieri, Sarah Eichmeyer and Matthew Gentzkow, of Stanford University, published results of the largest such experiment yet. They recruited several thousand Facebookers and sorted them into control and treatment groups. Members of the treatment group were asked to deactivate their Facebook profiles for four weeks in late 2018. The researchers checked up on their volunteers to make sure they stayed off the social network, and then studied what happened to people cast into the digital wilderness.

Meanwhile back at the ranch – Alexa, Google Home, etc. are flying off the shelves.

Zucked: Waking Up to the Facebook Catastrophe – Book Review

Quote

An important investor explains how his enthusiasm has turned to shame

As the so-called Techlash gains pace and polemics on the downsides of the internet flood the book market, one omission seems to recur time and again. Facebook, Google, Amazon and the rest are too often written about as if their arrival in our lives started a new phase of history, rather than as corporations that have prospered thanks to an economic and cultural environment established in the days when platforms were things used by trains. To truly understand the revolutions in politics, culture and human behaviour these giants have accelerated, you need to start not some time in the last 15 or so years, but in the 1980s.

Early in that decade, the first arrival of digital technology in everyday life was marked by the brief microcomputer boom, which was followed by the marketing of more powerful personal computers. Meanwhile, Margaret Thatcher and Ronald Reagan were embedding the idea that government should keep its interference in industry and the economy to a minimum. In the US, a new way of thinking replaced the bipartisan belief that monopolies should always be resisted: concentrations of economic power were not a problem as long as they led to lower prices for consumers. And at the same time as old-school class politics was overshadowed, the lingering influence of the 60s counterculture gave the wealthy a new means of smoothing over their power and privilege: talking in vague terms about healing the world, and enthusiastically participating in acts of spectacular philanthropy.

If there was one period when all this cohered, it was between 1984 and 1985: the time of Band Aid and Live Aid, the launch of both Bill Gates’s Microsoft Windows operating system and the Apple Macintosh, and the advent of Reagan’s second term as president. And in 1984 Mark Zuckerberg, who would grow up in a country and culture defined by these events and forces, was born; he invented Facebook while he was at Harvard, and made his fortune via an intrusive, seemingly uncontrollable kind of capitalism, sold with the promise of “bringing the world closer together”.

Roger McNamee is a little longer in the tooth. Aged 62, he is old enough to know that the US beat the depression and won the second world war when “we subordinated the individual to the collective good, and it worked really well”. He knows that the anti-state, libertarian mores that define what we now know as Big Tech were born in the 1980s, and that by the early 21st century, “hardly anyone in Silicon Valley knew there had once been a different way of doing things”. Laissez-faire ideas, he says, joined with a bombastic arrogance in the minds of the “bros” who flocked to northern California to make their fortune from the mid 1990s onwards. What they did was founded on cutting-edge technology – but in terms of its underlying economic ideas, their business represented recently established nostrums being taken to their logical conclusion.


This may suggest the perspective of an outsider, but McNamee does not quite fit that description. As a high-profile investor in tech businesses, he was co-founder of Elevation Partners, a private equity firm established with U2 frontman Paul “Bono” Hewson, the very embodiment of the 80s’ uneasy mixture of profit and philanthropy. In 2010, the firm acquired 1% of Facebook for $90m, but McNamee had already put money into the company, become a source of occasional advice for its founder, and been key in the appointment as chief operating officer of Sheryl Sandberg, the former Bill Clinton administration insider who brought business acumen and political connections to Zuckerberg’s inner circle. But now McNamee has come to the conclusion that what he helped bring about is a blend of hubris and dysfunction: Zucked is partly the story of his early enthusiasm giving way to mounting alarm at Facebook’s failure to match its power with responsibility, and what he has tried to do about it.

It is an unevenly told tale. McNamee wants readers to think of him as a player in the events he describes, but the text regularly has a sense of things viewed from too great a distance. That said, he knows enough about Facebook and its contexts to get to the heart of what its presence in our lives means for the world, and is bracingly blunt about the company’s threat to the basic tenets of democracy, and his own awakening to its dangers. In early passages about the initial occasions when he met Zuckerberg, he writes of a man then aged 22 appearing “consistently mature and responsible”, and “remarkably grown-up for his age”. He goes on: “I liked Zuck. I liked his team. I liked Facebook.” But by the time of the 2016 presidential election, everything had changed. In a memo to Zuckerberg and Sandberg, McNamee was blunt: “I am disappointed. I am embarrassed. I am ashamed.” And he had a keen sense of what had gone wrong, summarised here in the kind of aphoristic phrase for which he clearly has a talent: “Facebook has managed to connect 2.2 billion people and drive them apart at the same time.”

The account of how this played out is now familiar, and ends with the election and subsequent revelation that 126 million Facebook users were exposed to messages authored in Russia. McNamee deals with the Cambridge Analytica scandal, and how it highlighted Facebook’s blithe attitude to its users’ personal data (though he really should have mentioned the Observer journalist Carole Cadwalladr, whose curiosity and resilience ensured that the story broke, and Facebook was called to account). But some of his best material is about the elements of Facebook’s organisation and culture that created the mess, and the work he has done trying to alert powerful people to the need for action.

Once Zuckerberg realised his creation was eating the world, he and his colleagues did what “bros” do, and embraced a mindset known as “growth hacking”, whereby what mattered was “increasing user count, time on site, and revenue”: unrestrained capitalism, in other words. And as all these things endlessly increased, the company simply sped on. “In the world of growth hacking, users are a metric, not people,” McNamee writes. As Facebook expanded, he says, “it is highly unlikely that civic responsibility ever came up.”
Roger McNamee, founder of Elevation Partners.

If Facebook looks like a borderline autocracy (Zuckerberg controls around 60% of the company’s voting shares, because his stock has a “class B” status that gives him unchallengeable power), that is partly because it is different from comparable companies in one crucial sense: the simplicity of its business model. “The core platform consists of a product and a monetisation scheme,” McNamee points out, which “enables Facebook to centralise its decision making. There is a core team of roughly ten people who manage the company, but two people – Zuck and Sheryl Sandberg – are the arbiters of everything.” In the final analysis, Zuckerberg “is the undisputed boss”, both “rock star and cult leader”. It was always going to be a dangerous combination: global reach, a vast influence on events across the world, and a command structure too often reducible to the strengths and weaknesses of one man.

McNamee has worked hard to hold Facebook to account. His key ally is Tristan Harris, a former Google insider who is now an expert critic of Big Tech and its apparent ethical vacuum. As the most compelling passages here recount, while anxiety about the company began to spread, the pair lobbied members of Congress, and were not surprised to find that Washington “remained comfortably in the embrace of the major tech platforms” – but did their best to educate them on a subject many US legislators still seem to barely understand. Their efforts led to two hearings in late 2017, attended only by the big tech companies’ lawyers. Six months later, Zuckerberg finally went to Capitol Hill to testify over two days, but was initially confronted with some of the most moronic questions imaginable (“How do you sustain a business model in which users don’t pay for your service?” asked Utah’s 84-year-old senator Orrin Hatch). His second session, in front of the House of Representatives’ Committee on Energy and Commerce, was much better, full of biting criticism. But, as McNamee sighingly acknowledges, his former friend “caught a break”: TV news was suddenly consumed by fallout from the FBI raiding the home and office of Donald Trump’s attorney Michael Cohen, and Zuckerberg went back to northern California looking remarkably untroubled.

Should political will and public alarm eventually combine to finally break Silicon Valley’s remarkable power, McNamee knows roughly what ought to happen. He points to giving people control and ownership of their data, and the need to push through years of free-market dogma and convince the US authorities to reinvent anti-monopoly rules, and to take some action. What exactly this might entail remains frustratingly unclear, but he wants his readers to know he has made the ideological leap required. “Normally, I would approach regulation with extreme reluctance, but the ongoing damage to democracy, public health, privacy and competition justifies extraordinary measures,” he says. Unwittingly, the way he frames his point speaks volumes about how much we lost in the laissez-faire revolutions of the 1980s: what, after all, is so extraordinary about democratically elected governments taking action against corporations that are out of control?

Zucked! Why We Keep Forgiving Facebook

Here is an excellent podcast from the NPR 1A show on Facebook. Joshua Johnson is interviewing Roger McNamee, the author of ‘Zucked: Waking Up to the Facebook Catastrophe’.

You may have lots of friends on Facebook. But are you friends with Facebook?

It’s been 15 years since a Harvard student named Mark Zuckerberg co-created the social network in his dorm room. But like many teenagers, it’s prone to misbehave and worry the grown-ups. Some expect Facebook to implode before it turns sweet sixteen.

No one intended Facebook to cause the problems that it has: not Zuckerberg, its engineers, its early investors or advisors.

We spoke with Roger McNamee, former advisor and early investor, about how the company changed the world in unexpected ways and — in his view — refused to do right by its users in times of trouble.

How to Stop Facebook’s Dangerous App Integration Ploy

Here is a great op-ed piece by Sally Hubbard, a former assistant attorney general in the New York State Attorney General’s Antitrust Bureau and an editor at The Capitol Forum, where she covers technology and monopolization. She makes two points: 1) Facebook is a monopolist, and 2) the FTC is toothless. Both need to change.
Quote

In response to calls that Facebook be forced to divest itself of WhatsApp and Instagram, Mark Zuckerberg has instead made a strategic power grab: He intends to put Instagram, WhatsApp and Facebook Messenger onto a unified technical infrastructure. The integrated apps are to be encrypted to protect users from hackers. But who’s going to protect users from Facebook?

Ideally, that would be the Federal Trade Commission, the agency charged with enforcing the antitrust laws and protecting consumers from unfair business practices. But the F.T.C. has looked the other way for far too long, failing to enforce its own 2011 consent decree under which Facebook was ordered to stop deceiving users about its privacy claims. The F.T.C. has also allowed Facebook to gobble up any company that could possibly compete against it, including Instagram and WhatsApp.

Not that blocking these acquisitions would have been easy for the agency under the current state of antitrust law. Courts require antitrust enforcers to prove that a merger will raise prices or reduce production of a particular product or service. But proving that prices will increase is nearly impossible in a digital world where consumers pay not with money but with their personal data and by viewing ads.

The integration Mr. Zuckerberg plans would immunize Facebook’s monopoly power from attack. It would make breaking Instagram and WhatsApp off as independent and viable competitors much harder, and thus demands speedy action by the government before it’s too late to take the pieces apart. Mr. Zuckerberg might be betting that he can integrate these three applications faster than any antitrust case could proceed — and he would be right, because antitrust cases take years.

Luckily, the F.T.C. has a way to act quickly. Prompted by the Cambridge Analytica scandal, the agency has been investigating Facebook for violating that 2011 consent decree, which required it, among other things, to not misrepresent its handling of user information and to create a comprehensive privacy program. The F.T.C. can demand Facebook stop the integration as one of the conditions for settling any charges related to the consent decree, rather than just imposing an inconsequential fine.

If not stopped, the integration will cement Facebook’s monopoly power by enriching its data trove, allowing it to spy on users in new ways. Facebook might decide to sync data from one app to another so it can better track users. And Facebook needs user data: The reason it commands such a large share of digital advertising is that it tracks users — and even people without Facebook accounts — across millions of sites. It gathers data that allows it to target ads more precisely than many of its rivals for digital ad dollars, including news media sites and content creators.

After stopping Mr. Zuckerberg’s integration plan, the F.T.C. should reverse the WhatsApp and Instagram acquisitions as illegal under the Clayton Act, which prohibits mergers and acquisitions where the effect “may be substantially to lessen competition, or to tend to create a monopoly.” Undoing the mergers would give consumers an alternative to Facebook-owned apps and force Facebook to do better.

Without meaningful competition, Facebook has little incentive to protect users by making changes that could reduce profits. Users unhappy about data collection and algorithms that promote fake news and political polarization don’t have anywhere to go.

Any future Facebook acquisitions, no matter what the size, should be strictly reviewed because of the company’s history of deceiving users. Facebook uses technology, like its Onavo and Research apps, that monitor consumers’ app usage to identify potential rivals even before they are big enough to get on antitrust enforcers’ radars. Internal Facebook documents published by the British Parliament show Facebook used Onavo data to identify WhatsApp as a competitive threat, only to convince regulators otherwise.

Congress also should write legislation to overrule misguided cases that have neutered antitrust enforcement, and pass a strong privacy law with enough resources to enforce it. Only then, perhaps, will we be protected from Facebook.

Depression in girls linked to higher use of social media

Is anyone surprised?
Quote

Research suggests link between social media use and depressive symptoms was stronger for girls compared with boys

Girls’ much higher rate of depression than boys’ is closely linked to the greater time they spend on social media, and online bullying and poor sleep are the main culprits for their low mood, new research reveals.


It found that many girls spend far more time using social media than boys, and also that they are much more likely to display signs of depression linked to their interaction on platforms such as Instagram, WhatsApp and Facebook.

As many as three-quarters of 14-year-old girls who suffer from depression also have low self-esteem, are unhappy with how they look and sleep for seven hours or less each night, the study found.

“Girls, it seems, are struggling with these aspects of their lives more than boys, in some cases considerably so,” said Prof Yvonne Kelly, from University College London, who led the team behind the findings.

The results prompted renewed concern about the rapidly accumulating evidence that many more girls and young women exhibit a range of mental health problems than boys and young men, and about the damage these can cause, including self-harm and suicidal thoughts.

The study is based on interviews with almost 11,000 14-year-olds who are taking part in the Millennium Cohort Study, a major research project into children’s lives.

BOGUS SCIENCE: Facebook Takes On Tricky Public Health Role

Among the hundreds of other reasons, it is time to stop using Facebook.

A police officer on the late shift in an Ohio town recently received an unusual call from Facebook.

Earlier that day, a local woman wrote a Facebook post saying she was walking home and intended to kill herself when she got there, according to a police report on the case. Facebook called to warn the Police Department about the suicide threat.

The officer who took the call quickly located the woman, but she denied having suicidal thoughts, the police report said. Even so, the officer believed she might harm herself and told the woman that she must go to a hospital — either voluntarily or in police custody. He ultimately drove her to a hospital for a mental health work-up, an evaluation prompted by Facebook’s intervention. (The New York Times withheld some details of the case for privacy reasons.)
….

Facebook has computer algorithms that scan the posts, comments and videos of users in the United States and other countries for indications of immediate suicide risk. When a post is flagged, by the technology or a concerned user, it moves to human reviewers at the company, who are empowered to call local law enforcement.
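The scan-flag-escalate workflow described above can be sketched as a toy triage pipeline. Everything here – the term list, the weights, the threshold, the function names – is invented for illustration and implies nothing about how Facebook's real classifier works:

```python
# Hypothetical sketch of the triage flow the article describes: an automated
# scorer flags posts, and only flagged posts reach a human reviewer, who then
# decides whether to call emergency responders. All names and numbers below
# are illustrative assumptions, not Facebook's actual system.

RISK_TERMS = {"kill myself": 1.0, "end it all": 0.8, "hopeless": 0.3}
FLAG_THRESHOLD = 0.7

def risk_score(post: str) -> float:
    """Crude keyword-weight score capped at 1.0."""
    text = post.lower()
    return min(1.0, sum(w for term, w in RISK_TERMS.items() if term in text))

def triage(posts):
    """Return the subset of posts an automated pass would queue for human review."""
    return [p for p in posts if risk_score(p) >= FLAG_THRESHOLD]

queue = triage([
    "had a great day at the park",
    "i'm walking home and i want to end it all",
])
# Only the second post clears the threshold and reaches a reviewer;
# the escalation-to-police decision stays with the human, as in the article.
```

Note that even this toy version makes the critics' point concrete: the choice of terms, weights and threshold is entirely opaque to the people being scored.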

“In the last year, we’ve helped first responders quickly reach around 3,500 people globally who needed help,” Mr. Zuckerberg wrote in a November post about the efforts.

But other mental health experts said Facebook’s calls to the police could also cause harm — such as unintentionally precipitating suicide, compelling nonsuicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.

And, they said, it is unclear whether the company’s approach is accurate, effective or safe. Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police. And it has not disclosed exactly how its reviewers decide whether to call emergency responders. Facebook, critics said, has assumed the authority of a public health agency while protecting its process as if it were a corporate secret.

Yes you read that right. “Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police.” B.S. — how about formal clinical trials like the rest of the medical world? Their algorithm should get FDA approval first at a minimum.

“It’s hard to know what Facebook is actually picking up on, what they are actually acting on, and are they giving the appropriate response to the appropriate risk,” said Dr. John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston. “It’s black box medicine.”


“In this climate in which trust in Facebook is really eroding, it concerns me that Facebook is just saying, ‘Trust us here,’” said Mason Marks, a fellow at Yale Law School and New York University School of Law.

Right – Trust Facebook? Never. I submit the real reason that miscreant Zuckerberg is doing this is that it is now well known that a plausible link exists between increased social media use and depression and suicide. Just say no to Facebook.

2012 – Social Media and Suicide: A Public Health Perspective

2017 – The Risk Of Teen Depression And Suicide Is Linked To Smartphone Use

No one likes a lying a-hole like Zuckerberg and crew

Quote

Mark Zuckerberg did everything in his power to avoid Facebook becoming the next MySpace – but forgot one crucial detail…No one likes a lying asshole

Comment Let’s get one thing straight right off the bat: Facebook, its CEO Mark Zuckerberg, and its COO Sheryl Sandberg, and its public relations people, and its engineers have lied. They have lied repeatedly. They have lied exhaustively. They have lied so much they’ve lost track of their lies, and then lied about them.

For some reason, in an era where the defining characteristic of the President of the United States is that he lies with impunity, it feels as though everyone has started policing the use of the word “lie” with uncommon zeal. But it is not some holy relic, it is a word, and it has a definition.

Lie (verb)
1 : to make an untrue statement with intent to deceive
2 : to create a false or misleading impression

By any measure, Facebook as an organization has knowingly, willingly, purposefully, and repeatedly lied. And two reports this week demonstrate that the depth of its lying was even worse than we previously imagined.

Before we dig into the lies, though, it’s worth asking the question: why? Why has the corporation got itself into this position, and why does it have to be dragged kicking and screaming, time and again, to confront what it already knows to be true?

And the answer to that is at the very heart of Facebook, it goes to the core of Mark Zuckerberg’s personality, and it defines the company’s corporate culture: it is insecure. And it has good reason to be.

The truth is that Facebook is nothing special. It is a website. A very big and clever website but a website that is completely reliant on its users to post their own content. Those users don’t need Facebook and they could, in a matter of seconds, decide to tap on a different app and post their thoughts and updates there, instead. If enough people make that decision, the company collapses. All 340 billion dollars of it.

Mark Zuckerberg knows that all too well, and as internal emails handed over to the British Parliament and then published make clear, the top tier of Facebook was highly focused on that question of existential dread: how do we avoid becoming the next MySpace, Geocities, Google Plus, or Friendster?
Novelty item

With thousands of people working underneath them, the world’s largest companies knocking at their door with blank checks for advertising, and the globe’s political leaders inviting them to meetings, Facebook tasted greatness, but couldn’t shake a huge question underneath it all: how does Facebook survive once the novelty wears off?

And the answer was the smart one: make yourself a part of the digital ecosystem. Yes, Facebook was completely reliant on its users, but everyone else wanted those users, too, and while it had them, the corporation needed to make sure it became enmeshed in as many other systems as possible.

The logic was that of a savvy businessman making sure all his money and resources aren’t in one market: diversify, Mark! And that became the driving force behind every subsequent strategic decision, while the rest of the company focused on making Facebook a really good product – making it easy to do more, post more, interact more.

And so, we had music service Spotify granted access to Facebook users’ private messages, once users had linked their Spotify and Facebook accounts. Why on Earth would Spotify want to read people’s private messages?

Easy: it is a huge, tasty dataset. You could find out what bands people are excited about, and send them notices of new albums or gigs. You could see what they think of rival services, or the cost of your service. People were encouraged to message their pals on Facebook through Spotify, letting them know what they were listening to. All in all, it was access to private thoughts: companies spend small fortunes paying specialist survey companies for these sorts of insights.

Likewise Netflix. It had access to the same data under a special program that Facebook ran with other monster internet companies and banks, in which they were granted extraordinary access to millions of people’s personal data.

Facebook cut data deals with all sorts of companies based on this premise: give them what they want, and in return they would be hauled onto Zuckerberg’s internet reservation.

For example, Yahoo! got real-time feeds of posts by users’ friends – reminding us of Cambridge Analytica gathering information on millions of voters via a quiz app, and using it to target them in contentious political campaigns in the US and Europe.

Microsoft’s Bing was able to access the names of nearly all Facebook users’ friends without permission, and Amazon was able to get at friends’ names and contact details. Russian search engine Yandex had Facebook account IDs at its fingertips, though it claims it didn’t even know this information was available. Facebook at first told the New York Times Yandex wasn’t a partner, and then told US Congress it was.

Crossing the line

Plugging large companies into users’ profiles, and their friends’ profiles, became a running theme, and for the antisocial network, it all worked: the data flowed.

But then things took a darker turn. The users and privacy groups started asking questions. Facebook’s entire strategy started looking shaky as people decided they should have control over what is done with their private data. In Europe, a long debate led to solid legislation: everyone in the EU would soon have a legal right to control their information and, much worse, organizations that didn’t respect that could face massive fines.

Facebook started cutting shadier and shadier deals to protect its bottom line. Its policy people started developing language that carefully skirted around reality; and its lawyers began working on semantic workarounds so that the Silicon Valley titan could make what looked like firm and unequivocal statements on privacy and data control, but in fact allowed things to continue on exactly as they had. What was being shared was not always completely clear.

The line was crossed when Facebook got in bed with smartphone manufacturers: it secretly gave the device makers access to each phone user’s Facebook friends’ profiles, when the handheld was linked to its owner’s account, bypassing protections.

And you know how you can turn off “location history” in the Facebook app, and you can go into your iPhone’s settings and select “never” for the Facebook app when it comes to knowing your location? And you can refuse to use Facebook’s built-in workaround where you “check in” to places – at which point it will re-grant itself access to your location with a single tap?

Well, you can do all that, and still Facebook will know where you are and sell that information to others.

To which the natural question is: how? Well, we have what we believe to be the technical answer. But the real answer is: because it lies. Because that information is valuable to it. Because that information forms the basis of mutually reinforcing data-sharing agreements with all the companies that could one day kill Facebook by simply shrugging their shoulders.

That is how Sandberg and Zuckerberg are able to rationalize their lies: because they believe the future of the entire company is dependent on maintaining the careful fiction that users have control over their data when they don’t.
Meet Stan

Here’s a personal example of how these lies have played out. Until recently, your humble Reg vulture lived next door to a man called Stan. Stan had spent his whole life in Oakland, California. He was a proud black man in his 70s who lived alone. This reporter moved next door to him having spent his entire life up until that moment not in Oakland; a white man in his 30s. To say we had no social connections in common would be an understatement. The only crossover in friends, family, culture, and hangouts were the occasional conversations we had in the street with our neighbors.

He had good taste in music. And I know that in the same way I knew he had an expensive and powerful stereo system. But we didn’t even go to the same gigs, because most of the music he played was by artists long since dead.

Despite all this, Facebook would persistently suggest that I knew Stan and should add him as a friend on Facebook. The same happened to my wife. I took this as a sign I needed to tighten my privacy settings, but even after making changes that cut Facebook out of my daily habits, it still recommended him as a friend. The only thing that finally stopped it? Deleting the Facebook app from my phone.

Sensing a story, and in my capacity as a tech reporter, I started asking Facebook questions about this extraordinary ability to know who I lived next to when it didn’t have access to my location. And the company responded, repeatedly, that it doesn’t. You have control over your data. You can choose what Facebook can see and do with that data. Facebook does not gather or sell data unless its users agree to it.

Except, of course, the opposite was true. It was a lie. And Facebook knew it. It had in fact gone to some lengths to make sure it knew where all its users were.

Precisely how it manages to say one thing and do the opposite is not yet clear, but we are willing to bet it comes down to two factors. One: its app stores and sends several data points that can be used to figure out location, such as your broadband IP address and/or Wi-Fi and Bluetooth network identifiers. Your cable broadband IP address, for example, can often be narrowed down to a relatively precise location, such as a street or neighborhood, especially if you have a fixed IP address at home.

At this point, using Stan’s location from his IP address or from his phone app, Facebook could work out we live next to each other, or at least are near each other a lot, and thus might be friends.
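The correlation described above can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual code: the IP-to-neighborhood table, the user names, and all the addresses below are invented, and real systems would use far richer signals (Wi-Fi SSIDs, Bluetooth beacons, partner data) than a bare /24 prefix lookup.

```python
from collections import defaultdict

# Invented geolocation table: /24 IP prefix -> coarse neighborhood label.
# Real services buy this kind of mapping from geolocation providers.
IP_PREFIX_TO_AREA = {
    "73.92.14": "Oakland - Grand Lake",
    "73.92.15": "Oakland - Lakeshore",
}

def area_for_ip(ip):
    """Map a home broadband IP to a coarse area via its /24 prefix."""
    prefix = ip.rsplit(".", 1)[0]
    return IP_PREFIX_TO_AREA.get(prefix)

def suggest_neighbours(users):
    """Group users by inferred area; suggest everyone else in the same area."""
    by_area = defaultdict(list)
    for name, ip in users.items():
        area = area_for_ip(ip)
        if area:
            by_area[area].append(name)
    suggestions = {}
    for name, ip in users.items():
        area = area_for_ip(ip)
        suggestions[name] = [n for n in by_area.get(area, []) if n != name]
    return suggestions

# Two households on the same cable segment share an IP prefix,
# so the service can guess they are neighbours - no GPS required.
users = {"reporter": "73.92.14.101", "stan": "73.92.14.37", "spouse": "73.92.14.101"}
print(suggest_neighbours(users)["reporter"])  # ['stan', 'spouse']
```

The point of the sketch is that no explicit "location permission" is involved anywhere: the inference falls out of metadata the app collects anyway.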
Control is an illusion

With the news that Facebook signed dozens of data sharing agreements with large tech companies, it seems increasingly likely that Facebook was in fact not gathering my location data directly to figure out where I was, but was pulling in data from others, perhaps mixing in my home broadband IP address’s geolocation, and correlating it all to work out relationships and whereabouts.

We don’t yet know what precise methods Facebook uses to undercut its promises, but one thing is clear: the company has made untrue statements, with intent to deceive, to this reporter and to many other reporters, users, lawmakers, federal agencies, and academics. It has created false or misleading impressions. It has lied. And it has done so deliberately. Over and over again.

And it is still lying today. Faced with evidence of its data-sharing agreements where – let’s not forget this – Facebook provided third parties access to people’s personal messages, and more importantly to their friends’ feeds, the company claims it broke no promises because it defined the outfits it signed agreements with as “service providers.” And so, according to Facebook, it didn’t break a pact it has with the US government’s trade watchdog, the FTC, not to share private data without permission, and likewise not to break agreements it has with its users.

Facebook also argues it had clearly communicated that it was granting apps access to people’s private messages, and that users had to link their Spotify, Netflix, Royal Bank of Canada, et al, accounts with their Facebook accounts to activate the feature. And while Facebook’s tie-ups with, say, Spotify and Netflix were well publicized, this week’s outcry suggests that not every user was aware, or made aware, of what they were getting into. In any case, the “experimental” access to folks’ private conversations was discontinued nearly three years ago.

The social network claims it only ever shared with companies what people had agreed to share or chosen to make public, sidestepping a key issue: that people potentially had their profiles viewed, slurped, harvested, and exploited by their friends’ connected apps and websites.

As for the question of potential abuse of personal data handed to third parties, Facebook amazingly used the same line that it rolled out when it attempted to deflect the Cambridge Analytica scandal: that third parties were beholden to Facebook’s rules about using data. But, of course, Facebook doesn’t check or audit whether that is the case.
Sorry, again

And what is its self-reflective apology this time for granting such broad access to personal data to so many companies? It says it is guilty of not keeping on top of old agreements, and that the channels funneling private data to third parties stayed open much longer than they should have after it made privacy-enhancing changes.

We can’t prove it yet, and may never be able to unless more internal emails find their way out, but let’s be honest: we all know this is another lie. Facebook didn’t touch those agreements because it didn’t want anyone to look at them. It chose to be willfully ignorant of the details of its most significant agreements with some of the world’s largest companies.

And it did so because it still believes it can ride this out, and that those agreements are going to be what keeps Facebook going as a corporation.

What Zuckerberg didn’t factor into his strategic masterstroke, however, was one critical detail: no one likes a liar. And when you lie repeatedly, to people’s faces, you go from liar to lying asshole. And lying asshole is enough to make people delete your app.

And when that app is deleted, the whole sorry house of cards will come tumbling down. And Facebook will become Friendster.

Call to Boycott All Businesses With Facebook Links

Well, at the moment, it seems, that would mean stopping almost all commercial activity. But it needs to start somewhere. Look, before the Facebook scam, one could visit a website without being inundated with Facebook analytics, prompts to use your Facebook login, links to “like us,” and all the other gimmicks designed to get users to surrender their private information.

Perhaps it is time to start boycotting all businesses, charities, organizations, government entities, schools, and so on that insist on wiring their sites into the ilk that is Facebook. Facebook kills. Here are a few links about how Facebook has blood on its hands:

https://www.bbc.co.uk/news/resources/idt-sh/nigeria_fake_news

https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html

The list goes on….

Act now! Delete your Facebook Account and boycott those enterprises that continue to support Facebook.

Comments welcome.

How Facebook let Big Tech peers inside its privacy wall

https://www.nytimes.com/2018/12/19/business/dealbook/facebook-data-scandal.html

Facebook let some of the world’s largest technology companies have more intrusive access to users’ personal data than it had previously disclosed. That’s according to an investigation by Gabriel J.X. Dance, Michael LaForgia and Nicholas Confessore of the NYT, based on 270 pages of Facebook’s internal documents and interviews with more than 60 people.

The breadth of the data-sharing was vast. “Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends.”

Users often didn’t know. “Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.”

In fact, even Facebook had trouble keeping track. “By 2013, Facebook had entered into more such partnerships than its midlevel employees could easily track, according to interviews with two former employees,” explains the report. “So they built a tool that did the technical work of turning special access on and off.” It doesn’t seem to have solved the problem; as of last year, for instance, Yahoo “could view real-time feeds of friends’ posts for a feature that the company had ended in 2011.”

How did this happen? “Under the terms of a 2011 consent agreement with the Federal Trade Commission, Facebook was required to strengthen privacy safeguards and disclose data practices more thoroughly. The company hired an independent firm, PricewaterhouseCoopers, to formally assess its privacy procedures and report back to the F.T.C. every two years.” But “four former officials and employees of the F.T.C., briefed on The Times’s findings, said the data-sharing deals likely violated the consent agreement.”

Why it matters: Since the Cambridge Analytica scandal, Facebook has insisted that it does not sell data. But the NYT’s reporting suggests that it’s been eager to barter for arrangements that could speed its growth.

DC sues Facebook over Cambridge Analytica scandal

It is about time. Kudos, AG Karl Racine! Come on, state AGs, get off your duffs and join in. Note to Brussels: turn up the heat!

Quote

“Facebook failed to protect the privacy of its users,” AG Karl Racine said.

The attorney general of the District of Columbia has sued (PDF) Facebook, alleging violations of local consumer protection laws.

In a statement sent to reporters on Wednesday, AG Karl A. Racine said that the social media giant did not adequately protect users’ data, “enabling abuses like one that exposed nearly half of all District residents’ data to manipulation for political purposes during the 2016 election.”

“It allowed Cambridge Analytica to purchase personal information that was improperly obtained from 70 million [individuals], including 340,000 District of Columbia residents,” Racine said on a Wednesday call with reporters. “That’s nearly half of the people that live in the District of Columbia.”

Ben Wiseman, the director of the Office of Consumer Protection at the DC AG’s office, said that the lawsuit is seeking restitution and damages, including “civil penalties up to $5,000 per violation.”

340,000 users times $5,000 each would total $1.7 billion—but the case is likely to settle for far less than that.
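The back-of-the-envelope maximum exposure can be checked in one line, using only the figures quoted above (340,000 affected residents, $5,000 per violation):

```python
# Maximum statutory exposure in the DC suit, per the figures in the complaint.
affected_residents = 340_000
penalty_per_violation = 5_000  # dollars, per DC consumer protection law
max_exposure = affected_residents * penalty_per_violation
print(f"${max_exposure:,}")  # $1,700,000,000, i.e. $1.7 billion
```

As the article notes, that is a ceiling, not a prediction; such cases typically settle for a small fraction of the statutory maximum.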

Racine added that other states have expressed interest in joining this lawsuit.

“We think that bringing suit is necessary in order to bring these issues to light,” he said.

In the lawsuit, Racine points out that just 852 Facebook users in DC used Aleksandr Kogan’s “thisisyourdigitallife” personality quiz, but, due to the permissive data sharing that was in place at the time, hundreds of thousands of people were affected.
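The amplification the complaint describes comes from the pre-2015 platform rule that an installed app could read not just the installer's profile but their friends' profiles too. A hypothetical sketch of that mechanic, with an invented toy graph (the real numbers, 852 installers reaching hundreds of thousands of people, come from the lawsuit):

```python
# Toy model of friend-permission amplification under the old platform rules:
# an app reaches its installers plus every friend of every installer.
# The friend graph below is invented for illustration.

def affected_users(friend_graph, installers):
    """Return everyone whose profile data the app could reach."""
    reached = set(installers)
    for user in installers:
        reached.update(friend_graph.get(user, ()))
    return reached

friend_graph = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "erin"},
}

# Only alice installed the quiz, yet four people's data is exposed.
print(sorted(affected_users(friend_graph, {"alice"})))  # ['alice', 'bob', 'carol', 'dave']
```

With a realistic average of a few hundred friends per account, a few hundred installers reaching a few hundred thousand profiles is exactly the ratio the complaint cites.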

“Furthermore, after discovering the improper sale of consumer data by Kogan to Cambridge Analytica, Facebook failed to take reasonable steps to protect its consumers’ privacy by ensuring that the data was accounted for and deleted,” the complaint states.

“Facebook further failed to timely inform the public (including DC residents) that tens of millions of its consumers had their data sold to Cambridge Analytica, even though Facebook knew, or should have known, that such data was acquired in violation of its policies and was being used in connection with political advertising.”

Monique Hall, a Facebook spokeswoman, declined to respond to Ars’ questions about the new lawsuit but provided a corporate statement.

“We’re reviewing the complaint and look forward to continuing our discussions with attorneys general in DC and elsewhere,” the statement read.