
Social Media Privacy

Facebook severs ties to data brokers

Quote

The Social Network™ all-but-admits its previous legalese for developers was useless

Facebook has outlined a set of changes to its platform that impact developers and data brokers.

Facebook has a program called “Partner Categories” that it tells advertisers will let them “further refine your targeting based on information compiled by … partners, such as offline demographic and behavioural information like homeownership or purchase history.”

The partners Facebook uses are Acxiom, CCC Marketing, Epsilon, Experian, Oracle Data Cloud and Quantium.

Graham Mudd, a Facebook product marketing director, said that using such providers to refine ad targeting “is common industry practice” but that Facebook feels “this step, winding down over the next six months, will help improve people’s privacy on Facebook.”

On its own platform, Facebook has promised new fine print for business-to-business applications, complete with “rigorous policies and terms”. Which kind of admits some of Facebook’s past fine print was floppy. Perhaps floppy enough to let data flow to Cambridge Analytica and beyond?

Also notable is a change that means apps that provide access to lists of a user’s friends will now be reviewed by Facebook.

So there you have it. No real change. They can’t change. Facebook needs to sell data like Starbucks needs to sell coffee. It is their business and you are their product. They will continue to mine and map your information with their third-party partners to create highly targeted ads.

Want it to stop? Deleting your Facebook account now would be a good start.

Facebook Inspired Killings

This is an article from Oct 2017. While I have excerpted it here, I think it is worth a complete read (see Quote). It is an excellent article that I feel shows the complexity and the human cost of Facebook.

Quote

… But while the focus on Russia is understandable, Facebook has been much less vocal about the abuse of its services in other parts of the world, where the stakes can be much higher than an election.
…
the ethnic cleansing of Rohingya Muslims, an ethnic minority in Myanmar that has been subjected to brutal violence and mass displacement. Violence against the Rohingya has been fueled, in part, by misinformation and anti-Rohingya propaganda spread on Facebook, which is used as a primary news source by many people in the country. Doctored photos and unfounded rumors have gone viral on Facebook, including many shared by official government and military accounts….In Myanmar, the rise in anti-Rohingya sentiment coincided with a huge boom in social media use that was partly attributable to Facebook itself. In 2016, the company partnered with MPT, the state-run telecom company, to give subscribers access to its Free Basics program. Free Basics includes a limited suite of internet services, including Facebook, that can be used without counting toward a cellphone data plan. As a result, the number of Facebook users in Myanmar has skyrocketed to more than 30 million today from 2 million in 2014.

In India, where internet use has also surged in recent years, WhatsApp, the popular Facebook-owned messaging app, has been inundated with rumors, hoaxes and false stories. In May, the Jharkhand region in Eastern India was destabilized by a viral WhatsApp message that falsely claimed that gangs in the area were abducting children. The message incited widespread panic and led to a rash of retaliatory lynchings, in which at least seven people were beaten to death. A local filmmaker, Vinay Purty, told the Hindustan Times that many of the local villagers simply believed the abduction myth was real, since it came from WhatsApp….
The company has made many attempts to educate users about the dangers of misinformation. In India and Malaysia, it has taken out newspaper ads with tips for spotting false news. In Myanmar, it has partnered with local organizations to distribute printed copies of its community standards, as well as created educational materials to teach citizens about proper online behavior.

But these efforts, as well-intentioned as they may be, have not stopped the violence, and Facebook does not appear to have made them a top priority. The company has no office in Myanmar, and neither Mr. Zuckerberg nor Ms. Sandberg has made any public statements about the Rohingya crisis.

Facebook has argued that the benefits of providing internet access to international users will ultimately outweigh the costs. Adam Mosseri, a Facebook vice president who oversees the News Feed, told a journalism gathering this month, “In the end, I don’t think we as a human race will regret the internet.” Mr. Zuckerberg echoed that sentiment in a 2013 manifesto titled “Is Connectivity a Human Right?,” in which he said that bringing the world’s population online would be “one of the most important things we all do in our lifetimes.”

That optimism may be cold comfort to people in places like South Sudan. Despite being one of the poorest and least-wired countries in the world, with only around 20 percent of its citizens connected to the internet, the African nation has become a hotbed of social media misinformation. As BuzzFeed News has reported, political operatives inside and outside the country have used Facebook posts to spread rumors and incite anger between rival factions, fostering violence that threatens to escalate into a civil war. A United Nations report last year determined that in South Sudan, “social media has been used by partisans on all sides, including some senior government officials, to exaggerate incidents, spread falsehoods and veiled threats, or post outright messages of incitement.”

Peter Thiel Employee Helped Cambridge Analytica Before It Harvested Data

Quote

I think this story shows that Facebook’s data mining is the tip of the iceberg. It will drag in Google and others.

As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.

It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook.

Google Link?

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.

In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.

“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”

Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.

One can only hope this will broaden the understanding of what “you are the product” means for free services peddled by big tech. Then again…

See What Google Has on You

Want to see what Google has on you? Well, My Activity will do that. I love the innocent picture. Oh how sweet: Google working to make a better experience for you. What bollocks. At every step of trying to delete your data, you get pop-ups warning you how bad what you are trying to do is (along with more innocent pictures).

Here is the real picture (lower right) that should be posted.

To be fair, if you ignore all the pretty, happy “do no harm” nonsense warnings, you can turn a lot of stuff off. That said, can you trust them? I can’t.

Mozilla pulls ads from Facebook after spat over privacy controls

Quote

The Mozilla Foundation has expressed its discomfort at the Cambridge Analytica revelations by pulling its ads from Facebook.

While the disappearance of Mozilla’s modest ad spend is hardly going to bring down The Social Network™, the organisation’s decision to “pause” its Facebook advertising came after Zuckerland tried to assure Mozilla that the conditions that prevailed in 2015 (when Cambridge Analytica breached its terms of service) had long been addressed.

On March 20, Mozilla made this statement on the scandal, asking Facebook to protect privacy “by default” [Good luck with that one – Ed], and saying its app permissions leave “billions of its users vulnerable without knowing it”.

Mozilla also launched a petition against apps that access data on people other than that of the individual who installed an app. Facebook apparently took exception to that. Here’s what Mozilla added on March 22:

Facebook reached out to us to discuss how we characterized their settings and to tell us that our original blog post overstated the scope of data sharing with app developers. What we described is an accurate characterization of what appears in Facebook’s settings.

What Facebook told us is that what we have written below is only true generally for third-party apps prior to 2015. Again, this isn’t clear in the user-facing tools and we think this needs to be fixed.

….
The Society’s position statement says the data-slurp, micro-targeting, psychographics and exploitation “raise questions about the possibility that Facebook data has been, or is being used improperly elsewhere. ISBA is asking Facebook for a full account of further potential issues so that advertisers can take appropriate measures.”

Facebook’s Mark Zuckerberg Vows to Bolster Privacy Amid Cambridge Analytica Scandal

Sounds like bullshit to me. And how could he even do this? This is his business: harvesting and selling his users’ personal data by offering a honeypot free service to clueless (and some not-so-clueless) users.

After several days of silence, amid a growing chorus of criticism, Facebook chief executive Mark Zuckerberg publicly addressed the misuse of data belonging to 50 million users of the social network.

“We have a responsibility to protect your data,” Mr. Zuckerberg said Wednesday in a Facebook post, his preferred means of communication, “and if we can’t then we don’t deserve to serve you.”

Wait – you don’t serve your users… they serve you, Zucky.

Read more

WhatsApp co-founder joins call to #DeleteFacebook as fallout intensifies

Quote

Facebook’s troubles entered a fourth day with a rising chorus of people – including the co-founder of WhatsApp – joining the #DeleteFacebook movement as the Federal Trade Commission was reported to be investigating the company’s handling of personal data.

Momentum gathered behind the #DeleteFacebook campaign, with several media outlets publishing guides to permanently deleting your Facebook account. One surprising voice to emerge was that of Brian Acton, the co-founder of WhatsApp, which was bought by Facebook for $19bn in 2014.

Acton, who left WhatsApp in late 2017, posted to Twitter: “It is time. #deletefacebook.”

Meanwhile, in the United States, the FTC will examine whether the social networking site violated a 2011 agreement with the agency over data privacy, after reports that a firm called Global Science Research harvested information relating to 50 million Facebook profiles and provided the data to Cambridge Analytica.

Of course this raises the question: why on earth were you so ignorant as to use Facebook and WhatsApp (after Facebook bought it) in the first place?

Facebook Leak, or OMG – you mean Facebook has my data?

Well, unless you live under a rock, you know Facebook has been caught once again with its pants down. Let’s see…

LONDON — As the upstart voter-profiling company Cambridge Analytica prepared to wade into the 2014 American midterm elections, it had a problem.

The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.

So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.


But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.

Read more

Oh I am so shocked, SHOCKED I Say

And today I learned that Cambridge Analytica Suspends C.E.O. Amid Facebook Data Scandal

Cambridge Analytica, the political data firm with ties to President Trump’s 2016 campaign, suspended its chief executive, Alexander Nix, on Tuesday, amid the furor over the access it gained to private information on more than 50 million Facebook users.

The decision came after a television broadcast in which Mr. Nix was recorded suggesting that the company had used seduction and bribery to entrap politicians and influence foreign elections.

The suspension marked a new low point for the fortunes of Cambridge Analytica and for Mr. Nix, who spent much of the past year making bold claims about the role his outfit played in the election of Mr. Trump. The company, founded by Stephen K. Bannon and Robert Mercer, a wealthy Republican donor who has put at least $15 million into it, offered tools that it claimed could identify the personalities of American voters and influence their behavior.

So-called psychographic modeling techniques, which were built in part with the data harvested from Facebook, underpinned Cambridge Analytica’s work for the Trump campaign in 2016. Mr. Nix once called the practice “our secret sauce,” though some have questioned its effectiveness.

But in recent days, the firm has found itself under increased scrutiny from lawmakers, regulators and prosecutors in the United States and Britain following reports in The New York Times and The Observer of London that the firm had harvested the Facebook data, and that it still had a copy of the information.

Read more

As I said before, anyone who uses Facebook, has an Alexa, a smart TV (for the clueless), and so forth really needs to get educated on privacy and IT security. I will copy and post the excellent comment on this article by “Whips”:

Let’s be clear here: Facebook doesn’t steal our data; we give it to them, one Like at a time.

For decades, Europe has had a Data Protection Directive that runs circles around the U.S.’s, such as it is–and it’s about to get even stronger with the GDPR, which will improve user control over our own data.

Instead of Americans spewing moral outrage at the weekly corporate affront (last week Experian, this week Facebook, next week who knows), why not grow up and demand a national approach to data protection?

Facebook needs to be regulated more tightly, or broken up so that no single entity controls all of its data

Quote

The message was clear: The company just wanted negative stories to stop. It didn’t really care how the data was used.

Facebook knows what you look like, your location, who your friends are, your interests, if you’re in a relationship or not, and what other pages you look at on the web. This data allows advertisers to target the more than one billion Facebook visitors a day. It’s no wonder the company has ballooned in size to a $500 billion behemoth in the five years since its I.P.O.

The more data it has on offer, the more value it creates for advertisers. That means it has no incentive to police the collection or use of that data — except when negative press or regulators are involved. Facebook is free to do almost whatever it wants with your personal information, and has no reason to put safeguards in place.

For a few years, Facebook’s developer platform hosted a thriving ecosystem of popular social games. Remember the age of Farmville and Candy Crush? The premise was simple: Users agreed to give game developers access to their data in exchange for free use of addictive games.

…

In one instance, a developer appeared to be using Facebook data to automatically generate profiles of children, without their consent. When I called the company responsible for the app, it claimed that Facebook’s policies on data use were not being violated, but we had no way to confirm whether that was true. Once data passed from the platform to a developer, Facebook had no view of the data or control over it. In other cases, developers asked for permission to get user data that their apps obviously didn’t need — such as a social game asking for all of your photos and messages. People rarely read permissions request forms carefully, so they often authorize access to sensitive information without realizing it.

At a company that was deeply concerned about protecting its users, this situation would have been met with a robust effort to cut off developers who were making questionable use of data. But when I was at Facebook, the typical reaction I recall looked like this: try to put any negative press coverage to bed as quickly as possible, with no sincere efforts to put safeguards in place or to identify and stop abusive developers. When I proposed a deeper audit of developers’ use of Facebook’s data, one executive asked me, “Do you really want to see what you’ll find?”

The message was clear: The company just wanted negative stories to stop. It didn’t really care how the data was used.

This makes for a dangerous mix: a company that reaches most of the country every day and has the most detailed set of personal data ever assembled, but has no incentive to prevent abuse. Facebook needs to be regulated more tightly, or broken up so that no single entity controls all of its data. The company won’t protect us by itself, and nothing less than our democracy is at stake.

Indeed. And users, including businesses, need to get serious about privacy and the damage the likes of Facebook are doing, and flee Facebook and its ilk in droves. Will this happen? I doubt it. As long as it is free, they will come. As the increasing popularity of Alexa and other personal assistants that listen in shows, people continue to invite these modern forms of Big Brother into their private lives.

Russian Influence Reached 126 Million Through Facebook Alone

Quote

Russian agents intending to sow discord among American citizens disseminated inflammatory posts that reached 126 million users on Facebook, published more than 131,000 messages on Twitter and uploaded over 1,000 videos to Google’s YouTube service, according to copies of prepared remarks from the companies that were obtained by The New York Times.

The new information goes far beyond what the companies have revealed in the past and underlines the breadth of the Kremlin’s efforts to lever open divisions in the United States using American technology platforms, especially Facebook. Multiple investigations of Russian meddling have loomed over the first 10 months of the Trump presidency, with one leading to the indictments of Paul Manafort, the former Trump campaign chief, and others on Monday.

For Facebook, Google and Twitter, the discovery of Russian influence by way of their sites has been a rude awakening. The companies had long positioned themselves as spreading information and connecting people for positive ends. Now the companies must grapple with how Russian agents used their technologies exactly as they were meant to be used — but for malevolent purposes.

Rude awakening? Bullshit. For whom? Connecting people for positive ends? More bullshit! It is about hoovering up personal user data and selling it! Come on, wake up!

Just say no to Social Media. Just say no to Google. Demand privacy.