
Social Media Privacy

Facebook targets ads using phone numbers submitted for security purposes

Quote

If you sometimes — or often — wonder how or why you’re seeing a certain ad online, here’s a possible answer.

Most Facebook users know the company targets ads based on information they willingly give the company, but researchers have found that the social media giant also targets ads based on information users may not know is being used to target them — or information they did not explicitly give the company.

For example, phone numbers provided for two-factor authentication are also being used to target ads on Facebook, according to a new report that cites a study, titled “Investigating sources of PII used in Facebook’s targeted advertising,” by researchers from Northeastern and Princeton universities.

When a user gives Facebook a phone number for two-factor authentication or for the purpose of receiving alerts about log-ins, “that phone number became targetable by an advertiser within a couple of weeks,” Gizmodo reported.

A company spokeswoman told Gizmodo that “we use the information people provide to offer a more personalized experience, including showing more relevant ads.” The spokeswoman pointed out that people can set up two-factor authentication without offering their phone numbers.

However, the study also shows — and Gizmodo tested, by successfully targeting an ad at a computer science professor using a landline phone number — that contacts of Facebook users can be targeted without their consent. Facebook users who share their contacts are exposing those contacts to potential ad targeting.

This means that, as a Facebook spokeswoman told Gizmodo, “We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them.”

A Facebook spokeswoman told this news organization Thursday: “We are clear about how we use the information we collect, including the contact information that people upload or add to their own accounts. You can manage and delete the contact information you’ve uploaded at any time.”

In the study, the researchers said Facebook’s use of personally identifiable information in this way is to be expected, given that it’s the business the company is in. “This incentive is exacerbated with the recent introduction of PII-based targeting, which allows advertisers to specify exactly which users to target by specifying a list of their PII,” they said.
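To make the PII-based targeting concrete: list-based ad targeting of this kind generally works by normalizing an identifier (an email address or a phone number), hashing it, and matching the hash against hashes the platform already holds. Below is a minimal Python sketch of that matching step, assuming the SHA-256-over-normalized-digits scheme that custom-audience style uploads commonly document; the phone number and helper names are hypothetical and for illustration only.

```python
import hashlib

def normalize_phone(raw: str) -> str:
    # Strip everything but digits; list-upload systems typically expect a
    # bare number including country code (an assumption for this sketch).
    return "".join(ch for ch in raw if ch.isdigit())

def hash_pii(value: str) -> str:
    # SHA-256 over the normalized value, the scheme commonly documented
    # for custom-audience style PII uploads.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Hypothetical example: a number handed over for two-factor authentication
# and a number on an advertiser's upload list normalize and hash to the
# same value, so the two records match even though they were collected
# in completely different contexts.
two_factor_number = "+1 (555) 010-4477"
advertiser_upload = "15550104477"

print(hash_pii(normalize_phone(two_factor_number)) ==
      hash_pii(normalize_phone(advertiser_upload)))   # True
```

The hashing is often described as a privacy measure, but as the sketch shows it does nothing to separate a number given for security from the same number given for marketing.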

Facebook Does it Again! 50 million Facebook accounts breached

Quote

Facebook reset logins for millions of customers last night as it dealt with a data breach that may have exposed nearly 50 million accounts. The breach was caused by an exploit of three bugs in Facebook’s code that were introduced with the addition of a new video uploader in July of 2017. Facebook patched the vulnerabilities on Thursday, and it revoked access tokens for a total of 90 million users.

In a call with press today, Facebook CEO Mark Zuckerberg said that the attack targeted the “view as” feature, “code that allowed people to see what other people were seeing when they viewed their profile,” Zuckerberg said. The attackers were able to use this feature, combined with the video uploader feature, to harvest access tokens. A surge in usage of the feature was detected on September 16, triggering the investigation that eventually discovered the breach.

“The attackers did try to query our APIs—but we do not yet know if any private information was exposed,” Zuckerberg said. The attackers used the profile retrieval API, which provides access to the information presented in a user’s profile page, but there’s no evidence yet that Facebook messages or other private data was viewed. No credit card data or other information was exposed, according to Facebook.
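For readers wondering why stolen access tokens were the crown jewels here: a token is a bearer credential, so whoever holds it can call the API as that user until the token is revoked, which is exactly why Facebook reset 90 million of them. The sketch below is illustrative only, using Python's requests library against the Graph API's public "me" endpoint; the token value, API version and requested fields are assumptions, not details from the breach.

```python
import requests

# Illustrative sketch: the Graph API treats the access token as a bearer
# credential, so anyone holding a valid token can fetch the profile it
# belongs to. The token below is hypothetical and would simply return an
# error; the endpoint shape follows Facebook's public Graph API docs.
GRAPH_URL = "https://graph.facebook.com/v3.1/me"
STOLEN_TOKEN = "EAAB...hypothetical-token..."

resp = requests.get(
    GRAPH_URL,
    params={
        "fields": "id,name,email",      # whatever the token's scopes allow
        "access_token": STOLEN_TOKEN,
    },
    timeout=10,
)
print(resp.status_code, resp.json())
```

Revoking the tokens, as Facebook did, is the standard remedy: the stolen strings keep their shape but stop resolving to anything.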

Regardless, the breach could do further damage to Facebook’s reputation as the company continues to attempt to regain public trust after a recent string of security and privacy issues. In addition to revelations about the misuse of Facebook user data by Cambridge Analytica during the run-up to the 2016 US presidential election, there have been questions about how Facebook itself uses customer data, including the discovery that Facebook had been routinely collecting full call logs and other data from some mobile users.

And as if there were not 100 other reasons to ditch Facebook, how about this?

Earlier this week, Facebook acknowledged that it provided phone numbers used for two-factor authentication to advertisers for the purpose of targeting users with advertisements. And Facebook’s Onavo virtual private network application was yanked from Apple’s App Store in August because it was being used by Facebook to collect data about users’ mobile application usage.

Facebook pulls ‘snoopy’ Onavo VPN from Apple’s App Store after falling foul of rules

Just say no to Facebook. They will never change; they can’t, because your private data is their product.

Quote

Facebook has pulled its data-snaffling Onavo VPN from Apple’s App Store after the iGiant said the tech violated recently tightened rules.

Onavo is a free VPN app that pipes user traffic through Facebook systems under the pretext of protecting surfers from malware-tainted websites and other threats. The app, which the social network acquired in 2013, sends users’ data back to Facebook, even when the app is turned off.

Security advocates have blasted Onavo for being a privacy threat, as previously reported. Onavo Protect was separately criticised for allegedly harvesting users’ psychological profiles.

Facebook has been accused of using the data gathered through the app to track rivals and provide pointers on new product development. Data from Onavo lit the way for its 2014 purchase of WhatsApp as well as the social network’s excursion into live video in 2016.

Apple updated its App Store guidelines in June to ban “[collecting] information about which other apps are installed on a user’s device for the purposes of analytics or advertising/marketing”. Apple also informed Facebook that Onavo violated developer rules that prevent apps from using data beyond what’s needed to deliver the service on offer, The Wall Street Journal reported.

Break up Facebook

Since the users of Facebook will never take action to fix their addiction, perhaps it is time for regulators to step in. Facebook’s history of egregious breaches of public trust and privacy leaks demands action.

Quote

When the government broke up the telephone system in 1984, the fact that AT&T could count most citizens as customers and that it was arguably the best-run telephone company in the world was not deemed compelling enough to preserve its monopoly power. The breakup would unleash a wave of competition and innovation that ultimately benefited consumers and the economy.

Facebook seems to be in a similar position today — only with far greater global reach than Ma Bell could have imagined. Facebook’s two billion monthly active users, and the way those accounts are linked and viewed by users and by third parties, have made it the most powerful communications and media company in the world, even if its chief executive, Mark Zuckerberg, insists his is a technology business.

And that power is being abused. As The New York Times reported Tuesday, Facebook shared data with at least four Chinese electronics firms, including one flagged by American officials as a national security threat. We learned earlier this week, thanks to a Times investigation, that it allowed phone and other device makers, including Amazon, Apple, Samsung and Microsoft, to see vast amounts of your personal information without your knowledge. That behavior appears to violate a consent order that Facebook agreed to with the Federal Trade Commission in 2011, after Facebook was found to have made repeated changes to its privacy settings that allowed the company to transfer user data without bothering to inform the users. And it follows the even darker revelation that Facebook allowed a trove of information, including users’ education levels, likes, locations, and religious and political affiliations, to be exploited by the data mining firm Cambridge Analytica to manipulate potential voters for its Republican Party clients.

Throughout its history, Facebook has adamantly argued that it treats our data, and who has access to it, as a sort of sacred trust, with Zuckerberg & Company being the trustees. Yet at the same time, Facebook has continued to undermine privacy by making it cumbersome to opt out of sharing, trying to convince users that we actually do want to share all of our personal information (and some people actually do) and by leaving the door unlocked for its partners and clients to come in and help themselves. Those partners have included 60 device makers that used application programming interfaces, also known as A.P.I.s, so Facebook could run on their gadgets.

In Facebook’s view those partners functioned as extensions of the Facebook app itself and offered similar privacy protections. And the company said that most of this intrusive behavior happened a decade ago, when mobile apps barely existed and Facebook had to program its way onto those devices. “We controlled them tightly from the get-go,” said Facebook’s Ime Archibong, vice president for product partnerships, in a response to The Times’s article. Yet a Times reporter was able to retrieve information on 295,000 Facebook users using a five-year-old BlackBerry.

Facebook Gave Data Access to Chinese Firm Flagged by U.S. Intelligence

Surprise, surprise, surprise! Just say no to Facebook!

Quote

Facebook has data-sharing partnerships with at least four Chinese electronics companies, including a manufacturing giant that has a close relationship with China’s government, the social media company said on Tuesday.

The agreements, which date to at least 2010, gave private access to some user data to Huawei, a telecommunications equipment company that has been flagged by American intelligence officials as a national security threat, as well as to Lenovo, Oppo and TCL.

The four partnerships remain in effect, but Facebook officials said in an interview that the company would wind down the Huawei deal by the end of the week.

Facebook gave access to the Chinese device makers along with other manufacturers — including Amazon, Apple, BlackBerry and Samsung — whose agreements were disclosed by The New York Times on Sunday.

The deals were part of an effort to push more mobile users onto the social network starting in 2007, before stand-alone Facebook apps worked well on phones. The agreements allowed device makers to offer some Facebook features, such as address books, “like” buttons and status updates.

Facebook Gave Device Makers Deep Access to Data on Users and Friends

Dear Facebook users: you are the product, and you are also morons. Freedom and privacy are rights that need to be defended, not given away for convenience.

Quote

As Facebook sought to become the world’s dominant social media service, it struck agreements allowing phone and other device makers access to vast amounts of its users’ personal information.

Facebook has reached data-sharing partnerships with at least 60 device makers — including Apple, Amazon, BlackBerry, Microsoft and Samsung — over the last decade, starting before Facebook apps were widely available on smartphones, company officials said. The deals allowed Facebook to expand its reach and let device makers offer customers popular features of the social network, such as messaging, “like” buttons and address books.

But the partnerships, whose scope has not previously been reported, raise concerns about the company’s privacy protections and compliance with a 2011 consent decree with the Federal Trade Commission. Facebook allowed the device companies access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. Some device makers could retrieve personal information even from users’ friends who believed they had barred any sharing, The New York Times found.

Facebook severs ties with data brokers

Quote

The Social Network™ all-but-admits its previous legalese for developers was useless

Facebook has outlined a set of changes to its platform that impact developers and data brokers.

Facebook has a program called “Partner Categories” that it tells advertisers will let them “further refine your targeting based on information compiled by … partners, such as offline demographic and behavioural information like homeownership or purchase history.”

The partners Facebook uses are Acxiom, CCC Marketing, Epsilon, Experian, Oracle Data Cloud and Quantium.

Graham Mudd, a Facebook product marketing director, said that using such providers to refine ad targeting “is common industry practice” but that Facebook feels “this step, winding down over the next six months, will help improve people’s privacy on Facebook.”

On its own platform, Facebook has promised new fine print for business-to-business applications, complete with “rigorous policies and terms”. Which kind of admits some of Facebook’s past fine print was floppy. Perhaps floppy enough to let data flow to Cambridge Analytica and beyond?

Also notable is a change that means apps that provide access to lists of a user’s friends will now be reviewed by Facebook.

So there you have it. No real change. They can’t change. Facebook needs to sell data like Starbucks needs to sell coffee. It is their business and you are their product. They will continue to mine and map your information with their third-party partners to create highly targeted ads.

Want it to stop? Deleting your Facebook account now would be a good start.

Facebook-Inspired Killings

This is an article from Oct 2017. While I have excerpted it here, I think it is worth a complete read (see Quote). It is an excellent article that I feel shows the complexity and the human cost of Facebook.

Quote

… But while the focus on Russia is understandable, Facebook has been much less vocal about the abuse of its services in other parts of the world, where the stakes can be much higher than an election.
…the ethnic cleansing of Rohingya Muslims, an ethnic minority in Myanmar that has been subjected to brutal violence and mass displacement. Violence against the Rohingya has been fueled, in part, by misinformation and anti-Rohingya propaganda spread on Facebook, which is used as a primary news source by many people in the country. Doctored photos and unfounded rumors have gone viral on Facebook, including many shared by official government and military accounts….

In Myanmar, the rise in anti-Rohingya sentiment coincided with a huge boom in social media use that was partly attributable to Facebook itself. In 2016, the company partnered with MPT, the state-run telecom company, to give subscribers access to its Free Basics program. Free Basics includes a limited suite of internet services, including Facebook, that can be used without counting toward a cellphone data plan. As a result, the number of Facebook users in Myanmar has skyrocketed to more than 30 million today from 2 million in 2014.

In India, where internet use has also surged in recent years, WhatsApp, the popular Facebook-owned messaging app, has been inundated with rumors, hoaxes and false stories. In May, the Jharkhand region in Eastern India was destabilized by a viral WhatsApp message that falsely claimed that gangs in the area were abducting children. The message incited widespread panic and led to a rash of retaliatory lynchings, in which at least seven people were beaten to death. A local filmmaker, Vinay Purty, told the Hindustan Times that many of the local villagers simply believed the abduction myth was real, since it came from WhatsApp….

The company has made many attempts to educate users about the dangers of misinformation. In India and Malaysia, it has taken out newspaper ads with tips for spotting false news. In Myanmar, it has partnered with local organizations to distribute printed copies of its community standards, as well as created educational materials to teach citizens about proper online behavior.

But these efforts, as well-intentioned as they may be, have not stopped the violence, and Facebook does not appear to have made them a top priority. The company has no office in Myanmar, and neither Mr. Zuckerberg nor Ms. Sandberg has made any public statements about the Rohingya crisis.

Facebook has argued that the benefits of providing internet access to international users will ultimately outweigh the costs. Adam Mosseri, a Facebook vice president who oversees the News Feed, told a journalism gathering this month, “In the end, I don’t think we as a human race will regret the internet.” Mr. Zuckerberg echoed that sentiment in a 2013 manifesto titled “Is Connectivity a Human Right?,” in which he said that bringing the world’s population online would be “one of the most important things we all do in our lifetimes.”

That optimism may be cold comfort to people in places like South Sudan. Despite being one of the poorest and least-wired countries in the world, with only around 20 percent of its citizens connected to the internet, the African nation has become a hotbed of social media misinformation. As BuzzFeed News has reported, political operatives inside and outside the country have used Facebook posts to spread rumors and incite anger between rival factions, fostering violence that threatens to escalate into a civil war. A United Nations report last year determined that in South Sudan, “social media has been used by partisans on all sides, including some senior government officials, to exaggerate incidents, spread falsehoods and veiled threats, or post outright messages of incitement.”

Peter Thiel Employee Helped Cambridge Analytica Before It Harvested Data

I think this story shows that Facebook’s data mining is just the tip of the iceberg. It will drag in Google and others.

Quote

As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.

It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook.

Google Link?

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.

In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.

“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”

Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.
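To see how a single quiz could reach tens of millions of people, here is a hedged sketch of the friend-data behaviour the article describes, as Graph API v1.0 allowed before it was retired in 2015. The endpoint shapes follow the old public documentation; the token, the helper function and the assumption that friend permissions were granted are illustrative, not a reconstruction of Dr. Kogan’s actual app.

```python
import requests

# Hedged sketch of the pre-2015 pattern: one consenting quiz taker, plus a
# token scoped with friend-data permissions, exposed data about every
# friend in that person's network, none of whom ever saw the quiz.
GRAPH = "https://graph.facebook.com/v1.0"   # retired API version
USER_TOKEN = "hypothetical-token-from-the-quiz-app"

def get(path, **params):
    # Thin helper around the old Graph API; parameters are illustrative.
    params["access_token"] = USER_TOKEN
    return requests.get(f"{GRAPH}/{path}", params=params, timeout=10).json()

quiz_taker = get("me", fields="id,name")
print("Quiz taker:", quiz_taker.get("name"))

friends = get("me/friends")                  # one person consents...
for friend in friends.get("data", []):
    # ...and, under v1.0 friend permissions, their friends' likes were reachable.
    likes = get(f"{friend['id']}/likes")
    print(friend.get("name"), "likes:", len(likes.get("data", [])))
```

Graph API v2.0 removed these friend permissions, and v1.0 was shut down in 2015, which is why this particular harvesting route no longer exists.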

One can only hope this will broaden the understanding of what “you are the product” means for the free services peddled by big tech. Then again…

See What Google Has on You

Want to see what Google has on you? Well, My Activity will do that. I love the innocent picture. Oh how sweet: Google working to make a better experience for you. What bollocks. At every step of trying to delete your data, you get pop-ups warning you how bad an idea it is (along with more innocent pictures).

Here is the real picture (lower right) that should be posted.

To be fair, if you ignore all the pretty, happy “do no harm” nonsense warnings, you can turn a lot of stuff off. That said, can you trust them? I can’t.