

AT&T, Sprint, Verizon, T-Mobile US pledge, again, to not sell your location to shady geezers. Sorry, we don’t believe them

…and neither should you!


US cellphone networks have promised – again – that they will stop selling records of their subscribers’ whereabouts to anyone willing to cough up cash.

In a statement on Thursday, AT&T said: “In light of recent reports about the misuse of location services, we have decided to eliminate all location aggregation services – even those with clear consumer benefits,” adding: “We are immediately eliminating the remaining services and will be done in March.”

That same March deadline was referenced by T-Mobile US’s CEO John Legere who had promised last June to end the sale of subscribers’ private location data. Legere tweeted this week: “T-Mobile is completely ending location aggregator work. We’re doing it the right way to avoid impacting consumers who use these types of services for things like emergency assistance. It will end in March, as planned and promised.”

While there is money to be made and no law preventing it, it is a virtual certainty that AT&T and others will figure out a way to profit from selling their customers’ private data. Last time around, FCC boss Ajit Pai refused to investigate the matter, and while the partial US government shutdown means he has yet to respond to renewed calls for an investigation, it is all but certain that he will continue his pro-telco agenda and stay away from the issue.

Meanwhile, pressure grows in Congress to introduce a privacy law – an American version of Europe’s GDPR – especially in the light of abuses by Facebook and others. But that process is very far from certain given that many of the companies that benefit most from selling user data are also some of the most powerful and generous lobbyists in Washington DC.

Depression in girls linked to higher use of social media

Is anyone surprised?

Research suggests link between social media use and depressive symptoms was stronger for girls compared with boys

Girls’ much-higher rate of depression than boys is closely linked to the greater time they spend on social media, and online bullying and poor sleep are the main culprits for their low mood, new research reveals.

It found that many girls spend far more time using social media than boys, and also that they are much more likely to display signs of depression linked to their interaction on platforms such as Instagram, WhatsApp and Facebook.

As many as three-quarters of 14-year-old girls who suffer from depression also have low self-esteem, are unhappy with how they look and sleep for seven hours or less each night, the study found.

“Girls, it seems, are struggling with these aspects of their lives more than boys, in some cases considerably so,” said Prof Yvonne Kelly, from University College London, who led the team behind the findings.

The results prompted renewed concern about the rapidly accumulating evidence that many more girls and young women exhibit a range of mental health problems than boys and young men, and about the damage these can cause, including self-harm and suicidal thoughts.

The study is based on interviews with almost 11,000 14-year-olds who are taking part in the Millennium Cohort Study, a major research project into children’s lives.


Asleep at the Switch


Facebook Data Scandals Stoke Criticism That a Privacy Watchdog Too Rarely Bites

Last spring, soon after Facebook acknowledged that the data of tens of millions of its users had improperly been obtained by the political consulting firm Cambridge Analytica, a top enforcement official at the Federal Trade Commission drafted a memo about the prospect of disciplining the social network.

Lawmakers, consumer advocates and even former commission officials were clamoring for tough action against Facebook, arguing that it had violated an earlier F.T.C. consent decree barring it from misleading users about how their information was shared.

But the enforcement official, James A. Kohm, took a different view. In a previously undisclosed memo in March, Mr. Kohm — echoing Facebook’s own argument — cautioned that Facebook was not responsible for the consulting firm’s reported abuses. The social network seemed to have taken reasonable steps to address the problem, he wrote, according to someone who read the memo, and most likely had not broken its promises to the F.T.C.


The Cambridge Analytica data leak set off a reckoning for Facebook and a far-reaching debate about the tech industry, which has collected more information about more people than almost any other in history. At the same time, the F.T.C., which is investigating Facebook, is under growing attack for what critics say is a systemic failure to police Silicon Valley’s giants and their enormous appetite for personal data.

Almost alone among industrialized nations, the United States has no basic consumer privacy law. The F.T.C. serves as the country’s de facto privacy regulator, relying on more limited rules against deceptive trade practices to investigate Google, Twitter and other tech firms accused of misleading people about how their information is used.

But many in Washington view the agency as a watchdog that too rarely bites. In more than 40 interviews, former and current F.T.C. officials, lawmakers, Capitol Hill staff members, and consumer advocates said that as evidence of abuses has piled up against tech companies, the F.T.C. has been too cautious. Now, as the Trump administration and Congress debate whether to expand the agency and its authority over privacy violations, the Facebook inquiry looms as a referendum on the F.T.C.’s future.

“They have been asleep at the switch,” said Senator Richard Blumenthal, the Connecticut Democrat and ranking member of the subcommittee charged with overseeing the agency. “It’s a lack of will even more than paucity of resources.”

Long Overdue: It is time for the US to develop strong data privacy laws along the lines of the EU’s GDPR (General Data Protection Regulation). It is also time for US “netizens” to demand strong data privacy protection laws with extremely stiff penalties for noncompliance.

BOGUS SCIENCE: Facebook Takes On Tricky Public Health Role

Among the hundreds of other reasons, it is time to stop using Facebook.

A police officer on the late shift in an Ohio town recently received an unusual call from Facebook.

Earlier that day, a local woman wrote a Facebook post saying she was walking home and intended to kill herself when she got there, according to a police report on the case. Facebook called to warn the Police Department about the suicide threat.

The officer who took the call quickly located the woman, but she denied having suicidal thoughts, the police report said. Even so, the officer believed she might harm herself and told the woman that she must go to a hospital — either voluntarily or in police custody. He ultimately drove her to a hospital for a mental health work-up, an evaluation prompted by Facebook’s intervention. (The New York Times withheld some details of the case for privacy reasons.)

Facebook has computer algorithms that scan the posts, comments and videos of users in the United States and other countries for indications of immediate suicide risk. When a post is flagged, by the technology or a concerned user, it moves to human reviewers at the company, who are empowered to call local law enforcement.
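The pipeline described above – automated scanning, then escalation to human reviewers – can be sketched in miniature. Everything here is invented for illustration: Facebook has not disclosed its actual classifier, phrases, or thresholds, so this is only a crude stand-in showing the flag-then-review shape of such a system.

```python
# Hypothetical flag-and-escalate pipeline. The phrase weights and the
# review threshold below are made up for illustration only.
RISK_PHRASES = {"kill myself": 0.9, "end it all": 0.7, "goodbye forever": 0.6}
REVIEW_THRESHOLD = 0.5

def risk_score(post: str) -> float:
    """Crude stand-in for a trained classifier: highest matching phrase weight."""
    text = post.lower()
    return max((w for p, w in RISK_PHRASES.items() if p in text), default=0.0)

def triage(posts):
    """Return only the posts that would be escalated to human reviewers."""
    return [p for p in posts if risk_score(p) >= REVIEW_THRESHOLD]

queue = triage([
    "Having a great day at the beach!",
    "I am walking home and I want to kill myself",
])
# Only the second post crosses the threshold and reaches the review queue;
# a human reviewer would then decide whether to contact law enforcement.
```

The contested part is not this mechanical triage step but everything after it: who reviews, against what criteria, and with what accountability.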

“In the last year, we’ve helped first responders quickly reach around 3,500 people globally who needed help,” Mr. Zuckerberg wrote in a November post about the efforts.

But other mental health experts said Facebook’s calls to the police could also cause harm — such as unintentionally precipitating suicide, compelling nonsuicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.

And, they said, it is unclear whether the company’s approach is accurate, effective or safe. Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police. And it has not disclosed exactly how its reviewers decide whether to call emergency responders. Facebook, critics said, has assumed the authority of a public health agency while protecting its process as if it were a corporate secret.

Yes, you read that right: “Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police.” B.S. — how about formal clinical trials like the rest of the medical world? At a minimum, their algorithm should get FDA approval first.

“It’s hard to know what Facebook is actually picking up on, what they are actually acting on, and are they giving the appropriate response to the appropriate risk,” said Dr. John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston. “It’s black box medicine.”

“In this climate in which trust in Facebook is really eroding, it concerns me that Facebook is just saying, ‘Trust us here,’” said Mr. Marks, a fellow at Yale Law School and New York University School of Law.

Right – trust Facebook? Never. I submit the real reason the miscreant Zuckerberg is doing this is that a plausible link between increased social media use and depression and suicide is now well known. Just say no to Facebook.

2012 – Social Media and Suicide: A Public Health Perspective

2017 – The Risk Of Teen Depression And Suicide Is Linked To Smartphone Use

No one likes a lying a-hole like Zuckerberg and crew


Mark Zuckerberg did everything in his power to avoid Facebook becoming the next MySpace – but forgot one crucial detail…No one likes a lying asshole

Comment Let’s get one thing straight right off the bat: Facebook, its CEO Mark Zuckerberg, and its COO Sheryl Sandberg, and its public relations people, and its engineers have lied. They have lied repeatedly. They have lied exhaustively. They have lied so much they’ve lost track of their lies, and then lied about them.

For some reason, in an era where the defining characteristic of the President of the United States is that he lies with impunity, it feels as though everyone has started policing the use of the word “lie” with uncommon zeal. But it is not some holy relic, it is a word, and it has a definition.

Lie (verb)
1 : to make an untrue statement with intent to deceive
2 : to create a false or misleading impression

By any measure, Facebook as an organization has knowingly, willingly, purposefully, and repeatedly lied. And two reports this week demonstrate that the depth of its lying was even worse than we previously imagined.

Before we dig into the lies, though, it’s worth asking the question: why? Why has the corporation got itself into this position, and why does it have to be dragged kicking and screaming, time and again, to confront what it already knows to be true?

And the answer to that is at the very heart of Facebook, it goes to the core of Mark Zuckerberg’s personality, and it defines the company’s corporate culture: it is insecure. And it has good reason to be.

The truth is that Facebook is nothing special. It is a website. A very big and clever website but a website that is completely reliant on its users to post their own content. Those users don’t need Facebook and they could, in a matter of seconds, decide to tap on a different app and post their thoughts and updates there, instead. If enough people make that decision, the company collapses. All 340 billion dollars of it.

Mark Zuckerberg knows that all too well, and as internal emails handed over to the British Parliament and then published make clear, the top tier of Facebook was highly focused on that question of existential dread: how do we avoid becoming the next MySpace, Geocities, Google Plus, or Friendster?
Novelty item

With thousands of people working underneath them, the world’s largest companies knocking at their door with blank checks for advertising, and the globe’s political leaders inviting them to meetings, Facebook tasted greatness, but couldn’t shake a huge question underneath it all: how does Facebook survive once the novelty wears off?

And the answer was the smart one: make yourself a part of the digital ecosystem. Yes, Facebook was completely reliant on its users, but everyone else wanted those users, too, and while it had them, the corporation needed to make sure it became enmeshed in as many other systems as possible.

It became a savvy businessman making sure that all his money and resources aren’t in one market: diversify, Mark! And that became the driving force behind every subsequent strategic decision while the rest of the company focused on making Facebook a really good product – making it easy to do more, post more, interact more.

And so, we had music service Spotify granted access to Facebook users’ private messages, once users had linked their Spotify and Facebook accounts. Why on Earth would Spotify want to read people’s private messages?

Easy: it is a huge, tasty dataset. You could find out what bands people are excited about, and send them notices of new albums or gigs. You could see what they think of rival services, or the cost of your service. People were encouraged to message their pals on Facebook through Spotify, letting them know what they were listening to. All in all, it was access to private thoughts: companies spend small fortunes paying specialist survey companies for these sorts of insights.

Likewise Netflix. It had access to the same data under a special program that Facebook ran with other monster internet companies and banks in which they were granted extraordinary access to millions of people’s personal data.

Facebook cut data deals with all sorts of companies based on this premise: give them what they want, and in return they would be hauled onto Zuckerberg’s internet reservation.

For example, Yahoo! got real-time feeds of posts by users’ friends – reminding us of Cambridge Analytica gathering information on millions of voters via a quiz app, and using it to target them in contentious political campaigns in the US and Europe.

Microsoft’s Bing was able to access the names of nearly all Facebook users’ friends without permission, and Amazon was able to get at friends’ names and contact details. Russian search engine Yandex had Facebook account IDs at its fingertips, though it claims it didn’t even know this information was available. Facebook at first told the New York Times Yandex wasn’t a partner, and then told US Congress it was.

Crossing the line

Plugging large companies into users’ profiles, and their friends’ profiles, became a running theme, and for the antisocial network, it all worked: the data flowed.

But then things took a darker turn. The users and privacy groups started asking questions. Facebook’s entire strategy started looking shaky as people decided they should have control over what is done with their private data. In Europe, a long debate led to solid legislation: everyone in the EU would soon have a legal right to control their information and, much worse, organizations that didn’t respect that could face massive fines.

Facebook started cutting shadier and shadier deals to protect its bottom line. Its policy people started developing language that carefully skirted around reality; and its lawyers began working on semantic workarounds so that the Silicon Valley titan could make what looked like firm and unequivocal statements on privacy and data control, but in fact allowed things to continue on exactly as they had. What was being shared was not always completely clear.

The line was crossed when Facebook got in bed with smartphone manufacturers: it secretly gave the device makers access to each phone user’s Facebook friends’ profiles, when the handheld was linked to its owner’s account, bypassing protections.

And you know how you can turn off “location history” in the Facebook app, and you can go into your iPhone’s settings and select “never” for the Facebook app when it comes to knowing your location? And you can refuse to use Facebook’s built-in workaround where you “check in” to places – at which point it will re-grant itself access to your location with a single tap?

Well, you can do all that, and still Facebook will know where you are and sell that information to others.

To which the natural question is: how? Well, we have what we believe to be the technical answer. But the real answer is: because it lies. Because that information is valuable to it. Because that information forms the basis of mutually reinforcing data-sharing agreements with all the companies that could one day kill Facebook by simply shrugging their shoulders.

That is how Sandberg and Zuckerberg are able to rationalize their lies: because they believe the future of the entire company is dependent on maintaining the careful fiction that users have control over their data when they don’t.
Meet Stan

Here’s a personal example of how these lies have played out. Until recently, your humble Reg vulture lived next door to a man called Stan. Stan had spent his whole life in Oakland, California. He was a proud black man in his 70s who lived alone. This reporter moved next door to him having spent his entire life up until that moment not in Oakland; a white man in his 30s. To say we had no social connections in common would be an understatement. The only crossover in friends, family, culture, and hangouts were the occasional conversations we had in the street with our neighbors.

He had good taste in music. And I know that in the same way I knew he had an expensive and powerful stereo system. But we didn’t even go to the same gigs because most of the music he played was from artists long since dead.

Despite all this, Facebook would persistently suggest that I knew Stan and should add him as a friend on Facebook. The same happened to my wife. I took this as a sign I needed to tighten up my privacy settings but even after making changes cutting Facebook off from my daily habits, it still recommended him as a friend. The only thing that finally stopped it? Deleting the Facebook app from my phone.

Sensing a story, and in my capacity as a tech reporter, I started asking Facebook questions about this extraordinary ability to know who I lived next to when it didn’t have access to my location. And the company responded, repeatedly, that it doesn’t. You have control over your data. You can choose what Facebook can see and do with that data. Facebook does not gather or sell data unless its users agree to it.

Except, of course, the opposite was true. It was a lie. And Facebook knew it. It had in fact gone to some lengths to make sure it knew where all its users were.

Precisely how it manages to say one thing and do the opposite is not yet clear but we are willing to bet it is a combination of two factors: one, its app stores and sends several data points that can be used to figure out location: your broadband IP address and/or Wi-Fi and Bluetooth network identifiers. It may be possible to figure out someone’s location from these data points: for example, your cable broadband IP address can often be narrowed down to a relatively precise location, such as a street or neighborhood, especially if you have a fixed IP address at home.

At this point, using Stan’s location from his IP address or from his phone app, Facebook could work out we live next to each other, or at least are near each other a lot, and thus might be friends.
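The co-location inference described above is not hard to sketch. This is a hypothetical toy, not Facebook’s actual method: it assumes a lookup table mapping broadband IP prefixes to neighborhoods (real systems use commercial geolocation databases), then treats two users whose home IPs resolve to the same neighborhood as candidate acquaintances. All addresses and places below are invented.

```python
# Illustrative IP-prefix-to-neighborhood table; real geolocation data would
# come from a commercial database, not a hand-written dict.
IP_GEO = {
    "76.102.14": ("Oakland", "Temescal"),
    "76.102.15": ("Oakland", "Rockridge"),
}

def neighborhood(ip: str):
    prefix = ip.rsplit(".", 1)[0]  # drop the host octet, keep the /24 prefix
    return IP_GEO.get(prefix)

def maybe_neighbors(ip_a: str, ip_b: str) -> bool:
    """True if both IPs resolve to the same known neighborhood."""
    a, b = neighborhood(ip_a), neighborhood(ip_b)
    return a is not None and a == b

# Two cable-modem customers on adjacent addresses in the same block:
print(maybe_neighbors("76.102.14.23", "76.102.14.87"))  # same prefix -> True
```

A signal this coarse, combined with overlapping Wi-Fi or Bluetooth identifiers, is plausibly enough to drive a “people you may know” suggestion without ever reading GPS.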
Control is an illusion

With the news that Facebook signed dozens of data sharing agreements with large tech companies, it seems increasingly likely that Facebook was in fact not gathering my location data directly to figure out where I was, but was pulling in data from others, perhaps mixing in my home broadband IP address’s geolocation, and correlating it all to work out relationships and whereabouts.

We don’t yet know what precise methods Facebook uses to undercut its promises, but one thing is true – the company has made untrue statements, with an intent to deceive, to this reporter and to many other reporters, users, lawmakers, federal agencies, and academics. And it has created false or misleading impressions. It has lied. And it has done so deliberately. Over and over again.

And it is still lying today. Faced with evidence of its data-sharing agreements where – let’s not forget this – Facebook provided third parties access to people’s personal messages, and more importantly to their friends’ feeds, the company claims it broke no promises because it defined the outfits it signed agreements with as “service providers.” And so, according to Facebook, it didn’t break a pact it has with the US government’s trade watchdog, the FTC, not to share private data without permission, and likewise not to break agreements it has with its users.

Facebook also argues it had clearly communicated that it was granting apps access to people’s private messages, and that users had to link their Spotify, Netflix, Royal Bank of Canada, et al, accounts with their Facebook accounts to activate it. And while Facebook’s tie-ups with, say, Spotify and Netflix were well publicized, given this week’s outcry, arguably not every user was aware or made aware of what they were getting into. In any case, the “experimental” access to folks’ private conversations was discontinued nearly three years ago.

The social network claims it only ever shared with companies what people had agreed to share or chosen to make public, sidestepping a key issue: that people potentially had their profiles viewed, slurped, harvested, and exploited by their friends’ connected apps and websites.

As for the question of potential abuse of personal data handed to third parties, Facebook amazingly used the same line that it rolled out when it attempted to deflect the Cambridge Analytica scandal: that third parties were beholden to Facebook’s rules about using data. But, of course, Facebook doesn’t check or audit whether that is the case.
Sorry, again

And what is its self-reflective apology this time for granting such broad access to personal data to so many companies? It says that it is guilty of not keeping on top of old agreements, and the channels of private data to third parties stayed open much longer than they should have done after it had made privacy-enhancing changes.

We can’t prove it yet, and may never be able to unless more internal emails find their way out, but let’s be honest, we all know that this is another lie. Facebook didn’t touch those agreements because it didn’t want anyone to look at them. It chose to be willfully ignorant of the details of its most significant agreements with some of the world’s largest companies.

And it did so because it still believes it can ride this out, and that those agreements are going to be what keeps Facebook going as a corporation.

What Zuckerberg didn’t factor into his strategic masterstroke, however, was one critical detail: no one likes a liar. And when you lie repeatedly, to people’s faces, you go from liar to lying asshole. And lying asshole is enough to make people delete your app.

And when that app is deleted, the whole sorry house of cards will come tumbling down. And Facebook will become Friendster.

Call to Boycott All Businesses With Facebook Links

Well, at the moment, it seems, one would need to stop all commercial activities. But it needs to start somewhere. Look, before the Facebook scam, one could go to a website and not be inundated with Facebook analytics, prompts to use your Facebook login, links to “like us” and all the other gimmicks to get users to surrender their private information.

Perhaps it is time to start boycotting all businesses, charities, organizations, government entities, schools, etc. that insist on wiring their sites up to enable the ilk that is Facebook. Speed kills. Facebook kills. Here are a few links about how Facebook has blood on its hands:

The list goes on….

Act now! Delete your Facebook Account and boycott those enterprises that continue to support Facebook.

Comments welcome.

How Facebook let Big Tech peers inside its privacy wall

Facebook let some of the world’s largest technology companies have more intrusive access to users’ personal data than it had previously disclosed. That’s according to an investigation by Gabriel J.X. Dance, Michael LaForgia and Nicholas Confessore of the NYT, based on 270 pages of Facebook’s internal documents and interviews with more than 60 people.

The breadth of the data-sharing was vast. “Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends.”

Users often didn’t know. “Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.”

In fact, even Facebook had trouble keeping track. “By 2013, Facebook had entered into more such partnerships than its midlevel employees could easily track, according to interviews with two former employees,” explains the report. “So they built a tool that did the technical work of turning special access on and off.” It doesn’t seem to have solved the problem; as of last year, for instance, Yahoo “could view real-time feeds of friends’ posts for a feature that the company had ended in 2011.”

How did this happen? “Under the terms of a 2011 consent agreement with the Federal Trade Commission, Facebook was required to strengthen privacy safeguards and disclose data practices more thoroughly. The company hired an independent firm, PricewaterhouseCoopers, to formally assess its privacy procedures and report back to the F.T.C. every two years.” But “four former officials and employees of the F.T.C., briefed on The Times’s findings, said the data-sharing deals likely violated the consent agreement.”

Why it matters: Since the Cambridge Analytica scandal, Facebook has insisted that it does not sell data. But the NYT’s reporting suggests that it’s been eager to barter for arrangements that could speed its growth.

DC sues Facebook over Cambridge Analytica scandal

It is about time. Kudos, AG Karl Racine! Come on, state AGs, get off your duffs and join in. Note to Brussels – turn up the heat!


“Facebook failed to protect the privacy of its users,” AG Karl Racine said.

The attorney general of the District of Columbia has sued (PDF) Facebook, alleging violations of local consumer protection laws.

In a statement sent to reporters on Wednesday, AG Karl A. Racine said that the social media giant did not adequately protect users’ data, “enabling abuses like one that exposed nearly half of all District residents’ data to manipulation for political purposes during the 2016 election.”

“It allowed Cambridge Analytica to purchase personal information that was improperly obtained from 70 million [individuals], including 340,000 District of Columbia residents,” Racine said on a Wednesday call with reporters. “That’s nearly half of the people that live in the District of Columbia.”

Ben Wiseman, the director at the Office of Consumer Protection at the DC AG’s office, said that the lawsuit is seeking restitution and damages, including “civil penalties up to $5,000 per violation.”

340,000 users times $5,000 each would total $1.7 billion—but the case is likely to settle for far less than that.
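The exposure figure quoted above is straightforward to verify: affected DC residents multiplied by the statutory maximum per violation.

```python
# Checking the theoretical maximum exposure cited in the article.
residents = 340_000                 # DC residents whose data was obtained
max_penalty_per_violation = 5_000   # civil penalty ceiling per violation
exposure = residents * max_penalty_per_violation
print(f"${exposure:,}")  # $1,700,000,000 – i.e. $1.7 billion
```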

Racine added that other states have expressed interest in joining this lawsuit.

“We think that bringing suit is necessary in order to bring these issues to light,” he said.

In the lawsuit, Racine points out that just 852 Facebook users in DC used Aleksandr Kogan’s “thisisyourdigitallife” personality quiz, but, due to the permissive data sharing that was in place at the time, hundreds of thousands of people were affected.
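That amplification – a few hundred app installs exposing hundreds of thousands of profiles – falls straight out of the friend-graph permissions in place at the time: each installer exposed not just their own profile but all of their friends’. A minimal sketch, with an invented toy graph:

```python
# Toy friend graph: each app installer exposes their own profile plus
# every friend's profile, under the old friends-permission model.
friends = {
    "alice": ["bob", "carol", "dan"],
    "eve":   ["frank", "carol"],
}

def exposed(installers):
    """Profiles reachable from the installers via one friendship hop."""
    out = set(installers)
    for user in installers:
        out.update(friends.get(user, []))
    return out

# Two installers expose six distinct profiles in total.
print(sorted(exposed(["alice", "eve"])))
```

Scale the same one-hop fan-out from 2 installers to 852, with a few hundred friends each, and hundreds of thousands of exposed profiles is exactly what you would expect.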

“Furthermore, after discovering the improper sale of consumer data by Kogan to Cambridge Analytica, Facebook failed to take reasonable steps to protect its consumers’ privacy by ensuring that the data was accounted for and deleted,” the complaint states.

“Facebook further failed to timely inform the public (including DC residents) that tens of millions of its consumers had their data sold to Cambridge Analytica, even though Facebook knew, or should have known, that such data was acquired in violation of its policies and was being used in connection with political advertising.”

Monique Hall, a Facebook spokeswoman, declined to respond to Ars’ questions about the new lawsuit but provided a corporate statement.

“We’re reviewing the complaint and look forward to continuing our discussions with attorneys general in DC and elsewhere,” the statement read.

Facebook users cannot avoid location-based ads, investigation finds

Look: Western populations need to abandon Facebook NOW. It is the only way this disgusting company will die. If Western populations lead, then perhaps, just perhaps, others will follow. They have blood on their hands. It is time.

Oh, and in the latest news.


Facebook users cannot avoid location-based ads, investigation finds
No combination of settings can stop location data being used by advertisers, says report

Facebook targets users with location-based adverts even if they block the company from accessing GPS on their phones, turn off location history in the app, hide their work location on their profile and never use the company’s “check in” feature, according to an investigation published this week.

There is no combination of settings that users can enable to prevent their location data from being used by advertisers to target them, according to the privacy researcher Aleksandra Korolova. “Taken together,” Korolova says, “Facebook creates an illusion of control rather than giving actual control over location-related ad targeting, which can lead to real harm.”

Facebook users can control to an extent how much information they give the company about their location. At the most revealing end, users may be happy to enable “location services” for Facebook, allowing their iPhone to provide ultra-precise location data to the company, or they may “check in” to shops, restaurants and theatres, telling the social network where they are on a sporadic basis.

But while users can decide to give more information to Facebook, Korolova revealed they cannot decide to stop the social network knowing where they are altogether nor can they stop it selling the ability to advertise based on that knowledge.

Despite going to as much trouble as possible to minimise the location data received by the social network, the researcher wrote, “Facebook showed me ads targeted at ‘people who live near Santa Monica’ (which is where I live) or ‘people who live or were recently near Los Angeles’ (which is where I work). Moreover, I have noticed that whenever I travel for work or pleasure, Facebook continues to keep track of my location and use it for advertising: a trip to Glacier national park resulted in an ad for activities in Whitefish, Montana; a trip to Cambridge, MA, in an ad for a business there; and a visit to Herzliya, Israel, in an ad for a business there.

“Some of the explanations by Facebook for why I am seeing a particular ad even mention specifically that I am seeing the ad because I was ‘recently near their business’.”

The experience was mirrored by the Guardian reporter Julia Carrie Wong, who discovered in April that the site “knows that I took reporting trips to Montana and Seattle and San Diego, despite the fact that I have never allowed it to track me by GPS”.

Facebook tells advertisers that it learns user locations from the IP address, wifi and Bluetooth data, Korolova says.
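This is why disabling GPS and location services doesn't blind the platform: the server sees your IP address on every request, and coarse city-level location can be derived from it alone. A minimal sketch of how prefix-based IP geolocation works in principle (the prefix table, cities and function names here are invented for illustration, not Facebook's actual data or method):

```python
# Illustrative sketch only: coarse IP-based geolocation of the kind ad
# platforms can perform even with GPS/location services switched off.
# The prefixes and cities below are hypothetical examples.
import ipaddress

# Hypothetical mapping of network prefixes to coarse locations, similar in
# spirit to the geolocation databases licensed commercially.
PREFIX_TO_CITY = {
    ipaddress.ip_network("203.0.113.0/24"): "Santa Monica, CA",
    ipaddress.ip_network("198.51.100.0/24"): "Los Angeles, CA",
}

def coarse_location(ip: str) -> str:
    """Return an approximate city for an IP address, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for net, city in PREFIX_TO_CITY.items():
        if addr in net:
            return city
    return "unknown"

print(coarse_location("203.0.113.7"))  # prints "Santa Monica, CA"
```

The point of the sketch: no on-device setting is consulted anywhere in that lookup, which is why there is nothing in the app's privacy menu that can switch it off.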

In its pitch to advertisers, Facebook says: “Local awareness ads were built with privacy in mind […] People have control over the recent location information they share with Facebook and will only see ads based on their recent location if location services are enabled on their phone.” Korolova says her findings show that “this claim is false”.

The academic argues that Facebook needs to offer the ability to opt out of location use entirely, “or, at the very least, an ability to meaningfully specify the granularity of its use and exclude particular areas from being used”.

In 2015, according to leaked emails published by the UK parliament, the team behind a particular version of location-based advertising, which used Bluetooth “beacons” to track users’ shopping habits without resorting to uploading GPS data, was particularly concerned about appearing “scary”.

“We’re still in a precarious position of scaling without freaking people out,” wrote a Facebook product manager in charge of the location-tracking technology. “If a negative meme were to develop around Facebook Bluetooth beacons, businesses could become reticent to accept them from us.”

Facebook said in a statement: “Facebook does not use wifi data to determine your location for ads if you have location services turned off. We do use IP and other information such as check-ins and current city from your profile. We explain this to people, including in our Privacy Basics site and on the About Facebook Ads site.”

PLEASE STOP USING FACEBOOK! If anything, it will make the roads safer. It will also end the senseless killings spread by this rumor-mongering garbage company.

Facebook apologizes for bug leaking private photos

Hah hah hah – Facebook apologizes…deja vu (weekly)

Data gathering biz still having trouble keeping data secure

Why – because they do not want to. It is their business stupid!

Facebook on Friday apologized for a bug that may have exposed private photos to third-party apps for the 12-day period from September 13 to September 25, 2018.

Yep, you read that right – September 2018! Who do you think you're fooling, Zuckerberg (oh… sorry, yes, of course, all the zuckers that use your site).

“We’re sorry this happened,” said Tomer Bar, Facebook engineering director, in a blog post intended for developers, noting that as many as 6.8 million users and 1,500 apps from 876 developers may be affected.

Bar explained that when a Facebook user grants permission for an app to access that individual’s photos on Facebook, the service should grant access only to photos shared on the user’s timeline.

Instead, the bug made photos shared elsewhere – in Marketplace or Facebook Stories – or uploaded but never posted, available to developers’ apps, specifically those that had been approved by Facebook to use the photos API and authorized by users.
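In effect, the permission check was there but the scoping filter was not: an authorized app received the user’s whole photo library rather than only the timeline subset. A minimal sketch of that distinction (all names and the photo/permission model here are hypothetical, not Facebook’s actual code):

```python
# Hypothetical sketch of the API-scoping flaw described above.
from dataclasses import dataclass

@dataclass
class Photo:
    url: str
    surface: str  # e.g. "timeline", "marketplace", "stories", "draft"

def photos_for_app(photos, permission_granted: bool):
    """Intended behaviour: permitted apps see timeline photos only."""
    if not permission_granted:
        return []
    # The crucial filter. The bug amounted to this line being absent,
    # so Marketplace, Stories and never-posted photos leaked through too.
    return [p for p in photos if p.surface == "timeline"]

library = [
    Photo("a.jpg", "timeline"),
    Photo("b.jpg", "marketplace"),
    Photo("c.jpg", "draft"),  # uploaded but never posted
]
print([p.url for p in photos_for_app(library, True)])  # prints "['a.jpg']"
```

The asymmetry is what made the bug nasty: from the app developer’s side the API call looked identical before, during and after the 12-day window – only the breadth of what came back changed.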

Facebook intends to notify affected individuals, so they can check their photo apps for images that shouldn’t be there. And next week, the company says it will provide developers with a tool to determine which users of their apps may have been affected and to assist with the deletion of images that shouldn’t be there.

It was only a few days after the period of vulnerability, on September 28, that Facebook said a different bug had exposed as many as 90 million Facebook profiles to hackers, a figure it subsequently revised down to 30 million.

90 Million? Geez — no wonder miscreants have such an easy time influencing opinion.

In response to that incident, Guy Rosen, VP of product management, apologized.
This is getting to be a habit

The social data biz has apologized so often that its serial contrition came up when CEO Mark Zuckerberg testified before the House Energy and Commerce Committee in April.

Addressing Zuckerberg at the hearing, Rep. Jan Schakowsky (D-IL) said, “You have a long history of growth and success, but you also have a long list of apologies.” She then recited a partial litany of his mea culpas over the years:

“I apologize for any harm done as a result of my neglect.” – Harvard, 2003
“We really messed this one up.” – Facebook, 2006
“We simply did a bad job [with this release, and] I apologize for it.” – Facebook, 2007
“Sometimes we move too fast…” – Facebook, 2010
“I’m the first to admit we made a bunch of mistakes.” – Facebook, 2011
“[For those I hurt this year,] I ask forgiveness and I will try to be better.” – Facebook, 2017

Schakowsky concluded from this that Facebook’s self-regulation doesn’t work.

No shit, Jan – and no need to stop with them. It is the entire industry that has grown up mining, sharing and selling personal data that must be dismantled.

Legislative regulation may not be working either. Facebook in April, shortly after Zuckerberg’s Congressional testimony, made much of its effort to comply with Europe’s GDPR privacy regime.

“As soon as GDPR was finalized, we realized it was an opportunity to invest even more heavily in privacy,” said Erin Egan, veep and chief privacy officer of policy, and Ashlie Beringer, veep and deputy general counsel in a blog post at the time. “We not only want to comply with the law, but also go beyond our obligations to build new and improved privacy experiences for everyone on Facebook.”

Nonetheless, in response to complaints, the Irish Data Protection Commission has begun an investigation of the company’s privacy practices.

“The Irish DPC has received a number of breach notifications from Facebook since the introduction of the GDPR on May 25, 2018,” a spokesperson for the watchdog said on Friday in an email to The Register. “With reference to these data breaches, including the breach in question, we have this week commenced a statutory inquiry examining Facebook’s compliance with the relevant provisions of the GDPR.”

Coming shortly after the British Parliament published a trove of Facebook emails about how the ad biz monetizes its user data, the investigation isn’t all that surprising.

The Register asked Facebook how users of the ad network should interpret the photo bug in light of CEO Mark Zuckerberg’s apology following the Cambridge Analytica scandal: “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.”

We’ve not heard back.