
Why I’m done with Chrome – Matthew Green

This article was published in Sep 2018. I had warned people about Chrome for years before that. It is not just the log-in policy, it is all the other data collection, the lack of any way to automatically dump cookies and private data on exit, and so forth. But all that said, it is still worth re-posting this, as I think Matt Green did a good job. And since no one listens to me (boo hoo), maybe they will listen to Matt. And listening is needed, as the likes of Google, Facebook and Amazon are just getting stronger and more intrusive. Fight back!

QUOTE

Why I’m done with Chrome

This blog is mainly reserved for cryptography, and I try to avoid filling it with random “someone is wrong on the Internet” posts. After all, that’s what Twitter is for! But from time to time something bothers me enough that I have to make an exception. Today I wanted to write specifically about Google Chrome, how much I’ve loved it in the past, and why — due to Chrome’s new user-unfriendly forced login policy — I won’t be using it going forward.
A brief history of Chrome

When Google launched Chrome ten years ago, it seemed like one of those rare cases where everyone wins. In 2008, the browser market was dominated by Microsoft, a company with an ugly history of using browser dominance to crush their competitors. Worse, Microsoft was making noises about getting into the search business. This posed an existential threat to Google’s internet properties.

In this setting, Chrome was a beautiful solution. Even if the browser never produced a scrap of revenue for Google, it served its purpose just by keeping the Internet open to Google’s other products. As a benefit, the Internet community would receive a terrific open source browser with the best development team money could buy. This might be kind of sad for Mozilla (who have paid a high price due to Chrome) but overall it would be a good thing for Internet standards.

For many years this is exactly how things played out. Sure, Google offered an optional “sign in” feature for Chrome, which presumably vacuumed up your browsing data and shipped it off to Google, but that was an option. An option you could easily ignore. If you didn’t take advantage of this option, Google’s privacy policy was clear: your data would stay on your computer where it belonged.
What changed?

A few weeks ago Google shipped an update to Chrome that fundamentally changes the sign-in experience. From now on, every time you log into a Google property (for example, Gmail), Chrome will automatically sign the browser into your Google account for you. It’ll do this without asking, or even explicitly notifying you. (However, and this is important: Google developers claim this will not actually start synchronizing your data to Google — yet. See further below.)

Your sole warning — in the event that you’re looking for it — is that your Google profile picture will appear in the upper right-hand corner of the browser window. I noticed mine the other day:

The change hasn’t gone entirely unnoticed: it received some vigorous discussion on sites like Hacker News. But the mainstream tech press seems to have ignored it completely. This is unfortunate — and I hope it changes — because this update has huge implications for Google and the future of Chrome.

In the rest of this post, I’m going to talk about why this matters. From my perspective, this comes down to basically four points:

Nobody on the Chrome development team can provide a clear rationale for why this change was necessary, and the explanations they’ve given don’t make any sense.
This change has enormous implications for user privacy and trust, and Google seems unable to grapple with this.
The change makes a hash out of Google’s own privacy policies for Chrome.
Google needs to stop treating customer trust like it’s a renewable resource, because they’re screwing up badly.

I warn you that this will get a bit ranty. Please read on anyway.
Google’s stated rationale makes no sense

The new feature that triggers this auto-login behavior is called “Identity consistency between browser and cookie jar” (HN). After conversations with two separate Chrome developers on Twitter (who will remain nameless — mostly because I don’t want them to hate me), I was given the following rationale for the change:

To paraphrase this explanation: if you’re in a situation where you’ve already signed into Chrome and your friend shares your computer, then you can wind up accidentally having your friend’s Google cookies get uploaded into your account. This seems bad, and sure, we want to avoid that.

But note something critical about this scenario. In order for this problem to apply to you, you already have to be signed into Chrome. There is absolutely nothing in this problem description that seems to affect users who chose not to sign into the browser in the first place.

So if signed-in users are your problem, why would you make a change that forces unsigned-in users to become signed in? I could waste a lot more ink wondering about the mismatch between the stated “problem” and the “fix”, but I won’t bother, because nobody on the public-facing side of the Chrome team has been able to offer an explanation that squares this circle.

And this matters, because “sync” or not…
The change has serious implications for privacy and trust

The Chrome team has offered a single defense of the change. They point out that just because your browser is “signed in” does not mean it’s uploading your data to Google’s servers. Specifically:

While Chrome will now log into your Google account without your consent (following a Gmail login), Chrome will not activate the “sync” feature that sends your data to Google. That requires an additional consent step. So in theory your data should remain local.

This is my paraphrase. But I think it’s fair to characterize the general stance of the Chrome developers I spoke with as: without this “sync” feature, there’s nothing wrong with the change they’ve made, and everything is just fine.

This is nuts, for several reasons.

User consent matters. For ten years I’ve been asked a single question by the Chrome browser: “Do you want to log in with your Google account?” And for ten years I’ve said no thanks. Chrome still asks me that question — it’s just that now it doesn’t honor my decision.

The Chrome developers want me to believe that this is fine, since (phew!) I’m still protected by one additional consent guardrail. The problem here is obvious:

If you didn’t respect my lack of consent on the biggest user-facing privacy option in Chrome (and didn’t even notify me that you had stopped respecting it!) why should I trust any other consent option you give me? What stops you from changing your mind on that option in a few months, when we’ve all stopped paying attention?

The fact of the matter is that I’d never even heard of Chrome’s “sync” option — for the simple reason that up until September 2018, I had never logged into Chrome. Now I’m forced to learn these new terms, and hope that the Chrome team keeps promises to keep all of my data local as the barriers between “signed in” and “not signed in” are gradually eroded away.

The Chrome sync UI is a dark pattern. Now that I’m forced to log into Chrome, I’m faced with a brand new menu I’ve never seen before. It looks like this:

Does that big blue button indicate that I’m already synchronizing my data to Google? That’s scary! Wait, maybe it’s an invitation to synchronize! If so, what happens to my data if I click it by accident? (I won’t give the answer away; you should go find out. Just make sure you don’t accidentally upload all your data in the process. It can happen quickly.)

In short, Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern. Whether intentional or not, it has the effect of making it easy for people to activate sync without knowing it, or to think they’re already syncing and thus there’s no additional cost to increasing Google’s access to their data.

Don’t take my word for it. It even gives (former) Google people the creeps.

Big brother doesn’t need to actually watch you. We tell things to our web browsers that we wouldn’t tell our best friends. We do this with some vague understanding that yes, the Internet spies on us. But we also believe that this spying is weak and probabilistic. It’s not like someone’s standing over our shoulder checking our driver’s license with each click.

What happens if you take that belief away? There are numerous studies indicating that even the perception of surveillance can significantly magnify the degree of self-censorship users force on themselves. Will users feel comfortable browsing for information on sensitive mental health conditions — if their real name and picture are always loaded into the corner of their browser? The Chrome development team says “yes”. I think they’re wrong.

For all we know, the new approach has privacy implications even if sync is off. The Chrome developers claim that with “sync” off, a signed-in Chrome has no privacy implications. This might be true. But when pressed on the actual details, nobody seems quite sure.

For example, if I have my browser logged out, then I log in and turn on “sync”, does all my past (logged-out) data get pushed to Google? What happens if I’m forced to be logged in, and then subsequently turn on “sync”? Nobody can quite tell me if the data uploaded in these conditions is the same. These differences could really matter.
The changes make a hash of the Chrome privacy policy

The Chrome privacy policy is a remarkably simple document. Unlike most privacy policies, it was clearly written as a promise to Chrome’s users — rather than as the usual lawyer CYA. Functionally, it describes two browsing modes: “Basic browser mode” and “signed-in mode”. These modes have very different properties. Read for yourself:

In “basic browser mode”, your data is stored locally. In “signed-in” mode, your data gets shipped to Google’s servers. This is easy to understand. If you want privacy, don’t sign in. But what happens if your browser decides to switch you from one mode to the other, all on its own?

Technically, the privacy policy is still accurate. If you’re in basic browsing mode, your data is still stored locally. The problem is that you no longer get to decide which mode you’re in. This makes a mockery out of whatever intentions the original drafters had. Maybe Google will update the document to reflect the new “sync” distinction that the Chrome developers have shared with me. We’ll see.

Update: After I tweeted about my concerns, I received a DM on Sunday from two different Chrome developers, each telling me the good news: Google is updating their privacy policy to reflect the new operation of Chrome. I think that’s, um, good news. But I also can’t help but note that updating a privacy policy on a weekend is an awful lot of trouble to go to for a change that… apparently doesn’t even solve a problem for signed-out users.
Trust is not a renewable resource

For a company that sustains itself by collecting massive amounts of user data, Google has managed to avoid the negative privacy connotations we associate with, say, Facebook. This isn’t because Google collects less data, it’s just that Google has consistently been more circumspect and responsible with it.

Where Facebook will routinely change privacy settings and apologize later, Google has upheld clear privacy policies that it doesn’t routinely change. Sure, when it collects, it collects gobs of data, but in the cases where Google explicitly makes user security and privacy promises — it tends to keep them. This seems to be changing.

Google’s reputation is hard-earned, and it can be easily lost. Changes like this burn a lot of trust with users. If the change is solving an absolutely critical problem for users, then maybe a loss of trust is worth it. I wish Google could convince me that was the case.
Conclusion

This post has gone on more than long enough, but before I finish I want to address two common counterarguments I’ve heard from people I generally respect in this area.

One argument is that Google already spies on you via cookies and its pervasive advertising network and partnerships, so what’s the big deal if they force your browser into a logged-in state? One individual I respect described the Chrome change as “making you wear two name tags instead of one”. I think this objection is silly both on moral grounds — just because you’re already violating my privacy doesn’t make it OK to add a massive new violation — and because it’s objectively wrong. Google has spent millions of dollars adding additional tracking features to both Chrome and Android. They aren’t doing this for fun; they’re doing this because it clearly produces data they want.

The other counterargument (if you want to call it that) goes like this: I’m a n00b for using Google products at all, and of course they were always going to do this. The extreme version holds that I ought to be using lynx+Tor and DJB’s custom search engine, and if I’m not I pretty much deserve what’s coming to me.

I reject this argument. I think it’s entirely possible for a company like Google to make good, usable open source software that doesn’t massively violate user privacy. For ten years I believe Google Chrome did just that.

Why they’ve decided to change, I don’t know. It makes me sad.

Goodbye, Chrome: Google’s web browser has become spy software

No shit, Sherlock… Been saying that about Chrome for years. Nobody listened to me. And you think Gmail is better? Guess again. The entire Google ecosystem is set up for one thing — get the maximum amount of private user data and sell it to advertisers. You are the product and there is no free lunch. But do not listen to me, someone with over 30 years in IT and cyber security; trust the Washington Post talking head instead (although I am happy he published this).

The article, however, misses several points:

1. Unlike Firefox, Chrome has no setting to automatically dump private data upon exit of the browser. Even in Firefox it must be enabled, or Firefox will approximate Chrome.
2. Firefox ships with telemetry turned on, SOME of which can only be turned off via advanced about:config actions – shame on Firefox. (A sketch of the settings for points 1 and 2 follows this list.)
3. The article is silent on the add-ons for both Chrome and Firefox that make them more private. Example – they say they are working on a canvas fingerprinting blocker, yet that has been available for years as an add-on.
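
For readers who want to act on points 1 and 2, here is a minimal user.js sketch (a file you drop into your Firefox profile folder; the same prefs can be flipped by hand in about:config). The pref names below are my own reading of Firefox builds from around this period, not something from the article — verify each one in about:config before relying on it.

    // user.js – minimal privacy sketch, assuming Firefox pref names circa 2019
    // Point 1: automatically dump private data when the browser exits
    user_pref("privacy.sanitize.sanitizeOnShutdown", true);
    user_pref("privacy.clearOnShutdown.cookies", true);
    user_pref("privacy.clearOnShutdown.history", true);
    user_pref("privacy.clearOnShutdown.cache", true);
    user_pref("privacy.clearOnShutdown.offlineApps", true);
    // Point 2: switch off telemetry and data reporting
    user_pref("toolkit.telemetry.enabled", false);
    user_pref("toolkit.telemetry.unified", false);
    user_pref("datareporting.healthreport.uploadEnabled", false);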

Got to run…

Quote

Our latest privacy experiment found Chrome ushered more than 11,000 tracker cookies into our browser — in a single week. Here’s why Firefox is better

You open your browser to look at the Web. Do you know who is looking back at you?

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web.

This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s log-in pages.

And that’s not the half of it.

Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)

Firefox isn’t perfect — it still defaults searches to Google and permits some other tracking. But it doesn’t share browsing data with Mozilla, which isn’t in the data-collection business.

At a minimum, Web snooping can be annoying. Cookies are how a pair of pants you look at in one site end up following you around in ads elsewhere. More fundamentally, your Web history — like the color of your underpants — ain’t nobody’s business but your own. Letting anyone collect that data leaves it ripe for abuse by bullies, spies and hackers.

Google’s product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they’re working on new ones for cookies. But they also said they have to get the right balance with a “healthy Web ecosystem” (read: ad business).

Firefox’s product managers told me they don’t see privacy as an “option” relegated to controls. They’ve launched a war on surveillance, starting this month with “enhanced tracking protection” that blocks nosy cookies by default on new Firefox installations. But to succeed, first Firefox has to persuade people to care enough to overcome the inertia of switching.

It’s a tale of two browsers — and the diverging interests of the companies that make them.

A decade ago, Chrome and Firefox were taking on Microsoft’s lumbering giant Internet Explorer. The upstart Chrome solved real problems for consumers, making the Web safer and faster. Today it dominates more than half the market.

Lately, however, many of us have realized that our privacy is also a major concern on the Web — and Chrome’s interests no longer always seem aligned with our own.

That’s most visible in the fight over cookies. These code snippets can do helpful things, like remembering the contents of your shopping cart. But now many cookies belong to data companies, which use them to tag your browser so they can follow your path like crumbs in the proverbial forest.
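
To make that concrete (a hypothetical example, not from the article): a tracker cookie arrives when a page embeds a request to an ad or analytics domain, whose response carries a header such as

    Set-Cookie: uid=abc123; Domain=.tracker.example; Expires=Wed, 01 Jan 2020 00:00:00 GMT; Secure

From then on, every other site that embeds the same domain lets the tracker read back uid=abc123 and extend its profile of you.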

They’re everywhere — one study found third-party tracking cookies on 92 percent of websites. The Washington Post website has about 40 tracker cookies, average for a news site, which the company said in a statement are used to deliver better-targeted ads and track ad performance.

You’ll also find them on sites without ads: Both Aetna and the FSA service said the cookies on their sites help measure their own external marketing campaigns.

The blame for this mess belongs to the entire advertising, publishing and tech industries. But what responsibility does a browser have in protecting us from code that isn’t doing much more than spying?

To see what cookies Firefox has blocked for a Web page, tap the shield icon, then “Blocking Tracker Cookies” to pull up a list. (Geoffrey Fowler/The Washington Post)

In 2015, Mozilla debuted a version of Firefox that included anti-tracking tech, turned on only in its “private” browsing mode. After years of testing and tweaking, that’s what it activated this month on all websites. This isn’t about blocking ads — those still come through. Rather, Firefox is parsing cookies to decide which ones to keep for critical site functions and which ones to block for spying.

Apple’s Safari browser, used on iPhones, also began applying “intelligent tracking protection” to cookies in 2017, using an algorithm to decide which ones were bad.

Chrome, so far, remains open to all cookies by default. Last month, Google announced a new effort to force third-party cookies to better self-identify, and said we can expect new controls for them after it rolls out. But it wouldn’t offer a timeline or say whether it would default to stopping trackers.

I’m not holding my breath. Google itself, through its Doubleclick and other ad businesses, is the No. 1 cookie maker — the Mrs. Fields of the Web. It’s hard to imagine Chrome ever cutting off Google’s moneymaker.

“Cookies play a role in user privacy, but a narrow focus on cookies obscures the broader privacy discussion because it’s just one way in which users can be tracked across sites,” said Ben Galbraith, Chrome’s director of product management. “This is a complex problem, and simple, blunt cookie blocking solutions force tracking into more opaque practices.”

There are other tracking techniques — and the privacy arms race will get harder. But saying things are too complicated is also a way of not doing anything.

“Our viewpoint is to deal with the biggest problem first, but anticipate where the ecosystem will shift and work on protecting against those things as well,” said Peter Dolanjski, Firefox’s product lead.

Both Google and Mozilla said they’re working on fighting “fingerprinting,” a way to sniff out other markers in your computer. Firefox is already testing its capabilities and plans to activate them soon.

Choosing a browser is no longer just about speed and convenience — it’s also about data defaults.

It’s true that Google usually obtains consent before gathering data, and offers a lot of knobs you can adjust to opt out of tracking and targeted advertising. But its controls often feel like a shell game that results in us sharing more personal data.

I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn’t cause anybody’s browsing history to be “synced” unless they specifically opted in — but I found mine was being sent to Google and don’t recall ever asking for extra surveillance. (You can turn off the Gmail auto-login by searching “Gmail” in Chrome settings and switching off “Allow Chrome sign-in.”)

After the sign-in shift, Johns Hopkins associate professor Matthew Green made waves in the computer science world when he blogged he was done with Chrome. “I lost faith,” he told me. “It only takes a few tiny changes to make it very privacy unfriendly.”

When you use Chrome, signing into Gmail automatically logs in the browser to your Google account. When “sync” is also on, Google receives your browsing history. (Geoffrey Fowler/The Washington Post)

There are ways to defang Chrome, but they are much more complicated than just using “Incognito Mode.” It’s much easier to switch to a browser not owned by an advertising company.

Like Green, I’ve chosen Firefox, which works across phones, tablets, PCs and Macs. Apple’s Safari is also a good option on Macs, iPhones and iPads, and the niche Brave browser goes even further in trying to jam the ad-tech industry.

What does switching to Firefox cost you? It’s free, and downloading a different browser is much simpler than changing phones.

In 2017, Mozilla launched a new version of Firefox called Quantum that made it considerably faster. In my tests, it has felt almost as fast as Chrome, though benchmark tests have found it can be slower in some contexts. Firefox says it’s better about managing memory if you use lots and lots of tabs.

Switching means you’ll have to move your bookmarks, and Firefox offers tools to help. Shifting passwords is easy if you use a password manager. And most browser add-ons are available, though it’s possible you won’t find your favorite.

Mozilla has challenges to overcome. Among privacy advocates, the nonprofit is known for caution. It took a year longer than Apple to make cookie blocking a default.

And as a nonprofit, it earns money when people make searches in the browser and click on ads — which means its biggest source of income is Google. Mozilla’s chief executive says the company is exploring new paid privacy services to diversify its income.

Its biggest risk is that Firefox might someday run out of steam in its battle with the Chrome behemoth. Even though it’s the No. 2 desktop browser, with about 10 percent of the market, major sites could decide to drop support, leaving Firefox scrambling.

If you care about privacy, let’s hope for another David and Goliath outcome.

Why Privacy Is an Antitrust Issue

QUOTE

Finally the mainstream media is saying what many of us have known for years. Too bad they do not put their money where their mouth is and sever all business ties with the likes of Facebook.

As Facebook has generated scandal after scandal in recent years, critics have started to wonder how we might use antitrust laws to rein in the company’s power. But many of the most pressing concerns about Facebook are its privacy abuses, which unlike price gouging, price discrimination or exclusive dealing, are not immediately recognized as antitrust violations. Is there an antitrust case to be made against Facebook on privacy grounds?

Yes, there is. In March, when Representative David N. Cicilline, Democrat of Rhode Island, called on the Federal Trade Commission to investigate Facebook’s potential violations of antitrust laws, he cited not only Facebook’s acquisitions (such as Instagram and WhatsApp), but also evidence that Facebook was “using its monopoly power to degrade” the quality of its service “below what a competitive marketplace would allow.”

It is this last point, which I made in a law journal article cited by Mr. Cicilline, that promises to change how antitrust law will protect the American public in the era of Big Tech: namely, that consumers can suffer at the hands of monopolies because companies like Facebook lock in users with promises to protect their data and privacy — only to break those promises once competitors in the marketplace have been eliminated.

 


 

To see what I mean, let’s go back to the mid-2000s, when Facebook was an upstart social media platform. To differentiate itself from the market leader, Myspace, Facebook publicly pledged itself to privacy. Privacy provided its competitive advantage, with the company going so far as to promise users, “We do not and will not use cookies to collect private information from any user.”

When Facebook later attempted to change this bargain with users, the threat of losing its customers to its competitors forced the company to reverse course. In 2007, for example, Facebook introduced a program that recorded users’ activity on third-party sites and inserted it into the News Feed. Following public outrage and a class-action lawsuit, Facebook ended the program. “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them,” Facebook’s chief executive, Mark Zuckerberg, wrote in a public apology.

This sort of thing happened regularly for years. Facebook would try something sneaky, users would object and Facebook would back off.

But then Facebook’s competition began to disappear. Facebook acquired Instagram in 2012 and WhatsApp in 2014. Later in 2014, Google announced that it would fold its social network Orkut. Emboldened by the decline of market threats, Facebook revoked its users’ ability to vote on changes to its privacy policies and then (almost simultaneously with Google’s exit from the social media market) changed its privacy pact with users.

This is how Facebook usurped our privacy: with the help of its market dominance. The price of using Facebook has stayed the same over the years (it’s free to join and use), but the cost of using it, calculated in terms of the amount of data that users now must provide, is an order of magnitude above what it was when Facebook faced real competition.
 


It is hard to believe that the Facebook of 2019, which is so consuming of and reckless with our data, was once the privacy-protecting Facebook of 2004. When users today sign up for Facebook, they agree to allow the company to track their activity across more than eight million websites and mobile applications that are connected to the internet. They cannot opt out of this. The ubiquitous tracking of consumers online allows Facebook to collect exponentially more data about them than it originally could, which it can use to its financial advantage.

And while users can control some of the ways in which Facebook uses their data by adjusting their privacy settings, if you choose to leave Facebook, the company still subjects you to surveillance — but you no longer have access to the settings. Staying on the platform is the only effective way to manage its harms.

Lowering the quality of a company’s services in this manner has always been one way a monopoly can squeeze consumers after it corners a market. If you go all the way back to the landmark “case of monopolies” in 17th-century England, for example, you find a court sanctioning a monopoly for fear that it might control either price or the quality of services.

But we must now aggressively enforce this antitrust principle to handle the problems of our modern economy. Our government should undertake the important task of restoring to the American people something they bargained for in the first place — their privacy.

LONG LONG Overdue!

iPhone gyroscopes, of all things, can uniquely ID handsets on anything earlier than iOS 12.2

QUOTE

Your iPhone can be uniquely fingerprinted by apps and websites in a way that you can never clear. Not by deleting cookies, not by clearing your cache, not even by reinstalling iOS.

Cambridge University researchers will present a paper to the IEEE Symposium on Security and Privacy 2019 today explaining how their fingerprinting technique uses a fiendishly clever method of inferring device-unique accelerometer calibration data.

“iOS has historically provided access to the accelerometer, gyroscope and the magnetometer,” Dr Alastair Beresford told The Register this morning. “These types of devices don’t seem like they’re troublesome from a privacy perspective, right? Which way up the phone is doesn’t seem that bad.

“In reality,” added the researcher, “it turns out that you can work out a globally unique identifier for the device by looking at these streams.”
Your orientation reveals an awful lot about you

“MEMS” – microelectromechanical systems – is the catchall term for things like your phone’s accelerometer, gyroscope and magnetometer. These sensors tell your handset which way up it is, whether it’s turning and, if so, how fast, and how strong a nearby magnetic field is. They are vital for mobile games that rely on the user tilting or turning the handset.

These, said Beresford, are mass produced. Like all mass-produced items, especially sensors, they have the normal distribution of inherent but minuscule errors and flaws, so high-quality manufacturers (like Apple) ensure each one is calibrated.

“That calibration step allows the device to produce a more accurate parameter,” explained Beresford. “But it turns out the values being put into the device are very likely to be globally unique.”
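
To make the calibration idea concrete (my sketch, not from the article): a MEMS sensor’s raw reading is corrected inside the firmware roughly as

    a_corrected = S * a_raw + b

where S is a 3x3 matrix holding the per-axis scale factors and orthogonality (axis-misalignment) corrections, and b is a bias vector, all fitted per device at the factory. A dozen or so finely quantized parameters per sensor is plenty to be globally unique across every handset made — which is exactly what the researchers exploit.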

Beresford and co-researchers Jiexin Zhang, also from Cambridge’s Department of Computer Science and Technology, and Ian Sheret of Polymath Insight Ltd, devised a way of not only accessing data from MEMS sensors – that wasn’t the hard part – but of inferring the calibration data based on what the sensors were broadcasting in real time, during actual use by a real-world user. Even better (or worse, depending on your point of view), the data can be captured and reverse-engineered through any old website or app.

“It doesn’t require any specific confirmation from a user,” said Beresford. “This fingerprint never changes, even if you factory reset the handset or reinstall the OS. This is buried deep inside the firmware of the device so the fingerprint data doesn’t change. This provides a way to track users around the web.”
How they did it

“You need to record some samples,” said Beresford. “There’s an API in JavaScript or inside Swift that allows you to get samples from the hardware. Because you get many samples per second, we need around 100 samples to get the attack. Around half a second on many of the devices. So it’s quite quick to collect the data.”

Each device generates a stream of analogue data. By converting that into digital values and applying algorithms they developed in the lab using stationary or slow-moving devices, Beresford said, the researchers could then infer what a real-world user device was doing at a given time (say, being bounced around in a bag) and apply a known offset.

“We can guess what the input is going to be given the output that we observe,” he said. “If we guess correctly, we can then use that guess to estimate what the value of the scale factor and the orthogonality are.”

From there it is a small step to bake those algorithms into a website or an app. Although the actual technique does not necessarily have to be malicious in practice (for example, a bank might use it to uniquely fingerprint your phone as an anti-fraud measure), it does raise a number of questions.
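
To show how little machinery the collection step needs, here is a rough TypeScript sketch of sampling the browser API described above — only the data gathering, not the researchers’ calibration-inference algorithms, and with names and thresholds of my own choosing. (On iOS 13+ an explicit permission prompt is required; the vulnerable pre-12.2 versions exposed the events silently.)

    interface MotionSample {
      t: number;                            // timestamp in ms
      ax: number; ay: number; az: number;   // accelerometer incl. gravity, m/s^2
      rx: number; ry: number; rz: number;   // gyroscope rotation rate, deg/s
    }

    async function collectSamples(count = 100): Promise<MotionSample[]> {
      // Newer iOS versions gate the motion sensors behind a user prompt.
      const dme = DeviceMotionEvent as any;
      if (typeof dme.requestPermission === "function") {
        await dme.requestPermission();
      }
      return new Promise((resolve) => {
        const samples: MotionSample[] = [];
        const onMotion = (e: DeviceMotionEvent) => {
          const a = e.accelerationIncludingGravity;
          const r = e.rotationRate;
          if (!a || !r) return;
          samples.push({
            t: performance.now(),
            ax: a.x ?? 0, ay: a.y ?? 0, az: a.z ?? 0,
            rx: r.alpha ?? 0, ry: r.beta ?? 0, rz: r.gamma ?? 0,
          });
          if (samples.length >= count) {   // ~100 samples, roughly half a second
            window.removeEventListener("devicemotion", onMotion);
            resolve(samples);
          }
        };
        window.addEventListener("devicemotion", onMotion);
      });
    }

From a buffer like this, the paper’s algorithms estimate the per-device scale-factor and orthogonality parameters — the fingerprint itself.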
Good news, fandroids: you’re not affected

Oddly enough, the attack doesn’t work on most Android devices because they’re cheaper than Apple’s, in all senses of the word, and generally aren’t calibrated, though the researchers did find that some Google Pixel handsets did feature calibrated MEMS.

Beresford joked: “There’s a certain sense of irony that because Apple has put more effort in to provide more accuracy, it has this unfortunate side effect!”

Apple has patched the flaws in iOS 12.2 by blocking “access to these sensors in Mobile Safari just by default” as well as adding “some noise to make the attack much more difficult”.

The researchers have set up a website which includes both the full research paper and their layman’s explanation, along with a proof-of-concept video. Get patching, Apple fanbois.

Panic as panic alarms meant to keep granny and little Timmy safe prove a privacy fiasco

QUOTE

Simple hack turns them into super secret spying tool

A GPS tracker used by elderly people and young kids has a security hole that could allow others to track and secretly record their wearers.

The white-label product is manufactured in China and then rebadged and rebranded by a range of companies in the UK, US, Australia and elsewhere including Pebbell 2, OwnFone and SureSafeGo. Over 10,000 people in the UK use the devices.

It has an in-built SIM card that is used to pinpoint the location of the user, as well as to provide hands-free communications through a speaker and mic. As such it is most commonly used by elderly people in case of a fall, and on children whose parents want to be able to know where they are and contact them if necessary.

 


But researchers at Fidus Information Security discovered, and revealed on Friday, that the system has a dangerous flaw: you can send a text message to the SIM and force it to reset. From there, a remote attacker can cause the device to reveal its location, in real time, as well as secretly turn on the microphone.

The flaw also enables a third party to turn on and off all the key features of the products such as emergency contacts, fall detection, motion detection and a user-assigned PIN. In other words, a critical safety device can be completely disabled by anybody in the world through a text message.

 


The flaw was introduced in an update to the product: originally the portable fob communicated with a base station that was plugged into a phone line – an approach that provided no clear attack route. But in order to expand its range and usefulness, the SIM card was added so it was not reliant on a base station and would work over the mobile network.

The problem arises from the fact that the Chinese manufacturer built a PIN into the device so it would be locked to the telephone number programmed into it. Which is fine, except the PIN was disabled by default and is currently not needed to reboot or reset the device.

And so it is possible to send a reset command to the device – if you know its SIM telephone number – and restore it to factory settings. At that point, the device is wide open and doesn’t need the PIN to make changes to the other functions. Which all amounts to remote access.
Random access memory

But how would you find out the device’s number? Well, the researchers got hold of one such device and its number and then ran a script where they sent messages to thousands of similar numbers to see if they hit anything.

They did. “Out of the 2,500 messages we sent, we got responses from 175 devices (7 per cent),” they wrote. “So this is 175 devices being used at the time of writing as an aid for vulnerable people; all identified at a minimal cost. The potential for harm is massive, and in less than a couple of hours, we could interact with 175 of these devices!”

The good news is that it is easy to fix – in new devices. You would simply add a unique code to each device and require that it be used to reset the device. And you could limit the device to only receive calls or texts from a list of approved contacts. (A sketch of that logic follows below.)
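
For illustration only — every name here is hypothetical, not the vendor’s actual firmware — the fix the researchers describe boils down to command handling along these lines, sketched in TypeScript:

    interface DeviceConfig {
      resetCode: string;          // unique per device, burned in at the factory
      allowedNumbers: string[];   // approved contacts list
    }

    function handleIncomingSms(cfg: DeviceConfig, from: string, body: string): string {
      // Drop anything not sent from an approved contact.
      if (!cfg.allowedNumbers.includes(from)) return "ignored";

      // A factory reset must quote the device-unique code; a disabled or
      // default PIN no longer opens the door.
      const m = body.match(/^RESET\s+(\S+)$/i);
      if (m) {
        return m[1] === cfg.resetCode ? "reset-accepted" : "reset-rejected";
      }
      return "unknown-command";
    }

The point of the unique per-device code is that the mass SMS-scanning attack described above stops working: a remote attacker who only knows the SIM’s phone number has nothing to quote.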

But in the devices already on the market, the fix is not so easy: even by using the default PIN to lock it down, it is still possible to reset the device, because the reset doesn’t require the PIN to be entered. The researchers say they have contacted the companies that use the device “to help them understand the risks posed by our findings” and say that the companies are “looking into and are actively recalling devices.” But they also note that some have not responded.

In short, poor design and the lack of a decent security audit prior to putting the updated product on the market has turned what is supposed to provide peace of mind into a potential stalking and listening nightmare.

Facebook’s third act: Mark Zuckerberg announces his firm’s next business model

Zuckerberg’s cynical attempt to change the narrative by implementing end-to-end encryption is simply a bad idea. It gets them off the hook on moderating content (read: more profits), still allows them to sell ads, and makes it nearly impossible for law enforcement to do their job. Hey Zuck, why not hang a sign out: criminals, pedophiles, gangs, repressive regimes, etc. – “all welcome here.” I have a better idea: Get Facebook off the planet.

Quote

If it works, the social-networking giant will become more private and more powerful

THE FIRST big overhaul for Facebook came in 2012-14. Internet users were carrying out ever more tasks on smartphones rather than desktop or laptop computers. Mark Zuckerberg opted to follow them, concentrating on Facebook’s mobile app ahead of its website, and buying up two fast-growing communication apps, WhatsApp and Instagram. It worked. Facebook increased its market valuation from around $60bn at the end of 2012 to—for a brief period in 2018—more than $600bn.

On March 6th Mr Zuckerberg announced Facebook’s next pivot. As well as its existing moneymaking enterprise, selling targeted ads on its public social networks, it is building a “privacy-focused platform” around WhatsApp, Instagram and Messenger. The apps will be integrated, he said, and messages sent through them encrypted end-to-end, so that even Facebook cannot read them. While it was not made explicit, it is clear what the business model will be. Mr Zuckerberg wants all manner of businesses to use its messaging networks to provide services and accept payments. Facebook will take a cut.

A big shift was overdue at Facebook given the privacy and political scandals that have battered the firm. Even Mr Zuckerberg, who often appears incapable of seeing the gravity of Facebook’s situation, seemed to grasp the irony of it putting privacy first. “Frankly we don’t currently have a strong reputation for building privacy protective services,” he noted.

Still, he intends to do it. Mr Zuckerberg claims that users will benefit from his plan to integrate its messaging apps into a single, encrypted network. The content of messages will be safe from prying eyes of authoritarian snoops and criminals, as well as from Facebook itself. It will make messaging more convenient, and make profitable new services possible. But caution is warranted for three reasons.

The first is that Facebook has long been accused of misleading the public on privacy and security, so the potential benefits Mr Zuckerberg touts deserve to be treated sceptically. He is also probably underselling the benefits that running integrated messaging networks brings to his firm, even if they are encrypted so that Facebook cannot see the content. The metadata alone, ie, who is talking to whom, when and for how long, will still allow Facebook to target advertisements precisely, meaning its ad model will still function.

End-to-end encryption will also make Facebook’s business cheaper to run. Because it will be mathematically impossible to moderate encrypted communications, the firm will have an excuse to take less responsibility for content running through its apps, limiting its moderation costs.

If it can make the changes, Facebook’s dominance over messaging would probably increase. The newfound user-benefits of a more integrated Facebook might make it harder for regulators to argue that Mr Zuckerberg’s firm should be broken up.

Facebook’s plans in India provide some insight into the new model. It has built a payment system into WhatsApp, the country’s most-used messaging app. The system is waiting for regulatory approval. The market is huge. In the rest of the world, too, users are likely to be drawn in by the convenience of Facebook’s new networks. Mr Zuckerberg’s latest strategy is ingenious but may contain twists.

The Week in Tech: Facebook and Google Reshape the Narrative on Privacy

And from the BS department…

QUOTE

…Stop me if you’ve heard this before: The chief executive of a huge tech company with vast stores of user data, and a business built on using it to target ads, now says his priority is privacy.

This time it was Google’s Sundar Pichai, at the company’s annual conference for developers. “We think privacy is for everyone,” he explained on Tuesday. “We want to do more to stay ahead of constantly evolving user expectations.” He reiterated the point in a New York Times Op-Ed, and highlighted the need for federal privacy rules.

The previous week, Mark Zuckerberg delivered similar messages at Facebook’s developer conference. “The future is private,” he said, and Facebook will focus on more intimate communications. He shared the idea in a Washington Post op-ed just weeks before, also highlighting the need for federal privacy rules.

Google went further than Facebook’s rough sketch of what this future looks like, and unveiled tangible features: It will let users browse YouTube and Google Maps in “incognito mode,” will allow auto-deletion of Google history after a specified time and will make it easier to find out what the company knows about you, among other new privacy features.

Fatemeh Khatibloo, a vice president and principal analyst at Forrester, told The Times: “These are meaningful changes when it comes to the user’s expectations of privacy, but I don’t think this affects their business at all.” Google has to show that privacy is important, but it will still collect data.

What Google and Facebook are trying to do, though, is reshape the privacy narrative. You may think privacy means keeping hold of your data; they want privacy to mean they don’t hand data to others. (“Google will never sell any personal information to third parties,” Mr. Pichai wrote in his Op-Ed.)

Werner Goertz, a research director at Gartner, said Google had to respond with its own narrative. “It is trying to turn the conversation around and drive public discourse in a way that not only pacifies but also tries to get buy-in from consumers, to align them with its privacy strategy,” he said.

Right – pacify the masses with BS.

Politics of privacy law

Facebook and Google may share a voice on privacy. Lawmakers don’t.

Members of the Federal Trade Commission renewed calls at a congressional hearing on Wednesday to regulate big tech companies’ stewardship of user data, my colleague Cecilia Kang reported. That was before a House Energy and Commerce subcommittee, on which “lawmakers of both parties agreed” that such a law was required, The Wall Street Journal reported.

Sounds promising.

But while the F.T.C. was united in asking for more power to police violations and greater authority to impose penalties, there were large internal tensions about how far it should be able to go in punishing companies. And the lawmakers in Congress “appeared divided over key points that legislation might address,” according to The Journal. Democrats favor harsh penalties and want to give the F.T.C. greater power; Republicans worry that strict regulation could stifle innovation and hurt smaller companies.

Finding compromise will be difficult, and conflicting views risk becoming noise through which a clear voice from Facebook and Google can cut. The longer disagreement rages, the more likely it is that Silicon Valley defines a mainstream view that could shape rules.

Yeah — more lobbyists and political donations subverting democracy. The US should enact an EU-equivalent GDPR now. And another thing: Zuckerberg’s cynical attempt to change the narrative by implementing end-to-end encryption is simply a bad idea. It gets them off the hook on moderating content (read: more profits), still allows them to sell ads and makes it nearly impossible for law enforcement to do their job. Hey Zuck, why not hang a sign out: criminals, pedophiles, gangs, repressive regimes, etc. – “all welcome here.”

Now for Sale on Facebook: Looted Middle Eastern Antiquities

Another reason Facebook is a disgusting, dangerous corporation. A 5 billion dollar fine is nothing. It needs to be wound down, and Zuckerberg and Sandberg given long, hard prison terms for the evil and death they have caused.

Quote

Ancient treasures pillaged from conflict zones in the Middle East are being offered for sale on Facebook, researchers say, including items that may have been looted by Islamic State militants.

Facebook groups advertising the items grew rapidly during the upheaval of the Arab Spring and the ensuing wars, which created unprecedented opportunities for traffickers, said Amr Al-Azm, a professor of Middle East history and anthropology at Shawnee State University in Ohio and a former antiquities official in Syria. He has monitored the trade for years along with his colleagues at the Athar Project, named for the Arabic word for antiquities.

At the same time, Dr. Al-Azm said, social media lowered the barriers to entry to the marketplace. Now there are at least 90 Facebook groups, most in Arabic, connected to the illegal trade in Middle Eastern antiquities, with tens of thousands of members, he said.

They often post items or inquiries in the group, then take the discussion into chat or WhatsApp messaging, making it difficult to track. Some users circulate requests for certain types of items, providing an incentive for traffickers to produce them, a scenario that Dr. Al-Azm called “loot to order.”

Others post detailed instructions for aspiring looters on how to locate archaeological sites and dig up treasures.

Items for sale include a bust purportedly taken from the ancient city of Palmyra, which was occupied for periods by Islamic State militants and endured heavy looting and damage.

Other artifacts for sale come from Iraq, Yemen, Egypt, Tunisia and Libya. The majority do not come from museums or collections, where their existence would have been cataloged, Dr. Al-Azm said.

“They’re being looted straight from the ground,” he said. “They have never been seen. The only evidence we have of their existence is if someone happens to post a picture of them.”

Dr. Al-Azm and Katie A. Paul, the directors of the Athar Project, wrote in World Politics Review last year that the loot-to-order requests showed that traffickers were “targeting material with a previously unseen level of precision — a practice that Facebook makes remarkably easy.”

After the BBC published an article about the work of Dr. Al-Azm and his colleagues last week, Facebook said that it had removed 49 groups connected to antiquities trafficking.

 


Dr. Al-Azm countered that 90 groups were still up. But more important, he argued, Facebook should not simply delete the pages, which now constitute crucial evidence both for law enforcement and heritage experts.

In a statement on Tuesday, the company said it was “continuing to invest in people and technology to keep this activity off Facebook and encourage others to report anything they suspect of violating our Community Standards so we can quickly take action.”

A spokeswoman said that the company’s policy-enforcement team had 30,000 members and that it had introduced new tools to detect and remove content that violates the law or its policies using artificial intelligence, machine learning and computer vision.

Trafficking in antiquities is illegal across most of the Middle East, and dealing in stolen relics is illegal under international law. But it can be difficult to prosecute such cases.

Leila A. Amineddoleh, a lawyer in New York who specializes in art and cultural heritage, said that determining the provenance of looted items can be arduous, presenting an obstacle for lawyers and academics alike.

Dr. Al-Azm said his team’s research indicated that the Facebook groups are run by an international network of traffickers who cater to dealers, including ones in the West. The sales are often completed in person in cash in nearby countries, he said, despite efforts in Turkey and elsewhere to fight antiquities smuggling.

He faulted Facebook for not heeding warnings about antiquities sales as early as 2014, when it might have been possible to delete the groups to stop, or at least slow, their growth.

As the Islamic State expanded, it systematically looted and destroyed, using heavy machinery to dig into ancient sites that had scarcely been excavated before the war. The group allowed residents and other looters to take from heritage sites, imposing a 20 percent tax on their earnings.

Some local people and cultural heritage experts scrambled to document and save the antiquities, including efforts to physically safeguard them and to create 3-D models and maps. Despite their efforts, the losses were catastrophic.

Satellite images show invaluable sites, such as Mari and Dura-Europos in eastern Syria, pockmarked with excavation holes from looters. In the Mosul Museum in Iraq, the militants filmed themselves taking sledgehammers and drills to monuments they saw as idolatrous, acts designed for maximum propaganda value as the world watched with horror.

Other factions and people also profited from looting. In fact, the market was so saturated that prices dropped drastically for a time around 2016, Dr. Al-Azm said.

Around the same time, as Islamic State fighters scattered in the face of territorial losses, they took their new expertise in looting back to their countries, including Egypt, Tunisia and Libya, and to other parts of Syria, like Idlib Province, he added.

“This is a supply and demand issue,” Dr. Al-Azm said, repeating that any demand gives incentives to looters, possibly financing terrorist groups in the process.

Instead of simply deleting the pages, Dr. Al-Azm said, Facebook should devise a more comprehensive strategy to stop the sales while allowing investigators to preserve photos and records uploaded to the groups.

A hastily posted photo, after all, might be the only record of a looted object that is available to law enforcement or scholars. Simply deleting the page would destroy “a huge corpus of evidence” that will be needed to identify, track and recover looted treasures for years to come, he said.

Similar arguments have been made as social media sites, including YouTube, have deleted videos that show atrocities committed during the Syrian war that could be used to prosecute war crimes.

Facebook has also faced questions over its role as a platform for other types of illicit sales, including guns, poached ivory and more. It has generally responded by shutting down pages or groups in response to reports of illegal activity.

Some of the illicit items sold without proof of their ownership history, of course, could be fake. But given the volume of activity in the antiquities groups and the copious evidence of looting at famous sites, at least some of them are believed to be genuine.

The wave of items hitting the market will most likely continue for years. Some traffickers sit on looted antiquities for long periods, waiting for attention to die down and sometimes forging documents about the items’ origins before offering them for sale.

Boycott is the only way to force social media giants to protect kids online

About a month back I got into an email exchange with a mother who had invited one of my children to a birthday party. I said OK, but asked that no pictures be posted to social media. I explained my reasoning. She said I was crazy and would damage my children (among other things). I responded with advice from several reputable sources. No matter. Suffice it to say, no birthday attendance.

I was never sure why she reacted this way. It was almost like I asked an addict to go cold turkey. Maybe that’s it. She is addicted.

QUOTE

A public boycott of social media may be the only way to force companies to protect children from abuse, the country’s leading child protection police officer has said.

Simon Bailey, the National Police Chiefs’ Council lead on child protection, said tech companies had abdicated their duty to safeguard children and were only paying attention due to fear of reputational damage.

The senior officer, who is Norfolk’s chief constable, said he believed sanctions such as fines would be “little more than a drop in the ocean” to social media companies, but that the government’s online harms white paper could be a “game changer” if it led to effective punitive measures.

Bailey suggested a boycott would be one way to hit big platforms, which he believes have the technology and funds to “pretty much eradicate the availability, the uploading, and the distribution of indecent imagery”.

Despite the growing problem, Bailey said he had seen nothing so far “that has given me the confidence that companies that are creating these platforms are taking their responsibilities seriously enough”.

He told the Press Association: “Ultimately I think the only thing they will genuinely respond to is when their brand is damaged. Ultimately the financial penalties for some of the giants of this world are going to be an absolute drop in the ocean.

“But if the brand starts to become tainted, and consumers start to see how certain platforms are permitting abuse, are permitting the exploitation of young people, then maybe the damage to that brand will be so significant that they will feel compelled to do something in response.

“We have got to look at how we drive a conversation within our society that says ‘do you know what, we are not going to use that any more, that system or that brand or that site’ because of what they are permitting to be hosted or what they are allowing to take place.”

In every playground there is likely to be someone with pornography on their phone, Bailey said as he described how a growing number of young men are becoming “increasingly desensitised” and progressing to easily available illegal material. Society is “not far off the point where somebody will know somebody” who has viewed illegal images, he said.

There has been a sharp rise in the number of images on the child abuse image database from fewer than 10,000 in the 1990s to 13.4m, with more than 100m variations of these.

Last month, the government launched a consultation on new laws proposed to tackle illegal content online. The white paper, which was revealed in the Guardian, legislated for a new statutory duty of care by social media firms and the appointment of an independent regulator, which is likely to be funded through a levy on the companies. It was welcomed by senior police and children’s charities.

Bailey believes if effective regulation is put in place it could free up resources to begin tackling the vaster dark web. He expressed concern that the spread of 4G and 5G networks worldwide would open up numerous further opportunities for the sexual exploitation of children.

Speaking at a conference organised by StopSO, a charity that works with offenders and those concerned about their sexual behaviour to minimise the risk of offending, of which Bailey is patron, he recently said that plans from Facebook’s Mark Zuckerberg to increase privacy on the social network would make life harder for child protection units. But he told the room: “There is no doubt that thinking is shifting around responsibility of tech companies. I think that argument has been won, genuinely.

“Of course, the proof is going to be in the pudding with just how ambitious the white paper is, how effective the punitive measures will be, or not.”

Andy Burrows, the National Society for the Prevention of Cruelty to Children’s associate head of child safety online, said: “It feels like social media sites treat child safeguarding crises as a bad news cycle to ride out, rather than a chance to make changes to protect children.”

Google Spies! The worst kind of microphone is a hidden microphone.

Google says the built-in microphone it never told Nest users about was ‘never supposed to be a secret’

Yeah right.
Quote

  • In early February, Google announced that Assistant would work with its home security and alarm system, Nest Secure.
  • The problem: Users didn’t know a microphone existed on their Nest security devices to begin with.
  • On Tuesday, a Google representative told Business Insider the company had made an “error.”
  • “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the person said. “That was an error on our part.”

In early February, Google announced that its home security and alarm system Nest Secure would be getting an update. Users, the company said, could now enable its virtual-assistant technology, Google Assistant.

The problem: Nest users didn’t know a microphone existed on their security device to begin with.

The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component in the Nest Secure offering, was never disclosed in any of the product material for the device.

On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”

Google says “the microphone has never been on and is only activated when users specifically enable the option.”

It also said the microphone was originally included in the Nest Guard for the possibility of adding new security features down the line, like the ability to detect broken glass.

Still, even if Google included the microphone in its Nest Guard device for future updates — like its Assistant integration — the news comes as consumers have grown increasingly wary of major tech companies and their commitment to consumer privacy.

For Google, the revelation is particularly problematic and brings to mind previous privacy controversies, such as the 2010 incident in which the company acknowledged that its fleet of Street View cars “accidentally” collected personal data transmitted over consumers’ unsecured WiFi networks, including emails.