
Monthly Archives: June 2019

Why I’m done with Chrome – Matthew Green

This article was published in September 2018; I had been warning people about Chrome for years before that. It is not just the login policy, it is all the other data collection, the lack of any way to automatically dump cookies and private data on exit, and so forth. But all that said, it is still worth re-posting this, as I think Matt Green did a good job. And since no one listens to me (boo hoo), maybe they will listen to Matt. And listening is needed, as the likes of Google, Facebook, and Amazon are just getting stronger and more intrusive. Fight back!

QUOTE

Why I’m done with Chrome

This blog is mainly reserved for cryptography, and I try to avoid filling it with random “someone is wrong on the Internet” posts. After all, that’s what Twitter is for! But from time to time something bothers me enough that I have to make an exception. Today I wanted to write specifically about Google Chrome, how much I’ve loved it in the past, and why — due to Chrome’s new user-unfriendly forced login policy — I won’t be using it going forward.
A brief history of Chrome

When Google launched Chrome ten years ago, it seemed like one of those rare cases where everyone wins. In 2008, the browser market was dominated by Microsoft, a company with an ugly history of using browser dominance to crush their competitors. Worse, Microsoft was making noises about getting into the search business. This posed an existential threat to Google’s internet properties.

In this setting, Chrome was a beautiful solution. Even if the browser never produced a scrap of revenue for Google, it served its purpose just by keeping the Internet open to Google’s other products. As a benefit, the Internet community would receive a terrific open source browser with the best development team money could buy. This might be kind of sad for Mozilla (who have paid a high price due to Chrome) but overall it would be a good thing for Internet standards.

For many years this is exactly how things played out. Sure, Google offered an optional “sign in” feature for Chrome, which presumably vacuumed up your browsing data and shipped it off to Google, but that was an option. An option you could easily ignore. If you didn’t take advantage of this option, Google’s privacy policy was clear: your data would stay on your computer where it belonged.
What changed?

A few weeks ago Google shipped an update to Chrome that fundamentally changes the sign-in experience. From now on, every time you log into a Google property (for example, Gmail), Chrome will automatically sign the browser into your Google account for you. It’ll do this without asking, or even explicitly notifying you. (However, and this is important: Google developers claim this will not actually start synchronizing your data to Google — yet. See further below.)

Your sole warning — in the event that you’re looking for it — is that your Google profile picture will appear in the upper-right hand corner of the browser window. I noticed mine the other day:

The change hasn’t gone entirely unnoticed: it received some vigorous discussion on sites like Hacker News. But the mainstream tech press seems to have ignored it completely. This is unfortunate — and I hope it changes — because this update has huge implications for Google and the future of Chrome.

In the rest of this post, I’m going to talk about why this matters. From my perspective, this comes down to basically four points:

1. Nobody on the Chrome development team can provide a clear rationale for why this change was necessary, and the explanations they’ve given don’t make any sense.
2. This change has enormous implications for user privacy and trust, and Google seems unable to grapple with this.
3. The change makes a hash out of Google’s own privacy policies for Chrome.
4. Google needs to stop treating customer trust like it’s a renewable resource, because they’re screwing up badly.

I warn you that this will get a bit ranty. Please read on anyway.
Google’s stated rationale makes no sense

The new feature that triggers this auto-login behavior is called “Identity consistency between browser and cookie jar” (HN). After conversations with two separate Chrome developers on Twitter (who will remain nameless — mostly because I don’t want them to hate me), I was given the following rationale for the change:

To paraphrase this explanation: if you’re in a situation where you’ve already signed into Chrome and your friend shares your computer, then you can wind up accidentally having your friend’s Google cookies get uploaded into your account. This seems bad, and sure, we want to avoid that.

But note something critical about this scenario. In order for this problem to apply to you, you already have to be signed into Chrome. There is absolutely nothing in this problem description that seems to affect users who chose not to sign into the browser in the first place.

So if signed-in users are your problem, why would you make a change that forces signed-out users to become signed-in? I could waste a lot more ink wondering about the mismatch between the stated “problem” and the “fix”, but I won’t bother: because nobody on the public-facing side of the Chrome team has been able to offer an explanation that squares this circle.

And this matters, because “sync” or not…
The change has serious implications for privacy and trust

The Chrome team has offered a single defense of the change. They point out that just because your browser is “signed in” does not mean it’s uploading your data to Google’s servers. Specifically:

While Chrome will now log into your Google account without your consent (following a Gmail login), Chrome will not activate the “sync” feature that sends your data to Google. That requires an additional consent step. So in theory your data should remain local.

This is my paraphrase. But I think it’s fair to characterize the general stance of the Chrome developers I spoke with as: without this “sync” feature, there’s nothing wrong with the change they’ve made, and everything is just fine.

This is nuts, for several reasons.

User consent matters. For ten years I’ve been asked a single question by the Chrome browser: “Do you want to log in with your Google account?” And for ten years I’ve said no thanks. Chrome still asks me that question — it’s just that now it doesn’t honor my decision.

The Chrome developers want me to believe that this is fine, since (phew!) I’m still protected by one additional consent guardrail. The problem here is obvious:

If you didn’t respect my lack of consent on the biggest user-facing privacy option in Chrome (and didn’t even notify me that you had stopped respecting it!) why should I trust any other consent option you give me? What stops you from changing your mind on that option in a few months, when we’ve all stopped paying attention?

The fact of the matter is that I’d never even heard of Chrome’s “sync” option — for the simple reason that up until September 2018, I had never logged into Chrome. Now I’m forced to learn these new terms, and hope that the Chrome team keeps its promises to keep all of my data local as the barriers between “signed in” and “not signed in” are gradually eroded away.

The Chrome sync UI is a dark pattern. Now that I’m forced to log into Chrome, I’m faced with a brand new menu I’ve never seen before. It looks like this:

Does that big blue button indicate that I’m already synchronizing my data to Google? That’s scary! Wait, maybe it’s an invitation to synchronize! If so, what happens to my data if I click it by accident? (I won’t give the answer away; you should go find out. Just make sure you don’t accidentally upload all your data in the process. It can happen quickly.)

In short, Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern. Whether intentional or not, it has the effect of making it easy for people to activate sync without knowing it, or to think they’re already syncing and thus there’s no additional cost to increasing Google’s access to their data.

Don’t take my word for it. It even gives (former) Google people the creeps.

Big brother doesn’t need to actually watch you. We tell things to our web browsers that we wouldn’t tell our best friends. We do this with some vague understanding that yes, the Internet spies on us. But we also believe that this spying is weak and probabilistic. It’s not like someone’s standing over our shoulder checking our driver’s license with each click.

What happens if you take that belief away? There are numerous studies indicating that even the perception of surveillance can significantly magnify the degree of self-censorship users impose on themselves. Will users feel comfortable browsing for information on sensitive mental health conditions — if their real name and picture are always loaded into the corner of their browser? The Chrome development team says “yes”. I think they’re wrong.

For all we know, the new approach has privacy implications even if sync is off. The Chrome developers claim that with “sync” off, a signed-in Chrome has no privacy implications. This might be true. But when pressed on the actual details, nobody seems quite sure.

For example, if I have my browser logged out, then I log in and turn on “sync”, does all my past (logged-out) data get pushed to Google? What happens if I’m forced to be logged in, and then subsequently turn on “sync”? Nobody can quite tell me if the data uploaded in these conditions is the same. These differences could really matter.
The changes make a hash of the Chrome privacy policy

The Chrome privacy policy is a remarkably simple document. Unlike most privacy policies, it was clearly written as a promise to Chrome’s users — rather than as the usual lawyer CYA. Functionally, it describes two browsing modes: “Basic browser mode” and “signed-in mode”. These modes have very different properties. Read for yourself:

In “basic browser mode”, your data is stored locally. In “signed-in” mode, your data gets shipped to Google’s servers. This is easy to understand. If you want privacy, don’t sign in. But what happens if your browser decides to switch you from one mode to the other, all on its own?

Technically, the privacy policy is still accurate. If you’re in basic browsing mode, your data is still stored locally. The problem is that you no longer get to decide which mode you’re in. This makes a mockery out of whatever intentions the original drafters had. Maybe Google will update the document to reflect the new “sync” distinction that the Chrome developers have shared with me. We’ll see.

Update: After I tweeted about my concerns, I received a DM on Sunday from two different Chrome developers, each telling me the good news: Google is updating their privacy policy to reflect the new operation of Chrome. I think that’s, um, good news. But I also can’t help but note that updating a privacy policy on a weekend is an awful lot of trouble to go to for a change that… apparently doesn’t even solve a problem for signed-out users.
Trust is not a renewable resource

For a company that sustains itself by collecting massive amounts of user data, Google has managed to avoid the negative privacy connotations we associate with, say, Facebook. This isn’t because Google collects less data, it’s just that Google has consistently been more circumspect and responsible with it.

Where Facebook will routinely change privacy settings and apologize later, Google has upheld clear privacy policies that it doesn’t routinely change. Sure, when it collects, it collects gobs of data, but in the cases where Google explicitly makes user security and privacy promises — it tends to keep them. This seems to be changing.

Google’s reputation is hard-earned, and it can be easily lost. Changes like this burn a lot of trust with users. If the change is solving an absolutely critical problem for users, then maybe a loss of trust is worth it. I wish Google could convince me that was the case.
Conclusion

This post has gone on more than long enough, but before I finish I want to address two common counterarguments I’ve heard from people I generally respect in this area.

One argument is that Google already spies on you via cookies and its pervasive advertising network and partnerships, so what’s the big deal if they force your browser into a logged-in state? One individual I respect described the Chrome change as “making you wear two name tags instead of one”. I think this objection fails on moral grounds — just because you’re already violating my privacy doesn’t make it ok to add a massive new violation — but it’s also objectively wrong. Google has spent millions of dollars adding additional tracking features to both Chrome and Android. They aren’t doing this for fun; they’re doing this because it clearly produces data they want.

The other counterargument (if you want to call it that) goes like this: I’m a n00b for using Google products at all, and of course they were always going to do this. The extreme version holds that I ought to be using lynx+Tor and DJB’s custom search engine, and if I’m not I pretty much deserve what’s coming to me.

I reject this argument. I think it’s entirely possible for a company like Google to make good, usable open source software that doesn’t massively violate user privacy. For ten years I believe Google Chrome did just that.

Why they’ve decided to change, I don’t know. It makes me sad.

Goodbye, Chrome: Google’s web browser has become spy software

No shit, Sherlock… I have been saying that about Chrome for years. Nobody listened to me. And you think Gmail is better? Guess again. The entire Google ecosystem is set up for one thing: to collect the maximum amount of private user data and sell it to advertisers. You are the product, and there is no free lunch. But do not listen to me, someone with over 30 years in IT and cyber security; trust the Washington Post talking head instead (although I am happy he published this).

The article, however, misses several points:

1. Unlike Firefox, Chrome has no setting to automatically dump private data when you exit the browser. Even in Firefox this must be enabled explicitly, or Firefox's data retention will approximate Chrome's (see the user.js sketch below).
2. Firefox ships with telemetry turned on, some of which can only be disabled via advanced about:config changes. Shame on Firefox.
3. The article is silent on the add-ons for both Chrome and Firefox that make them more private. For example, it notes the browser makers are working on canvas-fingerprinting protection, yet that has been available for years as an add-on.
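On point 1: for readers who want these settings without menu-diving, here is a minimal user.js sketch for Firefox (drop the file into your Firefox profile directory; it is applied at every startup). The pref names are the ones Firefox used around this time, but Mozilla renames prefs between releases, so verify each one in about:config before relying on it.

// user.js: a minimal privacy sketch; pref names assume the Firefox 60-67 era.

// Point 1: automatically dump private data when the browser exits
user_pref("privacy.sanitize.sanitizeOnShutdown", true);
user_pref("privacy.clearOnShutdown.cookies", true);
user_pref("privacy.clearOnShutdown.cache", true);
user_pref("privacy.clearOnShutdown.offlineApps", true);

// Point 2: turn off telemetry uploads that are otherwise buried in about:config
user_pref("toolkit.telemetry.enabled", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("datareporting.policy.dataSubmissionEnabled", false);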

Got to run…

QUOTE

Our latest privacy experiment found Chrome ushered more than 11,000 tracker cookies into our browser — in a single week. Here’s why Firefox is better

You open your browser to look at the Web. Do you know who is looking back at you?

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web.

This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.
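Fowler does not publish his methodology, but the shape of the experiment is reproducible. Here is a rough sketch using Puppeteer, Chrome's Node.js automation library; the site list is illustrative and the third-party test is deliberately crude, so treat the output as an estimate, not a measurement.

// count_trackers.js: count third-party Set-Cookie responses while browsing.
// Assumes: npm install puppeteer. The site list is illustrative only.
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  let thirdPartySetCookies = 0;

  page.on("response", (response) => {
    const setCookie = response.headers()["set-cookie"];
    if (!setCookie) return;
    const pageHost = new URL(page.url()).hostname;
    const respHost = new URL(response.url()).hostname;
    // Crude third-party check: the responding host is not under the page's host.
    if (respHost && pageHost && !respHost.endsWith(pageHost)) thirdPartySetCookies++;
  });

  for (const site of ["https://www.example.com", "https://www.example.org"]) {
    await page.goto(site, { waitUntil: "networkidle2" });
  }

  console.log(`third-party Set-Cookie responses: ${thirdPartySetCookies}`);
  await browser.close();
})();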

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan services’ log-in pages.

And that’s not the half of it.

Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)

Firefox isn’t perfect — it still defaults searches to Google and permits some other tracking. But it doesn’t share browsing data with Mozilla, which isn’t in the data-collection business.

At a minimum, Web snooping can be annoying. Cookies are how a pair of pants you look at on one site ends up following you around in ads elsewhere. More fundamentally, your Web history — like the color of your underpants — ain’t nobody’s business but your own. Letting anyone collect that data leaves it ripe for abuse by bullies, spies and hackers.

Google’s product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they’re working on new ones for cookies. But they also said they have to get the right balance with a “healthy Web ecosystem” (read: ad business).

Firefox’s product managers told me they don’t see privacy as an “option” relegated to controls. They’ve launched a war on surveillance, starting this month with “enhanced tracking protection” that blocks nosy cookies by default on new Firefox installations. But to succeed, first Firefox has to persuade people to care enough to overcome the inertia of switching.

It’s a tale of two browsers — and the diverging interests of the companies that make them.

A decade ago, Chrome and Firefox were taking on Microsoft’s lumbering giant Internet Explorer. The upstart Chrome solved real problems for consumers, making the Web safer and faster. Today it dominates more than half the market.

Lately, however, many of us have realized that our privacy is also a major concern on the Web — and Chrome’s interests no longer always seem aligned with our own.

That’s most visible in the fight over cookies. These code snippets can do helpful things, like remembering the contents of your shopping cart. But now many cookies belong to data companies, which use them to tag your browser so they can follow your path like crumbs in the proverbial forest.
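To make "tag your browser" concrete: a tracker needs only one endpoint that hands your browser a unique, long-lived cookie; every site that embeds that endpoint then reports your visit back. This is a hypothetical sketch in Node.js with Express; the route, cookie name, and logic are illustrative, not taken from any real tracker.

// Hypothetical tracking-pixel endpoint (Node + Express).
// Every page embedding <img src="https://tracker.example/pixel.gif"> hits this.
const express = require("express");
const crypto = require("crypto");
const app = express();

app.get("/pixel.gif", (req, res) => {
  // Parse the Cookie header; assign a fresh random ID on first contact.
  const cookies = Object.fromEntries(
    (req.headers.cookie || "").split("; ").filter(Boolean).map((c) => c.split("=")));
  const id = cookies.uid || crypto.randomBytes(16).toString("hex");

  // A year-long cookie: the browser sends it back from every embedding site.
  res.set("Set-Cookie", `uid=${id}; Max-Age=31536000; Path=/`);

  // The Referer header reveals which page embedded the pixel; log it to profile.
  console.log(`browser ${id} was at ${req.headers.referer || "unknown page"}`);
  res.status(204).end(); // a real tracker would return a 1x1 transparent GIF
});

app.listen(8080);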

They’re everywhere — one study found third-party tracking cookies on 92 percent of websites. The Washington Post website has about 40 tracker cookies, average for a news site, which the company said in a statement are used to deliver better-targeted ads and track ad performance.

You’ll also find them on sites without ads: Both Aetna and the FSA service said the cookies on their sites help measure their own external marketing campaigns.

The blame for this mess belongs to the entire advertising, publishing and tech industries. But what responsibility does a browser have in protecting us from code that isn’t doing much more than spying?

To see what cookies Firefox has blocked for a Web page, tap the shield icon, then “Blocking Tracker Cookies” to pull up a list. (Geoffrey Fowler/The Washington Post)

In 2015, Mozilla debuted a version of Firefox that included anti-tracking tech, turned on only in its “private” browsing mode. After years of testing and tweaking, that’s what it activated this month on all websites. This isn’t about blocking ads — those still come through. Rather, Firefox is parsing cookies to decide which ones to keep for critical site functions and which ones to block for spying.

Apple’s Safari browser, used on iPhones, also began applying “intelligent tracking protection” to cookies in 2017, using an algorithm to decide which ones were bad.

Chrome, so far, remains open to all cookies by default. Last month, Google announced a new effort to force third-party cookies to better self-identify, and said we can expect new controls for them after it rolls out. But it wouldn’t offer a timeline or say whether it would default to stopping trackers.

I’m not holding my breath. Google itself, through its Doubleclick and other ad businesses, is the No. 1 cookie maker — the Mrs. Fields of the Web. It’s hard to imagine Chrome ever cutting off Google’s moneymaker.

“Cookies play a role in user privacy, but a narrow focus on cookies obscures the broader privacy discussion because it’s just one way in which users can be tracked across sites,” said Ben Galbraith, Chrome’s director of product management. “This is a complex problem, and simple, blunt cookie blocking solutions force tracking into more opaque practices.”

There are other tracking techniques — and the privacy arms race will get harder. But saying things are too complicated is also a way of not doing anything.

“Our viewpoint is to deal with the biggest problem first, but anticipate where the ecosystem will shift and work on protecting against those things as well,” said Peter Dolanjski, Firefox’s product lead.

Both Google and Mozilla said they’re working on fighting “fingerprinting,” a way to sniff out other markers in your computer. Firefox is already testing its capabilities and plans to activate them soon.

Choosing a browser is no longer just about speed and convenience — it’s also about data defaults.

It’s true that Google usually obtains consent before gathering data, and offers a lot of knobs you can adjust to opt out of tracking and targeted advertising. But its controls often feel like a shell game that results in us sharing more personal data.

I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn’t cause anybody’s browsing history to be “synced” unless they specifically opted in — but I found mine was being sent to Google and don’t recall ever asking for extra surveillance. (You can turn off the Gmail auto-login by searching “Gmail” in Chrome settings and switching off “Allow Chrome sign-in.”)

After the sign-in shift, Johns Hopkins associate professor Matthew Green made waves in the computer science world when he blogged he was done with Chrome. “I lost faith,” he told me. “It only takes a few tiny changes to make it very privacy unfriendly.”

When you use Chrome, signing into Gmail automatically logs in the browser to your Google account. When “sync” is also on, Google receives your browsing history. (Geoffrey Fowler/The Washington Post)

There are ways to defang Chrome, which is much more complicated than just using “Incognito Mode.” But it’s much easier to switch to a browser not owned by an advertising company.

Like Green, I’ve chosen Firefox, which works across phones, tablets, PCs and Macs. Apple’s Safari is also a good option on Macs, iPhones and iPads, and the niche Brave browser goes even further in trying to jam the ad-tech industry.

What does switching to Firefox cost you? It’s free, and downloading a different browser is much simpler than changing phones.

In 2017, Mozilla launched a new version of Firefox called Quantum that made it considerably faster. In my tests, it has felt almost as fast as Chrome, though benchmark tests have found it can be slower in some contexts. Firefox says it’s better about managing memory if you use lots and lots of tabs.

Switching means you’ll have to move your bookmarks, and Firefox offers tools to help. Shifting passwords is easy if you use a password manager. And most browser add-ons are available, though it’s possible you won’t find your favorite.

Mozilla has challenges to overcome. Among privacy advocates, the nonprofit is known for caution. It took a year longer than Apple to make cookie blocking a default.

And as a nonprofit, it earns money when people make searches in the browser and click on ads — which means its biggest source of income is Google. Mozilla’s chief executive says the company is exploring new paid privacy services to diversify its income.

Its biggest risk is that Firefox might someday run out of steam in its battle with the Chrome behemoth. Even though it’s the No. 2 desktop browser, with about 10 percent of the market, major sites could decide to drop support, leaving Firefox scrambling.

If you care about privacy, let’s hope for another David and Goliath outcome.

LabCorp: 7.7 Million Consumers Hit in Collections Firm Breach

Sadly, I am not surprised. As I have said countless times, until there are teeth in breach laws, no one in corporate America, or government for that matter, will spend the money needed to seriously improve cyber security. With a breach of this size, LabCorp, Quest, etc. should be wound down and their C-suite executives held legally responsible.

QUOTE

Medical testing giant LabCorp said today that personal and financial data on some 7.7 million consumers were exposed by a breach at a third-party billing collections firm. That third party — the American Medical Collection Agency (AMCA) — also recently notified competing firm Quest Diagnostics that an intrusion in its payments Web site exposed personal, financial and medical data on nearly 12 million Quest patients.

Just a few days ago, the news was all about how Quest had suffered a major breach. But today’s disclosure by LabCorp suggests we are nowhere near done hearing about other companies with millions of consumers victimized because of this incident: The AMCA is a New York company with a storied history of aggressively collecting debt for a broad range of businesses, including medical labs and hospitals, direct marketers, telecom companies, and state and local traffic/toll agencies.

In a filing today with the U.S. Securities and Exchange Commission, LabCorp said it learned that the breach at AMCA persisted between Aug. 1, 2018 and March 30, 2019. It said the information exposed could include first and last name, date of birth, address, phone, date of service, provider, and balance information.

“AMCA’s affected system also included credit card or bank account information that was provided by the consumer to AMCA (for those who sought to pay their balance),” the filing reads. “LabCorp provided no ordered test, laboratory results, or diagnostic information to AMCA. AMCA has advised LabCorp that Social Security Numbers and insurance identification information are not stored or maintained for LabCorp consumers.”

LabCorp further said the AMCA has informed LabCorp “it is in the process of sending notices to approximately 200,000 LabCorp consumers whose credit card or bank account information may have been accessed. AMCA has not yet provided LabCorp a list of the affected LabCorp consumers or more specific information about them.”

The LabCorp disclosure comes just days after competing lab testing firm Quest Diagnostics disclosed that the hack of AMCA exposed the personal, financial and medical data on approximately 11.9 million patients.

Quest said it first heard from the AMCA about the breach on May 14, but that it wasn’t until two weeks later that AMCA disclosed the number of patients affected and what information was accessed, which includes financial information (e.g., credit card numbers and bank account information), medical information and Social Security Numbers.

Quest says it has since stopped doing business with the AMCA and has hired a security firm to investigate the incident. Much like LabCorp, Quest also alleges the AMCA still hasn’t said which 11.9 million patients were impacted and that the company was withholding information about the incident.

The AMCA declined to answer any questions about whether the breach of its payments page impacted anyone who entered payment data into the company’s site during the breach. But through an outside PR firm, it issued the following statement:

“We are investigating a data incident involving an unauthorized user accessing the American Medical Collection Agency system,” reads a written statement attributed to the AMCA. “Upon receiving information from a security compliance firm that works with credit card companies of a possible security compromise, we conducted an internal review, and then took down our web payments page.”

The statement continues:

“We hired a third-party external forensics firm to investigate any potential security breach in our systems, migrated our web payments portal services to a third-party vendor, and retained additional experts to advise on, and implement, steps to increase our systems’ security. We have also advised law enforcement of this incident. We remain committed to our system’s security, data privacy, and the protection of personal information.”

Firefox Bug – Patch now

QUOTE

Mozilla has released an emergency critical update for Firefox to squash a zero-day vulnerability that is under active attack.

The Firefox 67.0.3 and ESR 60.7.1 builds include a patch for CVE-2019-11707. The vulnerability is a type confusion bug in the way Firefox handles JavaScript objects in Array.pop. By manipulating the object in the array, malicious JavaScript on a webpage could get the ability to remotely execute code without any user interaction.

This is a bad thing.

What’s worse, Mozilla says it has already received reports that the flaw is being actively exploited in the wild by miscreants, making it critical for users to install the latest patched versions of the browser.

Fortunately, because Mozilla automatically updates Firefox with new patches and bug fixes, Linux, Mac, and Windows users can install the patch with a simple browser restart.

Credit for the discovery and disclosure of the bug was given to Samuel Groß of Project Zero.