
Apple contractors ‘regularly hear confidential details’ on Siri recordings

Why anyone in their right mind would use a “smart speaker” or digital assistant is beyond me. Apple got caught with its pants down and has “suspended” the practice. That said, can you trust them, or Google, or Amazon? No. Just say no to this technology. There will always be leaks. “What happens on your iPhone goes to our contractors” in this case.

Quote

Apple contractors regularly hear confidential medical information, drug deals, and recordings of couples having sex, as part of their job providing quality control, or “grading”, the company’s Siri voice assistant, the Guardian has learned.

Although Apple does not explicitly disclose it in its consumer-facing privacy documentation, a small proportion of Siri recordings are passed on to contractors working for the company around the world. They are tasked with grading the responses on a variety of factors, including whether the activation of the voice assistant was deliberate or accidental, whether the query was something Siri could be expected to help with and whether Siri’s response was appropriate.

Apple says the data “is used to help Siri and dictation … understand you better and recognise what you say”.

But the company does not explicitly state that that work is undertaken by humans who listen to the pseudonymised recordings.

Apple told the Guardian: “A small portion of Siri requests are analysed to improve Siri and dictation. User requests are not associated with the user’s Apple ID. Siri responses are analysed in secure facilities and all reviewers are under the obligation to adhere to Apple’s strict confidentiality requirements.” The company added that a very small random subset, less than 1% of daily Siri activations, are used for grading, and those used are typically only a few seconds long.

A whistleblower working for the firm, who asked to remain anonymous due to fears over their job, expressed concerns about this lack of disclosure, particularly given the frequency with which accidental activations pick up extremely sensitive personal information.

Siri can be accidentally activated when it mistakenly hears its “wake word”, the phrase “hey Siri”. Those mistakes can be understandable – a BBC interview about Syria was interrupted by the assistant last year – or less so. “The sound of a zip, Siri often hears as a trigger,” the contractor said. The service can also be activated in other ways. For instance, if an Apple Watch detects it has been raised and then hears speech, Siri is automatically activated.

The whistleblower said: “There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data.”

That accompanying information may be used to verify whether a request was successfully dealt with. In its privacy documents, Apple says the Siri data “is not linked to other data that Apple may have from your use of other Apple services”. There is no specific name or identifier attached to a record and no individual recording can be easily linked to other recordings.

Accidental activations led to the receipt of the most sensitive data that was sent to Apple. Although Siri is included on most Apple devices, the contractor highlighted the Apple Watch and the company’s HomePod smart speaker as the most frequent sources of mistaken recordings. “The regularity of accidental triggers on the watch is incredibly high,” they said. “The watch can record some snippets that will be 30 seconds – not that long but you can gather a good idea of what’s going on.”

Sometimes, “you can definitely hear a doctor and patient, talking about the medical history of the patient. Or you’d hear someone, maybe with car engine background noise – you can’t say definitely, but it’s a drug deal … you can definitely hear it happening. And you’d hear, like, people engaging in sexual acts that are accidentally recorded on the pod or the watch.”

The contractor said staff were encouraged to report accidental activations “but only as a technical problem”, with no specific procedures to deal with sensitive recordings. “We’re encouraged to hit targets, and get through work as fast as possible. The only function for reporting what you’re listening to seems to be for technical problems. There’s nothing about reporting the content.”

As well as the discomfort they felt listening to such private information, the contractor said they were motivated to go public about their job because of their fears that such information could be misused. “There’s not much vetting of who works there, and the amount of data that we’re free to look through seems quite broad. It wouldn’t be difficult to identify the person that you’re listening to, especially with accidental triggers – addresses, names and so on.

“Apple is subcontracting out, there’s a high turnover. It’s not like people are being encouraged to have consideration for people’s privacy, or even consider it. If there were someone with nefarious intentions, it wouldn’t be hard to identify [people on the recordings].”

The contractor argued Apple should reveal to users this human oversight exists – and, specifically, stop publishing some of its jokier responses to Siri queries. Ask the personal assistant “are you always listening”, for instance, and it will respond with: “I only listen when you’re talking to me.”

That is patently false, the contractor said. They argued that accidental triggers are too regular for such a lighthearted response.

Apple is not alone in employing human oversight of its automatic voice assistants. In April, Amazon was revealed to employ staff to listen to some Alexa recordings, and earlier this month, Google workers were found to be doing the same with Google Assistant.

Apple differs from those companies in some ways, however. For one, Amazon and Google allow users to opt out of some uses of their recordings; Apple offers no similar choice short of disabling Siri entirely. According to Counterpoint Research, Apple has 35% of the smartwatch market, more than three times its nearest competitor Samsung, and more than its next six biggest competitors combined.

The company values its reputation for user privacy highly, regularly wielding it as a competitive advantage against Google and Amazon. In January, it bought a billboard at the Consumer Electronics Show in Las Vegas announcing that “what happens on your iPhone stays on your iPhone”.

Capital One – Hacker swipes personal info on 106 million US, Canadian credit card applicants

Maybe if Capital One stopped practicing age discrimination and hired more experienced IT workers (a problem at numerous companies, btw), it could have avoided the breach. In my opinion, this breach should result in crippling fines and C-suite execs being held criminally liable. Of course that will never happen in the U.S.
Quote

A hacker raided Capital One’s cloud storage buckets and stole personal information on 106 million credit card applicants in America and Canada.

The swiped data includes 140,000 US social security numbers and 80,000 bank account numbers, we’re told, as well as one million Canadian social insurance numbers, plus names, addresses, phone numbers, dates of birth, and reported incomes.

The pilfered data was submitted to Capital One by credit card hopefuls between 2005 and early 2019. The info was siphoned between March this year and July 17, and Capital One learned of the intrusion on July 19.

Seattle software engineer Paige A. Thompson, aka “erratic,” aka 0xA3A97B6C on Twitter, was suspected of nicking the data, and was collared by the FBI at her home on Monday this week. The 33-year-old has already appeared in court, charged with violating the US Computer Fraud and Abuse Act. She will remain in custody until her next hearing on August 1.

According to the Feds in their court paperwork [PDF], Thompson broke into Capital One’s cloud-hosted storage, believed to be Amazon Web Services’ S3 buckets, and downloaded their contents.

The financial giant said the intruder exploited a “configuration vulnerability,” while the Feds said a “firewall misconfiguration permitted commands to reach and be executed” by Capital One’s cloud-based storage servers. US prosecutors said the thief slipped past a “misconfigured web application firewall.”

Either way, someone using VPN service IPredator and the anonymizing Tor network illegally accessed the bank’s in-the-cloud systems, and downloaded citizens’ private data. This “misconfiguration” has since been fixed.

Thompson was, for what it’s worth, an engineer at Amazon Web Services, specifically on its cloud storage systems, between 2015 and 2016, and worked on various software projects in her spare time as well as running her own server-hosting outfit…
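Since the root cause here was a cloud storage misconfiguration, it is worth noting that bucket owners can audit the basics themselves. Below is a minimal sketch, assuming an AWS account with boto3 credentials configured; it flags S3 buckets that lack a public-access block or whose ACLs grant access to everyone. This is an illustrative hardening check, not a reconstruction of how the Capital One intruder got in.

```python
# Hedged sketch: audit S3 buckets for missing public-access blocks
# and for ACL grants to AllUsers/AuthenticatedUsers.
import boto3
from botocore.exceptions import ClientError

PUBLIC_GRANTEES = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

def audit_buckets():
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            # Raises if no public-access block is configured for the bucket
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"{name}: no public-access block configured")
        # Flag ACL grants to the world or to any authenticated AWS user
        for grant in s3.get_bucket_acl(Bucket=name)["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
                print(f"{name}: ACL grants {grant['Permission']} publicly")

if __name__ == "__main__":
    audit_buckets()
```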

What You Should Know About the Equifax Data Breach Settlement

Brian Krebs has a good summary

Excerpt

Q: What happened?

A: If the terms of the settlement are approved by a court, the Federal Trade Commission says Equifax will be required to spend up to $425 million helping consumers who can demonstrate they were financially harmed by the breach. The company also will provide up to 10 years of free credit monitoring to those who had their data exposed.

Q: What about the rest of the money in the settlement?

A: An as-yet undisclosed amount will go to pay lawyers’ fees for the plaintiffs.

Q: $650 million seems like a lot. Is that some kind of record?

A: If not, it’s pretty close. The New York Times reported earlier today that it was thought to be the largest settlement ever paid by a company over a data breach, but that statement doesn’t appear anywhere in their current story.

Q: Hang on…148 million affected consumers…out of that $425 million pot that comes to just $2.87 per victim, right?

A: That’s one way of looking at it. But as always, the devil is in the details. You won’t see a penny or any other benefit unless you do something about it, and how much you end up costing the company (within certain limits) is up to you.

The Times reports that the proposed settlement assumes that only around seven million people will sign up for their credit monitoring offers. “If more do, Equifax’s costs for providing it could rise meaningfully,” the story observes.

Q: Okay. What can I do?

A: You can visit www.equifaxbreachsettlement.com, although none of this will be official or on offer until a court approves the settlement.

Q: Uh, that doesn’t look like Equifax’s site…

A: Good eyes! It’s not. It’s run by a third party. But we should probably just be grateful for that; given Equifax’s total dumpster fire of a public response to the breach, the company has shown itself incapable of operating (let alone securing) a properly functioning Web site.

Q: What can I get out of this?

A: In a nutshell, affected consumers are eligible to apply for one or more remedies, including:

–Free credit monitoring: At least three years of credit monitoring via all three major bureaus simultaneously, including Equifax, Experian and Trans Union. The settlement also envisions up to six more years of single bureau monitoring through Experian. Or, if you don’t want to take advantage of the credit monitoring offers, you can opt instead for a $125 cash payment. You can’t get both.

–Reimbursement: …For the time you spent remedying identity theft or misuse of your personal information caused by the breach, or purchasing credit monitoring or credit reports. This is capped at 20 total hours at $25 per hour ($500). Total cash reimbursement payment will not exceed $20,000 per consumer.

–Help with ongoing identity theft issues: Up to seven years of “free assisted identity restoration services.” Again, the existing breach settlement page is light on specifics there.

Q: Does this cover my kids/dependents, too?

A: The FTC says if you were a minor in May 2017 (when Equifax first learned of the breach), you are eligible for a total of 18 years of free credit monitoring.

Q: How do I take advantage of any of these?

A: You can’t yet. The settlement has to be approved first. The settlement Web site says to check back again later. In addition to checking the breach settlement site periodically, consumers can sign up with the FTC to receive email updates about this settlement.

Update: The eligibility site is now active, at this link.

The settlement site said consumers also can call 1-833-759-2982 for more information. Press #2 on your phone’s keypad if you want to skip the 1-minute preamble and get straight into the queue to speak with a real person.

KrebsOnSecurity dialed in to ask for more details on the “free assisted identity restoration services,” and the person who took my call said they’d need some basic information about me (name, address and phone number) in order to proceed. I gave him a number and a name, and after checking with someone he came back and said the restoration services would be offered by Equifax, but confirmed that affected consumers would still have to apply for it.

He added that the Equifaxbreachsettlement.com site will soon include a feature that lets visitors check to see if they’re eligible, but also confirmed that just checking eligibility won’t entitle one to any of the above benefits: Consumers will still need to file a claim through the site (when it’s available to do so).

Well, the real issue is that the company is allowed to continue operations. It should have been wound down and the C-suite execs held liable, perhaps criminally liable.

…and as Brian’s and my senator, Sen. Mark Warner (D-Va.), said:

“Americans don’t choose to have companies like Equifax collecting their data – by the nature of their business models, credit bureaus collect your personal information whether you want them to or not. In light of that, the penalties for failing to secure that data should be appropriately steep. While I’m happy to see that customers who have been harmed as a result of Equifax’s shoddy cybersecurity practices will see some compensation, we need structural reforms and increased oversight of credit reporting agencies in order to make sure that this never happens again.”

Until breach laws have real teeth that hold C-suite executives criminally liable and make such a breach threaten the survival of the company, nothing will change. Even the EU General Data Protection Regulation does not go far enough (but it goes much further than the laughable U.S. regs).

US Coast Guard Warns Again: Cyber Incident Exposes Potential Vulnerabilities Onboard Commercial Vessels

For the second time this year, the US Coast Guard has issued a warning about cybersecurity practices aboard commercial sea vessels. Full US Coast Guard Alert Here

To those of us in cybersecurity, the recommendations are fairly standard. But for the maritime industry, they seem new.

In order to improve the resilience of vessels and facilities, and to protect the safety of the waterways in which they operate, the U.S. Coast Guard strongly recommends that vessel and facility owners, operators and other responsible parties take the following basic measures to improve their cybersecurity:

• Segment Networks. “Flat” networks allow an adversary to easily maneuver to any system connected to that network. Segment your networks into “subnetworks” to make it harder for an adversary to gain access to essential systems and equipment.
• Per-user Profiles & Passwords. Eliminate the use of generic log-in credentials for multiple personnel. Create network profiles for each employee. Require employees to enter a password and/or insert an ID card to log on to onboard equipment. Limit access/privileges to only those levels necessary to allow each user to do his or her job. Administrator accounts should be used sparingly and only when necessary.
• Be Wary of External Media. This incident revealed that it is common practice for cargo data to be transferred at the pier, via USB drive. Those USB drives were routinely plugged directly into the ship’s computers without prior scanning for malware. It is critical that any external media is scanned for malware on a standalone system before being plugged into any shipboard network. Never run executable media from an untrusted source.
• Install Basic Antivirus Software. Basic cyber hygiene can stop incidents before they impact operations. Install and routinely update basic antivirus software.
• Don’t Forget to Patch. Patching is no small task, but it is the core of cyber hygiene. Vulnerabilities impacting operating systems and applications are constantly changing – patching is critical to effective cybersecurity.

Maintaining effective cybersecurity is not just an IT issue, but is rather a fundamental operational imperative in the 21st century maritime environment. The Coast Guard therefore strongly encourages all vessel and facility owners and operators to conduct cybersecurity assessments to better understand the extent of their cyber vulnerabilities.
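To make the Coast Guard’s external-media recommendation concrete, here is a minimal sketch of a standalone scanning station, assuming a Linux box with ClamAV installed and the USB drive mounted at the hypothetical path /media/usb:

```python
# Hedged sketch: scan a mounted USB drive with ClamAV before it is
# allowed anywhere near a shipboard network.
import subprocess
import sys

MOUNT_POINT = "/media/usb"  # hypothetical mount point for the drive

def scan_media(path: str) -> bool:
    """Return True only if clamscan reports the media clean."""
    result = subprocess.run(
        ["clamscan", "-r", "--infected", path],
        capture_output=True, text=True,
    )
    print(result.stdout)
    # clamscan exit codes: 0 = clean, 1 = malware found, 2 = error
    return result.returncode == 0

if __name__ == "__main__":
    if scan_media(MOUNT_POINT):
        print("Media scanned clean.")
    else:
        sys.exit("Scan failed or found malware - keep this drive off the ship's systems.")
```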

We recommend using a full UTM firewall on all commercial vessels that have internet connectivity. In addition, individual connected endpoint devices need to have active anti-malware software installed and running. L4 Networks can help! Please contact us.

Why I’m done with Chrome – Matthew Green

This article was published in September 2018. I had warned people about Chrome for numerous years before that. It is not just the log-in policy; it is all the other data collection, the lack of a way to automatically dump cookies and private data on exit, and so forth. But all that said, it is still worth re-posting this, as I think Matt Green did a good job. And since no one listens to me (boo hoo), maybe they will listen to Matt. And listening is needed, as the likes of Google, Facebook and Amazon are just getting stronger and more intrusive. Fight back!

QUOTE

Why I’m done with Chrome

This blog is mainly reserved for cryptography, and I try to avoid filling it with random “someone is wrong on the Internet” posts. After all, that’s what Twitter is for! But from time to time something bothers me enough that I have to make an exception. Today I wanted to write specifically about Google Chrome, how much I’ve loved it in the past, and why — due to Chrome’s new user-unfriendly forced login policy — I won’t be using it going forward.
A brief history of Chrome

When Google launched Chrome ten years ago, it seemed like one of those rare cases where everyone wins. In 2008, the browser market was dominated by Microsoft, a company with an ugly history of using browser dominance to crush their competitors. Worse, Microsoft was making noises about getting into the search business. This posed an existential threat to Google’s internet properties.

In this setting, Chrome was a beautiful solution. Even if the browser never produced a scrap of revenue for Google, it served its purpose just by keeping the Internet open to Google’s other products. As a benefit, the Internet community would receive a terrific open source browser with the best development team money could buy. This might be kind of sad for Mozilla (who have paid a high price due to Chrome) but overall it would be a good thing for Internet standards.

For many years this is exactly how things played out. Sure, Google offered an optional “sign in” feature for Chrome, which presumably vacuumed up your browsing data and shipped it off to Google, but that was an option. An option you could easily ignore. If you didn’t take advantage of this option, Google’s privacy policy was clear: your data would stay on your computer where it belonged.
What changed?

A few weeks ago Google shipped an update to Chrome that fundamentally changes the sign-in experience. From now on, every time you log into a Google property (for example, Gmail), Chrome will automatically sign the browser into your Google account for you. It’ll do this without asking, or even explicitly notifying you. (However, and this is important: Google developers claim this will not actually start synchronizing your data to Google — yet. See further below.)

Your sole warning — in the event that you’re looking for it — is that your Google profile picture will appear in the upper-right hand corner of the browser window. I noticed mine the other day:

The change hasn’t gone entirely unnoticed: it received some vigorous discussion on sites like Hacker News. But the mainstream tech press seems to have ignored it completely. This is unfortunate — and I hope it changes — because this update has huge implications for Google and the future of Chrome.

In the rest of this post, I’m going to talk about why this matters. From my perspective, this comes down to basically four points:

1. Nobody on the Chrome development team can provide a clear rationale for why this change was necessary, and the explanations they’ve given don’t make any sense.
2. This change has enormous implications for user privacy and trust, and Google seems unable to grapple with this.
3. The change makes a hash out of Google’s own privacy policies for Chrome.
4. Google needs to stop treating customer trust like it’s a renewable resource, because they’re screwing up badly.

I warn you that this will get a bit ranty. Please read on anyway.
Google’s stated rationale makes no sense

The new feature that triggers this auto-login behavior is called “Identity consistency between browser and cookie jar” (HN). After conversations with two separate Chrome developers on Twitter (who will remain nameless — mostly because I don’t want them to hate me), I was given a rationale for the change.

To paraphrase this explanation: if you’re in a situation where you’ve already signed into Chrome and your friend shares your computer, then you can wind up accidentally having your friend’s Google cookies get uploaded into your account. This seems bad, and sure, we want to avoid that.

But note something critical about this scenario. In order for this problem to apply to you, you already have to be signed into Chrome. There is absolutely nothing in this problem description that seems to affect users who chose not to sign into the browser in the first place.

So if signed-in users are your problem, why would you make a change that forces signed-out users to become signed in? I could waste a lot more ink wondering about the mismatch between the stated “problem” and the “fix”, but I won’t bother: because nobody on the public-facing side of the Chrome team has been able to offer an explanation that squares this circle.

And this matters, because “sync” or not…
The change has serious implications for privacy and trust

The Chrome team has offered a single defense of the change. They point out that just because your browser is “signed in” does not mean it’s uploading your data to Google’s servers. Specifically:

While Chrome will now log into your Google account without your consent (following a Gmail login), Chrome will not activate the “sync” feature that sends your data to Google. That requires an additional consent step. So in theory your data should remain local.

This is my paraphrase. But I think it’s fair to characterize the general stance of the Chrome developers I spoke with as: without this “sync” feature, there’s nothing wrong with the change they’ve made, and everything is just fine.

This is nuts, for several reasons.

User consent matters. For ten years I’ve been asked a single question by the Chrome browser: “Do you want to log in with your Google account?” And for ten years I’ve said no thanks. Chrome still asks me that question — it’s just that now it doesn’t honor my decision.

The Chrome developers want me to believe that this is fine, since (phew!) I’m still protected by one additional consent guardrail. The problem here is obvious:

If you didn’t respect my lack of consent on the biggest user-facing privacy option in Chrome (and didn’t even notify me that you had stopped respecting it!) why should I trust any other consent option you give me? What stops you from changing your mind on that option in a few months, when we’ve all stopped paying attention?

The fact of the matter is that I’d never even heard of Chrome’s “sync” option — for the simple reason that up until September 2018, I had never logged into Chrome. Now I’m forced to learn these new terms, and hope that the Chrome team keeps promises to keep all of my data local as the barriers between “signed in” and “not signed in” are gradually eroded away.

The Chrome sync UI is a dark pattern. Now that I’m forced to log into Chrome, I’m faced with a brand new menu I’ve never seen before. It looks like this:

Does that big blue button indicate that I’m already synchronizing my data to Google? That’s scary! Wait, maybe it’s an invitation to synchronize! If so, what happens to my data if I click it by accident? (I won’t give the answer away; you should go find out. Just make sure you don’t accidentally upload all your data in the process. It can happen quickly.)

In short, Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern. Whether intentional or not, it has the effect of making it easy for people to activate sync without knowing it, or to think they’re already syncing and thus there’s no additional cost to increasing Google’s access to their data.

Don’t take my word for it. It even gives (former) Google people the creeps.

Big brother doesn’t need to actually watch you. We tell things to our web browsers that we wouldn’t tell our best friends. We do this with some vague understanding that yes, the Internet spies on us. But we also believe that this spying is weak and probabilistic. It’s not like someone’s standing over our shoulder checking our driver’s license with each click.

What happens if you take that belief away? There are numerous studies indicating that even the perception of surveillance can significantly magnify the degree of self-censorship users force on themselves. Will users feel comfortable browsing for information on sensitive mental health conditions — if their real name and picture are always loaded into the corner of their browser? The Chrome development team says “yes”. I think they’re wrong.

For all we know, the new approach has privacy implications even if sync is off. The Chrome developers claim that with “sync” off, a signed-in Chrome has no privacy implications. This might be true. But when pressed on the actual details, nobody seems quite sure.

For example, if I have my browser logged out, then I log in and turn on “sync”, does all my past (logged-out) data get pushed to Google? What happens if I’m forced to be logged in, and then subsequently turn on “sync”? Nobody can quite tell me if the data uploaded in these conditions is the same. These differences could really matter.
The changes make a hash of the Chrome privacy policy

The Chrome privacy policy is a remarkably simple document. Unlike most privacy policies, it was clearly written as a promise to Chrome’s users — rather than as the usual lawyer CYA. Functionally, it describes two browsing modes: “Basic browser mode” and “signed-in mode”. These modes have very different properties. Read for yourself:

In “basic browser mode”, your data is stored locally. In “signed-in” mode, your data gets shipped to Google’s servers. This is easy to understand. If you want privacy, don’t sign in. But what happens if your browser decides to switch you from one mode to the other, all on its own?

Technically, the privacy policy is still accurate. If you’re in basic browsing mode, your data is still stored locally. The problem is that you no longer get to decide which mode you’re in. This makes a mockery out of whatever intentions the original drafters had. Maybe Google will update the document to reflect the new “sync” distinction that the Chrome developers have shared with me. We’ll see.

Update: After I tweeted about my concerns, I received a DM on Sunday from two different Chrome developers, each telling me the good news: Google is updating their privacy policy to reflect the new operation of Chrome. I think that’s, um, good news. But I also can’t help but note that updating a privacy policy on a weekend is an awful lot of trouble to go to for a change that… apparently doesn’t even solve a problem for signed-out users.
Trust is not a renewable resource

For a company that sustains itself by collecting massive amounts of user data, Google has managed to avoid the negative privacy connotations we associate with, say, Facebook. This isn’t because Google collects less data, it’s just that Google has consistently been more circumspect and responsible with it.

Where Facebook will routinely change privacy settings and apologize later, Google has upheld clear privacy policies that it doesn’t routinely change. Sure, when it collects, it collects gobs of data, but in the cases where Google explicitly makes user security and privacy promises — it tends to keep them. This seems to be changing.

Google’s reputation is hard-earned, and it can be easily lost. Changes like this burn a lot of trust with users. If the change is solving an absolutely critical problem for users, then maybe a loss of trust is worth it. I wish Google could convince me that was the case.
Conclusion

This post has gone on more than long enough, but before I finish I want to address two common counterarguments I’ve heard from people I generally respect in this area.

One argument is that Google already spies on you via cookies and its pervasive advertising network and partnerships, so what’s the big deal if they force your browser into a logged-in state? One individual I respect described the Chrome change as “making you wear two name tags instead of one”. I think this objection is silly both on moral grounds — just because you’re violating my privacy doesn’t make it ok to add a massive new violation — but also because it’s objectively silly. Google has spent millions of dollars adding additional tracking features to both Chrome and Android. They aren’t doing this for fun; they’re doing this because it clearly produces data they want.

The other counterargument (if you want to call it that) goes like this: I’m a n00b for using Google products at all, and of course they were always going to do this. The extreme version holds that I ought to be using lynx+Tor and DJB’s custom search engine, and if I’m not I pretty much deserve what’s coming to me.

I reject this argument. I think it’s entirely possible for a company like Google to make good, usable open source software that doesn’t massively violate user privacy. For ten years I believe Google Chrome did just this.

Why they’ve decided to change, I don’t know. It makes me sad.

Goodbye, Chrome: Google’s web browser has become spy software

No shit, Sherlock… I’ve been saying that about Chrome for years. Nobody listened to me. And you think Gmail is better? Guess again. The entire Google ecosystem is set up for one thing: get the maximum amount of private user data and sell it to advertisers. You are the product and there is no free lunch. But do not listen to me, someone with over 30 years in IT and cybersecurity; trust the Washington Post talking head instead (although I am happy he published this).

The article, however, misses several points:

1. Unlike Firefox, Chrome has no setting to automatically dump private data upon exit of the browser. The setting must be enabled in Firefox, or Firefox will approximate Chrome (see the sketch after this list).
2. Firefox ships with telemetry turned on, SOME of which can only be turned off via advanced about:config actions – shame on Firefox.
3. The article is silent on the add-ons for both Chrome and Firefox that make them more private. Example: they say they are working on a fingerprinting canvas blocker, yet that has been available for years as an add-on.
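For points 1 and 2, here is a minimal user.js sketch (drop it into your Firefox profile directory). Pref names are current as of roughly Firefox 67 and should be verified in about:config, since they change between releases:

```
// user.js - hedged sketch; verify each pref in about:config first.

// Point 1: automatically dump private data on exit
user_pref("privacy.sanitize.sanitizeOnShutdown", true);
user_pref("privacy.clearOnShutdown.cookies", true);
user_pref("privacy.clearOnShutdown.cache", true);
user_pref("privacy.clearOnShutdown.history", true);

// Point 2: telemetry that has no simple UI toggle
user_pref("toolkit.telemetry.enabled", false);
user_pref("toolkit.telemetry.unified", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
```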

Got to run…

Quote

Our latest privacy experiment found Chrome ushered more than 11,000 tracker cookies into our browser — in a single week. Here’s why Firefox is better

You open your browser to look at the Web. Do you know who is looking back at you?

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web.

This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s log-in pages.

And that’s not the half of it.

Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)

Firefox isn’t perfect — it still defaults searches to Google and permits some other tracking. But it doesn’t share browsing data with Mozilla, which isn’t in the data-collection business.

At a minimum, Web snooping can be annoying. Cookies are how a pair of pants you look at in one site end up following you around in ads elsewhere. More fundamentally, your Web history — like the color of your underpants — ain’t nobody’s business but your own. Letting anyone collect that data leaves it ripe for abuse by bullies, spies and hackers.

Google’s product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they’re working on new ones for cookies. But they also said they have to get the right balance with a “healthy Web ecosystem” (read: ad business).

Firefox’s product managers told me they don’t see privacy as an “option” relegated to controls. They’ve launched a war on surveillance, starting this month with “enhanced tracking protection” that blocks nosy cookies by default on new Firefox installations. But to succeed, first Firefox has to persuade people to care enough to overcome the inertia of switching.

It’s a tale of two browsers — and the diverging interests of the companies that make them.

A decade ago, Chrome and Firefox were taking on Microsoft’s lumbering giant Internet Explorer. The upstart Chrome solved real problems for consumers, making the Web safer and faster. Today it dominates more than half the market.

Lately, however, many of us have realized that our privacy is also a major concern on the Web — and Chrome’s interests no longer always seem aligned with our own.

That’s most visible in the fight over cookies. These code snippets can do helpful things, like remembering the contents of your shopping cart. But now many cookies belong to data companies, which use them to tag your browser so they can follow your path like crumbs in the proverbial forest.

They’re everywhere — one study found third-party tracking cookies on 92 percent of websites. The Washington Post website has about 40 tracker cookies, average for a news site, which the company said in a statement are used to deliver better-targeted ads and track ad performance.

You’ll also find them on sites without ads: Both Aetna and the FSA service said the cookies on their sites help measure their own external marketing campaigns.

The blame for this mess belongs to the entire advertising, publishing and tech industries. But what responsibility does a browser have in protecting us from code that isn’t doing much more than spying?

To see what cookies Firefox has blocked for a Web page, tap the shield icon, then “Blocking Tracker Cookies” to pull up a list. (Geoffrey Fowler/The Washington Post)

In 2015, Mozilla debuted a version of Firefox that included anti-tracking tech, turned on only in its “private” browsing mode. After years of testing and tweaking, that’s what it activated this month on all websites. This isn’t about blocking ads — those still come through. Rather, Firefox is parsing cookies to decide which ones to keep for critical site functions and which ones to block for spying.

Apple’s Safari browser, used on iPhones, also began applying “intelligent tracking protection” to cookies in 2017, using an algorithm to decide which ones were bad.

Chrome, so far, remains open to all cookies by default. Last month, Google announced a new effort to force third-party cookies to better self-identify, and said we can expect new controls for them after it rolls out. But it wouldn’t offer a timeline or say whether it would default to stopping trackers.

I’m not holding my breath. Google itself, through its Doubleclick and other ad businesses, is the No. 1 cookie maker — the Mrs. Fields of the Web. It’s hard to imagine Chrome ever cutting off Google’s moneymaker.

“Cookies play a role in user privacy, but a narrow focus on cookies obscures the broader privacy discussion because it’s just one way in which users can be tracked across sites,” said Ben Galbraith, Chrome’s director of product management. “This is a complex problem, and simple, blunt cookie blocking solutions force tracking into more opaque practices.”

There are other tracking techniques — and the privacy arms race will get harder. But saying things are too complicated is also a way of not doing anything.

“Our viewpoint is to deal with the biggest problem first, but anticipate where the ecosystem will shift and work on protecting against those things as well,” said Peter Dolanjski, Firefox’s product lead.

Both Google and Mozilla said they’re working on fighting “fingerprinting,” a way to sniff out other markers in your computer. Firefox is already testing its capabilities and plans to activate them soon.

Choosing a browser is no longer just about speed and convenience — it’s also about data defaults.

It’s true that Google usually obtains consent before gathering data, and offers a lot of knobs you can adjust to opt out of tracking and targeted advertising. But its controls often feel like a shell game that results in us sharing more personal data.

I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn’t cause anybody’s browsing history to be “synced” unless they specifically opted in — but I found mine was being sent to Google and don’t recall ever asking for extra surveillance. (You can turn off the Gmail auto-login by searching “Gmail” in Chrome settings and switching off “Allow Chrome sign-in.”)

After the sign-in shift, Johns Hopkins associate professor Matthew Green made waves in the computer science world when he blogged he was done with Chrome. “I lost faith,” he told me. “It only takes a few tiny changes to make it very privacy unfriendly.”

When you use Chrome, signing into Gmail automatically logs in the browser to your Google account. When “sync” is also on, Google receives your browsing history. (Geoffrey Fowler/The Washington Post)

There are ways to defang Chrome, which is much more complicated than just using “Incognito Mode.” But it’s much easier to switch to a browser not owned by an advertising company.

Like Green, I’ve chosen Firefox, which works across phones, tablets, PCs and Macs. Apple’s Safari is also a good option on Macs, iPhones and iPads, and the niche Brave browser goes even further in trying to jam the ad-tech industry.

What does switching to Firefox cost you? It’s free, and downloading a different browser is much simpler than changing phones.

In 2017, Mozilla launched a new version of Firefox called Quantum that made it considerably faster. In my tests, it has felt almost as fast as Chrome, though benchmark tests have found it can be slower in some contexts. Firefox says it’s better about managing memory if you use lots and lots of tabs.

Switching means you’ll have to move your bookmarks, and Firefox offers tools to help. Shifting passwords is easy if you use a password manager. And most browser add-ons are available, though it’s possible you won’t find your favorite.

Mozilla has challenges to overcome. Among privacy advocates, the nonprofit is known for caution. It took a year longer than Apple to make cookie blocking a default.

And as a nonprofit, it earns money when people make searches in the browser and click on ads — which means its biggest source of income is Google. Mozilla’s chief executive says the company is exploring new paid privacy services to diversify its income.

Its biggest risk is that Firefox might someday run out of steam in its battle with the Chrome behemoth. Even though it’s the No. 2 desktop browser, with about 10 percent of the market, major sites could decide to drop support, leaving Firefox scrambling.

If you care about privacy, let’s hope for another David and Goliath outcome.

LabCorp: 7.7 Million Consumers Hit in Collections Firm Breach

Sadly, I am not surprised. As I have said countless times, until there are teeth in breach laws, no one in corporate America, or government for that matter, will spend the money needed to seriously increase cybersecurity. With a breach of this size, LabCorp, Quest, etc. should be wound down and their C-suite executives held legally responsible.

QUOTE

Medical testing giant LabCorp. said today personal and financial data on some 7.7 million consumers were exposed by a breach at a third-party billing collections firm. That third party — the American Medical Collection Agency (AMCA) — also recently notified competing firm Quest Diagnostics that an intrusion in its payments Web site exposed personal, financial and medical data on nearly 12 million Quest patients.

Just a few days ago, the news was all about how Quest had suffered a major breach. But today’s disclosure by LabCorp. suggests we are nowhere near done hearing about other companies with millions of consumers victimized because of this incident: The AMCA is a New York company with a storied history of aggressively collecting debt for a broad range of businesses, including medical labs and hospitals, direct marketers, telecom companies, and state and local traffic/toll agencies.

In a filing today with the U.S. Securities and Exchange Commission, LabCorp. said it learned that the breach at AMCA persisted between Aug. 1, 2018 and March 30, 2019. It said the information exposed could include first and last name, date of birth, address, phone, date of service, provider, and balance information.

“AMCA’s affected system also included credit card or bank account information that was provided by the consumer to AMCA (for those who sought to pay their balance),” the filing reads. “LabCorp provided no ordered test, laboratory results, or diagnostic information to AMCA. AMCA has advised LabCorp that Social Security Numbers and insurance identification information are not stored or maintained for LabCorp consumers.”

LabCorp further said the AMCA has informed LabCorp “it is in the process of sending notices to approximately 200,000 LabCorp consumers whose credit card or bank account information may have been accessed. AMCA has not yet provided LabCorp a list of the affected LabCorp consumers or more specific information about them.”

The LabCorp disclosure comes just days after competing lab testing firm Quest Diagnostics disclosed that the hack of AMCA exposed the personal, financial and medical data on approximately 11.9 million patients.

Quest said it first heard from the AMCA about the breach on May 14, but that it wasn’t until two weeks later that AMCA disclosed the number of patients affected and what information was accessed, which includes financial information (e.g., credit card numbers and bank account information), medical information and Social Security Numbers.

Quest says it has since stopped doing business with the AMCA and has hired a security firm to investigate the incident. Much like LabCorp, Quest also alleges the AMCA still hasn’t said which 11.9 million patients were impacted and that the company was withholding information about the incident.

The AMCA declined to answer any questions about whether the breach of its payment’s page impacted anyone who entered payment data into the company’s site during the breach. But through an outside PR firm, it issued the following statement:

“We are investigating a data incident involving an unauthorized user accessing the American Medical Collection Agency system,” reads a written statement attributed to the AMCA. “Upon receiving information from a security compliance firm that works with credit card companies of a possible security compromise, we conducted an internal review, and then took down our web payments page.”

The statement continues:

“We hired a third-party external forensics firm to investigate any potential security breach in our systems, migrated our web payments portal services to a third-party vendor, and retained additional experts to advise on, and implement, steps to increase our systems’ security. We have also advised law enforcement of this incident. We remain committed to our system’s security, data privacy, and the protection of personal information.”

Firefox Bug – Patch now

Quote

Mozilla has released an emergency critical update for Firefox to squash a zero-day vulnerability that is under active attack.

The Firefox 67.0.3 and ESR 60.7.1 builds include a patch for CVE-2019-11707. The vulnerability is a type confusion bug in the way Firefox handles JavaScript objects in Array.pop. By manipulating the object in the array, malicious JavaScript on a webpage could get the ability to remotely execute code without any user interaction.

This is a bad thing.

What’s worse, Mozilla says it has already received reports that the flaw is being actively exploited in the wild by miscreants, making it critical for users to install the latest patched versions of the browser.

Fortunately, because Mozilla automatically updates Firefox with new patches and bug fixes, Linux, Mac, and Windows users can install the patch with a simple browser restart.
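If you want to confirm you are on a fixed build, a quick sketch follows; it assumes the firefox binary is on your PATH and a plain release version string (ESR builds append a suffix this toy parser does not handle):

```python
# Hedged sketch: compare the installed Firefox version against the
# 67.0.3 release that patches CVE-2019-11707.
import subprocess

FIXED = (67, 0, 3)

def firefox_version() -> tuple:
    # `firefox --version` prints e.g. "Mozilla Firefox 67.0.3"
    out = subprocess.run(["firefox", "--version"],
                         capture_output=True, text=True).stdout
    return tuple(int(part) for part in out.strip().split()[-1].split("."))

if __name__ == "__main__":
    ver = firefox_version()
    status = "patched" if ver >= FIXED else "VULNERABLE to CVE-2019-11707"
    print(".".join(map(str, ver)), status)
```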

Credit for the discovery and disclosure of the bug was given to Samuel Groß of Project Zero.

Why Privacy Is an Antitrust Issue

Finally the mainstream media is saying what many of us have known for years. Too bad they do not put their money where their mouth is and sever all business ties with the likes of Facebook.

QUOTE

As Facebook has generated scandal after scandal in recent years, critics have started to wonder how we might use antitrust laws to rein in the company’s power. But many of the most pressing concerns about Facebook are its privacy abuses, which unlike price gouging, price discrimination or exclusive dealing, are not immediately recognized as antitrust violations. Is there an antitrust case to be made against Facebook on privacy grounds?

Yes, there is. In March, when Representative David N. Cicilline, Democrat of Rhode Island, called on the Federal Trade Commission to investigate Facebook’s potential violations of antitrust laws, he cited not only Facebook’s acquisitions (such as Instagram and WhatsApp), but also evidence that Facebook was “using its monopoly power to degrade” the quality of its service “below what a competitive marketplace would allow.”

It is this last point, which I made in a law journal article cited by Mr. Cicilline, that promises to change how antitrust law will protect the American public in the era of Big Tech: namely, that consumers can suffer at the hands of monopolies because companies like Facebook lock in users with promises to protect their data and privacy — only to break those promises once competitors in the marketplace have been eliminated.


To see what I mean, let’s go back to the mid-2000s, when Facebook was an upstart social media platform. To differentiate itself from the market leader, Myspace, Facebook publicly pledged itself to privacy. Privacy provided its competitive advantage, with the company going so far as to promise users, “We do not and will not use cookies to collect private information from any user.”

When Facebook later attempted to change this bargain with users, the threat of losing its customers to its competitors forced the company to reverse course. In 2007, for example, Facebook introduced a program that recorded users’ activity on third-party sites and inserted it into the News Feed. Following public outrage and a class-action lawsuit, Facebook ended the program. “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them,” Facebook’s chief executive, Mark Zuckerberg, wrote in a public apology.

This sort of thing happened regularly for years. Facebook would try something sneaky, users would object and Facebook would back off.

But then Facebook’s competition began to disappear. Facebook acquired Instagram in 2012 and WhatsApp in 2014. Later in 2014, Google announced that it would fold its social network Orkut. Emboldened by the decline of market threats, Facebook revoked its users’ ability to vote on changes to its privacy policies and then (almost simultaneously with Google’s exit from the social media market) changed its privacy pact with users.

This is how Facebook usurped our privacy: with the help of its market dominance. The price of using Facebook has stayed the same over the years (it’s free to join and use), but the cost of using it, calculated in terms of the amount of data that users now must provide, is an order of magnitude above what it was when Facebook faced real competition.

It is hard to believe that the Facebook of 2019, which is so consuming of and reckless with our data, was once the privacy-protecting Facebook of 2004. When users today sign up for Facebook, they agree to allow the company to track their activity across more than eight million websites and mobile applications that are connected to the internet. They cannot opt out of this. The ubiquitous tracking of consumers online allows Facebook to collect exponentially more data about them than it originally could, which it can use to its financial advantage.

And while users can control some of the ways in which Facebook uses their data by adjusting their privacy settings, if you choose to leave Facebook, the company still subjects you to surveillance — but you no longer have access to the settings. Staying on the platform is the only effective way to manage its harms.

Lowering the quality of a company’s services in this manner has always been one way a monopoly can squeeze consumers after it corners a market. If you go all the way back to the landmark “case of monopolies” in 17th-century England, for example, you find a court sanctioning a monopoly for fear that it might control either price or the quality of services.

But we must now aggressively enforce this antitrust principle to handle the problems of our modern economy. Our government should undertake the important task of restoring to the American people something they bargained for in the first place — their privacy.

LONG LONG Overdue!

iPhone gyroscopes, of all things, can uniquely ID handsets on anything earlier than iOS 12.2

QUOTE

Your iPhone can be uniquely fingerprinted by apps and websites in a way that you can never clear. Not by deleting cookies, not by clearing your cache, not even by reinstalling iOS.

Cambridge University researchers will present a paper to the IEEE Symposium on Security and Privacy 2019 today explaining how their fingerprinting technique uses a fiendishly clever method of inferring device-unique accelerometer calibration data.

“iOS has historically provided access to the accelerometer, gyroscope and the magnetometer,” Dr Alastair Beresford told The Register this morning. “These types of devices don’t seem like they’re troublesome from a privacy perspective, right? Which way up the phone is doesn’t seem that bad.

“In reality,” added the researcher, “it turns out that you can work out a globally unique identifier for the device by looking at these streams.”
Your orientation reveals an awful lot about you

“MEMS” – microelectromechanical systems – is the catchall term for things like your phone’s accelerometer, gyroscope and magnetometer. These sensors tell your handset which way up it is, whether it’s turning and, if so, how fast, and how strong a nearby magnetic field is. They are vital for mobile games that rely on the user tilting or turning the handset.

These, said Beresford, are mass produced. Like all mass-produced items, especially sensors, they have the normal distribution of inherent but minuscule errors and flaws, so high-quality manufacturers (like Apple) ensure each one is calibrated.

“That calibration step allows the device to produce a more accurate parameter,” explained Beresford. “But it turns out the values being put into the device are very likely to be globally unique.”

Beresford and co-researchers Jiexin Zhang, also from Cambridge’s Department of Computer Science and Technology, and Ian Sheret of Polymath Insight Ltd, devised a way of not only accessing data from MEMS sensors – that wasn’t the hard part – but of inferring the calibration data based on what the sensors were broadcasting in real time, during actual use by a real-world user. Even better (or worse, depending on your point of view), the data can be captured and reverse-engineered through any old website or app.

“It doesn’t require any specific confirmation from a user,” said Beresford. “This fingerprint never changes, even if you factory reset the handset or reinstall the OS. This is buried deep inside the firmware of the device so the fingerprint data doesn’t change. This provides a way to track users around the web.”
How they did it

“You need to record some samples,” said Beresford. “There’s an API in JavaScript or inside Swift that allows you to get samples from the hardware. Because you get many samples per second, we need around 100 samples to get the attack. Around half a second on many of the devices. So it’s quite quick to collect the data.”

Each device generates a stream of analogue data. By converting that into digital values and applying algorithms they developed in the lab using stationary or slow-moving devices, Beresford said, the researchers could then infer what a real-world user device was doing at a given time (say, being bounced around in a bag) and apply a known offset.

“We can guess what the input is going to be given the output that we observe,” he said. “If we guess correctly, we can then use that guess to estimate what the value of the scale factor and the orthogonality are.”

From there it is a small step to bake those algorithms into a website or an app. Although the actual technique does not necessarily have to be malicious in practice (for example, a bank might use it to uniquely fingerprint your phone as an anti-fraud measure), it does raise a number of questions.
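To give a flavor of the idea (a toy sketch, not the researchers’ actual algorithm): if each reported reading is an integer sensor count multiplied by a device-unique calibration gain, that gain can be estimated from the spacing of the distinct output values.

```python
# Toy sketch: recover a hypothetical per-device scale factor from
# noiseless quantized sensor readings.
import numpy as np

def estimate_scale_factor(samples, tol=1e-12):
    vals = np.unique(np.asarray(samples, dtype=float))
    gaps = np.diff(vals)
    gaps = gaps[gaps > tol]
    step = gaps.min()                   # coarse guess: smallest gap seen
    multiples = np.round(gaps / step)   # each gap is ~ k * step
    return float(np.mean(gaps / multiples))

if __name__ == "__main__":
    true_gain = 1.0 / 131.07            # hypothetical device-unique gain
    counts = np.random.randint(-500, 500, size=100)  # ~100 samples, as above
    print("true:", true_gain, "estimated:", estimate_scale_factor(counts * true_gain))
```

A real attack must also cope with noise, device motion and unknown per-axis offsets, which is where the paper’s algorithms come in.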
Good news, fandroids: you’re not affected

Oddly enough, the attack doesn’t work on most Android devices because they’re cheaper than Apple’s, in all senses of the word, and generally aren’t calibrated, though the researchers did find that some Google Pixel handsets did feature calibrated MEMS.

Beresford joked: “There’s a certain sense of irony that because Apple has put more effort in to provide more accuracy, it has this unfortunate side effect!”

Apple has patched the flaws in iOS 12.2 by blocking “access to these sensors in Mobile Safari just by default” as well as adding “some noise to make the attack much more difficult”.

The researchers have set up a website which includes both the full research paper and their layman’s explanation, along with a proof-of-concept video. Get patching, Apple fanbois.