
US Coast Guard Warns Again: Cyber Incident Exposes Potential Vulnerabilities Onboard Commercial Vessels

For the second time this year, the US Coast Guard has issued a warning about cybersecurity practices aboard commercial vessels. Full US Coast Guard Alert Here

To us in Cyber Security, the recommendations are fairly standard. But for the Maritime industry, they seem new.

In order to improve the resilience of vessels and facilities, and to protect the safety of the waterways in
which they operate, the U.S. Coast Guard strongly recommends that vessel and facility owners,
operators and other responsible parties take the following basic measures to improve their
cybersecurity:

  • Segment Networks. “Flat” networks allow an adversary to easily maneuver to any system
    connected to that network. Segment your networks into “subnetworks” to make it harder for an
    adversary to gain access to essential systems and equipment.
  • Per-user Profiles & Passwords. Eliminate the use of generic log-in credentials for multiple
    personnel. Create network profiles for each employee. Require employees to enter a password
    and/or insert an ID card to log on to onboard equipment. Limit access/privileges to only those
    levels necessary to allow each user to do his or her job. Administrator accounts should be used
    sparingly and only when necessary.
  • Be Wary of External Media. This incident revealed that it is common practice for cargo data to
    be transferred at the pier, via USB drive. Those USB drives were routinely plugged directly into
    the ship’s computers without prior scanning for malware. It is critical that any external media is
    scanned for malware on a standalone system before being plugged into any shipboard network.
    Never run executable media from an untrusted source.
  • Install Basic Antivirus Software. Basic cyber hygiene can stop incidents before they impact
    operations. Install and routinely update basic antivirus software.
  • Don’t Forget to Patch. Patching is no small task, but it is the core of cyber hygiene.
    Vulnerabilities impacting operating systems and applications are constantly changing – patching
    is critical to effective cybersecurity.

Maintaining effective cybersecurity is not just an IT issue, but is rather a fundamental operational
imperative in the 21st century maritime environment. The Coast Guard therefore strongly encourages
all vessel and facility owners and operators to conduct cybersecurity assessments to better understand
the extent of their cyber vulnerabilities.

We recommend using a full UTM firewall on all commercial vessels that have internet connectivity. In addition, individual connected endpoint devices need to have active anti-malware software installed and running. L4 Networks can help! Contact us, please.
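
Of the Coast Guard's bullet points above, the external-media one is the easiest to automate. Below is a minimal sketch of a scan-before-transfer check for a standalone scanning station, assuming ClamAV's `clamscan` command-line tool is installed; the mount point is a placeholder you would adapt to your own setup.

```python
#!/usr/bin/env python3
"""Minimal sketch: scan a USB drive on a standalone station before it
touches any shipboard network. Assumes ClamAV's clamscan is installed;
the mount point below is a placeholder."""
import subprocess
import sys

USB_MOUNT = "/media/usb"  # placeholder: wherever the station mounts the drive

def scan(path: str) -> bool:
    """Return True if clamscan finds nothing under `path`.
    clamscan exits 0 when clean, 1 when infected, 2 on errors."""
    result = subprocess.run(
        ["clamscan", "--recursive", "--infected", path],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else USB_MOUNT
    if scan(target):
        print(f"{target}: clean, OK to transfer cargo data")
    else:
        print(f"{target}: NOT clean, keep it away from shipboard systems")
        sys.exit(1)
```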

Why I’m done with Chrome – Matthew Green

This article was published in September 2018. I had warned people about Chrome for years before that. It is not just the log-in policy; it is all the other data collection, the lack of any way to automatically dump cookies and private data on exit, and so forth. But all that said, it is still worth re-posting this, as I think Matt Green did a good job. And since no one listens to me (boo hoo), maybe they will listen to Matt. And listening is needed, as the likes of Google, Facebook, and Amazon are just getting stronger and more intrusive. Fight back!

QUOTE

Why I’m done with Chrome

This blog is mainly reserved for cryptography, and I try to avoid filling it with random “someone is wrong on the Internet” posts. After all, that’s what Twitter is for! But from time to time something bothers me enough that I have to make an exception. Today I wanted to write specifically about Google Chrome, how much I’ve loved it in the past, and why — due to Chrome’s new user-unfriendly forced login policy — I won’t be using it going forward.
A brief history of Chrome

When Google launched Chrome ten years ago, it seemed like one of those rare cases where everyone wins. In 2008, the browser market was dominated by Microsoft, a company with an ugly history of using browser dominance to crush their competitors. Worse, Microsoft was making noises about getting into the search business. This posed an existential threat to Google’s internet properties.

In this setting, Chrome was a beautiful solution. Even if the browser never produced a scrap of revenue for Google, it served its purpose just by keeping the Internet open to Google’s other products. As a benefit, the Internet community would receive a terrific open source browser with the best development team money could buy. This might be kind of sad for Mozilla (who have paid a high price due to Chrome) but overall it would be a good thing for Internet standards.

For many years this is exactly how things played out. Sure, Google offered an optional “sign in” feature for Chrome, which presumably vacuumed up your browsing data and shipped it off to Google, but that was an option. An option you could easily ignore. If you didn’t take advantage of this option, Google’s privacy policy was clear: your data would stay on your computer where it belonged.
What changed?

A few weeks ago Google shipped an update to Chrome that fundamentally changes the sign-in experience. From now on, every time you log into a Google property (for example, Gmail), Chrome will automatically sign the browser into your Google account for you. It’ll do this without asking, or even explicitly notifying you. (However, and this is important: Google developers claim this will not actually start synchronizing your data to Google — yet. See further below.)

Your sole warning — in the event that you’re looking for it — is that your Google profile picture will appear in the upper-right hand corner of the browser window. I noticed mine the other day:

The change hasn’t gone entirely unnoticed: it received some vigorous discussion on sites like Hacker News. But the mainstream tech press seems to have ignored it completely. This is unfortunate — and I hope it changes — because this update has huge implications for Google and the future of Chrome.

In the rest of this post, I’m going to talk about why this matters. From my perspective, this comes down to basically four points:

1. Nobody on the Chrome development team can provide a clear rationale for why this change was necessary, and the explanations they’ve given don’t make any sense.
2. This change has enormous implications for user privacy and trust, and Google seems unable to grapple with this.
3. The change makes a hash out of Google’s own privacy policies for Chrome.
4. Google needs to stop treating customer trust like it’s a renewable resource, because they’re screwing up badly.

I warn you that this will get a bit ranty. Please read on anyway.
Google’s stated rationale makes no sense

The new feature that triggers this auto-login behavior is called “Identity consistency between browser and cookie jar” (HN). After conversations with two separate Chrome developers on Twitter (who will remain nameless — mostly because I don’t want them to hate me), I was given the following rationale for the change:

To paraphrase this explanation: if you’re in a situation where you’ve already signed into Chrome and your friend shares your computer, then you can wind up accidentally having your friend’s Google cookies get uploaded into your account. This seems bad, and sure, we want to avoid that.

But note something critical about this scenario. In order for this problem to apply to you, you already have to be signed into Chrome. There is absolutely nothing in this problem description that seems to affect users who chose not to sign into the browser in the first place.

So if signed-in users are your problem, why would you make a change that forces unsigned-in users to become signed-in? I could waste a lot more ink wondering about the mismatch between the stated “problem” and the “fix”, but I won’t bother: because nobody on the public-facing side of the Chrome team has been able to offer an explanation that squares this circle.

And this matters, because “sync” or not…
The change has serious implications for privacy and trust

The Chrome team has offered a single defense of the change. They point out that just because your browser is “signed in” does not mean it’s uploading your data to Google’s servers. Specifically:

While Chrome will now log into your Google account without your consent (following a Gmail login), Chrome will not activate the “sync” feature that sends your data to Google. That requires an additional consent step. So in theory your data should remain local.

This is my paraphrase. But I think it’s fair to characterize the general stance of the Chrome developers I spoke with as: without this “sync” feature, there’s nothing wrong with the change they’ve made, and everything is just fine.

This is nuts, for several reasons.

User consent matters. For ten years I’ve been asked a single question by the Chrome browser: “Do you want to log in with your Google account?” And for ten years I’ve said no thanks. Chrome still asks me that question — it’s just that now it doesn’t honor my decision.

The Chrome developers want me to believe that this is fine, since (phew!) I’m still protected by one additional consent guardrail. The problem here is obvious:

If you didn’t respect my lack of consent on the biggest user-facing privacy option in Chrome (and didn’t even notify me that you had stopped respecting it!) why should I trust any other consent option you give me? What stops you from changing your mind on that option in a few months, when we’ve all stopped paying attention?

The fact of the matter is that I’d never even heard of Chrome’s “sync” option — for the simple reason that up until September 2018, I had never logged into Chrome. Now I’m forced to learn these new terms, and hope that the Chrome team keeps its promises to keep all of my data local as the barriers between “signed in” and “not signed in” are gradually eroded away.

The Chrome sync UI is a dark pattern. Now that I’m forced to log into Chrome, I’m faced with a brand new menu I’ve never seen before. It looks like this:

Does that big blue button indicate that I’m already synchronizing my data to Google? That’s scary! Wait, maybe it’s an invitation to synchronize! If so, what happens to my data if I click it by accident? (I won’t give the answer away; you should go find out. Just make sure you don’t accidentally upload all your data in the process. It can happen quickly.)

In short, Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern. Whether intentional or not, it has the effect of making it easy for people to activate sync without knowing it, or to think they’re already syncing and thus there’s no additional cost to increasing Google’s access to their data.

Don’t take my word for it. It even gives (former) Google people the creeps.

Big brother doesn’t need to actually watch you. We tell things to our web browsers that we wouldn’t tell our best friends. We do this with some vague understanding that yes, the Internet spies on us. But we also believe that this spying is weak and probabilistic. It’s not like someone’s standing over our shoulder checking our driver’s license with each click.

What happens if you take that belief away? There are numerous studies indicating that even the perception of surveillance can significantly magnify the degree of self-censorship users force on themselves. Will users feel comfortable browsing for information on sensitive mental health conditions — if their real name and picture are always loaded into the corner of their browser? The Chrome development team says “yes”. I think they’re wrong.

For all we know, the new approach has privacy implications even if sync is off. The Chrome developers claim that with “sync” off, a signed-in Chrome has no privacy implications. This might be true. But when pressed on the actual details, nobody seems quite sure.

For example, if I have my browser logged out, then I log in and turn on “sync”, does all my past (logged-out) data get pushed to Google? What happens if I’m forced to be logged in, and then subsequently turn on “sync”? Nobody can quite tell me if the data uploaded in these conditions is the same. These differences could really matter.
The changes make a hash of the Chrome privacy policy

The Chrome privacy policy is a remarkably simple document. Unlike most privacy policies, it was clearly written as a promise to Chrome’s users — rather than as the usual lawyer CYA. Functionally, it describes two browsing modes: “Basic browser mode” and “signed-in mode”. These modes have very different properties. Read for yourself:

In “basic browser mode”, your data is stored locally. In “signed-in” mode, your data gets shipped to Google’s servers. This is easy to understand. If you want privacy, don’t sign in. But what happens if your browser decides to switch you from one mode to the other, all on its own?

Technically, the privacy policy is still accurate. If you’re in basic browsing mode, your data is still stored locally. The problem is that you no longer get to decide which mode you’re in. This makes a mockery out of whatever intentions the original drafters had. Maybe Google will update the document to reflect the new “sync” distinction that the Chrome developers have shared with me. We’ll see.

Update: After I tweeted about my concerns, I received a DM on Sunday from two different Chrome developers, each telling me the good news: Google is updating their privacy policy to reflect the new operation of Chrome. I think that’s, um, good news. But I also can’t help but note that updating a privacy policy on a weekend is an awful lot of trouble to go to for a change that… apparently doesn’t even solve a problem for signed-out users.
Trust is not a renewable resource

For a company that sustains itself by collecting massive amounts of user data, Google has managed to avoid the negative privacy connotations we associate with, say, Facebook. This isn’t because Google collects less data, it’s just that Google has consistently been more circumspect and responsible with it.

Where Facebook will routinely change privacy settings and apologize later, Google has upheld clear privacy policies that it doesn’t routinely change. Sure, when it collects, it collects gobs of data, but in the cases where Google explicitly makes user security and privacy promises — it tends to keep them. This seems to be changing.

Google’s reputation is hard-earned, and it can be easily lost. Changes like this burn a lot of trust with users. If the change is solving an absolutely critical problem for users, then maybe a loss of trust is worth it. I wish Google could convince me that was the case.
Conclusion

This post has gone on more than long enough, but before I finish I want to address two common counterarguments I’ve heard from people I generally respect in this area.

One argument is that Google already spies on you via cookies and its pervasive advertising network and partnerships, so what’s the big deal if they force your browser into a logged-in state? One individual I respect described the Chrome change as “making you wear two name tags instead of one”. I think this objection is silly both on moral grounds — just because you’re violating my privacy doesn’t make it ok to add a massive new violation — but also because it’s objectively silly. Google has spent millions of dollars adding additional tracking features to both Chrome and Android. They aren’t doing this for fun; they’re doing this because it clearly produces data they want.

The other counterargument (if you want to call it that) goes like this: I’m a n00b for using Google products at all, and of course they were always going to do this. The extreme version holds that I ought to be using lynx+Tor and DJB’s custom search engine, and if I’m not I pretty much deserve what’s coming to me.

I reject this argument. I think it’s entirely possible for a company like Google to make good, usable open source software that doesn’t massively violate user privacy. For ten years I believe Google Chrome did just this.

Why they’ve decided to change, I don’t know. It makes me sad.

Goodbye, Chrome: Google’s web browser has become spy software

No shit, Sherlock… I have been saying that about Chrome for years. Nobody listened to me. And you think Gmail is better? Guess again. The entire Google ecosystem is set up for one thing: get the maximum amount of private user data and sell it to advertisers. You are the product, and there is no free lunch. But do not listen to me, someone with over 30 years in IT and Cyber Security; trust the Washington Post talking head instead (although I am happy he published this).

The article, however, misses several points:

1. Unlike Firefox, Chrome has no setting to automatically dump private data upon exit of the browser. Even in Firefox it must be turned on, or Firefox will approximate Chrome.
2. Firefox ships with telemetry turned on, SOME of which can only be turned off via advanced about:config actions (example prefs below) – shame on Firefox.
3. The article is silent on the add-ons for both Chrome and Firefox that make them more private. Example: they say they are working on a canvas fingerprinting blocker, yet that has been available for years as an add-on.
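
On point 2, these are the kinds of prefs I mean. A minimal sketch that writes them into a profile's user.js, assuming the 2019-era pref names (toolkit.telemetry.*, datareporting.*); the profile path is a placeholder you would replace with your own.

```python
#!/usr/bin/env python3
"""Sketch: append telemetry-disabling prefs to a Firefox profile's user.js.
Pref names are the 2019-era ones; PROFILE is a placeholder path."""
from pathlib import Path

PROFILE = Path.home() / ".mozilla/firefox/XXXXXXXX.default"  # placeholder

PREFS = {
    "toolkit.telemetry.enabled": "false",
    "toolkit.telemetry.unified": "false",
    "datareporting.healthreport.uploadEnabled": "false",
    "datareporting.policy.dataSubmissionEnabled": "false",
}

# user.js is read at startup and overrides prefs set through the UI.
with open(PROFILE / "user.js", "a") as f:
    for name, value in PREFS.items():
        f.write(f'user_pref("{name}", {value});\n')
print("Wrote", len(PREFS), "prefs; restart Firefox to apply.")
```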

Got to run…

Quote

Our latest privacy experiment found Chrome ushered more than 11,000 tracker cookies into our browser — in a single week. Here’s why Firefox is better

You open your browser to look at the Web. Do you know who is looking back at you?

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web.

This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s log-in pages.

And that’s not the half of it.

Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)

Firefox isn’t perfect — it still defaults searches to Google and permits some other tracking. But it doesn’t share browsing data with Mozilla, which isn’t in the data-collection business.

At a minimum, Web snooping can be annoying. Cookies are how a pair of pants you look at on one site ends up following you around in ads elsewhere. More fundamentally, your Web history — like the color of your underpants — ain’t nobody’s business but your own. Letting anyone collect that data leaves it ripe for abuse by bullies, spies and hackers.

Google’s product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they’re working on new ones for cookies. But they also said they have to get the right balance with a “healthy Web ecosystem” (read: ad business).

Firefox’s product managers told me they don’t see privacy as an “option” relegated to controls. They’ve launched a war on surveillance, starting this month with “enhanced tracking protection” that blocks nosy cookies by default on new Firefox installations. But to succeed, first Firefox has to persuade people to care enough to overcome the inertia of switching.

It’s a tale of two browsers — and the diverging interests of the companies that make them.

A decade ago, Chrome and Firefox were taking on Microsoft’s lumbering giant Internet Explorer. The upstart Chrome solved real problems for consumers, making the Web safer and faster. Today it dominates more than half the market.

Lately, however, many of us have realized that our privacy is also a major concern on the Web — and Chrome’s interests no longer always seem aligned with our own.

That’s most visible in the fight over cookies. These code snippets can do helpful things, like remembering the contents of your shopping cart. But now many cookies belong to data companies, which use them to tag your browser so they can follow your path like crumbs in the proverbial forest.

They’re everywhere — one study found third-party tracking cookies on 92 percent of websites. The Washington Post website has about 40 tracker cookies, average for a news site, which the company said in a statement are used to deliver better-targeted ads and track ad performance.

You’ll also find them on sites without ads: Both Aetna and the FSA service said the cookies on their sites help measure their own external marketing campaigns.

The blame for this mess belongs to the entire advertising, publishing and tech industries. But what responsibility does a browser have in protecting us from code that isn’t doing much more than spying?

To see what cookies Firefox has blocked for a Web page, tap the shield icon, then “Blocking Tracker Cookies” to pull up a list. (Geoffrey Fowler/The Washington Post)

In 2015, Mozilla debuted a version of Firefox that included anti-tracking tech, turned on only in its “private” browsing mode. After years of testing and tweaking, that’s what it activated this month on all websites. This isn’t about blocking ads — those still come through. Rather, Firefox is parsing cookies to decide which ones to keep for critical site functions and which ones to block for spying.
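
(ED: To make "parsing cookies to decide" concrete: Enhanced Tracking Protection is essentially list-based matching. Firefox checks a third-party cookie's origin against the Disconnect blocklist; here is a toy sketch of that logic, with a tiny made-up blocklist standing in for the real one.)

```python
#!/usr/bin/env python3
"""Toy sketch of list-based tracker-cookie blocking, the general approach
behind Firefox's Enhanced Tracking Protection (which uses the Disconnect
list; the tiny blocklist below is a made-up stand-in for illustration)."""

# Hypothetical blocklist: registrable domains known to host trackers.
TRACKER_DOMAINS = {"doubleclick.net", "facebook.com", "example-tracker.net"}

def is_subdomain_of(host: str, domain: str) -> bool:
    return host == domain or host.endswith("." + domain)

def should_block(page_host: str, cookie_host: str) -> bool:
    """Block a cookie only if it is third-party AND its host is listed."""
    third_party = not is_subdomain_of(cookie_host, page_host)
    on_list = any(is_subdomain_of(cookie_host, d) for d in TRACKER_DOMAINS)
    return third_party and on_list

# First-party cookies and unlisted third parties pass; listed trackers don't.
print(should_block("studentaid.ed.gov", "www.facebook.com"))   # True
print(should_block("studentaid.ed.gov", "studentaid.ed.gov"))  # False
```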

Apple’s Safari browser, used on iPhones, also began applying “intelligent tracking protection” to cookies in 2017, using an algorithm to decide which ones were bad.

Chrome, so far, remains open to all cookies by default. Last month, Google announced a new effort to force third-party cookies to better self-identify, and said we can expect new controls for them after it rolls out. But it wouldn’t offer a timeline or say whether it would default to stopping trackers.

I’m not holding my breath. Google itself, through its Doubleclick and other ad businesses, is the No. 1 cookie maker — the Mrs. Fields of the Web. It’s hard to imagine Chrome ever cutting off Google’s moneymaker.

“Cookies play a role in user privacy, but a narrow focus on cookies obscures the broader privacy discussion because it’s just one way in which users can be tracked across sites,” said Ben Galbraith, Chrome’s director of product management. “This is a complex problem, and simple, blunt cookie blocking solutions force tracking into more opaque practices.”

There are other tracking techniques — and the privacy arms race will get harder. But saying things are too complicated is also a way of not doing anything.

“Our viewpoint is to deal with the biggest problem first, but anticipate where the ecosystem will shift and work on protecting against those things as well,” said Peter Dolanjski, Firefox’s product lead.

Both Google and Mozilla said they’re working on fighting “fingerprinting,” a way to sniff out other markers in your computer. Firefox is already testing its capabilities and plans to activate them soon.

Choosing a browser is no longer just about speed and convenience — it’s also about data defaults.

It’s true that Google usually obtains consent before gathering data, and offers a lot of knobs you can adjust to opt out of tracking and targeted advertising. But its controls often feel like a shell game that results in us sharing more personal data.

I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn’t cause anybody’s browsing history to be “synced” unless they specifically opted in — but I found mine was being sent to Google and don’t recall ever asking for extra surveillance. (You can turn off the Gmail auto-login by searching “Gmail” in Chrome settings and switching off “Allow Chrome sign-in.”)
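
(ED: If you manage machines, you do not have to rely on users finding that switch. Chrome exposes the same control as the BrowserSignin enterprise policy; the sketch below writes it for Linux, where managed policies live under /etc/opt/chrome/policies/managed/. Paths differ on Windows and Mac, and the file name is arbitrary.)

```python
#!/usr/bin/env python3
"""Sketch: disable Chrome's browser sign-in fleet-wide on Linux via the
BrowserSignin enterprise policy. Uses the standard Linux managed-policy
directory; run as root."""
import json
from pathlib import Path

POLICY_DIR = Path("/etc/opt/chrome/policies/managed")
POLICY_DIR.mkdir(parents=True, exist_ok=True)

# BrowserSignin: 0 = disabled, 1 = enabled, 2 = forced.
policy = {"BrowserSignin": 0}

policy_file = POLICY_DIR / "no-signin.json"  # file name is arbitrary
policy_file.write_text(json.dumps(policy, indent=2))
print("Wrote", policy_file)
```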

After the sign-in shift, Johns Hopkins associate professor Matthew Green made waves in the computer science world when he blogged he was done with Chrome. “I lost faith,” he told me. “It only takes a few tiny changes to make it very privacy unfriendly.”

When you use Chrome, signing into Gmail automatically logs in the browser to your Google account. When “sync” is also on, Google receives your browsing history. (Geoffrey Fowler/The Washington Post)

There are ways to defang Chrome, which is much more complicated than just using “Incognito Mode.” But it’s much easier to switch to a browser not owned by an advertising company.

Like Green, I’ve chosen Firefox, which works across phones, tablets, PCs and Macs. Apple’s Safari is also a good option on Macs, iPhones and iPads, and the niche Brave browser goes even further in trying to jam the ad-tech industry.

What does switching to Firefox cost you? It’s free, and downloading a different browser is much simpler than changing phones.

In 2017, Mozilla launched a new version of Firefox called Quantum that made it considerably faster. In my tests, it has felt almost as fast as Chrome, though benchmark tests have found it can be slower in some contexts. Firefox says it’s better about managing memory if you use lots and lots of tabs.

Switching means you’ll have to move your bookmarks, and Firefox offers tools to help. Shifting passwords is easy if you use a password manager. And most browser add-ons are available, though it’s possible you won’t find your favorite.

Mozilla has challenges to overcome. Among privacy advocates, the nonprofit is known for caution. It took a year longer than Apple to make cookie blocking a default.

And as a nonprofit, it earns money when people make searches in the browser and click on ads — which means its biggest source of income is Google. Mozilla’s chief executive says the company is exploring new paid privacy services to diversify its income.

Its biggest risk is that Firefox might someday run out of steam in its battle with the Chrome behemoth. Even though it’s the No. 2 desktop browser, with about 10 percent of the market, major sites could decide to drop support, leaving Firefox scrambling.

If you care about privacy, let’s hope for another David and Goliath outcome.

LabCorp: 7.7 Million Consumers Hit in Collections Firm Breach

Sadly, I am not surprised. As I have said countless times, until there are teeth in breach laws, no one in corporate America, or government for that matter, will spend the money to increase cyber security seriously enough. With a breach of this size, LabCorp, Quest, etc. should be wound down and their C-suite executives held legally responsible.

QUOTE

Medical testing giant LabCorp. said today personal and financial data on some 7.7 million consumers were exposed by a breach at a third-party billing collections firm. That third party — the American Medical Collection Agency (AMCA) — also recently notified competing firm Quest Diagnostics that an intrusion in its payments Web site exposed personal, financial and medical data on nearly 12 million Quest patients.

Just a few days ago, the news was all about how Quest had suffered a major breach. But today’s disclosure by LabCorp. suggests we are nowhere near done hearing about other companies with millions of consumers victimized because of this incident: The AMCA is a New York company with a storied history of aggressively collecting debt for a broad range of businesses, including medical labs and hospitals, direct marketers, telecom companies, and state and local traffic/toll agencies.

In a filing today with the U.S. Securities and Exchange Commission, LabCorp. said it learned that the breach at AMCA persisted between Aug. 1, 2018 and March 30, 2019. It said the information exposed could include first and last name, date of birth, address, phone, date of service, provider, and balance information.

“AMCA’s affected system also included credit card or bank account information that was provided by the consumer to AMCA (for those who sought to pay their balance),” the filing reads. “LabCorp provided no ordered test, laboratory results, or diagnostic information to AMCA. AMCA has advised LabCorp that Social Security Numbers and insurance identification information are not stored or maintained for LabCorp consumers.”

LabCorp further said the AMCA has informed LabCorp “it is in the process of sending notices to approximately 200,000 LabCorp consumers whose credit card or bank account information may have been accessed. AMCA has not yet provided LabCorp a list of the affected LabCorp consumers or more specific information about them.”

The LabCorp disclosure comes just days after competing lab testing firm Quest Diagnostics disclosed that the hack of AMCA exposed the personal, financial and medical data on approximately 11.9 million patients.

Quest said it first heard from the AMCA about the breach on May 14, but that it wasn’t until two weeks later that AMCA disclosed the number of patients affected and what information was accessed, which includes financial information (e.g., credit card numbers and bank account information), medical information and Social Security Numbers.

Quest says it has since stopped doing business with the AMCA and has hired a security firm to investigate the incident. Much like LabCorp, Quest also alleges the AMCA still hasn’t said which 11.9 million patients were impacted and that the company was withholding information about the incident.

The AMCA declined to answer any questions about whether the breach of its payments page impacted anyone who entered payment data into the company’s site during the breach. But through an outside PR firm, it issued the following statement:

“We are investigating a data incident involving an unauthorized user accessing the American Medical Collection Agency system,” reads a written statement attributed to the AMCA. “Upon receiving information from a security compliance firm that works with credit card companies of a possible security compromise, we conducted an internal review, and then took down our web payments page.”

The statement continues:

“We hired a third-party external forensics firm to investigate any potential security breach in our systems, migrated our web payments portal services to a third-party vendor, and retained additional experts to advise on, and implement, steps to increase our systems’ security. We have also advised law enforcement of this incident. We remain committed to our system’s security, data privacy, and the protection of personal information.”

Firefox Bug – Patch now

Quote

Mozilla has released an emergency critical update for Firefox to squash a zero-day vulnerability that is under active attack.

The Firefox 67.0.3 and ESR 60.7.1 builds include a patch for CVE-2019-11707. The vulnerability is a type confusion bug in the way Firefox handles JavaScript objects in Array.pop. By manipulating the object in the array, malicious JavaScript on a webpage could get the ability to remotely execute code without any user interaction.

This is a bad thing.

What’s worse, Mozilla says it has already received reports that the flaw is being actively exploited in the wild by miscreants, making it critical for users to install the latest patched versions of the browser.

Fortunately, because Mozilla automatically updates Firefox with new patches and bug fixes, Linux, Mac, and Windows PC users can install the patch with a simple browser restart.
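
(ED: If you admin more than one box, do not just trust the auto-updater. A quick sketch to verify the fix landed, assuming `firefox` is on PATH and prints a line like "Mozilla Firefox 67.0.3".)

```python
#!/usr/bin/env python3
"""Sketch: check whether the local Firefox is at or past the CVE-2019-11707
fix (67.0.3 mainline, 60.7.1 ESR). Assumes `firefox` is on PATH."""
import re
import subprocess

FIXED = {"esr": (60, 7, 1), "mainline": (67, 0, 3)}

out = subprocess.run(["firefox", "--version"],
                     capture_output=True, text=True).stdout
match = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", out)
if not match:
    raise SystemExit(f"could not parse version from: {out!r}")

version = tuple(int(g or 0) for g in match.groups())
# Heuristic: the 60.x series is ESR; anything else is compared to mainline.
branch = "esr" if version[0] == 60 else "mainline"
status = "patched" if version >= FIXED[branch] else "VULNERABLE, update now"
print(f"Firefox {'.'.join(map(str, version))} ({branch}): {status}")
```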

Credit for the discovery and disclosure of the bug was given to Samuel Groß of Project Zero. ®

Why Privacy Is an Antitrust Issue

Finally the mainstream media is saying what many of us have known for years. Too bad they do not put their money where their mouth is and sever all business ties with the likes of Facebook.

QUOTE

As Facebook has generated scandal after scandal in recent years, critics have started to wonder how we might use antitrust laws to rein in the company’s power. But many of the most pressing concerns about Facebook are its privacy abuses, which unlike price gouging, price discrimination or exclusive dealing, are not immediately recognized as antitrust violations. Is there an antitrust case to be made against Facebook on privacy grounds?

Yes, there is. In March, when Representative David N. Cicilline, Democrat of Rhode Island, called on the Federal Trade Commission to investigate Facebook’s potential violations of antitrust laws, he cited not only Facebook’s acquisitions (such as Instagram and WhatsApp), but also evidence that Facebook was “using its monopoly power to degrade” the quality of its service “below what a competitive marketplace would allow.”

It is this last point, which I made in a law journal article cited by Mr. Cicilline, that promises to change how antitrust law will protect the American public in the era of Big Tech: namely, that consumers can suffer at the hands of monopolies because companies like Facebook lock in users with promises to protect their data and privacy — only to break those promises once competitors in the marketplace have been eliminated.

To see what I mean, let’s go back to the mid-2000s, when Facebook was an upstart social media platform. To differentiate itself from the market leader, Myspace, Facebook publicly pledged itself to privacy. Privacy provided its competitive advantage, with the company going so far as to promise users, “We do not and will not use cookies to collect private information from any user.”

When Facebook later attempted to change this bargain with users, the threat of losing its customers to its competitors forced the company to reverse course. In 2007, for example, Facebook introduced a program that recorded users’ activity on third-party sites and inserted it into the News Feed. Following public outrage and a class-action lawsuit, Facebook ended the program. “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them,” Facebook’s chief executive, Mark Zuckerberg, wrote in a public apology.

This sort of thing happened regularly for years. Facebook would try something sneaky, users would object and Facebook would back off.

But then Facebook’s competition began to disappear. Facebook acquired Instagram in 2012 and WhatsApp in 2014. Later in 2014, Google announced that it would fold its social network Orkut. Emboldened by the decline of market threats, Facebook revoked its users’ ability to vote on changes to its privacy policies and then (almost simultaneously with Google’s exit from the social media market) changed its privacy pact with users.

This is how Facebook usurped our privacy: with the help of its market dominance. The price of using Facebook has stayed the same over the years (it’s free to join and use), but the cost of using it, calculated in terms of the amount of data that users now must provide, is an order of magnitude above what it was when Facebook faced real competition.

It is hard to believe that the Facebook of 2019, which is so consuming of and reckless with our data, was once the privacy-protecting Facebook of 2004. When users today sign up for Facebook, they agree to allow the company to track their activity across more than eight million websites and mobile applications that are connected to the internet. They cannot opt out of this. The ubiquitous tracking of consumers online allows Facebook to collect exponentially more data about them than it originally could, which it can use to its financial advantage.

And while users can control some of the ways in which Facebook uses their data by adjusting their privacy settings, if you choose to leave Facebook, the company still subjects you to surveillance — but you no longer have access to the settings. Staying on the platform is the only effective way to manage its harms.

Lowering the quality of a company’s services in this manner has always been one way a monopoly can squeeze consumers after it corners a market. If you go all the way back to the landmark “case of monopolies” in 17th-century England, for example, you find a court sanctioning a monopoly for fear that it might control either price or the quality of services.

But we must now aggressively enforce this antitrust principle to handle the problems of our modern economy. Our government should undertake the important task of restoring to the American people something they bargained for in the first place — their privacy.

LONG LONG Overdue!

iPhone gyroscopes, of all things, can uniquely ID handsets on anything earlier than iOS 12.2

QUOTE

Your iPhone can be uniquely fingerprinted by apps and websites in a way that you can never clear. Not by deleting cookies, not by clearing your cache, not even by reinstalling iOS.

Cambridge University researchers will present a paper to the IEEE Symposium on Security and Privacy 2019 today explaining how their fingerprinting technique uses a fiendishly clever method of inferring device-unique accelerometer calibration data.

“iOS has historically provided access to the accelerometer, gyroscope and the magnetometer,” Dr Alastair Beresford told The Register this morning. “These types of devices don’t seem like they’re troublesome from a privacy perspective, right? Which way up the phone is doesn’t seem that bad.

“In reality,” added the researcher, “it turns out that you can work out a globally unique identifier for the device by looking at these streams.”
Your orientation reveals an awful lot about you

“MEMS” – microelectromechanical systems – is the catchall term for things like your phone’s accelerometer, gyroscope and magnetometer. These sensors tell your handset which way up it is, whether it’s turning and, if so, how fast, and how strong a nearby magnetic field is. They are vital for mobile games that rely on the user tilting or turning the handset.

These, said Beresford, are mass produced. Like all mass-produced items, especially sensors, they have the normal distribution of inherent but minuscule errors and flaws, so high-quality manufacturers (like Apple) ensure each one is calibrated.

“That calibration step allows the device to produce a more accurate parameter,” explained Beresford. “But it turns out the values being put into the device are very likely to be globally unique.”

Beresford and co-researchers Jiexin Zhang, also from Cambridge’s Department of Computer Science and Technology, and Ian Sheret of Polymath Insight Ltd, devised a way of not only accessing data from MEMS sensors – that wasn’t the hard part – but of inferring the calibration data based on what the sensors were broadcasting in real time, during actual use by a real-world user. Even better (or worse, depending on your point of view), the data can be captured and reverse-engineered through any old website or app.

“It doesn’t require any specific confirmation from a user,” said Beresford. “This fingerprint never changes, even if you factory reset the handset or reinstall the OS. This is buried deep inside the firmware of the device so the fingerprint data doesn’t change. This provides a way to track users around the web.”
How they did it

“You need to record some samples,” said Beresford. “There’s an API in JavaScript or inside Swift that allows you to get samples from the hardware. Because you get many samples per second, we need around 100 samples to get the attack. Around half a second on many of the devices. So it’s quite quick to collect the data.”

Each device generates a stream of analogue data. By converting that into digital values and applying algorithms they developed in the lab using stationary or slow-moving devices, Beresford said, the researchers could then infer what a real-world user device was doing at a given time (say, being bounced around in a bag) and apply a known offset.

“We can guess what the input is going to be given the output that we observe,” he said. “If we guess correctly, we can then use that guess to estimate what the value of the scale factor and the orthogonality are.”
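
(ED: The researchers' actual estimator is more sophisticated, but the core leak is easy to demonstrate with a toy model. Assume the ADC quantizes the true rate into integer steps and a per-device calibration gain rescales the result; the observed outputs then sit on a lattice whose spacing is the gain. A minimal sketch under those assumptions:)

```python
#!/usr/bin/env python3
"""Toy model of the calibration-fingerprinting idea (not the paper's
actual estimator): a sensor quantizes the true rate to integer steps,
then a per-device calibration gain G rescales it. Observed outputs are
multiples of G, so the spacing of distinct values leaks G -- a stable,
device-unique fingerprint that survives resets."""
import random

NOMINAL_STEP = 1.0   # quantization step of the raw ADC output
TRUE_GAIN = 1.0137   # secret per-device calibration gain

def sensor_sample(true_rate: float) -> float:
    """What an app or website sees: quantize, then apply the gain."""
    return TRUE_GAIN * round(true_rate / NOMINAL_STEP)

# ~100 samples of an arbitrary, unknown motion signal (phone in a bag).
samples = {sensor_sample(random.uniform(-50, 50)) for _ in range(100)}

# Distinct outputs are multiples of the gain; the minimum positive gap
# between sorted neighbours recovers it.
values = sorted(samples)
gaps = [b - a for a, b in zip(values, values[1:]) if b - a > 1e-9]
print(f"estimated gain: {min(gaps):.4f}  (true {TRUE_GAIN})")
```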

From there it is a small step to bake those algorithms into a website or an app. Although the actual technique does not necessarily have to be malicious in practice (for example, a bank might use it to uniquely fingerprint your phone as an anti-fraud measure), it does raise a number of questions.
Good news, fandroids: you’re not affected

Oddly enough, the attack doesn’t work on most Android devices because they’re cheaper than Apple’s, in all senses of the word, and generally aren’t calibrated, though the researchers did find that some Google Pixel handsets did feature calibrated MEMS.

Beresford joked: “There’s a certain sense of irony that because Apple has put more effort in to provide more accuracy, it has this unfortunate side effect!”

Apple has patched the flaws in iOS 12.2 by blocking “access to these sensors in Mobile Safari just by default” as well as adding “some noise to make the attack much more difficult”.

The researchers have set up a website which includes both the full research paper and their layman’s explanation, along with a proof-of-concept video. Get patching, Apple fanbois

Boeing 737 Max Simulators Are in High Demand. They Are Flawed.

QUOTE

Since the two fatal crashes of the Boeing 737 Max, airlines around the world have moved to buy flight simulators to train their pilots.

They don’t always work.

Boeing recently discovered that the simulators could not accurately replicate the difficult conditions created by a malfunctioning anti-stall system, which played a role in both disasters. The simulators did not reflect the immense force that it would take for pilots to regain control of the aircraft once the system activated on a plane traveling at a high speed.

The mistake is likely to intensify concerns about Boeing, as it tries to regain credibility following the crashes of Lion Air and Ethiopian Airlines flights. In the months since the disasters, Boeing has faced criticism for serious oversights in the Max’s design. The anti-stall system was designed with a single point of failure. A warning light that Boeing thought was standard turned out to be part of a premium add-on.

“Every day, there is new news about something not being disclosed or something was done in error or was not complete,” said Dennis Tajer, a spokesman for the American Airlines pilots union and a 737 pilot.

The training procedures have been a source of contention. Boeing has maintained that simulator training is not necessary for the 737 Max and regulators do not require it, but many airlines bought the multimillion-dollar machines to give their pilots more practice. Some pilots want continuing simulator training.

The flight simulators, on-the-ground versions of cockpits that mimic the flying experience, are not made by Boeing. But Boeing provides the underlying information on which they are designed and built.

“Boeing has made corrections to the 737 Max simulator software and has provided additional information to device operators to ensure that the simulator experience is representative across different flight conditions,” said Gordon Johndroe, a Boeing spokesman. “Boeing is working closely with the device manufacturers and regulators on these changes and improvements, and to ensure that customer training is not disrupted.”

In recent weeks, Boeing has been developing a fix to the system, known as MCAS. As part of that work, the company tried to test on a simulator how the updated system would perform, including by replicating the problems with the doomed Ethiopian Airlines flight.

It recreated the actions of the pilots on that flight, including taking manual control of the plane as outlined by Boeing’s recommended procedures. When MCAS activates erroneously, pilots are supposed to turn off the electricity to a motor that allows the system to push the plane toward the ground. Then, pilots need to crank a wheel to right the plane. They have limited time to act.

On the Ethiopian flight, the pilots struggled to turn the wheel while the plane was moving at a high speed, when there is immense pressure on the tail. The simulators did not properly match those conditions, and Boeing pilots found that the wheel was far easier to turn than it should have been.

Regulators are now trying to determine what training will be required.

When the Max was introduced, Boeing believed that pilots did not need experience on the flight simulators, and the Federal Aviation Administration agreed. Many pilots learned about the plane on iPads. And they were not informed about the anti-stall system.

The limited training was a selling point of the plane. It can cost airlines tens of millions of dollars to maintain and operate flight simulators over the life of an aircraft.

After the first crash, Boeing gave airlines and pilots a full rundown of MCAS. But the company and regulators said that additional training was not necessary. Simply knowing about the system would be sufficient.

In a tense meeting with the American Airlines pilots union after the crash, a Boeing vice president, Mike Sinnett, said he was confident that pilots were equipped to deal with problems, according to an audio recording reviewed by The New York Times. A top Boeing test pilot, Craig Bomben, agreed, saying, “I don’t know that understanding the system would have changed the outcome of this.”

Since the Ethiopian Airlines disaster in March, lawmakers and regulators are taking a closer look at the training procedures for the 737 Max, and whether they should be more robust. At a congressional hearing this week, the acting head of the F.A.A., Daniel Elwell, testified that MCAS should “have been more adequately explained.”

Boeing said on Thursday that it had completed its fix to the 737 Max. Along with changes to the anti-stall system, the fix will include additional education for pilots.

The company still has to submit the changes to regulators, who will need to approve them before the plane can start flying again. The updates are not expected to include training on simulators, but the F.A.A. and other global regulators could push to require it.

“The F.A.A. is aware that Boeing Company is working with the manufacturers of Boeing 737 Max flight simulators,” a spokesman for the agency said in an emailed statement. “The F.A.A. will review any proposed adjustments as part of its ongoing oversight of the company’s efforts to address safety concerns.”

Airlines have already been pushing to get more simulators and develop their own training.

Pilots at American Airlines, which began asking for simulators when they started flying the planes, ratcheted up their requests after the Lion Air crash. Regardless of what the F.A.A. requires, the union believes pilots should get the experience. A spokesman for the airline said it had ordered a simulator that would be up and running by December.

“We value simulators in this situation,” said Mr. Tajer. “It’s not a condition of the Max flying again, but it is something we want.”

Bug-hunter reveals another ‘make me admin’ Windows 10 zero-day – and vows: ‘There’s more where that came from’

Quote

Vulnerability can be exploited to turn users into system stars, no patch available yet

A bug-hunter who previously disclosed Windows security flaws has publicly revealed another zero-day vulnerability in Microsoft’s latest operating systems.

The discovered hole can be exploited by malware and rogue logged-in users to gain system-level privileges on Windows 10 and recent Server releases, allowing them to gain full control of the machine. No patch exists for this bug, details and exploit code for which were shared online on Tuesday for anyone to use and abuse.

The flaw was uncovered, and revealed on Microsoft-owned GitHub, funnily enough, by a pseudonymous netizen going by the handle SandboxEscaper. She has previously dropped Windows zero-days that can be exploited to delete or tamper with operating system components, elevate local privileges, and so on.

This latest one works by abusing Windows’ schtasks tool, designed to run programs at scheduled times, along with quirks in the operating system.

Meanwhile… If you haven’t yet patched the wormable RDP security flaw in Windows (CVE-2019-0708), please do so ASAP – exploit code that can crash vulnerable systems is doing the rounds, and McAfee eggheads have developed and described a proof-of-concept attack that executes arbitrary software on remote machines, with no authentication required. Eek.

It appears the exploit code imports a legacy job file into the Windows Task Scheduler using schtasks, creating a new task, and then deletes that new task’s file from the Windows folder. Next, it creates a hard filesystem link pointing from where the new task’s file was created to pci.sys, one of Windows’ kernel-level driver files, and then runs the same schtasks command again. This clobbers pci.sys’s access permissions so that it can be modified and overwritten by the user, thus opening the door to privileged code execution.

The exploit, as implemented, needs to know a valid username and password combo on the machine to proceed, it seems. It can be tweaked and rebuilt from its source code to target other system files beyond pci.sys. …
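
(ED: The hard-link step is the part a defender can see. A rough detection sketch, relying on the NTFS link counts that Python's os.stat reports: driver files under System32\drivers normally have exactly one hard link, so anything higher is worth a look. A heuristic, not a fix.)

```python
#!/usr/bin/env python3
"""Rough detection sketch for the hard-link trick used by this class of
exploit: flag driver files with more than one NTFS hard link. A link
count above 1 is unusual under System32\\drivers and worth investigating;
this is a heuristic, not a patch."""
import os
from pathlib import Path

DRIVERS = (Path(os.environ.get("SystemRoot", r"C:\Windows"))
           / "System32" / "drivers")

for driver in DRIVERS.glob("*.sys"):
    try:
        nlink = os.stat(driver).st_nlink  # number of hard links to the file
    except OSError:
        continue
    if nlink > 1:
        print(f"suspicious: {driver} has {nlink} hard links")
```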

Rampant Android bloatware a privacy and security hellscape

I spent the past week examining an AT&T Android phone. The bloatware was off the scale, as was the spyware. Removing these via ADB broke the system. Even installing a firewall broke the system: the firewall appeared to be detected, and the phone simply blocked calls even when the firewall was disabled (but still installed). I will next look at an Android One device to see if it is any better, as they claim to be pure Android with no bloatware. I am not just picking on AT&T; as the article and the PDF study that generated it point out, the practice is rampant.
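
For anyone repeating my experiment: the less destructive route than outright removal is adb's package manager, disabling packages for user 0 so they can be re-enabled later. A minimal sketch wrapping adb in Python; the package name is a placeholder, and as my experience above suggests, even this can break a carrier build, so go one package at a time.

```python
#!/usr/bin/env python3
"""Sketch: list and disable bloatware over adb. Uses `pm disable-user`
(reversible) rather than `pm uninstall`, since yanking packages outright
can break carrier builds. The package name below is a placeholder."""
import subprocess

def adb(*args: str) -> str:
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True, check=True).stdout

# -s lists preinstalled system packages; -3 would list user-installed ones.
system_pkgs = adb("pm", "list", "packages", "-s").splitlines()
print(f"{len(system_pkgs)} preinstalled packages")

# Reversible disable for the current user (0); undo with `pm enable`.
target = "com.example.carrier.bloat"  # placeholder package name
adb("pm", "disable-user", "--user", "0", target)
print(f"disabled {target}")
```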

Quote

The apps bundled with many Android phones are presenting threats to security and privacy greater than most users think.

This according to a paper (PDF) from university researchers in the US and Spain who studied the pre-installed software that 214 different vendors included in their Android devices. They found that everyone from the hardware builders to mobile carriers and third-party advertisers were loading products up with risky code.

“Our results reveal that a significant part of the pre-installed software exhibit potentially harmful or unwanted behavior,” the team from Universidad Carlos III de Madrid, Stony Brook University and UC Berkeley ICSI said.

“While it is known that personal data collection and user tracking is pervasive in the Android app ecosystem as a whole, we find that it is also quite prevalent in pre-installed apps.”

To study bundled software, the team crowdsourced firmware and traffic information from a field of 2,748 volunteers running 1,742 different models of devices from 130 different countries.

Across all those different vendors, carriers, and locales, one theme was found: Android devices are lousy with bloatware that not only takes up storage, but also harvests personal information and in some cases even introduces malware.

“We have identified instances of user tracking activities by preinstalled Android software – and embedded third-party libraries – which range from collecting the usual set of PII and geolocation data to more invasive practices that include personal email and phone call metadata, contacts, and a variety of behavioral and usage statistics in some cases,” the team wrote.

“We also found a few isolated malware samples belonging to known families, according to VirusTotal, with prevalence in the last few years (e.g., Xynyin, SnowFox, Rootnik, Triada and Ztorg), and generic trojans displaying a standard set of malicious behaviors (e.g., silent app promotion, SMS fraud, ad fraud, and URL click fraud).”
Beware the bloat

The device vendors themselves were not the only culprits. While the bundled apps can be installed by the vendors, bloatware can also be introduced by the carriers who add their own software to devices as well as third parties that may slip in additional advertising or tracking tools into otherwise harmless and useful software.

Addressing this issue could prove particularly difficult, the researchers note. With vendors and carriers alike looking to eke out a few extra bucks from every device sold, bundled apps and bolted-on advertising and tracking tools are highly attractive to companies, and absent pressure from a higher-up body, the bottom line will almost always win out.

To that end, they recommend someone steps in to offer audits of the supply chain and catch potential security and privacy threats in bundled software.

“Google might be a prime candidate for it given its capacity for licensing vendors and its certification programs,” the researchers note.

“Alternatively, in absence of self-regulation, governments and regulatory bodies could step in and enact regulations and execute enforcement actions that wrest back some of the control from the various actors in the supply chain.”

The study, An Analysis of Pre-installed Android Software, was written by Julien Gamba, Mohammed Rashed, Abbas Razaghpanah, Juan Tapiador, and Narseo Vallina-Rodriguez. It is being presented later this month at the 41st IEEE Symposium on Security and Privacy. ®