
Feeling Safer?

The House voted to wipe away the FCC’s Internet privacy protections

S.J. Res. 34 would repeal safeguards that prohibit Internet service providers (ISPs) from sharing data, such as e-mails and web history, with third parties without user consent. It would also do away with transparency requirements, which mandate that ISPs provide easily accessible privacy notices to customers and advance notice prior to changes… Assuming Trump signs the measure, Internet providers will be freed from those obligations, which would otherwise have taken effect later this year. With this data, Internet providers can sell highly targeted ads, making them rivals to Google and Facebook, analysts say.

Internet providers also will be free to use customer data in other ways, such as selling the information directly to data brokers that target lucrative or vulnerable demographics.

“ISPs like Comcast, AT&T, and Charter will be free to sell your personal information to the highest bidder without your permission — and no one will be able to protect you,” wrote Gigi Sohn, a former FCC staffer who helped draft the privacy rules, in a recent blog post on the Verge.

Selling your data is merely one of the four ways in which Internet providers intend to make money off consumers. The others include selling you access to the Internet, as they have traditionally done; selling access to media content they’ve acquired by purchasing large entertainment companies; and selling advertising that directly targets you based on the data the provider has collected by watching how you use the Internet and what content you consume.

Sources: The Hill, Washington Post

Here is the Senate roll call of the miscreants who voted to repeal. Source: Senate.gov

Miscreants who voted for the bill (50): Alexander (R-TN), Barrasso (R-WY), Blunt (R-MO), Boozman (R-AR), Burr (R-NC), Capito (R-WV), Cassidy (R-LA), Cochran (R-MS), Collins (R-ME), Corker (R-TN), Cornyn (R-TX), Cotton (R-AR), Crapo (R-ID), Cruz (R-TX), Daines (R-MT), Enzi (R-WY), Ernst (R-IA), Fischer (R-NE), Flake (R-AZ), Gardner (R-CO), Graham (R-SC), Grassley (R-IA), Hatch (R-UT), Heller (R-NV), Hoeven (R-ND), Inhofe (R-OK), Johnson (R-WI), Kennedy (R-LA), Lankford (R-OK), Lee (R-UT), McCain (R-AZ), McConnell (R-KY), Moran (R-KS), Murkowski (R-AK), Perdue (R-GA), Portman (R-OH), Risch (R-ID), Roberts (R-KS), Rounds (R-SD), Rubio (R-FL), Sasse (R-NE), Scott (R-SC), Shelby (R-AL), Strange (R-AL), Sullivan (R-AK), Thune (R-SD), Tillis (R-NC), Toomey (R-PA), Wicker (R-MS), Young (R-IN)

Voted against (48): Baldwin (D-WI), Bennet (D-CO), Blumenthal (D-CT), Booker (D-NJ), Brown (D-OH), Cantwell (D-WA), Cardin (D-MD), Carper (D-DE), Casey (D-PA), Coons (D-DE), Cortez Masto (D-NV), Donnelly (D-IN), Duckworth (D-IL), Durbin (D-IL), Feinstein (D-CA), Franken (D-MN), Gillibrand (D-NY), Harris (D-CA), Hassan (D-NH), Heinrich (D-NM), Heitkamp (D-ND), Hirono (D-HI), Kaine (D-VA), King (I-ME), Klobuchar (D-MN), Leahy (D-VT), Manchin (D-WV), Markey (D-MA), McCaskill (D-MO), Menendez (D-NJ), Merkley (D-OR), Murphy (D-CT), Murray (D-WA), Nelson (D-FL), Peters (D-MI), Reed (D-RI), Sanders (I-VT), Schatz (D-HI), Schumer (D-NY), Shaheen (D-NH), Stabenow (D-MI), Tester (D-MT), Udall (D-NM), Van Hollen (D-MD), Warner (D-VA), Warren (D-MA), Whitehouse (D-RI), Wyden (D-OR)

Not voting (2): Isakson (R-GA), Paul (R-KY)

Ethiopia is Free to Spy on Americans in Their Own Homes – DC Court

I have always banged on about how no one in the U.S. seems to take IT security seriously, even in the face of daily news about hacks and breaches. But what should I expect when our “leaders” set such a fine example?

The United States Court of Appeals for the District of Columbia Circuit today held that foreign governments are free to spy on, injure, or even kill Americans in their own homes–so long as they do so by remote control. The decision comes in a case called Kidane v. Ethiopia, which we filed in February 2014.

Our client, who goes by the pseudonym Mr. Kidane, is a U.S. citizen who was born in Ethiopia and has lived here for over 30 years. From 2012 through 2013, his family’s home computer was attacked by malware that captured and then sent his every keystroke and Skype call to a server controlled by the Ethiopian government, likely in response to his political activity in favor of democratic reforms in Ethiopia. In a stunningly dangerous decision today, the D.C. Circuit ruled that Mr. Kidane had no legal remedy against Ethiopia for this attack, despite the fact that he was wiretapped at home in Maryland. The court held that, because the Ethiopian government hatched its plan in Ethiopia and its agents launched the attack that occurred in Maryland from outside the U.S., a law called the Foreign Sovereign Immunities Act (FSIA) prevented U.S. courts from even hearing the case.

The decision is extremely dangerous for cybersecurity. Under it, you have no recourse under law if a foreign government hacks into your car and drives it off the road, targets you for a drone strike, or even sends a virus to your pacemaker, as long as the government planned the attack on foreign soil. It flies in the face of the idea that Americans should always be safe in their homes, and that that safety should continue even if they speak out against foreign government activity abroad. Source: Here

Following the same logic, the U.S. would have no recourse against the supposed Russian hacking of the U.S. elections.

Kuwaiti Government will DNA Test Everyone

Quote

There’s a new law that will enforce DNA testing for everyone: citizens, expatriates, and visitors. They promise that the program “does not include genealogical implications or affects personal freedoms and privacy.”

I assume that “visitors” includes tourists, so presumably the entry procedure at passport control will now include a cheek swab. And there is nothing preventing the Kuwaiti government from sharing that information with any other government.

Despicable

Why the FBI’s request to Apple will affect civil rights for a generation


“No legal case applies in a vacuum, and in this case the FBI needs the precedent more than the evidence.”

Before posting the full article, I want to state that I fully support Apple in this matter. As a security professional, I agree with the author, Rich Mogull.

 

“What matters is if we have a right to the security and privacy of our devices, and of our communications, which are also under assault. If we have the right to tools to defend ourselves from the government and criminals alike. Yes, these tools will be sometimes used for the worst of crimes, but they’re also fundamental to our civil rights, freedom of discourse, and our ability to protect our digital lives from the less impactful, but far more frequent criminal attacks.”

 

Quote

On Tuesday, the United States District Court of California issued an order requiring Apple to assist the FBI in accessing a locked iPhone (PDF)—and not just any iPhone, but the iPhone 5c used by one of the San Bernardino shooters. The order is very clear: Build new firmware to enable the FBI to perform an unlimited, high speed brute force attack, and place that firmware on the device.

Apple is not only fighting the request, but posted a public letter signed by Tim Cook and linked on Apple’s front page.

Make no mistake: This is unprecedented, and the situation was deliberately engineered by the FBI and Department of Justice to force a showdown that could define the limits of our civil rights for generations to come. This is an issue with far-reaching implications well beyond a single phone, a single case, or even Apple itself.

To a career security professional, this case has chilling implications.

Why now?

I’ve been writing about Apple’s role in our digital civil rights since 2014, and specifically addressed why Apple is at the center of the battle over encryption last month on TidBITS. The short version is that Apple is one of the only companies with the technologies, high profile, and business model to both find themselves in the cross hairs, and take a strong position.

Make no mistake, Apple has a long history of complying with court orders and assisting law enforcement. Prior to iOS 8, they could extract data from devices. Even today, data in most of their online services (iCloud, excluding iMessage and FaceTime) can be provided upon legal request.

This case is different for multiple reasons:

  • Apple is being asked to specifically create new software to circumvent their security controls. They aren’t being asked to use existing capabilities, since those no longer work. The FBI wants a new version of the operating system designed to allow the FBI to brute force attack the phone.
  • The FBI is using a highly emotional, nationally infamous terrorism case as justification for the request.
  • The request refers to the All Writs Act, which is itself under scrutiny in a case in New York involving Apple. Federal Magistrate Judge James Orenstein of the Eastern District of New York is currently evaluating if the Act applies in these cases.

That’s why this is about far more than a single phone. Apple does not have the existing capability to assist the FBI. The FBI engineered a case where the perpetrators are already dead, but emotions are charged. And the law cited is under active legal debate within the federal courts.

The crux of the issue is this: should companies be required to build security-circumvention technologies to expose their own customers? Not “assist law enforcement with existing tools,” but “build new tools.”

The FBI Director has been clear that the government wants back doors into our devices, even though the former head of the NSA disagrees and supports strong consumer encryption. One reason Apple is likely fighting this case so publicly is that it is a small legal step from requiring new circumvention technology, to building such access into devices. The FBI wants the precedent far more than they need the evidence, and this particular case is incredibly high profile and emotional.

The results will, without question, establish precedent beyond one killer’s iPhone.

The technical details

The court order is quite specific. It applies only to one iPhone, and requests Apple create a new version of the firmware that eliminates the existing feature that erases the iPhone after 10 failed attempts at entering the passcode. It further asks Apple to allow passcode attempts to be performed as rapidly as possible.

Apple has been prompting users to choose longer and more complicated—and harder to crack—iPhone passcodes.

Beginning with iOS 8, devices are encrypted using a key derived from your passcode. This is combined with a hardware key specific to the device. Apple has no way of knowing or circumventing that key. On newer devices, the hardware key is embedded in the device and is not recoverable. Thus the passcode must be combined with the device key in a chip on the phone, and that chip rate-limits passcode attempts to make a brute force attack slower.
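To make that key-entanglement idea concrete, here is a minimal sketch. It is not Apple’s actual key-derivation code; the function name, the iteration count, and the way the device secret is represented are all assumptions made up for illustration.

import hashlib
import os

# Hypothetical device-unique secret burned into the hardware at manufacture.
# On a real phone this value never leaves the secure hardware; here it is
# just a stand-in so the sketch runs.
DEVICE_UID = os.urandom(32)

def derive_data_key(passcode: str, device_uid: bytes) -> bytes:
    """Entangle the user's passcode with a device-unique key.

    Because device_uid is only available inside the phone's hardware,
    the derived key cannot be brute-forced off-device: every guess has
    to be run through the phone itself.
    """
    # A deliberately slow KDF; the iteration count is what makes each
    # passcode guess take a noticeable amount of time.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode("utf-8"), DEVICE_UID, 200_000)

key = derive_data_key("1234", DEVICE_UID)
print(key.hex())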

Reading through the order, it seems the FBI thinks that a modified version of the operating system would allow them to engage in high-speed attacks, if the 10-tries limit was removed. The request indicates they likely can’t image the device and perform all the attacks on their own super-fast computers, due to that hardware key. With a four-character passcode the device could probably be cracked in hours. A six-character code might take days or weeks, and anything longer could take months or years.
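As a back-of-the-envelope check on those estimates, assume the erase and delay limits are gone and that each guess still costs roughly 80 ms of on-device key derivation. That per-guess figure is an assumption, not a measurement, so treat the results as orders of magnitude:

# Rough brute-force timing, assuming the erase/delay limits are removed
# and each passcode guess costs about 80 ms of on-device key derivation.
SECONDS_PER_GUESS = 0.080  # assumed, not measured

def worst_case(alphabet_size: int, length: int) -> str:
    seconds = (alphabet_size ** length) * SECONDS_PER_GUESS
    days = seconds / 86_400
    return f"{alphabet_size}^{length} guesses ~ {days:,.2f} days worst case"

print(worst_case(10, 4))   # 4-digit PIN: minutes to hours
print(worst_case(10, 6))   # 6-digit PIN: on the order of a day
print(worst_case(36, 6))   # 6-char lowercase+digits: years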

Dan Guido over at Trail of Bits posted a great explanation:

As many jailbreakers are familiar, firmware can be loaded via Device Firmware Upgrade (DFU) Mode. Once an iPhone enters DFU mode, it will accept a new firmware image over a USB cable. Before any firmware image is loaded by an iPhone, the device first checks whether the firmware has a valid signature from Apple. This signature check is why the FBI cannot load new software onto an iPhone on their own—the FBI does not have the secret keys that Apple uses to sign firmware.

This opens up a few questions. Could this work on newer devices with the enhanced encryption of the Secure Enclave? How can Apple pair the device and replace the firmware in the first place? Would they be using the shooter’s computer? An over-the-air update? Apple says that all devices (with or without the Secure Enclave) are vulnerable to this kind of attack, but declined to comment on the specific technical methods, a position I initially disagreed with but which, on reflection, is probably the right move, for reasons we will get to in a moment.

Thus the FBI wants a new version of iOS, signed by Apple and installed on the device, that removes limitations on their attempts to brute-force the password.
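That signature check is the whole ballgame: without Apple’s private signing key, the FBI cannot produce an image the phone will accept. Here is a minimal sketch of verify-before-load logic, using the third-party cryptography package and made-up key handling purely for illustration; it is not Apple’s boot chain.

# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Stand-in for the vendor's signing key pair; the private half never leaves the vendor.
vendor_private = Ed25519PrivateKey.generate()
vendor_public = vendor_private.public_key()

def sign_firmware(image: bytes) -> bytes:
    """Only the vendor can perform this step."""
    return vendor_private.sign(image)

def device_accepts(image: bytes, signature: bytes) -> bool:
    """What the device does in DFU mode before loading any image."""
    try:
        vendor_public.verify(signature, image)
        return True
    except InvalidSignature:
        return False

official = b"official firmware build"
print(device_accepts(official, sign_firmware(official)))             # True
print(device_accepts(b"third-party brute-force image", b"\x00" * 64))  # False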

Why this matters

Legal precedent is like a glacier, slowly building over time until it becomes nigh unstoppable. Major issues like this are first, and sometimes ultimately, decided on a series of small steps that build on each other. It’s the reason the NRA fights any attempts at gun control, since they fear a slow build, not a single small law.

The crux of this round of the encryption debate is whether companies should be forced to build tools to circumvent their customers’ security. If the answer is “yes,” it could be a small step to “should they just build these tools into the OS from the start?”

I have no doubt the FBI deliberately chose the highest-profile domestic terrorism case in possibly a decade. We, average citizens, want the FBI to stop this sort of evil. We don’t necessarily see this one case as applying to our lives and our rights. Why the big deal? What if the FBI could find the terrorists’ contacts and stop other attacks?

What matters is if we have a right to the security and privacy of our devices and communications.

But the truth is, no legal case applies in a vacuum. If this goes through, if Apple is forced to assist, it will open a floodgate of law enforcement requests. Then what about civil cases? Opening a phone to support a messy divorce and child custody battle? Or what about requests from other nations, especially places like China and the UAE that already forced BlackBerry and others to compromise the security of their customers?

And once the scale of these requests increases, as a security professional I guarantee the tools will leak, the techniques will be exploited by criminals, and our collective security will decline. It really doesn’t matter if it’s the iPhone 5c or 6s. It really doesn’t matter if this is about dead terrorists or a drug dealer. It doesn’t matter what specific circumvention Apple is being asked to create.

What matters is if we have a right to the security and privacy of our devices, and of our communications, which are also under assault. If we have the right to tools to defend ourselves from the government and criminals alike. Yes, these tools will be sometimes used for the worst of crimes, but they’re also fundamental to our civil rights, freedom of discourse, and our ability to protect our digital lives from the less impactful, but far more frequent criminal attacks.

This situation was engineered by the FBI and Department of Justice for the maximum impact and chances of success. Apple is fighting, and as a security professional it’s my obligation to support their position, and stronger security.

Comcast’s Xfinity home alarms can be disabled by wireless jammers


If you trust your ISP to provide Network and Physical Security, you have a fool for an adviser

Quote

Some intruders no longer need to come in through the kitchen window. Instead, they can waltz right in through the front door, even when a home is protected by an internet-connected alarm system. A vulnerability in Comcast’s Xfinity Home Security System could allow attackers to open protected doors and windows without triggering alarms, researchers with cybersecurity firm Rapid7 wrote in a blog post today.

The security bug relates to the way the system’s sensors communicate with their home base station. Comcast’s system uses the popular ZigBee protocol, but doesn’t maintain the proper checks and balances, allowing a given sensor to go minutes or even hours without checking in. The biggest hurdle in exploiting the vulnerability is finding or building a radio jammer, which is illegal under federal law. Attackers can also circumvent alarms with a software-based de-authentication attack on the ZigBee protocol itself, although that method requires more expertise. Attackers would also need to know a house was using the Xfinity system before attempting to break in, another major obstacle to exploiting the finding.

“The sensor had no memory of the break-in happening”

To prove his findings, Rapid7 researcher Phil Bosco simulated a radio jamming attack on one of his system’s armed window sensors. While jamming the sensor’s signal, he opened a monitored window. The sensor said it was armed, but it failed to detect anything out of the ordinary. But perhaps even more worrisome than the active intrusion itself is that the sensor had no memory of it happening and took anywhere from several minutes to three hours to come back online and reestablish communication with its home base.
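The missing “checks and balances” amount to a supervision window: a base station should treat a silent sensor as a fault, not as all quiet. Below is a rough sketch of that logic; the class and field names are hypothetical, not Comcast’s or ZigBee’s actual code.

import time

# Hypothetical supervision logic: if a sensor misses its check-in window,
# treat it as a tamper/jamming event instead of silently assuming all is well.
CHECKIN_INTERVAL = 60        # seconds between expected sensor heartbeats
GRACE_PERIOD = 30            # allowed slack before raising an alert

class BaseStation:
    def __init__(self):
        self.last_seen = {}  # sensor_id -> timestamp of last heartbeat

    def heartbeat(self, sensor_id: str) -> None:
        self.last_seen[sensor_id] = time.time()

    def check_sensors(self) -> None:
        now = time.time()
        for sensor_id, seen in self.last_seen.items():
            if now - seen > CHECKIN_INTERVAL + GRACE_PERIOD:
                # A jammed or dead sensor looks exactly like this.
                print(f"ALERT: sensor {sensor_id} silent for {now - seen:.0f}s")

station = BaseStation()
station.heartbeat("front-window")
station.check_sensors()  # quiet: the sensor just checked in

In the behavior Rapid7 observed, the system instead kept reporting “armed” for minutes to hours with no alert at all.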

If you let in the Feds, you’ll let in anyone

Quote

Juniper’s VPN security hole is proof that govt backdoors are bonkers

Juniper’s security nightmare gets worse and worse as experts comb the ScreenOS firmware in its old NetScreen firewalls.

Just before the weekend, the networking biz admitted there had been “unauthorized” changes to its software, allowing hackers to commandeer equipment and decrypt VPN traffic.

In response, Rapid7 reverse engineered the code, and found a hardwired password that allows anyone to log into the boxes as an administrator via SSH or Telnet.

Now an analysis of NetScreen’s encryption algorithms by Matthew Green, Ralf-Philipp Weinmann, and others, has found another major problem.

“For the past several years, it appears that Juniper NetScreen devices have incorporated a potentially backdoored random number generator, based on the NSA’s Dual EC DRBG algorithm,” wrote Green, a cryptographer at Johns Hopkins University in Maryland, US.

“At some point in 2012, the NetScreen code was further subverted by some unknown party, so that the very same backdoor could be used to eavesdrop on NetScreen connections. While this alteration was not authorized by Juniper, it’s important to note that the attacker made no major code changes to the encryption mechanism – they only changed parameters.”

The Dual EC DRBG random number generator was championed by the NSA, although researchers who studied the spec found that data encrypted using the generator could be decoded by clever eavesdroppers.

ScreenOS uses the Dual EC DRBG in its VPN technology, but as a secondary mechanism: it’s used to prime a fast 3DES-based number generator called ANSI X9.17, which is secure enough to kill off any cryptographic weaknesses introduced by Dual EC. Phew, right? Bullet dodged, huh?

No. In Juniper’s case there’s a problem. The encrypted communications can still be decoded using just 30 or so bytes of raw Dual EC output. And, lo, conveniently, there’s a bug in ScreenOS that will cause the firmware to leak that very sequence of numbers, undermining the security of the system.

Also, worryingly, ScreenOS does not use Dual EC with the special constant Q defined by the US government – it uses its own value.

Armed with those 30 bytes of seed data, and knowledge of Juniper’s weird Dual EC parameters, eavesdroppers can decrypt intercepted VPN traffic.

….
Green points out that this is a classic example of why backdoors are a bad idea all round. It’s something politicians and law enforcement officials may want to ponder the next time they call for mandatory government access to encrypted communications.

If they are going to build backdoors into encryption, such as by fiddling with the mathematics or sliding in convenient bugs, someone else is going to find the way in.
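Green’s broader point can be illustrated with a deliberately crude toy. The code below is not Dual EC (there is no elliptic-curve math here); it is a caricature of a generator whose output hands its internal state to anyone holding a trapdoor value, which is why a short leak of raw output, like ScreenOS’s 30 or so bytes, is game over.

import hashlib
import os

# Toy "backdoored" generator: NOT Dual EC, just a caricature of the failure
# mode. Each output block is the internal state XORed with a fixed mask that
# the generator's designer knows. Anyone holding the mask turns one leaked
# output block into the full state and can predict everything that follows.
TRAPDOOR_MASK = os.urandom(32)   # known only to the "designer"

class ToyBackdooredDRBG:
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def random_block(self) -> bytes:
        out = bytes(a ^ b for a, b in zip(self.state, TRAPDOOR_MASK))
        self.state = hashlib.sha256(self.state).digest()  # step the state
        return out

def predict_next(leaked_block: bytes) -> bytes:
    """What the trapdoor holder does with one leaked output block."""
    recovered_state = bytes(a ^ b for a, b in zip(leaked_block, TRAPDOOR_MASK))
    next_state = hashlib.sha256(recovered_state).digest()
    return bytes(a ^ b for a, b in zip(next_state, TRAPDOOR_MASK))

drbg = ToyBackdooredDRBG(os.urandom(32))
leak = drbg.random_block()                        # the raw output that escapes
print(predict_next(leak) == drbg.random_block())  # True: future output is known

Dual EC’s real trapdoor is the secret relationship between its two curve points, but the consequence is the same: whoever holds it can passively recover the generator’s state and decrypt the VPN traffic keyed from it.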

Hillary Clinton: Stop helping terrorists, Silicon Valley – weaken your encryption

Sorry Hillary, you are just proving yourself as clueless as ever.

There remains no evidence the attackers used encryption to communicate. The Paris police found unencrypted text messages concerning the attack, and a public Facebook post from one of the attackers has also been uncovered. Early reports that the attackers used PlayStation 4s to communicate surreptitiously have also been dismissed.
It now appears that the attackers communicated via unencrypted SMS and did little to hide their tracks. On top of that, as Ryan Gallagher at The Intercept notes, some of the attackers were already known to law enforcement and the intelligence community as possible problems. But they were still able to plan and carry out the attacks. Even more to the point, Gallagher points out that, after looking at the 10 most recent high-profile terrorist attacks, the same can be said for each of them. Sources: 1) 2)

Time and again throughout history, governments have used fear to strip people of their rights and increase their own power. This is no different. This is a failure of intelligence. These thugs are smart and use face-to-face communication more than anything else. Studies (read more) have shown that the US government’s massive hoovering of data has had the perverse effect of making it more blind to what is really happening, not less.

And I leave you with this: if the government weakens encryption, how long will it take for other miscreants to find the holes and exploit them for nefarious purposes? Not long. That is why corporations are pushing back. Hillary, if you want to lead, you had better do your homework instead of pandering to fear.

Let’s get physical…

Almost everyone worries about computer security in one way or another. Much is written about network security, of course, and lots of attention is paid to file and operating system security, often as it relates to viruses.

Security of your physical computing assets, however, is just as important, and perhaps more so. If someone has physical access to one of your assets, say a desktop computer, laptop, server, or router, then, given enough time, they can compromise that asset. Without sufficient physical security, all of the time, attention, and resources spent on other security matters can be wasted.

For example, someone who has physical access to a device that uses a hard drive can, in a sometimes surprisingly short period of time, clone that drive and then study it at their leisure at another location. You might not even know anyone was there. As another example, they could replace your passwords, giving themselves access to all or a portion of your environment while locking you out at the same time.
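To make that concrete: with physical access and a boot from external media, a raw copy of a drive is nothing more exotic than a chunked read-and-write. The sketch below is purely illustrative (the device and output paths are placeholders, and it needs root), and it is exactly the scenario that full-disk encryption and physical controls are meant to defeat.

import shutil

# Illustrative raw disk copy; /dev/sdX and the output path are placeholders.
# Run from a live USB with root privileges, this captures every byte of the
# target drive for offline analysis, unless the disk is encrypted.
SOURCE_DEVICE = "/dev/sdX"          # placeholder: the target drive
IMAGE_FILE = "/mnt/usb/clone.img"   # placeholder: external storage

def clone_disk(src: str, dst: str, chunk_size: int = 4 * 1024 * 1024) -> None:
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, length=chunk_size)

if __name__ == "__main__":
    clone_disk(SOURCE_DEVICE, IMAGE_FILE)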

It is a truism that you cannot hold out forever against someone who can gain physical access to your environment, and someone who has access can of course do untold damage simply by destroying computing assets. However, that does not mean that you cannot or should not take steps to protect your computing environment. As with all things computer security, there is a risk/cost trade-off (more on that in an upcoming blog).

So, if you haven’t done so in a while, it might be a good idea to take stock of the physical security of your computing environment, with respect to access and damage.

For access, you should consider questions like: Who has access to each device? Should the device be protected by some sort of physical barrier to prevent access? Are there multiple levels of physical security? For example, if you are in a large corporation, your first layer of physical security might be a locked building with a guard. A second layer might be locating critical computing assets on a floor that requires an elevator key to reach. Finally, you might put particularly critical devices in a room whose door uses a keypad, fingerprint reader, or even a retinal scanner, and log all access.

And don’t forget to consider that dropped ceiling or raised floor as you think about a high security area.  A locked door might not be as secure as you thought if someone can go over it via a dropped ceiling or under it via a raised floor.

Regarding damage, aside from a berserker with a sledgehammer, one big risk is fire. Of course, if the entire building burns down, it isn’t likely that your computing assets will survive. However, what if a smaller fire triggers the sprinkler system? What will happen to your computers?

Naturally, the list of things that good physical security might entail is a lot longer than a short article can cover.  But this article can perhaps serve as a jumping off point to a review of the physical security of your computing assets.

Security of credit cards using “chips”

As you may know, starting in October the credit card companies are changing the rules on credit card liability for transactions where the credit card is present at the location of the purchase.  The idea is to encourage merchants and financial institutions to adopt the “EMV” (Europay/MasterCard/Visa) “chip” credit cards.

The EMV cards are generally considered to be more secure, because the chip creates a unique transaction code for each transaction, whereas if someone manages to read the magnetic stripe on a traditional credit card (and acquires the 3 digit verification number), there is nothing to stop repeated use of that credit card.
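The difference can be sketched in a few lines. This is not the real EMV cryptogram algorithm (which involves issuer-derived keys and a defined data format); it is a simplified illustration of “static secret” versus “per-transaction MAC with a counter”.

import hmac
import hashlib
import os

# Magstripe model: the same static data is handed over every time, so a
# skimmed copy can be replayed indefinitely.
magstripe_track = b"PAN=4111111111111111;EXP=2610;CVV=123"

# Chip model (simplified, NOT the real EMV algorithm): the card holds a
# secret key and MACs each transaction together with a counter, so a
# captured cryptogram is useless for the next purchase.
card_key = os.urandom(16)
transaction_counter = 0

def chip_cryptogram(amount_cents: int, terminal_nonce: bytes) -> tuple[int, bytes]:
    global transaction_counter
    transaction_counter += 1
    payload = (transaction_counter.to_bytes(4, "big")
               + amount_cents.to_bytes(8, "big")
               + terminal_nonce)
    return transaction_counter, hmac.new(card_key, payload, hashlib.sha256).digest()

ctr, crypt = chip_cryptogram(2599, os.urandom(8))
print("counter:", ctr, "cryptogram:", crypt.hex()[:16], "...")
# Replaying (ctr, crypt) fails: the issuer has already seen that counter value.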

However, readers should be aware that there is a downside to the EMV chip technology. While a magnetic stripe can be easily read (say, after theft of a card, or by a physically compromised ATM), it cannot be read remotely. The chips in contactless-enabled cards, on the other hand, can be. Thus information on these new EMV cards can be read from a few inches away, even while the card is in your wallet or purse, by anyone passing near you. While some cards do not reveal account numbers this way (American Express claims to be in this group), others have been shown to do so.

So, what can be done to protect your new EMV credit and debit cards? The answer is to block radio frequencies (RF) from reaching the card when it is not in use. One suggestion is to wrap the cards in aluminum foil. While this is highly effective (providing what is known as a Faraday cage around the card), it is bulky and inconvenient. A less bulky and more convenient alternative is to place the cards in an RFID-shielding sleeve. These sleeves, available from retailers (Amazon, REI, and many others), are inexpensive, take up no appreciable space in your purse or wallet, and should serve as a reasonably effective Faraday cage for your cards: not only credit cards, but any card that uses this kind of chip technology, which might include educational institution cards, company security access cards, driver’s licenses, and others.

3D printed TSA Travel Sentry keys Open TSA Locks

Quote

Last year, the Washington Post published a story on airport luggage handling that contained unobscured images of the “backdoor” keys that the Transportation Security Administration, along with many other security agencies around the world, uses to gain access to luggage secured with Travel Sentry locks. These locks are designed to allow travelers to secure their suitcases and other baggage items against theft with a key or a combination, while still allowing the secured luggage to be opened for inspection—ostensibly by authorized persons only. The publication of the images effectively undermined the security of the Travel Sentry system, since the images were of sufficient quality to create real-world duplicate keys….

A few enterprising hackers (in the correct sense of the word “hacker”) have put together 3D printable model files of the TSA keys and uploaded them to a GitHub repository. Now, rather than needing specialized skills and tooling to craft a duplicate Travel Sentry key, all you need is a 3D printer that can handle STL files (and that’s basically any 3D printer)….

Is this disheartening news? Not particularly. Locking your luggage has never provided any real additional protection against all but the most casual theft attempts (as evidenced by the fact that almost any piece of luggage with a zipper can be opened with a screwdriver or a pen, regardless of how many locks are hanging off of it). The spread of 3D-printable Travel Sentry keys is more a criticism of any kind of “backdoor”—be it one that involves physical keys or mathematical ones. The backdoor itself undermines any and all trust in the system.

Anyone who thinks otherwise is fooling themselves.

Feeling safer yet?