

Researcher: 90% Of ‘Smart’ TVs Can Be Compromised Remotely

Quote
“So yeah, that internet of broken things security we’ve spent the last few years mercilessly making fun of? It’s significantly worse than anybody imagined.”

So we’ve noted for some time how “smart” TVs, like most internet of things devices, have exposed countless users’ privacy courtesy of some decidedly stupid privacy and security practices. Several times now smart TV manufacturers have been caught storing and transmitting personal user data unencrypted over the internet (including in some instances living room conversations). And in some instances, consumers are forced to eliminate useful features unless they agree to have their viewing and other data collected, stored and monetized via these incredible “advancements” in television technology.

As recent Wikileaks data revealed, the lack of security and privacy standards in this space has proven to be a field day for hackers and intelligence agencies alike.

And new data suggests that these televisions are even more susceptible to attack than previously thought. While the recent Samsung Smart TV vulnerabilities exposed by Wikileaks (aka Weeping Angel) required an in-person delivery of a malicious payload via USB drive, more distant, remote attacks are unsurprisingly also a problem. Rafael Scheel, a security researcher working for Swiss cyber security consulting company Oneconsult, recently revealed that around 90% of smart televisions are vulnerable to a remote attack using rogue DVB-T (Digital Video Broadcasting – Terrestrial) signals.

This attack leans heavily on Hybrid Broadcast Broadband TV (HbbTV), an industry standard supported by most cable companies and set-top box manufacturers that helps integrate classic broadcast, IPTV, and broadband delivery systems. Using $50-$150 DVB-T transmitter equipment, an attacker can use this standard to exploit these smart (read: dumb) television sets on a pretty intimidating scale, argues Scheel:

“By design, any nearby TV will connect to the stronger signal. Since cable providers send their signals from tens or hundreds of miles away, attacks using rogue DVB-T signals could be mounted on nearby houses, a neighborhood, or small city. Furthermore, an attack could be carried out by mounting the DVB-T transmitter on a drone, targeting a specific room in a building, or flying over an entire city.”

Scheel says he has developed two exploits that, when loaded in the TV’s built-in browser, execute malicious code, and provide root access. Once compromised, these devices can be used for everything from DDoS attacks to surveillance. And because these devices are never really designed with consumer-friendly transparency in mind, users never have much of an understanding of what kind of traffic the television is sending and receiving, preventing them from noticing the device is compromised.
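Because there is no built-in way to see this traffic, the closest most users can get is watching the TV's packets from another machine on the same network. Below is a minimal sketch of that idea (not from Scheel's research), assuming the TV sits at a placeholder address of 192.168.1.50 and that your switch or access point lets you see its packets (for example via a mirror port). It uses the scapy library and needs root privileges to sniff.

```python
# Minimal sketch: passively log what a smart TV sends and receives by sniffing
# its traffic from another machine on the LAN. The TV's IP address below is a
# placeholder; adjust it for your own network.
from scapy.all import sniff, IP

TV_IP = "192.168.1.50"  # hypothetical address of the television

def log_packet(pkt):
    # Print source, destination, and size for every IP packet involving the TV.
    if IP in pkt:
        print(f"{pkt[IP].src} -> {pkt[IP].dst}  {len(pkt)} bytes")

# Capture only traffic to or from the TV; requires root privileges.
sniff(filter=f"host {TV_IP}", prn=log_packet, store=False)
```

Even this crude view makes it obvious when a "smart" set is talking to hosts you never asked it to talk to.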

Scheel also notes that the uniformity of smart TV OS design (uniformly bad, notes a completely different researcher this week) and the lack of timely updates mean crafting exploits for multiple sets is relatively easy, and firmware updates can often take months or years to arrive. Oh, and did we mention these attacks are largely untraceable?:

“But the best feature of his attack, which makes his discovery extremely dangerous, is the fact that DVB-T, the transmission method for HbbTV commands, is a uni-directional signal, meaning data flows from the attacker to the victim only. This makes the attack traceable only if the attacker is caught transmitting the rogue HbbTV signal in real-time. According to Scheel, an attacker can activate his HbbTV transmitter for one minute, deliver the exploit, and then shut it off for good.”

The Death of Smart Devices?

With today’s WikiLeaks release detailing how U.S. spy agencies can hack into phones, TVs and other “smart devices,” I am wondering whether this will slow down the mindless adoption of such devices by consumers.

…probably not; there is no shortage of mindlessness.

Among other disclosures that, if confirmed, would rock the technology world, the WikiLeaks release said that the C.I.A. and allied intelligence services had managed to bypass encryption on popular phone and messaging services such as Signal, WhatsApp and Telegram. According to the statement from WikiLeaks, government hackers can penetrate Android phones and collect “audio and message traffic before encryption is applied.”…

If C.I.A. agents did manage to hack the smart TVs, they would not be the only ones. Since their release, internet-connected televisions have been a focus for hackers and cybersecurity experts, many of whom see the sets’ ability to record and transmit conversations as a potentially dangerous vulnerability.

In early 2015, Samsung appeared to acknowledge the televisions posed a risk to privacy. The fine print terms of service included with its smart TVs said that the television sets could capture background conversations, and that they could be passed on to third parties.

The company also provided a remarkably blunt warning: “Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.”

Source: NYT Article Here

Google Voice, Siri, Alexa, IoT devices — Just say No

Cloud Pets! Your Family & Intimate Messages exposed to all sorts of Miscreants

… Now I know the average parent spends a good deal of their time on Facebook and other “look at me .. look at me” social media and couldn’t care less about hard-to-understand things like IT security.

BUT THESE ARE YOUR CHILDREN AND YOU NEED TO PROTECT THEM!

…sorry, as a parent, this stuff makes my blood boil. Look, parents: you scour the pedophile databases for your neighborhood, but leave the barn door open on the Internet. If you think governmental entities are going to protect you, you are only fooling yourselves. Companies peddling these things are about making the maximum amount of money at the lowest possible cost. They will **NOT** invest in expensive and complex security. Why? They do not have to. By the time the breach is discovered, they have made their millions. And there are absolutely no teeth in any governmental mandates to provide security, to the extent that any such mandates really exist in the first place.

Ok, on with the story!

The personal information of more than half a million people who bought internet-connected fluffy animals has been compromised.

The details, which include email addresses and passwords, were leaked along with access to profile pictures and more than 2m voice recordings of children and adults who had used the CloudPets stuffed toys.

The US company’s toys can connect over Bluetooth to an app to allow a parent to upload or download audio messages for their child.

Of course, the company denied it and shot the messenger.

CloudPets’s chief executive, Mark Myers, denied that voice recordings were stolen in a statement to NetworkWorld magazine. “Were voice recordings stolen? Absolutely not.” He added: “The headlines that say 2m messages were leaked on the internet are completely false.” Myers also told NetworkWorld that when Motherboard raised the issue with CloudPets, “we looked at it and thought it was a very minimal issue”. Myers added that a hacker would only be able to access the sound recordings if they managed to guess the password. When the Guardian tried to contact Myers on Tuesday, emails to CloudPets’s official contact address were returned as undeliverable.

Troy Hunt, owner of data breach monitoring service Have I Been Pwned, drew attention to the breach, which he first became aware of in mid-February. At that point, more than half a million records were being traded online. Hunt’s own source had first attempted to contact CloudPets in late December, but also received no response. While the database had been connected to the internet, it had more than 800,000 user records in it, suggesting that the data dump Hunt received is just a fraction of the full information potentially stolen.

The personal information was contained in a database connected directly to the internet, with no usernames or passwords preventing any visitor from accessing all the data. A week after Hunt’s contact first attempted to alert CloudPets, the original databases were deleted, and a ransom demand was left, and a week after that, no remaining databases were publicly accessible. CloudPets has not notified users of the hack.
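The exposed databases were reportedly MongoDB instances, and the basic failure is easy to test for on your own infrastructure: a server with authentication enforced will refuse to list its databases to an anonymous client. A rough sketch of such a check using the pymongo driver follows; the hostname is a placeholder, and this should only ever be run against systems you own.

```python
# Rough sketch: check whether a MongoDB instance you operate answers queries
# without credentials. The host below is hypothetical; run this only against
# infrastructure you own. Requires the pymongo package.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

HOST = "db.example.internal"  # hypothetical host

try:
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=3000)
    names = client.list_database_names()  # raises if auth is enforced
    print("WIDE OPEN - databases visible without credentials:", names)
except OperationFailure:
    print("Good: the server refused the unauthenticated request.")
except ServerSelectionTimeoutError:
    print("Could not reach the server (not exposed, or firewalled).")
```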

Hunt argues the security flaws should undercut the entire premise of connected toys. “It only takes one little mistake on behalf of the data custodian – such as misconfiguring the database security – and every single piece of data they hold on you and your family can be in the public domain in mere minutes.

“If you’re fine with your kids’ recordings ending up in unexpected places then so be it, but that’s the assumption you have to work on because there’s a very real chance it’ll happen. There’s no doubt whatsoever in my mind that there are many other connected toys out there with serious security vulnerabilities in the services that sit behind them. Inevitably, some would already have been compromised and the data taken without the knowledge of the manufacturer or parents.”

John Madelin, CEO at IT security experts RelianceACSN, echoes Hunt’s warnings. “Connected toys that are easily accessible by hackers are sinister. The CloudPets issue highlights the fact that manufacturers of connected devices really struggle to bake security in from the start. The 2.2m voice recordings were stored online, but not securely, along with email addresses and passwords of 800,000 users, this is unforgivable.”  Source: Guardian Article Here

Now for the technical part: here are some tidbits from the researcher. Full article here

“Clearly, CloudPets weren’t just ignoring my contact, they simply weren’t even reading their emails.”

There are references to almost 2.2 million voice recordings of parents and their children exposed by databases that should never have contained production data.

But then I dug a little deeper and took a look at the mobile app:

CloudPets app

This app communicates with a website at spiraltoys.s.mready.net which is on a domain owned by Romanian company named mReady. That URL is bound to a server with IP address 45.79.147.159, the exact same address the exposed databases were on. That’s a production website there too because it’s the one the mobile app is hitting so in other words, the test and staging databases along with the production website were all sitting on the one box. The most feasible explanation I can come up with for this is that one of those databases is being used for production purposes and the other non-production (a testing environment, for example).
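Hunt's observation that the app's backend and the exposed databases sat on the same box is the kind of thing anyone can reproduce with a simple DNS lookup. A minimal sketch follows, using the hostname and IP address quoted above; note they may no longer resolve the same way today.

```python
# Minimal sketch of the check described above: resolve the hostname the mobile
# app talks to and compare it with the address the exposed databases sat on.
import socket

APP_HOST = "spiraltoys.s.mready.net"  # backend hostname from the excerpt
DB_IP = "45.79.147.159"               # database server IP from the excerpt

resolved = socket.gethostbyname(APP_HOST)
print(f"{APP_HOST} resolves to {resolved}")
print("Same box as the exposed databases" if resolved == DB_IP
      else "Different host")
```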

Bonk Detecting WiFi Mattress

Quote

Researchers James Scott and Drew Spaniel point out in their report Rise of the Machines: The Dyn Attack Was Just a Practice Run [PDF] that IoT represents a threat that is only beginning to be understood.

The pair say the risk that regulation could stifle market-making IoT innovation (like the WiFi cheater-detection mattress) is outweighed by the need to stop feeding Shodan.

“National IoT regulation and economic incentives that mandate security-by-design are worthwhile as best practices, but regulation development faces the challenge of … security-by-design without stifling innovation, and remaining actionable, implementable and binding,” Scott and Spaniel say.

“Regulation on IoT devices by the United States will influence global trends and economies in the IoT space, because every stakeholder operates in the United States, works directly with United States manufacturers, or relies on the United States economy.

“Nonetheless, IoT regulation will have a limited impact on reducing IoT DDoS attacks as the United States government only has limited direct influence on IoT manufacturers and because the United States is not even in the top 10 countries from which malicious IoT traffic originates.” …


I have two comments:

To think any agency could actually do this correctly is laughable, given the complexity involved and the track record of the government. Hey, they cannot even stop the robocalls from the likes of “Card Redemption Services.” Additionally, the trove of treasure to be gained from leaks is far too valuable to both government and industry to limit it with some solid standard.

But the WiFi mattress idea may have legs (four of them, at least…). A WiFi-enabled mattress — why, with the addition of an accelerometer and a GUI to put in your social media credentials, your bedroom gymnastics can be posted instantly to your Facebook page. A whole new level in selfies! (…or, as I like to call it, the “look at me, look at me mommy” website that dumps all your info into the hungry jaws of advertisers.)

My Friend Cayla

…Or is it My Friend Spy Cayla? And what is the difference between this and Google Voice and Siri? Not much.

Quote:

The My Friend Cayla doll has been shown in the past to be hackable

An official watchdog in Germany has told parents to destroy a talking doll called Cayla because its smart technology can reveal personal data.

The warning was issued by the Federal Network Agency (Bundesnetzagentur), which oversees telecommunications.

Researchers say hackers can use an insecure Bluetooth device embedded in the toy to listen and talk to the child playing with it.

But the UK Toy Retailers Association said Cayla “offers no special risk”.

In a statement sent to the BBC, the TRA also said “there is no reason for alarm”.

The Vivid Toy group, which distributes My Friend Cayla, has previously said that examples of hacking were isolated and carried out by specialists. However, it said the company would take the information on board as it was able to upgrade the app used with the doll.

But experts have warned that the problem has not been fixed.

The Cayla doll can respond to a user’s question by accessing the internet. For example, if a child asks the doll “what is a little horse called?” the doll can reply “it’s called a foal”.

Media caption: Rory Cellan-Jones sees how Cayla, a talking child’s doll, can be hacked to say any number of offensive things.

A vulnerability in Cayla’s software was first revealed in January 2015.

Complaints have been filed by US and EU consumer groups.

The EU Commissioner for Justice, Consumers and Gender Equality, Vera Jourova, told the BBC: “I’m worried about the impact of connected dolls on children’s privacy and safety.”

The Commission is investigating whether such smart dolls breach EU data protection safeguards.

In addition to those concerns, a hack allowing strangers to speak directly to children via the My Friend Cayla doll has been shown to be possible.

The TRA said “we would always expect parents to supervise their children at least intermittently”.

It said the distributor Vivid had “restated that the toy is perfectly safe to own and use when following the user instructions”.

Privacy laws

Under German law, it is illegal to sell or possess a banned surveillance device. A breach of that law can result in a jail term of up to two years, according to German media reports.

Germany has strict privacy laws to protect against surveillance. In the 20th Century Germans experienced abusive surveillance by the state – in Nazi Germany and communist East Germany.

The warning by Germany’s Federal Network Agency came after student Stefan Hessel, from the University of Saarland, raised legal concerns about My Friend Cayla.

Mr Hessel, quoted by the German website Netzpolitik.org, said a bluetooth-enabled device could connect to Cayla’s speaker and microphone system within a radius of 10m (33ft). He said an eavesdropper could even spy on someone playing with the doll “through several walls”.

A spokesman for the federal agency told Sueddeutsche Zeitung daily that Cayla amounted to a “concealed transmitting device”, illegal under an article in German telecoms law (in German).

“It doesn’t matter what that object is – it could be an ashtray or fire alarm,” he explained.

Manufacturer Genesis Toys has not yet commented on the German warning.

Not so Smart using a Smart TV

As reported, Vizio’s smart TVs spied on you

Starting in 2014, Vizio made TVs that automatically tracked what consumers were watching and transmitted that data back to its servers. Vizio even retrofitted older models by installing its tracking software remotely. All of this, the FTC and AG allege, was done without clearly telling consumers or getting their consent.

What did Vizio know about what was going on in the privacy of consumers’ homes? On a second-by-second basis, Vizio collected a selection of pixels on the screen that it matched to a database of TV, movie, and commercial content. What’s more, Vizio identified viewing data from cable or broadband service providers, set-top boxes, streaming devices, DVD players, and over-the-air broadcasts. Add it all up and Vizio captured as many as 100 billion data points each day from millions of TVs.

Vizio then turned that mountain of data into cash by selling consumers’ viewing histories to advertisers and others. And let’s be clear: We’re not talking about summary information about national viewing trends. According to the complaint, Vizio got personal. The company provided consumers’ IP addresses to data aggregators, who then matched the address with an individual consumer or household. Vizio’s contracts with third parties prohibited the re-identification of consumers and households by name, but allowed a host of other personal details – for example, sex, age, income, marital status, household size, education, and home ownership. And Vizio permitted these companies to track and target its consumers across devices.

That’s what Vizio was up to behind the screen, but what was the company telling consumers? Not much, according to the complaint.

Source here
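For readers wondering how “a selection of pixels” can identify what you are watching, the rough idea behind automated content recognition (ACR) is: sample a few on-screen pixels each second, reduce them to a compact fingerprint, and look that fingerprint up in a reference database of known TV, movie, and ad content. The sketch below is purely conceptual and is not Vizio's actual system; real ACR uses perceptual fingerprints that tolerate noise and compression, whereas an exact hash is used here only to show the lookup flow, and all the data values are made up.

```python
# Conceptual sketch only (not Vizio's system): fingerprint a handful of pixel
# samples and look the fingerprint up in a reference database of known content.
import hashlib

# Hypothetical reference database: fingerprint -> title
REFERENCE_DB = {"a3f1deadbeef0001": "Some Movie (2016)"}

def fingerprint(pixels):
    """Collapse a fixed selection of (R, G, B) samples into a short hash."""
    raw = bytes(channel for px in pixels for channel in px)
    return hashlib.sha256(raw).hexdigest()[:16]

def identify(pixels):
    return REFERENCE_DB.get(fingerprint(pixels), "unknown content")

# One "second-by-second" sample of a few on-screen pixels (made-up values):
sample = [(12, 200, 33), (255, 255, 0), (8, 8, 8), (120, 64, 200)]
print(identify(sample))
```

Once the fingerprint matches a known title, tying it to an IP address, and from there to a household, is exactly the step the FTC complaint describes.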

Well, for their offense Vizio was slapped with a $2.2 million fine. Sounds like a lot, right? Well, as a colleague of mine observed, that works out to about 20 cents per TV. In other words, it was a great ROI for Vizio, and it points out how toothless the FTC really is.

So what to do? Turn off all the smart TV features and boycott Vizio (that said, Samsung and others appear to be just as bad). Better yet, unplug the TV from the Internet.

Some sites suggest that Roku and Apple streaming boxes front-ending your TV are better. I am not so sure: with the Roku, at least, you need to reset your ID often to clear the tracking, and there does not appear to be a permanent “kill” switch for this type of spyware crap.

I am toying with building my own set-top streaming device using a Raspberry Pi. If I do so, I will pay special attention to the privacy aspects of the embedded software I use and report findings here. Don’t hold your breath; time is at a premium here.

Anyway – welcome to the iDIoT. The Insecure Dumbed-down Internet of Things

Nick

Trump: Blame the Computers not Russia

Trump: “I think we ought to get on with our lives. I think that computers have complicated lives very greatly. The whole age of computer has made it where nobody knows exactly what is going on. We have speed, we have a lot of other things, but I’m not sure we have the kind of security we need,” Trump said, according to the press pool report. He was at the Mar-a-Lago resort at the time of making the statement. Source

Actually, I agree with Trump on this. We do not have the security we need. More fundamental to that, we do not have a mindset that puts computer security first. We bolt the front door and secure our physical premises with 24/7 monitoring services, yet we leave the barn door wide open for our online presence be it email, social media, browsing and shopping.

Privacy and security are treated as options when in fact they should come first. Imagine if the internet had been built from the ground up with privacy and security as the foundation layer. That would mean no web bugs, tracking cookies, targeted advertising, or privacy statements like Netflix’s (for example) that essentially say: let us exploit you and sell the experience, and if you do not agree, your only option is to cancel your subscription.

And home router manufacturers make appliances so easily hacked it is a joke. And Microsoft Windows to this day facilitates users running with administrator privileges in everyday use. And the IoT – the internet of things – has little if any security. And the mindset of the average consumer allows Amazon’s Alexa into their home. Completely secure, right? Yeah, sure. Why then, I ask, did this happen: “Amazon had been served with a search warrant in a murder case, as detectives in Bentonville, Ark., want to know what Alexa heard in the early morning hours of Nov. 22, 2015 — when Victor Collins was found dead in a hot tub behind a home after an Arkansas Razorbacks football game.” (Read more) Come on! Lock the door, arm yourself to the teeth, **but** let a device with 7 microphones listening to every sound in your house connected to ?? and easily hacked by ?? (you’ll never know!). By the way, the same goes for Siri and Google voice on your smart phones.

Don’t blame the Russians, blame yourself. Yes, the mindset needs to change indeed.

Happy New Year.

IoT worm can hack Philips Hue lightbulbs, spread across cities

Quote

Researchers have developed a proof-of-concept worm they say can rip through Philips Hue lightbulbs across entire cities – causing the insecure web-connected globes to flick on and off.

The software nasty, detailed in a paper titled IoT Goes Nuclear: Creating a ZigBee Chain Reaction [PDF], exploits hardcoded symmetric encryption keys to control devices over Zigbee wireless networks. This allows the malware to compromise a single light globe from up to 400 metres away.

The worm can then spread from a single smart bulb to those nearby thanks to the use of these skeleton keys.

The attack is the handiwork of researchers Eyal Ronen, Adi Shamir, and Achi-Or Weingarten of the Weizmann Institute of Science, Israel, along with Colin O’Flynn of Dalhousie University, Canada.

It triggered Philips to release a firmware patch for owners of its “Hue” connected bulbs. This is not without some risk as users must first set up the Philips Hue app in order to receive the automatic patches, and do so before attacks take place since the worm can easily override update attempts.

Comment: Why they call these “smart” devices is beyond me. Not having rock-solid security is pure stupidity. Oh wait, we are talking about IoT security.
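The core design flaw the researchers describe (every bulb shipping with the same symmetric key) is easy to illustrate in the abstract. The sketch below is conceptual only and is not the actual ZigBee Light Link protocol; the key and command are made up. The point is simply that whoever extracts the shared key from one device can produce commands every other device will accept as authentic.

```python
# Conceptual sketch: a single hardcoded symmetric key acts as a skeleton key.
# Extract it from any one bulb and you can forge valid commands for them all.
import hashlib
import hmac

HARDCODED_KEY = b"same-key-burned-into-every-bulb"  # hypothetical value

def sign_command(key: bytes, command: bytes) -> bytes:
    return hmac.new(key, command, hashlib.sha256).digest()

def bulb_accepts(key: bytes, command: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_command(key, command), tag)

# An attacker who recovers the shared key from one bulb can command them all:
forged = sign_command(HARDCODED_KEY, b"join-my-network")
print(bulb_accepts(HARDCODED_KEY, b"join-my-network", forged))  # True
```

Per-device keys would confine a compromise to the one bulb it was pulled from; the shared key is what turns a single compromise into a city-wide chain reaction.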

Hijack Your Wireless Mice to Hack Computers from Afar

wireless keyboard mice hacked

Quote

A flaw in the way several popular models of wireless mice and their corresponding receivers, the sticks or “dongles” that plug into a USB port and transmit data between the mouse and the computer, handle encryption could leave “billions” of computers vulnerable to hackers, security firm Bastille warned on Tuesday.

In short, a hacker standing within 100 yards of the victim’s computer and using a $30 long-range radio dongle and a few lines of code could intercept the radio signal between the victim’s mouse and the dongle plugged into the victim’s computer. Then this hacker could replace the signal with her own, and use her own keyboard to control victim’s computer.

….

For Rouland, these vulnerabilities, which affect non-Bluetooth mice produced by Logitech, Dell, Lenovo and other brands, are a harbinger of the near future of the Internet of Things when both companies and regular consumers will have hackable radio-enabled devices in their offices or homes. It’s worth noting that Bastille specializes in Internet of Things (IoT) security, and sells a product for corporations that promises to “detect and mitigate” threats from IoT devices across all the radio spectrum. That obviously means the firm has a vested interest in highlighting ways companies could get hacked.

This attack in particular, which Bastille has branded with the hashtag-friendly word “MouseJack,” builds on previous research done on hacking wireless keyboards. But in this case, the issue is that manufacturers don’t properly encrypt data transmitted between the mouse and the dongle, according to Bastille’s white paper.

This is Why People Fear the ‘Internet of Things’

IoT Spy

Quote

Imagine buying an internet-enabled surveillance camera, network attached storage device, or home automation gizmo, only to find that it secretly and constantly phones home to a vast peer-to-peer (P2P) network run by the Chinese manufacturer of the hardware. Now imagine that the geek gear you bought doesn’t actually let you block this P2P communication without some serious networking expertise or hardware surgery that few users would attempt.

This is the nightmare “Internet of Things” (IoT) scenario for any system administrator: The IP cameras that you bought to secure your physical space suddenly turn into a vast cloud network designed to share your pictures and videos far and wide. The best part? It’s all plug-and-play, no configuration necessary!

I first became aware of this bizarre experiment in how not to do IoT last week when a reader sent a link to a lengthy discussion thread on the support forum for Foscam, a Chinese firm that makes and sells security cameras. The thread was started by a Foscam user who noticed his IP camera was noisily and incessantly calling out to more than a dozen online hosts in almost as many countries.

Turns out, this Foscam camera was one of several newer models the company makes that come with peer-to-peer networking capabilities baked in. This fact is not exactly spelled out for the user (although some of the models listed do say “P2P” in the product name, others do not).

But the bigger issue with these P2P-based cameras is that while the user interface for the camera has a setting to disable P2P traffic (it is enabled by default), Foscam admits that disabling the P2P option doesn’t actually do anything to stop the device from seeking out other P2P hosts online (see screenshot below).

This is a concern because the P2P function built into Foscam P2P cameras is designed to punch through firewalls and can’t be switched off without applying a firmware update plus an additional patch that the company only released after repeated pleas from users on its support forum.

Yeah, this setting doesn’t work. P2P is still enabled even after you uncheck the box.

One of the many hosts that Foscam users reported seeing in their firewall logs was iotcplatform.com, a domain registered to Chinese communications firm ThroughTek Co., Ltd. Turns out, this domain has shown up in firewall logs for a number of other curious tinkerers who cared to take a closer look at what their network attached storage and home automation toys were doing on their network.

In January 2015, a contributing writer for the threat-tracking SANS Internet Storm Center wrote in IoT: The Rise of the Machines that he found the same iotcplatform.com domain called out in network traffic generated by a Maginon SmartPlug he’d purchased (smart plugs are power receptacles into which you plug lights and other appliances you may wish to control remotely).
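If you want to check whether something on your own network is quietly talking to a host like this, one low-tech approach is to resolve the suspect domain and then search your firewall or router logs for the resulting addresses. A rough sketch follows; the log path is a placeholder, and it assumes your firewall writes destination IPs as plain text.

```python
# Rough sketch: resolve a suspect domain and scan a firewall log for any of
# its addresses. The log path below is a placeholder - adjust it for whatever
# your own router or firewall actually writes.
import socket

SUSPECT_DOMAIN = "iotcplatform.com"
LOG_FILE = "/var/log/firewall.log"  # hypothetical log location

# A domain may map to several servers; collect every address returned.
_, _, addresses = socket.gethostbyname_ex(SUSPECT_DOMAIN)
print(f"{SUSPECT_DOMAIN} currently resolves to: {addresses}")

with open(LOG_FILE) as log:
    for line in log:
        if any(ip in line for ip in addresses):
            print("match:", line.rstrip())
```

It is crude (the P2P service uses many hosts, and addresses change), but it is exactly the kind of spot check that led the Foscam and SmartPlug owners above to notice the traffic in the first place.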

….

“The details about how P2P feature works which will be helpful for you understand why the camera need communicate with P2P servers,” Qu explained. “Our company deploy many servers in some regions of global world.” Qu further explained:

1. When the camera is powered on and connected to the internet, the camera will log in our main P2P server with fastest response and get the IP address of other server with low load and log in it. Then the camera will not connect the main P2P server.

2. When log in the camera via P2P with Foscam App, the app will also log in our main P2P server with fastest response and get the IP address of the server the camera connect to.

3. The App will ask the server create an independent tunnel between the app and the camera. The data and video will transfers directly between them and will not pass through the server. If the server fail to create the tunnel, the data and video will be forwarded by the server and all of them are encrypted.

4. Finally the camera will keep hearbeat connection with our P2P server in order to check the connection status with the servers so that the app can visit the camera directly via the server. Only when the camera power off/on or change another network, it will replicate the steps above.”

As I noted in a recent column IoT Reality: Smart Devices, Dumb Defaults, the problem with so many IoT devices is not necessarily that they’re ill-conceived, it’s that their default settings often ignore security and/or privacy concerns. I’m baffled as to why such a well-known brand as Foscam would enable P2P communications on a product that is primarily used to monitor and secure homes and offices.

Apparently I’m not alone in my bafflement. Nicholas Weaver, a senior researcher in networking and security for the International Computer Science Institute (ICSI), called the embedded P2P feature “an insanely bad idea” all around.

“It opens up all Foscam users not only to attacks on their cameras themselves (which may be very sensitive), but an exploit of the camera also enables further intrusions into the home network,” Weaver said.