
Just say “NO” to IoT

Panic as panic alarms meant to keep granny and little Timmy safe prove a privacy fiasco

QUOTE

Simple hack turns them into super secret spying tool

A GPS tracker used by elderly people and young kids has a security hole that could allow others to track and secretly record their wearers.

The white-label product is manufactured in China and then rebadged and rebranded by a range of companies in the UK, US, Australia and elsewhere, under names including Pebbell 2, OwnFone and SureSafeGo. Over 10,000 people in the UK use the devices.

It has an in-built SIM card that is used to pinpoint the location of the user, as well as to provide hands-free communication through a speaker and mic. As such, it is most commonly used by elderly people in case of a fall, and by children whose parents want to be able to know where they are and contact them if necessary.


But researchers at Fidus Information Security discovered, and revealed on Friday, that the system has a dangerous flaw: you can send a text message to the SIM and force it to reset. From there, a remote attacker can cause the device to reveal its location, in real time, as well as secretly turn on the microphone.

The flaw also enables a third party to turn on and off all the key features of the products such as emergency contacts, fall detection, motion detection and a user-assigned PIN. In other words, a critical safety device can be completely disabled by anybody in the world through a text message.


The flaw was introduced in an update to the product. Originally the portable fob communicated with a base station plugged into a phone line, an approach that offered no clear attack route. But in order to expand its range and usefulness, a SIM card was added so the fob no longer relied on a base station and would work over the mobile network.

The problem arises from the fact that the Chinese manufacturer built a PIN into the device so it would be locked to the telephone number programmed into it. Which would be fine, except that the PIN is disabled by default and is not required to reboot or reset the device.

And so it is possible to send a reset command to the device – if you know its SIM telephone number – and restore it to factory settings. At that point, the device is wide open and doesn’t need the PIN to make changes to the other functions. Which all amounts to remote access.
Random access memory

But how would you find out the device’s number? Well, the researchers got hold of one such device and its number and then ran a script where they sent messages to thousands of similar numbers to see if they hit anything.

They did. “Out of the 2,500 messages we sent, we got responses from 175 devices (7 per cent),” they wrote. “So this is 175 devices being used at the time of writing as an aid for vulnerable people; all identified at a minimal cost. The potential for harm is massive, and in less than a couple of hours, we could interact with 175 of these devices!”

The good news is that the flaw is easy to fix in new devices: simply assign each device a unique code and require that it be entered to reset the device. The device could also be limited to accepting calls and texts only from a list of approved contacts.
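A minimal sketch of that fix, with all names and message formats invented for illustration (the real tracker's firmware and command set are not public):

```python
# Hypothetical sketch: a per-device reset code plus a sender allowlist.
# Everything here is illustrative, not the actual tracker firmware.
import secrets


class TrackerDevice:
    def __init__(self, approved_numbers):
        # Unique, factory-assigned reset code for this one device.
        self.reset_code = secrets.token_hex(4)
        self.approved_numbers = set(approved_numbers)

    def handle_sms(self, sender, body):
        # Drop anything from outside the approved contact list.
        if sender not in self.approved_numbers:
            return "ignored"
        # A reset must carry this device's unique code.
        if body.startswith("RESET"):
            supplied = body[len("RESET"):].strip()
            if supplied != self.reset_code:
                return "reset refused"
            return "reset ok"
        return "command accepted"


device = TrackerDevice(approved_numbers={"+447700900001"})
print(device.handle_sms("+15550000000", "RESET"))         # unknown sender: ignored
print(device.handle_sms("+447700900001", "RESET wrong"))  # bad code: reset refused
print(device.handle_sms("+447700900001", f"RESET {device.reset_code}"))  # reset ok
```

With both checks in place, the mass-texting attack described above fails twice over: the attacker's number is not on the allowlist, and even an allowlisted number cannot reset the device without its unique code.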

But for the devices already on the market, the fix is not so easy: even if the default PIN is used to lock a device down, it can still be reset because the reset command does not require the PIN. The researchers say they have contacted the companies that use the device "to help them understand the risks posed by our findings" and that those companies are "looking into and are actively recalling devices." But they also note that some have not responded.

In short, poor design and the lack of a decent security audit prior to putting the updated product on the market has turned what is supposed to provide peace of mind into a potential stalking and listening nightmare.

Google Spies! The worst kind of microphone is a hidden microphone.

Google says the built-in microphone it never told Nest users about was ‘never supposed to be a secret’

Yeah right.
Quote

  • In early February, Google announced that Assistant would work with its home security and alarm system, Nest Secure.
  • The problem: Users didn’t know a microphone existed on their Nest security devices to begin with.
  • On Tuesday, a Google representative told Business Insider the company had made an “error.”
  • “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the person said. “That was an error on our part.”

In early February, Google announced that its home security and alarm system Nest Secure would be getting an update. Users, the company said, could now enable its virtual-assistant technology, Google Assistant.

The problem: Nest users didn’t know a microphone existed on their security device to begin with.

The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component in the Nest Secure offering, was never disclosed in any of the product material for the device.

On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”

Google says “the microphone has never been on and is only activated when users specifically enable the option.”

It also said the microphone was originally included in the Nest Guard for the possibility of adding new security features down the line, like the ability to detect broken glass.

Still, even if Google included the microphone in its Nest Guard device for future updates — like its Assistant integration — the news comes as consumers have grown increasingly wary of major tech companies and their commitment to consumer privacy.

For Google, the revelation is particularly problematic and brings to mind previous privacy controversies, such as the 2010 incident in which the company acknowledged that its fleet of Street View cars “accidentally” collected personal data transmitted over consumers’ unsecured WiFi networks, including emails.

How Nest, designed to keep intruders out of people’s homes, effectively allowed hackers to get in

What strikes me about this article is simply how illiterate people are when it comes to their digital security. "Only about 1 percent of Internet users, he said, use some kind of password manager." That is crazy. As the article states, credential stuffing is "at the root of probably 90 percent of the things we see happening." So many stolen passwords are being dumped online that if you do not use a different password for each site, you are simply asking for trouble. You can check for yourself here to see whether your email credentials have been part of a breach. The only way you will be able to use a different password for each site is to use a password manager.
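For the curious, that breach check can also be done programmatically via the Have I Been Pwned "Pwned Passwords" range API, which uses k-anonymity: only the first five hex characters of the password's SHA-1 hash ever leave your machine. A minimal sketch in Python (the endpoint is real; error handling is omitted):

```python
# Sketch of a k-anonymity password-breach check against the
# Pwned Passwords range API (https://api.pwnedpasswords.com/range/).
import hashlib
import urllib.request


def hash_prefix_suffix(password):
    # SHA-1 hex digest, split into the 5-char prefix sent to the API
    # and the 35-char suffix matched locally.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def match_count(range_body, suffix):
    # The API returns lines of "HASH_SUFFIX:COUNT" for the given prefix.
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


def pwned_count(password):
    prefix, suffix = hash_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return match_count(resp.read().decode("utf-8"), suffix)

# e.g. pwned_count("password123") returns a large breach count,
# while a long random password should return 0.
```

A count above zero means that password has appeared in a public breach dump and should never be reused anywhere.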

Another conclusion drawn from this article is how IoT companies purposely make their devices a hacker's dream by deliberately not employing better security methods. They do this to make their products idiot-proof. Reread that: idiot-proof. In other words, they think their users are idiots and do not want to burden them with nonsense like better security.

Quote

Tara Thomas thought her daughter was just having nightmares. “There’s a monster in my room,” the almost-3-year-old would say, sometimes pointing to the green light on the Nest Cam installed on the wall above her bed.

Then Thomas realized her daughter’s nightmares were real. In August, she walked into the room and heard pornography playing through the Nest Cam, which she had used for years as a baby monitor in their Novato, Calif., home. Hackers, whose voices could be heard faintly in the background, were playing the recording, using the intercom feature in the software. “I’m really sad I doubted my daughter,” she said.

Though it would be nearly impossible to find out who was behind it, a hack like this one doesn’t require much effort, for two reasons: Software designed to help people break into websites and devices has gotten so easy to use that it’s practically child’s play, and many companies, including Nest, have effectively chosen to let some hackers slip through the cracks rather than impose an array of inconvenient countermeasures that could detract from their users’ experience and ultimately alienate their customers.

The result is that anyone in the world with an Internet connection and rudimentary skills has the ability to virtually break into homes through devices designed to keep physical intruders out.

As hacks such as the one the Thomases suffered become public, tech companies are deciding between user convenience and potential damage to their brands. Nest could make it more difficult for hackers to break into Nest cameras, for instance, by making the log-in process more cumbersome. But doing so would introduce what Silicon Valley calls “friction” — anything that can slow down or stand in the way of someone using a product.

At the same time, tech companies pay a reputational price for each high-profile incident. Nest, which is part of Google, has been featured on local news stations throughout the country for hacks similar to what the Thomases experienced. And Nest’s recognizable brand name may have made it a bigger target. While Nest’s thermostats are dominant in the market, its connected security cameras trail the market leader, Arlo, according to Jack Narcotta, an analyst at the market research firm Strategy Analytics. Arlo, which spun out of Netgear, has around 30 percent of the market, he said. Nest is in the top five, he said.

Nik Sathe, vice president of software engineering for Google Home and Nest, said Nest has tried to weigh protecting its less security-savvy customers while taking care not to unduly inconvenience legitimate users to keep out the bad ones. “It’s a balance,” he said. Whatever security Nest uses, Sathe said, needs to avoid “bad outcomes in terms of user experience.”

Google spokeswoman Nicol Addison said Thomas could have avoided being hacked by implementing two-factor authentication, where in addition to a password, the user must enter a six-digit code sent via text message. Thomas said she had activated two-factor authentication; Addison said it had never been activated on the account.

The method used to spy on the Thomases is one of the oldest tricks on the Internet. Hackers essentially look for email addresses and passwords that have been dumped online after being stolen from one website or service and then check to see whether the same credentials work on another site. Like the vast majority of Internet users, the family used similar passwords on more than one account. While their Nest account had not been hacked, their password had essentially become public knowledge, thanks to other data breaches.

In recent years, this practice, which the security industry calls “credential stuffing,” has gotten incredibly easy. One factor is the sheer number of stolen passwords being dumped online publicly. It’s difficult to find someone who hasn’t been victimized. (You can check for yourself here.)

A new breed of credential-stuffing software programs allows people with little to no computer skills to check the log-in credentials of millions of users against hundreds of websites and online services such as Netflix and Spotify in a matter of minutes. Netflix and Spotify both said in statements that they were aware of credential stuffing and employ measures to guard against it. Netflix, for instance, monitors websites with stolen passwords and notifies users when it detects suspicious activity. Neither Netflix nor Spotify offers two-factor authentication.

But the potential for harm is higher for the 20 billion Internet-connected things expected to be online by next year, according to the research firm Gartner. Securing these devices has public safety implications. Hacked devices can be used in large-scale cyberattacks such as the “Dyn hack” that mobilized millions of compromised “Internet of things” devices to take down Twitter, Spotify and others in 2016.

In January, Japanese lawmakers passed an amendment to allow the government to essentially do what hackers do and scour the Internet for stolen passwords and test them to see whether they have been reused on other platforms. The hope is that the government can force tech companies to fix the problem.

Security experts worry the problem has gotten so big that there could be attacks similar to the Dyn hack, this time as a result of a rise in credential stuffing.

“They almost make it foolproof,” said Anthony Ferrante, the global head of cybersecurity at FTI Consulting and a former member of the National Security Council. He said the new tools have made it even more important to stop reusing passwords.

Tech companies have been aware of the threat of credential stuffing for years, but the way they think about it has evolved as it has become a bigger problem. There was once a sense that users should take responsibility for their security by refraining from using the same password on multiple websites. But as gigantic dumps of passwords have gotten more frequent, technology companies have found that it is not just a few inattentive customers who reuse the same passwords for different accounts — it’s the majority of people online.

Credential stuffing is “at the root of probably 90 percent of the things we see happening,” said Emmanuel Schalit, chief executive of Dashlane, a password manager that allows people to store unique, random passwords in one place. Only about 1 percent of Internet users, he said, use some kind of password manager.

“We saw this coming in late 2017, early 2018 when we saw these big credential dumps start to happen,” Google’s Sathe said. In response, Nest says, it implemented security measures around that time.

It did its own research into stolen passwords available on the Web and cross-referenced them with its records, using an encryption technique that ensured Nest could not actually see the passwords. In emails sent to customers, including the Thomases, it notified customers when they were vulnerable. It also tried to block log-in attempts that veered from the way legitimate users log into accounts. For instance, if a computer from the same Internet-protocol address attempted to log into 10 Nest accounts, the algorithm would block that address from logging into any more accounts.
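The per-IP blocking heuristic described above could be sketched roughly like this (a toy illustration, not Nest's actual code):

```python
# Toy sketch of the heuristic described in the article: once one IP
# address has attempted logins against too many distinct accounts,
# block it from logging into any more.
from collections import defaultdict

MAX_ACCOUNTS_PER_IP = 10  # threshold from the article's example


class LoginGuard:
    def __init__(self):
        self.accounts_seen = defaultdict(set)  # ip -> accounts tried
        self.blocked = set()

    def allow(self, ip, account):
        if ip in self.blocked:
            return False
        self.accounts_seen[ip].add(account)
        if len(self.accounts_seen[ip]) > MAX_ACCOUNTS_PER_IP:
            self.blocked.add(ip)
            return False
        return True


guard = LoginGuard()
for i in range(10):
    assert guard.allow("203.0.113.7", f"user{i}")  # first 10 accounts pass
assert not guard.allow("203.0.113.7", "user10")    # 11th distinct account trips the block
assert not guard.allow("203.0.113.7", "user0")     # and the IP stays blocked
```

Real credential stuffers route through large proxy pools precisely to stay under thresholds like this, which is why such filters catch only the lazier attacks.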

But Nest’s defenses were not good enough to stop several high-profile incidents throughout last year in which hackers used credential stuffing to break into Nest cameras for kicks. Hackers told a family in a San Francisco suburb, using the family’s Nest Cam, that there was an imminent missile attack from North Korea. Someone hurled racial epithets at a family in Illinois through a Nest Cam. There were also reports of hackers changing the temperature on Nest thermostats. And while only a handful of hacks became public, other users may not even be aware their cameras are compromised.

The company was forced to respond. “Nest was not breached,” it said in a January statement. “These recent reports are based on customers using compromised passwords,” it said, urging its customers to use two-factor authentication. Nest started forcing some users to change their passwords.

This was a big step for Nest because it created the kind of friction that technology companies usually try to avoid. “As we saw the threat evolve, we put more explicit measures in place,” Sathe said. Nest says only a small percentage of its millions of customers are vulnerable to this type of attack.

According to at least one expert, though, Nest users are still exposed. Hank Fordham, a security researcher, sat in his Calgary, Alberta, home recently and opened up a credential-stuffing software program known as Snipr. Instantly, Fordham said, he found thousands of Nest accounts that he could access. Had he wanted to, he would have been able to view cameras and change thermostat settings with relative ease.

While other similar programs have been around for years, Snipr, which costs $20 to download, is easier to use. Snipr provides the code required to check whether hundreds of the most popular platforms, such as “League of Legends” and Netflix, are accessible with a bunch of usernames and passwords — and those have become abundantly available all over the Internet.

Fordham, who had been monitoring the software and testing it for malware, noticed that after Snipr added functionality for Nest accounts last May, news reports of attacks started coming out. “I think the credential-stuffing community was made aware of it, and that was the dam breaking,” he said.

Nest said the company had never heard of Snipr, though it is generally aware of credential-stuffing software. It said it cannot be sure whether any one program drives more credential stuffing toward Nest products.

What surprises Fordham and other security researchers about the vulnerability of Nest accounts is the fact that Nest’s parent company, Google, is widely known for having the best methods for stopping credential-stuffing attacks. Google’s vast user base gives it data that it can use to determine whether someone trying to log into an account is a human or a robot.

The reason Nest has not employed all of Google’s know-how on security goes back to Nest’s roots, according to Nest and people with knowledge of its history. Founded in 2010 by longtime Apple executive Tony Fadell, Nest promised at the time that it would not collect data on users for marketing purposes.

In 2014, Nest was acquired by Google, which has the opposite business model. Google’s products are free or inexpensive and, in exchange, it profits from the personal information it collects about its users. The people familiar with Nest’s history said the different terms of service and technical challenges have prevented Nest from using all of Google’s security products. Google declined to discuss whether any of its security features were withheld because of incompatibility with Nest’s policies.

Under Alphabet, Google’s parent company, Nest employed its own security team. While Google shared knowledge about security, Nest developed its own software. In some ways, Nest’s practices appear to lag well behind Google’s. For instance, Nest still uses SMS messages for two-factor authentication. Using SMS is generally not recommended by security experts, because text messages can be easily hijacked by hackers. Google allows people to use authentication apps, including one it developed in-house, instead of text messages. And Nest does not use ReCaptcha, which Google acquired in 2009 and which can separate humans from automated software, such as what credential stuffers use to identify vulnerable accounts.
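To illustrate why authenticator apps are preferred over SMS, here is a minimal stdlib sketch of the TOTP scheme (RFC 6238) such apps implement; the code is never transmitted over a hijackable channel. The secret below is the RFC's published test key, not a real credential:

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    # Counter = number of time steps since the Unix epoch.
    counter = int((at_time if at_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test secret "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at_time=59))  # -> 287082 (RFC 6238 SHA-1 test vector)
```

Because the shared secret never travels after enrollment and each code expires in 30 seconds, there is no text message for a SIM-swapper to intercept.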

Sathe said Nest employed plenty of advanced techniques to stop credential stuffing, such as machine learning algorithms that “score” log-ins based on how suspicious they are and block them accordingly. “We have many layers of security in conjunction with what the industry would consider best practices,” he said.

When asked why Nest does not use ReCaptcha, Sathe cited difficulty in implementing it on mobile apps, and user convenience. “Captchas do create a speed bump for the users,” he said.

The person behind Snipr, who goes by the name “Pragma” and communicates via an encrypted chat, put the blame on the company. “I can tell you right now, Nest can easily secure all of this,” he said when asked whether his software had enabled people to listen in and harass people via Nest Cams. “This is like stupidly bad security, like, extremely bad.” He also said he would remove the capability to log into Nest accounts, which he said he added last May when one of his customers asked for it, if the company asked. Pragma would not identify himself, for fear of getting in “some kind of serious trouble.”

That’s when Fordham, the Calgary security researcher, became concerned. He noticed the addition of Nest on the dashboard and took it upon himself to start warning people who were vulnerable. He logged into their Nest cams and spoke to them, imploring them to change their passwords. One of those interactions ended up being recorded by the person on the other end of the camera. A local news station broadcast the video.

Fordham said he is miffed that it is still so easy to log into Nest accounts. He noted that Dunkin’ Donuts, after seeing its users fall victim to credential-stuffing attacks aimed at taking their rewards points, implemented measures, including captchas, that have helped solve the problem. “It’s a little alarming that a company owned by Google hasn’t done the same thing as Dunkin’ Donuts,” Fordham said.

A spokeswoman for Dunkin’ declined to comment.


According to people familiar with the matter, Google is in the process of converting Nest user accounts so that they utilize Google’s security methods via Google’s log-in, in part to deal with the problem. Addison said that Nest user data will not be subject to tracking by Google. She later said that she misspoke but would not clarify what that meant.

Knowing that the hack could have been stopped with a unique password or two-factor authentication has not made Thomas, whose camera was hacked, feel any better. “I continuously get emails saying it wasn’t their fault,” she said.

She unplugged the camera and another one she used to have in her son’s bedroom, and she doesn’t plan to turn them on again: “That was the solution.”

No guns or lockpicks needed to nick modern cars if they’re fitted with hackable ‘smart’ alarms

Vulnerable kit can immobilise motors and even unlock doors

Quote

Researchers have discovered that “smart” alarms can allow thieves to remotely kill your engine at speed, unlock car doors and even tamper with cruise control speed.

British infosec biz Pen Test Partners found that the Viper Smart Start alarm and products from vendor Pandora were riddled with flaws, allowing an attacker to steal a car fitted with one of the affected devices.

“Before we contacted them, the manufacturers had inadvertently exposed around 3 million cars to theft and their users to hijack,” said PTP in a blog post about their findings. The firm was inspired to start looking at Pandora’s alarms after noticing that the company boasted their security was “unhackable”.

Thanks to an unauthenticated corner of the service’s API and a simple parameter manipulation (an insecure direct object reference, IDOR), PTP said they were able to change a Viper Smart Start user account’s password and registered email address, giving them full control over the app and the car that the alarm system was installed on.

All they had to do was send a POST request to the API with the parameter “email” redefined to one of their choice in order to overwrite the legitimate owner’s email address, thus gaining access and control over the account.
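The root cause is a classic IDOR: the server trusted an identifier from the request body instead of the authenticated session. A toy sketch of the missing server-side check (all names are invented for illustration, not Viper's actual API):

```python
# Hypothetical sketch of the missing authorisation check: an account
# update must be validated against the authenticated session, never
# trusted from a request parameter. Names are illustrative only.
accounts = {
    "alice@example.com": {"password": "s3cret", "car": "VIN123"},
}


def update_email(session_user, requested_account, new_email):
    # The IDOR was trusting requested_account from the POST body.
    # The fix: compare it to the session identity before touching anything.
    if session_user != requested_account:
        raise PermissionError("not your account")
    accounts[new_email] = accounts.pop(requested_account)
    return new_email


# The legitimate owner may change their own address:
update_email("alice@example.com", "alice@example.com", "alice@new.example")

# An attacker naming someone else's account in the POST body is refused:
try:
    update_email("mallory@example.com", "alice@new.example", "mallory@evil.example")
except PermissionError as e:
    print(e)  # not your account
```

One equality check against the session is all that separated 3 million car alarms from account takeover.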

PTP said that in a live proof-of-concept demo they were able to geolocate a target car using the Viper Smart Start account’s inbuilt functionality, set off the alarm (causing the driver to stop and investigate), activate the car’s immobiliser once it was stationary and then remotely unlock the car’s doors, using the app’s ability to clone the key fob and issue RF commands from a user’s mobile phone.

Even worse, after further API digging, PTP researchers discovered a function in the Viper API that remotely turned off the car’s engine. The Pandora API also allowed researchers to remotely enable the car’s microphone, allowing nefarious people to eavesdrop on the occupants.

They also said: “Mazda 6, Range Rover Sport, Kia Quoris, Toyota Fortuner, Mitsubishi Pajero, Toyota Prius 50 and RAV4 – these all appear to have undocumented functionality present in the alarm API to remotely adjust cruise control speed!”

Both Pandora and Viper had fixed the offending IDORs before PTP went public. The infosec firm noted that modern alarm systems tend to have direct access to the CANbus, the heart of a modern electronic vehicle.

A year ago infosec researchers wailed that car security in general is poor, while others discovered that electronic control units (ECUs), small modular computers used for controlling specific vehicle routines that were done mechanically years ago, were vulnerable to certain types of hack even with the engine off and the car stationary.

World’s largest CCTV maker leaves at least 9 million cameras open to public viewing

Made in China. Maybe it also has an Ethernet hardware implant chip if all else fails. Hmmm, I see a trend here.

QUOTE

Xiongmai’s cloud portal opens sneaky backdoor into servers

Yet another IoT device vendor has been found to be exposing their products to attackers with basic security lapses.

This time, it’s Chinese surveillance camera maker Xiongmai named and shamed this week by researchers with SEC Consult for the poor security in the XMEye P2P Cloud service. Among the problems researchers pointed to were exposed default credentials and unsigned firmware updates that could be delivered via the service.

As a result, SEC Consult warns, the cameras could be compromised to do everything from spy on their owners, to carry out botnet instructions and even to serve as an entry point for larger network intrusions.

“Our recommendation is to stop using Xiongmai and Xiongmai OEM devices altogether,” SEC Consult said.

“The company has a bad security track record including its role in Mirai and various other IoT botnets. There are vulnerabilities that have been published in 2017, which are still not fixed in the most recent firmware version.”

Enabled by default, the P2P Cloud service allows users to remotely connect to devices via either a web browser or an iOS/Android app and control the hardware without needing a local network connection.

Unfortunately, SEC Consult explained, shortcomings in both the devices themselves and the service, such as unencrypted connections and default passwords (owners are not required to change the defaults when setting up the device), mean that in many cases accessing and compromising a camera could be a cinch.

Additionally, SEC Consult notes, the Xiongmai devices do not require that firmware updates be signed, meaning it would be possible for an attacker to install malware-laden firmware updates to build a botnet or stage further attacks on the local network.

“This is either possible by modifying the filesystems, contained in a firmware update, or modifying the ‘InstallDesc’ file in a firmware update file,” researchers explain.

“The ‘InstallDesc’ is a text file that contains commands that are executed during the update.”
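The missing control here is verify-before-install. Real firmware signing uses asymmetric keys (the device ships only a public key and the vendor keeps the private key), but HMAC with a device-held secret serves below purely as a compact stand-in to show the flow:

```python
# Sketch of firmware verification: refuse any image whose signature
# does not check out. HMAC is a simplified stand-in here; production
# devices should verify an asymmetric signature against a burned-in
# public key instead.
import hashlib
import hmac

DEVICE_KEY = b"factory-provisioned-secret"  # illustrative only


def sign_firmware(image: bytes) -> bytes:
    # Done by the vendor at build time.
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()


def install(image: bytes, signature: bytes) -> str:
    # Done on the device before anything in the image (including an
    # 'InstallDesc'-style command file) is allowed to execute.
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "rejected: bad signature"
    return "installed"


image = b"\x7fELF...official-build"
sig = sign_firmware(image)
print(install(image, sig))                    # installed
print(install(b"tampered InstallDesc", sig))  # rejected: bad signature
```

With this check in place, tampering with either the filesystem or the command file invalidates the signature and the update is refused.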

On top of it all, SEC Consult accuses Xiongmai of a pattern of ignoring security warnings and failing to take basic precautions.

The research house claims that not only were its latest warnings to the company ignored, but Xiongmai has a history of bad security going all the way back to its days as fodder for the notorious Mirai botnet. As such, the researchers advise that companies stop using any OEM hardware based on Xiongmai hardware. The devices can be identified by their web interface, error page, or product pages advertising the XMEye service.

IoT Vacuum Spying

Quote

Vulnerabilities in a range of robot vacuum cleaners allow miscreants to access the gadgets’ camera, and remote-control the gizmos.

Security researchers at Positive Technologies (PT) this week disclosed that Dongguan Diqee 360 smart vacuum cleaners contain security flaws that hackers can exploit to snoop on people through the night-vision camera and mic, and take control of the Roomba rip-off.

Think of it as a handy little spy-on-wheels.

Internet of insecure Things: Software still riddled with security holes

Quote

An audit of the security of IoT mobile applications available on official stores has found that tech to safeguard the world of connected things remains outstandingly mediocre.

Pradeo Security put a representative sample of 100 iOS and Android applications developed to manage connected objects (heaters, lights, door-locks, baby monitors, CCTV etc) through their paces.

Researchers at the mobile security firm found that around one in seven (15 per cent) applications sourced from the Google Play and Apple App Store were vulnerable to takeover. Hijacking was a risk because these apps were discovered to be defenceless against bugs that might lend themselves to man-in-the-middle attacks.

Four in five of the tested applications carry vulnerabilities, with an average of 15 per application.

By John Leyden, The Register, 28 Mar 2018

Around one in 12 (8 per cent) of applications phoned home or otherwise connected to uncertified servers. “Among these, some [certificates] have expired and are available for sale. Anyone buying them could access all the data they receive,” Pradeo warns.

Pradeo’s team also discovered that the vast majority of the apps leaked the data they processed. Failings in this area were many and varied.

  • Application file content: 81 per cent of applications
  • Hardware information (device manufacturer, commercial name, battery status…): 73 per cent
  • Device information (OS version number…): 73 per cent
  • Temporary files: 38 per cent
  • Phone network information (service provider, country code…): 27 per cent
  • Video and audio records: 19 per cent
  • Files coming from app static data: 19 per cent
  • Geolocation: 12 per cent
  • Network information (IP address, MAC address, Wi-Fi connection state): 12 per cent
  • Device identifiers (IMEI): 8 per cent

Pradeo Security said it had notified the vendors involved about the security problems it uncovered in their kit.

Police say fridges could be turned into listening devices

Just say NO to IoT

Quote

Your fridge could be turned into a covert listening device by Queensland Police conducting surveillance.

The revelation was made during a Parliamentary committee hearing on proposed legislation to give police more powers to combat terrorism.

Police Commissioner Ian Stewart said technology was rapidly changing and police and security agencies could use devices already in place, and turn them into listening devices.

“It is not outside the realm that, if you think about the connected home that we now look at quite regularly where people have their security systems, their CCTV systems and their computerised refrigerator all hooked up wirelessly, you could actually turn someone’s fridge into a listening device,” Mr Stewart said.


Queensland Police Commissioner Ian Stewart said the proposed new laws were necessary to keep people safe.

“This is the type of challenge that law enforcement is facing in trying to keep pace with events and premises where terrorists may be planning, they may be gathering to discuss deployment in a tactical way and they may be building devices in that place.

“All of that is taken into account by these new proposed laws.”

The Counter-Terrorism and Other Legislation Amendment bill would give police more powers during and following attacks.

Researcher: 90% Of ‘Smart’ TVs Can Be Compromised Remotely

Quote
“So yeah, that internet of broken things security we’ve spent the last few years mercilessly making fun of? It’s significantly worse than anybody imagined.”

So we’ve noted for some time how “smart” TVs, like most internet of things devices, have exposed countless users’ privacy courtesy of some decidedly stupid privacy and security practices. Several times now smart TV manufacturers have been caught storing and transmitting personal user data unencrypted over the internet (including in some instances living room conversations). And in some instances, consumers are forced to eliminate useful features unless they agree to have their viewing and other data collected, stored and monetized via these incredible “advancements” in television technology.

As recent Wikileaks data revealed, the lack of security and privacy standards in this space has proven to be a field day for hackers and intelligence agencies alike.

And new data suggests that these televisions are even more susceptible to attack than previously thought. While the recent Samsung Smart TV vulnerabilities exposed by Wikileaks (aka Weeping Angel) required an in-person delivery of a malicious payload via USB drive, more distant, remote attacks are unsurprisingly also a problem. Rafael Scheel, a security researcher working for Swiss cyber security consulting company Oneconsult, recently revealed that around 90% of smart televisions are vulnerable to a remote attack using rogue DVB-T (Digital Video Broadcasting – Terrestrial) signals.

This attack leans heavily on Hybrid Broadcast Broadband TV (HbbTV), an industry standard supported by most cable companies and set top manufacturers that helps integrate classic broadcast, IPTV, and broadband delivery systems. Using $50-$150 DVB-T transmitter equipment, an attacker can use this standard to exploit these not-so-smart television sets on a pretty intimidating scale, argues Scheel:

“By design, any nearby TV will connect to the stronger signal. Since cable providers send their signals from tens or hundreds of miles away, attacks using rogue DVB-T signals could be mounted on nearby houses, a neighborhood, or small city. Furthermore, an attack could be carried out by mounting the DVB-T transmitter on a drone, targeting a specific room in a building, or flying over an entire city.”

Scheel says he has developed two exploits that, when loaded in the TV’s built-in browser, execute malicious code, and provide root access. Once compromised, these devices can be used for everything from DDoS attacks to surveillance. And because these devices are never really designed with consumer-friendly transparency in mind, users never have much of an understanding of what kind of traffic the television is sending and receiving, preventing them from noticing the device is compromised.

Scheel also notes that the uniformity of smart TV OS design (uniformly bad, notes a completely different researcher this week) and the lack of timely updates mean crafting exploits for multiple sets is relatively easy, and firmware updates can often take months or years to arrive. Oh, and did we mention these attacks are largely untraceable?:

“But the best feature of his attack, which makes his discovery extremely dangerous, is the fact that DVB-T, the transmission method for HbbTV commands, is a uni-directional signal, meaning data flows from the attacker to the victim only. This makes the attack traceable only if the attacker is caught transmitting the rogue HbbTV signal in real-time. According to Scheel, an attacker can activate his HbbTV transmitter for one minute, deliver the exploit, and then shut it off for good.”

‘Amnesia’ IoT botnet feasts on year-old unpatched vulnerability

Why anyone would want to connect any home device to the internet at this stage in the game is beyond me.

“Hackers have brewed up a new variant of the IoT/Linux botnet “Tsunami” that exploits a year-old but as yet unresolved vulnerability.

The Amnesia botnet targets an unpatched remote code execution vulnerability publicly disclosed more than a year ago in DVR (digital video recorder) devices made by TVT Digital and branded by over 70 vendors worldwide.

The vulnerability affects approximately 227,000 devices around the world with Taiwan, the United States, Israel, Turkey, and India being the most exposed, specialists at Unit 42, Palo Alto Networks’ threat research unit, warn.

The Amnesia botnet is yet to be abused to mount a large-scale attack but the potential for harm is all too real.

“Amnesia exploits this remote code execution vulnerability by scanning for, locating, and attacking vulnerable systems,” the researchers warn. “A successful attack results in Amnesia gaining full control of the device. Attackers could potentially harness the Amnesia botnet to launch broad DDoS attacks similar to the Mirai botnet attacks we saw in Fall [autumn] 2016.”
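Unit 42's write-up notes that the botnet finds its victims by fingerprinting the distinctive "Cross Web Server" banner that these TVT-based DVRs return over HTTP. A defender can turn the same fingerprint around to hunt for exposed devices on their own network; a minimal sketch of the parse logic only (the scan loop and any exploit handling are deliberately left out):

```python
def looks_like_tvt_dvr(http_response: str) -> bool:
    """Check an HTTP response for the DVR's tell-tale Server header."""
    for line in http_response.split("\r\n"):
        if line.lower().startswith("server:") and "cross web server" in line.lower():
            return True
    return False

# Hypothetical banner from a vulnerable DVR on the local network:
banner = "HTTP/1.1 200 OK\r\nServer: Cross Web Server\r\nContent-Length: 0\r\n\r\n"
print(looks_like_tvt_dvr(banner))  # True
```

Any hit on your own address space is a device that should be firewalled off from the internet until TVT ships a fix.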

El Reg asked TVT Digital, based in Shenzhen, China, for a response to Palo Alto’s warning but is yet to receive a reply. We’ll update the story as and when we hear more.”