
Nick L

Security Panic as panic alarms meant to keep granny and little Timmy safe prove a privacy fiasco

QUOTE

Simple hack turns them into super secret spying tool

A GPS tracker used by elderly people and young kids has a security hole that could allow others to track and secretly record their wearers.

The white-label product is manufactured in China and then rebadged and rebranded by a range of companies in the UK, US, Australia and elsewhere, and sold under names including Pebbell 2, OwnFone and SureSafeGo. Over 10,000 people in the UK use the devices.

It has an in-built SIM card that is used to pinpoint the location of the user, as well as to provide hands-free communications through a speaker and mic. As such it is most commonly used by elderly people in case of a fall and on children whose parents want to be able to know where they are and contact them if necessary.

 


But researchers at Fidus Information Security discovered, and revealed on Friday, that the system has a dangerous flaw: you can send a text message to the SIM and force it to reset. From there, a remote attacker can cause the device to reveal its location, in real time, as well as secretly turn on the microphone.

The flaw also enables a third party to turn on and off all the key features of the products such as emergency contacts, fall detection, motion detection and a user-assigned PIN. In other words, a critical safety device can be completely disabled by anybody in the world through a text message.

 



The flaw was introduced in an update to the product: originally the portable fob communicated with a base station that was plugged into a phone line, an approach that offered no clear attack route. But in order to expand its range and usefulness, a SIM card was added so the device was no longer reliant on a base station and would work over the mobile network.

The problem arises from the fact that the Chinese manufacturer built a PIN into the device so it could be locked to the telephone number programmed into it. Which is fine, except the PIN is disabled by default and is currently not needed to reboot or reset the device.

And so it is possible to send a reset command to the device – if you know its SIM telephone number – and restore it to factory settings. At that point, the device is wide open and doesn’t need the PIN to make changes to the other functions. Which all amounts to remote access.
Random access memory

But how would you find out the device’s number? Well, the researchers got hold of one such device and its number and then ran a script where they sent messages to thousands of similar numbers to see if they hit anything.

They did. “Out of the 2,500 messages we sent, we got responses from 175 devices (7 per cent),” they wrote. “So this is 175 devices being used at the time of writing as an aid for vulnerable people; all identified at a minimal cost. The potential for harm is massive, and in less than a couple of hours, we could interact with 175 of these devices!”
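The enumeration approach the researchers describe can be sketched roughly as follows. Everything here is hypothetical for illustration: the phone numbers, the `STATUS` command, and the `send_sms` gateway are invented stand-ins, not the actual script Fidus ran.

```python
import re

def probe_range(known_number: int, span: int, send_sms):
    """Send a status query to numbers near a known device and collect replies."""
    responders = []
    for candidate in range(known_number - span, known_number + span):
        reply = send_sms(str(candidate), "STATUS")
        # A real tracker would answer with some recognisable status string;
        # anything else (or no reply) means no device on that number.
        if reply and re.search(r"(?i)status|loc", reply):
            responders.append(str(candidate))
    return responders

# Simulated SMS gateway: only a few numbers in the range are "devices".
FAKE_DEVICES = {"447700900123", "447700900168", "447700900199"}

def fake_send_sms(number, body):
    return "STATUS OK" if number in FAKE_DEVICES else None

hits = probe_range(447700900150, 60, fake_send_sms)
print(len(hits), "devices found")  # 3 devices found
```

The point of the sketch is how cheap this is: one known number, a loop over its neighbours, and every reply is a live device worn by a vulnerable person.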

The good news is that it is easy to fix in new devices: you would simply add a unique code to each device and require that it be used to reset the device. And you could limit the device to only receive calls or texts from a list of approved contacts.
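That suggested fix can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the vendor's firmware: the command names, the reset-code format and the `handle_sms` interface are all made up; the two ingredients from the text are the per-device secret required for a reset and the allowlist of approved numbers.

```python
import hmac

class TrackerConfig:
    def __init__(self, reset_code: str, approved_numbers: set):
        self.reset_code = reset_code      # unique per-device secret
        self.approved = approved_numbers  # only these numbers may command the device

def handle_sms(cfg: TrackerConfig, sender: str, body: str) -> str:
    if sender not in cfg.approved:
        return "IGNORED"                  # drop texts from unknown numbers entirely
    parts = body.split()
    if parts and parts[0].upper() == "RESET":
        supplied = parts[1] if len(parts) > 1 else ""
        # constant-time comparison so timing doesn't leak the code
        if hmac.compare_digest(supplied, cfg.reset_code):
            return "RESET OK"
        return "RESET DENIED"             # no valid code, no factory reset
    return "COMMAND OK"

cfg = TrackerConfig("83M2-K9", {"447700900001"})
print(handle_sms(cfg, "447700900999", "RESET 83M2-K9"))  # IGNORED
print(handle_sms(cfg, "447700900001", "RESET"))          # RESET DENIED
print(handle_sms(cfg, "447700900001", "RESET 83M2-K9"))  # RESET OK
```

Either check alone would have blocked the attack described above; the shipped devices had neither enabled.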

But for the devices already on the market, the fix is not so easy: even if the default PIN is used to lock a device down, it can still be reset because the reset command does not require the PIN. The researchers say they have contacted the companies that use the device “to help them understand the risks posed by our findings” and that those companies are “looking into and are actively recalling devices.” But they also note that some have not responded.

In short, poor design and the lack of a decent security audit prior to putting the updated product on the market has turned what is supposed to provide peace of mind into a potential stalking and listening nightmare.

Facebook’s third act: Mark Zuckerberg announces his firm’s next business model

Zuckerberg’s cynical attempt to change the narrative by implementing end-to-end encryption is simply a bad idea. It gets Facebook off the hook for moderating content (read: more profits), still allows it to sell ads and makes it nearly impossible for law enforcement to do their job. Hey Zuck, why not hang a sign out: criminals, pedophiles, gangs, repressive regimes, etc. – “all welcome here.” I have a better idea: Get Facebook off the planet.

Quote

If it works, the social-networking giant will become more private and more powerful

THE FIRST big overhaul for Facebook came in 2012-14. Internet users were carrying out ever more tasks on smartphones rather than desktop or laptop computers. Mark Zuckerberg opted to follow them, concentrating on Facebook’s mobile app ahead of its website, and buying up two fast-growing communication apps, WhatsApp and Instagram. It worked. Facebook increased its market valuation from around $60bn at the end of 2012 to—for a brief period in 2018—more than $600bn.

On March 6th Mr Zuckerberg announced Facebook’s next pivot. As well as its existing moneymaking enterprise, selling targeted ads on its public social networks, it is building a “privacy-focused platform” around WhatsApp, Instagram and Messenger. The apps will be integrated, he said, and messages sent through them encrypted end-to-end, so that even Facebook cannot read them. While it was not made explicit, it is clear what the business model will be. Mr Zuckerberg wants all manner of businesses to use its messaging networks to provide services and accept payments. Facebook will take a cut.

A big shift was overdue at Facebook given the privacy and political scandals that have battered the firm. Even Mr Zuckerberg, who often appears incapable of seeing the gravity of Facebook’s situation, seemed to grasp the irony of it putting privacy first. “Frankly we don’t currently have a strong reputation for building privacy protective services,” he noted.

Still, he intends to do it. Mr Zuckerberg claims that users will benefit from his plan to integrate its messaging apps into a single, encrypted network. The content of messages will be safe from the prying eyes of authoritarian snoops and criminals, as well as from Facebook itself. It will make messaging more convenient, and make profitable new services possible. But caution is warranted for three reasons.

The first is that Facebook has long been accused of misleading the public on privacy and security, so the potential benefits Mr Zuckerberg touts deserve to be treated sceptically. He is also probably underselling the benefits that running integrated messaging networks brings to his firm, even if they are encrypted so that Facebook cannot see the content. The metadata alone, ie, who is talking to whom, when and for how long, will still allow Facebook to target advertisements precisely, meaning its ad model will still function.
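The metadata point is easy to demonstrate. The records below are invented, but they show the principle: even if every message body is encrypted, the platform still sees who talks to whom, when, and for how long, and a simple tally of that already yields a contact graph suitable for targeting.

```python
from collections import Counter

# Hypothetical metadata records: (sender, recipient, timestamp, seconds).
# The message content itself never appears here.
metadata = [
    ("alice", "bob",   "2019-03-06T09:00", 120),
    ("alice", "bob",   "2019-03-06T21:15",  45),
    ("alice", "carol", "2019-03-07T08:30",  10),
    ("bob",   "carol", "2019-03-07T12:00", 300),
]

edges = Counter()       # how often each pair talks
talk_time = Counter()   # total seconds per pair
for sender, recipient, _ts, seconds in metadata:
    pair = tuple(sorted((sender, recipient)))
    edges[pair] += 1
    talk_time[pair] += seconds

strongest = max(talk_time, key=talk_time.get)
print("strongest tie:", strongest)   # ('bob', 'carol'), 300 seconds total
```

Scale that loop up to billions of messages and the ad model works fine without reading a single one of them.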

End-to-end encryption will also make Facebook’s business cheaper to run. Because it will be mathematically impossible to moderate encrypted communications, the firm will have an excuse to take less responsibility for content running through its apps, limiting its moderation costs.

If it can make the changes, Facebook’s dominance over messaging would probably increase. The newfound user-benefits of a more integrated Facebook might make it harder for regulators to argue that Mr Zuckerberg’s firm should be broken up.

Facebook’s plans in India provide some insight into the new model. It has built a payment system into WhatsApp, the country’s most-used messaging app. The system is waiting for regulatory approval. The market is huge. In the rest of the world, too, users are likely to be drawn in by the convenience of Facebook’s new networks. Mr Zuckerberg’s latest strategy is ingenious but may contain twists.

The Week in Tech: Facebook and Google Reshape the Narrative on Privacy

And from the bs department

QUOTE

…Stop me if you’ve heard this before: The chief executive of a huge tech company with vast stores of user data, and a business built on using it to target ads, now says his priority is privacy.

This time it was Google’s Sundar Pichai, at the company’s annual conference for developers. “We think privacy is for everyone,” he explained on Tuesday. “We want to do more to stay ahead of constantly evolving user expectations.” He reiterated the point in a New York Times Op-Ed, and highlighted the need for federal privacy rules.

The previous week, Mark Zuckerberg delivered similar messages at Facebook’s developer conference. “The future is private,” he said, and Facebook will focus on more intimate communications. He shared the idea in a Washington Post op-ed just weeks before, also highlighting the need for federal privacy rules.

Google went further than Facebook’s rough sketch of what this future looks like, unveiling tangible features: it will let users browse YouTube and Google Maps in “incognito mode,” will allow auto-deletion of Google history after a specified time and will make it easier to find out what the company knows about you, among other new privacy features.

Fatemeh Khatibloo, a vice president and principal analyst at Forrester, told The Times: “These are meaningful changes when it comes to the user’s expectations of privacy, but I don’t think this affects their business at all.” Google has to show that privacy is important, but it will still collect data.

What Google and Facebook are trying to do, though, is reshape the privacy narrative. You may think privacy means keeping hold of your data; they want privacy to mean they don’t hand data to others. (“Google will never sell any personal information to third parties,” Mr. Pichai wrote in his Op-Ed.)

Werner Goertz, a research director at Gartner, said Google had to respond with its own narrative. “It is trying to turn the conversation around and drive public discourse in a way that not only pacifies but also tries to get buy-in from consumers, to align them with its privacy strategy,” he said.

Right – pacify the masses with BS.

Politics of privacy law

Facebook and Google may share a voice on privacy. Lawmakers don’t.

Members of the Federal Trade Commission renewed calls at a congressional hearing on Wednesday to regulate big tech companies’ stewardship of user data, my colleague Cecilia Kang reported. That was before a House Energy and Commerce subcommittee, on which “lawmakers of both parties agreed” that such a law was required, The Wall Street Journal reported.

Sounds promising.

But while the F.T.C. was united in asking for more power to police violations and greater authority to impose penalties, there were large internal tensions about how far it should be able to go in punishing companies. And the lawmakers in Congress “appeared divided over key points that legislation might address,” according to The Journal. Democrats favor harsh penalties and want to give the F.T.C. greater power; Republicans worry that strict regulation could stifle innovation and hurt smaller companies.

Finding compromise will be difficult, and conflicting views risk becoming noise through which a clear voice from Facebook and Google can cut. The longer disagreement rages, the more likely it is that Silicon Valley defines a mainstream view that could shape rules.

Yeah — more lobbyists and political donations subverting democracy. The US should enact an EU-style GDPR now. And another thing: Zuckerberg’s cynical attempt to change the narrative by implementing end-to-end encryption is simply a bad idea. It gets Facebook off the hook for moderating content (read: more profits), still allows it to sell ads and makes it nearly impossible for law enforcement to do their job. Hey Zuck, why not hang a sign out: criminals, pedophiles, gangs, repressive regimes, etc. – “all welcome here.”

Now for Sale on Facebook: Looted Middle Eastern Antiquities

Another reason Facebook is a disgusting, dangerous corporation. A 5 billion dollar fine is nothing. It needs to be wound down, and Zuckerberg and Sandberg given long, hard prison terms for the evil and death they have caused.

Quote

Ancient treasures pillaged from conflict zones in the Middle East are being offered for sale on Facebook, researchers say, including items that may have been looted by Islamic State militants.

Facebook groups advertising the items grew rapidly during the upheaval of the Arab Spring and the ensuing wars, which created unprecedented opportunities for traffickers, said Amr Al-Azm, a professor of Middle East history and anthropology at Shawnee State University in Ohio and a former antiquities official in Syria. He has monitored the trade for years along with his colleagues at the Athar Project, named for the Arabic word for antiquities.

At the same time, Dr. Al-Azm said, social media lowered the barriers to entry to the marketplace. Now there are at least 90 Facebook groups, most in Arabic, connected to the illegal trade in Middle Eastern antiquities, with tens of thousands of members, he said.

They often post items or inquiries in the group, then take the discussion into chat or WhatsApp messaging, making it difficult to track. Some users circulate requests for certain types of items, providing an incentive for traffickers to produce them, a scenario that Dr. Al-Azm called “loot to order.”

Others post detailed instructions for aspiring looters on how to locate archaeological sites and dig up treasures.

Items for sale include a bust purportedly taken from the ancient city of Palmyra, which was occupied for periods by Islamic State militants and endured heavy looting and damage.

Other artifacts for sale come from Iraq, Yemen, Egypt, Tunisia and Libya. The majority do not come from museums or collections, where their existence would have been cataloged, Dr. Al-Azm said.

“They’re being looted straight from the ground,” he said. “They have never been seen. The only evidence we have of their existence is if someone happens to post a picture of them.”

Dr. Al-Azm and Katie A. Paul, the directors of the Athar Project, wrote in World Politics Review last year that the loot-to-order requests showed that traffickers were “targeting material with a previously unseen level of precision — a practice that Facebook makes remarkably easy.”

After the BBC published an article about the work of Dr. Al-Azm and his colleagues last week, Facebook said that it had removed 49 groups connected to antiquities trafficking.

 

Dr. Al-Azm said his team’s research indicated that the Facebook groups are run by an international network of traffickers who cater to dealers, including ones in the West. The sales are often completed in person in cash in nearby countries, he said, despite efforts in Turkey and elsewhere to fight antiquities smuggling.

He faulted Facebook for not heeding warnings about antiquities sales as early as 2014, when it might have been possible to delete the groups to stop, or at least slow, their growth.

Dr. Al-Azm countered that 90 groups were still up. But more important, he argued, Facebook should not simply delete the pages, which now constitute crucial evidence both for law enforcement and heritage experts.

In a statement on Tuesday, the company said it was “continuing to invest in people and technology to keep this activity off Facebook and encourage others to report anything they suspect of violating our Community Standards so we can quickly take action.”

A spokeswoman said that the company’s policy-enforcement team had 30,000 members and that it had introduced new tools to detect and remove content that violates the law or its policies using artificial intelligence, machine learning and computer vision.

Trafficking in antiquities is illegal across most of the Middle East, and dealing in stolen relics is illegal under international law. But it can be difficult to prosecute such cases.

Leila A. Amineddoleh, a lawyer in New York who specializes in art and cultural heritage, said that determining the provenance of looted items can be arduous, presenting an obstacle for lawyers and academics alike.



As the Islamic State expanded, it systematically looted and destroyed, using heavy machinery to dig into ancient sites that had scarcely been excavated before the war. The group allowed residents and other looters to take from heritage sites, imposing a 20 percent tax on their earnings.

Some local people and cultural heritage experts scrambled to document and save the antiquities, including efforts to physically safeguard them and to create 3-D models and maps. Despite their efforts, the losses were catastrophic.

Satellite images show invaluable sites, such as Mari and Dura-Europos in eastern Syria, pockmarked with excavation holes from looters. In the Mosul Museum in Iraq, the militants filmed themselves taking sledgehammers and drills to monuments they saw as idolatrous, acts designed for maximum propaganda value as the world watched with horror.

Other factions and people also profited from looting. In fact, the market was so saturated that prices dropped drastically for a time around 2016, Dr. Al-Azm said.

Around the same time, as Islamic State fighters scattered in the face of territorial losses, they took their new expertise in looting back to their countries, including Egypt, Tunisia and Libya, and to other parts of Syria, like Idlib Province, he added.

“This is a supply and demand issue,” Dr. Al-Azm said, repeating that any demand gives incentives to looters, possibly financing terrorist groups in the process.

Instead of simply deleting the pages, Dr. Al-Azm said, Facebook should devise a more comprehensive strategy to stop the sales while allowing investigators to preserve photos and records uploaded to the groups.

A hastily posted photo, after all, might be the only record of a looted object that is available to law enforcement or scholars. Simply deleting the page would destroy “a huge corpus of evidence” that will be needed to identify, track and recover looted treasures for years to come, he said.

Similar arguments have been made as social media sites, including YouTube, have deleted videos that show atrocities committed during the Syrian war that could be used to prosecute war crimes.

Facebook has also faced questions over its role as a platform for other types of illicit sales, including guns, poached ivory and more. It has generally responded by shutting down pages or groups in response to reports of illegal activity.

Some of the illicit items sold without proof of their ownership history, of course, could be fake. But given the volume of activity in the antiquities groups and the copious evidence of looting at famous sites, at least some of them are believed to be genuine.

The wave of items hitting the market will most likely continue for years. Some traffickers sit on looted antiquities for long periods, waiting for attention to die down and sometimes forging documents about the items’ origins before offering them for sale.

Boycott is the only way to force social media giants to protect kids online

About a month back I got into an email exchange with a mother who had invited one of my children to a birthday party. I said OK but asked that no pictures be posted to social media. I explained my reasoning. She said I was crazy and would damage my children (among other things). I responded with advice from several reputable sources. No matter. Suffice it to say, there was no birthday attendance.

I was never sure why she reacted this way. It was almost like I asked an addict to go cold turkey. Maybe that’s it. She is addicted.

A public boycott of social media may be the only way to force companies to protect children from abuse, the country’s leading child protection police officer has said.
QUOTE

Simon Bailey, the National Police Chiefs’ Council lead on child protection, said tech companies had abdicated their duty to safeguard children and were only paying attention due to fear of reputational damage.

The senior officer, who is Norfolk’s chief constable, said he believed sanctions such as fines would be “little more than a drop in the ocean” to social media companies, but that the government’s online harms white paper could be a “game changer” if it led to effective punitive measures.

Bailey suggested a boycott would be one way to hit big platforms, which he believes have the technology and funds to “pretty much eradicate the availability, the uploading, and the distribution of indecent imagery”.

Despite the growing problem, Bailey said he had seen nothing so far “that has given me the confidence that companies that are creating these platforms are taking their responsibilities seriously enough”.

He told the Press Association: “Ultimately I think the only thing they will genuinely respond to is when their brand is damaged. Ultimately the financial penalties for some of the giants of this world are going to be an absolute drop in the ocean.

“But if the brand starts to become tainted, and consumers start to see how certain platforms are permitting abuse, are permitting the exploitation of young people, then maybe the damage to that brand will be so significant that they will feel compelled to do something in response.

“We have got to look at how we drive a conversation within our society that says ‘do you know what, we are not going to use that any more, that system or that brand or that site’ because of what they are permitting to be hosted or what they are allowing to take place.”

In every playground there is likely to be someone with pornography on their phone, Bailey said as he described how a growing number of young men are becoming “increasingly desensitised” and progressing to easily available illegal material. Society is “not far off the point where somebody will know somebody” who has viewed illegal images, he said.

There has been a sharp rise in the number of images on the child abuse image database from fewer than 10,000 in the 1990s to 13.4m, with more than 100m variations of these.

Last month, the government launched a consultation on new laws proposed to tackle illegal content online. The white paper, which was revealed in the Guardian, legislated for a new statutory duty of care by social media firms and the appointment of an independent regulator, which is likely to be funded through a levy on the companies. It was welcomed by senior police and children’s charities.

Bailey believes if effective regulation is put in place it could free up resources to begin tackling the vaster dark web. He expressed concern that the spread of 4G and 5G networks worldwide would open up numerous further opportunities for the sexual exploitation of children.

Speaking at a conference organised by StopSO, a charity that works with offenders and those concerned about their sexual behaviour to minimise the risk of offending, of which Bailey is patron, he recently said that plans from Facebook’s Mark Zuckerberg to increase privacy on the social network would make life harder for child protection units. But he told the room: “There is no doubt that thinking is shifting around responsibility of tech companies. I think that argument has been won, genuinely.

“Of course, the proof is going to be in the pudding with just how ambitious the white paper is, how effective the punitive measures will be, or not.”

Andy Burrows, the National Society for the Prevention of Cruelty to Children’s associate head of child safety online, said: “It feels like social media sites treat child safeguarding crises as a bad news cycle to ride out, rather than a chance to make changes to protect children.”

Google Spies! The worst kind of microphone is a hidden microphone.

Google says the built-in microphone it never told Nest users about was ‘never supposed to be a secret’

Yeah right.
Quote

  • In early February, Google announced that Assistant would work with its home security and alarm system, Nest Secure.
  • The problem: Users didn’t know a microphone existed on their Nest security devices to begin with.
  • On Tuesday, a Google representative told Business Insider the company had made an “error.”
  • “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the person said. “That was an error on our part.”

In early February, Google announced that its home security and alarm system Nest Secure would be getting an update. Users, the company said, could now enable its virtual-assistant technology, Google Assistant.

The problem: Nest users didn’t know a microphone existed on their security device to begin with.

The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component in the Nest Secure offering, was never disclosed in any of the product material for the device.

On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”

Google says “the microphone has never been on and is only activated when users specifically enable the option.”

It also said the microphone was originally included in the Nest Guard for the possibility of adding new security features down the line, like the ability to detect broken glass.

Still, even if Google included the microphone in its Nest Guard device for future updates — like its Assistant integration — the news comes as consumers have grown increasingly wary of major tech companies and their commitment to consumer privacy.

For Google, the revelation is particularly problematic and brings to mind previous privacy controversies, such as the 2010 incident in which the company acknowledged that its fleet of Street View cars “accidentally” collected personal data transmitted over consumers’ unsecured WiFi networks, including emails.

How Nest, designed to keep intruders out of people’s homes, effectively allowed hackers to get in

What strikes me about this article is simply how illiterate people are when it comes to their digital security. “Only about 1 percent of Internet users, he said, use some kind of password manager.” That is crazy. As the article states, credential stuffing is “at the root of probably 90 percent of the things we see happening.” There are so many stolen passwords being dumped online that if you do not use a different password for each site, you are simply asking for trouble. You can check for yourself whether your email credentials have been part of a breach. And the only practical way to use a different password for each site is to use a password manager.

Another conclusion drawn from this article is how IoT companies purposely make their devices a hacker’s dream by deliberately not employing better security methods. They do this to make their products idiot-proof. Reread that: idiot-proof. In other words, they think their users are idiots and don’t want to burden them with nonsense like better security.

Quote

Tara Thomas thought her daughter was just having nightmares. “There’s a monster in my room,” the almost-3-year-old would say, sometimes pointing to the green light on the Nest Cam installed on the wall above her bed.

Then Thomas realized her daughter’s nightmares were real. In August, she walked into the room and heard pornography playing through the Nest Cam, which she had used for years as a baby monitor in their Novato, Calif., home. Hackers, whose voices could be heard faintly in the background, were playing the recording, using the intercom feature in the software. “I’m really sad I doubted my daughter,” she said.

Though it would be nearly impossible to find out who was behind it, a hack like this one doesn’t require much effort, for two reasons: Software designed to help people break into websites and devices has gotten so easy to use that it’s practically child’s play, and many companies, including Nest, have effectively chosen to let some hackers slip through the cracks rather than impose an array of inconvenient countermeasures that could detract from their users’ experience and ultimately alienate their customers.

The result is that anyone in the world with an Internet connection and rudimentary skills has the ability to virtually break into homes through devices designed to keep physical intruders out.

As hacks such as the one the Thomases suffered become public, tech companies are deciding between user convenience and potential damage to their brands. Nest could make it more difficult for hackers to break into Nest cameras, for instance, by making the log-in process more cumbersome. But doing so would introduce what Silicon Valley calls “friction” — anything that can slow down or stand in the way of someone using a product.

At the same time, tech companies pay a reputational price for each high-profile incident. Nest, which is part of Google, has been featured on local news stations throughout the country for hacks similar to what the Thomases experienced. And Nest’s recognizable brand name may have made it a bigger target. While Nest’s thermostats are dominant in the market, its connected security cameras trail the market leader, Arlo, according to Jack Narcotta, an analyst at the market research firm Strategy Analytics. Arlo, which spun out of Netgear, has around 30 percent of the market, he said. Nest is in the top five, he said.

Nik Sathe, vice president of software engineering for Google Home and Nest, said Nest has tried to weigh protecting its less security-savvy customers while taking care not to unduly inconvenience legitimate users to keep out the bad ones. “It’s a balance,” he said. Whatever security Nest uses, Sathe said, needs to avoid “bad outcomes in terms of user experience.”

Google spokeswoman Nicol Addison said Thomas could have avoided being hacked by implementing two-factor authentication, where in addition to a password, the user must enter a six-digit code sent via text message. Thomas said she had activated two-factor authentication; Addison said it had never been activated on the account.
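The two-factor scheme described here is simple enough to sketch. This is an illustration of the general SMS-code pattern, not Nest's actual implementation: after the password check, the service texts a random six-digit code and completes the login only if the user echoes it back.

```python
import secrets
import hmac

def issue_code() -> str:
    """Generate a random six-digit code, e.g. '042917', to send via SMS."""
    return f"{secrets.randbelow(1_000_000):06d}"

def verify_code(expected: str, supplied: str) -> bool:
    # Compare in constant time so response timing doesn't leak digits.
    return hmac.compare_digest(expected, supplied)

code = issue_code()
print(len(code))                       # 6
print(verify_code("493027", "493027"))  # True
print(verify_code("493027", "111111"))  # False
```

The stolen password alone is then useless to a credential-stuffing attacker, since the code goes to the victim's phone; which is exactly why Google points to two-factor authentication as the mitigation here.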

The method used to spy on the Thomases is one of the oldest tricks on the Internet. Hackers essentially look for email addresses and passwords that have been dumped online after being stolen from one website or service and then check to see whether the same credentials work on another site. Like the vast majority of Internet users, the family used similar passwords on more than one account. While their Nest account had not been hacked, their password had essentially become public knowledge, thanks to other data breaches.

In recent years, this practice, which the security industry calls “credential stuffing,” has gotten incredibly easy. One factor is the sheer number of stolen passwords being dumped online publicly. It’s difficult to find someone who hasn’t been victimized. (You can check for yourself here.)
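Password-exposure checkers such as Have I Been Pwned typically use a k-anonymity "range" lookup: only the first five characters of your password's SHA-1 hash are sent to the service, and the rest of the match happens locally, so the full hash never leaves your machine. A minimal sketch of the local half (function names are mine; the network call itself is omitted):

```python
import hashlib

def hibp_query_parts(password: str) -> tuple:
    """Split a password's SHA-1 hash into the 5-char prefix sent to the
    range API and the suffix that is only ever checked locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range_response(suffix: str, response_body: str) -> int:
    """Scan a range-API response ("SUFFIX:COUNT" per line) for our hash
    suffix; 0 means the password was not found in any known dump."""
    for line in response_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

The privacy property comes from the split itself: many passwords share the same five-character prefix, so the service cannot tell which one you asked about.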

A new breed of credential-stuffing software programs allows people with little to no computer skills to check the log-in credentials of millions of users against hundreds of websites and online services such as Netflix and Spotify in a matter of minutes. Netflix and Spotify both said in statements that they were aware of credential stuffing and employ measures to guard against it. Netflix, for instance, monitors websites with stolen passwords and notifies users when it detects suspicious activity. Neither Netflix nor Spotify offers two-factor authentication.

But the potential for harm is higher for the 20 billion Internet-connected things expected to be online by next year, according to the research firm Gartner. Securing these devices has public safety implications. Hacked devices can be used in large-scale cyberattacks such as the “Dyn hack” that mobilized millions of compromised “Internet of things” devices to take down Twitter, Spotify and others in 2016.

In January, Japanese lawmakers passed an amendment to allow the government to essentially do what hackers do and scour the Internet for stolen passwords and test them to see whether they have been reused on other platforms. The hope is that the government can force tech companies to fix the problem.

Security experts worry the problem has gotten so big that there could be attacks similar to the Dyn hack, this time as a result of a rise in credential stuffing.

“They almost make it foolproof,” said Anthony Ferrante, the global head of cybersecurity at FTI Consulting and a former member of the National Security Council. He said the new tools have made it even more important to stop reusing passwords.

Tech companies have been aware of the threat of credential stuffing for years, but the way they think about it has evolved as it has become a bigger problem. There was once a sense that users should take responsibility for their security by refraining from using the same password on multiple websites. But as gigantic dumps of passwords have gotten more frequent, technology companies have found that it is not just a few inattentive customers who reuse the same passwords for different accounts — it’s the majority of people online.

Credential stuffing is “at the root of probably 90 percent of the things we see happening,” said Emmanuel Schalit, chief executive of Dashlane, a password manager that allows people to store unique, random passwords in one place. Only about 1 percent of Internet users, he said, use some kind of password manager.
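What a password manager does under the hood is conceptually simple: generate a distinct password per site by drawing each character from a cryptographically secure random source, so a dump from one breached site tells attackers nothing about your other accounts. A minimal sketch (the length and alphabet here are arbitrary choices, not any particular manager's policy):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a unique random password the way a password manager
    does, using the OS's cryptographically secure RNG (never random.random,
    which is predictable)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A 20-character password over this 94-symbol alphabet has roughly 131 bits of entropy, far beyond what credential stuffing or brute force can touch.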

“We saw this coming in late 2017, early 2018 when we saw these big credential dumps start to happen,” Google’s Sathe said. In response, Nest says, it implemented security measures around that time.

It did its own research into stolen passwords available on the Web and cross-referenced them with its records, using an encryption technique that ensured Nest could not actually see the passwords. In emails sent to customers, including the Thomases, it notified customers when they were vulnerable. It also tried to block log-in attempts that veered from the way legitimate users log into accounts. For instance, if a computer from the same Internet-protocol address attempted to log into 10 Nest accounts, the algorithm would block that address from logging into any more accounts.
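Nest's actual algorithm is not public, but the IP-based block described above can be sketched as a counter of distinct accounts attempted per address: a single address probing many usernames is the signature of credential stuffing, which cycles through many accounts rather than many passwords. A toy illustration, not Nest's implementation:

```python
from collections import defaultdict

class IpAccountLimiter:
    """Block an IP address once it has attempted log-ins against too
    many *distinct* accounts, the hallmark of credential stuffing."""

    def __init__(self, max_accounts: int = 10):
        self.max_accounts = max_accounts
        self.accounts_seen = defaultdict(set)  # ip -> usernames attempted

    def allow(self, ip: str, username: str) -> bool:
        seen = self.accounts_seen[ip]
        if username not in seen and len(seen) >= self.max_accounts:
            return False  # refuse attempts on any *new* account from this IP
        seen.add(username)
        return True
```

A real deployment would also expire entries over time and track address ranges, since stuffing tools rotate through proxy IPs precisely to defeat per-address counters.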

But Nest’s defenses were not good enough to stop several high-profile incidents throughout last year in which hackers used credential stuffing to break into Nest cameras for kicks. Hackers told a family in a San Francisco suburb, using the family’s Nest Cam, that there was an imminent missile attack from North Korea. Someone hurled racial epithets at a family in Illinois through a Nest Cam. There were also reports of hackers changing the temperature on Nest thermostats. And while only a handful of hacks became public, other users may not even be aware their cameras are compromised.

The company was forced to respond. “Nest was not breached,” it said in a January statement. “These recent reports are based on customers using compromised passwords,” it said, urging its customers to use two-factor authentication. Nest started forcing some users to change their passwords.

This was a big step for Nest because it created the kind of friction that technology companies usually try to avoid. “As we saw the threat evolve, we put more explicit measures in place,” Sathe said. Nest says only a small percentage of its millions of customers are vulnerable to this type of attack.

According to at least one expert, though, Nest users are still exposed. Hank Fordham, a security researcher, sat in his Calgary, Alberta, home recently and opened up a credential-stuffing software program known as Snipr. Instantly, Fordham said, he found thousands of Nest accounts that he could access. Had he wanted to, he would have been able to view cameras and change thermostat settings with relative ease.

While other similar programs have been around for years, Snipr, which costs $20 to download, is easier to use. Snipr provides the code required to check whether hundreds of the most popular platforms, such as “League of Legends” and Netflix, are accessible with a bunch of usernames and passwords — and those have become abundantly available all over the Internet.

Fordham, who had been monitoring the software and testing it for malware, noticed that after Snipr added functionality for Nest accounts last May, news reports of attacks started coming out. “I think the credential-stuffing community was made aware of it, and that was the dam breaking,” he said.

Nest said the company had never heard of Snipr, though it is generally aware of credential-stuffing software. It said it cannot be sure whether any one program drives more credential stuffing toward Nest products.

What surprises Fordham and other security researchers about the vulnerability of Nest accounts is the fact that Nest’s parent company, Google, is widely known for having the best methods for stopping credential-stuffing attacks. Google’s vast user base gives it data that it can use to determine whether someone trying to log into an account is a human or a robot.

The reason Nest has not employed all of Google’s know-how on security goes back to Nest’s roots, according to Nest and people with knowledge of its history. Founded in 2010 by longtime Apple executive Tony Fadell, Nest promised at the time that it would not collect data on users for marketing purposes.

In 2014, Nest was acquired by Google, which has the opposite business model. Google’s products are free or inexpensive and, in exchange, it profits from the personal information it collects about its users. The people familiar with Nest’s history said the different terms of service and technical challenges have prevented Nest from using all of Google’s security products. Google declined to discuss whether any of its security features were withheld because of incompatibility with Nest’s policies.

Under Alphabet, Google’s parent company, Nest employed its own security team. While Google shared knowledge about security, Nest developed its own software. In some ways, Nest’s practices appear to lag well behind Google’s. For instance, Nest still uses SMS messages for two-factor authentication. Using SMS is generally not recommended by security experts, because text messages can be easily hijacked by hackers. Google allows people to use authentication apps, including one it developed in-house, instead of text messages. And Nest does not use ReCaptcha, which Google acquired in 2009 and which can separate humans from automated software, such as what credential stuffers use to identify vulnerable accounts.
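The authenticator apps mentioned above implement TOTP (RFC 6238): a six-digit code derived from a shared secret and the current 30-second time window, computed entirely on the device, so there is no SMS channel to hijack. A compact sketch using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: the code an authenticator app displays, derived
    from a base32 shared secret and the current 30-second window."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)
```

Because both ends derive the code from time and a secret that never travels over the network, intercepting a user's text messages gains an attacker nothing.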

Sathe said Nest employed plenty of advanced techniques to stop credential stuffing, such as machine learning algorithms that “score” log-ins based on how suspicious they are and block them accordingly. “We have many layers of security in conjunction with what the industry would consider best practices,” he said.
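Sathe does not describe the model, so the following is purely illustrative: a log-in scoring system assigns points to suspicious signals and blocks attempts above a threshold. The features, weights, and threshold below are invented for the sketch; a production system would learn them from data:

```python
def login_risk_score(attempt, history):
    """Toy heuristic risk score for a log-in attempt (0 = benign,
    higher = more suspicious). Real systems use trained models; this
    only illustrates the score-then-block idea."""
    score = 0
    if attempt["ip"] not in history["known_ips"]:
        score += 2   # address never seen for this account
    if attempt["device"] not in history["known_devices"]:
        score += 2   # unfamiliar browser/device fingerprint
    if attempt["failed_attempts_last_hour"] > 5:
        score += 3   # burst of failures suggests automation
    if attempt["accounts_tried_from_ip"] > 3:
        score += 5   # one IP probing many accounts: stuffing signature
    return score

BLOCK_THRESHOLD = 6

def should_block(attempt, history):
    return login_risk_score(attempt, history) >= BLOCK_THRESHOLD
```

The design choice worth noting is that no single signal blocks a log-in; it is the combination that crosses the threshold, which keeps friction low for legitimate users on a new laptop or network.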

When asked why Nest does not use ReCaptcha, Sathe cited difficulty in implementing it on mobile apps, and user convenience. “Captchas do create a speed bump for the users,” he said.

The person behind Snipr, who goes by the name “Pragma” and communicates via an encrypted chat, put the blame on the company. “I can tell you right now, Nest can easily secure all of this,” he said when asked whether his software had enabled people to listen in and harass people via Nest Cams. “This is like stupidly bad security, like, extremely bad.” He also said he would remove the capability to log into Nest accounts, which he said he added last May when one of his customers asked for it, if the company asked. Pragma would not identify himself, for fear of getting in “some kind of serious trouble.”

That’s when Fordham, the Calgary security researcher, became concerned. He noticed the addition of Nest on the dashboard and took it upon himself to start warning people who were vulnerable. He logged into their Nest cams and spoke to them, imploring them to change their passwords. One of those interactions ended up being recorded by the person on the other end of the camera. A local news station broadcast the video.

Fordham said he is miffed that it is still so easy to log into Nest accounts. He noted that Dunkin’ Donuts, after seeing its users fall victim to credential-stuffing attacks aimed at taking their rewards points, implemented measures, including captchas, that have helped solve the problem. “It’s a little alarming that a company owned by Google hasn’t done the same thing as Dunkin’ Donuts,” Fordham said.

A spokeswoman for Dunkin’ declined to comment.

According to people familiar with the matter, Google is in the process of converting Nest user accounts so that they utilize Google’s security methods via Google’s log-in, in part to deal with the problem. Addison said that Nest user data will not be subject to tracking by Google. She later said that she misspoke but would not clarify what that meant.

Knowing that the hack could have been stopped with a unique password or two-factor authentication has not made Thomas, whose camera was hacked, feel any better. “I continuously get emails saying it wasn’t their fault,” she said.

She unplugged the camera and another one she used to have in her son’s bedroom, and she doesn’t plan to turn them on again: “That was the solution.”

Sri Lanka Shut Down Social Media. My First Thought Was ‘Good.’

So was mine.

Quote

As a tech journalist, I’m ashamed to admit it. But this is how bad the situation has gotten.

This is the ugly conundrum of the digital age: When you traffic in outrage, you get death.

So when the Sri Lankan government temporarily shut down access to American social media services like Facebook and Google’s YouTube after the bombings there on Easter morning, my first thought was “good.”

Good, because it could save lives. Good, because the companies that run these platforms seem incapable of controlling the powerful global tools they have built. Good, because the toxic digital waste of misinformation that floods these platforms has overwhelmed what was once so very good about them. And indeed, by Sunday morning so many false reports about the carnage were already circulating online that the Sri Lankan government worried more violence would follow.

It pains me as a journalist, and someone who once believed that a worldwide communications medium would herald more tolerance, to admit this — to say that my first instinct was to turn it all off. But it has become clear to me with every incident that the greatest experiment in human interaction in the history of the world continues to fail in ever more dangerous ways.

In short: Stop the Facebook/YouTube/Twitter world — we want to get off.

Obviously, that is an impossible request and one that does not address the root cause of the problem, which is that humanity can be deeply inhumane. But that tendency has been made worse by tech in ways that were not anticipated by those who built it.

I noted this in my very first column for The Times almost a year ago, when I called social media giants “digital arms dealers of the modern age” who had, by sloppy design, weaponized pretty much everything that could be weaponized.

“They have weaponized civic discourse,” I wrote. “And they have weaponized, most of all, politics. Which is why malevolent actors continue to game the platforms and why there’s still no real solution in sight anytime soon, because they were built to work exactly this way.”

So it is no surprise that we are where we are now, with the Sri Lankan government closing off its citizens’ access to social media, fearing misinformation would lead to more violence. A pre-crime move, if you will, and a drastic one, since much critical information in that country flows over these platforms. Facebook and YouTube, and to a lesser extent services like Viber, are how news is distributed and consumed and also how it is abused. Imagine if you mashed up newspapers, cable, radio and the internet into one outlet in the United States and you have the right idea.

A Facebook spokesman stressed to me that “people rely on our services to communicate with their loved ones.” He told me the company is working with Sri Lankan law enforcement and trying to remove content that violates its standards.

But while social media had once been credited with helping foster democracy in places like Sri Lanka, it is now blamed for an increase in religious hatred. That justification was behind another brief block a year ago, aimed at Facebook, where the Sri Lankan government said posts appeared to have incited anti-Muslim violence.

“The extraordinary step reflects growing global concern, particularly among governments, about the capacity of American-owned networks to spin up violence,” The Times reported on Sunday.

Spin up violence indeed. Just a month ago in New Zealand, a murderous shooter apparently radicalized by social media broadcast his heinous acts on those same platforms. Let’s be clear, the hateful killer is to blame, but it is hard to deny that his crime was facilitated by tech.

In that case, the New Zealand government did not turn off the tech faucets, but it did point to those companies as a big part of the problem. After the attacks, neither Facebook nor YouTube could easily stop the ever-looping videos of the killings, which proliferated too quickly for their clever algorithms to keep up. One insider at YouTube described the experience to me as a “nightmare version of Whack-a-Mole.”

New Zealand, under the suffer-no-foolish-techies leadership of Jacinda Ardern, will be looking hard at imposing penalties on these companies for not controlling the spread of extremist content. Australia already passed such a law in early April. Here in the United States, our regulators are much farther behind, still debating whether it is a problem or not.

It is a problem, even if the manifestations of how these platforms get warped vary across the world. They are different in ways that make no difference and the same in one crucial way that does. Namely, social media has blown the lids off controls that have kept society in check. These platforms give voice to everyone, but some of those voices are false or, worse, malevolent, and the companies continue to struggle with how to deal with them.

In the early days of the internet, there was a lot of talk of how this was a good thing, getting rid of those gatekeepers. Well, they are gone now, and that means we need to have a global discussion involving all parties on how to handle the resulting disaster, well beyond adding more moderators or better algorithms.

Shutting social media down in times of crisis isn’t going to work. I raised that idea with a top executive at a big tech company I visited last week, during a discussion of what had happened in New Zealand.

“You can’t shut it off,” the executive said flatly. “It’s too late.”

True, but we can encourage businesses to stop advertising on these platforms, or even ban them from doing so. Then the platforms would wither and die, and good riddance.

Don’t Plummet with Summit: Another Insidious Zuckerberg Failure

…Public schools near Wichita had rolled out a web-based platform and curriculum from Summit Learning. The Silicon Valley-based program promotes an educational approach called “personalized learning,” which uses online tools to customize education. The platform that Summit provides was developed by Facebook engineers. It is funded by Mark Zuckerberg, Facebook’s chief executive, and his wife, Priscilla Chan, a pediatrician.

Under Summit’s program, students spend much of the day on their laptops and go online for lesson plans and quizzes, which they complete at their own pace. Teachers assist students with the work, hold mentoring sessions and lead special projects. The system is free to schools. The laptops are typically bought separately.

Then, students started coming home with headaches and hand cramps. Some said they felt more anxious. One child began having a recurrence of seizures. Another asked to bring her dad’s hunting earmuffs to class to block out classmates because work was now done largely alone.

“We’re allowing the computers to teach and the kids all looked like zombies,” said Tyson Koenig, a factory supervisor in McPherson, who visited his son’s fourth-grade class. In October, he pulled the 10-year-old out of the school.

Yes – personalized learning meant teachers sat doing nothing and kids sat working with their computers alone.

When this school year started, children got laptops to use Summit software and curriculums. In class, they sat at the computers working through subjects from math to English to history. Teachers told students that their role was now to be a mentor.

In September, some students stumbled onto questionable content while working in the Summit platform, which often directs them to click on links to the open web.

In one class covering Paleolithic history, Summit included a link to an article in The Daily Mail, the British newspaper, that showed racy ads with bikini-clad women. For a list of the Ten Commandments, two parents said their children were directed to a Christian conversion site.

Ms. Tavenner, Summit’s chief executive, said building a curriculum from the open internet meant that a Daily Mail article was fair game for lesson plans. “The Daily Mail is written at a very low reading level,” she said, later adding that it was a bad link to include. She added that as far as she was aware, Summit’s curriculum did not send students to a Christian conversion site.

Around the country, teachers said they were split on Summit. Some said it freed them from making lesson plans and grading quizzes so they had more time for individual students. Others said it left them as bystanders. Some parents said they worried about their children’s data privacy.

“Summit demands an extraordinary amount of personal information about each student and plans to track them through college and beyond,” said Leonie Haimson, co-chairwoman of the Parent Coalition for Student Privacy, a national organization.

Of course! That is Zuckerberg’s business. Get them hooked on isolation from real human interaction, develop their online dossiers early, and then sell, sell, sell their data to advertisers. Mark and Priscilla, do you really want to help society? Then walk off a California cliff now. You are the real merchants of death, mental and physical.

Full article here

On the Great Russian Heist of 2016 and Cyber Security

The Mueller report is a good reminder of how important it is to prevent foreign interference in American elections.

True. And how they did it is telling, too. They used social media and hacking. I will address hacking first. I have been focused on the cybersecurity business for nearly 20 years, and to this day it astounds me how lax businesses are about their information security. The reason is money. They do not want to spend it. Why? The cost of a breach is often cheaper than the cost of properly securing and monitoring a business. The solution? Easy: since businesses will not solve this on their own, we need strong data security and privacy laws at the federal level, coupled with stiff fines for breaches.

Next, social media. Here is where businesses can play a role. They need to wake up to the fact that their use of companies like Facebook, Twitter, and Google is part of the problem. They need to simply stop using those advertising platforms. Of course, that will not happen.

Consumers: at the federal level, mandate that all browsers ship with effective ad blockers and that browser privacy data, including cookies and cache, is dumped when the browser is closed. If consumers cannot be targeted or tracked, their value as ad revenue will dry up, and so will the likes of Facebook.

Other things need to be done as well, along the antitrust route for big tech and in counseling for social media addiction. Perhaps a minimum age of 18 for using any digital social media would facilitate real social development in our children and eliminate the ills of false digital socialization.

Anyway, on to the op-ed piece. It is very old news, of course. Unless you were living under a rock, Russian influence in the election was well known long ago. And I do not wish to pick only on Russia. Other countries are targeting us too, on many fronts. Should we ask them to stop? We can, but the result will be the same as when Obama did (with China, in that case): a temporary pause and then right back at it. In fact, we should thank Russia and China for showing us how weak, and in many cases nonexistent, our cyber defenses are. They did it because they could. I maintain, therefore, that the answer is federal action to guide and mandate information security.

Quote

Heist of 2016

The report of the special counsel Robert Mueller leaves considerable space for partisan warfare over the role of President Trump and his political campaign in Russia’s interference in the 2016 election. But one conclusion is categorical: “The Russian government interfered in the 2016 presidential election in sweeping and systematic fashion.”

That may sound like old news. The Justice Department’s indictment of 13 Russians and three companies in February 2018 laid bare much of the sophisticated Russian campaign to blacken the American democratic process and support the Trump campaign, including the theft of American identities and creation of phony political organizations to fan division on immigration, religion or race. The extensive hacks of Hillary Clinton’s campaign emails and a host of other dirty tricks have likewise been exhaustively chronicled.

But Russia’s interference in the campaign was the core issue that Mr. Mueller was appointed to investigate, and if he stopped short of accusing the Trump campaign of overtly cooperating with the Russians — the report mercifully rejects speaking of “collusion,” a term that has no meaning in American law — he was unequivocal on Russia’s culpability: “First, the Office determined that Russia’s two principal interference operations in the 2016 U.S. presidential election — the social media campaign and the hacking-and-dumping operations — violated U.S. criminal law.”

The first part of the report, which describes these crimes, is worthy of a close read. Despite a thick patchwork of redactions, it details serious and dangerous actions against the United States that Mr. Trump, for all his endless tweeting and grousing about the special counsel’s investigation, has never overtly confronted, acknowledged, condemned or comprehended. Culpable or not, he must be made to understand that a foreign power that interferes in American elections is, in fact, trying to distort American foreign policy and national security.

The earliest interference described in the report was a social media campaign intended to fan social rifts in the United States, carried out by an outfit funded by an oligarch known as “Putin’s chef” for the feasts he catered. Called the Internet Research Agency, the unit actually sent agents to the United States to gather information at one point. What the unit called “information warfare” evolved by 2016 into an operation targeted at favoring Mr. Trump and disparaging Mrs. Clinton. This included posing as American people or grass-roots organizations such as the Tea Party, anti-immigration groups, Black Lives Matter and others to buy political ads or organize political rallies.

At the same time, the report said, the cyberwarfare arm of the Russian army’s intelligence service opened another front, hacking the Clinton campaign and the Democratic National Committee and releasing reams of damaging materials through the front groups DCLeaks and Guccifer 2.0, and later through WikiLeaks. The releases were carefully timed for impact — emails stolen from the Clinton campaign chairman John Podesta, for example, were released less than an hour after the “Access Hollywood” tape damaging to Mr. Trump came out.

All this activity, the report said, was accompanied by the well documented efforts to contact the Trump campaign through business connections, offers of assistance to the campaign, invitations for Mr. Trump to meet Mr. Putin and plans for improved American-Russian relations. Both sides saw potential gains, the report said — Russia in a Trump presidency, the campaign from the stolen information. The Times documented 140 contacts between Mr. Trump and his associates and Russian nationals and WikiLeaks or their intermediaries. But the Mueller investigation “did not establish that members of the Trump campaign conspired or coordinated with the Russian government in its election interference activities.”

That is the part Mr. Trump sees as vindication, though the activities of his chaotic campaign team that the report describes are — at best — naïve. It is obviously difficult for this president to acknowledge that he was aided in his election by Russia, and there is no way to gauge with any certainty how much impact the Russian activities actually had on voters.

But the real danger that the Mueller report reveals is not of a president who knowingly or unknowingly let a hostile power do dirty tricks on his behalf, but of a president who refuses to see that he has been used to damage American democracy and national security.

Since the publication of the report, Vladimir Putin and his government have been crowing that they, too, are now somehow vindicated, joining the White House in creating the illusion that the investigation was all about “collusion” rather than a condemnation of criminal Russian actions. If their hope in a Trump presidency was to restore relations between the United States and Russia, and to ease sanctions, the Russians certainly failed, especially given the added sanctions ordered by Congress over Moscow’s interference.

But if the main intent was to intensify the rifts in American society, Russia backed a winner in Mr. Trump.

A perceived victory for Russian interference poses a serious danger to the United States. Already, several American agencies are working, in partnership with the tech industry, to prevent election interference going forward. But the Kremlin is not the only hostile government mucking around in America’s cyberspace — China and North Korea are two others honing their cyber-arsenals, and they, too, could be tempted to manipulate partisan strife for their ends.

That is something neither Republicans nor Democrats should allow. The two parties may not agree on Mr. Trump’s culpability, but they have already found a measure of common ground with the sanctions they have imposed on Russia over its interference in the campaign. Now they could justify the considerable time and expense of the special counsel investigation, and at the same time demonstrate that the fissure in American politics is not terminal, by jointly making clear to Russia and other hostile forces that the democratic process, in the United States and its allies, is strictly off limits to foreign clandestine manipulation, and that anyone who tries will pay a heavy price.