Boeing 737 Max Simulators Are in High Demand. They Are Flawed.

QUOTE

Since the two fatal crashes of the Boeing 737 Max, airlines around the world have moved to buy flight simulators to train their pilots.

They don’t always work.

Boeing recently discovered that the simulators could not accurately replicate the difficult conditions created by a malfunctioning anti-stall system, which played a role in both disasters. The simulators did not reflect the immense force that it would take for pilots to regain control of the aircraft once the system activated on a plane traveling at a high speed.

The mistake is likely to intensify concerns about Boeing, as it tries to regain credibility following the crashes of Lion Air and Ethiopian Airlines flights. In the months since the disasters, Boeing has faced criticism for serious oversights in the Max’s design. The anti-stall system was designed with a single point of failure. A warning light that Boeing thought was standard turned out to be part of a premium add-on.

“Every day, there is new news about something not being disclosed or something was done in error or was not complete,” said Dennis Tajer, a spokesman for the American Airlines pilots union and a 737 pilot.

The training procedures have been a source of contention. Boeing has maintained that simulator training is not necessary for the 737 Max and regulators do not require it, but many airlines bought the multimillion-dollar machines to give their pilots more practice. Some pilots want continuing simulator training.

The flight simulators, on-the-ground versions of cockpits that mimic the flying experience, are not made by Boeing. But Boeing provides the underlying information on which they are designed and built.

“Boeing has made corrections to the 737 Max simulator software and has provided additional information to device operators to ensure that the simulator experience is representative across different flight conditions,” said Gordon Johndroe, a Boeing spokesman. “Boeing is working closely with the device manufacturers and regulators on these changes and improvements, and to ensure that customer training is not disrupted.”

In recent weeks, Boeing has been developing a fix to the system, known as MCAS. As part of that work, the company tried to test on a simulator how the updated system would perform, including by replicating the problems with the doomed Ethiopian Airlines flight.

It recreated the actions of the pilots on that flight, including taking manual control of the plane as outlined by Boeing’s recommended procedures. When MCAS activates erroneously, pilots are supposed to turn off the electricity to a motor that allows the system to push the plane toward the ground. Then, pilots need to crank a wheel to right the plane. They have limited time to act.

On the Ethiopian flight, the pilots struggled to turn the wheel while the plane was moving at a high speed, when there is immense pressure on the tail. The simulators did not properly match those conditions, and Boeing pilots found that the wheel was far easier to turn than it should have been.

Regulators are now trying to determine what training will be required.

When the Max was introduced, Boeing believed that pilots did not need experience on the flight simulators, and the Federal Aviation Administration agreed. Many pilots learned about the plane on iPads. And they were not informed about the anti-stall system.

The limited training was a selling point of the plane. It can cost airlines tens of millions of dollars to maintain and operate flight simulators over the life of an aircraft.

After the first crash, Boeing gave airlines and pilots a full rundown of MCAS. But the company and regulators said that additional training was not necessary. Simply knowing about the system would be sufficient.

In a tense meeting with the American Airlines pilots union after the crash, a Boeing vice president, Mike Sinnett, said he was confident that pilots were equipped to deal with problems, according to an audio recording reviewed by The New York Times. A top Boeing test pilot, Craig Bomben, agreed, saying, “I don’t know that understanding the system would have changed the outcome of this.”

Since the Ethiopian Airlines disaster in March, lawmakers and regulators have been taking a closer look at the training procedures for the 737 Max, and at whether they should be more robust. At a congressional hearing this week, the acting head of the F.A.A., Daniel Elwell, testified that MCAS should “have been more adequately explained.”

Boeing said on Thursday that it had completed its fix to the 737 Max. Along with changes to the anti-stall system, the fix will include additional education for pilots.

The company still has to submit the changes to regulators, who will need to approve them before the plane can start flying again. The updates are not expected to include training on simulators, but the F.A.A. and other global regulators could push to require it.

“The F.A.A. is aware that Boeing Company is working with the manufacturers of Boeing 737 Max flight simulators,” a spokesman for the agency said in an emailed statement. “The F.A.A. will review any proposed adjustments as part of its ongoing oversight of the company’s efforts to address safety concerns.”

Airlines have already been pushing to get more simulators and develop their own training.

Pilots at American Airlines, who began asking for simulators when the airline started flying the planes, ratcheted up their requests after the Lion Air crash. Regardless of what the F.A.A. requires, the union believes pilots should get the experience. A spokesman for the airline said it had ordered a simulator that would be up and running by December.

“We value simulators in this situation,” said Mr. Tajer. “It’s not a condition of the Max flying again, but it is something we want.”

Bug-hunter reveals another ‘make me admin’ Windows 10 zero-day – and vows: ‘There’s more where that came from’

Quote

Vulnerability can be exploited to turn users into system stars, no patch available yet

A bug-hunter who previously disclosed Windows security flaws has publicly revealed another zero-day vulnerability in Microsoft’s latest operating systems.

The discovered hole can be exploited by malware and rogue logged-in users to gain system-level privileges on Windows 10 and recent Server releases, allowing them to gain full control of the machine. No patch exists for this bug, details and exploit code for which were shared online on Tuesday for anyone to use and abuse.

The flaw was uncovered, and revealed on Microsoft-owned GitHub, funnily enough, by a pseudonymous netizen going by the handle SandboxEscaper. She has previously dropped Windows zero-days that can be exploited to delete or tamper with operating system components, elevate local privileges, and so on.

This latest one works by abusing Windows’ schtasks tool, designed to run programs at scheduled times, along with quirks in the operating system.
 

Meanwhile… If you haven’t yet patched the wormable RDP security flaw in Windows (CVE-2019-0708), please do so ASAP – exploit code that can crash vulnerable systems is doing the rounds, and McAfee eggheads have developed and described a proof-of-concept attack that executes arbitrary software on remote machines, with no authentication required. Eek.

It appears the exploit code imports a legacy job file into the Windows Task Scheduler using schtasks, creating a new task, and then deletes that new task’s file from the Windows folder. Next, it creates a hard filesystem link pointing from where the new task’s file was created to pci.sys, one of Windows’ kernel-level driver files, and then runs the same schtasks command again. This clobbers pci.sys’s access permissions so that it can be modified and overwritten by the user, thus opening the door to privileged code execution.

The exploit, as implemented, needs to know a valid username and password combo on the machine to proceed, it seems. It can be tweaked and rebuilt from its source code to target system files other than pci.sys. …
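
One visible side effect of the hard-link trick described above is that the targeted driver file gains an extra directory entry. As a purely defensive illustration (our sketch, not part of the published PoC), a monitoring script could flag watched system files whose NTFS hard-link count exceeds one:

```python
import os

# Illustrative, defensive sketch (not from the PoC): flag protected system
# files whose NTFS hard-link count exceeds 1, a side effect of the
# hard-link abuse described above. The watched list is an assumption.
WATCHED = [r"C:\Windows\System32\drivers\pci.sys"]

def suspicious_hardlinks(paths):
    flagged = []
    for path in paths:
        try:
            info = os.stat(path)
        except OSError:
            continue  # file missing or unreadable; skip it
        if info.st_nlink > 1:
            flagged.append((path, info.st_nlink))
    return flagged

if __name__ == "__main__":
    for path, links in suspicious_hardlinks(WATCHED):
        print(f"WARNING: {path} has {links} hard links")
```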

Rampant Android bloatware a privacy and security hellscape

I spent the past week examining an AT&T Android phone. The bloatware was off the scale, as was the spyware. Removing these via ADB broke the system. Even installing a firewall broke the system: the firewall appeared to be detected, and the phone simply blocked calls even when the firewall was disabled (but still installed). I will next look at an Android One device to see if it is any better, as they claim to be pure Android with no bloatware. I am not just picking on AT&T; as the article and the PDF study that generated it point out, the practice is rampant.
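
For reference, here is a minimal sketch of that kind of ADB survey, assuming the adb tool is installed and USB debugging is enabled. Which packages, if any, to remove is your call, and as noted above, removing the wrong one can break the system:

```python
import subprocess

def adb(*args):
    """Run an adb command and return its stdout as text."""
    result = subprocess.run(["adb", *args], capture_output=True,
                            text=True, check=True)
    return result.stdout

def list_system_packages():
    # 'pm list packages -s' prints lines like 'package:com.example.foo'
    out = adb("shell", "pm", "list", "packages", "-s")
    return [line.split(":", 1)[1] for line in out.splitlines() if ":" in line]

def remove_for_user(package, user=0):
    # Removes the app for one user while keeping the system copy, so a
    # factory reset restores it. Research each package before touching it.
    return adb("shell", "pm", "uninstall", "-k", "--user", str(user), package)

if __name__ == "__main__":
    for pkg in sorted(list_system_packages()):
        print(pkg)
```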

Quote

The apps bundled with many Android phones are presenting threats to security and privacy greater than most users think.

This according to a paper (PDF) from university researchers in the US and Spain who studied the pre-installed software that 214 different vendors included in their Android devices. They found that everyone from the hardware builders to mobile carriers and third-party advertisers were loading products up with risky code.

“Our results reveal that a significant part of the pre-installed software exhibit potentially harmful or unwanted behavior,” the team from Universidad Carlos III de Madrid, Stony Brook University and UC Berkeley ICSI said.

 

The study, An Analysis of Pre-installed Android Software, was written by Julien Gamba, Mohammed Rashed, Abbas Razaghpanah, Juan Tapiador, and Narseo Vallina-Rodriguez. It is being presented later this month at the 41st IEEE Symposium on Security and Privacy.

 

“While it is known that personal data collection and user tracking is pervasive in the Android app ecosystem as a whole we find that it is also quite prevalent in pre-installed apps.”

To study bundled software, the team crowdsourced firmware and traffic information from a field of 2,748 volunteers running 1,742 different models of devices from 130 different countries.

Across all those different vendors, carriers, and locales, one theme was found: Android devices are lousy with bloatware that not only takes up storage, but also harvests personal information and in some cases even introduces malware.

“We have identified instances of user tracking activities by preinstalled Android software – and embedded third-party libraries – which range from collecting the usual set of PII and geolocation data to more invasive practices that include personal email and phone call metadata, contacts, and a variety of behavioral and usage statistics in some cases,” the team wrote.

“We also found a few isolated malware samples belonging to known families, according to VirusTotal, with prevalence in the last few years (e.g., Xynyin, SnowFox, Rootnik, Triada and Ztorg), and generic trojans displaying a standard set of malicious behaviors (e.g., silent app promotion, SMS fraud, ad fraud, and URL click fraud).”
Beware the bloat

The device vendors themselves were not the only culprits. While the bundled apps can be installed by the vendors, bloatware can also be introduced by the carriers who add their own software to devices, as well as by third parties that may slip additional advertising or tracking tools into otherwise harmless and useful software.

Addressing this issue could prove particularly difficult, the researchers note. With vendors and carriers alike looking to eke a few extra bucks out of every device sold, bundled apps and bolted on advertising and tracking tools are highly attractive to companies, and absent pressure from a higher-up body, the bottom line will almost always win out.

To that end, they recommend someone steps in to offer audits of the supply chain and catch potential security and privacy threats in bundled software.

“Google might be a prime candidate for it given its capacity for licensing vendors and its certification programs,” the researchers note.

“Alternatively, in absence of self-regulation, governments and regulatory bodies could step in and enact regulations and execute enforcement actions that wrest back some of the control from the various actors in the supply chain.”

Panic as panic alarms meant to keep granny and little Timmy safe prove a privacy fiasco

QUOTE

Simple hack turns them into super secret spying tool

A GPS tracker used by elderly people and young kids has a security hole that could allow others to track and secretly record their wearers.

The white-label product is manufactured in China and then rebadged and rebranded by a range of companies in the UK, US, Australia and elsewhere including Pebbell 2, OwnFone and SureSafeGo. Over 10,000 people in the UK use the devices.

It has an in-built SIM card that is used to pinpoint the location of the user, as well as provide hands-free communications through a speaker and mic. As such it is most commonly used by elderly people in case of a fall and on children whose parents want to be able to know where they are and contact them if necessary.

But researchers at Fidus Information Security discovered, and revealed on Friday, that the system has a dangerous flaw: you can send a text message to the SIM and force it to reset. From there, a remote attacker can cause the device to reveal its location, in real time, as well as secretly turn on the microphone.

The flaw also enables a third party to turn on and off all the key features of the products such as emergency contacts, fall detection, motion detection and a user-assigned PIN. In other words, a critical safety device can be completely disabled by anybody in the world through a text message.

The flaw was introduced in an update to the product: originally the portable fob communicated with a base station that was plugged into a phone line: an approach that provided no clear attack route. But in order to expand its range and usefulness, the SIM card was added so it was not reliant on a base station and would work over the mobile network.

The problem arises from the fact that the Chinese manufacturer built a PIN into the device so it could be locked to the telephone number programmed into it. Which would be fine, except that the PIN is disabled by default and is not needed to reboot or reset the device.

And so it is possible to send a reset command to the device – if you know its SIM telephone number – and restore it to factory settings. At that point, the device is wide open and doesn’t need the PIN to make changes to the other functions. Which all amounts to remote access.
Random access memory

But how would you find out the device’s number? Well, the researchers got hold of one such device and its number and then ran a script where they sent messages to thousands of similar numbers to see if they hit anything.

They did. “Out of the 2,500 messages we sent, we got responses from 175 devices (7 per cent),” they wrote. “So this is 175 devices being used at the time of writing as an aid for vulnerable people; all identified at a minimal cost. The potential for harm is massive, and in less than a couple of hours, we could interact with 175 of these devices!”

The good news is that it is easy to fix in new devices: you would simply add a unique code to each device and require that it be used to reset the device. And you could limit the device to receiving calls or texts only from a list of approved contacts.
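
As a concrete illustration of those two fixes, here is a hypothetical firmware-side handler; the phone number, reset-code format and command syntax are all invented for the example:

```python
# Hypothetical firmware-side SMS handler sketching the two fixes suggested
# above: a per-device reset code plus an allow-list of approved contacts.
APPROVED_SENDERS = {"+441234567890"}   # configured at setup; example number
DEVICE_RESET_CODE = "83D1-42F7"        # unique per device, printed in the box

def handle_sms(sender: str, body: str) -> str:
    if sender not in APPROVED_SENDERS:
        return ""                      # silently drop texts from strangers
    command, _, argument = body.strip().partition(" ")
    if command.upper() == "RESET":
        if argument != DEVICE_RESET_CODE:
            return "Reset refused: invalid device code"
        factory_reset()
        return "Device reset"
    return "Unknown command"

def factory_reset():
    """Wipe user settings but keep the reset code and allow-list intact."""
    pass
```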

But in the devices already on the market, the fix is not so easy: even with the default PIN used to lock the device down, a reset is still possible because the reset command does not require the PIN. The researchers say they have contacted the companies that use the device “to help them understand the risks posed by our findings” and that those companies are “looking into and are actively recalling devices.” They also note that some have not responded.

In short, poor design and the lack of a decent security audit prior to putting the updated product on the market has turned what is supposed to provide peace of mind into a potential stalking and listening nightmare.

Facebook’s third act: Mark Zuckerberg announces his firm’s next business model

Zuckerberg’s cynical attempt to change the narrative by implementing end-to-end encryption is simply a bad idea. It gets Facebook off the hook for moderating content (read: more profits), still allows it to sell ads, and makes it nearly impossible for law enforcement to do their job. Hey Zuck, why not hang out a sign: criminals, pedophiles, gangs, repressive regimes, etc. – “all welcome here.” I have a better idea: get Facebook off the planet.

Quote

If it works, the social-networking giant will become more private and more powerful

THE FIRST big overhaul for Facebook came in 2012-14. Internet users were carrying out ever more tasks on smartphones rather than desktop or laptop computers. Mark Zuckerberg opted to follow them, concentrating on Facebook’s mobile app ahead of its website, and buying up two fast-growing communication apps, WhatsApp and Instagram. It worked. Facebook increased its market valuation from around $60bn at the end of 2012 to—for a brief period in 2018—more than $600bn.

On March 6th Mr Zuckerberg announced Facebook’s next pivot. As well as its existing moneymaking enterprise, selling targeted ads on its public social networks, it is building a “privacy-focused platform” around WhatsApp, Instagram and Messenger. The apps will be integrated, he said, and messages sent through them encrypted end-to-end, so that even Facebook cannot read them. While it was not made explicit, it is clear what the business model will be. Mr Zuckerberg wants all manner of businesses to use its messaging networks to provide services and accept payments. Facebook will take a cut.

A big shift was overdue at Facebook given the privacy and political scandals that have battered the firm. Even Mr Zuckerberg, who often appears incapable of seeing the gravity of Facebook’s situation, seemed to grasp the irony of it putting privacy first. “Frankly we don’t currently have a strong reputation for building privacy protective services,” he noted.

Still, he intends to do it. Mr Zuckerberg claims that users will benefit from his plan to integrate its messaging apps into a single, encrypted network. The content of messages will be safe from prying eyes of authoritarian snoops and criminals, as well as from Facebook itself. It will make messaging more convenient, and make profitable new services possible. But caution is warranted for three reasons.

The first is that Facebook has long been accused of misleading the public on privacy and security, so the potential benefits Mr Zuckerberg touts deserve to be treated sceptically. He is also probably underselling the benefits that running integrated messaging networks brings to his firm, even if they are encrypted so that Facebook cannot see the content. The metadata alone, ie, who is talking to whom, when and for how long, will still allow Facebook to target advertisements precisely, meaning its ad model will still function.

End-to-end encryption will also make Facebook’s business cheaper to run. Because it will be mathematically impossible to moderate encrypted communications, the firm will have an excuse to take less responsibility for content running through its apps, limiting its moderation costs.

If it can make the changes, Facebook’s dominance over messaging would probably increase. The newfound user-benefits of a more integrated Facebook might make it harder for regulators to argue that Mr Zuckerberg’s firm should be broken up.

Facebook’s plans in India provide some insight into the new model. It has built a payment system into WhatsApp, the country’s most-used messaging app. The system is waiting for regulatory approval. The market is huge. In the rest of the world, too, users are likely to be drawn in by the convenience of Facebook’s new networks. Mr Zuckerberg’s latest strategy is ingenious but may contain twists.

The Week in Tech: Facebook and Google Reshape the Narrative on Privacy

And from the bs department

QUOTE

…Stop me if you’ve heard this before: The chief executive of a huge tech company with vast stores of user data, and a business built on using it to target ads, now says his priority is privacy.

This time it was Google’s Sundar Pichai, at the company’s annual conference for developers. “We think privacy is for everyone,” he explained on Tuesday. “We want to do more to stay ahead of constantly evolving user expectations.” He reiterated the point in a New York Times Op-Ed, and highlighted the need for federal privacy rules.

The previous week, Mark Zuckerberg delivered similar messages at Facebook’s developer conference. “The future is private,” he said, and Facebook will focus on more intimate communications. He shared the idea in a Washington Post op-ed just weeks before, also highlighting the need for federal privacy rules.

Google went further than Facebook’s rough sketch of what this future looks like, and unveiled tangible features: It will let users browse YouTube and Google Maps in “incognito mode,” will allow auto-deletion of Google history after a specified time and will make it easier to find out what the company knows about you, among other new privacy features.

Fatemeh Khatibloo, a vice president and principal analyst at Forrester, told The Times: “These are meaningful changes when it comes to the user’s expectations of privacy, but I don’t think this affects their business at all.” Google has to show that privacy is important, but it will still collect data.

What Google and Facebook are trying to do, though, is reshape the privacy narrative. You may think privacy means keeping hold of your data; they want privacy to mean they don’t hand data to others. (“Google will never sell any personal information to third parties,” Mr. Pichai wrote in his Op-Ed.)

Werner Goertz, a research director at Gartner, said Google had to respond with its own narrative. “It is trying to turn the conversation around and drive public discourse in a way that not only pacifies but also tries to get buy-in from consumers, to align them with its privacy strategy,” he said.

Right – pacify the masses with BS.

Politics of privacy law

Facebook and Google may share a voice on privacy. Lawmakers don’t.

Members of the Federal Trade Commission renewed calls at a congressional hearing on Wednesday to regulate big tech companies’ stewardship of user data, my colleague Cecilia Kang reported. That was before a House Energy and Commerce subcommittee, on which “lawmakers of both parties agreed” that such a law was required, The Wall Street Journal reported.

Sounds promising.

But while the F.T.C. was united in asking for more power to police violations and greater authority to impose penalties, there were large internal tensions about how far it should be able to go in punishing companies. And the lawmakers in Congress “appeared divided over key points that legislation might address,” according to The Journal. Democrats favor harsh penalties and want to give the F.T.C. greater power; Republicans worry that strict regulation could stifle innovation and hurt smaller companies.

Finding compromise will be difficult, and conflicting views risk becoming noise through which a clear voice from Facebook and Google can cut. The longer disagreement rages, the more likely it is that Silicon Valley defines a mainstream view that could shape rules.

Yeah — more lobbyists and political donations subverting democracy. The US should enact an EU-equivalent GDPR now.

Now for Sale on Facebook: Looted Middle Eastern Antiquities

Another reason Facebook is a disgusting, dangerous corporation. A $5 billion fine is nothing. It needs to be wound down, and Zuckerberg and Sandberg given long, hard prison terms for the evil and death they have caused.

Quote

Ancient treasures pillaged from conflict zones in the Middle East are being offered for sale on Facebook, researchers say, including items that may have been looted by Islamic State militants.

Facebook groups advertising the items grew rapidly during the upheaval of the Arab Spring and the ensuing wars, which created unprecedented opportunities for traffickers, said Amr Al-Azm, a professor of Middle East history and anthropology at Shawnee State University in Ohio and a former antiquities official in Syria. He has monitored the trade for years along with his colleagues at the Athar Project, named for the Arabic word for antiquities.

At the same time, Dr. Al-Azm said, social media lowered the barriers to entry to the marketplace. Now there are at least 90 Facebook groups, most in Arabic, connected to the illegal trade in Middle Eastern antiquities, with tens of thousands of members, he said.

They often post items or inquiries in the group, then take the discussion into chat or WhatsApp messaging, making it difficult to track. Some users circulate requests for certain types of items, providing an incentive for traffickers to produce them, a scenario that Dr. Al-Azm called “loot to order.”

Others post detailed instructions for aspiring looters on how to locate archaeological sites and dig up treasures.

Items for sale include a bust purportedly taken from the ancient city of Palmyra, which was occupied for periods by Islamic State militants and endured heavy looting and damage.

Other artifacts for sale come from Iraq, Yemen, Egypt, Tunisia and Libya. The majority do not come from museums or collections, where their existence would have been cataloged, Dr. Al-Azm said.

“They’re being looted straight from the ground,” he said. “They have never been seen. The only evidence we have of their existence is if someone happens to post a picture of them.”

Dr. Al-Azm and Katie A. Paul, the directors of the Athar Project, wrote in World Politics Review last year that the loot-to-order requests showed that traffickers were “targeting material with a previously unseen level of precision — a practice that Facebook makes remarkably easy.”

After the BBC published an article about the work of Dr. Al-Azm and his colleagues last week, Facebook said that it had removed 49 groups connected to antiquities trafficking.

Dr. Al-Azm countered that 90 groups were still up. But more important, he argued, Facebook should not simply delete the pages, which now constitute crucial evidence both for law enforcement and heritage experts.

In a statement on Tuesday, the company said it was “continuing to invest in people and technology to keep this activity off Facebook and encourage others to report anything they suspect of violating our Community Standards so we can quickly take action.”

A spokeswoman said that the company’s policy-enforcement team had 30,000 members and that it had introduced new tools to detect and remove content that violates the law or its policies using artificial intelligence, machine learning and computer vision.

Trafficking in antiquities is illegal across most of the Middle East, and dealing in stolen relics is illegal under international law. But it can be difficult to prosecute such cases.

Leila A. Amineddoleh, a lawyer in New York who specializes in art and cultural heritage, said that determining the provenance of looted items can be arduous, presenting an obstacle for lawyers and academics alike.

Dr. Al-Azm said his team’s research indicated that the Facebook groups are run by an international network of traffickers who cater to dealers, including ones in the West. The sales are often completed in person in cash in nearby countries, he said, despite efforts in Turkey and elsewhere to fight antiquities smuggling.

He faulted Facebook for not heeding warnings about antiquities sales as early as 2014, when it might have been possible to delete the groups to stop, or at least slow, their growth.

As the Islamic State expanded, it systematically looted and destroyed, using heavy machinery to dig into ancient sites that had scarcely been excavated before the war. The group allowed residents and other looters to take from heritage sites, imposing a 20 percent tax on their earnings.

Some local people and cultural heritage experts scrambled to document and save the antiquities, including efforts to physically safeguard them and to create 3-D models and maps. Despite their efforts, the losses were catastrophic.

Satellite images show invaluable sites, such as Mari and Dura-Europos in eastern Syria, pockmarked with excavation holes from looters. In the Mosul Museum in Iraq, the militants filmed themselves taking sledgehammers and drills to monuments they saw as idolatrous, acts designed for maximum propaganda value as the world watched with horror.

Other factions and people also profited from looting. In fact, the market was so saturated that prices dropped drastically for a time around 2016, Dr. Al-Azm said.

Around the same time, as Islamic State fighters scattered in the face of territorial losses, they took their new expertise in looting back to their countries, including Egypt, Tunisia and Libya, and to other parts of Syria, like Idlib Province, he added.

“This is a supply and demand issue,” Dr. Al-Azm said, repeating that any demand gives incentives to looters, possibly financing terrorist groups in the process.

Instead of simply deleting the pages, Dr. Al-Azm said, Facebook should devise a more comprehensive strategy to stop the sales while allowing investigators to preserve photos and records uploaded to the groups.

A hastily posted photo, after all, might be the only record of a looted object that is available to law enforcement or scholars. Simply deleting the page would destroy “a huge corpus of evidence” that will be needed to identify, track and recover looted treasures for years to come, he said.

Similar arguments have been made as social media sites, including YouTube, have deleted videos that show atrocities committed during the Syrian war that could be used to prosecute war crimes.

Facebook has also faced questions over its role as a platform for other types of illicit sales, including guns, poached ivory and more. It has generally responded by shutting down pages or groups in response to reports of illegal activity.

Some of the illicit items sold without proof of their ownership history, of course, could be fake. But given the volume of activity in the antiquities groups and the copious evidence of looting at famous sites, at least some of them are believed to be genuine.

The wave of items hitting the market will most likely continue for years. Some traffickers sit on looted antiquities for long periods, waiting for attention to die down and sometimes forging documents about the items’ origins before offering them for sale.

Boycott is the only way to force social media giants to protect kids online

About a month back I got into an email exchange with a mother who had invited one of my children to a birthday party. I said OK but asked that no pictures be posted to social media, and I explained my reasoning. She said I was crazy and would damage my children (among other things). I responded with advice from several reputable sources. No matter. Suffice it to say, no birthday attendance.

I was never sure why she reacted this way. It was almost like I asked an addict to go cold turkey. Maybe that’s it. She is addicted.

QUOTE

A public boycott of social media may be the only way to force companies to protect children from abuse, the country’s leading child protection police officer has said.

Simon Bailey, the National Police Chiefs’ Council lead on child protection, said tech companies had abdicated their duty to safeguard children and were only paying attention due to fear of reputational damage.

The senior officer, who is Norfolk’s chief constable, said he believed sanctions such as fines would be “little more than a drop in the ocean” to social media companies, but that the government’s online harms white paper could be a “game changer” if it led to effective punitive measures.

Bailey suggested a boycott would be one way to hit big platforms, which he believes have the technology and funds to “pretty much eradicate the availability, the uploading, and the distribution of indecent imagery”.

Despite the growing problem, Bailey said he had seen nothing so far “that has given me the confidence that companies that are creating these platforms are taking their responsibilities seriously enough”.

He told the Press Association: “Ultimately I think the only thing they will genuinely respond to is when their brand is damaged. Ultimately the financial penalties for some of the giants of this world are going to be an absolute drop in the ocean.

“But if the brand starts to become tainted, and consumers start to see how certain platforms are permitting abuse, are permitting the exploitation of young people, then maybe the damage to that brand will be so significant that they will feel compelled to do something in response.

“We have got to look at how we drive a conversation within our society that says ‘do you know what, we are not going to use that any more, that system or that brand or that site’ because of what they are permitting to be hosted or what they are allowing to take place.”

In every playground there is likely to be someone with pornography on their phone, Bailey said as he described how a growing number of young men are becoming “increasingly desensitised” and progressing to easily available illegal material. Society is “not far off the point where somebody will know somebody” who has viewed illegal images, he said.

There has been a sharp rise in the number of images on the child abuse image database from fewer than 10,000 in the 1990s to 13.4m, with more than 100m variations of these.

Last month, the government launched a consultation on new laws proposed to tackle illegal content online. The white paper, which was revealed in the Guardian, proposes a new statutory duty of care for social media firms and the appointment of an independent regulator, which is likely to be funded through a levy on the companies. It was welcomed by senior police and children’s charities.

Bailey believes if effective regulation is put in place it could free up resources to begin tackling the vaster dark web. He expressed concern that the spread of 4G and 5G networks worldwide would open up numerous further opportunities for the sexual exploitation of children.

Speaking at a conference organised by StopSO, a charity that works with offenders and those concerned about their sexual behaviour to minimise the risk of offending, of which Bailey is patron, he recently said that plans from Facebook’s Mark Zuckerberg to increase privacy on the social network would make life harder for child protection units. But he told the room: “There is no doubt that thinking is shifting around responsibility of tech companies. I think that argument has been won, genuinely.

“Of course, the proof is going to be in the pudding with just how ambitious the white paper is, how effective the punitive measures will be, or not.”

Andy Burrows, the National Society for the Prevention of Cruelty to Children’s associate head of child safety online, said: “It feels like social media sites treat child safeguarding crises as a bad news cycle to ride out, rather than a chance to make changes to protect children.”

Google Spies! The worst kind of microphone is a hidden microphone.

Google says the built-in microphone it never told Nest users about was ‘never supposed to be a secret’

Yeah right.
Quote

  • In early February, Google announced that Assistant would work with its home security and alarm system, Nest Secure.
  • The problem: Users didn’t know a microphone existed on their Nest security devices to begin with.
  • On Tuesday, a Google representative told Business Insider the company had made an “error.”
  • “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the person said. “That was an error on our part.”

In early February, Google announced that its home security and alarm system Nest Secure would be getting an update. Users, the company said, could now enable its virtual-assistant technology, Google Assistant.

The problem: Nest users didn’t know a microphone existed on their security device to begin with.

The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component in the Nest Secure offering, was never disclosed in any of the product material for the device.

On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”

Google says “the microphone has never been on and is only activated when users specifically enable the option.”

It also said the microphone was originally included in the Nest Guard for the possibility of adding new security features down the line, like the ability to detect broken glass.

Still, even if Google included the microphone in its Nest Guard device for future updates — like its Assistant integration — the news comes as consumers have grown increasingly wary of major tech companies and their commitment to consumer privacy.

For Google, the revelation is particularly problematic and brings to mind previous privacy controversies, such as the 2010 incident in which the company acknowledged that its fleet of Street View cars “accidentally” collected personal data transmitted over consumers’ unsecured WiFi networks, including emails.

How Nest, designed to keep intruders out of people’s homes, effectively allowed hackers to get in

What strikes me about this article is simply how illiterate people are when it comes to their digital security. “Only about 1 percent of Internet users, he said, use some kind of password manager.” That is crazy. As this article states, credential stuffing is “at the root of probably 90 percent of the things we see happening.” There are so many stolen passwords being dumped online that if you do not use a different password for each site, you are simply asking for trouble. You can check for yourself here to see if your email credential has been part of a breach. The only way you will be able to use a different password for each site is to use a password manager.

Another conclusion drawn from this article is how IoT companies purposely make their devices hackers’ dreams by deliberately not employing better security methods. They do this to make their products idiot proof. Reread that: idiot proof. In other words, they think their users are idiots and don’t want to burden them with nonsense like better security.

Quote

Tara Thomas thought her daughter was just having nightmares. “There’s a monster in my room,” the almost-3-year-old would say, sometimes pointing to the green light on the Nest Cam installed on the wall above her bed.

Then Thomas realized her daughter’s nightmares were real. In August, she walked into the room and heard pornography playing through the Nest Cam, which she had used for years as a baby monitor in their Novato, Calif., home. Hackers, whose voices could be heard faintly in the background, were playing the recording, using the intercom feature in the software. “I’m really sad I doubted my daughter,” she said.

Though it would be nearly impossible to find out who was behind it, a hack like this one doesn’t require much effort, for two reasons: Software designed to help people break into websites and devices has gotten so easy to use that it’s practically child’s play, and many companies, including Nest, have effectively chosen to let some hackers slip through the cracks rather than impose an array of inconvenient countermeasures that could detract from their users’ experience and ultimately alienate their customers.

The result is that anyone in the world with an Internet connection and rudimentary skills has the ability to virtually break into homes through devices designed to keep physical intruders out.

As hacks such as the one the Thomases suffered become public, tech companies are deciding between user convenience and potential damage to their brands. Nest could make it more difficult for hackers to break into Nest cameras, for instance, by making the log-in process more cumbersome. But doing so would introduce what Silicon Valley calls “friction” — anything that can slow down or stand in the way of someone using a product.

At the same time, tech companies pay a reputational price for each high-profile incident. Nest, which is part of Google, has been featured on local news stations throughout the country for hacks similar to what the Thomases experienced. And Nest’s recognizable brand name may have made it a bigger target. While Nest’s thermostats are dominant in the market, its connected security cameras trail the market leader, Arlo, according to Jack Narcotta, an analyst at the market research firm Strategy Analytics. Arlo, which spun out of Netgear, has around 30 percent of the market, he said. Nest is in the top five, he said.

Nik Sathe, vice president of software engineering for Google Home and Nest, said Nest has tried to weigh protecting its less security-savvy customers against unduly inconveniencing legitimate users with the measures meant to keep out the bad ones. “It’s a balance,” he said. Whatever security Nest uses, Sathe said, needs to avoid “bad outcomes in terms of user experience.”

Google spokeswoman Nicol Addison said Thomas could have avoided being hacked by implementing two-factor authentication, where in addition to a password, the user must enter a six-digit code sent via text message. Thomas said she had activated two-factor authentication; Addison said it had never been activated on the account.

The method used to spy on the Thomases is one of the oldest tricks on the Internet. Hackers essentially look for email addresses and passwords that have been dumped online after being stolen from one website or service and then check to see whether the same credentials work on another site. Like the vast majority of Internet users, the family used similar passwords on more than one account. While their Nest account had not been hacked, their password had essentially become public knowledge, thanks to other data breaches.

In recent years, this practice, which the security industry calls “credential stuffing,” has gotten incredibly easy. One factor is the sheer number of stolen passwords being dumped online publicly. It’s difficult to find someone who hasn’t been victimized. (You can check for yourself here.)
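
One widely used checking service is Troy Hunt’s Have I Been Pwned (presumably what the article links to; that is our assumption). Its Pwned Passwords range API can be queried without ever sending the password itself, since only the first five characters of the password’s SHA-1 hash leave your machine:

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Query the Pwned Passwords range API (k-anonymity: only the first
    five hex chars of the SHA-1 hash are sent, never the password)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-example"})
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    print(times_pwned("password123"))  # a reused password scores very high
```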

A new breed of credential-stuffing software programs allows people with little to no computer skills to check the log-in credentials of millions of users against hundreds of websites and online services such as Netflix and Spotify in a matter of minutes. Netflix and Spotify both said in statements that they were aware of credential stuffing and employ measures to guard against it. Netflix, for instance, monitors websites with stolen passwords and notifies users when it detects suspicious activity. Neither Netflix nor Spotify offer two-factor authentication.

But the potential for harm is higher for the 20 billion Internet-connected things expected to be online by next year, according to the research firm Gartner. Securing these devices has public safety implications. Hacked devices can be used in large-scale cyberattacks such as the “Dyn hack” that mobilized millions of compromised “Internet of things” devices to take down Twitter, Spotify and others in 2016.

In January, Japanese lawmakers passed an amendment to allow the government to essentially do what hackers do and scour the Internet for stolen passwords and test them to see whether they have been reused on other platforms. The hope is that the government can force tech companies to fix the problem.

Security experts worry the problem has gotten so big that there could be attacks similar to the Dyn hack, this time as a result of a rise in credential stuffing.

“They almost make it foolproof,” said Anthony Ferrante, the global head of cybersecurity at FTI Consulting and a former member of the National Security Council. He said the new tools have made it even more important to stop reusing passwords.

Tech companies have been aware of the threat of credential stuffing for years, but the way they think about it has evolved as it has become a bigger problem. There was once a sense that users should take responsibility for their security by refraining from using the same password on multiple websites. But as gigantic dumps of passwords have gotten more frequent, technology companies have found that it is not just a few inattentive customers who reuse the same passwords for different accounts — it’s the majority of people online.

Credential stuffing is “at the root of probably 90 percent of the things we see happening,” said Emmanuel Schalit, chief executive of Dashlane, a password manager that allows people to store unique, random passwords in one place. Only about 1 percent of Internet users, he said, use some kind of password manager.

“We saw this coming in late 2017, early 2018 when we saw these big credential dumps start to happen,” Google’s Sathe said. In response, Nest says, it implemented security measures around that time.

It did its own research into stolen passwords available on the Web and cross-referenced them with its records, using an encryption technique that ensured Nest could not actually see the passwords. In emails sent to customers, including the Thomases, it notified those whose credentials were vulnerable. It also tried to block log-in attempts that veered from the way legitimate users log into accounts. For instance, if a computer from the same Internet-protocol address attempted to log into 10 Nest accounts, the algorithm would block that address from logging into any more accounts.
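
That last measure is a simple per-IP throttle. A toy reconstruction of the idea (ours, not Nest’s actual code) might look like this:

```python
import time
from collections import defaultdict, deque

# Toy per-IP throttle: block an address once it has attempted log-ins
# against too many distinct accounts within the time window.
WINDOW_SECONDS = 3600
MAX_ACCOUNTS_PER_IP = 10

_attempts = defaultdict(deque)          # ip -> deque of (timestamp, account)

def allow_login_attempt(ip: str, account: str) -> bool:
    now = time.time()
    log = _attempts[ip]
    while log and now - log[0][0] > WINDOW_SECONDS:
        log.popleft()                   # forget attempts outside the window
    accounts = {acct for _, acct in log}
    if account not in accounts and len(accounts) >= MAX_ACCOUNTS_PER_IP:
        return False                    # looks like credential stuffing
    log.append((now, account))
    return True
```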

But Nest’s defenses were not good enough to stop several high-profile incidents throughout last year in which hackers used credential stuffing to break into Nest cameras for kicks. Hackers told a family in a San Francisco suburb, using the family’s Nest Cam, that there was an imminent missile attack from North Korea. Someone hurled racial epithets at a family in Illinois through a Nest Cam. There were also reports of hackers changing the temperature on Nest thermostats. And while only a handful of hacks became public, other users may not even be aware their cameras are compromised.

The company was forced to respond. “Nest was not breached,” it said in a January statement. “These recent reports are based on customers using compromised passwords,” it said, urging its customers to use two-factor authentication. Nest started forcing some users to change their passwords.

This was a big step for Nest because it created the kind of friction that technology companies usually try to avoid. “As we saw the threat evolve, we put more explicit measures in place,” Sathe said. Nest says only a small percentage of its millions of customers are vulnerable to this type of attack.

According to at least one expert, though, Nest users are still exposed. Hank Fordham, a security researcher, sat in his Calgary, Alberta, home recently and opened up a credential-stuffing software program known as Snipr. Instantly, Fordham said, he found thousands of Nest accounts that he could access. Had he wanted to, he would have been able to view cameras and change thermostat settings with relative ease.

While other similar programs have been around for years, Snipr, which costs $20 to download, is easier to use. Snipr provides the code required to check whether hundreds of the most popular platforms, such as “League of Legends” and Netflix, are accessible with a bunch of usernames and passwords — and those have become abundantly available all over the Internet.

Fordham, who had been monitoring the software and testing it for malware, noticed that after Snipr added functionality for Nest accounts last May, news reports of attacks started coming out. “I think the credential-stuffing community was made aware of it, and that was the dam breaking,” he said.

Nest said the company had never heard of Snipr, though it is generally aware of credential-stuffing software. It said it cannot be sure whether any one program drives more credential stuffing toward Nest products.

What surprises Fordham and other security researchers about the vulnerability of Nest accounts is the fact that Nest’s parent company, Google, is widely known for having the best methods for stopping credential-stuffing attacks. Google’s vast user base gives it data that it can use to determine whether someone trying to log into an account is a human or a robot.

The reason Nest has not employed all of Google’s know-how on security goes back to Nest’s roots, according to Nest and people with knowledge of its history. Founded in 2010 by longtime Apple executive Tony Fadell, Nest promised at the time that it would not collect data on users for marketing purposes.

In 2014, Nest was acquired by Google, which has the opposite business model. Google’s products are free or inexpensive and, in exchange, it profits from the personal information it collects about its users. The people familiar with Nest’s history said the different terms of service and technical challenges have prevented Nest from using all of Google’s security products. Google declined to discuss whether any of its security features were withheld because of incompatibility with Nest’s policies.

Under Alphabet, Google’s parent company, Nest employed its own security team. While Google shared knowledge about security, Nest developed its own software. In some ways, Nest’s practices appear to lag well behind Google’s. For instance, Nest still uses SMS messages for two-factor authentication. Using SMS is generally not recommended by security experts, because text messages can be easily hijacked by hackers. Google allows people to use authentication apps, including one it developed in-house, instead of text messages. And Nest does not use ReCaptcha, which Google acquired in 2009 and which can separate humans from automated software, such as what credential stuffers use to identify vulnerable accounts.
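
For contrast with SMS codes, the authenticator apps mentioned above implement TOTP (RFC 6238): both sides derive a short-lived code from a shared secret and the current time, so no code ever crosses the hijackable SMS network. A minimal standard-library sketch:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a standard TOTP code (RFC 6238) from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Any authenticator app enrolled with the same secret shows the same code.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret, base32-encoded
```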

Sathe said Nest employed plenty of advanced techniques to stop credential stuffing, such as machine learning algorithms that “score” log-ins based on how suspicious they are and block them accordingly. “We have many layers of security in conjunction with what the industry would consider best practices,” he said.

When asked why Nest does not use ReCaptcha, Sathe cited difficulty in implementing it on mobile apps, and user convenience. “Captchas do create a speed bump for the users,” he said.

The person behind Snipr, who goes by the name “Pragma” and communicates via an encrypted chat, put the blame on the company. “I can tell you right now, Nest can easily secure all of this,” he said when asked whether his software had enabled people to listen in and harass people via Nest Cams. “This is like stupidly bad security, like, extremely bad.” He also said he would remove the capability to log into Nest accounts, which he said he added last May when one of his customers asked for it, if the company asked. Pragma would not identify himself, for fear of getting in “some kind of serious trouble.”

That’s when Fordham, the Calgary security researcher, became concerned. He noticed the addition of Nest on the dashboard and took it upon himself to start warning people who were vulnerable. He logged into their Nest cams and spoke to them, imploring them to change their passwords. One of those interactions ended up being recorded by the person on the other end of the camera. A local news station broadcast the video.

Fordham said he is miffed that it is still so easy to log into Nest accounts. He noted that Dunkin’ Donuts, after seeing its users fall victim to credential-stuffing attacks aimed at taking their rewards points, implemented measures, including captchas, that have helped solve the problem. “It’s a little alarming that a company owned by Google hasn’t done the same thing as Dunkin’ Donuts,” Fordham said.

A spokeswoman for Dunkin’ declined to comment.

According to people familiar with the matter, Google is in the process of converting Nest user accounts so that they utilize Google’s security methods via Google’s log-in, in part to deal with the problem. Addison said that Nest user data will not be subject to tracking by Google. She later said that she misspoke but would not clarify what that meant.

Knowing that the hack could have been stopped with a unique password or two-factor authentication has not made Thomas, whose camera was hacked, feel any better. “I continuously get emails saying it wasn’t their fault,” she said.

She unplugged the camera and another one she used to have in her son’s bedroom, and she doesn’t plan to turn them on again: “That was the solution.”