
Monthly Archives: May 2019

Why Privacy Is an Antitrust Issue

QUOTE

Finally, the mainstream media is saying what many of us have known for years. Too bad they do not put their money where their mouth is and sever all business ties with the likes of Facebook.

As Facebook has generated scandal after scandal in recent years, critics have started to wonder how we might use antitrust laws to rein in the company’s power. But many of the most pressing concerns about Facebook are its privacy abuses, which unlike price gouging, price discrimination or exclusive dealing, are not immediately recognized as antitrust violations. Is there an antitrust case to be made against Facebook on privacy grounds?

Yes, there is. In March, when Representative David N. Cicilline, Democrat of Rhode Island, called on the Federal Trade Commission to investigate Facebook’s potential violations of antitrust laws, he cited not only Facebook’s acquisitions (such as Instagram and WhatsApp), but also evidence that Facebook was “using its monopoly power to degrade” the quality of its service “below what a competitive marketplace would allow.”

It is this last point, which I made in a law journal article cited by Mr. Cicilline, that promises to change how antitrust law will protect the American public in the era of Big Tech: namely, that consumers can suffer at the hands of monopolies because companies like Facebook lock in users with promises to protect their data and privacy — only to break those promises once competitors in the marketplace have been eliminated.


To see what I mean, let’s go back to the mid-2000s, when Facebook was an upstart social media platform. To differentiate itself from the market leader, Myspace, Facebook publicly pledged itself to privacy. Privacy provided its competitive advantage, with the company going so far as to promise users, “We do not and will not use cookies to collect private information from any user.”

When Facebook later attempted to change this bargain with users, the threat of losing its customers to its competitors forced the company to reverse course. In 2007, for example, Facebook introduced a program that recorded users’ activity on third-party sites and inserted it into the News Feed. Following public outrage and a class-action lawsuit, Facebook ended the program. “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them,” Facebook’s chief executive, Mark Zuckerberg, wrote in a public apology.

This sort of thing happened regularly for years. Facebook would try something sneaky, users would object and Facebook would back off.

But then Facebook’s competition began to disappear. Facebook acquired Instagram in 2012 and WhatsApp in 2014. Later in 2014, Google announced that it would fold its social network Orkut. Emboldened by the decline of market threats, Facebook revoked its users’ ability to vote on changes to its privacy policies and then (almost simultaneously with Google’s exit from the social media market) changed its privacy pact with users.

This is how Facebook usurped our privacy: with the help of its market dominance. The price of using Facebook has stayed the same over the years (it’s free to join and use), but the cost of using it, calculated in terms of the amount of data that users now must provide, is an order of magnitude above what it was when Facebook faced real competition.

It is hard to believe that the Facebook of 2019, which is so consuming of and reckless with our data, was once the privacy-protecting Facebook of 2004. When users today sign up for Facebook, they agree to allow the company to track their activity across more than eight million websites and mobile applications that are connected to the internet. They cannot opt out of this. The ubiquitous tracking of consumers online allows Facebook to collect exponentially more data about them than it originally could, which it can use to its financial advantage.

And while users can control some of the ways in which Facebook uses their data by adjusting their privacy settings, if you choose to leave Facebook, the company still subjects you to surveillance — but you no longer have access to the settings. Staying on the platform is the only effective way to manage its harms.

Lowering the quality of a company’s services in this manner has always been one way a monopoly can squeeze consumers after it corners a market. If you go all the way back to the landmark “case of monopolies” in 17th-century England, for example, you find a court sanctioning a monopoly for fear that it might control either price or the quality of services.

But we must now aggressively enforce this antitrust principle to handle the problems of our modern economy. Our government should undertake the important task of restoring to the American people something they bargained for in the first place — their privacy.

LONG LONG Overdue!

Security: iPhone gyroscopes, of all things, can uniquely ID handsets on anything earlier than iOS 12.2

QUOTE

Your iPhone can be uniquely fingerprinted by apps and websites in a way that you can never clear. Not by deleting cookies, not by clearing your cache, not even by reinstalling iOS.

Cambridge University researchers will present a paper to the IEEE Symposium on Security and Privacy 2019 today explaining how their fingerprinting technique uses a fiendishly clever method of inferring device-unique accelerometer calibration data.

“iOS has historically provided access to the accelerometer, gyroscope and the magnetometer,” Dr Alastair Beresford told The Register this morning. “These types of devices don’t seem like they’re troublesome from a privacy perspective, right? Which way up the phone is doesn’t seem that bad.

“In reality,” added the researcher, “it turns out that you can work out a globally unique identifier for the device by looking at these streams.”
Your orientation reveals an awful lot about you

“MEMS” – microelectromechanical systems – is the catchall term for things like your phone’s accelerometer, gyroscope and magnetometer. These sensors tell your handset which way up it is, whether it’s turning and, if so, how fast, and how strong a nearby magnetic field is. They are vital for mobile games that rely on the user tilting or turning the handset.

These, said Beresford, are mass produced. Like all mass-produced items, especially sensors, they have the normal distribution of inherent but minuscule errors and flaws, so high-quality manufacturers (like Apple) ensure each one is calibrated.

“That calibration step allows the device to produce a more accurate parameter,” explained Beresford. “But it turns out the values being put into the device are very likely to be globally unique.”

Beresford and co-researchers Jiexin Zhang, also from Cambridge’s Department of Computer Science and Technology, and Ian Sheret of Polymath Insight Ltd, devised a way of not only accessing data from MEMS sensors – that wasn’t the hard part – but of inferring the calibration data based on what the sensors were broadcasting in real time, during actual use by a real-world user. Even better (or worse, depending on your point of view), the data can be captured and reverse-engineered through any old website or app.

“It doesn’t require any specific confirmation from a user,” said Beresford. “This fingerprint never changes, even if you factory reset the handset or reinstall the OS. This is buried deep inside the firmware of the device so the fingerprint data doesn’t change. This provides a way to track users around the web.”
How they did it

“You need to record some samples,” said Beresford. “There’s an API in JavaScript or inside Swift that allows you to get samples from the hardware. Because you get many samples per second, we need around 100 samples to get the attack. Around half a second on many of the devices. So it’s quite quick to collect the data.”

Each device generates a stream of analogue data. By converting that into digital values and applying algorithms they developed in the lab using stationary or slow-moving devices, Beresford said, the researchers could then infer what a real-world user device was doing at a given time (say, being bounced around in a bag) and apply a known offset.

“We can guess what the input is going to be given the output that we observe,” he said. “If we guess correctly, we can then use that guess to estimate what the value of the scale factor and the orthogonality are.”
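The quoted idea — guess the raw sensor input from the calibrated output, then solve for the calibration parameters — can be illustrated with a deliberately simplified one-axis model. Everything below (the gain-only calibration model, the LSB value, the function names) is an illustrative assumption, not the paper's actual algorithm, which also estimates offsets and orthogonality across three axes:

```python
# Toy illustration of the fingerprinting idea (NOT the researchers' actual
# algorithm): factory calibration applies a per-device gain to quantized raw
# sensor counts, so every calibrated output is an exact multiple of
# gain * LSB. Guessing the raw count behind each output recovers the gain,
# which then serves as a stable device fingerprint.

LSB = 1.0 / 64  # assumed nominal quantization step of the raw sensor


def make_device(gain):
    """Simulate one manufactured unit with its own calibration gain."""
    def read(true_value):
        raw = round(true_value / (gain * LSB))  # ADC quantization, in counts
        return raw * gain * LSB                 # calibrated output
    return read


def infer_gain(outputs):
    """Estimate the per-device gain from calibrated outputs alone."""
    estimates = []
    for out in outputs:
        raw_guess = round(out / LSB)  # gain is near 1, so this recovers the count
        if raw_guess:
            estimates.append(out / (raw_guess * LSB))
    return sum(estimates) / len(estimates)


# ~100 low-motion samples per device, echoing the half-second capture above
stimuli = [0.05 + 0.0009 * k for k in range(100)]

gain_a, gain_b = 1.013, 0.987  # two devices off the same production line
fp_a = infer_gain([make_device(gain_a)(v) for v in stimuli])
fp_b = infer_gain([make_device(gain_b)(v) for v in stimuli])

assert abs(fp_a - gain_a) < 1e-9  # gain recovered from the outputs alone
assert abs(fp_a - fp_b) > 1e-3    # the two handsets are distinguishable
```

Note the fingerprint here depends only on the calibration constant, not on what the user is doing, which is why clearing cookies or reinstalling the OS cannot change it.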

From there it is a small step to bake those algorithms into a website or an app. Although the actual technique does not necessarily have to be malicious in practice (for example, a bank might use it to uniquely fingerprint your phone as an anti-fraud measure), it does raise a number of questions.
Good news, fandroids: you’re not affected

Oddly enough, the attack doesn’t work on most Android devices because they’re cheaper than Apple’s, in all senses of the word, and generally aren’t calibrated, though the researchers did find that some Google Pixel handsets did feature calibrated MEMS.

Beresford joked: “There’s a certain sense of irony that because Apple has put more effort in to provide more accuracy, it has this unfortunate side effect!”

Apple has patched the flaws in iOS 12.2 by blocking “access to these sensors in Mobile Safari just by default” as well as adding “some noise to make the attack much more difficult”.

The researchers have set up a website which includes both the full research paper and their layman’s explanation, along with a proof-of-concept video. Get patching, Apple fanbois.

Boeing 737 Max Simulators Are in High Demand. They Are Flawed.

QUOTE

Since the two fatal crashes of the Boeing 737 Max, airlines around the world have moved to buy flight simulators to train their pilots.

They don’t always work.

Boeing recently discovered that the simulators could not accurately replicate the difficult conditions created by a malfunctioning anti-stall system, which played a role in both disasters. The simulators did not reflect the immense force that it would take for pilots to regain control of the aircraft once the system activated on a plane traveling at a high speed.

The mistake is likely to intensify concerns about Boeing, as it tries to regain credibility following the crashes of Lion Air and Ethiopian Airlines flights. In the months since the disasters, Boeing has faced criticism for serious oversights in the Max’s design. The anti-stall system was designed with a single point of failure. A warning light that Boeing thought was standard turned out to be part of a premium add-on.

“Every day, there is new news about something not being disclosed or something was done in error or was not complete,” said Dennis Tajer, a spokesman for the American Airlines pilots union and a 737 pilot.

The training procedures have been a source of contention. Boeing has maintained that simulator training is not necessary for the 737 Max and regulators do not require it, but many airlines bought the multimillion-dollar machines to give their pilots more practice. Some pilots want continuing simulator training.

The flight simulators, on-the-ground versions of cockpits that mimic the flying experience, are not made by Boeing. But Boeing provides the underlying information on which they are designed and built.

“Boeing has made corrections to the 737 Max simulator software and has provided additional information to device operators to ensure that the simulator experience is representative across different flight conditions,” said Gordon Johndroe, a Boeing spokesman. “Boeing is working closely with the device manufacturers and regulators on these changes and improvements, and to ensure that customer training is not disrupted.”

In recent weeks, Boeing has been developing a fix to the system, known as MCAS. As part of that work, the company tried to test on a simulator how the updated system would perform, including by replicating the problems with the doomed Ethiopian Airlines flight.

It recreated the actions of the pilots on that flight, including taking manual control of the plane as outlined by Boeing’s recommended procedures. When MCAS activates erroneously, pilots are supposed to turn off the electricity to a motor that allows the system to push the plane toward the ground. Then, pilots need to crank a wheel to right the plane. They have limited time to act.

On the Ethiopian flight, the pilots struggled to turn the wheel while the plane was moving at a high speed, when there is immense pressure on the tail. The simulators did not properly match those conditions, and Boeing pilots found that the wheel was far easier to turn than it should have been.

Regulators are now trying to determine what training will be required.

When the Max was introduced, Boeing believed that pilots did not need experience on the flight simulators, and the Federal Aviation Administration agreed. Many pilots learned about the plane on iPads. And they were not informed about the anti-stall system.

The limited training was a selling point of the plane. It can cost airlines tens of millions of dollars to maintain and operate flight simulators over the life of an aircraft.

After the first crash, Boeing gave airlines and pilots a full rundown of MCAS. But the company and regulators said that additional training was not necessary. Simply knowing about the system would be sufficient.

In a tense meeting with the American Airlines pilots union after the crash, a Boeing vice president, Mike Sinnett, said he was confident that pilots were equipped to deal with problems, according to an audio recording reviewed by The New York Times. A top Boeing test pilot, Craig Bomben, agreed, saying, “I don’t know that understanding the system would have changed the outcome of this.”

Since the Ethiopian Airlines disaster in March, lawmakers and regulators have been taking a closer look at the training procedures for the 737 Max, and at whether they should be more robust. At a congressional hearing this week, the acting head of the F.A.A., Daniel Elwell, testified that MCAS should “have been more adequately explained.”

Boeing said on Thursday that it had completed its fix to the 737 Max. Along with changes to the anti-stall system, the fix will include additional education for pilots.

The company still has to submit the changes to regulators, who will need to approve them before the plane can start flying again. The updates are not expected to include training on simulators, but the F.A.A. and other global regulators could push to require it.

“The F.A.A. is aware that Boeing Company is working with the manufacturers of Boeing 737 Max flight simulators,” a spokesman for the agency said in an emailed statement. “The F.A.A. will review any proposed adjustments as part of its ongoing oversight of the company’s efforts to address safety concerns.”

Airlines have already been pushing to get more simulators and develop their own training.

Pilots at American Airlines, which began asking for simulators when they started flying the planes, ratcheted up their requests after the Lion Air crash. Regardless of what the F.A.A. requires, the union believes pilots should get the experience. A spokesman for the airline said it had ordered a simulator that would be up and running by December.

“We value simulators in this situation,” said Mr. Tajer. “It’s not a condition of the Max flying again, but it is something we want.”

Bug-hunter reveals another ‘make me admin’ Windows 10 zero-day – and vows: ‘There’s more where that came from’

Quote

Vulnerability can be exploited to turn users into system stars, no patch available yet

A bug-hunter who previously disclosed Windows security flaws has publicly revealed another zero-day vulnerability in Microsoft’s latest operating systems.

The discovered hole can be exploited by malware and rogue logged-in users to gain system-level privileges on Windows 10 and recent Server releases, allowing them to gain full control of the machine. No patch exists for this bug, details and exploit code for which were shared online on Tuesday for anyone to use and abuse.

The flaw was uncovered, and revealed on Microsoft-owned GitHub, funnily enough, by a pseudonymous netizen going by the handle SandboxEscaper. She has previously dropped Windows zero-days that can be exploited to delete or tamper with operating system components, elevate local privileges, and so on.

This latest one works by abusing Windows’ schtasks tool, designed to run programs at scheduled times, along with quirks in the operating system.
 

Meanwhile… If you haven’t yet patched the wormable RDP security flaw in Windows (CVE-2019-0708), please do so ASAP – exploit code that can crash vulnerable systems is doing the rounds, and McAfee eggheads have developed and described a proof-of-concept attack that executes arbitrary software on remote machines, with no authentication required. Eek.

It appears the exploit code imports a legacy job file into the Windows Task Scheduler using schtasks, creating a new task, and then deletes that new task’s file from the Windows folder. Next, it creates a hard filesystem link pointing from where the new task’s file was created to pci.sys, one of Windows’ kernel-level driver files, and then runs the same schtasks command again. This clobbers pci.sys’s access permissions so that it can be modified and overwritten by the user, thus opening the door to privileged code execution.

The exploit, as implemented, seems to need a valid username and password combo on the machine to proceed. It can be tweaked and rebuilt from its source code to target system files other than pci.sys. …

Rampant Android bloatware a privacy and security hellscape

I spent the past week examining an AT&T Android phone. The bloatware was off the scale, as was the spyware. Removing these via ADB broke the system. Even installing a firewall broke the system: the firewall was apparently detected, and the phone simply blocked calls even when the firewall was disabled (but still installed). I will next look at an Android One device to see if it is any better, as those claim to be pure Android with no bloatware. I am not just picking on AT&T; as the article, and the PDF study that generated it, point out, the practice is rampant.

Quote

The apps bundled with many Android phones are presenting threats to security and privacy greater than most users think.

This according to a paper (PDF) from university researchers in the US and Spain who studied the pre-installed software that 214 different vendors included in their Android devices. They found that everyone from the hardware builders to mobile carriers and third-party advertisers were loading products up with risky code.

“Our results reveal that a significant part of the pre-installed software exhibit potentially harmful or unwanted behavior,” the team from Universidad Carlos III de Madrid, Stony Brook University and UC Berkeley ICSI said.

 

The study, An Analysis of Pre-installed Android Software, was written by Julien Gamba, Mohammed Rashed, Abbas Razaghpanah, Juan Tapiador, and Narseo Vallina-Rodriguez. It is being presented later this month at the 41st IEEE Symposium on Security and Privacy.

 

“While it is known that personal data collection and user tracking is pervasive in the Android app ecosystem as a whole, we find that it is also quite prevalent in pre-installed apps.”

To study bundled software, the team crowdsourced firmware and traffic information from a field of 2,748 volunteers running 1,742 different models of devices from 130 different countries.

Across all those different vendors, carriers, and locales, one theme was found: Android devices are lousy with bloatware that not only takes up storage, but also harvests personal information and in some cases even introduces malware.

“We have identified instances of user tracking activities by preinstalled Android software – and embedded third-party libraries – which range from collecting the usual set of PII and geolocation data to more invasive practices that include personal email and phone call metadata, contacts, and a variety of behavioral and usage statistics in some cases,” the team wrote.

“We also found a few isolated malware samples belonging to known families, according to VirusTotal, with prevalence in the last few years (e.g., Xynyin, SnowFox, Rootnik, Triada and Ztorg), and generic trojans displaying a standard set of malicious behaviors (e.g., silent app promotion, SMS fraud, ad fraud, and URL click fraud).”
Beware the bloat

The device vendors themselves were not the only culprits. While bundled apps can be installed by the vendors, bloatware can also be introduced by carriers, which add their own software to devices, and by third parties, which may slip additional advertising or tracking tools into otherwise harmless and useful software.

Addressing this issue could prove particularly difficult, the researchers note. With vendors and carriers alike looking to eke a few extra bucks out of every device sold, bundled apps and bolted-on advertising and tracking tools are highly attractive to companies, and absent pressure from a higher-up body, the bottom line will almost always win out.

To that end, they recommend that someone step in to offer audits of the supply chain and catch potential security and privacy threats in bundled software.

“Google might be a prime candidate for it given its capacity for licensing vendors and its certification programs,” the researchers note.

“Alternatively, in absence of self-regulation, governments and regulatory bodies could step in and enact regulations and execute enforcement actions that wrest back some of the control from the various actors in the supply chain.”


Security: Panic as panic alarms meant to keep granny and little Timmy safe prove a privacy fiasco

QUOTE

Simple hack turns them into super secret spying tool

A GPS tracker used by elderly people and young kids has a security hole that could allow others to track and secretly record their wearers.

The white-label product is manufactured in China, then rebadged and rebranded by a range of companies in the UK, US, Australia and elsewhere, and sold under names including Pebbell 2, OwnFone and SureSafeGo. Over 10,000 people in the UK use the devices.

It has an in-built SIM card that it uses to pinpoint the location of the user, as well as provide hands-free communications through a speaker and mic. As such it is most commonly used by elderly people in case of a fall and on children whose parents want to be able to know where they are and contact them if necessary.


But researchers at Fidus Information Security discovered, and revealed on Friday, that the system has a dangerous flaw: you can send a text message to the SIM and force it to reset. From there, a remote attacker can cause the device to reveal its location, in real time, as well as secretly turn on the microphone.

The flaw also enables a third party to turn on and off all the key features of the products such as emergency contacts, fall detection, motion detection and a user-assigned PIN. In other words, a critical safety device can be completely disabled by anybody in the world through a text message.


The flaw was introduced in an update to the product: originally the portable fob communicated with a base station that was plugged into a phone line: an approach that provided no clear attack route. But in order to expand its range and usefulness, the SIM card was added so it was not reliant on a base station and would work over the mobile network.

The problem arises from the fact that the Chinese manufacturer built a PIN into the device so it would be locked to the telephone number programmed into the device. Which is fine, except the PIN was disabled by default and is currently not needed to reboot or reset the device.

And so it is possible to send a reset command to the device – if you know its SIM telephone number – and restore it to factory settings. At that point, the device is wide open and doesn’t need the PIN to make changes to the other functions. Which all amounts to remote access.
Random access memory

But how would you find out the device’s number? Well, the researchers got hold of one such device and its number and then ran a script where they sent messages to thousands of similar numbers to see if they hit anything.

They did. “Out of the 2,500 messages we sent, we got responses from 175 devices (7 per cent),” they wrote. “So this is 175 devices being used at the time of writing as an aid for vulnerable people; all identified at a minimal cost. The potential for harm is massive, and in less than a couple of hours, we could interact with 175 of these devices!”

The good news is that it is easy to fix in new devices: simply add a unique code to each device and require that it be supplied to reset the device. And you could limit the device to receiving calls or texts only from a list of approved contacts.

But in the devices already on the market, the fix is not so easy: even after using the default PIN to lock the device down, it can still be reset, because the reset does not require the PIN to be entered. The researchers say they have contacted the companies that use the device “to help them understand the risks posed by our findings” and that those companies are “looking into and are actively recalling devices.” But they also note that some have not responded.

In short, poor design and the lack of a decent security audit prior to putting the updated product on the market has turned what is supposed to provide peace of mind into a potential stalking and listening nightmare.

Facebook’s third act: Mark Zuckerberg announces his firm’s next business model

Zuckerberg’s cynical attempt to change the narrative by implementing end-to-end encryption is simply a bad idea. It gets them off the hook for moderating content (read: more profits), still allows them to sell ads, and makes it nearly impossible for law enforcement to do their job. Hey Zuck, why not hang a sign out: criminals, pedophiles, gangs, repressive regimes, etc. – “all welcome here.” I have a better idea: get Facebook off the planet.

Quote

If it works, the social-networking giant will become more private and more powerful

THE FIRST big overhaul for Facebook came in 2012-14. Internet users were carrying out ever more tasks on smartphones rather than desktop or laptop computers. Mark Zuckerberg opted to follow them, concentrating on Facebook’s mobile app ahead of its website, and buying up two fast-growing communication apps, WhatsApp and Instagram. It worked. Facebook increased its market valuation from around $60bn at the end of 2012 to—for a brief period in 2018—more than $600bn.

On March 6th Mr Zuckerberg announced Facebook’s next pivot. As well as its existing moneymaking enterprise, selling targeted ads on its public social networks, it is building a “privacy-focused platform” around WhatsApp, Instagram and Messenger. The apps will be integrated, he said, and messages sent through them encrypted end-to-end, so that even Facebook cannot read them. While it was not made explicit, it is clear what the business model will be. Mr Zuckerberg wants all manner of businesses to use its messaging networks to provide services and accept payments. Facebook will take a cut.

A big shift was overdue at Facebook given the privacy and political scandals that have battered the firm. Even Mr Zuckerberg, who often appears incapable of seeing the gravity of Facebook’s situation, seemed to grasp the irony of it putting privacy first. “Frankly we don’t currently have a strong reputation for building privacy protective services,” he noted.

Still, he intends to do it. Mr Zuckerberg claims that users will benefit from his plan to integrate its messaging apps into a single, encrypted network. The content of messages will be safe from prying eyes of authoritarian snoops and criminals, as well as from Facebook itself. It will make messaging more convenient, and make profitable new services possible. But caution is warranted for three reasons.

The first is that Facebook has long been accused of misleading the public on privacy and security, so the potential benefits Mr Zuckerberg touts deserve to be treated sceptically. He is also probably underselling the benefits that running integrated messaging networks brings to his firm, even if they are encrypted so that Facebook cannot see the content. The metadata alone, ie, who is talking to whom, when and for how long, will still allow Facebook to target advertisements precisely, meaning its ad model will still function.

End-to-end encryption will also make Facebook’s business cheaper to run. Because it will be mathematically impossible to moderate encrypted communications, the firm will have an excuse to take less responsibility for content running through its apps, limiting its moderation costs.

If it can make the changes, Facebook’s dominance over messaging would probably increase. The newfound user-benefits of a more integrated Facebook might make it harder for regulators to argue that Mr Zuckerberg’s firm should be broken up.

Facebook’s plans in India provide some insight into the new model. It has built a payment system into WhatsApp, the country’s most-used messaging app. The system is waiting for regulatory approval. The market is huge. In the rest of the world, too, users are likely to be drawn in by the convenience of Facebook’s new networks. Mr Zuckerberg’s latest strategy is ingenious but may contain twists.

The Week in Tech: Facebook and Google Reshape the Narrative on Privacy

And from the bs department

QUOTE

…Stop me if you’ve heard this before: The chief executive of a huge tech company with vast stores of user data, and a business built on using it to target ads, now says his priority is privacy.

This time it was Google’s Sundar Pichai, at the company’s annual conference for developers. “We think privacy is for everyone,” he explained on Tuesday. “We want to do more to stay ahead of constantly evolving user expectations.” He reiterated the point in a New York Times Op-Ed, and highlighted the need for federal privacy rules.

The previous week, Mark Zuckerberg delivered similar messages at Facebook’s developer conference. “The future is private,” he said, and Facebook will focus on more intimate communications. He shared the idea in a Washington Post op-ed just weeks before, also highlighting the need for federal privacy rules.

Google went further than Facebook’s rough sketch of what this future looks like, unveiling tangible features: it will let users browse YouTube and Google Maps in “incognito mode,” will allow auto-deletion of Google history after a specified time and will make it easier to find out what the company knows about you, among other new privacy features.

Fatemeh Khatibloo, a vice president and principal analyst at Forrester, told The Times: “These are meaningful changes when it comes to the user’s expectations of privacy, but I don’t think this affects their business at all.” Google has to show that privacy is important, but it will still collect data.

What Google and Facebook are trying to do, though, is reshape the privacy narrative. You may think privacy means keeping hold of your data; they want privacy to mean they don’t hand data to others. (“Google will never sell any personal information to third parties,” Mr. Pichai wrote in his Op-Ed.)

Werner Goertz, a research director at Gartner, said Google had to respond with its own narrative. “It is trying to turn the conversation around and drive public discourse in a way that not only pacifies but also tries to get buy-in from consumers, to align them with its privacy strategy,” he said.

Right – pacify the masses with BS.

Politics of privacy law

Facebook and Google may share a voice on privacy. Lawmakers don’t.

Members of the Federal Trade Commission renewed calls at a congressional hearing on Wednesday to regulate big tech companies’ stewardship of user data, my colleague Cecilia Kang reported. The hearing was held before a House Energy and Commerce subcommittee, where “lawmakers of both parties agreed” that such a law was required, The Wall Street Journal reported.

Sounds promising.

But while the F.T.C. was united in asking for more power to police violations and greater authority to impose penalties, there were large internal tensions about how far it should be able to go in punishing companies. And the lawmakers in Congress “appeared divided over key points that legislation might address,” according to The Journal. Democrats favor harsh penalties and want to give the F.T.C. greater power; Republicans worry that strict regulation could stifle innovation and hurt smaller companies.

Finding compromise will be difficult, and conflicting views risk becoming noise through which a clear voice from Facebook and Google can cut. The longer disagreement rages, the more likely it is that Silicon Valley defines a mainstream view that could shape rules.

Yeah — more lobbyists and political donations subverting democracy. The US should enact an EU-style GDPR now. And another thing: Zuckerberg’s cynical attempt to change the narrative by implementing end-to-end encryption is simply a bad idea. It gets Facebook off the hook for moderating content (read: more profits), still allows it to sell ads, and makes it nearly impossible for law enforcement to do its job. Hey Zuck, why not hang out a sign: criminals, pedophiles, gangs, repressive regimes, etc. – “all welcome here.”

Now for Sale on Facebook: Looted Middle Eastern Antiquities

Another reason Facebook is a disgusting, dangerous corporation. A 5 billion dollar fine is nothing. It needs to be wound down, and Zuckerberg and Sandberg given long, hard prison terms for the evil and death they have caused.

QUOTE

Ancient treasures pillaged from conflict zones in the Middle East are being offered for sale on Facebook, researchers say, including items that may have been looted by Islamic State militants.

Facebook groups advertising the items grew rapidly during the upheaval of the Arab Spring and the ensuing wars, which created unprecedented opportunities for traffickers, said Amr Al-Azm, a professor of Middle East history and anthropology at Shawnee State University in Ohio and a former antiquities official in Syria. He has monitored the trade for years along with his colleagues at the Athar Project, named for the Arabic word for antiquities.

At the same time, Dr. Al-Azm said, social media lowered the barriers to entry to the marketplace. Now there are at least 90 Facebook groups, most in Arabic, connected to the illegal trade in Middle Eastern antiquities, with tens of thousands of members, he said.

They often post items or inquiries in the group, then take the discussion into chat or WhatsApp messaging, making it difficult to track. Some users circulate requests for certain types of items, providing an incentive for traffickers to produce them, a scenario that Dr. Al-Azm called “loot to order.”

Others post detailed instructions for aspiring looters on how to locate archaeological sites and dig up treasures.

Items for sale include a bust purportedly taken from the ancient city of Palmyra, which was occupied for periods by Islamic State militants and endured heavy looting and damage.

Other artifacts for sale come from Iraq, Yemen, Egypt, Tunisia and Libya. The majority do not come from museums or collections, where their existence would have been cataloged, Dr. Al-Azm said.

“They’re being looted straight from the ground,” he said. “They have never been seen. The only evidence we have of their existence is if someone happens to post a picture of them.”

Dr. Al-Azm and Katie A. Paul, the directors of the Athar Project, wrote in World Politics Review last year that the loot-to-order requests showed that traffickers were “targeting material with a previously unseen level of precision — a practice that Facebook makes remarkably easy.”

After the BBC published an article about the work of Dr. Al-Azm and his colleagues last week, Facebook said that it had removed 49 groups connected to antiquities trafficking.

Dr. Al-Azm countered that 90 groups were still up. But more important, he argued, Facebook should not simply delete the pages, which now constitute crucial evidence both for law enforcement and heritage experts.

In a statement on Tuesday, the company said it was “continuing to invest in people and technology to keep this activity off Facebook and encourage others to report anything they suspect of violating our Community Standards so we can quickly take action.”

A spokeswoman said that the company’s policy-enforcement team had 30,000 members and that it had introduced new tools to detect and remove content that violates the law or its policies using artificial intelligence, machine learning and computer vision.

Trafficking in antiquities is illegal across most of the Middle East, and dealing in stolen relics is illegal under international law. But it can be difficult to prosecute such cases.

Leila A. Amineddoleh, a lawyer in New York who specializes in art and cultural heritage, said that determining the provenance of looted items can be arduous, presenting an obstacle for lawyers and academics alike.

Dr. Al-Azm said his team’s research indicated that the Facebook groups are run by an international network of traffickers who cater to dealers, including ones in the West. The sales are often completed in person in cash in nearby countries, he said, despite efforts in Turkey and elsewhere to fight antiquities smuggling.

He faulted Facebook for not heeding warnings about antiquities sales as early as 2014, when it might have been possible to delete the groups to stop, or at least slow, their growth.

As the Islamic State expanded, it systematically looted and destroyed, using heavy machinery to dig into ancient sites that had scarcely been excavated before the war. The group allowed residents and other looters to take from heritage sites, imposing a 20 percent tax on their earnings.

Some local people and cultural heritage experts scrambled to document and save the antiquities, including efforts to physically safeguard them and to create 3-D models and maps. Despite their efforts, the losses were catastrophic.

Satellite images show invaluable sites, such as Mari and Dura-Europos in eastern Syria, pockmarked with excavation holes from looters. In the Mosul Museum in Iraq, the militants filmed themselves taking sledgehammers and drills to monuments they saw as idolatrous, acts designed for maximum propaganda value as the world watched with horror.

Other factions and people also profited from looting. In fact, the market was so saturated that prices dropped drastically for a time around 2016, Dr. Al-Azm said.

Around the same time, as Islamic State fighters scattered in the face of territorial losses, they took their new expertise in looting back to their countries, including Egypt, Tunisia and Libya, and to other parts of Syria, like Idlib Province, he added.

“This is a supply and demand issue,” Dr. Al-Azm said, repeating that any demand gives incentives to looters, possibly financing terrorist groups in the process.

Instead of simply deleting the pages, Dr. Al-Azm said, Facebook should devise a more comprehensive strategy to stop the sales while allowing investigators to preserve photos and records uploaded to the groups.

A hastily posted photo, after all, might be the only record of a looted object that is available to law enforcement or scholars. Simply deleting the page would destroy “a huge corpus of evidence” that will be needed to identify, track and recover looted treasures for years to come, he said.

Similar arguments have been made as social media sites, including YouTube, have deleted videos that show atrocities committed during the Syrian war that could be used to prosecute war crimes.

Facebook has also faced questions over its role as a platform for other types of illicit sales, including guns, poached ivory and more. It has generally responded by shutting down pages or groups in response to reports of illegal activity.

Some of the illicit items sold without proof of their ownership history, of course, could be fake. But given the volume of activity in the antiquities groups and the copious evidence of looting at famous sites, at least some of them are believed to be genuine.

The wave of items hitting the market will most likely continue for years. Some traffickers sit on looted antiquities for long periods, waiting for attention to die down and sometimes forging documents about the items’ origins before offering them for sale.

Boycott is the only way to force social media giants to protect kids online

About a month back I got into an email exchange with a mother who had invited one of my children to a birthday party. I said OK but asked that no pictures be posted to social media, and explained my reasoning. She said I was crazy and would damage my children (among other things). I responded with advice from several reputable sources. No matter. Suffice it to say, no birthday attendance.

I was never sure why she reacted this way. It was almost as if I had asked an addict to go cold turkey. Maybe that’s it. She is addicted.

QUOTE

A public boycott of social media may be the only way to force companies to protect children from abuse, the country’s leading child protection police officer has said.

Simon Bailey, the National Police Chiefs’ Council lead on child protection, said tech companies had abdicated their duty to safeguard children and were only paying attention due to fear of reputational damage.

The senior officer, who is Norfolk’s chief constable, said he believed sanctions such as fines would be “little more than a drop in the ocean” to social media companies, but that the government’s online harms white paper could be a “game changer” if it led to effective punitive measures.

Bailey suggested a boycott would be one way to hit big platforms, which he believes have the technology and funds to “pretty much eradicate the availability, the uploading, and the distribution of indecent imagery”.

Despite the growing problem, Bailey said he had seen nothing so far “that has given me the confidence that companies that are creating these platforms are taking their responsibilities seriously enough”.

He told the Press Association: “Ultimately I think the only thing they will genuinely respond to is when their brand is damaged. Ultimately the financial penalties for some of the giants of this world are going to be an absolute drop in the ocean.

“But if the brand starts to become tainted, and consumers start to see how certain platforms are permitting abuse, are permitting the exploitation of young people, then maybe the damage to that brand will be so significant that they will feel compelled to do something in response.

“We have got to look at how we drive a conversation within our society that says ‘do you know what, we are not going to use that any more, that system or that brand or that site’ because of what they are permitting to be hosted or what they are allowing to take place.”

In every playground there is likely to be someone with pornography on their phone, Bailey said as he described how a growing number of young men are becoming “increasingly desensitised” and progressing to easily available illegal material. Society is “not far off the point where somebody will know somebody” who has viewed illegal images, he said.

There has been a sharp rise in the number of images on the child abuse image database from fewer than 10,000 in the 1990s to 13.4m, with more than 100m variations of these.

Last month, the government launched a consultation on new laws proposed to tackle illegal content online. The white paper, which was revealed in the Guardian, legislated for a new statutory duty of care by social media firms and the appointment of an independent regulator, which is likely to be funded through a levy on the companies. It was welcomed by senior police and children’s charities.

Bailey believes that if effective regulation is put in place, it could free up resources to begin tackling the vast dark web. He expressed concern that the spread of 4G and 5G networks worldwide would open up numerous further opportunities for the sexual exploitation of children.

Speaking at a conference organised by StopSO, a charity that works with offenders and those concerned about their sexual behaviour to minimise the risk of offending, of which Bailey is patron, he recently said that plans from Facebook’s Mark Zuckerberg to increase privacy on the social network would make life harder for child protection units. But he told the room: “There is no doubt that thinking is shifting around responsibility of tech companies. I think that argument has been won, genuinely.

“Of course, the proof is going to be in the pudding with just how ambitious the white paper is, how effective the punitive measures will be, or not.”

Andy Burrows, the National Society for the Prevention of Cruelty to Children’s associate head of child safety online, said: “It feels like social media sites treat child safeguarding crises as a bad news cycle to ride out, rather than a chance to make changes to protect children.”