
Just Say NO to Facebook

Why Privacy Is an Antitrust Issue

QUOTE

Finally the mainstream media is saying what many of us have known for years. Too bad they do not put their money where their mouth is and sever all business ties with the likes of Facebook.

As Facebook has generated scandal after scandal in recent years, critics have started to wonder how we might use antitrust laws to rein in the company’s power. But many of the most pressing concerns about Facebook are its privacy abuses, which unlike price gouging, price discrimination or exclusive dealing, are not immediately recognized as antitrust violations. Is there an antitrust case to be made against Facebook on privacy grounds?

Yes, there is. In March, when Representative David N. Cicilline, Democrat of Rhode Island, called on the Federal Trade Commission to investigate Facebook’s potential violations of antitrust laws, he cited not only Facebook’s acquisitions (such as Instagram and WhatsApp), but also evidence that Facebook was “using its monopoly power to degrade” the quality of its service “below what a competitive marketplace would allow.”

It is this last point, which I made in a law journal article cited by Mr. Cicilline, that promises to change how antitrust law will protect the American public in the era of Big Tech: namely, that consumers can suffer at the hands of monopolies because companies like Facebook lock in users with promises to protect their data and privacy — only to break those promises once competitors in the marketplace have been eliminated.

 


 

To see what I mean, let’s go back to the mid-2000s, when Facebook was an upstart social media platform. To differentiate itself from the market leader, Myspace, Facebook publicly pledged itself to privacy. Privacy provided its competitive advantage, with the company going so far as to promise users, “We do not and will not use cookies to collect private information from any user.”

When Facebook later attempted to change this bargain with users, the threat of losing its customers to its competitors forced the company to reverse course. In 2007, for example, Facebook introduced a program that recorded users’ activity on third-party sites and inserted it into the News Feed. Following public outrage and a class-action lawsuit, Facebook ended the program. “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them,” Facebook’s chief executive, Mark Zuckerberg, wrote in a public apology.

This sort of thing happened regularly for years. Facebook would try something sneaky, users would object and Facebook would back off.

But then Facebook’s competition began to disappear. Facebook acquired Instagram in 2012 and WhatsApp in 2014. Later in 2014, Google announced that it would fold its social network Orkut. Emboldened by the decline of market threats, Facebook revoked its users’ ability to vote on changes to its privacy policies and then (almost simultaneously with Google’s exit from the social media market) changed its privacy pact with users.

This is how Facebook usurped our privacy: with the help of its market dominance. The price of using Facebook has stayed the same over the years (it’s free to join and use), but the cost of using it, calculated in terms of the amount of data that users now must provide, is an order of magnitude above what it was when Facebook faced real competition.
 


It is hard to believe that the Facebook of 2019, which is so consuming of and reckless with our data, was once the privacy-protecting Facebook of 2004. When users today sign up for Facebook, they agree to allow the company to track their activity across more than eight million websites and mobile applications that are connected to the internet. They cannot opt out of this. The ubiquitous tracking of consumers online allows Facebook to collect exponentially more data about them than it originally could, which it can use to its financial advantage.

And while users can control some of the ways in which Facebook uses their data by adjusting their privacy settings, if you choose to leave Facebook, the company still subjects you to surveillance — but you no longer have access to the settings. Staying on the platform is the only effective way to manage its harms.

Lowering the quality of a company’s services in this manner has always been one way a monopoly can squeeze consumers after it corners a market. If you go all the way back to the landmark “case of monopolies” in 17th-century England, for example, you find a court sanctioning a monopoly for fear that it might control either price or the quality of services.

But we must now aggressively enforce this antitrust principle to handle the problems of our modern economy. Our government should undertake the important task of restoring to the American people something they bargained for in the first place — their privacy.

LONG LONG Overdue!

Facebook’s third act: Mark Zuckerberg announces his firm’s next business model

Zuckerberg’s cynical attempt to change the narrative by implementing end-to-end encryption is simply a bad idea. It gets them off the hook for moderating content (read: more profits), still allows them to sell ads, and makes it nearly impossible for law enforcement to do their job. Hey Zuck, why not hang a sign out: criminals, pedophiles, gangs, repressive regimes, etc. – “all welcome here.” I have a better idea: get Facebook off the planet.

Quote

If it works, the social-networking giant will become more private and more powerful

THE FIRST big overhaul for Facebook came in 2012-14. Internet users were carrying out ever more tasks on smartphones rather than desktop or laptop computers. Mark Zuckerberg opted to follow them, concentrating on Facebook’s mobile app ahead of its website, and buying up two fast-growing communication apps, WhatsApp and Instagram. It worked. Facebook increased its market valuation from around $60bn at the end of 2012 to—for a brief period in 2018—more than $600bn.

On March 6th Mr Zuckerberg announced Facebook’s next pivot. As well as its existing moneymaking enterprise, selling targeted ads on its public social networks, it is building a “privacy-focused platform” around WhatsApp, Instagram and Messenger. The apps will be integrated, he said, and messages sent through them encrypted end-to-end, so that even Facebook cannot read them. While it was not made explicit, it is clear what the business model will be. Mr Zuckerberg wants all manner of businesses to use its messaging networks to provide services and accept payments. Facebook will take a cut.

A big shift was overdue at Facebook given the privacy and political scandals that have battered the firm. Even Mr Zuckerberg, who often appears incapable of seeing the gravity of Facebook’s situation, seemed to grasp the irony of it putting privacy first. “Frankly we don’t currently have a strong reputation for building privacy protective services,” he noted.

Still, he intends to do it. Mr Zuckerberg claims that users will benefit from his plan to integrate its messaging apps into a single, encrypted network. The content of messages will be safe from prying eyes of authoritarian snoops and criminals, as well as from Facebook itself. It will make messaging more convenient, and make profitable new services possible. But caution is warranted for three reasons.

The first is that Facebook has long been accused of misleading the public on privacy and security, so the potential benefits Mr Zuckerberg touts deserve to be treated sceptically. He is also probably underselling the benefits that running integrated messaging networks brings to his firm, even if they are encrypted so that Facebook cannot see the content. The metadata alone, ie, who is talking to whom, when and for how long, will still allow Facebook to target advertisements precisely, meaning its ad model will still function.

End-to-end encryption will also make Facebook’s business cheaper to run. Because it will be mathematically impossible to moderate encrypted communications, the firm will have an excuse to take less responsibility for content running through its apps, limiting its moderation costs.

If it can make the changes, Facebook’s dominance over messaging would probably increase. The newfound user-benefits of a more integrated Facebook might make it harder for regulators to argue that Mr Zuckerberg’s firm should be broken up.

Facebook’s plans in India provide some insight into the new model. It has built a payment system into WhatsApp, the country’s most-used messaging app. The system is waiting for regulatory approval. The market is huge. In the rest of the world, too, users are likely to be drawn in by the convenience of Facebook’s new networks. Mr Zuckerberg’s latest strategy is ingenious but may contain twists.

The Week in Tech: Facebook and Google Reshape the Narrative on Privacy

And from the BS department

QUOTE

…Stop me if you’ve heard this before: The chief executive of a huge tech company with vast stores of user data, and a business built on using it to target ads, now says his priority is privacy.

This time it was Google’s Sundar Pichai, at the company’s annual conference for developers. “We think privacy is for everyone,” he explained on Tuesday. “We want to do more to stay ahead of constantly evolving user expectations.” He reiterated the point in a New York Times Op-Ed, and highlighted the need for federal privacy rules.

The previous week, Mark Zuckerberg delivered similar messages at Facebook’s developer conference. “The future is private,” he said, and Facebook will focus on more intimate communications. He shared the idea in a Washington Post op-ed just weeks before, also highlighting the need for federal privacy rules.

Google went further than Facebook’s rough sketch of what this future looks like, and unveiled tangible features: It will let users browse YouTube and Google Maps in “incognito mode,” will allow auto-deletion of Google history after a specified time and will make it easier to find out what the company knows about you, among other new privacy features.

Fatemeh Khatibloo, a vice president and principal analyst at Forrester, told The Times: “These are meaningful changes when it comes to the user’s expectations of privacy, but I don’t think this affects their business at all.” Google has to show that privacy is important, but it will still collect data.

What Google and Facebook are trying to do, though, is reshape the privacy narrative. You may think privacy means keeping hold of your data; they want privacy to mean they don’t hand data to others. (“Google will never sell any personal information to third parties,” Mr. Pichai wrote in his Op-Ed.)

Werner Goertz, a research director at Gartner, said Google had to respond with its own narrative. “It is trying to turn the conversation around and drive public discourse in a way that not only pacifies but also tries to get buy-in from consumers, to align them with its privacy strategy,” he said.

Right – pacify the masses with BS.

Politics of privacy law

Facebook and Google may share a voice on privacy. Lawmakers don’t.

Members of the Federal Trade Commission renewed calls at a congressional hearing on Wednesday to regulate big tech companies’ stewardship of user data, my colleague Cecilia Kang reported. That was before a House Energy and Commerce subcommittee, on which “lawmakers of both parties agreed” that such a law was required, The Wall Street Journal reported.

Sounds promising.

But while the F.T.C. was united in asking for more power to police violations and greater authority to impose penalties, there were large internal tensions about how far it should be able to go in punishing companies. And the lawmakers in Congress “appeared divided over key points that legislation might address,” according to The Journal. Democrats favor harsh penalties and want to give the F.T.C. greater power; Republicans worry that strict regulation could stifle innovation and hurt smaller companies.

Finding compromise will be difficult, and conflicting views risk becoming noise through which a clear voice from Facebook and Google can cut. The longer disagreement rages, the more likely it is that Silicon Valley defines a mainstream view that could shape rules.

Yeah — more lobbyists and political donations subverting democracy. The US should enact an equivalent of the EU’s GDPR now.

Now for Sale on Facebook: Looted Middle Eastern Antiquities

Another reason Facebook is a disgusting, dangerous corporation. A $5 billion fine is nothing. It needs to be wound down, and Zuckerberg and Sandberg given long, hard prison terms for the evil and death they have caused.

Quote

Ancient treasures pillaged from conflict zones in the Middle East are being offered for sale on Facebook, researchers say, including items that may have been looted by Islamic State militants.

Facebook groups advertising the items grew rapidly during the upheaval of the Arab Spring and the ensuing wars, which created unprecedented opportunities for traffickers, said Amr Al-Azm, a professor of Middle East history and anthropology at Shawnee State University in Ohio and a former antiquities official in Syria. He has monitored the trade for years along with his colleagues at the Athar Project, named for the Arabic word for antiquities.

At the same time, Dr. Al-Azm said, social media lowered the barriers to entry to the marketplace. Now there are at least 90 Facebook groups, most in Arabic, connected to the illegal trade in Middle Eastern antiquities, with tens of thousands of members, he said.

They often post items or inquiries in the group, then take the discussion into chat or WhatsApp messaging, making it difficult to track. Some users circulate requests for certain types of items, providing an incentive for traffickers to produce them, a scenario that Dr. Al-Azm called “loot to order.”

Others post detailed instructions for aspiring looters on how to locate archaeological sites and dig up treasures.

Items for sale include a bust purportedly taken from the ancient city of Palmyra, which was occupied for periods by Islamic State militants and endured heavy looting and damage.

Other artifacts for sale come from Iraq, Yemen, Egypt, Tunisia and Libya. The majority do not come from museums or collections, where their existence would have been cataloged, Dr. Al-Azm said.

“They’re being looted straight from the ground,” he said. “They have never been seen. The only evidence we have of their existence is if someone happens to post a picture of them.”

Dr. Al-Azm and Katie A. Paul, the directors of the Athar Project, wrote in World Politics Review last year that the loot-to-order requests showed that traffickers were “targeting material with a previously unseen level of precision — a practice that Facebook makes remarkably easy.”

After the BBC published an article about the work of Dr. Al-Azm and his colleagues last week, Facebook said that it had removed 49 groups connected to antiquities trafficking.

 

Dr. Al-Azm said his team’s research indicated that the Facebook groups are run by an international network of traffickers who cater to dealers, including ones in the West. The sales are often completed in person in cash in nearby countries, he said, despite efforts in Turkey and elsewhere to fight antiquities smuggling.

He faulted Facebook for not heeding warnings about antiquities sales as early as 2014, when it might have been possible to delete the groups to stop, or at least slow, their growth.

Dr. Al-Azm countered that 90 groups were still up. But more important, he argued, Facebook should not simply delete the pages, which now constitute crucial evidence both for law enforcement and heritage experts.

In a statement on Tuesday, the company said it was “continuing to invest in people and technology to keep this activity off Facebook and encourage others to report anything they suspect of violating our Community Standards so we can quickly take action.”

A spokeswoman said that the company’s policy-enforcement team had 30,000 members and that it had introduced new tools to detect and remove content that violates the law or its policies using artificial intelligence, machine learning and computer vision.

Trafficking in antiquities is illegal across most of the Middle East, and dealing in stolen relics is illegal under international law. But it can be difficult to prosecute such cases.

Leila A. Amineddoleh, a lawyer in New York who specializes in art and cultural heritage, said that determining the provenance of looted items can be arduous, presenting an obstacle for lawyers and academics alike.


As the Islamic State expanded, it systematically looted and destroyed, using heavy machinery to dig into ancient sites that had scarcely been excavated before the war. The group allowed residents and other looters to take from heritage sites, imposing a 20 percent tax on their earnings.

Some local people and cultural heritage experts scrambled to document and save the antiquities, including efforts to physically safeguard them and to create 3-D models and maps. Despite their efforts, the losses were catastrophic.

Satellite images show invaluable sites, such as Mari and Dura-Europos in eastern Syria, pockmarked with excavation holes from looters. In the Mosul Museum in Iraq, the militants filmed themselves taking sledgehammers and drills to monuments they saw as idolatrous, acts designed for maximum propaganda value as the world watched with horror.

Other factions and people also profited from looting. In fact, the market was so saturated that prices dropped drastically for a time around 2016, Dr. Al-Azm said.

Around the same time, as Islamic State fighters scattered in the face of territorial losses, they took their new expertise in looting back to their countries, including Egypt, Tunisia and Libya, and to other parts of Syria, like Idlib Province, he added.

“This is a supply and demand issue,” Dr. Al-Azm said, repeating that any demand gives incentives to looters, possibly financing terrorist groups in the process.

Instead of simply deleting the pages, Dr. Al-Azm said, Facebook should devise a more comprehensive strategy to stop the sales while allowing investigators to preserve photos and records uploaded to the groups.

A hastily posted photo, after all, might be the only record of a looted object that is available to law enforcement or scholars. Simply deleting the page would destroy “a huge corpus of evidence” that will be needed to identify, track and recover looted treasures for years to come, he said.

Similar arguments have been made as social media sites, including YouTube, have deleted videos that show atrocities committed during the Syrian war that could be used to prosecute war crimes.

Facebook has also faced questions over its role as a platform for other types of illicit sales, including guns, poached ivory and more. It has generally responded by shutting down pages or groups in response to reports of illegal activity.

Some of the illicit items sold without proof of their ownership history, of course, could be fake. But given the volume of activity in the antiquities groups and the copious evidence of looting at famous sites, at least some of them are believed to be genuine.

The wave of items hitting the market will most likely continue for years. Some traffickers sit on looted antiquities for long periods, waiting for attention to die down and sometimes forging documents about the items’ origins before offering them for sale.

Boycott is the only way to force social media giants to protect kids online

About a month back I got into an email exchange with a mother who had invited one of my children to a birthday party. I said OK but asked that no pictures be posted to social media. I explained my reasoning. She said I was crazy and would damage my children (among other things). I responded with advice from several reputable sources. No matter. Suffice it to say, no birthday attendance.

I was never sure why she reacted this way. It was almost like I asked an addict to go cold turkey. Maybe that’s it. She is addicted.

A public boycott of social media may be the only way to force companies to protect children from abuse, the country’s leading child protection police officer has said.
QUOTE

Simon Bailey, the National Police Chiefs’ Council lead on child protection, said tech companies had abdicated their duty to safeguard children and were only paying attention due to fear of reputational damage.

The senior officer, who is Norfolk’s chief constable, said he believed sanctions such as fines would be “little more than a drop in the ocean” to social media companies, but that the government’s online harms white paper could be a “game changer” if it led to effective punitive measures.

Bailey suggested a boycott would be one way to hit big platforms, which he believes have the technology and funds to “pretty much eradicate the availability, the uploading, and the distribution of indecent imagery”.

Despite the growing problem, Bailey said he had seen nothing so far “that has given me the confidence that companies that are creating these platforms are taking their responsibilities seriously enough”.

He told the Press Association: “Ultimately I think the only thing they will genuinely respond to is when their brand is damaged. Ultimately the financial penalties for some of the giants of this world are going to be an absolute drop in the ocean.

“But if the brand starts to become tainted, and consumers start to see how certain platforms are permitting abuse, are permitting the exploitation of young people, then maybe the damage to that brand will be so significant that they will feel compelled to do something in response.

“We have got to look at how we drive a conversation within our society that says ‘do you know what, we are not going to use that any more, that system or that brand or that site’ because of what they are permitting to be hosted or what they are allowing to take place.”

In every playground there is likely to be someone with pornography on their phone, Bailey said as he described how a growing number of young men are becoming “increasingly desensitised” and progressing to easily available illegal material. Society is “not far off the point where somebody will know somebody” who has viewed illegal images, he said.

There has been a sharp rise in the number of images on the child abuse image database from fewer than 10,000 in the 1990s to 13.4m, with more than 100m variations of these.

Last month, the government launched a consultation on new laws proposed to tackle illegal content online. The white paper, which was revealed in the Guardian, legislated for a new statutory duty of care by social media firms and the appointment of an independent regulator, which is likely to be funded through a levy on the companies. It was welcomed by senior police and children’s charities.

Bailey believes if effective regulation is put in place it could free up resources to begin tackling the vaster dark web. He expressed concern that the spread of 4G and 5G networks worldwide would open up numerous further opportunities for the sexual exploitation of children.

Speaking at a conference organised by StopSO, a charity that works with offenders and those concerned about their sexual behaviour to minimise the risk of offending, of which Bailey is patron, he recently said that plans from Facebook’s Mark Zuckerberg to increase privacy on the social network would make life harder for child protection units. But he told the room: “There is no doubt that thinking is shifting around responsibility of tech companies. I think that argument has been won, genuinely.

“Of course, the proof is going to be in the pudding with just how ambitious the white paper is, how effective the punitive measures will be, or not.”

Andy Burrows, the National Society for the Prevention of Cruelty to Children’s associate head of child safety online, said: “It feels like social media sites treat child safeguarding crises as a bad news cycle to ride out, rather than a chance to make changes to protect children.”

Sri Lanka Shut Down Social Media. My First Thought Was ‘Good.’

So was mine.

Quote

As a tech journalist, I’m ashamed to admit it. But this is how bad the situation has gotten.

This is the ugly conundrum of the digital age: When you traffic in outrage, you get death.

So when the Sri Lankan government temporarily shut down access to American social media services like Facebook and Google’s YouTube after the bombings there on Easter morning, my first thought was “good.”

Good, because it could save lives. Good, because the companies that run these platforms seem incapable of controlling the powerful global tools they have built. Good, because the toxic digital waste of misinformation that floods these platforms has overwhelmed what was once so very good about them. And indeed, by Sunday morning so many false reports about the carnage were already circulating online that the Sri Lankan government worried more violence would follow.

It pains me as a journalist, and someone who once believed that a worldwide communications medium would herald more tolerance, to admit this — to say that my first instinct was to turn it all off. But it has become clear to me with every incident that the greatest experiment in human interaction in the history of the world continues to fail in ever more dangerous ways.

In short: Stop the Facebook/YouTube/Twitter world — we want to get off.

Obviously, that is an impossible request and one that does not address the root cause of the problem, which is that humanity can be deeply inhumane. But that tendency has been made worse by tech in ways that were not anticipated by those who built it.

I noted this in my very first column for The Times almost a year ago, when I called social media giants “digital arms dealers of the modern age” who had, by sloppy design, weaponized pretty much everything that could be weaponized.

“They have weaponized civic discourse,” I wrote. “And they have weaponized, most of all, politics. Which is why malevolent actors continue to game the platforms and why there’s still no real solution in sight anytime soon, because they were built to work exactly this way.”

So it is no surprise that we are where we are now, with the Sri Lankan government closing off its citizens’ access to social media, fearing misinformation would lead to more violence. A pre-crime move, if you will, and a drastic one, since much critical information in that country flows over these platforms. Facebook and YouTube, and to a lesser extent services like Viber, are how news is distributed and consumed and also how it is abused. Imagine if you mashed up newspapers, cable, radio and the internet into one outlet in the United States and you have the right idea.

A Facebook spokesman stressed to me that “people rely on our services to communicate with their loved ones.” He told me the company is working with Sri Lankan law enforcement and trying to remove content that violates its standards.


But while social media had once been credited with helping foster democracy in places like Sri Lanka, it is now blamed for an increase in religious hatred. That justification was behind another brief block a year ago, aimed at Facebook, where the Sri Lankan government said posts appeared to have incited anti-Muslim violence.

“The extraordinary step reflects growing global concern, particularly among governments, about the capacity of American-owned networks to spin up violence,” The Times reported on Sunday.

Spin up violence indeed. Just a month ago in New Zealand, a murderous shooter apparently radicalized by social media broadcast his heinous acts on those same platforms. Let’s be clear, the hateful killer is to blame, but it is hard to deny that his crime was facilitated by tech.

In that case, the New Zealand government did not turn off the tech faucets, but it did point to those companies as a big part of the problem. After the attacks, neither Facebook nor YouTube could easily stop the ever-looping videos of the killings, which proliferated too quickly for their clever algorithms to keep up. One insider at YouTube described the experience to me as a “nightmare version of Whack-a-Mole.”

New Zealand, under the suffer-no-foolish-techies leadership of Jacinda Ardern, will be looking hard at imposing penalties on these companies for not controlling the spread of extremist content. Australia already passed such a law in early April. Here in the United States, our regulators are much farther behind, still debating whether it is a problem or not.

It is a problem, even if the manifestations of how these platforms get warped vary across the world. They are different in ways that make no difference and the same in one crucial way that does. Namely, social media has blown the lids off controls that have kept society in check. These platforms give voice to everyone, but some of those voices are false or, worse, malevolent, and the companies continue to struggle with how to deal with them.

In the early days of the internet, there was a lot of talk of how this was a good thing, getting rid of those gatekeepers. Well, they are gone now, and that means we need to have a global discussion involving all parties on how to handle the resulting disaster, well beyond adding more moderators or better algorithms.

Shutting social media down in times of crisis isn’t going to work. I raised that idea with a top executive at a big tech company I visited last week, during a discussion of what had happened in New Zealand.

“You can’t shut it off,” the executive said flatly. “It’s too late.”

True – but we can discourage or even ban businesses from advertising on it. Then the platforms would wither and die, and good riddance.

Don’t Plummet with Summit – Another Insidious Zuckerberg Failure

…Public schools near Wichita had rolled out a web-based platform and curriculum from Summit Learning. The Silicon Valley-based program promotes an educational approach called “personalized learning,” which uses online tools to customize education. The platform that Summit provides was developed by Facebook engineers. It is funded by Mark Zuckerberg, Facebook’s chief executive, and his wife, Priscilla Chan, a pediatrician.

Under Summit’s program, students spend much of the day on their laptops and go online for lesson plans and quizzes, which they complete at their own pace. Teachers assist students with the work, hold mentoring sessions and lead special projects. The system is free to schools. The laptops are typically bought separately.

Then, students started coming home with headaches and hand cramps. Some said they felt more anxious. One child began having a recurrence of seizures. Another asked to bring her dad’s hunting earmuffs to class to block out classmates because work was now done largely alone.

“We’re allowing the computers to teach and the kids all looked like zombies,” said Tyson Koenig, a factory supervisor in McPherson, who visited his son’s fourth-grade class. In October, he pulled the 10-year-old out of the school.

Yes – personalized learning meant teachers sat doing nothing and kids sat working with their computers alone.



When this school year started, children got laptops to use Summit software and curriculums. In class, they sat at the computers working through subjects from math to English to history. Teachers told students that their role was now to be a mentor.

In September, some students stumbled onto questionable content while working in the Summit platform, which often directs them to click on links to the open web.

In one class covering Paleolithic history, Summit included a link to an article in The Daily Mail, the British newspaper, that showed racy ads with bikini-clad women. For a list of the Ten Commandments, two parents said their children were directed to a Christian conversion site.

Ms. Tavenner said building a curriculum from the open internet meant that a Daily Mail article was fair game for lesson plans. “The Daily Mail is written at a very low reading level,” she said, later adding that it was a bad link to include. She added that as far as she was aware, Summit’s curriculum did not send students to a Christian conversion site.

Around the country, teachers said they were split on Summit. Some said it freed them from making lesson plans and grading quizzes so they had more time for individual students. Others said it left them as bystanders. Some parents said they worried about their children’s data privacy.

“Summit demands an extraordinary amount of personal information about each student and plans to track them through college and beyond,” said Leonie Haimson, co-chairwoman of the Parent Coalition for Student Privacy, a national organization.

Of course! That is Zuckerberg’s business: get them hooked on isolation from real human interaction, develop their online dossiers early, and then sell, sell, sell their data to advertisers. Mark and Priscilla, do you really want to help society? Then walk off a California cliff now. You are the real merchants of death – mental and physical.

Full article here

Facebook uploads users’ address books (contacts) without permission

Really – any business that does business with Facebook is sending a great big message to its customers that it does not care about their privacy. “Like US on Facebook” means “let us rape your privacy.” JUST SAY NO TO FACEBOOK.

Quote

Facebook has admitted to “unintentionally” uploading the address books of 1.5 million users without consent, and says it will delete the collected data and notify those affected.

The discovery follows criticism of Facebook by security experts for a feature that asked new users for their email password as part of the sign-up process. As well as exposing users to potential security breaches, those who provided passwords found that, immediately after their email was verified, the site began “importing” contacts without asking for permission.

Facebook has now admitted it was wrong to do so, and said the upload was inadvertent. “Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time,” the company said.

“When we looked into the steps people were going through to verify their accounts we found that in some cases people’s email contacts were also unintentionally uploaded to Facebook when they created their account,” a spokesperson said. “We estimate that up to 1.5 million people’s email contacts may have been uploaded. These contacts were not shared with anyone and we’re deleting them. We’ve fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage the contacts they share with Facebook in their settings.”

The issue was first noticed in early April, when the Daily Beast reported on Facebook’s practice of asking for email passwords to verify new users. The feature, which allows Facebook to automatically log in to a webmail account to effectively click the link on an email verification itself, was apparently intended to smooth the workflow for signing up for a new account.

But security experts said the practice was “beyond sketchy”, noting that it gave Facebook access to a large amount of personal data and may have led to users adopting unsafe practices around password confidentiality. The company was “practically fishing for passwords you are not supposed to know”, according to cybersecurity tweeter e-sushi, who first raised concern about the feature, which Facebook says has existed since 2016.

At the time, Facebook insisted it did not store email passwords but said nothing about other information gathered in the process. Shortly after, Business Insider reported that, for users who entered their passwords, Facebook was also harvesting contact details – apparently a hangover from an earlier feature that Facebook had built expressly to take contacts with permission – except in this new implementation, users had not given consent.

The company said those contacts were used as part of its People You May Know feature, as well as to improve ad targeting systems. While it has committed to deleting the uploaded contacts, it is not immediately clear whether it will delete the information it inferred from those uploaded contacts – or even whether it is able to do so. Facebook did not immediately reply to a query from the Guardian.

Facebook Is Stealing Your Family’s Joy

Before you post that baby bump or college acceptance letter online, remember how much fun it used to be to share in person.

My kids have had some good news lately. Academic triumphs, hockey tournament wins, even a little college admissions excitement. They’ve had rough moments too, and bittersweet ones. There have been last games and disappointments and unwashed dishes galore. If you’re a friend, or even somebody who knows my mom and struck up a friendly conversation in line at the grocery store, I’d love to talk to you about any of it. I might even show you pictures.

But I’m not going to post them on social media. Because I tried that for a while, and I came to a simple conclusion about getting the reactions of friends, family and acquaintances via emojis and exclamation points rather than hugs and actual exclamations.

It’s no fun. And I don’t want to do it anymore.

I’m not the only one pulling back from social media. While around two-thirds of American adults use Facebook, the way many of us use it has shifted in recent years. About 40 percent of adult users report taking a break from checking Facebook for several weeks or more, and 26 percent tell researchers they’ve deleted the app from their phone at some point in the past year.

Some have changed their behavior because of Facebook’s lax record on protecting user data: More than half of adult users have adjusted their privacy settings in the past year. Others seem more concerned with how the platform makes them act and feel. Either way, pulling back on social media is a way to embrace your family’s privacy.

“I have definitely seen an evolution toward sharing less,” said Julianna Miner, an adjunct professor of global and community health at George Mason University and the author of the forthcoming “Raising a Screen-Smart Kid: Embrace the Good and Avoid the Bad in the Digital Age.” She added, “It’s hard to tell if the changes are a response to the security breaches, or a result of people just getting tired of it.”

Even Mark Zuckerberg, the chief executive of Facebook, seems to suspect it’s at least in part the latter — that after experimenting with living our lives in a larger online sphere for over a decade, many of us are ready to return to the more intimate groups where humans have long thrived. In a recent blog post, Mr. Zuckerberg announced plans to emphasize private conversations and smaller communities on the platform. Interacting on Facebook, he wrote, “will become a fundamentally more private experience” — less “town square,” more “living room.”

That’s a shift I’ve already made for myself, and since doing so, I find myself asking why I embraced my personal soapbox in that online square in the first place. The more I reserve both good news and personal challenges for sharing directly with friends, the more I see that the digital world never offered the same satisfaction or support. Instead, I lost out on moments of seeing friends’ faces light up at joyful news, and frequently found myself wishing that not everyone within my network had been privy to a rant or disappointment.

“There’s plenty of evidence that interpersonal, face-to-face interactions yield a stronger neural response than anything you can do online,” said Ms. Miner. “Online empathy is worth something to us, but not as much. It takes something like six virtual hugs to equal one real hug.”

Time spent seeking those virtual hugs can take us outside the world we’re living in, and draw us back to our phones (which, of course, is the reason many networks offer those bursts of feedback in the first place).

“Ultimately, you’re not just giving social media the time it takes you to post,” said Stacey Steinberg, the associate director of the Center on Children and Families at the University of Florida Levin College of Law and the author of a paper on the topic called “Sharenting: Children’s Privacy in the Age of Social Media.”

“The interaction doesn’t end the minute you press share,” she said. “Some part of your mind is waiting for responses, and that amounts to a small distraction that takes us away from whatever else we would be engaged in.” Once we post that image of our toddler flossing, we’re no longer entirely watching him dance. Some part of us is in the digital realm, waiting to have our delight validated.

That validation can be satisfying, but the emotion is fleeting, like the sugar rush that comes from replacing a real breakfast with a Pop-Tart. Watching your mother’s reaction to the same video, though, brings a different kind of pleasure. “I see parents sharing differently than I did five years ago,” said Ms. Steinberg. “We’re looking for smaller audiences and ways to share just with close friends.”

She also warned that even seemingly innocuous public updates have long shadows. “You could have a child who was a star baseball player and later decides to make a change, still being asked by relative strangers about his batting average,” she said. “Or one who decides on a college, and then changes her mind. Decisions are complex. Lives are complex. Marie Kondo-ing your Facebook page is not so easy.”

There are exceptions. Facebook shines as an arena for professional connection and promotion, of course. For those of us with children who have special needs, it can offer an invaluable community of support. And for the very worst of bad news — for calamities or illnesses or deaths — Facebook can help users speedily share updates, ask for help and share obituaries and memories.

Cal Newport, the author of “Digital Minimalism: Choosing a Focused Life in a Noisy World,” suggests that when we evaluate the ways we use the social media tools available to us, we ask ourselves if those tools are the best ways to achieve our goals. In those cases, the answer is yes.

But for sharing personal moments, for venting, for getting good advice on parenting challenges while feeling supported in our tougher moments? I’ve found that real life, face-to-face, hug-to-hug contact offers more bang for my buck than anything on a screen ever could. Why cheat yourself out of those pleasures for the momentary high of a pile of “likes”?

Recently, I ran into an acquaintance while waiting for my order at a local restaurant. “Congratulations,” she said, warmly. I racked my brain. I’d sold a book that week, but the information wasn’t public. I wasn’t pregnant, didn’t have a new job, had not won the lottery. My takeout ordering skills didn’t really seem worthy of note, and in fact I probably had asked for too much food, as I usually do. I wanted to talk more about this happy news, but what were we talking about? Fortunately, she went on, “Your son must be so thrilled.”

Right. My oldest — admitted to college. He was thrilled, and so were we, and I said so. But how did she know?

My son told her daughter, one of his classmates, and her daughter told her.

Perfect.

Dear tech companies, I don’t want to see pregnancy ads after my child was stillborn

Dear Tech Companies:

I know you knew I was pregnant. It’s my fault, I just couldn’t resist those Instagram hashtags — #30weekspregnant, #babybump. And, silly me! I even clicked once or twice on the maternity-wear ads Facebook served up. What can I say, I am your ideal “engaged” user.

You surely saw my heartfelt thank-you post to all the girlfriends who came to my baby shower, and the sister-in-law who flew in from Arizona for said shower tagging me in her photos. You probably saw me googling “holiday dress maternity plaid” and “babysafe crib paint.” And I bet Amazon.com even told you my due date, Jan. 24, when I created that Prime registry.

But didn’t you also see me googling “braxton hicks vs. preterm labor” and “baby not moving”? Did you not see my three days of social media silence, uncommon for a high-frequency user like me? And then the announcement post with keywords like “heartbroken” and “problem” and “stillborn” and the 200 teardrop emoticons from my friends? Is that not something you could track?

You see, there are 24,000 stillbirths in the United States every year, and millions more among your worldwide users. And let me tell you what social media is like when you finally come home from the hospital with the emptiest arms in the world, after you and your husband have spent days sobbing in bed, and you pick up your phone for a few minutes of distraction before the next wail. It’s exactly, crushingly, the same as it was when your baby was still alive. A Pea in the Pod. Motherhood Maternity. Latched Mama. Every damn Etsy tchotchke I was considering for the nursery.

And when we millions of brokenhearted people helpfully click “I don’t want to see this ad,” and even answer your “Why?” with the cruel-but-true “It’s not relevant to me,” do you know what your algorithm decides, Tech Companies? It decides you’ve given birth, assumes a happy result and deluges you with ads for the best nursing bras (I have cabbage leaves on my breasts because that is the best medical science has to offer to turn off your milk), DVDs about getting your baby to sleep through the night (I would give anything to have heard him cry at all), and the best strollers to grow with your baby (mine will forever be 4 pounds 1 ounce).

And then, after all that, Experian swoops in with the lowest tracking blow of them all: a spam email encouraging me to “finish registering your baby” with them (I never “started,” but sure) to track his credit throughout the life he will never lead.

Please, Tech Companies, I implore you: If your algorithms are smart enough to realize that I was pregnant, or that I’ve given birth, then surely they can be smart enough to realize that my baby died, and advertise to me accordingly — or maybe, just maybe, not at all.

Regards,

Gillian

Addendum:

Rob Goldman, VP of advertising at Facebook, responded to an earlier version of my letter, saying:

“I am so sorry for your loss and your painful experience with our products. We have a setting available that can block ads about some topics people may find painful – including parenting. It still needs improvement, but please know that we’re working on it & welcome your feedback.”

In fact, I knew there was a way to change my Facebook ad settings and attempted to find it a few days ago, without success. Anyone who has experienced the blur, panic and confusion of grief can understand why. I’ve also been deluged with deeply personal messages from others who have experienced stillbirth, infant death and miscarriage who felt the same way I do. We never asked for the pregnancy or parenting ads to be turned on; these tech companies triggered that on their own, based on information we shared. So what I’m asking is that there be similar triggers to turn this stuff off on its own, based on information we’ve shared.

But for anyone who wants to turn off parenting ads on Facebook, it’s under: Settings>Ad Preferences>Hide ad topics>Parenting.

I have a better idea – just say no to Facebook and related social media “look at me, look at me” posts, and get on with growing up and living your own life.