
Don’t Plummet with Summit – Another Insidious Zuckerberg Failure

…Public schools near Wichita had rolled out a web-based platform and curriculum from Summit Learning. The Silicon Valley-based program promotes an educational approach called “personalized learning,” which uses online tools to customize education. The platform that Summit provides was developed by Facebook engineers. It is funded by Mark Zuckerberg, Facebook’s chief executive, and his wife, Priscilla Chan, a pediatrician.

Under Summit’s program, students spend much of the day on their laptops and go online for lesson plans and quizzes, which they complete at their own pace. Teachers assist students with the work, hold mentoring sessions and lead special projects. The system is free to schools. The laptops are typically bought separately.

Then, students started coming home with headaches and hand cramps. Some said they felt more anxious. One child began having a recurrence of seizures. Another asked to bring her dad’s hunting earmuffs to class to block out classmates because work was now done largely alone.

“We’re allowing the computers to teach and the kids all looked like zombies,” said Tyson Koenig, a factory supervisor in McPherson, who visited his son’s fourth-grade class. In October, he pulled the 10-year-old out of the school.

Yes – personalized learning meant teachers sat doing nothing and kids sat working with their computers alone.



When this school year started, children got laptops to use Summit software and curriculums. In class, they sat at the computers working through subjects from math to English to history. Teachers told students that their role was now to be a mentor.
In September, some students stumbled onto questionable content while working in the Summit platform, which often directs them to click on links to the open web.

In one class covering Paleolithic history, Summit included a link to an article in The Daily Mail, the British newspaper, that showed racy ads with bikini-clad women. For a list of the Ten Commandments, two parents said their children were directed to a Christian conversion site.

Ms. Tavenner said building a curriculum from the open internet meant that a Daily Mail article was fair game for lesson plans. “The Daily Mail is written at a very low reading level,” she said, later adding that it was a bad link to include. She added that as far as she was aware, Summit’s curriculum did not send students to a Christian conversion site.

Around the country, teachers said they were split on Summit. Some said it freed them from making lesson plans and grading quizzes so they had more time for individual students. Others said it left them as bystanders. Some parents said they worried about their children’s data privacy.

“Summit demands an extraordinary amount of personal information about each student and plans to track them through college and beyond,” said Leonie Haimson, co-chairwoman of the Parent Coalition for Student Privacy, a national organization.

Of course! That is Zuckerberg’s business. Get them hooked on isolation from real human interaction, develop their online dossiers early and then sell, sell, sell their data to advertisers. Mark and Priscilla, do you really want to help society? Then walk off a California cliff now. You are the real merchants of death – mental and physical.

Full article here

Facebook uploads users address books (contacts) without user permission

Really – any business that does business with Facebook is sending a great big message to its customers that it does not care about their users’ privacy. “Like us on Facebook” means let us rape your privacy. JUST SAY NO TO FACEBOOK.

Quote

Facebook has admitted to “unintentionally” uploading the address books of 1.5 million users without consent, and says it will delete the collected data and notify those affected.

The discovery follows criticism of Facebook by security experts for a feature that asked new users for their email password as part of the sign-up process. As well as exposing users to potential security breaches, those who provided passwords found that, immediately after their email was verified, the site began “importing” contacts without asking for permission.

Facebook has now admitted it was wrong to do so, and said the upload was inadvertent. “Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time,” the company said.

“When we looked into the steps people were going through to verify their accounts we found that in some cases people’s email contacts were also unintentionally uploaded to Facebook when they created their account,” a spokesperson said. “We estimate that up to 1.5 million people’s email contacts may have been uploaded. These contacts were not shared with anyone and we’re deleting them. We’ve fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage the contacts they share with Facebook in their settings.”

The issue was first noticed in early April, when the Daily Beast reported on Facebook’s practice of asking for email passwords to verify new users. The feature, which allows Facebook to automatically log in to a webmail account to effectively click the link on an email verification itself, was apparently intended to smooth the workflow for signing up for a new account.

But security experts said the practice was “beyond sketchy”, noting that it gave Facebook access to a large amount of personal data and may have led to users adopting unsafe practices around password confidentiality. The company was “practically fishing for passwords you are not supposed to know”, according to cybersecurity tweeter e-sushi, who first raised concern about the feature, which Facebook says has existed since 2016.

At the time, Facebook insisted it did not store email passwords but said nothing about other information gathered in the process. Shortly after, Business Insider reported that, for users who entered their passwords, Facebook was also harvesting contact details – apparently a hangover from an earlier feature that Facebook had built expressly to take contacts with permission – except in this new implementation, users had not given consent.

The company said those contacts were used as part of its People You May Know feature, as well as to improve ad targeting systems. While it has committed to deleting the uploaded contacts, it is not immediately clear whether it will delete the information it inferred from those uploaded contacts – or even whether it is able to do so. Facebook did not immediately reply to a query from the Guardian.

Facebook Is Stealing Your Family’s Joy

Before you post that baby bump or college acceptance letter online, remember how much fun it used to be to share in person.

My kids have had some good news lately. Academic triumphs, hockey tournament wins, even a little college admissions excitement. They’ve had rough moments too, and bittersweet ones. There have been last games and disappointments and unwashed dishes galore. If you’re a friend, or even somebody who knows my mom and struck up a friendly conversation in line at the grocery store, I’d love to talk to you about any of it. I might even show you pictures.

But I’m not going to post them on social media. Because I tried that for a while, and I came to a simple conclusion about getting the reactions of friends, family and acquaintances via emojis and exclamation points rather than hugs and actual exclamations.

It’s no fun. And I don’t want to do it anymore.

I’m not the only one pulling back from social media. While around two-thirds of American adults use Facebook, the way many of us use it has shifted in recent years. About 40 percent of adult users report taking a break from checking Facebook for several weeks or more, and 26 percent tell researchers they’ve deleted the app from their phone at some point in the past year.

Some have changed their behavior because of Facebook’s lax record on protecting user data: More than half of adult users have adjusted their privacy settings in the past year. Others seem more concerned with how the platform makes them act and feel. Either way, pulling back on social media is a way to embrace your family’s privacy.

“I have definitely seen an evolution toward sharing less,” said Julianna Miner, an adjunct professor of global and community health at George Mason University and the author of the forthcoming “Raising a Screen-Smart Kid: Embrace the Good and Avoid the Bad in the Digital Age.” She added, “It’s hard to tell if the changes are a response to the security breaches, or a result of people just getting tired of it.”

Even Mark Zuckerberg, the chief executive of Facebook, seems to suspect it’s at least in part the latter — that after experimenting with living our lives in a larger online sphere for over a decade, many of us are ready to return to the more intimate groups where humans have long thrived. In a recent blog post, Mr. Zuckerberg announced plans to emphasize private conversations and smaller communities on the platform. Interacting on Facebook, he wrote, “will become a fundamentally more private experience” — less “town square,” more “living room.”


That’s a shift I’ve already made for myself, and since doing so, I find myself asking why I embraced my personal soapbox in that online square in the first place. The more I reserve both good news and personal challenges for sharing directly with friends, the more I see that the digital world never offered the same satisfaction or support. Instead, I lost out on moments of seeing friends’ faces light up at joyful news, and frequently found myself wishing that not everyone within my network had been privy to a rant or disappointment.

“There’s plenty of evidence that interpersonal, face-to-face interactions yield a stronger neural response than anything you can do online,” said Ms. Miner. “Online empathy is worth something to us, but not as much. It takes something like six virtual hugs to equal one real hug.”

Time spent seeking those virtual hugs can take us outside the world we’re living in, and draw us back to our phones (which, of course, is the reason many networks offer those bursts of feedback in the first place).

“Ultimately, you’re not just giving social media the time it takes you to post,” said Stacey Steinberg, the associate director of the Center on Children and Families at the University of Florida Levin College of Law and the author of a paper on the topic called “Sharenting: Children’s Privacy in the Age of Social Media.”

“The interaction doesn’t end the minute you press share,” she said. “Some part of your mind is waiting for responses, and that amounts to a small distraction that takes us away from whatever else we would be engaged in.” Once we post that image of our toddler flossing, we’re no longer entirely watching him dance. Some part of us is in the digital realm, waiting to have our delight validated.

That validation can be satisfying, but the emotion is fleeting, like the sugar rush that comes from replacing a real breakfast with a Pop-Tart. Watching your mother’s reaction to the same video, though, brings a different kind of pleasure. “I see parents sharing differently than I did five years ago,” said Ms. Steinberg. “We’re looking for smaller audiences and ways to share just with close friends.”

She also warned that even seemingly innocuous public updates have long shadows. “You could have a child who was a star baseball player and later decides to make a change, still being asked by relative strangers about his batting average,” she said. “Or one who decides on a college, and then changes her mind. Decisions are complex. Lives are complex. Marie Kondo-ing your Facebook page is not so easy.”

There are exceptions. Facebook shines as an arena for professional connection and promotion, of course. For those of us with children who have special needs, it can offer an invaluable community of support. And for the very worst of bad news — for calamities or illnesses or deaths — Facebook can help users speedily share updates, ask for help and share obituaries and memories.

Cal Newport, the author of “Digital Minimalism: Choosing a Focused Life in a Noisy World,” suggests that when we evaluate the ways we use the social media tools available to us, we ask ourselves if those tools are the best ways to achieve our goals. In those cases, the answer is yes.

But for sharing personal moments, for venting, for getting good advice on parenting challenges while feeling supported in our tougher moments? I’ve found that real life, face-to-face, hug-to-hug contact offers more bang for my buck than anything on a screen ever could. Why cheat yourself out of those pleasures for the momentary high of a pile of “likes”?

Recently, I ran into an acquaintance while waiting for my order at a local restaurant. “Congratulations,” she said, warmly. I racked my brain. I’d sold a book that week, but the information wasn’t public. I wasn’t pregnant, didn’t have a new job, had not won the lottery. My takeout ordering skills didn’t really seem worthy of note, and in fact I probably had asked for too much food, as I usually do. I wanted to talk more about this happy news, but what were we talking about? Fortunately, she went on, “Your son must be so thrilled.”

Right. My oldest — admitted to college. He was thrilled, and so were we, and I said so. But how did she know?

My son told her daughter, one of his classmates, and her daughter told her.

Perfect.

Dear tech companies, I don’t want to see pregnancy ads after my child was stillborn

Dear Tech Companies:

I know you knew I was pregnant. It’s my fault, I just couldn’t resist those Instagram hashtags — #30weekspregnant, #babybump. And, silly me! I even clicked once or twice on the maternity-wear ads Facebook served up. What can I say, I am your ideal “engaged” user.

You surely saw my heartfelt thank-you post to all the girlfriends who came to my baby shower, and the sister-in-law who flew in from Arizona for said shower tagging me in her photos. You probably saw me googling “holiday dress maternity plaid” and “babysafe crib paint.” And I bet Amazon.com even told you my due date, Jan. 24, when I created that Prime registry.

But didn’t you also see me googling “braxton hicks vs. preterm labor” and “baby not moving”? Did you not see my three days of social media silence, uncommon for a high-frequency user like me? And then the announcement post with keywords like “heartbroken” and “problem” and “stillborn” and the 200 teardrop emoticons from my friends? Is that not something you could track?

You see, there are 24,000 stillbirths in the United States every year, and millions more among your worldwide users. And let me tell you what social media is like when you finally come home from the hospital with the emptiest arms in the world, after you and your husband have spent days sobbing in bed, and you pick up your phone for a few minutes of distraction before the next wail. It’s exactly, crushingly, the same as it was when your baby was still alive. A Pea in the Pod. Motherhood Maternity. Latched Mama. Every damn Etsy tchotchke I was considering for the nursery.

And when we millions of brokenhearted people helpfully click “I don’t want to see this ad,” and even answer your “Why?” with the cruel-but-true “It’s not relevant to me,” do you know what your algorithm decides, Tech Companies? It decides you’ve given birth, assumes a happy result and deluges you with ads for the best nursing bras (I have cabbage leaves on my breasts because that is the best medical science has to offer to turn off your milk), DVDs about getting your baby to sleep through the night (I would give anything to have heard him cry at all), and the best strollers to grow with your baby (mine will forever be 4 pounds 1 ounce).

And then, after all that, Experian swoops in with the lowest tracking blow of them all: a spam email encouraging me to “finish registering your baby” with them (I never “started,” but sure) to track his credit throughout the life he will never lead.

Please, Tech Companies, I implore you: If your algorithms are smart enough to realize that I was pregnant, or that I’ve given birth, then surely they can be smart enough to realize that my baby died, and advertise to me accordingly — or maybe, just maybe, not at all.

Regards,

Gillian

Addendum:

Rob Goldman, VP of advertising at Facebook, responded to an earlier version of my letter, saying:

“I am so sorry for your loss and your painful experience with our products. We have a setting available that can block ads about some topics people may find painful – including parenting. It still needs improvement, but please know that we’re working on it & welcome your feedback.”

In fact, I knew there was a way to change my Facebook ad settings and attempted to find it a few days ago, without success. Anyone who has experienced the blur, panic and confusion of grief can understand why. I’ve also been deluged with deeply personal messages from others who have experienced stillbirth, infant death and miscarriage who felt the same way I do. We never asked for the pregnancy or parenting ads to be turned on; these tech companies triggered that on their own, based on information we shared. So what I’m asking is that there be similar triggers to turn this stuff off on its own, based on information we’ve shared.

But for anyone who wants to turn off parenting ads on Facebook, it’s under: Settings>Ad Preferences>Hide ad topics>Parenting.

I have a better idea: just say no to Facebook and related social media “look at me, look at me” posts and get on with growing up and living your own life.

Is your pregnancy app sharing your intimate data with your boss?

I’m shocked! Shocked, I say! (Or: “There’s a sucker born every minute,” and when it comes to apps, smart speakers and social media, I think it is more like every nanosecond.)

As apps to help moms monitor their health proliferate, employers and insurers pay to keep tabs on the vast and valuable data

Like millions of women, Diana Diller was a devoted user of the pregnancy-tracking app Ovia, logging in every night to record new details on a screen asking about her bodily functions, sex drive, medications and mood. When she gave birth last spring, she used the app to chart her baby’s first online medical data — including her name, her location and whether there had been any complications — before leaving the hospital’s recovery room.

But someone else was regularly checking in, too: her employer, which paid to gain access to the intimate details of its workers’ personal lives, from their trying-to-conceive months to early motherhood. Diller’s bosses could look up aggregate data on how many workers using Ovia’s fertility, pregnancy and parenting apps had faced high-risk pregnancies or gave birth prematurely; the top medical questions they had researched; and how soon the new moms planned to return to work.

Health experts worry that such data-intensive apps could expose women to security or privacy risks. The ovulation-tracking app Glow updated its systems in 2016 after Consumer Reports found that anyone could access a woman’s health data, including whether she’d had an abortion and the last time she’d had sex, as long as they knew her email address. Another Ovia competitor, Flo, was found to be sending data to Facebook on when its users were having their periods or were trying to conceive, according to tests published in February in the Wall Street Journal. Ovia says it does not share or sell data with social media sites.

“Maybe I’m naive, but I thought of it as positive reinforcement: They’re trying to help me take care of myself,” said Diller, 39, an event planner in Los Angeles for the video game company Activision Blizzard. The decision to track her pregnancy had been made easier by the $1 a day in gift cards the company paid her to use the app: That’s “diaper and formula money,” she said.

Period- and pregnancy-tracking apps such as Ovia have climbed in popularity as fun, friendly companions for the daunting uncertainties of childbirth, and many expectant women check in daily to see, for instance, how their unborn babies’ size compares to different fruits or Parisian desserts.

But Ovia also has become a powerful monitoring tool for employers and health insurers, which under the banner of corporate wellness have aggressively pushed to gather more data about their workers’ lives than ever before.

Employers who pay the apps’ developer, Ovia Health, can offer their workers a special version of the apps that relays their health data — in a “de-identified,” aggregated form — to an internal employer website accessible by human resources personnel. The companies offer it alongside other health benefits and incentivize workers to input as much about their bodies as they can, saying the data can help the companies minimize health-care spending, discover medical problems and better plan for the months ahead.

Emboldened by the popularity of Fitbit and other tracking technologies, Ovia has marketed itself as shepherding one of the oldest milestones in human existence into the digital age. By giving counseling and feedback on mothers’ progress, executives said, Ovia has helped women conceive after months of infertility and even saved the lives of women who wouldn’t otherwise have realized they were at risk.

But health and privacy advocates say this new generation of “menstrual surveillance” tools is pushing the limits of what women will share about one of the most sensitive moments of their lives. The apps, they say, are designed largely to benefit not the women but their employers and insurers, who gain a sweeping new benchmark on which to assess their workers as they consider the next steps for their families and careers.

Experts worry that companies could use the data to bump up the cost or scale back the coverage of health-care benefits, or that women’s intimate information could be exposed in data breaches or security risks. And though the data is made anonymous, experts also fear that the companies could identify women based on information relayed in confidence, particularly in workplaces where few women are pregnant at any given time.

“What could possibly be the most optimistic, best-faith reason for an employer to know how many high-risk pregnancies their employees have? So they can put more brochures in the break room?” asked Karen Levy, a Cornell University assistant professor who has researched family and workplace monitoring.

“The real benefit of self-tracking is always to the company,” Levy said. “People are being asked to do this at a time when they’re incredibly vulnerable and may not have any sense where that data is being passed.”

Ovia chief executive Paris Wallace said the company complies with privacy laws and provides the aggregate data so employers can evaluate how their workforces’ health outcomes have changed over time. The health information is sensitive, he said, but could also play a critical role in boosting women’s well-being and companies’ bottom lines.

“We are in a women’s health crisis, and it’s impacting people’s lives and their children’s lives,” he said, pointing to the country’s rising rates of premature births and maternal deaths. “But it’s also impacting the folks who are responsible for these outcomes — both financially and for the health of the members they’re accountable for.”

The rise of pregnancy-tracking apps shows how some companies increasingly view the human body as a technological gold mine, rich with a vast range of health data their algorithms can track and analyze. Women’s bodies have been portrayed as especially lucrative: The consulting firm Frost & Sullivan said the “femtech” market — including tracking apps for women’s menstruation, nutrition and sexual wellness — could be worth as much as $50 billion by 2025.

Companies pay for Ovia’s “family benefits solution” package on a per-employee basis, but Ovia also makes money off targeted in-app advertising, including from sellers of fertility-support supplements, life insurance, cord-blood banking and cleaning products.

An Ovia spokeswoman said the company does not sell aggregate data for advertising purposes. But women who use Ovia must consent to its 6,000-word “terms of use,” which grant the company a “royalty-free, perpetual, and irrevocable license, throughout the universe” to “utilize and exploit” their de-identified personal information for scientific research and “external and internal marketing purposes.” Ovia may also “sell, lease or lend aggregated Personal Information to third parties,” the document adds.

Milt Ezzard, the vice president of global benefits for Activision Blizzard, a video gaming giant that earned $7.5 billion last year with franchises such as “Call of Duty” and “World of Warcraft,” credits acceptance of Ovia there to a changing workplace culture where volunteering sensitive information has become more commonplace.

In 2014, when the company rolled out incentives for workers who tracked their physical activity with a Fitbit, some employees voiced concerns over what they called a privacy-infringing overreach. But as the company offered more health tracking — including for mental health, sleep, diet, autism and cancer care — Ezzard said workers grew more comfortable with the trade-off and enticed by the financial benefits.

“Each time we introduced something, there was a bit of an outcry: ‘You’re prying into our lives,’ ” Ezzard said. “But we slowly increased the sensitivity of stuff, and eventually people understood it’s all voluntary, there’s no gun to your head, and we’re going to reward you if you choose to do it.”

“People’s sensitivity,” he added, “has gone from, ‘Hey, Activision Blizzard is Big Brother,’ to, ‘Hey, Activision Blizzard really is bringing me tools that can help me out.’ ”

With more than 10 million users, Ovia’s tracking services are now some of the most downloaded medical apps in America, and the company says it has collected billions of data points into what it calls “one of the largest data sets on women’s health in the world.” Alongside competitors such as Glow, Clue and Flo, the period- and pregnancy-tracking apps have raised hundreds of millions of dollars from investors and count tens of millions of users every month.

Founded in Boston in 2012, Ovia began as a consumer-facing app that made money in the tried-and-true advertising fashion of Silicon Valley. But three years ago, Wallace said, the company was approached by large national insurers who said the app could help them improve medical outcomes and access maternity data via the women themselves.

Ovia’s corporate deals with employers and insurers have seen “triple-digit growth” in recent years, Wallace said. The company would not say how many firms it works with, but the number of employees at those companies is around 10 million, a statistic Ovia refers to as “covered lives.”

Ovia pitches its app to companies as a health-care aid for women to better understand their bodies during a mystifying phase of life. In marketing materials, it says women who have tracked themselves with Ovia showed a 30 percent reduction in premature births, a 30 percent increase in natural conception and a higher rate of identifying the signs of postpartum depression. (An Ovia spokeswoman said those statistics come from an internal return-on-investment calculator that “has been favorably reviewed by actuaries from two national insurance companies.”)

But a key element of Ovia’s sales pitch is how companies can cut back on medical costs and help usher women back to work. Pregnant women who track themselves, the company says, will live healthier, feel more in control and be less likely to give birth prematurely or via a C-section, both of which cost more in medical bills — for the family and the employer.

Women wanting to get pregnant are told they can rely on Ovia’s “fertility algorithms,” which analyze their menstrual data and suggest good times to try to conceive, potentially saving money on infertility treatments. “An average of 33 hours of productivity are lost for every round of treatment,” an Ovia marketing document says.
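Ovia has not published how its “fertility algorithms” actually work. As a rough illustration only, here is a minimal calendar-method sketch of the kind of estimate such an app could make from logged cycle lengths; the function name, the 14-day luteal-phase assumption and the 5-day sperm-survival window are my assumptions, not Ovia’s method:

```python
from datetime import date, timedelta

def fertile_window(last_period_start: date, cycle_lengths_days: list) -> tuple:
    """Naive calendar-method estimate of the next fertile window.

    Assumes ovulation occurs roughly 14 days before the next period,
    and that the fertile window spans the 5 days before ovulation
    through ovulation day itself.
    """
    avg_cycle = round(sum(cycle_lengths_days) / len(cycle_lengths_days))
    ovulation = last_period_start + timedelta(days=avg_cycle - 14)
    return ovulation - timedelta(days=5), ovulation

# Example: average cycle of 29 days starting April 1
start, end = fertile_window(date(2019, 4, 1), [28, 30, 29])
# start, end == (date(2019, 4, 11), date(2019, 4, 16))
```

Real apps presumably refine this with logged symptoms and population data, but even this toy version shows why the apps ask for so much: every extra field sharpens the prediction – and the profile.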

For employers who fund workers’ health insurance, pregnancy can be one of the biggest and most unpredictable health-care expenses. In 2014, AOL chief executive Tim Armstrong defended the company’s cuts to retirement benefits by blaming the high medical expenses that arose from two employees giving birth to “distressed babies.”

Ovia, in essence, promises companies a tantalizing offer: lower costs and fewer surprises. Wallace gave one example in which a woman had twins prematurely, received unneeded treatments and spent three months in intensive care. “It was a million-dollar birth … so the company comes to us: How can you help us with this?” he said.

But some health and privacy experts say there are many reasons a woman who is pregnant or trying to conceive wouldn’t want to tell her boss, and they worry the data could be used in a way that puts new moms at a disadvantage.

“The fact that women’s pregnancies are being tracked that closely by employers is very disturbing,” said Deborah C. Peel, a psychiatrist and founder of the Texas nonprofit Patient Privacy Rights. “There’s so much discrimination against mothers and families in the workplace, and they can’t trust their employer to have their best interests at heart.”

Federal law forbids companies from discriminating against pregnant women and mandates that pregnancy-related health-care expenses be covered in the same way as other medical conditions. Ovia said the data helps employers provide “better benefits, health coverage and support.”

Ovia’s soft pastels and cheery text lend a friendly air to the process of transmitting private health information to one’s employer, and the app gives daily nudges to remind women to log their progress with messages such as, “You’re beautiful! How are you feeling today?”

But experts say they are unnerved by the sheer volume and detail of data that women are expected to offer up. Pregnant women can log details of their sleep, diet, mood and weight, while women who are trying to conceive can record when they had sex, how they’re feeling and the look and color of their cervical fluid.

After birth, the app asks for the baby’s name, sex and weight; who performed the delivery and where; the birth type, such as vaginal or an unplanned C-section; how long labor lasted; whether it included an epidural; and the details of any complications, such as whether there was a breech or postpartum hemorrhage.

The app also allows women to report whether they had a miscarriage or pregnancy loss, including the date and “type of loss,” such as whether the baby was stillborn. “After reporting a miscarriage, you will have the option to both reset your account and, when you’re ready, to start a new pregnancy,” the app says.

“We’re their companion throughout this process and want to … provide them with support throughout their entire journey,” Ovia spokeswoman Sarah Coppersmith said.

Much of this information is viewable only by the worker. But the company can access a vast range of aggregated data about its employees, including their average age, number of children and current trimester; the average time it took them to get pregnant; the percentage who had high-risk pregnancies, conceived after a stretch of infertility, had C-sections or gave birth prematurely; and the new moms’ return-to-work timing.

Companies can also see which articles are most read in Ovia’s apps, offering them a potential road map to their workers’ personal questions or anxieties. The how-to guides touch on virtually every aspect of a woman’s changing body, mood, financial needs and lifestyle in hyper-intimate detail, including filing for disability, treating bodily aches and discharges, and suggestions for sex positions during pregnancy.

“We are crossing into a new frontier of vaginal digitalization,” wrote Natasha Felizi and Joana Varon, who reviewed a group of menstrual-tracking apps for the Brazil-based tech activist group Coding Rights.

Ovia data is viewable by the employer, its insurers and, in the case of Activision Blizzard and other self-insured companies, the third-party administrators that process women’s medical claims.

Ovia says it is compliant with government data-privacy laws such as the Health Insurance Portability and Accountability Act, or HIPAA, which sets rules for sharing medical information. The company also says it removes identifying information from women’s health data in a way that renders it anonymous and that it requires employers to reach a certain minimum of enrolled users before they can see the aggregated results.

But health and privacy experts say it’s relatively easy for a bad actor to “re-identify” a person by cross-referencing that information with other data. The trackers’ availability in companies with few pregnant women on staff, they say, could also leave the data vulnerable to abuse. Ovia says its contract prohibits employers from attempting to re-identify employees.
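The re-identification worry the experts raise can be illustrated with a toy sketch. Everything below is invented (hypothetical names, departments and attributes); the point is only that a couple of quasi-identifiers surviving in "anonymous" health data, such as age band and department, can be joined against an ordinary staff directory to single a person out:

```python
# Hypothetical "anonymized" health records: names stripped, but
# quasi-identifiers (age band, department) retained.
anonymized = [
    {"age_band": "30-34", "dept": "Art", "trimester": 2},
    {"age_band": "25-29", "dept": "QA", "trimester": 1},
    {"age_band": "30-34", "dept": "QA", "trimester": 3},
]

# Auxiliary data an insider might already hold: the employee directory.
directory = [
    {"name": "Alice", "age_band": "30-34", "dept": "Art"},
    {"name": "Bob", "age_band": "25-29", "dept": "QA"},
    {"name": "Carol", "age_band": "30-34", "dept": "QA"},
]

def reidentify(record, people):
    """Return directory entries whose quasi-identifiers match the record."""
    return [p for p in people
            if p["age_band"] == record["age_band"]
            and p["dept"] == record["dept"]]

# Exactly one match means the "anonymous" record is re-identified.
hits = reidentify(anonymized[0], directory)
print(hits)
```

In a small or homogeneous workforce, many records will match exactly one directory entry, which is why minimum-cohort thresholds alone are a weak defense.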

Ezzard, the benefits executive at Activision Blizzard, said offering pregnancy programs such as Ovia helps the company stand out in a competitive industry and keep skilled women in the workforce coming back. The company employs roughly 5,000 artists, developers and other workers in the United States.

“I want them to have a healthy baby because it’s great for our business experience,” Ezzard said. “Rather than having a baby who’s in the neonatal ICU, where she’s not able to focus much on work.”


Before Ovia, the company’s pregnant employees would field periodic calls from insurance-company nurses who would ask about how they were feeling and counsel them over the phone. Shifting some pregnancy care to an app where the women could give constant check-ins made a huge difference: Nearly 20 women who had been diagnosed as infertile had become pregnant since the company started offering Ovia’s fertility app, Ezzard said.

Roughly 50 “active users” track their pregnancies at any given time, and the average employee records more than 128 health data points a month, Ezzard said. They also open the app about 48 times a month, or more than once a day.

Ezzard said that the company maintains strict controls on who can review the internal aggregated data and that employees’ medical claims are processed at a third-party data warehouse to help protect their privacy. The program, he added, is already paying off: Ovia and the other services in its “well-being platform” saved the company roughly $1,200 per employee in annual medical costs.

Health experts worry that such data-intensive apps could expose women to security or privacy risks. The ovulation-tracking app Glow updated its systems in 2016 after Consumer Reports found that anyone could access a woman’s health data, including whether she’d had an abortion and the last time she’d had sex, as long as they knew her email address. Another Ovia competitor, Flo, was found to be sending data to Facebook on when its users were having their periods or were trying to conceive, according to tests published in February in the Wall Street Journal. Ovia says it does not share or sell data with social media sites.

The company says it does not do paid clinical trials but provides data to researchers, including for a 2017 study that cited Ovia data from more than 6,000 women on how they chose their obstetricians. But even some researchers worry about ways the information might be used.

“As a clinician researcher, I can see the benefit of analyzing large data sets,” said Paula M. Castaño, an obstetrician-gynecologist and associate professor at Columbia University who has studied menstrual-tracking apps. But a lot of the Ovia data given to employers, she said, raises concerns “with their lack of general clinical applicability and focus on variables that affect time out of work and insurance utilization.”

Ovia says its “fertility algorithms,” which analyze a woman’s data and suggest when she would have the best chance of getting pregnant, have helped 5 million women conceive. But the claim is impossible to prove: Research into similar promises from other apps has suggested there were other possible explanations, including the fact that the women were motivated enough to use a period-tracking app in the first place.

The coming years, however, will probably see companies pushing for more pregnancy data to come straight from the source. The Israeli start-up Nuvo advertises a sensor band strapped around a woman’s belly that can send real-time data on fetal heartbeat and uterine activity “across the home, the workplace, the doctor’s office and the hospital.” Nuvo executives said its “remote pregnancy monitoring platform” is undergoing U.S. Food and Drug Administration review.

Diller, the Activision Blizzard employee, said she was never troubled by Ovia privacy worries. She loved being able to show her friends what size pastry her unborn daughter was and would log her data every night while lying in bed and ticking through her other health apps, including trackers for food, sleep and “mindfulness.”

When she reported the birth in Ovia, the app triggered a burst of virtual confetti and then directed her to download Ovia’s parenting app, where she could track not just her health data, but her newborn daughter’s, too. It was an easy decision. On the app’s home screen, she uploaded the first photo of her newly expanded family.

Hundreds of millions of Facebook records exposed on public servers – report

Wait Wait – I thought old Zuck said he was changing things. I guess his users (including Business users) are really the Zuckers.

Note to Businesses: DROP FACEBOOK, it will hurt you in the long run.
Note to Users: Time to seek addiction counseling because if you still use Facebook, in spite of all the news, you are either mentally challenged or have a serious addiction problem (or simply too apathetic to give a damn).

Editorial — actually my ire is not with the users, it is with the businesses that still patronize Facebook. Customers of these businesses really need to ask: “why is this business still using Facebook?” The answer is clear — they also want your private data and prefer to track and monetize you rather than protect your privacy. And yes, their bedfellows include media like the Washington Post, New York Times, The Guardian, Bloomberg, all of which we often quote here. Shame on them and all others. If companies left Facebook, then this menace (Facebook) would be history. Well, that will not happen as they see $$$$$$$$$$$.

What to do? Simple: 1) Delete your Facebook Account, 2) contact those businesses and urge them to drop Facebook.

Quote

Material discovered on Amazon cloud servers in latest example of Facebook letting third parties extract user data

More than 540m Facebook records were left exposed on public internet servers, cybersecurity researchers said on Wednesday, in just the latest security black eye for the company.

Researchers for the firm UpGuard discovered two separate sets of Facebook user data on public Amazon cloud servers, the company detailed in a blogpost.

One dataset, linked to the Mexican media company Cultura Colectiva, contained more than 540m records, including comments, likes, reactions, account names, Facebook IDs and more. The other set, linked to a defunct Facebook app called At the Pool, was significantly smaller, but contained plaintext passwords for 22,000 users.

The large dataset was secured on Wednesday after Bloomberg, which first reported the leak, contacted Facebook. The smaller dataset was taken offline during UpGuard’s investigation.

The data exposure is not the result of a breach of Facebook’s systems. Rather, it is another example, akin to the Cambridge Analytica case, of Facebook allowing third parties to extract large amounts of user data without controls on how that data is then used or secured.


“The data exposed in each of these sets would not exist without Facebook, yet these data sets are no longer under Facebook’s control,” the UpGuard researchers wrote in their blogpost. “In each case, the Facebook platform facilitated the collection of data about individuals and its transfer to third parties, who became responsible for its security.”

Facebook said that it was investigating the incident and did not yet know the nature of the data, how it was collected or why it was stored on public servers. The company said it would inform users if it found evidence that the data was misused.

“Facebook’s policies prohibit storing Facebook information in a public database,” a spokeswoman said in a statement. “Once alerted to the issue, we worked with Amazon to take down the databases. We are committed to working with the developers on our platform to protect people’s data.”

Cultura Colectiva did not immediately respond to a request for comment.

The data exposure is just the latest example of how Facebook’s efforts to be perceived as a “privacy-focused” platform are hampered by its own past practices and what UpGuard researchers called “the long tail” of user data. For years, Facebook allowed third-party app developers substantial access to users’ information.

“As these exposures show, the data genie cannot be put back in the bottle,” the UpGuard researchers wrote. “Data about Facebook users has been spread far beyond the bounds of what Facebook can control today.”

High tech is watching you

Quote

In new book [The Age of Surveillance Capitalism], Business School professor emerita says surveillance capitalism undermines autonomy — and democracy

The continuing advances of the digital revolution can be dazzling. But Shoshana Zuboff, professor emerita at Harvard Business School, warns that their lights, bells, and whistles have made us blind and deaf to the ways high-tech giants exploit our personal data for their own ends.

In her new book, “The Age of Surveillance Capitalism,” Zuboff offers a disturbing picture of how Silicon Valley and other corporations are mining users’ information to predict and shape their behavior.

The Gazette recently interviewed Zuboff about her belief that surveillance capitalism, a term she coined in 2014, is undermining personal autonomy and eroding democracy — and the ways she says society can fight back.
Q&A
Shoshana Zuboff

GAZETTE: The digital revolution began with great promise. When did you start worrying that the tech giants driving it were becoming more interested in exploiting us than serving us?

ZUBOFF: In my 2002 book, “The Support Economy,” I looked at the challenges to capitalism in shifting from a mass to an individual-oriented structure of consumption. I discussed how we finally had the technology to align the forces of supply and demand. However, the early indications were that the people framing that first generation of e-commerce were more preoccupied with tracking cookies and attracting eyeballs for advertising than they were in the historic opportunity they faced.

For a time I thought this was part of the trial and error of a profound structural transformation, but, certainly by 2007, I understood that this was actually a new variant of capitalism that was taking hold of the digital milieu. The opportunities to align supply and demand around the needs of individuals were overtaken by a new economic logic that offered a fast track to monetization.

GAZETTE: What are some of the ways we might not realize that we are losing our autonomy to Facebook, Google, and others?

ZUBOFF: I define surveillance capitalism as the unilateral claiming of private human experience as free raw material for translation into behavioral data. These data are then computed and packaged as prediction products and sold into behavioral futures markets — business customers with a commercial interest in knowing what we will do now, soon, and later. It was Google that first learned how to capture surplus behavioral data, more than what they needed for services, and used it to compute prediction products that they could sell to their business customers, in this case advertisers. But I argue that surveillance capitalism is no more restricted to that initial context than, for example, mass production was restricted to the fabrication of Model T’s.

Right from the start at Google it was understood that users were unlikely to agree to this unilateral claiming of their experience and its translation into behavioral data. It was understood that these methods had to be undetectable. So from the start the logic reflected the social relations of the one-way mirror. They were able to see and to take — and to do this in a way that we could not contest because we had no way to know what was happening.

We rushed to the internet expecting empowerment, the democratization of knowledge, and help with real problems, but surveillance capitalism really was just too lucrative to resist. This economic logic has now spread beyond the tech companies to new surveillance–based ecosystems in virtually every economic sector, from insurance to automobiles to health, education, finance, to every product described as “smart” and every service described as “personalized.” By now it’s very difficult to participate effectively in society without interfacing with these same channels that are supply chains for surveillance capitalism’s data flows. For example, ProPublica recently reported that breathing machines purchased by people with sleep apnea are secretly sending usage data to health insurers, where the information can be used to justify reduced insurance payments.

GAZETTE: Why have we failed even now to take notice of the effects of all this surveillance?

ZUBOFF: There are many reasons. I chronicle 16 explanations as to “how they got away with it.” One big reason is that the audacious, unprecedented quality of surveillance capitalism’s methods and operations has impeded our ability to perceive them and grasp their meaning and consequence.

Another reason is that surveillance capitalism, invented by Google in 2001, benefitted from a couple of important historical windfalls. One is that it arose in the era of a neoliberal consensus around the superiority of self-regulating companies and markets. State-imposed regulation was considered a drag on free enterprise. A second historical windfall is that surveillance capitalism was invented in 2001, the year of 9/11. In the days leading up to that tragedy, there were new legislative initiatives being discussed in Congress around privacy, some of which might well have outlawed practices that became routine operations of surveillance capitalism. Just hours after the World Trade Center towers were hit, the conversation in Washington changed from a concern about privacy to a preoccupation with “total information awareness.” In this new environment, the intelligence agencies and other powerful forces in Washington and other Western governments were more disposed to incubate and nurture the surveillance capabilities coming out of the commercial sector.

A third reason is that these methodologies are designed to keep us ignorant. The rhetoric of the pioneering surveillance capitalists, and just about everyone who has followed, has been a textbook of misdirection, euphemism, and obfuscation. One theme of misdirection has been to sell people on the idea that the new economic practices are an inevitable consequence of digital technology. In America and throughout the West we believe it’s wrong to impede technological progress. So the thought is that if these disturbing practices are the inevitable consequence of the new technologies, we probably just have to live with it. This is a dangerous category error. It’s impossible to imagine surveillance capitalism without the digital, but it’s easy to imagine the digital without surveillance capitalism.

A fourth explanation involves dependency and the foreclosure of alternatives. We now depend upon the internet just to participate effectively in our daily lives. Whether it’s interfacing with the IRS or your health care provider, nearly everything we do now just to fulfill the barest requirements of social participation marches us through the same channels that are surveillance capitalism’s supply chains.

GAZETTE: You warn that our very humanity and our ability to function as a democracy is in some ways at risk.

ZUBOFF: The competitive dynamics of surveillance capitalism have created some really powerful economic imperatives that are driving these firms to produce better and better behavioral-prediction products. Ultimately they’ve discovered that this requires not only amassing huge volumes of data, but actually intervening in our behavior. The shift is from monitoring to what the data scientists call “actuating.” Surveillance capitalists now develop “economies of action,” as they learn to tune, herd, and condition our behavior with subtle and subliminal cues, rewards, and punishments that shunt us toward their most profitable outcomes.

What is abrogated here is our right to the future tense, which is the essence of free will, the idea that I can project myself into the future and thus make it a meaningful aspect of my present. This is the essence of autonomy and human agency. Surveillance capitalism’s “means of behavioral modification” at scale erodes democracy from within because, without autonomy in action and in thought, we have little capacity for the moral judgment and critical thinking necessary for a democratic society. Democracy is also eroded from without, as surveillance capitalism represents an unprecedented concentration of knowledge and the power that accrues to such knowledge. They know everything about us, but we know little about them. They predict our futures, but for the sake of others’ gain. Their knowledge extends far beyond the compilation of the information we gave them. It’s the knowledge that they have produced from that information that constitutes their competitive advantage, and they will never give that up. These knowledge asymmetries introduce wholly new axes of social inequality and injustice.

GAZETTE: So how do we change this dynamic?

ZUBOFF: There are three arenas that must be addressed if we are to end this age of surveillance capitalism, just as we once ended the Gilded Age.

First, we need a sea change in public opinion. This begins with the power of naming. It means awakening to a sense of indignation and outrage. We say, “No.” We say, “This is not OK.”

Second, we need to muster the resources of our democratic institutions in the form of law and regulation. These include, but also move beyond, privacy and antitrust laws. We also need to develop new laws and regulatory institutions that specifically address the mechanisms and imperatives of surveillance capitalism.

A third arena relates to the opportunity for competitive solutions. Every survey of internet users has shown that once people become aware of surveillance capitalists’ backstage practices, they reject them. That points to a disconnect between supply and demand: a market failure. So once again we see a historic opportunity for an alliance of companies to found an alternative ecosystem — one that returns us to the earlier promise of the digital age as an era of empowerment and the democratization of knowledge.

Facebook’s Data Deals Are Under Criminal Investigation

Throw the book at ’em, and wind down this house of despicable spies and greedy exploiters of their (arguably gullible) flock.

Quote

Federal prosecutors are conducting a criminal investigation into data deals Facebook struck with some of the world’s largest technology companies, intensifying scrutiny of the social media giant’s business practices as it seeks to rebound from a year of scandal and setbacks.

A grand jury in New York has subpoenaed records from at least two prominent makers of smartphones and other devices, according to two people who were familiar with the requests and who insisted on anonymity to discuss confidential legal matters. Both companies had entered into partnerships with Facebook, gaining broad access to the personal information of hundreds of millions of its users.

The companies were among more than 150, including Amazon, Apple, Microsoft and Sony, that had cut sharing deals with the world’s dominant social media platform. The agreements, previously reported in The New York Times, let the companies see users’ friends, contact information and other data, sometimes without consent. Facebook has phased out most of the partnerships over the past two years.



Yep, no surprise here. The invasion of privacy extends much further, including the oligopolists and, in many cases, outright monopolies among the mobile phone carriers, ISPs and beyond. When will the U.S. get serious about antitrust enforcement in the tech industry?

“We are cooperating with investigators and take those probes seriously,” a Facebook spokesman said in a statement. “We’ve provided public testimony, answered questions and pledged that we will continue to do so.”

[Read Brian Chen’s story on what he found when he downloaded his Facebook data.]

It is not clear when the grand jury inquiry, overseen by prosecutors with the United States attorney’s office for the Eastern District of New York, began or exactly what it is focusing on. Facebook was already facing scrutiny by the Federal Trade Commission and the Securities and Exchange Commission. And the Justice Department’s securities fraud unit began investigating it after reports that Cambridge Analytica, a political consulting firm, had improperly obtained the Facebook data of 87 million people and used it to build tools that helped President Trump’s election campaign.

The Justice Department and the Eastern District declined to comment for this article.

The Cambridge investigation, still active, is being run by prosecutors from the Northern District of California. One former Cambridge employee said investigators questioned him as recently as late February. He and three other witnesses in the case, speaking on the condition of anonymity so they would not anger prosecutors, said a significant line of inquiry involved Facebook’s claims that it was misled by Cambridge.

In public statements, Facebook executives had said that Cambridge told the company it was gathering data only for academic purposes. But the fine print accompanying a quiz app that collected the information said it could also be used commercially. Selling user data would have violated Facebook’s rules at the time, yet the social network does not appear to have regularly checked that apps were complying. Facebook deleted the quiz app in December 2015.

The disclosures about Cambridge last year thrust Facebook into the worst crisis of its history. Then came news reports last June and December that Facebook had given business partners — including makers of smartphones, tablets and other devices — deep access to users’ personal information, letting some companies effectively override users’ privacy settings.

The sharing deals empowered Microsoft’s Bing search engine to map out the friends of virtually all Facebook users without their explicit consent, and allowed Amazon to obtain users’ names and contact information through their friends. Apple was able to hide from Facebook users all indicators that its devices were even asking for data.

Privacy advocates said the partnerships seemed to violate a 2011 consent agreement between Facebook and the F.T.C., stemming from allegations that the company had shared data in ways that deceived consumers. The deals also appeared to contradict statements by Mark Zuckerberg and other executives that Facebook had clamped down several years ago on sharing the data of users’ friends with outside developers.

F.T.C. officials, who spent the past year investigating whether Facebook violated the 2011 agreement, are now weighing the sharing deals as they negotiate a possible multibillion-dollar fine. That would be the largest such penalty ever imposed by the trade regulator.

Facebook has aggressively defended the partnerships, saying they were permitted under a provision in the F.T.C. agreement that covered service providers — companies that acted as extensions of the social network.

The company has taken steps in the past year to tackle data misuse and misinformation. Last week, Mr. Zuckerberg unveiled a plan that would begin to pivot Facebook away from being a platform for public sharing and put more emphasis on private communications.

No guns or lockpicks needed to nick modern cars if they’re fitted with hackable ‘smart’ alarms

Vulnerable kit can immobilise motors and even unlock doors

Quote

Researchers have discovered that “smart” alarms can allow thieves to remotely kill your engine at speed, unlock car doors and even tamper with cruise control speed.

British infosec biz Pen Test Partners found that the Viper Smart Start alarm and products from vendor Pandora were riddled with flaws, allowing an attacker to steal a car fitted with one of the affected devices.

“Before we contacted them, the manufacturers had inadvertently exposed around 3 million cars to theft and their users to hijack,” said PTP in a blog post about their findings. The firm was inspired to start looking at Pandora’s alarms after noticing that the company boasted their security was “unhackable”.

Thanks to an unauthenticated corner of the service’s API and a simple parameter manipulation (an insecure direct object reference, or IDOR), PTP said they were able to change a Viper Smart Start user account’s password and registered email address, giving them full control over the app and the car that the alarm system was installed on.

All they had to do was send a POST request to the API with the parameter “email” redefined to one of their choice in order to overwrite the legitimate owner’s email address, thus gaining access and control over the account.
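That broken-access-control pattern can be sketched in miniature. The handler and field names below are entirely hypothetical (this is not Viper’s actual API); the sketch shows only the core flaw PTP describes, a server that re-keys an account to whatever “email” the client supplies without checking that the requester owns the account:

```python
# Hypothetical server-side account store: one registered alarm owner.
accounts = {"owner@example.com": {"password": "s3cret", "car_id": 42}}

def update_account_vulnerable(request_body):
    """Vulnerable handler sketch: honors a client-supplied 'email' field
    with no check that the authenticated requester owns the account."""
    account = accounts.pop(request_body["account_email"])
    accounts[request_body["email"]] = account  # account re-keyed to attacker
    return request_body["email"]

# An attacker replays a legitimate update request with "email" swapped
# to an address they control, then uses password reset to take over.
attacker_email = update_account_vulnerable(
    {"account_email": "owner@example.com", "email": "attacker@evil.example"})
print(attacker_email in accounts)
```

A correct handler would first verify that the authenticated session actually belongs to `account_email` before honoring any change, which is precisely the authorization check the researchers found missing.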

PTP said that in a live proof-of-concept demo they were able to geolocate a target car using the Viper Smart Start account’s inbuilt functionality, set off the alarm (causing the driver to stop and investigate), activate the car’s immobiliser once it was stationary and then remotely unlock the car’s doors, using the app’s ability to clone the key fob and issue RF commands from a user’s mobile phone.

Even worse, after further API digging, PTP researchers discovered a function in the Viper API that remotely turned off the car’s engine. The Pandora API also allowed researchers to remotely enable the car’s microphone, allowing nefarious people to eavesdrop on the occupants.

They also said: “Mazda 6, Range Rover Sport, Kia Quoris, Toyota Fortuner, Mitsubishi Pajero, Toyota Prius 50 and RAV4 – these all appear to have undocumented functionality present in the alarm API to remotely adjust cruise control speed!”

Both Pandora and Viper had fixed the offending IDORs before PTP went public. The infosec firm noted that modern alarm systems tend to have direct access to the CANbus, the heart of a modern electronic vehicle.

A year ago infosec researchers wailed that car security in general is poor, while others discovered that electronic control units (ECUs), small modular computers used for controlling specific vehicle routines that were done mechanically years ago, were vulnerable to certain types of hack even with the engine off and the car stationary.