
Monthly Archives: April 2019

Google Spies! The worst kind of microphone is a hidden microphone.

Google says the built-in microphone it never told Nest users about was ‘never supposed to be a secret’

Yeah right.
Quote

  • In early February, Google announced that Assistant would work with its home security and alarm system, Nest Secure.
  • The problem: Users didn’t know a microphone existed on their Nest security devices to begin with.
  • On Tuesday, a Google representative told Business Insider the company had made an “error.”
  • “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the person said. “That was an error on our part.”

In early February, Google announced that its home security and alarm system Nest Secure would be getting an update. Users, the company said, could now enable its virtual-assistant technology, Google Assistant.

The problem: Nest users didn’t know a microphone existed on their security device to begin with.

The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component in the Nest Secure offering, was never disclosed in any of the product material for the device.

On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”

Google says “the microphone has never been on and is only activated when users specifically enable the option.”

It also said the microphone was originally included in the Nest Guard for the possibility of adding new security features down the line, like the ability to detect broken glass.

Still, even if Google included the microphone in its Nest Guard device for future updates — like its Assistant integration — the news comes as consumers have grown increasingly wary of major tech companies and their commitment to consumer privacy.

For Google, the revelation is particularly problematic and brings to mind previous privacy controversies, such as the 2010 incident in which the company acknowledged that its fleet of Street View cars “accidentally” collected personal data transmitted over consumers’ unsecured WiFi networks, including emails.

How Nest, designed to keep intruders out of people’s homes, effectively allowed hackers to get in

What strikes me about this article is simply how illiterate people are when it comes to their digital security. “Only about 1 percent of Internet users, he said, use some kind of password manager.” That is crazy. As this article states, credential stuffing is “at the root of probably 90 percent of the things we see happening.” There are so many stolen passwords being dumped online that if you do not use a different password for each site, you are simply asking for trouble. You can check for yourself here to see if your email credential has been part of a breach. The only way you will be able to use a different password for each site is to use a password manager.
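The password-manager point is easy to demonstrate: a unique, random password per site costs essentially one line of code to generate, which is exactly what a manager does for you behind the scenes. A minimal sketch using Python's `secrets` module (the length and character set here are arbitrary choices, not any manager's actual policy):

```python
import secrets
import string

# Character pool: letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Generate one independent random password, as a manager would per site."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each call is independent, so no two sites ever share a credential.
site_passwords = {site: generate_password() for site in ("bank", "email", "nest")}
```

With 94 possible characters at each of 20 positions, a leaked password from one breach tells an attacker nothing about your other accounts.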

Another conclusion drawn from this article is how IoT companies purposely make their devices a hacker’s dream by deliberately not employing better security methods. They do this to make their products idiot proof. Reread that: idiot proof. In other words, they think their users are idiots and do not want to burden them with nonsense like better security.

Quote

Tara Thomas thought her daughter was just having nightmares. “There’s a monster in my room,” the almost-3-year-old would say, sometimes pointing to the green light on the Nest Cam installed on the wall above her bed.

Then Thomas realized her daughter’s nightmares were real. In August, she walked into the room and heard pornography playing through the Nest Cam, which she had used for years as a baby monitor in their Novato, Calif., home. Hackers, whose voices could be heard faintly in the background, were playing the recording, using the intercom feature in the software. “I’m really sad I doubted my daughter,” she said.

Though it would be nearly impossible to find out who was behind it, a hack like this one doesn’t require much effort, for two reasons: Software designed to help people break into websites and devices has gotten so easy to use that it’s practically child’s play, and many companies, including Nest, have effectively chosen to let some hackers slip through the cracks rather than impose an array of inconvenient countermeasures that could detract from their users’ experience and ultimately alienate their customers.

The result is that anyone in the world with an Internet connection and rudimentary skills has the ability to virtually break into homes through devices designed to keep physical intruders out.

As hacks such as the one the Thomases suffered become public, tech companies are deciding between user convenience and potential damage to their brands. Nest could make it more difficult for hackers to break into Nest cameras, for instance, by making the log-in process more cumbersome. But doing so would introduce what Silicon Valley calls “friction” — anything that can slow down or stand in the way of someone using a product.

At the same time, tech companies pay a reputational price for each high-profile incident. Nest, which is part of Google, has been featured on local news stations throughout the country for hacks similar to what the Thomases experienced. And Nest’s recognizable brand name may have made it a bigger target. While Nest’s thermostats are dominant in the market, its connected security cameras trail the market leader, Arlo, according to Jack Narcotta, an analyst at the market research firm Strategy Analytics. Arlo, which spun out of Netgear, has around 30 percent of the market, he said. Nest is in the top five, he said.

Nik Sathe, vice president of software engineering for Google Home and Nest, said Nest has tried to weigh protecting its less security-savvy customers while taking care not to unduly inconvenience legitimate users to keep out the bad ones. “It’s a balance,” he said. Whatever security Nest uses, Sathe said, needs to avoid “bad outcomes in terms of user experience.”

Google spokeswoman Nicol Addison said Thomas could have avoided being hacked by implementing two-factor authentication, where in addition to a password, the user must enter a six-digit code sent via text message. Thomas said she had activated two-factor authentication; Addison said it had never been activated on the account.

The method used to spy on the Thomases is one of the oldest tricks on the Internet. Hackers essentially look for email addresses and passwords that have been dumped online after being stolen from one website or service and then check to see whether the same credentials work on another site. Like the vast majority of Internet users, the family used similar passwords on more than one account. While their Nest account had not been hacked, their password had essentially become public knowledge, thanks to other data breaches.

In recent years, this practice, which the security industry calls “credential stuffing,” has gotten incredibly easy. One factor is the sheer number of stolen passwords being dumped online publicly. It’s difficult to find someone who hasn’t been victimized. (You can check for yourself here.)
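The "check for yourself" service mentioned above works via a k-anonymity range query: only the first five characters of your password's SHA-1 hash ever leave your machine, and the matching is done locally against the returned candidates. A sketch of that protocol (the `api.pwnedpasswords.com` endpoint is the public Have I Been Pwned API; treat the exact URL format as an assumption):

```python
import hashlib
import urllib.request

def sha1_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and the rest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breaches.

    Only the 5-character hash prefix is sent over the network; the server
    replies with every suffix sharing that prefix, so it never learns
    which one (if any) we were looking for.
    """
    prefix, suffix = sha1_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

The hashing-and-splitting step is the privacy guarantee; the lookup itself is just string matching over the server's response.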

A new breed of credential-stuffing software programs allows people with little to no computer skills to check the log-in credentials of millions of users against hundreds of websites and online services such as Netflix and Spotify in a matter of minutes. Netflix and Spotify both said in statements that they were aware of credential stuffing and employ measures to guard against it. Netflix, for instance, monitors websites with stolen passwords and notifies users when it detects suspicious activity. Neither Netflix nor Spotify offer two-factor authentication.

But the potential for harm is higher for the 20 billion Internet-connected things expected to be online by next year, according to the research firm Gartner. Securing these devices has public safety implications. Hacked devices can be used in large-scale cyberattacks such as the “Dyn hack” that mobilized millions of compromised “Internet of things” devices to take down Twitter, Spotify and others in 2016.

In January, Japanese lawmakers passed an amendment to allow the government to essentially do what hackers do and scour the Internet for stolen passwords and test them to see whether they have been reused on other platforms. The hope is that the government can force tech companies to fix the problem.

Security experts worry the problem has gotten so big that there could be attacks similar to the Dyn hack, this time as a result of a rise in credential stuffing.

“They almost make it foolproof,” said Anthony Ferrante, the global head of cybersecurity at FTI Consulting and a former member of the National Security Council. He said the new tools have made it even more important to stop reusing passwords.

Tech companies have been aware of the threat of credential stuffing for years, but the way they think about it has evolved as it has become a bigger problem. There was once a sense that users should take responsibility for their security by refraining from using the same password on multiple websites. But as gigantic dumps of passwords have gotten more frequent, technology companies have found that it is not just a few inattentive customers who reuse the same passwords for different accounts — it’s the majority of people online.

Credential stuffing is “at the root of probably 90 percent of the things we see happening,” said Emmanuel Schalit, chief executive of Dashlane, a password manager that allows people to store unique, random passwords in one place. Only about 1 percent of Internet users, he said, use some kind of password manager.

“We saw this coming in late 2017, early 2018 when we saw these big credential dumps start to happen,” Google’s Sathe said. In response, Nest says, it implemented security measures around that time.

It did its own research into stolen passwords available on the Web and cross-referenced them with its records, using an encryption technique that ensured Nest could not actually see the passwords. In emails sent to customers, including the Thomases, it notified customers when they were vulnerable. It also tried to block log-in attempts that veered from the way legitimate users log into accounts. For instance, if a computer from the same Internet-protocol address attempted to log into 10 Nest accounts, the algorithm would block that address from logging into any more accounts.
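The IP-based blocking rule described above can be sketched as a small throttle that tracks how many distinct accounts each address has probed. This is an illustrative toy, not Nest's actual implementation; the 10-account limit comes from the example in the article, and all names here are invented:

```python
from collections import defaultdict

class LoginThrottle:
    """Block an IP address once it has attempted too many distinct accounts."""

    def __init__(self, limit: int = 10):
        self.limit = limit
        self.accounts_tried = defaultdict(set)  # ip -> set of account names

    def allow(self, ip: str, account: str) -> bool:
        tried = self.accounts_tried[ip]
        # A brand-new account from an IP that already hit its budget is
        # the credential-stuffing signature: many accounts, one source.
        if account not in tried and len(tried) >= self.limit:
            return False
        tried.add(account)
        return True
```

A legitimate user retrying their own account stays under the limit forever; a stuffing tool cycling through a dump of thousands of credentials trips it almost immediately.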

But Nest’s defenses were not good enough to stop several high-profile incidents throughout last year in which hackers used credential stuffing to break into Nest cameras for kicks. Hackers told a family in a San Francisco suburb, using the family’s Nest Cam, that there was an imminent missile attack from North Korea. Someone hurled racial epithets at a family in Illinois through a Nest Cam. There were also reports of hackers changing the temperature on Nest thermostats. And while only a handful of hacks became public, other users may not even be aware their cameras are compromised.

The company was forced to respond. “Nest was not breached,” it said in a January statement. “These recent reports are based on customers using compromised passwords,” it said, urging its customers to use two-factor authentication. Nest started forcing some users to change their passwords.

This was a big step for Nest because it created the kind of friction that technology companies usually try to avoid. “As we saw the threat evolve, we put more explicit measures in place,” Sathe said. Nest says only a small percentage of its millions of customers are vulnerable to this type of attack.

According to at least one expert, though, Nest users are still exposed. Hank Fordham, a security researcher, sat in his Calgary, Alberta, home recently and opened up a credential-stuffing software program known as Snipr. Instantly, Fordham said, he found thousands of Nest accounts that he could access. Had he wanted to, he would have been able to view cameras and change thermostat settings with relative ease.

While other similar programs have been around for years, Snipr, which costs $20 to download, is easier to use. Snipr provides the code required to check whether hundreds of the most popular platforms, such as “League of Legends” and Netflix, are accessible with a bunch of usernames and passwords — and those have become abundantly available all over the Internet.

Fordham, who had been monitoring the software and testing it for malware, noticed that after Snipr added functionality for Nest accounts last May, news reports of attacks started coming out. “I think the credential-stuffing community was made aware of it, and that was the dam breaking,” he said.

Nest said the company had never heard of Snipr, though it is generally aware of credential-stuffing software. It said it cannot be sure whether any one program drives more credential stuffing toward Nest products.

What surprises Fordham and other security researchers about the vulnerability of Nest accounts is the fact that Nest’s parent company, Google, is widely known for having the best methods for stopping credential-stuffing attacks. Google’s vast user base gives it data that it can use to determine whether someone trying to log into an account is a human or a robot.

The reason Nest has not employed all of Google’s know-how on security goes back to Nest’s roots, according to Nest and people with knowledge of its history. Founded in 2010 by longtime Apple executive Tony Fadell, Nest promised at the time that it would not collect data on users for marketing purposes.

In 2014, Nest was acquired by Google, which has the opposite business model. Google’s products are free or inexpensive and, in exchange, it profits from the personal information it collects about its users. The people familiar with Nest’s history said the different terms of service and technical challenges have prevented Nest from using all of Google’s security products. Google declined to discuss whether any of its security features were withheld because of incompatibility with Nest’s policies.

Under Alphabet, Google’s parent company, Nest employed its own security team. While Google shared knowledge about security, Nest developed its own software. In some ways, Nest’s practices appear to lag well behind Google’s. For instance, Nest still uses SMS messages for two-factor authentication. Using SMS is generally not recommended by security experts, because text messages can be easily hijacked by hackers. Google allows people to use authentication apps, including one it developed in-house, instead of text messages. And Nest does not use ReCaptcha, which Google acquired in 2009 and which can separate humans from automated software, such as what credential stuffers use to identify vulnerable accounts.
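The reason authenticator apps are preferred over SMS is that the six-digit code is computed locally from a shared secret using the TOTP algorithm (RFC 6238), so there is no text message for a hijacker to intercept. A minimal sketch using only the standard library (the secret below is the RFC's published test key, not a real credential):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    t = int(time.time()) if for_time is None else int(for_time)
    return hotp(secret, t // step)
```

Both the phone and the server derive the same code from the secret and the clock; nothing secret travels over the carrier network at login time, which is the weakness SMS delivery reintroduces.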

Sathe said Nest employed plenty of advanced techniques to stop credential stuffing, such as machine learning algorithms that “score” log-ins based on how suspicious they are and block them accordingly. “We have many layers of security in conjunction with what the industry would consider best practices,” he said.
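Nest has not published its scoring model, so the following is only a toy heuristic in the same spirit: combine a few suspicion signals into a score and block log-ins above a threshold. Every feature name, weight, and threshold here is an invented illustration:

```python
def login_risk_score(event: dict) -> float:
    """Toy risk score in [0, 1]; higher means more suspicious. Illustrative only."""
    score = 0.0
    if event.get("new_device"):
        score += 0.4          # never-before-seen device fingerprint
    if event.get("new_country"):
        score += 0.3          # geography inconsistent with account history
    # Recent failed attempts add up, capped so they can't dominate alone.
    score += min(event.get("recent_failures", 0) * 0.1, 0.3)
    return min(score, 1.0)

def should_block(event: dict, threshold: float = 0.7) -> bool:
    return login_risk_score(event) >= threshold
```

A production system would learn the weights from labeled log-in data rather than hard-coding them, but the shape is the same: score, threshold, block.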

When asked why Nest does not use ReCaptcha, Sathe cited difficulty in implementing it on mobile apps, and user convenience. “Captchas do create a speed bump for the users,” he said.

The person behind Snipr, who goes by the name “Pragma” and communicates via an encrypted chat, put the blame on the company. “I can tell you right now, Nest can easily secure all of this,” he said when asked whether his software had enabled people to listen in and harass people via Nest Cams. “This is like stupidly bad security, like, extremely bad.” He also said he would remove the capability to log into Nest accounts, which he said he added last May when one of his customers asked for it, if the company asked. Pragma would not identify himself, for fear of getting in “some kind of serious trouble.”

That’s when Fordham, the Calgary security researcher, became concerned. He noticed the addition of Nest on the dashboard and took it upon himself to start warning people who were vulnerable. He logged into their Nest cams and spoke to them, imploring them to change their passwords. One of those interactions ended up being recorded by the person on the other end of the camera. A local news station broadcast the video.

Fordham said he is miffed that it is still so easy to log into Nest accounts. He noted that Dunkin’ Donuts, after seeing its users fall victim to credential-stuffing attacks aimed at taking their rewards points, implemented measures, including captchas, that have helped solve the problem. “It’s a little alarming that a company owned by Google hasn’t done the same thing as Dunkin’ Donuts,” Fordham said.

A spokeswoman for Dunkin’ declined to comment.


According to people familiar with the matter, Google is in the process of converting Nest user accounts so that they utilize Google’s security methods via Google’s log-in, in part to deal with the problem. Addison said that Nest user data will not be subject to tracking by Google. She later said that she misspoke but would not clarify what that meant.

Knowing that the hack could have been stopped with a unique password or two-factor authentication has not made Thomas, whose camera was hacked, feel any better. “I continuously get emails saying it wasn’t their fault,” she said.

She unplugged the camera and another one she used to have in her son’s bedroom, and she doesn’t plan to turn them on again: “That was the solution.”

Sri Lanka Shut Down Social Media. My First Thought Was ‘Good.’

So was mine.

Quote

As a tech journalist, I’m ashamed to admit it. But this is how bad the situation has gotten.

This is the ugly conundrum of the digital age: When you traffic in outrage, you get death.

So when the Sri Lankan government temporarily shut down access to American social media services like Facebook and Google’s YouTube after the bombings there on Easter morning, my first thought was “good.”

Good, because it could save lives. Good, because the companies that run these platforms seem incapable of controlling the powerful global tools they have built. Good, because the toxic digital waste of misinformation that floods these platforms has overwhelmed what was once so very good about them. And indeed, by Sunday morning so many false reports about the carnage were already circulating online that the Sri Lankan government worried more violence would follow.

It pains me as a journalist, and someone who once believed that a worldwide communications medium would herald more tolerance, to admit this — to say that my first instinct was to turn it all off. But it has become clear to me with every incident that the greatest experiment in human interaction in the history of the world continues to fail in ever more dangerous ways.

In short: Stop the Facebook/YouTube/Twitter world — we want to get off.

Obviously, that is an impossible request and one that does not address the root cause of the problem, which is that humanity can be deeply inhumane. But that tendency has been made worse by tech in ways that were not anticipated by those who built it.

I noted this in my very first column for The Times almost a year ago, when I called social media giants “digital arms dealers of the modern age” who had, by sloppy design, weaponized pretty much everything that could be weaponized.

“They have weaponized civic discourse,” I wrote. “And they have weaponized, most of all, politics. Which is why malevolent actors continue to game the platforms and why there’s still no real solution in sight anytime soon, because they were built to work exactly this way.”

So it is no surprise that we are where we are now, with the Sri Lankan government closing off its citizens’ access to social media, fearing misinformation would lead to more violence. A pre-crime move, if you will, and a drastic one, since much critical information in that country flows over these platforms. Facebook and YouTube, and to a lesser extent services like Viber, are how news is distributed and consumed and also how it is abused. Imagine if you mashed up newspapers, cable, radio and the internet into one outlet in the United States and you have the right idea.

A Facebook spokesman stressed to me that “people rely on our services to communicate with their loved ones.” He told me the company is working with Sri Lankan law enforcement and trying to remove content that violates its standards.


But while social media had once been credited with helping foster democracy in places like Sri Lanka, it is now blamed for an increase in religious hatred. That justification was behind another brief block a year ago, aimed at Facebook, where the Sri Lankan government said posts appeared to have incited anti-Muslim violence.

“The extraordinary step reflects growing global concern, particularly among governments, about the capacity of American-owned networks to spin up violence,” The Times reported on Sunday.

Spin up violence indeed. Just a month ago in New Zealand, a murderous shooter apparently radicalized by social media broadcast his heinous acts on those same platforms. Let’s be clear, the hateful killer is to blame, but it is hard to deny that his crime was facilitated by tech.

In that case, the New Zealand government did not turn off the tech faucets, but it did point to those companies as a big part of the problem. After the attacks, neither Facebook nor YouTube could easily stop the ever-looping videos of the killings, which proliferated too quickly for their clever algorithms to keep up. One insider at YouTube described the experience to me as a “nightmare version of Whack-a-Mole.”

New Zealand, under the suffer-no-foolish-techies leadership of Jacinda Ardern, will be looking hard at imposing penalties on these companies for not controlling the spread of extremist content. Australia already passed such a law in early April. Here in the United States, our regulators are much farther behind, still debating whether it is a problem or not.

It is a problem, even if the manifestations of how these platforms get warped vary across the world. They are different in ways that make no difference and the same in one crucial way that does. Namely, social media has blown the lids off controls that have kept society in check. These platforms give voice to everyone, but some of those voices are false or, worse, malevolent, and the companies continue to struggle with how to deal with them.

In the early days of the internet, there was a lot of talk of how this was a good thing, getting rid of those gatekeepers. Well, they are gone now, and that means we need to have a global discussion involving all parties on how to handle the resulting disaster, well beyond adding more moderators or better algorithms.

Shutting social media down in times of crisis isn’t going to work. I raised that idea with a top executive at a big tech company I visited last week, during a discussion of what had happened in New Zealand.

“You can’t shut it off,” the executive said flatly. “It’s too late.”

True – but we can encourage or even ban businesses from advertising on it. Then they would wither and die, and good riddance.

Don’t Plummet with Summit – Another Insidious Zuckerberg Failure

…Public schools near Wichita had rolled out a web-based platform and curriculum from Summit Learning. The Silicon Valley-based program promotes an educational approach called “personalized learning,” which uses online tools to customize education. The platform that Summit provides was developed by Facebook engineers. It is funded by Mark Zuckerberg, Facebook’s chief executive, and his wife, Priscilla Chan, a pediatrician.

Under Summit’s program, students spend much of the day on their laptops and go online for lesson plans and quizzes, which they complete at their own pace. Teachers assist students with the work, hold mentoring sessions and lead special projects. The system is free to schools. The laptops are typically bought separately.

Then, students started coming home with headaches and hand cramps. Some said they felt more anxious. One child began having a recurrence of seizures. Another asked to bring her dad’s hunting earmuffs to class to block out classmates because work was now done largely alone.

“We’re allowing the computers to teach and the kids all looked like zombies,” said Tyson Koenig, a factory supervisor in McPherson, who visited his son’s fourth-grade class. In October, he pulled the 10-year-old out of the school.

Yes – personalized learning meant teachers sat doing nothing and kids sat working with their computers alone.



When this school year started, children got laptops to use Summit software and curriculums. In class, they sat at the computers working through subjects from math to English to history. Teachers told students that their role was now to be a mentor.
In September, some students stumbled onto questionable content while working in the Summit platform, which often directs them to click on links to the open web.

In one class covering Paleolithic history, Summit included a link to an article in The Daily Mail, the British newspaper, that showed racy ads with bikini-clad women. For a list of the Ten Commandments, two parents said their children were directed to a Christian conversion site.

Ms. Tavenner said building a curriculum from the open internet meant that a Daily Mail article was fair game for lesson plans. “The Daily Mail is written at a very low reading level,” she said, later adding that it was a bad link to include. She added that as far as she was aware, Summit’s curriculum did not send students to a Christian conversion site.

Around the country, teachers said they were split on Summit. Some said it freed them from making lesson plans and grading quizzes so they had more time for individual students. Others said it left them as bystanders. Some parents said they worried about their children’s data privacy.

“Summit demands an extraordinary amount of personal information about each student and plans to track them through college and beyond,” said Leonie Haimson, co-chairwoman of the Parent Coalition for Student Privacy, a national organization.

Of course! That is Zuckerberg’s business. Get them hooked on isolation from real human interaction, develop their online dossier early, and then sell, sell, sell their data to advertisers. Mark and Chan, do you really want to help society? Then walk off a California cliff now. You are the real merchants of death – mental and physical.

Full article here

On the Great Russian Heist of 2016 and Cyber Security

The Mueller report is a good reminder of how important it is to prevent foreign interference in American elections.

True. And how they did it is telling also. They used social media and hacking. I will address hacking first. I have been focused on the cyber security business for nearly 20 years, and to this day it astounds me how lax businesses are about their information security. The reason is money. They do not want to spend it. Why? The cost of a breach is cheaper than what is needed to secure and monitor a business. Solution? Easy – since businesses will not solve this, we need very strong data security and privacy laws at the federal level, coupled with stiff fines for breaches.

Next, social media – here is where businesses can play a role. They need to wake up to the fact that their use of companies like Facebook, Twitter, and Google is part of the problem. They need to simply stop using their advertising platforms. Of course that will not happen.

Consumers – at the federal level, mandate that all browsers have effective ad blockers and that browser privacy data is dumped when the browser is closed, inclusive of cookies and cache. If consumers can’t be targeted or tracked, then their value as ad revenue will dry up, and so will the likes of Facebook etc.

There are other things that need to be done along the anti-trust route for big tech and in counseling for social media addiction. Perhaps a minimum age limit of 18 for using any digital social media would facilitate real social development in our children and eliminate the ills of digital social media.

Anyway – on to the op-ed piece. It is very old news of course. Unless you were living under a rock, Russian influence in the election was well known a long time ago. And I do not wish to simply pick on Russia. Other countries are targeting us also – on many fronts. Should we ask them to stop? Well, we can, but the results will be the same as when Obama did this: a temporary pause and then right back at it (China in that case). In fact, we should thank Russia and China for showing us how weak, and in many cases nonexistent, our cyber defenses are. They did it because they could. I maintain, therefore, that the answer is federal action to guide and mandate information security.

Quote

Heist of 2016

The report of the special counsel Robert Mueller leaves considerable space for partisan warfare over the role of President Trump and his political campaign in Russia’s interference in the 2016 election. But one conclusion is categorical: “The Russian government interfered in the 2016 presidential election in sweeping and systematic fashion.”

That may sound like old news. The Justice Department’s indictment of 13 Russians and three companies in February 2018 laid bare much of the sophisticated Russian campaign to blacken the American democratic process and support the Trump campaign, including the theft of American identities and creation of phony political organizations to fan division on immigration, religion or race. The extensive hacks of Hillary Clinton’s campaign emails and a host of other dirty tricks have likewise been exhaustively chronicled.

But Russia’s interference in the campaign was the core issue that Mr. Mueller was appointed to investigate, and if he stopped short of accusing the Trump campaign of overtly cooperating with the Russians — the report mercifully rejects speaking of “collusion,” a term that has no meaning in American law — he was unequivocal on Russia’s culpability: “First, the Office determined that Russia’s two principal interference operations in the 2016 U.S. presidential election — the social media campaign and the hacking-and-dumping operations — violated U.S. criminal law.”

The first part of the report, which describes these crimes, is worthy of a close read. Despite a thick patchwork of redactions, it details serious and dangerous actions against the United States that Mr. Trump, for all his endless tweeting and grousing about the special counsel’s investigation, has never overtly confronted, acknowledged, condemned or comprehended. Culpable or not, he must be made to understand that a foreign power that interferes in American elections is, in fact, trying to distort American foreign policy and national security.

The earliest interference described in the report was a social media campaign intended to fan social rifts in the United States, carried out by an outfit funded by an oligarch known as “Putin’s chef” for the feasts he catered. Called the Internet Research Agency, the unit actually sent agents to the United States to gather information at one point. What the unit called “information warfare” evolved by 2016 into an operation targeted at favoring Mr. Trump and disparaging Mrs. Clinton. This included posing as American people or grass-roots organizations such as the Tea Party, anti-immigration groups, Black Lives Matter and others to buy political ads or organize political rallies.

At the same time, the report said, the cyberwarfare arm of the Russian army’s intelligence service opened another front, hacking the Clinton campaign and the Democratic National Committee and releasing reams of damaging materials through the front groups DCLeaks and Guccifer 2.0, and later through WikiLeaks. The releases were carefully timed for impact — emails stolen from the Clinton campaign chairman John Podesta, for example, were released less than an hour after the “Access Hollywood” tape damaging to Mr. Trump came out.

 

All this activity, the report said, was accompanied by the well documented efforts to contact the Trump campaign through business connections, offers of assistance to the campaign, invitations for Mr. Trump to meet Mr. Putin and plans for improved American-Russian relations. Both sides saw potential gains, the report said — Russia in a Trump presidency, the campaign from the stolen information. The Times documented 140 contacts between Mr. Trump and his associates and Russian nationals and WikiLeaks or their intermediaries. But the Mueller investigation “did not establish that members of the Trump campaign conspired or coordinated with the Russian government in its election interference activities.”

That is the part Mr. Trump sees as vindication, though the activities of his chaotic campaign team that the report describes are — at best — naïve. It is obviously difficult for this president to acknowledge that he was aided in his election by Russia, and there is no way to gauge with any certainty how much impact the Russian activities actually had on voters.

But the real danger that the Mueller report reveals is not of a president who knowingly or unknowingly let a hostile power do dirty tricks on his behalf, but of a president who refuses to see that he has been used to damage American democracy and national security.

Since the publication of the report, Vladimir Putin and his government have been crowing that they, too, are now somehow vindicated, joining the White House in creating the illusion that the investigation was all about “collusion” rather than a condemnation of criminal Russian actions. If their hope in a Trump presidency was to restore relations between the United States and Russia, and to ease sanctions, the Russians certainly failed, especially given the added sanctions ordered by Congress over Moscow’s interference.

But if the main intent was to intensify the rifts in American society, Russia backed a winner in Mr. Trump.

A perceived victory for Russian interference poses a serious danger to the United States. Already, several American agencies are working, in partnership with the tech industry, to prevent election interference going forward. But the Kremlin is not the only hostile government mucking around in America’s cyberspace — China and North Korea are two others honing their cyber-arsenals, and they, too, could be tempted to manipulate partisan strife for their ends.

That is something neither Republicans nor Democrats should allow. The two parties may not agree on Mr. Trump’s culpability, but they have already found a measure of common ground with the sanctions they have imposed on Russia over its interference in the campaign. Now they could justify the considerable time and expense of the special counsel investigation, and at the same time demonstrate that the fissure in American politics is not terminal, by jointly making clear to Russia and other hostile forces that the democratic process, in the United States and its allies, is strictly off limits to foreign clandestine manipulation, and that anyone who tries will pay a heavy price.

Facebook uploads users address books (contacts) without user permission

Really – any business that does business with Facebook is sending a great big message to its customers that it does not care about their privacy. “Like us on Facebook” means “let us rape your privacy.” JUST SAY NO TO FACEBOOK.

Quote

Facebook has admitted to “unintentionally” uploading the address books of 1.5 million users without consent, and says it will delete the collected data and notify those affected.

The discovery follows criticism of Facebook by security experts for a feature that asked new users for their email password as part of the sign-up process. As well as exposing users to potential security breaches, those who provided passwords found that, immediately after their email was verified, the site began “importing” contacts without asking for permission.

Facebook has now admitted it was wrong to do so, and said the upload was inadvertent. “Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time,” the company said.

“When we looked into the steps people were going through to verify their accounts we found that in some cases people’s email contacts were also unintentionally uploaded to Facebook when they created their account,” a spokesperson said. “We estimate that up to 1.5 million people’s email contacts may have been uploaded. These contacts were not shared with anyone and we’re deleting them. We’ve fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage the contacts they share with Facebook in their settings.”

The issue was first noticed in early April, when the Daily Beast reported on Facebook’s practice of asking for email passwords to verify new users. The feature, which let Facebook automatically log in to a webmail account and effectively click the verification link itself, was apparently intended to streamline the workflow for signing up for a new account.

But security experts said the practice was “beyond sketchy”, noting that it gave Facebook access to a large amount of personal data and may have led to users adopting unsafe practices around password confidentiality. The company was “practically fishing for passwords you are not supposed to know”, according to cybersecurity tweeter e-sushi, who first raised concern about the feature, which Facebook says has existed since 2016.

At the time, Facebook insisted it did not store email passwords but said nothing about other information gathered in the process. Shortly after, Business Insider reported that, for users who entered their passwords, Facebook was also harvesting contact details – apparently a hangover from an earlier feature that Facebook had built expressly to take contacts with permission – except in this new implementation, users had not given consent.

The company said those contacts were used as part of its People You May Know feature, as well as to improve ad targeting systems. While it has committed to deleting the uploaded contacts, it is not immediately clear whether it will delete the information it inferred from those uploaded contacts – or even whether it is able to do so. Facebook did not immediately reply to a query from the Guardian.

Facebook Is Stealing Your Family’s Joy

Before you post that baby bump or college acceptance letter online, remember how much fun it used to be to share in person.

My kids have had some good news lately. Academic triumphs, hockey tournament wins, even a little college admissions excitement. They’ve had rough moments too, and bittersweet ones. There have been last games and disappointments and unwashed dishes galore. If you’re a friend, or even somebody who knows my mom and struck up a friendly conversation in line at the grocery store, I’d love to talk to you about any of it. I might even show you pictures.

But I’m not going to post them on social media. Because I tried that for a while, and I came to a simple conclusion about getting the reactions of friends, family and acquaintances via emojis and exclamations points rather than hugs and actual exclamations.

It’s no fun. And I don’t want to do it any more.

I’m not the only one pulling back from social media. While around two-thirds of American adults use Facebook, the way many of us use it has shifted in recent years. About 40 percent of adult users report taking a break from checking Facebook for several weeks or more, and 26 percent tell researchers they’ve deleted the app from their phone at some point in the past year.

Some have changed their behavior because of Facebook’s lax record on protecting user data: More than half of adult users have adjusted their privacy settings in the past year. Others seem more concerned with how the platform makes them act and feel. Either way, pulling back on social media is a way to embrace your family’s privacy.

“I have definitely seen an evolution toward sharing less,” said Julianna Miner, an adjunct professor of global and community health at George Mason University and the author of the forthcoming “Raising a Screen-Smart Kid: Embrace the Good and Avoid the Bad in the Digital Age.” She added, “It’s hard to tell if the changes are a response to the security breaches, or a result of people just getting tired of it.”

Even Mark Zuckerberg, the chief executive of Facebook, seems to suspect it’s at least in part the latter — that after experimenting with living our lives in a larger online sphere for over a decade, many of us are ready to return to the more intimate groups where humans have long thrived. In a recent blog post, Mr. Zuckerberg announced plans to emphasize private conversations and smaller communities on the platform. Interacting on Facebook, he wrote, “will become a fundamentally more private experience” — less “town square,” more “living room.”

That’s a shift I’ve already made for myself, and since doing so, I find myself asking why I embraced my personal soapbox in that online square in the first place. The more I reserve both good news and personal challenges for sharing directly with friends, the more I see that the digital world never offered the same satisfaction or support. Instead, I lost out on moments of seeing friends’ faces light up at joyful news, and frequently found myself wishing that not everyone within my network had been privy to a rant or disappointment.

“There’s plenty of evidence that interpersonal, face-to-face interactions yield a stronger neural response than anything you can do online,” said Ms. Miner. “Online empathy is worth something to us, but not as much. It takes something like six virtual hugs to equal one real hug.”

Time spent seeking those virtual hugs can take us outside the world we’re living in, and draw us back to our phones (which, of course, is the reason many networks offer those bursts of feedback in the first place).

“Ultimately, you’re not just giving social media the time it takes you to post,” said Stacey Steinberg, the associate director of the Center on Children and Families at the University of Florida Levin College of Law and the author of a paper on the topic called “Sharenting: Children’s Privacy in the Age of Social Media.”

“The interaction doesn’t end the minute you press share,” she said. “Some part of your mind is waiting for responses, and that amounts to a small distraction that takes us away from whatever else we would be engaged in.” Once we post that image of our toddler flossing, we’re no longer entirely watching him dance. Some part of us is in the digital realm, waiting to have our delight validated.

That validation can be satisfying, but the emotion is fleeting, like the sugar rush that comes from replacing a real breakfast with a Pop-Tart. Watching your mother’s reaction to the same video, though, brings a different kind of pleasure. “I see parents sharing differently than I did five years ago,” said Ms. Steinberg. “We’re looking for smaller audiences and ways to share just with close friends.”

She also warned that even seemingly innocuous public updates have long shadows. “You could have a child who was a star baseball player and later decides to make a change, still being asked by relative strangers about his batting average,” she said. “Or one who decides on a college, and then changes her mind. Decisions are complex. Lives are complex. Marie Kondo-ing your Facebook page is not so easy.”

There are exceptions. Facebook shines as an arena for professional connection and promotion, of course. For those of us with children who have special needs, it can offer an invaluable community of support. And for the very worst of bad news — for calamities or illnesses or deaths — Facebook can help users speedily share updates, ask for help and share obituaries and memories.

Cal Newport, the author of “Digital Minimalism: Choosing a Focused Life in a Noisy World,” suggests that when we evaluate the ways we use the social media tools available to us, we ask ourselves if those tools are the best ways to achieve our goals. In those cases, the answer is yes.

But for sharing personal moments, for venting, for getting good advice on parenting challenges while feeling supported in our tougher moments? I’ve found that real life, face-to-face, hug-to-hug contact offers more bang for my buck than anything on a screen ever could. Why cheat yourself out of those pleasures for the momentary high of a pile of “likes”?

Recently, I ran into an acquaintance while waiting for my order at a local restaurant. “Congratulations,” she said, warmly. I racked my brain. I’d sold a book that week, but the information wasn’t public. I wasn’t pregnant, didn’t have a new job, had not won the lottery. My takeout ordering skills didn’t really seem worthy of note, and in fact I probably had asked for too much food, as I usually do. I wanted to talk more about this happy news, but what were we talking about? Fortunately, she went on, “Your son must be so thrilled.”

Right. My oldest — admitted to college. He was thrilled, and so were we, and I said so. But how did she know?

My son told her daughter, one of his classmates, and her daughter told her.

Perfect.

Is America Becoming an Oligarchy?

Wait, what does this have to do with information technology? A lot. Examples of industrial concentration can be seen in telecoms, ISPs and content providers, online retail, and social media.

Growing inequality threatens our most basic democratic principles.

Pete Buttigieg, who’s shown an impressive knack for putting matters well in these early days of the 2020 presidential race, nailed it recently when Chuck Todd of NBC asked him about capitalism. Of course I’m a capitalist, he said; America “is a capitalist society.”

But, he continued: “It’s got to be democratic capitalism.”

Mr. Buttigieg said that when capitalism becomes unrestrained by democratic checks and impulses, that’s no longer the kind of capitalism that once produced broad prosperity in this country. “If you want to see what happens when you have capitalism without democracy, you can see it very clearly in Russia,” he said. “It turns into crony capitalism, and that turns into oligarchy.”

Aside from enabling Mr. Buttigieg, the South Bend, Ind., mayor, to swat away a question that has bedeviled some others, his rhetoric reminds us of a crucial point: There is, or should be, a democratic element to capitalism — and an economic element to how we define democracy.

After all, oligarchy does have an economic element to it; in fact, it is explicitly economic. Oligarchy is the rule of the few, and these few have been understood since Aristotle’s time to be men of wealth, property, nobility, what have you.

But somehow, as the definition of democracy has been handed down to us over the years, the word has come to mean the existence and exercise of a few basic rights and principles. The people — the “demos” — are imbued with no particular economic characteristic. This is wrong. Our definition of democracy needs to change.

Democracy can’t flourish in a context of grotesque concentration of wealth. This idea is neither new nor radical nor alien. It is old, mainstream and as American as Thomas Jefferson.

I invoke Jefferson for a reason. Everyone knows how he was occupying his time in the summer of 1776; he was writing the Declaration of Independence. But what was he up to that fall? He was a member of the Virginia House of Delegates, and he was taking the lead in writing and sponsoring legislation to abolish the commonwealth’s laws upholding “entail” (which kept large estates within families across generations) and primogeniture.

Mere coincidence that he moved so quickly from writing the founding document of democracy to writing a bill abolishing inheritance laws brought over from England? Hardly. He believed, as the founders did generally, that excess inherited wealth was fundamentally incompatible with democracy.

They were most concerned with inherited wealth, as was the Scottish economist Adam Smith, whom conservatives invoke constantly today but who would in fact be appalled by the propagandistic phrase “death tax” — in their time, inherited wealth was the oppressive economic problem.

But their economic concerns weren’t limited to that. They saw clearly the link between democratic health and general economic prosperity. Here is John Adams, not exactly Jefferson’s best friend: All elements of society, he once wrote, must “cooperate in this one democratical principle, that the end of all government is the happiness of the People: and in this other, that the greatest happiness of the greatest Number is the point to be obtained.” “Happiness” to the founders meant economic well-being, and note that Adams called it “democratical.”

So, yes, democracy and the kind of economic inequality we’ve seen in this country in recent decades don’t mix. Some will rejoin that many nations even more unequal than ours are still democracies — South Africa, Brazil, India. But are those the models to which the United States of America should aspire?

A number of scholars have made these arguments in recent years, notably Ganesh Sitaraman in his book “The Crisis of the Middle-Class Constitution.” All that work has been vitally important. But now that some politicians are saying it, we can finally have the broad national conversation we’ve desperately needed for years.

Bernie Sanders has proposed an inheritance tax that the founders would love, and Elizabeth Warren has proposed a wealth tax of which they’d surely approve. But you don’t have to be a supporter of either of those candidates or their plans to get behind the general idea that great concentration of wealth is undemocratic.

Policies built around this idea will not turn America into the Soviet Union or, in the au courant formulation, Venezuela. They will make it the nation the founders intended. And this, as Mr. Buttigieg’s words suggest, is how Democratic candidates should answer the socialism question (with the apparent exception of the socialist Mr. Sanders). No, I’m a capitalist. And that’s why I want capitalism to change.

Dear tech companies, I don’t want to see pregnancy ads after my child was stillborn

Dear Tech Companies:

I know you knew I was pregnant. It’s my fault, I just couldn’t resist those Instagram hashtags — #30weekspregnant, #babybump. And, silly me! I even clicked once or twice on the maternity-wear ads Facebook served up. What can I say, I am your ideal “engaged” user.

You surely saw my heartfelt thank-you post to all the girlfriends who came to my baby shower, and the sister-in-law who flew in from Arizona for said shower tagging me in her photos. You probably saw me googling “holiday dress maternity plaid” and “babysafe crib paint.” And I bet Amazon.com even told you my due date, Jan. 24, when I created that Prime registry.

But didn’t you also see me googling “braxton hicks vs. preterm labor” and “baby not moving”? Did you not see my three days of social media silence, uncommon for a high-frequency user like me? And then the announcement post with keywords like “heartbroken” and “problem” and “stillborn” and the 200 teardrop emoticons from my friends? Is that not something you could track?

You see, there are 24,000 stillbirths in the United States every year, and millions more among your worldwide users. And let me tell you what social media is like when you finally come home from the hospital with the emptiest arms in the world, after you and your husband have spent days sobbing in bed, and you pick up your phone for a few minutes of distraction before the next wail. It’s exactly, crushingly, the same as it was when your baby was still alive. A Pea in the Pod. Motherhood Maternity. Latched Mama. Every damn Etsy tchotchke I was considering for the nursery.

And when we millions of brokenhearted people helpfully click “I don’t want to see this ad,” and even answer your “Why?” with the cruel-but-true “It’s not relevant to me,” do you know what your algorithm decides, Tech Companies? It decides you’ve given birth, assumes a happy result and deluges you with ads for the best nursing bras (I have cabbage leaves on my breasts because that is the best medical science has to offer to turn off your milk), DVDs about getting your baby to sleep through the night (I would give anything to have heard him cry at all), and the best strollers to grow with your baby (mine will forever be 4 pounds 1 ounce).

And then, after all that, Experian swoops in with the lowest tracking blow of them all: a spam email encouraging me to “finish registering your baby” with them (I never “started,” but sure) to track his credit throughout the life he will never lead.

Please, Tech Companies, I implore you: If your algorithms are smart enough to realize that I was pregnant, or that I’ve given birth, then surely they can be smart enough to realize that my baby died, and advertise to me accordingly — or maybe, just maybe, not at all.

Regards,

Gillian

Addendum:

Rob Goldman, VP of advertising at Facebook, responded to an earlier version of my letter, saying:

“I am so sorry for your loss and your painful experience with our products. We have a setting available that can block ads about some topics people may find painful – including parenting. It still needs improvement, but please know that we’re working on it & welcome your feedback.”

In fact, I knew there was a way to change my Facebook ad settings and attempted to find it a few days ago, without success. Anyone who has experienced the blur, panic and confusion of grief can understand why. I’ve also been deluged with deeply personal messages from others who have experienced stillbirth, infant death and miscarriage who felt the same way I do. We never asked for the pregnancy or parenting ads to be turned on; these tech companies triggered that on their own, based on information we shared. So what I’m asking is that there be similar triggers to turn this stuff off on its own, based on information we’ve shared.

But for anyone who wants to turn off parenting ads on Facebook, it’s under: Settings>Ad Preferences>Hide ad topics>Parenting.

I have a better idea: just say no to Facebook and the related “look at me, look at me” social media posts, and get on with growing up and living your own life.

Is your pregnancy app sharing your intimate data with your boss?

I’m shocked! Shocked, I say! (Or, as the saying goes, “there’s a sucker born every minute.” When it comes to apps, smart speakers, and social media, I think it is more like every nanosecond.)

As apps to help moms monitor their health proliferate, employers and insurers pay to keep tabs on the vast and valuable data

Like millions of women, Diana Diller was a devoted user of the pregnancy-tracking app Ovia, logging in every night to record new details on a screen asking about her bodily functions, sex drive, medications and mood. When she gave birth last spring, she used the app to chart her baby’s first online medical data — including her name, her location and whether there had been any complications — before leaving the hospital’s recovery room.

But someone else was regularly checking in, too: her employer, which paid to gain access to the intimate details of its workers’ personal lives, from their trying-to-conceive months to early motherhood. Diller’s bosses could look up aggregate data on how many workers using Ovia’s fertility, pregnancy and parenting apps had faced high-risk pregnancies or gave birth prematurely; the top medical questions they had researched; and how soon the new moms planned to return to work.

Health experts worry that such data-intensive apps could expose women to security or privacy risks. The ovulation-tracking app Glow updated its systems in 2016 after Consumer Reports found that anyone could access a woman’s health data, including whether she’d had an abortion and the last time she’d had sex, as long as they knew her email address. Another Ovia competitor, Flo, was found to be sending data to Facebook on when its users were having their periods or were trying to conceive, according to tests published in February in the Wall Street Journal. Ovia says it does not share or sell data with social media sites.

“Maybe I’m naive, but I thought of it as positive reinforcement: They’re trying to help me take care of myself,” said Diller, 39, an event planner in Los Angeles for the video game company Activision Blizzard. The decision to track her pregnancy had been made easier by the $1 a day in gift cards the company paid her to use the app: That’s “diaper and formula money,” she said.

Period- and pregnancy-tracking apps such as Ovia have climbed in popularity as fun, friendly companions for the daunting uncertainties of childbirth, and many expectant women check in daily to see, for instance, how their unborn babies’ size compares to different fruits or Parisian desserts.

But Ovia also has become a powerful monitoring tool for employers and health insurers, which under the banner of corporate wellness have aggressively pushed to gather more data about their workers’ lives than ever before.

Employers who pay the apps’ developer, Ovia Health, can offer their workers a special version of the apps that relays their health data — in a “de-identified,” aggregated form — to an internal employer website accessible by human resources personnel. The companies offer it alongside other health benefits and incentivize workers to input as much about their bodies as they can, saying the data can help the companies minimize health-care spending, discover medical problems and better plan for the months ahead.

Emboldened by the popularity of Fitbit and other tracking technologies, Ovia has marketed itself as shepherding one of the oldest milestones in human existence into the digital age. By giving counseling and feedback on mothers’ progress, executives said, Ovia has helped women conceive after months of infertility and even saved the lives of women who wouldn’t otherwise have realized they were at risk.

But health and privacy advocates say this new generation of “menstrual surveillance” tools is pushing the limits of what women will share about one of the most sensitive moments of their lives. The apps, they say, are designed largely to benefit not the women but their employers and insurers, who gain a sweeping new benchmark on which to assess their workers as they consider the next steps for their families and careers.

Experts worry that companies could use the data to bump up the cost or scale back the coverage of health-care benefits, or that women’s intimate information could be exposed in data breaches or security risks. And though the data is made anonymous, experts also fear that the companies could identify women based on information relayed in confidence, particularly in workplaces where few women are pregnant at any given time.
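The re-identification worry above is easy to see concretely. A standard mitigation (hypothetical here; the article does not say what, if anything, Ovia does beyond aggregation) is to suppress any group whose count falls below a threshold k, so a cohort of one is never reported. A minimal sketch:

```python
# Minimal sketch, not Ovia's actual pipeline: why "de-identified" aggregates
# can still expose individuals when cohorts are small, and how small-count
# suppression (a k-anonymity-style threshold) mitigates it.

from collections import Counter

def aggregate_with_suppression(records, key, k=5):
    """Count records by `key`, hiding any group with fewer than k members."""
    counts = Counter(r[key] for r in records)
    return {group: (n if n >= k else "<suppressed>") for group, n in counts.items()}

# If only one employee in a department is in the data, reporting a count of 1
# identifies her to anyone who knows she is pregnant; suppression hides it.
records = [
    {"dept": "events"},
    {"dept": "engineering"},
    {"dept": "engineering"},
    {"dept": "engineering"},
    {"dept": "engineering"},
    {"dept": "engineering"},
]
print(aggregate_with_suppression(records, "dept", k=5))
```

The single "events" record is reported as suppressed rather than as a count of 1, which is exactly the failure mode the experts quoted here are worried about in workplaces where few women are pregnant at any given time.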

“What could possibly be the most optimistic, best-faith reason for an employer to know how many high-risk pregnancies their employees have? So they can put more brochures in the break room?” asked Karen Levy, a Cornell University assistant professor who has researched family and workplace monitoring.

“The real benefit of self-tracking is always to the company,” Levy said. “People are being asked to do this at a time when they’re incredibly vulnerable and may not have any sense where that data is being passed.”

Ovia chief executive Paris Wallace said the company complies with privacy laws and provides the aggregate data so employers can evaluate how their workforces’ health outcomes have changed over time. The health information is sensitive, he said, but could also play a critical role in boosting women’s well-being and companies’ bottom lines.

“We are in a women’s health crisis, and it’s impacting people’s lives and their children’s lives,” he said, pointing to the country’s rising rates of premature births and maternal deaths. “But it’s also impacting the folks who are responsible for these outcomes — both financially and for the health of the members they’re accountable for.”

The rise of pregnancy-tracking apps shows how some companies increasingly view the human body as a technological gold mine, rich with a vast range of health data their algorithms can track and analyze. Women’s bodies have been portrayed as especially lucrative: The consulting firm Frost & Sullivan said the “femtech” market — including tracking apps for women’s menstruation, nutrition and sexual wellness — could be worth as much as $50 billion by 2025.

Companies pay for Ovia’s “family benefits solution” package on a per-employee basis, but Ovia also makes money off targeted in-app advertising, including from sellers of fertility-support supplements, life insurance, cord-blood banking and cleaning products.

An Ovia spokeswoman said the company does not sell aggregate data for advertising purposes. But women who use Ovia must consent to its 6,000-word “terms of use,” which grant the company a “royalty-free, perpetual, and irrevocable license, throughout the universe” to “utilize and exploit” their de-identified personal information for scientific research and “external and internal marketing purposes.” Ovia may also “sell, lease or lend aggregated Personal Information to third parties,” the document adds.

Milt Ezzard, the vice president of global benefits for Activision Blizzard, a video gaming giant that earned $7.5 billion last year with franchises such as “Call of Duty” and “World of Warcraft,” credits acceptance of Ovia there to a changing workplace culture where volunteering sensitive information has become more commonplace.

In 2014, when the company rolled out incentives for workers who tracked their physical activity with a Fitbit, some employees voiced concerns over what they called a privacy-infringing overreach. But as the company offered more health tracking — including for mental health, sleep, diet, autism and cancer care — Ezzard said workers grew more comfortable with the trade-off and enticed by the financial benefits.

“Each time we introduced something, there was a bit of an outcry: ‘You’re prying into our lives,’ ” Ezzard said. “But we slowly increased the sensitivity of stuff, and eventually people understood it’s all voluntary, there’s no gun to your head, and we’re going to reward you if you choose to do it.”

“People’s sensitivity,” he added, “has gone from, ‘Hey, Activision Blizzard is Big Brother,’ to, ‘Hey, Activision Blizzard really is bringing me tools that can help me out.’ ”

With more than 10 million users, Ovia’s tracking services are now some of the most downloaded medical apps in America, and the company says it has collected billions of data points into what it calls “one of the largest data sets on women’s health in the world.” Alongside competitors such as Glow, Clue and Flo, the period- and pregnancy-tracking apps have raised hundreds of millions of dollars from investors and count tens of millions of users every month.

Founded in Boston in 2012, Ovia began as a consumer-facing app that made money in the tried-and-true advertising fashion of Silicon Valley. But three years ago, Wallace said, the company was approached by large national insurers who said the app could help them improve medical outcomes and access maternity data via the women themselves.

Ovia’s corporate deals with employers and insurers have seen “triple-digit growth” in recent years, Wallace said. The company would not say how many firms it works with, but the number of employees at those companies is around 10 million, a statistic Ovia refers to as “covered lives.”

Ovia pitches its app to companies as a health-care aid for women to better understand their bodies during a mystifying phase of life. In marketing materials, it says women who have tracked themselves with Ovia showed a 30 percent reduction in premature births, a 30 percent increase in natural conception and a higher rate of identifying the signs of postpartum depression. (An Ovia spokeswoman said those statistics come from an internal return-on-investment calculator that “has been favorably reviewed by actuaries from two national insurance companies.”)

But a key element of Ovia’s sales pitch is how companies can cut back on medical costs and help usher women back to work. Pregnant women who track themselves, the company says, will live healthier, feel more in control and be less likely to give birth prematurely or via a C-section, both of which cost more in medical bills — for the family and the employer.

Women wanting to get pregnant are told they can rely on Ovia’s “fertility algorithms,” which analyze their menstrual data and suggest good times to try to conceive, potentially saving money on infertility treatments. “An average of 33 hours of productivity are lost for every round of treatment,” an Ovia marketing document says.

For employers who fund workers’ health insurance, pregnancy can be one of the biggest and most unpredictable health-care expenses. In 2014, AOL chief executive Tim Armstrong defended the company’s cuts to retirement benefits by blaming the high medical expenses that arose from two employees giving birth to “distressed babies.”

Ovia, in essence, promises companies a tantalizing offer: lower costs and fewer surprises. Wallace gave one example in which a woman had twins prematurely, received unneeded treatments and spent three months in intensive care. “It was a million-dollar birth … so the company comes to us: How can you help us with this?” he said.

But some health and privacy experts say there are many reasons a woman who is pregnant or trying to conceive wouldn’t want to tell her boss, and they worry the data could be used in a way that puts new moms at a disadvantage.

“The fact that women’s pregnancies are being tracked that closely by employers is very disturbing,” said Deborah C. Peel, a psychiatrist and founder of the Texas nonprofit Patient Privacy Rights. “There’s so much discrimination against mothers and families in the workplace, and they can’t trust their employer to have their best interests at heart.”

Federal law forbids companies from discriminating against pregnant women and mandates that pregnancy-related health-care expenses be covered in the same way as other medical conditions. Ovia said the data helps employers provide “better benefits, health coverage and support.”

Ovia’s soft pastels and cheery text lend a friendly air to the process of transmitting private health information to one’s employer, and the app gives daily nudges to remind women to log their progress with messages such as, “You’re beautiful! How are you feeling today?”

But experts say they are unnerved by the sheer volume and detail of data that women are expected to offer up. Pregnant women can log details of their sleep, diet, mood and weight, while women who are trying to conceive can record when they had sex, how they’re feeling and the look and color of their cervical fluid.

After birth, the app asks for the baby’s name, sex and weight; who performed the delivery and where; the birth type, such as vaginal or an unplanned C-section; how long labor lasted; whether it included an epidural; and the details of any complications, such as whether there was a breech or postpartum hemorrhage.

The app also allows women to report whether they had a miscarriage or pregnancy loss, including the date and “type of loss,” such as whether the baby was stillborn. “After reporting a miscarriage, you will have the option to both reset your account and, when you’re ready, to start a new pregnancy,” the app says.

“We’re their companion throughout this process and want to … provide them with support throughout their entire journey,” Ovia spokeswoman Sarah Coppersmith said.

Much of this information is viewable only by the worker. But the company can access a vast range of aggregated data about its employees, including their average age, number of children and current trimester; the average time it took them to get pregnant; the percentage who had high-risk pregnancies, conceived after a stretch of infertility, had C-sections or gave birth prematurely; and the new moms’ return-to-work timing.

Companies can also see which articles are most read in Ovia’s apps, offering them a potential road map to their workers’ personal questions or anxieties. The how-to guides touch on virtually every aspect of a woman’s changing body, mood, financial needs and lifestyle in hyper-intimate detail, including filing for disability, treating bodily aches and discharges, and suggestions for sex positions during pregnancy.

“We are crossing into a new frontier of vaginal digitalization,” wrote Natasha Felizi and Joana Varon, who reviewed a group of menstrual-tracking apps for the Brazil-based tech activist group Coding Rights.

Ovia data is viewable by the employer, its insurers and, in the case of Activision Blizzard and other self-insured companies, the third-party administrators that process women’s medical claims.

Ovia says it is compliant with government data-privacy laws such as the Health Insurance Portability and Accountability Act, or HIPAA, which sets rules for sharing medical information. The company also says it removes identifying information from women’s health data in a way that renders it anonymous, and that it requires employers to have a minimum number of enrolled users before they can see the aggregated results.

But health and privacy experts say it’s relatively easy for a bad actor to “re-identify” a person by cross-referencing that information with other data. The trackers’ availability in companies with few pregnant women on staff, they say, could also leave the data vulnerable to abuse. Ovia says its contract prohibits employers from attempting to re-identify employees.

Ezzard, the benefits executive at Activision Blizzard, said offering pregnancy programs such as Ovia helps the company stand out in a competitive industry and keep skilled women coming back to work. The company employs roughly 5,000 artists, developers and other workers in the United States.

“I want them to have a healthy baby because it’s great for our business experience,” Ezzard said. “Rather than having a baby who’s in the neonatal ICU, where she’s not able to focus much on work.”

One of the first things Diana Diller did when Simone was born was report the birth on her Ovia app. (Philip Cheung/For The Washington Post)

Before Ovia, the company’s pregnant employees would field periodic calls from insurance-company nurses who would ask about how they were feeling and counsel them over the phone. Shifting some pregnancy care to an app where the women could give constant check-ins made a huge difference: Nearly 20 women who had been diagnosed as infertile had become pregnant since the company started offering Ovia’s fertility app, Ezzard said.

Roughly 50 “active users” track their pregnancies at any given time, and the average employee records more than 128 health data points a month, Ezzard said. They also open the app about 48 times a month, or more than once a day.

Ezzard said that the company maintains strict controls on who can review the internal aggregated data and that employees’ medical claims are processed at a third-party data warehouse to help protect their privacy. The program, he added, is already paying off: Ovia and the other services in its “well-being platform” saved the company roughly $1,200 per employee in annual medical costs.

Health experts worry that such data-intensive apps could expose women to security or privacy risks. The ovulation-tracking app Glow updated its systems in 2016 after Consumer Reports found that anyone could access a woman’s health data, including whether she’d had an abortion and the last time she’d had sex, as long as they knew her email address. Another Ovia competitor, Flo, was found to be sending data to Facebook on when its users were having their periods or were trying to conceive, according to tests published in February in the Wall Street Journal. Ovia says it does not share or sell data with social media sites.

The company says it does not do paid clinical trials but provides data to researchers, including for a 2017 study that cited Ovia data from more than 6,000 women on how they chose their obstetricians. But even some researchers worry about ways the information might be used.

“As a clinician researcher, I can see the benefit of analyzing large data sets,” said Paula M. Castaño, an obstetrician-gynecologist and associate professor at Columbia University who has studied menstrual-tracking apps. But a lot of the Ovia data given to employers, she said, raises concerns “with their lack of general clinical applicability and focus on variables that affect time out of work and insurance utilization.”

Ovia says its “fertility algorithms,” which analyze a woman’s data and suggest when she would have the best chance of getting pregnant, have helped 5 million women conceive. But the claim is impossible to prove: Research into similar promises from other apps has suggested there were other possible explanations, including the fact that the women were motivated enough to use a period-tracking app in the first place.

The coming years, however, will probably see companies pushing for more pregnancy data to come straight from the source. The Israeli start-up Nuvo advertises a sensor band strapped around a woman’s belly that can send real-time data on fetal heartbeat and uterine activity “across the home, the workplace, the doctor’s office and the hospital.” Nuvo executives said its “remote pregnancy monitoring platform” is undergoing U.S. Food and Drug Administration review.

Diller, the Activision Blizzard employee, said she was never troubled by privacy worries about Ovia. She loved being able to show her friends what size pastry her unborn daughter was and would log her data every night while lying in bed, ticking through her other health apps, including trackers for food, sleep and “mindfulness.”

When she reported the birth in Ovia, the app triggered a burst of virtual confetti and then directed her to download Ovia’s parenting app, where she could track not just her health data, but her newborn daughter’s, too. It was an easy decision. On the app’s home screen, she uploaded the first photo of her newly expanded family.