This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.
Facebook’s changes under the hood are a power grab.
My colleague Mike Isaac wrote about Facebook’s latest step to make its apps — its main social network, Instagram and the Messenger chat app — blend together more seamlessly behind the scenes. Facebook’s products would stay separate, but over time they would interact in ways they hadn’t before.
For example, Facebook is starting to let people use Instagram to send a photo to someone using Messenger, and vice versa. In the future, you might be able to text a friend who uses only WhatsApp, which Facebook also owns, from your Messenger account.
There might be — possibly? — handy things as a result of stitching these apps together, particularly for businesses. But the more Facebook operates as a unified empire and not a constellation of apps, the harder it becomes for a government to break up Facebook and the tougher it might become for rivals to chip away at the company’s dominance.
What’s happening now shows the difficulty of checking the power of superstar companies like Facebook, Google and Amazon. By the time the impact of small changes they make becomes obvious, it might be too late to do anything about it.
At Facebook, the more the company knits together its family of apps, the more difficult it becomes to untangle the company’s takeovers of Instagram and WhatsApp. Some academics and others have said Facebook should give up those apps because they saw those acquisitions as illegal tactics to insulate the company from competition.
The other risk is that a more unified Facebook makes the company harder to unseat. Could any new messaging app succeed if Facebook funnels its 3 billion users seamlessly into Messenger, and convinces people not to bother going anywhere new?
This is not a theoretical risk. There is a history of technology companies tying together their products or customer information to make themselves harder to dislodge. Sometimes it works.
Google over the years has stitched together what once were separate parts of its internet advertising business into a largely unified system that makes it difficult for anyone to buy or sell ads online without going through Google. A generation ago, Microsoft got into hot water in part for trying to expand its dominance by linking its new internet browser to Windows. (That didn’t work, largely because governments and courts said no to this practice.)
Facebook knitting together its apps is technically different from what Google and Microsoft did, but the practical effect is largely the same. Both Google and Microsoft said — as Facebook is saying now — that combining their products was useful to customers. Maybe. It definitely helped expand the power of those companies.
(Side note: Is it actually useful to message someone on Instagram from Messenger or whatever? People tend to use Facebook’s apps in different ways.)
One change from tech history is that people are now aware of the risks of companies uniting their products. As soon as Mike first wrote about Facebook’s app integration plan in early 2019, some lawmakers and regulators started to ask whether it was a ploy to insulate Facebook.
The question is what to do about the risk that Facebook is slowly entrenching itself. Regulators could say no to Facebook binding its apps together, but Facebook might be betting that lawmakers and regulators move more slowly than it does. That cynical bet is probably a good one.
Tech can’t fix everything. Sometimes it makes things worse.
I encourage you to read this article from Reveal, a nonprofit investigative news organization, about high injury rates in Amazon warehouses, and how Amazon’s public defense of its worker safety record was sometimes contradicted by company documents and private management discussions.
One of the glaring and disturbing conclusions I draw from this investigation is that technology cannot paper over flawed systems built by humans. In fact, sometimes technology makes them worse.
Among Reveal’s findings was that at Amazon package warehouses that used more robots and other automation — technology that Amazon said was intended to make work safer and more efficient — rates of serious on-the-job injuries were significantly higher than they were in traditional warehouses.
Reveal’s reporting found that this happened because the company used robotic warehouses to raise productivity quotas so high that workers cut corners, repeated the same motions and did other things that led to more injuries. The article said that none of the dozens of Amazon safety initiatives Reveal reviewed suggested slowing production quotas to try to reduce injuries.
Amazon didn’t respond to Reveal’s questions about the company’s injury data, but told the news organization that it had made significant investments in worker health and safety.
This report added to my concerns that we too often have misguided hopes for automation and other kinds of technology to solve complex problems. Too many Americans lack internet? Just wait for new wireless technology to magically fix it. Cities are clogged with cars? Wait for robot-driven cars to magically fix them. Nope and nope.
That’s not to say that technology can never help solve problems, but it’s not a magic wand. If humans set unrealistic expectations to move merchandise fast, then those same humans might use technology to absolve themselves of responsibility for fixing the problem.
Before we go …
GAH, THE INTERNET! Well, the U.S. presidential debate was pretty darn chaotic, and my colleagues have explanations about some of the misleading information that went wild online about it, including false rumors about Joe Biden being fed questions in advance and the glee of a far-right group that has endorsed violence at being mentioned by President Trump.
The software is watching you: Students spoke to my Times colleagues about what it’s like to use software that is intended to catch cheating in online exams by tracking people’s eye movements through a webcam and other steps. Spoiler alert: These students didn’t love it.
Ah, the innocent days when the internet was for judging people by their looks: Mashable makes a compelling argument that HOTorNOT, one of the first internet sites that went viral and let people rate the attractiveness of strangers, became a blueprint for internet activity in the 20 years since it started — and not only in a bad way.
Hugs to this