Why Blanket Rejection Risks Harming the Very People Who Advocate For It

[Image: A large pile of discarded electronic devices on one side, three people around a small campfire in a forest clearing on the other, separated by a rough wooden fence — the tension between modern technology and a return to simpler living.]

Technology has always been morally neutral. A knife can prepare a meal or end a life. Fire can cook food or destroy a village. The tool itself does not choose; the hands that wield it do — or sometimes Mother Nature steps in.

In recent years, however, a growing chorus has declared “Big Tech” the root of nearly every modern ill, from privacy erosion and mental health crises to threats against democracy and human autonomy. Campaigns urging us to “Say No to Big Tech” frame large technology companies and their infrastructure as existential dangers that must be rejected outright.

This perspective, while often rooted in legitimate concerns, overlooks a critical reality: technology is a powerful multiplier of human intent. The deeper problem lies not in the tools themselves, but in how flawed humans design, deploy, and incentivize their use. Blanket rejection or disruption of modern technological systems may feel righteous, but it frequently comes from people whose daily lives depend on the very infrastructure they wish to dismantle. The consequences of such disruption would likely fall hardest on everyday individuals, not distant elites.

Many well-intentioned critics rely daily on cloud computing, global supply chains, precision agriculture, and AI-assisted tools. Amazon Web Services and Microsoft Azure power vast portions of the internet — including banking apps, telehealth platforms, remote work systems, educational resources, and government services.

Modern food production depends on GPS-guided farming equipment, data-driven logistics, cold-chain refrigeration, and advances like genetically modified crops that have helped increase yields and reduce certain pesticide needs in many regions.

Healthcare has been transformed as well: conditions that routinely killed a century ago — heart attacks, many infectious diseases, and ailments requiring surgical intervention — are now survivable in many cases. Premature babies survive at higher rates. Patients in intensive care depend on technology, and on the professionals who use it correctly, to stay alive.

I know this personally. Without the pacemaker keeping his heart beating, my beloved grandfather would never have met me, let alone lived until I was nearly forty, when he succumbed to Alzheimer’s disease.

When advocates call for wholesale rejection or radical de-growth of these systems without viable, scaled alternatives, they risk creating outcomes that would disproportionately harm working- and middle-class people.

A sudden large-scale failure of cloud infrastructure would disrupt essential services: digital payments, banking and financial services, medical record access, supply-chain coordination for food and medicine, mail and package delivery, and remote care for vulnerable populations. Historical precedent shows that abrupt technological reversals rarely produce egalitarian utopias. Instead, they tend to amplify instability, shortages, and inequality.

An EMP-level disruption or coordinated rollback of digital infrastructure would break refrigeration networks, payment systems, hospital equipment, and transportation logistics almost overnight. Modern vehicles, which depend on computerized control systems, would stop working. Even gas pumps fail without electricity and the software that regulates and monitors their operation. The result would not be a peaceful return to simpler times, but widespread shortages, higher prices for basic goods, and healthcare failures that hit the poorest and most dependent first.

This is not to dismiss genuine grievances. Concentrated corporate power, addictive design patterns, privacy erosion, reckless deployment of new technologies, and misaligned incentives all deserve serious scrutiny and accountability. Yet proposing to simply unplug or dismantle the systems that underpin modern life ignores the deep interdependence that has raised living standards for many, even as it has created new problems globally.

A more responsible path forward requires acknowledging trade-offs rather than seeking purity. We can and should demand better governance, clearer incentives, rigorous independent testing, and stronger protections for privacy and safety. We can criticize specific misuses — surveillance capitalism, profit-driven engagement maximization, or hasty AI rollout — without pretending we can un-invent powerful general-purpose technologies. Other actors, including countries with far less transparency, will not voluntarily hit the pause button just because Western companies do.

Responsible innovation demands more than rejecting “Big Tech.” It requires responsible use, responsible testing, and responsible reporting on both the benefits and the risks.

Humans are imperfect stewards of the tools we create. We repeatedly demonstrate the capacity to misuse power through greed, short-term thinking, or ideological blind spots. But we have also shown the ability to adapt, learn from past mistakes, and gradually improve the baseline conditions of life — albeit, in some cases, frustratingly slowly.

The solution to our technological challenges is not romanticizing collapse or issuing blanket condemnations. It’s not broad-brush calls to boycott companies like Google, Amazon, Microsoft, and Apple. It is clear-eyed realism: recognizing that technology amplifies both our best and worst impulses, and committing to the difficult, ongoing work of aligning our systems and incentives toward the former rather than the latter.

We do not try to ban sharp knives from kitchens because some people use them as weapons. We hold the person who wields the knife accountable. The same principle should apply to our most powerful technologies.

It’s worth thinking about.