In a recent segment on The Daily Show titled “The False Promises of AI,” Jon Stewart runs through clips of all the big tech players (Meta, Google, OpenAI, and Microsoft) in top saving-the-world form. “AI is the most profound technology humanity is working on,” one intones gravely. “More profound than fire or electricity.” “Yeah!” shouts Stewart. “Suck a [bleep], fire!”

Stewart is great at lampooning this kind of tone-deaf grandiosity, but he also tries to make a more significant point: not just that AI won’t save the world, but that, based on the historical precedent of “the last 300 years,” it will actively make things worse for regular people.

The segment reflects the rhetorical fog that has formed around AI. On one level, the existential stakes are themselves marketing constructs. One effective way to convince people you have a compelling, world-changing technology is to issue continuous warnings about how seriously you take its potential for destruction and to set up board after board for its ethical application. Ethics have become the new sustainability: a vague, catch-all answer to one conception of catastrophe, in this case not the destruction of the earth but the replacement of all workers with robots.

Why Making Grounded, Incremental Investments in AI Makes Sense

Stewart points out what every tech CEO should recognize: you can’t ask people to take a trust fall on AI. But Stewart’s one-roast-fits-all treatment of tech CEOs isn’t totally fair, either, and it’s intellectually dishonest to suggest there’s nothing in between utopia and apocalypse.

We believe in making grounded, incremental investments in AI. These investments are transformative but not destructive. The solutions we build at Frame AI are designed to enhance tools and processes rather than replace them.

Ultimately, the ethical question is more abstract than speculating about the likelihood of a single worst-case scenario like “all humans replaced by robots.” When you have a new thing that can be used for good or ill, what you have is a tool. The most effective safeguard is to keep that tool as broadly accessible as possible, in keeping with the open source movement from which so much foundational work in AI emerged.

Open Source and the Future of AI

For those outside the tech community, “open source” may be poorly understood. Its roots date back to the early days of computing in the 1950s and 1960s, when researchers and developers at academic institutions and private companies shared software and hardware plans freely. The formal movement began in the 1980s, spearheaded by initiatives like Richard Stallman’s GNU Project and, later, the formation of the Free Software Foundation, in response to the growing trend of proprietary software. The ethos was software freedom: the ability to use, study, modify, and redistribute software without restriction, and a call to return to the collaboration that characterized computing’s earliest days.

It’s easy to take potshots at condescending PR moves (and few do it better than Stewart), but there are meaningful distinctions to be made between AI strategies. Just this week, for example, Meta announced it is opening up the operating system that powers its Meta Quest devices to third-party hardware makers.

Toward the end of the segment, Stewart humorously protests that AI models have hoovered up the entire sum of human experience and achievement over thousands of years, only for us to hand them the reins and become their mere prompt engineers. It’s a funny image, but it speaks to a real fear born of a question lots of people are asking themselves: how will life change now?

Embracing open source, as we do at Frame AI, decentralizes power and fosters an environment where innovation is transparent and collaborative. This collective stewardship helps safeguard against the kind of bleak futures Stewart jests about. Getting there means envisioning, and building toward, a future in which AI evolves as a tool of empowerment rather than a means of displacement.