Why We Need AI Regulation Now: A Founder’s Perspective from the Frontlines of Creative Technology
It’s June 2025, and a tax bill in the United States just made global headlines—not because of its tax policy, but because of what it quietly locks away. Deep within the sprawling “One Big Beautiful Bill” lies a clause that effectively bars state-level regulation of artificial intelligence for the next ten years. Ten years—an eternity in the AI timeline. Ten years without accountability, transparency, or public oversight.
To those of us building AI at the frontlines—particularly in media, entertainment, and across emerging markets—this isn’t just legislative negligence. It’s the regulatory equivalent of watching a hurricane form, then banning weather forecasts.
At Wubble.ai, where we empower businesses to create royalty-free, customised music in seconds at a fraction of current costs, I see firsthand both the astonishing potential and the perilous absence of boundaries. This post is not a fear-mongering rant. It is a founder’s reflection, a regional perspective, and a global call to act before AI repeats the mistakes we made with social media.
AI Is Not Neutral—And That’s Not a Problem Until We Pretend It Is
Let’s start with a truth that technologists like to gloss over: AI is not neutral. It is engineered by humans, trained on human data, and deployed within human power structures. Algorithms encode assumptions. Models amplify bias. And when applied in creative industries, they shape cultural narratives—often invisibly.
In our work with Asian creators—from indie filmmakers in Jakarta to K-pop producers in Seoul—AI tools are making it faster to create, edit, repurpose, and distribute content. But if the models driving these tools are trained overwhelmingly on Western data, they misfire: tonal subtleties are lost, accents are mangled, and cultural motifs get flattened into stereotypes.
The question is not “Can AI create?” It’s “Whose lens does it create through?”
Without regulation, we risk not just bias but homogenisation—where the stories of billions are reinterpreted through the default preferences of a handful of companies in California.
The Environmental Cost of Scalable Creativity
The myth of the “cloud” obscures a dirty truth: AI is power-hungry. Training large models or rendering AI-generated video at scale draws tremendous energy. If current trajectories hold, the emissions from AI data centers in the U.S. alone will cross 1 billion tons of CO₂ by 2035, more than the entire nation of Japan emits in a year.
This hits especially hard in Asia. Many of our coastal cities—Bangkok, Mumbai, Manila—face existential climate threats. The idea that carbon emitted by Western AI infrastructure can be pushed into a shared atmosphere, while Asia absorbs the fallout in exchange for unchecked digital growth, is not just unjust. It’s a repeat of history.
Worse, the new U.S. bill effectively blocks states from implementing any energy or climate constraints on AI infrastructure. The message is clear: scale at all costs, and let others bear the consequences.
The Social Media Playbook: A Cautionary Tale
To understand why AI regulation is urgent, look no further than the last tech platform we failed to regulate in time: social media.
In the early 2010s, platforms like Facebook, Twitter, and YouTube promised connection, democratization, and creativity. What followed was a decade of polarization, misinformation, algorithmic extremism, and attention engineering—fuelled by opaque systems optimized for engagement, not truth or wellbeing.
By the time regulators woke up, the damage was systemic:
Elections were influenced by foreign botnets.
Mental health crises, especially among teenagers, skyrocketed.
Journalistic institutions were hollowed out.
Communities fractured into algorithmically sorted echo chambers.
And yet, even now, we are playing catch-up—debating content moderation, data privacy, and misinformation laws well after the platforms have matured into global utilities.
AI is following the same trajectory—except it's faster, deeper, and more opaque.
Where social media shaped how we see each other, AI will shape how we understand reality itself. It will write our news, mimic our voices, generate our faces, and even simulate our emotions.
If we let AI evolve under the same laissez-faire dogma that defined early social media, the consequences could make today’s content wars look like a warm-up act.
The Hollywood Paradox and the Asian Echo
In the U.S., the 2023 writers' strike forced tech and media executives to confront the role of AI in screenwriting and storytelling. That conversation is only beginning in Asia—but it’s arriving fast.
Studios from Mumbai to Manila are experimenting with AI tools for lip-syncing, synthetic voice dubbing, and automated editing. While this unlocks productivity, it also raises red flags: Who owns the creative rights to AI-generated edits? What happens when deepfake trailers go viral with fabricated castings? How does AI decide what “sells” in cross-border content?
Here’s the kicker: many of the AI systems that make these decisions aren’t built in Asia, yet they dictate cultural outcomes across our screens.
Regulation isn’t just a content issue. It’s a question of cultural sovereignty.
What Thoughtful Regulation Could Look Like
Regulation doesn’t mean red tape. It means writing the rules before chaos writes them for us.
Here’s what I would advocate for:
1. Disclosure Requirements
If an AI system is used to generate, recommend, or alter content, platforms should disclose that to users. Creators deserve to know if their content is being evaluated or remixed by an algorithm, and audiences should know when they’re engaging with AI-generated media.
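What might that look like in practice? Purely as a sketch, with hypothetical field names rather than any existing standard, a platform could attach a machine-readable provenance label to every asset it serves:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIDisclosure:
    """Hypothetical provenance label attached to a media asset."""
    ai_generated: bool   # was any part of this asset machine-generated?
    model_family: str    # e.g. "text-to-music", "voice-clone"
    human_edited: bool   # did a person review or modify the output?
    data_notice: str     # plain-language note on training-data provenance

# A label a music platform might serve alongside a generated track
label = AIDisclosure(
    ai_generated=True,
    model_family="text-to-music",
    human_edited=True,
    data_notice="Trained on licensed and public-domain recordings.",
)
print(json.dumps(asdict(label), indent=2))
```

The schema itself matters less than the principle: the label travels with the asset, so audiences and downstream platforms can honour it.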
2. Cultural Bias Audits
We need regional audits of major AI models for cultural bias—especially in languages, accents, and emotional cues. An AI voice trained on American English shouldn’t be making editorial decisions on Tamil film trailers.
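An audit like this does not require exotic tooling. Here is a minimal sketch, assuming you already hold a human-judged evaluation set labelled by locale (the data and the flagging threshold below are illustrative):

```python
from collections import defaultdict

# Each record: (locale, model_was_correct) from a human-judged evaluation set
results = [
    ("en-US", True), ("en-US", True), ("en-US", True), ("en-US", False),
    ("ta-IN", False), ("ta-IN", False), ("ta-IN", True),
    ("ko-KR", True), ("ko-KR", False), ("ko-KR", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for locale, correct in results:
    totals[locale] += 1
    if not correct:
        errors[locale] += 1

# Compare every locale against the best-served one and flag large gaps
best_rate = min(errors[l] / totals[l] for l in totals)
for locale in sorted(totals):
    rate = errors[locale] / totals[locale]
    flag = "  <-- flag for audit" if rate > 2 * best_rate else ""
    print(f"{locale}: {rate:.0%} error rate{flag}")
```

Regulation would decide who runs such audits and what follows when a locale is flagged; the measurement itself is straightforward.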
3. Carbon Transparency
Just like nutrition labels on food, AI platforms should publish the carbon cost of their operations. If a 4K video transformation uses X kilowatt-hours, let creators and users see that. Climate is a creative issue now.
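The arithmetic behind such a label is simple enough to sketch. The figures below are illustrative, not measured; real grid intensities vary widely by country and by hour:

```python
# Energy drawn by a job (kWh) times grid carbon intensity (gCO2e/kWh)
# gives the number to print on the label. All figures are illustrative.
GRID_INTENSITY_G_PER_KWH = {
    "hydro-heavy": 50,   # illustrative low-carbon grid
    "mixed":       400,  # illustrative average grid
    "coal-heavy":  800,  # illustrative high-carbon grid
}

def carbon_label(job_kwh: float, grid: str) -> str:
    grams = job_kwh * GRID_INTENSITY_G_PER_KWH[grid]
    return f"{grams / 1000:.2f} kg CO2e ({job_kwh} kWh on a {grid} grid)"

# A hypothetical 4K render drawing 12 kWh
print(carbon_label(12, "mixed"))  # -> 4.80 kg CO2e (12 kWh on a mixed grid)
```

The hard part is not the maths but the honesty: platforms would need to meter energy per job and disclose it, the way food producers disclose calories.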
4. Data Sovereignty
Countries—and creators—should have the right to control how their data is used to train models. No more scraping Asian fan fiction forums or Korean subtitles without consent.
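Enforcement could start at the point of dataset assembly. A minimal sketch, assuming each record carries explicit consent metadata (the schema is hypothetical):

```python
# Hypothetical corpus records: content plus provenance metadata
corpus = [
    {"text": "...", "origin": "kr-subtitles", "consent": "granted"},
    {"text": "...", "origin": "fanfic-forum", "consent": "denied"},
    {"text": "...", "origin": "open-dataset", "consent": "granted"},
]

def consented(record: dict) -> bool:
    # Default to exclusion: anything without an explicit opt-in stays out
    return record.get("consent") == "granted"

training_set = [r for r in corpus if consented(r)]
print(f"Kept {len(training_set)} of {len(corpus)} records after consent filtering")
```

The rule worth legislating is the default: opt-in, not opt-out.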
5. Algorithmic Pluralism
We need more publicly funded or open-source models that reflect diverse values. The Singapore government has already made strides in this direction with the MERaLiON and SEA-LION programs, but it’s not enough. Vietnamese creators should not have to rely on AI built in San Francisco with American assumptions baked in.
Asia’s Philosophical Contribution to AI Ethics
The AI debate today is dominated by Anglo-American frameworks—risk mitigation, libertarian freedoms, tech exceptionalism. But Asia has rich intellectual traditions that can reshape how we think about intelligence, agency, and responsibility.
In India, the concept of Dharma emphasises duty, balance, and context—offering a lens to think about AI’s social role beyond profit. In Japan, Shinto traditions embrace non-human entities with spiritual agency—a worldview that could inspire more respectful, co-creative AI relationships. In Southeast Asia, the interdependence of all beings points toward collective outcomes rather than individual dominance.
If we center these traditions, we might arrive at AI policies rooted not in control, but in harmony and responsibility.
The Founder’s Dilemma: Scale vs Stewardship
As a founder, I understand the tension. Regulation can feel like drag. It slows shipping cycles, introduces compliance overhead, and sometimes stifles experiments. But unchecked freedom is not innovation—it’s risk deferred.
We’ve seen what happens when we let scale outpace ethics. From social media manipulation to gig economy exploitation, the tech sector has a poor track record of self-regulation.
With AI, the stakes are not just economic; they are existential.
We have the rare chance to design governance before the damage is irreversible. Let’s not squander it on short-term optimization.
A Creative Reset for a Shared Future
At Wubble, we work with creators who are not afraid of AI. They use it to create music for social media content in Australia, to build engaging educational material for Vietnamese schools, and to animate folklore in new formats, among other things.
These creators see AI as a companion—not a competitor. But they deserve assurance that the tools they use aren’t exploiting their culture, compromising their privacy, or trashing the planet.
That’s why regulation matters—not to constrain creativity, but to protect its future.
Closing Thoughts: Write the Rules Before the Story Writes You
The “One Big Beautiful Bill” is more than an American policy. It’s a global signal of how power sees technology: a thing to deregulate, privatize, and scale—without guardrails.
If Asia wants a different future, we must speak up now. Founders must engage with policymakers. Creators must demand transparency. And governments must stop waiting for the West to lead.
We don’t need AI regulation to slow innovation. We need it to guide innovation toward justice, sustainability, and cultural flourishing.
The next decade will define not just what AI can do, but who we become as a result.
Let’s choose wisely.