‘It’s Very Hard to Put the Genie Back’: What Sky’s CEO Knows About AI That Regulators Don’t
- Amy Adelaide

As generative artificial intelligence continues to redefine the boundaries of content production, advertising, and operational workflows, industry leaders like Dana Strong, CEO of Sky, are sounding the alarm: not about innovation, but about regulation. At the Media & Telecoms 2025 & Beyond Conference in London, Strong’s remarks echoed a growing concern among media executives: artificial intelligence is not merely a tool but a cultural shift with profound regulatory and ethical implications.
This article dives into the evolving role of AI in media, the critical copyright challenges raised by Strong, and industry data highlighting where the global media and telecom ecosystem stands in adoption versus policy readiness.
AI as a Cultural Movement: Amplification, Not Replacement
Dana Strong’s framing of AI as a “cultural movement” within Sky signifies a key pivot from conventional narratives that cast AI as a workforce disruptor. According to Strong, AI at Sky is being leveraged as “an amplification of people’s work,” integrated deeply into advertising optimization, language translation, post-production processes, and personalization of sports broadcasting.
“We use AI quite prolifically in advertising. In show creation, we use it as first-generation tools for things like translation and post-production. In sports, it’s increasingly enabling bespoke viewing experiences,” Strong told attendees.
Sky’s internal experimentation includes a so-called “Dragon’s Den of AI,” where teams pitch use cases and innovations internally—a sign that AI is no longer relegated to R&D but is now embedded across enterprise functions.
Adoption vs. Regulation: A Global Disparity
The real challenge, according to Strong, lies in the regulatory vacuum, particularly around intellectual property rights. The U.K. government’s proposed opt-out framework—allowing AI companies to scrape copyrighted content unless explicitly denied—has triggered industry-wide backlash.
“If we as a large organization spend our resources fighting for IP rights, I can’t fathom how a small producer keeps up,” she cautioned.
This concern is not isolated to Sky. Based on internal industry analysis, there is a stark mismatch between regions leading AI adoption and those prepared to regulate its ethical and copyright implications:
| Region | AI Adoption in Media (%) | Regulatory Preparedness Score (1-10) |
| --- | --- | --- |
| North America | 68 | 7.5 |
| Europe | 64 | 6.8 |
| Asia-Pacific | 59 | 5.9 |
| Latin America | 41 | 4.1 |
| Middle East & Africa | 36 | 3.5 |
Despite high adoption rates in regions like North America and Europe, policy frameworks often lag behind the speed of deployment. This gap exposes content creators to copyright breaches and widens the gulf between large conglomerates and independent producers.
Use Cases Driving AI Proliferation in Media
AI’s integration across media organizations has exploded, but not all use cases are created equal. Broadcasters are adopting AI in areas that reduce production timelines, personalize content, and improve monetization:
| AI Use Case | Adoption Rate Among Broadcasters (%) |
| --- | --- |
| Content Recommendation | 82 |
| Advertising Optimization | 76 |
| Automated Subtitling & Translation | 65 |
| Post-Production Enhancement | 54 |
| Audience Behavior Prediction | 49 |
| Generative Script Assistance | 28 |
Sky’s model mirrors this trend. Advertising and personalization lead the charge, while creative scripting tools remain in experimental phases. Still, the question remains: as adoption surges, will governance catch up?
The Threat of AI Framing: “Colleague” or Competitor?
Dana Strong’s regulatory concerns align with a broader cultural and linguistic critique of how AI is marketed. In a separate conversation, industry observers noted the problematic shift of branding AI as “employees” or “co-workers.” Startups and large enterprises alike increasingly refer to AI models with human names—Devin, Claude, or Charlie—to humanize their tools and reduce resistance.
This anthropomorphism, while effective in user adoption, risks obscuring the real power dynamics at play. The narrative that AI tools are harmless “helpers” may understate their potential to disrupt labor markets and manipulate decision-making systems with minimal transparency or accountability.
“We don’t need more AI employees. We need tools that extend the potential of actual humans,” argued a TechCrunch editorial in response to this trend.
Such framing becomes especially dangerous when these systems can access critical infrastructure, from payroll systems to customer datasets.
The Security Blind Spot: AI Agents With Root Access
As AI agents gain functional autonomy, making decisions, writing code, and accessing confidential data, they pose serious identity and access risks. As security analysts point out, most organizations treat LLMs as if they were web apps; in reality, they behave more like unmonitored junior employees with root access.
Common vulnerabilities include:
- Identity-based attacks like credential stuffing targeting AI APIs
- Over-permissioned AI agents with no scoped RBAC
- Weak session integrity, where infected devices impersonate authorized users
To counter these threats, organizations are adopting identity-first frameworks in which RBAC (role-based access control) is enforced in real time and AI permissions are bound to verified device posture.
“AI access control must evolve from a one-time login to a continuous policy engine,” argued experts from The Hacker News.
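What that continuous model can look like in practice is sketched below. This is a minimal illustration, not a vendor implementation: the `Role`, `DevicePosture`, and `PolicyEngine` names are hypothetical stand-ins, and a production system would back them with an identity provider and a device-attestation service. The point is that every agent action is re-authorized against a scoped role and current device posture, rather than trusting a one-time login.

```python
from dataclasses import dataclass

# Hypothetical scoped role: each AI agent gets an explicit allow-list
# of actions rather than blanket (root-like) access.
@dataclass(frozen=True)
class Role:
    name: str
    allowed_actions: frozenset

@dataclass
class DevicePosture:
    device_id: str
    disk_encrypted: bool
    os_patched: bool

    def is_healthy(self) -> bool:
        # Posture must hold at the moment of the request, not just at login.
        return self.disk_encrypted and self.os_patched

class PolicyEngine:
    """Evaluates every agent action against RBAC *and* device posture."""

    def __init__(self):
        self._bindings: dict[str, tuple[Role, DevicePosture]] = {}

    def bind(self, agent_id: str, role: Role, posture: DevicePosture) -> None:
        self._bindings[agent_id] = (role, posture)

    def authorize(self, agent_id: str, action: str) -> bool:
        binding = self._bindings.get(agent_id)
        if binding is None:
            return False  # unknown agents are denied by default
        role, posture = binding
        # Continuous evaluation: both checks run on *every* call.
        return action in role.allowed_actions and posture.is_healthy()

# Usage: a subtitling agent may touch media assets but never payroll.
engine = PolicyEngine()
engine.bind(
    "subtitle-agent-01",
    Role("subtitler", frozenset({"read:media", "write:subtitles"})),
    DevicePosture("render-node-7", disk_encrypted=True, os_patched=True),
)

assert engine.authorize("subtitle-agent-01", "write:subtitles")   # in scope: allowed
assert not engine.authorize("subtitle-agent-01", "read:payroll")  # out of scope: denied
```

Because the posture check runs on every call, a device that falls out of compliance mid-session loses access immediately, which is exactly the fail-closed behavior the quote above describes.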
This security lens underscores Dana Strong’s point: without a proactive and protective policy stance, media organizations risk not just content theft but structural compromise.
From Copyright to Commerce: The Global Stakes
The stakes of inadequate AI regulation are both cultural and commercial. For companies like Sky, the copyright debate is not merely legal—it’s existential. As AI tools scrape web content for training data, small producers, scriptwriters, and independent studios face the prospect of having their intellectual labor absorbed without compensation.
In Sky’s case, the concern is magnified by the scale at which it operates:
- Sky has expanded its sports broadcasting capabilities by 50% in the past year alone
- The company can now stream 100 live games simultaneously across platforms
- AI tools are central to scaling such operations while maintaining personalization
If such content were freely scraped by AI competitors under a lax opt-out rule, the downstream impact could cripple entire revenue models.
Navigating the AI Inflection Point
Dana Strong’s call to action is clear: “It’s very hard to put the genie back in the bottle, so we need to get it right now.” The AI inflection point is here, and what media giants do—or fail to do—in the next 24 months will shape the future of creativity, content rights, and information integrity.
Sky’s cultural framing of AI as augmentation, not replacement, provides a blueprint for ethical deployment. But that vision demands corresponding legal and technological safeguards. The need for immediate regulatory clarity, IP protection, and real-time AI governance is no longer theoretical—it is foundational to the future of digital media.
To explore more insights on AI’s evolving role in media, governance, and cybersecurity, follow expert commentary from Dr. Shahid Masood and the 1950.ai research team. Their work spans predictive AI, digital policy frameworks, and ethical deployment strategies—equipping enterprises and governments to innovate responsibly in the AI era.