Anyway Systems Unleashed: Run GPT-120B Locally and Escape the Constraints of Data Center Giants
- Professor Matt Crump

- 1 day ago
- 6 min read
Artificial intelligence is no longer confined to centralized cloud environments or giant data centers. The exponential growth of AI applications in business, healthcare, finance, and research has highlighted the limitations of the traditional model, where colossal data centers dominate both inference and training workloads. Energy-intensive, environmentally taxing, and dependent on scarce hardware components, these centralized infrastructures present multiple challenges for organizations and individuals seeking powerful AI capabilities.
In response to these challenges, researchers at the École Polytechnique Fédérale de Lausanne (EPFL) have pioneered a transformative approach: distributed, local AI computing. Their spin-off, Anyway Systems, introduces software that enables organizations to deploy high-performance AI models locally, without relying on cloud infrastructure. This innovation represents a significant shift in how AI can be accessed, managed, and applied, with profound implications for data privacy, sovereignty, sustainability, and operational efficiency.
The Challenges of Centralized AI Data Centers
Large-scale AI operations today are dominated by major AI and cloud providers such as OpenAI, Google, and Microsoft, which maintain extensive server farms to process AI workloads. While effective, this centralized model has several critical drawbacks:
Environmental Impact: AI data centers consume enormous amounts of electricity and water, are often sited in arid regions, and contribute significantly to carbon emissions and local resource depletion.
Supply Chain Constraints: High-powered GPUs required for training and running AI models are expensive, limited in supply, and increasingly subject to geopolitical pressures. For instance, NVIDIA H100 GPUs, essential for state-of-the-art models, can reach resale prices exceeding USD 90,000 due to scarcity.
Data Privacy and Sovereignty Risks: Sensitive information—including patient records, financial data, and proprietary business insights—must often be transmitted to third-party cloud servers, raising concerns about confidentiality, misuse, and compliance with regional regulations.
Operational Monopolies: The concentration of AI computing power in the hands of a few major companies centralizes control over both AI models and infrastructure, creating barriers to access for smaller enterprises, research labs, and governments.
Professor Rachid Guerraoui, head of EPFL’s Distributed Computing Laboratory (DCL), notes,
“For years, people believed that deploying large language models without massive resources was impossible, and that data privacy, sovereignty, and sustainability were sacrificed in the process. We have now shown that smarter, frugal approaches are achievable.”
Introducing Anyway Systems: Distributed AI Made Practical
Anyway Systems represents a paradigm shift, allowing powerful AI models to run on local networks, connecting multiple standard computers into a cohesive, fault-tolerant AI cluster. The software employs advanced self-stabilization techniques, enabling robust use of commodity hardware while maintaining accuracy comparable to centralized data centers.
Key features of Anyway Systems include:
Local Deployment: AI models, including large language models (LLMs) like GPT-120B, can be downloaded and run locally without transferring data to external servers.
Resource Optimization: The system dynamically allocates workloads across available hardware, ensuring fault tolerance and consistent performance even if machines fail or leave the network.
Plug-and-Play Installation: Setup requires less than 30 minutes for a small network of standard PCs, drastically reducing the technical and financial barriers associated with traditional AI deployment.
Privacy and Sovereignty: All data remains within the organization’s local network, ensuring sensitive information is protected and under direct control.
The scalability of this solution is particularly noteworthy. A network of as few as four standard PCs, each with a single commodity GPU, can run models previously thought to require massive data center racks costing upwards of 100,000 CHF. This approach dramatically lowers the cost of entry for organizations seeking advanced AI capabilities.
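The article does not disclose how Anyway Systems partitions a model across machines, but the underlying idea, splitting a large model's layers into contiguous slices served by different PCs, can be illustrated with a deliberately simplified pipeline-parallel sketch. All names here (`Node`, `partition`, the four `pc` nodes) are hypothetical stand-ins; a real system would ship activations over the network rather than via in-process function calls:

```python
# Toy sketch of pipeline parallelism: a model's layers are partitioned
# across several "nodes" (plain objects standing in for networked PCs).
# Hypothetical illustration only -- not Anyway Systems' actual protocol.

class Node:
    """One commodity PC holding a contiguous slice of the model's layers."""
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers  # list of callables: activation -> activation

    def forward(self, activation):
        for layer in self.layers:
            activation = layer(activation)
        return activation

def partition(layers, num_nodes):
    """Split a layer list into num_nodes near-equal contiguous slices."""
    base, rem = divmod(len(layers), num_nodes)
    slices, start = [], 0
    for i in range(num_nodes):
        size = base + (1 if i < rem else 0)
        slices.append(layers[start:start + size])
        start += size
    return slices

# A stand-in "model": 12 layers that each add 1 to their input.
model_layers = [(lambda v: v + 1) for _ in range(12)]
nodes = [Node(f"pc{i}", s) for i, s in enumerate(partition(model_layers, 4))]

# Inference is a relay: each node's output feeds the next node's input.
x = 0
for node in nodes:
    x = node.forward(x)
print(x)  # 12 -- the same result as running all layers on one machine
```

The point of the sketch is that the partition is transparent to the caller: the relay produces exactly the output a single machine would, which is why distribution costs some latency but no accuracy.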
Performance, Accuracy, and Practical Applications
While distributed systems inherently introduce minor latency due to coordination across multiple nodes, testing indicates that accuracy and reliability are uncompromised. For inference workloads—which account for 80-90% of AI computing demand—the performance of Anyway Systems closely mirrors that of traditional data centers, making it suitable for a wide range of applications.
Some practical advantages include:
Enterprise AI Applications: Companies can process sensitive financial data, customer service inquiries, or internal analytics without exposing information to third-party servers.
Government and NGO Use Cases: Organizations handling classified or sensitive datasets can maintain sovereignty over critical AI assets.
Sustainable AI Deployment: Reduced reliance on massive cloud centers lowers energy consumption, water use, and the environmental footprint of AI.
Professor David Atienza, associate vice-president of research centers at EPFL, emphasizes,
“Anyway Systems optimizes resource usage while ensuring data security and sovereignty, and its scalable architecture could fundamentally reshape the way AI is deployed globally.”
How Distributed AI Challenges Big Tech
The implications of Anyway Systems extend beyond technical efficiency. By enabling local deployment of advanced AI models, it challenges the prevailing centralized infrastructure dominated by major tech firms. Key industry impacts include:
Decentralization of Control: Organizations gain independence from cloud monopolies, controlling both the AI models and the data they process.
Reduction of Operational Monopolies: With distributed computing, even mid-sized enterprises and research labs can access high-end AI without depending on multi-million-dollar data centers.
Enhanced Data Privacy Compliance: Local processing helps organizations comply with regional data protection regulations such as the GDPR in Europe and HIPAA in U.S. healthcare, as well as emerging national AI policies.
This shift is particularly relevant as regulators scrutinize the concentration of AI resources and the potential risks of centralized control over powerful machine intelligence.
Technical Innovations Behind Anyway Systems
The architecture of Anyway Systems draws upon decades of research in distributed computing, fault tolerance, and optimization. Techniques previously applied in blockchain and decentralized systems were adapted to support AI workloads. Notable innovations include:
Self-Stabilization Algorithms: Automatically recover from node failures or changes in network topology, minimizing downtime.
Dynamic Workload Allocation: Balances tasks across multiple machines to optimize GPU usage and prevent bottlenecks.
Scalable Model Deployment: Supports hundreds of billions of parameters, enabling organizations to run large models like GPT-120B across modest hardware setups.
By leveraging these innovations, EPFL researchers demonstrate that high-performance AI need not depend on the centralized, energy-intensive cloud model.
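To make the self-stabilization and dynamic-allocation ideas concrete, here is a toy sketch of the core behavior: when a node fails or leaves the network, the remaining nodes recompute the shard assignment so every piece of the model is still served. The round-robin `assign` function and node names are hypothetical simplifications; real self-stabilizing protocols also handle network partitions, rejoining nodes, and in-flight requests:

```python
# Toy sketch of self-stabilizing work reassignment. Illustrative only --
# not Anyway Systems' actual algorithm.

def assign(shards, nodes):
    """Round-robin shards over the currently live nodes."""
    assignment = {n: [] for n in nodes}
    for i, shard in enumerate(shards):
        assignment[nodes[i % len(nodes)]].append(shard)
    return assignment

shards = list(range(8))                # 8 model shards to serve
live = ["pc0", "pc1", "pc2", "pc3"]

plan = assign(shards, live)            # 2 shards per node
live.remove("pc2")                     # pc2 fails or leaves the network
plan = assign(shards, live)            # recompute over the 3 survivors

served = sorted(s for held in plan.values() for s in held)
print(served)  # [0, 1, 2, 3, 4, 5, 6, 7] -- no shard lost to the failure
```

The design choice to rederive the whole assignment from the current membership, rather than patch the old one, is what makes the scheme self-stabilizing: any sequence of failures converges to a valid state.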
Comparing Anyway Systems with Existing Local AI Solutions
Several approaches exist for running AI locally, but they are typically constrained to a single machine or small-scale deployment:
| Solution | Deployment | Limitations | Strengths |
| --- | --- | --- | --- |
| Llama/msty.ai | Single machine | Single point of failure, costly for large models | Lightweight local execution |
| Google AI Edge | Mobile devices | Constrained by device capacity, small-scale models | Portability, mobile-specific inference |
| Anyway Systems | Local network of commodity PCs | Minor latency trade-off | Scalable, fault-tolerant, large-model deployment, secure, cost-effective |
As the table illustrates, Anyway Systems uniquely combines scalability, reliability, and accessibility, while ensuring privacy and reducing operational costs.
Implications for Knowledge Work and AI Training
Distributed AI also has broader implications for the labor market. As AI capabilities become more accessible locally, organizations may shift from outsourcing AI training to leveraging internal experts who can contextualize models with domain-specific knowledge. This could lead to:
Increased demand for skilled professionals to curate, supervise, and refine AI models.
Greater emphasis on knowledge transfer and internal data governance, as proprietary data becomes a primary asset for AI training.
Reduced reliance on third-party data labeling firms and external cloud vendors, reshaping the economics of AI adoption.
This trend aligns with the emerging vision that human expertise and AI are complementary, with distributed computing democratizing access while enabling organizations to harness internal intellectual capital.
Sustainability and Cost Efficiency
AI inference accounts for the majority of computing power in AI operations, driving energy-intensive cloud infrastructure. Distributed AI models like Anyway Systems provide a cost-effective, energy-efficient alternative:
Hardware Savings: Runs high-performance AI on standard commodity GPUs (~2,300 CHF each) instead of specialized racks (~100,000 CHF).
Energy Efficiency: Local deployment reduces the electricity footprint compared to massive centralized data centers.
Scalable Growth: Organizations can incrementally add machines to scale AI capabilities without significant capital expenditure.
This approach addresses not only economic constraints but also aligns with growing environmental, social, and governance (ESG) priorities across industries.
The Road Ahead: Democratization of AI
Anyway Systems represents a pivotal moment in AI evolution. By shifting control from centralized cloud providers to local networks, the technology promotes:
Decentralized AI innovation in research, enterprise, and government sectors.
Enhanced AI sovereignty for organizations and countries, ensuring critical AI resources are under local control.
Sustainable AI growth, reducing energy consumption, e-waste, and reliance on rare-earth minerals.
Professor Guerraoui concludes,
“History shows that computing continuously evolves toward local empowerment. With Anyway Systems, organizations can master all the pieces, contextualize AI for their needs, and ensure control without relying on Big Tech monopolies.”
Conclusion
EPFL’s Anyway Systems is not merely a technical innovation; it is a blueprint for a new era of distributed AI, challenging centralized data center dominance while enabling privacy, sovereignty, and sustainable computing. By allowing large language models to run efficiently on local networks, the system empowers organizations of all sizes to harness AI responsibly, cost-effectively, and securely.
As AI continues to shape global industries, solutions like Anyway Systems highlight the potential of local, distributed, and sustainable computing, bridging the gap between advanced AI capabilities and practical deployment. The democratization of AI is no longer a distant vision—it is an emerging reality that promises to redefine who controls, who benefits, and who innovates in the AI-driven economy.
Further Reading / External References
“Powerful AI Reasoning Models Can Now Run Without Giant Data Centers,” Malcolm Azania, New Atlas, Jan 2, 2026. https://newatlas.com/ai-humanoids/ai-data-center-alternative-anyway-system/
“EPFL Spin-off Anyway Systems Challenges the Need for Large Data Centers in AI,” GGBA Swiss, Dec 15, 2025. https://ggba.swiss/en/epfl-spin-off-anyway-systems-challenges-the-need-for-large-data-centers-in-ai/
“New Software Could Reduce Dependency on Big Data Centers for AI,” TechXplore, Dec 2025. https://techxplore.com/news/2025-12-software-big-centers-ai.html