
From MNIST to AudioMNIST: WISE Delivers Near-Thermodynamic Limit AI Inference

As artificial intelligence (AI) becomes increasingly central to industries ranging from autonomous transportation to smart cities, the demand for computationally efficient AI at the edge has surged. Edge devices—such as drones, cameras, sensors, and IoT nodes—are often resource-constrained, lacking the memory and processing capabilities of cloud servers or high-performance GPUs. Yet, real-time, intelligent decision-making on these devices is critical for applications like traffic monitoring, disaster response, and industrial automation. Traditional solutions, either storing large AI models locally or offloading computation to the cloud, encounter significant challenges related to energy consumption, latency, and data privacy.

Recent research led by Duke University and MIT introduces a transformative approach: Wireless Smart Edge (WISE) networks, an in-physics computing paradigm that leverages radio-frequency (RF) waves to perform energy-efficient machine learning directly on edge devices. This article explores WISE’s architecture, experimental results, implications for energy-efficient AI, and its potential to reshape the future of distributed intelligence.

The Edge AI Challenge: Memory, Energy, and Latency

Edge computing is defined by localized data processing, bringing computation closer to the data source rather than relying on centralized cloud servers. While AI models continue to scale to billions of parameters, running these models on miniature devices poses fundamental constraints:

Memory Limitations: Storing full AI models locally consumes extensive memory, often exceeding the physical capacity of edge devices.

Energy Consumption: Digital processing of large models drains battery life, limiting operational time for drones, sensors, and portable devices.

Latency and Security Concerns: Offloading computation to cloud servers reduces device constraints but introduces network latency, higher energy costs from continuous data transfer, and potential privacy vulnerabilities.

Dr. Tingjun Chen of Duke University highlights, “Devices no longer just collect data—they must understand it in real time. Traditional architectures struggle with the memory-energy trade-offs at the edge.”

WISE: Wireless In-Physics Computing Architecture

The WISE framework proposes a fundamentally different approach, combining wireless communication and analog computation to bypass traditional energy bottlenecks. Its core innovations include:

Disaggregated Model Access
Instead of storing full models locally, WISE broadcasts model weights over RF signals from a central radio to multiple edge devices simultaneously. This enables disaggregated deployment, allowing each device to perform inference without local model storage.

In-Physics Computation
WISE leverages the physics of RF waves to perform matrix-vector multiplications (MVMs) and inner-product (IP) calculations in the analog domain. A passive frequency mixer in each edge device multiplies the incoming RF weight signal with a locally generated signal carrying the device’s own data, so the multiply-accumulate operations occur directly in the signal domain (a simplified numerical sketch of this principle follows below).

Energy-Efficient Analog Processing
By performing most of the computationally intensive operations at RF, WISE significantly reduces the need for high-power digital processing. Each edge client requires minimal active hardware: an analog-to-digital converter (ADC) and lightweight digital signal processing for decoding.
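
The core primitive behind in-physics computation can be illustrated numerically. The sketch below is not the authors’ waveform design; under simplified assumptions (ideal mixing, no channel, no noise), it only shows how multiplying two multi-tone waveforms and keeping the DC term of the product yields the inner product of the two encoded vectors, which is the operation a passive mixer performs on the weight and input signals.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64        # length of the weight/input vectors (one subcarrier per entry)
T = 4096      # time samples per symbol (one full period of every subcarrier)

w = rng.normal(size=N) + 1j * rng.normal(size=N)   # one row of the weight matrix
x = rng.normal(size=N) + 1j * rng.normal(size=N)   # the client's local input vector

# Encode each vector onto N complex subcarriers (indices 1..N, avoiding DC).
t = np.arange(T) / T
carriers = np.exp(2j * np.pi * np.outer(np.arange(1, N + 1), t))   # shape (N, T)
w_wave = carriers.T @ w              # weight waveform, as broadcast by the central radio
x_wave = carriers.conj().T @ x       # input waveform, generated locally at the client

# A passive mixer multiplies the two waveforms sample by sample; averaging the
# product over one symbol (an ideal low-pass filter) keeps only the DC term,
# which equals the inner product of the two encoded vectors.
mixed = w_wave * x_wave
dc_term = mixed.mean()

print(dc_term, np.dot(w, x))                  # the two values agree
print(np.allclose(dc_term, np.dot(w, x)))     # True
```

Because each of the N products is formed by the physics of mixing rather than by digital multipliers, the digital hardware only needs to extract a single low-frequency term per output.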

Architecture Overview

WISE consists of two primary components: the central radio and WISE-R client devices.

Central Radio:

Encodes model weights layer by layer into RF waveforms.

Performs channel precoding to account for wireless propagation delays and multipath effects.

Broadcasts weights to multiple clients simultaneously.

WISE-R Client:

Receives RF weight signals and combines them with local input data using a passive frequency mixer.

Outputs the computed analog result for further digital processing or activation.

Performs minimal ADC sampling and decoding to finalize inference results.

This workflow allows real-time inference with ultralow energy consumption while maintaining high accuracy.
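
To make the disaggregated workflow concrete, the toy loop below mimics the division of labor: a stand-in “central radio” streams weights one layer at a time, and a stand-in client consumes them without ever holding the full model. All names and layer shapes are illustrative, the weights are random rather than trained, and the matrix-vector product is computed digitally purely for readability; in WISE that step is the analog mixing operation.

```python
import numpy as np

rng = np.random.default_rng(1)

def stream_layers(layer_shapes):
    """Stand-in for the central radio: yields one layer's weights at a time.
    In WISE these weights would arrive as precoded RF waveforms, not arrays."""
    for n_in, n_out in layer_shapes:
        yield rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)   # random, untrained weights

def client_inference(x, layer_shapes):
    """Stand-in for a WISE-R client: consumes the weight stream layer by layer
    and never stores the full model. The matrix-vector product is done digitally
    here for readability; in WISE it is the analog mixing step sketched earlier."""
    for i, W in enumerate(stream_layers(layer_shapes)):
        x = W @ x
        if i < len(layer_shapes) - 1:
            x = np.maximum(x, 0.0)   # nonlinear activation, applied digitally at the client
        # W goes out of scope here: each layer's weights are discarded after use.
    return x

# Illustrative LeNet-300-100-style shapes (784 -> 300 -> 100 -> 10).
shapes = [(784, 300), (300, 100), (100, 10)]
logits = client_inference(rng.normal(size=784), shapes)
print("predicted class:", int(np.argmax(logits)))
```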

Experimental Validation: MNIST and AudioMNIST

WISE has been extensively validated using standard datasets:

MNIST Dataset

Model: Three fully connected (FC) layers (LeNet-300-100) with 0.27 million complex-valued parameters.

Performance:

Digital computing accuracy: 98.1%

WISE experimental accuracy: 95.7% at 6.0 fJ/MAC

Energy Efficiency: 165.8 TOPS/W (tera-operations per second per watt), more than a 10× improvement over an NVIDIA H100 GPU.

AudioMNIST Dataset

Dataset: 3000 audio clips of spoken digits from 0–9, processed as spectrogram vectors with Zadoff-Chu (ZC) phase encoding.

Model: Three-layer FC network with 1.23 million complex-valued parameters (4.92 million real-valued MACs).

Performance:

Digital computing accuracy: 99.2%

WISE experimental accuracy: 97.2% at 2.8 fJ/MAC

Energy Efficiency: 359.7 TOPS/W, roughly a 25× improvement over an NVIDIA H100 GPU.
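
The model sizes quoted above can be sanity-checked with a few lines of arithmetic. The MNIST figure follows from the standard LeNet-300-100 layer widths (784-300-100-10, weight matrices only); the AudioMNIST layer widths are not given in this article, so only the stated complex-to-real conversion (one complex MAC corresponding to four real multiplies) is checked.

```python
# Weight-matrix parameter count for the standard LeNet-300-100 layout used on MNIST
# (784 -> 300 -> 100 -> 10, biases excluded).
mnist_layers = [(784, 300), (300, 100), (100, 10)]
mnist_params = sum(n_in * n_out for n_in, n_out in mnist_layers)
print(f"MNIST FC parameters: {mnist_params:,}")            # 266,200 ~ 0.27 million

# One complex multiply-accumulate expands to four real-valued multiplies, which is
# consistent with 1.23 million complex parameters -> 4.92 million real-valued MACs.
audiomnist_complex_params = 1.23e6
print(f"AudioMNIST real-valued MACs: {audiomnist_complex_params * 4:,.0f}")
```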

“WISE demonstrates that analog in-physics computing can achieve real-world ML inference with energy costs approaching the thermodynamic limit,” says Zhihui Gao, lead author of the study.

Energy and Computational Efficiency

WISE’s energy consumption breaks down into three components:

Waveform Generation and I/Q Modulation (E1)

Converts input vectors and model weights into frequency-domain RF signals.

I/Q Sampling (E2)

Minimal sampling performed by low-power ADCs.

Digital FFT and Decoding (E3)

Lightweight processing to extract final inference results.

The total energy per MAC, denoted e_MVM, scales favorably with larger MVM sizes. For inner-product operations with vectors of up to N = 32,768 elements, the measured energy efficiency approaches 1.4 fJ/MAC (699 TOPS/W), roughly 50× better than a conventional GPU.

Dataset	Accuracy	Energy per MAC	TOPS/W	Improvement vs H100 GPU
MNIST	95.7%	6.0 fJ/MAC	165.8	10×
AudioMNIST	97.2%	2.8 fJ/MAC	359.7	25×
IP Computation (N=32,768)	–	1.4 fJ/MAC	699.3	50×
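
The energy and throughput columns above are two views of the same measurement: counting one MAC as one operation, TOPS/W = 1000 / (fJ per MAC). The small gaps between computed and quoted values come from the energy figures being rounded to two significant digits.

```python
# Counting one MAC as one operation: a MAC costing e femtojoules gives 1e15/e ops
# per joule, i.e. TOPS/W = 1000 / (fJ per MAC).
def fj_per_mac_to_tops_per_watt(energy_fj):
    return 1000.0 / energy_fj

rows = [("MNIST", 6.0, 165.8), ("AudioMNIST", 2.8, 359.7), ("IP, N=32,768", 1.4, 699.3)]
for name, energy_fj, quoted in rows:
    computed = fj_per_mac_to_tops_per_watt(energy_fj)
    print(f"{name:<14} computed {computed:6.1f} TOPS/W   quoted {quoted} TOPS/W")
```
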
Technical Insights: Channel Calibration and Precoding

Wireless channels introduce variability due to multipath propagation and delay. WISE addresses this with channel state information (CSI) calibration:

Central Radio Precoding: Model weights are preprocessed based on CSI to compensate for distortion.

Client-Side Options: For heterogeneous CSI environments, clients can perform additional precoding for improved accuracy.

Spatial Multiplexing: Large antenna arrays enable simultaneous broadcasting of multiple models, allowing scalable deployments.

This approach ensures accurate delivery of ML model weights while maintaining the low-energy advantages of analog computing.
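
The idea behind CSI-based precoding can be sketched in a few lines. The snippet below assumes a simplified per-subcarrier channel model and a zero-forcing-style correction (pre-dividing each weight by its subcarrier’s measured gain); the actual precoding additionally handles propagation delays and multi-antenna transmission, so treat this only as an illustration of why the weights arrive undistorted.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256                                               # subcarriers carrying weight values
weights = rng.normal(size=N) + 1j * rng.normal(size=N)

# Frequency-selective channel: one complex gain per subcarrier, as produced by
# multipath delay spread. Assumed already estimated from pilot symbols (the CSI).
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# Central-radio precoding: pre-divide each weight by its subcarrier's channel gain,
# so the channel itself undoes the distortion during propagation.
precoded = weights / h

# What the WISE-R client sees: the channel applied to the precoded signal, plus noise.
noise = 1e-3 * (rng.normal(size=N) + 1j * rng.normal(size=N))
received = h * precoded + noise

print("max weight error:", np.max(np.abs(received - weights)))   # ~1e-3, noise-limited
```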

Scalability and Real-World Applications

WISE is inherently scalable and flexible, opening applications across various sectors:

Autonomous Drones and Robotics: Swarms can perform object detection or navigation tasks without heavy onboard processors.

Smart Cities: Traffic sensors and cameras can coordinate in real time, optimizing signal timings and reducing congestion.

Indoor Edge Computing Clusters: Shielded environments, such as server rooms, can leverage directional RF broadcasting for low-energy ML inference.

Privacy-Sensitive Applications: Separation of model weights (central radio) and inference requests (edge clients) mitigates data leakage risks.

Advantages Over Existing In-Physics Approaches

Previous analog computing paradigms, including photonic waveguides, memristor crossbars, and SRAM arrays, offered energy efficiency gains but were limited by hardware complexity. WISE provides three key advantages:

Hardware Accessibility: Uses standard RF components like passive frequency mixers, already widely available in edge devices.

Flexible Scaling: Supports large-dimensional MVMs with minimal hardware changes.

Disaggregated Deployment: Enables simultaneous broadcasting to multiple devices without storing full models locally.

Dirk Englund of MIT notes, “WISE redefines the trade-off between computation and communication at the edge, achieving unprecedented energy efficiency without sacrificing accuracy.”

Limitations and Future Directions

While WISE demonstrates impressive results, certain challenges remain:

Distance Constraints: Current prototypes operate over short ranges (~1 m), requiring stronger RF transmission or beamforming for larger deployments.

Spectrum Limitations: Broadcasting multiple large models simultaneously may demand additional bandwidth or efficient multiplexing strategies.

Full Analog Architectures: While partial analog computation has been demonstrated, fully analog multilayer models will require integrating nonlinear activation circuits (e.g., transistors or diodes).

Ongoing research is exploring the integration of next-generation 6G wireless infrastructure, advanced RF beamforming, and ASIC development to expand WISE’s capabilities.

Implications for Industry and Sustainability

WISE not only revolutionizes AI at the edge but also has profound environmental and operational implications:

Energy Conservation: 10–50× reduction in energy per MAC translates to longer battery life and lower operational costs for autonomous devices.

Deployment Versatility: Minimal hardware requirements and use of existing RF infrastructure make WISE suitable for a wide range of industrial and consumer applications.

Sustainable AI: Reducing energy footprints of AI computation contributes to greener and more sustainable technology ecosystems.

Conclusion

Wireless in-physics computing through WISE represents a paradigm shift in edge AI, demonstrating that ultralow-power, high-accuracy machine learning is achievable without heavy digital hardware. By leveraging RF waves to perform matrix-vector multiplications directly in the analog domain, WISE overcomes traditional memory and energy constraints, offering scalable, secure, and energy-efficient AI inference for edge devices.

The implications are far-reaching, from autonomous drones and smart cities to indoor compute clusters, providing both performance and sustainability advantages. With further advancements in wireless communication, beamforming, and fully analog architectures, WISE could redefine the future of distributed intelligence.

For further insights and research updates, readers can explore expert perspectives from Dr. Shahid Masood and the 1950.ai team, who continue to monitor cutting-edge AI developments and guide the integration of emerging technologies into practical applications.

Further Reading / External References

Gao, Z., Vadlamani, S.K., Sulimany, K., Englund, D., Chen, T. Disaggregated machine learning via in-physics computing at radio frequency. Science Advances, 9 Jan 2026, Vol 12, Issue 2. DOI: 10.1126/sciadv.adz0817

Duke University. Wireless approach enables energy-efficient AI on edge devices without heavy hardware. Phys.org, 9 Jan 2026.
