Fourier-Continuation-based Physics-Informed Neural Operator (FC-PINO) is an extension of the Physics-Informed Neural Operator (PINO) framework that integrates Fourier continuation techniques to improve accuracy on non-periodic and non-smooth partial differential equations (PDEs).
Traditional PINO leverages the Fourier Neural Operator (FNO) and spectral differentiation to compute physics-informed losses. However, these methods assume periodicity, which leads to significant errors and artifacts such as the Gibbs phenomenon when applied to non-periodic problems.
FC-PINO overcomes this limitation by transforming non-periodic signals into periodic ones on extended domains using well-conditioned Fourier continuation. This enables efficient and accurate derivative computation without the drawbacks of finite differences or the high memory overhead of automatic differentiation.
By incorporating Fourier continuation, FC-PINO provides robust, scalable, and highly precise solutions across challenging PDE benchmarks, substantially outperforming standard and padded PINO methods.
See arXiv:2211.15960 for more details.
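The periodicity problem described above is easy to reproduce. The following standalone sketch (plain NumPy, not code from this repository) applies the same FFT-based differentiation rule to a periodic and a non-periodic signal: the former is accurate to near machine precision, while the latter suffers large boundary errors from the implied jump.

```python
import numpy as np

def spectral_derivative(u, period):
    """Differentiate sampled data via the FFT (implicitly assumes periodicity)."""
    n = u.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=period / n)  # wavenumbers
    return np.fft.ifft(k * np.fft.fft(u)).real

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)

# Periodic signal: spectral differentiation is accurate to machine precision.
err_periodic = np.max(np.abs(spectral_derivative(np.sin(2 * np.pi * x), 1.0)
                             - 2 * np.pi * np.cos(2 * np.pi * x)))

# Non-periodic signal: the jump at the domain boundary triggers Gibbs artifacts.
err_nonperiodic = np.max(np.abs(spectral_derivative(np.exp(x), 1.0) - np.exp(x)))

print(err_periodic, err_nonperiodic)
```

The error on the non-periodic signal is orders of magnitude larger, which is exactly the failure mode that Fourier continuation removes.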
FC-PINO is the central approach introduced in this work. It improves the standard PINO framework by applying Fourier continuation to the input signal before the initial lifting layer. After the final Fourier layer, derivatives are computed using the chain rule on a padded domain. The signal is then truncated back to the original domain before the last projection layer, yielding highly accurate solutions for non-periodic and non-smooth PDEs.
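As a rough structural sketch of that ordering (continuation before lifting, truncation before the final projection), here is a toy NumPy forward pass. The weight shapes, sizes, and the crude linear "continuation" bridge are all placeholders for illustration, not the repository's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_ext, width = 128, 160, 8  # original grid, extended grid, channel width (illustrative)

# Hypothetical stand-ins for the learned components (random weights here).
W_lift = 0.1 * rng.standard_normal((1, width))        # lifting layer
R = 0.1 * rng.standard_normal((width, width, n_ext))  # spectral weights of one Fourier layer
W_proj = 0.1 * rng.standard_normal((width, 1))        # final projection layer

def continuation(u):
    # Placeholder for Fourier continuation: append a crude linear bridge from
    # u[-1] back toward u[0] so the extended signal is (roughly) periodic.
    t = np.linspace(0.0, 1.0, n_ext - u.size, endpoint=False)
    return np.concatenate([u, (1 - t) * u[-1] + t * u[0]])

def fc_pino_forward(u):
    v = continuation(u)[:, None] @ W_lift                        # 1. continue, then lift
    v_hat = np.fft.fft(v, axis=0)                                # 2. one Fourier layer:
    v = np.fft.ifft(np.einsum('mi,ijm->mj', v_hat, R), axis=0).real
    v = np.tanh(v)                                               #    mode-wise mixing + nonlinearity
    return (v[:u.size] @ W_proj)[:, 0]                           # 3. truncate, then project

x = np.linspace(0.0, 1.0, n, endpoint=False)
out = fc_pino_forward(np.exp(x))
print(out.shape)  # (128,)
```

The key point is only the order of operations: the network operates on the extended, periodic domain, and the output is cut back to the original domain at the end.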
Fourier Continuation in FC-PINO

We add two new Fourier continuation techniques, available through the `neuraloperator` library, which can be used within FC-PINO:
- FC-Legendre
- FC-Gram
Spectral Differentiation

Accurate derivatives are computed on extended domains using spectral differentiation.
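As an illustration of this idea (not the repository's FC-Gram or FC-Legendre code), the sketch below extends f(x) = exp(x) from [0, 1] to a periodic function on a larger domain using a simple quintic Hermite bridge, differentiates it spectrally, and truncates back. Unlike the naive non-periodic case, the derivative recovered on the original domain is accurate.

```python
import numpy as np

# Extend f(x) = exp(x) from [0, 1] to a periodic function on [0, T) with a
# quintic Hermite bridge matching f, f', f'' at both junctions (a simple
# stand-in for the FC-Gram / FC-Legendre continuations).
N, T = 512, 1.28
h = T / N
x = np.arange(N) * h
orig = x <= 1.0                      # samples on the original domain [0, 1]

d = T - 1.0                          # bridge length
conds = [(0.0, 0), (0.0, 1), (0.0, 2), (d, 0), (d, 1), (d, 2)]  # (t, derivative order)
A = np.array([[np.prod(np.arange(j - k + 1, j + 1)) * t ** (j - k) if j >= k else 0.0
               for j in range(6)] for t, k in conds])
e = np.e
coef = np.linalg.solve(A, np.array([e, e, e, 1.0, 1.0, 1.0]))  # exp at x=1, periodic wrap to x=0

u = np.empty(N)
u[orig] = np.exp(x[orig])
u[~orig] = np.polyval(coef[::-1], x[~orig] - 1.0)

# Spectral differentiation on the extended (now periodic) domain, then truncation.
k = 2j * np.pi * np.fft.fftfreq(N, d=h)
du = np.fft.ifft(k * np.fft.fft(u)).real
err = np.max(np.abs(du[orig] - np.exp(x[orig])))
print(err)
```

A low-order polynomial bridge already suppresses the boundary artifacts; the well-conditioned continuations used in FC-PINO are designed to do this with much higher accuracy.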
Baselines for Comparison

Alongside FC-PINO, we provide several baseline methods for benchmarking:
- Standard PINO: standard PINO without continuation or padding
- PINO with Domain Padding:
  - Pad-PINO: the solution and derivatives are computed on a padded domain
  - PINO Pad-Out: only derivatives are computed on a padded domain, with the solution predicted on the original domain
To explore the capabilities of FC-PINO, you can run the provided example scripts.
The repository includes examples for 1-D, 2-D, and 3-D fluid dynamics problems.
These scripts demonstrate the core FC-PINO approach.
- 1-D Self-Similar Burgers' Equation: `python 1D_examples/fc_pino_1d_burgers.py`
- 2-D Burgers' Equation: `python 2D_examples/fc_pino_2d_burgers.py`
- 3-D Cavity Flow Navier-Stokes: `python 3D_examples/fc_pino_NS.py`
These scripts showcase the different padding options available in FC-PINO.
- 1-D Padded Self-Similar Burgers' Equation: `python 1D_examples/pad_1d_burgers.py`
- 2-D Padded Burgers' Equation: `python 2D_examples/pad_2d_burgers.py`
- 3-D Padded Cavity Flow Navier-Stokes: `python 3D_examples/pad_NS.py`
For comparison, these scripts implement a standard PINO approach without Fourier continuation or padding.
- 1-D Standard PINO, Self-Similar Burgers' Equation: `python 1D_examples/standard_pino_1d_burgers.py`
- 2-D Standard PINO, Burgers' Equation: `python 2D_examples/standard_pino_2d_burgers.py`
- 3-D Standard PINO, Cavity Flow Navier-Stokes: `python 3D_examples/standard_pino_NS.py`
FC-PINO can be trained on entire families of PDEs and subsequently fine-tuned on specific instances for higher accuracy.
We illustrate this on the 1D self-similar Burgers' equation, where smooth self-similar solutions exist for values of λ of the form λ = 1/(2i + 2).
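For concreteness, the first few admissible values of λ follow directly from that formula:

```python
# Smooth self-similar solutions exist for lambda = 1/(2i + 2), i = 0, 1, 2, ...
lams = [1.0 / (2 * i + 2) for i in range(4)]
print(lams)  # [0.5, 0.25, 0.16666666666666666, 0.125]
```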
The model is first pretrained across a range of sampled λ values to learn a general solution operator, capturing the full spectrum of solution behaviors across the family of PDEs.
`python 1D_examples/lambda_pretrain.py`
After pretraining, the model is fine-tuned on selected λ instances to achieve precise solutions for individual cases.
λ is incorporated into the architecture using a NeRF-style embedding with sine and cosine features, which naturally aligns with the Fourier structure of the model and allows efficient learning of spectral interactions.
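A minimal sketch of such an embedding is shown below; the exact number of frequencies and their scaling are illustrative choices here, and may differ from what the repository uses.

```python
import numpy as np

def lambda_embedding(lam, num_freqs=6):
    """NeRF-style positional embedding of the PDE parameter lambda:
    sine and cosine features at geometrically spaced frequencies."""
    freqs = 2.0 ** np.arange(num_freqs)   # 1, 2, 4, ..., 2^(num_freqs - 1)
    angles = np.pi * freqs * lam
    return np.concatenate([np.sin(angles), np.cos(angles)])

emb = lambda_embedding(0.25)
print(emb.shape)  # (12,)
```

The resulting feature vector can be broadcast and concatenated to the model's input channels, giving the operator a smooth, frequency-structured representation of λ.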
This combination of family-level pretraining and instance-specific fine-tuning demonstrates how FC-PINO can generalize across parameterized PDE families while maintaining high-fidelity predictions for individual solutions.
If you use FC-PINO in your research, please cite the following paper:
Ganeshram, A., Maust, H., Duruisseaux, V., Li, Z., Wang, Y., Leibovici, D., Bruno, O., Hou, T., & Anandkumar, A. (2025).
FC-PINO: High Precision Physics-Informed Neural Operators via Fourier Continuation.
arXiv preprint arXiv:2211.15960.
@misc{ganeshram2025fcpinohighprecisionphysicsinformed,
title = {FC-PINO: High Precision Physics-Informed Neural Operators via Fourier Continuation},
author = {Adarsh Ganeshram and Haydn Maust and Valentin Duruisseaux and Zongyi Li and Yixuan Wang and Daniel Leibovici and Oscar Bruno and Thomas Hou and Anima Anandkumar},
year = {2025},
eprint = {2211.15960},
archivePrefix = {arXiv},
primaryClass = {cs.LG},
url = {https://arxiv.org/abs/2211.15960}
}