2025-10-23
Large, complex, many-part systems.
Subgrid processes are largest source of uncertainty


Microphysics by Sisi Chen Public Domain
Staggered grid by NOAA under Public Domain
Globe grid with box by Caltech under Fair use

Neural Net by 3Blue1Brown under fair dealing.
https://www.microsoft.com/en-us/research/project/aurora-forecasting/
Many large scientific models are written in Fortran (or C, or C++), but machine learning is (mostly) conducted in Python.




Mathematical Bridge by cmglee used under CC BY-SA 3.0
PyTorch, the PyTorch logo and any related marks are trademarks of The Linux Foundation.
FTorch wraps the libtorch C++ API for Fortran using iso_c_binding.

[Figure: Python env / Python runtime]


xkcd #1987 by Randall Munroe, used under CC BY-NC 2.5
- Easy to build and link using CMake
- User tools: pt2ts.py aids users in saving PyTorch models to TorchScript
- Examples suite
- Full API documentation online at cambridge-iccs.github.io/FTorch
- FOSS, licensed under MIT
- CUDA, HIP, MPS, and XPU enabled
- Find it on:
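Assuming FTorch has been installed and its CMake package is on the prefix path, a downstream Fortran project can link against it with a few lines (target and package names follow the FTorch documentation; the executable name is illustrative):

```cmake
# Locate an installed FTorch
# (may need -DCMAKE_PREFIX_PATH=/path/to/ftorch/install)
find_package(FTorch REQUIRED)

add_executable(my_model main.f90)

# Link against the exported FTorch target
target_link_libraries(my_model PRIVATE FTorch::ftorch)
```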
import torch
import torchvision

# Load pre-trained model and put in eval mode
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.eval()

# Create dummy input
dummy_input = torch.ones(1, 3, 224, 224)

# Save to TorchScript (tracing here; set trace = False to use
# scripting, which also captures data-dependent control flow)
trace = True
if trace:
    ts_model = torch.jit.trace(model, dummy_input)
else:
    ts_model = torch.jit.script(model)

frozen_model = torch.jit.freeze(ts_model)
frozen_model.save("/path/to/saved_model.pt")
TorchScript: trace for simple models, script more generally.

use ftorch
implicit none

real, dimension(5), target :: in_data, out_data  ! Fortran data structures
type(torch_tensor), dimension(1) :: input_tensors, output_tensors  ! Torch data structures
type(torch_model) :: torch_net
integer, dimension(1) :: tensor_layout = [1]

in_data = ...  ! Prepare data in Fortran

! Create Torch input/output tensors from the Fortran arrays
call torch_tensor_from_array(input_tensors(1), in_data, torch_kCPU)
call torch_tensor_from_array(output_tensors(1), out_data, torch_kCPU)

call torch_model_load(torch_net, 'path/to/saved/model.pt', torch_kCPU)  ! Load ML model
call torch_model_forward(torch_net, input_tensors, output_tensors)  ! Infer

call further_code(out_data)  ! Use output data in Fortran immediately

! Cleanup
call torch_delete(torch_net)
call torch_delete(input_tensors)
call torch_delete(output_tensors)

Work led by Joe Wallwork.
To date FTorch has focussed on enabling researchers to run models developed and trained offline within Fortran codes.
However, it is clear (Mansfield and Sheshadri 2024) that more attention to online performance, and to options for differentiable/hybrid models (e.g. Kochkov et al. 2024), is becoming important.
Suppose we want to use a loss function involving downstream model code, e.g.,
\[J(\theta)=\int_\Omega(u-u_{ML}(\theta))^2\;\mathrm{d}x,\]
where \(u\) is the solution from the physical model and \(u_{ML}(\theta)\) is the solution from a hybrid model with some ML parameters \(\theta\).
Computing \(\mathrm{d}J/\mathrm{d}\theta\) requires differentiating Fortran code as well as ML code.
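As a minimal sketch of what this gradient involves (pure NumPy, with a hypothetical one-parameter "ML" term \(u_{ML}(\theta) = \theta x\) standing in for a real network), one can discretise \(J\) on a grid and check the analytic \(\mathrm{d}J/\mathrm{d}\theta\) against a finite difference:

```python
import numpy as np

# Discretise Omega = [0, 1] with a uniform grid
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]

u = np.sin(np.pi * x)  # stand-in for the physical model's solution

def u_ml(theta):
    # Hypothetical one-parameter "hybrid model" solution
    return theta * x

def J(theta):
    # J(theta) = integral over Omega of (u - u_ML(theta))^2 dx
    return np.sum((u - u_ml(theta)) ** 2) * dx

def dJ_dtheta(theta):
    # Analytic derivative of the discretised J:
    # dJ/dtheta = integral of -2 * (u - u_ML(theta)) * x dx
    return np.sum(-2.0 * (u - u_ml(theta)) * x) * dx

theta0, eps = 0.7, 1e-6
fd = (J(theta0 + eps) - J(theta0 - eps)) / (2.0 * eps)
print(abs(fd - dJ_dtheta(theta0)) < 1e-8)  # finite difference agrees
```

In a real hybrid model the chain rule must also pass through the Fortran code between \(\theta\) and \(u_{ML}\), which is where exposing Torch's autograd to Fortran becomes useful.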
FTorch is now exposing autograd functionality from Torch:
- Tensors take a requires_grad argument and provide backward methods.
- Overloaded operators (=, +, -, *, /, **).
- Optimisers torch::optim::SGD, torch::optim::AdamW etc., as well as zero_grad and step methods.
- Reductions are limited to torch_tensor_sum and torch_tensor_mean for now, though.
- autograd and optimizer functionality is exposed using iso_c_binding.

Get in touch:
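To illustrate how these pieces fit together — gradients accumulated by a backward pass, cleared as with zero_grad, then used in an SGD-style step — here is a toy scalar reverse-mode autodiff in plain Python. It is purely illustrative: neither FTorch nor Torch code, just the same mechanics in miniature:

```python
class Scalar:
    """Toy reverse-mode autodiff value (illustrative only)."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # (parent, local_derivative) pairs

    def __add__(self, other):
        other = other if isinstance(other, Scalar) else Scalar(other)
        return Scalar(self.value + other.value,
                      ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Scalar) else Scalar(other)
        return Scalar(self.value * other.value,
                      ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self), chaining through parents
        self.grad += seed
        for parent, local in self._parents:
            parent.backward(seed * local)

# Fit theta so that theta * 3 ~ 6 with plain gradient descent
theta = Scalar(0.0)
for _ in range(50):
    theta.grad = 0.0                   # like zero_grad
    residual = theta * 3.0 + (-6.0)
    loss = residual * residual
    loss.backward()                    # like backward
    theta.value -= 0.05 * theta.grad   # like an SGD step

print(round(theta.value, 3))  # prints 2.0
```

Torch's autograd does the same bookkeeping over tensors and a full operator set; FTorch's contribution is making that machinery callable from Fortran.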
Thanks to Jack Atkinson, Joe Wallwork, Tom Meltzer,
Elliott Kasoar, Niccolò Zanotti and the rest of the FTorch team.
The ICCS received support from 
FTorch has been supported by 

