EUV Lithography Simulation
Developing weakly guiding approximations and CNN-based models to speed up extreme ultraviolet lithography workflows for advanced process nodes.
I explore hardware-aware acceleration for computation-intensive workloads, spanning EUV lithography modeling, FPGA-based machine learning, and sparse neural network architectures. Recent papers investigate how supercomputers and large-scale GPU clusters shorten the turnaround of full-field optical simulations, and how reconfigurable processors orchestrate inference and training pipelines. A major thrust is leveraging sparsity-aware dataflows and memory-light data paths on FPGA and CGRA platforms to approximate 3D EUV masks and neural operators with high fidelity. By co-designing the software stack and the hardware implementations, I aim to transition these AI acceleration capabilities into resilient, production-ready systems.
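As a rough illustration of what a fast forward model for lithography looks like, the sketch below computes a coherent aerial image for a binary mask under a thin-mask (Kirchhoff-style) approximation: the mask spectrum is low-pass filtered by a circular pupil and transformed back to an intensity image. This is a generic textbook-style baseline, not the weakly guiding formulation or CNN surrogate from the papers; the `aerial_image` helper, the 13.5 nm wavelength, 0.33 NA, and the line-space layout are illustrative assumptions.

```python
# Minimal sketch (not the published model): aerial-image formation for a binary
# mask under a thin-mask, coherent-illumination approximation. This is the kind
# of cheap forward model that rigorous 3D EUV mask simulations are approximated
# by, and that CNN surrogates can be benchmarked against. Parameters are illustrative.
import numpy as np

def aerial_image(mask, wavelength_nm=13.5, na=0.33, pixel_nm=1.0):
    """Coherent aerial image: low-pass the mask spectrum with a circular pupil."""
    n = mask.shape[0]
    # Spatial frequencies (cycles/nm) on the simulation grid.
    f = np.fft.fftfreq(n, d=pixel_nm)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    # Circular pupil: pass frequencies up to NA / wavelength.
    pupil = (np.hypot(fx, fy) <= na / wavelength_nm).astype(float)
    # Image amplitude = inverse FFT of (mask spectrum * pupil); intensity = |amplitude|^2.
    amplitude = np.fft.ifft2(np.fft.fft2(mask) * pupil)
    return np.abs(amplitude) ** 2

if __name__ == "__main__":
    # 256 x 256 nm clip with three 16 nm lines on a 48 nm pitch (hypothetical layout).
    mask = np.zeros((256, 256))
    for c in (80, 128, 176):
        mask[:, c - 8:c + 8] = 1.0
    img = aerial_image(mask)
    print(img.shape, float(img.max()))
```

One way a learned surrogate fits in, broadly speaking, is to train a CNN to map mask clips to rigorously simulated images, so that repeated full 3D mask solves are amortized across a full-field layout.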
Leading the Sparsity-aware Coarse-grained Reconfigurable Accelerator (CGRA) project, supported by the Google Silicon Research Grant (FY2024–2025).
Designing FPGA-oriented architectures that balance flexibility and throughput for data-intensive applications.
Building sparse neural network accelerators and near-memory computing fabrics for deep learning workloads.
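The sketch below illustrates, in plain Python, the zero-skipping principle that sparsity-aware dataflows and sparse accelerators exploit: weights are kept in a CSR-style compressed form, and multiply-accumulates are issued only for stored nonzeros. It is a software analogy rather than the project's actual on-chip format or scheduler; `csr_compress`, `csr_matvec`, the 90% pruning rate, and the matrix size are illustrative assumptions.

```python
# Minimal sketch of the zero-skipping idea behind sparsity-aware dataflows
# (illustrative only, not the accelerator's on-chip format or scheduling logic):
# weights are stored in CSR arrays, and multiply-accumulates run only over the
# stored nonzeros, which is what a sparse CNN/CGRA datapath exploits in hardware.
import numpy as np

def csr_compress(w, threshold=0.0):
    """Compress a 2D weight matrix into CSR arrays (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in w:
        for j, x in enumerate(row):
            if abs(x) > threshold:
                values.append(x)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product: one MAC per stored nonzero, zeros are never touched."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 64))
    w[rng.random(w.shape) < 0.9] = 0.0           # roughly 90% pruned weights
    x = rng.standard_normal(64)
    vals, cols, ptrs = csr_compress(w)
    print("MACs issued:", len(vals), "of", w.size)  # about a tenth of the dense count
    assert np.allclose(csr_matvec(vals, cols, ptrs, x), w @ x)
```

In a hardware datapath the same bookkeeping is done by index decoders and gated MAC units, so the saved operations translate into throughput and energy rather than just fewer loop iterations.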
Publications spanning EUV lithography, FPGA training accelerators, and sparse CNN deployment.
Regular contributions to FPT, FPGA, FCCM, and related venues on reconfigurable computing and AI accelerators.
Active projects backed by Google, JSPS, and industrial partners on sparse computing architectures.
For complete publication, award, and grant records, please visit the English publications page.