HY-WU introduces a lightweight "neural memory" for AI. It generates a conditioned model adapter per request (without finetuning!), enabling instance-level personalization while preserving the base model's general capabilities.
HY-WU remains practical for large foundation models (even at 80B parameters!). With structured parameter tokenization, the method is naturally compatible with large-scale architectures.
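The per-request adapter idea can be sketched as a hypernetwork that maps a request embedding to low-rank weight deltas applied on top of a frozen base layer. Everything below — the function names, shapes, and the LoRA-style parameterization — is an illustrative assumption, not HY-WU's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hypernet(cond_dim, d_model, rank=4):
    # Hypothetical hypernetwork: two small linear maps turn a request
    # embedding into low-rank factors A (d_model x rank) and
    # B (rank x d_model). Shapes and scaling are illustrative only.
    Wa = rng.standard_normal((cond_dim, d_model * rank)) * 0.02
    Wb = rng.standard_normal((cond_dim, rank * d_model)) * 0.02
    def hypernet(cond):
        A = (cond @ Wa).reshape(d_model, rank)
        B = (cond @ Wb).reshape(rank, d_model)
        return A, B
    return hypernet

def adapted_forward(x, W_frozen, A, B, scale=1.0):
    # Frozen base weight plus a per-request low-rank delta; no gradient
    # step ever touches W_frozen, so general capability is preserved.
    return x @ (W_frozen + scale * (A @ B))

d_model, cond_dim = 16, 8
hyper = make_hypernet(cond_dim, d_model)
W = rng.standard_normal((d_model, d_model))   # frozen base layer
cond = rng.standard_normal(cond_dim)          # embedding of this request
A, B = hyper(cond)
y = adapted_forward(rng.standard_normal((2, d_model)), W, A, B)
print(y.shape)  # (2, 16)
```

Because the adapter is emitted by a single forward pass of the hypernetwork, each request gets its own weights at inference cost, with no optimization loop per instance.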
HY-WU achieves high human preference win-rates against open-source models, exceeds strong closed-source baselines, and remains close to the latest Nano-Banana series.
HY-WU substantially outperforms leading open-source models and remains competitive with top-tier closed-source commercial systems. While Nano Banana 2 and Nano Banana Pro achieve slightly higher overall scores (52.4% and 53.8%, respectively), the margin is modest. Given that these commercial systems are likely trained with substantially larger backbones and proprietary data, the small gap suggests that our operator-level conditional adaptation remains effective even at a more constrained model scale.
Human evaluation against other models.
We thank the Tencent Hunyuan team for their support. HY-WU is part of the Tencent Hunyuan project.
@article{wu2026hy-wu,
  title={HY-WU (Part I): An Extensible Functional Neural Memory Framework and An Instantiation in Text-Guided Image Editing},
  author={Tencent HY Team and Mengxuan Wu and Xuanlei Zhao and Ziqiao Wang and Ruicheng Feng and Atlas Wang and Qinglin Lu and Kai Wang},
  journal={arXiv preprint arXiv:2603.07236},
  year={2026}
}