HY-WU introduces a lightweight "neural memory" for AI. It generates a conditioned model adapter per request (without finetuning!), enabling instance-level personalization while preserving the base model's general capability.
HY-WU remains practical for large foundation models (even at 80B parameters!). With structured parameter tokenization, the method is naturally compatible with large-scale architectures.
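To make the per-request adapter idea concrete, here is a minimal sketch, not the official HY-WU implementation: a small hypernetwork maps a request embedding to LoRA-style low-rank weight deltas, so the frozen base weights are never finetuned. All names, shapes, and the low-rank parameterization are illustrative assumptions.

```python
# Hypothetical sketch (NOT the HY-WU code): per-request adapter generation.
# A tiny hypernetwork emits low-rank factors A (D x R) and B (R x D); the
# effective layer weight becomes W_base + A @ B, with W_base left frozen.
import numpy as np

rng = np.random.default_rng(0)
D, R, COND = 128, 8, 64           # model width, adapter rank, request-embedding size (assumed)

# Hypernetwork: one linear map from the request embedding to both factors.
W_hyper = rng.normal(scale=0.02, size=(COND, 2 * D * R))

def generate_adapter(cond):
    """Map a request embedding to low-rank adapter factors for one layer."""
    flat = cond @ W_hyper
    A = flat[: D * R].reshape(D, R)
    B = flat[D * R :].reshape(R, D)
    return A, B

def adapted_forward(x, W_base, A, B):
    """Frozen base layer plus the generated low-rank delta: x (W + A B)."""
    return x @ W_base + (x @ A) @ B

cond = rng.normal(size=COND)      # stands in for an encoded user request
A, B = generate_adapter(cond)
W_base = rng.normal(size=(D, D))  # frozen base weight, untouched per request
x = rng.normal(size=(1, D))
y = adapted_forward(x, W_base, A, B)
print(y.shape)                    # (1, 128)
```

Because only the small factors are generated per request, the base model stays shared across users; each request's personalization lives entirely in the adapter.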
HY-WU achieves high human-preference win rates against open-source models, exceeds strong closed-source baselines, and remains close to the latest Nano Banana series.
HY-WU substantially outperforms leading open-source models and remains competitive with top-tier closed-source commercial systems. While Nano Banana 2 and Nano Banana Pro achieve slightly higher overall scores (52.4% and 53.8%, respectively), the margin is small. Given that these commercial systems are likely trained with substantially larger backbones and proprietary data, this modest gap suggests that our operator-level conditional adaptation remains effective even at a more constrained model scale.
Human evaluation against other models.
We thank the Tencent Hunyuan team for their support. HY-WU is part of the Tencent Hunyuan project.
@misc{wu2026hy-wu,
      author = {Tencent HY Team and Mengxuan Wu and Xuanlei Zhao and Ziqiao Wang and Ruichfeng Feng and Atlas Wang and Qinglin Lu and Kai Wang},
      title = {HY-WU (Part I): An Extensible Functional Neural Memory Framework and An Instantiation in Text-Guided Image Editing},
      year = {2026},
      howpublished = {\url{https://github.com/Tencent-Hunyuan/HY-WU}},
      note = {GitHub repository; preprint}
}