Personal photo

Yuhao Zhang

Natural Language Processing Lab, Northeastern University, Shenyang, China

PhD student | Google Scholar
Contact me: yoohao.zhang@gmail.com

Research interests: speech translation, machine translation, and multi-task learning.

Served as a reviewer for NeurIPS, ACL, AAAI, EMNLP, ICASSP, and other conferences.

Education

2020.9 - present
Computer Science and Technology, PhD student at Northeastern University
NEU NLP Lab, supervisors: Prof. XIAO Tong and Prof. ZHU Jingbo
Ph.D. project: Unified Cross-Modal Modeling for End-to-End Speech Translation
2018.9-2020.7
Computer Software and Theory, Master's student at Northeastern University
NEU NLP Lab, supervisor: Prof. XIAO Tong
Thesis title: Research on Performance and Inference Speed Improvement for Multilingual Neural Machine Translation
2014.9-2018.7
Computer Science and Technology, Bachelor's student at Northeastern University
Thesis title: Fast Decoding Method and Implementation for Neural Machine Translation

Publications

1. Soft Alignment of Modality Space for End-to-end Speech Translation. Yuhao Zhang, Kaiqi Kou, Bei Li, Chen Xu, Chunliang Zhang, Tong Xiao, Jingbo Zhu. ICASSP2024.
2. Rethinking and Improving Multi-task Learning for End-to-end Speech Translation. Yuhao Zhang, Chen Xu, Bei Li, Hao Chen, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. EMNLP2023.
3. Improving End-to-end Speech Translation by Leveraging Auxiliary Speech and Text Data. Yuhao Zhang, Chen Xu, Bojie Hu, Chunliang Zhang, Tong Xiao, Jingbo Zhu. AAAI2023.
4. Information Magnitude Based Dynamic Sub-sampling for Speech-to-text. Yuhao Zhang, Chenghao Gao, Kaiqi Kou, Chen Xu, Tong Xiao, Jingbo Zhu. INTERSPEECH2023.
5. The NiuTrans’s Submission to the IWSLT22 English-to-Chinese Offline Speech Translation Task. Yuhao Zhang, Canan Huang, Chen Xu, Xiaoqian Liu, Bei Li, Anxiang Ma, Tong Xiao, Jingbo Zhu. IWSLT2022.
6. The NiuTrans Machine Translation Systems for WMT20. Yuhao Zhang, Ziyang Wang, Runzhe Cao, et al. WMT2020.
7. Inference acceleration method of neural machine translation system based on coarse-to-fine. Yuhao Zhang, Nuo Xu, Yinqiao Li, Tong Xiao, Jingbo Zhu. CCMT2019.
8. Bridging the Gaps of Both Modality and Language: Synchronous Bilingual CTC for Speech Translation and Speech Recognition. Chen Xu, Xiaoqian Liu, Erfeng He, Yuhao Zhang, Qianqian Dong, Tong Xiao, Jingbo Zhu, Dapeng Man, Wu Yang. ICASSP2024.
9. Bridging the Granularity Gap for Acoustic Modeling. Chen Xu, Yuhao Zhang, Chengbo Jiao, Xiaoqian Liu, Chi Hu, Xin Zeng, Tong Xiao, Anxiang Ma, Huizhen Wang, Jingbo Zhu. ACL2023 findings.
10. An End-to-End Automatic Speech Recognition Method Based on Multiscale Modeling. Chen Hao, Runlai Zhang, Yuhao Zhang, Chenghao Gao, Chen Xu, Anxiang Ma, Tong Xiao, Jingbo Zhu. CCL2023.
11. Stacked Acoustic-and-Textual Encoding: Integrating the Pre-trained Models into Speech Translation Encoders. Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, Jingbo Zhu. ACL2021.
12. Learning Architectures from an Extended Search Space for Language Modeling. Yinqiao Li, Chi Hu, Yuhao Zhang, Nuo Xu, Yufan Jiang, Tong Xiao, Jingbo Zhu, Tongran Liu, Changliang Li. ACL2020.
13. CTC-based Non-autoregressive Speech Translation. Chen Xu, Xiaoqian Liu, Xiaowen Liu, Qingxuan Sun, Yuhao Zhang, Murun Yang, Qianqian Dong, Tom Ko, Mingxuan Wang, Tong Xiao, Anxiang Ma, Jingbo Zhu. ACL2023.
14. The NiuTrans End-to-End Speech Translation System for IWSLT23 English-to-Chinese Offline Task. Yuchen Han, Xiaoqian Liu, Hao Chen, Yuhao Zhang, Chen Xu, Tong Xiao, Jingbo Zhu. IWSLT2023.
15. The NiuTrans machine translation systems for WMT19. Bei Li, Yinqiao Li, Chen Xu, Ye Lin, Jiqiang Liu, Hui Liu, Ziyang Wang, Yuhao Zhang, et al. WMT2019.

Projects

An end-to-end speech-to-text toolkit, NiuTrans.ST 2023.09 - present
The toolkit covers three tasks: ASR, MT, and ST. It is built on the NiuTensor library, and our goal is a high-speed inference framework for speech-to-text generation.
Project website: https://github.com/xiaozhang521/NiuTrans.ST
Soft alignment method for end-to-end speech translation (ICASSP2024) 2023.05-2023.09
To address the inconsistency between speech and text representations, we propose a soft alignment method in the representation space based on adversarial training. The approach achieves strong speech translation performance and handles speech recognition, text translation, and speech translation with a single model, with results close to those of the individual models.
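A minimal PyTorch sketch of the adversarial ingredient of this idea: a gradient-reversal layer plus a modality discriminator that pushes speech and text representations toward a shared space. The module names, sizes, and mean pooling below are illustrative assumptions, not the released ICASSP 2024 implementation.

```python
# Adversarial soft alignment sketch: the discriminator tries to tell speech
# from text states, while reversed gradients make the encoders fool it.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class ModalityDiscriminator(nn.Module):
    """Predicts whether a pooled hidden state comes from the speech or text encoder."""
    def __init__(self, d_model=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 2)
        )

    def forward(self, h, lamb=1.0):
        return self.net(GradReverse.apply(h, lamb))

if __name__ == "__main__":
    disc = ModalityDiscriminator()
    speech_h = torch.randn(8, 100, 512)   # (batch, frames, dim) from a speech encoder
    text_h = torch.randn(8, 30, 512)      # (batch, tokens, dim) from a text encoder
    logits = disc(torch.cat([speech_h.mean(1), text_h.mean(1)], dim=0))
    labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
    adv_loss = nn.functional.cross_entropy(logits, labels)
    print(adv_loss.item())
```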
An End-to-End Automatic Speech Recognition Method Based on Multi-scale Modeling (CCL2023) 2023.02-2023.05
We design a multi-scale modeling method that aligns the original audio features to phonemes, characters, and sub-words step by step, and a gated network that integrates the multi-scale features. This method effectively alleviates the modality gap caused by length inconsistency.
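A minimal PyTorch sketch of the gating step, assuming the phoneme-, character-, and subword-level features have already been aligned to a common length; the scalar gate design and dimensions are illustrative assumptions, not the CCL 2023 implementation.

```python
# Gated fusion of multi-scale features: a learned softmax gate weights each scale
# per position, and the weighted scales are summed into a single representation.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, d_model=512, num_scales=3):
        super().__init__()
        # One gate logit per scale, computed from the concatenated features.
        self.gate = nn.Linear(num_scales * d_model, num_scales)

    def forward(self, features):
        # features: list of tensors, each (batch, length, d_model), length-aligned.
        stacked = torch.stack(features, dim=-2)                 # (B, T, S, D)
        concat = torch.cat(features, dim=-1)                    # (B, T, S*D)
        weights = torch.softmax(self.gate(concat), dim=-1)      # (B, T, S)
        return (weights.unsqueeze(-1) * stacked).sum(dim=-2)    # (B, T, D)

if __name__ == "__main__":
    fusion = GatedFusion()
    feats = [torch.randn(4, 50, 512) for _ in range(3)]  # phoneme / char / subword scales
    print(fusion(feats).shape)  # torch.Size([4, 50, 512])
```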
End-to-end Speech Translation Evaluation (IWSLT2023) 2023.02-2023.04
We participated in the IWSLT 2023 English-to-Chinese offline end-to-end speech translation (constrained data) track, mainly implementing multi-task learning, multi-scale modeling, stacked acoustic-and-textual encoding, and SHAS-based VAD. Our system ranked 1st in this track.
Information Magnitude Based Dynamic Sub-sampling for Speech-to-text (Interspeech2023) 2022.05-2023.01
We design a dynamic down-sampling strategy for audio frames to address the problem of long speech sequences. A Gaussian mixture model distinguishes frames by the magnitude of information they carry, and the sampling stride is then set dynamically according to that magnitude. On speech translation and speech recognition tasks, the method achieves a larger compression ratio than static down-sampling without losing performance.
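A minimal sketch of the idea, assuming the per-frame magnitude is an L2 norm and a two-component Gaussian mixture separates informative from less informative frames; the strides and component choice are illustrative assumptions, not the Interspeech 2023 implementation.

```python
# Magnitude-based dynamic sub-sampling: fit a 2-component GMM on per-frame
# magnitudes, then sample densely in high-magnitude regions and sparsely elsewhere.
import numpy as np
from sklearn.mixture import GaussianMixture

def dynamic_subsample(frames, dense_stride=2, sparse_stride=4):
    """frames: (T, D) array of acoustic features; returns a subsampled copy."""
    magnitude = np.linalg.norm(frames, axis=1, keepdims=True)       # (T, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(magnitude)
    labels = gmm.predict(magnitude)                                 # component id per frame
    high = labels == int(np.argmax(gmm.means_.ravel()))             # high-magnitude component

    keep, t = [], 0
    while t < len(frames):
        keep.append(t)
        t += dense_stride if high[t] else sparse_stride
    return frames[keep]

if __name__ == "__main__":
    x = np.random.randn(1000, 80)   # e.g., 1000 frames of 80-dim filterbanks
    print(dynamic_subsample(x).shape)
```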
Rethinking and Improving Multi-task Learning for End-to-end Speech Translation (EMNLP2023) 2022.06-2023.06
Through a quantitative analysis of when and how strongly the auxiliary tasks interact with speech translation, we find that the length gap between the speech and text modalities hinders the alignment methods, and that the difference between speech and text representations remains significant. Motivated by these findings, we design a lookback mechanism and a local-to-global training method that improve performance and achieve state-of-the-art (SOTA) results under limited data.
End-to-end Speech Translation Evaluation (IWSLT2022) 2022.02-2022.05
We participated in the offline end-to-end English-to-Chinese speech translation task, mainly using decoupled pre-training, multi-stage pre-training, and multi-view fusion, further combined with VAD consistency training and model ensembling.
Improving end-to-end speech translation by leveraging auxiliary speech and text data (AAAI2023) 2021.03-2021.11
To address the scarcity of training data for end-to-end speech translation, we propose a multi-stage pre-training strategy for building the speech translation system. Our method can exploit all types of labeled and unlabeled text and speech data, and achieves a new state-of-the-art performance. The work was partially completed during an internship at the Information Security Department of Tencent.
Stacked acoustic-and-textual encoding: Integrating the pre-trained models into speech translation encoders (ACL2021) 2020.09-2021.01
To address the issue of unstable training of end-to-end speech translation models, we propose a decoupled pre-training method that separately pre-trains the speech recognition model and the text translation model; an adapter is then employed to integrate the two. For the first time, the end-to-end system outperforms the cascade system under limited-data conditions.
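A minimal PyTorch sketch of the stacking, with stand-in encoders and an illustrative adapter design; it is a sketch of the general idea, not the ACL 2021 code.

```python
# Stacked acoustic-and-textual encoding sketch: a pre-trained acoustic encoder
# feeds a small residual adapter, whose output is consumed by a pre-trained
# textual (MT) encoder.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bridges acoustic encoder outputs into the textual encoder's input space."""
    def __init__(self, d_model=512, d_hidden=1024):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj = nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                                  nn.Linear(d_hidden, d_model))

    def forward(self, x):
        return x + self.proj(self.norm(x))   # residual keeps pre-trained features usable

class StackedEncoder(nn.Module):
    def __init__(self, acoustic_encoder, textual_encoder, d_model=512):
        super().__init__()
        self.acoustic = acoustic_encoder      # e.g., pre-trained on ASR
        self.adapter = Adapter(d_model)
        self.textual = textual_encoder        # e.g., pre-trained on MT

    def forward(self, speech_features):
        return self.textual(self.adapter(self.acoustic(speech_features)))

if __name__ == "__main__":
    # Stand-in encoders; the real ones would be full Transformer stacks.
    acoustic = nn.Linear(80, 512)
    textual = nn.TransformerEncoder(nn.TransformerEncoderLayer(512, 8, batch_first=True), 2)
    model = StackedEncoder(acoustic, textual)
    print(model(torch.randn(4, 100, 80)).shape)  # torch.Size([4, 100, 512])
```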
Tensor computing library for offline devices 2020.11-2020.12
We use the OpenCL interface of the Arm Compute Library to rewrite the operations in NiuTensor and run them on Mali-architecture chips. The system has been applied to the Translation Pen Scanner.
Quantifying Transfer Learning for Multilingual Neural Machine Translation (Under review) 2020.06-2021.03
We conduct a quantitative analysis of the transfer and interference between rich-resource and low-resource languages in a multilingual translation model, and find that transfer mainly occurs in the early training stage, while interference arises primarily in the later stage.
Machine Translation Evaluation (WMT2020) 2020.03-2020.05
We participated in three rich-resource tasks (English to/from Japanese and English to Chinese) and two low-resource tasks (Tamil to English and Inuktitut to English). Our training strategies included multilingual models, large-capacity models, iterative fine-tuning, pseudo-data generation with Top-p sampling, and domain adaptation with a pre-trained language model, among others. English to/from Japanese ranked first in automatic evaluation, and English to Japanese and Inuktitut to English won first place in manual evaluation.
Learning architectures from an extended search space for language modeling (ACL2020) 2019.10-2019.12
We apply neural architecture search to the neural machine translation task. We first use a differentiable method (e.g., DARTS) to search for intra-cell and inter-cell structures on the language modeling task, and then apply the searched structure to neural machine translation. Experimental results show that the searched structure improves performance on the IWSLT English-to-Vietnamese task.
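A minimal PyTorch sketch of the differentiable relaxation used during such a search: each edge computes a softmax-weighted mixture of candidate operations, and the mixture weights (architecture parameters) are learned jointly with the model. The candidate set below is an illustrative assumption, not the paper's actual search space.

```python
# DARTS-style mixed operation: architecture logits select among candidate ops
# via a softmax, keeping the whole search differentiable.
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Linear(d_model, d_model),
            nn.Sequential(nn.Linear(d_model, d_model), nn.Tanh()),
        ])
        # Architecture parameters: one logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

if __name__ == "__main__":
    op = MixedOp()
    out = op(torch.randn(4, 10, 256))
    print(out.shape)  # after search, the op with the largest alpha is retained
```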
Inference acceleration method of neural machine translation system based on coarse-to-fine (CCMT2019) 2019.06-2019.08
We accelerate the attention operation in the inference of the Transformer model. We use the attention distribution of each layer and compute its information entropy as a measure of how much information the layer carries. We find that the amount of information differs across layers while the amount of computation is the same, so we design a coarse-to-fine method that compresses the parameters of the attention computation and improves decoding speed by about 10% without performance degradation.
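A minimal sketch of the entropy measurement, assuming attention weights of shape (batch, heads, query, key); the layer statistics below are synthetic and only illustrate the computation, not the CCMT 2019 system.

```python
# Per-layer attention entropy: lower entropy means sharper, more concentrated
# attention, which suggests the layer can tolerate a coarser, cheaper computation.
import torch

def attention_entropy(attn):
    """attn: (batch, heads, query_len, key_len) weights summing to 1 over the last dim."""
    entropy = -(attn * torch.log(attn + 1e-9)).sum(dim=-1)  # entropy per query position
    return entropy.mean()

if __name__ == "__main__":
    # Synthetic attention maps of varying sharpness: a larger scale gives a
    # sharper distribution and hence lower entropy.
    for i, scale in enumerate((0.5, 1.0, 2.0)):
        attn = torch.softmax(torch.randn(2, 8, 20, 20) * scale, dim=-1)
        print(f"layer {i}: entropy = {attention_entropy(attn).item():.3f}")
```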
Machine Translation Evaluation (WMT2019) 2019.02-2019.04
We participated in the Gujarati-to-English translation task, primarily using transfer learning, linguistic prior knowledge, back-translation with diversity, ensemble search, multi-feature re-ranking, and the DLCL deep network. Our system ranked first in both automatic and manual evaluation.
NiuTensor Deep Learning Open Source Computing Library 2018.1 - present
We have developed an efficient tensor computing library (similar to TensorFlow or PyTorch) named NiuTensor, based on C++ and CUDA. Its main features are: 1. low-level, high-efficiency CUDA operators; 2. support for speech-to-text and text generation tasks; 3. support for mobile devices; 4. optimized neural machine translation inference (kernel fusion and high-concurrency GPU algorithms). The project has been applied to the online system of NiuTrans.
Project website: https://github.com/NiuTrans/NiuTensor

Awards
