I am a final-year master's student majoring in Computer Science @Institute of Computing Technology, Chinese Academy of Sciences, under the supervision of Prof. Yiqiang Chen and Prof. Xinlong Jiang. Before that, I received my B.Eng. degree in Software Engineering from @Hainan University.
Research Interests:
⚡ AI4Medical & Healthcare: Medical Image Analysis, Radiology and Biomedical Imaging, Computational Biology.
⚡ Foundation Model: Vision-Language Model, Collaboration of Large and Small Models, Privacy & Security of LLMs.
⚡ Federated Learning: Trustworthiness, Privacy Preservation, Heterogeneity, and FL Applications in Medical & Healthcare.
⚡ Optimization: Distributed Optimization, Online Convex Optimization, Long-term Constraints.
Please feel free to email me if you have any questions; I would be happy to chat.
I am looking for a Ph.D. position starting in Fall 2025. Please contact me if you think I would be a good fit!
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
Bingjie Yan, Qian Chen, Yiqiang Chen†, Xinlong Jiang, Wuliang Huang, Bingyu Wang, Zhirui Wang, Chenlong Gao, Teng Zhang († corresponding author)
ACM CIKM'24, CCF-B, CORE-A (Acceptance Rate: 22.7%) (2024) Oral
Federated learning (FL) enables collaborative learning across multiple biomedical data silos with multimodal foundation models while preserving privacy. Due to heterogeneity in data processing and collection methodologies across medical institutions and the varying medical inspections patients undergo, modal heterogeneity arises in practical scenarios, and severe modal heterogeneity may even prevent model training. For privacy reasons, data transfer is not permitted, which restricts knowledge exchange among clients. To tackle these issues, we propose Buffalo, a cross-modal prototype imputation method for visual-language understanding that incurs only a slight increase in communication cost and improves the performance of fine-tuning general foundation models on downstream biomedical tasks. We conducted extensive experiments on medical report generation and biomedical visual question answering. The results demonstrate that Buffalo can fully utilize data from all clients to improve model generalization compared with other modal imputation methods in three modal heterogeneity scenarios, approaching or even surpassing the performance in the ideal scenario without missing modalities.
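As a rough, illustrative sketch only (not the paper's actual algorithm), cross-modal prototype imputation can be pictured as clients sharing lightweight per-modality prototypes that the server averages, and clients with a missing modality substituting the global prototype for the absent embedding. The names `image_emb`/`text_emb` and the 512-dimensional embeddings below are assumptions.

```python
import torch

def aggregate_prototypes(client_prototypes):
    """Server side: average the per-modality prototypes uploaded by clients."""
    global_protos = {}
    for modality in ("image", "text"):
        protos = [p[modality] for p in client_prototypes if modality in p]
        if protos:
            global_protos[modality] = torch.stack(protos).mean(dim=0)
    return global_protos

def impute_missing_modality(batch, global_protos):
    """Client side: replace a missing modality embedding with the shared prototype."""
    if batch.get("image_emb") is None:
        n = batch["text_emb"].size(0)
        batch["image_emb"] = global_protos["image"].expand(n, -1).clone()
    if batch.get("text_emb") is None:
        n = batch["image_emb"].size(0)
        batch["text_emb"] = global_protos["text"].expand(n, -1).clone()
    return batch

# Usage: one client holds both modalities, another only images (text missing).
client_prototypes = [
    {"image": torch.randn(512), "text": torch.randn(512)},
    {"image": torch.randn(512)},
]
global_protos = aggregate_prototypes(client_prototypes)
batch = {"image_emb": torch.randn(4, 512), "text_emb": None}
batch = impute_missing_modality(batch, global_protos)
```

Only prototypes (not raw samples) cross the client boundary in this sketch, which is why the communication overhead stays small.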
Qian Chen, Yiqiang Chen†, Bingjie Yan, Xinlong Jiang, Xiaojin Zhang, Yan Kang, Teng Zhang, Wuliang Huang, Chenlong Gao, Lixin Fan, Qiang Yang († corresponding author)
ICDE'24, CCF-A (2024) Oral
Federated learning has emerged as a revolutionary innovation in the evolving landscape of global healthcare, fostering collaboration among institutions and facilitating collaborative data analysis. As practical applications continue to proliferate, numerous federations have formed in different regions, and the optimization and sustainable development of federation-pretrained models have emerged as new challenges. These challenges primarily encompass privacy, population shift, and data dependency, which may lead to severe consequences such as the leakage of sensitive information from models and training samples, unfair model performance, and heavy resource burdens. To tackle these issues, we propose FairFusion, a cross-federation model fusion approach that enhances privacy and fairness. FairFusion operates across federations within a Model Trip paradigm, integrating knowledge from diverse federations to continually enhance model performance. Through federated model fusion, multi-objective quantification, and optimization, FairFusion obtains trustworthy solutions that excel in utility, privacy, and fairness. We conduct comprehensive experiments on three public real-world healthcare datasets. The results demonstrate that FairFusion achieves outstanding model fusion performance in terms of utility and fairness across various model structures and subgroups with sensitive attributes while guaranteeing model privacy.
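As a toy illustration (the actual FairFusion procedure and its multi-objective quantification are more involved), cross-federation fusion can be sketched as interpolating the parameters of two federation-pretrained models and scoring each candidate on utility, fairness, and privacy. The scoring functions and the grid search over the fusion weight below are placeholder assumptions.

```python
import copy

def fuse_state_dicts(sd_a, sd_b, alpha):
    """Weighted parameter average of two compatible model state dicts."""
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}

def select_fused_model(template, sd_a, sd_b,
                       utility_fn, fairness_fn, privacy_fn,
                       weights=(1.0, 1.0, 1.0)):
    """Grid-search the fusion weight and keep the candidate with the best
    weighted utility/fairness/privacy score (all assumed higher-is-better)."""
    best_alpha, best_score = None, float("-inf")
    for alpha in [i / 10 for i in range(11)]:
        candidate = copy.deepcopy(template)
        candidate.load_state_dict(fuse_state_dicts(sd_a, sd_b, alpha))
        score = (weights[0] * utility_fn(candidate)
                 + weights[1] * fairness_fn(candidate)
                 + weights[2] * privacy_fn(candidate))
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha, best_score
```

Because only model parameters and aggregate objective scores are exchanged, no federation needs to reveal its training samples in this sketch.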
Bingjie Yan*, Danmin Cao*, Xinlong Jiang, Yiqiang Chen†, Weiwei Dai†, Fan Dong, Wuliang Huang, Teng Zhang, Chenlong Gao, Qian Chen, Zhen Yan, Zhirui Wang (* equal contribution, † corresponding author)
Patterns, Cell Press, JCR-Q1, IF=6.7 (2024)
Federated learning (FL) enables training machine learning models on decentralized medical data while preserving privacy. Despite growing research on FL algorithms and systems, building real-world FL applications still requires extensive expertise, posing barriers for medical researchers. Here we develop FedEYE, an end-to-end FL platform tailored for ophthalmologists without programming skills, so that they can easily create federated projects on tasks such as image classification. The platform provides rich capabilities, scalability, flexible deployment, and separation of concerns. With user-friendly interfaces and without requiring comprehension of the underlying mechanisms, FedEYE strives to democratize FL for ophthalmology.
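The platform itself is codeless for its users; under the hood, a federated image-classification round reduces to the standard FedAvg loop sketched below. This is a generic illustration of the kind of workflow such a platform automates, not FedEYE's internal API, and all names and training details are assumptions.

```python
import copy
import torch
from torch import nn

def local_update(global_model, loader, epochs=1, lr=0.01):
    """Train a copy of the global model on one client's local images."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_round(global_model, client_loaders):
    """One communication round: local training, then sample-weighted averaging."""
    updates = [local_update(global_model, dl) for dl in client_loaders]
    total = sum(n for _, n in updates)
    averaged = {k: sum(sd[k].float() * (n / total) for sd, n in updates)
                for k in updates[0][0]}
    global_model.load_state_dict(averaged)
    return global_model
```

Each hospital's images stay on its own machine; only model weights travel, which is the property that makes the approach attractive for ophthalmic data silos.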