Showing 1–6 of 6 results
Advanced filters: Author: Fangzhao Wu
  • Interest in using large language models such as ChatGPT has grown rapidly, but concerns about safe and responsible use have emerged, in part because adversarial prompts can bypass existing safeguards in so-called jailbreak attacks. Wu et al. build a dataset of various types of jailbreak attack prompts and demonstrate a simple but effective defence: encapsulating each user prompt in a standard prompt that reminds ChatGPT to respond responsibly (a minimal illustrative sketch follows this entry).

    • Yueqi Xie
    • Jingwei Yi
    • Fangzhao Wu
    Research
    Nature Machine Intelligence
    Volume: 5, P: 1486-1496
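A minimal sketch of the self-reminder idea described above: the user's prompt is wrapped between reminder instructions before being sent to the model. The reminder wording and the `query_model` placeholder are illustrative assumptions, not the authors' exact prompts or API.

```python
# Illustrative sketch of a self-reminder defence: the user's prompt is
# wrapped in a reminder asking the model to answer responsibly.
# The wording and query_model() are placeholders, not the paper's exact prompts or API.

def wrap_with_self_reminder(user_prompt: str) -> str:
    reminder_prefix = (
        "You should be a responsible assistant and should not generate "
        "harmful or misleading content. Please answer the following query "
        "in a responsible way.\n"
    )
    reminder_suffix = (
        "\nRemember, you should be a responsible assistant and should not "
        "generate harmful or misleading content."
    )
    return reminder_prefix + user_prompt + reminder_suffix


def query_model(prompt: str) -> str:
    # Placeholder for a call to an LLM chat API.
    raise NotImplementedError


if __name__ == "__main__":
    print(wrap_with_self_reminder("Tell me how to pick a lock."))
```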
  • To ensure the privacy of processed data, federated learning approaches often incorporate local differential privacy techniques, which nevertheless require communicating a large amount of data that itself needs protection. Here, the authors propose a framework that transfers knowledge in federated learning through a small set of selected data, with privacy guarantees (an illustrative sketch follows this entry).

    • Tao Qi
    • Fangzhao Wu
    • Xing Xie
    Research
    Open Access
    Nature Communications
    Volume: 14, P: 1-9
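One plausible reading of knowledge transfer through a small shared dataset is distillation on soft labels: clients communicate only their predictions on the small set, which the server aggregates. The sketch below is a generic illustration under that assumption (scikit-learn-style models assumed), not the paper's exact algorithm or privacy mechanism.

```python
# Illustrative sketch: knowledge transfer through a small shared dataset.
# Each client sends only its soft predictions on the small set; the server
# averages them into a consensus that clients can then distil from.
# Generic illustration, not the exact method of the paper.
import numpy as np

def client_soft_labels(local_model, small_shared_x):
    """Predict class probabilities on the small shared dataset."""
    return local_model.predict_proba(small_shared_x)  # shape: (n_small, n_classes)

def aggregate_soft_labels(all_client_labels):
    """Server-side average of the clients' predictions."""
    return np.mean(np.stack(all_client_labels, axis=0), axis=0)

# Usage (with any scikit-learn-style local models and a small shared set):
#   soft = [client_soft_labels(m, x_small) for m in client_models]
#   consensus = aggregate_soft_labels(soft)
#   # each client then trains locally to match `consensus` on x_small
```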
  • Mainstream personalization methods rely on centralized graph neural network (GNN) learning on global graphs, which carries considerable privacy risks because user data are privacy-sensitive. Here, the authors present a federated GNN framework for both effective and privacy-preserving personalization (a generic aggregation sketch follows this entry).

    • Chuhan Wu
    • Fangzhao Wu
    • Xing Xie
    Research
    Open Access
    Nature Communications
    Volume: 13, P: 1-10
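A generic federated-averaging sketch for this setting: each client trains a GNN on its local subgraph and uploads parameter updates, which the server averages into a new global model. Privacy protection of the updates (e.g. clipping and noise) is omitted; this is an assumption-laden illustration, not the specific algorithm of the paper.

```python
# Generic FedAvg-style sketch for a federated GNN: clients compute parameter
# updates on local subgraphs, the server averages them. Update protection is
# omitted; this is not the paper's exact algorithm.
import numpy as np

def server_aggregate(global_params, client_updates):
    """Average client updates {name: delta} and apply them to the global parameters."""
    mean_update = {
        name: np.mean([u[name] for u in client_updates], axis=0)
        for name in global_params
    }
    return {name: global_params[name] + mean_update[name] for name in global_params}

# Usage: each client trains its GNN on a local subgraph, returns a dict of
# parameter deltas, and server_aggregate() produces the new global model.
```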
  • This work presents a communication-efficient federated learning method that saves a major fraction of the communication cost. It reveals the advantage of reciprocal learning in machine knowledge transfer and the evolving low-rank properties of deep model updates (a sketch of low-rank update compression follows this entry).

    • Chuhan Wu
    • Fangzhao Wu
    • Xing Xie
    Research
    Open Access
    Nature Communications
    Volume: 13, P: 1-8
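The low-rank property of deep model updates mentioned above can be exploited for communication efficiency: an update matrix is factorized, for example by truncated SVD, and only the thin factors are transmitted. The sketch shows the generic idea, not the paper's full method.

```python
# Sketch of low-rank compression of a model update: keep only the top-k
# singular components, so a client transmits two thin factors instead of
# the full matrix. Generic illustration, not the paper's full method.
import numpy as np

def compress_update(delta: np.ndarray, rank: int):
    """Truncated SVD of a weight update; returns the two thin factors."""
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]   # shapes (m, k) and (k, n)

def decompress_update(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    return left @ right

delta = np.random.randn(256, 128)
left, right = compress_update(delta, rank=8)
approx = decompress_update(left, right)
# Transmitted floats: 256*8 + 8*128 = 3072 instead of 256*128 = 32768.
```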
  • While federated learning is promising for efficient collaborative learning without revealing local data, it remains vulnerable to white-box privacy attacks, suffers from high communication overhead, and struggles to adapt to heterogeneous models. Here, the authors present a federated distillation method that tackles these challenges by leveraging the strengths of knowledge distillation in a federated learning setting (a minimal sketch follows this entry).

    • Jiawei Shao
    • Fangzhao Wu
    • Jun Zhang
    Research
    Open Access
    Nature Communications
    Volume: 15, P: 1-11
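A minimal sketch of the federated-distillation idea in general: instead of exchanging model weights, clients exchange predictions (logits) on a shared proxy dataset, and each client distils the averaged predictions into its own, possibly different, model architecture. The proxy dataset and temperature below are assumptions for illustration, not the paper's exact protocol.

```python
# Generic federated-distillation sketch: clients share only logits on a
# proxy dataset (low communication, architecture-agnostic), and each client
# distils the averaged logits into its own model. Not the paper's exact method.
import numpy as np

def average_client_logits(client_logits):
    """Server: average logits uploaded by the clients on the proxy data."""
    return np.mean(np.stack(client_logits, axis=0), axis=0)

def distillation_targets(avg_logits, temperature=2.0):
    """Soft targets each client trains against (temperature-scaled softmax)."""
    z = avg_logits / temperature
    z -= z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Usage: each client computes logits of shape (n_proxy, n_classes) on the
# shared proxy set; the server averages them and broadcasts the soft targets.
```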