Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Models

Authors
Fan Yang · Rui Meng · Trudi Di Qi · Yuxin Wen

ABSTRACT

We argue that while replacing data with model parameters characterizes the present of Federated Learning (FL), replacing parameters with preferences represents a more scalable and privacy-preserving future. Preferences capture high-level user intent and align with downstream goals, making them ideal for FL where clients share reward signals instead of raw model parameters. We introduce Mixture-of-Rewards (MoR), which trains a lightweight routing network to integrate preference signals of different styles and resolve conflicts while maintaining privacy. Experiments validate that MoR consistently outperforms existing parameter-averaging approaches, especially under client heterogeneity and diverse model architectures.
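To make the aggregation idea concrete, here is a minimal, hypothetical sketch of how a Mixture-of-Rewards-style router could combine per-client preference signals. All names (`MoRRouter`, `aggregate`, the single linear routing layer) are illustrative assumptions, not the paper's actual implementation: each client contributes a scalar reward for a candidate output, and a lightweight routing network maps the candidate's features to softmax weights over those rewards.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class MoRRouter:
    """Hypothetical sketch of a Mixture-of-Rewards routing network.

    Clients share only scalar reward signals (not parameters); a
    lightweight linear router learns how much to trust each client's
    preference for a given input, resolving conflicting signals by a
    learned convex combination.
    """

    def __init__(self, feat_dim, n_clients, seed=0):
        rng = np.random.default_rng(seed)
        # One logit per client reward, computed from input features.
        self.W = rng.normal(scale=0.1, size=(n_clients, feat_dim))

    def weights(self, x):
        # Routing weights over the clients' reward signals (sum to 1).
        return softmax(self.W @ x)

    def aggregate(self, x, client_rewards):
        # Convex combination of per-client rewards; the aggregate
        # always lies between the minimum and maximum client reward.
        return float(self.weights(x) @ client_rewards)

# Toy usage: three heterogeneous clients disagree about one candidate.
router = MoRRouter(feat_dim=4, n_clients=3)
x = np.array([0.5, -1.0, 0.3, 0.8])
rewards = np.array([1.0, -1.0, 0.2])  # conflicting preference signals
agg = router.aggregate(x, rewards)
```

Because the router outputs a convex combination, no single client's preference can push the aggregate reward outside the range of the submitted signals, which is one simple way conflicts could be kept bounded.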

