1-expert worse than dense model #107

Open
Muennighoff opened this issue May 8, 2024 · 1 comment
Muennighoff commented May 8, 2024

I'm finding that training a 1-expert dMoE (brown) reaches a worse training loss than an otherwise equivalent dense model (green). Is there a reason this difference is expected, or should the two be the same? Thanks!

[Screenshot, 2024-05-08: training-loss curves for the 1-expert dMoE (brown) and the dense model (green)]
@alexliap

The difference between the dense and MoE variants is that:

  • the dense model has a single MLP after the attention mechanism, and
  • the MoE model has N MLPs (the experts) plus a gating mechanism after the attention (see the sketch after this list).
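For concreteness, here is a minimal sketch of the two blocks in plain PyTorch. This is illustrative only, not the implementation in this repo; the class names, the GELU activation, and the top-1 routing rule are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseFFN(nn.Module):
    """Dense variant: one MLP applied to every token."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.down(F.gelu(self.up(x)))


class Top1MoE(nn.Module):
    """MoE variant: N expert MLPs behind a gating (router) network."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            [DenseFFN(d_model, d_ff) for _ in range(num_experts)]
        )

    def forward(self, x):
        # x: (num_tokens, d_model); send each token to its top-scoring expert.
        probs = F.softmax(self.router(x), dim=-1)
        top_p, top_idx = probs.max(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # Top-1 routing scales the expert output by its gate probability.
                out[mask] = top_p[mask].unsqueeze(-1) * expert(x[mask])
        return out
```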

So in general, each expert is smaller (in parameters) than its dense MLP counterpart, which means it has less "capacity" to learn complex patterns.
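As a rough illustration of that capacity point, here is toy arithmetic with assumed sizes for the case where the dense MLP's parameter budget is split evenly across the experts:

```python
# Assumed sizes, for illustration only.
d_model, d_ff_dense, num_experts = 1024, 4096, 8

dense_mlp_params = 2 * d_model * d_ff_dense        # up + down projections
d_ff_per_expert = d_ff_dense // num_experts        # 512 if the budget is split
per_expert_params = 2 * d_model * d_ff_per_expert  # each expert is 8x smaller

print(f"dense MLP params:  {dense_mlp_params:,}")   # 8,388,608
print(f"per-expert params: {per_expert_params:,}")  # 1,048,576
```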

Next, you might ask yourself: why use MoE architectures at all, then? For efficiency and speed:

  • Firstly, only a subset of the experts is chosen per token, so not all weights are used at inference (+speed, +efficiency).
  • MoE architectures allow the experts to be parallelized, so inference is sped up (+speed).
  • So by increasing a model's size with an MoE architecture, you can keep roughly the same inference load as a smaller model while approaching the performance of an equally large dense model (see the back-of-the-envelope count below).
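A back-of-the-envelope count of total vs. active parameters (again with assumed sizes, and with each expert the same size as the dense MLP) shows how an MoE grows total capacity while keeping the per-token load fixed:

```python
# Assumed sizes, for illustration only.
d_model, d_ff, num_experts, top_k = 1024, 4096, 8, 1

per_expert = 2 * d_model * d_ff                  # one expert ~ the dense MLP
total_expert_params = num_experts * per_expert   # parameters stored in memory
active_per_token = top_k * per_expert            # parameters actually run per token

print(f"total expert params: {total_expert_params:,}")  # 67,108,864
print(f"active per token:    {active_per_token:,}")     # 8,388,608
```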

This would explain the worse performance.
