I seriously wonder when the small Midjourney team around David Holz ever gets to sleep. Recently, updates that turn the AI completely upside down have landed at extremely short intervals. Just last week a new algorithm for even higher resolutions was introduced, in between the bot's support for larger Discord servers was extended, and now there is even a completely new AI model available. You can try it out either by adding --beta to your prompt (not to be confused with --upbeta, the new upscaler that is applied automatically when the new beta is active) or via the corresponding setting.
Table of contents
- Midjourney uses Stable Diffusion as new model for Beta
- Midjourney’s announcement of the new Beta
- Earn free GPU hours for rating images
- Midjourney V3 vs Beta with basic prompts
- Impressions of the Midjourney Beta from the community
Midjourney uses Stable Diffusion as new model for Beta
Although it wasn’t mentioned in the announcement, Midjourney is now apparently using the just-released Stable Diffusion as the new model under the hood, refined with its own adjustments.
Midjourney’s announcement of the new Beta
IMPORTANT INFO ABOUT THIS BETA MODEL
– This model has more knowledge and coherence than v3
– Styles are more diverse, but prompt interpretations are more ‘literal’
– You get 2 images if they’re square and 1 if non-square
– The maximum aspect ratio right now is 3:2 or 2:3
– Toggling this mode makes --upbeta default (so default upscale resolution is 2048 x 2048)
– This setting is available with /relax mode
– This setting is not compatible with --hd --stylize or --q
– Using Image prompts without any text prompts is not yet available
Please note this is a ‘test’ mode and may be subject to rapid changes over the coming weeks. We hope you enjoy playing on the cutting-edge with this model.
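Putting the announcement's constraints together, a beta generation in Discord could look like the sketch below. The prompt text and the 3:2 aspect ratio are just illustrative placeholders; only --beta and --ar are taken from the announcement and the standard Midjourney parameter syntax:

```
/imagine prompt: a lighthouse on a rocky cliff at golden hour --beta --ar 3:2
```

Note that, per the announcement, a non-square ratio like 3:2 yields a single image instead of two, and combining this with --hd, --stylize, or --q would not work while the beta toggle is active.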
Earn free GPU hours for rating images
If you want to earn free Fast GPU hours, you can help improve the AI and rate images you particularly like.
1) Go to our ranking site and rate as many beta images as possible (based on whether you like the aesthetic / vibes) https://www.midjourney.com/app/ranking/others/week/ (if you see nudes pls use puke emoji thanks!)
2) Click this link and submit a list of adjectives you think describe our ‘ideal’ vibes and aesthetics: https://o9q981dirmk.typeform.com/to/I4pg29I2
Meanwhile, Midjourney’s regular algorithm is already at version 3 and has made great leaps in its short development time. If you don’t like the style of the new beta, don’t worry: V3 will not suddenly disappear. “We don’t know if there’s a fundamental tradeoff between creativity and coherence, but we want your help to try to help find out. In an ideal world we would have a system that is the best of both worlds, or a slider that blends between extremes. If we can’t do this we will pick and develop two separate models and develop both. No problem.”
Midjourney V3 vs Beta with basic prompts
Of course, I didn’t wait long before playing around with the beta a bit. As is often the case, I entered a few basic terms:
Impressions of the Midjourney Beta from the community
On social media, however, you can already find the first beta results from prompt artists far more talented than me. While Midjourney V3 previously had problems with faces, the beta shines in this area. An extensive Twitter thread gives an impression of the beta, running one prompt through currently 37 different styles. If you want to learn more about the differences between Stable Diffusion, Midjourney, and DALL-E, I can highly recommend this article.