What if Midjourney had a camera?

AI image generators like Midjourney are anything but cameras, and their creations follow a style of their own – but what if they pretended to be one? After all, you can feed Midjourney appropriate prompt arguments to mimic the image quality of different camera models and settings. A growing library from a community member shows what this looks like.

“Creating an unofficial and non-exhaustive resource testing and comparing various photography settings, film stocks, and terms in Midjourney”, announces Midjourney community member AfterEarth in the show-and-tell section on the official Discord server. “Won’t be overly extensive like other style and knowledge resources, and will focus on proving/disproving various uses of photography terms/film stocks, etc. in MJ. I will post updates to the resource here. Feel free to give suggestions.”

AfterEarth makes all images created as part of this project available in a GitHub repository and adds a short comment to each category, such as focal length, aperture, camera bodies, or film stocks. He also includes the Midjourney prompt used, so you can try it out yourself. However, he limits each prompt to its first 2×2 preview grid instead of upscaling the images.

Boundaries of camera simulation in Midjourney

“I’ve found that one set of things MJ doesn’t understand is lens artifacts. MJ sort of knows what chromatic aberration is, but not astigmatism, barrel or pincushion distortion, coma, etc”, criticizes another community member, Keith Putnam.

“Those are all kind of intrinsically unique aspects of lens/body combinations. I don’t think it’s really right to expect MJ to learn settings and apply them when all it does is combine and join existing material into an image description. It’s the same as how it’s expected that MJ can’t really know the difference between 1/2 second and 1/1000 second for shutter speeds unless it analyzes and stores comparative exif data of images of various settings”, explains AfterEarth.

Camera examples

Focal length

“Adding focal lengths works best when describing a scene, and not necessarily a specific subject. For example bird, 14mm will have less success than busy street, 14mm”, the GitHub page notes. “Additionally, using focal lengths might produce more depth of field bokeh/blur. Similar to film stocks, less common focal lengths will yield poorer results due to insufficient source material on the internet and/or in the dataset.”
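The advice above – attach a focal length to a scene rather than to a single subject – lends itself to batch comparison. The following is a hypothetical helper, not part of AfterEarth's repository; the scene and focal-length lists are made-up examples. It only assembles prompt strings to paste into Midjourney one by one:

```python
# Build Midjourney prompt strings that pair a scene description with a
# focal length, for side-by-side comparison testing.
# The lists below are illustrative examples, not an official test set.
SCENES = ["busy street", "mountain valley", "cafe interior"]
FOCAL_LENGTHS = ["14mm", "35mm", "85mm", "200mm"]

def focal_length_prompts(scenes, focal_lengths):
    """Return one prompt per (scene, focal length) pair."""
    return [f"{scene}, {fl}" for scene in scenes for fl in focal_lengths]

prompts = focal_length_prompts(SCENES, FOCAL_LENGTHS)
print(prompts[0])  # busy street, 14mm
```

Pasting each line into Midjourney and keeping only the first 2×2 preview grid mirrors the methodology AfterEarth describes.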

Shutter speed

“Mention of the word shutter or shutter speed seems to cause motion blur, no matter the specified shutter speed.” According to AfterEarth, Midjourney doesn’t handle shutter speeds well for one specific reason. “Seeing as fractions (ex: 1/1000) are a common mathematical format, it’s fair to assume MJ can’t distinguish between shutter speeds and regular integer expressions. Using s or second also doesn’t produce desired results. This is likely due to the source material not having a clear and consistent set of shutter speed examples across subject matter. Your mileage may vary, and hopefully this will get better in the future.”

Aperture

“Aperture values like f1.4, f2.8, etc. tend to not have much of an influence on depth of field, but small differences can be observed. Mention of the word aperture seems to add more bokeh/blur, no matter the specified f-stop. This is likely due to the source material not having a clear and consistent set of aperture examples across subject matter.” Again, AfterEarth hopes that further training material will help Midjourney better understand aperture inputs. “Your mileage may vary, and hopefully this will get better in the future.”

Film stocks

While with digital photography it doesn’t matter which storage medium is inserted because the image will always look the same, with analog photography it makes a huge difference which film is used. This can be recreated a bit with Midjourney. “In general, using any film stock tends to add lots of saturation, contrast, vignette, and grain. For a less-intense result, try a weighted solution. While certain popular film stocks may produce expected results, not all films act as expected in Midjourney, probably due to insufficient source material on the internet and/or in the dataset.”
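The “weighted solution” mentioned above refers to Midjourney’s multi-prompt syntax, where :: splits a prompt into parts and a trailing number weights each part. As a sketch – the subject and the specific weight values here are made-up examples, not taken from the repository – a film stock’s heavy saturation, contrast, and grain can be dialed down like this:

```python
# Sketch of a Midjourney weighted multi-prompt: "part::weight" segments.
# Giving the film stock a low weight reduces its influence on the result.
# Subject and weights are illustrative assumptions, not from AfterEarth.
def weighted_prompt(parts):
    """parts: list of (text, weight) tuples -> 'text::weight text::weight'."""
    return " ".join(f"{text}::{weight}" for text, weight in parts)

prompt = weighted_prompt([("busy street", 2), ("Kodak Gold 200", 0.5)])
print(prompt)  # busy street::2 Kodak Gold 200::0.5
```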

Camera bodies

Camera bodies are arguably the supreme discipline: if even a human can hardly tell from a photo which camera it was taken with, how should a machine? “Specifying a camera body won’t emulate one piece of technology, as different photos are produced with different settings and film stocks in reality.” But apparently there really are differences: “We do see differences in realism when adding a camera body to a simple subject, however, so Midjourney is likely just averaging all the different photos taken with different settings and film stocks on one specific camera body. For more specific vintage results you could try specifying both a camera body and film stock like ...Canon AE-1, Kodak Gold 200....”
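To explore body/film-stock pairings systematically, you could enumerate the cross product and feed each resulting prompt to Midjourney. A hypothetical sketch – the camera and film lists are examples of the author's choosing, not AfterEarth's actual test set:

```python
from itertools import product

# Cross every camera body with every film stock to produce one prompt each,
# following the "subject, body, film stock" pattern quoted above.
# The lists are illustrative examples only.
BODIES = ["Canon AE-1", "Nikon F3"]
FILMS = ["Kodak Gold 200", "Fujifilm Superia 400"]

def body_film_prompts(subject, bodies, films):
    """One prompt per (body, film) combination for a fixed subject."""
    return [f"{subject}, {body}, {film}"
            for body, film in product(bodies, films)]

for p in body_film_prompts("portrait of a man", BODIES, FILMS):
    print(p)
```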

If you are interested in all the examples created by AfterEarth, which also cover Weights/Phrasing and Stylize/Quality as parameters, take a look at the GitHub repository. More entries are to follow soon. If you want to discuss the project with him on Discord, you can do so here.
