

Where It All Started: Photographer Wins Competition Using AI
In 2023, photographer Boris Eldagsen won the Creative Open category at the Sony World Photography Awards with a portrait titled “Pseudomnesia: The Electrician.” He later publicly refused the prize after revealing that the winning image was entirely AI-generated. I was fascinated by this. I was aware that AI existed, but my finger wasn’t on the pulse. I’d seen a few articles published on the topic but hadn’t really paid attention; I’d assumed AI was just a glorified Google, something you could ask questions and get responses from without the advertising. But Eldagsen WAS on the pulse and challenged the limits of what the technology could achieve: it won him a prize that had only ever been attributed to humans (even if he did decline the award). The incident sparked an ongoing debate about what counts as photography, how AI is defined in creative fields, and whether AI-generated imagery should have its own category. One of the articles covering the story mentioned that he had used an AI tool which was, or was similar to, ‘MidJourney’. Right! I will hunt down MidJourney and have a look at what it is all about… (This is his AI image that won the competition:)
My First Experiment with MidJourney
In September 2023 I signed up for the free version of MidJourney. I very quickly ran out of my free credits, so I purchased a monthly subscription plan to continue experimenting.
On the 8th of September 2023 I created my first image as a paid subscriber.
Using the ‘imagine’ function I typed in:
“1920×1080 pixels image of grey aliens flying through the galaxy in a blue and white hyperspace surrounded by glowing binary digits”
This is what was generated:
You can see that none of the four images (MidJourney kindly generates four images for each prompt) has the 16:9 aspect ratio I asked for, so I needed to learn what I was doing wrong. I found some very basic online tutorials on how MidJourney prompts work, and I learned to define an aspect ratio explicitly, to be more descriptive, and to use the ‘raw’ style.
So now, using the ‘imagine’ function again, I typed in:
“Grey Aliens, floating in space. Quizzical expression. Fingers outstretched. Blue and white galaxy. Night. Hyperspace. Glowing binary numbers, small passing ufos, technical space debris. Long shot. Dramatic lighting, space dust illuminated by comets. High dynamic range. --ar 21:9 --style raw”
This is the first of the four images that were generated:

It seems I was getting somewhere 🙂
What Does the Strange Code at the End of the Prompt Mean?
This was the next part of my journey – what on Earth is that weird code?
--ar simply stands for ‘aspect ratio’. If you wanted a square picture, you would add --ar 1:1 at the end of the prompt; if you wanted a letterbox-shaped image, you might use --ar 16:9 instead.
--style raw is slightly more complicated. This is what it does:
- Disables MidJourney’s default “prettification”: normally, MidJourney adds lighting, contrast, saturation, and composition flair to make images look polished or cinematic.
- Keeps your prompt more literal: the results will more closely reflect the actual words you used in your prompt.
- Useful for realism or custom styling: it’s ideal when you’re trying to get photo-realistic, technical, or documentary-style images, or when building consistent character designs or product mockups.
Let’s run an experiment and compare the ‘raw’ setting against the default, using this simple prompt:
“portrait of a woman in a forest, soft lighting, ethereal atmosphere”
Raw version:

Default version:

The difference isn’t overwhelming, but personally I prefer the look of the ‘raw’ generated images.
Simple Prompt Compared to Complex Prompt
I began to write longer and longer prompts until I realised that I didn’t need to… sometimes…
A complex prompt doesn’t always result in a more complex image. Ideally, you should have in mind what you want the result to be before you start typing. In the example below I wanted an image to serve a specific purpose: a decorative border to be included in a different image altogether.
Let’s compare the complex prompt to the simple prompt:
Complex prompt:
A simple line drawing of an elegant floral design with roses and vines, outlined in black on a white background. The flowers form a symmetrical pattern along the top or bottom edge of a rectangular shape, creating a sophisticated yet minimalistic look suitable for decorative purposes. --ar 31:14 --style raw

Simple prompt:
line drawing of roses and vines --ar 31:14

As you can see, both images have a similar aesthetic quality, but without a lot of further work the image from the simple prompt isn’t suitable for the clean decorative border I had in mind.
How Can I Find Good Examples of What to Prompt?
I am not a professional photographer and I don’t know about focal lengths, lens types and so on, so I didn’t know what to prompt to get a professional-looking image, until I discovered what the ‘describe’ button is for.
The ‘describe’ function lets MidJourney analyse an image and describe it back to you as it sees it. Let’s try it! First of all I need to find an image for MidJourney to describe. I randomly thought of a New York cab at night. Let’s Google it!
Google: New York taxi at night

Out of curiosity, let’s see how MidJourney responds to the same prompt:
MidJourney: New York taxi at night

Okay, not too bad, seeing as it had no extra information to go on.
Anyway, back to the point. Let’s upload the image we found on Google to MidJourney and we can see how MidJourney interprets it…
It came back with four descriptions:
- A bustling city street at night, with yellow cabs and black cars moving along the road, surrounded by tall buildings and lights. Realistic photographs of beautiful city streets, in the style of a skilled photographer. --ar 78:43
- A photograph of a New York City street at night, with yellow cabs and cars driving on it, buildings in the background, a wide-angle shot, and ultra-realistic photography. --ar 78:43
- A photograph of the city streets at night in New York City, with yellow cabs and cars driving on them, and tall buildings in the background. High resolution, high quality, high detail, hyper-realistic, cinematic photography with cinematic lighting. --ar 78:43
- A photograph of the city streets at night, with yellow cabs and cars driving on both sides of an empty street in New York City. The buildings stand tall against the dark sky, illuminated by lights that highlight their architecture. There are traffic lights in front of every car, creating dynamic movement as they pass each other. This scene captures the lively atmosphere of urban life in NYC. Shot using a Canon EOS R5 camera with an RF600mm F4 lens for high-resolution capture. --ar 78:43
Let’s see the image results of these descriptions by clicking the ‘Imagine All’ button.
Here they are in the same order as the prompts:




So there we go. We can see how MidJourney describes an image, and we can use that description to generate similar images. MidJourney references realism, camera angle, lighting, camera brand, lens and so on. It is showing us that the more we know about our subject and what we want to achieve, the better it can help us.
Basically know your stuff.
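If you want to collect the descriptive text from several ‘describe’ results and reuse it without the trailing parameters, the same text convention makes the prompts easy to pull apart. A small sketch (`split_prompt` is a hypothetical helper of mine, not part of MidJourney):

```python
def split_prompt(prompt: str) -> tuple[str, dict[str, str]]:
    """Split a MidJourney-style prompt into its descriptive text
    and a dict of trailing --key value parameters."""
    # Everything before the first " --" is the description;
    # each remaining chunk is "key value".
    text, *raw_params = prompt.split(" --")
    params = {}
    for chunk in raw_params:
        key, _, value = chunk.partition(" ")
        params[key] = value.strip()
    return text.strip(), params

text, params = split_prompt(
    "A photograph of a New York City street at night, wide-angle shot. --ar 78:43"
)
print(params)  # {'ar': '78:43'}
```

This naive split assumes `--` only appears in the parameter section, which holds for the prompts in this post but isn’t guaranteed in general.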
Are All AI Image Generators the Same?
After experimenting with various AI tools I would say absolutely not! Let’s do a quick little experiment and compare the results from ChatGPT and MidJourney using the same prompt.
This prompt will explain the context behind WHY I want the image and WHAT IT IS FOR.
Let’s imagine I am working on a web project for a hairdressing business.
I will use this prompt:
“I have a client who is a hairdresser. She needs a banner image for the top of her website. Her customers are high-end, wealthy and want to look glamorous, so the banner image needs to reflect that with images of beautiful and luxurious hair. Please create me a suitable image based on these criteria.”
MidJourney result:

ChatGPT result:

ChatGPT understood the brief and gave a more appropriate image for the project straight away. ChatGPT seems more responsive to ideas, conversation and the human element, while MidJourney needs direct, explicit commands and doesn’t respond to context, although I do prefer the aesthetic of the MidJourney images. I would need to tweak the MidJourney prompt to get what I needed in this instance.
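To illustrate the kind of tweak I mean, here is a sketch of how the conversational brief might be recast in the explicit, keyword-driven style MidJourney responds to. The specific keywords below are my own guess at the brief, not a tested recipe:

```python
# The conversational brief that worked in ChatGPT (abbreviated):
brief = (
    "I have a client who is a hairdresser. She needs a banner image "
    "for the top of her website. Her customers are high-end..."
)

# A hypothetical keyword-style rewrite for MidJourney: describe the
# picture itself rather than the project around it.
midjourney_prompt = (
    "close-up of luxurious glamorous flowing hair, high-end salon, "
    "elegant soft lighting, photorealistic, website banner composition "
    "--ar 21:9 --style raw"
)

print(midjourney_prompt)
```

The shift is from explaining *why* you want the image to describing *what* should be in the frame, with the context (banner, luxury clientele) translated into visual keywords.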
Conclusion
During my journey of discovery with MidJourney, I have come to realise there is no right or wrong in generating images with AI.
Understanding the differences in how the various platforms operate is a benefit, but what I’ve learnt in my brief journey is that knowing what you want and how you want it before you start is key.
AI, for the moment, is just a tool; how you use it is what matters. Learn the language it understands and work with it to get the best results.
Research the terms professionals use and how they would work with images, and this will set you above your competitors.
Most important of all is to have fun playing and learning 🙂