Revolutions in digital tools and technology are rapidly changing the landscape of many different industries across the globe. One of the latest innovations in digital technology is the widespread use of Artificial Intelligence, or AI. Two Ankrom Moisan architects, Michael Great and Ramin Rezvani – Director of Design Strategy and Senior Project Designer, respectively – have recently begun to incorporate AI software into their design process, receiving encouraging results.
Before the advent of Artificial Intelligence software, precedent images sourced from Pinterest or similar sites were used to establish the initial aesthetic direction of a project. Because not every feature of an image would be relevant to a given project, these images were often cropped and/or collaged together, which could cause unnecessary confusion if clients became attached to specific features in precedent images that were never intended to be part of the final design. AI-generated images have the potential to circumvent that issue, providing inspiration imagery that is specific to a particular place, project, client, and design.
Example of an AI-generated precedent image.
Recently, Michael and Ramin have been using AI to create precedent imagery for their projects. In their experience, renderings created by AI software such as Midjourney assist in streamlining the design process and ensuring that clients are on the same page as designers when it comes to project design and direction.
For many, Artificial Intelligence still represents an enigmatic, complicated technology of the future, reserved for the plots of science fiction movies. However, recent developments in technology have made AI and its uses more widespread and accessible than ever. To explain how AI can be utilized to generate unique outcomes and facilitate a cohesive design language for a project, Michael and Ramin sat down to answer some questions about how Midjourney is integrated into the projects they work on and to dispel common misconceptions about the technology.
Michael and Ramin together in the Portland office.
Q: When did you begin incorporating AI into your approach to project design? Why was this something you decided to do?
Our adoption of AI software has aligned with the technology’s continual improvement. When we first started experimenting with architectural imagery, it was giving us broad-stroke building concept imagery. These were by no means a “design,” but it got Ramin and me thinking, ‘Oh, this technology might be evolving to a place where we could utilize it more in the design process; let’s trial this a bit and see what we can get out of it.’
Part of my interest there is that historically architects have used precedent imagery to describe things that don’t exist yet, or to get clients aligned with what the design intent might be. Language doesn’t often get us to a full understanding. So, I think architects have always used imagery, whether that’s precedent imagery or rough sketches, to get alignment about the aesthetic direction of a project. Both Ramin and I have always thought it was strange that in this process you are often using existing buildings to convey new ideas. I think the advantage of using Midjourney and AI is that we can accomplish the same general task of conceptual alignment but show clients unique imagery that is specific to their project, place, and aesthetic.
We just started playing around with Midjourney when it came out. It was really exciting and interesting, and we had no idea what it was, or what it could do, or how powerful it was the first couple of times we were testing it out. Then we tried to make it do something specific, and that’s where it started getting fascinating, because it’s potentially a huge shortcut for certain things, especially generating concept imagery.
We kind of hit a wall with a project where we wanted to be able to quickly visually convey something that didn’t exist. We had some loose ideas influenced by some projects that only exist at a completely different scale than what we were looking at. We thought ‘let’s see if we can figure out how to combine all of these ideas and generate imagery to illustrate to the client where we are going with this.’ Through that process, getting imagery close to what we were trying to do was mind-blowing.
Final project design renders created by Michael and Ramin that were influenced by AI imagery.
Q: Ramin, you’ve said that AI is “like a paintbrush or any creative tool, you just need to figure out how to use it,” and Michael, that “it’s a language. You have to learn it, just like any software.” How did you both go about learning to use these tools, and how long did it take you to learn the language, so-to-speak?
I don’t know how far we actually are on that journey, and I think we have a long way to go. There are a ton of resources out there, though, to help you learn the language through prompt editing. But this is moving so fast that there is now software that will write your prompts for you. You can just add a few descriptive words, and it’ll fill in the rest, phrasing it the way the AI software wants to see it. Every time you use it, you learn something about the output. The more trial and error you go through, the faster you get at arriving at an image you can use.
You have to think differently about the words you use to get the desired imagery. It’s a shift in how you think, since you have to use fewer words to get your idea across. You must be specific and pointed while still giving the software enough information. From that standpoint, I feel like the faster you can get your mind into that mode of thinking, the better off you will be as AI continues to develop, because the premise of using language to direct output will only accelerate from here.
What we all have to adapt to and learn is how to use language to describe what we want machines to do. But even that is probably a couple years from being obsolete. There seems to be an updated version of Midjourney every month that’s substantially better than the last. Even since we last talked, they’ve come out with reverse-prompt capability. So instead of putting a text prompt and getting an image, you can do the opposite, dropping in an image and getting a prompt. By doing so you can start to understand the language in reverse because you’re dropping in an image and the AI is telling you what it sees in text.
I’ve been using it a lot, trying to figure out how to create very specific imagery. Like Michael said, it’s a lot of trial and error. To be able to get usable images, it has definitely required a shift in the way that I think due to the way that the prompts work. I’ve been approaching it almost like a science experiment, changing the prompts slightly with each iteration to see what I get back visually with each update. But also, it’s not like you can master it because it’s changing so rapidly. The next versions will likely have a completely different interface, so the way that you write prompts will likely change too.
Q: Can you walk me through the typical steps of using Midjourney to create precedent imagery?
The process we’ve been following so far is to plug the technology into an existing workflow. On a lot of our projects, we start with charrettes and brainstorming sessions, trying to develop a cohesive concept. AI software like Midjourney increases the speed at which we can reach solutions, because we’re not all going in different design directions.
What we’ve tried to do initially is take the guiding design principles for a project and feed those words into the AI to see what kind of visual representation it would create with our initial thoughts. So again, trying to accelerate the process a bit and get to visuals through words that we’ve already talked about or discussed to create alignment on design direction. As the technology evolves, there will be other ways for us to utilize it, maybe in final renderings, for instance. But right now, I think coming up with precedent imagery is the best use of it.
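The workflow described above, distilling a project's guiding design principles into a text prompt for the image generator, can be sketched in a few lines of Python. The helper function, the principle list, and the specific parameter values here are illustrative assumptions, not the team's actual inputs; `--ar` and `--v` are meant to resemble Midjourney-style aspect-ratio and version flags.

```python
# Hypothetical sketch: composing a Midjourney-style text prompt from a
# project's guiding design principles. All names and values are examples.

def build_prompt(subject, principles, style_params):
    """Join a subject, design-principle descriptors, and render flags into one prompt."""
    descriptors = ", ".join(principles)
    flags = " ".join(f"--{key} {value}" for key, value in style_params.items())
    return f"{subject}, {descriptors} {flags}".strip()

# Example guiding design principles agreed on during a charrette (invented here).
principles = ["warm timber facade", "terraced massing", "Pacific Northwest vernacular"]

prompt = build_prompt(
    subject="mid-rise mixed-use building at dusk",
    principles=principles,
    style_params={"ar": "16:9", "v": "5"},  # aspect ratio and model version flags
)
print(prompt)
```

Each iteration of the trial-and-error loop the designers describe then amounts to adjusting the `principles` list or the flags and regenerating, which keeps the design vocabulary, rather than a found photograph, at the center of the conversation.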
Visual breakdown of how guiding design principles and text prompts are used to generate new precedent imagery renderings with AI software.
Q: [You’ve] said that clients often don’t know what to make of design renderings when they learn that they were created by AI. What are some common misconceptions or misunderstandings about Artificial Intelligence that you’ve encountered since you began using it?
The most common misconception that Ramin and I have run into is that the AI-created images are just precedent imagery pulled from the internet. You have to explain that it’s not a search engine, it’s not finding an existing image on the web. Often, I have to describe what it does in shorthand for people to understand it.
One of the things I noticed right away was people asking, ‘Doesn’t this take the creative process out of architecture now that you have this image designed by AI?’ At least for the time being, I don’t feel that way. As a design team, you still have to generate the foundational ideas and coax the AI to output something that aligns with your goals and vision. It’s a quick way to get the team on the same page and discover interesting emergent qualities from concept intersections that you may not have discovered on your own. In our current workflow, AI-produced visuals are meant to let us quickly study a whole range of different ideas and curate the most interesting aspects of each, based on what we asked the software to do.
Q: Do you have any fears surrounding the use of AI or the rate at which it is evolving, a la Terminator’s Skynet?
Like any new technology, it absolutely has the ability to be used in various ways. I mean, there’s no way around that. I think there are many applications of AI that could be negative, primarily in terms of its ability to manipulate people. But in terms of what we do, there’s not much risk if you understand it’s just one tool out of many that we can use. It’s not like Midjourney will actually produce architecture. It produces ideas that a designer still has to understand, edit, and synthesize into a project’s end design.
It’s hard to tell right now what is going to change and how much it will change. I’m definitely concerned about it, not just for the field of architecture, but for humans in general. I feel like no technology has advanced this quickly before, and it will continue to accelerate. There are just so many unknowns, but I’m sure we will quickly see AI implemented in daily life. I think that we’ll know a lot more in the next five years or so.
AI process design results, highlighting the Midjourney-generated concept renderings that Michael and Ramin synthesized and incorporated into the initial massing render for a project.
Q: With the rapid speed at which AI changes and evolves, how do you envision the future of AI as it relates to architecture? What about the future of architecture as it relates to AI?
I think that AI continues a theme that has remained consistent throughout the last 100 years in terms of how architecture utilizes technology. Usually, it’s used to speed up the design process. One thing about architecture that’s so different from a lot of other professions is that it still relies on artistry, but there’s always a ‘hurry-up’ attitude; we are often pushed to develop designs and drawings faster and faster because of project economics. So, we’re always looking for tools to speed up the process. In addition, architecture is a broad profession. There are people doing wildly different things in the profession their whole career, and I think that could get streamlined.
Outside of Midjourney, there’s a whole slew of AI implementations using other design and construction software that’s meant to speed up how fast we can produce a construction set with fewer people. I think inevitably, that’s where architecture has always gone. 100 years ago, it took 40 people in a room, drawing a set for a high-rise tower by hand. I think in the future, a 40-story tower can probably be designed and drawn by two people. Eventually, the industry will get to a point where one or two people can accomplish that same task in half the time it takes now.
I would say that right now, as designers, we are not spending enough time understanding the place, the people using the building, and the environment surrounding a project. We’re rushing through a lot of those elements to get projects built, so I think where you end up by incorporating AI into that process is more thoughtful buildings, because we don’t have to spend as much time crossing the T’s and dotting the I’s. We can actually think about the project and the building rather than drawing it, and to me, that’s pretty exciting. Architecture can’t do anything but get better through this process. I don’t think anything gets worse. It just gets better.
In my mind, there’s no doubt that there are areas of inefficiency in the architectural process right now, some of which will be resolved using AI. It’s going to accelerate and amplify the amount that an individual can do by themselves, so I think it’ll take fewer people to do the same amount of work.
I think it will allow us to study way more aspects of a project quickly and, like Michael said, make projects significantly better by understanding more of the site’s parameters. It feels like an amplification to me now, but who knows what will happen in six months?
AI-rendered precedent imagery from other projects.
Compared to other Pacific Northwest architecture firms, Ankrom Moisan is a pacesetter in terms of integrating Artificial Intelligence and other digital tools. Few regional competitors use AI at all, while international firms tend to use AI software for design-based research. However you cut it, the digital tools of imagined sci-fi futures are closer than they seem and may, in fact, already be here. It’s a massive paradigm shift that will take some time to get used to, but the good news is that when the AI overlords take over, we will already know how to deal with them.
By Jack Cochran, Marketing Coordinator