In 1957, when local officials in Sydney, Australia, were judging entries in a competition to design their new opera house, they settled on an unusual plan by a Danish architect, Jørn Utzon. In Utzon’s vision, the building would feature a series of curving segments, evoking the billowing sails of a ship, an apparent homage to Sydney’s harbor and maritime orientation.
However, no one was certain if Utzon’s design was structurally possible. So over the next several years, the firm of a Danish engineer, Ove Arup, became increasingly involved in the project. The resulting building — the now-famous Sydney Opera House — became taller and narrower than the one first proposed by Utzon.
“The form was reinvented to an extent,” says Yanni Loukissas, a postdoc in MIT’s Program in Science, Technology, and Society (STS). And while the question of exactly which design changes should be credited to Utzon or Arup’s firm has been a matter of some dispute, the building, Loukissas says, stands as “an example where the engineer was instrumental in reshaping the project.”
In so doing, Arup and his engineering colleagues had a new tool at their disposal: the computer. The Sydney Opera House was among the first major structures in the world where the viability of the design was tested by a computer program representing the forces at work. As Loukissas points out in his new book — Co-Designers: Cultures of Computer Simulation in Architecture, published this spring by Routledge — the project not only produced a landmark structure, but also helped usher a new dynamic into the architecture workplace, one in which engineers, among others, could insert themselves into the design process via virtual tests of a building.
“It’s not only that technology has had an impact on design,” Loukissas says. “Certain new technologies have become part of an ongoing negotiation about what constitutes the work of architects versus that of engineers.”
Inside and out
Loukissas’ new book explores what it means to be an architect — or an engineer working on building design — in today’s computing-saturated world. It’s territory Loukissas knows inside and out: He received his undergraduate degree in architecture from Cornell University and a master’s and a PhD in computation and design from MIT, and worked as an architect in New York and London before returning to academia. The book emerged from an STS research project, funded by the National Science Foundation, about the impact of technology on numerous professions, which Loukissas joined as a student working with MIT professors David Mindell and Sherry Turkle.
“Too often in schools, architecture is still taught as the work of a sole practitioner who independently conceives and refines an architectural idea and hands it off to someone who builds it according to the specifications,” Loukissas says. “Well, it never happens that way. It’s a kind of myth.”
And while architecture has long involved (often uneasy) collaborations between architects and engineers, Loukissas thinks computing has intensified such partnerships. Consider how architects grapple with acoustics: More than a century ago, Boston’s Symphony Hall, which opened in 1900, was designed in consultation with a Harvard University physicist, Wallace Sabine, who had developed a formula for the reverberation time of sound in a room. Today, the Arup firm, where Loukissas researched part of his book, has its own sound consulting division, ArupAcoustics, with its own acoustical simulator, the Arup SoundLab. And whereas Sabine’s formula did not take actual design shapes into account, Arup’s process of computer simulation does.
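Sabine’s formula, mentioned above, predicts a room’s reverberation time from just its volume and the total sound absorption of its surfaces — notably, with no reference to the room’s shape, which is exactly the limitation the passage contrasts with modern simulation. A minimal sketch of the calculation (the hall’s volume, surface areas, and absorption coefficients below are invented for illustration, not measurements of any real building):

```python
# Sabine's reverberation formula (metric units):
#   T60 = 0.161 * V / A
# where V is room volume in cubic meters and A is the total absorption
# in metric sabins: the sum of each surface's area times its absorption
# coefficient. Note the formula uses no information about room shape.

def sabine_rt60(volume_m3, surfaces):
    """Reverberation time in seconds.

    `surfaces` is a list of (area_m2, absorption_coefficient) pairs.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical concert hall; all figures are illustrative assumptions.
hall_volume = 15000.0  # m^3
hall_surfaces = [
    (3000.0, 0.05),  # plaster walls and ceiling
    (1000.0, 0.10),  # wooden stage and floor
    (1200.0, 0.80),  # occupied upholstered seating
]

print(round(sabine_rt60(hall_volume, hall_surfaces), 2))  # → 2.0
```

Two rooms with identical volume and absorption — one a shoebox, one a fan shape — get the same prediction here, which is why shape-aware computer models like Arup’s were a meaningful step beyond the formula.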
“It allows [Arup] to generate a simulated experience of the building before it is built,” Loukissas says. “So they can sit in a room with architects, clients and others, and have a discussion about what they’re hearing, what they like and don’t like. And that can interactively change the form of the building.” The firm earned a place in the renovation of Lincoln Center’s Alice Tully Hall precisely because of its ability to develop those simulations.
‘Doomed to succeed’?
However, Loukissas asserts, computer simulations in architecture come with a catch: Are they realistic? It is possible, he writes in the book, that computer models may be “doomed to succeed,” in Mindell’s phrase. That is, as Loukissas writes, “given the inherent malleability of computer simulations, how do designers assure themselves, as well as clients, collaborators and regulators, of the reliability of their results?” Clearly, all interested parties must think carefully about the assumptions in their models.
“If we want to make building forms really complex,” Loukissas says, “with acoustical and structural and lighting and airflow conditions that are difficult to predict intuitively, then we’re going to have to rely on the simulations to tell us if they will be any good or not.”
As a result, Loukissas sees the new design technologies placing the profession of architecture under new pressures. Could architects use more technical training, to better judge the plausibility of various designs? Or are they better off focusing on conceptual work and keeping the technical side of the profession at arm’s length?
As it stands now, Loukissas thinks, architects are walking a fine line between “casting themselves as generalists” with a useful range of knowledge, on the one hand, and being perceived as lacking “expertise in any one area” on the other. Meanwhile, he adds, “a lot of engineers see technology, and its creators, as occupying the central role in design, with architects being just another branch of this process.”
Changing that might mean changes in architectural education, or having architecture firms further develop their own technical and consulting arms to retain more control of the design process. It would be a mistake, Loukissas asserts, for most architects to assume they can stand pat.
“People have this conception of the work of architects as something that’s been unchanged over millennia,” Loukissas says. “On the contrary, I think what it means to be an architect has very much changed, and it’s still evolving.”