Towards a Digital Activation of Our Past

As we look back on the previous chapters, it seems that we have only just begun to explore AI and its utility to the field of architecture. In this room, we will focus on the discoveries we have made and the interesting problems and challenges ahead, and reflect on the consequences of our findings. In doing so, we hope to lay a foundation for continued research into the ideas and discoveries made along our journey.



CHAPTER 6 OVERVIEW

Chapter 6: Towards a digital activation of our past






The landscape of AI and architecture seemingly offers newfound opportunities at many different scales. In chapter five we pushed into new territory and envisioned a restructuring of the building code to elevate our built environment’s cognition. In chapter four we saw the first traces of automating the creation of large parts of architectural representation. Looking back on the discoveries we have made, it seems appropriate to briefly reflect on their conceivable significance and possible consequences. Let us begin by looking back on our explorations of our built collective memories.

6.1 Digital Activation: Data and the Value of Our Past

In chapter three we explored the archives of OBOS in Oslo. The vast storage, containing thousands of project drawings and details, had yet to be properly converted into digital form. From the research conducted in our project, this lag in digital conversion seems to be the norm throughout much of the construction industry. As for the digital material already available on company servers, it also appears broadly left untouched with respect to predictive data analysis or the development of automated drawing tools.

The amount of data and information, and the possibility of extracting patterns from valuable past projects, is difficult to valuate precisely. From our discoveries, however, these archives seem to hold immense potential for value through proper use. It appears that their owners are in direct control of treasures unbeknownst to themselves. Clearly, the initial cost of converting such material into digital form is substantial for many. However, if the conversion is done in a complementary fashion with the creation of cleverly designed standards for AI utilisation and data analysis, such costs very plausibly become greatly favourable.

As the AEC industry wakes up to the value and opportunity within architectural and construction data, we will also be forced to tackle new and interesting questions. Who owns a property’s data? When you buy a house, is the digital twin yours, or is it owned by those who built it? Is it shared between the actors? A central challenge for the statistical approaches of machine learning is today’s need for truly large amounts of data to achieve satisfying results. How will we facilitate research into topics that require such large data sets? A crucial and early finding was the need for an official AEC AI-Standard. Could we utilise current inventions to circumvent problems of ownership and allow for cooperative research into the data? Blockchain technology, for instance, could give users access to analyse data without giving the data away. The technology could offer virtual rooms, where encrypted data is analysed and only the results are exported for viewing.
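To make the principle slightly more concrete, the following is a minimal, purely illustrative sketch in Python of the “virtual room” idea: the data holder keeps the raw archive, and outside researchers only ever receive small, aggregate results. It is not a blockchain implementation, and the class name, record fields, and export rule are all hypothetical.

# Illustrative sketch only: a "virtual data room" that lets an outside
# researcher run analyses on a private archive while exporting nothing
# but small, aggregate results. A production system would add encryption,
# access control, and audit trails; those are omitted here.
from statistics import mean
from typing import Callable, Sequence

class VirtualDataRoom:
    def __init__(self, records: Sequence[dict]):
        self._records = list(records)   # raw data never leaves this object

    def run_analysis(self, analysis: Callable[[Sequence[dict]], dict]) -> dict:
        result = analysis(self._records)
        # Only allow small, aggregate outputs to be exported.
        if not isinstance(result, dict) or len(result) > 20:
            raise ValueError("Only small, aggregate results may be exported.")
        return result

# Hypothetical archive of digitised floor-plan metadata.
archive = VirtualDataRoom([
    {"year": 2012, "area_m2": 58.0, "rooms": 2},
    {"year": 2016, "area_m2": 72.5, "rooms": 3},
    {"year": 2019, "area_m2": 64.0, "rooms": 3},
])

summary = archive.run_analysis(
    lambda rows: {"mean_area_m2": round(mean(r["area_m2"] for r in rows), 1)}
)
print(summary)  # {'mean_area_m2': 64.8}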

A few years back, The Economist ran a leading article, advertised on its cover, titled “The world’s most valuable resource is no longer oil, but data”.1 The analogy, however, has been suggested to be a bit too simplistic. “Oil is a commodity–to be bought and sold. Data is an asset, an asset that grows in value through use.”2 It seems clear that these valuable assets have yet to be activated in a meaningful sense for much of the AEC industry. As such, the time is opportune for first movers to leap ahead.

_____________________

1. The Economist (2017), “The world’s most valuable resource is no longer oil, but data”, 6 May 2017

2. Open Data Science Conference (2019), “Data Valuation”, Accessed: May 2020, Available at: https://medium.com/@ODSC/data-valuation-what-is-your-data-worth-and-how-do-you-value-it-b0a15c64e516

6.2 Implications of Artificial Architectural Intelligence

Image placeholder

From the results of our practical experimentation in chapter 4, and the work and results of the international research preceding this project, we seem to be on the edge of a change that could affect the very core of the architect’s profession. Seemingly, patterns of architectural representation can be captured and generated automatically with current technology. It is no far stretch of the imagination to begin creating tools that near-instantaneously produce plans, based on thousands of excellent architectural plan layouts or sections, and apply them directly to the footprints of new or existing projects under development.3
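As a purely illustrative sketch of this footprint-to-plan idea, the following Python fragment shows an untrained image-to-image generator mapping a footprint mask to a plan image, loosely in the spirit of the GAN-based methods referenced in note 3. The tiny network, its dimensions, and the dummy footprint are assumptions for demonstration only; a real tool would require training on large collections of plan layouts.

# Hypothetical sketch: a small image-to-image generator that would map a
# building footprint mask to a plan image once trained. Untrained here.
import torch
import torch.nn as nn

class FootprintToPlan(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),   # footprint mask in
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),   # RGB plan image out
        )

    def forward(self, footprint_mask):
        return self.net(footprint_mask)

generator = FootprintToPlan()
footprint = torch.zeros(1, 1, 256, 256)   # dummy 256 x 256 footprint mask
footprint[:, :, 64:192, 48:208] = 1.0     # a rectangular site outline
plan = generator(footprint)               # would be a plan image after training
print(plan.shape)                         # torch.Size([1, 3, 256, 256])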


“Old fashions please me best; I am not so nice
To change true rules for odd inventions.”

W. Shakespeare, The Taming of the Shrew, Act 3, Scene 1



How are we to react to these new suggestions of method and approach? Are they odd inventions, or meaningful pieces in a relevant puzzle building our future? So far, these discoveries should in no way discourage the architect. Instead, they may well empower our ability to search for, and to find, solutions to creative problems. Indeed, artificial learning could plausibly offer us a greater understanding of our own field. What would we discover, and what would we write about in architectural history, if we could meaningfully leverage artificial learning on the patterns found in the totality of our currently available data on buildings and cities from the beginning of written history?

As digital algorithms increasingly excel in many of our previous contributory roles, how could they change our work? For years, Mike Haley, head of machine intelligence and vice president of research at Autodesk, has spoken of the foreseeable changes within the design and creation experience. Haley’s message builds upon the ideas addressed in chapter 2 by Negroponte, where man and machine work together in increasingly symbiotic and natural relationships. Haley suggests that an AI-powered design process will drastically improve our future design experience.4 For many years, I have wondered how we are still so broadly tied to the computer keyboard and mouse as the connectors between our ideas and intentions and the computers themselves. Seeing how the keyboard and mouse were first invented in the 1960s,5 surely they are ripe for innovation? In recent decades we have seen an increase in direct touch interfaces and voice-controlled software and technology. Haley here points to AI’s increasing competency in natural language processing: innovation, he argues, that promises more natural and intuitive ways of interacting with our new AI co-designers. This would allow nearly instantaneous and effortless access to substantial numbers of varying configurations, provided by the inter- and extrapolation of our parametrised ideas.
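To illustrate what the inter- and extrapolation of parametrised ideas could look like at its simplest, the following sketch blends two hypothetical design-parameter vectors into a spectrum of configurations; the parameter names and values are invented for demonstration only.

# Minimal sketch: blending two parametrised design variants.
import numpy as np

# Hypothetical parameters: [width_m, depth_m, storeys, window_ratio]
variant_a = np.array([12.0, 8.0, 2.0, 0.25])
variant_b = np.array([18.0, 10.0, 4.0, 0.40])

def blend(a, b, t):
    # t = 0 gives variant a, t = 1 gives variant b; t > 1 extrapolates past b.
    return (1 - t) * a + t * b

for t in (0.0, 0.5, 1.0, 1.25):
    print(f"t = {t:.2f} ->", np.round(blend(variant_a, variant_b, t), 2))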

In chapter 5 we addressed the important subject of creativity. A key point was the current inability of intelligent computational algorithms to produce innovation and novelty outside the solution space provided by their training samples. Without the proper structure, and without access to large amounts of time and real randomness (whatever randomness in the end might be), the AI’s limits on innovation seem likely to remain meaningful for the foreseeable future. However, with time and the full development of the tools imagined, it may be reasonable to assume that one architect could fulfil the role of more than a few of his or her present colleagues.

Importantly, what we could be seeing in the coming future is a change in the working landscape; specifically, one that architects need to adapt to if we are to uphold a meaningful responsibility for the built environment at large. If our cultural evolution increasingly allows digital tools, such as intelligent computer algorithms, to control the structuring of our umwelt and living spaces, it becomes paramount to have a greater number of architects who can artfully and competently explain to computers what architectural quality might be. In fact, it seems dangerous to have only a very small number of architects engage with such conversions. The topic of architectural quality is too rich to be converted into computational form by only the few. As Magnus Rönn at the Royal Institute of Technology hypothesises on the topic of quality in architecture: “[…] quality in architecture and urban design should be understood as an open and debatable key concept resulting in disagreement and discussion.”6 This very discussion is key to a healthy architectural evolution, and it may in the future be spoken, partly, in digital languages.

_____________________

3. The previously cited 2019 project from the Harvard Graduate School of Design, by Stanislas Chaillou, has already demonstrated a conceptual three-part GAN-stack method for automating this process.

4. For a brief introduction: https://www.autodesk.com/autodesk-university/content/future-design-powered-ai-mike-haley-2019

5. These inventions are naturally part of a longer contextual timeline of multiple inventors and designs. For the computer mouse, I refer to Douglas Engelbart’s invention of 1964. The history of the keyboard can be traced as far back as the typewriters of Sholes and Glidden, with the “QWERTY” layout established in 1874.

6. Magnus Rönn (2014), "Quality in Architecture - A Disputed Concept", ARCC Conference Repository, https://doi.org/10.17831/rep:arcc%y335

6.3 The Architect and the 21st Century

At the beginning of this thesis, I pointed out a possible need for a rebalancing of the architect’s stance within the technological and quantitative arena. Having further explored the technological landscape, let us briefly, in retrospect, address the topic.

Surely, there is something ineffable about the quality and the creation of hand drawings. The lack of perceived distance, and often of friction, between the formulation of an idea within oneself and its material expression on paper - as an anchor, a message, and a representation - embodies a deep feeling of intimacy, empathy, and perhaps even nostalgia. Likewise, as our eyes and senses interact with physical models, we find ourselves strangely transported into a profound state of imagination, seeing lives unfold within the future, unrealised structures in front of us. These tools and methods have lost no significance to this day. Instead, they are joined, differentiated, and perhaps elevated by the addition of many more tools of expression, experimentation, and production.


“I learned very early the difference between knowing the name of something and knowing something.”

Richard P. Feynman



Today, however, most of us cannot understand, nor perhaps even grasp, the operational nature and mechanics of our daily digital tools. Such a distance, marked by a lack of understanding of the principal chains of events leading from our inputs and actions to the on-screen results, could plausibly build unseen barriers to intimacy, exploration, and the implementation of new technological tools.

Much technology also operates on the premise of giving us enough customisability to feel as if our phone, or our computer, truly is “ours”, as if we knew it well. However, how could that be so? The devices increasingly operate as portals to the owners of the software with which we interact. And surely, most of us must also admit to a lack of principled knowledge and understanding of how these tools actually work. Nonetheless, we still seem captured, both by the comforts they produce and by the superficial space of interaction in which we achieve control.

Image placeholder

Our exploration has suggested the arrival of new environments for creative unfolding, specifically through new technology. The question of how we will seek to understand this technology therefore seems meaningfully important. How would we react to changes that could fundamentally alter our current contributory role? In a digital age, how do we truly define the built environment? Are the ethical challenges introduced by AI of concern to architects? Furthermore, to what degree is it our responsibility to truly leverage and effect our knowledge in the continually evolving natural and social habitat in which we live and which we serve? I do not believe that this discussion is a question of letting go of familiar traditions. Rather, it posits a need for “yes, and”.

In a world of experts and specialists, the role of the expert generalist may also be more needed than ever. A multi-disciplinary, systemic perspective, coupled with an ability to create consensus across stakeholders from the unique generalist vantage point, seems deeply valuable going forward. We should therefore be mindful of the tendency to resist change or to take the path of least effort. This seems a deeply human and natural conservational principle.7 If we should fail to exhibit the needed curiosity and effort in our own adaptation to the changes of a dynamic and digital 21st-century environment, we will, at the very least, be hard pressed to blame anyone but ourselves.

_____________________

7. The principle of least effort was first addressed in the late 19th century by Guillaume Ferrero and later extensively elaborated upon by George Kingsley Zipf in Human Behavior and the Principle of Least Effort (Addison-Wesley Press, 1949).

6.4 Going Further: What is Next?

At the beginning of March, the trip to Oslo and the collection of plan layouts were undertaken with the hope of eventually designing and performing a comparative cause-effect analysis through artificial learning and computational pattern recognition. As we were not able to collect enough plans, we opted instead to run the learning experiment on one of the cohorts, with plans from 2010 to 2019. The pursuit to better understand how changes in law effect changes in our buildings and built environment therefore remains an interesting exploration within the bounds of our chosen topic. Let us outline a few possible stepping stones towards such an achievement.

From the speculative hypothesis formulated in chapter 4, one may continue the research into several different building-code periods. A first step would be to train separate models for each period and to assure that each meaningfully different group can reproduce novel architectural representation that satisfactorily incorporates the changes introduced by the updates to the law. Thereafter, experiments with the creation of sensible representations of the neural nets’ weights or configurations (which allowed the precise capture of patterns within the given material) would likely be necessary. For this, coordination with AI specialists and further research are likely needed.
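As a crude and purely hypothetical illustration of what such a comparison might start from, the sketch below flattens the weights of two stand-in generators, imagined as trained on different building-code periods, and measures their cosine similarity. A proper representation of the learned patterns remains, as noted above, an open research question.

# Hedged sketch: comparing what two period-specific models have "learned"
# by measuring similarity in weight space. A crude proxy only.
import torch
import torch.nn as nn

def make_generator():
    # Stand-in for a generator trained on one building-code period.
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

def flat_weights(model):
    return torch.cat([p.detach().flatten() for p in model.parameters()])

gen_period_a = make_generator()   # imagined TEK10-cohort model (untrained here)
gen_period_b = make_generator()   # imagined TEK17-cohort model (untrained here)

similarity = torch.nn.functional.cosine_similarity(
    flat_weights(gen_period_a), flat_weights(gen_period_b), dim=0
)
print(f"weight-space cosine similarity: {similarity.item():.3f}")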

Image placeholder

Once multiple patterns connected to separate periods in the timeline of the building code are extracted, it would be interesting to research ways of performing an inverted parametric investigation. Such research could involve a form of virtual simulation, where the results of the comparative cause-effect analysis are used to connect parameters between the text of a virtual building code and a copy of a building DNA. Thereafter, a thorough analysis of variations in parameter settings, working backwards from the patterns of the building DNA, could connect the geometric patterns with the text of the building code itself. Correlations with modal verbs, such as “can” or “shall”, would be particularly valuable to connect directly to the actual changes within the patterns that represent the cohorts. Such an undertaking is clearly a challenge, also with respect to the required material and the historic data available, and it would likely involve many stepping stones, several of which are still difficult to predict. Importantly, we should note that there are more factors influencing the built consequences than the building code and its text alone. Social, political, economic, and cultural changes within short spans of time reasonably and probably affect the outcomes. To what degree, and in what specific manner, has been outside the scope of investigation for this thesis. However, the creation of well-structured AI models could, in the future, provide a plausible deduction method for other covariates: specifically, where large unforeseeable changes from one period to another must be caused by something other than the building code itself.
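A minimal sketch of the correlation step in such an investigation might look as follows, assuming one has already counted a modal verb per code period and extracted a geometric feature from the corresponding plan cohorts; all numbers are invented placeholders, included only to show the computation.

# Illustrative only: correlating a textual signal per code period with a
# geometric signal extracted from the plans of that period.
import numpy as np

# One value per building-code period (oldest to newest); both series are
# invented placeholders.
shall_counts    = np.array([310.0, 355.0, 402.0, 371.0])  # modal "shall" per period
mean_bedroom_m2 = np.array([11.8, 11.2, 10.6, 10.9])      # extracted plan feature

r = np.corrcoef(shall_counts, mean_bedroom_m2)[0, 1]
print(f"correlation(shall count, mean bedroom area) = {r:.2f}")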

There is already interesting precedent for analysis on the grounds of modal verbs within the building code.8 Continued and successful investigation could very likely prove fruitful. Linking specific sentences or parts of the text to their physical results through artificial learning would possibly also allow for the creation of large and thorough digital simulation scenarios. These could test updates thoroughly before we apply them to the seemingly infinitely complex nature of our situation. Such simulations could therefore, in the end, allow jurists and lawmakers to find the best text for their intentions. Importantly, the improvement of our ability to create more effective laws and regulations must also be paralleled by increased discussion and social engagement concerning the values these rules will embody. As for now, more research is needed.
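As a small illustration of the kind of text processing such an analysis would begin with, the following sketch counts deontic modal verbs in a constructed, building-code-style passage; the excerpt is invented and not a quotation from any actual regulation.

# Illustrative only: counting modal verbs in a constructed code-style passage.
import re
from collections import Counter

code_excerpt = """
Dwellings shall have a layout suited to their intended function.
Rooms may be furnished flexibly. Accessible units should have step-free access.
"""

modal_verbs = ("shall", "should", "can", "may", "must")
tokens = re.findall(r"[a-z]+", code_excerpt.lower())
counts = Counter(t for t in tokens if t in modal_verbs)
print(counts)  # Counter({'shall': 1, 'may': 1, 'should': 1})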

_____________________

8. Jørgen Hallås Skatland et al. (2018), “Society’s Blueprints - A Study of the Norwegian Building Code’s Modal Descriptions of a Building”, Nordic Journal of Architectural Research

6.5 Summary

• A crucial part of learning is the lesson drawn from the memory of past experience. The digital and analogue projects of our past represent a large set of physical anchors to such valuable memories.

• With the advent of AI and accessibility to artificial learning, builders, planners, and architects are given a new creative space to effectuate their knowledge and contribute to the improvement of our built environment.

• The 21st century architect may be decidedly affected by the technical innovation within the field of AI and digital computation. It may therefore be more important than ever to discuss and explore the architect’s role and responsibility going forward.

• Areas of continued research are many. The design and exploration of comparative cause-effect analysis and inverted parametric investigation, between building-code text and built consequences, may be of immediate interest.

7. Looking Back: A Concluding Outline






As we come to an end, let us briefly look back and add a few concluding remarks to the summary of our explorations. On the grounds of empirical observations, we began with the claim that the building code can meaningfully be defined as a learning process. In this light, its current structure seemingly holds large opportunities for improvement. Through research in January, February, and March, and through the collection of over 500 plan drawings in Oslo, we established strong indications of a wide industry lag with respect to digital conversion and large-scale applied predictive data analysis.

From the collected plan drawings, over 1500 layouts were created and processed in March and April. Four levels of preparatory processing were applied to a cohort of plans from 2008-2019 (TEK10-TEK17). These were run through training processes with a conditional generative adversarial network (cGAN) machine learning algorithm. Level four was indicative of some form of artificial architectural intelligence having grasped patterns in our dataset, yielding the ability to produce novel plans with clear resemblance to architectural composition, mainly with respect to programmatic concerns. From the findings we formed a speculative hypothesis, suggesting the existence of a building DNA, defined as a function of a specific time period within the evolution of the building code.
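For the interested reader, the following is a compact, hypothetical sketch of a single conditional-GAN training step of the kind summarised above, with a toy generator, a toy discriminator, and random stand-in images; the actual experiments used far larger networks and the processed plan layouts described in chapter 4.

# Hypothetical sketch of one conditional-GAN (pix2pix-style) training step.
# Architectures and data below are toy placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1), nn.Tanh())   # toy generator
D = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))              # toy patch discriminator

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

condition = torch.rand(4, 1, 64, 64)   # stand-in for a processed input level
real_plan = torch.rand(4, 1, 64, 64)   # stand-in for the target plan layouts

# Discriminator step: distinguish real (condition, plan) pairs from generated pairs.
fake_plan = G(condition).detach()
d_real = D(torch.cat([condition, real_plan], dim=1))
d_fake = D(torch.cat([condition, fake_plan], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the target plan.
fake_plan = G(condition)
d_fake = D(torch.cat([condition, fake_plan], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_plan, real_plan)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(f"D loss: {loss_d.item():.3f} | G loss: {loss_g.item():.3f}")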

To further a path towards a digital activation of our past, we combined the assumption of our speculative hypothesis with its inferred opportunities for the future. Combining the earlier observation of the building code as a learning process with our experiment on AI and architectural representation, we sketched an idea and a draft of an algorithmic building code. The idea, utilising principles from neuroscience and our current understanding of the human brain, attempted to minimise surprises in the legislative learning and updating process. Its conceptual design also sought to increase the building code’s capability to balance our collectively sought values with the dynamic environment in which it exists. Importantly, it also applied continual learning from an internal database containing its experience of our past. We established a range of challenges to overcome on the topic of AI and algorithms. Importantly, we also underlined the immense opportunity inherent in the idea of an algorithmic building code: specifically, the achievement of improved meta-learning for a process governing nationwide structural dynamics.

The work suggests that AI is likely to bring empowering opportunities to architects and the field of architecture in the near future. AI technology could also increasingly affect the structure and dynamics of our built environment. The outcome of this technology, however, is deeply dependent on our ability to explain our aspirations. Future architects may therefore have to understand, and competently communicate, ideas about architectural quality in new languages. For this reason, a more collective and active engagement may be required if architects are to exert meaningful influence on the results of these powerful tools.

We can also conclude that a greatly favourable opportunity is available to actors within the industry who control large numbers of old projects and building information archives. In greater numbers, these can likely serve as powerful and valuable memories to learn and innovate from. As suggested within this work, they could greatly increase our understanding of the cause-effect connections within the empirical changes produced by the building-code blueprint, and they can be used for the creation of automated AI design and drawing tools. However, proportional to its potential, the activation of large-scale architectural learning will most likely require a very serious preparatory effort. On this note, a central finding throughout the extraction of plan layouts and the experiments with our machine learning algorithm was the lack of an official AEC-AI Standard. We therefore also conclude that research into this subject should be increased. The creation of such a standard seems likely to require the involvement of resourceful stakeholders and multi-disciplinary actors cooperating through design, production, and operational maintenance.

Going forward, we have underlined areas of interest for future research. The design and exploration of comparative cause-effect analysis, by artificial learning and computational pattern recognition, as well as explorations into the topic of inverted parametric investigation, are likely to be of immediate interest. In conclusion, a digital activation of our past seems likely to provide a unique and powerful stepping stone for architectural learning, building on the vast experience of our past in completely new ways.

Image placeholder