The Building Code 2.0

So far, we have explored built collective memories and trained a machine learning algorithm to start grasping digital representations of our architecture. Let us continue the exploration of our speculative hypothesis.



1) An official AI standard is created: a joint project group/organisation launches and maintains an official AEC AI standard
2) The speculative hypothesis is not disproved: (N) converges, as indicated, on a unique unifying pattern as it approaches (A)
3) What is next? Explore opportunities to improve and secure better architectural learning


CHAPTER 5 OVERVIEW




Chapter 5: The Building Code 2.0






Image placeholder

Image placeholder

Through the preceding chapters, we have explored and experimented with opportunities within the current landscape of AI and architecture. Now we will instead focus on the future landscape that may lie ahead of us. Interestingly, many of our discoveries so far seem to suggest a profound opportunity for new architectural learning, specifically through a restructuring of our current building code. What could such a new structure look like? What if the building code were an algorithm?

The precise form and shape of such an algorithm would clearly be hard to predict or define from where we are today. Let us therefore follow our question, building a path towards continued research and design. Importantly, we will also explore whether such a journey and destination are worthy of our pursuit. Let us begin by understanding some of the key elements of this new structure.

5.1 Securing Efficient Learning: A Complementary Fit

Our exploration to improve architectural learning, through increasing the learning efficiency of our building code, is at heart a creative endeavour. Creativity is also what will be needed in the future, should an algorithmic building code become viable for use. Let us therefore briefly look into the two central and complementary forces that secure a new structure for architectural learning.

Image placeholder

An important theme in parts of our exploration is the challenge of creating a balanced and constructive relationship between society and its technological innovation. I believe this to be a crucial notion as we continue into the 21st century and further our development of technology, particularly within fields such as AI and genetics, which can change our history dramatically. For many years we have been witnessing the increasing ability of digital computation and robotic automation to outperform parts of human labour. As a result, we have seen serious changes in global labour markets. Questions concerning the social effects of automation and AI have been passionately discussed for years, and according to a 2013 Oxford estimate on the future of employment and job susceptibility to computerisation, “47 percent of total US employment is at risk.”1

Since then, AI has also sought to breach the gates of creative endeavour.2 A famous example is the 2018 portrait of Edmond Belamy, sold for $432,500 at Christie’s in New York. Mr. Belamy, however, never existed. His portrait was produced by the same kind of machine learning model that we have been using ourselves (a generative adversarial network).3 Another example is the team of artist Refik Anadol, who creates large-scale art and architectural installations heavily influenced by the new opportunities afforded by machine learning technology.4

There seems to exist a great deal of complementary overlap between traditional human capabilities and AI technology, and I believe the topic of creativity affords us a helpful insight into the opportunity we will explore in this chapter. Demis Hassabis, the co-founder and CEO of DeepMind, presented three types of creativity in his lecture at the Royal Academy of Arts in 2018: interpolation, extrapolation, and innovation. The three types are illustrated below. Importantly, we are far from realising artificially learned innovation.5

Image placeholder

When, or whether, creativity will be genuinely understood, to the point where we may recreate it in machines, is still to be determined. However, the combination of our collective abilities opens unique algorithmic opportunities for the building code. Computational precision, power, and indifference form a very potent combination if coupled correctly with human innovation and socially constructive values. The combination of these forces may therefore hold the key to lifting our collective architectural learning.

Image placeholder

_____________________

1. Carl Benedikt Frey, Michael A. Osborne (2013), “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School, University of Oxford

2. Marcus du Sautoy (2019), “The Creativity Code: How AI Is Learning to Write, Paint and Think”, Fourth Estate Ltd., ISBN-13: 978-0008288150

3. “Edmond de Belamy, from La Famille de Belamy”, LOT363, Christie’s New York 23-25 October 2018, Accessed: May 2020, Available at: https://www.christies.com/Lotfinder/lot_details.aspx?sid=&intObjectID=6166184&T=Lot&language=en

4. For more information: http://refikanadolstudio.com

5. Demis Hassabis (2018), “Creativity and AI – The Rothschild Foundation Lecture”

5.2 The Algorithmic Building Code: A Conceptual Draft

In the room of initiation, part 2.1 introduced the learning structure of our current building code. As illustrated below, it cycles reactionary feedback through a collective social sensor in order to update. The sensor chooses a new policy from a set of alternative choices, a set which remains limited by its structure. This set is largely made up of the continual reactionary suggestions for new updates from interested stakeholders, brought forward through official hearings, individual choice-making, and voting procedures. From the social sensor, these new differences, in the form of updates, are implemented in the dynamic blueprint of the building code. These changes produce new action upon the built environment, and the cycle continues.

Image placeholder
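To make the cycle above concrete, the toy sketch below traces the same loop in code: feedback is collected, the social sensor chooses one update from a single, structurally limited set of alternatives, and the updated blueprint acts back on the built environment. Every name in it (Policy, Environment, social_sensor, learning_cycle) is a hypothetical placeholder invented for illustration, not part of any existing system, and the numbers are arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    effect: float          # toy stand-in for how a rule shifts the built result

@dataclass
class Environment:
    state: float = 0.0     # toy stand-in for the built environment
    target: float = 1.0    # the values society would like the built result to meet

    def observe(self):
        # reactionary feedback: the distance between sought values and outcome
        return self.target - self.state

    def build(self, code):
        # the updated blueprint acts back on the built environment
        self.state += sum(p.effect for p in code) * 0.1

def social_sensor(feedback, alternatives):
    # one structurally limited set of alternatives (hearings, voting,
    # stakeholder suggestions); the choice best answering the feedback wins
    return min(alternatives, key=lambda p: abs(feedback - p.effect))

def learning_cycle(environment, alternatives, iterations=20):
    code = []                                    # the dynamic blueprint
    for _ in range(iterations):
        feedback = environment.observe()         # collective feedback
        code.append(social_sensor(feedback, alternatives))
        environment.build(code)                  # new action on the environment
    return code

if __name__ == "__main__":
    options = [Policy("loosen", -0.5), Policy("keep", 0.0), Policy("tighten", 0.5)]
    env = Environment()
    learning_cycle(env, options)
    print(f"built state after twenty cycles: {env.state:.2f}")
```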

Having briefly been reminded of our current model's structure, let us now turn to a conceptual draft of the "building code 2.0". Importantly, we notice the parallel between our social sensor and the component "securance of quality". However, the new structure, including its internal self-evaluation of efficacious policy making, utilises a large number of the connections found within the current model we use to explain the human brain's learning and inference process.8 Notably, it now also expands into multiple sets of alternative choices for policy making, utilising the entire memory bank of past experience to help with the selection of new updates. Key characteristics are described below the illustration, followed by a small illustrative sketch.



Image placeholder

KEY CHARACTERISTICS

Image placeholder

Image placeholder

Image placeholder

Image placeholder
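As a purely conceptual counterpart to the draft above, the following sketch extends the toy loop from the previous sketch with the two expansions described: an internal self-evaluation of how earlier updates actually performed, and a selection made across multiple sets of alternatives, guided by a memory bank of past experience. Again, all names (Update, Memory, evaluate, select_update), thresholds, and values are hypothetical placeholders, and the scoring rule is only one of many imaginable.

```python
from dataclasses import dataclass

@dataclass
class Update:
    name: str
    effect: float                    # toy stand-in, as in the previous sketch

@dataclass
class Memory:
    update: Update
    error_before: float
    error_after: float

    @property
    def improvement(self):
        return self.error_before - self.error_after

def evaluate(memory_bank, candidate):
    # internal self-evaluation: score a candidate by how similar past updates
    # actually performed, drawing on the entire memory bank of experience
    similar = [m.improvement for m in memory_bank
               if abs(m.update.effect - candidate.effect) < 0.25]
    return sum(similar) / len(similar) if similar else 0.0

def select_update(memory_bank, sets_of_alternatives):
    # policy making across MULTIPLE sets of alternatives, not a single fixed set
    candidates = [u for alternatives in sets_of_alternatives for u in alternatives]
    return max(candidates, key=lambda u: evaluate(memory_bank, u))

if __name__ == "__main__":
    memory_bank = [
        Memory(Update("tighten", 0.5), error_before=1.0, error_after=0.8),
        Memory(Update("loosen", -0.5), error_before=1.0, error_after=1.1),
    ]
    conventional = [Update("tighten", 0.5), Update("loosen", -0.5)]
    memory_derived = [Update("pattern mined from built memories", 0.4)]
    chosen = select_update(memory_bank, [conventional, memory_derived])
    print("selected update:", chosen.name)
```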

_____________________

6. Note the option provided by the methods of artificial architectural learning, which we explored in the previous room. It seems likely that, in the future, we can use large sets of drawings, rather than text, to guide the understanding of our sought architectural quality in a digital algorithmic code. We can also find precedent for the use of drawings in law, going all the way back to the building code (PBL) of 22 February 1924 (for examples, see page 197 or 199).

7. Karl Friston (2010), “The free-energy principle: a unified brain theory?”, Nature Review Neuroscience 11, 127–138 (2010), https://doi.org/10.1038/nrn2787

8. Karl Friston et al. (2016), “Active inference and learning”, Neuroscience & Biobehavioral Reviews, Volume 68, September 2016, Pages 862-879, https://doi.org/10.1016/j.neubiorev.2016.06.022

5.3 Buildings and Cities That Learn and Live

The idea presented above, in its full extension, offers a completely new path for future collective architectural learning. It would consequently also change the principal dynamics, and effectively the temporal and structural compositions, of our cities and living environments. Why would I suggest that we explore something so radical to begin with?


“You cannot make life by frozen design.”

Christopher Alexander, 1985



It has been suggested by architect and mathematician Christopher Alexander, in his architectural theories on pattern languages9, that we have reached an unsustainable and ill-conceived cul-de-sac. He claims our ability to produce living and whole environments is effectively lost. We have replaced the slow process of personal, meaningful, and intimate building with a system of expert builders and copy-paste construction machines.

From the onset of the industrial revolution, through the needs of the many in the post-war 20th century, and under the ruling ideologies of past and present, we find ourselves today in a world of mass fabrication. In many ways, a majority of the buildings in which we hope to make our homes are today mass-produced like pairs of shoes. Kitchen counters are delivered to stand 900 mm off the floor, no matter your height, and if you are lucky, you may choose between the mass-produced grey tiles and the mass-produced white tiles. The irony, as Alexander points out, is the immensity of variation available to our designs, should we so choose.

Alexander’s theory on networks of patterns and the resulting pattern language, along with his ideas on the necessity of architecture’s learning and living, have built a central platform for much of what has been discussed so far in our journey. Indeed, the need for architecture to once again become unfrozen, to be turned into a living, growing, and learning process, speaks directly to many of the points addressed earlier. On the grounds of his ideas, I question whether our current building code encapsulates a dangerous memory loss. As a society, and therefore naturally also within our building code, we seem to have forgotten a timeless truth of building: namely, that what is principally important is not the buildings themselves, but the life they create.

“A living thing is a process, so is a community”.10 Today, the building code is so rigid in its goal-defining structures, and so slow in its learning, that the process of non-conventional building, which could shape new everyday architectural compositions, is often stopped before it can prove its ability to meet our sought values. An example of how restrictions and regulations had to be lifted is the case of Svartlamoen in Trondheim, Norway.11 The removal of such barriers allowed the environment and its inhabitants to seek out non-conventional but seemingly progressive and sustainable living designs.10 We can find many other similar examples, but importantly, the creation of such high barriers to life and learning in our built environment seems dangerous and unwise for long-term ecological and holistic sustainability.

_____________________

9. See: Christopher Alexander, Sara Ishikawa, Murray Silverstein (1977), “A Pattern Language: Towns, Buildings, Construction”, ISBN 0-19-501919-9, Oxford University Press. Also “The Timeless Way of Building”, “The Oregon Experiment”, from the same book series.

10. Christopher Alexander (1985), Progressive Architecture, interviewed by Greg Saatkamp, Pacifica Radio Archives. Accessed: March 2020, Hear archived interview, available at: https://archive.org/details/pra-AZ0895A

11. For more information: https://www.trondheim.kommune.no/globalassets/10-bilder-og-filer/10-byutvikling/eierskapsenheten/svartlamoen/160509_horingssvar_beboerne.pdf or https://trd.by/sponset/annonsorinnhold/2017/05/24/Derfor-er-Svartlamon-blitt-en-unik-bydel-i-Norge-14775653.ece

5.4 Challenges and Algorithmic Traps

There are few shortcuts to challenging problems worth solving. Importantly, let us therefore address some of the issues and critiques that an algorithmic building code immediately brings to the surface.

“The next time you hear someone talking about algorithms, replace the term with “god” and ask yourself if the meaning changes. Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.”

Ian Bogost, 2015


At heart, we will find that algorithms are finite sequences of well-defined instructions. However, there have been tendencies to optimistically, and sometimes naively, present such sequences as final solutions to wicked problems. This seems a result of the not uncommon perception that the barrier between reality and science fiction is becoming progressively thinner. I mentioned earlier, in chapter 2, how I believe stories, and today particularly science fiction stories, forcefully shape technological innovation and thereafter history. When we look back at our place in the timeline, it may be perfectly clear how deeply Silicon Valley dreams were made of the stuff envisioned in movies like Minority Report or The Matrix. To make matters more interesting, scientists and philosophers also propose adjoining ideas and arguments in support of these visions of techno-utopia or dystopia. This extends to the nature of reality itself, including examples such as Max Tegmark’s mathematical universe12, Nick Bostrom’s simulation hypothesis13, and Stephen Wolfram’s theories on the computational universe14.

Proposing the idea and creation of an algorithmic building code is a means of initiating discussion and further research. I will make no claim to have seen more than a clouded but definite possibility in the distant future landscape. I am often reminded of the degree to which we are deeply vulnerable to constructing the world on false premises. Indeed, we seem to think in stories.15,16 Additionally, one should not forget that “All experience is subjective”17, and that the map is certainly not the territory. In the same way a person forgets where they left their glasses, only to find them on their nose, in front of their eyes all along, we seem prone to forgetting the most fundamental lenses through which we perceive and experience the world and ourselves. We can also easily miss that the roots of much behaviour “[…] lie in the realm of consciousness and culture”18. Much of the matrix out of which our action and thought springs is therefore difficult to understand outside the timeline and perspectives that are immediately available to us. Layered above these fundamental lenses, we also seem to find arch-stories and role models that play in our subconscious. To make matters more difficult, we seem deeply vulnerable to the attractions of our aesthetic desires, of which I believe there are many. In the words of the philosopher Alain de Botton, there seem to be as many ideas of beauty as there are visions of happiness.19 Seeing near-omnipotent artificial intelligence algorithms as shortcuts to hopes of newfound greatness may, in the end, surely be one of them.

As we witness algorithms taking charge of large decision-making platforms, many have called out the dangers we seem to collectively disregard. Sincere awareness of unjustified and misplaced optimism is therefore central to the work towards an algorithmic building code. However, this does not take away from the many benefits, comforts, and opportunities provided by computational algorithms, many of which we are currently enjoying and have little reason to doubt.


“We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead.”

Cathy O'Neil, 2016



An algorithm will stay indifferent, but never neutral. As a result, we will not get rid of bias and our preconceived opinions just by handing them over to a computer. In her book “Weapons of Math Destruction”, Cathy O’Neil underlines the deeply important mission of better understanding how the mathematics and code behind our algorithms extend, and often multiply, the faults that we knowingly or unknowingly implement into our systems. This issue is not to be taken lightly. Additionally, an algorithmic building code is only as useful as its data is an accurate representation of its environment. “We have to learn to interrogate our data collection process, not just our algorithms.”20 As previously mentioned, our data production and consumption are rising. Through the Internet of Things, and our continually increasing communicative resonance with the internet, we can assume that more data in the categories needed for the building code algorithm will appear. However, the continual and transparent interrogation of such data will be crucial to balance the ongoing architectural learning.

The subject of how computational algorithms may lend stepping stones to utopian or dystopian futures is partly outside the scope of this thesis. However, genuinely interesting and deeply important conversations on the topic are currently taking place. These include questions on the effects of search algorithms that impose unbalanced, personalised information bubbles, strengthening our preconceptions and biases; how current developments afford the opportunity for large-scale societal surveillance21; and how they enable deep psychological profiling of large portions of countries by private actors.22 Coupled with deepfake machine learning technology, our digital algorithms will doubtless continue to bring democracy under serious pressure.

It would be unwise to think an algorithmic and artificially self-learning building code could not cause trouble. However, not exploring the opportunities to activate emergent structural and socially beneficial forces seems like a choice akin to closing one’s eyes to the present. In so many ways, we are also past the point of discussing whether current AI technology is beneficial or harmful for our society; clearly it will be both. I believe a sense of pragmatism is needed. “In order to get the most value from AI, operations need to be redesigned.”23 This means finding the courage to experiment and explore new ideas and new structures. Implementing algorithms into practices of law is already being tested. At the moment, “There is a shift in the academic debate from the ‘if’ to the ‘how’ AI should and could be regulated”24. Such a shift clearly underlines the maturity of the landscape in which we are now implementing this technology.


“The need grows for algorithmic literacy, transparency, and oversight.”

Pew Research Group, 2017



Another genuine challenge, as pointed out in the quote above, is the plausible need for either greater algorithmic literacy or an accurate human-language interface for our algorithm (possibly both). The need for oversight and public transparency is, along with the continual focus on the question of “who is in control”, a democratic necessity for a computational and algorithmic solution.

Establishing a trusting relationship with the public will very likely require greater levels of transparency than we have seen so far in other areas of implementation. This is a difficult topic for many reasons. It is also argued that transparency alone will not be sufficient. “Instead, by understanding what can and cannot be done when evaluating software systems, and by demanding convincing evidence that systems are operating correctly and within the bounds set by law, society can allow the use of sophisticated software techniques to thrive while also having meaningful ways to ensure that these systems are governable.”25


“I wonder whether or when AI will ever crash the barrier of meaning”

Gian-Carlo Rota, 1985



From the outside looking in, the field and the technological advancements within artificial intelligence seem to produce triumphant victories one after the other. However, the hard and interesting problems are arguably still to be solved. “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”26

Melanie Mitchell, from the Santa Fe Institute and Portland State University, also points out how deep learning networks sometimes do not learn what we think they do. She explains with an example of a machine learning algorithm that was seemingly trained successfully to visually identify and mark animals in pictures. What the researchers instead realised was that the machine had learnt to recognise the character of the background. The background behind the animal contained clear characteristics relating to the picture’s zoom and focus, and the machine was using these features to mark the animals. It had in no way learnt anything of the animals themselves. Additionally, “Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.”27 As the story goes, a team from UC Berkeley showed in 2018 how maliciously produced and placed stickers, put on a stop sign, could trick an autonomous car’s computer vision into dangerous misclassifications. Such opportunity for harm has since created an entire field focusing on understanding and preventing these dangers.
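For readers curious what a “small-magnitude perturbation” of this kind looks like in practice, the sketch below applies the widely known fast gradient sign method to a toy classifier: the input is nudged by a small epsilon in the direction that most increases the loss. The tiny network, random image, and chosen label are placeholders for illustration only; the sketch assumes the PyTorch library and is not the method used in the cited stop-sign study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
    image = torch.rand(1, 3, 32, 32)      # stand-in for a photograph
    label = torch.tensor([3])             # stand-in for the true class
    adversarial = fgsm_perturb(model, image, label)
    print("max pixel change:", (adversarial - image).abs().max().item())
```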

We have addressed and discussed some key challenges and issues. From what we have gathered, it seems that the work of unclouding and exploring our "what if" question has only just begun. Let us therefore end the chapter with a few notes on why, again, our journey towards the algorithmic building code in the future landscape, could nonetheless be worth our continued pursuit.

_____________________

12. Max Tegmark (2014), “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality”, ISBN-13: 978-0307599803

13. Nick Bostrom (2003), “Are You Living in a Simulation?”, Philosophical Quarterly (2003), Vol. 53, No. 211, p. 243-255

14. Stephen Wolfram (2019), “A New Kind of Science”, ISBN-13: 978-1579550257

15. Yuval Noah Harari (2015), “Sapiens: A Brief History of Humankind”, ISBN-13:978-0771038983 and see 5.

16. Gregory Bateson (1979), “Mind and Nature: A Necessary Unity”, Hampton Press Inc., 2001, p. 14

17. Ibid p. 28, Hampton Press Inc.

18. Yoshihiro Francis Fukuyama (1989), “The End of History?”, The National Interest, No. 16, Summer 1989, p. 5

19. Alain de Botton (2016), “The Architecture of Happiness”, ISBN0375424431

20. Cathy O’Neil (2016), “Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy”

21. Central Government of China (2014-2020), Social Credit System, National Reputation System, Accessed: April 2020, Information available at: http://www.gov.cn/zhengce/content/2014-06/27/content_8913.htm

22. See Cambridge Analytica-Facebook Data Scandal (2018)

23. H. James Wilson, Paul R. Daugherty (2018), “Collaborative Intelligence: Humans and AI are Joining Forces”, Harvard Business Review, p. 114-123

24. Review by Hans W. Micklitz, European University Institute, of Algorithms and Law, 2020, by Martin Ebers, Susana Navas, Cambridge University Press

25. Deven R. Desai, Joshua A. Kroll (2018), “Trust but Verify: A Guide to Algorithms and the Law”, Harvard Journal of Law and Technology, Vol. 31, p. 64

26. Pedro Domingos (2015), “The Master Algorithm”, Basic Books

27. Kevin Eykholt et al. (2018), “Robust Physical-World Attacks on Deep Learning Visual Classification”, CVPR, arXiv:1707.08945v5

5.5 Mind in Nature: Levels of Learning

In chapter two we briefly discussed the profound relationship between our environments and our behaviour and well-being. Concepts such as embodied cognition, and the current models of our learning and inference, also seem to call for further reflection on the perceived existence and definition of a meaningful barrier between ourselves and the world. The new findings certainly seem to support architecture’s long-held intuitive understanding of the strong dynamic between perceived spatial experience and subsequent behaviour. I would argue that this knowledge, and these insights, add weight to the responsibility that architects, planners, and social builders hold for our built environment and our fellow citizens.

An important part of the project has been to go wherever the next stepping stone appears. At the beginning of the chapter we decided to further explore the distant but definite possibility of an algorithmic building code in the future landscape. As we have seen, its creation and design are no easy task. The realisation of such an idea could be the work and accumulation of unknown future achievements and stepping stones, many years into our 21st century. Why, then, is this seemingly challenging and possibly dangerous path ahead still worth our continued pursuit?

Image placeholder

In his book “Steps to an Ecology of Mind”, Gregory Bateson presents a framework for understanding learning. Here he defines specific levels which, importantly, operate in recursion, with distinctly hierarchical relations to each other. Bateson explains five levels of learning, if we are to include his “Zero learning”, which “is characterized by specificity of response, which—right or wrong—is not subject to correction.”28 The next level, named “Learning I”, “is a change in specificity of response by correction of errors of choice within a set of alternatives.” For our topic, this is akin to choosing another option with which to update the building code, from the already present set of options, the latter to be understood as the total options available within our current structure and contextual understanding. This is a continuation of the same updating procedures, perhaps with a new input every now and then from new stakeholders, and in practice generally from within the already well-defined field of interested influencers. It is the continuation of “business as usual”, with the clarity of seeing a mistake for the unsuccessful attempt that it is, and thereafter applying a new remedy from within the current, and likely narrow, understanding of the context from which the learning itself is operating.

The next level Bateson calls “Learning II”. It entails a meta-learning exposé. In his own words, “Learning II is change in the process of Learning I, e.g., a corrective change in the set of alternatives from which choice is made, or it is a change in how the sequence of experience is punctuated.” The second level of learning would, for our case, mean a greater understanding of the contextual environment in which the building code is embedded and operates. I suggest that this level is of central importance to our exploration and to the building code 2.0 idea. From the newly suggested contextual understanding of the building code as a learning process, our insight seemingly opens the door to a powerful change in learning how we learn. This upgrading of the vantage point from which we read our total context leads to the opportunity to update our building code with choices from more sets of alternatives. To once again firmly ground this in our work, the suggested new set would be the total available extent of memories existing within our old behaviour or interaction. Behaviour or interaction, and the patterns that connect them, are here to be understood as possible contents of our built collective memories.

Beyond Learning II, Bateson also explains two more levels, lending explanatory power through the field of biology. “Learning III is a change in the process of Learning II, e.g., a corrective change in the system of sets of alternatives from which choice is made. […] Learning IV would be a change in Learning III, but probably does not occur in any adult living organism on this earth. Evolutionary process has, however, created organisms whose ontogeny brings them to Level III. The combination of phylogenesis with ontogenesis, in fact, achieves Level IV.”28 Importantly, Bateson’s framework suggests a continual interplay, not a multilevel stage theory. This means the different levels of learning go on in parallel.
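Read programmatically, and only as a loose illustration of the mapping suggested above, the lower levels can be pictured as nested procedures: Learning 0 returns a fixed response, Learning I corrects the choice within one fixed set of alternatives, and Learning II revises which set of alternatives the choice is made from. The small sketch below is not a formalisation of Bateson; the functions, scores, and numbers are invented for illustration.

```python
def learning_zero(stimulus):
    # specificity of response, not subject to correction
    return "default response"

def learning_one(error, alternatives):
    # correction of errors of choice within a fixed set of alternatives
    return min(alternatives, key=lambda option: abs(error - option))

def learning_two(errors, sets_of_alternatives):
    # a corrective change in WHICH set of alternatives choices are made from
    mean_error = sum(errors) / len(errors)
    def set_quality(alternatives):
        return -min(abs(mean_error - option) for option in alternatives)
    return max(sets_of_alternatives, key=set_quality)

if __name__ == "__main__":
    conventional = [-0.5, 0.0, 0.5]
    memory_derived = [-0.2, 0.3, 0.8]   # e.g. mined from built collective memories
    print("Learning 0 always returns:", learning_zero("any stimulus"))
    chosen_set = learning_two([0.7, 0.9], [conventional, memory_derived])
    print("Learning II selects the set:", chosen_set)
    print("Learning I then chooses:", learning_one(0.8, chosen_set))
```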

A key concept in Bateson’s work is context. “It is of central importance because it helps specify the way in which information flows from a larger system to the parts of which it is composed”29. As has been pointed out on the topic of our machine learning and artificial intelligence component, “While learning 0 has been realized for chess playing computers, learning I turns out today as the basic concept of artificial neural nets (ANN). All models of ANN are basically (non-linear) data filters, which is the idea behind simple and behavioristic input-output models.”30 Additionally, these neural networks are without an environment in the traditional and relevant sense. This, however, changes with the current structure of our algorithmic building code, as it receives continual signalling from changes in the outer environment. The potential for environmental input stimulus may also increase, as previously elaborated upon, as we broaden our physical information networks and dataflow capacities into the 21st century.

Another key point for learning is the understanding of relevant levels of logic and the heterarchical and hierarchical structures embedded within the larger activity structures of our context. We have been building for decades as if we were outside of the ecological system; outside, and seemingly independent of, ecological welfare. There was perhaps no other historic and cultural evolution available than to initiate the industrial revolution and the unsustainable, exponential extraction of non-renewable energy sources, so as to collectively produce the technological innovation, by force of creation within science, to push us into the age of renewable energy and, hopefully, an ecologically sustainable way of living. As Lewis Mumford puts it in his 1934 book Technics and Civilization, “if Neotechnic economy is to survive, it has no other alternative than to organize industry and its polity on a worldwide scale”.31 However, as pointed out by professor Ole Möystad in Cognition and the Built Environment, this evolution in thoughts and acts certainly challenges our cognitive order.32 So far, I would argue, we are by no means seeing the necessary change and collective learning needed to deal with our relatively recent detachment from the seeming reality of our place in nature.

By activating and initiating the continual use of experience, through the analysis and extraction of built collective memories, and by creating a digitally simulated, dynamic, and continually updated environment, we are working towards something reminiscent of second-level learning for the entire learning process of our nationally structuring governor. If achieved, the value of the consequences of a change in the change of our learning cannot be ignored.

Image placeholder

With hopes for the future of our architectural understanding, the endeavour sketched within this chapter could certainly also lead us closer to the mystery of the unfolding complexity around us at large. From the work of Michael Batty on city growth, explained through the mathematical framework of fractals33, to the brainless slime mould that grew near-optimal geometric patterns connecting the nodes of the Tokyo subway system34, to the complexity unfolding from trivial rules within cellular automata35, architectural learning is surely the study of nature’s unfolding, peered at through the eyes of nature herself. In his work “Mind and Nature”, Gregory Bateson speaks of what he calls the great stochastic processes. These entail the culmination of perspectival insights from both the somatic and the genetic view: learning, described as the system operating within the individual, and evolution, as immanent in heredity and time. As such, they speak clearly to the understanding of logical typing. Both, according to Bateson, are matched pieces in a single biosphere, combining into the necessary unity of both systems and connecting mind and nature as one.36
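As a small taste of the complexity that trivial rules can unfold (see note 35), the sketch below runs an elementary cellular automaton, rule 30, in which each cell’s next state depends only on itself and its two neighbours; from a single live cell a famously intricate pattern emerges. The width, number of generations, and choice of rule are arbitrary illustrative values.

```python
def step(cells, rule=30):
    # Each triple (left, centre, right) forms a 3-bit index into the rule number,
    # whose corresponding bit gives the cell's next state. Edges wrap around.
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=79, generations=30, rule=30):
    cells = [0] * width
    cells[width // 2] = 1                       # a single live cell in the middle
    for _ in range(generations):
        print("".join("#" if c else " " for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run()
```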

It seems our continual chase of new vantage points, from which to understand our place and our nature, brings value proportional to our ability to enact the newfound knowledge constructively. Let us therefore end on a thought for future action. Could the activation of some form of an algorithmic building code be the constructive new leap forward in the chain of carbon-silicon formation, and in the greater natural evolution? Whether that is the case or not, given its potential, it may still be worth our continued research and exploration.

_____________________

28. Gregory Bateson (1972), “Steps to an Ecology of Mind”, Intertext Books, London — International Textbook Company Ltd., Chandler Publishing Company. p. 293

29. Eric Bredo (1989), “Bateson's Hierarchical Theory of Learning and Communication”, Educational Theory, 1989, Vol. 39, No. 1 p. 28

30. Eberhard von Goldammer, Joachim Paul (2007), “The Logical Categories of Learning and Communication: Reconsidered From a Polycontextural Point of View”, Kybernetes, vol.36, issue 7/8, 2007, p.1000-101

31. Lewis Mumford (1934), “Technics and Civilization”, p.264

32. Ole Möystad (2017), “Cognition and the Built Environment”, 10.4324/9781315642383. p. 103

33. Michael Batty, Paul Longley (1994), “Fractal Cities: A Geometry of Form and Function”, Academic Press, London.

34. Tero, Atsushi & Takagi, Seiji & Saigusa, Tetsu & Ito, Kentaro & Bebber, Daniel & Fricker, Mark & Yumiki, Kenji & Kobayashi, Ryo & Nakagaki, Toshiyuki. (2010), “Rules for Biologically Inspired Adaptive Network Design” Science (New York, N.Y.), 327. 439-42. 10.1126/science.1177894. For a video preview, see: https://www.youtube.com/watch?v=BZUQQmcR5-g

35. For examples of the complexity produced, see: https://plato.stanford.edu/entries/cellular-automata/supplement.html

36. Gregory Bateson (1979), “Mind and Nature – A Necessary Unity”, p.141

5.6 Summary

• The findings from the previous chapters, and the continuation of our speculative hypothesis, have led to the idea of an algorithmic building code.

• Key conceptual characteristics of the new building code are the ability for continual value adjustment through visual and digital representation, the application of predictive machine learning, the use of constraints and bDNA solution spaces, and its design principles based on neuroscience.

• The algorithmic building code should be viewed as a process and by no means a solution. Serious challenges must be addressed if we are to continue towards a digital activation of a new architectural learning.

• Activating the full potential of the building code as a learning process, through an algorithmic building code, is the attempt to elevate and reinvigorate profound architectural learning. Such an accomplishment seeks to unfreeze our current copy-paste construction machine and, in time, provide genuine life to our built environments.

“Living and learning are the same.”

A Pattern Language, Christopher Alexander 1977