Tag: Behaviour

INTERVIEW: Ján Pernecký

Team members Aymeric and Rémi recently took part in the rese arch MEET UP, where they presented the parametric design approach developed at Franck Boutté Consultants and through the MESH project. It was also an opportunity to meet again with Ján Pernecký, parametric designer and host of the event. Interview.

Founder of the “rese arch” initiative, Ján Pernecký teaches creative programming and robotic fabrication in several European universities and through workshops, and researches generative processes in the context of art and architecture theory. He studied architecture at Excessive III / Die Angewandte, Vienna; the Academy of Fine Arts and Design, Bratislava; the Faculty of Architecture STU, Bratislava; and Arkitekturskolan KTH, Stockholm.

Maker of the Boid flocking library for Grasshopper® (2014).

Creator and curator of Asking Architecture, the Slovak and Czech national pavilion at the 13th architecture exhibition of la Biennale di Venezia.

Digital art - Image: rese-arch.org

Could you please introduce yourself in a few words?

I’m an educated architect, but I don’t practise anymore. I’m trying to make a living through research in architecture and something I call non-applied research. I’m not into optimization of my own or other architects’ projects, but rather into conceptual thinking in architecture. That is why I’ve created a platform which gives architects and thinkers an opportunity to develop, share and promote new ideas and notions.

Can you share your definition of computational design with us?

I personally think that there are three existing stages of computational design. I believe there is a fourth one that has not been practised yet but it should happen soon.

The first one is automation where you use the computer to do the heavy lifting for you. If you have an assignment that needs a lot of work, repetitive work, then the computer could help you a lot. I believe this is what BIM software does in general. This is what the computer has been doing ever since it started to be used but it doesn’t really bring any paradigm shift. It’s a conventional design approach, you just use the computer to do the hard work for you.

The second one would be parametric design, which means the design is controlled by parameters that are fed into algorithms. The designer can act either on the algorithm or on the parameters to modify the final shape, and can evaluate the impact of the parameters on the resulting design. As the process can merge together a lot of different data and instructions, the result can be surprising. But it doesn’t bring that much innovation because basically you have a presumption of what is going on. It’s not completely top-down, but it is also still not bottom-up.
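To make that relationship concrete, here is a minimal parametric sketch in plain Python; it is an illustration of the principle, not code from the interview or from the MESH project, and the parameter names and the twisted-tower example are invented for the purpose.

```python
import math

# Hypothetical parameters a designer might expose.
params = {
    "floors": 20,        # number of storeys
    "radius": 12.0,      # floor-plate radius in metres
    "twist": 90.0,       # total rotation over the height, in degrees
    "floor_height": 3.5, # storey height in metres
}

def floor_plate(level, p):
    """Return the outline of one floor as a list of (x, y, z) points."""
    angle_offset = math.radians(p["twist"]) * level / p["floors"]
    z = level * p["floor_height"]
    return [
        (p["radius"] * math.cos(angle_offset + 2 * math.pi * i / 8),
         p["radius"] * math.sin(angle_offset + 2 * math.pi * i / 8),
         z)
        for i in range(8)  # octagonal plate, purely illustrative
    ]

# The whole design follows from the parameters: change "twist" and every
# floor plate is regenerated accordingly.
tower = [floor_plate(level, params) for level in range(params["floors"])]
print(len(tower), "floor plates; first point of the top plate:", tower[-1][0])
```

Acting on `params` changes the outcome within a presumed family of shapes; acting on `floor_plate` changes the family itself, which is the distinction Pernecký draws between the parameters and the algorithm.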

The last one that I can recognize as existing is a generative or emergent design where you can run a simulation that, by definition, is non-linear so you don’t know what the result is going to look like. You probably know roughly from which world it is coming but you don’t know exactly what the result will be when you are implementing some ideas into the design. This finally is a new paradigm. The design you get in the end is something that has not been done before or is not conventional because you don’t know accurately how it has been done. You are not designing the form itself but the tools that generate the form. As the generative process is non-linear you don’t know in advance what the result is going to look like but you very precisely know the forces generating the result. And the form emerges.
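A rough sketch of that shift, again in plain Python and unrelated to Pernecký's own Boid library for Grasshopper®: what gets written down are the forces acting between simple agents, and the final arrangement of points only appears once the simulation has run.

```python
import random

# A toy emergent system: agents pulled towards the centre of the group but
# pushed away from close neighbours. The designer authors these two forces;
# the resulting arrangement is not drawn by hand. (Illustrative only.)
random.seed(1)
agents = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(40)]

def step(agents, cohesion=0.01, separation=0.5, min_dist=1.5):
    cx = sum(a[0] for a in agents) / len(agents)
    cy = sum(a[1] for a in agents) / len(agents)
    moved = []
    for x, y in agents:
        vx, vy = (cx - x) * cohesion, (cy - y) * cohesion   # pull towards the group
        for ox, oy in agents:                               # repel close neighbours
            dx, dy = x - ox, y - oy
            d2 = dx * dx + dy * dy
            if 0 < d2 < min_dist ** 2:
                vx += dx / d2 * separation
                vy += dy / d2 * separation
        moved.append([x + vx, y + vy])
    return moved

for _ in range(200):   # repeated local interactions: the non-linear part
    agents = step(agents)
print("final positions of the first three agents:", agents[:3])
```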

And there is the fourth stage that I think still doesn’t exist. I’d like to call it design by behaviour or designing the behaviour that generates the forms. The current emergent design is mainly bio-inspired: it takes existing behaviours and rules or rulesets from nature and it simulates them. That doesn’t make a lot of sense in architecture, but I believe certain architectural notions could be represented by autonomous agents. The agency could be totally architectural and you could design behaviours that form the final object of creation.

Emergence is a philosophical concept formalized in the nineteenth century, which can be roughly summed up by the adage: "the whole sometimes has more possibilities than the mere sum of its parts". There are indeed entities whose (constitutive) characteristics cannot be explained from the characteristics of their parts. These characteristics appear (emerge) from the organization that has formed spontaneously.

From an empirical point of view, emergence is a way of describing the appearance of irreducible complex entities. In sociology, the concept is illustrated by Émile Durkheim and Pierre Bourdieu, who use it from a holistic standpoint to describe the emergence of an order of facts irreducible to the parts of the system and their interactions [1].

The concept is also used in neuroscience and cognitive science in the analysis of the relationship between brain and mind. In architecture, urban planning and design, the notion of emergence is closely linked to the use of tools stemming from advances in artificial intelligence, such as cellular automata and neural networks.

What are the parameters you deal with? Do you integrate environmental issues?

I made a couple of projects many years ago that were completely parametric and based on big data. I was trying to take all thinkable aspects into account: circulation, programmatic function, phenomenological aspects and the perception of the final design, because these were the only things I could imagine quantifying at that time. Today I would probably use a different sample set and pay more attention to the relative ranking of solutions than to the absolute value of the parameters or evaluations.

How does computational design influence collaboration between the different parties involved in your projects?

I see the amazing capacities of crowd-sourcing and crowd-designing, but I also see the risks. It allows you to involve everybody thinkable and to find a result that is based on everybody’s wishes, needs and expertise. You can probably find an output that is, in a certain sense, optimal but average. You level things up or down and what you end up with is really just mediocre, something that cannot be extreme in a good sense. I can’t imagine anything surprising coming out of the collaborative process when too many parties are involved.

Is a computational approach not leading to a new form of automation that focuses on efficacy instead of quality, calculation rather than experience?

Some things are hard to quantify, or it’s hard to imagine how to quantify them. But you don’t always need to quantify them. Basically, what you need is a relative ranking of the performances. If you are able to put it on such a scale, then it’s enough to feed a genetic algorithm, for example. It will take the most successful of the generated designs and then breed another generation of designs, which is probably going to be better, and so on.
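A minimal sketch of that loop in plain Python, not tied to any particular plug-in: all the algorithm needs from the evaluation is an ordering of the candidates, so the scoring function below is a deliberately arbitrary stand-in for daylight, cost, comfort or any other ranked criterion.

```python
import random

random.seed(0)

def score(design):
    # Stand-in evaluation: any measure that lets you rank designs relative to
    # one another would do. Here: how close the four numbers sum to 10.
    return -abs(sum(design) - 10)

def breed(a, b, mutation=0.3):
    child = [random.choice(pair) for pair in zip(a, b)]   # crossover
    i = random.randrange(len(child))
    child[i] += random.uniform(-mutation, mutation)       # mutation
    return child

# A "design" here is just four numbers; in practice it would be the
# parameters driving a geometric model.
population = [[random.uniform(0, 5) for _ in range(4)] for _ in range(30)]

for generation in range(50):
    ranked = sorted(population, key=score, reverse=True)  # relative ranking only
    parents = ranked[:10]                                 # keep the most successful part
    population = parents + [
        breed(random.choice(parents), random.choice(parents)) for _ in range(20)
    ]                                                     # breed the next generation

best = max(population, key=score)
print("best design:", [round(x, 2) for x in best], "score:", round(score(best), 3))
```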

To understand how to rank your solutions, you can now use neural networks. For example, say you want to quantify the beauty of your design. Of course, you can show it to one thousand people and each one will select the most beautiful output. From this data, the network works out a way of evaluating your designs, and you can use that in your genetic algorithm.
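As a very loose sketch of how such a ranking could be learned from people's choices, here is a tiny pairwise preference model in plain Python. It stands in for the neural network mentioned above: it is only a logistic model over two hand-picked features, and the features, survey data and the judges' preference for the first feature are all invented for the example.

```python
import math, random

random.seed(2)

def learn_preferences(pairs, dims, epochs=200, lr=0.1):
    """Fit weights so that the winner of each judged pair scores higher than
    the loser (a simple stand-in for training a network on survey data)."""
    w = [0.0] * dims
    for _ in range(epochs):
        for winner, loser in pairs:
            diff = [a - b for a, b in zip(winner, loser)]
            p = 1.0 / (1.0 + math.exp(-sum(wi * di for wi, di in zip(w, diff))))
            for i, di in enumerate(diff):
                w[i] += lr * (1.0 - p) * di   # nudge towards predicting the winner
    return w

# Fake survey: each design is two invented features; the imaginary judges
# happen to prefer designs with a larger first feature.
designs = [[random.random(), random.random()] for _ in range(50)]
pairs = []
for _ in range(300):
    a, b = random.sample(designs, 2)
    pairs.append((a, b) if a[0] > b[0] else (b, a))

w = learn_preferences(pairs, dims=2)
beauty = lambda d: sum(wi * fi for wi, fi in zip(w, d))   # usable as a GA fitness
print("learned weights:", [round(x, 2) for x in w])
```

The learned `beauty` function can then be plugged in as the ranking criterion in a loop like the genetic algorithm sketched earlier.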

There is a risk of misinterpreting big data, though. Any data or information can be read right or wrong, by which I mean you don’t always read it the way it was meant to be read, or the way it makes the most sense to be read. But on the other hand, the misreading could also be interesting…

An artificial neural network is a computational model whose design is very loosely inspired by the functioning of biological neurons. As systems capable of learning, neural networks implement the principle of induction, that is, learning from experience. In particular, they make it possible to approximate unknown mathematical functions that depend on a large number of variables, or to build a fast model of a known function that is very complex to compute exactly.

Neural networks are generally optimized by probabilistic learning methods, in particular Bayesian ones. They belong, on the one hand, to the family of statistical applications, which they enrich with a set of paradigms allowing fast classifications (Kohonen networks in particular), and, on the other hand, to the family of artificial intelligence methods, to which they provide a perceptual mechanism independent of the implementer's own ideas, supplying input information for formal logical reasoning (see "Deep Learning").