Monday, March 7, 2011

Computational design...

The arrangement of the dwellings in the slabs will be organised by optimization using Grasshopper/Geco/Ecotect. The initial model uses a couple of parameters that can be changed for each layer:

Divisions along x-axis
Divisions along y-axis
Density

The model is very basic; it can be found here. It creates an isotrim from a loft (the total area). The faces are isolated from the isotrim and reduced in a random way, using the RandomReduce feature. The density (in %) gives the share of the total faces that have to be trimmed. Because of the randomness, the result is different every time.
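For illustration, a minimal standalone Python sketch of that random culling step (the real definition uses Grasshopper's RandomReduce on the isotrim faces; the face list here is just a stand-in):

    import random

    def random_reduce(faces, density):
        # Cull a random fraction of the faces, like the RandomReduce
        # component: 'density' is the share of faces to trim, and the
        # result differs on every run because the selection is random.
        n_cull = round(len(faces) * density)
        culled = set(random.sample(range(len(faces)), n_cull))
        return [f for i, f in enumerate(faces) if i not in culled]

    # A 4x3 grid of face indices standing in for the isotrim faces.
    faces = [(x, y) for x in range(4) for y in range(3)]
    print(random_reduce(faces, density=0.4))  # different every time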

Two examples of this model:

And the parameters (config 3 and 4)


The most important result is the Total Density, which gives the percentage of open space. The lower this percentage, the more economical the result.

BUT.

Daylight is obviously the big critical factor. So, globally, our plan is to fix the density and then find the optimal daylight for that given density. This model is unsuitable because of the random factor: optimization with an evolutionary algorithm doesn't work well, because not all properties of the model are inherited. Some change randomly with every new try, and the computer is not able to converge to an optimum because of the noise that the random element causes.

A more sophisticated model is no longer based on subdivisions, but on the number and size of the connecting streets. It still uses RandomReduce, but the model is set up in such a way that an algorithm can later be implemented for this function. A result of the new model:



This is interesting, because we need to find an algorithm that is progressive in some way.
Let's suppose we have a model with only one layer and 9 breps (houses). The layout for the plan looks like this (without streets).
Let's say we want a density of 60% on the ground floor. In that case 60% × 9 = 5.4, so ~5 blocks have to be kept. When we do not use RandomReduce, the other 4 blocks thus have to be culled in another way. Let's use the brute-force strategy: it culls 0, 1, 2 and 3. It looks like this.
After calculating the result for daylight, it goes on with the next option: 0, 1, 2, 4.


The optimum probably is:


In this case the program will have to try (9 nCr 4) = 126 combinations to be sure it has found the optimum. In a 3×3 grid that is still feasible, but in an 8×4 grid (32 breps; culling half of them already means (32 nCr 16) = 601,080,390 possibilities) it is simply impossible to calculate it like this.
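A sketch of that brute-force loop, with a hypothetical daylight_score standing in for the actual Ecotect analysis (here it just rewards spread-out blocks, purely as a placeholder):

    import itertools
    from math import comb, dist

    # Hypothetical stand-in fitness: spread-out blocks score higher.
    # The real project would run the Ecotect daylight analysis instead.
    def daylight_score(kept):
        cells = [(i % 3, i // 3) for i in kept]      # 3x3 grid coordinates
        return sum(dist(a, b) for a, b in itertools.combinations(cells, 2))

    blocks = range(9)                                # breps 0..8
    layouts = itertools.combinations(blocks, 5)      # keep 5 of 9 = cull 4
    best = max(layouts, key=daylight_score)

    print(best)                # best layout under the stand-in score
    print(comb(9, 4))          # 126 options in the 3x3 case
    print(comb(32, 16))        # 601080390 in the worst case for 8x4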

So in the end we'll have to go back to a genetic algorithm. The first generation is randomly generated; the next generations mutate by no more than, say, 10% from the best shots. Right now I have no idea how to do this... whatever I can think of needs a lot of scripting.
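A minimal sketch of such a genetic loop, again with the hypothetical stand-in fitness from above. Fixing the number of kept blocks in the encoding means every child inherits the density exactly, so only the positions mutate and the noise problem from the first model goes away:

    import itertools
    import random
    from math import dist

    ROWS, COLS, KEEP = 4, 8, 19        # 8x4 grid, ~60% density
    CELLS = ROWS * COLS

    # Hypothetical stand-in fitness again; the real fitness would be
    # the daylight result calculated by Ecotect for the layout.
    def daylight_score(kept):
        pts = [(i % COLS, i // COLS) for i in kept]
        return sum(dist(a, b) for a, b in itertools.combinations(pts, 2))

    def mutate(layout, rate=0.10):
        # Swap about 10% of the kept blocks for culled ones, so only
        # positions change between parent and child, never the density.
        kept = set(layout)
        culled = list(set(range(CELLS)) - kept)
        for block in random.sample(sorted(kept), max(1, round(KEEP * rate))):
            swap = random.choice(culled)
            culled.remove(swap)
            culled.append(block)
            kept.remove(block)
            kept.add(swap)
        return tuple(sorted(kept))

    pop = [tuple(sorted(random.sample(range(CELLS), KEEP))) for _ in range(40)]
    for generation in range(50):
        pop.sort(key=daylight_score, reverse=True)
        elite = pop[:10]               # the "best shots"
        pop = elite + [mutate(random.choice(elite)) for _ in range(30)]

    print(max(pop, key=daylight_score))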

1 comment:

  1. Michela mentioned this to me yesterday, and we discussed it for a while since it seems like an interesting problem. The need is to balance the desire for 'randomness' with the designers' insights in a way that avoids wasting time (and computational resources) on generating and analyzing low-quality variations while also incorporating and expressing the designers' knowledge and their understanding of the situation they have set up.

    The idea of just trying variations whose generation is constrained only by the desire for 55% (or whatever amount) of coverage and evaluating them for daylighting performance does not achieve this balance, as it relies basically on luck and doesn't incorporate any understanding of what makes for a good variation, only the knowledge of how to judge good and bad variations (e.g. by setting a threshold of acceptability for the results of the daylighting analysis).

    A more productive approach would be to incorporate some of the knowledge you develop by your analysis of housing unit clusters and their (local) daylighting performance - such as you've begun to do in the post above. Thus, rather than randomizing the generation of patterns (variations) at the level of the individual unit, you can instead randomize at the level of the cluster (e.g. nine-unit squares, but others are possible). In this way you can incorporate your knowledge of what makes for a well-performing cluster while still allowing for emergent effects at the boundaries between and interactions among clusters. To start this process, you first need to identify a small set of clusters which perform reasonably well, and then build your generative algorithm/engine to produce and evaluate variations created using (also) those basic 'super-units' rather than only from individual housing units.
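    A minimal sketch of that cluster-level generation, assuming a hypothetical library of 3x3 patterns that have already been checked for local daylighting performance:

        import random

        # Hypothetical library of pre-validated 3x3 clusters
        # (1 = housing unit, 0 = open space).
        CLUSTERS = [
            ((1, 0, 1), (0, 1, 0), (1, 0, 1)),   # checkerboard
            ((1, 1, 1), (0, 0, 0), (1, 1, 1)),   # horizontal bars
            ((1, 0, 1), (1, 0, 1), (1, 0, 1)),   # vertical bars
        ]

        def generate_plan(clusters_x=3, clusters_y=2):
            # Tile the slab with randomly chosen clusters: the random
            # element now acts on 'super-units', not individual houses,
            # while cluster boundaries still allow emergent effects.
            rows = []
            for _ in range(clusters_y):
                band = [random.choice(CLUSTERS) for _ in range(clusters_x)]
                for r in range(3):
                    rows.append([c for cluster in band for c in cluster[r]])
            return rows

        for row in generate_plan():
            print(row)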

    Hope that helps ... send a reply if you have questions/comments.

    Andre

    PS: No comment here on other aspects of the design, since I haven't had time to review your other posts closely enough, but bear in mind that evaluating patterns for their daylighting performance is just one of many (and possibly conflicting) aims that should influence your final selection and design, and so you might want/need to set up similar generation-evaluation systems to address at least some of those, too.
