Welcome

Architecture design highlights the best in home and building makeovers, including model buildings featuring unique housing of one-of-a-kind construction, ...


Sunday, June 3, 2007

Architecture: Green and Greener


A new report from the UN Environment Programme (UNEP), released last week (downloadable here), [re]confirmed what many of us already know, but what policy-makers and giant development corporations still need to hear from places like the UN: that the building sector plays a huge role in achieving the greenhouse gas reductions necessary to effectively combat climate change.

Achim Steiner, UN Under-Secretary General and UNEP Executive Director, said: "Energy efficiency, along with cleaner and renewable forms of energy generation, is one of the pillars upon which a de-carbonized world will stand or fall. The savings that can be made right now are potentially huge and the costs to implement them relatively low if sufficient numbers of governments, industries, businesses and consumers act."
"This report focuses on the building sector. By some conservative estimates, the building sector world-wide could deliver emission reductions of 1.8 billion tonnes of CO2. A more aggressive energy efficiency policy might deliver over two billion tonnes, or close to three times the amount scheduled to be reduced under the Kyoto Protocol," he added.

Indeed, governments, industries, businesses and consumers are starting to jump on the bandwagon, having realized the validity of the economic argument for building greener and operating our buildings more sustainably. But they aren't necessarily the early adopters in this game. The early adopters are the architects, designers and urban planners who've seen for years now that a smartly designed building, planned into an intelligently conceived urban context, can not only make a hugely positive impact on health and the environment, but revolutionize our quality of life. These are the people we've been watching at Worldchanging since the beginning. We thought we'd take a look at the current status of some of the most promising projects out there today, with green building enjoying a steep upswing.

LEED
The US Green Building Council's Leadership in Energy and Environmental Design (LEED) rating system finds its way into nearly every conversation and article about green building. That's because it's the most commonly known environmental standard for architecture. LEED's existence has become a force for change, having set a concrete list of goals toward which architects and developers can aspire. Inhabitat ran a long series pulling apart and explaining LEED criteria last summer.

Where it used to be newsworthy to hear about a building achieving LEED certification, it's now relatively commonplace. That's a good sign of progress, but now it's time to look towards how we can raise the bar once a standard's been established. Originally conceived for commercial buildings, LEED has since established ratings for single-family homes and is in a pilot phase for a neighborhood development (ND) system that looks more closely at context and connection among a group of homes, and the community aspects of environmental building design. As we'll hear more about this week in a guest post from Justus Stewart, LEED pushed its own bar with the ND ratings by expanding the scope of what defines green, but there's the possibility of going even farther beyond LEED. Like organic standards taught us with the food industry, it's important to set a baseline, but equally important never to be complacent about defining the [green] standard. In sustainability, there's always room to improve and accelerate on the road to progress.

Architecture 2030 and the 2010 Imperative
In many ways, Ed Mazria answered the call to look beyond what's right in front of us when he developed the Architecture 2030 Challenge, a voluntary commitment by the global architecture community to achieve drastic reductions in their buildings' CO2 emissions, such that by the year 2030 all new buildings are carbon neutral. To build a foundation that might make this possible, Mazria more recently launched the 2010 Imperative to bring these goals into the design classroom and bring mandatory ecological education to the students who will be building those buildings a few decades down the road.

Jubilee Wharf
One of the bar-setters of green living to which we often refer is BedZED, the community housing development by Bill Dunster Architects and Bioregional. BedZED has endured some criticism of late for some of the glitches in their radically green plans for the place, but that hasn't stopped them from completing the successor to BedZED, Jubilee Wharf, a mixed-use, seaside development that has stirred expectation that we can not only keep pushing the boundary on what sustainability means, but that the model can be replicated -- and quickly -- to bring this kind of residential and work experience to more people. The waiting lists grow daily.

ZEDstandards
Much like LEED, one way to make the BedZED model replicable lies in presenting the building approach and green criteria to the public in a clear, sensible format. Dunster Architects developed the ZEDStandards to help developers and designers understand how to go the extra mile. Way before LEED started looking at context and community, ZEDFactory (along with partnering firm ARUP) took product-service systems, sustainable transit, and high-density development as significant factors in a green living environment.

Prefab
Prefab's been reborn in the beginning of the 21st century as a creature almost nothing like its 1950s incarnation. As we've said before, the aura of glam and luxury that’s come to exist around today's prefab to some degree betrays both the original utilitarian nature of a prefabricated dwelling, as well as the simplicity, accessibility and affordability that came with it in the post-WWII era. However, off-site manufacturing does have some redeeming ecological qualities, such as the reduced impact on the housing site itself, the reduced transportation and energy required to build on-site, and reduced waste by way of mechanically precise measuring and cutting systems. A number of prefab design pros, like Michelle Kaufmann, have taken the innate ecological smarts of this approach much further by incorporating numerous additional green features such as PV, wind, natural ventilation and shading, green roofs, FSC-certified lumber, recycled materials, efficient insulation and replaceable modular components. Plus, most prefabs are relatively compact compared to standard homes, saving space and energy and permitting increased density.

But that's not always the case. This year's poster child of prefab is the 2,500-square-foot Living Home, which made its mark on the industry by gleaning a LEED Platinum rating, the highest possible distinction of greenness from the USGBC. The prototype, built in Santa Monica, serves as a show house for demonstrating what can be done when you bring together an all-star architect (Ray Kappe), a generous sum of money, and one of the nicest climates anywhere. For the time being, a house as exceptionally green and exquisitely beautiful as this home remains out of reach of the majority, but that's not to say that a good green house can't be had with an average combination of architect, budget and climate. Living Homes stands as an ideal against which to measure other projects.

Humanitarian Architecture and the Open Architecture Network
Fortunately, designers with equal brilliance and know-how are balancing this equation by bringing affordable, green solutions to people at the other end of the spectrum. Thanks in large part to Architecture for Humanity, today's public has a greatly increased awareness of humanitarian approaches to design and architecture, having seen the housing crises brought about by disasters like the South Asian tsunami and the Gulf Coast hurricanes. Placing tools for building low-cost, locally-appropriate shelters directly into the hands of those who need them means empowering communities to create their own pathways toward recovery.

Now Architecture for Humanity has launched the Open Architecture Network as a way to make those tools freely available and adaptable for anyone who needs them. Already OAN has become a model for other industries and causes whose ultimate goal is widespread distribution of – and access to – means for people to improve their own situations, whether it's clean water and sanitation, food, medicine or infrastructure. It's an inspiring leap towards open practices for the global good.

Iconic building
While widespread changes in residential housing and ordinary buildings are necessary for reductions along the lines of 2030 carbon neutrality, the more iconic buildings that have adopted radically green principles tend to become the reference point for public understanding of what a different kind of building looks like. Also, these kinds of buildings often look very literally different, as is the case with the Gherkin and the Reichstag. One of the premier icons on the block these days is Hearst Tower in New York City (pictured here). The tenants of this Manhattan skyscraper could be considered guinea pigs in an experiment proving that worker productivity and satisfaction increase in an environment full of daylight, clean air, open spaces and greenery. There won't be any complaints when the bills come, either, since the building uses a fraction of the energy of any building of comparable size, and reuses water where possible.

Patrick Rollens, editor of Worldchanging Chicago, recently told us about another building emerging at the edge of green -- the Alberici office in St. Louis, Missouri, which was just awarded the highest LEED rating ever given by the USGBC.

The first thing Alberici visitors notice is the massive wind turbine that graces the 13.86-acre site...Under normal use, the turbine generates about 18% of the 110,000 square foot building's power, which is about equivalent to powering all the electric lights in the facility. That 18% generation capacity is by design rather than the limits of technology. If the building generated any more electricity, batteries would be necessary to contain the leftovers, or additional technology would be required to sell the extra kilowatts back to the St. Louis grid.

This is the kind of achievement that marks a reach toward more definitive reductions and design reconsiderations. The total on-site production of renewable energy for the building's needs shows a commitment to zero-consumption that, if Ed Mazria and others had their way, would be true for every building built from here forward. And Alberici was not even a brand new building, but a renovation of a defunct 50-year-old manufacturing facility.

Green roofs and facades
One of the great applications that can make an old building greener -- ecologically and quite literally -- is the use of living plants as exterior treatments on roofs and walls. Living surfaces help insulate and cool down a building's interior, regulate the urban heat island effect, offer air filtration and rain catchment, and beautify the structure. We've written in depth about green roofs, including some of the major projects in the works like the top of Renzo Piano's new California Academy of Sciences. We've talked a bit less about green walls, but the vertical version is gaining increased interest, both for outdoor and indoor use. Because initially these greening projects seemed (and probably were) too ambitious an undertaking for the average homeowner, some companies now make preseeded, prefabricated tile systems that allow almost anyone to add greenery according to their own parameters and available space. Habitile and Toyota are just two working towards simple designs for greening the built environment. Parallel concepts now exist, as well, for solar and wind power.

Imagining different futures
Of course, the best places and spaces of the future remain unbuilt, perhaps even unimagined... yet. But we're getting better tools for imagining the future. We're beginning, even, to apply those new imaginings to architecture. And as we dream differently about the buildings and neighborhoods in which we live, we may well find that they change to become something better than we could have expected.

Architecture to inspire learning

In my recent presentations I have been using a picture of The Saltire Centre at Glasgow Caledonian University to emphasize how libraries are now becoming flexible learning spaces in the physical as well as the online world. The Saltire Centre is an excellent example of a purpose-designed building which provides an inspirational learning environment. If you get the chance to see Les Watson, the champion behind the project to build the Saltire, speak - as he did at the recent Executive Briefing organized by Talis at CILIP - take it.

The social event at the National Library of Finland's Triangle Seminar was held in the new Library of the University of Technology in Tampere. This is another library building providing an inspirational working environment. At its core is a staircase providing impressive views from the ground floor to the roof. Mea culpa: I didn't manage to get a photograph of it to add to the others I took in Tampere.

On the subject of library buildings, I highly recommend a look at this talk by Joshua Prince-Ramus filmed at the 2006 TED Conference. Joshua is the architect of the Seattle Central Library. The majority of his presentation takes you on a journey through the design and realization of this innovative library building. It is well worth letting the video run on beyond the section on the library; it is a fascinating insight into his visualization, design and construction of public buildings.

A Conceptual Architecture for Search

I have previously written about the Top 17 Search Innovations outside of Google. Clearly, Google is not going to take this onslaught lying down. As Alex Iskold wrote in an article on the Read/WriteWeb, these types of changes slowly make their way into the mainstream. Google has already introduced personalized search; it's only a matter of time before many or all of these features get included into the main Google search engine. [Naturally, I will be happy to help with suggestions !]

As more and more features get crammed in, mainstream search engines like Google and Yahoo! will face challenges from the Innovator's Dilemma - if not integrated properly, ongoing relentless addition of features can not only make the user interface cluttered and difficult to use, but can also degrade the architecture by making it horribly complex and difficult to change.

So the key question becomes: what would the overall architecture of Google (or any mainstream web search engine) look like, if it included most of these features? In this post, we will take a speculative look at a unifying architecture - a conceptual look at how a general-purpose search engine like Google or Yahoo! might set up their architecture so that these and other new features could be easily added while maintaining overall architecture coherency. This is a purely intellectual exercise - no doubt each of the major search engines will evolve their own strategy and architecture to deal with these issues.

Search Engine Architecture

As the above image shows, the overall architecture is split up into sections: the query interface, server components, the results interface, saved-search agents and support for alternative results platforms. Of course, not every search would use all of these features, but the search would optionally be routed through some of these engines as appropriate.

Let us take a quick look at the various sections:

1. Query Interface
One key change to the query interface in the future is the likely addition of search parameters, which can use the magic of Ajax to appear automatically as needed. Parameters can be classified into two types: General parameters, such as freshness dates and content type, and Domain-specific parameters for vertical search queries.

2. Server components
In the future, the simple "search box" on the Google front page could hide a variety of specialized search engines behind it (a rough sketch of how they might be composed follows the list):
- Pre-processing support: Personalization, Natural language processing, semantic analysis
- Algorithmic changes: Rich content search, social input (reputation-based), self-optimization
- Source restrictions: Restricting the scope of the search to trusted sources and/or to a specific vertical
- Post-processing support: Clustering, related tags, support for services
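To make the routing concrete, here is a minimal Java sketch of such a pluggable pipeline. Every name in it (SearchPipeline, QueryPreprocessor and so on) is invented for illustration; this is one way the optional stages above could be composed, not how any real engine is built.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// All names here are invented for illustration; no real engine exposes such an API.
interface QueryPreprocessor   { String rewrite(String query, String userId); }
interface SourceRestrictor    { List<String> allowedSources(String query); }
interface ResultPostprocessor { List<String> refine(List<String> results); }
interface CoreIndex           { List<String> search(String query, List<String> sources); }

class SearchPipeline {
    private final List<QueryPreprocessor> pre = new ArrayList<>();
    private final List<SourceRestrictor> restrict = new ArrayList<>();
    private final List<ResultPostprocessor> post = new ArrayList<>();
    private final CoreIndex index;

    SearchPipeline(CoreIndex index) { this.index = index; }
    SearchPipeline addPre(QueryPreprocessor p)       { pre.add(p); return this; }
    SearchPipeline addRestrictor(SourceRestrictor r) { restrict.add(r); return this; }
    SearchPipeline addPost(ResultPostprocessor p)    { post.add(p); return this; }

    List<String> run(String query, String userId) {
        // 1. Pre-processing: personalization, natural language parsing, semantic analysis...
        for (QueryPreprocessor p : pre) query = p.rewrite(query, userId);

        // 2. Source restrictions: trusted sources and/or a specific vertical.
        List<String> sources = new ArrayList<>();
        for (SourceRestrictor r : restrict) sources.addAll(r.allowedSources(query));

        // 3. Core retrieval and ranking (rich content, social input, self-optimization live here).
        List<String> results = index.search(query, sources);

        // 4. Post-processing: clustering, related tags, hooks for other services.
        for (ResultPostprocessor p : post) results = p.refine(results);
        return results;
    }

    public static void main(String[] args) {
        CoreIndex toyIndex = (q, s) -> Arrays.asList("result for: " + q + " from " + s);
        SearchPipeline pipeline = new SearchPipeline(toyIndex)
                .addPre((q, u) -> q.trim().toLowerCase())           // stand-in for personalization/NLP
                .addRestrictor(q -> Arrays.asList("trusted-news"))  // stand-in for a vertical restriction
                .addPost(results -> results);                       // stand-in for clustering/related tags
        System.out.println(pipeline.run("  Climate REPORTS ", "user-42"));
    }
}
```

The point of the sketch is simply that each optional engine is a stage that can be added or dropped without disturbing the core index, which is what keeps the architecture coherent as features pile up.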

3. Results interface
Long term, the results interface should include support for enhanced types of results visualization, such as clustering and related tags, query refinement (using filters or suggestions), along with support for saving searches (user agents) and alternative results platforms - such as Mobile, RSS feeds, RIAs, Emails and Web Services.

Finally, a big win for the user would be support for Discovery: a process by which the search engine knows enough about you, as a user, to find content of interest specifically for you (articles, news, blog posts and so on) and notifies you about it, preferably using an RSS feed.
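A saved-search agent, in this speculative picture, could be little more than a stored query that is re-run and filtered against what the user has already seen; the class below is a hypothetical sketch of that idea, with the engine itself abstracted away.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

/** Toy saved-search agent: re-runs a stored query and surfaces only results the user hasn't seen. */
class SavedSearchAgent {
    private final String query;
    private final Function<String, List<String>> engine;  // stand-in for the real search backend
    private final Set<String> alreadySeen = new HashSet<>();

    SavedSearchAgent(String query, Function<String, List<String>> engine) {
        this.query = query;
        this.engine = engine;
    }

    /** New results since the last check; these would be pushed out via RSS, email, mobile, etc. */
    List<String> newResults() {
        return engine.apply(query).stream()
                .filter(alreadySeen::add)   // Set.add returns false for anything already delivered
                .collect(Collectors.toList());
    }
}
```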

That concludes our look at a conceptual architecture for search. Is this simple, yet powerful? Or just simple-minded? Leave a comment or email me and let me know!

Architecture re-housed: Part 1

A break from the standard blogging currency of comment, criticism, conjecture and pointing elsewhere … here’s a series of entries about one of my own projects and how it’s been confirming my growing concern about my generation’s appreciation (or rather, lack thereof) of the history of housing design:

Part 1: to a degree

In November last year I was asked by a client to develop a housing layout for a small site on the edge of Stourbridge in the West Midlands. The brief, set by Black Country Housing Association, called for an ‘exemplar’ environmentally friendly scheme. A layout had already been prepared by others using three pairs of semi-detached properties but large storm and foul drains had subsequently been found to be running through the centre of the site and they required a substantial ‘wayleave’ (zone to be kept free of building) on either side. Very little room for development was remaining.

Can you continue the ‘green’ agenda of the initial scheme? Could we still achieve the same number of units on half the site? Can we have a plan by next week? Can you note that the brief asks for ‘award winning architecture’?

Yes, yes, yes and - depending on your definition of award, winning, or for that matter, architecture - yes:

QR-concept-model2

QR-concept-model1 QR-concept-model3

We have a tried and tested technique in our office. It's a simple thing but its value is often overlooked by those obsessed with the black/white, and/or, left/right, x/y world of the perpendicular. It's called, for want of a more poetic name, 45 degree planning. It's come to my rescue often. So often, in fact, that it risks becoming a style rather than a technique, but for the moment I shall stand by the assertion that I'm understanding the action rather than just reaching for a result. It's a simple thing, but unlike its more popular sibling - 90 degrees - it seems to require a certain deftness. It feels more like a vector: a point on a line of infinite possibilities, rather than a line between two points of known characteristics (*cough* thank you D & G *cough*).

Three existing conditions leapt off the site plan in that first meeting to create the response above: the position of the neighbouring house to the north, the narrow space forced on us by the drainage restrictions, and the north-south orientation of the site. The last one created the need for the solar gain to be equally enjoyed by each property, front and back, and for the potential heat loss to the north to be avoided.

A blustery weekend in a coastal cottage with pencil, paper and Jane Eyre on the TV and it developed into this:

QR-concept1

QR-concept2 QR-concept3

QR-concept-plan

The crucial factor in the development beyond that initial site plan proved to be the roof. You can see me noodling about with it on the first 3 sheets (noodling - verb: to apply, through subtle, successive iterations, the full extent of one's many years of architecture experience to a design problem). The result is a type of scissor roof arrangement in which each plot has two different pitches, one half of which connects to the following plot as the houses step back. We get visual continuity and interest in the wider street scene, irrespective of level changes, and the arrangement also creates an opportunity (and a need) to deal with the intersection detail directly above the centre of the floor plan. Ventilation possibilities? Check. Natural light inlet? Check. Character? Innovation? Place making? Check, check and (hello CABE) check.

Returning to the office that week I discussed the layout and house design with older, wiser colleagues. Go and take a look at the work of Eric Lyons, they said. Eric who? said I, not knowing his work. The following week brought the announcement that the RIBA would be mounting an exhibition of his work at Portland Place. The coincidence seemed too great to ignore. I booked train tickets.

Architecture, Policy, Governance, Strategy

A theme that runs through several articles this month is the increasing need for IT to work harder than ever before to understand the requirements of business units and individual users. This has always been critical to IT’s success, but the issue continues to arise in new areas.

One reason for this is that IT is getting more tools for implementing business priorities. In this issue, Robin Layland writes about the emerging market for data leakage protection software. These relatively new software packages are designed to let IT be more effective in preventing the loss of personal customer or employee information, or the misappropriation of internal corporate data. But to use these tools to maximum benefit, IT has to have a clear picture of corporate priorities and policies beyond obvious concerns like Social Security numbers and medical records.

Similarly, Peter Sevcik and Rebecca Wetzel report on an end user survey on application performance management. Peter and Rebecca found that implementing some best practices around APM is indeed likely to result in better performance and greater satisfaction for the applications’ end users. And in his column, Peter takes off from these findings, and demonstrates how the Apdex methodology can help you uncover areas of sub-par performance.

Finally, in the second of a two-part article, “Security Architectures And The New IT Organization,” end user Stuart Berman shows how his organization was able to work with the company’s business units to implement strong security systems even in an environment of IT budget cuts. Stuart’s organization used internal chargeback and external outsourced services to provide the needed resources.

Stuart Berman holds up GM’s security groups as a model for the “new IT organization.” He wrote that, “They have no servers, no datacenters and very few security staff. The staff they have are dealing with architecture, policy, governance and strategy development.”

That sounds like a pretty good job description for much of what the IT department is likely to become over the next few years. Not every company will go as far as GM has gone in terms of outsourcing these capabilities; every enterprise will face its own cost/benefit decisions in this area. (Though Stuart Berman concludes, “Let me assure you that our focus is always on cost containment and that we continue to find evidence that we are adopting the inevitable.”)

IT “plumbing” may have become commoditized, leading some to agree with the thesis that “IT doesn’t matter.” But IT does matter, and will always matter, when it’s focused on those four areas—architecture, policy, governance and strategy development.


Architecture and 'robotic ecologies'

The University of Virginia (UVA) School of Architecture has started a new program about 'robotic ecologies' that aims to answer the question: Will robots take over architecture? As the program leader said, "This research is not just about architectural machines that move. It is about groups of architectural machines that move with intelligence." Apparently, buildings tracking our movements and adapting their shape or texture according to human presence are not far-fetched. Maybe one day we'll talk to our homes and they'll answer...

The Super Galaxy architectural project

Above is a picture of "Super Galaxy, a NYC Tropospheric Refuge." This is "a high-rise apartment complex that's constantly in motion and responds to the needs of its inhabitants." (Credit for image: UVA School of Architecture; credit for caption: The Hook). Here is a link to a larger version of this picture.

This project, along with several others, has been led by Jason Johnson, the architecture professor who runs the Robotic Ecologies seminar, and his partner Nataly Gattegno.

Now, let's return to the story reported by Dave McNair, in The Hook, Charlottesville, Virginia, to discover the robots used in Johnson's seminar.

  • the Rave (short for R.A.V.E., or Reactive Acoustic Variable Expansive Space), a mutating scissor-like structure that actively stores solar energy during the day to create a dynamic dance space with light, sound, and pulsations at night;
  • the Iris (short for I.R.I.S., or Intelligent Responsive Integrated Space), a layered facade that could potentially sense and optimize itself relative to air flow, light levels, and pollutant levels;
  • and the Tilt (short for T.I.L.T., or Transformative Intelligent Loop Tower), an aerodynamically calibrated high-rise building prototype that adjusts the shape of its woven floor plates from a circle to an ellipse in response to weather conditions, while an array of LED displays registers wind velocity, temperatures, and the movements of people inside.

But what could these robots be used for in architecture design?

"Robots can sense scenarios unfolding, make a plan based on past and present experiences, and then act in an appropriate way," says Johnson. "What makes robots so intriguing is their ability to learn and optimize through feeding back information into these phases." As Johnson observes, robots are getting smaller, smarter, and cheaper, and can be interconnected using WiFi, radio, bluetooth, and other technologies.
"It is now more efficient and economical to build and deploy lots of small clusters of networked expendable robots than a single expensive one," says Johnson. "These robots are now being incorporated into biological entities, into urban streetscapes for monitoring, and into industry for control and efficiency."

Now, let's move to how Johnson himself describes his research about Robotic Ecologies.

The crossing of architecture and robotics represents one of the most promising and perhaps exigent technological intersections in recent times. Robots are sensing, thinking and moving entities. They are different from most machines in that they are capable of intelligent behavior – the capacity to learn, adapt and act on their senses and intuitions.
Groups of robots, or robotic ecologies, are unique in their capacity to work as an organized system: rather than merely acting on their individual desires, robotic ecologies can work collectively in swarms or packs. Without much fanfare, an extraordinary new phylum of intelligent machines is coming to life in laboratories, studios and machine shops across the planet. Designers are building and programming kinematic self-replicating machines, modular self-assembling robots, fields of sun-tracking robotic sunflowers, and the like.

The Architecture of Mailinator

Almost 3.5 years ago I started the Mailinator(tm) service. I got the bulk of the idea from my drunk roommate at the time, and the first incarnation took me all of about 3 days to code up. In some senses it was a crazy idea. As far as I know, it was the first site of its kind: a web-based email service that allowed any incoming email to create an inbox. No sign-up. No personal information. Send email first, check email later.



This became ridiculously handy for things like signing up for websites that send you one confirmation email, then save or sell or spam your email address forever. And of course, it *is* very handy for users. But think about it from Mailinator's side: it's basically signing up to receive spam for that address forever. That's a tall order, and one that seems to have the possibility of a terrible demise. Someday, enough email could come in to simply smush Mailinator. But, as of this writing, that day isn't today.

I have in that 3.5 years received hundreds of "thank you" emails, a pile of "it doesn't work" emails, a radio interview, articles in the Washington Post, New York Times, and Delta Skymiles magazine, 1 call from both Scotland Yard and the LAPD, and a total of 4 subpoenas (1 of those being a Federal Grand Jury subpoena issued by the FBI).

At this point, Mailinator averages approximately 2.5 million emails per day. I have seen hourly spikes that would result in about 5 million in a day. (Edit: Feb 2007 - One month later we're averaging 4.5 million emails a day with spikes over 6 million.) In addition, the system also services several thousand web users and several thousand RSS users per day.


In the world of email services, this probably isn't all that much. The most interesting part to me is that the complete set of hardware that Mailinator uses is one little server. Just one. A very modest machine with an AMD 2GHz Athlon processor, 1GB of RAM (although it really doesn't need that much), and a boring 80GB IDE hard drive (check ServerBeach's Category 1 Powerline 2100 for the exact specs). And honestly, it's really not very busy at all. I've read the blogs of some copycat services of Mailinator where the owners were upgrading their servers to some big iron. This was really the impetus for me writing down this document - to share a different point of view.

Mailinator easily handling a few million emails a day wasn't always the case. The initial Mailinator system was quite busy, and in fact got overwhelmed about a year ago when email traffic started topping 800,000 a day (that's my recollection anyway). In an effort to squeeze life out of the server, and as an exercise in putting together some principles I'd always championed about server development, I rewrote the system from scratch. I have no idea what the current limit of the existing system is, but at 2.5 million a day, it's not even breaking a sweat.

If you don't know what Mailinator is, take a small tour through the (rather funny) FAQ.

Lossy lossie lossee

There is a very important point to note about the Mailinator service. And that is, that it is indeed - free. Although it might not seem like it, this has an immense impact on the design (as you'll see). It allowed me to favor performance across the board in the design. This fact influenced decisions from how I dealt with detecting spam all the way down to how I synchronized some code blocks. No kidding.

The basic tenet is that I do not have to provide perfect service. In order to do that, my hardware requirements would be much higher. Now that would all be fine and dandy if people were paying for the service; I could then provide support and guarantees. But given it's free, I instead went for, in order, these two design decisions:

1) Design a system that values survival above all else, even users (as of course, if it's down, users aren't really getting much out of it)
2) Provide 99.99% uptime and accuracy for users.

If you wonder what I mean by "survival" in the first line, it basically means that Mailinator is attacked on literally a daily basis. I wanted to make a system that could survive the large majority of those attacks. Note - I'm not interested in it surviving all of them, because again, if some zombie network decided to denial-of-service me, I really have no chance of thwarting it without some serious hardware. The good news is that if someone goes to all the trouble of smashing Mailinator (again referencing the fact we're lossy), I really don't lose much sleep over it. It sucks for my users - but there really isn't anything I can do anyway. I'm not trying to be cavalier about this - I went to great lengths to handle attacks. I'm just saying it's a cold reality that I simply cannot stop them all. Thus I accept them as part of the game.

The platform

The original Mailinator used a relatively standard unix stack of applications, including a Java-based web application running in Tomcat. Mailinator is, and was, of course, always just a hobby. I had a day job (or 2), so months of development was never an option. I chose Java for no other reason than I knew Java better than anything else. For email, it used sendmail with a special rule that directed any incoming email to mailinator.com to one single mailbox.

Sendmail --> disk --> Mailinator <-- Tomcat Servlet Engine

The Java-based Mailinator app then grabbed the emails using IMAP and/or POP (it changed over time) and deleted them. I should have used an mbox interface but I never got around to implementing that. The system then loaded all emails into memory and let them sit there. Mailinator only allowed about 20,000 emails to reside in memory at once, so when a new one came in, the oldest one got pushed out.

The FAQ advertises that emails stick around for "a couple of hours." And that was true, but exactly how long depended on the rate of incoming emails. You'll also note an interesting side effect: since all emails lived in memory, if the server came down, all emails were lost! Talk about exploiting the fact that my service was free, huh? This may seem dubious, but the code was really quite stable and ran for weeks and months without downtime.

I thought about saving emails into a database of course, but honestly, all that bought me was emails that stuck around longer. And that, in and of itself, sort of went against my intent for Mailinator. The idea was: sign up for something, go to Mailinator, click the link, and forget about it. If you want a mailbox where emails last a few days, that's fine, but there are many other alternatives out there - that's not what Mailinator is about. I forgot the database idea and now shoot for mails that last somewhere around 3-4 hours.

This all worked fabulously for a while. It pretty much filled up all 1GB of RAM on the server. Finally, when the incoming email rate started surpassing 800,000 a day, the system started to break down. I believe it was primarily the disk contention between the unix mail apps and the Java app locking mailboxes. Regardless, there were many issues with that system that bugged me for a long time. The root of most of those problems really boiled down to one thing - the disk. The disk activity of sendmail, procmail, logging and whatever else was a silly bottleneck. And it needed to go.

More than a year ago now I did a full rewrite. Much of the anti-spam code that I'll describe later was already in this code-base but was improved and extended for the new system.

Synchronous vs. Asynchronous I/O

I've read a fair number of articles on the wonders of asynchronous I/O (java's NIO library). I don't doubt them but I decided against using it. Primarily, again, because I did a great deal of work in multithreaded environments and knew that area well. I figured if I had performance issues later, I could always switch over to NIO as a learning experience.

The biggest thing I knew I needed to do with Mailinator was to remove the unix application components. Mailinator needed to stop outsourcing its email receipt and do it itself. This basically meant I needed to write my own SMTP server. Or at least, a subset of one. Firstly, Mailinator has never had the ability to send email so I didn't need to code that part up. Second, I had really different needs for receiving email. I wanted to get it as fast as possible -or- refuse it as fast as possible.

SMTP has a rich dialog for errors but I chose to only support one error message. And that error is, appropriately enough - "User Unknown". That's a touch ironic since Mailinator accepts any user at all. Simply said, if you do anything that the Mailinator server doesn't like - you'll get a user unknown error. Even if you haven't sent it the username yet.

I looked at Apache James as a base, which is a pure Java SMTP server, but it was way too comprehensive for my needs. I really just found some code examples and the SMTP specs and wrote things basically from scratch. From there, I was able to get an email, parse it, and put it right into memory. This bypassed the old system's step of writing it to disk entirely. From wire to user, Mailinator mail never touches the disk. In fact, the Mailinator server's disk is pretty darn idle, all things considered.
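The post doesn't include source code, but a receive-only SMTP loop of the kind described might look roughly like the sketch below: accept the message straight into memory, or refuse it with the one all-purpose error. This is an illustrative toy (no pipelining, no size limits, simplified replies), not Mailinator's actual implementation.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

/** A toy receive-only SMTP loop: accept mail straight into memory, or refuse it fast. */
public class TinySmtpReceiver {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(2525)) {          // real SMTP listens on port 25
            while (true) {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    s.setSoTimeout(2000);                             // silent clients get dropped
                    out.println("220 mail.example.com ESMTP");        // (real SMTP wants CRLF line endings)
                    StringBuilder message = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) {
                        String cmd = line.toUpperCase();
                        // Bounce / IP-frequency / subject filters would be consulted around here.
                        if (cmd.startsWith("HELO") || cmd.startsWith("EHLO")
                                || cmd.startsWith("MAIL FROM") || cmd.startsWith("RCPT TO")) {
                            out.println("250 OK");
                        } else if (cmd.startsWith("DATA")) {
                            out.println("354 End data with <CRLF>.<CRLF>");
                            while ((line = in.readLine()) != null && !line.equals(".")) {
                                message.append(line).append('\n');    // straight to memory, never to disk
                            }
                            out.println("250 OK message accepted");
                        } else if (cmd.startsWith("QUIT")) {
                            out.println("221 Bye");
                            break;
                        } else {
                            out.println("550 User unknown");          // the one-size-fits-all refusal
                            break;
                        }
                    }
                    // "message" would now be handed to the in-memory store.
                } catch (Exception perConnection) {
                    // timeouts and malformed sessions just end that one connection
                }
            }
        }
    }
}
```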

Now, to address persistence concerns right away - Mailinator doesn't run diskless, but it does run very asynchronously with regard to the disk. Emails are not written to disk EVER unless the system is coming down and is instructed to write them first (so it can reload them upon reboot). This little fact has been very handy when I've been subpoenaed. I simply do not have access to any emails that were sent to Mailinator in the past. If it is possible for me to get an email, so can you, just by checking that inbox. If you can't get it, that means it's long deleted from memory and nothing is going to get it back.

Mailinator also used to do logging (again, shut off because of pesky subpoenas). But it did it very "batchy": it wrote several thousand log lines to memory before doing one disk write. In effect, we never want to have contention based on the incredibly slow disk.
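A logger along those lines, buffering lines in memory and touching the disk only once per batch, could be sketched like this (my own illustration, not Mailinator's code; the batch size is made up):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;

/** Buffers log lines in memory and writes them to disk in one batch every FLUSH_AT lines. */
class BatchedLogger {
    private static final int FLUSH_AT = 5000;           // hypothetical batch size
    private final List<String> buffer = new ArrayList<>(FLUSH_AT);
    private final String path;

    BatchedLogger(String path) { this.path = path; }

    synchronized void log(String line) {
        buffer.add(line);
        if (buffer.size() >= FLUSH_AT) flush();          // one disk touch per few thousand lines
    }

    synchronized void flush() {
        try (Writer w = new FileWriter(path, true)) {    // append mode
            for (String line : buffer) w.write(line + System.lineSeparator());
        } catch (IOException e) {
            // a lossy service can afford to drop a batch rather than block on the disk
        }
        buffer.clear();
    }
}
```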

Now if this all sounds a bit shaky, as in we might just lose an email now and then - you're right. But remember, our goal is 99.99% accuracy, not 100%. That's an important distinction. The latest incarnation of Mailinator literally runs for months unattended. We do lose emails once in a while - but it's rare and usually involves a server crash. We accept the loss, and by far most users never encounter it.

Emails

The system now is one unit. The web application, the email server, and all email storage run in one JVM.


The system uses under 300 threads. I can increase that number but haven't seen a need as of yet. When an email arrives (or attempts to arrive) it must pass a strong set of filters that are described below. If it gets past those filters it is then stored in memory - however, it is first compressed to save in-memory space. Over 99% of emails that arrive are never looked at, so we only ever decompress an email if someone actually "looks" at it.
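Compress-on-arrival, decompress-on-view might look something like the following; GZIP is just a plausible stand-in here, since the post doesn't say which codec was actually used.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

/** Stores an email compressed; the body is only inflated if someone actually opens it. */
class StoredEmail {
    private final byte[] compressedBody;

    StoredEmail(String rawBody) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bytes)) {
            gz.write(rawBody.getBytes(StandardCharsets.UTF_8));
        }
        this.compressedBody = bytes.toByteArray();   // this is all that lives in the pool
    }

    /** Called only for the tiny fraction of emails that are ever looked at. */
    String body() throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressedBody))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```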

Because of this, I am able to store many more emails than the original system's 20,000. The current Mailinator stores about 80,000 emails and uses under 300MB of RAM. I probably should increase this number, as plenty of RAM is just sitting around. The average email lifespan is about 3-4 hours with this pool. The amount of incoming email has gone way up, so even by increasing this pool, we're largely staying steady as far as email lifespan. I could probably kick that up to 200,000 or so and increase the lifespan accordingly, but I haven't seen a great need yet.

Another inherent limit that the system imposes is on mailboxes themselves. Popular mailboxes such as joe@mailinator.com and bob@mailinator.com get much more email than average. Every inbox is limited to only 10 emails. Thus, popular boxes inherently limit the amount of email they can occupy in the pool. Use of popular inboxes is discouraged anyway, and they generally become the creme de la cesspool of spam.
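Putting the two limits together, a global cap with oldest-first eviction plus the 10-email cap per inbox, a toy version of the pool could look like this (the bookkeeping is deliberately simplified and is not how Mailinator actually stores things):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Toy email pool: a global cap with oldest-first eviction, plus a per-inbox cap. */
class EmailPool {
    private static final int GLOBAL_CAP = 80_000;    // numbers taken from the text above
    private static final int PER_INBOX_CAP = 10;

    private final Deque<String[]> arrivalOrder = new ArrayDeque<>();   // [inbox, body], oldest first
    private final Map<String, List<String>> inboxes = new HashMap<>();

    synchronized void add(String inbox, String body) {
        List<String> box = inboxes.computeIfAbsent(inbox, k -> new ArrayList<>());
        if (box.size() >= PER_INBOX_CAP) box.remove(0);                // popular boxes self-limit
        box.add(body);
        arrivalOrder.addLast(new String[]{inbox, body});
        while (arrivalOrder.size() > GLOBAL_CAP) {                     // oldest email pushed out
            String[] oldest = arrivalOrder.removeFirst();
            List<String> oldBox = inboxes.get(oldest[0]);
            if (oldBox != null) oldBox.remove(oldest[1]);              // eviction bookkeeping simplified
        }
    }

    synchronized List<String> read(String inbox) {
        return new ArrayList<>(inboxes.getOrDefault(inbox, List.of()));
    }
}
```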

Two more memory-conserving measures are that no incoming email can be over 100k and all attachments are immediately discarded. That latter feature went in years ago, but it obviously ruins this whole new wave of image spam (if you see a few seemingly "empty" emails in some popular boxes, they might have been image spam that got their images thrown away).

Spam and Survival

I'd like to emphasize here that Mailinator's mission is NOT to filter spam. If you want penis enlargement or sheep-of-the-month club emails, that's pretty much what Mailinator is good for. We are clear in the FAQ: Mailinator provides pretty good anonymity - but we do NOT guarantee it. We also do NOT guarantee ANY privacy. It's really easier that way for us. Still, it does a pretty damn good job even so. We might log you (we used to, and it might get turned on again someday, never know) and we DO respond to subpoenas (that whole "jail" thing is a strong motivator).

So, in essence I have no real interest in filtering out spam. I do however, have a great deal of interest in keeping Mailinator alive. And spammers have this nasty habit of sending Mailinator so much crap that this can be an issue. So - Mailinator has a simple rule. If you do anything (spammer or not) that starts affecting the system - your emails will be refused and you may be locked out.

In the new system I created a data structure I call an AgingHashmap. It is, as the name indicates, a hashmap (String->int) whose elements "age".

The first type of spammer I encountered was one machine blasting me with thousands of emails. So now, every time an email arrives, its sender's IP is put into an AgingHashmap with a counter of 1. If that IP does not send us any more email for (let's say) a minute, then that entry automatically leaves the AgingHashmap. But let's say that IP address sends us another email 2 seconds later. We then find the first entry in the AgingHashmap and increase that counter to 2. If we see another email from that IP, it goes to 3 and so on. Eventually, when that counter reaches some threshold, we ban all emails from that IP for some amount of time.

We can put this in words as so (values are examples):
If any IP address sends us 20 emails in 2 minutes, we will ban all email from that IP address for 5 minutes. Or more precisely, we will ban all email from that IP until it stops trying to send to us for at least 5 minutes.

This is really what the AgingHashmap is good for. We can setup some parameters and detect frequency of some input, then cause a ban on that input. If some IP address sends us email every second for 100 days straight, we'll ban (or throw away) every last email after the first 20.
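The AgingHashmap itself isn't shown in the post, but its described behavior, per-key counters that are forgotten after a quiet period and trigger a ban past a threshold, can be approximated as below. For brevity this sketch collapses the counting window and the ban window into a single quiet period; the same structure works keyed by sender IP or, as described later, by subject line.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Approximation of the described AgingHashmap: per-key counters that expire after a quiet period. */
class AgingHashmap {
    private static final class Entry { int count; long lastSeen; }

    private final Map<String, Entry> map = new ConcurrentHashMap<>();
    private final long quietMillis;   // an entry is forgotten if its key stays silent this long
    private final int banThreshold;   // hits within the window before we start refusing

    AgingHashmap(long quietMillis, int banThreshold) {
        this.quietMillis = quietMillis;
        this.banThreshold = banThreshold;
    }

    /** Records one hit for the key (an IP address, a subject line, ...) and says whether to refuse it. */
    boolean hitAndCheckBanned(String key) {
        long now = System.currentTimeMillis();
        Entry e = map.compute(key, (k, old) -> {
            Entry entry = (old == null || now - old.lastSeen > quietMillis) ? new Entry() : old;
            entry.count++;
            entry.lastSeen = now;
            return entry;
        });
        return e.count > banThreshold;
    }
}
```

With something like new AgingHashmap(2 * 60_000, 20) keyed on IP, the twenty-first email inside the window starts getting refused, and the entry only ages out once that sender goes quiet.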

Here's a graph of an average 24 hours of banned IP address emails. Notice at 10am and 11am some joker (i.e., some single IP address) sent us over 19000 emails per hour.



I do have some code that has Java talk back to unix's iptables system to do very hard blocking of IP addresses, but it's not on right now. Partially because there's no need (yet) and partially because I like to see the stats.

The funny part of this is the error Mailinator gives. Remember the "User Unknown"? Once an IP address is banned, when it tries to open a new connection it will send the SMTP greeting "HELO". Mailinator will then reply "User Unknown" and close the connection. Of course, it never even got to the username.

Zombies

The next problem came from zombie networks. Now we were getting spam from thousands of different IPs, all sending the same message. We could no longer key in on IP address. As a layer of defense past IP, we created an AgingHashmap based on subjects. That is, if we get (again, example numbers) something like 20 emails with the same subject within 2 minutes, all emails with that subject are then banned for 1 hour.

Here's a similar graph. Keep in mind these emails got past the IP filter - so basically they are "same subject" emails from many disparate sources.



You could argue we should ban them forever, but then we'd have to keep track of them and the Mailinator system is inherently transient. Forgetting is core to what it does. This blocking is more expensive than IPs as comparing subjects can be costly. And of course, we have to have enough of a conversation with the sending server to actually get the subject.

Pottymouth!

Finally, we ran into some issues on emails that just weren't cool. As I said, I'm far more interested in keeping Mailinator alive than blocking out your favorite porn newsletter. But, some unhappy people used Mailinator for some really not happy purposes. Simply put, as a last layer, subjects are searched for words that indicate hate or crimes or just downright nastiness.

Boing

Another major influx that happened early on was a plethora of bounce messages. Now that's sort of odd, isn't it? I mean, Mailinator doesn't send email. In fact, it CAN'T send email, so how could it get bounce messages? Well, some spammy-type folks thought it'd be neat to send out spam from their servers using forged Mailinator addresses as the return address. Thus, when those emails bounced, the bounces came here.

What's worse is I still get email from people who think Mailinator sent them spam. It's very frustrating to defend myself against people who are ignorant of how email works and ready to crucify me for sending them spam (especially ironic given that I run a free, anti-spam website). As I've said in my FAQ - please feel free to add mailinator.com to the tippy tippy tippy top of your spam blacklists. If you EVER get an email from mailinator.com, it's forged spam.

The good news is that bounces are very easy to detect, and are really the first line of our defense. Bouncing SMTP servers aren't particularly evil, they're just doing their job so when I say "user unknown" they believe me and go away.

On an abstract level, here is what happens to an email as it enters the system.



(and to be fair, there might just be another layer or two that's not on that diagram!)
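Reading between the lines of the missing diagram, the layering described in prose (bounce detection, the IP-frequency ban, the subject-frequency ban, then the content screen) amounts to a chain of accept/refuse checks applied in order; a hypothetical sketch:

```java
import java.util.Arrays;
import java.util.List;

/** Run an incoming email through the layers in order; refuse at the first one that says no. */
class FilterChain {
    interface Layer { boolean accept(String senderIp, String subject, String body); }

    private final List<Layer> layers;

    FilterChain(List<Layer> layers) { this.layers = layers; }

    /** true = store the email; false = reply "550 User unknown" and hang up. */
    boolean accept(String senderIp, String subject, String body) {
        for (Layer layer : layers) {
            if (!layer.accept(senderIp, subject, body)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Stand-ins for the layers described in the text, roughly cheapest first.
        Layer bounceCheck   = (ip, subj, body) -> !subj.toLowerCase().contains("undeliverable");
        Layer ipFrequency   = (ip, subj, body) -> true;   // would consult the IP AgingHashmap
        Layer subjFrequency = (ip, subj, body) -> true;   // would consult the subject AgingHashmap
        Layer contentScreen = (ip, subj, body) -> true;   // would screen for the really nasty stuff

        FilterChain chain = new FilterChain(
                Arrays.asList(bounceCheck, ipFrequency, subjFrequency, contentScreen));
        System.out.println(chain.accept("203.0.113.9", "Hello", "hi there"));   // prints "true"
    }
}
```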

Anti-Spam revolt

There are 2 more, somewhat conflicting features of the Mailinator server that should be noted. For one, it's a clear fact that when we're busy, we're busy. An easy DoS against us would be to open a socket to our server and leave it open. This is an inherent vulnerability in any server (maybe especially multithreaded servers). So, as a basic idea, Mailinator closes all connections that are silent for more than a second or two. Actually, the amount of time is variable (read below). Clearly, we are DoS'able by sending us many, many connections, but this blocks at least one trivial way of bringing us down.

Secondly, although we demand that servers talking to us be very speedy, we reserve the right to be very NOT speedy. Here's the logic: when Mailinator is not terribly busy, we still demand responses quickly, but we give responses slowly. In fact, the less busy we are, the slower we give responses. It is possible that sending an email into the Mailinator SMTP server could take a very long time (like 10 or 20 or 30 seconds), even for a very small amount of data.

Why? Well.. think about it. Let's say you're spamming. You want to send out a zillion emails as fast as possible. You want every receiving SMTP server to get your email, deliver it to the poor sod who wants (or doesn't want) weener enlargement and then close the connection so you can go on to the next. If you encounter some darn SMTP server that takes 20 seconds to receive your email, the speed at which you can send out your emails diminishes. You might just even think about avoiding such SMTP servers.

It might be a pipe dream to think this is slowing down any spammers, but this does tend to keep my quieter times lasting longer. And it doesn't really hurt me - or my users. And if we eventually get terribly busy, those delays are scaled down to make sure we don't lose any emails.
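The two opposite demands, impatience with silent clients but deliberately slow replies when load allows, can both be driven by current traffic; a hypothetical sketch (the threshold and delay numbers are made up, not Mailinator's actual values):

```java
import java.io.IOException;
import java.net.Socket;

/** Impatient with silent clients, deliberately slow to answer when load is light. */
class PacingPolicy {
    private static final int BUSY_EMAILS_PER_MIN = 3000;   // made-up threshold for "busy"

    /** Drop the connection if the client says nothing for a second or two. */
    void applyReadTimeout(Socket client) throws IOException {
        client.setSoTimeout(2000);   // milliseconds of allowed silence before reads give up
    }

    /** The quieter we are, the longer we make the sender wait before each reply. */
    void delayBeforeReply(int emailsInLastMinute) throws InterruptedException {
        if (emailsInLastMinute >= BUSY_EMAILS_PER_MIN) return;          // busy: answer immediately
        double idleness = 1.0 - (double) emailsInLastMinute / BUSY_EMAILS_PER_MIN;
        long delayMs = (long) (idleness * 20_000);                      // up to ~20 seconds when idle
        Thread.sleep(delayMs);
    }
}
```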

Sites will ban it

Every time I read some comment about Mailinator, someone always points out something like "Yeah, well sites will start banning any email from Mailinator and then it will be worthless." Guys. It's been 3 years. A handful of sites have indeed blocked email from Mailinator, but my user base and the number of read emails have only gone up. Clearly, people are finding Mailinator more useful than ever.

I have added at times additional domains (like sogetthis.com and fakeinformation.com) that point to mailinator. Often if a site bans mailinator.com proper, you can use one of those to same effect.

Overall

Many copycat sites have appeared over the years which is pretty reasonable. This idea itself is obvious. The only real hurdle was that it seemed impossible to do given the amount of useless email you'd get. But the copycats had the advantage of seeing that Mailinator actually does work, so they knew what to shoot for. Only a few post their daily email numbers but I've yet to see any that come close to mailinator's incoming email (not that this is necessarily a good thing). I also see that many are using an architecture similar to Mailinator's original which is just fine so long as they either don't get any massive increases in email or are happy to keep buying bigger hardware.

Overall, Mailinator has been a great experience. It was a terribly fun exercise in optimization, security, and generally making things work. Thousands of people use it every day, and it's amazing how many people know about it when it comes up in conversation. I've thought many times about how to make a business around it, and there is always an angle, but I've just been too busy with other things.

My hope is that it's useful for you and that you tell your friends.

Football and architecture

Some of the more innovative and exciting buildings these days are linked to the world of sport. This may not be surprising given the vast sums of money - alas, sometimes taxpayers' money - that swirl around sport these days. Take this picture of the Barcelona FC stadium, for example. Ever since Roman days, in fact, sports stadia have been among the most impressive buildings in human civilisation (the arena at Arles, in the south of France, has a spooky, imposing quality of its own, for example).

But of course, if you are a sport-loving Englishman like yours truly, today matters because the FA Cup Final is being held at its traditional home, Wembley (for non-Brits, this is in west London). The new stadium looks pretty damned impressive. The project to build it has not gone at all smoothly (a sign of the possible difficulties we might expect from the London Olympics), but the wait was worth it. It is magnificent.

One of my happiest days as a youngster was in 1978, when my local team, Ipswich Town, beat Arsenal 1-0 to win the FA Cup (the Blues won the European UEFA Cup three years later. Ah, those were the days). Even watching the game on television, you were struck by the atmosphere. In 2000, when Ipswich were promoted in a playoff, I went with friends to the stadium for the last fully competitive game held there before the old stadium was pulled down.

Update: a pity the match between Manchester United and Chelsea did not live up to the billing. Chelsea won. Well done to them (I think one or two Samizdata contributors will be rather chuffed about that).

More dirt on Intel's Penryn / Nehalem architecture


While you've been off dreaming of long-range WiFi, Intel's not forgotten about its Penryn / Nehalem architectures, and thanks to an uber-boring slideshow presentation, we now know more than ever about the forthcoming duo. As expected, there isn't much new on the oft detailed Penryn front, but the fresher Nehalem most certainly piqued our interest; while built on the same 45-nanometer technology as its predecessor, Nehalem is being hailed as "the most dramatic architecture shift since the introduction of the front-side bus in the Pentium Pro in 1996." Attempting to back up such bold claims came news that HyperThreading would be native to Nehalem, and it would "share data at the L1 and potentially, the L3 cache levels," allow eight-core CPUs to clock down to two / four, and boast scalability options to satisfy a wider market. Most intriguing, however, was the "optional high performance integrated graphics" that could reportedly be included on the same processor die, which could certainly prove interesting if crammed into, say, a UMPC. So if you're still not satisfied with the highlights, and don't get enough mundane PowerPoint action from your corporate employment, be sure to hit the read link when your friends aren't looking.

Architecture and interaction design, via adaptation and hackability, Posted by Dan Hill at City of Sound (reblog)

Image and text source: City of Sound

May 23, 2006

Dan Saffer recently asked me to contribute some thoughts on adaptation, hackability and architecture to his forthcoming book Designing for Interaction (New Riders, 2006), alongside 10 other ‘interviewees’ such as Marc Rettig, Larry Tesler, Hugh Dubberly, Brenda Laurel etc. Dan’s been posting their various responses up at the official book site (see also UXMatters) yet he kindly agreed to let me post my full answers below (the book will feature an excerpt).

The questions he posed were: Can products be made hackable, or are all products hackable? What types of things can be designed into products to make them more hackable? What are the qualities of adaptive designs? You’ve spoken on putting “creative power in the hands of non-designers.” How do interaction designers go about doing that? What can interaction designers learn about adaptability from architecture?

Given this, Dan had inadvertently provided me with the impetus to get down a decent summary of a few years’ worth of thinking around this subject. So what follows directly addresses one of the stated purposes behind this blog: to see what we can draw from the culture and practice of architecture and design into this new arena of interaction design - and some of the issues in doing so. (An unstated purpose of the blog - of providing me with an indexed notebook - is also fulfilled!) Here goes:

Can products be made hackable, or are all products hackable?

At our panel at DIS2004, Anne Galloway defined designing for hackability as “allowing and encouraging people to make technologies be what they want them to be.” Yet, as the architect Cedric Price might have said, “technology is the answer, but what is the question?”

Because effectively, all products are hackable. If we define a hackable product as ‘a product capable of being modified by its user’ then we’ve seen pretty much everything hacked. That definition is fairly broad, but hacking itself has a long history. At MIT, ‘hacks’ consisted of a wide repertoire of ingenious pranks, mostly physical; the term was taken simply to mean “an appropriate application of ingenuity”. In this sense, ‘hacking’ of products has as long a history as industrialised production itself. Adaptation in general predates it, of course.

But if this definition appears too wide to be useful, it’s worth noting the range of products that have been hacked, apparently irrespective of size, solidity, complexity. Cars have been hacked - witness the ever-vibrant custom car movement; buildings are endlessly modified and customised from day one; clothes often are; musical instruments often are; even apparently perfectly finished ‘modernist’ products like Dieter Rams’s Braun hi-fis have been heavily modified. Likewise the iPod, despite its hermetically sealed appearance.

Software is incredibly malleable of course, even that which is explicitly designed not to be hacked (cf. operating systems, digital rights management software etc.).

Essentially, all products’ lives start in the hands of the consumer, long after the designer has waved bye bye. Design is a social process. This reinforces the idea of adaptation as a basic human desire. The question should really be: can products be purposefully made more or less hackable? If we take the starting point that all products are intrinsically hackable - as the life of a product starts with the user, not the designer - the question is how do we design products that engender useful hacking?

That word ‘useful’ means we should bring in Tom Moran’s thinking, as he suggests that designers have lately been obsessed with usability at the expense of usefulness. If we truly wish to design products that are useful, that may mean letting the user in to the creative process, into the life of the product. For users are the ultimate arbiter of usefulness. So we should ask, as designers, how can we make products more useful through enabling adaptation, aka ‘hacking’.

What types of things can be designed into products to make them more hackable?

For me, this indicates a reversal of some traditional design thinking. The maxim that “when design’s working properly you don’t notice it” (this from Germaine Greer, no less) is less useful with hackability in mind. If we are to invite the user in, we need to leave some of the seams and traces open for others to explore; some sense of what the process of design, or un-design, might entail. Naoto Fukasawa’s idea that “good design means not leaving traces of the designer” makes sense in terms of reinforcing humility in the designer, but leaving traces of the design itself may be very useful to users.

Matthew Chalmers suggests we indicate a product’s ‘seams’, in order that it might convey how it can be appropriated or adapted. This is beyond affordances (which concern predefined usage). This sense that the fabric of the product should communicate its constituent parts and how they are assembled runs counter to ‘invisible computing’ thinking and much user-centred design, which argues that interfaces should get out of the way. Yet Chalmers’ notion of “seamful systems (with beautiful seams)” is powerful when seen in the context of enabling hackability.

For example, the seams on a musical instrument are quite often visible, particularly those that have evolved over centuries. Pianos intrinsically communicate which aspects make noise - their basic affordances in this case - yet opening the lid reveals further hackable aspects. Numerous musicians have modified the keys and strings to produce different effects, essentially unintended by the instrument’s designers. Electric guitars quickly communicate their operational ‘noise-making’ components; while these instruments take a lifetime to master, their approachability means that a wide range of interaction is possible within minutes. And their obvious ‘seams’ clearly enable hackability, as witnessed in the vast numbers of guitars customised in apparently infinite ways. (Related to this could be an appropriation of Kevin Lynch’s urban planning concept of ‘imageability’, i.e. how clearly the essential architecture of a system can be envisaged.)

Tangentially, the support structures around products also enable hackability, i.e. by facilitating learning and creating communities of interest around products. This needn’t necessarily mean instruction manuals. Bearing in mind Brian Eno’s suggestion that manuals are immediately put away in a drawer, we have to find ways of communicating the possibilities of products in engaging ways. Eno can suggest this because the musical instruments he describes are often brilliantly designed interfaces, honed by hundreds of years of refinement, as noted above. Yet each is also supported by thriving communities, particularly online. Creating a discursive social space around products can enable hackability, and it’s important that designers show up here, in messageboards, blogs, community sites and other forms of discourse - many great architects and designers have been great communicators first and foremost.

These techniques work to varying degrees across objects, either hardware or software. In the latter, there is a huge amount of learning in code - as suggested above, its malleable essence means that hackability is almost a pre-requisite. Some of the basic conditions of code could be abstracted into useful analogues for hackability, as many products increasingly become the domain of code. Consider, for instance, the concepts of open source, object-oriented programming, relational databases, application programming interfaces (APIs) or representational state transfer (REST). The latter indicates the value of allowing distinct objects or resources (nouns) to communicate with and modify each other, using basic, well-defined operations (verbs), without needing any unnecessary detail of the other’s existence or particular operational requirements (polymorphism). In a sense, this suggests a set of interchangeable yet malleable Lego bricks: universally unique shapes; clearly defined ways of connecting; sets of basic operations which can be applied across numerous shapes and functional components in order to build numerous new shapes and functions …
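
As a loose illustration of that ‘Lego brick’ idea - the same small set of verbs applied to interchangeable nouns - here’s a minimal Python sketch. The Note and Playlist resources and their operations are invented for the purpose, not drawn from any particular API:

```python
from dataclasses import dataclass, field

# A toy version of the 'uniform interface' idea: every resource (noun)
# answers the same small set of operations (verbs), so parts can be
# recombined without knowing each other's internals.

class Resource:
    def get(self):          # read the current representation
        raise NotImplementedError
    def put(self, state):   # replace the representation wholesale
        raise NotImplementedError

@dataclass
class Note(Resource):
    text: str = ""
    def get(self):
        return {"text": self.text}
    def put(self, state):
        self.text = state.get("text", "")

@dataclass
class Playlist(Resource):
    tracks: list = field(default_factory=list)
    def get(self):
        return {"tracks": list(self.tracks)}
    def put(self, state):
        self.tracks = list(state.get("tracks", []))

# One verb, many nouns: the same operation applies across different 'bricks'.
resources = [Note(text="hackable"), Playlist(tracks=["first take"])]
snapshots = [r.get() for r in resources]
print(snapshots)
```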

At a more basic level, on a website, a hackable URL indicates both the structure of the site and some sense of how to move through it. A URL designed to be read only by machines presents no hooks for the user to try out. Deeper than URLs, more complex informational products, such as Amazon, Google etc., all offer a variety of web services onto elements of their databases, over relatively controlled APIs. This encourages hackability around these products - with the end result that product design innovation blooms around their offerings and their services are increasingly thought of as the ‘wiring’ of the internet (in Stewart Brand’s terms, at the services layer). So this principle involves machines being able to interpret particular layers, and ignore - or not even see - others. This is another core feature of hackability and adaptation, which seem very close in definition here (perhaps the former indicates a less graceful, more earthily pragmatic way of appropriating and reconfiguring products: its vocabulary is littered with “hooks, sockets, plugs, handles” and so on, rather than with stable layers or graceful refinement over time). There are certainly some basic architectural principles which apply, i.e. enabling these different ‘layers’ to move independently of each other. More on this later.
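
To make the URL point concrete, here’s a small sketch contrasting a ‘hackable’ URL with an opaque one; the example.org paths are invented purely for illustration:

```python
# A hackable URL exposes its structure: a reader can guess that editing
# the year, or swapping the slug, will lead somewhere sensible.
hackable = "https://example.org/archive/2004/05/adaptive-design"

# An opaque URL offers no hooks for the user to try out.
opaque = "https://example.org/page?id=9f8c2a&session=77cbe1"

def archive_url(year: int, month: int, slug: str) -> str:
    # Generating URLs from meaningful parts keeps the seams visible.
    return f"https://example.org/archive/{year}/{month:02d}/{slug}"

print(archive_url(2004, 5, "adaptive-design") == hackable)  # True
```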

As the distinction between hardware and software blurs, the behaviour of hardware approaches the malleability of software. Arguably the most interesting thing about these emerging products and devices is their ability to create or contribute towards a sense of self - both in terms of the product and the owner. As products get smarter about being aware of their behaviour - in some senses, becoming reflexive - and as their raison d’être gets increasingly close to personal, social functionality - in some senses, becoming involved in the presentation of self and the behaviour of users - there is huge potential to build devices which become increasingly, personally meaningful, and which can adapt to personal context and preference like never before.

This requires that products have at least some ‘understanding’ of their own behaviour - essentially, tracking behaviour, usage patterns and context wherever possible - that they are built by designers and researchers who understand ‘the social’ in depth, and that they can ultimately be adapted by their own users. It strikes me that the basic condition for these products is to be essentially self-aware. For example, mp3 players without in-built system clocks are unable to infer when particular tracks are played, and therefore cannot communicate essential characteristics of their usage. So, increasingly, another basic condition of a hackable product’s design is its ability to communicate its state or behaviour over time.
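
As a minimal sketch of that last condition - a product able to report its own behaviour over time - the toy player below simply timestamps each use and summarises it. The class and method names are illustrative, not any real player’s API:

```python
import time
from collections import Counter

class SelfAwarePlayer:
    """A toy music player that timestamps each play, so it can report
    its own usage back to the user (or to other tools) later on."""

    def __init__(self):
        self.history = []  # list of (timestamp, track) pairs

    def play(self, track: str):
        self.history.append((time.time(), track))
        # ... actual playback would happen here ...

    def usage_report(self):
        # The product communicates its behaviour over time: which tracks
        # were played, and how often.
        counts = Counter(track for _, track in self.history)
        return counts.most_common()

player = SelfAwarePlayer()
player.play("slow layer")
player.play("slow layer")
player.play("fast layer")
print(player.usage_report())  # [('slow layer', 2), ('fast layer', 1)]
```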

So creating hackable products could include the following techniques:

* Make sure affordances and seams are clear and malleable.
* Enable interrogation, cloning and manipulation on particular layers
* Learn from the malleability, object-oriented nature and social behaviour of code
* Enable products which are self-aware in terms of behaviour/usage, which then present those findings back to users
* Enable products to emerge from this behaviour, as well as from design research
* Enable social interaction around products

What are the qualities of adaptive designs?

To some extent, they’re listed above. And yet I suggested a difference in vocabulary may reveal a deeper distinction. The discourse around hackability is often littered with “hooks, sockets, plugs, handles” and so on. With adaptive design, drawing from the language of architecture more than code, we have a more graceful, refined vocabulary of “enabling change in fast layers building on stability in slow layers”, “designing space to evolve”, “time being the best designer” and so on. This suggests that there could be a distinction; that adaptive design is perhaps the process designed to enable careful articulation and evolution, as opposed to hackability’s more open-ended nature.

However, they still draw from the same basic concepts: of design being an ongoing social process between designer and user; of products evolving over time; of enabling the system to learn across an architecture of loosely-coupled layers; of not over-designing … They can be summarised from Tom Moran’s definition and extended thus:

* Think of platforms, not solutions - overbuild infrastructure, underbuild features
* Build with an architecture of layers; enable fast layers to change rapidly (learning); slower layers enable stability (a code sketch of this follows the list)
* Create seamful experiences, based around behaviour not aesthetics; often includes modular design
* Undesigned products, or rather not overdesigned; to invite the user in, to encourage evolution
* Define vocabularies, or basic patterns of interaction
* Leave space to evolve (if physical/spatial, build with modular shapes which can extend easily)
* Enable users to manage the at-hand information and interactions; the surface layers
* Create an aesthetic of ongoing process (this could engender trust)
* This process implies that the designer provides support, engagement over time etc.
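
As a rough sketch of the ‘fast and slow layers’ point above, the toy Python below keeps a stable structure layer loosely coupled to an easily swapped surface layer. The layer names borrow from Stewart Brand’s shearing-layers vocabulary, but the code itself is invented for illustration rather than drawn from any real system:

```python
class StructureLayer:
    """Slow layer: a stable data model that changes rarely, so everything
    above it can rely on it."""

    def __init__(self):
        self.records = {}

    def store(self, key, value):
        self.records[key] = value

    def fetch(self, key):
        return self.records.get(key)


class SurfaceLayer:
    """Fast layer: presentation and preferences, loosely coupled to the
    structure beneath so users can swap or 'hack' it freely."""

    def __init__(self, structure, template="{key}: {value}"):
        self.structure = structure
        self.template = template  # the easily modified part

    def render(self, key):
        return self.template.format(key=key, value=self.structure.fetch(key))


structure = StructureLayer()
structure.store("status", "adaptive")

default_skin = SurfaceLayer(structure)
custom_skin = SurfaceLayer(structure, template=">> {value} <<")
print(default_skin.render("status"))  # status: adaptive
print(custom_skin.render("status"))   # >> adaptive <<
```

The point of the design is the loose coupling: the surface can be replaced endlessly without the structure beneath ever needing to know.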

In adaptive design, designers must enable the experience/object to ‘learn’, and users to be able to ‘teach’ the experience/object. So, it’s a two-way interaction, in which the user wants to adapt the product, to make it useful to him or her. Therefore the designer must concentrate on enabling this adaptation in order to achieve a useful experience, rather than attempting to direct the experience towards usefulness themselves. Designers shouldn’t aim to control, but to enable.

Again, this implies a process continuing long after the traditional ‘up-front design phase’ of projects. The process of design starts when the user gets their hands on the product; ideally, the designer stays engaged in a collaboration, a support function, a modification function and so on. This last - modification - is hugely important, as it reinforces the idea that adaptive products are a process. There will be financial implications, positive and negative, of such an approach if it is based on standard business models.

You’ve spoken on putting “creative power in the hands of non-designers.” How do interaction designers go about doing that?

There are two sides to this. Firstly, in order to create these more adaptable products, interaction designers will need to work within multidisciplinary environments, communicating coherently with software developers and other disciplines. This means really understanding code; it doesn’t necessarily mean coding, although that can be a useful communications medium. Secondly, interaction designers will need to work with these non-designers, often directly. The notion of design as an ongoing, social process means that designers have a responsibility to work with products and experiences after they’ve launched. This doesn’t necessarily fit many of the development methodologies, and indeed business models, that interaction designers traditionally work within. But putting designers into direct contact with non-designers will enable those non-designers to truly adapt products to their own needs, creating genuinely engaging experiences. This is a form of design literacy, perhaps, but also product literacy. It should mean being transparent in one’s practice rather than obscuring the process of design.

There’s a theme of humility running through adaptive design; of making the ‘designer-as-personality’ invisible, foregrounding the design itself as a shared terrain. This also invites the non-designer in. This partnership aims to nurture well-designed solutions from people, rather than attempting to complete generalised products. (In order to mitigate damage, the architecture of layers should provide stability in the slow layers (site, structure) to enable change in the fast layers (surface, skin etc.), as long as the layers are not too tightly bound.)

Increasingly, there’s a role for designers to create systems which in turn enable design, offering tools, not solutions. Paul Dourish has described this fundamental change in the “designer’s stance”. Dourish wrote:

“The precise way in which the artifact will be used to accomplish the work will be determined by the user, rather than by the designer. Instead of designing ways for the artifact to be used, the designer instead needs to focus on ways for the user to understand the tool and understand how to apply it to each situation. The designer’s stance is revised as the design is less directly “present” in the interaction between the user and the artifact. So in turn, the revised stance will result in a different set of design activities and concerns … In particular, the designer’s attention is now focused on the resources that a design should provide to users in order for them to appropriate the artifact and incorporate it into their practice.” (From ‘Where the Action Is: The Foundations of Embodied Interaction’, Paul Dourish, MIT Press 2001)

We can see evidence here of designers and developers producing frameworks for others to create with. (Dourish also writes perceptively of the need to provide users with a conceptual understanding of the fundamental artefacts of computer science.) At the entry level, an example of this kind of work would be creating an interface and system for Blogger that didn’t necessarily extend its functionality, but did extend the range of users who could use it; creating a system that enables meaningful design opportunities for others. An example of a more complex offering would be Ning, the social web app generator, which creates a community around building web applications. After providing many of the basic feature-sets of web applications, Ning enables non-developers to create applications by assembling them in numerous ways, or simply by cloning. At the most complex end - frameworks for software engineering - Ruby on Rails takes a similar ‘toolkit’ or ‘framework’ approach to development. All these products attempt to make previously complex things easier and quicker; all involve designers and developers collaborating to produce layered systems which enable users - non-designers - to explore creativity at the faster-moving layers.
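
To give a flavour of that ‘toolkit’ pattern - without pretending to describe Ning’s or Rails’s actual APIs - here is a deliberately toy sketch in which the slow layer is a set of ready-made features and the fast layer is the assembling and cloning a non-developer does. Every name here is hypothetical:

```python
import copy

# The designer/developer builds the slow layer: ready-made feature-sets.
FEATURES = {
    "profiles": "member profile pages",
    "photos": "photo sharing",
    "forum": "discussion board",
}

def build_app(name, feature_names):
    # A non-developer assembles an 'application' from the parts on offer.
    return {"name": name, "features": {f: FEATURES[f] for f in feature_names}}

def clone_app(app, new_name, extra_features=()):
    # Cloning an existing app and tweaking it is often the easiest hack of all.
    new_app = copy.deepcopy(app)
    new_app["name"] = new_name
    for f in extra_features:
        new_app["features"][f] = FEATURES[f]
    return new_app

book_club = build_app("book-club", ["profiles", "forum"])
photo_club = clone_app(book_club, "photo-club", extra_features=["photos"])
print(photo_club["features"].keys())
```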

Professional designers have little to fear here. The premise is: everyone is a designer, yet some of us do it for a living. In this sense, interaction design may emerge as the kind of variegated practice that we see in more mature activities e.g. home-improvement and vernacular architecture. Many people practice DIY home improvement, with home-makers able to construct and modify many things for themselves - indeed using tools and guidelines set by professionals - but if they want something more complex, they get professional designers, builders and architects involved. Amateur practice, literacy, toolkits and professionalism; all present simultaneously.

What can interaction designers learn about adaptability from architecture?

This is a complex question, and people veer back and forth on it: on the one hand reading too much into the parallels between architecture and interaction design; on the other, saying the fields are so radically different that we have nothing to learn. Yet rather than creating some kind of grand unified theory of construction, I prefer to pick and choose elements of architectural practice, theory and history, then try them on for size. Here are a few examples:

Of course, much of the thinking behind adaptive design is influenced by a few key architectural principles, embodied in a couple of books: Stewart Brand’s ‘How Buildings Learn’ and Christopher Alexander’s ‘A Pattern Language’. (Neither of which appears to be particularly widely accepted or respected by the mainstream architectural community, interestingly.) Both place design, architecture and building closer to the hands of non-designers than to those of architects, essentially highlighting the beauty and usefulness of vernacular architecture based on thousands of years of refined practice. (Although Brand covers the area of vernacular architecture thoroughly, other books are relevant here, such as Bernard Rudofsky’s ‘Architecture without Architects’, or perhaps Robert Venturi’s ‘Learning From Las Vegas’, which covers a more contemporary vernacular.)

Additionally, I’d add in the work of 1960s British architects Cedric Price, Archigram and The Smithsons, all of whom attempted to develop architectural systems which were modular, adaptive, malleable, reactive, indicating development over time, and so on.

Some recent ‘interactive architectural’ thinking around learning from biological systems is highly apposite here too - Janine Benyus has suggested that natural organisms adapt brilliantly, taking the ‘long view’, all without creating unsightly areas or industrial zones! She believes that “life creates conditions conducive to life” and talks of “a pattern language for survival”, echoing Christopher Alexander. Watch that space.

The above have demonstrated various aspects of adaptability in architecture. Most of these concepts - layers moving at different paces; patterns being reused; products evolving over time etc - are inspired by architecture, such that while hackability has a basis in code, adaptability is essentially associated with this slower process of physical architecture. So the basic practice of designing for adaptation is largely learnt from architecture, whether that’s those emotionally resonant, genuinely useful, ‘good enough’ vernacular solutions or carefully articulated, highly complex systems built by professionals.

In terms of drawing further, there are some basic architectural concepts which can be useful items of vocabulary, or suggest different ways of thinking. For example, the following architectural terms:

* ‘threshold’ (articulating the transition between spaces)
* ‘view’ and ‘route’ (basic wayfinding components)
* ‘screen’ (a masked element, usefully concealing depth)

And so on. These are several architectural concepts that could provide insight when applied to interaction design problems.
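
As one tentative way of trying those terms on for size, the fragment below maps them onto the structure of a small interactive experience; the vocabulary, paths and keys are entirely hypothetical, not a standard API:

```python
# A loose, hypothetical mapping of architectural terms onto the shape of
# a small interactive experience.
navigation = {
    # 'route': the paths a user can take through the experience
    "routes": ["/home", "/archive", "/archive/2004", "/about"],

    # 'view': what is visible (and orientating) from a given point
    "views": {"/archive": ["recent posts", "yearly index", "search"]},

    # 'threshold': an articulated transition between two spaces
    "thresholds": {("/home", "/archive"): "fade, then update the breadcrumb"},

    # 'screen': a masked element that usefully conceals depth
    "screens": {"/archive": "collapse older years behind a 'more' control"},
}

print(navigation["thresholds"][("/home", "/archive")])
```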

The practice of ‘post-occupancy evaluations’ (POEs), highly recommended by Brand, also seems relevant. This entails visiting and evaluating a building some time after it has been constructed, once the occupants are firmly ensconced. The POE is intended to reveal how people actually use the building, both to modify the environment if necessary and for the architects to learn. This echoes the idea of design as process, and reminds designers to engage as much as possible with products or experiences after they’re built; the life of the design starts with the user, as opposed to ending with the designer. Most architectural projects do not conduct thorough, regular POEs, due to financial constraints; it would be worth thinking about how interaction design could perform equivalent POEs in a way that makes financial sense.

A further lesson from architecture is one of context. Good architecture considers the environment the building is to be situated within, and designs accordingly. The context of a building in a city, or in a rural or natural environment, should shape the project significantly. The Finnish architect Eliel Saarinen said: “Always design a thing by considering it in its next larger context - a chair in a room, a room in a house, a house in an environment, an environment in a city plan.” Interaction design can learn much from this, whether that’s the context of use of objects - backed up by ethnographic design research - or the multiple entry points and adjoining experiences of work on the web. In the latter, too many design projects have sharp ‘edges’ around the site; yet few sites are hermetically sealed, coherent experiences from the user’s point of view.

Cities and urban theory provide much of use here, in addition to Kevin Lynch’s ‘imageability’ concept mentioned earlier. Richard Sennett wrote:

“Cities are places where learning to live with strangers can happen directly, bodily, physically, on the ground. The size, density, and diversity of urban populations makes this sensate contact possible - but not inevitable. One of the key issues in urban life, and in urban studies, is how to make the complexities a city contains actually interact.” [‘Capitalism and the City’, Richard Sennett, Zentrum für Kunst und Medientechnologie Karlsruhe]

Interaction design is often working in such a “dense, diverse” interactive environment, and so a sense of adaptation should be front of mind. This sense of plasticity and malleability of identity in cities and their citizens is also described well in Jonathan Raban’s book ‘Soft City’.

Given the lengthy history of architecture, and the comparatively embryonic development of interaction design, there are some things we should pull from its organisation too. It’s useful that architecture has an intrinsically multidisciplinary nature - hovering between science and art - as this echoes the multidisciplinary nature of interaction design, comprising elements of information architecture, interface design, graphic design, software development, design research and so on. Architecture provides a useful counterpoint to the ‘Two Cultures’ split between science and the humanities often witnessed in other professions. The emphasis on the architectural team, and the agency of the engineering firm within the process, helps balance the sometimes unhelpful amount of column inches that the ‘starchitects’ can get. There is also a healthy discourse around architecture, with an ancient history, ranging from industry publications through to a thriving blog scene. Again, all things for the profession of interaction design to aspire to.

However, we should be wary of aping architecture for the sake of it; some of the above would apply equally to other professions, and other professions may provide equally inspirational yet entirely different lessons. There’s no need to lift the more depressing or stultifying elements of architecture into a relatively young field, either. For instance, risk will always be managed differently across the two disciplines, given architecture’s intrinsic ability to fall over. (Only some elements of interaction design would have a similar impact, albeit an increasing number of them.) Equally, criticism and discussion in architecture can be infuriating; witness the rhetoric around the New Urbanist battlefront. (See also Geoff Manaugh’s impassioned and constructive broadside on the paucity of much writing about architecture.)

Yet, in my view, drawing selectively from the riches of thousands of years of architectural development, both professional and vernacular, can only be a good thing in such a nascent field as interaction design.