Design Patterns: Dependency Injection

Note: This is a repost of an article previously written for MSDN Magazine in September 2005. The original web archives are no longer online as HTML. You can find a link to the original CHM file for the issue here, and all of the source code here.


Today there is a greater focus than ever on reusing existing components and wiring together disparate components to form a cohesive architecture. But this wiring can quickly become a daunting task because as application size and complexity increase, so do dependencies. One way to mitigate the proliferation of dependencies is by using Dependency Injection (DI), which allows you to inject objects into a class, rather than relying on the class to create the object itself.

The use of a factory class is one common way to implement DI. When a component creates a private instance of another class, it internalizes the initialization logic within the component. This initialization logic is rarely reusable outside of the creating component, and therefore must be duplicated for any other class that requires an instance of the created class. For example, if class Foo creates an instance of class Bar and instances of Bar require several initialization steps, different for each instance of Bar, other classes that create instances of Bar will have to reproduce the same initialization logic found within Foo.
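As a minimal sketch of this problem (the class names Foo and Bar come from the example above; the specific initialization steps are hypothetical), the internalized logic might look like this:

```csharp
using System;

// Hypothetical dependency that requires multi-step initialization.
public class Bar
{
    public string ConnectionString { get; set; }
    public int TimeoutSeconds { get; set; }
}

public class Foo
{
    private readonly Bar _bar;

    // Expose the dependency so its configuration can be inspected.
    public Bar Bar { get { return _bar; } }

    public Foo()
    {
        // Foo internalizes Bar's setup. Any other class that needs a
        // Bar configured this way must duplicate these steps.
        _bar = new Bar();
        _bar.ConnectionString = "server=localhost";
        _bar.TimeoutSeconds = 30;
    }
}
```

With DI, a fully configured Bar would instead be handed to Foo, for example through a constructor parameter, so the initialization logic lives in one place.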

Developers like to automate monotonous and menial tasks, and yet most developers still perform functions such as object construction and dependency resolution by hand. Dependency resolution can be described as the resolving of defined dependencies of a type or object. Dependency Injection, on the other hand, aims to reduce the amount of boilerplate wiring and infrastructure code that you must write.

Containers provide a layer of abstraction in which to house components. DI containers, in particular, reduce the kind of dependency coupling I just described by providing generic factory classes that instantiate instances of classes. These instances are then configured by the container, allowing construction logic to be reused on a broader level.

Before diving into DI containers, let’s first review a core pattern used throughout DI containers: the Abstract Factory pattern.

Factory Patterns Refresher

In Design Patterns (Addison-Wesley, 1995), authors Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides describe the intent of the Abstract Factory pattern like this: “Provide an interface for creating families of related or dependent objects without specifying their concrete classes.” Utilizing the Abstract Factory pattern in your applications allows you to define abstract classes to create families of objects. By encapsulating the instantiation and construction logic, you retain control over the dependencies and the allowable states of your objects.

Frequently, objects need to be instantiated in a coordinated fashion, usually because of certain dependencies or other requirements. For example, when creating an instance of System.Xml.XmlValidatingReader in client code, an XmlSchemaCollection object is frequently populated with the relevant schemas for use when validating the XmlValidatingReader object. This is an example of needing to not only create an instance of a class, but also to configure it after creation and before it can be used.

Another type of factory pattern is called the factory method. A factory method is simply a method, usually defined as static, whose sole purpose is to return an instance of a class. Sometimes, in the case of facilitating polymorphism, a flag will be passed to the factory method to indicate the specific interface implementation or subclass to be returned. For example, the Create method of WebRequest takes in either a string or Uri instance, and returns a new instance of a class derived from WebRequest.
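To make the WebRequest example concrete, here is a small demonstration of the factory method in action; the concrete type returned depends on the URI scheme, but the caller only ever works with the WebRequest base class:

```csharp
using System;
using System.Net;

class FactoryMethodDemo
{
    static void Main()
    {
        // WebRequest.Create is a factory method: it inspects the URI
        // scheme and returns an appropriate subclass of WebRequest.
        WebRequest request = WebRequest.Create("http://www.example.com/");

        // For an http URI the instance is actually an HttpWebRequest,
        // but the client never needs to name the concrete type.
        Console.WriteLine(request is HttpWebRequest);  // True
    }
}
```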

From this point forward, I will simply use the word “factories” to mean both the Abstract Factory pattern as well as the factory method implementation.

DI Implementation Using Factories

Factories allow for an application to wire together objects and components without exposing too much information about how the components fit together or what dependencies each component might have. Instead of spreading complex creation code around an application, factories allow for that code to be housed in a central location, thereby facilitating reuse throughout the application. Client code then calls creation methods on the factory, with the factory returning complete instances of the requested classes. Encapsulation is preserved, and the client is effectively decoupled from any sort of plumbing required to create or configure the object instance.

Figure 1 Factory Functions

Factories can do more than simply create objects and assemble their dependencies. They can also serve as a central configuration area for applying services or constraints uniformly across all instances of an object (see Figure 1). For example, instead of returning an instance of an object, a factory can return a proxy to the real instance of the object, thereby enabling distributed method calls. Since the client application is unaware that the object it is being handed is, in fact, a proxy, as opposed to the real instance of the object, no changes to the client code need to occur. An example of this type of service can be found within the .NET remoting infrastructure. Distributed objects can be configured declaratively, with a .NET configuration file, and the client application can simply create an instance of the class using “new”. This is the same for local and distributed objects, as well as for client-activated objects and server-activated objects. All of this configuration and management takes place without the client application knowing about .NET remoting.
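As a sketch of that proxy idea (all of the type names here are hypothetical, not part of .NET remoting), a factory can hand back a proxy that implements the same interface as the real object, so the client cannot tell the difference:

```csharp
using System;

public interface IAccountService
{
    decimal GetBalance();
}

public class AccountService : IAccountService
{
    public decimal GetBalance() { return 100m; }
}

// The proxy implements the same interface and forwards calls to the
// real instance, adding a cross-cutting service (here, logging).
public class LoggingAccountServiceProxy : IAccountService
{
    private readonly IAccountService _real;

    public LoggingAccountServiceProxy(IAccountService real)
    {
        _real = real;
    }

    public decimal GetBalance()
    {
        Console.WriteLine("Calling GetBalance");
        return _real.GetBalance();
    }
}

public static class AccountServiceFactory
{
    public static IAccountService Create()
    {
        // The factory decides whether to wrap the real object;
        // clients only ever see an IAccountService.
        return new LoggingAccountServiceProxy(new AccountService());
    }
}
```

Client code calls AccountServiceFactory.Create() and uses the result as an ordinary IAccountService, never knowing a proxy was substituted.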

However, factories are not without drawbacks. First, while a factory implementation can be quite valuable within a particular application, most of the time it is not reusable across other applications. Frequently, all of the available creation options are hardcoded into the factory implementation, making the factory itself non-extensible. Also, most of the time the class calling the factory’s creation methods must know which subclass of the factory to create.

Second, all dependencies for an object that is created using a factory are known to the factory at compile time. Leaving .NET reflection out of the picture for a moment, at run time there is no way to insert or alter the manner in which objects are created, or which dependencies are populated. This all must happen at design time, or at least require a recompile. For example, if a factory is creating instances of class Foo, and instances of class Foo require an instance of class Bar, then the factory must know how to retrieve an instance of the Bar class. The factory could create a new instance, or even make a call to another factory.

Third, since factories are custom to each individual implementation, there can be a significant level of cross-cutting infrastructure that is held captive inside a particular factory. Take my example of a factory dynamically substituting a proxy object for a real object. That is an example of a piece of infrastructure, namely the wrapping of simple objects for deployment over a distributed wire, that is completely encapsulated inside that particular factory. If another object needs to be altered in a similar manner, the logic to do so is hidden inside a factory, and would have to be repeated for the other object. Once this functionality is desired outside of the original application, the problem now becomes how to reuse such functionality while still maintaining the existing factory concept.

Lastly, factories rely on well-defined interfaces to achieve polymorphism. In order for a factory implementation to be able to dynamically create different concrete subclass instances, there must be a common base class or shared interface implemented by all classes that the factory will create. Interfaces decouple the construction of the object from the specific implementation of the interface. The dilemma that arises now is how you can accomplish this decoupling without being forced to create an interface for everything.

These are just some of the problems facing DI implemented using conventional factory implementations. However, as you will see shortly, another viable option exists. Also, DI is not based solely on the factory pattern; in fact, it has correlations with several other patterns, including the Builder, Assembly, and Visitor patterns. For more information on these useful patterns, the Design Patterns book (already mentioned) is the seminal reference.

Abstracting DI Using Containers

Many of the previous shortcomings of DI can be solved by using a container. A container is a compartment that houses some sort of abstraction within its walls. Typically, responsibility for object management is taken over by whatever container is being used to manage those objects. However, containers can also take over instantiation and configuration, as well as the application of container-specific services to objects.

Containers allow for objects to be configured by the container, as opposed to being configured by the client application. This allows for the container to serve a wide range of functions, such as object lifecycle management and dependency resolution. In addition, containers can apply cross-cutting services to whatever construct is being hosted inside the container. A cross-cutting service is defined as a service that is generic enough to be applicable across different contexts, while providing specific benefits. An example of a cross-cutting service is a logging framework that logs all method calls within an application.

Containers vs. Factories

There are several reasons to use containers in your application development. Containers provide the ability to wrap vanilla objects with a wealth of other services. This allows the objects to remain ignorant about certain infrastructure and plumbing details, like transactionality and role-based security. Oftentimes, the client code does not need to be aware of the container, so there is no real dependency on the container itself.

These services can be configured declaratively, meaning they can be configured via some external means, including GUIs, XML files, property files, or vanilla .NET-based attributes.

Containers that have cross-cutting services are also reusable across applications. One container can be used to configure objects across various applications within an enterprise. Many services that can be applied across an enterprise are low-level infrastructure and plumbing services. These services can be used across an enterprise without the need to deeply embed container-specific code or logic within an application.

Containers Are Not New

Containers have been around in some form or another for many years. As a matter of fact, containers were used back when Microsoft® Transaction Server (MTS) was released as part of the Windows NT® 4.0 option pack.

Containers are still an active part of Microsoft enterprise development strategy today. In fact, if you’re writing .NET-based code, you’re already using a container to deploy your application: the .NET common language runtime (CLR). The CLR performs a wide variety of important tasks at run time, including memory management, automatic bounds checking and overflow protection, and method call security, to name a few.

The next version of MTS, dubbed COM+, was a major evolution. The .NET equivalent, Enterprise Services, is still the recommended approach to constructing distributed enterprise applications. COM+ and Enterprise Services offer a wealth of services beyond what Microsoft Transaction Server originally offered. In the current version, this includes object messaging, object pooling, declarative automatic transactions, loosely coupled events, and role-based security.

The problem with some containers is that they can be costly. Despite being built upon a fairly stable architecture, the current container technology available to developers using .NET has some drawbacks: it requires container-specific constructs to be introduced into domain code, and container infrastructure can adversely impact performance for many operations, even if only minimally.

An example of requiring container-specific constructs can be found in the Microsoft .NET Framework 1.x Enterprise Services requirement that any object that is under its control must derive from the ServicedComponent class. Since .NET does not support multiple inheritance, this constraint limits where Enterprise Services can be utilized.

Since heavyweight containers impact performance and increase the complexity of the client application, they are usually employed in only the largest distributed applications.

Microsoft also offers built-in support for a lightweight version of Dependency Injection with the System.ComponentModel namespace. Unlike Enterprise Services, it does not provide any extra services or functionality; it merely provides service injection. However, like Enterprise Services, in order to use the classes within the System.ComponentModel namespace, your classes must become container-aware. This is accomplished by implementing certain container-specific interfaces.

Lightweight Containers

There are a great number of applications that would benefit from many of the features of the containers I described, but whose needs don’t justify the use of a heavyweight container. At the opposite end of the container spectrum, lightweight containers provide many of the same benefits that the heavyweight containers do, but without all of the overhead of COM+ and Enterprise Services. Many organizations still choose to use Enterprise Services in spite of the existence of lightweight containers, but this situation is changing. Many of these lightweight containers offer services in addition to simple DI, and can often be configured to add other valuable services to your objects.


You can build your own lightweight DI container, though a few implementations of such systems already exist which you might consider taking advantage of. One such solution is Spring.NET, which I’ll be using for the rest of this column to demonstrate some of the ideas discussed thus far. Spring.NET offers a lightweight DI container built around the concept of factories. It not only provides DI by allowing users to use pre-built factories within their code, but it also provides a suite of services that can be applied to any object under Spring.NET’s control. And since Spring.NET is built using standard .NET-based code, applications that utilize Spring.NET have no additional dependency on COM, COM+, or Enterprise Services.

Factory Example

The following code snippet is a simple interface, IDomainObjectInterface, which my objects will implement. The interface contains one property, which returns a string representation of the name of my object:

public interface IDomainObjectInterface {
    string Name { get; }
}

The code in Figure 2 contains two classes that implement the interface. As you can see, the Name property simply returns the name of the class, depending on which concrete class is used. I will use these two classes, as well as the interface they implement, as the basis of the example I will present.

Figure 2 Implementation Classes 
public class ImplementationClass1 : IDomainObjectInterface {
    public ImplementationClass1(){}
    public string Name {
        get { return "Implementation Class 1"; }  
    }
}

public class ImplementationClass2 : IDomainObjectInterface {
    public ImplementationClass2(){}
    public string Name {
        get { return "Implementation Class 2"; }
    }
}

Typically, either of these two classes would be created by a factory class, similar to the one in Figure 3. In addition to the factory class, ImplementationClassFactory, Figure 3 also contains one enumeration, ImplementationClassType. The factory class has one method, GetImplementationClass, which accepts one of the enumeration values. Based on the value of the enumeration, one of the two IDomainObjectInterface implementations will be returned. The client class is responsible for choosing which implementation it would like to use.

Figure 3 Factory Class 
public enum ImplementationClassType {
    ImplementationClass1, ImplementationClass2
}

public class ImplementationClassFactory {
    public static IDomainObjectInterface GetImplementationClass( 
        ImplementationClassType implementationClassType ) {
        switch ( implementationClassType ) {
            case ImplementationClassType.ImplementationClass1:
                return new ImplementationClass1();
            case ImplementationClassType.ImplementationClass2:
                return new ImplementationClass2();
            default:
                throw new ArgumentException("Class " + 
                    implementationClassType + " not supported." );
        }
    }
}

Now, there are several drawbacks to this factory method. The first is that the set of implementation classes is hardwired into the factory method. Therefore, even though there is an interface for the implementation objects, it’s impossible for the factory method to return an implementation class that it does not know about. This limits extensibility, especially in the case of public APIs and application frameworks, where the ability to dynamically introduce new implementation classes is not only desired, but often expected, to achieve a certain degree of flexibility.

Secondly, even if the ability to dynamically introduce new implementations existed, the client application would still need to know which class to ask for. This eliminates some of the flexibility that the factory class was supposed to provide.

The ConsoleRunner class in Figure 4 illustrates how a client would use the factory class to create an instance of the desired implementation class. Notice how the client code has to explicitly ask for the desired implementation class. At this point many of the benefits of the factory class have been negated.

Figure 4 Using the Factory Class 
using System;
using SpringDIExample;

class ConsoleRunner {
    static void Main() {
        IDomainObjectInterface domainObjectInterface = 
            ImplementationClassFactory.GetImplementationClass(
                ImplementationClassType.ImplementationClass1 );
        Console.WriteLine("My name is " + domainObjectInterface.Name);
    }
}

A Spring.NET Implementation

Now that you have seen the typical factory pattern, let’s take a look at how a DI container not only achieves many of the same goals, but also adds a significant amount of flexibility and functionality to your application.

Figure 5 contains an updated version of the ConsoleRunner class. For this example I’m using the Spring.NET DI container, which requires a bit of initial setup. First, you must create an instance of the factory, using a config.xml file as the source of your object definitions. Next, replace the call to the custom factory class with a call to the Spring.NET factory class. Notice that since the generic factory doesn’t know anything about third-party interfaces, everything coming back from the factory is typed as object. So you must cast the object instances returned by the factory to the interface that you expect. Finally, the last line of the ConsoleRunner class remains unaffected, even though you have changed the source of the object and how it’s instantiated.

Figure 5 ConsoleRunner Using Spring.NET
using System;
using System.IO;
using Spring.Objects.Factory.Xml;
using SpringDIExample;

class ConsoleRunner {
    static void Main() {
        // 1. Open the configuration file and create a new
        //    factory, reading in the object definitions
        using (Stream stream = File.OpenRead("config.xml")) {
            // 2. Create a new object factory
            XmlObjectFactory xmlObjectFactory = 
                new XmlObjectFactory(stream);

            // 3. Call the factory with the generic label for the object
            //    that is requested, casting the result to the
            //    expected interface.
            IDomainObjectInterface domainObjectInterface = 
                (IDomainObjectInterface)xmlObjectFactory.GetObject(
                    "DomainObjectImplementationClass");

            // 4. Use the object just like any other concrete class.
            Console.WriteLine("My name is " + domainObjectInterface.Name);
        }
    }
}

Now, let’s take a look at the makeup of the config.xml file, which drives the factory class. Here is the full config.xml file:

<?xml version="1.0" encoding="utf-8" ?>
<objects xmlns="">
    <object name="DomainObjectImplementationClass" 
            singleton="false"
            type="ImplementationClass1, SpringDIExample" />
</objects>

You will notice that the configuration file is not very large, comprising only two kinds of elements: the <objects> element, which contains all of the object definitions, and the individual <object> definitions. From the configuration file shown, you can see that there is one object definition. The object definition contains three basic attributes that define what object is created as well as how it’s created.

The name attribute defines the name that my ConsoleRunner class uses when requesting an object from the factory. In this case, it is “DomainObjectImplementationClass”. This name is just used to reference the definitions contained in the configuration file from the client code.

Next, the singleton attribute is a Boolean flag that designates if the object should be created as a singleton or not. Spring.NET has built-in support for making objects singletons, but since I do not require such functionality, I set this attribute to false.

Finally, the type attribute defines the actual type of the object to be created. This is the type that will actually be loaded and returned when the factory is queried. This string takes the form of “Type, Assembly”, indicating not only the type of object, but also which assembly the type is located in. As indicated in the configuration file, the type desired is one of the types shown in Figure 2.

By simply changing the type of the implementation class listed in the configuration file from “ImplementationClass1” to “ImplementationClass2”, you can dynamically alter the class that is returned to the client, all without a recompile.
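For example, the updated object definition would look like this:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<objects xmlns="">
    <object name="DomainObjectImplementationClass" 
            singleton="false"
            type="ImplementationClass2, SpringDIExample" />
</objects>
```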

Enhancing Extensibility

Until now, I have simply moved the responsibility for object creation to an external factory implementation and configuration file. While this type of declarative configuration can be seen as more desirable than a static, custom factory implementation, there is much more that can be accomplished by using containers.

Let’s say that you have published the IDomainObjectInterface in a public API, and you would like to allow users of the API to create their own implementations of the interface, all while still utilizing several existing clients that have already been built to use the IDomainObjectInterface. Getting the users’ implementations to your clients could prove difficult, especially since you know nothing about how their classes will be built or deployed. ImplementationClass3 is a third implementation of the IDomainObjectInterface, similar to the ImplementationClass1 and ImplementationClass2 classes, except that it resides in a separate assembly. Previously, both implementation classes and the interface resided in the same assembly:

public class ImplementationClass3 : IDomainObjectInterface {
    public ImplementationClass3(){}
    public string Name {
        get { return "Implementation Class 3"; }
    }
}

Using a framework like Spring.NET, getting my ConsoleRunner class to use the new ImplementationClass3 is easy to accomplish, and only requires a minor change to my original config.xml configuration file. The following is the configuration file with the necessary changes made. The only difference lies within the type attribute, which has been updated to point to the ImplementationClass3 type and the SpringDIExampleExtension assembly:

<?xml version="1.0" encoding="utf-8" ?>
<objects xmlns="">
    <object name="DomainObjectImplementationClass" 
            singleton="false"
            type="ImplementationClass3, SpringDIExampleExtension" />
</objects>

When the ConsoleRunner class is rerun, an instance of ImplementationClass3 will be instantiated and returned. All of this is accomplished without a recompile of ConsoleRunner, even though ImplementationClass3 resides in an assembly that’s physically separate from the original implementation classes.

Dependency Resolution

Now that you have seen how containers can aid in object creation, let’s take a look at how dependencies between objects are handled. The following class, DependentClass, has one read/write property that contains a message for the class to hold:

public class DependentClass {
    private string _message;

    public DependentClass(){}

    public string Message {
        get { return _message; }
        set { _message = value; }
    }
}

I will configure the container to automatically insert a message into the DependentClass.Message property. I will also dynamically insert an instance of the configured DependentClass object into the ImplementationClass4.DependentClass property.

Figure 6 shows a new implementation of IDomainObjectInterface, ImplementationClass4. As you can see, not only does ImplementationClass4 implement the IDomainObjectInterface, it also has an extra property, DependentClass, which holds an instance of DependentClass.

Figure 6 New Implementation of IDomainObjectInterface 
public class ImplementationClass4 : IDomainObjectInterface {
    private DependentClass _dependentClass;

    public ImplementationClass4(){}

    public DependentClass DependentClass {
        get { return _dependentClass; }
        set { _dependentClass = value; }
    }

    public string Name {
        get { return _dependentClass.Message; }
    }
}

Figure 7 shows my updated config.xml file. There are three changes from the configuration files shown previously. The first is the addition of a new <object> element that configures an instance of the DependentClass to be used. All of the previously explained attributes of the <object> element are present, but this object definition has an extra element underneath the main object definition. The <property> element configures a property for a given object definition. In this case, the name attribute contains the name of the property to populate; here it’s the Message property of DependentClass. Since the DependentClass.Message property is a basic type, its configured value is wrapped in <value> tags. The text contained inside these tags is the value that will be assigned to DependentClass.Message at instantiation.

Figure 7 Updated config.xml 
<?xml version="1.0" encoding="utf-8" ?>
<objects xmlns="">
    <object name="DomainObjectImplementationClass" 
            singleton="false"
            type="ImplementationClass4, SpringDIExample">
        <property name="DependentClass">
            <ref object="DomainObjectDependentClass"/>
        </property>
    </object>
    <object name="DomainObjectDependentClass" 
            singleton="false"
            type="DependentClass, SpringDIExample">
        <property name="Message"><value>Dependent Class</value></property>
    </object>
</objects>

The second change concerns my original DomainObjectImplementationClass definition. A new <property> element has been added to the definition, configured to populate the ImplementationClass4.DependentClass property. Since the value of this property is an instance of a complex type, a <ref> tag is used instead of wrapping the value in <value> tags. The object attribute of the <ref> tags references the name of a previously configured object definition, in this case, “DomainObjectDependentClass”.

The last change to the configuration file can be found within the first object definition. I have updated the type reference to use the ImplementationClass4 class.

Now, redeploy the new config.xml file, and rerun the ConsoleRunner class. Notice that the configured DependentClass.Message property is displayed. The dependencies have been populated and resolved and the client app is using the new classes, all without knowing what class it’s using and all without requiring a recompile.


Dependency Injection is a worthwhile concept to explore for use within apps that you develop. Not only can it reduce coupling between components, but it also saves you from writing boilerplate factory creation code over and over again. Spring.NET is an example of a framework that provides a ready-to-use DI container, but it is not the only .NET lightweight container out there. Other containers include Pico and Avalon.


Software Factories – Refactoring An Industry

(I’m republishing an old article I wrote about 10 years ago that seems to have disappeared from the web. It was originally published in 2006. The fact that the issues described below have not really changed in 8 years is, frankly, sad.)

Software development has always been a costly and time-consuming process. Specialized requirements and a lack of skilled resources are just two of the difficulties facing companies today. Pressure to deliver software on time and within budget has pushed developers to look for ways to increase the value delivered while decreasing development time.

For many years now, efficient reuse of existing assets, whether through object-oriented programming, component-based development, or patterns-based architecture, has been one of the core objectives for the IT industry as a whole. Software reuse is seen as a means to combat many of the problems facing development teams. However, across many years and several different technology paradigms, this level of reuse has eluded the industry as a whole.

The book Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools aims to change this by modifying the IT industry’s definition of reuse and bringing a more manufacturing-like approach to software reuse.

Economies of Scale vs. Economies of Scope

The difference between these two concepts is subtle, and it is something that has been proven outside of the software industry, in more established industries like manufacturing and fabrication. While both promise to reduce time and cost and improve product quality, they are actually quite different.

Economies of scale occur when the initial development of a design and the subsequent construction of a prototype result in multiple copies of that prototype being created. This is similar to the fabrication industry, where a custom part is created and then used to produce hundreds or thousands of similar parts. In the manufacturing industry, it’s similar to building a machine that can produce screws in bulk. In this case, reuse occurs in the later stage of production, namely construction.

Economies of scope occur when multiple similar, but unique, designs are produced in groups, as opposed to individually. Author Jack Greenfield uses the example of a car manufacturing plant that uses existing designs, such as the chassis, body, and interior, to produce several lines of distinct cars. Each product line is complete with custom features, but all share the same underlying design. Here, reuse occurs earlier in production, namely in design and prototyping.

Greenfield points out that for years the IT industry has been trying to apply economies of scale to achieve software reuse when, in fact, it should have been trying to apply economies of scope.

Chronic Problems of Software Development

The authors of Software Factories identify some major problems with how the IT industry is attempting to achieve reuse. Unfortunately, simply altering how software reuse is applied within an industry or an organization won’t make systematic reuse a reality. Greenfield identifies four chronic problems with software development, each of which impedes a team’s ability to gain valuable insight and reuse from existing domain knowledge and experience. These problems are:

  • Monolithic Development
  • Copious Granularity
  • Process Immaturity
  • One-off Development

Monolithic Development

Monolithic development means the creation of software in such a way as to make it difficult, or even impossible, to utilize the resulting artifacts outside of a narrowly defined scope. Software development teams within large organizations are certainly familiar with this style of development: several projects, several teams, all building isolated applications without concern for what anyone else may be doing, or what, if any, domain knowledge they are embedding within the application. This results in large, inflexible applications that are of little use to anyone outside the original, intended audience. Even two projects within a single organization can face a mini-integration effort in order for one application to take advantage of another’s functionality. Why does this keep occurring within organizations? Industry leaders have championed design by assembly for years, and yet very few applications take advantage of existing components, even within their own organization.

Copious Granularity

Typical software development for business applications consists of a strikingly similar set of features and functionality. For example, many business applications read data from a database, present it to the user, allow the user to modify the data in some way, and then allow the user to persist the change back to the database. Even though this is an oversimplification of most applications, the basics remain the same across projects. This raises the question: why do developers use such fine-grained tools as standard programming languages like C# and VB .NET to represent such basic patterns? Part of the reason is the immaturity of modeling tools and languages. While a language such as UML is suitable for documenting software architecture, it is inadequate for deriving implementations from such models. UML lacks the extensibility required for generating large amounts of code, and it also lacks the breadth required to represent all aspects of software, including databases and user interfaces.

Process Immaturity

Software development processes tend to fall into one of two camps:

  • Controlling complexity at the expense of change — many “traditional” processes fall under this heading, including RUP and Waterfall
  • Controlling change at the expense of complexity — most “agile” methodologies fall under this heading, including Scrum and XP

Before a process can be tuned for software reuse, it must mature, because automation can only automate well-defined processes.

One-off Development

Software development projects within a company are usually so focused on the bottom line and delivery time that overall architecture is placed on the back burner as “lofty” or “academic”. Very little regard is given to analyzing and evaluating existing assets, and very little time is dedicated to ensuring that newly produced assets are reusable within other contexts. This results in many development efforts within a single company, each creating various amounts of code and valuable assets for use within the company’s domain. Projects rarely perform post-mortems where reusable components are identified, documented, and packaged in such a way as to become reusable by other projects.

These four problems, as well as the industry’s misunderstanding of how to apply reuse within the current economic model, paint a pretty bleak picture of the future of software development as an industry, especially compared to other, more mature industries such as manufacturing. Luckily, this is where the Software Factory comes into play. The next section contains a brief overview of what a Software Factory is and identifies some of its main components.

Software Factories: The Solution?

When creating software within a specific vertical, knowledge about that vertical is frequently embedded in the software being developed. Unfortunately, this knowledge remains hidden inside the software, unable to be reused outside of the original scope of the software that contains it. This is where Software Factories step in, with methods and procedures to harvest that knowledge and turn it into reusable production assets for a company.

A Software Factory is defined by Greenfield, et al., as a software product line that configures extensible tools, processes, and content using a software factory template based on a software factory schema to automate the development and maintenance of variants of an archetypical product by adapting, assembling, and configuring framework-based components.

In layman’s terms, a Software Factory is about collecting existing, specialized knowledge about a certain domain and the applications built within that domain. You can then use that knowledge to create a schema, or blueprint, for other applications of a similar type. That schema can then be tweaked and configured to produce semi-functional to functional applications. In short, Software Factories mean generating applications from the valuable, reusable production assets that exist within an organization. Before delving into the components that comprise a Software Factory, let’s first take a look at the current state of the IT industry as compared to other industries.

Moving from Craftsmanship to Industrialization

All too often, highly skilled application developers and architects have to spend their time on low-level implementation tasks. Usually, junior developers are not able to complete such tasks because they lack the appropriate domain knowledge, requiring the senior developer to mentor them. This fosters not only knowledge transfer, but also an introduction to the complexities of the current development environment. Since developers are always involved at some stage of development, very little time is spent on making development more efficient, especially at the level of implementation details. This method of development resembles early goods-based industries, where individual workers created custom solutions tailored to each requirement. Think early tailors or shoe cobblers.

This craftsmanship approach to software development does not scale very well. The scarcity of quality senior developers creates a mentoring environment where specialized knowledge must be transferred, similar to an apprenticeship. Since such a hands-on approach is required, most parts of the project need to be created by hand. This often leads to higher quality, but it also leads to performance and efficiency issues. Migrating from a craftsmanship-based industry to a more industrialized one has been the path of progression for many mature industries. If this is the end result for so many other industries, why is software development still based on small groups of highly specialized craftsmen?

Most people within the IT industry will agree that a form of standardization and modularization is the key to enabling the kind of reuse required for efficient industrialization of software development. What they don’t agree on is the means to which this standardization and modularization is achieved. The Software Factory aims to address this effort by prescribing a process by which software can be modularized into reusable assets.

Components of Software Factories

There are three main components of Software Factories:

  • Models and Patterns
  • Domain Specific Languages
  • Software Product Lines

Models and Patterns

The authors of Software Factories define Model-Driven Development as:

using models to capture high level information, usually expressed informally, and to automate its implementation, either by compiling models to produce executables, or by using them to facilitate the manual development of executables.

The importance of models comes from their ability to keep a consistent representation of concepts within a project. For example, while it’s easy to draw and manipulate a person object in a model, it’s much more difficult to manipulate the same person object at a lower level, because the person object could be represented by class files, tables, and columns in a database.

Representing and manipulating core abstractions and concepts within a software system is only half the battle, though. The other half comes from being able to effectively use those models to generate the underlying implementation details represented by the model.
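To make the idea of deriving implementation from a model concrete, here is a toy sketch (in Python, purely for illustration; the article’s tooling discussion is .NET-centric) in which a single declarative model of a domain concept is used to emit both a class definition and a database DDL statement. All names and the model format are hypothetical.

```python
# A declarative "model" of a domain concept. In a real model-driven tool,
# this would come from a diagram or DSL rather than a hand-written dict.
person_model = {
    "name": "Person",
    "fields": {"first_name": "str", "last_name": "str", "age": "int"},
}

def generate_class(model):
    """Emit source code for a class derived from the model."""
    lines = [f"class {model['name']}:"]
    params = ", ".join(model["fields"])
    lines.append(f"    def __init__(self, {params}):")
    for field in model["fields"]:
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines)

def generate_ddl(model):
    """Emit a CREATE TABLE statement derived from the same model."""
    type_map = {"str": "VARCHAR(255)", "int": "INTEGER"}
    cols = ", ".join(f"{f} {type_map[t]}" for f, t in model["fields"].items())
    return f"CREATE TABLE {model['name']} ({cols})"

print(generate_class(person_model))
print(generate_ddl(person_model))
```

Because both artifacts are generated from one model, renaming a field in the model keeps the class and the table in sync automatically, which is precisely the consistency benefit described above.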

Domain Specific Languages

Domain Specific Languages, or DSLs, have long been a way to provide an abstraction atop existing concepts within a specialized domain. Examples of DSLs include SQL and HTML. Both provide specialized languages for manipulating concepts within their respective domains — tables, rows, and columns in the case of SQL, and elements, tables, and forms in the case of HTML. For years, DSLs like these remained the only cost-effective DSLs because of their widespread use and generalized concerns. No matter what your specific project entails, if you use a database, you can use SQL, and if you build web pages, you can use HTML. Creating DSLs for other, more vertical concerns has always been cost prohibitive because of the lack of tools.

However, several companies have recently announced tools or plug-ins for the creation of DSLs. Once these DSLs are created, they can be used within a company to work with components at a much higher level of abstraction, with a much higher level of software reuse.
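A small internal DSL can illustrate the abstraction that DSLs buy you. The fluent query builder below (sketched in Python for brevity; everything here is illustrative, not a real product’s API) lets a developer work with domain concepts like tables and filters instead of assembling SQL strings by hand:

```python
# A minimal internal-DSL sketch: a fluent query builder.
class Query:
    def __init__(self, table):
        self.table = table
        self.filters = []
        self.columns = ["*"]

    def select(self, *columns):
        self.columns = list(columns)
        return self  # returning self enables fluent chaining

    def where(self, condition):
        self.filters.append(condition)
        return self

    def to_sql(self):
        # Translate the domain-level description into the underlying SQL.
        sql = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.filters:
            sql += " WHERE " + " AND ".join(self.filters)
        return sql

q = Query("Customers").select("Name", "City").where("City = 'Chicago'")
print(q.to_sql())
```

The developer expresses intent in the vocabulary of the domain, and the translation to the lower-level representation is automated — the same move that SQL and HTML make for their own domains.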

Software Product Lines

Software Product Lines are entire subsets of components that can be configured, assembled, and packaged to provide a fairly complete product. The resulting product should require customization from a developer only for the highly specialized aspects of the project. Perhaps the largest component of a Software Factory, Software Product Lines not only provide the greatest value, but also require the greatest investment. Software must be carefully partitioned into distinct, reusable components, and those components must fit together readily in a coherent manner. Configuration is the key to Software Product Lines, as projects must be able to pick and choose which components they want to use, and then generate an application from that configuration.
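The configuration-driven assembly at the heart of a product line can be sketched in a few lines (Python used for brevity; the component names and registry shape are hypothetical): a product variant is described purely by configuration, and a generic assembler wires the chosen components together.

```python
# A registry of interchangeable components, keyed by feature and variant.
# Factories (here, simple lambdas returning names) stand in for real
# component construction logic.
COMPONENT_REGISTRY = {
    "storage": {"sql": lambda: "SqlStorage", "file": lambda: "FileStorage"},
    "ui": {"web": lambda: "WebUI", "desktop": lambda: "DesktopUI"},
}

def assemble(config):
    """Build a product variant from a feature configuration."""
    product = {}
    for feature, choice in config.items():
        product[feature] = COMPONENT_REGISTRY[feature][choice]()
    return product

# Two distinct products assembled from the same component library:
intranet_app = assemble({"storage": "sql", "ui": "web"})
kiosk_app = assemble({"storage": "file", "ui": "desktop"})
```

The investment goes into partitioning the components and building the registry; after that, each new variant is just a configuration, which is the economies-of-scope payoff described earlier.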

Implementing a Software Factory

While all of the promises of Software Factories sound appealing, many companies have tried to provide the tools and components, only to fail under the load of inflexible tools or proprietary formats.

All of this is about to change. Big-name companies like Microsoft and Sun are getting ready to release many of the components necessary for building and assembling a Software Factory within an organization. With the release of Visual Studio 2005, Microsoft will unveil several add-ins and plug-ins that enable not only the creation of Domain Specific Languages, but also the integration of those languages with the IDE itself. This will allow developers to manipulate and use the languages from within the Visual Studio .NET IDE.

Not to be outdone, Sun Microsystems is working on its own implementation of Software Factory technology, simply named Project Ace. Although very few details of “Project Ace” are available, developers shouldn’t expect Sun to let Microsoft provide .NET tools without answering with a comparable set of tools for Java.

What about now?

While all this conjecture sounds wonderful for the future, many developers will be asking themselves what they can do to utilize Software Factory techniques in their organizations today. The good news is that a lot of functionality already exists within Visual Studio .NET. Products like Enterprise Templates, Project Item Templates, and NAnt allow for the creation of standard artifacts that a team or organization can utilize today.

Silicon Valley Startups: Low-risk R&D

Wired Reporting:

As the engineer and writer Alex Payne put it, these startups represent “the field offices of a large distributed workforce assembled by venture capitalists and their associate institutions,” doing low-overhead, low-risk R&D for five corporate giants. In such a system, the real disillusionment isn’t the discovery that you’re unlikely to become a billionaire; it’s the realization that your feeling of autonomy is a fantasy, and that the vast majority of you have been set up to fail by design.

The most expensive lottery ticket in the world

From Felix Salmon:

Founding a Silicon Valley startup, then, is a deeply irrational thing to do: it’s a decision to throw away a large chunk of your precious youth at a venture which is almost certain to fail. Meanwhile, the Silicon Valley ecosystem as a whole will happily eat you up, consuming your desperate and massively underpaid labor, and converting it into a few obscenely large paychecks for a handful of extraordinarily lucky individuals. On its face, the winners, here, are the people with the big successful exits. But after reading No Exit, a different conclusion presents itself. The real winners are the happy and well-paid engineers, enjoying their lives and their youth while working for great companies like Google. In the world of startups, the only winning move is not to play.

That’s fine for Griffin

More than once, for some reason, people have asked me for advice on a variety of subjects: life, business, technology, etc. When I try to answer them as best as I can, I’m honest about what works for me. Not necessarily what works for everyone, all the time.

Yet, oddly enough, when presented with recommendations for how to approach and solve problems, people tend to rebuff them as fine for me, but not scalable to everyone in the whole wide world. “What if everyone saved their money? The economy would collapse!”

That’s bullshit

Sure, a ton of people can get smart, save their money, run their own businesses & lose weight. But most people won’t. Your goal in your own life is to do the very best for you and the people you care about. Everyone else can take care of themselves. After all, you shouldn’t expect other people to have your best interests in mind. As the saying goes, “no one cares more about your money than you.”

Lack of Engineering Talent

I recently attended a dinner event for a prominent university here, where internal university updates are discussed. Filled with lecturers, deans, VPs and other state & local VIPs, it’s a pretty standard gathering of higher education people. At this year’s dinner, the president of the university was discussing the brand new data center. Proud of the fact that it’s the school’s first LEED Platinum certified building, she wanted to call out the person responsible for the initiative. This is how she announced them:

“My resident computer geek”

I talk to a lot of people about the state of the technology industry, especially with regards to job opportunities. Whenever I point to the wealth of technical job openings that remain unfilled, people always ask, “why is there such a lack of tech people?”. The reason I give surprises many and gets dismissed. It’s this:

It’s still not cool to be involved with computers.

Despite all the recent fame & success of technology and internet entrepreneurs, people still think of techies as the taped-glasses, pocket-protector-wearing geeks from the movies. It’s very hard to imagine why the “D&D-playing virgins living in their parents’ basement” stereotype persists. Yet, here we are.

“My resident computer geek”

It’s difficult to find a faster growing sector of the economy with higher earning potential than computers. Yet despite the existence of demand, the year-over-year growth of the sector, the massive unemployment rates of certain generations and the high earning potential of a computer-based job, the supply is actually going down.

To explain this, some people point to the relative newness of technology professions as the explanation. The narrative goes that younger generations will see the demand, flock to it, and then start to fill it. This might have been true in the ’80s, ’90s or early ’00s. But we’re squarely in the third decade (at least) of computers underpinning most of our daily lives. If timing were the issue, we would have seen an influx of grads after the mid-’80s, late ’90s or mid-’00s. Yet enrollment is down in almost every single STEM major around the country. The use of non-US talent is at an all-time high as companies go to Central & South America, Europe & Asia for talent.

Note: I know that higher ed is not the only source of talent and training, but it’s a big one.

Something deeper is going on that is steering people away from the sector.

“My resident computer geek”

This wasn’t some football-throwing jock stuffing kids into garbage cans. This was the president of a major university. If anyone should be sensitive about throwing around pejorative names, it should be her. The dismissive remark ignores just how quickly tech-savvy people are lapping non-tech-savvy people in terms of knowledge, business acumen, social mobility and plain economic power. To dismiss that section of the population is ignorant at best and dangerous at worst.

Among the other members introduced that night were lit professors, authors, pharmacists and CEOs. How many of those people do you think were reduced to an unflattering stereotype? She could have easily used stereotypes such as bookworms, alcoholics, med school dropouts and crooks to describe the other members mentioned above. But she didn’t.

“My resident computer geek”

After her speech, at the end of the dinner, she had a new recruitment video cued up to show everyone. As we sat watching, the video froze and stopped playing. Everyone in the room sat there, with no idea what to do. The person running the laptop could do nothing but click play / pause a few times before the president declared “we’re having technical difficulties”.

If only there was a computer geek around to help.

You’re probably half-assing your startup idea

I finally finished reading Paul Graham’s essay on how to get startup ideas. The whole thing is incredible. I did want to call out a few critical nuggets he touches on that I especially liked:

The verb you want to be using with respect to startup ideas is not “think up” but “notice”


When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they’ll use it even when it’s a crappy version one made by a two-person startup they’ve never heard of? If you can’t answer that, the idea is probably bad. [3]

In all honesty, this is enough to make you stop and nod your head ‘Yes!’. After 4 years running two startup groups here in Chicago, I got tired of saying this exact phrase. If you want to start a company, find someone already solving a problem they have, but doing it poorly. That’s really it. Of course, never confuse simple with easy.

Otherwise, you have to convince them that a) they have the problem, b) they should spend money to solve it and c) you’re the one to solve it. That’s a tough thing to do.

How about the incumbents you’re trying to displace?

When startups consume incumbents, they usually start by serving some small but important market that the big players ignore. It’s particularly good if there’s an admixture of disdain in the big players’ attitude, because that often misleads them.

Make no mistake, someone is making money “solving” the problem you’re trying to solve with your startup. This isn’t kindergarten. You’re trying to take money that would otherwise go to them. They’re not gonna like that. You have to move quickly, fight dirty & scrappy, and use your quickness to your advantage. It’s up to you to figure out exactly how to do that; otherwise, you’re screwed. Don’t whine if you can’t understand why you just can’t catch a break. Nobody is owed a business model.

Lastly, on idea sexiness:

In fact, one strategy I recommend to people who need a new idea is not merely to turn off their schlep and unsexy filters, but to seek out ideas that are unsexy or involve schleps. Don’t try to start Twitter. Those ideas are so rare that you can’t find them by looking for them. Make something unsexy that people will pay you for.

There are so many ideas out there just waiting to be implemented. The problem is that they’re not sexy enough. They’re not going to make the WSJ or NYT. They are micro-opportunities, little buckets of gold just waiting for someone to pick up and run with. Find your flywheel.

The “Jobs Creation” Fallacy

Felix Salmon chimes in on the Startup Act 3.0 in his latest post. The focus is on the so called immigrant-entrepreneur visa:

One is the immigrant-entrepreneur visa; the second is the idea of giving green cards to up to 50,000 foreign students who graduate from an American university with an advanced degree in science, technology, engineering, or mathematics — so long as they remain in that field for five consecutive years.

This is a very, very, very good thing. Our bleed to other countries in technical fields is quickly turning into a hemorrhage. However, he also makes an assumption that trips up most people: the lure of Job Creation.

And of course — by definition — it would create jobs. The Kauffman foundation’s math is solid, here: they conservatively estimate job creation at somewhere between 500,000 and 1.6 million new jobs after ten years, and possibly substantially more. (Those estimates don’t include jobs created by the new firms after they’ve left the program, for instance.)

Now, some of these entrepreneurs will start companies in the classics: education, healthcare, manufacturing, etc. However, the majority will be tech-related startups. And here’s the rub: the employees required by tech startups are not the ones sitting around unemployed. There’s a major shortage of knowledge workers of all kinds, from engineers on down. So simply creating more available jobs is not actually going to help out-of-work people get work. The people these jobs need already have work.

This visa does nothing to stimulate the supply side for these jobs, only the demand. In a way, this will actually hurt the economy a bit, because you’re adding to the price war going on for technical workers. This drives salaries up to levels only large companies can afford, pushing out the smaller, scrappier entrepreneurs. The rich get richer.

If the government really wants to stimulate job creation, then in addition to the above visa, it should work on increasing supply. But how? Simple: subsidize the education (college or otherwise) of anyone getting a degree or certification in a field that has been among the highest in demand for the past 3-5 years: Computer Science, Electrical Engineering, Software Development, etc.

With the skyrocketing costs of tuition, people are facing the choice of education or no education. By offering them an alternative (major in Communications and take out loans, or major in Technical Writing and go for free), you’re also reducing the debt load of an entire generation of teenagers who would otherwise die in debt.

A week with my new favorite watch by @Hodinkee

NOMOS Zurich

This watch is the type of watch that should be worn by a man who travels the world and thinks nothing of it; a man who is at home in Zurich, Hong Kong, Chicago, and Santiago, and knows the best places to eat in each without having to use his iPhone. It was made for the type of person who reads Monocle Magazine not to impress people on the train, but who genuinely cares about stalwarts of sustainable design in an obscure Scandinavian city. This watch is for a man who appreciates the fact that this watch features an in-house manufacture movement with hand-finishing, but doesn’t need everyone around him to know how much he paid for it. The NOMOS is a watch for a man who knew exactly who Nick Horween was before he saw that this watch came on Horween leather.

Banksy distills our upcoming revolution

This profile of Banksy outlines why he might just be remembered as one of the most influential people of the 21st century.

On indie artists:

While he may shelter behind a concealed identity, he advocates a direct connection between an artist and his constituency. “There’s a whole new audience out there, and it’s never been easier to sell [one’s art],” Banksy has maintained. “You don’t have to go to college, drag ’round a portfolio, mail off transparencies to snooty galleries or sleep with someone powerful, all you need now is a few ideas and a broadband connection. This is the first time the essentially bourgeois world of art has belonged to the people. We need to make it count.”

We may very well look back at Banksy as the catalyst for the upcoming indiepocalypse.

On capitalism:

“I love the way capitalism finds a place—even for its enemies. It’s definitely boom time in the discontent industry. I mean, how many cakes does Michael Moore get through?”

Finally, a bit of sarcasm:

“Hollywood,” he once said, “is a town where they honor their heroes by writing their names on the pavement to be walked on by fat people and peed on by dogs. It seemed like a great place to come and be ambitious.”