Sunday, June 27, 2010

On the semantics of permanent use-relations

Three weeks ago I wrote a blog post titled Domain-Specific Presentation is Component Visualization. The idea was to provide a domain-specific component visualization which is aware of the different types of components and their relations.

Since then I've asked myself whether there isn't a UML way to visualize domain-specific components. I know how this sounds: UML is generic by its very nature, not domain-specific. But let's give it a try anyway.

Before starting I have to explain one thing first: actifsource distinguishes between only two types of relations: the own-relation and the use-relation. The own-relation defines a strong coupling. If A owns B, B must not exist without A. The use-relation, on the other hand, doesn't imply any life-time semantics.

As I explained in my blog Software Architecture and Components, domain-specific components are defined implicitly around the own-relations of their meta-model.

Looking at the UML Component Diagram we find that relations between components are realized as delegation connectors which connect a port to an element of the component.


And now the interesting question: does every use-relation between components lead to a port?

To answer this question, let's have a look at the meta-model of a UML class. A class can be seen as a component as well, composing

  • a name attribute
  • member variables
  • member functions
Let's have a closer look at member variables, which can express two different relations:

  • own relation
  • use relation
Unfortunately, using third-generation languages the semantics aren't always that clear. Programming in C++ I use pointers to imply use-relations and references or value objects to imply own-relations whenever possible.

Before continuing the discussion, let's have a look at member functions. Functions consist of named and typed arguments. The types are modeled as a use-relation to other existing types.

So, what is the difference between the use-relation of member-variables to other types and the use-relation of typed arguments of a member function?

The use-relations of member variables define the permanent relations between components, while the use-relations of typed arguments of a function definition simply define a temporary relation.
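A minimal Java sketch of this distinction (the types and names are made up for illustration): the Logger field expresses a permanent use-relation, while the Message argument only expresses a temporary one.

interface Logger { void log(String text); }

interface Message { String render(); }

public class Dispatcher {

  // permanent use-relation: the Logger is referenced for the whole lifetime of the Dispatcher
  private final Logger logger;

  public Dispatcher(Logger logger) {
    this.logger = logger;
  }

  // temporary use-relation: the Message is only referenced for the duration of the call
  public void dispatch(Message message) {
    logger.log(message.render());
  }
}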

Obviously we have to distinguish between two different kinds of use-relations:

  • use-relations which define permanent relations between components
  • use-relations which define temporary relations between components
For a valuable component relation overview we are interested only in permanent relations between components, which leads to the following definition:

Permanent use-relations shall be shown as ports in a component diagram.
In my next blog I will try to explain how composite structures shall be visualized using the UML Composite Structure Diagram combined with the idea of ports for permanent use-relations.

Friday, June 25, 2010

Be careful when using access modifiers

When working with frameworks provided by others, I sometimes get a little bit frustrated. Using the "normal" features provided by a framework is, in general, no problem. But when I have to access more information and go deeper into the code, I often find package-private and private modifiers protecting the code I would like to reuse. Most often the problem is not at the method level; the method I need to call is public. In many cases even the constructor of the class is public, and the problem begins with the arguments of the constructor or the method.

Often one of the arguments is a class declared package-private, or it is public but has a package-private constructor. This forces me to search for a factory method, which is often hidden deep behind other such constructs, meaning the code is strongly coupled to the framework.

Here is a small example:
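A rough sketch of the kind of construct I mean (all class names are made up for illustration): the method I want to call is public, but its argument can only be obtained from a factory whose constructor is package-private.

// the method I want to call is public...
public class FrameworkService {
  public void process(Configuration configuration) {
    // ...
  }
}

// ...but its argument can only be created through a factory
public class Configuration {
  Configuration() {}  // package-private constructor
}

public class ConfigurationFactory {
  ConfigurationFactory() {}  // package-private as well: I cannot create the factory from my own code

  public Configuration createConfiguration() {
    return new Configuration();
  }
}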



I hate such code, especially if there is no special reason why the factory has a package-private constructor. Maybe there is a misunderstanding between information hiding and class design.

In cases where the code doesn't access any data from the framework there is no need to make the constructor package-private. It doesn't matter if there is a second factory used by someone else. If you really have to use package-private, please make use of interfaces. This way it remains possible to reuse the class by creating a new factory implementation:
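One possible reading, sketched with made-up names: both the product and the factory are exposed through public interfaces, so code that consumes them depends only on the interfaces and anyone can plug in a new factory implementation.

public interface IConfiguration {
  String getSetting(String key);
}

public interface IConfigurationFactory {
  IConfiguration createConfiguration();
}

// the code that needs a configuration depends only on the public interfaces,
// so it can be reused with any new factory implementation
public class ReusableProcessor {
  private final IConfigurationFactory factory;

  public ReusableProcessor(IConfigurationFactory factory) {
    this.factory = factory;
  }

  public void process() {
    IConfiguration configuration = factory.createConfiguration();
    // ... work with the configuration
  }
}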



I always wonder what the test code for such strongly coupled code looks like. If you use interfaces, unit testing is an easy task. You might use a mock framework like EasyMock. Using interfaces, your class can be tested independently of the factory implementation and without creating an instance of the framework class.
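To illustrate the point, a minimal EasyMock-based test against the hypothetical ReusableProcessor sketched above; both the factory and the configuration are mocked, so no framework class has to be instantiated at all.

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;

import org.junit.Test;

public class ReusableProcessorTest {

  @Test
  public void processCreatesConfiguration() {
    IConfiguration configuration = createMock(IConfiguration.class);
    IConfigurationFactory factory = createMock(IConfigurationFactory.class);
    expect(factory.createConfiguration()).andReturn(configuration);
    replay(factory, configuration);

    new ReusableProcessor(factory).process();

    verify(factory, configuration);
  }
}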

For easier testing and better reusability, I suggest using public interfaces for a reasonable decoupling whenever possible.

Thursday, June 24, 2010

Abolish the Mutants! .. or do we need Setable and Mapable Interfaces?

No, this is not about a well-known film series. This is about software development! Forgive me for the lurid title :-)

Some weeks ago I was writing about an unexpected behaviour when you use Map.Entry objects in HashSets. The problem emerged because the code assumed an object to be immutable that was not. More generally, we can ask: what elements can we put into a Set or use as the key of a Map?

In the javadoc of java.util.Map we find the following warning:
Note: great care must be exercised if mutable objects are used as map
keys.  The behavior of a map is not specified if the value of an object is
changed in a manner that affects equals comparisons while the
object is a key in the map.
Unfortunately, we usually do not immediately see whether an object can be changed in such a way or not and are therefore clueless whether we are allowed to use it as the key of a mapping. We need to check the implementation of the class and possible sub-classes. If we are dealing with an interface we would have to check all implementations, which is an impossible task as new implementations might be added in the future.

The inventors of the java collection library could have diminished this problem by introducing interfaces Setable resp. Mapable which mark whether a class is designed for being put into a set resp. for being the key in a map. Map and Set could then allow only objects implementing these interfaces. However, these interface names are rather cumbersome, and as they chose not to do this, we need to find another way of dealing with this issue.

Universum java est omnis divisa in partes tres

If we look at the universe of all java objects we can group them into three distinct sets:
  1. Immutable objects
  2. Entities
  3. Neither 1 nor 2
Immutable objects are objects which never change their internal state. They are created fully initialized, their hash code never changes and the equality relation formed by their equals method remains the same over their full lifetime. They are perfect set members and hash keys!

Entities can change their internal state over time but they have a constant identity-giving property. Every java class which does not override equals and hashCode from java.lang.Object fulfills this requirement: its identity is given by its address in memory (which is only accessible to the JVM, but nonetheless it is consistently defined). In other cases, the entity's identity is given by some id which might be a final long field or some special guid. The equals and hashCode methods must then be defined on this id only and not on any other fields. This makes entities suitable for sets and maps.
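A minimal sketch of such an entity (the class is made up for illustration): the id is final and equals/hashCode are defined on it only, so the object can mutate its other fields without changing its identity.

public class Person {

  private final long fId;   // identity-giving property, never changes
  private String fName;     // mutable state, not part of the identity

  public Person(long id, String name) {
    fId = id;
    fName = name;
  }

  public void setName(String name) {
    fName = name;           // mutating this does not affect equals/hashCode
  }

  @Override
  public int hashCode() {
    return (int) (fId ^ (fId >>> 32));
  }

  @Override
  public boolean equals(Object obj) {
    if (!(obj instanceof Person)) return false;
    return fId == ((Person) obj).fId;
  }
}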

The remaining objects, which are neither immutable nor entities, can change their identity over time. I will therefore call them mutants for now. We should use them in sets and maps only if we can somehow guarantee that they will never mutate (change their identity) during the entire time they are used in the sets or maps!

Avoid Mutants!

Unfortunately, there are many mutants hiding in the java libraries:
  • java.util.List and all its implementations
  • java.util.Set and all its implementations
  • java.util.Map and all its implementations
  • java.util.BitSet
  • java.util.Date
  • ...
So, what can you do if you want to use a list of elements safely as key to a map, for example?

public class Registry {
  private Map<List<String>, String> map = new HashMap<List<String>, String>();

  public void put(List<String> strings, String value) {
    // oops, unsafe!
    map.put(strings, value);
  }
}

The list of strings is a potential mutant!


  public void put(List<String> strings, String value) {
    // oops, still unsafe!
    map.put(Collections.unmodifiableList(strings), value);
  }

Wrapping the mutant into an unmodifiable list will still not work as the caller of the method still has a reference to the list and can therefore modify it!


  public void put(List<String> strings, String value) {
    map.put(createImmutableCopy(strings), value);
  }

  private List<String> createImmutableCopy(List<String> strings) {
    return Collections.unmodifiableList(new ArrayList<String>(strings));
  }

We need to both copy the list and put it into an unmodifiable wrapper! Now we can guarantee that this list will never change again because there are no references to the wrapped ArrayList except by the Collections.UnmodifiableList wrapper which guards it from modifications. Note that we have this guarantee only because we know that the list is a direct result of our helper function. If we pass the list to some other method it will have to be treated as a potential mutant there again...

As you see, mutants cause a lot of trouble and work to ensure they do not change their identity. Therefore my demand is clearly: do not write any mutant classes! By cleverly composing immutable classes and entity classes you can always avoid the dangerous mutants.

Friday, June 18, 2010

Generating an internal DSL from an external DSL

We have learned that when we create sound, explicit, intention-revealing APIs, the number of WTFs per minute is reduced and the programmer's life is a bit easier.

Further, the API becomes quicker to learn and fewer faults are made due to accidental misuse of the API.

Writing internal DSLs formulated in our general purpose programming language is one way to approach this, but writing an internal DSL is not an easy task and violates the KISS principle.

But since we want to make the life of our developers a bit easier and even more fun, it makes sense to create such an internal DSL where the developer can declaratively describe what is needed from the API.

actualValue = ... // this value should be detected by the framework
Detector detector = actualValueFramework.bind(new DetectorModule() {
  @Override
  public void configure(DetectorExpressionBuilder detector) {
    detector.detect()
        .withEvent().change()
        .andInterval().humanVisual()
        .andPersistency().disc();
  }
}).to(actualValue);

Listing 1

My preferred internal DSL style in Java is shown in listing 1 and is known as Method Chaining.

With code completion from the IDE this is very neat, since the possibilities proposed by the completion express the further declaration steps. The developer can explore the API without any prior knowledge of its usage.

So this way I can achieve the goals mentioned above. Exploring is fun. Code completion while exploring makes it easy. Finding words from the problem domain makes it intention revealing.

But how can I make my own life easier and even more fun?

I have learned that internal DSLs in Java can have a generic domain model, a meta-model. There is always a variation point and its variations. Variations can be defined using polymorphism. A generic domain model must therefore give me the ability to express my API this way.

In the example above I have a domain event that can either be a change or reaching an upper or lower limit. Then there is an interval that can be millisecond, humanVisual or oneMinute. The persistency can be disc or memory.

Event, interval and persistency are the variation points whereas change, upper, lower, millisecond, humanVisual, oneMinute, disc and memory are variations.
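Expressed in Java, a variation point can be modelled as an abstract class and its variations as concrete subclasses. A minimal sketch for the persistency variation point (the class bodies are left empty here; they are not spelled out in the listings below, which only instantiate Memory and Disc):

// variation point
public abstract class Persistency {
}

// variations
public class Disc extends Persistency {
}

public class Memory extends Persistency {
}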

Using actifSource I created an external DSL that lets me express this variability in a generic domain model and write a code generator that generates ExpressionBuilders from the specific domain model.

Now it is no longer hard to write this type of internal DSL, where your specific domain model can be expressed in a declarative, descriptive way.

How is it done using actifSource?

On the generic domain model class ExpressionBuilder, one property called operation and one called expression are defined.

The property operation is of type Operation and expression is of type Class.

Operation is a NamedResource, and here the operation name is used as the starting point of the expression, as in the detector.detect() method in listing 1. The Class contains a Class Instance having defined its own properties. Each property's Range is an abstract class.

The Class is the variation point whereas the Class Instance is the variation.


public PersistencyExpression withPersistency() {
  return new PersistencyExpression();
}


Listing 2

The generator reads each property and defines a with-method, as shown in listing 2, e.g. for the Persistency variation point.

Furthermore, the generator generates an ExpressionBuilder by looking up all instances of the type defined as Range on the property and creates a method for each of them.


public class PersistencyExpression {

  public AndProperties memory() {
    persistency = new Memory();
    return new AndProperties();
  }

  public AndProperties disc() {
    persistency = new Disc();
    return new AndProperties();
  }
}

Listing 3

The end declaration of the internal DSL expression is also generated as shown in listing 4.


public Detector build() {
  DetectorImpl build = new DetectorImpl();

  if (persistency != null) {
    build.setPersistency(persistency);
  }
  if (interval != null) {
    build.setInterval(interval);
  }
  if (range != null) {
    build.setRange(range);
  }
  if (event != null) {
    build.setEvent(event);
  }

  return build;
}


Listing 4


public class ActualValueFrameworkImpl ...

  public BindTo bind(final DetectorModule detectorModule) {
    return new BindTo() {

      @Override
      public Disposable to(final ActualValue actualValue) {
        DetectorExpressionBuilder detect = new DetectorExpressionBuilder();
        detectorModule.configure(detect);
        return detect.build();
      }
    };
  }
  ...
}

Listing 5

Now the framework method looks like listing 5.

Whether this is a proper usage of modeling or not... it does make my life easier, and I can generate several internal DSLs using this approach with nothing else to do than to declaratively define them and press generate.

Happy modeling, Nils

Plugin Dependency Management outside Eclipse

Today I searched for a solution to simplify using the actifsource ant task. Until now, all actifsource scopes (either an eclipse project or a bundle) have to be defined using a special actifsource ant task, which takes a classpath. When making heavy use of bundles, these classpaths can become really long and updating them by hand is not very convenient.

One idea I had to improve this is to define the plugin directory directly and only point to the required bundles by their symbolic names. To do this we need to load the plugins outside of eclipse. The first place to start was to have a deeper look into equinox and see if there is a way to simply load all plugins and use them to load the required classes. At the moment we don't need to activate the bundles.

Sadly the equinox framework seems to rely strongly on some properties and requires the existence of configuration files on the file system. Reusing implementations like BaseAdaptor, Framework, HookRegistry and BaseStorage isn't really an option since they are strongly bound together. Some of the classes have only package-private constructors, others have public constructors but take one of the private classes as parameter. Time to search for another solution. OSGi is an open standard and every implementation should be able to load the bundles.

The solution I found was Apache Felix. It is said to be easy to set up, and when I tried an example I was really glad to find out that this is true.

The following code allows me to load a bundle located in the plugin directory:

Felix felix = new Felix(Collections.emptyMap());
try {
  felix.start();
} catch (BundleException e) {
  e.printStackTrace();
  return;
}
File bundleDir = new File("C:\\eclipseLocation\\plugins");
Bundle bundle = felix.getBundleContext().installBundle("locationId",
    new FileInputStream(new File(bundleDir, "bundle.jar")));


This is really short. Using this approach to load all bundles from the bundle directory would allow us to search for the bundles by their symbolic name and refer to them directly. No more classpath setup in every ant file. Maybe we could also load some extension point definitions and simplify the whole registration of generator tasks.
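A minimal sketch of what that lookup could look like (the helper method is mine, not part of Felix or actifsource): it installs every jar found in the plugin directory and indexes the resulting bundles by their symbolic name.

private Map<String, Bundle> installAllBundles(Felix felix, File bundleDir) throws Exception {
  Map<String, Bundle> bundlesBySymbolicName = new HashMap<String, Bundle>();
  for (File file : bundleDir.listFiles()) {
    if (!file.getName().endsWith(".jar")) continue;
    // use the file location as the bundle location id
    Bundle bundle = felix.getBundleContext().installBundle(
        file.getAbsolutePath(), new FileInputStream(file));
    bundlesBySymbolicName.put(bundle.getSymbolicName(), bundle);
  }
  return bundlesBySymbolicName;
}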

Tuesday, June 15, 2010

GWT And Data Persistence

After trying out the tutorials from the Google Web Toolkit I created a small application where I needed to persist some data on the server. I was searching for quite a while how to do this, so I thought it would be a nice idea to share what I found out about it. Basically, you can store data easily using Java Data Objects (JDO), but there are some things that you have to pay attention to.

JDO uses annotations to mark the fields and classes that are persistable:

@PersistenceCapable
public class Book extends Entity {

  @Persistent
  private java.lang.String title;
   
  @Persistent
  private java.lang.String iSBN;

  @Persistent
  private double price;
   
  @Persistent
  private java.lang.String description;

   // ...

}

When you are working with GWT you usually implement services on the server. This is done by extending RemoteServiceServlet and implementing your service interface. That is the place where I wanted to persist the data which gets changed by service calls. To do this, you need a so-called PersistenceManager. I wrote a helper class which contains singleton access to a PersistenceManagerFactory:

public final class PMF {
 
  private static final PersistenceManagerFactory pmfInstance =
        JDOHelper.getPersistenceManagerFactory("transactions-optional");

  private PMF() {}

  public static PersistenceManagerFactory get() {
    return pmfInstance;
  }
 
}

Now, creating a new book and making it persistent is as easy as this:

  @Override
  public Book createBook() {
    Book newBook = new Book();
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try {
      pm.makePersistent(newBook);
    } finally {
      pm.close();
     }
    return newBook;
  }

Access to existing data is done using JDOQL and is not much harder:

public List getBooks() {
  PersistenceManager pm = PMF.get().getPersistenceManager();
  try {
    Query query = pm.newQuery(Book.class);
    List results = (List) query.execute();
    return new ArrayList(results);
  } finally {
    pm.close();
  }
}

This gives a list of all existing books.
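JDOQL also supports filters. A minimal sketch (assuming the Book class from above), selecting books by title:

public List getBooksByTitle(String title) {
  PersistenceManager pm = PMF.get().getPersistenceManager();
  try {
    Query query = pm.newQuery(Book.class);
    query.setFilter("title == titleParam");
    query.declareParameters("String titleParam");
    List results = (List) query.execute(title);
    return new ArrayList(results);
  } finally {
    pm.close();
  }
}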

I ran into problems when I tried to use inheritance with my JDO data classes. All my data classes inherit from the following class Entity:

@PersistenceCapable
@Inheritance(strategy = InheritanceStrategy.SUBCLASS_TABLE)
public abstract class Entity {
   
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long fKey;
   
    protected Entity() {
        this(null);
    }
   
    protected Entity(Long key) {
        fKey = key;
    }
   
    public long getKey() {
        return fKey;
    }

    @Override
    public int hashCode() {
        if (fKey == null) return 0;
        return fKey.hashCode();
    }

    @Override
    public boolean equals(Object obj) {
       // ...
    }
}

The important piece here is @Inheritance(strategy = InheritanceStrategy.SUBCLASS_TABLE). This defines how the objects are stored in the tables. As the default store strategy is not implemented by Google it is very important to specify this explicitly!

If you want to use the data classes as arguments to the service calls you need to give them a no-arg constructor (which is allowed to be private) and make sure that they implement IsSerializable:

  /**
   * Creates a new {@link Book} with all fields set to default.
   */
  public Book() {
    title = "";
    iSBN = "";
    price = 0.0;
    description = "";
  }


Friday, June 11, 2010

Implementing Apache Axis Webservices based on POJOs

In my last post I came up with the idea of generating a webservice using data classes generated from an actifsource model.

Have you ever written a WSDL from scratch? If you are used to the WSDL schema, it seems to be really easy, even for beginners. Later on, when you write the server-side implementation, you will soon find out that you always have to look out for inconsistencies. When you are done and start implementing the client side, this task becomes more difficult; it's the third place where consistency has to be kept.

This week I started to explore how to generate an Apache Axis webservice (http://ws.apache.org/axis/) with actifsource. First I looked at the eclipse web tools platform project (wtp); from a common user's point of view it seems to be really straightforward. If the server-side project and the environment are set up correctly, the service classes are updated on the fly via hot code replacement and there is also the possibility to run the tomcat webserver in debug mode. In debug mode you can use breakpoints the same way as in every other jdt project. That's fine.

One big disadvantage I found was the way dependencies are managed. There is an additional entry for the j2ee-dependencies in the preferences. This means you have to be careful where to set the dependencies, because only dependencies defined here are automatically uploaded to the webserver.

If your project is also an osgi-bundle, then these dependencies are ignored. This is a thing you have to take care about, but it is not really a problem, since the server-side implementation should be in its own project anyway. So what was really annoying? The wtp toolkit changed all projects I added to the j2ee-dependencies. Now I had dependencies to the eclipse builder of the wst-project in all library projects I referred to. As if that wasn't enough, the classpath was modified to include the Apache Tomcat 6 libraries.

The result is that when we look at the dependencies we now have to deliver all the tomcat libraries. Each developer working with these libraries has to install the eclipse wst, otherwise he gets an eclipse problem marker and the project won't compile. The solution I found was to create a jar-file for each library directly in the webservice project. This means I have to keep them up to date myself and they are duplicated for each webservice using them.

The alternative to this is either to find another toolkit which behaves differently or to create an actifsource solution. The actifsource solution should be able to create the data classes, the server-side implementation with protected regions and the client implementation directly from the model. Both the server implementation and the client implementation should share the same service interface. To be independent from the underlying webservice framework it should use a generic ServiceException instead of, for example, an AxisFault. The server implementation should be automatically packed in a service archive (.aar/.war) together with the required libraries. Finally, it would be nice if it also offered the possibility of automatic uploading to the webserver.

For this task I reused the templates for the genericapp data classes and modified the model from the service tutorial to fit my needs. I searched for a possibility to auto-generate the WSDL from the server implementation. Reusing the implementations from the wst-plugins was a good option, but it failed due to lack of information; it doesn't seem to be easy and may be impossible.

Looking at the apache axis website I saw that they have already moved to Axis2 and that there is no need to provide the WSDL in the ".aar" webservice archive. There is no longer any need for a deploy.wsdd and undeploy.wsdd; instead one has to put a "services.xml" into the META-INF directory of the web archive. This file is really simple, it only needs the service name and some default handlers:
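As a rough sketch (the service name and implementation class are made up; for RPC-style services Axis2 ships default message receivers), such a file looks roughly like this:

<service name="BookService">
  <description>Example service</description>
  <parameter name="ServiceClass">com.example.BookServiceImpl</parameter>
  <messageReceivers>
    <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-only"
        class="org.apache.axis2.rpc.receivers.RPCInOnlyMessageReceiver"/>
    <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out"
        class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
  </messageReceivers>
</service>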



Since I want to use the actifsource namespace, I also defined the package-namespace mapping and a generated WSDL. One pitfall was generating a custom WSDL; it must be located in the META-INF directory (not in the wsdl directory as in Axis 1) and in WSDL 2 format, otherwise it will be ignored. Creating the server-side implementation including the templates cost me about four days, but not for writing the templates and the actifsource model. I spent most of the time reading documentation and discovering how things work in Axis2. Thinking back, figuring out how to define the namespace mapping and why my custom WSDL was ignored was the hardest part. Tcpmon was a really good helper. So basically, most of the time was spent learning how Axis2 works.

The client implementation was much easier: compared to Axis 1, which generates four classes for the implementation, in Axis2 only one implementation class is needed and the interface definition can be shared between server and client.

Now, after a week of work, I have a working generic app which has two parts: a dataclass project, which defines the metamodel for the data classes and the service interfaces, and an axis-service project to generate the axis client and server implementation. Here is an example of how the project dependencies look in a simple setup.



Note that the only actifsource model the user has to create is the dataclass model. I think it is a good idea to provide a project generation buildconfig which allows the creation of the target projects for client, server and dataclasses, including an initial setup.

Casting in Java Leads to DRY Violations

The Don't Repeat Yourself principle states that you should not have any duplication in your code. Code duplication makes the code harder to maintain. In Java, one often finds code like this:

  String extractValue(IType parameter) {
    if (parameter instanceof ILiteralType) {
      ILiteralType literalType = (ILiteralType)parameter;
      return literalType.getValue();
    }
    return null;
  }

Of course, it is best to avoid casting completely. However, in some cases this is not possible. When casting, the code must first check if an object is an instance of the desired type using the instanceof operator. If the test succeeds, the object can be casted to the type. The problem is that you have to specify the type twice which is a repetition and therefore violates DRY. Whenever you change the type in the instanceof expression you need to change the cast in sync. Failing to do so will result in an error that occurs only at run time.

One can circumvent this problem by using a little static helper function (we have put it in a class named ObjectUtil):


  public static <T> T castOrNull(Object o, Class<T> clazz) {
    if (!clazz.isInstance(o)) return null;
    return clazz.cast(o);
  }

With the help of this function the code above becomes:

  String extractValue(IType parameter) {
    ILiteralType literalType =
          ObjectUtil.castOrNull(parameter, ILiteralType.class);
    if (literalType == null) {
      return null;
    }
    return literalType.getValue();
  }

Now, the type is specified only once as argument to the castOrNull method.

(The attentive reader will object that ILiteralType is still specified twice, once as argument to castOrNull and once when declaring the variable literalType. So effectively, we have only two occurrences instead of three in the original code. However, an inconsistency between the two remaining occurrences will be detected by the compiler and is therefore far less problematic!)

Tuesday, June 8, 2010

Domain-Specific Presentation is Component Visualization

Some time ago I was asked if actifsource provides a domain-specific presentation of domain-models. I had to answer this question with no. But what's the problem with a domain-specific presentation?

As I stated in this blog before, components are the building blocks of architecture. A component (derived from Latin "componens") is something that is composed of different parts by its very nature.

Using different UML classes to visualize components is kind of silly, because the single parts of a component are structured hierarchically. This means that visualizing component instances of the same type is already given by the concept.

But how to specify a component visualization concept? The problem is to cope with the multiplicity of relations between the component parts. Being able to visualize a to-many relation r between two elements A and B means that the visualization concept has to cope with any number of B instances aggregated in A.

Implementing a generic graphical editor where you can design a component visualization concept which handles the presentation of to-many relations is quite a challenge.

I will try to sketch a possible solution in one of my next blogs.

Monday, June 7, 2010

If a bug exists long enough it becomes a feature..

Consider the following code which uses a java.util.TreeSet:

1 TreeSet set = new TreeSet();
2 set.add(null);
3 set.remove(null);

Does it work? Does it throw a NullPointerException? If we look at the javadoc of the TreeSet.add method we read:

 * @throws NullPointerException if the specified element is null
 *         and this set uses natural ordering, or its comparator
 *         does not permit null elements

As we create the TreeSet using the default constructor, it uses the natural ordering and we would expect a NullPointerException on line 2. However, the NullPointerException happens only on line 3. Even though this is a clear bug which was already reported for Java 1.3 (issue 5045147), it is still present in Java 6. Sun decided that too much code would break if they fixed it, so adding null to an empty TreeSet remains allowed. One might ask what quality such code has..

Friday, June 4, 2010

Set-Operations

When writing templates it's sometimes useful to merge the results of two or more selector expressions and iterate over them. In older versions there was only one possibility to achieve this: writing your own Java template function. With the new version of actifsource we added three keywords to the selector line syntax. Since the selector line doesn't contain any source code, we don't break the rule of avoiding a mixture of source code and template code.

The three keywords are "union", "intersect" and "except". All three are binary operations operating on lists. In addition to the keywords we allow round brackets for changing the evaluation order.


The union keyword

The union keyword merges the results of two selector expressions (m1 union m2) by creating a new list containing all elements of m1 and m2. Since we operate on lists, this preserves the order and allows duplicate elements. As a result we get one big list, first containing all elements of the result of m1 followed by all elements of the result of m2.

For example [1, 2, 3, 5, 4] union [2, 4, 3] will result in [1, 2, 3, 5, 4, 2, 4, 3].


The intersect keyword

The intersect keyword merges the results of two selectors (m1 intersect m2) so that all occurrences of an element in m1 that exceed the number of occurrences of that element in m2 are removed from m1. This keeps the order as in m1, except for the elements removed.

For example [1, 2, 3, 5, 2] intersect [1, 3, 3, 2] will result in [1, 2, 3].


The except keyword

The except keyword merges the results of two selector expressions (m1 except m2) by removing all elements of m2 from the result of m1. Since we operate on lists, this preserves the order, and elements contained multiple times in the result of m1 will only be removed as often as they are contained in m2.

For example [1, 2, 3, 5, 2, 4] except [2, 4, 6, 3] will result in [1, 5, 2].
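To make the multiplicity handling explicit, here is a small Java sketch of the semantics described above (the helper class is mine, not part of actifsource); it reproduces the three examples given in this post.

import java.util.ArrayList;
import java.util.List;

public final class ListOps {

  // m1 union m2: all elements of m1 followed by all elements of m2
  public static <T> List<T> union(List<T> m1, List<T> m2) {
    List<T> result = new ArrayList<T>(m1);
    result.addAll(m2);
    return result;
  }

  // m1 intersect m2: keep an element of m1 only as long as m2 still has an occurrence left
  public static <T> List<T> intersect(List<T> m1, List<T> m2) {
    List<T> budget = new ArrayList<T>(m2);
    List<T> result = new ArrayList<T>();
    for (T element : m1) {
      if (budget.remove(element)) { // consumes one occurrence from m2
        result.add(element);
      }
    }
    return result;
  }

  // m1 except m2: each occurrence in m2 removes at most one occurrence from m1
  public static <T> List<T> except(List<T> m1, List<T> m2) {
    List<T> result = new ArrayList<T>(m1);
    for (T element : m2) {
      result.remove(element); // removes the first occurrence, if any
    }
    return result;
  }
}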


For what can it be used?

When traversing the hierarchy or collecting referenced objects it's often recommended to exclude the element itself.

As an example I built a simple model with persons. All persons can have friends. If you want to collect the friends of all friends, except the person itself and the friends that are known directly, you can use the following expression:


Or you can collect all friends and include the person itself:



With the set operations there are far fewer situations where you have to write template functions, which makes writing templates much faster and easier.

Thursday, June 3, 2010

Software Modernization through Componentization

As David L. Parnas stated in his article Software Aging:

Programs, like people, get old. We can't prevent aging, but we can understand its causes, take steps to limit its effects, temporarily reverse some of the damage it has caused, and prepare for the day when the software is no longer viable.

But how to cope with a legacy system? Is it even possible to renovate such a legacy system and if so, how?

Legacy systems are hard to maintain because the system's structure tends to degrade over the years. Furthermore, legacy systems use functions as the structuring element. Today, systems are structured along components, which contain functions.

So what's the idea behind Software Modernization through Componentization? The idea is to analyze the existing functions of your legacy system and group them together into components. This task is called Componentization.

After the Componentization you need to analyze the worked-out components. Are there any similarities in the component structures and the relationships between components? Any findings of this component analysis process will help you to define the formal component concept.

Once you have found an adequate formal component concept for your legacy system, you are able to manage your components. Doing so allows you to generate component interfaces.

These component interfaces allow you to define clear relations between components. They also make it easy to set up new functionality, such as managing components through a web GUI (web enabling) or providing access through services (SOA enabling).