WebSphere ESB Topologies (Part 1)

WebSphere Application Server (WAS) has a variety of ways of defining servers and the relationships between them; these arrangements are often called WAS topologies. Let’s revisit some of the WAS topology concepts from a WebSphere ESB perspective (much of what you may know or learn about WAS topologies applies equally to WebSphere ESB, and vice versa, because ESB is built on top of WAS).

There is a hierarchy of objects in an ESB topology:

An ESB installation can have one or more profiles, each of which defines a node in a topology (in other words, there is a 1:1 mapping between profiles and nodes). A profile can be one of three types:

- a stand-alone profile, which defines a single, self-contained node;
- a deployment manager profile, which defines a node used to manage other nodes;
- a custom profile, which defines a node intended to be federated to a deployment manager.

These profiles and the overall topology are orthogonal to ESB installations. An entire topology (with a variety of profile types) can be run from a single installation, or each profile can be part of a separate installation. To further confuse matters, a physical machine can have one or more ESB installations (although typically it only has one).

A stand-alone profile is easiest to understand. This defines a single node, which exists in a single cell. The single node contains a single default server. If you install ESB using the ‘Complete’ option, you will get a profile created of this type - called ‘default’, containing a server called ‘server1’ (the node and cell name will be some permutation of the hostname of your machine). Administration is done through an administrative console attached to the node.
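If you want to create such a profile yourself rather than relying on the installer, the `manageprofiles` command can be used. A minimal sketch follows - the installation path, template path, and node/cell/host names are all illustrative and will differ on your machine (check the `profileTemplates` directory of your own installation for the correct template):

```shell
# Sketch: creating a stand-alone profile from the command line.
# All paths and names below are illustrative, not definitive.
/opt/IBM/WebSphere/ESB/bin/manageprofiles.sh -create \
  -profileName default \
  -templatePath /opt/IBM/WebSphere/ESB/profileTemplates/default \
  -nodeName myhostNode01 \
  -cellName myhostNode01Cell \
  -hostName myhost.example.com

# Start the single default server defined by the new profile:
/opt/IBM/WebSphere/ESB/profiles/default/bin/startServer.sh server1
```

Once `server1` is running, the administrative console for the node is available from that server.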

Alternatively, you can set up a more complex configuration. If you create a deployment manager node, you can use it to manage other nodes. Typically those other nodes start out as custom nodes. When you ‘federate’ them to a deployment manager, they become part of the deployment manager’s cell, and a special type of server called a ‘node agent’ is created on the custom profile. Often this federation is done when the profile for that node is created. Federation allows configuration information to be shared between nodes: the administrative console you use is now part of the deployment manager node, and configuration information is synchronised on a schedule, or on demand. Resources (for example, JDBC connections) can be created at ‘cell’, ‘node’, or ‘server’ scope, and are visible only within that scope. Application servers also need to be created manually on custom nodes - they don’t contain any by default. A cell can contain only one deployment manager.
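Federation can also be performed after the fact from the command line. A rough sketch, assuming a custom profile called `custom01` and a deployment manager on `dmgrhost.example.com` (hostnames, paths, and the profile name are illustrative; 8879 is the default SOAP connector port of a deployment manager, but yours may differ):

```shell
# Sketch: federating an existing custom node into a deployment
# manager's cell. Paths, hostnames, and ports are illustrative.
/opt/IBM/WebSphere/ESB/profiles/custom01/bin/addNode.sh \
  dmgrhost.example.com 8879

# After federation, configuration can also be pulled down to the
# node explicitly (with the node agent stopped); normally the node
# agent synchronises automatically on a schedule:
/opt/IBM/WebSphere/ESB/profiles/custom01/bin/syncNode.sh \
  dmgrhost.example.com 8879
```

After `addNode.sh` completes, the node agent is created and the node appears in the deployment manager’s administrative console.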

I plan to write a Part 2 on this topic soon, covering clustering. Watch this space…


[...] In a previous post describing SCA in WebSphere ESB / Process Server, I wrote that SCA modules have to be running in the same address space. I’d like to correct this: the restriction actually imposed on these bindings is that they need to be between SCA modules running in the same WebSphere cell (see this post for more information on cells, nodes, and servers). This is because the SCA resources that are automatically created when an SCA module is deployed are cell-scoped. Different types of SCA resources are created depending on whether asynchronous or synchronous behaviour is required - normally this is decided automatically - but in both cases the scope is the same. For more information, see this developerWorks article. [...]