I was recently listening to a “Nanoservices? Miniservices? Macroservices?” podcast from the NoFluffJustStuff team on different service styles, and I learned terminology that describes a style of structuring services I had recently been advocating. Miniservices are a style of breaking a system into services that is more balanced than microservices.

Microservices Recap

It is worth reviewing some key characteristics of a microservice.

  • A microservice is built around a bounded context. A bounded context is described in the language of the business and constitutes the boundary or ‘job’ of a service. Typical examples of bounded contexts might be “Billing”, “Maintaining Customer Information”, or “The Orders in the system”, though in practice the context boundaries tend to be much smaller and more refined, to keep the services small. A bounded context is specifically not a technical boundary such as “Frontend” versus “Backend”. A microservice should be concerned with a single bounded context and have a single responsibility.
  • A microservice has its own data-store (i.e. database). The data-store belongs to the microservice and is not exposed outside of the microservice. Multiple microservices do not share data-stores.
  • Microservices do not share code with other microservices. Two microservices can use the same library, but that library should be an independent project with its own release cycle and a commitment to stable public APIs. If two microservices need the same business rules, then those business rules should belong to one bounded context (possibly a new microservice) and the other services should invoke them through a stable service interface.
  • Each microservice can evolve and be deployed independently of other microservices. This allows each microservice team to move fast and make changes without having to coordinate with other microservices. Changes to the public API are typically handled by versioning the API and supporting the old versions through a deprecation period (a sketch of this follows the list).

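To make the versioning point concrete, here is a minimal sketch of side-by-side API versions in a Spring MVC controller. The paths, payloads, and class name are hypothetical, not taken from any particular service.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    // v1 contract, kept working through the deprecation period so that
    // existing clients are not broken by the new release.
    @GetMapping("/api/v1/customers/{id}")
    public String getCustomerV1(@PathVariable String id) {
        return "{\"customerId\": \"" + id + "\"}";
    }

    // v2 contract; clients migrate here before v1 is retired.
    @GetMapping("/api/v2/customers/{id}")
    public String getCustomerV2(@PathVariable String id) {
        return "{\"id\": \"" + id + "\", \"version\": 2}";
    }
}
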
The ability to deploy and evolve a microservice independently is one of the key goals of a microservice architecture. The desire for an organizational structure consisting of many small teams that can have short releases and deploy independently of other teams is what leads to many of the other characteristics on the list. Sharing code and databases across services and teams means that those services and teams must coordinate any changes to these shared blocks.

Microservice Drawbacks

There are some costs and downsides to this approach to software development resulting from the trade-offs made to optimize for small independent services.

  • Organizations that would previously have had one or two larger “applications” to handle a collection of business functionality might now have dozens of microservices to do the same thing. Each of these microservices needs to be maintained and updated for security fixes and deprecated APIs.
  • Code duplication becomes much more common because code sharing isn’t allowed. If some code is useful to two different services and isn’t appropriate to break out into its own service (or the schedule doesn’t allow for it), then teams will typically just copy the code into the other service’s source tree. Twenty years ago this was considered a big anti-pattern. Code duplication adds long-term maintenance costs.
  • The latency of an individual user-focused business operation can start to grow when the operation needs to contact a dozen other services to complete the workflow. There are techniques to mitigate this, but they add complexity.
  • Coordinating integration tests of the entire architecture becomes very difficult because there are so many services and teams involved. Some organizations end up doing a lot of testing in production (i.e. A/B testing or running a parallel instance with mirrored production traffic).

Any individual microservice might be simple, but the overall architecture, with dozens to hundreds of microservices, tends to be very complex. The trade-offs made to let small independent teams deploy quickly aren’t the best goal for all organizations.

Introducing Miniservices

Organizations that aren’t as focused on small independent teams with fast deployment cycles might instead be focused on efficiently evolving a software system as the business changes.

  • Business functionality is grouped into related groupings, but the strictness of the “single responsibility service” is no longer a driving principle. “Bounded context” would probably be a good term for a “collection of related business functions”, but that means something different from how the term is typically used in a pure microservice architecture. Multiple services will live inside a group of related business functionality. I will call this a “Business Context”, though it is possible others have come up with a better term. A “Business Context” is usually larger than the bounded context that defines a single microservice.
  • The multiple related miniservices in a “Business Context” are allowed to share the same data-store (i.e. database). Not allowing two microservices to share a single database is one of the most expensive aspects of microservices. It makes many common business reports difficult to produce because they have to combine data from multiple services, often at the cost of increased complexity. Allowing multiple related miniservices to share a database can significantly reduce the implementation costs for a lot of architectures. There will often be multiple databases within a “Business Context”, and some databases will be more closely related to (or mostly used by) some services, but there is no hard rule preventing multiple miniservices from using the same database. These decisions should be made on an individual basis while keeping efficiency in mind.
  • The data-store shared between multiple miniservices in the “Business Context” should have a well-defined schema. The schema can change, but it needs to be well defined at any point in time, and the data-store should enforce it. NoSQL systems that don’t enforce a formal schema can work when a single service is the only user of the data, but any data that will be used by multiple miniservices needs an enforced schema.
  • The multiple related miniservices are allowed to share code, typically through libraries (i.e. JARs). Developers should easily be able to move code into a library so that it can be shared with the related miniservices (see the sketch after this list).
  • The teams responsible for the group of related miniservices should be close to each other on the organizational chart. Ideally the same team should maintain the collection of related miniservices. This allows for easier coordination of changes to shared database schemas and shared libraries. Different people or subteams can be dedicated to some of the miniservices, but it is important that they coordinate and work together when required.
  • Each miniservice within a “Business Context” can still be modified and deployed on an independent schedule. The miniservices are still standalone deployable services. Sometimes the changes and deployment of two related miniservices will need to be coordinated and possibly combined. This is both common and acceptable in a miniservice architecture. We rely on the team to figure out these dependencies and coordinate them.
  • All of the related miniservices in a “Business Context” must be tested together as part of a pre-deployment integration test. This is needed because a change might impact another service without anyone realizing it.
  • Multiple related miniservices can be deployed on a single virtual machine, or you can adopt an approach with Docker that gives each miniservice its own container. A miniservice architecture is flexible enough to accommodate both.
  • An individual “release” will consist of one or more miniservices from the set of related services being deployed together.
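
To make the code-sharing point concrete, here is a minimal sketch of a business rule living in its own library JAR that the related miniservices depend on. The module name, package, and rule are all hypothetical.

// A hypothetical shared library, built as its own JAR (e.g. "billing-rules"),
// that several miniservices inside one "Business Context" depend on.
package com.example.billing.rules;

import java.math.BigDecimal;

public final class InvoiceTotals {

    private InvoiceTotals() {
    }

    // One copy of the business rule instead of a duplicate in every
    // miniservice that needs to total an invoice.
    public static BigDecimal totalWithTax(BigDecimal subtotal, BigDecimal taxRate) {
        return subtotal.add(subtotal.multiply(taxRate));
    }
}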

Miniservices and distantly related services

The above section describes how you can break a group of related business functionality into miniservices and have them share a database. Some business functionality, and the related services, will end up being part of a different “Business Context”. These services (and their associated business rules) will usually be maintained by a different team, often living on a different part of the organizational chart. Sharing data-stores and code between these business contexts is discouraged. Two miniservices from different “Business Contexts” should probably not be writing data into the same data-store. Allowing a miniservice from one context to pull (or query) data from the data-store belonging to another context might sometimes be done for practical or efficiency reasons. This is discouraged but not outright forbidden.

The @Bean annotation is a great way to declare Spring beans when writing code. It lets the programmer control which beans get instantiated and with what arguments. Some situations call for deciding at runtime, based on configuration, whether beans should be instantiated; this is often done with the @Conditional annotation on @Configuration classes. The @Conditional annotation works well if you are enabling or disabling a particular bean through configuration, but sometimes the number of beans needs to be completely dynamic.
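
For the simple enable/disable case, a minimal sketch of @Conditional might look like the following. The property name and service class are hypothetical.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.type.AnnotatedTypeMetadata;

// Hypothetical service that should only exist when enabled in configuration.
class ReportingService {
}

// Decides at startup, from an Environment property, whether the bean
// should be created at all.
class ReportingEnabled implements Condition {
    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return Boolean.parseBoolean(
                context.getEnvironment().getProperty("reporting.enabled", "false"));
    }
}

@Configuration
class ReportingConfig {

    // Only instantiated when reporting.enabled=true.
    @Bean
    @Conditional(ReportingEnabled.class)
    public ReportingService reportingService() {
        return new ReportingService();
    }
}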

This often comes up when your application needs to connect to a number of external services, such as databases, and the number of services or databases the application connects to should be configurable.


Spring Integration makes it easy to monitor an SFTP server for new files and inject those files into your application for processing. I like to configure my Spring Integration applications with annotations and @Bean configurations. I found the documentation and online examples for doing this with annotations a bit lacking.

This post demonstrates the basics of how to get a Spring Boot (1.3.x or 1.4.x) application to monitor an SFTP server.
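
The heart of the annotation-based configuration looks something like the sketch below. This is an outline under stated assumptions, not the post’s exact code: it assumes spring-integration-sftp is on the classpath, and the host, credentials, and directory names are placeholders.

import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.file.remote.session.CachingSessionFactory;
import org.springframework.integration.file.remote.session.SessionFactory;
import org.springframework.integration.sftp.inbound.SftpInboundFileSynchronizer;
import org.springframework.integration.sftp.inbound.SftpInboundFileSynchronizingMessageSource;
import org.springframework.integration.sftp.session.DefaultSftpSessionFactory;
import org.springframework.messaging.MessageHandler;

import com.jcraft.jsch.ChannelSftp;

@Configuration
public class SftpConfig {

    // Connection details; in a real application these come from properties.
    @Bean
    public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
        DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
        factory.setHost("sftp.example.com");
        factory.setPort(22);
        factory.setUser("user");
        factory.setPassword("secret");
        factory.setAllowUnknownKeys(true);
        return new CachingSessionFactory<>(factory);
    }

    // Mirrors the remote directory into a local one.
    @Bean
    public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
        SftpInboundFileSynchronizer synchronizer =
                new SftpInboundFileSynchronizer(sftpSessionFactory());
        synchronizer.setRemoteDirectory("/remote/inbound");
        synchronizer.setDeleteRemoteFiles(false);
        return synchronizer;
    }

    // Polls for newly synchronized files and feeds them into "sftpChannel".
    @Bean
    @InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "5000"))
    public MessageSource<File> sftpMessageSource() {
        SftpInboundFileSynchronizingMessageSource source =
                new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
        source.setLocalDirectory(new File("sftp-inbound"));
        source.setAutoCreateLocalDirectory(true);
        return source;
    }

    // Replace with your own processing logic.
    @Bean
    @ServiceActivator(inputChannel = "sftpChannel")
    public MessageHandler fileHandler() {
        return message -> System.out.println("New file: " + message.getPayload());
    }
}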


Sometimes the behaviour of an application is controlled through properties, and the application needs to detect changes to the property file so it can switch to the new configuration. You also want to ensure that a particular request uses either the old configuration or the new configuration, but not a mixture of the two. Think of this as ACID-like isolation for properties, ensuring that your requests don’t get processed using an inconsistent configuration.

We accomplish this with a property lookup bean (PropertyBean). The PropertyBean is a request-scoped bean, meaning a new instance is created for each request in a Spring MVC application. The PropertyBean has an init method that runs post-construction and calls the PropertyCache bean to get the current set of properties. Each instance of the PropertyBean returns property values from the Properties object it received at initialization.

The PropertyBean holds a reference to a singleton-scoped PropertyCache. The PropertyCache maintains a reference to the current Properties object. When the PropertyCache bean is asked for the current properties, it checks whether they have changed on disk and, if so, reloads them.

import java.util.Properties;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;

public class PropertyBean {
    private Properties currentProperties;

    @Autowired
    private PropertyCache propertyCache;

    // Snapshot the current Properties once, when this request-scoped
    // bean is created.
    @PostConstruct
    void doInit() {
        currentProperties = propertyCache.getProperties();
    }

    // Every lookup for the lifetime of this bean (one request) uses the
    // snapshot taken at initialization.
    public String getProperty(String propertyName) {
        return currentProperties.getProperty(propertyName);
    }
}

The PropertyBean would look similar to the above sample class. This class simply caches an instance of a Properties object post-construction and delegates property requests to this instance.

The PropertyBean needs to be created as a request scoped bean. This can be done with a Spring annotation configuration class as follows.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;

@Configuration
class MyConfig {
    // proxyMode=TARGET_CLASS lets this request-scoped bean be injected
    // into longer-lived singleton beans.
    @Bean
    @Scope(value = "request", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public PropertyBean propertyBean() {
        return new PropertyBean();
    }
}

Managing the loading of properties falls to the PropertyCache class, which would look similar to the class below.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class PropertyCache {
    private Properties currentProperties = new Properties();
    private long lastReloadTime = 0;

    @Value("${dynamic.property.file.path}")
    private String propertyFileName;

    public Properties getProperties() {
        reloadPropertiesIfRequired();
        synchronized (this) {
            return currentProperties;
        }
    }

    private void reloadPropertiesIfRequired() {
        File f = new File(propertyFileName);
        if (f.lastModified() <= lastReloadTime) {
            return;
        }
        try (FileInputStream stream = new FileInputStream(f)) {
            Properties newProperties = new Properties();
            newProperties.load(stream);
            // Swap in the new Properties object atomically; callers see
            // either the old snapshot or the new one, never a mixture.
            synchronized (this) {
                lastReloadTime = System.currentTimeMillis();
                currentProperties = newProperties;
            }
        } catch (IOException exp) {
            // do something intelligent on the error
        }
    }
}

All of the business-logic code that needs properties to process a request uses the request-scoped PropertyBean instance. This ensures that the entire request uses the same set of properties even if the underlying property file changes.

class SomeBusinessService {
    @Autowired
    private PropertyBean propertyBean;

    public void someOperation() {
        String v = propertyBean.getProperty("some.property");
        .
        .
    }
}

When a new request is started, a new instance of the PropertyBean is created. Its doInit() method invokes the PropertyCache getProperties() method. The PropertyCache checks the timestamp of the file for modifications and, if it detects one, reloads the property file. Any PropertyBean instances that have already been initialized continue to use the old Properties instance.

I recently gave a talk at PuppetCamp Toronto 2015 focusing on how we use Puppet at work. Slides are available here.

One of the more interesting topics I covered was how to integrate the building of Puppet manifests into the software development life-cycle.


Tonight I presented a talk on using JSON in Postgres at the Toronto Postgres users group. Pivotal hosted the talk at their lovely downtown Toronto office. Turnout was good with a little over 15 people attending (not including the construction workers banging against some nearby windows).

I talked about the JSON and JSONB datatypes in Postgres and some ideas for appropriate uses of NoSQL features in a SQL database like Postgres.

My slides are available for download.

We are thinking of having lightning and ignite talks at the next meetup. If anyone is in the Toronto area and wants to give a short (5 minute) talk on a Postgres-related topic, let me know.

Last week I attended the second Canadian DevOps Days event, which was the first to be held in Toronto. DevOps Days is a conference dedicated to DevOps. I had a great time and would like to thank the conference organizers and sponsors for making it happen.

DevOps Days was different from most I.T. conferences I attend because none of the talks were technical in nature. There are lots of software products and technologies used in the practice of DevOps, such as Puppet, Chef, various CI servers, test frameworks, and logging frameworks, but the conference wasn’t really about those. Talks referenced these technologies, but they weren’t the focus. The key challenges in DevOps today are softer issues; how to go about introducing and promoting DevOps was the key theme of the conference.

One of the most memorable stories from the conference was told during an open-spaces session. The woman telling the story was the manager of a team responsible for an I.T. service at a government ministry. She told us how formal initiatives required approvals and budgets that were difficult and time-consuming to put in place. Instead of trying to get a “DevOps Program” approved, or a budget to hire “DevOps engineers”, she took a different approach. Each time one of her reports came to her asking if they could do something, such as make a change, fix an issue, or propose a solution to a problem, she would ask one question.

Will this make us suck less?

If the answer was yes, even if it made them suck just a tiny bit less, she often went along with the request. If the answer was no, she asked them to rethink the request. This approach allowed her team to make small incremental improvements to the systems they run. Over time their service became much more reliable, meaning her team was spending less time fighting fires and had more time to implement bigger improvements within their existing budget. Running a more reliable service also relieved some of the pressure on her managers and business partners, earning the team goodwill. She knew that she couldn’t fix all of the problems with her system in one go, and she didn’t sugar-coat the reality of their system. The motto wasn’t about being great or striving for excellence, which would have seemed like a far-off goal. The motto was about sucking less, something they could achieve in the short term while still leaving room for improvement. The lesson to take away from this story is:

Make small incremental improvements that are focused on reducing the pain your co-workers are feeling

Another story from the conference that stood out for me was from Telus. Telus started introducing DevOps practices in various teams a while ago. They started slowly but soon saw dramatic decreases in the number of production issues on the products where they had tried DevOps practices. These successes were noticed and helped them roll DevOps practices out to more projects, leading to more success. The CEO of Telus (a large Canadian telco) is now giving executive-level support and encouragement to the DevOps initiative. The lesson to take away from this story is:

Start small, but be in a position to measure your success. Use those measured successes to make a business case for expanding the effort.