
Microservices Q&A: Origins and Drivers


When speaking with Tony Pujals, Director of Cloud Engineering at Appcelerator, his enthusiasm for microservices, along with APIs, distributed systems, and scalable cloud architecture, is clear. At Appcelerator, Tony plays a key role in improving the process of building, deploying, orchestrating, and monitoring containerized microservices. He is a member of the Docker Captains team, a dedicated user of the Node platform, and co-founder of the third-largest meetup for Node developers globally. We sat down with Tony to better understand the microservices trend. In this first installment of a series of Q&A discussions, Tony talks about the emergence of and rationale behind this new architectural approach.

Can you walk us through how microservices came to be and why they have risen in popularity?

As a term of art, it is roughly two years old, but I would argue the idea behind microservices goes back a lot longer, at least to the original efforts around distributed object-oriented programming in the 1990s: being able to communicate with objects over a network. The evolution of service-oriented architecture (SOA) over the next decade called for self-contained application components that could communicate over a network and expose their functionality to other systems.

I’m simplifying things a bit, but the problem with the distributed component frameworks was their need for special runtimes and networking approaches that predated the rise in popularity of HTTP-based communication, and the problem with SOA as it evolved was that it was onerously complex and heavyweight, especially compared to the emerging trend toward REST-based APIs that exploited the simplicity of HTTP semantics. In any case, I’d say the “component” concept of being able to deploy and expose some type of functionality over a network, as opposed to releasing a single, monolithic bundle, is something developers have desired for a long time.

When you think about it, the cost of deploying an integrated, tested monolithic bundle is high; and when it takes a lot of effort to get software out the door, the result is less frequent, epic releases instead of small updates to individual services that can be managed independently. Classic three-tier architecture represents the kind of monolithic approach that has been prevalent for a long time, but the explosion of web consumers beyond browsers, such as smartphones, tablets, and the Internet of Things, led to a corresponding explosion in API development. An “application” then simply becomes the user experience on a particular device or system that consumes a set of APIs, and this naturally leads to the desire to evolve, deploy, scale, and update APIs individually to satisfy user expectations, not as part of less frequent, epic releases.

So “microservices” has come to mean just this: an approach to deploying lighter, narrowly focused services with less effort and ceremony, putting less strain on individual development teams. Netflix is largely credited with pioneering this approach, adapting its development, testing, and operations model as it migrated from deploying a single application bundle to running over 600 microservices.

For example, if you’re responsible for user profiles, you should be able to own that service without having to worry about integrating and deploying it bundled with the rest of the large application on the same release cadence. You can maintain and update it separately. For an individual developer or small team, this means a lot less friction, a lot less cognitive overhead, which translates to higher productivity and quality.

Container technology, particularly Docker, plays a huge role in running these lightweight, isolated processes with the same benefits as virtualization without the overhead of provisioning virtual machines for each service. I would say that at least for the past year or so, the idea of microservices presumes the use of containers. Pragmatically speaking, you really need to be exploiting containers for an effective microservice strategy.
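To make that concrete, here is a minimal sketch, not taken from the interview, of how a small Node-based service might be containerized; the file names and port are hypothetical placeholders:

```dockerfile
# Hypothetical Dockerfile for a small Node-based microservice;
# file names and the port are placeholders for illustration.
FROM node:alpine
WORKDIR /usr/src/app
# Install only runtime dependencies first so Docker can cache this layer.
COPY package*.json ./
RUN npm install --production
# Copy the rest of the service's source code.
COPY . .
# The port the service listens on (assumed).
EXPOSE 3000
CMD ["node", "server.js"]
```

From there, something like `docker build -t profile-service .` followed by `docker run -d -p 3000:3000 profile-service` gives you an isolated, independently deployable process without provisioning a dedicated virtual machine for it.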

How do APIs factor into this?

Simplistically, microservices are independent software components that perform a narrow range of functions and communicate through a message queue or contractually agreed interfaces: APIs.

Microservices present an opportunity to take advantage of virtualization and to satisfy the desire to build more granular services, especially with mobile and the IoT coming into play. Suddenly, it’s not just about shipping web applications. You need APIs that fulfill different experiences in different environments. What we refer to as an “app” could run on a smartphone, a tablet, or a connected device like your fridge. All of these use APIs on the backend. Microservice architecture provides an approach for effectively building, deploying, and independently evolving API implementations at their own cadence, as these different apps require.

For getting microservices out the door, it can help to adopt a platform like Node for building lightweight, high-performance APIs. There isn’t a lot of ceremony involved in creating a Node service, and it executes in a very fast runtime well suited to the I/O-oriented activity that characterizes the majority of APIs. Node certainly isn’t the only solution, but for many teams it is a big win.
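To illustrate the “not a lot of ceremony” point, here is a minimal sketch, not from the interview, of a single-purpose Node service built with nothing but the built-in http module; the route, port, and data are hypothetical stand-ins:

```javascript
// Hypothetical user-profile microservice sketch using only Node's built-in http module.
const http = require('http');

const server = http.createServer((req, res) => {
  // Expose one narrow piece of functionality: GET /profiles/<id>
  if (req.method === 'GET' && req.url.startsWith('/profiles/')) {
    const id = req.url.split('/')[2];
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ id: id, name: 'placeholder' })); // stand-in data for illustration
  } else {
    res.writeHead(404, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: 'not found' }));
  }
});

// Port 3000 is arbitrary; in a container it would typically be mapped at run time.
server.listen(3000, () => console.log('profile service listening on port 3000'));
```

A small team that owns a service like this can version, test, deploy, and scale it on its own cadence, which is exactly the independence described above.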

See Microservices Q&A Part 2: Clearing the Hurdles and Part 3: Today and Tomorrow

