Serious question. I know there are a lot of memes about microservices, both for and against. And jokes from devs who turn monoliths into microservices and then back again. For my line of work it isn't all that relevant, but a discussion I heard today made me wonder.
There were two camps in this discussion. One side said microservices are the future: all the big companies are moving towards them, the entire industry is moving towards them. In their view, if it wasn't MACH architecture, it wasn't valid software. In their world, both the software they made themselves and the software bought or licensed externally (SaaS) should all be microservices, API-first, cloud-native, and headless.

The other camp said it was foolish to think this is actually what's happening in the industry, and that depending on where you look, microservices are being abandoned rather than adopted. By demanding that all software work this way, you limit what's on offer. Furthermore, the total cost of ownership would be higher, and connecting everything together in a coherent way is a nightmare. Instead of gaining flexibility, you can actually lose flexibility, because changing interfaces can be very hard or even impossible with software not fully under your own control. They argued a lot of the benefits are only slight or even nonexistent, and not needed in this day and age.
They asked what I thought and I had to confess I didn’t really have an answer for them. I don’t know what the industry is doing and I think whether or not to use microservices is highly dependent on the situation. I don’t know if there is a universal answer.
Do you guys have any good thoughts on this? Are microservices the future, or just a fad that should be forgotten ASAP?
I have the unpopular user’s opinion that I expect transparency in my connections and interactions. I won’t use any commercial websites that have ambiguous connections on a whitelist firewall. I expect all businesses to operate like a brick and mortar store in real life. If I enter a grocery store, I’m not entering a bank, a bookstore, and a restaurant to purchase an apple, nor am I selling or giving up any part of my autonomy in my quest to satisfy my fundamental needs.
Personally, in my opinion alone, I view any business that is not transparent and straightforward about who and what I am dealing with as illegitimate and untrustworthy. I view it like inviting someone to a house party who then goes to the bathroom, opens the window, and lets other people into my home. Or like a retail store that adds a bunch of hidden fees to its products because of something like distributor logistics costs; stuff that belongs in the back-office accounting of running a legitimate business. To me, the practice feels wrong because the default state of the internet is theft of individual autonomy, with no freedom of information from determinism. I must assume that everything is stalkerware, so I must be selective about who I enable in such a system. Microservices remove all accountability and make the problem of digital autonomy exponentially worse in the cases where they are exposed to the end user (data).
I think microservices are useful if you need to scale parts of your application independently from other parts.

But I think everyone uses microservices now because it fulfills the developer dream of writing code independently and running it on your own computer inside Docker.
So using Docker is probably the best part about microservices. The downside of all this is the extra complexity of having them talk to each other, and that is the difficult part. When everything goes over the network and is async, you run into lots of potential error scenarios that must be handled. And each call is orders of magnitude slower than a local one.
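To make that concrete, here is a toy sketch of the error handling a network hop forces on you that a local call doesn't. The "remote" transport is simulated so the example is self-contained; `flaky_network_call` and the retry parameters are made up for illustration, not a real client API.

```python
import random
import time

def fetch_price_local(item_id: str) -> float:
    # In a monolith this is just a function call: no timeouts,
    # no retries, no partial failure to reason about.
    return {"apple": 0.5, "bread": 2.0}[item_id]

def flaky_network_call(item_id: str, timeout_s: float) -> float:
    # Simulated transport that fails about half the time, standing in
    # for a real HTTP/gRPC client that can time out or drop the call.
    if random.random() < 0.5:
        raise TimeoutError
    return fetch_price_local(item_id)

def fetch_price_remote(item_id: str, retries: int = 5, timeout_s: float = 0.1) -> float:
    # Over the network, the same lookup needs a retry loop with
    # backoff, because the call can fail or hang at any moment.
    for attempt in range(retries):
        try:
            return flaky_network_call(item_id, timeout_s)
        except TimeoutError:
            time.sleep(0.001 * 2 ** attempt)  # exponential backoff
    raise RuntimeError(f"price service unreachable for {item_id!r}")
```

None of the retry/backoff/timeout machinery exists in the local version; it's pure overhead introduced by the network boundary, and every inter-service call needs some version of it.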
It’s not the future. It’s just a tech trend and it will be something else in a few years. Tech never stays the same for very long.
But monolithic apps run fine inside of Docker? I've tried a couple of home automation tools using Docker. They were all monoliths, but were able to run with a single docker command. That included a webserver and database server, plus the entire app and its dependencies inside the Docker container.
It’s been said before that microservices solve organizational problems. When you’re forced to go through official APIs, each team becomes responsible for their own connections to other teams. If you’re at a scale where a few people can be responsible for the entire system there’s really no benefit.
That's a good way to put it. It's also possible to use Kafka topics to share data with other teams instead of Docker apps with REST APIs. I've seen it done pretty well. But it depends on what needs to be shared and how flexible it needs to be.
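The decoupling this buys can be sketched with a toy in-memory event bus standing in for a real broker like Kafka (no durability, partitions, or consumer offsets here; the topic name and payload are invented for the example):

```python
from collections import defaultdict
from typing import Callable

class ToyBus:
    # Minimal in-process stand-in for a message broker: producers
    # append events to a named topic, and every subscribed consumer
    # sees them, without either side calling the other's code.
    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(event)

bus = ToyBus()
shipped = []

# Team B consumes the "orders" topic without importing Team A's code.
bus.subscribe("orders", lambda e: shipped.append(e["order_id"]))

# Team A publishes events; it doesn't know or care who is listening.
bus.publish("orders", {"order_id": 42, "item": "apple"})
```

The point is the ownership boundary: the publishing team owns the topic and its event schema, and consuming teams depend only on that contract, never on each other's internals.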
The actual protocol doesn’t matter, just that the team has to own it and publish it and other teams must use these APIs. Otherwise you get teams adding and modifying other teams code and you end up with the monolith anyways.
I mean, if anything, I would say microservices are the present.
As assaultpotato said, horses for courses, but I mean, microservices aren’t really a new concept at this point.
Sure, but are they the future? At this point there is some microservices oriented software available, but still plenty of monoliths as well.
Are the monoliths dying and microservices the only way forward? Or is there always going to be a balance? Or will microservices turn out to be only worth it in very few cases and become niche?
I guess what I’m saying is that I think things will generally stay balanced the way they are. Monoliths are never going to completely die out, and neither are microservices.
They both serve different functions, so there’s no reason to think one will “win” over the other.
I would say service-oriented design is definitely here to stay, but sometimes microservices are more trouble than they're worth. They scale well but can be overengineering for smaller applications. And as Donald Knuth (almost) said: "premature optimization is cringe"
There’s a spectrum between architecture/planning/design and “premature optimization.” Using microservices in anticipation of a need for scaling isn’t premature optimization, it’s just design.
I find small services work well for well-defined and context-free tasks. For example, say you have common tasks like user authorization, PDF generation, etc. Having a common service works really well. This sort of service bus can be leveraged by different apps, which can then focus on their business logic and leverage the common functionality.
However, anything that’s part of a common workflow and has shared state is much better handled within a single application. Splitting things out into services creates a ton of overhead, and doesn’t actually address any problems since you have to be able to reason about the entirety of the workflow anyways. You end up having a much more difficult development process where you need a bunch of services running. Doing API calls means having to add endpoints, do authentication, etc. where within a single app you just do a function call. Debugging and tracing becomes a lot more difficult, and so on.
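The "function call becomes an API call" overhead is easy to see side by side. This is a schematic sketch with a stubbed transport so it runs standalone; the endpoint path, token, and `post` helper are all invented for illustration:

```python
import json

def calculate_tax(order: dict) -> float:
    # Monolith version: shared logic is one function call away.
    return round(order["subtotal"] * 0.21, 2)

def post(path: str, headers: dict, payload: str):
    # Fake transport standing in for an HTTP client plus a remote
    # tax service, so the sketch is runnable without real servers.
    if headers.get("Authorization") != "Bearer valid-token":
        return 401, ""
    order = json.loads(payload)
    return 200, json.dumps({"tax": calculate_tax(order)})

def calculate_tax_via_service(order: dict, token: str) -> float:
    # Microservice version: the same logic now needs serialization,
    # authentication, and error handling on every single call.
    status, body = post(
        "/v1/tax",
        headers={"Authorization": f"Bearer {token}"},
        payload=json.dumps(order),
    )
    if status == 401:
        raise PermissionError("tax service rejected our token")
    if status != 200:
        raise RuntimeError(f"tax service failed with status {status}")
    return json.loads(body)["tax"]
```

Same business logic, but the service version adds an endpoint, auth, two failure modes, and serialization on both ends, which is exactly the overhead described above, and it still has to be debugged across a process boundary.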
My take is that pretty much everything should start out as a monolith and evolve organically from there. You will know if/when you need a separate service.
No.
Only a Sith would deal in absolutes. Same goes in programming. Microservices have their benefits. So do monoliths. Neither is going away in the foreseeable future.
Safest bet is probably to do monoliths first. Use microservices once it makes sense.
No. They’re the “right now”.
The future will be something else.
Have to keep changing it. No idea why, but it keeps us employed, so c’est la vie.
I mean, I like microservices in principle, but I think the design of your software and organization, its style of operating, size, and budget all play into the decision. I think the issue lies in presenting it as a binary rather than a spectrum. You can have something that is largely a monolith but with some bits split out into microservices. The opposite is true as well.
My company tried to do the “microservice all of the things” approach and we’re already back to combining a handful, but definitely not back to one monolithic app.
The reality is, as always, “it depends”.
If you’re a smaller team that needs to do shit real fast, a monolith is probably your best bet.
Do you have hundreds of devs working on the same platform? Maybe intelligently breaking out your domains into distinct services makes sense so your team doesn’t get bogged down.
And in the middle of the spectrum you have modular domain centric monoliths, monorepo multi-service stuff, etc.
It’s a game of tradeoffs and what fits best for your situation depends on your needs and challenges. Often going with an imperfect shared technical vision is better than a disjointed but “state of the art” approach.
Thanks for your excellent response. The way these two sides were talking it seemed like there is no actual middle ground. But from what you say there actually is.
Tech people tend to be very black-and-white when discussing ideology. Reality is more forgiving.
If you can get your hands on it, the opening chapters of "Practical Event-Driven Microservices Architecture" by Hugo Rocha give a reasonable high-level view of when you might decide to break a domain out of a monolith. I wouldn't exactly call it the holy grail of technical reading, but he does a good job explaining the pros and cons of monoliths vs. microservices, with a bit of exploration of those middle grounds.
Online discussions tend to be very polarising.
Nothing special about this topic, but something you should always keep in mind about any topic.