
Navigating the Evolution of Enterprise Java

Bojan: My name is Bojan, I’m the Director of Business Development here at Medius, and joining me today is the Senior Software Engineer at Medius, Rok Koleša. Rok, welcome, how are you?

Rok: Hi, thanks for having me. I’m doing great. I just came from a meeting-slash-debugging session relevant to this interview, where we had a challenge with how to handle live reload in our multi-module applications. It was fun, it was interesting, and a bit exhausting.

Rok Koleša, Senior Software Engineer

Bojan: Today we are talking about JakartaEE, formerly known as Java Platform, Enterprise Edition and, before that, Java 2 Platform, Enterprise Edition. So, to start us off: how has JavaEE/JakartaEE application development evolved over the years, and what new technologies or trends have influenced it?

Rok: Let’s start with an overview of Medius and why this is important to us. Our company has a track record of 100% client retention, which underscores our commitment to trust, skill, and transparency. We specialize in custom software and AI solutions for a range of industries, from government to fintech. We’re recognized by the UN and we use cutting-edge technologies like Big Data and Machine Learning to give our clients a competitive edge.

We're not limited to one industry only; we've got success stories across the board. That diverse experience lets us adapt quickly and deliver on time, without compromising quality. So, if you're looking to tackle a complex, data-driven project, we're the team that'll help you pull it off with confidence. Now, back to your question regarding the evolution of application development in the enterprise environment.

At the heart of what we do, you'll always find JakartaEE, formerly known as JavaEE. It's got these stable, well-thought-out specs that aren't rushed. The developers really take their time to think long-term, and that's why we end up with an API that's user-friendly but also versatile. When it comes to upgrades, it's usually a smooth process. The more complex your setup, the trickier the upgrade can be, but honestly, that's true for most tech out there.

As web tech evolved, so did JavaEE. They expanded the specs, but at a pace that made sense. We've worked with various JavaEE implementations over the years, like GlassFish and WebSphere, but WildFly has been our go-to for over a decade.

Interestingly, we recently upgraded an application running on JBoss 7 to WildFly 26. That’s a big gap between releases, and even in the EE version - JBoss 7 was compatible with JavaEE 6, while WildFly 26 is compatible with JakartaEE 8. During the upgrade, the JavaEE compatibility was never in question, as the backward compatibility is there. Everything still runs as it should, with minimal code changes.
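To make that concrete, here is a hypothetical JAX-RS resource (a sketch, not code from our application) written against JavaEE 6 in the javax.* namespace; because JakartaEE 8 still uses that namespace, the same class deploys unchanged on WildFly 26:

```java
// Hypothetical JavaEE6-era JAX-RS resource (JBoss 7 days).
// JakartaEE8 kept the javax.* namespace, so this class needs no
// changes to deploy on WildFly 26.
package com.example.legacy;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/status")
public class StatusResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String status() {
        return "OK";
    }
}
```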

Having said all that, the rise of microservices shifted the whole ecosystem, so applications started to become more focused and to use less of the JavaEE specification.

Bojan: Can you talk about the specific challenges you've encountered when transitioning from a JavaEE architecture to a microservices model, especially when deploying on cloud systems? How do these challenges affect both the development and deployment phases of a project?

Rok: Microservices are usually deployed in a cloud environment like Kubernetes or AWS. As previously mentioned, the JakartaEE specification became too broad for them to justify using it. Would we containerize a whole application server to run a small application just so we can say it is compatible with the full JakartaEE platform specification? Probably not, though there are some valid use cases for doing so. That’s not to say we should throw away the whole spec - we should just retain the most core concepts. These application servers are better suited to a bare-bones installation on a virtual machine, where they run as a long-lived process and we just change deployments as we go.

Cloud systems, on the other hand, are usually populated with containerized applications where one container does one thing and deploys one application. We also want to scale these deployments quickly, and this is where resource consumption, fast boot time and a small memory footprint come into play. Application servers (I always have WildFly in mind as it’s the one I have the most experience with) usually don’t have the most impressive startup times or the smallest memory footprint, which makes them a poor fit for this kind of situation. From this, it follows that scalability can also be an issue. They simply try to do too many things at once, which is a direct consequence of the spec being too broad - they have to, otherwise they are not compatible with the spec.

It also has an impact on the development side of things. In our experience, developing with an application server such as WildFly is more bare-bones than it needs to be, meaning that it doesn’t help you build the application, it just runs it. Compare that with Quarkus, which supports you with a lot of helper functions and Quarkus-generated code. It may seem at first that the old way could be better because it should be runtime-agnostic, but this is never the case. You always know where the application is going to live, and you take shortcuts accordingly. So in a way, you always bind the application to the runtime - it’s just that WildFly doesn’t help you there and Quarkus does.

These challenges are not necessarily caused by using JakartaEE itself but by the implementations themselves.

Bojan: How does Quarkus address the challenges of transitioning from JavaEE to microservices and can you highlight the key advantages or benefits of using Quarkus in this context?

Rok: Quarkus is a Kubernetes-native framework and basically solves all the aforementioned problems. You always build exactly one application with it, and it can be as big or as small as you want. It is compatible with the core of the JakartaEE specification - in the case of Quarkus 3, it is actually compatible with the JakartaEE 10 Core Profile, with some other specifications thrown in as well, the ones that make sense, e.g. WebSockets, Bean Validation and others.
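As a rough sketch of what that looks like in practice (a hypothetical resource, assuming the Quarkus REST and Hibernate Validator extensions are on the classpath), a Quarkus 3 endpoint uses the same JakartaEE APIs, now under the jakarta.* namespace:

```java
// Hypothetical Quarkus 3 resource combining two of the specs mentioned
// above: JAX-RS from the JakartaEE 10 Core Profile and Bean Validation.
package com.example.greeting;

import jakarta.validation.constraints.NotBlank;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.QueryParam;

@Path("/greeting")
public class GreetingResource {

    @GET
    public String greet(@NotBlank @QueryParam("name") String name) {
        // With the validator extension present, a blank name should be
        // rejected with an HTTP 400 before this method runs.
        return "Hello, " + name + "!";
    }
}
```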

It supports creating native images out of the box, which makes the resulting container image much smaller, the startup time next to nothing and the initial memory footprint very small - all cornerstones of a cloud application. There can be some problems with native image generation, though, but that’s a whole different matter.
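For reference, in a standard Quarkus Maven project (a sketch, assuming the Maven wrapper that Quarkus generates) the native build is essentially a one-liner:

```
# Build a native executable (needs GraalVM or Mandrel installed locally)
./mvnw package -Dnative

# Or let Quarkus run the native build inside a container,
# so no local GraalVM installation is required
./mvnw package -Dnative -Dquarkus.native.container-build=true
```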

The transition from WildFly to Quarkus has, in our experience, been an overwhelmingly positive one. The core business logic can largely remain the same, as they even share some libraries like RESTEasy. There is a large difference, though, in how they manage their libraries. WildFly has them hidden in its modules, and they can be quite hard to replace should you need to. It also doesn’t give you easily accessible information about which versions you are actually using. Quarkus solves this with its BOM: if you are using Maven, every library shows up in your effective POM. This way you know exactly what you are using, and it reduces dependency hell. On top of that, every release is well tested with all the libraries it uses, so you get some strong assurances from the developers themselves. You can upgrade incrementally, which makes it easier to upgrade applications, not to mention that your code base does not become obsolete.
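Speaking of the BOM, in a Maven pom.xml the import looks roughly like this (the version is just an example; pick your platform release):

```xml
<!-- Importing the Quarkus platform BOM pins every extension and library
     version in one place; the effective POM then shows exactly which
     versions the application is built with. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkus.platform</groupId>
      <artifactId>quarkus-bom</artifactId>
      <version>3.8.1</version> <!-- example version -->
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```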

Configuring the application is also a treat. You only have your application properties file, coupled with environment variables. Previously we had to configure the application server and our application separately; this is not the case anymore. It also makes it easier to change the configuration on Kubernetes. Every variable defined in Quarkus is available for configuration via environment variables - you just change it in a ConfigMap, restart the pod and that's it! The live reload functionality is also great and makes the developer experience so much better. The individual development cycles are now just a fraction of what they once were - you see the results almost instantaneously. CI/CD pipelines can also become much more straightforward, as Quarkus supports generating the Kubernetes YAML files.
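To illustrate the configuration side with a small, hypothetical example (the property name greeting.message is made up for the sketch):

```java
// Hypothetical Quarkus bean reading a value from application.properties:
//   greeting.message=Hello from application.properties
// On Kubernetes the same value can be overridden by the environment
// variable GREETING_MESSAGE (MicroProfile Config upper-cases the name
// and replaces '.' with '_'), e.g. via a ConfigMap entry and a pod restart.
package com.example.config;

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class GreetingService {

    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String message;

    public String greet() {
        return message;
    }
}
```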

Oh, and let me just reveal how the debugging session I mentioned at the beginning of the interview panned out. We actually discovered a bug in Quarkus and submitted a bug report on their GitHub, and the fix will be available in the next release. They were quick to answer and willing to help or look for alternative solutions.

So that was a very positive experience personally and as part of Medius. It makes us feel heard as a company and that we can contribute to a better community for the Quarkus ecosystem.

Bojan: Last question, if people are interested in finding out more about Medius, where can they go?

Rok: We have quite an active LinkedIn channel (Medius.si) where we publish our success stories and latest use cases. We also have a webpage where you can read about our specific solutions, and there is always a section where you can apply for a job to work with our fine engineers. Both will be available in the comments section of this post, I think. And they can always contact Bojan (smiles) through e-mail, which will also be in the comments section.

Bojan: Rok, thank you for your time, it was great talking with you.
