Posts

Showing posts from 2015

Smart package structure to improve testability

There are many ways of dividing a whole application into packages. Discussions about the pros and cons of packaging by feature or by layer can be found on many programming blogs and forums. I want to approach this topic starting from testability and see whether it leads to any meaningful result. To begin with, let's try to describe what we usually want to test in our applications across the different layers. Let's assume a standard three-tier architecture. At the bottom we have the data layer. Depending on our attitude to Domain-Driven Design, we'll try to maximize (for rich, business-oriented entities) or minimize (for anemic entities built only from getters and setters) test coverage. In the second approach it's hard to talk about any tests at all, unless you don't trust Java and want to verify that get can retrieve the value assigned before by a set invocation. For rich entities we definitely want to verify the correctness of the business logic. But to be honest, it almost always can be done by si…
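The rich-entity case can be illustrated with a minimal sketch (the Invoice class and its rule are made up for illustration, not taken from the post): the business logic lives inside the entity, so a unit test only needs a plain constructor call, no container or database.

```java
// Hypothetical rich entity: the calculation is a method on the entity
// itself, so it is trivially unit-testable with `new`.
class Invoice {
    private final long netAmount;   // in cents
    private final int taxPercent;

    Invoice(long netAmount, int taxPercent) {
        this.netAmount = netAmount;
        this.taxPercent = taxPercent;
    }

    // The business rule under test: gross = net + tax.
    long grossAmount() {
        return netAmount + netAmount * taxPercent / 100;
    }
}
```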

JPA in case of asynchronous processing

A few years ago in the Java world it was almost obvious that every "enterprise"-class project needed JPA to communicate with the database. JPA is a perfect example of the "leaky abstraction" described by Joel Spolsky. Great and easy at the beginning, but hard to tune and limiting at the end. Hacking and working directly with caches, flushes and native queries is a daily routine for many backend developers involved in the data access layer. There are enough problems and workarounds to write a dedicated book, "JPA for hackers", but in this article I'll focus only on concurrent entity processing. Let's assume the situation: we have a Person entity which in some business process is updated by some service.

```java
@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String uuid = UUID.randomUUID().toString();

    private String firstName;
    private String lastName;

    // getters and setters
}
```

To ignore any domain complexity w…
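The danger with concurrent processing of such an entity is two transactions silently overwriting each other's changes; JPA's usual remedy is optimistic locking via a @Version column. The snippet below is a plain-Java sketch (no real persistence provider; all class names are illustrative) of the version check a provider performs on flush.

```java
// Plain-Java model of JPA optimistic locking: the provider effectively
// issues UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?
// and raises an exception if no row matched the expected version.
class OptimisticLockException extends RuntimeException {}

class PersonRow {
    long version;          // what a @Version field maps to
    String lastName;
}

class PersonStore {
    private final PersonRow row = new PersonRow();

    synchronized void update(long expectedVersion, String lastName) {
        if (row.version != expectedVersion) {
            // another transaction committed in the meantime
            throw new OptimisticLockException();
        }
        row.lastName = lastName;
        row.version++;         // version bump, as done on flush
    }

    synchronized long version() {
        return row.version;
    }
}
```

A second writer still holding the old version number fails fast instead of clobbering the first writer's update.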

Do we really still need a 32-bit JVM?

Even today (and it's 2015) we have two versions of the Oracle HotSpot JDK, built for 32-bit or 64-bit architectures. The question is: do we really want to use a 32-bit JVM on our servers, or even laptops? There is a pretty popular opinion that we should! If you need only a small heap then use 32 bits: the memory footprint is smaller, so your application will use less memory and trigger shorter GC pauses. But is it true? I'll explore three different areas:

- Memory footprint
- GC performance
- Overall performance

Let's begin with memory consumption.

Memory footprint

It's well known that the major difference between the 32-bit and 64-bit JVM relates to memory addressing. That means all references in the 64-bit version take 8 bytes instead of 4. Fortunately, the JVM comes with compressed object pointers, which are enabled by default for all heaps smaller than 26 GB. This limit is more than OK for us, especially as a 32-bit JVM can address only around 2 GB (depending on the target OS it's still about 13 times l…
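The arithmetic behind compressed object pointers can be sketched in a few lines: heap objects are 8-byte aligned, so the low three bits of every address are always zero, and the JVM can store the address shifted right by three. A 32-bit slot can then reference up to 2^35 bytes (32 GB) of heap in principle; real JVMs enable the feature only below a somewhat lower threshold. The class below only illustrates that encoding, it is not real JVM code.

```java
// Illustration only: packing a 64-bit heap address into a 32-bit slot
// when objects are 8-byte aligned (the compressed-oops idea).
class CompressedOops {
    static final int ALIGNMENT_SHIFT = 3;   // 8-byte object alignment

    static int encode(long address) {
        // low 3 bits are zero for an aligned address, so nothing is lost
        return (int) (address >>> ALIGNMENT_SHIFT);
    }

    static long decode(int compressed) {
        return Integer.toUnsignedLong(compressed) << ALIGNMENT_SHIFT;
    }
}
```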

Using jstat to report custom JVM metric sets

I've always missed the possibility to configure custom headers in jstat. Of course there are a lot of predefined data sets, but it would be nicer if we could create our own. And as you have probably already guessed, I'm writing this post because such functionality is in fact available :) Unfortunately I haven't found it in any documentation, so now I'll try to fill this gap. The first thing we have to do is provide a custom descriptor with the possible jstat options. This descriptor is just a text file containing something we'll call the "jstat specification language". To make this custom file available to jstat we should place it in the following path: $HOME/.jvmstat/jstat_options If you want to view the bundled options please refer to the file in the OpenJDK repository. The specification language is pretty similar to JSON files, and it contains a group of option elements. Each option should be treated as a set of columns that can be shown in a single jsta…
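As a rough sketch of what such a descriptor looks like, here is a minimal option modeled on the bundled jstat_options entries in OpenJDK; the option name and the choice of counters are examples of my own, and the exact grammar may differ between JDK versions.

```
option gctotals {
  column {
    header "^YGC^"      /* young-generation GC invocations */
    data sun.gc.collector.0.invocations
    align right
    width 6
    format "0"
  }
  column {
    header "^FGC^"      /* full GC invocations */
    data sun.gc.collector.1.invocations
    align right
    width 6
    format "0"
  }
}
```

Once the file is in $HOME/.jvmstat/jstat_options, the new set should be selectable like any built-in one, e.g. jstat -gctotals <pid> 1s.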

DataTransferObject myth-busting

DataTransferObject is one of the most popular design patterns. It was widely used in archaic Java Enterprise applications, and some time ago it got a second life due to the growth of RESTful APIs. I don't want to elaborate on the pros of DTOs (like, for example, hiding domain logic) because there are whole books about it. I just want to cover the technological aspects of DTO classes. Usually we create just a simple POJO with getters, setters and a default constructor. The whole mapping configuration is usually limited to a @JsonIgnore annotation added on a getter we want to skip. It's the simplest way, but can we do better? I'll focus on the most popular mapper in the Java world, Jackson, and try to answer 3 basic questions which are usually skipped during development, and which in my opinion can improve the design of the transport layer. 1. Can we skip getters? Yes! Jackson is able to serialize our objects using different strategies. Those strategies are fully customizable on two levels: for…
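To show why getters are dispensable, here is a toy field-based serializer built on plain reflection; it mimics the idea behind configuring Jackson with field visibility (e.g. mapper.setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY)). The class names below are made up and the serializer handles only flat String fields.

```java
import java.lang.reflect.Field;

// Toy field-based serializer (illustrative): it reads private fields
// directly, so the DTO needs neither getters nor setters.
class PersonDto {
    private final String firstName;
    private final String lastName;

    PersonDto(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

class FieldSerializer {
    static String toJson(Object o) throws IllegalAccessException {
        StringBuilder sb = new StringBuilder("{");
        Field[] fields = o.getClass().getDeclaredFields();
        for (int i = 0; i < fields.length; i++) {
            fields[i].setAccessible(true);
            if (i > 0) sb.append(",");
            sb.append('"').append(fields[i].getName()).append("\":\"")
              .append(fields[i].get(o)).append('"');
        }
        return sb.append("}").toString();
    }
}
```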

Dependency injection pitfalls in Spring

There are three injection variants in the Spring framework:

- Setter-based injection
- Constructor-based injection
- Field-based injection

Each of these mechanisms has advantages and disadvantages, and there is no single right approach. Take field injection, for example:

```java
@Autowired
private FooBean fooBean;
```

It's generally not the best idea to use it in production code, mostly because it makes our beans impossible to test without starting a Spring context or using reflection hacks. On the other hand, it requires almost no additional code and can be used in integration tests, which definitely won't be instantiated independently anyway. And in my opinion this is the only case for field-based injection. Now let's focus on the two major variants. In the Spring documentation we can read that ...it is a good rule of thumb to use constructor arguments for mandatory dependencies and setters for optional dependencies. Also, in the documentation for Spring up to 3.1 we could find a sentence…
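Constructor-based injection is what makes the mandatory-dependency case trivially testable without any Spring context; a minimal sketch (bean names made up for illustration):

```java
// Illustrative beans: the dependency arrives through the constructor,
// so a unit test can supply it with plain `new`, no Spring required.
class FooBean {
    String foo() { return "foo"; }
}

class BarService {
    private final FooBean fooBean;

    // In Spring this constructor would carry @Autowired (optional for a
    // single constructor in recent Spring versions).
    BarService(FooBean fooBean) {
        this.fooBean = fooBean;
    }

    String bar() { return fooBean.foo() + "bar"; }
}
```

The `final` field also documents that the dependency is mandatory: the bean simply cannot be constructed without it.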