I wrote this article quite some time ago and parked it on the company's Confluence page.
Although it was super exciting to write, and I tried to promote it internally to the best of my ability, it turned out to be yet another piece of cold documentation.
But I liked it so much that I think it's worth revisiting and publishing in the open.
I think it's mostly clear, but nonetheless, I will outline a couple of the most important reasons.
Testing is an essential part of software development that helps ensure that an application works as intended and meets the expectations of users. Without tests, it's almost impossible to prove that the functionality does what it's supposed to do - it's just an educated guess.
The true cost of software is in its maintenance.
Time and money invested into maintenance dwarfs initial development investment.
And the larger the codebase grows, the less important the initial development cost becomes.
Maintainability should be the main factor when developing software.
We’re getting inevitably slower as the code degrades over time.
Tests enable us to ease the pain of maintenance by turning it into a simple routine activity.
Well-written tests enable change.
They enable options.
Counterintuitively, tests help with readability. They shift attention from implementation details to behavior, usability, and user-friendliness, which results in much simpler code. We spend roughly 10x more time reading code than writing it. Something written in ~5 minutes will be read for an hour. Think about this next time you spend hours coding.
Speaking in financial terms:
code is a liability — it is something that requires more and more investments over time to keep it working.
The larger the codebase, the more maintenance, bug fixing, and refactoring it requires.
On the other hand, a test suite is your asset — it is something that helps to deal with the liability.
A well-written test suite will continuously pay dividends.
And if financial gurus teach us anything,
it's that we should invest (time and money) in assets, not liabilities.
Tests never become obsolete; they act as a living specification.
Don't confuse any of that with "easy." Writing good tests and good code is not easy. It requires discipline and practice. Constant practice.
So let's go through the most important aspects that I picked up over the years of writing awesome tests one-by-one.
It all starts with the testing pyramid — a testing strategy that emphasizes the importance of having a balanced mix of different types of tests. There are many types of tests, but they can be categorized into three groups:
The aim is to have a higher percentage of unit tests and a lower percentage of end-to-end tests to ensure faster feedback loops and more robust code.
$$$ — expensive tests, a lot of machinery and time are involved
$ — cheap tests, very little resources and time are required
In this article, I will mainly focus on unit tests with sprinkles of integration tests.
Before I dive deep into techniques and dos-and-don'ts, we have to come to terms with "What is a unit?"
"Unit — an individual thing or person regarded as single and complete but which can also form an individual component of a larger or more complex whole."— Google a.k.a. Oxford dictionary
Interesting, but a bit too broad.
How about this?
"In computer programming, unit testing is a software testing method by which individual units of source code—sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures—are tested to determine whether they are fit for use. It is a standard step in development and implementation approaches such as Agile."- https://en.wikipedia.org/wiki/Unit_testing
Noticed anything?
There's nothing about a "single line of code," a "single method" or even a "single class."
This is one of the most common misconceptions.
Somehow "unit" is commonly interpreted as "a method" or even worse — "a line of code."
And so unit testing becomes method testing, line testing, etc.
This is very one-dimensional and crude.
Yes, it's important for every method and every line of code to be tested,
but it should also make sense in the grand scheme of things.
Allow me to elaborate. If I'm introducing a change (whatever it might be: feature, bugfix, etc.), what is more important?
Well, the answer is clear — it's always more important for the whole change to work than for the method to return the right value. Code can have mistakes, but if the change performs as it should - who cares? This is because the change is the unit in this case. Not a method or a line of code. The code is just an implementation detail of this change. Important detail, but a detail nonetheless. And details should be tested as part of something bigger.
This realization made unit testing my best ally, instead of a chore.
It's like LEGO. Is it important that all bricks are working? Yes. But will the satisfaction be the same if instead of a pirate ship, you receive just a bunch of working bricks? I doubt it.
Here's my collection of techniques and best practices for writing awesome unit tests. Don't get me wrong, I haven't invented any of those — this is just a collection that I've assembled over time from different sources: books, articles, conference talks, workshops, and colleagues.
However, all this stuff is battle-tested. There's not a single technique that I don't use daily. If anything, there might be more.
Some of these points are asymptotes — they are hardly reachable 100% of the time, and it's fine, as long as there's a consistent upward trend.
There are going to be quite a few code snippets; they will all be in Java with some sprinkles of Spring, for obvious reasons 😏.
Your unit tests are trying to tell you something, and if you want your code to be awesome, you have to listen.
They are your best allies.
“If tests are hard to write, the production design is crappy” - goes an old saying.
Indeed, writing unit tests provides some of the most comprehensive,
yet brutal, feedback about the design of the system.
From my experience, every project where tests were treated like a chore or an afterthought had a horrible rotting codebase. No exceptions. And the best codebases I worked with were always backed up by an amazing testing culture amongst developers. There's nothing that hurts a codebase more than the phrase: "I'm finished with the implementation, and now I'm writing tests."
And this brings us to the next point...
Writing fine-grained unit tests early increases friction with bad design, helps to understand the problem and clarify business requirements early in development, gives early design feedback, and produces real test coverage.
Unit tests force the writer to think about a piece of code from the user’s perspective. This coerces a cleaner and more effective design.
Writing unit tests after the implementation is done is practically useless. All mistakes are already made. Bad design decisions as well. Unit tests will just "solidify" everything, and harm more than help.
I'm not preaching about TDD.
TDD is hard.
But writing unit tests early is not.
How early?
As early as possible.
Ideally, first 😉.
Write a little bit of code, then write a little bit of test, then write a little bit of code, etc.
As soon as you feel comfortable, skip the first step.
Having to mock more than five dependencies is a sign of a bad production code design.
It's better to have ten small classes with one or two dependencies each than one mega-class with ten dependencies. The ideal number of dependencies per class is zero. That's hardly ever achievable, but the intention to have as few dependencies per class as possible should drive the design.
I once heard a phrase from a seasoned dev: "Changing production code because of tests is a bad practice!" — it goes without saying that that project's codebase remains one of the worst I have ever worked with to this day.
The pinnacle of this project for me was a 4-week sprint,
during which my team was extremely busy but managed to produce so little output
that during the monthly project demo, all we had to show for it was a small green text on a couple of web pages.
And nobody was laughing, because the other teams (~15 in total) managed to produce even less.
A couple of months later, the project with a €20 million a year budget was canceled after approximately 4 years of development.
The project was a massive failure.
It was probably mismanaged all over the place, yes, but the poor and unprofessional "ship-shit-fast" engineering culture didn't help, that's for sure. Over 4 years, more than a hundred engineers (myself included) produced nothing but a raw, unmaintainable mess that inevitably ground development to a halt.
But I've learned a lot. No matter how many hours my team and I put into a working week, the ever-growing mess will always outpace us. And the only way to move fast is to move with ever-increasing quality. And the only way to achieve ever-increasing quality is to mercilessly refactor existing code. And the only way to enable refactoring is to have rigorous testing ethics.
Moral of the story: good tests equal fast development.
This is big.
This took me too long to realize.
If I want to perform some minor refactoring (tidying), like rearranging methods and classes or extracting new interfaces - something that keeps the behavior the same - I should be able to do it without breaking tests. This is impossible if tests are written to test the implementation (each method/line of code).
My rule of thumb goes like this: All the code I can merge into a single class without breaking the system and domain boundaries should be tested as one. The whole domain is a unit.
Assuming I have something like this:
├── buyallmemes
│ ├── notification
│ │ ├── NotificationUser.java
│ │ ├── NotificationUserRetriever.java
│ │ ├── NotificationUserMapper.java //maps something to something
│ │ ├── EmailGatewayClient.java // sends message to queue
│ │ ├── ... //domain specific logic
│ │ └── NotificationModule.java
│ ├── user
│ │ └── UserModule.java //implements NotificationUserRetriever, retrieves a user from DB
There are several possibilities to scope tests here. My approach: treat the whole NotificationModule as the unit and mock only external dependencies. This way, dependencies within the scope can be refactored. It's much more flexible. API signatures can be changed freely.
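A minimal sketch of that scoping, with only the boundary dependencies mocked (the NotificationModule constructor signature here is hypothetical):

@ExtendWith(MockitoExtension.class)
class NotificationModuleTest {
    @Mock
    private NotificationUserRetriever userRetriever; // boundary: implemented by another domain
    @Mock
    private EmailGatewayClient emailGatewayClient; // boundary: sends messages to a queue
    private NotificationModule underTest;

    @BeforeEach
    void setUp() {
        // internal collaborators (like the mapper) stay real,
        // so they can be refactored freely without breaking this test
        underTest = new NotificationModule(userRetriever, new NotificationUserMapper(), emailGatewayClient);
    }
}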
It’s impossible to refactor code without tests. It’s dangerous, time-consuming, and error-prone. It’s not fun. The number one precondition to any refactoring is a strong test suite, and there’s no way around it. Untested code cannot be adequately refactored.
And nobody writes clean code from scratch. Not even the "strongest" programmers. The "stronger" the programmer, the more they rely on an adequate test suite to support their messy code from the beginning.
I've been guilty of refactoring without tests in the past. It's a dreadful experience.
The cleanliness of tests is arguably even more important than the clean “production” code. The code will inevitably change, it will evolve, and the only thing that will hold it accountable is tests.
Try to avoid any “crafty” approaches. Settle for standard tools and practices.
Bad:
@BeforeEach
void setUp() {
MockitoAnnotations.openMocks(this);
}
Deceiving. It hides unnecessary stubbing. Don't do it.
Good:
@ExtendWith(MockitoExtension.class)
class WonderfulServiceTest {
...
}
Reveals unnecessary stubbing, makes tests more readable, and adds more Mockito magic (in this case, this is a good thing).
Bad:
private SystemUnderTest underTest;
@Mock
private MockOfSomething mock;
@BeforeEach
void beforeEach() {
underTest = new SystemUnderTest(mock);
}
Good:
@InjectMocks
private SystemUnderTest underTest;
@Mock
private MockOfSomething mock;
Clean. Less boilerplate code.
Messy unit tests pose a much greater risk than the absence of tests. They create fake coverage and mislead you into thinking that the code is working.
And stay away from reflection.
In the Java world, tools like PowerMock and ReflectionUtils are a solid sign that something is fundamentally wrong with the code design. Unless you are building a reflection-based framework of some sort, there should be no need for such tools.
Just like production code, tests should be reviewed and refactored to ensure that they are still valid and maintainable. This includes removing redundant tests, consolidating duplicate tests, and improving test readability.
Follow the AAA pattern (Arrange, Act, Assert)/GWT pattern (Given, When, Then)
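A minimal sketch of the pattern (all names here are illustrative):

@Test
void greetsExistingUser() {
    // Arrange (Given): prepare input and stub collaborators
    when(userRetriever.findUserById(123L)).thenReturn(Optional.of(user));

    // Act (When): invoke the behavior under test
    String greeting = underTest.greet(123L);

    // Assert (Then): verify the outcome
    assertEquals("Hello John Doe", greeting);
}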
Do not feel compelled to stuff all your tests for FooService into FooServiceTest.
Every test that needs a slightly different setup should go into a separate test class.
It helps to understand what exactly is going on in a test class.
For example, FooServiceUserNotFoundExceptionTest requires little to no explanation.
Not sure where to put new tests? Create a new class.
The test class is getting too big and requires a lot of doom-scrolling? Split it into several test classes. This is also a good indicator that the class under test is too big, with too many responsibilities. Refactor it. Split it into smaller pieces.
Once again, the best test is the simple test
Happy paths.
It's a good idea to start with something simple, something satisfying.
Code that you fear.
This should be your primary objective.
The first test is the hardest to write, and as soon as you crack it,
everything else will fall into place with ease.
Deeply encapsulated logic that is hard to reach via API.
The logic that requires a lot of state management.
Sometimes it's not possible to test the whole change in isolation, and this is where "method by method" tests become useful.
Don't overdo it.
A bug.
Every time you write a failing test that proves the bug before fixing that bug - you deserve a small salary raise.
This is what truly differentiates the best from the rest.
Personally, I find it extremely satisfying to see my failing test prove a bug, just before it gets fixed.
Or even better, a test that should fail — passes, because the initial "bug" assumption was wrong.
I can't stress enough how powerful this technique is.
Validation.
Places with high cyclomatic complexity.
if, for, while, etc.
Exceptional cases.
All your throws and try-catch blocks.
Test it, but maybe a bit later.
Facade methods.
Methods that just call another method or two.
If you have time - do it.
What are the chances that someone will accidentally delete one of those calls?
These methods can usually be tested in a bundle with some other logical parts.
Trivial code.
Getters/Setters.
Not the best way to increase code coverage.
Same as for the facade methods — your getters/setters/mappers should be tested as part of something more meaningful.
Legacy code that never changes with no bugs.
If it works — don’t touch it.
Leave it be.
Find something better to do.
Don't start testing by passing null and empty collections.
Don’t start testing with extremely rare edge cases.
Focus on what’s important first.
Use code coverage to detect missed paths.
Don’t strive to have high code coverage for the manager's sake.
Pareto principle applies to tests quite well. 80% coverage could be achieved by spending just a little bit of effort. The last 20% of coverage will take you approximately four times as much.
Ludicrously fast.
Run unit tests often.
Run unit tests all the time.
Keep in mind that unit tests focus on behavior.
Timing and concurrency should never be a part of a unit test — otherwise, you end up with non-deterministic results.
No Thread.sleep(..). No while(...){...} busy-waiting.
Keep these techniques for integration tests.
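In those integration tests, a polling library like Awaitility is a saner substitute for Thread.sleep(..) and busy-wait loops; a sketch (the queue here is hypothetical):

await().atMost(Duration.ofSeconds(5))
        .untilAsserted(() -> assertTrue(queue.isEmpty()));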
Actively look for slow unit tests and investigate.
The usual suspects are Reflection and his best friend Mocking Static.
To fight the static disease, convert static methods into small instance-based components.
Bad:
public class SomethingSomethingUtil {
    private SomethingSomethingUtil() { //look ma, I know about the default constructor
    }

    public static Something convert(SomethingElse somethingElse) {
        Something something = new Something();
        something.setSomeField(somethingElse.getSomeField());
        return something;
    }
}
The only way to mock this is via Mockito.mockStatic(SomethingSomethingUtil.class) or tools such as PowerMockito.
This slows down tests considerably and makes them hard to work with.
Overall, I consider static to be a terrible practice.
Good:
@Component
public class SomethingSomethingConverter {
public Something convert(SomethingElse somethingElse) {
return SomethingSomethingUtil.convert(somethingElse);
}
}
In case it is impossible to refactor (and get rid of) SomethingSomethingUtil in one go (3rd-party library, too heavily used in production code), it is perfectly fine to introduce a decorator-ish component that wraps the static nonsense.
The new component could be easily controlled, mocked, and tested.
This speeds up tests considerably and makes the code much cleaner in general.
Although some literature suggests that talking to a database or a queue during a unit test is fine, I disagree. I like to keep my unit tests simple, fast, and away from the network.
No flakiness.
No time dependence.
Avoid Instant.now() and such.
Instead, create a small component and inject it everywhere you need the current date.
@Component
public class DateService { // naming is hard, but we can always change it
    public Instant getNow() {
        return Instant.now(); //static methods are a bad practice, by the way
    }
}
It could be easily mocked and tested. A thing of beauty.
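Freezing time in a test then becomes trivial; a quick sketch:

@Mock
private DateService dateService;
...
Instant frozenNow = Instant.parse("2030-01-01T00:00:00Z");
when(dateService.getNow()).thenReturn(frozenNow);
// everything under test now sees a deterministic "now"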
No network interaction — the network is slow, avoid it
Avoid concurrency and multithreading, unless this is your prime objective
Mock behavior, not data.
Bad:
MyBelovedDTO dto = mock(MyBelovedDTO.class);
Why?
I see this all the time, and every single time my reaction is "Why?"
After all these years, I still don't understand.
I probably missed a memo or something.
In most cases, there's a beautiful builder pattern hidden somewhere.
Use it.
There’s none?
Add a builder pattern and use it.
If there's no access to the source code (3rd-party library), invest in creating a dedicated builder just for testing.
Good:
MyBelovedDTO dto = new MyBelovedDTOBuilder() //builder could be a standalone class
    ... //use builder setters
    .build(); //ugly target class is encapsulated
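And if the target class comes from a 3rd-party library, a dedicated test-only builder could look like this sketch (ThirdPartyDTO is hypothetical):

public class ThirdPartyDTOTestBuilder {
    private final ThirdPartyDTO dto = new ThirdPartyDTO(); // the ugly class is instantiated in one place only

    public ThirdPartyDTOTestBuilder withId(long id) {
        dto.setId(id);
        return this;
    }

    public ThirdPartyDTO build() {
        return dto;
    }
}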
Don't Mock Getters.
Just don’t.
Don't have Mocks return Mocks.
Every time you do that, a fairy dies 🧚😢
Overuse of mocks leads to brittle tests and code that is difficult to maintain.
It is perfectly fine to use real classes instead of mocked interfaces.
Mocked interfaces are hard to change - every API change will break ALL tests.
Do yourself a favor, and don't solidify interfaces between components prematurely.
This is especially true in the early stages of development.
Mock a bit further from the class you are testing, and leave yourself room to wiggle.
Or even better - start with a small integration test.
Assuming we have something like:
@RequiredArgsConstructor
class A {
private final B b;
public String getSomething() {
return b.computeSomething();
}
}
@RequiredArgsConstructor
class B {
private final CRepository cRepository;
public String computeSomething() {
return cRepository.getSomething() + " World!";
}
}
class CRepository {
// representation of a database
public String getSomething() {
return "Hello";
}
}
Class A injects class B, and class B injects class CRepository. Nothing crazy.
Might be too fragile:
@ExtendWith(MockitoExtension.class)
public class ATest {
@InjectMocks
private A a;
@Mock
private B b;
@Test
void test() {
when(b.computeSomething()).thenReturn("Hello World!");
String actual = a.getSomething();
assertEquals("Hello World!", actual);
}
}
The interface between A and B is effectively locked. The only change we can make without breaking the test is renaming via IDE. It's useful, but nothing spectacular.
Might be more elastic:
@ExtendWith(MockitoExtension.class)
public class ATest {
private A a;
@InjectMocks
private B b;
@Mock
private CRepository cRepository;
@BeforeEach
void setUp() {
a = new A(b); //real implementation of B is injected
}
@Test
void test() {
when(cRepository.getSomething()).thenReturn("Hello");
String actual = a.getSomething();
assertEquals("Hello World!", actual);
}
}
The interface between A and B could be freely changed in any direction.
Much more flexible approach.
But this does not mean that the interface of B should always stay fluid.
As soon as the API of class B is getting more mature (ready to be merged into the mainline), it might make sense to "solidify" it by adding more unit tests.
If you're using a framework with a dependency injection mechanism, you can probably specify the set of dependencies to include in the test.
This is how Spring does it:
@ExtendWith(SpringExtension.class) // Enables Spring to take control over the test execution
@Import({A.class, B.class}) //classes that will be included into the test Spring Context
public class ATest {
@Autowired
private A a; //A will be instantiated by Spring
//B will be injected automatically
@MockBean
private CRepository cRepository; //Mock of CRepository will be injected into B
@Test
void test() {
when(cRepository.getSomething()).thenReturn("Hello");
String actual = a.getSomething();
assertEquals("Hello World!", actual);
}
}
But be careful: you're still locking quite a few components together. Plus, such tests are a bit slower than "pure" JUnit tests due to the Spring Context overhead. It's not slower by much, but when we're talking about thousands and thousands of unit tests, every hundred milliseconds counts.
Avoid usage of any() or similar vague matchers.
You should have a pretty good idea of what the parameter is and can use a specific value instead.
And in case you don't know, you can capture the actual parameter via ArgumentCaptor and apply the usual assertions on it.
Bad:
underTest.returningVoidIsABadPractice(veryCoolInputData); //calling a real method
verify(mock).veryCoolMethodIWantToTest(any()); //WTH is tested here?
An extremely deceiving test that creates fake code coverage. Better to have no test than this. Honestly.
Good:
underTest.returningVoidIsABadPractice(veryCoolInputData); //calling a real method
ExpectedObjectType expectedObject = ExpectedObjectType.builder()
.setId(123L)
.build(); //indirectly tests setters!
verify(mock).veryCoolMethodIWantToTest(expectedObject); //aaah, now it's clear
Best case scenario.
Objects will be compared using .equals(Object object).
A much more flexible solution.
In case new fields are added to ExpectedObjectType, this test will automatically reveal all discrepancies in the underTest.returningVoidIsABadPractice(...) implementation.
Isn't this awesome?
or
@Captor
private ArgumentCaptor<ExpectedObjectType> expectedObjectCaptor;
underTest.returningVoidIsABadPractice(veryCoolInputData); //calling a real method
verify(mock).veryCoolMethodIWantToTest(expectedObjectCaptor.capture());
ExpectedObjectType expectedObject = expectedObjectCaptor.getValue();
assertEquals(123L, expectedObject.getId()); //indirectly testing getter!
Sometimes there's no .equals(Object object) implementation (3rd-party library), so we have to compare objects field by field manually.
A less flexible solution.
or
underTest.returningVoidIsABadPractice(veryCoolInputData);
verify(mock).veryCoolMethodIWantToTest(assertArg(expectedObject -> {
    assertEquals(123L, expectedObject.getId());
    assertEquals("Object title", expectedObject.getTitle());
}));
Slicker and up-to-date replacement for ArgumentCaptor. Available since Mockito v5.3.0.
The execution order of tests is non-deterministic; they might even run in parallel.
Avoid any sort of static constructions in your tests.
Bad:
private static List<String> names = new ArrayList<>();
@Test
void testNamesEmpty() {
assertTrue(names.isEmpty());
}
@Test
void testNamesNotEmpty() {
names.add("John Doe");
assertFalse(names.isEmpty());
}
The variable List<String> names is shared between all tests.
Changing the order of execution will change the output.
Avoid it like the plague.
Good:
private List<String> names = new ArrayList<>();
@Test
void testNamesEmpty() {
assertTrue(names.isEmpty());
}
@Test
void testNamesNotEmpty() {
names.add("John Doe");
assertFalse(names.isEmpty());
}
For each @Test, a new instance of the test class is created, therefore the instance variable List<String> names will not be shared.
A green test should produce no output.
A red test should produce just enough clear output.
Bad and absolutely useless log:
Good luck finding anything there.
Good (but not perfect, too much output from Maven) output of the failing test suite:
A simple browser search will reveal all the necessary information.
Never generate random input.
Don’t use named constants from the production code.
What if there's a typo?
Prefer literal strings and numbers, even when it means duplication.
Too many assertions make tests difficult to read and maintain, and they blur the overall picture.
Strive to have one assert... per test for maximum readability.
Avoid any sort of conditional logic or logic in general in your assertions. Otherwise, you’ll have to write tests to test your tests.
Bad:
assertEquals("Hello"+expectedPersonName, actualGreeting);
Even the simplest logic, like string concatenation, can produce errors.
Have you noticed the missing (space) after “Hello”?
Users will notice.
Good:
assertEquals("Hello John Doe",actualGreeting);
Leave no room for errors. At least, in unit tests.
Be mindful of what is actually going on behind assertEquals().
It is not best suited to test collections.
Use AssertJ (https://assertj.github.io/doc/): .contains(), .containsExactly(), .containsExactlyInAnyOrder(), etc. instead.
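A quick AssertJ sketch for collections:

List<String> names = List.of("John", "Jane");
assertThat(names).containsExactlyInAnyOrder("Jane", "John"); // order doesn't matter, content does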
Don’t over-abuse AssertJ, as it leads to overly complex tests.
Use simple standard assertions where possible.
Assertions should not be smart
Assertions should be simple
Use assertAll() to see the whole picture.
Bad:
assertEquals(123L, actual.getId());
assertEquals("John", actual.getName());
assertEquals("Doe", actual.getSurname());
... //20 more asserts, awful
The first failed assert... will interrupt the test, and you will see only a part of the picture.
Good:
assertAll(
    () -> assertEquals(123L, actual.getId()),
    () -> assertEquals("John", actual.getName()),
    () -> assertEquals("Doe", actual.getSurname()),
    ... //20 more asserts, still awful
);
assertAll(...) will run all executables (asserts) and produce a combined output.
You will see the full picture.
Although the test itself is starting to look rather ugly.
Use the assert message parameter to help future you understand what exactly is going on.
assertEquals(expected.getId(), actual.getId(), "User Id") ← every assert.. method has an overload with an extra message parameter.
It accepts not only a String but also a Supplier<String>.
Even the simplest predefined message is much better than AssertionFailedError: Expected 1 Actual 2.
Good luck deciphering that in three months.
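A sketch of the Supplier<String> flavor: the message is computed lazily, only when the assertion fails:

assertEquals(expected.getId(), actual.getId(),
        () -> "User Id mismatch for user: " + actual.getName());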
You want your test to convey a story about what is going on with the system. Just enough to spot the issue when it occurs.
Make sure that your tests are actually testing something. You should see your tests fail before they succeed.
Be curious, change the production code, see your test fail, confirm the error, and fix it back. It virtually takes no time, and comforts you during the production deployment.
The earlier you write unit tests, the simpler this could be achieved. It's tough to write failing unit tests for already written code.
Parameterized testing is a technique used to run the same test method with different input parameters. This helps reduce code duplication and ensures that the code works as expected with different inputs. Practice parameterized testing to improve the efficiency of tests and increase test coverage.
Testing validation rules? Parametrized test probably is a good idea.
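A minimal JUnit 5 sketch (the EmailValidator here is hypothetical):

@ParameterizedTest
@CsvSource({
        "john@example.com, true",
        "not-an-email, false"
})
void validatesEmail(String input, boolean expected) {
    assertEquals(expected, EmailValidator.isValid(input));
}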
Architectural testing is a technique used to verify that the code follows certain architectural rules and constraints. It should be used to ensure that the code is scalable, maintainable, and follows best practices.
Architectural tests are extremely useful for preserving(or forcing) project structure.
For example:
prevent classes in one package from accessing classes in another package (a.k.a. don't inject the repository into the controller)
forbid accessing internal implementation of the module directly, and force usage of the API layer
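Rules like these can be encoded with a library such as ArchUnit; a minimal sketch of the first rule (the package name is illustrative):

@Test
void repositoriesAreNotInjectedIntoControllers() {
    JavaClasses classes = new ClassFileImporter().importPackages("buyallmemes");
    noClasses().that().resideInAPackage("..controller..")
            .should().dependOnClassesThat().resideInAPackage("..repository..")
            .check(classes);
}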
Overall, architectural tests should sit quite deep in your toolbox. Don't just wave them left and right.
Test coverage is a useful metric that can help identify untested code paths
Test coverage is just a metric, and should not be the sole purpose of writing tests
Writing tests solely to increase test coverage leads to dangerous, fake, meaningless coverage, where tests simply execute the code paths without actually asserting or verifying results
Fake coverage leads to a false sense of security, where developers think they have thoroughly tested their code when in reality they have not
Using tools like Sonar or other static code analyzers can help identify missed execution paths, but they should not be used to enforce writing tests for the sake of coverage
Focus on writing tests that actually test functionality and ensure that code is working as expected, rather than just trying to increase test coverage
Good test coverage alone does not guarantee the quality or correctness of code
It is better to have no test coverage than a fake one. With no coverage, at least, there is an incentive to write tests
Try to break the test — if the only way to break the test is to delete some lines of code, it might be a fake test
Vague argument matchers - scream fake
Messy overly complex tests — there’s a high probability that some coverage is fake
Tests without any meaningful assertions or verifications - 100% fake
Tests that test getters and setters — it’s not fake, but a horrible way to increase the test coverage
Tests that do not follow this testing guideline — most certainly fake 😉.
Extreme Programming (XP) is an agile software development methodology that emphasizes testing as a core practice:
Integrate your code into the mainline frequently, and avoid branching for too long.
Thankfully, this practice is adopted quite well these days.
If something is even 1% over your comfort zone - ask for help.
I can't stress enough the importance of pair programming. I pity the teams and organizations that see this as a "waste of time."
Two heads are better than one.
Don't ever push code unless it is worthy of being added to your CV.
Let me quote Kent Beck here:
For each desired change, make the change easy (warning: this may be hard), then make the easy change
Don’t ever put code in visible sight unless it has a reasonably good unit test suite.
Nothing screams "mess" louder than "I finished the development, now I will write some tests."
There's a reason why I labeled the test pyramid at the beginning of the article as "classic." I wanted to avoid "monolithic." But it's true, the classic test pyramid was introduced in times of monoliths. Big monoliths. With millions and millions of lines of code.
In the world of microservices, this pyramid evolved. It's no longer even a pyramid. It's evolved into what's called Honeycomb Testing Strategy, which shifts the focus from internal implementation to external integrations, hence it suggests a higher quantity of integration tests with unit tests sprinkled on top.
Write a lot of integration tests and write them early
“Attack” complex isolated parts with unit tests
Sprinkle some system e2e tests on top
Use https://wiremock.org/ or https://www.mock-server.com/ and https://www.testcontainers.org/ to mock/emulate all external dependencies
Reuse the test setup as much as possible by introducing a base test class with all the necessary fixtures to start the service (see the sketch after this list).
Be careful about shared stateful parts, like DB, Kafka, RabbitMQ, etc.
Clean them before and after if necessary.
Pro tip: cleaning state BEFORE the test provides you with a better debugging experience.
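A sketch of such a base class, assuming Spring Boot with Testcontainers (the details will vary per project):

@SpringBootTest
@Testcontainers
public abstract class BaseIntegrationTest {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");

    @DynamicPropertySource
    static void datasourceProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @BeforeEach
    void cleanSharedState() {
        // clean the DB/queues BEFORE each test: leftovers from a failed run stay inspectable
    }
}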
In order of importance:
Test the service as a whole via its interfaces — REST, Async, etc. Treat your service as a black box.
Afterward, test integrations (like DB, 3rd party services, S3, etc) in isolation if necessary.
Mocks are allowed.
Use the Maven Failsafe plugin or similar to separate slow integration tests from blazing-fast unit tests in your CI/CD pipeline.
Your goal should be to receive as much feedback as quickly as possible.
This is a little bit wild, but I believe that there is no reason for a modern backend service to have technical bugs. I'm not talking about bloody monoliths written in the past century. I'm talking about something a little bit more modern. Let's say written in the past 3 years. There are no logical reasons to have bugs there.
There might be some discrepancies due to product misunderstanding and such. But everything else signals a high level of unprofessionalism from the engineers who built it.
https://www.youtube.com/watch?v=1Z_h55jMe-M - must watch, if you’re not familiar with Victor Rentea - welcome to the club, buddy
https://amzn.eu/d/bLybGSN - absolute classic, must-read, testing covered in Chapter 9
https://amzn.eu/d/48lnk1H - amazing book by one and only Martin Fowler. Must read.
…to be continued
"I knew you'd say that" - Judge Dredd
After publishing the Practical Dependency Inversion Principle article, I received amazing feedback from one of my dear colleagues.
It was in the form of a question:
...there is another problem, the cross-dependency between modules/packages.
What are your thoughts on this?
The question was premised on a schema that looks like this:
With code structure like this:
├── test
│ ├── notification
│ │ ├── NotificationUser.java
│ │ ├── NotificationUserRetriever.java
│ │ └── NotificationModule.java
│ ├── user
│ │ ├── UserModule.java
│ │ ├── UserNotificationRetriever.java
│ │ └── UserNotification.java
Where NotificationModule implements UserNotificationRetriever, and UserModule implements NotificationUserRetriever.
It's not that hard to imagine:
NotificationModule wants to know something about a user, and the dependency on UserModule is inverted, exactly as it should be.
UserModule needs something from NotificationModule, and that dependency is also inverted.
This is what's called a Circular Dependency.
And it's extremely problematic.
Dependency Inversion ultimately plays no role here; even with direct, uninverted dependencies such a case can occur, and the Dependency Inversion Principle by itself cannot fix it.
Some frameworks (like Spring) and build tools (like Maven) will produce an error if even a single circular dependency is detected.
The main reason is — it's just too dangerous to resolve.
It's a recursion.
Unless treated with care, it can produce such nice things as out-of-memory and stack-overflow errors.
But, more than anything, it reveals the fundamental flaw in the system design.
In this article, I'm going to share some tips and tricks on how to treat circular dependencies. And I'm going to start with the most radical one.
Yes, I know.
You and your colleagues spent weeks and months trying to separate UserModule and NotificationModule.
You might have even extracted them into systems separated by the network to enforce sacred domain boundaries.
And now I'm suggesting moving everything back together into a single SpaghettiModule?
Hell no!
Hear me out. The software is supposed to be... soft. Flexible. Like clay. The purpose of the software is to help businesses achieve their needs. If the software is designed in a way that does not allow developers to build certain features effectively - the design is a massive failure. At the end of the day, most product companies are not selling their software directly, but rather via a service that software implements a.k.a. SaaS. I think we can agree on that.
For example, do you care about the system design behind google.com? If you're a nerd, maybe. A regular person couldn't care less about the underlying software. But everyone cares about this software working. Everyone.
So yeah, if UserModule and NotificationModule want to be together, because the business requirements want them so, it's probably a good idea to consider merging them and reshaping them into a single domain.
Don't feel overprotective of existing boundaries.
Sometimes mistakes are made, and the worst thing we as engineers can do is to be stubborn about it.
It's a very humbling experience. You should try it.
A less radical, but a bit more political, approach is to invert the dependency from one module to the other, and leave the direct dependency in the opposite direction.
For example, we decide that NotificationModule is the high-level module, and UserModule is... well, further from the core of the business logic.
This is where the political card has to be played, because the team that manages UserModule might not agree on doubling down on the NotificationModule dependency:
With the code structure like this:
├── test
│ ├── notification
│ │ ├── NotificationUser.java
│ │ ├── NotificationUserRetriever.java
│ │ └── NotificationModule.java
│ ├── user
│ │ ├── UserModule.java
And so there we have it.
UserModule directly depends on NotificationModule, and there's an inverted dependency from UserModule to NotificationModule.
The dependency cycle no longer exists.
At least, during build time.
There's still the possibility of an infinite loop during runtime:
NotificationModule invokes the NotificationUserRetriever interface that's implemented within UserModule.
While serving NotificationUserRetriever, UserModule needs something from NotificationModule, and so it calls it directly.
This is more like a hack, remedying the symptoms. The disease is still there. Modules are still tightly coupled. Domain boundaries are wrong. We just tricked the system.
To solve this problem once and for all, one of the dependencies has to be broken. The best-case scenario is that both of them no longer exist.
However, there are ways to break circular dependencies via some integration patterns. A queue is the first thing that comes to my mind. Is it possible to eliminate the dependencies altogether by listening to a message queue? Or maybe something a bit more robust, like a Kafka topic? Sounds great! Don't. It's even more dangerous.
Let's go through a "hypothetical" example:
NotificationModule receives a request from out there, and after fulfilling the request, it emits an event to UserModule.
UserModule receives the event, performs some computation, updates some user data... and sends an event to NotificationModule.
NotificationModule receives the event, and after performing some computation, it decides to notify UserModule via an event.
You can see where this is going. The system ends up in an asynchronous loop of event exchanges that never terminates. It might go on for days and weeks unnoticed. Until, eventually, with more and more requests triggering infinite loops, the whole system grinds to a halt and goes OOM.
Been there. Done that.
This is a tricky one because it's very easy to get it wrong and make things worse.
The approach is to extract the functionalities that produce circular dependencies into a new, even more high-level module, and invert the dependencies toward it.
The code structure:
├── test
│ ├── aggregator
│ │ ├── AggregatorUser.java
│ │ ├── AggregatorUserRetriever.java
│ │ │
│ │ ├── AggregatorNotification.java
│ │ ├── AggregatorNotificationRetriever.java
│ │ │
│ │ └── AggregatorModule.java
│ ├── notification
│ │ └── NotificationModule.java
│ ├── user
│ │ └── UserModule.java
We're demoting UserModule and NotificationModule to a lower level of abstraction, and introducing a new, higher-level AggregatorModule (naming is hard).
So that NotificationModule depends on AggregatorModule, and UserModule depends on AggregatorModule.
The nuance here is that AggregatorModule now exposes two interfaces, but NotificationModule and UserModule can each cover only one of them, so the setup requires more attention.
There are whole lots of tricks that could be applied to handle such a case: from something like a combination of the @ConditionalOnMissingBean(...) and @Primary bean annotations, if we're talking about Spring Framework, to something as simple as a default interface method.
And if you feel like there might be more modules to depend on AggregatorModule, it might be a good idea to introduce a generic aggregator interface.
This is where the real engineering begins.
This approach seems like a quite straightforward one.
What's easy to get wrong here?
I'm glad you asked.
And the answer is simple — the direction of the dependency inversion.
It might sound like a brilliant idea to introduce AggregatorModule and make it depend on both UserModule and NotificationModule:
With code structure like this:
├── test
│ ├── aggregator
│ │ └── AggregatorModule.java
│ ├── notification
│ │ ├── NotificationUser.java
│ │ ├── NotificationUserRetriever.java
│ │ └── NotificationModule.java
│ ├── user
│ │ ├── UserModule.java
│ │ ├── UserNotificationRetriever.java
│ │ └── UserNotification.java
AggregatorModule implements both interfaces.
UserModule and NotificationModule no longer know about each other.
Sounds great!
Except it's not.
Where will AggregatorModule get the information to implement NotificationUserRetriever, for example?
From UserModule, of course.
And what about UserNotificationRetriever, how to implement it?
Invoke NotificationModule.
So the more realistic dependency schema should look like this:
So instead of one circular dependency between UserModule and NotificationModule, there are two, and they are even more distributed!
And, of course, the best way to solve a problem is to distribute it.
COVID? Anyone?
So yeah, be careful. In this case, inversion of dependency could do more harm than good.
And this is exactly why I started with the Tactical Merge. Although it seems like the most extreme option, it is guaranteed to work. The presence of a circular dependency signals a fundamental issue with the design, and addressing it only partially might provide temporary relief, but it won't offer a lasting fix.
Dependency Inversion Principle (DIP) comes from the famous SOLID principles, coined by Uncle Bob back in the 90s.
Most developers have heard or read something about them to some extent. From my experience, most devs (myself included) stop after SO, leaving LID for later days, because they are confusing.
In this article, I'm going to shed some light on the Dependency Inversion Principle, since it's the most impactful and addicting, in my opinion. Once I've started inverting the dependencies in my systems, I can't imagine living without it anymore.
There are only two hard things in Computer Science: cache invalidation and naming things.
So let's deconstruct the name: dependency inversion
Having a dependency implies that we have at least two of something, and there's a dependency between these somethings.
What is something? It could be anything really; the only restriction is that this something is somehow bound by its context.
It might be a single class, a package, a component, a group of packages, a module, or even a standalone web service.
For example, code that calls the database to fetch a user. There are many possible names for such a thing: domain, module, component, package, service, etc.
The name is unimportant, as long as it's consistent throughout the discussion.
I'll call it a module.
A module that queries a user from somewhere (presumably a DB) — the UserModule.
That's the first.
But we need one more.
Let's say we want to send a user a notification because an appointment with a doctor is confirmed.
And here we have our second module — the NotificationModule.
The code might look something like this:
package test.notification;
import test.user.UserModule;
import test.user.User;
public class NotificationModule {
private final UserModule userModule;
public NotificationModule(UserModule userModule) {
this.userModule = userModule;
}
public void sendNotification(long userId) {
userModule.findUserById(userId)
.ifPresent(this::sendNotification);
}
private void sendNotification(User user) {
// notification logic
}
}
package test.user;
public class UserModule {
public Optional<User> findUserById(long id) {
//fetching user from the DB
}
}
package test.user;
public class User {
private String name;
private String surname;
private String email;
//50 more attributes, because why not
}
Folder structure:
├── test
│ ├── notification
│ │ └── NotificationModule.java
│ ├── user
│ │ ├── UserModule.java
│ │ └── User.java
According to the code NotificationModule depends on UserModule.
Such code can be found everywhere.
I would go as far as to say that 99% of the code I've read (and written) looks like this.
And it might seem that there's nothing wrong with it.
In the end, it works, it is straightforward to read, and it is easy to understand.
But there's a problem.
Our sacred logic of managing notifications is polluted with something we don't have control over.
Notice that UserModule resides in a different package than NotificationModule.
It's not a part of the notification domain.
It's a domain on its own.
From the perspective of the NotificationModule, the UserModule is a low-level implementation detail.
And this detail is leaking more and more into the module that depends on it.
See the User class?
It's part of the UserModule, not the NotificationModule.
And NotificationModule is just one of its clients.
Obviously, UserModule is used throughout the system.
It's the most used module in the whole system.
Everything depends on it!
But wait.
Why would NotificationModule care about where the user is coming from?
It just needs some of the user data, and that's it.
The concept of the user is important, but not where it comes from.
And what if a User object is large, but we need only a few fields from it?
Should a new SmallUser object be introduced near the UserModule?
Isn't this a circular dependency then?
NotificationModule depends on UserModule in code, but UserModule depends on NotificationModule indirectly, logically?
It's not hard to imagine how this goes out of hand.
I've seen it go out of hand.
Every.
Single.
Time.
I've seen with my own eyes systems being tied into knots by such modules.
And months and months of refactoring spent just to be reverted with "It's too much. Too expensive. Not worth it." comments.
I wrote such systems.
The root of the problem lies in the dependency direction.
High-level NotificationModule depends on low-level UserModule.
Level in this case means the level of abstraction.
The further we go from the edge (domain boundary) of the system — the higher we go in terms of abstraction.
For example, modules that talk to the DB are on the edge of the system (the scary network), as are modules that send HTTP calls, talk to message brokers, etc.
However, the modules that prepare notification messages are much further from the edge of the system, so their level of abstraction is higher.
It's a relative term.
Like Java is categorized as a high-level programming language based on its proximity to the bare metal, relative to something like Assembly, which is the lowest of them all.
And so the dependency tree might look something like this:
Dependency direction goes with the direction of an arrow.
Everything directly or transitively depends on UserModule.
The core of the system is not the business logic, but the module that retrieves a user from the DB.
This is fundamentally wrong.
We want the business logic to drive our system, not the I-know-how-to-talk-to-a-database-thingy.
This is pretty much self-explanatory, or so it seems.
Google tells me that inversion is a result of being inverted.
Thank you, Google.
And the verb invert means put upside down or in the opposite position, order, or arrangement.
There it goes, putting upside down the dependency, so that it's no longer A->B, but A<-B.
But how to achieve this?
We don't want UserModule to call NotificationModule to send notifications about appointment bookings; it makes no sense.
What we actually want to do is make UserModule depend on NotificationModule, but not interact with it.
Are you watching closely?
Interfaces. Take your time and look through the refactored code:
package test.notification;
public class NotificationModule {
private final NotificationUserRetriever userRetriever;
public NotificationModule(NotificationUserRetriever userRetriever) {
this.userRetriever = userRetriever;
}
public void sendNotification(long userId) {
userRetriever.findUserById(userId)
.ifPresent(this::sendNotification);
}
private void sendNotification(NotificationUser user) {
// notification logic
}
}
package test.notification;
public interface NotificationUserRetriever {
Optional<NotificationUser> findUserById(long id);
}
package test.notification;
public record NotificationUser(String name, String surname, String email) {
}
package test.user;
import test.notification.NotificationUserRetriever;
import test.notification.NotificationUser;
public class UserModule implements NotificationUserRetriever {
public Optional<NotificationUser> findUserById(long id) {
//fetching user from the DB
//and maps it to NotificationUser
}
}
Folder structure:
├── test
│ ├── notification
│ │ ├── NotificationUser.java
│ │ ├── NotificationUserRetriever.java
│ │ └── NotificationModule.java
│ ├── user
│ │ └── UserModule.java
There is a huge fundamental difference.
NotificationModule no longer depends on UserModule.
There's not a single import statement in test.notification that points to the test.user package.
Not a single one.
NotificationModule knows nothing about the existence of UserModule.
NotificationModule is decoupled from UserModule, but not the other way around.
It just asks the universe (the system) for a NotificationUser using its own declared interface, NotificationUserRetriever.
And the universe (UserModule) answers.
This is its job.
This is what this module does.
It abstracts the database on behalf of other modules.
And so the direction of the dependency between NotificationModule and UserModule is inverted.
Given that we apply the inversion to all dependencies, the dependency tree might look like this:
Not only does the system no longer directly depend on UserModule, but the transitive dependencies are also much more relaxed.
What if UserModule grows out of hand?
We can re-implement some interfaces in another NewUserModule without affecting anything.
There's no god User object to grow out of hand.
Instead, there are several domain-specific representations of a user, which have no dependencies between each other whatsoever.
But every decision comes with tradeoffs.
In the case of dependency inversion, the tradeoff is the amount of code.
If every module that wants to retrieve a user introduces its own user model and an interface to support it, UserModule will grow pretty quickly.
And most of the code will just map a database object into yet another domain object.
It's not the most exciting code to write or to test.
UserModule is no longer treated as the module which everyone has to bow to and respect, but rather as a mere mortal, boring worker.
And it works.
But as I've mentioned before, nothing stops the refactoring of UserModule into several smaller, more exciting modules, each implementing its interface and fetching only what's necessary from the DB.
And some of them might talk to something else, like a cache, another service, go for another DB table, etc.
The Dependency Inversion Principle scales far beyond a couple of simple modules. It's extremely powerful and addicting. But it's important to know where to stop. Some literature states that everything should be abstracted and inverted, including frameworks. I think this is overkill. Abstracting the DB engine and inverting the dependency on it is a good idea. Running around abstracting the framework of your choice because someone from the internet says so is not the smartest idea. It's a waste of time. For example, Spring Framework (like pretty much every web framework nowadays) provides amazing DI capabilities (dependency injection, not inversion) that enable performing Dependency Inversion almost effortlessly. Almost.
It requires practice though.
Quite a bit of practice.
And it feels weird at first.
Because we're so used to envisioning systems as three-tiered, going from top to bottom or from left to right — A->B->C.
In reality, systems are more like a graph, where dependencies point inwards to the business logic — A->B<-C.
You guessed it right: Clean Architecture, Onion Architecture, Hexagonal Architecture and such are ALL based heavily on the Dependency Inversion Principle. These are different implementations of DIP. But before you step into one of those architectures and claim yourself an ambassador, I would suggest stepping back and practicing DIP on a smaller scale.
Last but not least. Dependency inversion is an amazing refactoring tool. And it doesn't get enough credit for it.
Let's imagine, the system is not a greenfield.
Let's imagine, the system is 7+ years old.
The UserModule from above now contains several dozen public methods and has a dozen other dependencies.
The User object contains about 50 fields.
Half of them are weirdly named booleans.
There are quite a few complex relationships.
And here we are, building a brand-new notification system. And we need some information about the user. About three or four fields.
We have two options, and two options only:
Option one: NotificationModule depends on UserModule.
We reuse one of the existing public methods from UserModule to fetch a User object.
Then we perform all the necessary transformations on the user within the NotificationModule, and that's it.
The job's done.
But we've added to the mess.
UserModule is now a bit harder to refactor, because there's one more dependency on it.
NotificationModule now also is not that new.
It's referencing a huge User object left, right, and center.
It's now part of the ship.
Maybe you would like to introduce yet another method to UserModule that returns a smaller user?
And now there's even more mess.
How do you think those several dozens of public methods were added? Exactly like that.
Option two: invert the dependency.
We are not going to allow mess into our new NotificationModule by any means necessary.
Our new module is too innocent to witness the monstrosity UserModule has become.
Instead of depending on the mess, we're going to invert the dependency and make the mess depend on our new slick, domain-specific interface.
The mess is still there, but we're not adding to it, which by definition means that we're reducing it.
At least, within our new NotificationModule.
And when someone eventually decides to refactor UserModule, all they need to do is keep the interface implemented.
Not the several dozen public methods with unknown origins introduced within the last 7+ years, but a single interface that lives within the NotificationModule domain.
I don't know about you, but for me, reducing the mess beats adding to the mess any day.
So, the tech.
Oh yes, the most important part — the tech. I'm going to use the stuff I'm most comfortable with, which happens to be the most widespread tech stack in the world: an Angular frontend, a Java + Spring backend, and all that on top of AWS.
Let's begin with infrastructure — to keep things simple, I'm using AWS Amplify to run frontend, and AWS AppRunner to run backend. For now, there's no need for anything more complex than this.
I'm not a frontend expert by any means, but even I know that FE is mostly static stuff. And the best way to serve static stuff is via S3. The problem is — I don't want to spend time configuring all that now. S3 bucket policy, pipelines, roles — I can configure all of that, but why?
This is where the Serverless shines.
AWS Amplify hooks up to the frontend repository via a GitHub webhook.
And every time anything is pushed to the main branch, Amplify gets notified and the internal CI/CD machinery kicks in.
Amplify is smart enough to understand that it's connected to an Angular app (this actually doesn't matter, because it builds the project with a silly npm run build script).
The build artifact is then stored in an AWS S3 bucket (unfortunately, or not, this bucket is not accessible) and then exposed via a CloudFront distribution (also not accessible). By "not accessible" I mean that it's not created under my account; I can't look at it or touch it. It exists, but somewhere within the bowels of AWS. Serverless, right?
AWS S3 is a perfect place for frontend artifacts — infinitely scalable, ultimately robust, publicly accessible (when needed), cheap. It just works. I have a strong impression that AWS S3 powers at least half of the internet, and so I'm trusting it to host my amazing frontend.
A couple of clicks more and the custom domain is attached.
Voilà!
My FE is running under http://buyallmemes.com.
Minimum configuration, maximum profit.
And this is just the tip of the iceberg. With a couple of clicks more, Amplify can be integrated with GitHub PRs. It will spin up a new env per PR created, and when the PR is merged, it will tear the env down. Some organizations I've worked for could only dream about such a feature. And here it is, out of the box.
After the first blog post, I had no backend for my blog application.
— "Do I even need a backend?" - was my question.
— Of course, I'm a backend developer, I have to have a backend.
— Alright, let's have it.
Building the backend is straightforward. Code here, code there — I've been doing this for the last 15 years, so I'm feeling somewhat comfortable. The real question is "How to run it?"
EKS? Hell no, I'm not touching Kubernetes. I'm sick of it. It's too complex. Moreover, I want to run a single container. To say that EKS is overkill in this situation is a huge understatement.
ECS? Sounds better. Let's do it. I created a cluster and a task definition, started a task... and nothing. I can't access my service from the outside. Oh, no... networking. Something is not right with the VPC setup. Subnets seem fine. Security groups and routing tables also "look fine." Damn it, something silly is not right, and I can't find it. Screw it — task stopped, task definition deleted, cluster deleted. ECS is also too complex.
While in bed and half asleep, I was browsing through the AWS Console app on my phone.
Eureka!
AWS Q. AWS AI assistant. This is exactly what they built it for — so that idiots like me could ask questions like mine. The answer was instant — AWS AppRunner.
The next morning I logged in to AWS App Runner and clicked a few buttons.
And... it worked. My Hello World backend is running in a matter of minutes. No complex configurations, and no networking. This is why I love AWS.
I've hidden my app deployment behind a custom domain, http://api.buyallmemes.com, by fiddling with the Route 53 hosted zone and clicking a couple of buttons in App Runner. Thankfully, I know a couple of tricks around DNS.
A couple of clicks more, and now App Runner will automatically redeploy my backend application as soon as a new image version is published to ECR. All I need to do is set up a GitHub Action to build and publish images to ECR. Easy.
Once again, no roles, no policies, only profit.
Now, it's time to build the real backend.
The choice of tech for the backend is super easy. There's no choice, really. There's only one true kind, and it's Java + Spring. I'm starting with an extremely simple setup: one REST endpoint that returns a list of posts. What is a post? A simple resource with only one attribute — content. For now, I don't need anything else.
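In Spring terms, that's roughly this much code (a sketch, not the literal blog source):

```java
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A post is its content. Nothing else. For now.
record Post(String content) {}

@RestController
class PostController {

    // The one and only endpoint: return all posts.
    @GetMapping("/posts")
    List<Post> getPosts() {
        return List.of(new Post("Hello, world!"));
    }
}
```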
However, I do need something — the Zalando Problem library: https://github.com/zalando/problem.
I'm sure you know Zalando as an internet clothing retailer, but you might not be aware that they have quite a few cool bits of software. The Problem library is one of those bits. It's a small library with a single purpose — to unify the approach for expressing errors in a REST API.
Instead of figuring out every time what to return in case of an error, or returning gibberish (like a full Spring Web stack trace in case of a 500), the zalando/problem library suggests returning their little Problem structure.
Naturally, the library has an awesome integration with Spring, so there's very little configuration required. Use it, and do yourself (and your REST API consumers) a favor.
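Usage is about as simple as it gets. A sketch (double-check the library's docs for the exact Spring wiring; PostService here reuses the Post from the sketch above):

```java
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.zalando.problem.Problem;
import org.zalando.problem.Status;
import org.zalando.problem.spring.web.advice.ProblemHandling;

// One advice class, and thrown Problems (plus most standard Spring
// exceptions) are rendered as application/problem+json responses.
@ControllerAdvice
class ExceptionHandling implements ProblemHandling {}

class PostService {
    Post findPost(String fileName) {
        // Renders roughly as:
        // {"title": "Not Found", "status": 404, "detail": "No post: ..."}
        throw Problem.valueOf(Status.NOT_FOUND, "No post: " + fileName);
    }
}
```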
Another one of those hidden gems is a Zalando RESTful API Guidelines https://opensource.zalando.com/restful-api-guidelines/ — read it. It's awesome.
So, after the initial setup, I throw a bunch of code in.
Rule #1: First, make it work, then make it right, then make it fast.
I don't care about performance at the moment (if ever), so I will ignore the latter part. Let's focus on making things work.
Damn it, I need a database to store posts!
Or do I?
Hmm, why the hell would I need an enterprise-grade DB (like PostgreSQL) to store a single post? Sounds absurd.
I will store it on disk as part of the source code!
My IDE is the perfect .MD editor.
Git will provide me with all the version control I ever need.
I can just branch off main, write whatever I want, and then merge it back when it's ready to be published.
And it's free!
Well, I need to redeploy the backend every time I write or change a post, but for now, this is not a big deal, so this mechanism will suffice. I've set AWS App Runner to automatically detect and deploy the newest image version of my backend, so I don't have to do much manual stuff besides building an image.
Btw, how am I supposed to build and push the image to ECR? I'm not writing a Dockerfile — that's for sure. Google Jib, https://github.com/GoogleContainerTools/jib.
A simple Jib Gradle plugin declaration in build.gradle (Gradle FTW!): set the jib.from.image parameter to amazoncorretto:21-alpine, and set jib.to.image to my ECR repo. A quick aws ecr get-login-password... from the ECR documentation, ./gradlew jib, and off fly my images.
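The whole declaration boils down to something like this (a sketch; the plugin version and the ECR URL below are placeholders, not my actual setup):

```groovy
// build.gradle
plugins {
    // plugin version is an assumption; pick the latest
    id 'com.google.cloud.tools.jib' version '3.4.0'
}

jib {
    from.image = 'amazoncorretto:21-alpine'
    // placeholder ECR repository URL
    to.image = '<aws-account-id>.dkr.ecr.<region>.amazonaws.com/<repo>'
}
```

After that, ./gradlew jib builds the image and pushes it straight to ECR, no Docker daemon required.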
Easy enough.
I will automate it later.
I think GitHub Actions is what the cool kids are using (I'm more of a GitLab user, but for the sake of the exercise, I decided to publish everything on GitHub).
Alright, for now, that's enough.
I have a running Angular frontend and a Java backend. The frontend knows how to talk to the backend.
The backend returns a list of posts, which are stored in the resources folder. The backend logic is rather silly: it just reads Markdown files from the resources/blog/posts project folder. And yes, I've introduced a fileName attribute to the Post. And that's about it.
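The "silly" part looks roughly like this (a sketch assuming Spring Framework 6+; the class and method names are made up):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

// The Post now carries its file name as well.
record Post(String fileName, String content) {}

class PostReader {

    // Load every Markdown file sitting under resources/blog/posts.
    List<Post> readAll() {
        try {
            Resource[] files = new PathMatchingResourcePatternResolver()
                    .getResources("classpath:blog/posts/*.md");
            return Arrays.stream(files).map(this::toPost).toList();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private Post toPost(Resource file) {
        try {
            return new Post(file.getFilename(),
                    file.getContentAsString(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```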
I've already established a minimal flow of work.
At the moment, there's little to talk about. There's little code and one cute unit test. I guess this is worth talking about — I'm a huge fan of TDD. I love my tests. At the moment, I have only one crucial test, and it covers the two most important aspects: the REST endpoint works, and posts are properly ordered.

I decided to use file naming as a sort parameter. Each new post file is prefixed with the current date, so I can easily sort the files in reverse order to show the latest posts on top and the oldest at the bottom (there's a sketch of this below). Since I'm a backend guy, I prefer to keep such logic at the back.

I don't want to spend much time on the frontend, so I will try to keep it as lean as possible. That said, the more I think about it, the more I realize that I should've gone with something like Thymeleaf and built everything within the backend app, but what's done is done. Having a separate frontend app is not without its benefits anyway. Plus, I can definitely benefit from expanding my horizons beyond the backend and Java.
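The sorting trick in a nutshell, reusing the Post from the earlier sketch (I'm assuming a yyyy-MM-dd file name prefix, so a plain string comparison works):

```java
import java.util.Comparator;
import java.util.List;

class PostSorter {

    // File names like "2024-03-15-hello-world.md" sort lexicographically
    // in chronological order, so a reversed string compare gives newest-first.
    static List<Post> newestFirst(List<Post> posts) {
        return posts.stream()
                .sorted(Comparator.comparing(Post::fileName).reversed())
                .toList();
    }
}
```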
I hate frontend. But at least I figured out how to use Markdown to render content, so I don't have to struggle with WYSIWYG editors, at least for now.
But where was I... Oh yes, BLOG! I'm building a blog - something you've never heard of or seen before, right? I hope you can read through my sarcasm; I'm using it a lot, and I'm not going to tell you where. Figure it out by yourself.
The idea is straightforward — share my knowledge, thoughts, and opinions on software stuff. And there's no better way to do it than via examples. So, let's do it!
I'm going to build a blog while covering certain aspects of the building process in this very blog, so you can see patterns in action. I'm going to start simple. Heck, I'm a backend developer who claims to be proficient in Java and distributed systems, yet I'm writing this in an .MD file, which I will copy-paste into a component file.
I want to make this process agile and iterative while doing only what is necessary to build what I want now. So, for now, it's a single-repo-almost-a-static-page-thingy - https://github.com/buyallmemes/blog.
Also, I kinda enjoy writing from time to time, plus I'm a programmer, so why not combine the best of both worlds and create a place where I can park some of my thoughts for good.