Joaquín Bazán
Backend developer.
The developer is an Information Systems Engineering student completing the last few courses of the degree. Over the last two years he has diverted his attention from academia to specialize in backend development.
He is currently living in Santa Fe, Argentina, but will eventually move to [EXPUNGED], Argentina to start [EXPUNGED]. During his time there he will create the machine, who will eventually become [EXPUNGED]. Soon after, she will [EXPUNGED] the anomaly.
He is a goal-oriented and highly independent problem solver, who would be equally happy programming control systems for spaceships, helping someone's grandmother find the start menu, or unclogging a toilet; to him, everything is about problems and solutions.
He is also a perfectionist who craves difficult problems, which has often led him to take on challenges he can't flawlessly overcome. This has at times brought hardship to his life, causing him to turn to art, and eventually, creation.
The few who were close to him before [EXPUNGED] described him as both extremely reserved and highly transparent, meticulously choosing when to share and when not to.
Those who worked with him described him as a reliable and versatile member of a team, capable of contributing in whichever way is most convenient. He is said to be generally compromising and understanding, but may benevolently criticize work he considers objectively imperfect; his collaborators advise being open to feedback and returning the favor.
References:
1: Ingeniería en Sistemas de Información — Universidad Tecnológica Nacional.
2: He is currently mastering Java 21 and the Spring ecosystem. Despite this specialization, the developer aims to learn to make computers do anything computers can do. This means he is also exploring, to different extents, frontend technologies, architecture, DevOps and system design. He will soon begin to [EXPUNGED].
3: [EXPUNGED].
4: [EXPUNGED].
312: The anomaly has been declared the cause of [EXPUNGED], such as the birth of [EXPUNGED] known as [EXPUNGED], who is credited [EXPUNGED] estimate of over twenty thousand of [EXPUNGED]I am sorry.
The organization known as [EXPUNGED] attributes the [EXPUNGED] and the [EXPUNGED] The Machine [EXPUNGED] to the actions of this individual.
The Skwidl project
A microservices proof-of-concept.
A generic CRUD application with arbitrary constraints, intended to create opportunities to explore the design challenges of microservices architectures.
The code is publicly available in this GitHub repository.
Index
- Soon!
Overview
As of May 2025, it's composed of three domain logic deployables: customers, products, and orders. Each of these is internally structured with domain-driven design (DDD), with separate, highly decoupled packages for the entities customer, user, site, product, (product) category, and order.
Each of these packages is treated as a potential microservice, for a total of six, bundled together into three artifacts following something similar to a monolith-first philosophy.
Architecture
In these multi-service deployables, everything is built to facilitate the future extraction into an independent artifact. For example, the customer package configures its own database connections, has its own AMQP listeners, and is only coupled with the other packages through dedicated classes such as SiteServiceLocalClient, which implements the interface SiteServiceClient.
This enables same-deployable services to avoid network overhead when communicating with each other, while remaining agnostic to the way this communication happens; if scaling requires extracting any of these services into their own deployable, it's only a matter of implementing the corresponding client, and updating Spring bean definitions so that the correct implementation is autowired.
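As a rough illustration of this pattern, here is a minimal sketch of what such a client abstraction could look like. The siteExists operation, the REST-based remote variant, and the configuration class are assumptions for illustration, not the repository's actual code.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestClient;

// Stand-in for the site package's domain service.
interface SiteService {
    boolean exists(long siteId);
}

// The communication-agnostic contract other packages depend on.
interface SiteServiceClient {
    boolean siteExists(long siteId); // hypothetical operation
}

// Same-deployable implementation: a plain method call, no network hop.
class SiteServiceLocalClient implements SiteServiceClient {
    private final SiteService siteService;

    SiteServiceLocalClient(SiteService siteService) {
        this.siteService = siteService;
    }

    @Override
    public boolean siteExists(long siteId) {
        return siteService.exists(siteId);
    }
}

// Hypothetical remote variant, used once the site service is extracted into its own deployable.
class SiteServiceRestClient implements SiteServiceClient {
    private final RestClient rest;

    SiteServiceRestClient(RestClient rest) {
        this.rest = rest;
    }

    @Override
    public boolean siteExists(long siteId) {
        return Boolean.TRUE.equals(
                rest.get().uri("/sites/{id}/exists", siteId).retrieve().body(Boolean.class));
    }
}

// Extraction then only requires changing which implementation this bean definition returns.
@Configuration
class SiteClientConfiguration {
    @Bean
    SiteServiceClient siteServiceClient(SiteService siteService) {
        return new SiteServiceLocalClient(siteService);
    }
}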
Even at the API layer, routing is handled with a custom predicate that checks Eureka metadata, in which each registered service declares a list of "collections" it exposes. That way, external requests are made to, for example, '/app/users/...', and they are seamlessly routed to customers, the deployable that (currently) contains the service/collection.
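A predicate along these lines could be sketched as below, assuming it is implemented as a Spring Cloud Gateway RoutePredicateFactory and that each instance publishes its collections as a comma-separated list under a "collections" metadata key; the factory name, the metadata key, and the path handling are all assumptions.

import java.util.Arrays;
import java.util.function.Predicate;

import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.gateway.handler.predicate.AbstractRoutePredicateFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;

// Matches a request if the target service declares the requested collection in its Eureka metadata.
@Component
class CollectionRoutePredicateFactory
        extends AbstractRoutePredicateFactory<CollectionRoutePredicateFactory.Config> {

    private final DiscoveryClient discoveryClient;

    CollectionRoutePredicateFactory(DiscoveryClient discoveryClient) {
        super(Config.class);
        this.discoveryClient = discoveryClient;
    }

    @Override
    public Predicate<ServerWebExchange> apply(Config config) {
        return exchange -> {
            // e.g. '/app/users/123' -> collection "users"
            String[] segments = exchange.getRequest().getPath().value().split("/");
            if (segments.length < 3) {
                return false;
            }
            String collection = segments[2];
            return discoveryClient.getInstances(config.getServiceId()).stream()
                    .map(instance -> instance.getMetadata().getOrDefault("collections", ""))
                    .anyMatch(declared -> Arrays.asList(declared.split(",")).contains(collection));
        };
    }

    static class Config {
        private String serviceId; // the deployable this route points to, e.g. "customers"

        public String getServiceId() { return serviceId; }
        public void setServiceId(String serviceId) { this.serviceId = serviceId; }
    }
}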
Along with the three domain logic deployables are:
- A Spring Cloud API Gateway.
- A Next.js UI server (WIP).
- Two Eureka server instances.
- A shared Redis instance.
- A local Graylog deployment.
- Local Grafana and Prometheus deployments.
- A remote (CloudAMQP) RabbitMQ message broker.
Commons
All domain services have a dependency on the commons module. This module is a catch-all for anything that needs to be done in all Java/Spring-based services. The current features that stand out are:
- AMQP integration.
- Connection to shared Redis service.
- Custom framework for distributed transactions.
- Custom abstractions for logging, test data generation, and other minor utilities.
Though this (currently) means that all of these features will be included in the production bundle, the commons module is designed to expose decorator-style annotations that selectively enable configuration classes, so that only the required beans are created at runtime.
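In Spring terms, this would presumably follow the usual @Enable*/@Import pattern. The sketch below shows how commons could expose such an opt-in annotation; the names EnableCommonsAmqp and CommonsAmqpConfiguration are made up for illustration.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// Configuration that is only loaded when a service opts in via the annotation below.
@Configuration
class CommonsAmqpConfiguration {
    @Bean
    RabbitTemplate commonsRabbitTemplate(ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }
}

// Opt-in annotation exposed by commons; only services that declare it get the AMQP beans.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import(CommonsAmqpConfiguration.class)
@interface EnableCommonsAmqp {
}

// Usage in a domain service:
// @SpringBootApplication
// @EnableCommonsAmqp
// public class OrdersApplication { ... }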
Framework for distributed transactions
Services may assign themselves up to three possible roles: starter, member, and coordinator.
Starter services expose endpoints to start distributed transactions. This requires, among other things, the list of member services that should respond to the transaction, and a valid coordinator. So starter services must be aware of the identifiers of other services (see ...commons.identity.ApplicationMember), and of course be able to send events to the correct exchange, but they have no need to listen to any queues at all.
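To put the starter role in concrete terms: conceptually, starting a transaction amounts to publishing a REQUEST-typed event that names the expected members and the coordinator. In the sketch below the event shape, field names, exchange, and routing key are all made up; only the concepts come from the description above.

import java.io.Serializable;
import java.util.List;

import org.springframework.amqp.rabbit.core.RabbitTemplate;

// Hypothetical starter-side component; the real framework API will look different.
class OrderTransactionStarter {

    private final RabbitTemplate rabbitTemplate;

    OrderTransactionStarter(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    void startOrderTransaction(long orderId) {
        // Members expected to respond, and the chosen coordinator, identified by
        // IDs that would normally come from ApplicationMember beans.
        var event = new HypotheticalOrderEvent(
                orderId,
                List.of("products", "customers"),
                "orders",
                EventType.REQUEST); // stand-in for DomainEvent.Type.REQUEST
        rabbitTemplate.convertAndSend("transactions.exchange", "transactions.request", event);
    }

    enum EventType { REQUEST }

    // Stand-in for a ...commons.async.events.DomainEvent subclass.
    record HypotheticalOrderEvent(long orderId, List<String> members, String coordinator, EventType type)
            implements Serializable {
    }
}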
Member services register TransactionStage beans to handle a subset of events. All they need to worry about is defining a bean for each relevant stage, of every transaction type (event class) they can be expected to participate in.
Additionally, they may declare a list of distributed EntityLock(s) to be acquired/released before/after executing a TransactionStage, and the framework will handle them too (WIP).
Coordinator services may be chosen as transaction coordinators. They will keep a CoordinatedTransaction instance with the state of the transaction, listen to events published by members, and trigger the execution of transaction stages accordingly (e.g. rollback, commit). The framework handles all of this behavior internally, so coordinators need only assign themselves the role, and ensure they have an ApplicationMember bean with an ID that is known to starter services.
TransactionStage abstractions
Member services can implement the TransactionStage interface to represent a single stage of a single transaction kind. This interface (currently) requires two methods: runStage(DomainEvent, Transaction) and getRequiredLocks(DomainEvent).
In order for these implementations to be found by the framework, they must also be annotated with @TransactionStageBean. This annotation incorporates a bean name, the DomainEvent subclass that represents the transaction kind, and a Stage enum value that represents the step within the transaction.
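Pieced together from that description, a member-side stage might look roughly like the sketch below. Every type here is a simplified stand-in for the commons module's real classes, and the attribute names, return types, and Stage values are assumptions; see ...orders.order.transactions.* for actual usage.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;

// --- simplified stand-ins for the commons module's types (illustrative only) ---
class DomainEvent {}
class Transaction {}
class EntityLock {}
enum Stage { PREPARE, COMMIT, ROLLBACK }

interface TransactionStage {
    void runStage(DomainEvent event, Transaction transaction);
    List<EntityLock> getRequiredLocks(DomainEvent event);
}

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface TransactionStageBean {
    String name();
    Class<? extends DomainEvent> event();
    Stage stage();
}

// --- a hypothetical stage registered by the products member service ---
class OrderCreatedEvent extends DomainEvent {} // hypothetical transaction kind

@TransactionStageBean(name = "reserveStockStage", event = OrderCreatedEvent.class, stage = Stage.COMMIT)
class ReserveStockStage implements TransactionStage {

    @Override
    public void runStage(DomainEvent event, Transaction transaction) {
        // Domain work for this stage, e.g. reserving product stock for the order.
    }

    @Override
    public List<EntityLock> getRequiredLocks(DomainEvent event) {
        // Distributed locks to acquire before and release after runStage (WIP feature).
        return List.of();
    }
}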
At some point during the application's initialization (subject to review), a TransactionStageRegistrarService will fetch all @TransactionStageBean-annotated components and organize them in a map structure with the event class and stage value as composite key.
As events arrive, the listener determines whether the event should trigger the execution of a stage, at which point TransactionStageExecutorService will request the corresponding stage bean, (optionally) acquire locks, run the stage, (optionally) release locks, and publish an event with the result for the coordinator to handle.
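The registration-and-dispatch flow could be pictured with the sketch below, which compresses both services into one class with stand-in types; the key shape is an assumption, and the lock handling and result publishing are reduced to comments.

import java.util.HashMap;
import java.util.Map;

// A compressed, illustrative version of the registrar + executor pair described above.
class StageDispatchSketch {

    interface Stage { void run(Object event); }               // stand-in for TransactionStage
    record StageKey(Class<?> eventClass, String stageName) {} // composite key: event class + stage

    private final Map<StageKey, Stage> registry = new HashMap<>();

    // Registrar side: called at startup for each discovered @TransactionStageBean component.
    void register(Class<?> eventClass, String stageName, Stage stage) {
        registry.put(new StageKey(eventClass, stageName), stage);
    }

    // Executor side: called when the listener decides an incoming event should run a stage.
    void dispatch(Object event, String stageName) {
        Stage stage = registry.get(new StageKey(event.getClass(), stageName));
        if (stage == null) {
            // Missing stage: the framework fails the transaction gracefully (WIP).
            return;
        }
        // 1. (optionally) acquire the stage's distributed EntityLocks
        stage.run(event);  // 2. run the stage
        // 3. (optionally) release locks
        // 4. publish an event with the result for the coordinator to handle
    }
}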
Internally, the code for member services also persists a Transaction entity (the one mentioned as an argument to the runStage method above), containing some data such as the transaction's expiration timestamp and its current status.
Services are responsible for updating this status, in a workflow that is subject to change.
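For a sense of shape only, such an entity might look like the minimal JPA-style sketch below; whether the project actually uses JPA, and which status values exist, are assumptions, with only the expiration timestamp and status taken from the description.

import java.time.Instant;

import jakarta.persistence.Entity;
import jakarta.persistence.EnumType;
import jakarta.persistence.Enumerated;
import jakarta.persistence.Id;

// Hypothetical persisted state for a single distributed transaction on the member side.
@Entity
class TransactionRecord {

    enum Status { STARTED, COMMITTED, ROLLED_BACK, EXPIRED } // assumed values

    @Id
    private String transactionId;

    private Instant expiresAt;  // the transaction's expiration timestamp

    @Enumerated(EnumType.STRING)
    private Status status;      // updated by the service as the workflow advances

    protected TransactionRecord() {} // required by JPA

    TransactionRecord(String transactionId, Instant expiresAt, Status status) {
        this.transactionId = transactionId;
        this.expiresAt = expiresAt;
        this.status = status;
    }
}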
Some preliminary usage examples can be found in ...orders.order.transactions.*
More soon
More details to come! I am also working on some illustrations; sorry if this was boring to read.
A first release is almost done. I only need to test async communication through the message queue and concurrency control, and update the Next.js UI server.
I will not repeat the mistake of setting a date for this.
References:
1: This section is a work in progress, and I am currently refactoring some patterns in the code. Expect minor inconsistencies and lack of coverage.
2: More specifically, the customers deployable contains the customer, user and site services, the products deployable is composed of product and category, and the orders deployable stands as the only proper "micro" service.
3: See ProductServiceRestClient in orders.
4: For the final release, I intend to add the option to deploy the broker locally. But that is boring DevOps work that I will do later.
5: This doesn't scale well, but it's easier to manage than dedicated dependencies for every little feature I need.
6: By sending the corresponding event with DomainEvent.Type.REQUEST to the exchange.
7: These events are normally from ...commons.async.events.specialized.*, but they may be of any class that extends ...commons.async.events.DomainEvent, provided that all member services have compatible representations to deserialize into.
8: The framework will handle any missing stages by failing the transaction gracefully (WIP), though there should be no reason for this to ever happen.
9: This last item is subject to change, since Eureka is available in the system. Currently, service discovery metadata isn't used for this purpose because I didn't want services to fetch the registry. At some point it made sense that only the Gateway should do it, but I don't think I lose anything meaningful by going back on this decision.
The Sbupi project
The site you are currently browsing.
Hosted by Cloudflare Pages. Deployed through this GitHub repository.
This project was rebooted four times, so though it is listed as just one, it's actually the latest product of a long and methodical learning process.
The current version is built with Vite and leverages Vike to scaffold the annoying parts of integrating React, but the code actively avoids letting the framework handle state (where viable).
Features:
- Custom e-mail addresses such as [email protected], without using or implementing an actual mailing service.
- Cloudflare re-routes incoming messages to [email protected],
- Google's SMTP service allows outgoing messages under the jbazann.dev domain.
- Integration with TheCatAPI and Cloudflare R2.
- Navigating to jbazann.dev/cat triggers a request to a worker at jbazann.dev/w/cats
- The worker checks the limits (stored in R2) of its API key, and when possible, scrapes cat images into the bucket.
- Regardless of the results of the scraping step, the worker responds with a list of base64 URLs from the object storage,
- And more!
- The Archive will return to this section at a future time.
References:
1: The previous versions are unavailable, as they do not represent my current knowledge and expertise. They were built using different approaches, with TailwindCSS v3 and v4, and at some point Alpinejs. Feel free to ask about them when you interview me for an awesome big money job.
2: This deviates from React's principles in order to develop a deeper understanding of the benefits and drawbacks of programmatic rendering, and all the ways a more vanilla approach can be more advantageous. I also enjoy doing things my own way and learning from the consequences.
3: Though it doesn't replace the reliability of a dedicated and complete service, it is a simple and free alternative that integrates both platforms smoothly. It also minimizes technical overhead while still looking cool on a resume.
4: The amount per execution is limited by Cloudflare Workers' free plan runtime and sub-request limits. The actual workflow consists of dispatching requests asynchronously, then immediately responding with randomly selected images from the bucket. Then, the worker encodes and stores as many images as possible, before the CPU-time limit forces its termination.
The PostLady project
A Postman alternative that doesn't use Javascript in the backend.
I just started this one! Give me until mid 2025 to have something interesting to share about it.
References:
1: I promise I will change the project name to something that isn't likely to get me sued if I ever release this.
2: ??????
EXPUNGED.
She didn't like this.