Campsite Booking Platform
Integrating with an external API
Duration: 2021.12 - 2022.03 (5 months)
Technologies: Ruby, Rails, RSpec, PostgreSQL, Redis, Web-API
Methodology: Agile / XP-like
Overview
As a software developer at Ironin Software House, I was brought on board for a fixed-scope project. The goal was to integrate the client's application with a Vacation Rental Market provider, so that the client could go through the so-called soft-launch process with the provider.
Specific requirements were outlined for the API communication: pulling and caching dictionary data, retrieving property details and availability, making bookings, and handling webhooks, to name a few.
The provider's API had a distinctive design (not fully RESTful, closer to SOAP): stateless, but XML-based, and using only the HTTP POST method to act on resources.
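To illustrate, here is a rough sketch of a call against such an API using Faraday (which the existing client was based on). The endpoint and payload are made up, not the provider's actual contract; note that even a read operation is an HTTP POST carrying XML:

require "faraday"

connection = Faraday.new(url: "https://provider.example.com")

response = connection.post("/api") do |request|
  request.headers["Content-Type"] = "application/xml"
  # Even a read is a POST; the action is named inside the XML body.
  request.body = <<~XML
    <Request>
      <Action>GetPropertyDetails</Action>
      <PropertyID>12345</PropertyID>
    </Request>
  XML
end

response.body # => XML string describing the property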
Design and implementation
My job was to design and implement the communication with the API so that application services could interact with it and perform common application operations (list properties, create reservations, pull availability, pull statuses, and so on).
I designed the architecture around building blocks such as:
- Entities – each representing a specific API resource,
- Repositories – entry points for the resources, requesting HTTP data via the client, returning entities, and encapsulating the building of those entities from XML,
- Client – already existing, based on Faraday, returning a response object with an XML payload.
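A condensed skeleton of how these blocks fit together; the class names and the client's interface are illustrative, not the production code:

module Entities
  # One entity per API resource; a plain data container.
  Property = Struct.new(:id, :name, :property_type, keyword_init: true)
end

module Repositories
  class Properties
    def initialize(client:)
      @client = client
    end

    # Entry point: requests the XML via the client, encapsulates the
    # parsing/mapping, and returns an entity.
    def find(id)
      response = @client.post("GetPropertyDetails", property_id: id)
      build_entity(response.body)
    end

    private

    # XML parsing and mapping stay hidden from the callers.
    def build_entity(xml)
      # ...parse the XML and return an Entities::Property...
    end
  end
end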
A common approach, one I often see in Rails applications, would be to use the client directly in the application services, having the client return Ruby hashes (hash maps) or generic objects. It seems like the simplest approach.
However, I wanted to add an extra repository-entity layer between the application services and the client. It would add complexity, but in my opinion it was reasonable, as I needed extra space to encapsulate specific logic. I considered the following:
- some resources needed to be cached (so called dictionaries, for example error codes to error messages, status enums to actual status names, property types numbers to actual type names),
- data structures needed to be transformed/mapped in a specific way to be usable within the application, namely to use the previously cached dictionary values,
- XML parsing is more complicated than dealing with JSON, as XML elements can carry not only values but also attributes, which can get out of hand when parsed implicitly (see the sketch after this list),
- the repositories would serve as entry points (limiting the full client interface to a specific subset) and respond with typed data structures, which would be easier to use (as the data would already be coerced, and the data structures defined explicitly in the code).
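A small illustration of the XML point, with a made-up payload: the same information can live in element text or in attributes, so mapping each field explicitly into an entity avoids surprises.

require "nokogiri"

xml = <<~XML
  <Property id="12345" type="3">
    <Name>Riverside Campsite</Name>
    <Price currency="EUR">42.00</Price>
  </Property>
XML

doc = Nokogiri::XML(xml)
doc.at_xpath("//Property/Name").text         # => "Riverside Campsite" (text value)
doc.at_xpath("//Property")["id"]             # => "12345" (attribute)
doc.at_xpath("//Property/Price")["currency"] # => "EUR" (attribute)
doc.at_xpath("//Property/Price").text        # => "42.00" (text value)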
I discussed this idea with the client (who was also a developer) and he accepted it.
An example data flow, as it would appear on a sequence diagram: an application service calls a specific repository method, which calls the client, which in turn requests the data from the external service. After the response is received, the XML data is parsed and mapped into entities, which are then used by the application service.
Initially I wanted to use dry-struct to implement the typed data containers, as I was familiar with that gem. But the Sorbet gem was already present in the application, so I used Sorbet's T::Struct instead, to avoid introducing new libraries.
The advantage of converting the payload into typed entities is that the application immediately detects and raises errors when a corrupted payload is received. This error detection occurs at the application boundary, as soon as the corrupted data enters the system. Such incidents are rare, occurring for example when the API structures change (or, more frequently, when a developer makes incorrect assumptions about the data during implementation), but they are hard to detect and deal with. This approach increases developers' awareness of such issues when they occur and prevents incorrect data structures from propagating through the application, where they could cause seemingly random crashes as the application's assumptions about the data become outdated.
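A minimal sketch of such an entity, with illustrative props; the point is that a bad payload blows up at construction time, right at the boundary:

require "sorbet-runtime"

module Entities
  class Property < T::Struct
    const :id, Integer
    const :name, String
    const :property_type, String
  end
end

Entities::Property.new(id: 12345, name: "Riverside", property_type: "cabin")
# => builds fine

Entities::Property.new(id: "oops", name: "Riverside", property_type: "cabin")
# => raises TypeError right here, not somewhere deep in the application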
Data mapping
Also worth mentioning: the mapping logic used dictionary data pulled from the provider. As these values rarely change, it was reasonable to cache them.
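For example (values made up), a dictionary might translate the provider's numeric property-type codes into names usable by the application:

PROPERTY_TYPES = { 1 => "tent pitch", 2 => "cabin", 3 => "campervan pitch" } # cached dictionary

raw_type = 3                   # numeric code from the XML payload
PROPERTY_TYPES.fetch(raw_type) # => "campervan pitch"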
I designed the caching to happen at the repository level, in a way that could be turned on when needed for any repository:
repo_a = RepositoryA.new(...)
repo_a_with_caching = RepositoryA.new(...).with_caching
To achieve that I used a bit of Ruby metaprogramming inside the with_caching method, together with Ruby's prepend, to include the mixin with the caching logic. (I was very careful and commented extensively, as metaprogramming is dangerous.)
The included mixin would enhance the repository's logic with caching in the following manner: on a cache hit, the HTTP request, XML parsing, and building/mapping of the entities would all be skipped.
Redis was used as the caching backend (with Marshal.dump and Marshal.load on the T::Structs) and a TTL was set, so that the cache would expire automatically. I also provided methods to clear and regenerate the cache.
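A condensed sketch of the whole mechanism; the mixin, key scheme, and TTL value are illustrative, not the original code. The mixin is attached per instance, so repositories created without with_caching stay untouched.

require "redis"

module Caching
  TTL = 24 * 60 * 60 # seconds; the cache expires automatically

  def self.redis
    @redis ||= Redis.new
  end

  # Runs in front of the repository's method thanks to prepend: on a
  # cache hit the HTTP request, XML parsing, and entity mapping are
  # all skipped.
  def all
    key = "#{self.class.name}#all"
    if (cached = Caching.redis.get(key))
      Marshal.load(cached)
    else
      super.tap { |entities| Caching.redis.set(key, Marshal.dump(entities), ex: TTL) }
    end
  end
end

class RepositoryA
  # Prepends the caching mixin to this instance's singleton class, so
  # Caching#all is looked up before the original #all.
  def with_caching
    singleton_class.prepend(Caching)
    self
  end

  def all
    # ...HTTP request via the client, XML parsing, entity mapping...
  end
end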
In practice the caching was used for the dictionary endpoints, as the API docs suggested.
Collaboration
After a month, a new colleague joined me. I onboarded him by explaining the design choices I had made in the project. I proposed a collaboration method where we would primarily work independently but also connect daily for a pair-programming session, a practice we adhered to.
During these sessions, we shared our progress, divided the ongoing tasks, and discussed any challenges we were facing. After this, we continued to work individually for the rest of the day.
This approach resulted in architecturally consistent code, which we presented incrementally to the client every 2-4 weeks.
What was unique is that I presented the working code to the client's developers right from my terminal emulator, running the Rails console and evaluating the code I had developed. They provided feedback and got familiar with the API.
A project manager also coordinated our communication with the client, scheduling calls when needed and arranging the demo sessions.
Testing
Adding the extra repository-entity layer simplified testing in a specific way. I organized the tests into two categories. Repositories were unit tested, but the definition of the unit was the repository (as the entry point) together with the dependencies it used (entities, mappers, and the client itself). I allowed these tests to also go through the client and hit the external service (with the traffic recorded by the vcr gem and replayed via webmock). The repositories were fully tested against recorded payloads (which could be regenerated in case of API changes).
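A sketch of such a repository spec; the cassette name, repository, and entity are illustrative, and it assumes VCR's rspec metadata integration is configured:

require "rails_helper"

RSpec.describe Repositories::Properties do
  # The "unit" spans the repository, mappers, entities, and the client;
  # the HTTP traffic is replayed from a recorded cassette.
  it "builds typed entities from the recorded XML payload",
     vcr: { cassette_name: "repositories/properties" } do
    properties = described_class.new(client: Client.new).all

    expect(properties).to all(be_a(Entities::Property))
  end
end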
Using webmock/vcr is a common approach in Rails applications; however, due to its ease of use, in my opinion it is overused. Had I followed the common approach, I would also have used webmock/vcr in the application services' unit tests. I didn't, as it would make those tests too stiff, not to mention the speed of execution.
Instead, I designed the application unit tests around the idea of passing in the entities (as T::Structs), which could be built with just the right data to test specific business rules.
This resulted in a very flexible unit-test suite, as the higher-level unit tests were separated from the lower-level ones, a specific form of separation of concerns.
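A sketch of such a higher-level unit test: no HTTP, no cassettes; the entity is built in memory with just the data the rule under test needs. The service name and its interface are illustrative.

require "rails_helper"

RSpec.describe Bookings::CreateService do
  it "applies the business rules to the given property" do
    # Hand-built entity: exactly the data this business rule cares about.
    property = Entities::Property.new(id: 1, name: "Riverside", property_type: "cabin")

    result = described_class.new.call(property: property)

    expect(result).to be_success
  end
end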
Result
The client was happy with the result and started the soft-launch process with the provider, successfully: they got listed on the provider's page.