FLO HEALTH, INC.

Siarhei Zuyeu

Python was first used at Flo because we needed quick prototyping and product idea validation, so the very first backend architecture was built as a monolithic Python web service. Now we use it for ML-related projects, for building web services for our core product, and for a wide variety of utilities that support our platforms and integrations with external tools.

We focus on getting the most beneficial aspects of the language while balancing development with Scala. Core application services include cycle predictions in the health domain, user data management, chatbots, etc. Python helped us start development quickly and is still capable of handling high load.

Vladimir Kurlenya

It’s pretty common, when you read a success story about migrating from a monolith to microservices, to see that people had a clear idea of what they already had and what they wanted to attain overall; that they weighed all the pros and cons; and that out of the plethora of available candidates, they chose Kubernetes. Then they faced insurmountable problems, and with unbelievable, superhuman effort they resolved them and finally reached the kind of happy ending that happens “a year and a half into production.”

Was that how it was for us? Definitely not.

We didn’t spend a lot of time considering the idea of migrating to microservices like that. One day we just decided “why not give it a try?” There was no need to choose from the orchestrators at that point: Most of the dinosaurs were already on their last legs, except for Nomad, which was still showing some signs of life. Kubernetes became the de facto standard, and I had experience working with it. So we decided to pluck up our courage and try to run something non-critical in Kubernetes.

Considering that at that time all our infrastructure was in AWS, we also didn’t spend much time deciding to use EKS.

I’m struggling to remember who we chose as the guinea pig for the run in EKS — it might have been Jenkins. Or Prometheus. It’s difficult to say, but gradually all the new services were launched in EKS, and on the whole, everyone liked the approach.

The only thing that we didn’t understand was how to organize CI/CD.

At that time, we had the heady mix of Ansible/Terraform/Bitbucket, and we were not entirely satisfied with the results. Besides, we tried to practice delivery engineering and didn’t have a dedicated DevOps team, and there were many other factors too.

What did we need?

  • Unification — although we never required our teams to use a strictly defined stack, some consistency in CI/CD was desirable.
  • Decentralization — as mentioned earlier, we did not have a dedicated DevOps team, nor the desire (or need) to start one.
  • Relevance — not bleeding edge, but we wanted a tech stack that was on trend.
  • We also wanted the obvious things like speed, convenience, flexibility, etc.

It was safe to say that Helm was the standard for installing and running applications in EKS, so we didn’t use Ansible or Terraform for managing and templating Kubernetes objects, although that option was on the table. We used only Helm (despite lots of questions and complaints about it).

We also didn’t use Ansible or Terraform to manage Helm charts: it didn’t fit our desire for decentralization and wasn’t exactly convenient. Again, because we don’t have a DevOps team, any developer can deploy our services to EKS with the help of Helm, and we don’t need (or want) to be involved in that process. We therefore took the most controversial route: we wrote our own wrapper for Helm so that it would work like an automatic transmission, reducing the user's involvement in the go/no-go decision (in our case, to deploy or not to deploy). Later, we added a general Helm chart to this wrapper, so a developer needs only a couple of input values to deploy (a sketch of the idea follows the list):

  • What to deploy (docker image)
  • Where to deploy (dev, stage, prod, etc.)
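
To make this concrete, here is a minimal Python sketch of such a wrapper. It illustrates the approach only, not our actual tool: the chart path, release-naming convention, and environment names are hypothetical.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a Helm wrapper of the kind described above."""
import subprocess
import sys

ENVIRONMENTS = {"dev", "stage", "prod"}  # where to deploy


def deploy(image: str, env: str) -> None:
    """Render the shared chart with the two developer-supplied inputs."""
    if env not in ENVIRONMENTS:
        sys.exit(f"unknown environment: {env}")
    # Derive the release name from the image, e.g. "registry/cycle-api:1.2" -> "cycle-api".
    release = image.rsplit("/", 1)[-1].split(":")[0]
    subprocess.run(
        [
            "helm", "upgrade", "--install", release,
            "charts/generic-service",   # the general chart baked into the wrapper
            "--namespace", env,
            "--set", f"image={image}",
            "--wait",                   # Helm itself makes the go/no-go call
        ],
        check=True,
    )


if __name__ == "__main__":
    deploy(image=sys.argv[1], env=sys.argv[2])
```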

So, all in all, the deployment of a service runs from that service’s own repository, triggered by the same developer exactly when and how they need it. Our participation in this process was reduced to minimal consultation on some borderline cases and occasionally fixing bugs (where would we be without them?) in the wrapper.

And then we lived happily ever after. But our story isn’t about that at all.

In fact, I was asked to talk about why we use Kubernetes, not how it went. If I am honest (and as you can surely tell), I don’t have a clear answer. Maybe it would be better if I told you why we are continuing to use Kubernetes.

With Kubernetes, we were able to:

  • Better utilize EC2 instances
  • Obtain a better mix of decentralization (all services arrive in Kubernetes from authorized repositories, with no involvement from us) and centralization (we can always see when, how, and from where a service arrived, through logs, audits, and events)
  • Conveniently scale a cluster (we combine the cluster autoscaler with the horizontal pod autoscaler)
  • Conveniently debug the infrastructure (not forgetting that Kubernetes is only one level of abstraction over several others, and even in the worst-case scenario there is still a standard RHEL under the hood … well, at the very least we have that)
  • Get high levels of fault tolerance and self-healing for the infrastructure
  • Get a single (well, almost) and understandable CI/CD
  • Significantly shorten time to market (TTM)
  • Have an excellent reason to write this post

And although we didn’t get anything new, we like what we got.

Vladimir Burylov

iOS Developer at FLO HEALTH, INC.

Four years ago, we switched from Objective-C to Swift and never looked back. At the same time, we did a major overhaul of our architecture and coding practices. The codebase was practically divided into ‘before’ and ‘after’. We’ve kept interactions between these two parts as small as possible to minimize language interoperability issues. Objective-C code is continuously migrated away from the application monolith, either to a Swift re-implementation or to a dedicated ‘legacy’ module. Working with Swift for the last four years has allowed us to appreciate its strengths:

  • It has a strict but concise and flexible type system, ensuring type safety at all times without requiring too much effort from developers.
  • Uncompromising safety regarding value nullability: it is practically impossible to make mistakes in this regard.
  • It includes native support for the functional programming features employed by many modern languages. We relied heavily on things like immutable value types when building our architecture (both this and the previous point are illustrated in the sketch below).
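
A small snippet (illustrative, not taken from our codebase) showing the last two points in action: the compiler forces explicit handling of the "no value" case, and `let` makes value types immutable:

```swift
import Foundation

// A value type with immutable fields: copied, never shared.
struct CycleDay {
    let date: Date
    let symptoms: [String]
}

func describe(_ day: CycleDay?) -> String {
    // Optionals make absence explicit: we must unwrap before use,
    // so nil can never sneak into the happy path.
    guard let day = day else { return "no data yet" }
    return "\(day.date): \(day.symptoms.joined(separator: ", "))"
}
```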

But choosing Swift was not the only decision we made about iOS development. Our stack has evolved in several interesting ways over the years that we would like to share.

  • We chose Texture over UIKit. A tough decision indeed, since Texture is a large and complex third-party dependency and introduces the related risks. We initially used Texture for what it does best: it allowed us to develop complex list layouts quickly while maintaining flawless runtime performance. Over time we came to enjoy other aspects of it, such as the power of its flexbox-like layout engine, so we decided to widen its area of use to the whole app.
  • We chose RxSwift over Promises. Any modern application utilizes asynchronous execution heavily. Tools like Promises allow describing asynchronous flows in a more convenient and natural way. That was not enough in our case: we needed something to bind data to the UI, so we used RxSwift for that. As a more extensive yet simple tool, it covered both our needs for bindings and any asynchronous code, so we switched to it entirely.
  • We chose Redux over MVVM. When we set out, MVVM was the most popular architecture, and not without reason. But at a certain scale, our view models became too complicated to comprehend. We had to decrease this complexity somehow, and splitting large view models into smaller ones didn’t help much, since the emerging dependencies between them created more complexity in return. We also wanted better ways to manage how the state changes and to test this logic quickly, without too many mocks. A single-flow architecture like Redux is still niche in the iOS world but is much more prevalent in web frontends, and it covers all our needs (a minimal sketch follows this list). It took us some time and several iterations to adapt it properly to mobile specifics, but now we are pretty happy with the results.
  • We chose Swift Package Manager (SPM) over CocoaPods. Like the Swift language itself, it emerged in a somewhat limited state. We watched patiently how it evolved until it received all the features that we needed, like the ability to have binary packages. With SPM, we can fully embrace modularization in our app, as it allows us to create new internal modules with ease and support any number of them.
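
To show why the Redux-style approach is easier to test, here is a deliberately tiny sketch of a single-flow core in Swift. It illustrates the pattern only; our actual implementation is more involved:

```swift
// All state changes flow through one pure function.
enum CounterAction {
    case increment
    case decrement
}

struct AppState {
    var count = 0
}

// A pure reducer is trivial to unit-test: pass in a state and an action,
// assert on the returned state. No mocks required.
func reduce(_ state: AppState, _ action: CounterAction) -> AppState {
    var newState = state
    switch action {
    case .increment: newState.count += 1
    case .decrement: newState.count -= 1
    }
    return newState
}

final class Store {
    private(set) var state = AppState()
    func dispatch(_ action: CounterAction) {
        state = reduce(state, action)   // the single place where state mutates
    }
}
```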

Vladislav Ermolin

Android Engineer at Flo

Back in 2015, when Flo began, we chose Android SDK as the basis for our Android application.

Nowadays, we could choose from plenty of cross-platform SDK options, which would probably have saved us resources at the beginning of the product’s development life cycle. However, engineering resource utilization isn’t the only consideration when making decisions. If you want to create the best women’s health solution on the market, you need to care about performance and seamless integration with operating system features too. Modern cross-platform SDKs have only just begun to approach the native option in that regard. The Kotlin Multiplatform Project is a good example of such a framework. Unfortunately, because it hasn't been around for long, it still has plenty of issues, so it currently doesn't fit our needs; we might consider it in the future, though. All in all, I believe that we made the right choice.

Over time, Android engineering best practices, tools, and the operating system itself evolved, giving developers multiple ways to implement the same features more effectively, both in terms of engineering team performance and device resource utilization. Our team evolved as well: We’ve come a long way from a single Android developer to a dozen feature teams that need to work on the same codebase simultaneously without stepping on each other's toes. We began caring more about cycle time because one can’t successfully compete by delivering value slowly.

For our dev team, these changes translated into a request to update the codebase so that we could deliver value faster and adopt new Android features sooner, while raising the overall level of quality at the same time.

We began with the modularization of our Android application. Using the power of the Gradle build system, we split our application into 70+ shared core modules and 30+ independent feature modules. Such a huge step required a revision of the application’s architecture. One could say that we moved to clean architecture; however, I would say that we use an architecture driven by common software engineering principles like SOLID, DRY, KISS, etc. On the presentation layer, we switched from the MVP to the MVVM pattern. This pattern, powered by the Jetpack Lifecycle components, simplifies Android component lifecycle management and increases the reusability of the code (a small sketch follows).
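
As a sketch of the pattern (hypothetical names, not our code): a Lifecycle-powered ViewModel survives configuration changes and exposes read-only observable state to the view.

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel

// Hypothetical repository interface, defined here only so the sketch is self-contained.
interface CycleRepository {
    fun describe(day: Int): String
}

class CycleViewModel(private val repository: CycleRepository) : ViewModel() {

    private val _dayInfo = MutableLiveData<String>()
    val dayInfo: LiveData<String> = _dayInfo   // the view observes, never mutates

    // The ViewModel outlives configuration changes, so a rotated Activity
    // simply re-subscribes to `dayInfo` and receives the latest value again.
    fun onDaySelected(day: Int) {
        _dayInfo.value = repository.describe(day)
    }
}
```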

Supporting such a setup would be barely possible without a dependency injection (DI) implementation. We settled on Dagger 2. This DI framework offers compile-time graph validation, multibinding, and scoping. Apart from that, it offers two ways to wire individual components into a single graph: subcomponents and component dependencies, each good for its own purpose. At Flo, we prefer component dependencies, as they better isolate the features and positively impact the build speed, but we use subcomponents closer to the leaves of the dependency graph as well (see the sketch below).
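
A sketch of the component-dependencies style with hypothetical names: the feature graph sees only what the core graph explicitly exposes.

```kotlin
import dagger.Component
import dagger.Module
import dagger.Provides
import javax.inject.Inject

interface Analytics {
    fun log(event: String)
}

@Module
object CoreModule {
    @Provides
    fun analytics(): Analytics = object : Analytics {
        override fun log(event: String) = println(event)
    }
}

@Component(modules = [CoreModule::class])
interface CoreComponent {
    fun analytics(): Analytics   // explicitly exported; everything else stays hidden
}

class FeatureScreen {
    @Inject lateinit var analytics: Analytics
}

// The feature component depends on CoreComponent's interface, not on its modules,
// which isolates the feature and helps keep builds incremental.
@Component(dependencies = [CoreComponent::class])
interface FeatureComponent {
    fun inject(screen: FeatureScreen)
}
```

Wiring then happens at the edges of the graph, e.g. DaggerFeatureComponent.builder().coreComponent(DaggerCoreComponent.create()).build().inject(screen).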

Though we still have Java code in the project, Kotlin has become our main programming language. Compared to Java, it has multiple advantages:

  • Improved type system, which, for example, makes it possible to avoid the “billion-dollar mistake” in the majority of cases
  • Rich and mature standard library, which provides solutions for many typical tasks out of the box and minimizes the need for extra utilities
  • Advanced features to better fit the open-closed principle (for example, extension functions and removal of checked exceptions let us improve the extendability of solutions)
  • The syntax sugar, which simply lets you write code faster; it’s hard to imagine modern Android development without data classes, sealed classes, delegates, etc. (several of these are shown in the snippet below)

We attempt to use Kotlin wherever possible. Our build scripts are written in it, and we are also migrating the good old Bash scripts to KScript.
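
A short illustrative snippet (hypothetical types, not from our codebase) touching several of the points above: null safety, a data class, and a sealed class with an exhaustive `when`:

```kotlin
data class User(val id: Long, val name: String)   // equals/hashCode/toString/copy for free

sealed class SyncResult {                          // closed hierarchy: `when` is exhaustive
    data class Success(val user: User) : SyncResult()
    data class Failure(val reason: String) : SyncResult()
}

fun describe(result: SyncResult?): String {
    // The type system forces us to handle null before dereferencing anything.
    if (result == null) return "not synced yet"
    return when (result) {
        is SyncResult.Success -> "synced ${result.user.name}"
        is SyncResult.Failure -> "failed: ${result.reason}"
    }
}
```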

Another huge step in Kotlin adoption is the migration from RxJava to Kotlin coroutines. RxJava is a superb framework for event-based programming, but it is not the best choice for plain asynchronous code. In that regard, Kotlin coroutines are a much wiser choice, offering more effective resource utilization, more readable error stack traces, finer control over the execution scope, and syntax that looks almost identical to synchronous code. Even in its main area of usage, event-based programming, RxJava has begun to lose ground. Being written in Java, it does not support Kotlin’s type system well. Besides, many of its operators are synchronous by design, which can limit developers. Built on top of Kotlin coroutines, Flow addresses both of these drawbacks. Even though it is a much younger framework, we found that it fits our needs perfectly (a short sketch follows).
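
A small sketch (illustrative, not our code) of both styles: a suspend function that reads like synchronous code, and a cold Flow standing in for what used to be an RxJava stream.

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Reads top to bottom like synchronous code: no callbacks, no Single<Int>.
suspend fun fetchCycleLength(): Int {
    delay(100)   // stands in for a network call
    return 28
}

// A cold event stream with Kotlin-native types, where RxJava would use an Observable.
fun cycleUpdates(): Flow<Int> = flow {
    repeat(3) { day ->
        delay(50)
        emit(day)
    }
}

fun main() = runBlocking {
    println(fetchCycleLength())
    cycleUpdates().map { "update $it" }.collect { println(it) }
}
```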

Perhaps the most noticeable sign that the above changes were not taken in vain is that you can now use Flo on your smartwatch powered by Android Wear. This is the second Flo app for the Android platform, and it effectively reuses the codebase of the mobile app. One of the core advantages of the Flo Watch app lies in Wear Health Services. It allows us to effectively and securely collect health-related data from the user’s device, if a user so chooses, and utilize it to improve the precision of cycle estimation.

But we won't stop chasing innovation!

Even though we migrated to ViewBinding, enjoying the extra type safety and the reduced amount of boilerplate code, we couldn’t pass by the Jetpack Compose framework, which is going to be the next big thing both for Flo and for the whole mobile industry. It lets us use the power of Kotlin to construct UI, reduces code duplication, increases the reusability of UI components, and makes it possible to build complex view hierarchies with a smaller performance penalty. On the other hand, it requires changing the architecture approach once again. But that has never stopped us. So far, we’ve integrated it into one feature module (a tiny example of the style follows) and look forward to using it as the main UI framework in all the upcoming ones.
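
For flavor, a tiny composable (hypothetical, not from the app): the UI becomes a plain Kotlin function of its state, which is what drives the reusability gains mentioned above.

```kotlin
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// Recomposes automatically whenever `day` changes upstream.
@Composable
fun CycleDayLabel(day: Int) {
    Text(text = "Day $day of your cycle")
}
```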

Finally, what about recent Android features support? Well, we continuously improve the app in that sense. Like most teams, we rely on different Jetpack, Firebase, and Play Services libraries to achieve that goal. We use them to move work to the background, implement push notifications, integrate billing, and many other little things, all of which improve the overall UX or let us reach out to users more effectively. However, we also invest in first-party tooling. In an effort to ensure secure and transparent management of user data, we implemented our own solutions for A/B testing, remote configuration management, logging, and analytics.

What about quality? Developers are responsible for the quality of the solutions they create. To ensure it, we use modern tools and approaches:

  • We chose Detekt and Android Lint for static code analysis. These frameworks prevent many issues from reaching production by analyzing the codebase at compile time. They are capable of finding the most common problems in Kotlin and Android-related code and ensure that the whole team follows the same code style. When these frameworks do not provide the necessary checks out of the box, we implement them ourselves.
  • The above two frameworks are used both locally and in the continuous integration pipelines. In the latter, however, we additionally use SonarCloud, which provides extended complexity, security, and potential-bug checks that run in the cloud.
  • To ensure that the code meets the requirements, we use multiple layers of automated testing. Our test pyramid includes unit tests, which run on the JUnit5 platform, and E2E tests powered by the Espresso framework. Together, these two approaches allow developers to get feedback fast while ensuring that features work as expected end-to-end (a unit-test sketch follows this list).
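
As a taste of the unit-test layer, a minimal JUnit5 test over a pure function (names are hypothetical, not from our suite):

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// A pure function is the cheapest thing to test: no Android framework, no mocks.
internal fun nextCycleDay(day: Int, cycleLength: Int = 28): Int =
    day % cycleLength + 1

class CycleMathTest {
    @Test
    fun `day wraps around at cycle length`() {
        assertEquals(1, nextCycleDay(28))
        assertEquals(2, nextCycleDay(1))
    }
}
```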