Julien DeFrance

Principal Software Engineer at Tophatter

Back in 2014, I was given an opportunity to re-architect SmartZip Analytics' platform and flagship product: SmartTargeting. This is a SaaS product that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who is most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt, and I knew things had to change radically. The first enabler was to embrace the cloud and go with AWS, so we would stop reinventing the wheel and build around managed, scalable services.

For the SaaS product, we kept working with Rails, as that was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend using Rails API, so we would get independently scalable microservices from then on.
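For illustration, here is roughly what one of those slim, JSON-only controllers looks like with Rails API; the module, controller, and model names are hypothetical, not our actual code:

```ruby
# Hypothetical example of a Rails API controller: no views, no cookies/sessions,
# just JSON that the decoupled front-end application consumes.
module Api
  module V1
    class ProspectsController < ActionController::API
      # GET /api/v1/prospects?territory_id=42
      def index
        prospects = Prospect.where(territory_id: params[:territory_id])
        render json: prospects, status: :ok
      end
    end
  end
end
```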

Our various applications could now be deployed using AWS Elastic Beanstalk, so we would no longer waste effort writing time-consuming Capistrano deployment scripts, for instance. We combined it with Docker so each application would run within its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched the pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrated to Amazon Aurora (MySQL-compatible) on RDS when it was released. Once again, this is where you want a managed service your cloud provider handles for you.
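As a point of reference, swapping local disk writes for S3 is only a few lines with the AWS SDK for Ruby; the bucket, key, and file names below are placeholders:

```ruby
require 'aws-sdk-s3' # gem 'aws-sdk-s3'

# Placeholder bucket/key names -- a sketch of replacing local storage with S3.
s3 = Aws::S3::Resource.new(region: 'us-east-1')
object = s3.bucket('my-app-assets').object('reports/2014/neighborhood-report.pdf')

# Upload a local file, then hand out a short-lived URL instead of a filesystem path.
object.upload_file('/tmp/neighborhood-report.pdf')
puts object.presigned_url(:get, expires_in: 3600)
```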

Future improvements / technology decisions included:

- Caching: Amazon ElastiCache / Memcached (a config sketch follows this list)
- CDN: Amazon CloudFront
- Systems integration: Segment / Zapier
- Data warehousing: Amazon Redshift
- BI: Amazon QuickSight / Superset
- Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
- Monitoring: New Relic
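For the caching piece, pointing Rails at an ElastiCache/Memcached endpoint is mostly configuration; a minimal sketch, assuming the dalli gem and a made-up cluster endpoint:

```ruby
# config/environments/production.rb -- minimal sketch; the endpoint below is a
# made-up ElastiCache/Memcached cluster address, and this assumes gem 'dalli'.
Rails.application.configure do
  config.cache_store = :mem_cache_store,
                       'my-cluster.abc123.cfg.use1.cache.amazonaws.com',
                       { namespace: 'my_app', expires_in: 1.hour, compress: true }
end
```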

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they would be fully scalable.
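Our functions were written in Node.js, but since Lambda has since added a Ruby runtime, here is a rough sketch of the pattern in Ruby for consistency with the rest of this post; the function, table, and attribute names are made up:

```ruby
# handler.rb -- illustrative only. Our original functions were Node.js (Ruby
# support on Lambda came later); the table and attribute names here are made up.
require 'json'
require 'aws-sdk-dynamodb' # available in the Lambda Ruby runtime

DYNAMODB = Aws::DynamoDB::Client.new

def handler(event:, context:)
  # Fetch a single item by partition key, taken from the API Gateway path parameter.
  item = DYNAMODB.get_item(
    table_name: 'prospect_scores',
    key: { 'prospect_id' => event.dig('pathParameters', 'id') }
  ).item

  { statusCode: item ? 200 : 404, body: JSON.generate(item || { error: 'not found' }) }
end
```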

16 upvotes·1.7M views

As an Engineering Manager & Director at SmartZip, I had a mix of front-end, back-end, and #mobile engineers reporting to me.

Sprint after sprint, I noticed some inefficiencies on the MobileDev side: people working multiple sprints in a row on their Xcode / Objective-C codebase while others worked in Android Studio. Afterwards, QA & Product had to ensure both applications stayed in sync from a UI/UX standpoint, creating additional work that also happened to be extremely costly.

With our resources so limited, my role was to stop this bleeding, keep my team productive, and keep their time valuable.

After some analysis, discussions, proofs of concept, etc., we decided to move to a single codebase using React Native so our velocity would increase.

After some initial investment, our assumptions were confirmed and we indeed started shipping features faster than ever before. Our engineers also found a way to perform this migration incrementally, so the platform-specific codebases didn't have to be rewritten all at once, but only gradually and at will.

Feedback around React Native was very positive, and I doubt anyone, for the kind of application we had, would want to go back to two or more codebases. Our application was still as native as it gets, and no feature or device capability was compromised.

8 upvotes·8 comments·132.3K views
Shared insights on Bootstrap, Less, and Sass

Which #GridFramework to use? My team and I settled on Bootstrap!

On a related note, as far as stylesheets go, we had to choose between #CSS, #SCSS, #Sass, and Less. We finally opted for Sass.

As syntactically awesome as the name suggests.

6 upvotes·45.7K views
Shared insights on Ruby, Rails, Gemfury, and Git

Working with Ruby on Rails also means working with #RubyGems. Most of the time, the community has gems you can use and list in your #Gemfile. But sometimes you also need to come up with your own proprietary ones to encapsulate and reuse some of your business logic.

It is critical that such repositories and their source code remain private and secure. Even though your code shouldn't contain any credentials, this still applies to your gems' distribution channels. Except for the parts you've willingly open-sourced, you don't want your intellectual property stolen.

RubyGems.org therefore not being an option for this use case, I faced two alternatives: accepting the overhead of maintaining my own gem server, or finding a service that would do it for me.

Obviously, the latter was the way to go:

I chose Gemfury for its convenience, pricing model, and reliability.

Gemfury also allowed me and my team to publish gems via different methods: file upload, SSH, HTTPS, or something as simple as a Git push.
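On the consumer side, pulling one of those private gems then looks much like pulling a public one; a sketch, with the organization URL and gem name as placeholders:

```ruby
# Gemfile -- sketch only; the Gemfury source URL and gem name are placeholders.
source 'https://rubygems.org'

gem 'rails'

# Proprietary gems served from a private Gemfury repository.
source 'https://gem.fury.io/my-org/' do
  gem 'my_company-core'
end
```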

Gemfury (gemfury.com)
6 upvotes·8.8K views
Recommends Algolia

In the early days, people would set up their own Elasticsearch clusters and have trouble maintaining them and keeping them secure; it also required a lot of manual work to get the data in, keep it current, query it, etc. A couple of years ago, AWS came up with Amazon Elasticsearch Service, which reduced some of this overhead. My previous teams and I went through all of these different stages, and ultimately, discovering Algolia a couple of years ago solved so many of our issues, kept the cost low, dramatically reduced implementation and therefore GTM timelines, and freed up so much of our engineering bandwidth.

They have SDKs for most common languages and platforms, and you can achieve a complete solution in a matter of hours, if not minutes. Value your own and your team's engineering time, and factor that in when comparing costs. There should also be entry-level tiers you can get a proof of concept rolled out with.
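To give a sense of that speed in Ruby: with the classic algoliasearch gem, indexing and searching records takes only a few lines (the credentials, index name, and record fields below are placeholders, and the exact client API varies between gem versions):

```ruby
require 'algoliasearch' # classic 'algoliasearch' gem; the newer 'algolia' gem differs slightly

# Placeholder credentials, index name, and records -- illustration only.
Algolia.init(application_id: 'YourApplicationID', api_key: 'YourAdminAPIKey')

index = Algolia::Index.new('listings')

# Push a couple of records, then query them: that's essentially the whole loop.
index.save_objects([
  { objectID: '101', address: '123 Main St', city: 'Pleasanton', price: 850_000 },
  { objectID: '102', address: '9 Oak Ave',   city: 'Dublin',     price: 1_200_000 }
])

results = index.search('main st')
puts results['hits']
```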

How Algolia Works | Getting Started | Guide | Algolia Documentation (algolia.com)
6 upvotes·23 views
Shared insights on Swagger UI and Ruby

Use case: Keeping all API endpoints documented.

Swagger UI is slick. Not only are the specifications of all input/output parameters detailed there, but the interface is also interactive and allows sample requests to be sent to the actual endpoints.

With the help of Ruby gems such as https://github.com/richhollis/swagger-docs, the JSON files can be automatically generated for you for every controller you want to appear on the documentation page.
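The gem's DSL lives right next to the controller actions, which keeps the docs close to the code; a rough sketch of that controller DSL (the controller, params, and descriptions are made up, and the exact syntax may differ between gem versions):

```ruby
# Illustrative sketch of the swagger-docs controller DSL; names and params are
# made up, and the exact syntax may vary by gem version.
class Api::V1::ProspectsController < ApplicationController
  swagger_controller :prospects, 'Prospect management'

  swagger_api :index do
    summary 'Lists prospects for a given territory'
    param :query, :territory_id, :integer, :required, 'Territory identifier'
    response :ok, 'Success'
    response :unauthorized
  end

  def index
    # ...
  end
end
```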

5 upvotes·33.6K views

Hi Vibhanshu,

When it comes to serving a static website or single-page application, S3, or a combination of S3 and CloudFront, is a great fit (a small sketch follows). There is no server you need to manage, and S3 is as resilient and scalable as it gets. Moreover, it's certainly the most cost-effective solution you'll be able to come up with. Is your application fully static, or does it come with compute needs?

- Regardless, I would highly discourage you from running anything directly on a custom EC2 instance of your own, as this will come with high maintenance costs, from the get-go but also over time.
- Elastic Beanstalk (not to be mistaken for EBS, whose acronym stands for Elastic Block Store), on the contrary, is a rather powerful way to manage your applications and environments. Either via the CLI or through the console, you can easily configure your environment variables, load balancers, certificates, the events/thresholds that cause your instance counts to scale up/down, streaming of logs...
- Elastic Beanstalk supports a couple of languages by default, but these don't always allow you to run the version/dependency you need. For the best decoupling from the underlying instances, you might want to look at leveraging Docker (Elastic Beanstalk has Docker-compatible AMIs), so you stay in control of the stack your application runs on.
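Here's the kind of one-time setup the S3 route involves, sketched with the AWS SDK for Ruby (the bucket name is a placeholder; in practice you'd also front the bucket with CloudFront and an ACM certificate for TLS):

```ruby
require 'aws-sdk-s3' # gem 'aws-sdk-s3'

# Placeholder bucket name -- rough sketch of enabling static website hosting on S3.
s3 = Aws::S3::Client.new(region: 'us-east-1')

s3.put_bucket_website(
  bucket: 'my-static-site',
  website_configuration: {
    index_document: { suffix: 'index.html' },
    error_document: { key: 'error.html' }
  }
)
```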

5 upvotes·3 comments·72 views

Nexmo vs Twilio?

Back in the early days at SmartZip Analytics, that evaluation had, for whatever reason, been made by Product Management. Some developers might have been consulted, but we hadn't made the final call, and some key engineering aspects of it were omitted.

When revamping the platform, I made sure to flip the decision process to how it should be: Business provided input, but Engineering led the way and had the final say on all implementation matters. My engineers and I decided to re-evaluate the criteria and the vendor selection. Not only did we need SMS support, but we were now also thinking about #VoiceAndSms support as the use cases evolved.

Also, from an engineering standpoint, SDKs mattered. Nexmo didn't have any; Twilio did. No one ever wants to rebuild from scratch the integration layers vendors should naturally come up with and provide to their customers.
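That's essentially what an SDK buys you; with twilio-ruby, sending an SMS is a handful of lines (the credentials and phone numbers below are placeholders):

```ruby
require 'twilio-ruby' # gem 'twilio-ruby'

# Placeholder credentials and phone numbers -- illustration only.
client = Twilio::REST::Client.new('ACCOUNT_SID', 'AUTH_TOKEN')

client.messages.create(
  from: '+15005550006',
  to:   '+15551234567',
  body: 'Your neighborhood market report is ready.'
)
```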

Twilio won on all fronts, including costs and implementation timelines. No one even noticed the vendor switch.

Many years later, Twilio has demonstrated its position as a leader by holding conferences in the Bay Area and announcing features like Twilio Functions. It even acquired Authy, which we also used for 2FA. Twilio's growth has been amazing, and its recent acquisition of SendGrid continues to show it.

3 upvotes·365.8K views
Shared insights on New Relic and Datadog

Which #APM / #Infrastructure #Monitoring solution to use?

The two major players in that space are New Relic and Datadog. Both are very comparable in terms of pricing and capabilities (Datadog recently introduced APM as well).

In our use case, keeping the number of tools minimal was a major selection criterion.

As we were already using #NewRelic, my recommendation was to move to the Pro tier so we would benefit from advanced APM features, synthetics, and mobile & infrastructure monitoring, and gain a 360-degree view of our infrastructure.

A few things I liked about New Relic:
- Mobile app and push notifications
- Ease of setting up new alerts
- Being notified via email and push notifications without requiring another third-party alerting solution
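Beyond what the agent picks up automatically, the Ruby agent also lets you trace your own hot paths; a minimal sketch, assuming the newrelic_rpm gem (the class and metric names are made up):

```ruby
require 'new_relic/agent/method_tracer'

# Hypothetical example of custom method tracing with newrelic_rpm.
class ReportGenerator
  include ::NewRelic::Agent::MethodTracer

  def generate(territory_id)
    # ...expensive work worth surfacing in APM transaction traces...
  end

  add_method_tracer :generate, 'Custom/ReportGenerator/generate'
end
```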

I've certainly seen use cases where New Relic can also be used as an input data source for Datadog. Therefore, depending on your use case, it might also be worth evaluating joint usage of both solutions.

3 upvotes·205.7K views

At some point, I was looking for a way to load-test some of my environments (staging initially, but ultimately production as well) to ensure our ability to scale when facing sudden bursts of requests, and to understand how that would impact our load balancers, our instances, our database server, etc.

I came across Loader.io, a service by SendGrid Labs, which allowed my team and me not only to load-test our API endpoints but also to simulate actual user traffic on our website.
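One practical note: Loader.io verifies that you own the target domain by having you serve back a token it assigns you; in a Rails app that can be a tiny route rather than a static file. A sketch, with a made-up token:

```ruby
# config/routes.rb -- sketch only; 'loaderio-abcdef123456' stands in for the
# verification token Loader.io assigns to your account.
Rails.application.routes.draw do
  get '/loaderio-abcdef123456', to: proc { |_env|
    [200, { 'Content-Type' => 'text/plain' }, ['loaderio-abcdef123456']]
  }
end
```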

#LoadAndPerformanceTesting

3 upvotes·42.5K views