Rails

Decision about JavaScript, Rails, Apollo, React

Zach Holman

Oof. I have truly hated JavaScript for a long time. Like, for over twenty years now. Like, since the Clinton administration. It's always been a nightmare to deal with all of the aspects of that silly language.

But wowza, things have changed. Tooling is just way, way better. I'm primarily web-oriented, and using React and Apollo together the past few years really opened my eyes to building rich apps. And I deeply apologize for using the phrase rich apps; I don't think I've ever said such Enterprisey words before.

But yeah, things are different now. I still love Rails, and still use it for a lot of apps I build. But it's that silly rich apps phrase that's the problem. Users have way more comprehensive expectations than they did even five years ago, and the JS community does a good job at building tools and tech that tackle the problems of making heavy, complicated UI and frontend work.

Obviously there's a lot happening here, so just saying "JavaScript isn't terrible" might encompass a huge number of libraries and frameworks. But if you're like me, yeah, give things another shot: I'm somehow not hating on JavaScript anymore and... gulp... I kinda love it.

20 upvotes·4 comments·119.8K views

Decision at Shopify about GitHub, Rails

kirs, Production Engineer at Shopify

The core Shopify app has remained a Rails monolith, but we also have hundreds of other Rails apps across the organization. These are not microservices, but domain-specific apps: Shipping (talks with various shipping providers), Identity (single sign-on across all Shopify stores), and App Store, to name a few. Managing hundreds of apps and keeping them up to date with security updates can be tough, so we've developed ServicesDB, an internal app that keeps track of all production services and helps developers make sure they don't miss anything important.

ServicesDB keeps a checklist for each app: ownership, uptime, logs, on-call rotation, exception reporting, and gem security updates. If there are problems with any of those, ServicesDB opens a GitHub issue and pings owners of the app to ask them to address it. ServicesDB also makes it easy to query the infrastructure and answer questions like, “How many apps are on Rails 4.2? How many apps are using an outdated version of gem X? Which apps are calling this service?”.
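
ServicesDB is internal, so its details aren't public; here's a hypothetical sketch of what one such check might look like, assuming the octokit gem and an illustrative App structure (not Shopify's actual schema):

```ruby
# Hypothetical sketch of a ServicesDB-style check: flag apps on an outdated
# Rails version and open a GitHub issue for the owning team. The App struct,
# repo naming, and version floor are illustrative assumptions.
require "octokit"

App = Struct.new(:name, :repo, :rails_version, :owners)

MINIMUM_RAILS = Gem::Version.new("5.2")

def check_rails_version(app, github)
  return if Gem::Version.new(app.rails_version) >= MINIMUM_RAILS

  github.create_issue(
    app.repo,
    "Upgrade #{app.name} to Rails #{MINIMUM_RAILS} or later",
    "This app is on Rails #{app.rails_version}. " \
    "cc #{app.owners.map { |o| "@#{o}" }.join(' ')}"
  )
end

github = Octokit::Client.new(access_token: ENV["GITHUB_TOKEN"])
check_rails_version(App.new("shipping", "example-org/shipping", "4.2.11", ["ops-team"]), github)
```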

19 upvotes·33.2K views

Decision at Algolia about Ember.js, Rails, Discourse, Gitter, Discord, Algolia

dzello, Developer Advocate at DeveloperMode

Shortly after I joined Algolia as a developer advocate, I knew I wanted to establish a place for the community to congregate and share their projects, questions and advice. There are a ton of platforms out there that can be used to host communities, and they tend to fall into two categories - real-time sync (like chat) and async (like forums). Because the community was already large, I felt that a chat platform like Discord or Gitter might be overwhelming and opted for a forum-like solution instead (which would also create content that's searchable from Google).

I looked at paid, closed-source options like AnswerHub and ForumBee and old-school solutions like phpBB and vBulletin, but none seemed to offer the power, flexibility, and developer-friendliness of Discourse. Discourse is open source, written in Rails with Ember.js on the front-end, which made me confident I could modify it to meet our exact needs. Discourse's own forum is also very active, which reassured me that I could get help if I needed it.

It took about a month to get Discourse up and running and to tie authentication to algolia.com via the SSO plugin. Adding additional plugins for moderation or look-and-feel customization was fairly straightforward, and I even created a plugin to make the forum content searchable with Algolia. To stay on top of answering questions and moderation, we used the Discourse API to publish new messages into our Slack. All in all, I would say we were happy with Discourse; the only caveat is that it's very helpful to have general technical knowledge as well as Rails knowledge in order to get the most out of it.
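
As an illustration of that last integration, here's a minimal sketch (not our actual code) of polling the Discourse API for recent posts and forwarding them to a Slack incoming webhook; the forum URL, credentials, and webhook URL are placeholders:

```ruby
# Poll Discourse's /posts.json endpoint and push the newest posts to Slack.
# FORUM_URL, the API credentials, and SLACK_WEBHOOK_URL are placeholders.
require "net/http"
require "json"
require "uri"

FORUM_URL = "https://discourse.example.com"
SLACK_WEBHOOK = URI(ENV.fetch("SLACK_WEBHOOK_URL"))

def latest_posts
  uri = URI("#{FORUM_URL}/posts.json")
  req = Net::HTTP::Get.new(uri)
  req["Api-Key"] = ENV.fetch("DISCOURSE_API_KEY")
  req["Api-Username"] = "system"
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body).fetch("latest_posts", [])
end

latest_posts.first(5).each do |post|
  message = { text: "New forum post by #{post['username']}: #{FORUM_URL}/t/#{post['topic_id']}" }
  Net::HTTP.post(SLACK_WEBHOOK, message.to_json, "Content-Type" => "application/json")
end
```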

19 upvotes·2 comments·27.8K views

Decision at SmartZip about Amazon DynamoDB, Ruby, Node.js, AWS Lambda, New Relic, Amazon Elasticsearch Service, Elasticsearch, Superset, Amazon Quicksight, Amazon Redshift, Zapier, Segment, Amazon CloudFront, Memcached, Amazon ElastiCache, Amazon RDS for Aurora, MySQL, Amazon RDS, Amazon S3, Docker, Capistrano, AWS Elastic Beanstalk, Rails API, Rails, Algolia

juliendefrance, Full Stack Engineering Manager at ValiMail

Back in 2014, I was given an opportunity to re-architect SmartZip's analytics platform and flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood or territory, find out (thanks to predictive analytics) who's most likely to list or sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and knew things had to change radically. The first enabler was moving to the cloud with AWS, so we would stop reinventing the wheel and build around managed, scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. However, we broke up the monolith and decoupled the front-end application from the backend using Rails API, so we'd have independently scalable microservices from then on.
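
For context on what that decoupling looks like in code, here's a minimal sketch of Rails API mode; the controller and model names are illustrative, not our actual code:

```ruby
# An app generated with `rails new smarttargeting-api --api` strips out views,
# cookies, and asset middleware; controllers inherit from ActionController::API
# and only serve JSON to the decoupled front-end.
class ApplicationController < ActionController::API
end

class ProspectsController < ApplicationController
  # GET /prospects?territory_id=123
  def index
    prospects = Prospect.where(territory_id: params[:territory_id])
    render json: prospects
  end
end
```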

Our various applications could now be deployed with AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts. We combined this with Docker, so each application would run in its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched the pre-existing local and network storage people used to deal with in our legacy systems. On the database side, we started with Amazon RDS / MySQL and ultimately migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, the goal was a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems integration: Segment / Zapier
  • Data warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was getting into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
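
AWS has since added an official Ruby runtime to Lambda, so a minimal sketch of that fully serverless pattern can now be written in Ruby; the table name and event shape here are assumptions for illustration:

```ruby
# A Lambda handler that stays fully serverless by persisting to DynamoDB.
# The "events" table and the event payload shape are assumed, not real config.
require "aws-sdk-dynamodb"
require "time"

DYNAMO = Aws::DynamoDB::Client.new

def handler(event:, context:)
  DYNAMO.put_item(
    table_name: "events",
    item: {
      "id"         => context.aws_request_id,
      "payload"    => event.to_json,
      "created_at" => Time.now.utc.iso8601
    }
  )
  { statusCode: 200 }
end
```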

16 upvotes·145.3K views

Decision at StackShare about Redis, CircleCI, Webpack, Amazon CloudFront, Amazon S3, GitHub, Heroku, Rails, Node.js, Apollo, Glamorous, React, FrontEndRepoSplit, Microservices, SSR, StackDecisionsLaunch

ruswerner, Lead Engineer at StackShare

StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independently of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together, all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split our front-end application out into a separate repo on GitHub. The driving factor in this decision was mostly the limitations imposed by Heroku, specifically that processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.

Related to this, we needed a way to "deploy" our frontend changes to various server environments without building and releasing the entire Ruby application. We built a hybrid Amazon S3 / Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant: our frontend team now moves independently of our backend team, our build and release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
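
A minimal sketch of that Redis handoff (the key naming scheme and CDN host are assumptions, not our actual values):

```ruby
# CI writes the latest bundle filenames to Redis after uploading to S3; the
# Rails app reads them at render time to build CDN URLs. Key names and the
# CloudFront host below are illustrative assumptions.
module WebpackBundleHelper
  CDN_HOST = "https://d1234.cloudfront.net"

  def javascript_bundle_tag(name)
    # e.g. CI sets "webpack:feed" => "feed-8f3a2c.js" after each upload
    filename = redis.get("webpack:#{name}")
    raise "No bundle registered for #{name}" unless filename

    javascript_include_tag("#{CDN_HOST}/bundles/#{filename}")
  end

  private

  def redis
    @redis ||= Redis.new(url: ENV["REDIS_URL"])
  end
end
```

Because the bundle filename is fingerprinted, flipping a single Redis key atomically switches every subsequent page render to the new assets.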

#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit

12 upvotes·135.3K views

Decision at Shopify about Redis, Memcached, MySQL, Rails

kirs, Production Engineer at Shopify

As is common in the Rails stack, we've stayed with MySQL as our relational database, Memcached for key/value storage, and Redis for queues and background jobs since the very beginning.

In 2014, we could no longer store all our data in a single MySQL instance, not even by buying better hardware. We decided to use sharding and split all of Shopify into dozens of database partitions.

Sharding worked nicely for us because Shopify merchants are isolated from each other, so we were able to put a subset of merchants on a single shard. It would have been harder if our business model assumed shared data between customers.
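
Our sharding layer predates it, but the horizontal sharding API in Rails 6.1+ expresses the same idea; here's a sketch (with an assumed ShardRouting lookup, not our actual implementation):

```ruby
# Each merchant lives wholly on one shard, so a request can be pinned to that
# merchant's database. ShardRouting is an assumed lookup table, e.g. backed by
# a small routing datastore that maps merchant_id => :shard_one.
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
  connects_to shards: {
    shard_one: { writing: :shard_one },
    shard_two: { writing: :shard_two }
  }
end

def with_merchant_shard(merchant_id, &block)
  shard = ShardRouting.fetch(merchant_id) # assumed lookup
  ApplicationRecord.connected_to(shard: shard, &block)
end

with_merchant_shard(42) do
  Order.where(merchant_id: 42).count # runs against merchant 42's shard only
end
```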

The sharding project bought us some time regarding database capacity, but as we soon found out, there was a huge single point of failure in our infrastructure. All those shards were still using a single Redis. At one point, the outage of that Redis took down all of Shopify, causing a major disruption we later called “Redismageddon”. This taught us an important lesson to avoid any resources that are shared across all of Shopify.

Over the years, we moved from shards to the concept of "pods". A pod is a fully isolated instance of Shopify, with its own datastores like MySQL, Redis, and Memcached, that can be spawned in any region. This approach has helped us eliminate global outages. As of today, we have more than a hundred pods, and since moving to this architecture we haven't had any major outages that affected all of Shopify. An outage today only affects a single pod or region.

12 upvotes·38.7K views

Decision at Shopify about Rails, Ruby

kirs, Production Engineer at Shopify

In 2004, Shopify’s CEO and founder, Tobi Lütke, was building out an e-commerce store for snowboarding products. Unsatisfied with the existing e-commerce products on the market, Tobi decided to build his own SaaS platform using Ruby on Rails.

At that time, Rails wasn't even 1.0 yet, and the only version of the framework was exchanged as a .zip archive by email. Tobi joined Rails creator David Heinemeier Hansson (DHH) and started contributing to Ruby on Rails while building Shopify.

Shopify is now one of the world's largest and oldest Rails apps. It’s never been rewritten and still uses the original codebase, though it has matured considerably over the past decade. All of Tobi’s original commits are still in the version control history.

The bet on Rails greatly shaped how we think at Shopify and empowered us to deliver product as fast as possible. While there are parts of the framework that sometimes make it harder to scale (e.g. ActiveRecord callbacks and code organization), many of us tend to agree with Tobi that Rails is what allowed Shopify to move from a garage startup to a public company.
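
To illustrate the callback concern, here's an illustrative example (not our actual code): side effects registered on a model fire implicitly on every save, which becomes hard to reason about as the app grows.

```ruby
# Side effects hidden in the model run on every save, wherever the save
# happens. InventoryService and FulfillmentMailer are assumed names.
class Order < ApplicationRecord
  after_save :sync_inventory
  after_save :notify_fulfillment

  private

  def sync_inventory
    InventoryService.adjust(self) # implicit external call on every save
  end

  def notify_fulfillment
    FulfillmentMailer.order_updated(self).deliver_later
  end
end
```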

11 upvotes·19.8K views

Decision at StackShare about Segment, Rails, FullStory, Sentry, Bug-squashing, Sessionrecording, Reproducing-bugs, UserFeedbackAsAService

yonasb, CEO at StackShare

One of the challenges we've had to deal with as our product surface area has grown is identifying and reproducing bugs. We use Sentry for exception monitoring; however, it's usually still difficult to reproduce bugs. I first heard about FullStory from our friends over at Flexport (check out the Stack Story and you'll hear them mention it: https://stackshare.io/posts/how-flexport-builds-software-to-move-over-1-billion-dollars-in-merchandise). FullStory lets you record user sessions and play them back to help you identify bugs and UX issues. You're even able to view console errors live as they happen during the sessions!

We were pretty blown away by how comprehensive the product was at first, and it seems to get better every time I use it. The only complaint is that it's super expensive once you're in the hundreds of thousands of sessions, so we had to stop recording logged-out sessions; we only use it for auth'd sessions. We also started out using it via Segment, but once we needed to watch the number of sessions we were recording, we realized it was impossible to restrict FullStory recordings on a per-page basis without ripping it out of Segment. So we ended up just using their JS snippet directly, putting it in the Rails views that we wanted to monitor closely.
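
A sketch of that per-view approach, assuming a standard current_user helper and a partial holding FullStory's JS snippet (both names are illustrative):

```erb
<%# Render the FullStory snippet only in the views we want to monitor, and
    only for signed-in users, so logged-out traffic doesn't burn sessions.
    The partial name and current_user helper are assumptions. %>
<% if current_user %>
  <%= render "shared/fullstory_snippet" %>
<% end %>
```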

The ability to share specific portions of sessions, speed them up, skip inactivity, and all sorts of other little features add up to a really solid product that helps both our PMs and engineers improve our own product much more quickly. I officially requested a Sentry + FullStory integration a while back (https://twitter.com/yonasbe/status/871987738777616384) and am still waiting on it! #UserFeedbackAsAService #reproducing-bugs #sessionrecording #bug-squashing

10 upvotes·47.8K views

Decision at StackShare about Memcached, Heroku, Amazon ElastiCache, Rails, PostgreSQL, MemCachier, RailsCaching, Caching

yonasb, CEO at StackShare

We decided to use MemCachier as our Memcached provider because we were seeing serious PostgreSQL performance issues on query-heavy pages. We use MemCachier for all Rails caching, and pretty aggressively at that for the logged-out experience (fully cached pages for the most part). We really need to move to Amazon ElastiCache as soon as possible so we can stop paying so much; the only reason we haven't is that there are some network-side restrictions due to our main app being hosted on Heroku.
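
For reference, a sketch of the standard MemCachier setup for Rails caching via the dalli gem; this follows MemCachier's documented configuration, with the env vars set by the Heroku add-on:

```ruby
# config/environments/production.rb — point the Rails cache store at
# MemCachier through Dalli. MEMCACHIER_* env vars come from the add-on.
Rails.application.configure do
  config.cache_store = :mem_cache_store,
    (ENV["MEMCACHIER_SERVERS"] || "").split(","),
    {
      username: ENV["MEMCACHIER_USERNAME"],
      password: ENV["MEMCACHIER_PASSWORD"],
      failover: true,
      socket_timeout: 1.5,
      socket_failure_delay: 0.2
    }
end
```

With this in place, view fragments cached with `<% cache ... %>` and `Rails.cache.fetch` calls all go through MemCachier.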

#Caching #RailsCaching

9 upvotes·18.8K views

Decision at My Job Glasses about Slack, Amazon CloudWatch, Rails, Sidekiq, Redis, Amazon SNS, Amazon S3, Amazon SES, AWS Lambda

Startouf, CTO at My Job Glasses

We decided to use AWS Lambda for several serverless tasks, such as:

  • Managing AWS backups
  • Processing emails received via Amazon SES, stored to Amazon S3, and announced via Amazon SNS, by pushing a message onto our Redis so our Sidekiq Rails workers can process the inbound emails (see the sketch after this list)
  • Pushing some relevant Amazon CloudWatch metrics and alarms to Slack
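
A sketch of the Sidekiq end of that email pipeline: Sidekiq jobs are JSON payloads in a Redis queue, so the Lambda only needs to enqueue a payload this worker understands. Class, queue, and bucket names are assumptions:

```ruby
# The Lambda pushes a job referencing the S3 key it received via SNS; Sidekiq
# picks it up from Redis, and this worker fetches and parses the raw email.
# "InboundEmailWorker", the queue, and the bucket are illustrative names.
require "sidekiq"
require "aws-sdk-s3"
require "mail"

class InboundEmailWorker
  include Sidekiq::Worker
  sidekiq_options queue: "inbound_emails"

  def perform(s3_key)
    raw = Aws::S3::Client.new
      .get_object(bucket: "inbound-emails", key: s3_key)
      .body.read
    email = Mail.read_from_string(raw)
    # ...hand off to application logic, e.g. match email.to to a conversation
  end
end
```
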
9 upvotes·11.5K views