Heroku

Decision at ChecklyHQ about vuex, Knex.js, PostgreSQL, Amazon S3, AWS Lambda, Vue.js, hapi, Node.js, GitHub, Docker, Heroku

tim_nolet, Founder, Engineer & Dishwasher at Checkly

Heroku Docker GitHub Node.js hapi Vue.js AWS Lambda Amazon S3 PostgreSQL Knex.js

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focused on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub and AppOptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out?

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update API and back end services to handle and enforce plan limits.
  • Update the UI to kindly state plan limits are in effect on some part of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled or not (see the sketch below).
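
To make this concrete, here is a minimal sketch of how plan-based feature toggles like these can be wired up. The feature name, store shape, route and plan lookup are assumptions for illustration, not Checkly's actual code.

```js
// Illustrative sketch only: feature names, store shape and routes are assumed.
const Hapi = require('@hapi/hapi')
const Boom = require('@hapi/boom')

// Steps 1 & 2: the plan's "features" array of string constants ends up in the
// Vuex store on login; a getter exposes a simple membership check.
const getters = {
  hasFeature: (state) => (feature) =>
    state.user.plan.features.includes(feature)
}
// Step 3, in a Vue template:
//   <sms-settings v-if="hasFeature('SMS_ALERTS')" />
//   <upgrade-cta v-else />

// Step 4: a reusable hapi "pre" method that rejects the request when the
// authenticated user's plan lacks the feature (assumes an auth strategy that
// puts the plan on request.auth.credentials).
const requireFeature = (feature) => ({
  method: (request, h) => {
    const { plan } = request.auth.credentials
    if (!plan.features.includes(feature)) {
      throw Boom.forbidden(`Your plan does not include ${feature}`)
    }
    return h.continue
  }
})

const server = Hapi.server({ port: 3000 })
server.route({
  method: 'POST',
  path: '/v1/alert-channels/sms',
  options: { pre: [requireFeature('SMS_ALERTS')] },
  handler: () => ({ created: true })
})
```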

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.
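
A counter bump like that can be a single Knex statement. The sketch below uses a hypothetical sms_usage table and is not the actual Checkly schema.

```js
// Hypothetical sms_usage(account_id, month, sent) table; not Checkly's schema.
const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL })

async function bumpSmsCounter (accountId) {
  const month = new Date().toISOString().slice(0, 7) // e.g. '2019-06'
  const updated = await knex('sms_usage')
    .where({ account_id: accountId, month })
    .increment('sent', 1)
  if (updated === 0) {
    // First SMS this month for this account: create the row.
    await knex('sms_usage').insert({ account_id: accountId, month, sent: 1 })
  }
}

// The alerting daemon would call bumpSmsCounter() right after a successful SNS
// publish, and check the counter against the plan limit before sending.
```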

What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of any of this and remained unchanged.

Hope this helps anyone who is building out their SaaS and finds themselves in a similar situation.

17 upvotes·97.4K views

Decision about SonarQube, Codacy, Docker, Git, Apache Maven, Amazon EC2 Container Service, Microsoft Azure, Amazon Route 53, Elasticsearch, Solr, Amazon RDS, Amazon S3, Heroku, Hibernate, MySQL, Node.js, Java, Bootstrap, jQuery Mobile, jQuery UI, jQuery, JavaScript, React Native, React Router, React

ganesa-vijayakumar, Full Stack Coder | Module Lead

I'm planning to create a web application and also a mobile application to provide a very good shopping experience to end customers. In short, my application will aggregate product details from different sources and give the user a clear picture of when and where to buy a product with the best quality and cost.

I have planned to develop this in many milestones, adding any number of features over time, and I have picked as my first part the core functionality: aggregating product details from different sources.

Based on my work experience and knowledge, I have chosen the following stacks for this mission.

UI: I would like to develop this application using React, React Router and React Native, since I'm somewhat familiar with them and, most importantly, they will help in developing both the web and mobile apps. In addition, I'm going to use JavaScript, jQuery, jQuery UI, jQuery Mobile and Bootstrap wherever required.

Service: I have planned to use Java as the main business-layer language, as I have 7+ years of experience with it and believe I can do better work in Java than in other languages. In addition, I'm thinking of using Node.js as well.

Database and ORM: I'm going to pick MySQL as the DB and Hibernate as the ORM, since I have good knowledge of and work experience with this combination.

Search Engine: I need to deal with a large amount of product data and its detailed info to provide enough detail to the end user, while also keeping an eye on performance. So I have decided to use Solr as the search engine for product search and suggestions. In addition, I'm thinking of replacing Solr with Elasticsearch once I have explored and reviewed Elasticsearch enough.

Host: As of now, my plan is to complete the application with decent features first and deploy it in a free hosting environment such as Heroku (using Docker), and then, once it is stable, move to AWS products such as Amazon S3, EC2, Amazon RDS and Amazon Route 53. I'm not sure what Microsoft Azure offers compared to Heroku and Amazon EC2 Container Service. Anyhow, I will explore these again and pick the best fit for my requirements once I reach that stage.

Build and Repositories: I have decided to choose Apache Maven and Git, as these are my favorites and are very popular for builds and source control respectively.

Additional Utilities :) - I would like to choose Codacy for code review, as their Startup plan will be very helpful for this application. I'm already experienced with Google Checkstyle and SonarQube, but I'm still looking into Codacy.

Happy Coding! Suggestions are welcome! :)

Thanks, Ganesa

15 upvotes·13 comments·149K views

Decision about GitHub, nginx, ESLint, AVA, Semantic UI React, Redux, React, PostgreSQL, ExpressJS, Node.js, FeathersJS, Heroku, Amazon EC2, Kubernetes, Jenkins, Docker Compose, Docker, Frontend, Stack, Backend, Containers, Containerized

jordandenison

Recently I have been working on an open source stack to help people consolidate their personal health data in a single database so that AI and analytics apps can be run against it to find personalized treatments. We chose to go with a #containerized approach leveraging Docker #containers, with a local development environment set up with Docker Compose and nginx for container routing. For the production environment we pull code from GitHub, build and push images with Jenkins, and use Kubernetes to deploy to Amazon EC2.

We also implemented a dashboard app to handle user authentication/authorization, as well as a custom SSO server that runs on Heroku and allows experts to easily visit more than one instance without having to log in repeatedly. The #Backend was implemented using my favorite #Stack, which consists of FeathersJS on top of Node.js and ExpressJS, with PostgreSQL as the main database. The #Frontend was implemented using React, Redux, Semantic UI React and the FeathersJS client. Though testing was light on this project, we chose to use AVA as well as ESLint to keep the codebase clean and consistent.
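
As an illustration of that #Stack, here is a minimal FeathersJS-on-Express service sketch; the 'observations' service and its canned data are placeholders, not the actual project code.

```js
// Minimal FeathersJS + Express sketch; the service name and data are placeholders.
const feathers = require('@feathersjs/feathers')
const express = require('@feathersjs/express')

const app = express(feathers())
app.use(express.json())
app.configure(express.rest())

// A Feathers service is an object with CRUD-style methods; in the real stack
// this would be backed by PostgreSQL rather than returning canned data.
app.use('/observations', {
  async find () {
    return [{ id: 1, type: 'heart-rate', value: 62 }]
  },
  async create (data) {
    return { id: Date.now(), ...data }
  }
})

app.use(express.errorHandler())
app.listen(3030)
```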

15 upvotes·98.1K views

Decision at Dubsmash about Kubernetes, Amazon EC2, Heroku, Python, ContainerTools, PlatformAsAService

tspecht, Co-Founder and CTO at Dubsmash

Kubernetes Amazon EC2 Heroku Python
#ContainerTools
#PlatformAsAService

Since we deployed our very first lines of Python code more than 2 years ago, we have been happy users of Heroku. It lets us focus on building features rather than maintaining infrastructure, has super-easy scaling capabilities, and the support team is always happy to help (in the rare case you need them).

We played with the thought of moving our computational needs over to barebone Amazon EC2 instances or a container-management solution like Kubernetes a couple of times, but the added costs of maintaining this architecture and the ease-of-use of Heroku have kept us from moving forward so far.

Running independent services for different needs of our features gives us the flexibility to choose whatever data storage is best for the given task.

#PlatformAsAService #ContainerTools

14 upvotes·30.3K views

Decision at Dubsmash about Amazon RDS for Aurora, Redis, Amazon DynamoDB, Amazon RDS, Heroku, PostgreSQL, PlatformAsAService, Databases, NosqlDatabaseAsAService, SqlDatabaseAsAService

tspecht, Co-Founder and CTO at Dubsmash

Amazon RDS for Aurora Redis Amazon DynamoDB Amazon RDS Heroku PostgreSQL
#PlatformAsAService
#Databases
#NosqlDatabaseAsAService
#SqlDatabaseAsAService

Over the years we have added a wide variety of different storages to our stack including PostgreSQL (some hosted by Heroku, some by Amazon RDS) for storing relational data, Amazon DynamoDB to store non-relational data like recommendations & user connections, or Redis to hold pre-aggregated data to speed up API endpoints.
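
For the Redis part, the usual pattern is cache-aside: serve pre-aggregated data from Redis and fall back to the database when the key is missing. The key name, TTL and aggregation function below are assumptions for illustration, not Dubsmash's code.

```js
// Cache-aside sketch with ioredis; key, TTL and the aggregation query are assumed.
const Redis = require('ioredis')
const redis = new Redis(process.env.REDIS_URL)

async function getTrendingSounds () {
  const cached = await redis.get('trending:sounds')
  if (cached) return JSON.parse(cached) // fast path: pre-aggregated data

  const fresh = await aggregateTrendingFromPostgres() // hypothetical expensive query
  await redis.set('trending:sounds', JSON.stringify(fresh), 'EX', 60) // refresh every 60s
  return fresh
}
```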

Since we started running Postgres ourselves on RDS instead of only using the managed offerings of Heroku, we've gained additional flexibility in scaling our application while reducing costs at the same time.

We are also heavily testing Amazon RDS for Aurora in its Postgres-compatible version and will also give the new release of Aurora Serverless a try!

#SqlDatabaseAsAService #NosqlDatabaseAsAService #Databases #PlatformAsAService

13 upvotes·35.6K views

Decision at Shopify about kubernetes-deploy, Capistrano, Heroku, Shipit, PlatformAsAService, ApplicationHosting, ContainerTools, BuildTestDeploy

kirs, Production Engineer at Shopify

kubernetes-deploy Capistrano Heroku Shipit
#PlatformAsAService
#ApplicationHosting
#ContainerTools
#BuildTestDeploy

Shipit, our deployment tool, is at the heart of Continuous Delivery at Shopify. Shipit is an orchestrator that runs and tracks the progress of any deploy script you provide for a project. It supports deploying to Rubygems, Pip, Heroku and Capistrano out of the box. For us, it's mostly kubernetes-deploy, or Capistrano for legacy projects.

We use a slightly tweaked GitHub flow, with feature development happening in branches and the master branch being the source of truth for the state of things in production. When your PR is ready, you add it to the Merge Queue in Shipit. The idea behind the Merge Queue is to control the rate at which code is merged to the master branch. In busy hours we have many developers who want to merge their PRs, but we don't want to introduce too many changes to the system at once. The Merge Queue limits deploys to 5-10 commits at a time, which makes it easier to identify issues and roll back in case we notice any unexpected behaviour after the deploy.

We use a browser extension to make the Merge Queue play nicely with the Merge button on GitHub.

Both Shipit and kubernetes-deploy are open source, and we've heard quite a few success stories from companies who have adopted our flow.

#BuildTestDeploy #ContainerTools #ApplicationHosting #PlatformAsAService

13 upvotes·13.5K views

Decision at StackShare about Redis, CircleCI, Webpack, Amazon CloudFront, Amazon S3, GitHub, Heroku, Rails, Node.js, Apollo, Glamorous, React, FrontEndRepoSplit, Microservices, SSR, StackDecisionsLaunch

ruswerner, Lead Engineer at StackShare

StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independently of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly due to limitations imposed by Heroku, specifically that processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.
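
For context, a Node.js rendering microservice of this kind generally boils down to Express plus ReactDOMServer; the component and route below are placeholders, not StackShare's actual service.

```js
// Generic SSR microservice sketch; <TrendingFeed /> and the route are placeholders.
const express = require('express')
const React = require('react')
const { renderToString } = require('react-dom/server')
const TrendingFeed = require('./components/TrendingFeed') // hypothetical component

const app = express()

app.get('/render/trending', (req, res) => {
  // Render the React tree to an HTML string so crawlers get real markup.
  const html = renderToString(React.createElement(TrendingFeed))
  res.send(`<!doctype html><html><body><div id="app">${html}</div></body></html>`)
})

app.listen(process.env.PORT || 4000)
```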

Related to this, we needed a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 and Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
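
That final rollout step could look roughly like the sketch below: upload the Webpack manifest to S3, then point a Redis key at the new release so the Rails app knows which bundles to serve. The bucket, key names and manifest path are assumptions, not the actual CircleCI script.

```js
// Sketch of a release step; bucket, keys and paths are assumptions.
const fs = require('fs')
const AWS = require('aws-sdk')
const Redis = require('ioredis')

const s3 = new AWS.S3()
const redis = new Redis(process.env.REDIS_URL)

async function release (version) {
  // Upload the Webpack manifest built earlier in the CI job.
  await s3.putObject({
    Bucket: 'frontend-bundles',
    Key: `releases/${version}/manifest.json`,
    Body: fs.readFileSync('dist/manifest.json'),
    ContentType: 'application/json'
  }).promise()

  // The Rails app reads this key to decide which bundles to serve.
  await redis.set('frontend:current_release', version)
}

release(process.env.CIRCLE_SHA1)
  .then(() => process.exit(0))
  .catch((err) => { console.error(err); process.exit(1) })
```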

#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit

12 upvotes·134.8K views

Decision at Rainforest QA about Terraform, Helm, Google Cloud Build, CircleCI, Redis, Google Cloud Memorystore, PostgreSQL, Google Cloud SQL for PostgreSQL, Google Kubernetes Engine, Kubernetes, Heroku

shosti, Senior Architect at Rainforest QA

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go more in depth.

12 upvotes·66K views

Decision at ChecklyHQ about Knex.js, Node.js, Heroku Postgres, Heroku, PostgreSQL

tim_nolet, Founder, Engineer & Dishwasher at Checkly

PostgreSQL Heroku Heroku Postgres Node.js Knex.js

Last week we rolled out a simple patch that decimated the response time of a Postgres query crucial to Checkly. It quite literally went from an average of ~100ms, with peaks up to 1 second, to a steady 1ms to 10ms.

However, that patch was just the last step of a longer journey:

  1. I looked at which API endpoints were using which queries and how their response times grew over time. The customer-facing API endpoints that are directly responsible for rendering the first dashboard page of the product are especially crucial.

  2. I looked at Heroku metrics such as those reported by heroku pg:outliers and cross-referenced them with our "slowest response time" statistics.

  3. I reproduced the production situation as best as possible on a local development machine and tested my hypothesis that a composite index on a uuid field and a timestamptz field would reduce response times (see the migration sketch below).
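
The kind of migration that adds such an index could look like this in Knex; the table and column names here are assumptions, not Checkly's actual schema.

```js
// Knex migration sketch; table/column names are assumed, not Checkly's schema.
exports.up = (knex) =>
  knex.schema.alterTable('check_results', (table) => {
    // Composite index: uuid column first, timestamptz column second, matching
    // a "WHERE check_id = ? ORDER BY created_at" access pattern.
    table.index(['check_id', 'created_at'], 'check_results_check_id_created_at_idx')
  })

exports.down = (knex) =>
  knex.schema.alterTable('check_results', (table) => {
    table.dropIndex(['check_id', 'created_at'], 'check_results_check_id_created_at_idx')
  })
```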

This method secured the victory and we rolled out a new index last week. Response times plummeted. Read the full story in the blog post.

10 upvotes·8K views

Decision at StackShare about Memcached, Heroku, Amazon ElastiCache, Rails, PostgreSQL, MemCachier, RailsCaching, Caching

yonasb, CEO at StackShare

We decided to use MemCachier as our Memcached provider because we were seeing some serious PostgreSQL performance issues with query-heavy pages on the site. We use MemCachier for all Rails caching, and pretty aggressively too for the logged-out experience (fully cached pages for the most part). We really need to move to Amazon ElastiCache as soon as possible so we can stop paying so much. The only reason we're not moving is that there are some restrictions on the network side, due to our main app being hosted on Heroku.

#Caching #RailsCaching

9 upvotes·18.7K views