Amazon S3

Decision at Raygun about AWS Elastic Load Balancing (ELB), Amazon EC2, nginx, Amazon RDS, Amazon S3, LoadBalancerReverseProxy, CloudStorage, WebServers, CloudHosting

CmdrKeen, Co-founder & CEO at Raygun

We chose AWS because, at the time, it was really the only cloud provider to choose from.

We tend to use their basic building blocks (EC2, ELB, Amazon S3, Amazon RDS) rather than vendor-specific components like proprietary databases and queuing services. We deliberately decided to do this so that we could provide multi-cloud support, or potentially move to another cloud provider if its offering were better for our customers.

We've used c3.large nodes for both the Node.js deployment and the .NET Core deployment. Both sit as backends behind an nginx instance and are managed in Amazon EC2 scaling groups behind a standard AWS Elastic Load Balancing (ELB) load balancer.

While we're satisfied with AWS, we do review our decision each year and have looked at Azure and Google Cloud offerings.

#CloudHosting #WebServers #CloudStorage #LoadBalancerReverseProxy

19 upvotes · 2.6K views

Decision at SmartZip about Amazon DynamoDB, Ruby, Node.js, AWS Lambda, New Relic, Amazon Elasticsearch Service, Elasticsearch, Superset, Amazon Quicksight, Amazon Redshift, Zapier, Segment, Amazon CloudFront, Memcached, Amazon ElastiCache, Amazon RDS for Aurora, MySQL, Amazon RDS, Amazon S3, Docker, Capistrano, AWS Elastic Beanstalk, Rails API, Rails, Algolia

juliendefrance, Principal Software Engineer at Stessa

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who is most likely to list or sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and knew things had to change radically. The first enabler was to make use of the cloud and go with AWS, so we would stop reinventing the wheel and build around managed, scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend using Rails API, so we would have independently scalable microservices from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we would no longer waste effort writing time-consuming Capistrano deployment scripts. We combined this with Docker so each application would run in its own container, independent of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched the pre-existing local and network storage people used to deal with in our legacy systems. On the database side, we started with Amazon RDS / MySQL, then migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, this is the kind of managed service you want your cloud provider to handle for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems integration: Segment / Zapier
  • Data warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they would be fully scalable.
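For illustration only, here is a minimal sketch (TypeScript on Node.js) of the kind of Lambda handler this describes, writing straight to DynamoDB so the whole call chain stays serverless. The table name, field names, and event shape are hypothetical, not SmartZip's actual code.

    // Hypothetical sketch: table and fields are illustrative, not SmartZip's schema.
    import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
    import { DynamoDB } from "aws-sdk";

    const db = new DynamoDB.DocumentClient();

    export const handler = async (
      event: APIGatewayProxyEvent
    ): Promise<APIGatewayProxyResult> => {
      const lead = JSON.parse(event.body ?? "{}");

      // DynamoDB absorbs sudden traffic bursts without capacity planning,
      // which is what keeps the whole request path serverless.
      await db
        .put({
          TableName: process.env.LEADS_TABLE ?? "leads", // assumed table name
          Item: { id: lead.id, territory: lead.territory, createdAt: Date.now() },
        })
        .promise();

      return { statusCode: 201, body: JSON.stringify({ ok: true }) };
    };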

15 upvotes · 10.8K views

Decision at Uploadcare about PostgreSQL, Amazon DynamoDB, Amazon S3, Redis, Python, Google App Engine

dmitry-mukhin at Uploadcare

Uploadcare has built an infinitely scalable infrastructure by leveraging AWS. Building on top of AWS allows us to process 350M daily requests for file uploads, manipulations, and deliveries. When we started in 2011, the only cloud alternative to AWS was Google App Engine, which was a no-go for the rather complex solution we wanted to build. We also didn't want to buy any hardware or use co-location.

Our stack handles receiving files, communicating with external file sources, managing file storage, managing user and file data, processing files, file caching and delivery, and managing user interface dashboards.

At its core, Uploadcare runs on Python. The EuroPython 2011 conference in Florence really inspired us, and that, coupled with the fact that Python was general enough to solve all of our challenges, informed this decision. Additionally, we had prior experience working in Python.

We chose to build the main application with Django because of its feature completeness and large footprint within the Python ecosystem.

All the communication within our ecosystem occurs via several HTTP APIs, Redis, Amazon S3, and Amazon DynamoDB. We decided on this architecture so that our system could be scalable in terms of storage and database throughput. This way we only need Django running on top of our database cluster. We use PostgreSQL as our database because it is considered an industry standard when it comes to clustering and scaling.

15 upvotes · 2.9K views

Decision at ChecklyHQ about vuex, Knex.js, PostgreSQL, Amazon S3, AWS Lambda, Vue.js, hapi, Node.js, GitHub, Docker, Heroku

tim_nolet, Founder, Engineer & Dishwasher at Checkly

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price, and audience.

We are focused on tech B2B, but I always wanted to serve solo developers too, so I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, and AppOptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out?

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update API and back end services to handle and enforce plan limits.
  • Update the UI to kindly state plan limits are in effect on some part of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage, and APIs as untouched as possible.

In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled (a minimal sketch of such a hook follows below).
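As a rough sketch of what such a hook might look like as a hapi route pre-handler (TypeScript; the feature name and the shape of request.auth.credentials are assumptions, not Checkly's actual code):

    // Illustrative only: credential shape and feature names are assumed.
    import * as Boom from "@hapi/boom";
    import { Request, ResponseToolkit } from "@hapi/hapi";

    // Returns a hapi `pre` method that rejects the request unless the
    // authenticated user's plan includes the given feature toggle.
    const requirePlanFeature =
      (feature: string) => (request: Request, h: ResponseToolkit) => {
        const credentials = request.auth.credentials as {
          plan?: { features?: string[] };
        };
        if (!(credentials.plan?.features ?? []).includes(feature)) {
          throw Boom.forbidden(`Your plan does not include "${feature}". Please upgrade.`);
        }
        return h.continue;
      };

    // Usage on a relevant endpoint:
    // server.route({
    //   method: "POST",
    //   path: "/sms-alerts",
    //   options: { pre: [{ method: requirePlanFeature("sms_alerts") }] },
    //   handler: sendSmsAlert,
    // });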

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many SMS messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.
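The write-up doesn't say where that counter lives; assuming a PostgreSQL table maintained through Knex.js (both already in the stack above), the daemon-side bump could look roughly like this, with a hypothetical sms_usage table:

    // Hypothetical sketch: the sms_usage table and its columns are assumed.
    import knex from "knex";

    const db = knex({ client: "pg", connection: process.env.DATABASE_URL });

    // Called by the alerting daemon each time an SMS alert goes out, so the
    // API can later enforce the 10-SMS-per-month limit on the developer plan.
    export async function bumpSmsCounter(accountId: number, month: string): Promise<void> {
      await db.raw(
        `insert into sms_usage (account_id, month, count)
         values (?, ?, 1)
         on conflict (account_id, month)
         do update set count = sms_usage.count + 1`,
        [accountId, month]
      );
    }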

What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend, which actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres), has no knowledge of any of this and remained unchanged.

Hope this helps anyone building out their SaaS who is in a similar situation.

14 upvotes · 5.6K views

Decision at BootstrapCDN about Ruby, Node.js, Amazon S3, MaxCDN, Google Analytics, Bootstrap, BootstrapCDN

jdorfman, Developer Evangelist at StackShare

This is the second Stack Decision of this series. You can read the last one to catch up (link below).

I was skeptical until one of the co-authors of Bootstrap, Jacob Thornton (aka 'fat'), tweeted about #BootstrapCDN, and according to Google Analytics, that sent 10k uniques to the site in 24 hours. Now I was pumped, but I knew I was in way over my head and needed help. Fortunately, I met my co-maintainer Josh Mervine at the 2013 O'Reilly Velocity Conference and we hit it off immediately. I showed him the MaxCDN and Amazon S3 stats and his eyebrows went up. When I showed him the code, he was very polite: "Well, I mean, it works, but I really want to try Node.js out, so I'm just going to rewrite everything in Node, and Ruby for the S3 scripts."

I didn't know what to expect from Josh to be honest. In the next decision (part 3), I will go over how he completely transformed the project.

AMA below.

14 upvotes · 5.1K views

Decision at Stitch about Go, Clojure, JavaScript, Python, Kubernetes, AWS OpsWorks, Amazon EC2, Amazon Redshift, Amazon S3, Amazon RDS

jakestein, CEO at Stitch

Stitch is run entirely on AWS. All of our transactional databases are run with Amazon RDS, and we rely on Amazon S3 for data persistence in various stages of our pipeline. Our product integrates with Amazon Redshift as a data destination, and we also use Redshift as an internal data warehouse (powered by Stitch, of course).

The majority of our services run on stateless Amazon EC2 instances that are managed by AWS OpsWorks. We recently introduced Kubernetes into our infrastructure to run the scheduled jobs that execute Singer code to extract data from various sources. Although we tend to be wary of shiny new toys, Kubernetes has proven to be a good fit for this problem, and its stability, strong community and helpful tooling have made it easy for us to incorporate into our operations.

While we continue to be happy with Clojure for our internal services, we felt that its relatively narrow adoption could impede Singer's growth. We chose Python both because it is well suited to the task, and it seems to have reached critical mass among data engineers. All that being said, the Singer spec is language agnostic, and integrations and libraries have been developed in JavaScript, Go, and Clojure.

13 upvotes · 6.7K views

Decision at Dubsmash about Amazon CloudFront, Amazon S3, CloudStorage, ContentDeliveryNetwork, AssetsAndMedia

tspecht, Co-Founder and CTO at Dubsmash

In the early days of features like My Dubs, which lets users upload their Dubs onto our platform, uploads went directly to our API, which then stored the files in Amazon S3.

We quickly saw that this approach was crippling our API performance big time. Since users usually have slower internet connections on their phones, the process of uploading the file took up a huge percentage of the processing time on our end, forcing us to spin up way more machines than we actually needed. We have since moved to a multi-way, handshake-like upload process that uses signed URLs vended to the clients upon request, so they can upload the files directly to S3. These files are then distributed, cached, and served back to other clients through Amazon CloudFront.
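A minimal sketch of that flow on the API side, assuming the AWS SDK for JavaScript v3; the bucket name, key layout, and content type are made up for illustration:

    // Illustrative sketch: bucket, key layout, and expiry are assumptions.
    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
    import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

    const s3 = new S3Client({ region: "us-east-1" });

    // The API vends a short-lived signed URL so the client uploads the file
    // bytes straight to S3, keeping slow mobile uploads off the API servers.
    export async function vendUploadUrl(userId: string, dubId: string): Promise<string> {
      const command = new PutObjectCommand({
        Bucket: "dubsmash-user-uploads",     // hypothetical bucket
        Key: `dubs/${userId}/${dubId}.mp4`,  // hypothetical key layout
        ContentType: "video/mp4",
      });
      return getSignedUrl(s3, command, { expiresIn: 15 * 60 }); // valid for 15 minutes
    }

    // The client then PUTs the file to the returned URL; CloudFront caches and
    // serves the uploaded object back to other clients.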

#AssetsAndMedia #ContentDeliveryNetwork #CloudStorage

13 upvotes · 353 views

Decision at Dubsmash about Amazon S3, DataStores, CloudStorage

tspecht, Co-Founder and CTO at Dubsmash

In the beginning, Dubsmash simply downloaded a JSON file from Amazon S3 containing the Quote metadata. This file was updated and uploaded to Amazon S3 by hand every time we had new content available; we would simply put in the URL to the sound file and the name of the Quote, and re-upload the file.

We chose this really simple mechanism to avoid having to bootstrap a custom API to distribute the content to the clients. This turned out to be a great business decision as well, since we didn't need to worry at all about any scaling issues in the beginning; this became an even better call a couple of weeks after the initial launch.
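In practice the client side amounts to little more than the sketch below; the file URL and field names are assumptions for illustration:

    // Illustrative sketch: URL and field names are assumed.
    interface Quote {
      name: string;      // the name of the Quote
      soundUrl: string;  // URL of the sound file
    }

    // Clients download one hand-maintained JSON file from S3 -- no custom API needed.
    export async function fetchQuotes(): Promise<Quote[]> {
      const res = await fetch("https://dubsmash-content.s3.amazonaws.com/quotes.json"); // hypothetical URL
      if (!res.ok) {
        throw new Error(`Failed to fetch Quote metadata: ${res.status}`);
      }
      return (await res.json()) as Quote[];
    }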

#CloudStorage #DataStores

11 upvotes · 347 views

Decision at StackShare about Redis, CircleCI, Webpack, Amazon CloudFront, Amazon S3, GitHub, Heroku, Rails, Node.js, Apollo, Glamorous, React, FrontEndRepoSplit, Microservices, SSR, StackDecisionsLaunch

ruswerner, Lead Engineer at StackShare

StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independently of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together, all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split our front-end application out into a separate repo on GitHub. The driving factor in this decision was mostly the limitations imposed by Heroku, specifically that processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.

Related to this, we needed a way to "deploy" our frontend changes to various server environments without building and releasing the entire Ruby application. We built a hybrid Amazon S3 / Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant: our frontend team now moves independently of our backend team, our build and release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
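For a sense of scale, a stripped-down version of such a Node.js rendering microservice might look like the sketch below; the component, HTML shell, and port handling are placeholders rather than StackShare's actual service:

    // Illustrative sketch: TrendingFeed and the HTML shell are placeholders.
    import http from "http";
    import * as React from "react";
    import { renderToString } from "react-dom/server";
    import { TrendingFeed } from "./feed"; // hypothetical component

    const server = http.createServer((req, res) => {
      // Render the logged-out Trending feed to HTML for organic search traffic.
      const html = renderToString(React.createElement(TrendingFeed));
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(`<!doctype html><html><body><div id="root">${html}</div></body></html>`);
    });

    // Runs as its own Heroku process, independent of the Rails web app.
    server.listen(Number(process.env.PORT ?? 3000));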

#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit

10 upvotes · 20.9K views

Decision at Stitch Fix about Apache Spark, Victory, Amazon S3, Elasticsearch, Redux.js, React

psunnn, Software Engineer at Stitch Fix

As a frontend engineer on the Algorithms & Analytics team at Stitch Fix, I work with data scientists to develop applications and visualizations to help our internal business partners make data-driven decisions. I envisioned a platform that would assist data scientists in the data exploration process, allowing them to visually explore and rapidly iterate through their assumptions, then share their insights with others. This would align with our team's philosophy of having engineers "deploy platforms, services, abstractions, and frameworks that allow the data scientists to conceive of, develop, and deploy their ideas with autonomy", and solve the pain of data exploration.

The final product, code-named Dora, is built with React, Redux.js and Victory, backed by Elasticsearch to enable fast and iterative data exploration, and uses Apache Spark to move data from our Amazon S3 data warehouse into the Elasticsearch cluster.

9 upvotes · 5.3K views