Amazon CloudFront

ruswerner, Lead Engineer at StackShare

StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independent of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly the limitations imposed by Heroku, specifically that processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.

Related to this, we need a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 / Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
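As a rough illustration of that rollout step, here is a hedged TypeScript sketch of what such a CI job could do: upload a hashed Webpack bundle to S3 (with CloudFront in front of the bucket as the edge CDN) and then flip a Redis key for the Rails app to read. The bucket name, Redis key, and environment variables are invented for the example, not StackShare's actual configuration.

```typescript
// Hypothetical deploy step: push a hashed Webpack bundle to S3 and point a
// Redis key at it so the Rails app serves the new bundle on the next request.
// Bucket, key names, and env vars are illustrative only.
import { readFileSync } from "fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import Redis from "ioredis";

async function release(bundlePath: string, buildId: string): Promise<void> {
  const s3 = new S3Client({ region: "us-east-1" });
  const s3Key = `bundles/${buildId}/app.js`;

  // Upload the bundle; CloudFront sits in front of this bucket and serves it from the edge.
  await s3.send(
    new PutObjectCommand({
      Bucket: "example-frontend-bundles",                   // assumed bucket name
      Key: s3Key,
      Body: readFileSync(bundlePath),
      ContentType: "application/javascript",
      CacheControl: "public, max-age=31536000, immutable",  // content-hashed asset, cache forever
    })
  );

  // Flip the pointer the Rails app reads to decide which bundle to serve.
  const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");
  await redis.set("frontend:current_bundle", s3Key);
  redis.disconnect();
}

release("dist/app.js", process.env.CIRCLE_SHA1 ?? "dev").catch(console.error);
```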

#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit

28 upvotes·1.1M views
juliendefrance, Principal Software Engineer at Tophatter

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.

For the SaaS product, we kept on working with Rails as this was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend thanks to the use of Rails API, so we'd have independently scalable micro-services from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so each application would run within its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrated to Amazon RDS for Aurora / MySQL when it got released. Once again, the goal here was a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

Caching: Amazon ElastiCache / Memcached
CDN: Amazon CloudFront
Systems Integration: Segment / Zapier
Data-warehousing: Amazon Redshift
BI: Amazon Quicksight / Superset
Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept on learning and innovating, while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
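To make the "fully serverless chain" idea concrete, here is a minimal, hypothetical sketch of a Node.js Lambda handler that persists an event into DynamoDB. The table and field names are assumptions for illustration, not SmartZip's actual code.

```typescript
// Minimal serverless-chain sketch: a Lambda handler (Node.js runtime) writing
// straight to DynamoDB, so no part of the call path needs pre-provisioned servers.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event: { userId: string; action: string }) => {
  // DynamoDB scales with the Lambda invocations, so sudden bursts of traffic
  // are absorbed without capacity planning.
  await ddb.send(
    new PutCommand({
      TableName: "example-activity",   // assumed table name
      Item: {
        userId: event.userId,
        timestamp: Date.now(),
        action: event.action,
      },
    })
  );
  return { statusCode: 200 };
};
```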

16 upvotes·1.7M views
tspecht, Co-Founder and CTO at Dubsmash

In the early days of features like My Dubs, which enables users to upload their Dubs onto our platform, uploads went directly to our API, which then stored the files in Amazon S3.

We quickly saw that this approach was hurting our API performance big time. Since users usually have slower internet connections on their phones, the process of uploading the file took up a huge percentage of the processing time on our end, forcing us to spin up way more machines than we actually needed. We have since moved to a multi-way handshake-like upload process that uses signed URLs issued to the clients upon request, so they can upload the files directly to S3. These files are then distributed, cached, and served back to other clients through Amazon CloudFront.
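For illustration, here is a hedged TypeScript sketch of how such a signed-URL handshake can look with the AWS SDK; the bucket name, key layout, and expiry are assumptions rather than Dubsmash's actual implementation.

```typescript
// Rough sketch of the signed-URL handshake: the API hands the client a
// short-lived PUT URL, the client uploads straight to S3, and CloudFront
// later serves the object back to other clients.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { randomUUID } from "crypto";

const s3 = new S3Client({ region: "us-east-1" });

// Called by the API when a client asks to upload a new Dub.
export async function createUploadUrl(userId: string): Promise<{ key: string; uploadUrl: string }> {
  const key = `dubs/${userId}/${randomUUID()}.mp4`;
  const uploadUrl = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: "example-dubs-uploads", Key: key, ContentType: "video/mp4" }),
    { expiresIn: 900 } // URL valid for 15 minutes; the slow upload never touches the API servers
  );
  return { key, uploadUrl };
}
```

The client then PUTs the file to the returned URL, and a CloudFront distribution with that bucket as its origin caches and serves the uploaded files back to other clients.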

#AssetsAndMedia #ContentDeliveryNetwork #CloudStorage

Dubsmash: Scaling To 200 Million Users With 3 Engineers - Dubsmash Tech Stack | StackShare (stackshare.io)
13 upvotes·18K views
gykhauth, CTO at SalesAutopilot Kft.

I'm the CTO of a marketing automation SaaS. Because of the continuously increasing load, we moved to the AWS cloud. We are using more and more features of AWS: Amazon CloudWatch, Amazon SNS, Amazon CloudFront, Amazon Route 53, and so on.

Our main database is MySQL, but for the hundreds of GB of document data we use MongoDB more and more. We started to use Redis for caching and other time-sensitive operations.

On the front-end we use jQuery UI + Smarty, but we are now refactoring our app to use Vue.js with Vuetify. Because our app is relatively complex, we need to use vuex as well.

On the development side we use GitHub as our main repo, Docker for our local and server environments, and Jenkins and AWS CodePipeline for Continuous Integration.

12 upvotes·299.7K views
johnnyxbell, Senior Software Engineer at StackShare

When I first built my portfolio I used GitHub for the source control and deployed directly to Netlify on a push to master. This was a perfect setup: I didn't need any knowledge about #DevOps or anything; it was all just done for me.

One of the issues I had with Netlify was that I wanted to gzip my JavaScript files. I had this set up in my #Webpack config; however, Netlify didn't offer an easy way to enable it.

Over the weekend I decided I wanted to know more about how #DevOps worked, so I decided to switch from Netlify to Amazon S3. Instead of creating any #Git webhooks, I decided to use Buddy for my pipeline and to run commands. Buddy is a fantastic tool: very easy to set up builds, copy the files to my Amazon S3 bucket, and then run some #AWS console commands to set the content-encoding of the JavaScript files. Buddy is also free if you only have a few pipelines, so I didn't need to pay anything 🤙🏻.

When I made these changes I also wanted to monitor my code and make sure I was keeping up with best practices, so I implemented Code Climate to look over my code and tell me where there were code smells and other issues. I've been super happy with it so far, and I'm on the free tier, so it's also free.

I did plan on using Amazon CloudFront for my SSL and caching, however it was overly complex to set up and it costs money. So I decided to go with the free tier of CloudFlare, and it is amazing; best choice I've made for caching / SSL in a long time.

Portfolio - johnnyxbell | StackShare (stackshare.io)
11 upvotes·2 comments·226.6K views
brenzenb, Engineering Manager at Compass

At Compass, we’re big proponents of using NPM and semver (semantic versioning) when distributing our shared components as packages. NPM provides us with an industry-standard platform to publish our internal dependencies. The tools and technologies someone learns while working on a package at Compass are the same ones they’ll use in projects in the open source community. Meanwhile, semantic versioning itself plays a huge role in providing peace of mind. Users of shared components know when updates are safe enough to upgrade to, and component authors can make big updates without the fear of silently breaking the contracts they’ve made with their users. We wanted to build out a way to provide these same benefits to more than just JS libraries, and we ended up creating a lightning-fast form of semantic versioning for our CSS implementation that utilized Lambda@Edge, NPM, and some clever work by our engineers.
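As a rough sketch of the general idea (not Compass's actual code), a Lambda@Edge viewer-request function can resolve a semver-style path segment to a pinned version and redirect to it. The version table and URL layout below are invented for illustration; in practice the mapping would come from the published npm versions.

```typescript
// Minimal Lambda@Edge (viewer-request) sketch of semver-style redirects for CSS:
// a request for a version range like /css/^1.0.0/main.css gets a 302 to the
// latest matching pinned version, which can then be cached aggressively at the edge.
import type { CloudFrontRequestHandler } from "aws-lambda";

// In reality this table would be generated from the package's published versions.
const latestForRange: Record<string, string> = {
  "^1.0.0": "1.4.2",
  "^2.0.0": "2.1.0",
};

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  const match = request.uri.match(/^\/css\/([^/]+)\/(.+)$/);
  const pinned = match && latestForRange[decodeURIComponent(match[1])];

  if (!pinned) return request; // not a range we know about: pass through to the origin

  // Redirect to the concrete version the range resolves to.
  return {
    status: "302",
    statusDescription: "Found",
    headers: {
      location: [{ key: "Location", value: `/css/${pinned}/${match![2]}` }],
    },
  };
};
```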

AWS Lambda Amazon CloudFront npm #lambdaatedge #semver #serverless

Semantic Version Redirects at the Edge – Compass True North – Medium (medium.com)
9 upvotes·18.2K views
sgbett, Managing Director at Bettison.org Limited

In 2010 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up in order to address some growing concerns about its long-term viability as a platform.

A full application re-write is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and also to take the opportunity to improve the international architecture and to refactor the database in order that it better matched the changes in core functionality.

PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend; it was also available through the AWS RDS service, which reduced the management overhead of us having to configure it ourselves. In order to reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through the use of Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform to a minimum, allowing us to focus on our core business.

Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and also its focus on Test Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!

Unicorn was chosen for its support for continual deployment and its reputation as a reliable application server, and nginx for its reputation as a fast and stable reverse proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.

We tried to strike a balance between having control over the management and configuration of our core application and the convenience of being able to leverage AWS-hosted services for ancillary functions (Amazon SES, Amazon SQS, Amazon Route 53, all hosted securely inside Amazon VPC of course!).

Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production, potentially using Amazon EC2 Container Service.

7 upvotes·249.5K views
johnnyxbell, Senior Software Engineer at StackShare

I recently moved my portfolio to Amazon S3, and I needed a new way to add caching and SSL to my site, as Amazon S3 does not come with this right out of the box. I tried Amazon CloudFront; since I was already on Amazon S3, I thought this would be super easy and straightforward to set up... It was not. I was unable to get it working even though I followed all the online steps and even reached out to Amazon for help.

I'd used CloudFlare in the past, and thought let me see if I can set up CloudFlare on an Amazon S3 bucket. The setup for this was so basic and easy... I had it set up with caching and SSL within 5 minutes, and it was 100% free.

7 upvotes·143.3K views
pk-katana, Engineering Lead at Katana MRP

Sometimes #ad-blocking add-ons can cause a real headache when working with JavaScript apps. Onboarding assistants (Appcues + elevio), chat (Intercom), and product usage insight (Hotjar) have all landed on their blacklists. I guess there is a perfectly good reason for this that I just don't know.

In order to fix this, we had to set up our own content delivery service. We chose Amazon CloudFront and Amazon S3 to do the job because they have good synergy with the Heroku PaaS we are already using.
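For illustration, here is a hedged AWS CDK (TypeScript) sketch of that kind of setup: an S3 bucket holding our own copies of the vendor scripts, fronted by a CloudFront distribution on our domain so ad-blocker blacklists don't apply. The construct names are made up, and the post doesn't say how the distribution was actually provisioned.

```typescript
// Hypothetical CDK stack: S3 bucket for self-hosted JavaScript assets,
// served through a CloudFront distribution.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";

export class AssetCdnStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket containing the self-hosted scripts (Intercom, Hotjar snippets, etc.).
    const bucket = new s3.Bucket(this, "VendorScripts");

    // CloudFront distribution serving the bucket from edge locations over HTTPS.
    new cloudfront.Distribution(this, "VendorScriptsCdn", {
      defaultBehavior: {
        origin: new origins.S3Origin(bucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
      },
    });
  }
}
```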

7 upvotes·36.9K views
bramzor, Founder at CloudvCard

Yesterday we moved away from using CloudFlare towards Amazon Route 53 for a few reasons. Although CloudFlare is a great platform, once you reach almost 100% AWS service integration, it becomes hard to keep CloudFlare in the stack. Also, being able to use aliases for DNS makes resolution faster because, instead of doing a CNAME and then an A record lookup, you receive the A records for the end services directly. We always loved working with CloudFlare, especially for DNS, as we already used Amazon CloudFront for CDN. But having everything within AWS makes it "cleaner" when deploying automatically using AWS CloudFormation. All that aside, the main reason for moving towards Amazon Route 53 for DNS is the ability to do geolocation- and latency-based DNS responses. Doing this outside the AWS console would increase the complexity.
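As an illustrative fragment of the alias-record point above, an alias A record pointing at a CloudFront distribution looks roughly like this in CDK (TypeScript). The post deploys with raw CloudFormation, so treat this as a sketch only; the hosted zone, record name, and distribution are assumed to exist elsewhere in the stack.

```typescript
// Hedged sketch: Route 53 alias A record that resolves directly to a CloudFront
// distribution's addresses, avoiding the extra CNAME hop.
import * as route53 from "aws-cdk-lib/aws-route53";
import * as targets from "aws-cdk-lib/aws-route53-targets";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import { Construct } from "constructs";

export function addCdnAlias(
  scope: Construct,
  zone: route53.IHostedZone,
  distribution: cloudfront.IDistribution
): route53.ARecord {
  return new route53.ARecord(scope, "CdnAlias", {
    zone,
    recordName: "www", // assumed record name
    // Alias instead of CNAME: clients get the CloudFront A records in one lookup.
    target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
  });
}
```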

5 upvotes·27.2K views