Docker

Decision at SmartZip about Amazon DynamoDB, Ruby, Node.js, AWS Lambda, New Relic, Amazon Elasticsearch Service, Elasticsearch, Superset, Amazon Quicksight, Amazon Redshift, Zapier, Segment, Amazon CloudFront, Memcached, Amazon ElastiCache, Amazon RDS for Aurora, MySQL, Amazon RDS, Amazon S3, Docker, Capistrano, AWS Elastic Beanstalk, Rails API, Rails, Algolia

juliendefrance · Principal Software Engineer at Stessa

Back in 2014, I was given an opportunity to re-architect SmartZip Analytics' platform and flagship product: SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler was to embrace the cloud and go with AWS, so we would stop reinventing the wheel and build around managed, scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend using Rails API, so we'd have independently scalable microservices from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined it with Docker so each application would run within its own container, independently of the underlying host configuration.
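
As a rough illustration of that kind of setup (not SmartZip's actual configuration), a single-container Docker application on Elastic Beanstalk can be described with a minimal Dockerrun.aws.json; the registry, image name and port below are placeholders:

    {
      "AWSEBDockerrunVersion": "1",
      "Image": {
        "Name": "example-registry/smarttargeting-api:latest",
        "Update": "true"
      },
      "Ports": [
        { "ContainerPort": 3000 }
      ]
    }

Elastic Beanstalk then pulls the image and runs the container, so the deployment step no longer depends on the host's configuration or on hand-written Capistrano scripts.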

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, you want a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems Integration: Segment / Zapier
  • Data warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was getting into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
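
To make that concrete, here is a minimal sketch of the pattern (not SmartZip's actual code): a Node.js Lambda handler that persists an incoming event to a DynamoDB table, with the table and attribute names being hypothetical:

    // Minimal sketch of a Node.js Lambda writing to DynamoDB.
    // Table name and attributes are illustrative only.
    const AWS = require('aws-sdk');
    const db = new AWS.DynamoDB.DocumentClient();

    exports.handler = async (event) => {
      // Persist the incoming event so the whole call chain stays serverless.
      await db.put({
        TableName: 'prospect-events',
        Item: {
          prospectId: event.prospectId,
          receivedAt: Date.now(),
          payload: event
        }
      }).promise();

      return { statusCode: 200, body: JSON.stringify({ ok: true }) };
    };

With both the compute (Lambda) and the storage (DynamoDB) managed and scaled by AWS, there is no server to provision for bursts of traffic.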

16 upvotes·11K views

Decision at ChecklyHQ about vuex, Knex.js, PostgreSQL, Amazon S3, AWS Lambda, Vue.js, hapi, Node.js, GitHub, Docker, Heroku

tim_nolet · Founder, Engineer & Dishwasher at Checkly

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focused on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, AppOptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out.

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update API and back end services to handle and enforce plan limits.
  • Update the UI to kindly state that plan limits are in effect on the relevant parts of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled, or not.

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of all of this and remained unchanged.
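
As a rough sketch of how such a hook can look in hapi (the feature names, auth credentials shape and route are assumptions, not Checkly's actual code), a route-level pre step can reject requests whose plan lacks a feature:

    // Sketch: enforce a plan feature flag on a hapi route.
    // Feature names, the credentials shape and the endpoint are assumptions.
    const Boom = require('boom');

    const requireFeature = (feature) => ({
      method: (request, h) => {
        const features = (request.auth.credentials.plan || {}).features || [];
        if (!features.includes(feature)) {
          throw Boom.paymentRequired(`Your plan does not include "${feature}". Please upgrade.`);
        }
        return h.continue;
      }
    });

    // 'server' is an existing hapi server instance.
    server.route({
      method: 'POST',
      path: '/v1/sms-alerts',
      options: { pre: [requireFeature('SMS_ALERTS')] },
      handler: () => ({ ok: true })
    });

Because the check is a small reusable pre step, adding a new plan feature later is just another string constant plus one line on the relevant routes.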

Hope this helps anyone who is building out their SaaS and finds themselves in a similar situation.

14 upvotes·5.6K views

Decision at Shopify about Memcached, Redis, MySQL, Google Kubernetes Engine, Kubernetes, Docker

kirs · Production Engineer at Shopify

At Shopify, over the years, we moved from shards to the concept of "pods". A pod is a fully isolated instance of Shopify with its own datastores like MySQL, Redis, Memcached. A pod can be spawned in any region. This approach has helped us eliminate global outages. As of today, we have more than a hundred pods, and since moving to this architecture we haven't had any major outages that affected all of Shopify. An outage today only affects a single pod or region.

As we grew into hundreds of shards and pods, it became clear that we needed a solution to orchestrate those deployments. Today, we use Docker, Kubernetes, and Google Kubernetes Engine to make it easy to bootstrap resources for new Shopify Pods.

13 upvotes·7.1K views

Decision at Dubsmash about Docker Compose, Docker, ContainerTools

tspecht · Co-Founder and CTO at Dubsmash

On the backend side we started using Docker almost 2 years ago. Looking back, this was absolutely the right decision, as running things manually with so many services and so few engineers wouldn’t have been possible at all.

While in the beginning we used it mostly to ease up local development, we quickly started using it to also run our entire CI & CD pipeline on top of it. This not only enabled us to speed things up drastically locally, by using Docker Compose to spin up different services & dependencies and making sure they can talk to each other, but also made sure that we had reliable builds on our build infrastructure and could easily debug problems using the baked images in case anything should go wrong. Using Docker was a slight change in the beginning, but we ultimately found that it forces you to think through how your services are composed and structured, and thus improves the way you structure your systems.
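
A minimal docker-compose sketch of that pattern might look like the following; the service names, images and credentials are illustrative, not Dubsmash's actual stack:

    # Sketch: an app container plus its dependencies, wired together for local dev and CI.
    # Images, ports and credentials below are assumptions for illustration only.
    version: "3"
    services:
      api:
        build: .
        depends_on:
          - db
          - cache
        environment:
          DATABASE_URL: postgres://postgres:postgres@db:5432/app
          CACHE_URL: redis://cache:6379
        ports:
          - "8000:8000"
      db:
        image: postgres:11
        environment:
          POSTGRES_PASSWORD: postgres
      cache:
        image: redis:5

The same file can back both local development and the CI build, which is what keeps the two environments consistent.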

#ContainerTools

13 upvotes·883 views

Decision at SendGrid about Docker, BuildTestDeploy, VirtualMachinePlatformsContainers

sethgrid · Principal Software Developer at SendGrid

For the unit-integration layer that tests transactional emails, we leverage Docker. Our incoming edge is when the upstream service is finished processing a message and hands it to us for delivery, and then our outgoing edge is actually communicating with someone's inbox. We don't actually want to set up a bunch of receiving MTAs and such, but we still need to test behavior at that layer. Our solution is still a work in progress, but it gets the lion's share of use cases covered so we can confidently refactor and push new features and know we did not break anything.

This Docker setup leverages DNSMasq for setting up MX and A records and ensures they point to running mock inbox sinks. These inboxes are configured from a base image with multiple options. We can specify that the sink's TLS certificate is expired or improperly set up, we can have them respond slowly or with given errors at different SMTP conversation parts. We can ensure that we are backing off and deferring email if the inbox provider says to do so. This detailed faking of the outside world allows us to automate all kinds of outside behavior and ensure that our services behave as expected.

We develop locally in Docker, as just described. Our docker-compose file spins up containers with fancy DNS settings and all our dependencies, allowing us to test the MTA against a variety of MX and TLS settings, alongside a variety of potential inbox responses and behaviors. Everyone uses their editor of choice, and we often pair up on more complex tasks to prevent siloed system understanding.
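
A stripped-down sketch of that kind of compose file is shown below; the dnsmasq image, the mock inbox service, the domains and the addresses are all assumptions used for illustration, not SendGrid's actual setup:

    # Sketch: resolve MX/A records through a dnsmasq container that points at a
    # mock inbox sink, so the MTA under test "delivers" to a fake inbox.
    # Images, domains and IPs are assumptions for illustration only.
    version: "3"
    services:
      dns:
        image: andyshinn/dnsmasq            # assumed public dnsmasq image
        command: >
          --mx-host=example.test,inbox.example.test
          --address=/inbox.example.test/172.28.0.10
        networks:
          testnet:
            ipv4_address: 172.28.0.2
      inbox-sink:
        image: local/mock-inbox             # assumed mock SMTP sink (e.g. an expired-TLS variant)
        networks:
          testnet:
            ipv4_address: 172.28.0.10
      mta:
        build: .
        dns:
          - 172.28.0.2                      # resolve external lookups through the dnsmasq container
        networks:
          - testnet
    networks:
      testnet:
        ipam:
          config:
            - subnet: 172.28.0.0/16

Swapping the sink image or the dnsmasq flags is then enough to simulate slow inboxes, bad certificates, or deferral responses.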

#VirtualMachinePlatformsContainers #BuildTestDeploy

12 upvotes·733 views

Decision at SalesAutopilot Kft. about AWS CodePipeline, Jenkins, Docker, vuex, Vuetify, Vue.js, jQuery UI, Redis, MongoDB, MySQL, Amazon Route 53, Amazon CloudFront, Amazon SNS, Amazon CloudWatch, GitHub

gykhauth · CTO at SalesAutopilot Kft.

I'm the CTO of a marketing automation SaaS. Because of the continuously increasing load we moved to the AWS cloud. We are using more and more features of AWS: Amazon CloudWatch, Amazon SNS, Amazon CloudFront, Amazon Route 53 and so on.

Our main database is MySQL, but for the hundreds of GB of document data we use MongoDB more and more. We started to use Redis for caching and other time-sensitive operations.

On the front-end we use jQuery UI + Smarty, but we are now refactoring our app to use Vue.js with Vuetify. Because our app is relatively complex, we need to use vuex as well.

On the development side we use GitHub as our main repo, Docker for local and server environments, and Jenkins and AWS CodePipeline for continuous integration.

11 upvotes·9.7K views

Decision about Docker Compose, Docker, Home Assistant

Zach Holman

I've recently been getting really into home automation - you know, making my house Smart™, which basically means half the time my lights don't turn on and the other half of the time apparently my kitchen faucet needs a static IP address.

But it's been a blast! It's a fun way to write code for yourself, outside of work, to have an impact in the real world. It's a nice way of falling in love with a different side of programming again.

I've used Apple's HomeKit for a while, since we're pretty all-in on Apple devices at home, but the rough edges have been grating at me more and more. HomeKit is so opaque - you can't see what's wrong, why a device is unresponsive, and most importantly: the compatibility isn't there. HomeKit has a limited selection of (more expensive) accessories, and as you go beyond just simple LED lights, you want a bit more power. Also, we're programmers, dammit, gimme all the things.

Anyway, I've switched to Home Assistant over the last few months, and I'm kicking myself for not making the switch earlier. As a programmer, it's great: you get more capability than pretty much any other smart home platform (integrations have been written for most devices and technologies out there today), it's easier to debug, and when you want to go bigger than just simple lights on/off, HA has some really powerful stuff behind it.

I use Home Assistant in conjunction with Docker and Docker Compose; since the config is extracted out, upgrades are usually as easy as a pull of the latest version. I've just started digging into writing integrations for a lesser-used device that I have at home, and HA makes it pretty straightforward to just magically add it to the home network.
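
For reference, a minimal docker-compose setup along those lines might look like the sketch below; the paths are placeholders, not my actual config:

    # Sketch: Home Assistant in a container with the config directory extracted out,
    # so upgrading is just pulling a newer image and recreating the container.
    version: "3"
    services:
      homeassistant:
        image: homeassistant/home-assistant:latest
        volumes:
          - /path/to/ha-config:/config     # placeholder path for the extracted config
          - /etc/localtime:/etc/localtime:ro
        network_mode: host                  # simplest way to let HA discover devices on the LAN
        restart: unless-stopped

Upgrading is then roughly a docker-compose pull followed by docker-compose up -d, with the config directory surviving the swap.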

It plays well with others, too- we require a VPN connection in to the home network to access our Home Assistant install, and HA has a few tricks to help with that (ignoring the VPN route if you're on a local network, etc). Nice client support for iOS and Android, too.

Anyway, big fan of Home Assistant if you want to go beyond simple home automations and setup. Wish I would have done it a lot earlier. Also, big fan of jumping into all this if you have the time and interest to do so- it's been tickling a different part of my code brain than I've had access to in awhile, and that's been fun in and of itself.

11 upvotes·2.2K views

Decision about Amazon EC2, LXC, CircleCI, Docker, Git, Vault, Apache Maven, Slack, Jenkins, TeamCity, Logstash, Kibana, Elasticsearch, Ansible, VirtualBox, Vagrant

Puciek · DevOps guy at X20X Development LTD

Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of repeating the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it with a "live example" of how Rome got built, assuming that the current methodology consists only of a readme.md and wishes of good luck (as it usually does ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that comes the first hurdle: converting all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do things, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.
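
The wiring between the two is tiny; a minimal Vagrantfile along these lines (box name and playbook path are assumptions for illustration) is enough to make vagrant up run the same Ansible playbooks you will later reuse outside the developer box:

    # Sketch: Vagrant box provisioned by the same Ansible playbook used elsewhere.
    # Box name, IP and playbook path are assumptions.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/bionic64"
      config.vm.network "private_network", ip: "192.168.50.10"

      config.vm.provision "ansible" do |ansible|
        ansible.playbook = "provisioning/site.yml"   # the playbook(s) converted from the readme
        ansible.extra_vars = { env: "development" }  # the few debugging-friendly settings
      end
    end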

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs. how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no extra work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, on the open net, behind a VPN - you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

If we are happy with the state of the Ansible setup, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:

  1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around.
  2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
  3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
  4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

Speaking of deployments, I generally try to keep it simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I also constantly peek at the load and at whether we are getting the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare metal boxes. That is another area where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, in a similar way as you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other - quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

10 upvotes·2 comments·20K views

Decision at CodeFactor about Google Cloud Functions, Azure Functions, AWS Lambda, Docker, Google Compute Engine, Microsoft Azure, Amazon EC2, CodeFactor.io, Kubernetes, Devops, AI, Machinelearning, Automation, Startup, Autoscale, Containerization, IAAS, SAAS

kaskas · Entrepreneur & Engineer

CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon's Amazon EC2 , Microsoft's Microsoft Azure, and Google Compute Engine.

AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half the age of AWS, but they also satisfied our technical requirements.

It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive and complete when it comes to #Machinelearning and #AI.

The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, the Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific - I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.

In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.

10 upvotes·12.2K views

Decision at Portainer about Docker, Go

deviantony · Co-founder and Software Engineer at Portainer.io

Go was a natural choice for the backend of the Portainer web application. It makes the creation of HTTP API/services a breeze with a lot of standard features available in the ecosystem.

One of the main things we like about Go is its synergy with Docker and how easy it is to leverage this synergy to distribute efficient software:

  • Go makes it easy to compile a program for multiple platforms and OSes (it's just a matter of passing options when starting the compilation process, regardless of the execution context)
  • Go binaries are lightweight, fast and can have a low memory footprint

Combining these points with the empty scratch Docker image and multi-platform images, we can distribute Portainer for any environment that is running Docker. It allows our users to get started using the software in a matter of seconds.
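
A minimal sketch of that combination (not Portainer's actual build) is a multi-stage Dockerfile that compiles a static Go binary and copies it into the empty scratch image; the module path and binary name are placeholders:

    # Sketch: build a static Go binary, then ship it on top of the empty scratch image.
    # Module path and binary name are placeholders.
    FROM golang:1.12 AS build
    WORKDIR /src
    COPY . .
    # CGO disabled so the resulting binary has no libc dependency and runs on scratch.
    RUN CGO_ENABLED=0 GOOS=linux go build -o /app ./cmd/server

    FROM scratch
    COPY --from=build /app /app
    EXPOSE 9000
    ENTRYPOINT ["/app"]

Switching GOOS/GOARCH before the go build step is all it takes to produce images for other platforms.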

Go is also heavily geared toward the creation of HTTP/API services and is a language that is easy to read and also quite easy to learn, making it a first choice in the context of Portainer.

8 upvotes·18.7K views