MySQL

Decision at The New York Times about Apache HTTP Server, Kafka, Node.js, GraphQL, Apollo, React, PHP, MySQL

nsrockwell, CTO at NY Times
Apache HTTP Server
Kafka
Node.js
GraphQL
Apollo
React
PHP
MySQL

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) stack, and the front end framework in particular. So I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember, from being outside the company, when that was called NYT5, when it had launched. I'd been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. Right now, the new front end is React-based and uses Apollo, and we've been in a long, gradual rollout of the core experiences.

React is now talking to GraphQL as the primary API. There's also a Node.js backend to the front end, which is mainly for server-side rendering.

Behind that, the main repository for the GraphQL server is a big-table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.
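None of NYT's code appears in the post; as a minimal sketch of that shape in TypeScript (topic, type, and field names invented for illustration), a GraphQL server can keep a local read model current from a Kafka topic and serve it through resolvers:

```typescript
// Sketch only: a GraphQL server reading from a store fed by Kafka.
// Topic, type, and field names here are invented, not NYT's schema.
import { ApolloServer, gql } from "apollo-server";
import { Kafka } from "kafkajs";

// Local read model, kept current from the Kafka pipeline.
const articles = new Map<string, { id: string; headline: string }>();

const kafka = new Kafka({ clientId: "graphql-api", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "graphql-readers" });

async function consume(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "articles", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      const article = JSON.parse(message.value.toString());
      articles.set(article.id, article); // update the read model
    },
  });
}

const typeDefs = gql`
  type Article {
    id: ID!
    headline: String!
  }
  type Query {
    article(id: ID!): Article
  }
`;

const resolvers = {
  Query: {
    // Resolvers read from the store the consumer maintains.
    article: (_: unknown, args: { id: string }) => articles.get(args.id) ?? null,
  },
};

consume().catch(console.error);
new ApolloServer({ typeDefs, resolvers }).listen(4000);
```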

22 upvotes·1 comment·48.2K views

Decision at SmartZip about Amazon DynamoDB, Ruby, Node.js, AWS Lambda, New Relic, Amazon Elasticsearch Service, Elasticsearch, Superset, Amazon Quicksight, Amazon Redshift, Zapier, Segment, Amazon CloudFront, Memcached, Amazon ElastiCache, Amazon RDS for Aurora, MySQL, Amazon RDS, Amazon S3, Docker, Capistrano, AWS Elastic Beanstalk, Rails API, Rails, Algolia

juliendefrance, Principal Software Engineer at Stessa
Amazon DynamoDB
Ruby
Node.js
AWS Lambda
New Relic
Amazon Elasticsearch Service
Elasticsearch
Superset
Amazon Quicksight
Amazon Redshift
Zapier
Segment
Amazon CloudFront
Memcached
Amazon ElastiCache
Amazon RDS for Aurora
MySQL
Amazon RDS
Amazon S3
Docker
Capistrano
AWS Elastic Beanstalk
Rails API
Rails
Algolia

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt, and I knew things had to change radically. The first enabler was to embrace the cloud and go with AWS, so we would stop reinventing the wheel and build around managed, scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. However, we broke up the monolith and decoupled the front-end application from the backend using Rails API, so we'd have independently scalable microservices from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker, so each application would run in its own container, independent of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched the pre-existing local and network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrating to Amazon RDS for Aurora / MySQL when it was released. Once again, you want a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

Caching: Amazon ElastiCache / Memcached
CDN: Amazon CloudFront
Systems integration: Segment / Zapier
Data warehousing: Amazon Redshift
BI: Amazon Quicksight / Superset
Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
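As a minimal sketch of one link in such a serverless chain (the table name, event shape, and attributes are assumptions, not SmartZip's code), a Node.js Lambda handler can write straight to DynamoDB:

```typescript
// Illustrative Lambda handler writing to DynamoDB; the table name,
// event shape, and attributes are hypothetical.
import { DynamoDB } from "aws-sdk";

const dynamo = new DynamoDB.DocumentClient();

export const handler = async (event: { leadId: string; score: number }) => {
  // DynamoDB absorbs sudden bursts, and the function costs nothing while idle.
  await dynamo
    .put({
      TableName: "leads", // hypothetical table
      Item: { leadId: event.leadId, score: event.score, updatedAt: Date.now() },
    })
    .promise();

  return { statusCode: 200 };
};
```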

15 upvotes·10.8K views

Decision at Stitch about PostgreSQL, MySQL, Clojure

jakestein, CEO at Stitch
PostgreSQL
MySQL
Clojure

The majority of our Clojure microservices are simple web services that wrap a transactional database with CRUD operations and a little bit of business logic. We use both MySQL and PostgreSQL for transactional data persistence, having transitioned from the former to the latter for newer services to take advantage of the new features coming out of the Postgres community.

Most of our Clojure best practices can be summed up by the phrase "keep it simple." We avoid more complex web frameworks in favor of using the Ring library to build web service routes, and we prefer sending SQL directly to the JDBC library rather than using a complicated ORM or SQL DSL.
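Their code is Clojure (Ring routes, SQL sent straight to JDBC) and none of it appears in the post; as a rough TypeScript analogue of the same keep-it-simple shape, with a hypothetical accounts table, a route handler just hands SQL to the driver:

```typescript
// Rough TypeScript analogue of the pattern described above; the real
// services use Clojure (Ring + JDBC). Table name is hypothetical.
import express from "express";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const app = express();

// Plain SQL straight to the driver: no ORM, no SQL DSL.
app.get("/accounts/:id", async (req, res) => {
  const { rows } = await pool.query("SELECT * FROM accounts WHERE id = $1", [
    req.params.id,
  ]);
  res.json(rows[0] ?? null);
});

app.listen(8080);
```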

15 upvotes·4K views

Decision about Apollo, Next.js, styled-components, React, graphql-yoga, Prisma, MySQL, GraphQL, Node.js

deDykz, PayHub Ghana Limited
Apollo
Next.js
styled-components
React
graphql-yoga
Prisma
MySQL
GraphQL
Node.js

I just finished a web app meant for a business that offers training programs for certain professional courses. I chose this stack to test out my skills in GraphQL and React. I used Node.js, GraphQL, and MySQL for the #Backend, utilizing Prisma as a database interface for MySQL to provide CRUD APIs, and graphql-yoga as the server. For the #frontend I chose React, styled-components for styling, Next.js for routing and SSR, and Apollo for data management. I really liked the outcome and I will definitely use this stack in future projects.
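As a minimal sketch of how those backend pieces fit together (using graphql-yoga's newer createYoga API and an invented Course model, since the post doesn't show the actual schema):

```typescript
// Sketch of the backend shape: graphql-yoga serving a schema whose
// resolvers use Prisma for CRUD over MySQL. The Course model is invented.
import { createServer } from "node:http";
import { createYoga, createSchema } from "graphql-yoga";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

const yoga = createYoga({
  schema: createSchema({
    typeDefs: /* GraphQL */ `
      type Course {
        id: ID!
        title: String!
      }
      type Query {
        courses: [Course!]!
      }
    `,
    resolvers: {
      Query: {
        // Prisma generates course.findMany() from a "model Course"
        // block in schema.prisma (assumed here).
        courses: () => prisma.course.findMany(),
      },
    },
  }),
});

createServer(yoga).listen(4000);
```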

13 upvotes·10.1K views

Decision at Shopify about Memcached, Redis, MySQL, Google Kubernetes Engine, Kubernetes, Docker

kirs, Production Engineer at Shopify
Memcached
Redis
MySQL
Google Kubernetes Engine
Kubernetes
Docker

At Shopify, over the years, we moved from shards to the concept of "pods". A pod is a fully isolated instance of Shopify with its own datastores like MySQL, Redis, Memcached. A pod can be spawned in any region. This approach has helped us eliminate global outages. As of today, we have more than a hundred pods, and since moving to this architecture we haven't had any major outages that affected all of Shopify. An outage today only affects a single pod or region.

As we grew into hundreds of shards and pods, it became clear that we needed a solution to orchestrate those deployments. Today, we use Docker, Kubernetes, and Google Kubernetes Engine to make it easy to bootstrap resources for new Shopify Pods.
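None of this code is in the post, but the isolation property can be sketched (all names below are invented, not Shopify's): each pod owns its datastore endpoints, and a shop's requests only ever resolve to that shop's pod, so a pod-level failure stays local.

```typescript
// Illustration of pod isolation (names invented; not Shopify's code):
// each pod owns its own datastores, so a failure stays inside one pod.
interface Pod {
  id: number;
  region: string;
  mysqlHost: string;
  redisHost: string;
  memcachedHost: string;
}

const pods: Pod[] = [
  { id: 1, region: "us-east1", mysqlHost: "mysql.pod1", redisHost: "redis.pod1", memcachedHost: "mc.pod1" },
  { id: 2, region: "europe-west4", mysqlHost: "mysql.pod2", redisHost: "redis.pod2", memcachedHost: "mc.pod2" },
];

// A shop lives on exactly one pod; its requests never touch another pod.
const shopToPod = new Map<string, number>([
  ["shop-a.example", 1],
  ["shop-b.example", 2],
]);

function datastoresFor(shopDomain: string): Pod {
  const podId = shopToPod.get(shopDomain);
  const pod = pods.find((p) => p.id === podId);
  if (!pod) throw new Error(`no pod for ${shopDomain}`);
  return pod;
}

console.log(datastoresFor("shop-a.example").mysqlHost); // "mysql.pod1"
```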

13 upvotes·7.1K views

Decision at Zulip about Elasticsearch, MySQL, PostgreSQL

tabbott, Founder at Zulip
Elasticsearch
MySQL
PostgreSQL

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012 because we'd heard it was a good choice for a startup, with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
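For a concrete sense of the partial-index trick (illustrative schema only; Zulip's real tables differ), a partial index covers just the rows the hot query touches, and the built-in full-text search runs with no external engine:

```typescript
// Illustrative partial index and full-text search in Postgres
// (schema invented for the example; Zulip's real tables differ).
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // A partial index covers only unread rows, so "fetch my unread messages"
  // stays fast without a separate unread-tracking table.
  await client.query(
    `CREATE INDEX IF NOT EXISTS user_msg_unread_idx
       ON user_message (user_id, message_id)
       WHERE NOT read`
  );

  // Built-in full-text search, no Elasticsearch required.
  const { rows } = await client.query(
    `SELECT id FROM message
      WHERE to_tsvector('english', content) @@ plainto_tsquery('english', $1)`,
    ["database migration"]
  );
  console.log(rows);

  await client.end();
}

main().catch(console.error);
```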

I can't recommend it highly enough.

12 upvotes·20.7K views

Decision at ReadMe.io about Compose, MongoLab, MongoDB Atlas, PostgreSQL, MySQL, MongoDB

gkoberger
Compose
MongoLab
MongoDB Atlas
PostgreSQL
MySQL
MongoDB

We went with MongoDB, almost by mistake. I had never used it before, but I knew I wanted the *EAN part of the MEAN stack, so why not go all in. I come from a background of SQL (first MySQL, then PostgreSQL), so I definitely abused Mongo at first... by trying to turn it into something more relational than it should be. But hey, data is supposed to be relational, so there wasn't really any way to get around that.

There's a lot I love about MongoDB, and a lot I hate. I still don't know if we made the right decision. We've been able to build much quicker, but we also have had some growing pains. We host our databases on MongoDB Atlas, and I can't say enough good things about it. We had tried MongoLab and Compose before it, and with MongoDB Atlas I finally feel like things are in a good place. I don't know if I'd use it for a one-off small project, but for a large product Atlas has given us a ton more control, stability and trust.

11 upvotes·16.8K views

Decision at SalesAutopilot Kft. about AWS CodePipeline, Jenkins, Docker, vuex, Vuetify, Vue.js, jQuery UI, Redis, MongoDB, MySQL, Amazon Route 53, Amazon CloudFront, Amazon SNS, Amazon CloudWatch, GitHub

gykhauth, CTO at SalesAutopilot Kft.
AWS CodePipeline
Jenkins
Docker
vuex
Vuetify
Vue.js
jQuery UI
Redis
MongoDB
MySQL
Amazon Route 53
Amazon CloudFront
Amazon SNS
Amazon CloudWatch
GitHub

I'm the CTO of a marketing automation SaaS. Because of the continuously increasing load, we moved to the AWS cloud. We are using more and more features of AWS: Amazon CloudWatch, Amazon SNS, Amazon CloudFront, Amazon Route 53 and so on.

Our main database is MySQL, but for the hundreds of GB of document data we use MongoDB more and more. We started using Redis for caching and other time-sensitive operations.
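A minimal cache-aside sketch of that Redis usage (table, key, and connection details invented for illustration): read from Redis first, fall back to MySQL on a miss, and repopulate the cache with a short TTL.

```typescript
// Cache-aside sketch for the Redis layer; table, key, and connection
// details are invented for illustration.
import Redis from "ioredis";
import mysql from "mysql2/promise";

const redis = new Redis();
const pool = mysql.createPool({ host: "localhost", database: "app" });

async function getSubscriber(id: string) {
  const cacheKey = `subscriber:${id}`;

  // Time-sensitive reads come from Redis when possible...
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // ...falling back to the MySQL source of truth on a miss.
  const [rows] = await pool.execute("SELECT * FROM subscribers WHERE id = ?", [id]);
  const subscriber = (rows as any[])[0] ?? null;

  // Repopulate with a short TTL so cached data stays fresh.
  if (subscriber) await redis.set(cacheKey, JSON.stringify(subscriber), "EX", 60);
  return subscriber;
}
```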

On the front end we use jQuery UI + Smarty, but we are now refactoring our app to use Vue.js with Vuetify. Because our app is relatively complex, we need vuex as well.
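As a small sketch of the kind of shared state that motivates vuex (store shape and endpoint invented; Vuex 4 API):

```typescript
// Minimal Vuex 4 store sketch; state, mutation, and endpoint names invented.
import { createStore } from "vuex";

interface State {
  campaigns: string[];
  loading: boolean;
}

export const store = createStore<State>({
  state: () => ({ campaigns: [], loading: false }),
  mutations: {
    setCampaigns(state, campaigns: string[]) {
      state.campaigns = campaigns;
    },
    setLoading(state, loading: boolean) {
      state.loading = loading;
    },
  },
  actions: {
    // Centralized async flow: components dispatch, the store mutates.
    async fetchCampaigns({ commit }) {
      commit("setLoading", true);
      const res = await fetch("/api/campaigns"); // hypothetical endpoint
      commit("setCampaigns", await res.json());
      commit("setLoading", false);
    },
  },
});
```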

On the development side we use GitHub as our main repo, Docker for local and server environments, and Jenkins and AWS CodePipeline for continuous integration.

11 upvotes·9.7K views

Decision at Shopify about Redis, Memcached, MySQL, Rails

kirs, Production Engineer at Shopify
Redis
Memcached
MySQL
Rails

As is common in the Rails stack, since the very beginning, we've stayed with MySQL as a relational database, Memcached for key/value storage and Redis for queues and background jobs.

In 2014, we could no longer store all our data in a single MySQL instance, even by buying better hardware. We decided to use sharding and split all of Shopify into dozens of database partitions.

Sharding played nicely for us because Shopify merchants are isolated from each other, and we were able to put a subset of merchants on a single shard. It would have been harder if our business had assumed shared data between customers.

The sharding project bought us some time regarding database capacity, but as we soon found out, there was a huge single point of failure in our infrastructure. All those shards were still using a single Redis. At one point, the outage of that Redis took down all of Shopify, causing a major disruption we later called “Redismageddon”. This taught us an important lesson to avoid any resources that are shared across all of Shopify.

Over the years, we moved from shards to the concept of "pods". A pod is a fully isolated instance of Shopify with its own datastores like MySQL, Redis, memcached. A pod can be spawned in any region. This approach has helped us eliminate global outages. As of today, we have more than a hundred pods, and since moving to this architecture we haven't had any major outages that affected all of Shopify. An outage today only affects a single pod or region.

10 upvotes·13.9K views

Decision at Config Cat about .NET, MySQL, Visual Studio Code, Angular 2, C#, TypeScript, Linode, Frontend, Backend, Configcat

ConfigCat
.NET
MySQL
Visual Studio Code
Angular 2
C#
TypeScript
Linode
#Frontend
#Backend
#Configcat

When designing the architecture for #Configcat, we were dreaming of a system that runs on a small scale on low-cost infrastructure at the beginning and scales well later on when the requirements change. It should be platform-independent, high-performing, and robust at the same time. Since most of our team were born and raised on Microsoft's enterprise-grade technologies over the last decade, we wanted to build on that experience. Finding the best solution was quite challenging.

Finally, we came up with the idea of a .NET Core backend, because it runs on all platforms, is highly scalable, and we could start up with a $5 Linode Linux server. As a #frontend framework, we chose Angular, mostly because of TypeScript, which felt familiar and was easy to get used to after strongly typed languages like C#, and because the community support behind Angular 2 is awesome. Visual Studio Code makes coding sessions with Live Share great fun and very productive. MySQL as a database is, again, very affordable in the beginning, performs great, scales well, and integrates with .NET Core's Entity Framework super easily.

10 upvotes·3.3K views