Trending Feed

Decision at FundsCorner about Zappa, AWS Lambda, SQLAlchemy, Python, Amazon SQS, Node.js, MongoDB Stitch, PostgreSQL, MongoDB

jeyabalajis, CTO at FundsCorner
Zappa
AWS Lambda
SQLAlchemy
Python
Amazon SQS
Node.js
MongoDB Stitch
PostgreSQL
MongoDB

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool for the job:
  • The data replication must be near real-time, yet it should NOT impact the production database
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB, and one of the services it offers is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Conveniently enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen for message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented the source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven, horizontally scalable Lambda service is dumb-easy.
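
To make the moving parts concrete, here is a minimal sketch of what such a replication worker could look like. It is an illustration only: the queue URL, the `customers` table schema, and the change-event shape (mirroring MongoDB's operationType / fullDocument / documentKey fields) are assumptions, not our actual production code.

```python
# Minimal sketch of the replication worker. Queue URL, table schema, and
# event shape are illustrative assumptions, not FundsCorner's actual code.
import json

import boto3
from sqlalchemy import Column, MetaData, String, Table, create_engine
from sqlalchemy.dialects.postgresql import insert

engine = create_engine("postgresql://user:pass@warehouse-host/dw")  # placeholder DSN
metadata = MetaData()

# Target table structure modelled through SQLAlchemy (assumed schema).
customers = Table(
    "customers", metadata,
    Column("_id", String, primary_key=True),
    Column("name", String),
    Column("status", String),
)

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/mongo-changes"  # placeholder


def mirror_change(event: dict) -> None:
    """Mirror one MongoDB change event onto the target warehouse."""
    op = event["operationType"]
    with engine.begin() as conn:
        if op in ("insert", "update", "replace"):
            # Upsert keyed on the Mongo _id, assuming the event carries
            # only the modelled columns.
            stmt = insert(customers).values(**event["fullDocument"])
            conn.execute(stmt.on_conflict_do_update(
                index_elements=["_id"],
                set_={c: stmt.excluded[c] for c in ("name", "status")},
            ))
        elif op == "delete":
            conn.execute(customers.delete().where(
                customers.c._id == event["documentKey"]["_id"]))


def poll_once() -> None:
    """Long-poll SQS and apply each change. In a Zappa deployment, SQS would
    typically invoke the handler directly instead of being polled like this."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        mirror_change(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```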

In the end, we got a highly scalable, near-real-time Change Data Replication service that "just works", and we deployed it to production in a matter of days!

17 upvotes·7.3K views

Decision about MongoDB, GraphQL, Node.js

juank11memphis
MongoDB
GraphQL
Node.js

I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog where you can save the movies you want to see and rate the movies you have already seen. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.

For the #BackEnd I decided to use Node.js, GraphQL and MongoDB:

  1. Node.js has a huge community, so it will always be a safe choice in terms of libraries and finding solutions to problems you may have.

  2. GraphQL, because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option: it feels more natural for writing APIs, it improves development velocity, and by definition it fixes the over-fetching and under-fetching problems that are so common in REST APIs (see the sketch after this list). On top of that, the community is getting bigger and bigger.

  3. MongoDB was my choice for the database because I already have a lot of experience working with it and because, despite the bad reputation it has acquired in recent months, I still believe it is a powerful database for a very long list of use cases, including the one I needed for my website.
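
To make the over-fetching point concrete, here is a minimal sketch of a GraphQL call from Python; the endpoint and the movie schema are made up for illustration and are not the actual #MovieGeeks API.

```python
# A GraphQL client asks for exactly the fields it needs -- no more, no less.
# Endpoint and schema below are illustrative, not the real #MovieGeeks API.
import requests

query = """
query {
  movie(id: "42") {
    title
    rating    # only these two fields come back; a REST /movies/42
  }           # endpoint would typically return the whole resource
}
"""

resp = requests.post("https://api.example.com/graphql", json={"query": query})
resp.raise_for_status()
print(resp.json()["data"]["movie"])
```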

14 upvotes·12 comments·4.5K views

Decision at Evojam about Azure Functions, Firebase, AWS Lambda, Serverless

nowaq, Co-founder at Evojam
Azure Functions
Firebase
AWS Lambda
Serverless

In a couple of recent projects we had an opportunity to try out the new serverless approach to building web applications. It wasn't so much a question of which particular vendor to use, but rather whether we could consider serverless a viable option for building apps at all. Obviously, our goal was also to get a feel for this technology and gain some hands-on experience.

We did consider AWS Lambda, Firebase from Google, as well as Azure Functions. Eventually we went with AWS Lambdas.

PROS
  • No servers to manage (obviously!)
  • Low fixed costs – you pay only for the compute time you use
  • Automated scaling and balancing
  • Automatic failover (or, at this level of abstraction, no failover problem at all)
  • Security easier to provide and audit
  • Low overhead at the start (given a certain level of knowledge)
  • Short time to market
  • Easy handover - deployment coupled with code
  • Perfect choice for lean startups with fast-paced iterations
  • Augmentation for the classic cloud, server(full) approach
CONS
  • Little know-how and few best practices available on the market for structuring code and projects
  • Not suitable for complex business logic due to the risk of producing highly coupled code
  • Costs difficult to estimate (helpful tools: serverlesscalc.com)
  • Difficulty in migrating to other platforms (vendor lock-in ⚠️)
  • Few engineers with serverless experience on the job market
  • Steep learning curve for engineers without any cloud experience

More details are on our blog: https://evojam.com/blog/2018/12/5/should-you-go-serverless-meet-the-benefits-and-flaws-of-new-wave-of-cloud-solutions I hope it helps 🙌 & I'm curious about your experiences.

4 upvotes·1 comment·2.6K views

Decision about Fastly, Grunt, jQuery, Bootstrap, Jekyll, Let's Encrypt, Netlify, GitHub Pages, MaxCDN, StaticSiteGenerators, Webperf, GoogleFonts, CDN

jdorfman, open source sustainer at SustainOSS
Fastly
Grunt
jQuery
Bootstrap
Jekyll
Let's Encrypt
Netlify
GitHub Pages
MaxCDN
#StaticSiteGenerators
#Webperf
#GoogleFonts
#CDN

When my SSL cert on MaxCDN was expiring on my personal site, I decided it was a good time to revamp some things. Since GitHub Services is deprecated, I can no longer automate #CDN cache purges, among other things. So I decided on the following: GitHub Pages, Netlify, Let's Encrypt and Jekyll. Staying the same were Bootstrap, jQuery, Grunt & #GoogleFonts.

What's awesome about GitHub Pages is that it has a #CDN (Fastly) built in, and anytime you push to master, it purges the cache instantaneously without you having to do anything special. Netlify is magic; I highly recommend it to anyone using #StaticSiteGenerators.

For the most part, everything went smoothly. The only things I had issues with were the following:

  • If you want to point www to GitHub Pages you need to rename the repo to www
  • If you edit something in the _config.yml you need to restart bundle exec jekyll s, or the changes won't show
  • I had to disable the Grunt htmlmin module. I replaced it with a Jekyll layout that compresses HTML for #webperf

Last but certainly not least, I made a donation to Let's Encrypt. If you use their service consider doing it too: https://letsencrypt.org/donate/

8 upvotes·1 comment·6K views

Decision at Luminopia about Amazon S3, Zencoder, Amazon Elastic Transcoder, MediaTranscoding, VideoStreaming

awendland, CTO at Luminopia
Amazon S3
Zencoder
Amazon Elastic Transcoder
#MediaTranscoding
#VideoStreaming

We were looking for a versatile #MediaTranscoding service for #video to convert TV shows and movies from large content providers into web #VideoStreaming formats. These content providers gave us files ranging from Apple ProRes to H.264, with file sizes from 1 GB to 100 GB, and we needed a tool that could cope with all of it. We looked at Amazon Elastic Transcoder and Zencoder, and eventually chose Zencoder because it had support for every format we needed, good handling of sound-channel remapping, and a clear UI with fast processing times. We automated our usage by writing a simple Python script to interact with its API, and hosted the input and output AV files on Amazon S3, which it could easily talk to. So far we've converted 15 TB, representing several thousand files, using the service and are quite happy!
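
As a rough illustration, a script along these lines can drive Zencoder's v2 jobs API; the S3 paths, API key, and output settings below are placeholders, not our actual configuration.

```python
# Sketch of submitting a Zencoder transcoding job: S3 input -> H.264 MP4.
# API key, S3 URLs, and output settings are placeholders for illustration.
import requests

ZENCODER_API_KEY = "your-api-key"  # placeholder


def transcode(input_url: str, output_url: str) -> dict:
    """Submit one job and return Zencoder's response (includes the job id)."""
    job = {
        "input": input_url,        # e.g. "s3://bucket/show-ep1.mov"
        "outputs": [{
            "url": output_url,     # e.g. "s3://bucket/out/ep1.mp4"
            "format": "mp4",
            "video_codec": "h264",
            "audio_codec": "aac",
        }],
    }
    resp = requests.post(
        "https://app.zencoder.com/api/v2/jobs",
        json=job,
        headers={"Zencoder-Api-Key": ZENCODER_API_KEY},
    )
    resp.raise_for_status()
    return resp.json()
```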

8 upvotes·2 comments·3.4K views

Decision at Airbnb about Apollo, Figma, Zeplin, React Storybook, StorybookDesignStack, StorybookStack, ReactDesignStack

adamrneary, Engineer at Airbnb
Apollo
Figma
Zeplin
React Storybook
#StorybookDesignStack
#StorybookStack
#ReactDesignStack

The tool we use for editing UI is React Storybook. It is the perfect place to make sure your work aligns with designs to the pixel across breakpoints. You get fast hot-module reloading and a couple of checkboxes to enable/disable browser features like Flexbox.

The only trick I apply to Storybook is loading the stories with the mock data we’ve extracted from the API. If your mock data really covers all the various possible states for your UI, you are good to go. Beyond that, if you have alternative states you want to account for, perhaps loading or error states, you can add them in manually.

This is the crux of the matter for Storybook. The story file is entirely generated by Yeoman (discussed below), and it delivers the examples from the Alps Journey by default. getSectionsFromJourney() just filters the sections.

One other hack you’ll notice is that I added a pair of divs to bookend my component vertically, since Storybook renders with whitespace around the component. That is fine for buttons or UI with borders, but it’s hard to tell precisely where your component starts and ends, so I hacked them in there.

Since we are talking about how all these fabulous tools work so well together to help you be productive, can I just say what a delight it is to work on UI with Zeplin or Figma side by side with Storybook? Digging into UI in this abstract way takes all the chaos of this madcap world away, one breakpoint at a time, and in that quiet realm, you are good down to the pixel every time.

To supply Storybook and our unit tests with realistic mock data, we want to extract the mock data directly from our Shared Development Environment. As with codegen, even a small change in a query fragment should also trigger many small changes in mock data. And here, similarly, the hard part is tackled entirely by Apollo CLI, and you can stitch it together with your own code in no time.

Coming back to Zeplin and Figma briefly, they're both built to allow engineers to extract content directly to facilitate product development.

Extracting the copy for an entire paragraph is as simple as selecting the content in Zeplin and clicking the “copy” icon in the Content section of the sidebar. In the case of Zeplin, images can be extracted by selecting and clicking the “download” icon in the Assets section of the sidebar.

6 upvotes·3.3K views

Decision at Stack Overflow about .NET

NickCraver, Architecture Lead at Stack Overflow
.NET

We use .NET Core for our web socket servers, mail relays, and scheduling applications. Soon, it will power all of Stack Overflow. The ability to run on any platform, to extend and plug in the ASP.NET bits especially, and to treat almost everything as a building block you can move around has been a huge win. We're headed towards an appliance model, and with .NET Core we can finally put everything in the box... on Linux. We can re-use more code, fit all our deployment scenarios both during the move and after, and also ditch a lot of performance workarounds we had built to scale... they're in-box now.

And testing. The ability to fire up a web server and a request, and access both in a single method, is an orders-of-magnitude improvement over ASP.NET 5. We're looking forward to tremendously improving our automated test coverage in places where it's finally reasonable, in both time and effort, for devs to do so. In short: we're getting a lot more for the same dev time spent in .NET Core.

4 upvotes·1 comment·2.5K views

Decision at Magalix about Python, Go, Amazon EC2, Google Kubernetes Engine, Microsoft Azure, Kubernetes, Autopilot

mehilba, Co-Founder and COO at Magalix
Python
Go
Amazon EC2
Google Kubernetes Engine
Microsoft Azure
Kubernetes
#Autopilot

We are hardcore Kubernetes users and contributors. We love the automation it provides. However, as our team grew and we added more clusters and microservices, capacity and resource management became a massive pain for us. We started suffering from a lot of outages and unexpected behavior as we promoted our code from dev to production environments. Luckily, we were already working on our AI-powered tools to understand different dependencies, predict usage, and calculate the right resources and configurations that should be applied to our infrastructure and microservices. We dogfooded our agent (http://github.com/magalixcorp/magalix-agent) and were able to stabilize, as the #autopilot continuously recovered from any miscalculations we made and from unexpected changes in workloads. We are open-sourcing our agent in a few days. Check it out and let us know what you think! We run workloads on Microsoft Azure, Google Kubernetes Engine, and Amazon EC2, and we're all about Go and Python!

7 upvotes·2 comments·7.6K views

Decision at Relay42 about Terraform, Pingdom, Datadog, Monitoring, Relay42

nyovchev, Head of Engineering at Relay42
Terraform
Pingdom
Datadog
#Monitoring
#Relay42
#Datadog

With Datadog unveiling their Synthetics product (https://www.datadoghq.com/blog/introducing-synthetic-monitoring/), we at Relay42 are considering moving away from Pingdom.

The rationale is simple:

  • 90% of our monitoring is on Datadog, apart from the external requests. It'd be nice to identify regional issues in one place, so this is a great step in our monitoring consolidation efforts.

  • The lack of a non-community Terraform provider for Pingdom

We have yet to get into the beta and test it out, but we feel very excited about this announcement.

1 upvote·1.6K views

Decision at Segment about Swagger UI, ReadMe.io, Markdown, Postman, QA, Api, Documentation

nzoschke, Engineering Manager at Segment
Swagger UI
ReadMe.io
Markdown
Postman
#QA
#Api
#Documentation

We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. A public API is only as good as its #documentation. For the API reference doc we are using Postman.

Postman is an “API development environment”. You download the desktop app and build API requests by URL and payload. Over time you can build up a set of requests and organize them into a “Postman Collection”. You can generalize a collection with “collection variables”, which let you parameterize things like username, password and workspace_name so a user can fill in their own values before making an API call. This makes it possible to use Postman for one-off API tasks instead of writing code.
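
For a sense of what those collection variables replace, here is a hypothetical Python equivalent of one parameterized call; the base URL, path, and auth scheme are illustrative assumptions, not Segment's documented API surface.

```python
# Hypothetical Python equivalent of a parameterized Postman request.
# Base URL, path, and auth header are assumptions, not Segment's real API.
import requests

# The same values a user would fill into the Postman collection variables.
variables = {
    "access_token": "your-token-here",
    "workspace_name": "my-workspace",
}

BASE_URL = "https://api.example.com"  # placeholder, not the real endpoint

resp = requests.get(
    f"{BASE_URL}/v1/workspaces/{variables['workspace_name']}",
    headers={"Authorization": f"Bearer {variables['access_token']}"},
)
resp.raise_for_status()
print(resp.json())
```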

Then you can add Markdown content to the entire collection, a folder of related methods, and/or every API method to explain how the APIs work. You can publish a collection and easily share it with a URL.

This turns Postman from a personal #API utility into full-blown public interactive API documentation. The result is a great-looking web page with all the API calls, docs and sample requests and responses in one place. Check out the results here.

Postman’s powers don’t end here. You can automate Postman with “test scripts” and have it periodically run collection scripts as “monitors”. We now have #QA around all the APIs in the public docs to make sure they are always correct.

Along the way we tried other techniques for documenting APIs, like ReadMe.io and Swagger UI. These required a lot of effort to customize.

Writing and maintaining a Postman collection takes some work, but the resulting documentation site, interactivity and API testing tools are well worth it.

26 upvotes·1 comment·23.7K views

Elastic Stack 6.6.1 and 5.6.15 Released

elastic.co
Versions 5.6.15 and 6.6.1 of the Elastic Stack were released today. We recommend you upgrade to these latest versions. Each includes fixes for a number of security issues in Kibana, Elasticsearch...

Decision at Redash about Vue.js, React, Angular 2, AngularJS

arikfr
Vue.js
React
Angular 2
AngularJS

When Redash was created 5 years ago, we chose AngularJS as our frontend framework. But as AngularJS was replaced by Angular 2, we had to make a new choice. We decided that we wouldn't migrate to Angular, but rather to either React or Vue.js. Eventually we decided to migrate to React for the following reasons:

  1. Many in our community are already using React internally and will be able to contribute.
  2. Using react2angular we can do the migration gradually over time instead of having to invest in a big rewrite while halting feature development.

So far the gradual strategy has paid off, and in the last 3 major releases we have already shipped React code in the AngularJS application.

10 upvotes·18.6K views

Decision at Zulip about Elasticsearch, MySQL, PostgreSQL

tabbott, Founder at Zulip
Elasticsearch
MySQL
PostgreSQL

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some settings tuned for the size of our server), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy, because Postgres just did the right thing automatically.

Since then, we've gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding a lot of unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
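
For illustration, here is what such a partial index looks like expressed with SQLAlchemy; Zulip itself is a Django project, so the table and column names below are stand-ins, not our actual schema. The index covers only the rows matching a predicate, so an "unread messages" index stays small and cheap to maintain.

```python
# Sketch of a PostgreSQL partial index for an "unread messages" lookup.
# Table and column names are assumed for illustration; Zulip's real
# schema (a Django project) will differ.
from sqlalchemy import Boolean, Column, Index, Integer, MetaData, Table

metadata = MetaData()
user_message = Table(
    "user_message", metadata,
    Column("id", Integer, primary_key=True),
    Column("user_id", Integer),
    Column("message_id", Integer),
    Column("read", Boolean),
)

# Only unread rows are indexed, so the index stays small and hot even
# when the table holds every message ever delivered.
unread_idx = Index(
    "ix_user_message_unread",
    user_message.c.user_id,
    user_message.c.message_id,
    postgresql_where=~user_message.c.read,
)
```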

I can't recommend it highly enough.

8 upvotes·10.7K views

Decision at Algolia about React, Gatsby, Ruby, Middleman

ronanlevesque, Software engineer at Algolia
React
Gatsby
Ruby
Middleman

A few months ago we decided to move our whole static website (www.algolia.com) to a new stack. At the time we were using a website generator called Middleman, written in Ruby. As a team of only front-end developers we didn't feel very comfortable with the language itself, and build times were not satisfying. We decided to move to Gatsby to take advantage of its use of React, as well as its incredibly fast builds and page rendering.

12 upvotes·2 comments·39.8K views