Node.js

Decision about MongoDB, GraphQL, Node.js

juank11memphis
MongoDB
GraphQL
Node.js

I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for saving the movies you want to see and rating the ones you have already seen. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.

For the #BackEnd I decided to use Node.js, GraphQL and MongoDB:

  1. Node.js has a huge community, so it will always be a safe choice in terms of libraries and finding solutions to problems you may have.

  2. GraphQL, because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option: it feels more natural for writing APIs, it improves development velocity, and by design it fixes the over-fetching and under-fetching problems that are so common in REST APIs (see the sketch after this list). On top of that, the community is getting bigger and bigger.

  3. MongoDB was my choice for the database because I already have a lot of experience working with it and because, despite some bad reputation it has acquired in recent months, I still believe it is a powerful database for a very long list of use cases, including the one I needed for my website.
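
The post doesn't include any code; as a rough sketch of the over-fetching point in item 2, here is a hypothetical MovieGeeks-style schema and query using graphql-js (the schema, field names and data are made up for illustration). The client names exactly the fields it wants, so nothing else is sent over the wire.

```js
const { graphql, buildSchema } = require('graphql');

// Hypothetical movie-catalog schema, not the actual MovieGeeks one.
const schema = buildSchema(`
  type Movie {
    id: ID!
    title: String!
    year: Int
    plot: String
    rating: Float
  }

  type Query {
    watchlist: [Movie!]!
  }
`);

// Resolver backed by an in-memory array here; a real app would query MongoDB.
const rootValue = {
  watchlist: () => [
    { id: '1', title: 'Blade Runner', year: 1982, plot: '...', rating: 4.5 },
  ],
};

// The client asks only for title and rating, so year and plot never leave the
// server: no over-fetching, and adding fields later needs no new endpoint.
graphql({ schema, source: '{ watchlist { title rating } }', rootValue })
  .then((result) => console.log(JSON.stringify(result.data)));
```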

35 upvotes·28 comments·16.6K views

Decision at Stream about Go, Stream, Python, Yarn, Babel, Node.js, ES6, JavaScript, Languages, FrameworksFullStack

nparsons08
Node.js Engineer & Evangelist at Stream
Go
Stream
Python
Yarn
Babel
Node.js
ES6
JavaScript
#Languages
#FrameworksFullStack

Winds 2.0 is an open-source podcast/RSS reader developed by Stream, with a core goal of enabling a wide range of developers to contribute.

We chose JavaScript because nearly every developer knows, or can at the very least read, JavaScript. With ES6 and Node.js v10.x.x, it’s become a very capable language. Async/Await is powerful and easy to use (Async/Await vs Promises). Babel allows us to experiment with next-generation JavaScript (features that are not in the official JavaScript spec yet). Yarn allows us to install packages quickly and consistently (and is filled with tons of new tricks).
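
A quick illustration of the Async/Await point: the same fetch-and-parse flow written once with promise chaining and once with async/await. The helper names and feed URL are made up for this sketch, not Winds code.

```js
// Hypothetical RSS-fetching helpers; names are illustrative only.
const fetchFeed = (url) => Promise.resolve(`<rss>${url}</rss>`);
const parseFeed = (xml) => Promise.resolve({ items: [xml] });

// Promise chaining: each step chains another .then().
function loadFeedWithPromises(url) {
  return fetchFeed(url)
    .then((xml) => parseFeed(xml))
    .then((feed) => feed.items)
    .catch((err) => {
      console.error('failed to load feed', err);
      return [];
    });
}

// Async/Await: the same flow reads top to bottom like synchronous code.
async function loadFeedWithAwait(url) {
  try {
    const xml = await fetchFeed(url);
    const feed = await parseFeed(xml);
    return feed.items;
  } catch (err) {
    console.error('failed to load feed', err);
    return [];
  }
}

loadFeedWithAwait('https://example.com/podcast.rss').then(console.log);
```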

We’re using JavaScript for everything – both front and backend. Most of our team is experienced with Go and Python, so Node was not an obvious choice for this app.

Sure... there will be haters who refuse to acknowledge that there is anything remotely positive about JavaScript (there are even rants on Hacker News about Node.js); however, without writing completely in JavaScript, we would not have seen the results we did.

#FrameworksFullStack #Languages

29 upvotes·6.7K views

Decision at FundsCorner about Zappa, AWS Lambda, SQLAlchemy, Python, Amazon SQS, Node.js, MongoDB Stitch, PostgreSQL, MongoDB

jeyabalajis
Zappa
AWS Lambda
SQLAlchemy
Python
Amazon SQS
Node.js
MongoDB Stitch
PostgreSQL
MongoDB

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  • The data replication must be near real-time, yet it should NOT impact the production database.
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient.

Based on the above criteria, we selected the following tools to perform the end to end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
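
The actual function isn't shown in the post; a minimal sketch of that idea, using the plain AWS SDK for Node.js, could look like the following. The queue URL, region and message shape are assumptions, and the real MongoDB Stitch AWS integration exposes a slightly different context API.

```js
const AWS = require('aws-sdk');

// Placeholder region and queue URL, not the real FundsCorner values.
const sqs = new AWS.SQS({ region: 'us-east-1' });
const QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes';

// Invoked for each change event (insert / update / delete / replace).
// The event follows the MongoDB change-stream shape; only the fields needed
// for replication are forwarded to SQS.
async function forwardChange(changeEvent) {
  const message = {
    operation: changeEvent.operationType,           // e.g. "insert"
    collection: changeEvent.ns && changeEvent.ns.coll,
    documentKey: changeEvent.documentKey,           // { _id: ... }
    fullDocument: changeEvent.fullDocument || null  // present on insert/replace
  };

  await sqs.sendMessage({
    QueueUrl: QUEUE_URL,
    MessageBody: JSON.stringify(message)
  }).promise();
}

module.exports = forwardChange;
```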

Next, we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling the target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your service as an event-driven & horizontally scalable Lambda is dumb-easy.

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works", and deployed it to production in a matter of a few days!

23 upvotes·18.7K views

Decision at Raygun about .NET, Node.js, FrameworksFullStack, Languages

CmdrKeen
Co-founder & CEO at Raygun
.NET
Node.js
#FrameworksFullStack
#Languages

The core Web application of Raygun is still a Microsoft ASP.NET MVC application. Not too much has changed from a fundamental technology standpoint. We originally built using Mono, which just bled memory and would need to be constantly recycled. So we looked around at the options and what would be well suited to the highly transactional nature of our API. We settled on Node.js, feeling that the event loop model worked well given the lightweight workload of each message being processed. This served us well for several years.

When we started to look at .NET Core in early 2016, it became quite obvious that being able to asynchronously hand off to our queuing service greatly improved throughput. Unfortunately, at the time, Node.js didn’t provide an easy mechanism to do this, while .NET Core had great concurrency capabilities from day one. This meant that our servers spent less time blocking on the hand off, and could start processing the next inbound message. This was the core component of the performance improvement.

We chose .NET because it was a platform our team was familiar with, and we were skilled enough with it to know many performance tips and tricks to get the most from it. That experience helped us get to market faster and deliver great performance.

#Languages #FrameworksFullStack

23 upvotes·1 comment·10.4K views

Decision at The New York Times about Apache HTTP Server, Kafka, Node.js, GraphQL, Apollo, React, PHP, MySQL

nsrockwell
CTO at NY Times
Apache HTTP Server
Kafka
Node.js
GraphQL
Apollo
React
PHP
MySQL

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) stack and the front end framework in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember, from being outside the company, when that was called NYT5, when it had launched. I'd been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's also a Node.js back end for the front end, which is mainly for server-side rendering.

Behind there, the main repository for the GraphQL server is a Bigtable repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.

22 upvotes·1 comment·48.2K views

Decision at SmartZip about Amazon DynamoDB, Ruby, Node.js, AWS Lambda, New Relic, Amazon Elasticsearch Service, Elasticsearch, Superset, Amazon Quicksight, Amazon Redshift, Zapier, Segment, Amazon CloudFront, Memcached, Amazon ElastiCache, Amazon RDS for Aurora, MySQL, Amazon RDS, Amazon S3, Docker, Capistrano, AWS Elastic Beanstalk, Rails API, Rails, Algolia

juliendefrance
Principal Software Engineer at Stessa
Amazon DynamoDB
Ruby
Node.js
AWS Lambda
New Relic
Amazon Elasticsearch Service
Elasticsearch
Superset
Amazon Quicksight
Amazon Redshift
Zapier
Segment
Amazon CloudFront
Memcached
Amazon ElastiCache
Amazon RDS for Aurora
MySQL
Amazon RDS
Amazon S3
Docker
Capistrano
AWS Elastic Beanstalk
Rails API
Rails
Algolia

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's the most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel and build around managed/scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend thanks to Rails API, so we'd get independently scalable micro-services from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so each application would run within its own container, independently from the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, you want a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems Integration: Segment / Zapier
  • Data-warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
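
The post doesn't include any code, but as an illustration of that Node.js-only era, a minimal Lambda handler backed by DynamoDB could look like the sketch below. The table name, event shape and fields are hypothetical, not SmartZip's actual implementation.

```js
const AWS = require('aws-sdk');

// Hypothetical table name; DocumentClient handles DynamoDB type marshalling.
const TABLE_NAME = 'prospect-events';
const dynamo = new AWS.DynamoDB.DocumentClient();

// Classic Node.js Lambda handler: stateless, pay-per-invocation, and backed by
// DynamoDB so the whole call chain scales without managing servers.
exports.handler = async (event) => {
  const record = {
    id: event.id,
    territory: event.territory,
    receivedAt: new Date().toISOString(),
    payload: event.payload || {}
  };

  await dynamo.put({ TableName: TABLE_NAME, Item: record }).promise();

  return { statusCode: 200, body: JSON.stringify({ stored: record.id }) };
};
```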

15 upvotes·10.8K views

Decision about ESLint, Prettier, Babel, npm, Yarn, Node.js, Webpack, ES5, ES6

johnnyxbell
Sr. Software Engineer at StackShare
ESLint
Prettier
Babel
npm
Yarn
Node.js
Webpack
#ES5
#ES6

So when starting a new project, you generally have your go-to tools to get your site up and running locally, and some scripts to build out a production version of your site. Create React App is great for that; however, for my projects I feel there is too much bloat in Create React App, and if I use it I'm tied to React. I love React, but if I want to switch it up to Vue or something, I want that flexibility.

So to get everything up and running, I clone my personal Webpack boilerplate. This is still on Webpack 3 and does need some updating, but it gets the job done for now. Given the name of the repo, you may have guessed that, yes, I am using Webpack as my bundler. I use Webpack because it is so powerful, and even though it has a steep learning curve, once you get it, it's amazing.
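
The boilerplate itself isn't shown in the post; a minimal Webpack config along these lines (the entry/output paths and the babel-loader rule are my assumptions, not the actual repo) would look like:

```js
// webpack.config.js -- a minimal sketch, not the actual boilerplate.
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,            // run every .js file...
        exclude: /node_modules/,  // ...except dependencies...
        use: 'babel-loader'       // ...through Babel (ES6 -> ES5, see below)
      }
    ]
  }
};
```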

The next thing I do is make sure my machine has Node.js configured and the right version installed, then run Yarn. I decided to use Yarn because, when I was building out this project, npm had some shortcomings, such as no lock file. I could probably move from Yarn to npm now, but I don't really see the point.

I use Babel to transpile all of my #ES6 to #ES5 so the browser can read it. I love Babel and, to be honest, haven't looked at any other transpilers because Babel is amazing.

Finally, when developing I have Prettier set up to make sure all my code is clean and uniform across all my JS files, and ESLint to make sure I catch any errors or code that could be optimized.

I'm really happy with this stack for my local env setup, and I'll probably stick with it for a while.

15 upvotes·3 comments·2.4K views

Decision at Heap about Heap, Node.js, Kafka, PostgreSQL, Citus, FrameworksFullStack, Databases, MessageQueue

drob
Heap
Node.js
Kafka
PostgreSQL
Citus
#FrameworksFullStack
#Databases
#MessageQueue

At Heap, we searched for an existing tool that would allow us to express the full range of analyses we needed, index the event definitions that made up those analyses, and do all of that as a mature, natively distributed system.

After coming up empty on this search, we decided to compromise on the “maturity” requirement and build our own distributed system around Citus and sharded PostgreSQL. It was at this point that we also introduced Kafka as a queueing layer between the Node.js application servers and Postgres.

If we could go back in time, we probably would have started using Kafka on day one. One of the biggest benefits in adopting Kafka has been the peace of mind that it brings. In an analytics infrastructure, it’s often possible to make data ingestion idempotent.

In Heap’s case, that means that, if anything downstream from Kafka goes down, we won’t lose any data – it’s just going to take a bit longer to get to its destination. We also learned that you want the path between data hitting your servers and your initial persistence layer (in this case, Kafka) to be as short and simple as possible, since that is the surface area where a failure means you can lose customer data. We learned that it’s a very good fit for an analytics tool, since you can handle a huge number of incoming writes with relatively low latency. Kafka also gives you the ability to “replay” the data flow: it’s like a commit log for your whole infrastructure.
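
A minimal sketch of that pattern (application servers publishing raw events to Kafka before anything else touches them), written here with the kafkajs client; the post doesn't say which Node.js Kafka client Heap uses, and the broker address, topic and event fields are made up:

```js
const { Kafka } = require('kafkajs');

// Assumed broker list and topic name.
const kafka = new Kafka({ clientId: 'ingest-api', brokers: ['localhost:9092'] });
const producer = kafka.producer();

// Persist the raw event to Kafka as early as possible: once it is in the log,
// a downstream outage only delays delivery, it doesn't lose data.
async function ingestEvent(event) {
  await producer.send({
    topic: 'raw-events',
    messages: [
      {
        key: String(event.userId),   // keeps one user's events ordered within a partition
        value: JSON.stringify(event)
      }
    ]
  });
}

async function main() {
  await producer.connect();
  await ingestEvent({ userId: 42, type: 'pageview', url: '/pricing', ts: Date.now() });
  await producer.disconnect();
}

main().catch(console.error);
```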

#MessageQueue #Databases #FrameworksFullStack

14 upvotes·5.8K views

Decision at ChecklyHQ about vuex, Knex.js, PostgreSQL, Amazon S3, AWS Lambda, Vue.js, hapi, Node.js, GitHub, Docker, Heroku

tim_nolet
Founder, Engineer & Dishwasher at Checkly
vuex
Knex.js
PostgreSQL
Amazon S3
AWS Lambda
Vue.js
hapi
Node.js
GitHub
Docker
Heroku

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focussed on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, and AppOptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out.

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update API and back end services to handle and enforce plan limits.
  • Update the UI to kindly state plan limits are in effect on some part of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled (a sketch of this is shown below).
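
A minimal sketch of such a hook, assuming hapi v17+ and hypothetical feature constants, plan lookup and routes (this illustrates the pattern, it is not Checkly's actual code): a route-level pre-handler rejects the request when the plan's features array doesn't contain the required toggle.

```js
const Hapi = require('@hapi/hapi');
const Boom = require('@hapi/boom');

// Stubbed plan lookup; the real app reads the plan of the authenticated user.
const getPlanForRequest = async (request) => ({
  name: 'developer',
  features: ['api_checks', 'sms_alerts']   // string constants, as in the schema above
});

// Route-level "pre" step: runs before the handler and rejects the request
// if the user's plan lacks the required feature toggle.
const requireFeature = (feature) => ({
  method: async (request, h) => {
    const plan = await getPlanForRequest(request);
    if (!plan.features.includes(feature)) {
      throw Boom.paymentRequired(`Your plan does not include "${feature}", please upgrade.`);
    }
    return h.continue;
  }
});

const init = async () => {
  const server = Hapi.server({ port: 3000 });

  server.route({
    method: 'POST',
    path: '/alerts/sms',
    options: {
      pre: [requireFeature('sms_alerts')]
    },
    handler: () => ({ queued: true })
  });

  await server.start();
};

init().catch(console.error);
```

The Vue.js side is the mirror image: the same feature constants are read from the vuex store and gate components with simple v-if statements.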

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of any of this and remained unchanged.

Hope this helps anyone who is building out their SaaS and is in a similar situation.

14 upvotes·5.6K views

Decision at BootstrapCDN about Ruby, Node.js, Amazon S3, MaxCDN, Google Analytics, Bootstrap, BootstrapCDN

jdorfman
Developer Evangelist at StackShare
Ruby
Node.js
Amazon S3
MaxCDN
Google Analytics
Bootstrap
#BootstrapCDN

This is the second Stack Decision of this series. You can read the last one to catch up (link below).

I was skeptical until one of the co-authors of Bootstrap, Jacob Thornton (aka 'fat'), tweeted about #BootstrapCDN and, according to Google Analytics, that sent 10k uniques to the site in 24 hours. Now I was pumped, but I knew I was in way over my head and needed help. Fortunately, I met my co-maintainer Josh Mervine at the 2013 O’Reilly Velocity Conference and we hit it off immediately. I showed him the MaxCDN and Amazon S3 stats and his eyebrows went up. When I showed him the code, he was very polite: "well, I mean it works, but I really want to try Node.js out so I'm just going to rewrite everything in Node and Ruby for the S3 scripts."

I didn’t know what to expect from Josh to be honest. In the next decision (part 3), I will go over how he completely transformed the project.

AMA below.

14 upvotes·5.1K views