
Alternatives to PubNub

Pusher, Socket.IO, SendBird, Stream, and Kafka are the most popular alternatives and competitors to PubNub.

What is PubNub and what are its top alternatives?

PubNub makes it easy for you to add real-time capabilities to your apps, without worrying about the infrastructure. Build apps that allow your users to engage in real-time across mobile, browser, desktop and server.
PubNub is a tool in the Realtime Backend / API category of a tech stack.
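
By way of illustration, here is a minimal publish/subscribe sketch using the PubNub JavaScript SDK. The keys, channel, and user ID are placeholders (recent SDK versions use `userId`; older ones use `uuid`):

```javascript
// Sketch: subscribe to a channel and publish a message with the PubNub JavaScript SDK.
const PubNub = require('pubnub');

const pubnub = new PubNub({
  publishKey: 'demo',    // placeholder keys
  subscribeKey: 'demo',
  userId: 'example-user',
});

pubnub.addListener({
  message: (event) => console.log('Received:', event.message),
});

pubnub.subscribe({ channels: ['chat-room'] }); // hypothetical channel name

pubnub.publish({ channel: 'chat-room', message: { text: 'Hello from PubNub' } })
  .then((res) => console.log('Published at', res.timetoken))
  .catch(console.error);
```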

Top Alternatives to PubNub

PubNub alternatives & related posts

related Pusher posts

Which messaging service (Pusher vs. PubNub vs. Google Cloud Pub/Sub) to use for IoT?


related Socket.IO posts

across_the_grid
Full-stack web developer at Capmo GmbH
Shared insights on Socket.IO, Node.js, and ExpressJS

I use Socket.IO because the application has two frontend clients, which need to communicate in real-time. The backend server handles the communication between these two clients via WebSockets. Socket.IO is very easy to set up in Node.js and ExpressJS.

In the research project, the first client shows panoramic videos in a so-called CAVE system (the VR setup of our research lab: three large screens arranged so the user experiences the videos more immersively), and the second client controls the videos/locations of the first client.
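
A rough sketch of that relay pattern, assuming a plain Node.js/ExpressJS server with Socket.IO; the `control` and `update` event names are hypothetical, not taken from the project:

```javascript
// server.js — minimal Socket.IO relay between two clients (sketch).
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

io.on('connection', (socket) => {
  // The controlling client emits "control" events (e.g. a new video or location);
  // the server relays them to every other connected client (the display client).
  socket.on('control', (payload) => {
    socket.broadcast.emit('update', payload);
  });
});

server.listen(3000, () => console.log('Relay listening on :3000'));
```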


I have always been interested in building a real-time multiplayer game engine that could be massively scalable, and recently I decided to start working on an MMO version of the classic "snake" game. I wanted the entire #Stack to be based on ES6 JavaScript, so for the #Backend I chose to use FeathersJS with MongoDB for game/user data storage, Redis for distributed mutex and pub/sub, and Socket.IO for real-time communication. For the #Frontend I used React with Redux.js, the FeathersJS client, as well as HTML5 canvas to render the view.
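
As a sketch of how the Redis pub/sub piece could fan game state out to Socket.IO clients across server instances; the channel and event names here are hypothetical, not from the actual project:

```javascript
// Fan game-state ticks out to browsers across multiple server instances (sketch).
const { createClient } = require('redis');
const { Server } = require('socket.io');

const io = new Server(3000);     // Socket.IO server for browser clients
const sub = createClient();      // Redis subscriber connection

(async () => {
  await sub.connect();
  // Each game server publishes its authoritative state to "game:state";
  // every Socket.IO node subscribes and pushes the tick to its own clients.
  await sub.subscribe('game:state', (message) => {
    io.emit('tick', JSON.parse(message));
  });
})();
```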

Stream

Build scalable feeds, activity streams & chat in a few hours instead of months.
Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system

related Kafka posts

Eric Colson
Chief Algorithms Officer at Stitch Fix

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
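
For illustration only, producing an event into Kafka from Node.js might look roughly like this with the kafkajs client; the broker address, topic, and payload are hypothetical, and this is not Stitch Fix's actual tooling:

```javascript
// Sketch: publish an acquisition event to a Kafka topic with kafkajs.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'event-producer', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function publishEvent(event) {
  await producer.connect();
  await producer.send({
    topic: 'client-events', // hypothetical topic name
    messages: [{ key: event.userId, value: JSON.stringify(event) }],
  });
  await producer.disconnect();
}

publishEvent({ userId: 'u123', type: 'item_viewed', at: Date.now() }).catch(console.error);
```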

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan gives our data scientists the ability to quickly productionize the models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

John Kodumal
CTO at LaunchDarkly

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.


related Firebase posts

Tassanai Singprom

This is my stack in Application & Data

JavaScript, PHP, HTML5, jQuery, Redis, Amazon EC2, Ubuntu, Sass, Vue.js, Firebase, Laravel, Lumen, Amazon RDS, GraphQL, MariaDB

My Utility Tools

Google Analytics, Postman, Elasticsearch

My DevOps Tools

Git, GitHub, GitLab, npm, Visual Studio Code, Kibana, Sentry, BrowserStack

My Business Tools

Slack

fontumi

Fontumi focuses on the development of telecommunications solutions. We have opted for technologies that allow agile development and great scalability.

Firebase and Node.js + FeathersJS are technologies that we have used on the server side. Vue.js is our main framework for clients.

Our latest product launches have focused on integrating AI systems for enriched conversations. Google Compute Engine, along with Dialogflow and Cloud Firestore, have been important tools for this work.
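
A minimal sketch of the Firestore side, assuming the firebase-admin SDK on the Node.js server; the collection and field names are hypothetical:

```javascript
// Sketch: persist an enriched conversation turn to Cloud Firestore.
const admin = require('firebase-admin');

admin.initializeApp(); // picks up credentials from the environment on Compute Engine
const db = admin.firestore();

async function saveTurn(conversationId, turn) {
  await db
    .collection('conversations')          // hypothetical collection
    .doc(conversationId)
    .collection('turns')
    .add({ ...turn, createdAt: admin.firestore.FieldValue.serverTimestamp() });
}

saveTurn('demo-conversation', { speaker: 'user', text: 'Hola' }).catch(console.error);
```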

Git + GitHub + Visual Studio Code is a killer stack.


related Twilio posts

Julien DeFrance
Principal Software Engineer at Tophatter

Nexmo vs. Twilio?

Back in the early days at SmartZip Analytics, that evaluation had - for whatever reason - been made by Product Management. Some developers might have been consulted, but we hadn't made the final call and some key engineering aspects of it were omitted.

When revamping the platform, I made sure to flip the decision process to how it should be. Business provided input, but Engineering led the way and had the final say on all implementation matters. My engineers and I decided to re-evaluate the criteria and vendor selection. Not only did we need SMS support, but we were now thinking about #VoiceAndSms support as the use cases evolved.

Also, from an engineering standpoint, the SDK mattered. Nexmo didn't have any. Twilio did. No one would ever want to re-build from scratch the integration layers that vendors should naturally come up with and provide to their customers.
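
To illustrate what a vendor SDK saves you from rebuilding, here is a minimal sketch of sending an SMS with Twilio's Node.js helper library; credentials and phone numbers are placeholders:

```javascript
// Sketch: send an SMS through Twilio's Node.js helper library.
const twilio = require('twilio');

// Credentials would normally come from environment variables or a secrets store.
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

client.messages
  .create({
    body: 'Your report is ready.', // hypothetical message
    from: '+15005550006',          // Twilio test number
    to: '+15551234567',            // placeholder recipient
  })
  .then((message) => console.log('Sent message', message.sid))
  .catch(console.error);
```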

Twilio won on all fronts, including costs and implementation timelines. No one even noticed the vendor switch.

Many years later, Twilio demonstrated its position as a leader by holding conferences in the Bay Area and announcing features like Twilio Functions. It even acquired Authy, which we also used for 2FA. Twilio's growth has been amazing, and its recent acquisition of SendGrid continues to show it.
