What is ActiveMQ and what are its top alternatives?
ActiveMQ is a popular open-source message broker written in Java that provides JMS (Java Message Service) support. It offers features such as high availability, message persistence, clustering, and support for various messaging protocols. However, some limitations of ActiveMQ include complex setup and configuration, potential performance issues under heavy loads, and lack of robust monitoring capabilities.
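To make the comparison concrete, here is a minimal sketch of talking to ActiveMQ from Python over STOMP (one of several protocols the broker supports) using the stomp.py library. The localhost:61613 address, the admin/admin credentials, and the /queue/demo destination are assumptions that match a default local installation; adjust them for your broker, and note the listener signature follows recent stomp.py releases.

```python
# Minimal sketch: send and receive one message through ActiveMQ over STOMP.
# Assumes a local broker with the STOMP connector on its default port 61613
# and the default admin/admin credentials -- adjust for your installation.
import time
import stomp


class PrintingListener(stomp.ConnectionListener):
    def on_message(self, frame):
        # Called by stomp.py whenever a message arrives on a subscribed destination.
        print("received:", frame.body)


conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("", PrintingListener())
conn.connect("admin", "admin", wait=True)

# Subscribe to a queue, then publish a message to it.
conn.subscribe(destination="/queue/demo", id="1", ack="auto")
conn.send(destination="/queue/demo", body="hello from ActiveMQ")

time.sleep(1)  # give the listener a moment to receive the message
conn.disconnect()
```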
RabbitMQ: RabbitMQ is a robust message broker that supports multiple messaging protocols such as AMQP, MQTT, and STOMP. Key features include high availability, clustering, and flexible routing capabilities. Pros include strong community support, extensive documentation, and a variety of client libraries. Cons compared to ActiveMQ include a steeper learning curve for beginners and more limited support for JMS.
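As a rough point of comparison, a minimal RabbitMQ publish/consume round trip with the pika client might look like the sketch below; the localhost broker, default credentials, and the task_queue name are assumptions.

```python
# Minimal sketch: publish and consume one message with RabbitMQ via pika.
# Assumes a broker on localhost with default credentials.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declaring the queue is idempotent; durable=True survives broker restarts.
channel.queue_declare(queue="task_queue", durable=True)

channel.basic_publish(
    exchange="",                      # default exchange routes by queue name
    routing_key="task_queue",
    body=b"hello from RabbitMQ",
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

# Pull one message back off the queue and acknowledge it.
method, properties, body = channel.basic_get(queue="task_queue", auto_ack=True)
print("received:", body)

connection.close()
```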
Kafka: Apache Kafka is a distributed streaming platform known for its high throughput and low latency. It is capable of handling real-time data feeds and offers features like fault tolerance, horizontal scalability, and support for complex event processing. Pros of Kafka include excellent performance, fault tolerance, and scalability, while cons include a more specialized use case compared to ActiveMQ and a different messaging model.
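To illustrate Kafka's log-oriented model, here is a minimal producer/consumer sketch using the kafka-python client; the localhost:9092 broker, the events topic, and the demo-group consumer group are assumptions.

```python
# Minimal sketch: produce and consume from a Kafka topic with kafka-python.
# Assumes a broker on localhost:9092 and a topic named "events" (pre-created or auto-created).
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", key=b"user-42", value=b"page_view")
producer.flush()   # block until the message is actually written

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",   # start from the beginning of the log
    consumer_timeout_ms=5000,       # stop iterating after 5s of silence
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)
```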
NATS: NATS is a lightweight and high-performance messaging system that focuses on simplicity and speed. It offers features like publish-subscribe messaging, request-reply patterns, and support for clustering. Pros of NATS include simplicity, speed, and ease of use, while cons compared to ActiveMQ include less advanced features and a smaller community.
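A minimal publish/subscribe sketch with the asyncio-based nats-py client, assuming a NATS server on its default port 4222 and an arbitrary updates.demo subject:

```python
# Minimal sketch: publish/subscribe with NATS using the nats-py client.
# Assumes a NATS server running on its default port 4222.
import asyncio
import nats


async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def handler(msg):
        # Fires for every message published to the subscribed subject.
        print("received on", msg.subject, ":", msg.data.decode())

    await nc.subscribe("updates.demo", cb=handler)
    await nc.publish("updates.demo", b"hello from NATS")

    await nc.flush()        # make sure the publish reached the server
    await asyncio.sleep(0.1)
    await nc.drain()        # graceful shutdown


asyncio.run(main())
```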
Apache Pulsar: Apache Pulsar is a cloud-native, distributed messaging system that offers features like multi-tenancy, geo-replication, and support for a wide range of messaging patterns. Pros of Pulsar include scalability, performance, and support for modern cloud architectures, while cons compared to ActiveMQ include a newer project with potentially less mature features.
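A minimal produce/consume sketch with the pulsar-client Python library, assuming a standalone broker on localhost:6650 and an arbitrary demo-topic topic with a demo-sub subscription:

```python
# Minimal sketch: produce and consume with Apache Pulsar via pulsar-client.
# Assumes a standalone Pulsar broker on localhost:6650.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

# Subscribe first so the new subscription sees the message we are about to send.
consumer = client.subscribe("demo-topic", subscription_name="demo-sub")

producer = client.create_producer("demo-topic")
producer.send(b"hello from Pulsar")

msg = consumer.receive(timeout_millis=5000)
print("received:", msg.data())
consumer.acknowledge(msg)   # Pulsar tracks acknowledgements per subscription

client.close()
```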
Amazon SQS: Amazon Simple Queue Service (SQS) is a fully managed message queuing service provided by AWS. It offers features like high availability, durability, and scalability without the need for managing infrastructure. Pros of SQS include ease of use, reliability, and integration with other AWS services, while cons compared to ActiveMQ include vendor lock-in and potentially higher costs.
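A minimal send/receive sketch with boto3, assuming AWS credentials are already configured (environment, profile, or instance role) and that a queue named demo-queue exists in the configured region:

```python
# Minimal sketch: send and receive a message with Amazon SQS via boto3.
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.get_queue_url(QueueName="demo-queue")["QueueUrl"]

sqs.send_message(QueueUrl=queue_url, MessageBody="hello from SQS")

# Long-poll for up to 10 seconds, then delete the message so it is not redelivered.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in response.get("Messages", []):
    print("received:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```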
Redis: Redis is an in-memory data structure store that can be used as a message broker with its pub/sub capabilities. It offers features like high performance, data persistence, and support for various data types. Pros of Redis include speed, simplicity, and versatility, while cons compared to ActiveMQ include less advanced messaging features and persistence options.
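A minimal pub/sub sketch with redis-py, assuming a Redis server on localhost:6379. Note that plain Redis pub/sub is fire-and-forget: subscribers only see messages published while they are connected (Redis Streams is the more durable option).

```python
# Minimal sketch: Redis pub/sub with redis-py.
# Assumes a Redis server on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379)

pubsub = r.pubsub()
pubsub.subscribe("notifications")
pubsub.get_message(timeout=1)   # consume the initial "subscribe" confirmation

r.publish("notifications", "hello from Redis")

message = pubsub.get_message(timeout=1)
if message and message["type"] == "message":
    print("received:", message["data"])
```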
ZeroMQ: ZeroMQ is a lightweight messaging library that enables fast, asynchronous messaging between applications. It provides features like various messaging patterns, high performance, and low latency. Pros of ZeroMQ include speed, flexibility, and simplicity, while cons compared to ActiveMQ include less built-in support for advanced messaging features and clustering.
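A minimal brokerless request/reply sketch with pyzmq; the two sockets talk to each other directly, and the TCP port 5555 is an arbitrary choice:

```python
# Minimal sketch: a request/reply pair with ZeroMQ (pyzmq). There is no broker;
# the REQ and REP sockets connect directly over TCP.
import threading
import zmq

context = zmq.Context()


def server():
    rep = context.socket(zmq.REP)
    rep.bind("tcp://127.0.0.1:5555")
    request = rep.recv()             # blocks until a request arrives
    rep.send(b"pong: " + request)
    rep.close()


threading.Thread(target=server, daemon=True).start()

req = context.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")
req.send(b"ping")
print("reply:", req.recv())

req.close()
context.term()
```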
IBM MQ: IBM MQ is a messaging middleware that provides reliable messaging between applications and systems. It offers features like guaranteed message delivery, scalability, and support for various platforms. Pros of IBM MQ include enterprise-grade reliability, extensive platform support, and robust security features, while cons compared to ActiveMQ include potential higher costs and a more complex setup process.
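A rough sketch with the pymqi client; the QM1 queue manager, DEV.APP.SVRCONN channel, localhost(1414) connection string, and DEV.QUEUE.1 queue mirror the defaults of IBM's developer container and are assumptions, so adjust them for your installation.

```python
# Rough sketch: put and get one message with IBM MQ via the pymqi client.
# Queue manager, channel, connection string, and queue below are assumptions
# matching IBM's developer-edition defaults -- adjust for your setup.
import pymqi

queue_manager = "QM1"
channel = "DEV.APP.SVRCONN"
conn_info = "localhost(1414)"
queue_name = "DEV.QUEUE.1"

qmgr = pymqi.connect(queue_manager, channel, conn_info)

queue = pymqi.Queue(qmgr, queue_name)
queue.put(b"hello from IBM MQ")
print("received:", queue.get())

queue.close()
qmgr.disconnect()
```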
HornetQ: HornetQ is an open-source messaging system that provides JMS support and high-performance messaging capabilities. It offers features like clustering, message redistribution, and support for large message sizes. Pros of HornetQ include performance, JMS support, and clustering capabilities, while cons compared to ActiveMQ include weaker documentation and community support; the HornetQ codebase was donated to Apache and now lives on as ActiveMQ Artemis.
RocketMQ: Apache RocketMQ is a distributed messaging and streaming platform that offers features like reliability, scalability, and support for real-time message processing. Pros of RocketMQ include performance, high availability, and support for big data scenarios, while cons compared to ActiveMQ include potentially less mature features and a smaller community.
Top Alternatives to ActiveMQ
- RabbitMQ
RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received. ...
- Kafka
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...
- Apollo
Build a universal GraphQL API on top of your existing REST APIs, so you can ship new application features fast without waiting on backend changes. ...
- IBM MQ
It is a messaging middleware that simplifies and accelerates the integration of diverse applications and business data across multiple platforms. It offers proven, enterprise-grade messaging capabilities that skillfully and safely move information. ...
- ZeroMQ
The 0MQ lightweight messaging kernel is a library which extends the standard socket interfaces with features traditionally provided by specialised messaging middleware products. 0MQ sockets provide an abstraction of asynchronous message queues, multiple messaging patterns, message filtering (subscriptions), seamless access to multiple transport protocols and more. ...
- JavaScript
JavaScript is most known as the scripting language for Web pages, but it is used in many non-browser environments as well, such as node.js or Apache CouchDB. It is a prototype-based, multi-paradigm scripting language that is dynamic, and supports object-oriented, imperative, and functional programming styles. ...
- Git
Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. ...
- GitHub
GitHub is the best place to share code with friends, co-workers, classmates, and complete strangers. Over three million people use GitHub to build amazing things together. ...
ActiveMQ alternatives & related posts
RabbitMQ
Pros of RabbitMQ
- It's fast and it works with good metrics/monitoring (235)
- Ease of configuration (80)
- I like the admin interface (60)
- Easy to set-up and start with (52)
- Durable (22)
- Standard protocols (19)
- Intuitive work through python (19)
- Written primarily in Erlang (11)
- Simply superb (9)
- Completeness of messaging patterns (7)
- Reliable (4)
- Scales to 1 million messages per second (4)
- Better than most traditional queue based message broker (3)
- Distributed (3)
- Supports MQTT (3)
- Supports AMQP (3)
- Clear documentation with different scripting language (2)
- Better routing system (2)
- Inubit Integration (2)
- Great ui (2)
- High performance (2)
- Reliability (2)
- Open-source (2)
- Runs on Open Telecom Platform (2)
- Clusterable (2)
- Delayed messages (2)
- Supports Streams (1)
- Supports STOMP (1)
- Supports JMS (1)
Cons of RabbitMQ
- Too complicated cluster/HA config and management (9)
- Needs Erlang runtime. Need ops good with Erlang runtime (6)
- Configuration must be done first, not by your code (5)
- Slow (4)
related RabbitMQ posts
As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.
Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.
#MessageQueue
Around the time of their Series A, Pinterest’s stack included Python and Django, with Tornado and Node.js as web servers. Memcached / Membase and Redis handled caching, with RabbitMQ handling queueing. Nginx, HAproxy and Varnish managed static-delivery and load-balancing, with persistent data storage handled by MySQL.
Kafka
Pros of Kafka
- High-throughput (126)
- Distributed (119)
- Scalable (92)
- High-Performance (86)
- Durable (66)
- Publish-Subscribe (38)
- Simple-to-use (19)
- Open source (18)
- Written in Scala and Java; runs on the JVM (12)
- Message broker + Streaming system (9)
- KSQL (4)
- Avro schema integration (4)
- Robust (4)
- Support for multiple clients (3)
- Extremely good parallelism constructs (2)
- Partitioned, replayable log (2)
- Simple publisher / multi-subscriber model (1)
- Fun (1)
- Flexible (1)
Cons of Kafka
- Non-Java clients are second-class citizens (32)
- Needs Zookeeper (29)
- Operational difficulties (9)
- Terrible Packaging (5)
related Kafka posts
When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?
So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.
React is now talking to GraphQL as a primary API. There's a Node.js back end, to the front end, which is mainly for server-side rendering, as well.
Behind there, the main repository for the GraphQL server is a big table repository, that we call Bodega because it's a convenience store. And that reads off of a Kafka pipeline.
To provide employees with the critical need for interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, these clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.
Each query submitted to Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query submitted events without corresponding query finished events. These events enable us to capture the effect of cluster crashes over time.
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform gives us the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
#BigData #AWS #DataScience #DataEngineering
Apollo
Pros of Apollo
- From the creators of Meteor (12)
- Great documentation (8)
- Open source (3)
- Real time if use subscription (2)
Cons of Apollo
- File upload is not supported (1)
- Increase in complexity of implementing (subscription) (1)
related Apollo posts
Oof. I have truly hated JavaScript for a long time. Like, for over twenty years now. Like, since the Clinton administration. It's always been a nightmare to deal with all of the aspects of that silly language.
But wowza, things have changed. Tooling is just way, way better. I'm primarily web-oriented, and using React and Apollo together the past few years really opened my eyes to building rich apps. And I deeply apologize for using the phrase rich apps; I don't think I've ever said such Enterprisey words before.
But yeah, things are different now. I still love Rails, and still use it for a lot of apps I build. But it's that silly rich apps phrase that's the problem. Users have way more comprehensive expectations than they did even five years ago, and the JS community does a good job at building tools and tech that tackle the problems of making heavy, complicated UI and frontend work.
Obviously there's a lot of things happening here, so just saying "JavaScript isn't terrible" might encompass a huge amount of libraries and frameworks. But if you're like me, yeah, give things another shot- I'm somehow not hating on JavaScript anymore and... gulp... I kinda love it.
IBM MQ
Pros of IBM MQ
- Reliable for banking transactions (3)
- Useful for big enterprises (3)
- Secure (2)
- Broader connectivity - more protocols, APIs, files, etc. (1)
- Many deployment options (containers, cloud, VMs, etc.) (1)
- High Availability (1)
Cons of IBM MQ
- Cost (2)
related IBM MQ posts
I want to understand the differences in features and enhancements, the pros and cons, and also how to migrate from IBM MQ to Azure Service Bus.
ZeroMQ
Pros of ZeroMQ
- Fast (23)
- Lightweight (20)
- Transport agnostic (11)
- No broker required (7)
- Low level APIs are in C (4)
- Low latency (4)
- Open source (1)
- Publish-Subscribe (1)
Cons of ZeroMQ
- No message durability (5)
- Not a very reliable system - message delivery wise (3)
- M x N problem with M producers and N consumers (1)
related ZeroMQ posts
In our Spring Boot application, which encompasses various projects, we employ ZeroMQ (ZMQ) for communication via a req/resp pattern. Recently, I observed that data is persisted in the MongoDB database before being transmitted to other applications. I've identified a method to monitor changes to the database, and I'm contemplating whether to utilize this monitoring approach to detect changes and execute the necessary instructions.
Which approach is more advisable in this scenario: leveraging the database monitoring mechanism or sticking with the current ZMQ req/resp communication?
Essentially, I'm seeking guidance on whether to rely on database monitoring for change detection and subsequent actions or to continue with the existing ZMQ communication pattern.
Hi, we have a ZMQ setup in a push/pull pattern, and we are starting to see more traffic and cases where the service is unavailable or stuck. We want to:
- Not lose messages during service outages
- Safely restart the service without losing messages (ZeroMQ seems to require manually closing the receiver's socket before restart)
Do you have experience with this setup with ZeroMQ? Would you suggest RabbitMQ or Amazon SQS (we are on AWS) instead? Something else?
Thank you for your time
JavaScript
Pros of JavaScript
- Can be used on frontend/backend (1.7K)
- It's everywhere (1.5K)
- Lots of great frameworks (1.2K)
- Fast (898)
- Light weight (745)
- Flexible (425)
- You can't get a device today that doesn't run js (392)
- Non-blocking i/o (286)
- Ubiquitousness (237)
- Expressive (191)
- Extended functionality to web pages (55)
- Relatively easy language (49)
- Executed on the client side (46)
- Relatively fast to the end user (30)
- Pure Javascript (25)
- Functional programming (21)
- Async (15)
- Full-stack (13)
- Setup is easy (12)
- Future Language of The Web (12)
- Its everywhere (12)
- Because I love functions (11)
- JavaScript is the New PHP (11)
- Like it or not, JS is part of the web standard (10)
- Expansive community (9)
- Everyone use it (9)
- Can be used in backend, frontend and DB (9)
- Easy (9)
- Most Popular Language in the World (8)
- Powerful (8)
- Can be used both as frontend and backend as well (8)
- For the good parts (8)
- No need to use PHP (8)
- Easy to hire developers (8)
- Agile, packages simple to use (7)
- Love-hate relationship (7)
- Photoshop has 3 JS runtimes built in (7)
- Evolution of C (7)
- It's fun (7)
- Hard not to use (7)
- Versitile (7)
- Its fun and fast (7)
- Nice (7)
- Popularized Class-Less Architecture & Lambdas (7)
- Supports lambdas and closures (7)
- It let's me use Babel & Typescript (6)
- Can be used on frontend/backend/Mobile/create PRO Ui (6)
- Can be used on frontend/backend (6)
- Client side JS uses the visitors CPU to save Server Res (6)
- Easy to make something (6)
- Clojurescript (5)
- Promise relationship (5)
- Stockholm Syndrome (5)
- Function expressions are useful for callbacks (5)
- Scope manipulation (5)
- Everywhere (5)
- Client processing (5)
- What to add (5)
- Because it is so simple and lightweight (4)
- Only Programming language on browser (4)
- Test (1)
- Hard to learn (1)
- Test2 (1)
- Not the best (1)
- Easy to understand (1)
- Subskill #4 (1)
- Easy to learn (1)
- Hard 彤 (0)
Cons of JavaScript
- A constant moving target, too much churn (22)
- Horribly inconsistent (20)
- Javascript is the New PHP (15)
- No ability to monitor memory utilitization (9)
- Shows Zero output in case of ANY error (8)
- Thinks strange results are better than errors (7)
- Can be ugly (6)
- No GitHub (3)
- Slow (2)
- HORRIBLE DOCUMENTS, faulty code, repo has bugs (0)
related JavaScript posts
How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:
Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.
Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:
https://eng.uber.com/distributed-tracing/
(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)
Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark
Git
Pros of Git
- Distributed version control system (1.4K)
- Efficient branching and merging (1.1K)
- Fast (959)
- Open source (845)
- Better than svn (726)
- Great command-line application (368)
- Simple (306)
- Free (291)
- Easy to use (232)
- Does not require server (222)
- Distributed (27)
- Small & Fast (22)
- Feature based workflow (18)
- Staging Area (15)
- Most wide-spread VSC (13)
- Role-based codelines (11)
- Disposable Experimentation (11)
- Frictionless Context Switching (7)
- Data Assurance (6)
- Efficient (5)
- Just awesome (4)
- Github integration (3)
- Easy branching and merging (3)
- Compatible (2)
- Flexible (2)
- Possible to lose history and commits (2)
- Rebase supported natively; reflog; access to plumbing (1)
- Light (1)
- Team Integration (1)
- Fast, scalable, distributed revision control system (1)
- Easy (1)
- Flexible, easy, Safe, and fast (1)
- CLI is great, but the GUI tools are awesome (1)
- It's what you do (1)
- Phinx (0)
Cons of Git
- Hard to learn (16)
- Inconsistent command line interface (11)
- Easy to lose uncommitted work (9)
- Worst documentation ever possibly made (7)
- Awful merge handling (5)
- Nonexistent preventive security flows (3)
- Rebase hell (3)
- When --force is disabled, cannot rebase (2)
- Ironically even die-hard supporters screw up badly (2)
- Doesn't scale for big data (1)
related Git posts
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management tool
- Respectively Git as revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (automatize development process)
- Prettier / TSLint / ESLint as code linter
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as facade server in production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:
- Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
- Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
- Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: All-in-one framework for distributed systems.
- Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share with the world this way, and send people to read it instead ;). I will explain it on "live-example" of how the Rome got built, basing that current methodology exists only of readme.md and wishes of good luck (as it usually is ;)).
It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing dev environments to produce a proper, production-grade product.
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in open net, behind VPN - you name it.
We must also give proper consideration to monitoring and logging hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which as I've mentioned earlier, are at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
If we are happy with the state of the Ansible it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like quality REST API which comes built-in with TeamCity). It also comes with all the common-handy plugins like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides development environment must be sources from individual Vault instances. Keys to those containers should exist only on the CI/CD box and accessible by a few people (the less the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that appropriate security must be present. TeamCity shines in this department with excellent secrets-management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all developer has to do it is grab appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automated identifying and tagging the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but also constantly peeking at the loads and do we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and into bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied in to use cloud providers and getting out is expensive. Here to embrace bare-metal hosting all you need is a help of some container-based self-hosting software, my personal preference is with Proxmox and LXC. Following that all you must write are ansible scripts to manage hardware of Proxmox, similar way as you do for Amazon EC2 (ansible supports both greatly) and you are good to go. One does not exclude another, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
GitHub
Pros of GitHub
- Open source friendly (1.8K)
- Easy source control (1.5K)
- Nice UI (1.3K)
- Great for team collaboration (1.1K)
- Easy setup (867)
- Issue tracker (504)
- Great community (486)
- Remote team collaboration (483)
- Great way to share (451)
- Pull request and features planning (442)
- Just works (147)
- Integrated in many tools (132)
- Free Public Repos (121)
- Github Gists (116)
- Github pages (112)
- Easy to find repos (83)
- Open source (62)
- It's free (60)
- Easy to find projects (60)
- Network effect (56)
- Extensive API (49)
- Organizations (43)
- Branching (42)
- Developer Profiles (34)
- Git Powered Wikis (32)
- Great for collaboration (30)
- It's fun (24)
- Clean interface and good integrations (23)
- Community SDK involvement (22)
- Learn from others source code (20)
- Because: Git (16)
- It integrates directly with Azure (14)
- Standard in Open Source collab (10)
- Newsfeed (10)
- It integrates directly with Hipchat (8)
- Fast (8)
- Beautiful user experience (8)
- Easy to discover new code libraries (7)
- Smooth integration (6)
- Cloud SCM (6)
- Nice API (6)
- Graphs (6)
- Integrations (6)
- It's awesome (6)
- Quick Onboarding (5)
- Reliable (5)
- Remarkable uptime (5)
- CI Integration (5)
- Hands down best online Git service available (5)
- Uses GIT (4)
- Version Control (4)
- Simple but powerful (4)
- Unlimited Public Repos at no cost (4)
- Free HTML hosting (4)
- Security options (4)
- Loved by developers (4)
- Easy to use and collaborate with others (4)
- Ci (3)
- IAM (3)
- Nice to use (3)
- Easy deployment via SSH (3)
- Easy to use (2)
- Leads the copycats (2)
- All in one development service (2)
- Free private repos (2)
- Free HTML hostings (2)
- Easy and efficient maintainance of the projects (2)
- Beautiful (2)
- Easy source control and everything is backed up (2)
- IAM integration (2)
- Very Easy to Use (2)
- Good tools support (2)
- Issues tracker (2)
- Never dethroned (2)
- Self Hosted (2)
- Dasf (1)
- Profound (1)
Cons of GitHub
- Owned by Microsoft (54)
- Expensive for lone developers that want private repos (38)
- Relatively slow product/feature release cadence (15)
- API scoping could be better (10)
- Only 3 collaborators for private repos (9)
- Limited featureset for issue management (4)
- Does not have a graph for showing history like git lens (3)
- GitHub Packages does not support SNAPSHOT versions (2)
- No multilingual interface (1)
- Takes a long time to commit (1)
- Expensive (1)
related GitHub posts
I was building a personal project for which I needed to store items in a real-time database. I am more comfortable with my frontend skills than my backend, so I didn't want to spend time building out anything in Ruby or Go.
I stumbled on Firebase by #Google, and it was really all I needed. It had realtime data, an area for storing file uploads and best of all for the amount of data I needed it was free!
I built out my application using tools I was familiar with, React for the framework, Redux.js to manage my state across components, and styled-components for the styling.
Now, as this was a project I was just working on in my free time for fun, I didn't really want to pay for hosting. I did some research and I found Netlify. I had actually seen them at #ReactRally the year before and had already deployed a Gatsby site to Netlify.
Netlify was very easy to set up and link to my GitHub account: you select a repo, and with very little configuration you have a live site that deploys every time you push to master.
With the selection of these tools I was able to build out my application, connect it to a realtime database, and deploy to a live environment all with $0 spent.
If you're looking to build out a small app I suggest giving these tools a go as you can get your idea out into the real world for absolutely no cost.
Context: I wanted to create an end-to-end IoT data pipeline simulation in Google Cloud IoT Core and other GCP services. I never touched Terraform meaningfully until working on this project, and it's one of the best explorations in my development career. The documentation and syntax are incredibly human-readable and friendly. I'm used to building infrastructure through the Google APIs via Python, but I'm so glad past Sung did not make that decision. I was tempted to use Google Cloud Deployment Manager, but the templates were a bit convoluted by first impression. I'm glad past Sung did not make this decision either.
Solution: Leveraging Google Cloud Build, Google Cloud Run, Google Cloud Bigtable, Google BigQuery, Google Cloud Storage, and Google Compute Engine, along with some other fun tools, I can deploy over 40 GCP resources using Terraform!
Check Out My Architecture: CLICK ME
Check out the GitHub repo attached