What is BigchainDB and what are its top alternatives?
BigchainDB is a decentralized database that allows users to build scalable blockchain applications. It offers high performance, immutability, and the security guarantees of the underlying blockchain technology. However, it has limitations: it requires careful data modeling and management, and it can run into scalability issues under heavy load.
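The immutability claim rests on hash chaining: each record commits to the hash of its predecessor, so tampering with any past record invalidates everything after it. A minimal, self-contained Python sketch of that idea (a generic illustration only, not BigchainDB's actual data model or wire format):

```python
import hashlib
import json

def block_hash(block):
    # Hash the canonical JSON form of a block so any change is detectable.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, payload):
    # Each new block commits to the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "payload": payload})

def verify(chain):
    # The chain is valid only if every block matches its predecessor's hash.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append(chain, {"asset": "bicycle", "owner": "alice"})
append(chain, {"asset": "bicycle", "owner": "bob"})
assert verify(chain)

chain[0]["payload"]["owner"] = "mallory"  # tamper with history
assert not verify(chain)
```

In BigchainDB proper, validation and replication are handled by the network's consensus layer; the point here is only why stored history is tamper-evident.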
Hyperledger Fabric: Hyperledger Fabric is a permissioned blockchain infrastructure that enables organizations to create private, permissioned blockchain networks. It offers modular architecture, scalability, and guaranteed finality. Pros: strong security features, permissioned network, modular architecture. Cons: may require more resources to set up compared to BigchainDB.
Ethereum: Ethereum is a public blockchain platform that supports smart contract functionality. It allows developers to build decentralized applications on top of its network. Key features include smart contracts, decentralized applications (dApps), and a large developer community. Pros: established network, smart contract capabilities, decentralized applications. Cons: may have scalability issues during high network congestion.
Corda: Corda is a distributed ledger technology designed for businesses in industries such as finance and supply chain. It offers privacy, scalability, and interoperability with other blockchain networks. Pros: tailored for enterprise use, privacy features, interoperability. Cons: may not be as widely adopted as some other alternatives.
Quorum: Quorum is an open-source blockchain platform built on Ethereum. It is designed for enterprise use cases that require high throughput and privacy features. Key features include private transactions, permissioned network, and consensus mechanisms tailored for business needs. Pros: built on Ethereum, privacy features, optimized for enterprise use. Cons: may have a steeper learning curve for new users.
EOS: EOS is a blockchain platform that aims to provide a decentralized operating system for dApps. It offers scalability, low latency, and feeless transactions. Key features include delegated proof of stake (DPoS) consensus, parallel processing, and governance mechanisms. Pros: high throughput, feeless transactions, governance mechanisms. Cons: the network can be seen as more centralized compared to other alternatives.
Tezos: Tezos is a smart contract platform that uses on-chain governance to improve scalability and upgradeability. It offers self-amendment, formal verification, and a proof-of-stake consensus mechanism known as baking. Pros: on-chain governance, formal verification, self-amendment. Cons: may have less adoption compared to more established platforms.
Algorand: Algorand is a blockchain platform that focuses on scalability, security, and decentralization. It uses a proof-of-stake consensus mechanism to achieve high transaction throughput. Key features include pure proof of stake, fast finality, and Byzantine Agreement. Pros: high transaction throughput, secure consensus mechanism, fast finality. Cons: may not be as well-known as other alternatives.
Stellar: Stellar is a decentralized platform that aims to facilitate cross-border payments and asset issuance. It features low transaction fees, fast settlement times, and a network of anchors to facilitate currency exchange. Pros: low transaction fees, fast settlement times, cross-border payments. Cons: may not be as focused on general-purpose blockchain applications as other alternatives.
Sawtooth: Sawtooth is a modular blockchain platform that allows for easy development and deployment of blockchain applications. It offers support for Ethereum smart contracts and provides scalability through parallel transaction processing. Pros: modular architecture, support for smart contracts, scalability. Cons: may require additional development effort compared to more feature-complete platforms.
IOTA: IOTA is a distributed ledger specifically designed for the Internet of Things (IoT) ecosystem. Instead of a traditional blockchain, it uses the Tangle, a directed acyclic graph, as its underlying data structure. Key features include feeless transactions and scalability through parallel transaction processing. Pros: feeless transactions, scalability, tailored for IoT use cases. Cons: may have limited support for general-purpose blockchain applications.
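The Tangle idea can be sketched in a few lines: instead of appending blocks to a single chain, each new transaction references ("approves") up to two earlier transactions, forming a DAG. A toy Python sketch; real IOTA uses a weighted tip-selection algorithm, while here parents are chosen uniformly at random for brevity:

```python
import random

def add_transaction(tangle, tx_id):
    # Each new transaction approves up to two earlier transactions,
    # chosen at random here (a stand-in for real tip selection).
    parents = random.sample(list(tangle), min(2, len(tangle)))
    tangle[tx_id] = parents  # edges point back to earlier transactions

random.seed(42)
tangle = {"genesis": []}
for i in range(5):
    add_transaction(tangle, f"tx{i}")

order = list(tangle)
# Acyclic by construction: every parent was added before its child.
assert all(order.index(p) < order.index(t)
           for t, parents in tangle.items() for p in parents)
```

Because transactions approve each other directly, there is no miner bottleneck on a single chain tip, which is where the parallelism and feeless-transaction claims come from.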
Top Alternatives to BigchainDB
- Ethereum
A decentralized platform for applications that run exactly as programmed without any chance of fraud, censorship or third-party interference.
- MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
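The "documents that can vary in structure" point is easy to see with plain Python dicts. This is a toy stand-in for illustration, not the pymongo API; the collection data and the `find` helper are made up:

```python
# Two documents in the same "collection" with different shapes --
# a document store does not force them into one fixed schema.
users = [
    {"_id": 1, "name": "Ada", "email": "ada@example.com"},
    {"_id": 2, "name": "Grace", "roles": ["admin"], "last_login": "2024-01-01"},
]

def find(collection, query):
    # A toy version of MongoDB's query-by-example matching:
    # a document matches if it has every key/value pair in the query.
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in query.items())]

assert find(users, {"name": "Grace"})[0]["_id"] == 2
assert find(users, {"email": "ada@example.com"})[0]["name"] == "Ada"
```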
- IPFS
It is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia in a distributed file system.
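Content addressing, the core idea here, means an object's key is derived from its bytes rather than from a location. A simplified Python sketch; real IPFS uses multihash-based CIDs and chunking, not a bare SHA-256 hex digest:

```python
import hashlib

def content_address(data: bytes) -> str:
    # In a content-addressed store, the key IS the hash of the data,
    # so identical content always resolves to the same address.
    return hashlib.sha256(data).hexdigest()

store = {}

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

addr1 = put(b"hello ipfs")
addr2 = put(b"hello ipfs")  # same content, same address: deduplicated
assert addr1 == addr2 and len(store) == 1
assert store[addr1] == b"hello ipfs"
```

This is also why content-addressed storage pairs naturally with blockchains: a short, tamper-evident hash can live on-chain while the bulky data lives off-chain.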
- MultiChain
A platform that helps organizations establish private blockchains for financial transactions.
- Hyperledger Fabric
It is a collaborative effort created to advance blockchain technology by identifying and addressing important features and currently missing requirements. It leverages container technology to host smart contracts called “chaincode” that comprise the application logic of the system.
- MySQL
The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
- PostgreSQL
PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions.
- Microsoft SQL Server
Microsoft® SQL Server is a database management and analysis system for e-commerce, line-of-business, and data warehousing solutions.
BigchainDB alternatives & related posts
Pros of Ethereum:
- Decentralized blockchain, most famous platform for DApps (7)
- Resistant to hash power attacks (2)
- Rich smart contract execution environment (2)
- #2 by market capitalization after Bitcoin (2)
Cons of Ethereum:
- High fees and lacks scalability (1)
related Ethereum posts
Which is the best to use for integrating blockchain techniques into a secure cloud system: Parity, Ethereum, or Hyperledger Fabric?
Hey! I am building an Uber clone using blockchain. I am confused about where to store the data of the drivers and riders and the transaction information: IPFS or Ethereum? Or do I store the IPFS URL on Ethereum? What would be the advantages of one over the other?
Pros of MongoDB:
- Document-oriented storage (827)
- NoSQL (593)
- Ease of use (553)
- Fast (464)
- High performance (410)
- Free (257)
- Open source (218)
- Flexible (180)
- Replication & high availability (145)
- Easy to maintain (112)
- Querying (42)
- Easy scalability (39)
- Auto-sharding (38)
- High availability (37)
- Map/reduce (31)
- Document database (27)
- Easy setup (25)
- Full index support (25)
- Reliable (16)
- Fast in-place updates (15)
- Agile programming, flexible, fast (14)
- No database migrations (12)
- Easy integration with Node.js (8)
- Enterprise (8)
- Enterprise support (6)
- Great NoSQL DB (5)
- Support for many languages through different drivers (4)
- Drivers support is good (3)
- Aggregation Framework (3)
- Schemaless (3)
- Fast (2)
- Managed service (2)
- Easy to scale (2)
- Awesome (2)
- Consistent (2)
- Good GUI (1)
- ACID compliant (1)
Cons of MongoDB:
- Very slow for connected models that require joins (6)
- Not ACID compliant (3)
- Proprietary query language (1)
related MongoDB posts
Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.
We set ourselves the following criteria for the optimal tool that would do this job:
- The data replication must be near real-time, yet it should NOT impact the production database
- The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient
Based on the above criteria, we selected the following tools to perform the end to end data replication:
We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.
We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.
In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven, horizontally scalable Lambda service is dumb-easy.
In the end, we implemented a highly scalable, near-real-time Change Data Replication service that "works", and deployed it to production in a matter of a few days!
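The core of the pipeline described in this post — change events flowing through a queue into a service that mirrors them onto a warehouse — can be sketched with an in-memory queue. The event shape and names below are illustrative, not MongoDB Stitch's actual payload format or the boto3 API:

```python
# A minimal, in-memory stand-in for the SQS consumer described above:
# it applies insert/update/delete/replace change events to a target table.

def apply_event(warehouse, event):
    op, doc_id = event["op"], event["id"]
    if op in ("insert", "replace"):
        warehouse[doc_id] = event["doc"]
    elif op == "update":
        warehouse[doc_id].update(event["doc"])   # partial update
    elif op == "delete":
        warehouse.pop(doc_id, None)

queue = [
    {"op": "insert", "id": 1, "doc": {"name": "ride-1", "fare": 10}},
    {"op": "update", "id": 1, "doc": {"fare": 12}},
    {"op": "insert", "id": 2, "doc": {"name": "ride-2", "fare": 7}},
    {"op": "delete", "id": 2},
]

warehouse = {}
for event in queue:   # in production this loop would poll SQS instead
    apply_event(warehouse, event)

assert warehouse == {1: {"name": "ride-1", "fare": 12}}
```

The real service additionally translates document fields into SQLAlchemy-modelled table rows; the dict target here keeps the event-application logic visible.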
We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.
As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
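One common way to handle the read-after-write gap the post describes is to retry reads until the write becomes visible. A hedged sketch — the `fetch` callback stands in for an S3 GET (S3 has since moved to strong read-after-write consistency, but the pattern generalizes to other eventually consistent stores):

```python
import time

def read_with_retry(fetch, attempts=5, delay=0.01):
    # Under eventual consistency, a read right after a write may miss the
    # new object; retrying with a short delay papers over the gap.
    for _ in range(attempts):
        value = fetch()
        if value is not None:
            return value
        time.sleep(delay)
    raise TimeoutError("object not yet visible")

# Simulate a store where the write becomes visible on the third read.
reads = iter([None, None, b"artifact-bytes"])
assert read_with_retry(lambda: next(reads)) == b"artifact-bytes"
```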
Pros of MultiChain:
- No transaction fees (4)
related MultiChain posts
Pros of Hyperledger Fabric:
- Highly scalable and basically feeless (3)
- Higher customization of smart contracts (2)
- Flexible blockchain framework (2)
- Easy to develop (1)
related Hyperledger Fabric posts
I am a faculty member at Morgan State University. I would like to know the differences between Hyperledger Fabric and Ripple. I found a lot of info on Google, but it is not so clear. For example, one use case for Ripple is bank settlements. Can I have more detail about how it works for this use case? I appreciate your response.
Pros of MySQL:
- SQL (800)
- Free (679)
- Easy (562)
- Widely used (528)
- Open source (489)
- High availability (180)
- Cross-platform support (160)
- Great community (104)
- Secure (78)
- Full-text indexing and searching (75)
- Fast, open, available (25)
- SSL support (16)
- Reliable (15)
- Robust (14)
- Enterprise version (8)
- Easy to set up on all platforms (7)
- NoSQL access to JSON data type (2)
- Relational database (1)
- Easy, light, scalable (1)
- Sequel Pro (best SQL GUI) (1)
- Replica support (1)
Cons of MySQL:
- Owned by a company with their own agenda (16)
- Can't roll back schema changes (3)
related MySQL posts
We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults, to bad query plans which required a lot of manual query tweaks.
We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).
And then after that, we've just gotten a ton of value out of postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
I can't recommend it highly enough.
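The partial-index trick mentioned for "unread messages" can be demonstrated end to end. This sketch uses Python's built-in sqlite3 (which also supports partial indexes) as a stand-in for PostgreSQL; the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INTEGER, read INTEGER)"
)
conn.executemany(
    "INSERT INTO messages (user_id, read) VALUES (?, ?)",
    [(1, 0), (1, 1), (2, 0), (2, 1), (2, 1)],
)

# The index only covers unread rows, so it stays small even when most
# messages are read -- no separate "unread messages" table needed.
conn.execute("CREATE INDEX unread_idx ON messages (user_id) WHERE read = 0")

unread = conn.execute(
    "SELECT COUNT(*) FROM messages WHERE user_id = 2 AND read = 0"
).fetchone()[0]
assert unread == 1
```

In Postgres the syntax is the same `CREATE INDEX ... WHERE` form; the planner uses the partial index whenever the query's predicate implies the index's predicate.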
Our most popular (& controversial!) article to date on the Uber Engineering blog in 3+ yrs. Why we moved from PostgreSQL to MySQL. In essence, it was due to a variety of limitations of Postgres at the time. Fun fact -- earlier in Uber's history we'd actually moved from MySQL to Postgres before switching back for good, & though we published the article in Summer 2016 we haven't looked back since:
The early architecture of Uber consisted of a monolithic backend application written in Python that used Postgres for data persistence. Since that time, the architecture of Uber has changed significantly, to a model of microservices and new data platforms. Specifically, in many of the cases where we previously used Postgres, we now use Schemaless, a novel database sharding layer built on top of MySQL (https://eng.uber.com/schemaless-part-one/). In this article, we’ll explore some of the drawbacks we found with Postgres and explain the decision to build Schemaless and other backend services on top of MySQL:
Pros of PostgreSQL:
- Relational database (762)
- High availability (510)
- Enterprise class database (439)
- SQL (383)
- SQL + NoSQL (304)
- Great community (173)
- Easy to set up (147)
- Heroku (131)
- Secure by default (130)
- PostGIS (113)
- Supports key-value (50)
- Great JSON support (48)
- Cross-platform (34)
- Extensible (32)
- Replication (28)
- Triggers (26)
- Rollback (23)
- Multiversion concurrency control (22)
- Open source (21)
- Heroku add-on (18)
- Stable, simple and good performance (17)
- Powerful (15)
- Let's be serious, what other SQL DB would you go for? (13)
- Good documentation (11)
- Intelligent optimizer (8)
- Free (8)
- Scalable (8)
- Reliable (8)
- Transactional DDL (7)
- Modern (7)
- One-stop solution for all things SQL no matter the OS (6)
- Relational database with MVCC (5)
- Faster development (5)
- Developer friendly (4)
- Full-text search (4)
- Free version (3)
- Great DB for transactional systems or applications (3)
- Relational database (3)
- Search (3)
- Open source (3)
- Excellent source code (3)
- Full-text (2)
- Text (2)
- Native (0)
Cons of PostgreSQL:
- Table/index bloating (10)
Microsoft SQL Server
Pros of Microsoft SQL Server:
- Reliable and easy to use (139)
- High performance (102)
- Great with .NET (95)
- Works well with .NET (65)
- Easy to maintain (56)
- Azure support (21)
- Full index support (17)
- Always On (17)
- Enterprise Manager is fantastic (10)
- In-Memory OLTP engine (9)
- Easy to set up and configure (2)
- Security is forefront (2)
- Faster than Oracle (1)
- Decent management tools (1)
- Great documentation (1)
- Docker delivery (1)
- Columnstore indexes (1)
Cons of Microsoft SQL Server:
- Expensive licensing (4)
- Microsoft (2)
related Microsoft SQL Server posts
We initially started out with Heroku as our PaaS provider because our original developer wanted to use it for our Ruby on Rails application/website at the time. Response times were painfully slow, sometimes taking 10 seconds to start loading the main page, and moving up to the next "compute" level was going to be very expensive.
We moved our site over to AWS Elastic Beanstalk , not only did response times on the site practically become instant, our cloud bill for the application was cut in half.
On the database side, we are currently using Amazon RDS for PostgreSQL; we also have both MariaDB and Microsoft SQL Server hosted on Amazon RDS. The plan is to migrate all 3 of those database systems to AWS Aurora Serverless.
Additional services we use for our public applications: AWS Lambda, Python, Redis, Memcached, AWS Elastic Load Balancing (ELB), Amazon Elasticsearch Service, Amazon ElastiCache
I am a Microsoft SQL Server programmer who is a bit out of practice. I have been asked to assist on a new project. The overall purpose is to organize a large number of recordings so that they can be searched. I have an enormous music library but my songs are several hours long. I need to include things like time, date and location of the recording. I don't have a problem with the general database design. I have two primary questions:
- I need to use either MySQL or PostgreSQL on a Linux based OS. Which would be better for this application?
- I have not dealt with a sound based data type before. How do I store that and put it in a table? Thank you.