MySQL vs Scylla: What are the differences?
Introduction:
MySQL and Scylla are both popular database management systems, but they have significant differences in various aspects. Here are six key differences between MySQL and Scylla:
Architecture: MySQL follows a traditional single-server architecture: one node owns the storage, and scale is typically added by scaling up or attaching read replicas. Scylla, on the other hand, is built on a shared-nothing architecture, where each node operates independently with its own storage. This architecture gives Scylla better horizontal scalability and fault tolerance.
Data Model: MySQL uses a relational data model, organizing data into tables with defined relationships between them. Scylla, by contrast, is based on a wide-column data model: data is stored in flexible column families designed around query patterns rather than normalized relationships. This makes Scylla better suited to rapidly changing, semi-structured data.
Consistency: MySQL emphasizes immediate consistency, ensuring that all clients see the same version of data at a specific point in time. In Scylla, eventual consistency is the default, meaning different clients might briefly observe different versions of data. However, Scylla offers tunable consistency: each read or write can specify how many replicas must respond (for example ONE, QUORUM, or ALL) to match application requirements.
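To make tunable consistency concrete, here is a minimal sketch using the Python cassandra-driver (which also speaks to Scylla); the contact point, keyspace, and table are placeholders:

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Placeholder contact point and keyspace.
session = Cluster(["127.0.0.1"]).connect("app")

# QUORUM: a majority of replicas must respond before the read returns.
stmt = SimpleStatement(
    "SELECT name FROM users WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
row = session.execute(stmt, ["user-42"]).one()
```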
Performance: Scylla is designed to deliver high performance and low latency, particularly in large-scale distributed systems. It achieves this through a close-to-the-hardware C++ implementation with a shard-per-core design that takes full advantage of RAM and solid-state drives. MySQL, though capable of handling significant workloads, may face scalability challenges with huge datasets or high traffic loads.
Replication: MySQL replication is asynchronous by default (with optional semi-synchronous and Group Replication modes), so replicas can lag behind the source and recent writes can be lost if the source fails before they ship. Scylla writes each mutation to multiple replicas, and the operation's consistency level controls how many replicas must acknowledge before the client is answered, letting applications trade latency against durability per operation.
Availability: MySQL traditionally relies on a primary/replica (master-slave) model for high availability, where a single primary handles writes while replicas serve reads; if the primary fails, a failover must promote a replica. Scylla is natively multi-master (masterless): every node handles both reads and writes, eliminating the single point of failure.
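As an illustration of the masterless model, replication in Scylla (as in Cassandra) is declared per keyspace, and any node can then coordinate reads and writes for it. A sketch with the Python driver; the keyspace and datacenter names are invented:

```python
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect()

# Keep 3 copies of every row in datacenter 'dc1'; any of those
# replicas can serve reads and writes -- there is no primary node.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS app
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}
""")
```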
In summary, MySQL and Scylla differ in their architecture, data models, consistency models, performance characteristics, replication methods, and availability approaches. These differences suit them to different use cases depending on specific requirements, scalability needs, and latency sensitivity.
Hello, I am developing a new project with an internal chat between users. There are also complex relationships between the other project entities, but I would like to build something scalable and fast, and right now I am designing the data model. What kind of database would you recommend to manage all the application data: relational like MySQL, non-relational like MongoDB, or a mix of both? Thank you
In MongoDB, a write operation is atomic only at the level of a single document, so it's harder to maintain consistency across documents without transactions.
MongoDB supports horizontal scaling through sharding, distributing data across several machines and facilitating high-throughput operations on large data sets. Sharding lets you add instances to increase capacity when required.
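When single-document atomicity isn't enough, MongoDB does support multi-document transactions (on a replica set or sharded cluster). A minimal PyMongo sketch; the database and collection names are made up:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client.shop

# Both writes commit together or not at all.
with client.start_session() as session:
    with session.start_transaction():
        db.orders.insert_one({"item": "abc", "qty": 1}, session=session)
        db.stock.update_one(
            {"item": "abc"}, {"$inc": {"qty": -1}}, session=session
        )
```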
If you are dealing with "complex relationships", give ArangoDB and graph databases a chance. Their data structures allow this kind of work with faster and simpler queries. The database is not as strict as others and allows arbitrary data. The data model really resembles a neural network, and you will never need foreign-key tables again. There is a free course on Udemy to get started.
The most important question is where you are planning to host: on-premise, or in the cloud.
Particularly if you are planning to host in either AWS or Azure, your first port of call should be the PaaS (Platform as a Service) databases supplied by these vendors: they require much less effort to support, offer far easier disaster-recovery options, and, depending on how pay-as-you-go the database is, potentially much lower costs than running a dedicated database server.
Your question regarding "relational or not" is obviously key, and you need to consider your required data structure, the ACID requirements of your application model, and the non-functional requirements in terms of scalability, resilience, and whether you want security authorisation at the highest application tier or right down to "row" level in the database. However, please don't fall into the trap of treating "NoSQL" as a single category. MongoDB, with its document-store solution, is a very different model from key-value stores (like AWS DynamoDB), column stores (like AWS Redshift), entity graph stores for more complex data relationships (like AWS Neptune), or stores designed for tokenisation and text search (Elasticsearch).
Also critical in all this is how many attributes you need to index on. RDBMS/SQL stores are great for having as many indexes as you want (at the cost of some write speed), whereas databases like Amazon DynamoDB provide blisteringly fast read/write performance but are very limited in their key-indexing capabilities.
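To make the DynamoDB point concrete, a small boto3 sketch (the table and attribute names are hypothetical): anything keyed is fast, anything else degrades to a scan.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("orders")

# Fast: lookup by partition key (optionally refined by sort key).
resp = table.query(KeyConditionExpression=Key("customer_id").eq("c-123"))
for item in resp["Items"]:
    print(item)

# Queries not covered by a key or secondary index fall back to a full
# table Scan, which is slow and expensive at scale.
```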
It feels like you have most experience with SQL/RDBMS technologies, so for the simplest learning curve, and if your application fits it, then I'd personally start by looking at AWS Aurora https://aws.amazon.com/rds/aurora/ .
I think it depends on your project type and your skills. MySQL is good and simple to maintain, but MongoDB requires more skills and knowledge. If you are working on a small project, use MySQL. For your project type, MySQL is enough, and later you can migrate to PostgreSQL.
Firstly, it may help if you explain what you mean by "complex relationships between project entities". Secondly, you can build a fast and scalable solution using either. With that said, however, the data sounds relational, so I would recommend MySQL.
I am going to work on a real estate project and have to decide on a database. Now, SQL databases can be very efficient if appropriately designed. More relations between the data and less redundancy. But with a #NoSQL database, the development time is reduced, and it is easy to query. Since this is my first time working on the real estate domain, I would like to pick a database that would be efficient in the long run.
I recommend PostgreSQL as it’s the most powerful out of the 3 databases you mentioned. It supports JSON objects so you can mimic the MongoDB functionality, but I would also argue that SQL is actually quite powerful and in many cases significantly easier to work with than with NoSQL databases.
Stay away from foreign keys, keep it fast and simple. Define your data structures well in advance. Try to model your data structures based on your system’s vision; based on where it’s going and not based solely on what you currently need it to do. This will help you avoid drastic changes to your database after your system is launched. Populate the database with fake data and run tests. PostgreSQL allows you to create Views from multiple tables. Try to create those views and make sure you can easily create useful views from multiple tables. Run an Explain on those view queries to make sure you created your indexes correctly. Make sure it’s fast!
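A minimal sketch of that views-plus-EXPLAIN workflow using psycopg2; the real-estate tables and columns here are invented:

```python
import psycopg2

conn = psycopg2.connect("dbname=realestate")
cur = conn.cursor()

# A view joining two tables, as suggested above.
cur.execute("""
    CREATE OR REPLACE VIEW listing_summary AS
    SELECT l.id, l.price, a.city
    FROM listings l
    JOIN addresses a ON a.listing_id = l.id
""")
conn.commit()

# EXPLAIN ANALYZE the view query to verify the indexes are used.
cur.execute(
    "EXPLAIN ANALYZE SELECT * FROM listing_summary WHERE city = %s",
    ("Austin",),
)
for (line,) in cur.fetchall():
    print(line)  # look for index scans rather than sequential scans
```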
Any of those three databases are going to be efficient, scalable, and reliable in the long term if you configure and use them correctly. They all also have solid hosting solutions.
All things being equal, I would agree with other posters that Postgres is my preference among the three, but there are caveats.
MongoDB and MySQL have better support for multi-region replication in the big three cloud environments. Microsoft recently bought Citus Data, which was a best-in-class Postgres replication solution, so Azure might be the only one I trust to provide cross-region Postgres replication at the moment.
If you have a single region deployment and are on AWS, I can't recommend Aurora Postgres highly enough. It's a very good implementation and extremely performant.
I'll second another piece of advice: PostgreSQL's JSON columns are a dream for productivity, and I use them frequently with our Rails application. In these cases, no migration is required to change the schema. We store payloads with dozens or hundreds of keys, and performance has not been an issue. We also have a lot of relational tables, so the joins we get with SQL are very important to us and hard to replicate with a NoSQL solution.
That really depends on where you see your application in the long run. At the start, any of those choices is excellent. You could argue about good support for JSON binaries, but even MySQL has excellent support for that in its latest versions.
In the long run, when your application gets hundreds of thousands of requests per second, you might start thinking about how many inputs the database will receive compared to the outputs. PostgreSQL is excellent at giving you outputs, but table corruption can happen when you start receiving this massive number of inputs (which was the reason Uber switched from Postgres to MySQL).
On our Ops Platform at CTO.ai, we decided to use Postgres because we need a reliable and agile way to send output to our users, so that was our best choice in the long run for our product.
We are planning to migrate one of my applications from MSSQL to MySQL. Can someone help me with the version to select? I have a strong inclination towards MySQL 5.7, but I see there are some standout features added in MySQL 8.0, like JSON_TABLE. I just want to know whether the newer version has compromised on speed while adding these features.
MySQL 8.0 is significantly better than MySQL 5.7. For all InnoDB row operations, you'll see a great performance improvement. Also, the time taken to process transactions is lower in MySQL 8.0. Moreover, there has been an improvement in managing read and read/write workloads.
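For the JSON_TABLE feature mentioned in the question, here is a small sketch against MySQL 8.0 using mysql-connector-python; connection details are placeholders:

```python
import mysql.connector

conn = mysql.connector.connect(user="app", password="secret", database="app")
cur = conn.cursor()

# JSON_TABLE (new in 8.0) turns a JSON array into a relational rowset.
cur.execute("""
    SELECT t.name, t.qty
    FROM JSON_TABLE(
        '[{"name": "pen", "qty": 2}, {"name": "ink", "qty": 5}]',
        '$[*]' COLUMNS (
            name VARCHAR(40) PATH '$.name',
            qty  INT         PATH '$.qty'
        )
    ) AS t
""")
print(cur.fetchall())  # [('pen', 2), ('ink', 5)]
```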
The MySQL developers don't implement anything in MySQL until they can find a way to do it efficiently and, often, more efficiently than other systems. So although I don't have experience benchmarking JSON_TABLE or similar new features, their development philosophy alone suggests that jumping to version 8 for the latest features would be safe without sacrificing system performance.
Hello,
I am trying to design an online ordering app similar to DoorDash or Uber Eats. I'm having a hard time finalising which database (or mixture of databases) to use. I'm leaning towards a relational database like MySQL or PostgreSQL, but when the application grows, I don't want to join 20 tables to get a piece of data. Any help would be greatly appreciated. Thank you for your time.
Hello Suhas, we built our product www.voilacabs.com, which is along the same lines as yours, using a combination of MySQL and MongoDB. When using MySQL, I would recommend the following:
1. Use MySQL for storage only; for realtime updates we recommend MongoDB.
2. Don't try to join more than 3 tables. The moment you reach 3 joins, stop and try to denormalize the database.
3. Never, or very rarely, use auto-increments; use UUIDs instead (see the sketch after this list). If you are using PostgreSQL, please check https://instagram-engineering.com/sharding-ids-at-instagram-1cf5a71e5a5c — there is a stored procedure that generates unique keys instead of auto-increment keys, which will help you shard or cluster the database without sync errors.
4. For MongoDB, if you can put a layer of Redis cache in front, it will boost your API performance under large loads.
5. Use Node.js as the programming language, as it works asynchronously.
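A minimal sketch of point 3, assuming a MySQL table whose primary key is a CHAR(32) column (the table and columns are invented):

```python
import uuid

import mysql.connector

conn = mysql.connector.connect(user="app", password="secret", database="app")
cur = conn.cursor()

# Generate the key client-side, so any shard can insert without
# coordinating an AUTO_INCREMENT counter.
order_id = uuid.uuid4().hex  # 32-char hex string
cur.execute(
    "INSERT INTO orders (id, status) VALUES (%s, %s)",
    (order_id, "created"),
)
conn.commit()
```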
Let me know if you still need any suggestions. Thanks & Regards, Rupen Makhecha, CTO @ Voila Cab's, www.voilacabs.com
I would recommend a mixture of MySQL and MongoDB. Using MongoDB for the Content Distribution Network (CDN) will make it easy to store high volumes of incoming data. MySQL is recommended for the business logic. I would not recommend PostgreSQL here, since you may face less convenient replication features and repeated migrations from one PostgreSQL version to another.
I asked my last question incorrectly. Rephrasing it here.
I am looking for the most secure open source database for my project I'm starting: https://github.com/SuPragma/SuPragma/wiki
Which database is more secure? MySQL or PostgreSQL? Are there others I should be considering? Is it possible to change the encryption keys dynamically?
Thanks,
Raj
PostgreSQL provides more tools and built-in features around security, e.g. row-level security and support for SELinux (through SE-PostgreSQL). Overall, whatever you choose, the important thing is to keep it updated and have the skills to apply security best practices regularly; without this, it's like putting your money in Fort Knox but leaving the vault key in a public place.
PostgreSQL is open source and offers more tools than MySQL. It is an object-relational database management system (ORDBMS) with an emphasis on extensibility and standards compliance. It is also good for small companies because its tools are freely available. PostgreSQL includes built-in support for regular B-tree and hash indexes, and it also supports expression and partial indexes (indexing only part of a table). An expression index is created on the result of an expression or function instead of simply the value of a column.
I am a Microsoft SQL Server programmer who is a bit out of practice. I have been asked to assist on a new project. The overall purpose is to organize a large number of recordings so that they can be searched. I have an enormous music library but my songs are several hours long. I need to include things like time, date and location of the recording. I don't have a problem with the general database design. I have two primary questions:
- I need to use either MySQL or PostgreSQL on a Linux based OS. Which would be better for this application?
- I have not dealt with a sound-based data type before. How do I store it and put it in a table? Thank you.
Hi Erin,
Honestly both databases will do the job just fine. I personally prefer Postgres.
Much more important is how you store the audio. While you could technically use a blob type column, it's really not ideal to be storing audio files which are "several hours long" in a database row. Instead consider storing the audio files in an object store (hosted options include backblaze b2 or aws s3) and persisting the key (which references that object) in your database column.
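A sketch of that pattern with boto3 and psycopg2; the bucket, table, and file names are made up:

```python
import boto3
import psycopg2

# 1. Put the multi-hour audio file in object storage.
s3 = boto3.client("s3")
key = "recordings/2023/solstice-set.flac"
s3.upload_file("solstice-set.flac", "my-audio-bucket", key)

# 2. Keep only the metadata and the object key in the database row.
conn = psycopg2.connect("dbname=recordings")
cur = conn.cursor()
cur.execute(
    "INSERT INTO recordings (title, recorded_at, location, s3_key) "
    "VALUES (%s, %s, %s, %s)",
    ("Solstice Set", "2023-06-21", "Red Rocks", key),
)
conn.commit()
```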
Hi Erin, chances are you would want to store the files in a blob type, and both MySQL and Postgres support this. Can you explain a little more about your need to store the files in the database? It may be more effective to store the files on a file system or something like S3. To answer your question, based on what you are describing I would slightly lean towards PostgreSQL, since it tends to be a little better on the data-warehousing side.
Hey Erin! I would recommend checking out Directus before you start work on building your own app for them. I just stumbled upon it, and so far extremely happy with the functionalities. If your client is just looking for a simple web app for their own data, then Directus may be a great option. It offers "database mirroring", so that you can connect it to any database and set up functionality around it!
Hi Erin! First of all, you'd probably want to go with a managed service: don't spin up your own MySQL installation on your own Linux box. If you are on AWS, they have different offerings for database services, standard RDS vs. Aurora. Aurora would be my preferred choice given the benefits it offers, the storage optimizations it comes with, etc. Such managed services easily allow you to apply new security patches and upgrades and to set up backups, replication, etc. Doing this on your own would be risky or inefficient, or you might just give up. As for which database to choose, you'll have the choice between PostgreSQL, MySQL, MariaDB, SQL Server, etc. I personally would recommend MySQL (the latest version available), as the official tooling for it (MySQL Workbench) is great, stable, and moreover free. Other database services exist; I'd recommend you also explore DynamoDB.
Regardless, you'd certainly only keep high-level records and metadata in the database, and the actual files most likely in S3, so that you can keep all options open in terms of what you'll do with them.
Hi Erin,
- Coming from "Big" DB engines, such as Oracle or MSSQL, go for PostgreSQL. You'll get all the features you need with PostgreSQL.
- Your case seems to point to a "NoSQL" or document-database use case. Since PostgreSQL has you covered there too, achieving excellent performance on JSON-based objects, this is a second reason to choose PostgreSQL. MongoDB might be an excellent option as well if you need sharding and excellent map-reduce mechanisms for very massive data sets. You really should investigate the NoSQL option for your use case.
- Starting with AWS Aurora is excellent advice, since vendor lock-in is limited, but I did not check for JSON-based object / NoSQL features.
- If you stick to a Linux server, the PostgreSQL or MySQL packages provided with your distribution are straightforward to install (e.g. apt install postgresql). For PostgreSQL, make sure you're comfortable with pg_hba.conf, especially for IP restrictions and access control.
Regards,
I recommend Postgres as well. Superior performance overall and a more robust architecture.
Hi everyone! I am a high school student, starting a massive project. I'm building a system for a boarding school to be better connected to their students and be more efficient with information. In the meantime, I am developing a website and an android app. What's the best datastore I can use? I need to be able to access student data on the app from the main database and send push notifications. Also feed updates. What's the best approach? What's the best tool I can use to deploy the website and the database? One for testing and prototyping, and an official one... Thanks in advance!!!!
Firebase has Android, iOS, and Web SDKs, and a console where you can develop, manage, and monitor all the data and analytics from one place. The Firebase Realtime Database is good for online presence and instant feed updates, while Firebase Firestore is good for user profiles and other relational data records. Firebase has a UI SDK that makes it easy to interface with the resources in the project, and with tons of tutorials and starter projects it should be easy to quickly build a decent prototype to iterate upon. Since you said massive: use their pricing calculator to figure out whether your expected scale will be covered by the free quota or, if you go pay-as-you-go, whether the price is reasonable for your project.
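For the push-notification side, a minimal backend sketch with the firebase-admin Python SDK; the credential file and device token are placeholders:

```python
import firebase_admin
from firebase_admin import credentials, messaging

cred = credentials.Certificate("serviceAccountKey.json")  # placeholder path
firebase_admin.initialize_app(cred)

# Send a feed-update notification to one device's registration token.
message = messaging.Message(
    notification=messaging.Notification(
        title="Feed update",
        body="A new announcement was posted.",
    ),
    token="device-registration-token",  # placeholder token
)
messaging.send(message)
```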
Good luck with the project!
It sounds like a server-client relationship (central database) and while SQLite is probably the simplest, note that its performance is probably the worst of the top 20 or so choices you have. It is different from Firebase and MySQL (and most other databases) in that it is embedded in the product, although it could be embedded in your server itself.
MySQL would require a separate MySQL db server, which means either two servers (one for MySQL, and one to provide your specific services to your client app) or both running on a single server machine. There are many alternatives in the same category as MySQL, and a choice of relational databases or document (NoSQL) databases. But architecturally, they are in the same category as MySQL, a separate db server that your application server would get its data from.
Firebase is different yet again, in that it is a service that is already hosted by a company, providing many integrated features such as authentication and storage of user account info. However it does take care of many of the concerns with running a server, such as performance, scalability and management. There are some negatives that you should be aware of though: any investment of time and coding with Firebase is pretty much non-portable, in that you are stuck with Firebase going forward. If you needed to switch to a different service, not only would it be a different API, but it would be a different architecture and much of your coding would need to be discarded. Second, it's owned and run by Google now, so you have a large corporation backing it, but that also means they could decide to discontinue it without any real effect on the Google bottom line. Also some folks would have concerns with storing data on Google servers. That said, I think if you are aware of these in advance, and especially if you are a high school student, that Firebase is a fairly easy winner here. The server is already set up for you, the documentation is very complete and rich, with lots of examples, and Google is not going away. The main concern would be if it really is massive, there could be a rising cost to the service. I suspect though that it is not massive, even if everyone in a school used it. The number of concurrent connections would not be huge (probably not even into the hundreds, even if there are thousands of users).
I'd go with Firebase even though you will need to learn their API, because you'll need to learn something one way or another. SQLite is a bit of a toy database, and MySQL is a real one but you (or someone) would need to manage that server on top of needing to develop the server and client app. With Firebase, much of the server already exists, including a professionally hosted database. There are tons of high-level features provided and initial cost is somewhere between very low and zero.
Part of this is dependent on what language you want to write this in. Javascript for a cross-platform client app (I'd use Vue.js + Vuetify for UI, and provide it as a web app and optionally wrap that with Electron for a desktop app, Apache Cordova for mobile). Server could be Javascript with an Express-based REST API on Node.js, talking to Firebase for services.
If you were a Java developer though, all this goes out the window and I'd recommend a simple Java server with Javalin for REST API, and embedded ObjectDB for database storage (combined into one server). ObjectDB is very very fast and can be separated out into a scalable server if this became truly massive. But you would probably never need to go that far.
All of this is a lot of work. I hope this isn't for something like an assignment. It is in the order of 6 months of work if you know what you're doing, all year if you're learning as you go.
Don't think you can go wrong with MySQL or PostgreSQL. Python + Postgres is a VERY well-supported stack and can do almost anything, with great visualization and administrative tools for both. There are some data-mismatch problems, however. Node.js/Python with MongoDB is a bit more modern and makes it trivial to "serialize" data with sprinklings of indexes. If you're using Go, then RocksDB is a great high-performance data-modeling base (it's not relational, however; it's more like a building block for a key-value store, but it's ACID, so you CAN build relational systems on top). I've used LevelDB for other projects (Java/C); it has a similar architecture and works great on Android (Chrome uses it for its metadata storage). Rocks/Level can achieve multi-million writes on cheap hardware thanks to their trade-offs.
I'm very familiar with SQLite. Personally it's my least favorite, but it's the most portable database format, and it does support ACID. I have many gripes, but the biggest issue is parallel access (you really need a single process/thread to own the data model, then use IPC to communicate with your process/thread). (The same could be said for LevelDB, but that's so efficient it's almost never an issue.)
If you're using Java, then JavaDB/Derby/HSQLDB are EXCELLENT systems: highly multi-threaded, with good stand-alone tools (embedded or TCP-connected), and perfect for unit tests. They can use anything from simple dumb portable formats (e.g. a text file containing only inserts) to classic journaled binary B-tree formats to pure in-memory. Java has a lot of overhead, so this is only really viable if you're already using Java in your project.
For high performance, MemSQL is a MySQL-compatible API over a hybrid in-memory index + on-disk column database (it feels like classic SQL to you, though). It falls into the MySQL swiss-army-knife toolkit.
Similarly, for in-memory there is Redis. Absolutely a joy to work with. It too is a specialty swiss army knife. Steer clear of Redis for primary data that you can't lose: while Redis does support persisting data, it isn't very efficient and will become the bottleneck. Redis is great for micro-queues, topics, stat aggregators, and message repositories (e.g. password-management systems, where writes are rare so persistence is viable). Plus I love that Redis uses a pure-text protocol, so I can netcat or telnet directly into it and do stuff.
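A tiny sketch of the micro-queue use case with redis-py; the queue name and payload are arbitrary:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Producer: push a job onto the list.
r.lpush("jobs", "resize:image-42")

# Consumer: blocking pop, waits up to 5 seconds for work.
item = r.brpop("jobs", timeout=5)
if item:
    _queue, payload = item
    print(payload.decode())  # "resize:image-42"
```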
I've loved cloud data stores. Amazon DynamoDB and Google Bigtable are awesome!!! Cheap compared to the normal hosting fees of an AWS EC2 instance. You can play all day: put a terabyte up, then blow it away, and pay only for what you play with. It's a very different data model, though. They give you only a small set of tricks for complex data modeling, and you have to be clever and have enough foresight not to box yourself into a hole (or let customers abuse expensive queries).
Then there's Cassandra/Hadoop (HBase). These are petabyte-scale databases (technically so are Dynamo/Bigtable). They're incredibly efficient at what they do, and they have a lot of plugins to do almost anything you need. I personally love these best (RocksDB/LevelDB are like their infant offspring). You can run these on your laptop (unlike the Amazon/Google engines above), but their discipline is very different from all the others above.
The problem I have is that we need to process and change (update/insert) 55M records every 2 minutes, and this updated data must be available to a REST API for filtering/selection. Response time for the REST API should be less than 1 second.
The most important factor for me is the 2-minute processing and storage window. There need to be two views of the data: 1. one for selection, and 2. the changed data.
Scylla can handle 1M events/s with a simple data model quite easily. The query API is CQL; we have a REST API, but that's for control/monitoring.
Cassandra is quite capable of the task, in a highly available way, given appropriate scaling of the system. Remember that updates are only inserts, and that efficient retrieval is only by key (which can be a compound key). Speaking of keys, make sure that the keys are well distributed.
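To illustrate the keying advice, a sketch with the Python driver: the partition key spreads load across nodes, and the clustering key makes range reads within a partition efficient. The keyspace and table are invented, and the keyspace is assumed to exist:

```python
import datetime

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("ticks")
session.execute("""
    CREATE TABLE IF NOT EXISTS quotes (
        symbol text,       -- partition key: distributes writes
        ts     timestamp,  -- clustering key: orders rows in a partition
        price  decimal,
        PRIMARY KEY ((symbol), ts)
    )
""")

# Efficient: retrieval by partition key plus a clustering-key range.
rows = session.execute(
    "SELECT ts, price FROM quotes WHERE symbol = %s AND ts > %s",
    ("AAPL", datetime.datetime(2023, 1, 1)),
)
```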
I love Scylla for pet projects; however, its license, which is based on a per-server model, is an issue. Thus I recommend Cassandra.
By 55M, do you mean 55 million entity changes per 2 minutes? That is relatively high: almost 460k per second. If I had to choose between Scylla and Cassandra, I would opt for Scylla, as it promises better performance for simple operations. However, it may be worth considering yet another technology. Take into consideration the required consistency, reliability, and high availability, and you may realize that there are more suitable ones. The REST API should not be the main driver, because you can always develop the API yourself if it is not supported by a given technology.
The Gentlent Tech Team made lots of updates within the past year. The biggest one being our database:
We decided to migrate our #PostgreSQL -based database systems to a custom implementation of #Cassandra . This allows us to integrate our product data perfectly in a system that just makes sense. High availability and scalability are supported out of the box.
MySQL has a lot of strengths working for it. It's simple and easy to set up and use, and its JSON engine is also really good these days. Mongo is also simple to set up and use, and its speed as a document-object storage engine is first class.
Where Postgres beats both is in combining all of the features that make MySQL and Mongo great, while adding enterprise-grade scalability and replication. It's Postgres' stability and robustness, while still fulfilling the roles of its contemporaries extremely well, that edges Postgres ahead for me.
When I was new to web development, I used PHP for the backend and MySQL for the database. But after improving my JS skills, I chose Node.js, for many reasons including npm, Express, the community, and fast coding. MongoDB pairs very well with Node.js. If your JS skills are good enough, I recommend migrating to Node.js and MongoDB.
My data was inherently hierarchical, but there was not enough content at each level of the hierarchy to justify a relational DB (SQL) with a one-to-many approach. It was also far easier to share data between the frontend (Angular), backend (Node.js), and DB (MongoDB), as they all pass around JSON natively, which allowed me to skip the translation layer from relational to hierarchical. You do need to think about the correct indexes in MongoDB, and make sure the objects have finite size. For instance, an object in your DB shouldn't have a property that is an array growing over time without limit. In addition, I did use MySQL for other types of data, such as a catalog of products which (a) had a lot of data, (b) was flat rather than hierarchical, and (c) needed very fast queries.
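A short PyMongo sketch of those two points (indexing for the query pattern, and keeping embedded arrays bounded); the collection and field names are made up:

```python
from pymongo import DESCENDING, MongoClient

coll = MongoClient().mydb.sessions

# Index matching the read pattern: lookups by user, newest first.
coll.create_index([("user_id", 1), ("updated_at", DESCENDING)])

# $push with $slice keeps only the 100 most recent events, so the
# document has a finite size and cannot grow without limit.
coll.update_one(
    {"user_id": "u-1"},
    {"$push": {"events": {"$each": [{"t": "click"}], "$slice": -100}}},
    upsert=True,
)
```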
PostgreSQL is an enterprise-level database with transactions, full-text indexes, vector indexes, JSON, BLOBs, geospatial data, and a lot more. It is highly scalable, configurable, and easily maintainable. All that in an open-source RDBMS, and you are still looking at GPL-licensed MySQL with limited features? Look again.
We wanted a JSON datastore that could save the state of our bioinformatics visualizations without destructive normalization. As a leading NoSQL data-storage technology, MongoDB has been a perfect fit for our needs. Plus it's open source and has an enterprise SLA scale-out path, with support for hosted solutions like Atlas. Mongo has been an absolute champ. So much so that SQL Server and Oracle have begun shipping JSON column types as a new feature for their databases. And when Fast Healthcare Interoperability Resources (FHIR) announced support for JSON, we basically had our FHIR data-lake technology.
In the field of bioinformatics, we regularly work with hierarchical and unstructured document data. Unstructured text data from PDFs, image data from radiographs, phylogenetic trees and cladograms, network graphs, streaming ECG data... none of it fits into a traditional SQL database particularly well. As such, we prefer to use document oriented databases.
MongoDB is probably the oldest component in our stack besides Javascript, having been in it for over 5 years. At the time, we were looking for a technology that could simply cache our data visualization state (stored in JSON) in a database as-is without any destructive normalization. MongoDB was the perfect tool; and has been exceeding expectations ever since.
Trivia fact: some of the earliest electronic medical records (EMRs) used a document-oriented database called MUMPS as early as the 1960s, prior to the invention of SQL. MUMPS is still in use today in systems like Epic and VistA, and stores upwards of 40% of all medical records at hospitals. So, we saw MongoDB as something of a 21st-century version of the MUMPS database.
While there have been some very clever techniques for bolting geo querying onto databases without native support, they are incredibly slow in the long run and error-prone at best.
MySQL finally introduced its own GEO functions and special indexing operations for GIS-type data. I prototyped with this, as MySQL is the database most familiar to me. But no matter what I did with it, how much tuning I gave it, or how much I played with it, the results came back inconsistent.
It was very disappointing.
I figured, at this point, that SQL Server, being an enterprise solution authored by Microsoft, one of the biggest software developers in the world, might contain some decent GIS support.
I was very disappointed.
Postgres is a database solution I'm still getting familiar with, and I noticed it had no built-in support for GIS, so I hilariously didn't pay it much attention. That was until I stumbled upon PostGIS, and my world changed forever.
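For flavor, a typical PostGIS radius query through psycopg2; the table, column, and coordinates are illustrative:

```python
import psycopg2

conn = psycopg2.connect("dbname=geo")
cur = conn.cursor()

# Find places within 5 km of a point; casting to geography makes
# ST_DWithin interpret the distance in meters.
cur.execute("""
    SELECT name
    FROM places
    WHERE ST_DWithin(
        geom::geography,
        ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
        5000
    )
""", (-122.4194, 37.7749))
print(cur.fetchall())
```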
I happened to port my ASP.NET Core web application from MSSQL to MySQL due to the infrastructure costs associated with the former DB. The application also had challenges creating a migration schema for ASP.NET membership on MySQL.
After thorough research I figured out how to do it, and I also made a video and uploaded it to YouTube. You can check it out here: https://youtu.be/X4I0DUw6C84
The full source code for the demo template is available on GitHub here: http://bit.ly/2LWgacA
Pros of MySQL
- SQL (800)
- Free (679)
- Easy (562)
- Widely used (528)
- Open source (490)
- High availability (180)
- Cross-platform support (160)
- Great community (104)
- Secure (79)
- Full-text indexing and searching (75)
- Fast, open, available (26)
- Reliable (16)
- SSL support (16)
- Robust (15)
- Enterprise Version (9)
- Easy to set up on all platforms (7)
- NoSQL access to JSON data type (3)
- Relational database (1)
- Easy, light, scalable (1)
- Sequel Pro (best SQL GUI) (1)
- Replica Support (1)
Pros of ScyllaDB
- Replication (2)
- Fewer nodes (1)
- Distributed (1)
- Scale up (1)
- High availability (1)
- Written in C++ (1)
- High performance (1)
Cons of MySQL
- Owned by a company with their own agenda (16)
- Can't roll back schema changes (3)