JSONlite vs MySQL

JSONlite vs MySQL: What are the differences?

Developers describe JSONlite as "A simple, serverless, zero-configuration JSON document store". JSONlite sandboxes the current working directory, similar to SQLite. Its data directory is named jsonlite.data by default, and each JSON document is pretty-printed and saved under a UUID. On the other hand, MySQL is described as "The world's most popular open source database". The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.

JSONlite and MySQL can be primarily classified as "Databases" tools.

JSONlite and MySQL are both open source tools. MySQL with 3.97K GitHub stars and 1.56K forks on GitHub appears to be more popular than JSONlite with 817 GitHub stars and 33 GitHub forks.

What is JSONlite?

JSONlite sandboxes the current working directory, similar to SQLite. The JSONlite data directory is named jsonlite.data by default, and each JSON document is pretty-printed and saved under a UUID.
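
To make that layout concrete, here is a minimal Python sketch of the storage model described above. It is illustrative only (JSONlite itself is a shell tool), and the function names are made up for the example:

    # Illustrative sketch of a JSONlite-style layout: each document is
    # pretty-printed JSON stored under a generated UUID inside a
    # jsonlite.data directory in the current working directory.
    import json
    import uuid
    from pathlib import Path

    DATA_DIR = Path("jsonlite.data")

    def save_document(doc: dict) -> str:
        """Store a document and return its generated id."""
        DATA_DIR.mkdir(exist_ok=True)
        doc_id = str(uuid.uuid4())
        (DATA_DIR / f"{doc_id}.json").write_text(json.dumps(doc, indent=4))
        return doc_id

    def load_document(doc_id: str) -> dict:
        """Read a document back by its id."""
        return json.loads((DATA_DIR / f"{doc_id}.json").read_text())

    doc_id = save_document({"name": "Ada", "role": "engineer"})
    print(doc_id, load_document(doc_id))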

What is MySQL?

The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
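
For orientation, a minimal Python round trip against a MySQL server is sketched below, assuming the mysql-connector-python driver; the host, credentials, and table are placeholders, not anything taken from this comparison:

    # Minimal MySQL round trip from Python (placeholder credentials/schema).
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="app", password="secret", database="appdb"
    )
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS users ("
        " id INT AUTO_INCREMENT PRIMARY KEY,"
        " name VARCHAR(100) NOT NULL)"
    )
    cur.execute("INSERT INTO users (name) VALUES (%s)", ("Ada",))
    conn.commit()
    cur.execute("SELECT id, name FROM users")
    for row in cur.fetchall():
        print(row)
    cur.close()
    conn.close()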

      What are some alternatives to JSONlite and MySQL?
      PostgreSQL
      PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions.
      MongoDB
      MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
      Microsoft SQL Server
Microsoft® SQL Server is a database management and analysis system for e-commerce, line-of-business, and data warehousing solutions.
      MariaDB
      Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most featureful, stable, and sanely licensed open SQL server in the industry. MariaDB is designed as a drop-in replacement of MySQL(R) with more features, new storage engines, fewer bugs, and better performance.
      SQLite
      SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views, is contained in a single disk file.
      Decisions about JSONlite and MySQL
Jake Stein
CEO at Stitch | 16 upvotes · 84.5K views
at Stitch
Clojure
MySQL
PostgreSQL

      The majority of our Clojure microservices are simple web services that wrap a transactional database with CRUD operations and a little bit of business logic. We use both MySQL and PostgreSQL for transactional data persistence, having transitioned from the former to the latter for newer services to take advantage of the new features coming out of the Postgres community.

      Most of our Clojure best practices can be summed up by the phrase "keep it simple." We avoid more complex web frameworks in favor of using the Ring library to build web service routes, and we prefer sending SQL directly to the JDBC library rather than using a complicated ORM or SQL DSL.

Gregory Koberger
Founder | 13 upvotes · 125.6K views
at ReadMe.io
MongoDB
MySQL
PostgreSQL
MongoDB Atlas
MongoLab
Compose

We went with MongoDB, almost by mistake. I had never used it before, but I knew I wanted the *EAN part of the MEAN stack, so why not go all in. I come from a background of SQL (first MySQL, then PostgreSQL), so I definitely abused Mongo at first... by trying to turn it into something more relational than it should be. But hey, data is supposed to be relational, so there wasn't really any way to get around that.

There's a lot I love about MongoDB, and a lot I hate. I still don't know if we made the right decision. We've been able to build much quicker, but we also have had some growing pains. We host our databases on MongoDB Atlas, and I can't say enough good things about it. We had tried MongoLab and Compose before it, and with MongoDB Atlas I finally feel like things are in a good place. I don't know if I'd use it for a one-off small project, but for a large product Atlas has given us a ton more control, stability and trust.

Antonio Sanchez
CEO at Kokoen GmbH | 14 upvotes · 275.6K views
at Kokoen GmbH
PHP
Laravel
MySQL
Go
MongoDB
JavaScript
Node.js
ExpressJS

Back at the start of 2017, we decided to create a web-based tool for SEO on-page analysis of our clients' websites. We had over 2,000 websites to analyze, so we had to perform thousands of requests to get every single page from those websites, process the information, and store the large amount of data somewhere.

Very soon we realized that the initially chosen scripting language and database stack, PHP with Laravel and MySQL, was not going to cope efficiently with such a task.

By that time, we were doing some experiments for other projects with a language we had recently gotten to know, Go, so we decided to give it a try and write the crawler in it. It was fantastic: we could process much more data with far less CPU power and in less time. By using the concurrency abilities the language offers, we could also make more HTTP requests in less time.
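
The crawler itself was written in Go and is not reproduced here; purely as an illustration of the concurrent-fetching idea, a rough Python analogue (with placeholder URLs) might look like this:

    # Fetching many pages in parallel instead of sequentially is what cuts
    # the wall-clock time of a crawl; this is only a sketch of the pattern.
    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    urls = ["https://example.com/", "https://example.com/about"]  # placeholders

    def fetch(url: str) -> tuple:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, len(resp.read())

    with ThreadPoolExecutor(max_workers=20) as pool:
        for url, size in pool.map(fetch, urls):
            print(url, size)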

Unfortunately, I have no comparison numbers to show for the performance difference between Go and PHP, since the difference was so clear from the beginning that we didn't feel the need to run further comparison tests or document them. We just switched fully to Go.

There was still a problem. Despite the large amount of data we were generating, MySQL was performing very well, but as we added more and more features, and with them more and more different types of data to save, it became a nightmare for the database architects to structure everything correctly in the database. So it was clear what we had to do next: switch to a NoSQL database. We switched to MongoDB, and it was also fantastic: we spent almost zero time thinking about how to structure the database, and the performance also seemed to be better, but again, I have no comparison numbers to show due to the lack of time.
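
For illustration, the flexible-schema point can be sketched in Python with pymongo: documents with different sets of fields can live in the same collection without any migration. The connection string, database, and field names here are placeholders, not the tool's actual schema.

    # Two differently shaped documents in one MongoDB collection (sketch only).
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    pages = client.seo_tool.pages

    pages.insert_one({"url": "https://example.com/", "title": "Home", "h1_count": 1})
    pages.insert_one({
        "url": "https://example.com/blog",
        "title": "Blog",
        "images_missing_alt": ["hero.png"],   # a field the first document lacks
        "hreflang": {"en": "/", "de": "/de"},
    })

    for page in pages.find({}, {"_id": 0, "url": 1, "title": 1}):
        print(page)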

We also decided to switch the website from PHP and Laravel to JavaScript with Node.js and ExpressJS, since working with the JSON data we were now saving in the database would be easier.

As of now, we don't use the tool only internally; we have also opened it up for everyone to use for free: https://tool-seo.com

Tim Abbott
Founder at Zulip | 23 upvotes · 351.5K views
at Zulip
PostgreSQL
MySQL
Elasticsearch

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

And after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
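
As a hedged sketch of the two features mentioned above (using psycopg2 and made-up table and column names, not Zulip's actual schema), a partial index and a full-text query in Postgres look roughly like this:

    # Partial index: only unread rows are indexed, keeping the index small
    # for an "unread messages"-style lookup; plus built-in full-text search.
    import psycopg2

    conn = psycopg2.connect("dbname=app user=app")  # placeholder DSN
    cur = conn.cursor()

    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_messages_unread "
        "ON messages (recipient_id) WHERE NOT is_read"
    )

    cur.execute(
        "SELECT id, body FROM messages "
        "WHERE to_tsvector('english', body) @@ plainto_tsquery('english', %s)",
        ("database migration",),
    )
    print(cur.fetchall())

    conn.commit()
    cur.close()
    conn.close()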

      I can't recommend it highly enough.

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber | 15 upvotes · 441K views
at Uber Technologies
PostgreSQL
MySQL
Python

Our most popular (and controversial!) article to date on the Uber Engineering blog in 3+ years: why we moved from PostgreSQL to MySQL. In essence, it was due to a variety of limitations of Postgres at the time. Fun fact: earlier in Uber's history we'd actually moved from MySQL to Postgres before switching back for good, and though we published the article in Summer 2016 we haven't looked back since:

The early architecture of Uber consisted of a monolithic backend application written in Python that used Postgres for data persistence. Since that time, the architecture of Uber has changed significantly, to a model of microservices and new data platforms. Specifically, in many of the cases where we previously used Postgres, we now use Schemaless, a novel database sharding layer built on top of MySQL (https://eng.uber.com/schemaless-part-one/). In this article, we'll explore some of the drawbacks we found with Postgres and explain the decision to build Schemaless and other backend services on top of MySQL:

      https://eng.uber.com/mysql-migration/

Khauth György
CTO at SalesAutopilot Kft. | 12 upvotes · 230.7K views
at SalesAutopilot Kft.
Amazon CloudWatch
Amazon SNS
Amazon CloudFront
Amazon Route 53
MySQL
MongoDB
Redis
jQuery UI
Vue.js
Vuetify
vuex
Docker
Jenkins
AWS CodePipeline
GitHub

I'm the CTO of a marketing automation SaaS. Because of the continuously increasing load, we moved to the AWS cloud. We are using more and more AWS features: Amazon CloudWatch, Amazon SNS, Amazon CloudFront, Amazon Route 53 and so on.

Our main database is MySQL, but for the hundreds of GB of document data we increasingly use MongoDB. We started using Redis for caching and other time-sensitive operations.

On the front end we use jQuery UI + Smarty, but we are now refactoring our app to use Vue.js with Vuetify. Because our app is relatively complex, we need to use vuex as well.

On the development side we use GitHub as our main repo, Docker for local and server environments, and Jenkins and AWS CodePipeline for continuous integration.

Julien DeFrance
Principal Software Engineer at Tophatter | 16 upvotes · 1.3M views
at SmartZip
Rails
Rails API
AWS Elastic Beanstalk
Capistrano
Docker
Amazon S3
Amazon RDS
MySQL
Amazon RDS for Aurora
Amazon ElastiCache
Memcached
Amazon CloudFront
Segment
Zapier
Amazon Redshift
Amazon Quicksight
Superset
Elasticsearch
Amazon Elasticsearch Service
New Relic
AWS Lambda
Node.js
Ruby
Amazon DynamoDB
Algolia

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who is most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

      I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.

      For the SaaS product, we kept on working with Rails as this was what my team had the most knowledge in. We've however broken up the monolith and decoupled the front-end application from the backend thanks to the use of Rails API so we'd get independently scalable micro-services from now on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so each application would run within its own container, independently of the underlying host configuration.

      Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially. Ultimately migrated to Amazon RDS for Aurora / MySQL when it got released. Once again, here you need a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

Caching: Amazon ElastiCache / Memcached
CDN: Amazon CloudFront
Systems integration: Segment / Zapier
Data warehousing: Amazon Redshift
BI: Amazon Quicksight / Superset
Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.

Ajit Parthan
CTO at Shaw Academy | 1 upvote · 5.6K views
at Shaw Academy
MySQL
MongoDB

Initial storage was traditional MySQL. The pace of change during startup mode made it very difficult to maintain a clean and consistent schema, and large portions ended up as unstructured data stuffed into CLOBs and BLOBs.

      Moving to MongoDB definitely made this part much easier.

      Accessing data for analysis is a little bit of a challenge - especially for people coming from the world of SQL Workbench. But with tools like Exploratory this is becoming less of a problem.

      #NosqlDatabaseAsAService

Alex A
Founder at PRIZ Guru | 6 upvotes · 8.8K views
at PRIZ Guru
MySQL
PostgreSQL

One of our battles at the very beginning of the road was choosing the right database. In fact, our first prototype was built on MySQL, and back then nothing else was even under consideration (don't ask me why). At some point, I was working on a project which was running on PostgreSQL a