Alternatives to Datomic

MongoDB, Cassandra, Kafka, Neo4j, and GraphQL are the most popular alternatives and competitors to Datomic.

What is Datomic and what are its top alternatives?

Build flexible, distributed systems that can leverage the entire history of your critical data, not just the most current state. Build them on your existing infrastructure or jump straight to the cloud.
Datomic is a tool in the Databases category of a tech stack.

Top Alternatives to Datomic

  • MongoDB

    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...

  • Cassandra

    Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • Neo4j

    Neo4j stores data in nodes connected by directed, typed relationships with properties on both, also known as a Property Graph. It is a high performance graph store with all the features expected of a mature and robust database, like a friendly query language and ACID transactions. ...

  • GraphQL

    GraphQL is a data query language and runtime designed and used at Facebook to request and deliver data to mobile and web apps since 2012. ...

  • MySQL

    The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. ...

  • PostgreSQL

    PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions. ...

  • Microsoft SQL Server

    Microsoft® SQL Server is a database management and analysis system for e-commerce, line-of-business, and data warehousing solutions. ...

Datomic alternatives & related posts

MongoDB

The database for giant ideas
PROS OF MONGODB
  • 824
    Document-oriented storage
  • 591
    NoSQL
  • 546
    Ease of use
  • 465
    Fast
  • 406
    High performance
  • 256
    Free
  • 215
    Open source
  • 179
    Flexible
  • 142
    Replication & high availability
  • 109
    Easy to maintain
  • 41
    Querying
  • 37
    Easy scalability
  • 36
    Auto-sharding
  • 35
    High availability
  • 31
    Map/reduce
  • 26
    Document database
  • 24
    Easy setup
  • 24
    Full index support
  • 15
    Reliable
  • 14
    Fast in-place updates
  • 13
    Agile programming, flexible, fast
  • 11
    No database migrations
  • 7
    Easy integration with Node.js
  • 7
    Enterprise
  • 5
    Enterprise Support
  • 4
    Great NoSQL DB
  • 3
    Aggregation Framework
  • 3
    Support for many languages through different drivers
  • 3
    Drivers support is good
  • 2
    Schemaless
  • 2
    Easy to Scale
  • 2
    Fast
  • 2
    Awesome
  • 2
    Managed service
  • 1
    Consistent
CONS OF MONGODB
  • 5
    Very slow for connected models that require joins
  • 3
    Not ACID compliant
  • 1
    Proprietary query language

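To ground the "document-oriented storage" and "flexible" pros above, here is a minimal sketch using the pymongo driver; the connection string, database, collection, and field names are hypothetical:

```python
# Minimal sketch: documents in one collection may vary in structure.
# Assumes a local MongoDB instance; all names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
movies = client["demo"]["movies"]  # hypothetical database/collection

# Two documents with different shapes live in the same collection.
movies.insert_one({"title": "Alien", "year": 1979})
movies.insert_one({"title": "Arrival", "year": 2016,
                   "ratings": {"imdb": 7.9}})

# Query on a nested field that only some documents have.
for doc in movies.find({"ratings.imdb": {"$gt": 7.5}}):
    print(doc["title"])
```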
related MongoDB posts

Jeyabalaji Subramanian

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job: - The data replication must be near real-time, yet it should NOT impact the production database - The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end to end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda function with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dumb-easy.

In the end, we got to implement a highly scalable, near-real-time Change Data Replication service that "works", and we deployed it to production in a matter of a few days!

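A rough sketch of what the Python side of this pipeline could look like, using boto3 for SQS and SQLAlchemy for the warehouse writes; the queue URL, DSN, table, and change-event shape are hypothetical, not the author's actual code:

```python
# Rough sketch of the SQS -> warehouse mirroring service.
# Queue URL, warehouse DSN, table, and payload shape are hypothetical.
import json

import boto3
from sqlalchemy import create_engine, text

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"
engine = create_engine("postgresql://app:secret@warehouse-host/dw")

def apply_change(conn, change):
    """Mirror one MongoDB change event onto a warehouse table."""
    if change["op"] == "insert":
        conn.execute(
            text("INSERT INTO mirror (id, payload) VALUES (:id, :payload)"),
            {"id": change["id"], "payload": json.dumps(change["doc"])},
        )
    elif change["op"] == "delete":
        conn.execute(text("DELETE FROM mirror WHERE id = :id"),
                     {"id": change["id"]})
    # update / replace would be handled analogously

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)  # long polling
    for msg in resp.get("Messages", []):
        with engine.begin() as conn:  # one transaction per message
            apply_change(conn, json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```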
Robert Zuber

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.

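The Redis-based rate limiting of partner API requests mentioned above can be sketched as a fixed-window counter with redis-py; the key scheme, limit, and window are hypothetical:

```python
# Rough sketch of fixed-window rate limiting on Redis.
# Key scheme and limits are illustrative only.
import time

import redis

r = redis.Redis()  # assumes a local Redis instance

def allow_request(partner: str, limit: int = 100, window: int = 60) -> bool:
    """Allow at most `limit` calls per `window` seconds per partner."""
    bucket = f"ratelimit:{partner}:{int(time.time()) // window}"
    count = r.incr(bucket)        # atomic increment
    if count == 1:
        r.expire(bucket, window)  # first hit in the window sets the TTL
    return count <= limit

if allow_request("github"):
    pass  # safe to call the partner API here
```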
Cassandra

A partitioned row store. Rows are organized into tables with a required primary key.
PROS OF CASSANDRA
  • 114
    Distributed
  • 95
    High performance
  • 80
    High availability
  • 74
    Easy scalability
  • 52
    Replication
  • 26
    Multi datacenter deployments
  • 26
    Reliable
  • 8
    OLTP
  • 7
    Open source
  • 7
    Schema optional
  • 2
    Workload separation (via MDC)
  • 1
    Fast
CONS OF CASSANDRA
  • 2
    Reliability of replication
  • 1
    Updates

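A minimal sketch of the partitioned row store and CQL described above, using the DataStax cassandra-driver package; the contact point, keyspace, and table are hypothetical:

```python
# Minimal sketch of CQL via the DataStax Python driver.
# Contact point, keyspace, and table are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
# The partition key (device_id) decides which node stores each row.
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.events (
        device_id text, ts timestamp, value double,
        PRIMARY KEY (device_id, ts))
""")
session.execute(
    "INSERT INTO demo.events (device_id, ts, value) "
    "VALUES (%s, toTimestamp(now()), %s)",
    ("sensor-1", 0.42),
)
for row in session.execute(
        "SELECT ts, value FROM demo.events WHERE device_id = %s",
        ("sensor-1",)):
    print(row.ts, row.value)
```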
related Cassandra posts

Thierry Schellenbach
Shared insights on Redis, Cassandra, and RocksDB

1.0 of Stream leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle their rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

RocksDB is a highly performant embeddable database library developed and maintained by Facebook’s data engineering team. RocksDB started as a fork of Google’s LevelDB that introduced several performance improvements for SSDs. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it’s fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it’s much simpler than Cassandra.

This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.

#InMemoryDatabases #DataStores #Databases

Umair Iftikhar
Technical Architect at Vappar

Developing a solution that collects telemetry data from different devices: a minimum of roughly 1,000 devices and a maximum of 12,000, with each device sending 2 packets per second. This is time-series data, and the data definitions and different reports (building information, maintenance records, etc.) are saved in PostgreSQL. I want to know about the best solution. This data is required for math and ML to run different algorithms. Also, the data is raw, without the definitions and information that are stored in PostgreSQL. Initially, I went with TimescaleDB due to its PostgreSQL support, but as the number of sites increased, I started facing many issues with TimescaleDB in terms of flexibility of storing data.

My major requirement also includes replication of the database for reporting and different purposes. You may suggest options other than Druid and Cassandra, but an open-source solution is appreciated.

Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system
PROS OF KAFKA
  • 122
    High-throughput
  • 116
    Distributed
  • 87
    Scalable
  • 81
    High-Performance
  • 65
    Durable
  • 36
    Publish-Subscribe
  • 19
    Simple-to-use
  • 15
    Open source
  • 10
    Written in Scala and Java; runs on the JVM
  • 6
    Message broker + Streaming system
  • 4
    Avro schema integration
  • 2
    Supports multiple clients
  • 2
    Robust
  • 2
    KSQL
  • 2
    Partitioned, replayable log
  • 1
    Fun
  • 1
    Extremely good parallelism constructs
  • 1
    Simple publisher / multi-subscriber model
  • 1
    Flexible
CONS OF KAFKA
  • 27
    Non-Java clients are second-class citizens
  • 26
    Needs Zookeeper
  • 7
    Operational difficulties
  • 2
    Terrible Packaging

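A minimal sketch of the publish-subscribe, partitioned-log model praised above, using the kafka-python client; the broker address, topic, and consumer group are hypothetical:

```python
# Minimal sketch of Kafka pub-sub with kafka-python.
# Broker address, topic, and group are hypothetical.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("clickstream", key=b"user-1", value=b'{"page": "/home"}')
producer.flush()  # block until the broker acknowledges

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    group_id="analytics",          # consumers in a group share partitions
    auto_offset_reset="earliest",  # replay the log from the beginning
)
for message in consumer:  # blocks, streaming records as they arrive
    print(message.partition, message.offset, message.value)
```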
related Kafka posts

Eric Colson
Chief Algorithms Officer at Stitch Fix

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.


#DataScience #DataStack #Data

John Kodumal

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.

Neo4j

The world’s leading Graph Database
PROS OF NEO4J
  • 68
    Cypher – graph query language
  • 58
    Great graphdb
  • 31
    Open source
  • 29
    Rest api
  • 27
    High-Performance Native API
  • 24
    ACID
  • 20
    Easy setup
  • 15
    Great support
  • 10
    Clustering
  • 9
    Hot Backups
  • 8
    Great Web Admin UI
  • 7
    Powerful, flexible data model
  • 7
    Mature
  • 6
    Embeddable
  • 5
    Easy to Use and Model
  • 4
    Best Graphdb
  • 4
    Highly-available
  • 2
    It's awesome, I wanted to try it
  • 2
    Great onboarding process
  • 2
    Great query language and built in data browser
  • 2
    Used by Crunchbase
CONS OF NEO4J
  • 4
    Comparably slow
  • 4
    Can't store a vertex as JSON
  • 1
    Doesn't have a managed cloud service at low cost

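A minimal sketch of the property-graph model and Cypher highlighted above, using the official neo4j Python driver; the URI, credentials, labels, and properties are hypothetical:

```python
# Minimal sketch of nodes, a typed relationship, and a Cypher query.
# URI, credentials, and schema are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    # Two nodes and a directed, typed relationship, all with properties.
    session.run(
        "MERGE (a:Person {name: $a}) "
        "MERGE (b:Person {name: $b}) "
        "MERGE (a)-[:KNOWS {since: 2020}]->(b)",
        a="Alice", b="Bob",
    )
    for record in session.run(
            "MATCH (p:Person)-[:KNOWS]->(q) RETURN p.name, q.name"):
        print(record["p.name"], "knows", record["q.name"])

driver.close()
```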
related Neo4j posts

We have an in-house-built experiment management system. We produce samples as input to the next step, which could then produce one sample (1-to-1) or many samples (1-to-many). There are many steps like this. So far, we are tracking genealogy (limited tracking) in the MySQL database, and it is becoming hard to trace back to the original material or sample (I can give more details if required). So, we are considering a graph database. I am requesting advice from the experts.

  1. Is a graph database the right choice, or can we manage with an RDBMS?
  2. If an RDBMS, which one, and which feature or approach could make this manageable and sustainable?
  3. If a graph database (Neo4j, OrientDB, Azure Cosmos DB, Amazon Neptune, ArangoDB), which one is good, and what are the best practices?

I am sorry that this might be a loaded question.


I'm evaluating the use of RedisGraph vs. Microsoft SQL Server 2019 graph features to build a social graph. One of the key criteria is high availability and cross-data-center replication of data. While Neo4j is a much more mature solution in general, I'm not considering it due to the cost and the introduction of a new stack into the ecosystem. Also, due to the nature of the data and org policies, using a cloud-based solution won't be a viable choice.

We currently use Redis as a cache and SQL Server 2019 as our RDBMS.

I'm inclined towards SQL Server 2019 graph, as we already use SQL Server extensively as our relational database and have all the HA and cross-data-center replication setup readily available. I still need to evaluate whether it fulfills our need as a graph DB, though. I have also learned that SQL Server 2019 is still a new player in the market and attempts to fit graph-like queries on top of a relational model (with node and edge tables). RedisGraph seems very promising, but I'm not totally sure about HA, graph data backup, and cross-data-center support.

GraphQL

A data query language and runtime
PROS OF GRAPHQL
  • 69
    Schemas defined by the requests made by the user
  • 62
    Will replace RESTful interfaces
  • 59
    The future of APIs
  • 47
    The future of databases
  • 12
    Self-documenting
  • 11
    Get many resources in a single request
  • 5
    Ask for what you need, get exactly that
  • 4
    Query Language
  • 3
    Evolve your API without versions
  • 3
    Fetch different resources in one request
  • 3
    Type system
  • 2
    GraphiQL
  • 2
    Ease of client creation
  • 2
    Easy setup
  • 1
    Good for apps that query at build time. (SSR/Gatsby)
  • 1
    Backed by Facebook
  • 1
    Easy to learn
  • 1
    "Open" document
  • 1
    Better versioning
  • 1
    Standard
  • 1
    1. Describe your data
  • 1
    Fast prototyping
CONS OF GRAPHQL
  • 3
    Hard to migrate from GraphQL to another technology
  • 3
    More code to type.
  • 1
    Works just like any other API at runtime
  • 1
    Takes longer to build compared to schemaless.

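To make "ask for what you need, get exactly that" and "get many resources in a single request" concrete, here is a minimal sketch that POSTs a GraphQL query with the requests library; the endpoint and schema are hypothetical:

```python
# Minimal sketch: one POST carries a query that names exactly the
# fields the client wants. Endpoint and schema are hypothetical.
import requests

query = """
query ($id: ID!) {
  movie(id: $id) {
    title               # only the fields we ask for come back
    director { name }   # a related resource, in the same request
  }
}
"""

resp = requests.post(
    "https://api.example.com/graphql",  # hypothetical endpoint
    json={"query": query, "variables": {"id": "42"}},
)
print(resp.json()["data"]["movie"]["title"])
```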
related GraphQL posts

Shared insights on Node.js, GraphQL, and MongoDB

I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for you to save the movies you want to see and to rate the movies you already saw. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.

For the #BackEnd I decided to use Node.js, GraphQL, and MongoDB:

  1. Node.js has a huge community so it will always be a safe choice in terms of libraries and finding solutions to problems you may have

  2. GraphQL because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option as it feels more natural for writing APIs, it improves development velocity, by definition it fixes the over-fetching and under-fetching problems that are so common in REST APIs, and on top of that, the community is getting bigger and bigger.

  3. MongoDB was my choice for the database as I already have a lot of experience working with it and because, despite some of the bad reputation it has acquired in recent months, I still believe it is a powerful database for at least a very long list of use cases, such as the one I needed for my website.

Nick Rockwell
SVP, Engineering at Fastly

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's a Node.js back end for the front end, which is mainly for server-side rendering as well.

Behind there, the main repository for the GraphQL server is a big table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.

MySQL

The world's most popular open source database
PROS OF MYSQL
  • 793
    SQL
  • 672
    Free
  • 556
    Easy
  • 527
    Widely used
  • 485
    Open source
  • 180
    High availability
  • 160
    Cross-platform support
  • 104
    Great community
  • 78
    Secure
  • 75
    Full-text indexing and searching
  • 25
    Fast, open, available
  • 14
    SSL support
  • 13
    Reliable
  • 13
    Robust
  • 8
    Enterprise Version
  • 7
    Easy to set up on all platforms
  • 2
    NoSQL access to JSON data type
  • 1
    Relational database
  • 1
    Easy, light, scalable
  • 1
    Sequel Pro (best SQL GUI)
  • 1
    Replica Support
CONS OF MYSQL
  • 14
    Owned by a company with their own agenda
  • 1
    Can't roll back schema changes

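A minimal sketch of parameterized access from Python via the mysql-connector-python driver; connection settings and schema are hypothetical:

```python
# Minimal sketch of parameterized MySQL access.
# Connection settings and schema are hypothetical.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="demo")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id INT AUTO_INCREMENT PRIMARY KEY,
        email VARCHAR(255) NOT NULL UNIQUE)
""")
cur.execute("INSERT INTO users (email) VALUES (%s)", ("a@example.com",))
conn.commit()  # make the InnoDB transaction durable
cur.execute("SELECT id, email FROM users")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```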
related MySQL posts

Tim Abbott

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults, to bad query plans which required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.

I can't recommend it highly enough.

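A rough sketch of a partial index like the "unread messages" one described above, executed via psycopg2; the table, columns, and predicate are hypothetical:

```python
# Rough sketch: index only the rows the hot query touches.
# Table, columns, and predicate are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=demo user=app host=localhost")
cur = conn.cursor()
cur.execute("""
    CREATE INDEX IF NOT EXISTS idx_unread_messages
    ON messages (user_id)
    WHERE read = false
""")
conn.commit()
cur.close()
conn.close()
```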
Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber

Our most popular (& controversial!) article to date on the Uber Engineering blog in 3+ yrs: why we moved from PostgreSQL to MySQL. In essence, it was due to a variety of limitations of Postgres at the time. Fun fact: earlier in Uber's history we'd actually moved from MySQL to Postgres before switching back for good, and though we published the article in Summer 2016, we haven't looked back since:

The early architecture of Uber consisted of a monolithic backend application written in Python that used Postgres for data persistence. Since that time, the architecture of Uber has changed significantly, to a model of microservices and new data platforms. Specifically, in many of the cases where we previously used Postgres, we now use Schemaless, a novel database sharding layer built on top of MySQL (https://eng.uber.com/schemaless-part-one/). In this article, we’ll explore some of the drawbacks we found with Postgres and explain the decision to build Schemaless and other backend services on top of MySQL:

https://eng.uber.com/mysql-migration/

PostgreSQL

A powerful, open source object-relational database system
PROS OF POSTGRESQL
  • 755
    Relational database
  • 508
    High availability
  • 436
    Enterprise class database
  • 380
    SQL
  • 302
    SQL + NoSQL
  • 171
    Great community
  • 145
    Easy to setup
  • 129
    Heroku
  • 128
    Secure by default
  • 111
    Postgis
  • 48
    Supports Key-Value
  • 46
    Great JSON support
  • 32
    Cross platform
  • 30
    Extensible
  • 26
    Replication
  • 24
    Triggers
  • 22
    Rollback
  • 21
    Multiversion concurrency control
  • 20
    Open source
  • 17
    Heroku Add-on
  • 14
    Stable, Simple and Good Performance
  • 13
    Powerful
  • 12
    Let's be serious, what other SQL DB would you go for?
  • 9
    Good documentation
  • 7
    Intelligent optimizer
  • 7
    Scalable
  • 6
    Reliable
  • 6
    Transactional DDL
  • 6
    Modern
  • 5
    Free
  • 5
    One stop solution for all things sql no matter the os
  • 4
    Relational database with MVCC
  • 3
    Faster Development
  • 3
    Full-Text Search
  • 3
    Developer friendly
  • 2
    Excellent source code
  • 2
    Great DB for Transactional system or Application
  • 2
    search
  • 1
    Free version
  • 1
    Open-source
  • 1
    Full-text
  • 1
    Text
CONS OF POSTGRESQL
  • 9
    Table/index bloat

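A minimal sketch of the "SQL + NoSQL" pro above: a relational table with a JSONB document column queried through a JSON operator, via psycopg2; the DSN and schema are hypothetical:

```python
# Minimal sketch of relational + document storage in one table.
# DSN and schema are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=demo user=app password=secret host=localhost")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id SERIAL PRIMARY KEY,
        payload JSONB NOT NULL)
""")
cur.execute("INSERT INTO events (payload) VALUES (%s::jsonb)",
            ('{"kind": "signup", "plan": "pro"}',))
conn.commit()
# ->> extracts a JSON field as text, blending SQL and document queries.
cur.execute("SELECT id FROM events WHERE payload->>'kind' = %s", ("signup",))
print(cur.fetchall())
cur.close()
conn.close()
```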
Microsoft SQL Server

A relational database management system developed by Microsoft
PROS OF MICROSOFT SQL SERVER
  • 137
    Reliable and easy to use
  • 101
    High performance
  • 94
    Great with .net
  • 65
    Works well with .net
  • 56
    Easy to maintain
  • 21
    Azure support
  • 17
    Always on
  • 17
    Full Index Support
  • 10
    Enterprise manager is fantastic
  • 9
    In-Memory OLTP Engine
  • 2
    Security is forefront
  • 1
    Columnstore indexes
  • 1
    Great documentation
  • 1
    Faster Than Oracle
  • 1
    Decent management tools
  • 1
    Easy to setup and configure
  • 1
    Docker Delivery
CONS OF MICROSOFT SQL SERVER
  • 4
    Expensive Licensing
  • 2
    Microsoft

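A minimal sketch of connecting from Python via pyodbc, one common route outside the .NET ecosystem; the driver name, server, and credentials are hypothetical:

```python
# Minimal sketch of SQL Server access via pyodbc.
# Driver name, server, and credentials are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=demo;UID=app;PWD=secret;"
    "TrustServerCertificate=yes;"
)
cur = conn.cursor()
cur.execute("SELECT @@VERSION")  # the server's version string
print(cur.fetchone()[0])
cur.execute("SELECT TOP 5 name FROM sys.tables ORDER BY name")
for row in cur.fetchall():
    print(row.name)
conn.close()
```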
related Microsoft SQL Server posts

We initially started out with Heroku as our PaaS provider, due to our original developer's desire to use it for our Ruby on Rails application/website at the time. We found response times painfully slow, sometimes taking 10 seconds to start loading the main page, and moving up to the next "compute" level was going to be very expensive.

We moved our site over to AWS Elastic Beanstalk, and not only did response times on the site become practically instant, our cloud bill for the application was cut in half.

In the database world, we are currently using Amazon RDS for PostgreSQL, and we also have both MariaDB and Microsoft SQL Server hosted on Amazon RDS. The plan is to migrate all three of those database systems to AWS Aurora Serverless.

Additional services we use for our public applications: AWS Lambda, Python, Redis, Memcached, AWS Elastic Load Balancing (ELB), Amazon Elasticsearch Service, Amazon ElastiCache


I am a Microsoft SQL Server programmer who is a bit out of practice. I have been asked to assist on a new project. The overall purpose is to organize a large number of recordings so that they can be searched. I have an enormous music library, but my songs are several hours long. I need to include things like the time, date, and location of each recording. I don't have a problem with the general database design. I have two primary questions:

  1. I need to use either MySQL or PostgreSQL on a Linux-based OS. Which would be better for this application?
  2. I have not dealt with a sound-based data type before. How do I store that and put it in a table? Thank you.