Taglines: MySQL, "The world's most popular open source database"; MongoDB, "The database for giant ideas"; PostgreSQL, "A powerful, open source object-relational database system".
A DBMS forked to replace MySQL, whose licensing keeps going back and forth. Its commands and structure are almost completely identical to MySQL's, so anyone can migrate over easily; it's open source software.
Used as main storage for user settings, account settings, etc. (Our social data itself resides in ElasticSearch.)
MySQL has pluggable storage engines, and these days everyone uses InnoDB. Older versions, however, default to MyISAM... which does not support transactions. If you need transactions, make absolutely sure to use InnoDB!
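The point above is easy to demonstrate: with a transactional engine, a failure mid-write rolls everything back; with MyISAM it would not. A minimal sketch, using Python's built-in SQLite (which is transactional, like InnoDB) purely as a stand-in:

```python
import sqlite3

# In-memory SQLite table standing in for an InnoDB table; the point is the
# rollback behavior that MyISAM cannot provide.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    pass

# The partial update was rolled back: alice still has her full balance.
balance = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)  # 100
```

On a MyISAM table the UPDATE would have been applied immediately, with no way to undo it after the crash.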
Data warehouse for storing anonymized/de-identified student data for analysis and reporting purposes.
Setup web server as service and user/database management for developer team.
Used MySQL as an ORM-managed database in application development (Hibernate/Sequelize).
We get real value out of MySQL expressions and sub-selects with our relational models.
MySQL is where most of our data lives. It's old, but coming from a big-data background we find it easier to use and build on top of. If the data is properly indexed and the servers properly sharded, MySQL is quite performant.
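The sharding mentioned above usually comes down to stable key-based routing: hash the sharding key so each record always lands on the same server. A minimal sketch (shard names and count are made-up assumptions, not anyone's actual topology):

```python
import hashlib

# Hypothetical shard list; in practice these would be connection strings.
SHARDS = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2", "mysql-shard-3"]

def shard_for(user_id: str) -> str:
    """Stable key-based routing: the same user always maps to the same shard."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for this user goes to one server, so its indexes stay hot
# and the working set per machine stays small.
print(shard_for("user-42"))
```

Real deployments typically add consistent hashing or a lookup table so that adding a shard doesn't remap every key, but the routing idea is the same.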
I initially used MySQL as the database backend for the service. Thanks to JDBC (and some discipline), it was easy to swap in Oracle during deployment.
I am not using this DB for blog posts or data stored on the site. I am using it to track IP addresses and fully qualified domain names of attacker machines that either posted spam on my website, ping flooded me, or had more than a certain number of failed SSH attempts.
As the baseline database for relational content, such as user data, posts, logs and other relevant but not-so-heavy data. Consumed by web servers.
A common database store for everything from basic admin use to advanced databases; high performance and highly maintainable.
MySQL is the primary datastore; it is set up in a master/slave configuration with offsite replicas.
Schema-less, JS in the console, flexible, fast, and pairs well with node/Mongoose.
We are testing out MongoDB at the moment. Currently we only use a small EC2 setup for a delayed job queue backed by agenda. If it works out well, we may look at where it could become a primary document storage engine for us.
While the huge majority of BI data comes from 3rd-party sources, some pieces require ad-hoc sources; this is largely where Mongo comes into play. Views such as "Activity Log" need on-the-fly recordkeeping that's best entered manually, since fetching from a task manager API would paint an overwhelming or inaccurate picture of the month's activity.
MySQL was yesterday... MongoDB rocks. Check out Iridium for MongoDB with TypeScript support.
MongoDB is our preferred document store for pretty much all non-relational-heavy development. MongoDB scales beautifully and provides us with very polished APIs and drivers. Easy to use, flexible and scalable.
MongoDB is used as a datastore for accounts and users and underlying storage engine for databases.
Because you don't know what data you will be querying for. Perfectly suited for rapid prototyping. Altering tables in traditional, relational DBMS is painfully expensive and slow.
Performance is not great though. Ah who cares!
In conjunction with Mongoose.js as a Node.js interface for MongoDB, I utilize this service to store user/feedback information.
Hosts the unstructured data for each event that comes into the system. Also maps the event to each individual job or pipeline that gets spun off. All relevant persistent data is currently stored here.
Our data storage, for the most part is handled by a set of MongoDB storage clusters handling all the persistent information needed for our apps.
We use Mongo for persistent storage and retrieval. We also use a nifty Mongo ODM, Mongoose (available from NPM), to model our schemas and connect to Mongo on our private VLAN.
Big datasets not likely to need joins with another dataset go in Mongo to offload PostgreSQL.
MongoDB is only used as session storage since storing session in Firebase is not feasible.
Used MongoDB as the primary database. It holds trip data of NYC taxis for the year 2013. It is a huge dataset, and its primary feature is geo coordinates for pickup and drop-off locations. Also used MongoDB's map-reduce to process this large dataset for aggregation. The aggregated result was then used to show visualizations.
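An aggregation over a dataset like this is typically expressed today as an aggregation pipeline rather than map-reduce. A hedged sketch of what bucketing 2013 pickups by hour of day might look like — the collection and field names (`trips`, `pickup_datetime`) are assumptions, not the actual schema:

```python
from datetime import datetime

# Pipeline stages are plain documents; built here as pure Python data.
pipeline = [
    # Keep only trips from 2013.
    {"$match": {"pickup_datetime": {"$gte": datetime(2013, 1, 1),
                                    "$lt": datetime(2014, 1, 1)}}},
    # Count trips per hour of day.
    {"$group": {"_id": {"$hour": "$pickup_datetime"}, "trips": {"$sum": 1}}},
    # Busiest hours first.
    {"$sort": {"trips": -1}},
]

# With a live connection (pymongo), this would run as:
#   results = db.trips.aggregate(pipeline)
print(pipeline)
```

For the geo side, MongoDB's 2dsphere indexes let `$match` stages filter by pickup location as well, though that is beyond this sketch.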
Nearly all of our backend storage is on MongoDB. This has also worked out pretty well. It's enabled us to scale up faster/easier than if we had rolled our own solution on top of PostgreSQL (which we were using previously). There have been a few road bumps along the way, but the team at 10gen has been a big help with things.
It used to be MySQL, but once we moved to MongoDB everything just sped up dramatically, and the data became clean and easy to work with. Sophisticated aggregations let us run complicated analytics anytime, as easily as possible.
MongoDB fills our more traditional database needs. We knew we wanted Trello to be blisteringly fast. One of the coolest and most performance-obsessed teams we know is our next-door neighbor and sister company StackExchange. Talking to their dev lead David at lunch one day, I learned that even though they use SQL Server for data storage, they actually primarily store a lot of their data in a denormalized format for performance, and normalize only when they need to.
Relational data stores solve a lot of problems reasonably well. Postgres has some really handy data types, such as spatial, JSON, and a plethora of useful date and integer types. It has good availability of indexing solutions, and is well supported for both custom modifications and hosting options (I like Amazon RDS for Postgres). I use HoneySQL for Clojure as a composable AST that translates reliably to SQL. I typically use JDBC on Clojure, usually via org.clojure/java.jdbc.
Used in great detail at Cloudfind. Profiled and improved bulk SQL upsert performance 200-fold.
We use PostgreSQL as the merge between SQL and NoSQL. A lot of our data is unstructured JSON, or JSON that is currently in flux due to some MVP/iteration processes that are going on. PostgreSQL gives us the capability to do this.
At the moment, PostgreSQL on Amazon is only at 9.5, which is one minor version short of support for document fragment updates, something we are waiting for. However, that may be some ways away.
Other than that, we are using PostgreSQL as our main SQL store as a replacement for all the MSSQL databases that we have. Not only does it have great support through RDS (small ops team), but it also has some great ways for us to migrate off RDS to managed EC2 instances down the line if we need to.
We used Postgres at Talenthouse. Some of the JSON stuff in there is pretty awesome. It was quite fun using it with Scala's Slick.
We decided to go for Relational Database and PostgreSQL was the best fit (Search for custom fields etc).
Our core database.
Particularly useful features include: schemas for multi-tenancy and JSONB columns for schemaless data
Postgres is used as the primary data store for users, checks, alerts and so on, as well as some aggregated stats.
PostgreSQL is responsible for nearly all data storage, validation and integrity. We leverage constraints, functions and custom extensions to ensure we have only one source of truth for our data access rules and that those rules live as close to the data as possible. Call us crazy, but ORMs only lead to ruin and despair.
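The "rules live close to the data" idea above means the database itself rejects bad writes, no matter which application sent them. A minimal sketch using a CHECK constraint — SQLite here is only a stand-in (PostgreSQL adds far richer constraints, functions and extensions), and the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The validity rule lives in the schema, not in application code.
conn.execute("""
    CREATE TABLE vitals (
        patient_id INTEGER NOT NULL,
        heart_rate INTEGER NOT NULL CHECK (heart_rate BETWEEN 20 AND 300)
    )
""")

conn.execute("INSERT INTO vitals VALUES (1, 72)")      # passes the constraint
rejected = False
try:
    conn.execute("INSERT INTO vitals VALUES (1, -5)")  # the DB itself refuses this
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True
```

Because the rule is enforced at the single source of truth, an ORM, a reporting script and a manual psql session all hit the same wall.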
Installed the PostgreSQL server. Maintain database administration and database migrations.
Since Django projects use the ORM, there is no fear in switching databases. The reason we chose PostgreSQL in particular, though, is that it can store data in JSON format. Truthfully, we don't know yet how we will use that.
But we plan to cover this part in depth when building our RESTful API.
I use PostgreSQL due to its frequency of updates compared to MySQL, and based on preference. Nginx + PostgreSQL + Gunicorn is the best stack for running a Django based API.
PostgreSQL is our datastore of record. It is a flexible, powerful, relational datastore that fulfills so many needs.
All data is centrally kept in a PostgreSQL database. Part of the business logic is realized as PostgreSQL stored procedures. We make a lot of use of asynchronous notifications and other advanced PostgreSQL features.
PostgreSQL is our datastore, used for long-term storage and analytics, but not for active data such as messages en route.
The main storage facility. It is performant enough and supports NoSQL-style datatypes as well; it is also very developer friendly, with production-grade enterprise quality and security.
Primary relational database which holds all schedules, customer information, ticket purchases, and transactions.
PostgreSQL combines the best aspects of traditional SQL databases, such as reliability, consistent performance, transactions and querying power, with the flexibility of the schemaless NoSQL systems that are all the rage these days. Through the powerful JSON column types and indexes, you can now have your cake and eat it too! PostgreSQL may seem a bit arcane and old-fashioned at first, but the developers have clearly shown that they understand databases and storage trends better than almost anyone else. It definitely deserves to be part of everyone's toolbox; when you find yourself needing rock-solid performance, operational simplicity and reliability, reach for PostgreSQL.
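The "cake and eat it too" point is that you can query schemaless JSON payloads with ordinary SQL. A small illustrative sketch — SQLite's JSON1 functions stand in here; in PostgreSQL you would write `data->>'plan'` on a JSONB column and add a GIN index, and the table and field names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, data TEXT)")
# Each row carries a free-form JSON document; no fixed schema for its fields.
conn.execute("INSERT INTO users (data) VALUES (?)", ('{"name": "ada", "plan": "pro"}',))
conn.execute("INSERT INTO users (data) VALUES (?)", ('{"name": "bob", "plan": "free"}',))

# Plain SQL filtering on a field inside the JSON document.
rows = conn.execute(
    "SELECT json_extract(data, '$.name') FROM users "
    "WHERE json_extract(data, '$.plan') = 'pro'"
).fetchall()
print(rows)  # [('ada',)]
```

The relational machinery (transactions, joins, indexes) keeps working on top of the document data, which is exactly the hybrid the quote describes.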
We use PostgreSQL for all transactional data storage where data must remain consistent at all times (not eventually consistent like document databases).
General relational data for configuration and integration. Some reporting. Use as a better "mongo" for json document storage.
PostgreSQL is our data store for both core clinical data, internal status and log data, as well as application-specific schemas for user-generated data.
PostgreSQL is the "pragmatic" choice, and, unsurprisingly, is serving us well!
Being a Django project, healthchecks.io also works with MySQL.
5 years of PostgreSQL experience; 7 years of experience with MySQL and PostgreSQL combined.
When we need to keep data in a safe place, we turn to PostgreSQL. Not only is the software rock-solid, but new versions add amazing features like JSON support that give us the best of both worlds: SQL and NoSQL.
PostgreSQL is our main datastore, and we make judicious use of its fantastic features: JSON columns, recursive CTEs, streaming binary replication, triggers, and constraints, among others.
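Of the features listed above, recursive CTEs are the least familiar to many readers: they let one query walk a tree stored as parent pointers. A minimal sketch — SQLite is used as a stand-in, but the `WITH RECURSIVE` syntax is the same in PostgreSQL, and the comment-tree schema is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, parent_id INTEGER)")
# A tiny thread: 1 is the root; 2 and 3 reply to 1; 4 replies to 2.
conn.executemany("INSERT INTO comments VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 1), (4, 2)])

rows = conn.execute("""
    WITH RECURSIVE thread(id, depth) AS (
        SELECT id, 0 FROM comments WHERE parent_id IS NULL
        UNION ALL
        SELECT c.id, t.depth + 1
        FROM comments c JOIN thread t ON c.parent_id = t.id
    )
    SELECT id, depth FROM thread ORDER BY id
""").fetchall()
print(rows)  # [(1, 0), (2, 1), (3, 1), (4, 2)]
```

Without the CTE this would take one round-trip per tree level from application code; with it, the whole traversal happens inside the database.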
Whenever we need an RDBMS, we use PostgreSQL. It is reliable, stable, and open source.
In front of the database we run a connection pooler called pgbouncer. Connecting from the application servers to the database takes on the order of 1 ms.
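A pgbouncer setup like the one described might look roughly like this config sketch; the host, database name and pool sizes below are illustrative assumptions, not the actual deployment:

```ini
; Illustrative pgbouncer.ini fragment -- values are made up.
[databases]
appdb = host=10.0.0.5 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432          ; apps connect here instead of port 5432
pool_mode = transaction     ; return server connections after each transaction
max_client_conn = 500       ; many cheap client connections...
default_pool_size = 20      ; ...multiplexed onto few real backend connections
```

Because clients reuse already-open backend connections from the pool, connection setup from the application's point of view drops to the millisecond range mentioned above.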
PostgreSQL is our main data store for all relational data. It comes with a built-in key-value store and integrates well with other areas of our platform.