
Azure Functions vs MySQL: What are the differences?

Azure Functions: Listen and react to events across your stack. Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform with the ability to run code triggered by events occurring in virtually any Azure or third-party service, as well as in on-premises systems.

MySQL: The world's most popular open source database. The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.

Azure Functions belongs to the "Serverless / Task Processing" category of the tech stack, while MySQL falls primarily under "Databases".

"Pay only when invoked" is the top reason why over 7 developers like Azure Functions, while over 777 developers mention "Sql" as the leading cause for choosing MySQL.

MySQL is an open source tool with 3.91K GitHub stars and 1.54K GitHub forks. MySQL's open source repository is on GitHub at https://github.com/mysql/mysql-server.

According to the StackShare community, MySQL has broader approval, being mentioned in 2965 company stacks and 2945 developer stacks, compared to Azure Functions, which is listed in 27 company stacks and 21 developer stacks.

Azure Functions, by contrast, has no public GitHub repository available.

What is Azure Functions?

Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or third-party service, as well as in on-premises systems.
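To make the event-driven, compute-on-demand model concrete, here is a minimal sketch of an HTTP-triggered function using the Azure Functions Python programming model; the route name and response are illustrative only.

```python
import azure.functions as func

app = func.FunctionApp()

# HTTP-triggered function: Azure runs this code only when a request arrives,
# and you pay only for the time it executes.
@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```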

What is MySQL?

The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
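As a small illustration of talking to a MySQL server from application code, here is a minimal Python sketch using the mysql-connector-python driver; the connection details and the users table are hypothetical.

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical connection details: substitute your own server and credentials.
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="appdb"
)
cur = conn.cursor()

# Parameterized query against an assumed `users` table.
cur.execute("SELECT id, email FROM users WHERE created_at > %s", ("2019-01-01",))
for user_id, email in cur.fetchall():
    print(user_id, email)

cur.close()
conn.close()
```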


    What are some alternatives to Azure Functions and MySQL?
    AWS Lambda
    AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.
    Serverless
Build applications composed of microservices that run in response to events, auto-scale for you, and only charge you when they run. This lowers the total cost of maintaining your apps, enabling you to build more logic, faster. The Framework uses new event-driven compute services, like AWS Lambda, Google Cloud Functions, and more.
    Cloud Functions for Firebase
    Cloud Functions for Firebase lets you create functions that are triggered by Firebase products, such as changes to data in the Realtime Database, uploads to Cloud Storage, new user sign ups via Authentication, and conversion events in Analytics.
    Google Cloud Functions
Construct applications from bite-sized business logic billed to the nearest 100 milliseconds, only while your code is running.
    Apex
    Apex is a small tool for deploying and managing AWS Lambda functions. With shims for languages not yet supported by Lambda, you can use Golang out of the box.
    Decisions about Azure Functions and MySQL
Jake Stein, CEO at Stitch · 15 upvotes · 57.2K views
Tools mentioned: PostgreSQL, MySQL, Clojure

    The majority of our Clojure microservices are simple web services that wrap a transactional database with CRUD operations and a little bit of business logic. We use both MySQL and PostgreSQL for transactional data persistence, having transitioned from the former to the latter for newer services to take advantage of the new features coming out of the Postgres community.

    Most of our Clojure best practices can be summed up by the phrase "keep it simple." We avoid more complex web frameworks in favor of using the Ring library to build web service routes, and we prefer sending SQL directly to the JDBC library rather than using a complicated ORM or SQL DSL.
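Stitch's services are written in Clojure against JDBC, but the "plain SQL handed to the driver, no ORM" idea is language-agnostic. A minimal sketch of the same style using Python's built-in DB-API; the orders table is hypothetical.

```python
import sqlite3

# Stitch does this in Clojure over JDBC; the same "plain SQL, no ORM" style
# in Python's DB-API looks like this. The orders table is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)", ("acme", 42.0))

# The query is just SQL handed to the driver, with no query builder or DSL in between.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer"
).fetchall()
print(rows)
```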

Gregory Koberger
Tools mentioned: Compose, MongoLab, MongoDB Atlas, PostgreSQL, MySQL, MongoDB

We went with MongoDB, almost by mistake. I had never used it before, but I knew I wanted the *EAN part of the MEAN stack, so why not go all in? I come from a background of SQL (first MySQL, then PostgreSQL), so I definitely abused Mongo at first... by trying to turn it into something more relational than it should be. But hey, data is supposed to be relational, so there wasn't really any way to get around that.

There's a lot I love about MongoDB, and a lot I hate. I still don't know if we made the right decision. We've been able to build much quicker, but we also have had some growing pains. We host our databases on MongoDB Atlas, and I can't say enough good things about it. We had tried MongoLab and Compose before it, and with MongoDB Atlas I finally feel like things are in a good place. I don't know if I'd use it for a one-off small project, but for a large product Atlas has given us a ton more control, stability and trust.
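To illustrate the "making Mongo relational" trap the author describes, here is a minimal pymongo sketch contrasting hand-rolled references with embedded documents; the collections and fields are hypothetical, not the author's actual data model.

```python
from pymongo import MongoClient  # pip install pymongo

db = MongoClient()["blog"]  # assumes a local MongoDB; names are hypothetical

# Relational-style usage: separate collections joined by hand in application code.
post_id = db.posts.insert_one({"title": "Hello"}).inserted_id
db.comments.insert_one({"post_id": post_id, "text": "First!"})
comments = list(db.comments.find({"post_id": post_id}))  # extra round trip

# Document-style usage: embed the comments, so one read returns the whole aggregate.
db.posts_embedded.insert_one({"title": "Hello", "comments": [{"text": "First!"}]})
post = db.posts_embedded.find_one({"title": "Hello"})
print(post["comments"])
```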

Antonio Sanchez, CEO at Kokoen GmbH · 11 upvotes · 85.4K views
Tools mentioned: ExpressJS, Node.js, JavaScript, MongoDB, Go, MySQL, Laravel, PHP

Back at the start of 2017, we decided to create a web-based tool for the SEO on-page analysis of our clients' websites. We had over 2,000 websites to analyze, so we had to perform thousands of requests to get every single page from those websites, process the information, and store the large amounts of data somewhere.

Very soon we realized that the initially chosen stack of scripting language and database, PHP with Laravel and MySQL, was not going to cope efficiently with such a task.

By that time, we were doing some experiments for other projects with a language we had recently gotten to know, Go, so we decided to give it a try and code the crawler in it. It was fantastic: we could process much more data with far less CPU power and in less time. By using the concurrency abilities the language offers, we could also make more HTTP requests in less time.
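The crawler itself was written in Go, but the fan-out pattern described here translates to any language with cheap concurrency. A rough sketch of the same idea using Python's asyncio and aiohttp, with placeholder URLs:

```python
import asyncio
import aiohttp  # pip install aiohttp

# The Kokoen crawler used Go's goroutines; this sketches the same
# "many requests in flight at once" pattern with asyncio. URLs are placeholders.
URLS = [f"https://example.com/page/{i}" for i in range(100)]

async def fetch(session: aiohttp.ClientSession, url: str) -> int:
    async with session.get(url) as resp:
        body = await resp.text()
        return len(body)

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        sizes = await asyncio.gather(*(fetch(session, url) for url in URLS))
        print(f"fetched {len(sizes)} pages, {sum(sizes)} bytes in total")

asyncio.run(main())
```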

Unfortunately, I have no comparison numbers to show for the performance differences between Go and PHP, since the difference was so clear from the beginning that we didn't feel the need to do further comparison tests or document it. We just switched fully to Go.

There was still a problem: despite the large amount of data we were generating, MySQL was performing very well, but as we added more and more features, and with those features more and more different types of data to save, it became a nightmare for the database architects to structure everything correctly in the database. So it was clear what we had to do next: switch to a NoSQL database. We switched to MongoDB, and it was also fantastic: we spent almost zero time thinking about how to structure the database, and performance also seemed to be better, but again, I have no comparison numbers to show due to lack of time.

We also decided to switch the website from PHP and Laravel to JavaScript with Node.js and ExpressJS, since working with the JSON data we were now saving in the database would be easier.

As of now, we don't only use the tool internally; we have also opened it up for everyone to use for free: https://tool-seo.com

Kestas Barzdaitis, Entrepreneur & Engineer at CodeFactor · 12 upvotes · 60.8K views
Tools mentioned: Google Cloud Functions, Azure Functions, AWS Lambda, Docker, Google Compute Engine, Microsoft Azure, Amazon EC2, CodeFactor.io, Kubernetes
Tags: #SAAS #IAAS #Containerization #Autoscale #Startup #Automation #Machinelearning #AI #Devops

    CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

    CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon's Amazon EC2 , Microsoft's Microsoft Azure, and Google Compute Engine.

AWS has been available since 2006 and has developed the most extensive variety of services and tools at massive scale. Azure and GCP are about half AWS's age, but they also satisfied our technical requirements.

It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive and complete when it comes to #Machinelearning and #AI.

The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, the Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific: I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.

    In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

    Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.

Tim Abbott, Founder at Zulip · 20 upvotes · 87.4K views
Tools mentioned: Elasticsearch, MySQL, PostgreSQL

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy, because Postgres just did the right thing automatically.

And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding a lot of unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
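As a concrete illustration of the Postgres features mentioned above, here is a minimal psycopg2 sketch showing a partial index and built-in full-text search; the tables are illustrative, not Zulip's actual schema.

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=app")  # assumes a local Postgres database
cur = conn.cursor()

# Illustrative tables, not Zulip's actual schema.
cur.execute("""
    CREATE TABLE IF NOT EXISTS message (
        id serial PRIMARY KEY,
        content text NOT NULL
    )
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS user_message (
        user_id bigint, message_id bigint, unread boolean NOT NULL DEFAULT true
    )
""")

# A partial index: only unread rows are indexed, so the hot "unread messages"
# lookup stays small even when the full table is huge.
cur.execute("""
    CREATE INDEX IF NOT EXISTS idx_unread
    ON user_message (user_id, message_id) WHERE unread
""")

# Built-in full-text search, no external search engine needed.
cur.execute("""
    SELECT id FROM message
    WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %s)
""", ("release announcement",))
print(cur.fetchall())
conn.commit()
```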

    I can't recommend it highly enough.

Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber · 5 upvotes · 82.3K views
Tools mentioned: Python, MySQL, PostgreSQL

    Our most popular (& controversial!) article to date on the Uber Engineering blog in 3+ yrs. Why we moved from PostgreSQL to MySQL. In essence, it was due to a variety of limitations of Postgres at the time. Fun fact -- earlier in Uber's history we'd actually moved from MySQL to Postgres before switching back for good, & though we published the article in Summer 2016 we haven't looked back since:

    The early architecture of Uber consisted of a monolithic backend application written in Python that used Postgres for data persistence. Since that time, the architecture of Uber has changed significantly, to a model of microservices and new data platforms. Specifically, in many of the cases where we previously used Postgres, we now use Schemaless, a novel database sharding layer built on top of MySQL (https://eng.uber.com/schemaless-part-one/). In this article, we’ll explore some of the drawbacks we found with Postgres and explain the decision to build Schemaless and other backend services on top of MySQL:

    https://eng.uber.com/mysql-migration/

Michal Nowak, Co-founder at Evojam · 7 upvotes · 62.4K views
Tools mentioned: Azure Functions, Firebase, AWS Lambda, Serverless

In a couple of recent projects we had an opportunity to try out the new serverless approach to building web applications. It wasn't necessarily a question of whether we should use any particular vendor, but rather whether we can consider serverless a viable option for building apps at all. Obviously our goal was also to get a feel for this technology and gain some hands-on experience.

We considered AWS Lambda, Firebase from Google, as well as Azure Functions. Eventually we went with AWS Lambda.

    PROS
    • No servers to manage (obviously!)
    • Limited fixed costs – you pay only for used time
    • Automated scaling and balancing
    • Automatic failover (or, at this level of abstraction, no failover problem at all)
    • Security easier to provide and audit
    • Low overhead at the start (given a certain level of knowledge)
    • Short time to market
    • Easy handover - deployment coupled with code
    • Perfect choice for lean startups with fast-paced iterations
    • Augmentation for the classic cloud, server(full) approach
    CONS
    • Little know-how and few best practices available on the market about structuring code and projects
    • Not suitable for complex business logic due to the risk of producing highly coupled code
    • Cost difficult to estimate (helpful tools: serverlesscalc.com)
    • Difficulty in migrating to other platforms (vendor lock-in ⚠️)
    • Few engineers with serverless experience on the job market
    • Steep learning curve for engineers without any cloud experience

More details are on our blog: https://evojam.com/blog/2018/12/5/should-you-go-serverless-meet-the-benefits-and-flaws-of-new-wave-of-cloud-solutions I hope it helps 🙌 and I'm curious about your experiences.

Khauth György, CTO at SalesAutopilot Kft. · 11 upvotes · 96K views
Tools mentioned: AWS CodePipeline, Jenkins, Docker, vuex, Vuetify, Vue.js, jQuery UI, Redis, MongoDB, MySQL, Amazon Route 53, Amazon CloudFront, Amazon SNS, Amazon CloudWatch, GitHub

I'm the CTO of a marketing automation SaaS. Because of the continuously increasing load, we moved to the AWS cloud. We are using more and more features of AWS: Amazon CloudWatch, Amazon SNS, Amazon CloudFront, Amazon Route 53 and so on.

Our main database is MySQL, but for the hundreds of GB of document data we use MongoDB more and more. We have also started to use Redis for caching and other time-sensitive operations.
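As an illustration of that cache use case, here is a minimal cache-aside sketch with the redis-py client; the key names, TTL and the MySQL-backed loader are hypothetical.

```python
import json
import redis  # pip install redis

r = redis.Redis()  # assumes a local Redis instance

def load_report_from_mysql(report_id: str) -> dict:
    # Placeholder for the real (slow) MySQL query.
    return {"id": report_id, "rows": 12345}

def get_report(report_id: str) -> dict:
    """Cache-aside: check Redis first, fall back to the database, then cache."""
    cached = r.get(f"report:{report_id}")
    if cached is not None:
        return json.loads(cached)
    report = load_report_from_mysql(report_id)
    r.setex(f"report:{report_id}", 300, json.dumps(report))  # 5-minute TTL
    return report
```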

On the front-end we use jQuery UI + Smarty, but we are now refactoring our app to use Vue.js with Vuetify. Because our app is relatively complex, we need to use vuex as well.

    On the development side we use GitHub as our main repo, Docker for local and server environment and Jenkins and AWS CodePipeline for Continuous Integration.

Julien DeFrance, Principal Software Engineer at Tophatter (decision from his time at SmartZip) · 16 upvotes · 374.5K views
Tools mentioned: Amazon DynamoDB, Ruby, Node.js, AWS Lambda, New Relic, Amazon Elasticsearch Service, Elasticsearch, Superset, Amazon Quicksight, Amazon Redshift, Zapier, Segment, Amazon CloudFront, Memcached, Amazon ElastiCache, Amazon RDS for Aurora, MySQL, Amazon RDS, Amazon S3, Docker, Capistrano, AWS Elastic Beanstalk, Rails API, Rails, Algolia

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who is most likely to list or sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides data APIs to enterprise customers.

    I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.

For the SaaS product, we kept on working with Rails, as this was what my team had the most knowledge in. We have, however, broken up the monolith and decoupled the front-end application from the backend thanks to the use of Rails API, so we would get independently scalable micro-services from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. This was combined with Docker, so our application would run within its own container, independent of the underlying host configuration.

    Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially. Ultimately migrated to Amazon RDS for Aurora / MySQL when it got released. Once again, here you need a managed service your cloud provider handles for you.

    Future improvements / technology decisions included:

    • Caching: Amazon ElastiCache / Memcached
    • CDN: Amazon CloudFront
    • Systems integration: Segment / Zapier
    • Data warehousing: Amazon Redshift
    • BI: Amazon Quicksight / Superset
    • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
    • Monitoring: New Relic

    As our usage grows, patterns changed, and/or our business needs evolved, my role as Engineering Manager then Director of Engineering was also to ensure my team kept on learning and innovating, while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.

Ajit Parthan, CTO at Shaw Academy · 1 upvote · 5K views
Tools mentioned: MongoDB, MySQL
Tag: #NosqlDatabaseAsAService

Initial storage was traditional MySQL. The pace of change during startup mode made it very difficult to maintain a clean and consistent schema, and large portions ended up as unstructured data stuffed into CLOBs and BLOBs.

    Moving to MongoDB definitely made this part much easier.

    Accessing data for analysis is a little bit of a challenge - especially for people coming from the world of SQL Workbench. But with tools like Exploratory this is becoming less of a problem.


Alex A, Founder at PRIZ Guru · 6 upvotes · 8.4K views
Tools mentioned: PostgreSQL, MySQL

One of our battles at the very beginning of the road was choosing the right database. In fact, our first prototype was built on MySQL, and back then nothing else was even under consideration (don't ask me why). At some point I was working on a project running on PostgreSQL, and only then did I understand its full power. We have over a billion records in our production instance, and we are able to optimize it to run fast and reliably. Well, now my default DB is PostgreSQL :)

Tor Hagemann, Socotra · 2 upvotes · 2.2K views
Tools mentioned: Amazon DynamoDB, PostgreSQL, MySQL

Much of our data model is relational, which makes MySQL or PostgreSQL (and family) a good fit for the APIs we need to build in order to meet the needs of our customers.

    Sometimes the flexibility of a NoSQL store like Amazon DynamoDB is very useful, but the lack of consistency really impacts usability and performance long-term, compared with viable alternatives. At our current scale, we've seen huge benefits from moving some of our tables out of Dynamo and doing more in SQL.

There will always be use cases for NoSQL and key-value stores, but if your model is well understood in your business/industry, relational databases are the way to go after finding product-market fit. Always understand the trade-offs (and a few intimate details) of any data store before you add it to your company's stack!

Joseph Irving, DevOps Engineer at uSwitch · 8 upvotes · 7.3K views
Tools mentioned: Go, PostgreSQL, MySQL, Kubernetes, Vault

At uSwitch we use Vault to generate short-lived database credentials for our applications running in Kubernetes. We wanted to move from an environment where we had 100 databases with a variety of static passwords being shared around, to one where each pod would have credentials that last only for its lifetime.

We chose Vault because:

    • It had built-in Kubernetes support, so we could use service accounts to control which pods could access which database.

    • A Terraform provider, so that we could configure both our RDS instances and their Vault configuration in one place.

    • A variety of database providers, including MySQL/PostgreSQL (our most common databases); see the sketch after this list.

    • A good API and Go SDK, so that we could build tooling around it to simplify our development workflow.

    • It had other features we would utilise, such as PKI.
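As referenced above, here is a minimal sketch of requesting short-lived database credentials from Vault's database secrets engine using the hvac Python client; the Vault address, token auth and role name are illustrative (uSwitch authenticates pods via the Kubernetes auth method instead).

```python
import hvac  # pip install hvac

# Illustrative only: token auth for brevity; uSwitch uses the Kubernetes auth
# method instead. Address, token and role name are placeholders.
client = hvac.Client(url="http://127.0.0.1:8200", token="dev-token")

# Ask the database secrets engine for credentials tied to a configured role.
# Vault creates a database user on the fly and revokes it when the lease
# expires, so no static password is ever shared around.
creds = client.secrets.database.generate_credentials(name="app-readonly")
print(creds["data"]["username"], creds["lease_duration"])
```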

Tim Nolet, Founder, Engineer & Dishwasher at Checkly · 5 upvotes · 20.3K views
Tools mentioned: Node.js, Google Cloud Functions, Azure Functions, Amazon CloudWatch, Serverless, AWS Lambda

    In the last year or so, I moved all Checkly monitoring workloads to AWS Lambda. Here are some stats:

    • We run three core functions in all AWS regions. They handle API checks, browser checks and setup / teardown scripts. Check our docs to find out what that means.
    • All functions are hooked up to SNS topics but can also be triggered directly through AWS SDK calls (see the sketch after this list).
    • The busiest function is a plumbing function that forwards data to our database. It is invoked anywhere between 7,000 and 10,000 times per hour with an average duration of about 179 ms.
    • We run separate dev and test versions of each function in each region.
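As noted in the list above, the functions can also be invoked directly through the AWS SDK; a minimal boto3 sketch follows (the function name and payload are placeholders, not Checkly's real ones).

```python
import json
import boto3  # pip install boto3

lam = boto3.client("lambda")

# Fire-and-forget invocation ("Event"), mirroring how an SNS message would
# trigger the function. The function name and payload are placeholders.
lam.invoke(
    FunctionName="api-check-runner",
    InvocationType="Event",
    Payload=json.dumps({"checkId": "abc123", "region": "eu-west-1"}).encode(),
)
```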

    Moving all this to AWS Lambda took some work and considerations. The blog post linked below goes into the following topics:

    • Why Lambda is an almost perfect match for SaaS. Especially when you're small.
    • Why I don't use a "big" framework around it.
    • Why distributed background jobs triggered by queues are Lambda's raison d'être.
    • Why monitoring & logging is still an issue.

    https://blog.checklyhq.com/how-i-made-aws-lambda-work-for-my-saas/

Tools mentioned: MongoDB, MySQL, .NET Core, C#

Hi! I need to choose a full stack of tools for a web drop-shipping site (without the payment process) for a family startup project. It will feed from several web services (JSON), and I'm expecting 4,200 articles tops. It's for web use only and for a few clients at the beginning.

I'm considering C# with .NET Core 3.0, as it is the one language I'm starting to learn. For the database I haven't made up my mind yet, but it could be MySQL or MongoDB; any advice is welcome, as I'm getting back to programming after a year away from this awesome world. Thanks!

    Reviews of Azure Functions and MySQL
Review of Azure Functions:

    Poor developer experience

    How developers use Azure Functions and MySQL
Rajeshkumar T uses MySQL
    • We used the MySQL database to build an online food ordering system.
    • It best supports normalization and all the joins we need (restaurant details and ordering, customer management, food menu, order transactions and food delivery).
    • Best for performance and structured data.
    • It helps store the instant updates received from the food delivery app (for example, updating the driver's GPS location in real time).
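A minimal sketch of the kind of join such a normalized schema supports, written against Python's built-in sqlite3 so it runs standalone; the tables and data are hypothetical, not the reviewer's actual system.

```python
import sqlite3

# Hypothetical, heavily simplified food-ordering schema, just to show the
# kind of join a normalized design like the one described relies on.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE restaurants (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders      (id INTEGER PRIMARY KEY, customer_id INT,
                              restaurant_id INT, total REAL);
    INSERT INTO customers   VALUES (1, 'Asha');
    INSERT INTO restaurants VALUES (1, 'Spice Villa');
    INSERT INTO orders      VALUES (1, 1, 1, 18.50);
""")

# One order row joins out to its customer and restaurant.
rows = db.execute("""
    SELECT o.id, c.name, r.name, o.total
    FROM orders o
    JOIN customers c   ON c.id = o.customer_id
    JOIN restaurants r ON r.id = o.restaurant_id
""").fetchall()
print(rows)
```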
Srinivas Adireddi uses MySQL

1. It's very popular; I heard about it in database class.
2. The most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime.
3. MySQL is an open-source relational database management system. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language.

ShadowICT uses MySQL

    We use MySQL and variants thereof to store the data for our projects such as the community. MySQL being a well established product means that support is available whenever it is required along with an extensive list of support articles all over the web for diagnosing issues. Variants are also used where needed when, for example, better performance is needed.

shridhardalavi uses MySQL

    MySQL is a freely available open source Relational Database Management System (RDBMS) that uses Structured Query Language (SQL). SQL is the most popular language for adding, accessing and managing content in a database. It is most noted for its quick processing, proven reliability, ease and flexibility of use.

John Galbraith uses MySQL

I am not using this DB for blog posts or data stored on the site. I am using it to track the IP addresses and fully qualified domain names of attacker machines that either posted spam on my website, ping flooded me, or had more than a certain number of failed SSH attempts.

Yonas B. uses Azure Functions

I used Azure Functions as part of an integration service when creating a bulk insert module in Azure.

How much does Azure Functions cost? Pricing unavailable.
How much does MySQL cost? Pricing unavailable.