What is Keycloak and what are its top alternatives?
Keycloak is an open-source identity and access management solution for modern applications and services. It provides single sign-on (SSO) through OpenID Connect and SAML 2.0, identity brokering with social and enterprise providers, and user federation against LDAP and Active Directory. Below are some of the tools most commonly considered alongside it.
Top Alternatives to Keycloak
- Auth0
A set of unified APIs and tools that instantly enables Single Sign On and user management to all your applications. ...
- Okta
Connect all your apps in days, not months, with instant access to thousands of pre-built integrations - even add apps to the network yourself. Integrations are easy to set up, constantly monitored, proactively repaired and handle authentication and provisioning. ...
- FreeIPA
FreeIPA is an integrated Identity and Authentication solution for Linux/UNIX networked environments. A FreeIPA server provides centralized authentication, authorization and account information by storing data about user, groups, hosts and other objects necessary to manage the security aspects of a network of computers. ...
- Dex
Dex is an identity service that uses OpenID Connect to drive authentication for other apps. It acts as a portal to other identity providers through "connectors" such as LDAP, SAML, GitHub, and more. ...
- Vault
Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. ...
- Postman
It is the only complete API development environment, used by nearly five million developers and more than 100,000 companies worldwide. ...
- Stack Overflow
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's built and run by you as part of the Stack Exchange network of Q&A sites. With your help, we're working together to build a library of detailed answers to every question about programming. ...
Keycloak alternatives & related posts
Auth0
Pros of Auth0
- JSON web token (70)
- Integration with 20+ social providers (31)
- It's a universal solution (20)
- SDKs (20)
- Amazing documentation (15)
- Heroku add-on (11)
- Enterprise support (8)
- Great sample repos (7)
- Extend platform with "rules" (7)
- Azure add-on (4)
- Easy integration, non-intrusive identity provider (3)
- Passwordless (3)
- It can integrate seamlessly with Firebase (2)
- Great documentation, samples, UX and Angular support (2)
- Polished (2)
- On-premise deployment (2)
- Will sign BAA for HIPAA compliance (1)
- MFA (1)
- Active Directory support (1)
- Spring Boot (1)
- SOC 2 (1)
- SAML support (1)
- Great support (1)
- OpenID Connect (OIDC) support (1)
Cons of Auth0
- Pricing too high (Developer Pro) (15)
- Poor support (7)
- Rapidly changing API (4)
- Status page does not reflect actual status (4)
related Auth0 posts
Hi Otensia! I'd definitely recommend using the skills you've already got and building with JavaScript is a smart way to go these days. Most platform services have JavaScript/Node SDKs or NPM packages, many serverless platforms support Node in case you need to write any backend logic, and JavaScript is incredibly popular - meaning it will be easy to hire for, should you ever need to.
My advice would be "don't reinvent the wheel". If you already have a skill set that will work well to solve the problem at hand, and you don't need it for any other projects, don't spend the time jumping into a new language. If you're looking for an excuse to learn something new, it would be better to invest that time in learning a new platform/tool that complements your knowledge of JavaScript. For this project, I might recommend using Netlify, Vercel, or Google Firebase to quickly and easily deploy your web app. If you need to add user authentication, there are great examples out there for Firebase Authentication, Auth0, or even Magic (a newcomer on the Auth scene, but very user friendly). All of these services work very well with a JavaScript-based application.
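To make that concrete, here is a minimal sketch of what wiring Auth0 into a JavaScript single-page app can look like with the @auth0/auth0-spa-js SDK. The tenant domain and client ID are placeholders, and option names vary slightly between SDK major versions, so treat this as an outline rather than a drop-in snippet:

```typescript
// Sketch only: redirect-based login with Auth0's SPA SDK.
// Domain and client ID are placeholders; full callback handling is omitted.
import { createAuth0Client } from '@auth0/auth0-spa-js';

async function main(): Promise<void> {
  const auth0 = await createAuth0Client({
    domain: 'YOUR_TENANT.auth0.com',   // placeholder tenant domain
    clientId: 'YOUR_CLIENT_ID',        // placeholder client ID
    authorizationParams: { redirect_uri: window.location.origin },
  });

  if (!(await auth0.isAuthenticated())) {
    // Sends the user to Auth0's hosted login page; on return you would
    // normally call auth0.handleRedirectCallback() as well.
    await auth0.loginWithRedirect();
    return;
  }

  // Access token for calling your own API once logged in.
  const token = await auth0.getTokenSilently();
  console.log('logged in, got a token of length', token.length);
}

main();
```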
I only know Java, so I'm thinking of building a web application with the stack below. I need some help on what alternatives I can choose; I'm open to replacing components, services, or infrastructure.
- Frontend: AngularJS, Bootstrap
- Web Framework: Spring Boot
- Database: Amazon DynamoDB
- Authentication: Auth0
- Deployment: Amazon EC2 Container Service
- Local Testing: Docker
- Marketing: Mailchimp (Separately Export from Auth0)
- Website Domain: GoDaddy
- Routing: Amazon Route 53
PS: I'm open to exploring the option of going fully AWS-native (AWS Lambda, AWS security services), but I'd have to learn it all.
Okta
Pros of Okta
- REST API (14)
- SAML (9)
- OpenID Connect (OIDC) (5)
- Protects B2E, B2B, B2C apps (5)
- User provisioning (5)
- Easy LDAP integration (5)
- Universal Directory (4)
- Tons of identity management features (4)
- SSO and MFA for cloud, on-prem, and custom apps (4)
- API Access Management - OAuth2 as a service (4)
- Easy Active Directory integration (3)
- SWA application integration (2)
- SOC 2 (1)
Cons of Okta
- Pricing is too high (5)
- Okta Verify (multi-factor authentication) (1)
related Okta posts
Hello,
I'm trying to implement a solution for this situation:
There is a restaurant application in which users can access a REST API by signing in with Google, Facebook, or GitHub. There is also the possibility of logging in using SPID authentication. At first I was considering Keycloak as the better solution for this case, but then I read about Okta and its pros.
I cannot work out, from reading and searching on Google, whether SPID authentication is supported by Okta. It looks like it should be, because SPID uses SAML, but I haven't found a clear answer.
I want some good advice on which one I should prefer (Keycloak or Okta). Since Keycloak is open source it will be our first preference, but will we face limitations with this approach? Our product is SaaS-based, and we currently support the following authentication methods: 1. at the DB level, 2. third-party IdP providers, 3. LDAP/AD...
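For readers leaning toward the open-source option: wiring a backend up to Keycloak usually reduces to standard OpenID Connect, regardless of which upstream provider is brokered (Google, Facebook, GitHub, or SPID via SAML). Below is a hedged sketch of verifying a Keycloak-issued access token in Node using the widely used jsonwebtoken and jwks-rsa packages; the host and realm name are placeholders, and older Keycloak versions prefix the path with /auth:

```typescript
// Verify a Keycloak-issued JWT against the realm's JWKS endpoint.
// Host and realm are placeholders; the path shown is Keycloak's standard one.
import jwt, { JwtHeader, SigningKeyCallback } from 'jsonwebtoken';
import jwksClient from 'jwks-rsa';

const client = jwksClient({
  jwksUri: 'https://keycloak.example.com/realms/myrealm/protocol/openid-connect/certs',
});

// Look up the signing key that matches the token header's key ID (kid).
function getKey(header: JwtHeader, callback: SigningKeyCallback): void {
  client.getSigningKey(header.kid, (err, key) => {
    if (err) return callback(err);
    callback(null, key?.getPublicKey());
  });
}

export function verifyToken(token: string): Promise<object> {
  return new Promise((resolve, reject) => {
    jwt.verify(token, getKey, { algorithms: ['RS256'] }, (err, decoded) => {
      err ? reject(err) : resolve(decoded as object);
    });
  });
}
```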
FreeIPA
Pros of FreeIPA
- Manages sudo command groups and sudo commands (2)
- Manages hosts and host groups (1)
related FreeIPA posts
related Dex posts
Vault
Pros of Vault
- Secure (17)
- Variety of secret backends (13)
- Very easy to set up and use (11)
- Dynamic secret generation (8)
- Audit log (5)
- Privileged access management (3)
- Leasing and renewal (3)
- Easy to integrate with (2)
- Open source (2)
- Consul integration (2)
- Handles secret sprawl (2)
- Variety of auth backends (2)
- Multi-cloud (1)
related Vault posts
Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, given that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).
It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no extra work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which does the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, on the open net, behind a VPN - you name it.
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust, and unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, and doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me: 1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides the development environment's must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing; because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automated identification and tagging of the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep it simple, but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I'm also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline that could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied in to using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
At uSwitch we use Vault to generate short-lived database credentials for our applications running in Kubernetes. We wanted to move from an environment where we had 100 DBs with a variety of static passwords being shared around to a place where each pod would have credentials that only last for its lifetime.
We chose Vault because:
- It has built-in Kubernetes support, so we could use service accounts to control which pods could access which database.
- It has a Terraform provider, so we could configure both our RDS instances and their Vault configuration in one place.
- It supports a variety of database providers, including MySQL/PostgreSQL (our most common DBs).
- It has a good API and Go SDK, so we could build tooling around it to simplify our development workflow.
- It has other features we would utilise, such as PKI.
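For readers unfamiliar with the dynamic-secrets flow described above, here is a rough sketch of requesting short-lived database credentials from Vault's database secrets engine over its HTTP API. The Vault address, the token handling, and the role name my-app are placeholder assumptions; in Kubernetes you would normally log in via the Kubernetes auth method rather than read a token from the environment:

```typescript
// Sketch: fetch short-lived DB credentials from Vault (Node 18+ global fetch).
// VAULT_ADDR, VAULT_TOKEN, and the role name are placeholders.
interface DbCreds {
  username: string;
  password: string;
  leaseSeconds: number;
}

async function getDynamicDbCreds(role: string): Promise<DbCreds> {
  const res = await fetch(`${process.env.VAULT_ADDR}/v1/database/creds/${role}`, {
    headers: { 'X-Vault-Token': process.env.VAULT_TOKEN ?? '' },
  });
  if (!res.ok) throw new Error(`Vault request failed: ${res.status}`);

  const body = await res.json();
  // Vault returns a generated username/password plus a lease duration;
  // when the lease expires, the credentials are revoked automatically.
  return {
    username: body.data.username,
    password: body.data.password,
    leaseSeconds: body.lease_duration,
  };
}

getDynamicDbCreds('my-app').then((c) =>
  console.log(`got creds for ${c.username}, valid ~${c.leaseSeconds}s`),
);
```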
Postman
Pros of Postman
- Easy to use (490)
- Great tool (369)
- Makes developing REST APIs easy peasy (276)
- Easy setup, looks good (156)
- The best API workflow out there (144)
- It's the best (53)
- History feature (53)
- Adds real value to my workflow (44)
- Great interface that magically predicts your needs (43)
- The best in class app (35)
- Can save and share scripts (12)
- Fully featured without looking cluttered (10)
- Collections (8)
- Option to run scripts (8)
- Global/environment variables (8)
- Shareable collections (7)
- Dead simple and useful. Excellent (7)
- Dark theme easy on the eyes (7)
- Awesome customer support (6)
- Great integration with Newman (6)
- Documentation (5)
- Simple (5)
- The test script is useful (5)
- Saves responses (4)
- This has simplified my testing significantly (4)
- Makes testing APIs as easy as 1, 2, 3 (4)
- Easy as pie (4)
- API network (3)
- I'd recommend it to everyone who works with APIs (3)
- Mocking API calls with predefined responses (3)
- Now supports GraphQL (2)
- Postman Runner CI integration (2)
- Easy to set up, test, and provides test storage (2)
- Continuous integration using Newman (2)
- Pre-request script and test attributes are invaluable (2)
- Runner (2)
- Graph (2)
- Useful tool (1)
Cons of Postman
- Stores credentials in HTTP (10)
- Bloated features and UI (9)
- Cumbersome to switch authentication tokens (8)
- Poor GraphQL support (7)
- Expensive (5)
- Not free after 5 users (3)
- Can't prompt for per-request variables (3)
- No Swagger import (1)
- No WebSocket support (1)
- No curl import (1)
related Postman posts
We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. A public API is only as good as its #documentation. For the API reference doc we are using Postman.
Postman is an “API development environment”. You download the desktop app, and build API requests by URL and payload. Over time you can build up a set of requests and organize them into a “Postman Collection”. You can generalize a collection with “collection variables”. This allows you to parameterize things like username, password and workspace_name so a user can fill their own values in before making an API call. This makes it possible to use Postman for one-off API tasks instead of writing code.
Then you can add Markdown content to the entire collection, a folder of related methods, and/or every API method to explain how the APIs work. You can publish a collection and easily share it with a URL.
This turns Postman from a personal #API utility to full-blown public interactive API documentation. The result is a great looking web page with all the API calls, docs and sample requests and responses in one place. Check out the results here.
Postman’s powers don’t end here. You can automate Postman with “test scripts” and have it periodically run a collection’s scripts as “monitors” (see the example below). We now have #QA around all the APIs in the public docs to make sure they are always correct.
Along the way we tried other techniques for documenting APIs like ReadMe.io or Swagger UI. These required a lot of effort to customize.
Writing and maintaining a Postman collection takes some work, but the resulting documentation site, interactivity and API testing tools are well worth it.
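To make the “test scripts” idea concrete, here is a small example of the kind of check you can attach to a request or run on a schedule as a monitor. Postman scripts are plain JavaScript executed in Postman's sandbox; the pm.* calls below are Postman's documented scripting API, while the response field being checked is made up for illustration:

```javascript
// Runs after the response arrives; results show up in Postman's
// "Test Results" tab and in monitor runs.
pm.test('Status code is 200', function () {
    pm.response.to.have.status(200);
});

pm.test('Response contains a workspace name', function () {
    const body = pm.response.json();
    pm.expect(body).to.have.property('workspace_name'); // illustrative field
});
```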
Our whole Node.js backend stack consists of the following tools:
- Lerna as a tool for multi package and multi repository management
- npm as package manager
- NestJS as Node.js framework
- TypeScript as programming language
- ExpressJS as web server
- Swagger UI for visualizing and interacting with the API’s resources
- Postman as a tool for API development
- TypeORM as object relational mapping layer
- JSON Web Token for access token management
The main reasons we chose Node.js over PHP are the following:
- Made for the web and widely in use: Node.js is a software platform for developing server-side network services. Well-known projects that rely on Node.js include the blogging software Ghost, the project management tool Trello, and the operating system webOS. Node.js requires the V8 JavaScript runtime, which was developed by Google for the Chrome browser. This makes for a very resource-efficient architecture, which qualifies Node.js especially for the operation of a web server. Ryan Dahl, the developer of Node.js, released the first stable version on May 27, 2009. He developed Node.js out of dissatisfaction with the possibilities that JavaScript offered at the time. The basic functionality of Node.js has been mapped with JavaScript since the first version and can be expanded with a large number of different modules. The current package managers (npm or Yarn) for Node.js know more than 1,000,000 of these modules.
- Fast server-side solutions: Node.js adopts the JavaScript event loop to create non-blocking I/O applications that conveniently serve simultaneous events (see the sketch after this list). With the asynchronous processing available as standard in JavaScript/TypeScript, highly scalable server-side solutions can be realized. Use of the CPU and RAM is maximized, and more simultaneous requests can be processed than with conventional multi-threaded servers.
- A language along the entire stack: Widely used frameworks such as React, AngularJS, or Vue.js (which we prefer) are written in JavaScript/TypeScript. If Node.js is used on the server side as well, you gain all the advantages of a uniform scripting language throughout the entire application. The same language in the back end and front end simplifies maintenance of the application as well as coordination within the development team.
- Flexibility: Node.js sets very few strict dependencies, rules, and guidelines, and thus grants a high degree of flexibility in application development. There are no strict conventions, so the appropriate architecture, design structures, modules, and features can be freely selected during development.
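As a small illustration of the non-blocking model mentioned above, here is a hedged sketch of a minimal Node HTTP server: a single thread serves overlapping requests because the simulated delay yields to the event loop instead of blocking it (the port and delay are arbitrary choices):

```typescript
// Minimal single-threaded HTTP server: overlapping requests interleave because
// the simulated 100 ms of "I/O" awaits a timer instead of blocking the thread.
import { createServer } from 'node:http';
import { setTimeout as sleep } from 'node:timers/promises';

const server = createServer(async (req, res) => {
  await sleep(100); // stand-in for non-blocking I/O (DB query, HTTP call, ...)
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ path: req.url, servedAt: Date.now() }));
});

// Many concurrent clients complete in roughly 100 ms overall rather than
// one after another, since no handler ever blocks the event loop.
server.listen(3000, () => console.log('listening on :3000'));
```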
Stack Overflow
Pros of Stack Overflow
- Scary smart community (257)
- Knows all (206)
- Voting system (142)
- Good questions (134)
- Good SEO (83)
- Addictive (22)
- Tight focus (14)
- Share and gain knowledge (10)
- Useful (7)
- Fast loading (3)
- Gamification (2)
- Knows everyone (1)
- Experts share experience and answer questions (1)
- Stack Overflow is to developers as Google is to net surfers (1)
- Questions answered quickly (1)
- No annoying ads (1)
- No spam (1)
- Fast community response (1)
- Good moderators (1)
- Quick answers from users (1)
- Good answers (1)
- User reputation ranking (1)
- Efficient answers (1)
- Leading developer community (1)
Cons of Stack Overflow
- Not welcoming to newbies (3)
- Unfair downvoting (3)
- Unfriendly moderators (3)
- No opinion-based questions (3)
- Mean users (3)
- Limited in the types of questions it can accept (2)
related Stack Overflow posts
Google Analytics is a great tool to analyze your traffic. To debug our software and ask questions, we love to use Postman and Stack Overflow. Google Drive helps our team share documents. We're able to build our great products using the APIs from Google Maps, CloudFlare, Stripe, PayPal, Twilio, Let's Encrypt, and TensorFlow.