What is Avocado and what are its top alternatives?
Avocado is a test framework that provides a high-level interface for writing tests in Python, along with a command-line test runner and a set of utility libraries. It supports test development and execution in a simple, concise manner. However, Avocado may lack certain advanced features and integrations that are crucial for complex testing scenarios.
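For a feel of the API, here is a minimal Avocado test sketch; the file and class names are illustrative, and it assumes the framework is installed (`pip install avocado-framework`):

```python
# test_sum.py - a minimal Avocado test sketch (names are illustrative).
from avocado import Test


class SumTest(Test):
    """Avocado collects methods whose names start with 'test'."""

    def test_sum(self):
        # avocado.Test builds on unittest.TestCase, so the familiar
        # assertion helpers are available.
        self.assertEqual(sum([1, 2, 3]), 6)
```

Such a file is run with the framework's own runner, e.g. `avocado run test_sum.py`.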
- pytest: pytest is a mature, full-featured testing tool for Python that allows easy test creation and execution. It supports fixtures, plugins, and parameterized testing, offering a wide range of functionality. Pros include an extensive plugin ecosystem and excellent documentation, while cons may include a steeper learning curve for beginners. (A short sketch follows this list.)
- nose2: nose2 is a successor to the nose testing framework, providing features like test discovery and running. It is simple to use and integrates well with other testing tools. Pros include ease of use and compatibility with legacy code, while cons may include less active development compared to other tools.
- unittest: unittest is Python's built-in testing framework, offering basic testing capabilities for writing and running tests. It is widely used and well-documented, making it a standard choice for many Python developers. Pros include standard library inclusion and simplicity, while cons may include boilerplate code and verbosity.
- Behave: Behave is a behavior-driven development (BDD) framework for Python that allows writing tests in a human-readable format. It integrates well with tools like Selenium and offers test automation capabilities. Pros include BDD support and readability, while cons may include slower execution compared to traditional testing frameworks.
- Robot Framework: Robot Framework is a generic test automation framework that supports keyword-driven testing approaches. It allows writing tests in tabular format and supports a wide range of external libraries. Pros include high-level test abstraction and extensibility, while cons may include a separate setup for Python integration.
- Tox: Tox is a tool that automates test environment management, allowing easy creation and execution of test suites in multiple environments. It simplifies the process of testing code against different Python versions and dependencies. Pros include environment isolation and flexibility, while cons may include setup overhead for initial configuration.
- Hypothesis: Hypothesis is a property-based testing tool for Python that generates test cases automatically based on defined properties. It helps in finding edge cases and potential bugs that may not be covered by traditional testing. Pros include automated test case generation and broad input coverage, while cons may include a learning curve for property-based testing concepts. (The sketch after this list pairs it with pytest.)
- Mamba: Mamba is a testing framework inspired by RSpec and designed for behavior-driven development in Python. It provides a clean syntax for writing tests and integrates well with libraries like Mock. Pros include readability and expressiveness, while cons may include limited adoption compared to mainstream testing tools.
- PyTest-BDD: PyTest-BDD is a plugin for pytest that enables behavior-driven development using the Gherkin language syntax. It combines the benefits of pytest with BDD practices, allowing structured test creation. Pros include pytest integration and BDD support, while cons may include additional setup for Gherkin syntax usage.
- Testify: Testify is a testing framework that provides an expressive syntax for writing tests in Python. It offers features like fixtures, assertions, and test discovery, making test development efficient and readable. Pros include simplicity and readability, while cons may include limited updates and community support.
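As a point of reference for the pytest and Hypothesis entries above, here is a short sketch of both styles in one file; the function names and the squaring example are illustrative, and it assumes `pip install pytest hypothesis`:

```python
# test_square.py - illustrative pytest + Hypothesis sketch.
import pytest
from hypothesis import given, strategies as st


@pytest.mark.parametrize("value, expected", [(2, 4), (3, 9), (-4, 16)])
def test_square_examples(value, expected):
    # pytest collects plain functions; parametrize expands this into
    # one test case per tuple.
    assert value ** 2 == expected


@given(st.integers())
def test_square_is_never_negative(value):
    # Hypothesis generates the inputs itself, probing edge cases
    # (0, negatives, huge integers) a hand-written table might miss.
    assert value ** 2 >= 0
```

Both tests are collected and run by a single `pytest` invocation, which is part of why Hypothesis pairs so naturally with pytest.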
Top Alternatives to Avocado
- Postman
It is the only complete API development environment, used by nearly five million developers and more than 100,000 companies worldwide. ...
- Stack Overflow
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's built and run by you as part of the Stack Exchange network of Q&A sites. With your help, we're working together to build a library of detailed answers to every question about programming. ...
- Google Maps
Create rich applications and stunning visualisations of your data, leveraging the comprehensiveness, accuracy, and usability of Google Maps and a modern web platform that scales as you grow. ...
- Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack). ... (A small usage sketch follows this list.)
- GitHub Pages
Public webpages hosted directly from your GitHub repository. Just edit, push, and your changes are live. ...
- Amazon Route 53
Amazon Route 53 is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating human readable names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Route 53 effectively connects user requests to infrastructure running in Amazon Web Services (AWS) – such as an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Amazon Elastic Load Balancer, or an Amazon Simple Storage Service (Amazon S3) bucket – and can also be used to route users to infrastructure outside of AWS. ...
- OpenSSL
It is a robust, commercial-grade, and full-featured toolkit for the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. It is also a general-purpose cryptography library. ...
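To make the Elasticsearch description above concrete, here is a hedged sketch of its RESTful interface; it assumes a node listening on localhost:9200, `pip install requests`, and a made-up index name:

```python
# Index a document, then search it over Elasticsearch's REST API.
import requests

BASE = "http://localhost:9200"  # assumed local node

# Store a document; it becomes searchable in near real time.
requests.put(
    f"{BASE}/articles/_doc/1",
    json={"title": "Avocado alternatives", "body": "testing frameworks"},
)

# Force a refresh so the document is visible immediately (normally periodic).
requests.post(f"{BASE}/articles/_refresh")

# Full-text search over the index.
response = requests.get(
    f"{BASE}/articles/_search",
    json={"query": {"match": {"body": "testing"}}},
)
for hit in response.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["title"])
```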
Avocado alternatives & related posts
Pros of Postman
- Easy to use (490)
- Great tool (369)
- Makes developing REST APIs easy peasy (276)
- Easy setup, looks good (156)
- The best API workflow out there (144)
- It's the best (53)
- History feature (53)
- Adds real value to my workflow (44)
- Great interface that magically predicts your needs (43)
- The best in class app (35)
- Can save and share script (12)
- Fully featured without looking cluttered (10)
- Collections (8)
- Option to run scripts (8)
- Global/Environment Variables (8)
- Shareable Collections (7)
- Dead simple and useful. Excellent (7)
- Dark theme easy on the eyes (7)
- Awesome customer support (6)
- Great integration with newman (6)
- Documentation (5)
- Simple (5)
- The test script is useful (5)
- Saves responses (4)
- This has simplified my testing significantly (4)
- Makes testing APIs as easy as 1, 2, 3 (4)
- Easy as pie (4)
- API-network (3)
- I'd recommend it to everyone who works with APIs (3)
- Mocking API calls with predefined response (3)
- Now supports GraphQL (2)
- Postman Runner CI Integration (2)
- Easy to setup, test and provides test storage (2)
- Continuous integration using newman (2)
- Pre-request Script and Test attributes are invaluable (2)
- Runner (2)
- Graph (2)
- Useful tool (http://fixbit.com/) (1)
Cons of Postman
- Stores credentials in HTTP (10)
- Bloated features and UI (9)
- Cumbersome to switch authentication tokens (8)
- Poor GraphQL support (7)
- Expensive (5)
- Not free after 5 users (3)
- Can't prompt for per-request variables (3)
- Import Swagger (1)
- Support WebSocket (1)
- Import curl (1)
related Postman posts
We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. A public API is only as good as its #documentation. For the API reference doc we are using Postman.
Postman is an “API development environment”. You download the desktop app, and build API requests by URL and payload. Over time you can build up a set of requests and organize them into a “Postman Collection”. You can generalize a collection with “collection variables”. This allows you to parameterize things like `username`, `password`, and `workspace_name` so a user can fill in their own values before making an API call. This makes it possible to use Postman for one-off API tasks instead of writing code.
Then you can add Markdown content to the entire collection, a folder of related methods, and/or every API method to explain how the APIs work. You can publish a collection and easily share it with a URL.
This turns Postman from a personal #API utility to full-blown public interactive API documentation. The result is a great looking web page with all the API calls, docs and sample requests and responses in one place. Check out the results here.
Postman’s powers don’t end here. You can automate Postman with “test scripts” and have it periodically run a collection's scripts as “monitors”. We now have #QA around all the APIs in the public docs to make sure they are always correct.
Along the way we tried other techniques for documenting APIs like ReadMe.io or Swagger UI. These required a lot of effort to customize.
Writing and maintaining a Postman collection takes some work, but the resulting documentation site, interactivity and API testing tools are well worth it.
Our whole Node.js backend stack consists of the following tools:
- Lerna as a tool for multi package and multi repository management
- npm as package manager
- NestJS as Node.js framework
- TypeScript as programming language
- ExpressJS as web server
- Swagger UI for visualizing and interacting with the API’s resources
- Postman as a tool for API development
- TypeORM as object relational mapping layer
- JSON Web Token for access token management
The main reason we have chosen Node.js over PHP comes down to the following factors:
- Made for the web and widely in use: Node.js is a software platform for developing server-side network services. Well-known projects that rely on Node.js include the blogging software Ghost, the project management tool Trello and the operating system WebOS. Node.js requires the V8 JavaScript runtime, which was originally developed by Google for the Chrome browser. This makes for a very resource-efficient architecture, which qualifies Node.js especially for running a web server. Ryan Dahl, the developer of Node.js, released the first stable version on May 27, 2009. He developed Node.js out of dissatisfaction with the possibilities that JavaScript offered at the time. The basic functionality of Node.js has been exposed through JavaScript since the first version and can be extended with a large number of modules; the current package managers (npm or Yarn) for Node.js list more than 1,000,000 of them.
- Fast server-side solutions: Node.js adopts the JavaScript "event-loop" to create non-blocking I/O applications that conveniently serve simultaneous events. With the asynchronous processing available as standard in JavaScript/TypeScript, highly scalable server-side solutions can be realized. CPU and RAM use is kept efficient, and more simultaneous requests can be processed than with conventional multi-threaded servers. (A small illustration of the model follows this list.)
- A language along the entire stack: Widely used frameworks such as React or AngularJS or Vue.js, which we prefer, are written in JavaScript/TypeScript. If Node.js is now used on the server side, you can use all the advantages of a uniform script language throughout the entire application development. The same language in the back- and frontend simplifies the maintenance of the application and also the coordination within the development team.
- Flexibility: Node.js sets very few strict dependencies, rules and guidelines and thus grants a high degree of flexibility in application development. There are no strict conventions so that the appropriate architecture, design structures, modules and features can be freely selected for the development.
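The event-loop argument above is about Node.js, but the same non-blocking model can be sketched in Python's asyncio for illustration; the request names and delays are made-up stand-ins for real I/O:

```python
# Illustration of the event-loop model: three simulated requests
# complete concurrently on a single thread.
import asyncio


async def handle_request(name: str, delay: float) -> str:
    # While one "request" awaits I/O, the event loop serves the others.
    await asyncio.sleep(delay)
    return f"{name} done after {delay}s"


async def main() -> None:
    results = await asyncio.gather(
        handle_request("req-1", 0.3),
        handle_request("req-2", 0.2),
        handle_request("req-3", 0.1),
    )
    for line in results:
        print(line)


asyncio.run(main())
```

The whole run takes roughly the longest single delay rather than the sum of all three, which is the property the post credits for Node.js's scalability.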
Pros of Stack Overflow
- Scary smart community (257)
- Knows all (206)
- Voting system (142)
- Good questions (134)
- Good SEO (83)
- Addictive (22)
- Tight focus (14)
- Share and gain knowledge (10)
- Useful (7)
- Fast loading (3)
- Gamification (2)
- Knows everyone (1)
- Experts share experience and answer questions (1)
- Stack Overflow is to developers as Google is to net surfers (1)
- Questions answered quickly (1)
- No annoying ads (1)
- No spam (1)
- Fast community response (1)
- Good moderators (1)
- Quick answers from users (1)
- Good answers (1)
- User reputation ranking (1)
- Efficient answers (1)
- Leading developer community (1)
Cons of Stack Overflow
- Not welcoming to newbies (3)
- Unfair downvoting (3)
- Unfriendly moderators (3)
- No opinion-based questions (3)
- Mean users (3)
- Limited in the types of questions it can accept (2)
related Stack Overflow posts
Google Analytics is a great tool to analyze your traffic. To debug our software and ask questions, we love to use Postman and Stack Overflow. Google Drive helps our team to share documents. We're able to build our great products through the APIs by Google Maps, CloudFlare, Stripe, PayPal, Twilio, Let's Encrypt, and TensorFlow.
Pros of Google Maps
- Free (253)
- Address input through Maps API (136)
- Sharable directions (82)
- Google Earth (47)
- Unique (46)
- Custom maps designing (3)
Cons of Google Maps
- Google attributions and logo (4)
- Only map allowed alongside Google Place Autocomplete (1)
related Google Maps posts
Google Analytics is a great tool to analyze your traffic. To debug our software and ask questions, we love to use Postman and Stack Overflow. Google Drive helps our team to share documents. We're able to build our great products through the APIs by Google Maps, CloudFlare, Stripe, PayPal, Twilio, Let's Encrypt, and TensorFlow.
A huge component of our product relies on gathering public data about locations of interest. The Google Places API gives us that ability in the most efficient way. Since we are primarily going to be using Google data as a source of information for our MVP, we might as well start integrating the Google Places API in our system. We have worked with Google Maps in the past and we might take some inspiration from our previous projects onto this one.
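For context, a rough sketch of what such a Google Places API call looks like over its public Nearby Search endpoint; the coordinates, radius, and key are placeholders:

```python
# Nearby Search against the Google Places API (placeholders for
# location and API key; assumes `pip install requests`).
import requests

response = requests.get(
    "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
    params={
        "location": "37.7749,-122.4194",  # latitude,longitude
        "radius": 1500,                   # metres
        "type": "restaurant",
        "key": "YOUR_API_KEY",
    },
)
for place in response.json().get("results", []):
    print(place["name"], place.get("vicinity"))
```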
Pros of Elasticsearch
- Powerful API (328)
- Great search engine (315)
- Open source (231)
- RESTful (214)
- Near real-time search (200)
- Free (98)
- Search everything (85)
- Easy to get started (54)
- Analytics (45)
- Distributed (26)
- Fast search (6)
- More than a search engine (5)
- Great docs (4)
- Awesome, great tool (4)
- Highly available (3)
- Easy to scale (3)
- Potato (2)
- Document store (2)
- Great customer support (2)
- Intuitive API (2)
- NoSQL DB (2)
- Great piece of software (2)
- Reliable (2)
- Fast (2)
- Easy setup (2)
- Open (1)
- Easy to get hot data (1)
- GitHub (1)
- Elasticsearch (1)
- Actively developing (1)
- Responsive maintainers on GitHub (1)
- Ecosystem (1)
- Not stable (1)
- Scalability (1)
- Community (0)
Cons of Elasticsearch
- Resource hungry (7)
- Difficult to get started (6)
- Expensive (5)
- Hard to keep stable at large scale (4)
related Elasticsearch posts
We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans which required a lot of manual query tweaks.
We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).
And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding a lot of unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
I can't recommend it highly enough.
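To ground the two Postgres features the post highlights, here is a minimal sketch assuming a `messages` table with `user_id`, `read`, `subject`, and `body` columns (all names are illustrative) and `pip install psycopg2`:

```python
# Partial index + built-in full-text search, the two features praised above.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # assumed connection string
cur = conn.cursor()

# Partial index: indexes only the rows the hot query touches,
# e.g. each user's unread messages.
cur.execute("""
    CREATE INDEX IF NOT EXISTS idx_unread_messages
    ON messages (user_id)
    WHERE NOT read
""")

# Built-in full-text search - no external search engine needed.
cur.execute("""
    SELECT id, subject
    FROM messages
    WHERE to_tsvector('english', body) @@ plainto_tsquery('english', %s)
""", ("starred messages",))
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()
```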
Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, given that the current methodology consisted only of a readme.md and wishes of good luck (as it usually does ;)).
It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over: convert all the instructions/scripts into Ansible playbook(s), and only stop when a clean `vagrant up` or `vagrant reload` gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as `vagrant up`, but the meat of our product lies in Ansible, which will do most of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in open net, behind VPN - you name it.
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides the development environment's must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep it simple, but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I also constantly peek at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the hardware of Proxmox, much as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
Pros of GitHub Pages
- Free (290)
- Right out of GitHub (217)
- Quick to set up (185)
- Instant (108)
- Easy to learn (107)
- Great way of setting up your project's website (58)
- Widely used (47)
- Quick and easy (41)
- Great documentation (37)
- Super easy (4)
- Easy setup (3)
- Instant and fast Jekyll builds (2)
- Great customer support (2)
- Great integration (2)
Cons of GitHub Pages
- Not possible to perform HTTP redirects (4)
- Supports only Jekyll (3)
- Limited Jekyll plugins (3)
- Jekyll is bloated (1)
related GitHub Pages posts
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) as collaborative review and code management tool
- Git as revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (automatize development process)
- Prettier / TSLint / ESLint as code linter
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as facade server in production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
The main reason we have chosen Kubernetes over Docker Swarm comes down to the following factors:
- Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
- Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services) - see the sketch after this list.
- Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: All-in-one framework for distributed systems.
- Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
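As a small illustration of the pods/deployments/services model from the "Applications" point above, here is a sketch using the official Kubernetes Python client (`pip install kubernetes`); it assumes a reachable cluster configured in ~/.kube/config and the `default` namespace:

```python
# List the building blocks an application is deployed as:
# pods, deployments, and services.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

core = client.CoreV1Api()
apps = client.AppsV1Api()

for pod in core.list_namespaced_pod("default").items:
    print("pod:", pod.metadata.name, pod.status.phase)

for dep in apps.list_namespaced_deployment("default").items:
    print("deployment:", dep.metadata.name, dep.status.ready_replicas)

for svc in core.list_namespaced_service("default").items:
    print("service:", svc.metadata.name, svc.spec.cluster_ip)
```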
I've heard that I have the ability to write well, at times. When it flows, it flows. I decided to start blogging in 2013 on Blogger. I started a company and joined BizSpark with the Microsoft Azure allotment. I created a WordPress blog and did a migration at some point. A lot happened in the time after that migration, but I stopped coding and changed cities during tumultuous times that taught me many lessons concerning mental health and productivity. I eventually graduated from BizSpark and outgrew the credit allotment. That killed the WordPress blog.
I blogged about writing again on the existing Blogger blog but it didn't feel right. I looked at a few options where I wouldn't have to worry about hosting cost indefinitely and Jekyll stood out with GitHub Pages. The Importer was fairly straightforward for the existing blog posts.
Todo:
- Set up redirects for all posts on Blogger. The URI format is different, so a complete redirect wouldn't work. Although, there may be something in Jekyll that could manage the redirects. I did notice the old URLs were stored in the front matter. I'm working on a command-line Ruby gem for the current plan.
- I did find some of the lost WordPress posts on archive.org that I downloaded with the waybackmachinedownloader. I think I might write an importer for that.
- I still have a few Disqus comment threads to map
Pros of Amazon Route 53
- High availability (185)
- Simple (148)
- Backed by Amazon (103)
- Fast (76)
- Authoritative DNS servers are spread over different TLDs (54)
- One-stop solution for all our cloud needs (29)
- Easy setup and monitoring (26)
- Low latency (20)
- Flexible (17)
- Secure (15)
- API available (3)
- Dynamically set up new clients (1)
- Easily add client DNS entries (1)
Cons of Amazon Route 53
- Slow (2)
- Geo-based routing only works with AWS zones (2)
- Restrictive rate limit (1)
related Amazon Route 53 posts
I'm planning to create a web application and also a mobile application to provide a very good shopping experience to the end customers. In short, my application will aggregate product details from different sources, giving the user a clear picture of when and where to buy a product with the best quality and cost.
I have planned to develop this in many milestones for adding N number of features, and I have picked as my first part the core functionality (aggregating product details from different sources).
As per my work experience and knowledge, I have chosen the following stacks for this mission.
UI: I would like to develop this application using React, React Router and React Native since I'm a little bit familiar with them and, most importantly, they will help in developing both web and mobile apps. In addition, I'm gonna use JavaScript, jQuery, jQuery UI, jQuery Mobile, and Bootstrap wherever required.
Service: I have planned to use Java as the main business-layer language; as I have 7+ years of experience with it, I believe I can do better work in Java than in other languages. In addition, I'm thinking of using Node.js.
Database and ORM: I'm gonna pick MySQL as the DB and Hibernate as the ORM, since I have good knowledge of and work experience with this combination.
Search Engine: I need to deal with a large amount of product data and its detailed info to provide enough detail to the end user, while also focusing on performance. So I have decided to use Solr as the search engine for product search and suggestions. In addition, I'm thinking of replacing Solr with Elasticsearch once I have explored/reviewed Elasticsearch enough.
Host: As of now, my plan is to complete the application with decent features first and deploy it in a free hosting environment like Docker and Heroku, and then, once it is stable, use the AWS products Amazon S3, EC2, Amazon RDS and Amazon Route 53. I'm not sure about Microsoft Azure and what its specialty is compared to Heroku and Amazon EC2 Container Service. Anyhow, I will explore these once again and pick the best-suited one for my requirements once I reach that level.
Build and Repositories: I have decided to choose Apache Maven and Git as these are my favorites and also very popular for builds and repositories, respectively.
Additional Utilities :) - I would like to choose Codacy for code review as their Startup plan will be very helpful to this application. I'm already experienced with Google Checkstyle and SonarQube, yet I'm still looking into Codacy.
Happy Coding! Suggestions are welcome! :)
Thanks, Ganesa
In 2012 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up in order to address some growing concerns about its long-term viability as a platform.
A full application re-write is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and also to take the opportunity to improve the international architecture and refactor the database in order that it better matched the changes in core functionality.
PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend, and it was available as an AWS RDS offering, which reduced the management overhead of having to configure it ourselves. In order to reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through the use of Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform at a minimum, allowing us to focus on our core business.
Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and also its focus on Test-Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!
Unicorn was chosen for its support for continual deployment and its reputation as a reliable application server, and nginx for its reputation as a fast and stable reverse proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.
We tried to strike a balance between having control over management and configuration of our core application and the convenience of being able to leverage AWS hosted services for ancillary functions (Amazon SES, Amazon SQS, Amazon Route 53 - all hosted securely inside Amazon VPC, of course!).
Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production - potentially using Amazon EC2 Container Service.
related OpenSSL posts
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) as collaborative review and code management tool
- Git as revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (automatize development process)
- Prettier / TSLint / ESLint as code linter
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as facade server in production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
The main reason we have chosen Kubernetes over Docker Swarm comes down to the following factors:
- Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
- Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
- Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: All-in-one framework for distributed systems.
- Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.