What is Amazon Lightsail and what are its top alternatives?
Amazon Lightsail is AWS's simplified virtual private server (VPS) offering: it bundles compute, SSD storage, and data transfer into flat monthly plans so you can launch servers without configuring the broader AWS ecosystem.
Top Alternatives to Amazon Lightsail
- Amazon EC2
It is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. ...
- GoDaddy
GoDaddy makes registering domain names fast, simple, and affordable. It is a trusted domain registrar that empowers people with creative ideas to succeed online. ...
- Linode
Get a server running in minutes with your choice of Linux distro, resources, and node location. ...
- DigitalOcean
We take the complexities out of cloud hosting by offering blazing fast, on-demand SSD cloud servers, straightforward pricing, a simple API, and an easy-to-use control panel. ...
- Heroku
Heroku is a cloud application platform – a new way of building and deploying web apps. Heroku lets app developers spend 100% of their time on their application code, not managing servers, deployment, ongoing operations, or scaling. ...
- Beanstalk
A single process to commit code, review with the team, and deploy the final result to your customers. ...
- Microsoft Azure
Azure is an open and flexible cloud platform that enables you to quickly build, deploy and manage applications across a global network of Microsoft-managed datacenters. You can build applications using any language, tool or framework. And you can integrate your public cloud applications with your existing IT environment. ...
- Google Cloud Platform
It helps you build what's next with secure infrastructure, developer tools, APIs, data analytics and machine learning. It is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search and YouTube. ...
Amazon Lightsail alternatives & related posts
Pros of Amazon EC2
- Quick and reliable cloud servers (647)
- Scalability (515)
- Easy management (393)
- Low cost (277)
- Auto-scaling (270)
- Market leader (89)
- Backed by Amazon (80)
- Reliable (79)
- Free tier (67)
- Easy management, scalability (58)
- Flexible (13)
- Easy to start (10)
- Widely used (9)
- Web-scale (9)
- Elastic (9)
- Node.js API (7)
- Industry standard (5)
- Lots of configuration options (4)
- GPU instances (2)
- Extremely simple to use (1)
- Amazing for individuals (1)
- All the open-source CLI tools you could want (1)
- Simpler to understand and learn (1)
Cons of Amazon EC2
- UI could use a lot of work (13)
- High learning curve when compared to PaaS (6)
- Extremely poor CPU performance (3)
related Amazon EC2 posts
To meet employees' critical need for interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, such as supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown, and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances, which together have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ Pinterest employees) using Presto, who run about 400K queries on these clusters per month.
Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest; we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
#BigData #AWS #DataScience #DataEngineering
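As a rough illustration of the crash-detection idea described in the post above, here is a minimal sketch that pairs query-submitted events with query-finished events from a Kafka topic and flags long-unmatched submissions as likely cluster crashes. The topic, field names, and timeout are hypothetical placeholders, not Pinterest's actual implementation:

```python
# Hypothetical sketch: pair "submitted" and "finished" query events from Kafka
# and treat submissions that never finish as evidence of a Presto cluster crash.
import json
import time

from kafka import KafkaConsumer  # kafka-python

PENDING_TIMEOUT_S = 3600  # assumption: a query pending longer than an hour is "lost"

consumer = KafkaConsumer(
    "presto_query_events",               # hypothetical topic name
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda b: json.loads(b),
)

pending = {}  # query_id -> (cluster, submit_timestamp)

for msg in consumer:
    event = msg.value
    if event["state"] == "submitted":
        pending[event["query_id"]] = (event["cluster"], event["timestamp"])
    elif event["state"] == "finished":
        pending.pop(event["query_id"], None)

    # Submissions with no matching "finished" event within the timeout are
    # attributed to a cluster crash.
    now = time.time()
    for qid, (cluster, ts) in list(pending.items()):
        if now - ts > PENDING_TIMEOUT_S:
            print(f"possible crash on {cluster}: query {qid} never finished")
            del pending[qid]
```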
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for documentation, GettingStarted and HowTo guides) as our collaborative review and code management tool
- Git as the underlying revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (automating the development process)
- Prettier / TSLint / ESLint as code linters
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for Docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as a facade server in the production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
The main reason we have chosen Kubernetes over Docker Swarm comes down to the following points:
- Key features: easy and flexible installation, a clear dashboard, great scaling operations, monitoring as an integral part, great load-balancing concepts, and it monitors conditions and compensates automatically in the event of failure.
- Applications: an application can be deployed using a combination of pods, deployments, and services (or microservices), as shown in the sketch after this list.
- Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: it supports multiple options for logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: an all-in-one framework for distributed systems.
- Other benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), has one of the largest communities among container orchestration tools, and is an open-source, modular tool that works with any OS.
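For readers unfamiliar with the pods/deployments/services combination mentioned above, here is a minimal sketch using the official Kubernetes Python client; the image, resource names, and replica count are placeholders rather than anything from this particular stack:

```python
# Minimal sketch: deploy an app as a Deployment (which manages pods) plus a
# Service, using the official Kubernetes Python client. All names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads your local kubeconfig

container = client.V1Container(
    name="web",
    image="nginx:1.25",  # placeholder image
    ports=[client.V1ContainerPort(container_port=80)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    ),
)
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```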
Pros of GoDaddy
- Flexible payment methods for domains (8)
- .io support (3)
Cons of GoDaddy
- Constantly trying to upsell you (2)
- Not a great UI (1)
related GoDaddy posts
Pros of Linode
- Extremely reliable (100)
- Good value (70)
- Great customer support (59)
- Easy to configure (58)
- Great documentation (36)
- Servers across the world (24)
- Managed/hosted DNS service (18)
- Simple UI (15)
- Network and CPU usage graphs (11)
- IPv6 support (7)
- Multiple IP address support (6)
- SSH access (3)
- Good price, good customer service (3)
- IP address failover support (2)
- SSH root access (2)
- Great performance compared to EC2 or DO (1)
- It runs apps with speed (1)
- Best customizable VPS (1)
- Latest kernels (1)
- Cheapest (1)
- SSDs (1)
Cons of Linode
- No "floating IP" support (2)
related Linode posts
What is the data transfer out cost (Bandwidth cost) on Linode compared to Microsoft Azure?
DigitalOcean
Pros of DigitalOcean
- Great value for money (560)
- Simple dashboard (364)
- Good pricing (362)
- SSDs (300)
- Nice UI (250)
- Easy configuration (191)
- Great documentation (156)
- SSH access (138)
- Great community (135)
- Ubuntu (24)
- Docker (13)
- IPv6 support (12)
- Private networking (10)
- 99.99% uptime SLA (8)
- Simple API (7)
- Great tutorials (7)
- 55-second provisioning (6)
- One-click applications (5)
- Dokku (4)
- Node.js (4)
- LAMP (4)
- Debian (4)
- CoreOS (4)
- 1 Gb/sec servers (3)
- WordPress (3)
- LEMP (3)
- Simple control panel (3)
- MEAN (3)
- Ghost (3)
- Runs CoreOS (2)
- Quick and no-nonsense service (2)
- Django (2)
- Good tutorials (2)
- Speed (2)
- Ruby on Rails (2)
- GitLab (2)
- Hex-core machines with dedicated ECC RAM and RAID SSDs (2)
- CentOS (1)
- Spaces (1)
- KVM virtualization (1)
- Amazing hardware (1)
- Transfer globally (1)
- Fedora (1)
- FreeBSD (1)
- Drupal (1)
- FreeBSD AMP (1)
- Magento (1)
- ownCloud (1)
- Redmine (1)
- My go-to server provider (1)
- Ease and simplicity (1)
- Nice (1)
- Find it super fitting with my requirements (SSD, SSH) (1)
- Easy setup (1)
- Cheap (1)
- Static IP (1)
- It's the easiest to get started for small projects (1)
- Automatic backup (1)
- Great support (1)
- Quick and easy to set up (1)
- Servers on demand - literally (1)
- Reliability (1)
- Variety of services (0)
- Managed Kubernetes (0)
Cons of DigitalOcean
- Pricing (3)
- No live support chat (3)
related DigitalOcean posts
Hello, I'm currently writing an e-commerce website with Laravel and Laravel Nova (as an admin panel). I want to start deploying the app and have created a DigitalOcean account. After some research about the deployment process, I saw that the setup via DigitalOcean (using Droplets) isn't very easy for beginners. Now I'm not sure how to deploy my app. I am torn between Laravel Forge and DigitalOcean (App Platform or Droplets?). I've read that Heroku and Laravel Vapor are a bit expensive; that's why I haven't considered them yet. I'd be happy to read your opinions on that topic!
Hi, I'm a beginner at using MySQL. I deployed my CRUD app on Heroku using the ClearDB add-on. I didn't see this coming, but the auto-increment step of the primary key is set to 10 instead of 1, and I cannot find a way to change it. Now I'm considering switching and deploying the full app and MySQL to DigitalOcean. Any advice on that? Will I get the same issue? Thanks in advance!
Heroku
Pros of Heroku
- Easy deployment (705)
- Free for side projects (459)
- Huge time-saver (374)
- Simple scaling (348)
- Low DevOps skills required (261)
- Easy setup (190)
- Add-ons for almost everything (174)
- Beginner friendly (153)
- Better for startups (150)
- Low learning curve (133)
- Postgres hosting (48)
- Easy to add collaborators (41)
- Faster development (30)
- Awesome documentation (24)
- Simple rollback (19)
- Focus on product, not deployment (19)
- Natural companion for Rails development (15)
- Easy integration (15)
- Great customer support (12)
- GitHub integration (8)
- Painless & well documented (6)
- No-ops (6)
- I love that they make it free to launch a side project (4)
- Free (4)
- Great UI (3)
- Just works (3)
- PostgreSQL forking and following (2)
- MySQL extension (2)
- Security (1)
- Able to host things like a Discord bot (1)
Cons of Heroku
- Super expensive (26)
- Not a whole lot of flexibility (8)
- Storage (6)
- No usable MySQL option (6)
- Low performance on free tier (4)
- 24/7 support is $1,000 per month (1)
related Heroku posts
StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independently of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together, all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split our front-end application out into a separate repo in GitHub. The driving factor in this decision was mostly the limitations imposed by Heroku, specifically that processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.
Related to this, we needed a way to "deploy" our frontend changes to various server environments without building and releasing the entire Ruby application. We built a hybrid Amazon S3 / Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant. Our frontend team now moves independently of our backend team, our build and release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit
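The final rollout step described in the post above could look roughly like the following sketch; the bucket, bundle path, and Redis key are hypothetical placeholders, not StackShare's actual names or scripts:

```python
# Hypothetical sketch of the rollout step: upload the Webpack bundle to S3 and
# update a Redis key so the Rails app knows which bundle to serve.
import boto3
import redis

BUNDLE_FILE = "build/feed.abc123.js"   # placeholder: content-hashed Webpack bundle
BUCKET = "frontend-bundles"            # placeholder bucket name

s3 = boto3.client("s3")
s3.upload_file(BUNDLE_FILE, BUCKET, BUNDLE_FILE)

r = redis.Redis(host="localhost", port=6379)
r.set("frontend:current_bundle", BUNDLE_FILE)  # read by the Rails app at request time
```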
Pros of Beanstalk
- FTP deploy (14)
- Deployment (9)
- Easy to navigate (8)
- Code editing (4)
- HipChat integration (4)
- Integrations (4)
- Code review (3)
- HTML preview (2)
- Security (1)
- Blame tool (1)
- Cohesion (1)
related Beanstalk posts
Microsoft Azure
Pros of Microsoft Azure
- Scales well and quite easily (114)
- Can use .NET or open source tools (96)
- Startup friendly (81)
- Startup plans via BizSpark (73)
- High performance (62)
- Wide choice of services (38)
- Low cost (32)
- Lots of integrations (32)
- Reliability (31)
- Twilio & GitHub are directly accessible (19)
- RESTful API (13)
- Enterprise grade (10)
- PaaS (10)
- Startup support (10)
- DocumentDB (8)
- In-person support (7)
- Virtual Machines (6)
- Free for students (6)
- Service Bus (6)
- Redis Cache (5)
- It rocks (5)
- SQL Databases (4)
- CDN (4)
- Infrastructure services (4)
- Storage, backup, and recovery (4)
- Integration (3)
- Big Data (3)
- HDInsight (3)
- BizSpark 60k Azure benefit (3)
- Preview Portal (3)
- IaaS (3)
- Scheduler (3)
- Built on Node.js (3)
- Backup (2)
- Open cloud (2)
- Web (2)
- SaaS (2)
- Big Compute (2)
- Mobile (2)
- Media (2)
- Dev-Test (2)
- Storage (2)
- StorSimple (2)
- Machine Learning (2)
- Stream Analytics (2)
- Data Factory (2)
- Event Hubs (2)
- Virtual Network (2)
- ExpressRoute (2)
- Traffic Manager (2)
- Media Services (2)
- BizTalk Services (2)
- Site Recovery (2)
- Active Directory (2)
- Multi-Factor Authentication (2)
- Visual Studio Online (2)
- Application Insights (2)
- Automation (2)
- Operational Insights (2)
- Key Vault (2)
- Infrastructure near your customers (2)
- Easy deployment (2)
- Enterprise customer preferences (1)
- Security (1)
- Documentation (1)
- Best cloud platform (1)
- Easy and fast to start with (1)
- Remote debugging (1)
Cons of Microsoft Azure
- Confusing UI (6)
- Expensive Plesk on Azure (2)
related Microsoft Azure posts
We are hardcore Kubernetes users and contributors. We love the automation it provides. However, as our team grew and added more clusters and microservices, capacity and resource management became a massive pain for us. We started suffering from a lot of outages and unexpected behavior as we promoted our code from dev to production environments. Luckily, we were working on our AI-powered tools to understand different dependencies, predict usage, and calculate the right resources and configurations that should be applied to our infrastructure and microservices. We dogfooded our agent (http://github.com/magalixcorp/magalix-agent) and were able to stabilize as the #autopilot continuously recovered from any miscalculations we made or from unexpected changes in workloads. We are open sourcing our agent in a few days. Check it out and let us know what you think! We run workloads on Microsoft Azure, Google Kubernetes Engine, and Amazon EC2, and we're all about Go and Python!
CodeFactor being a #SAAS product, our goal was to run on cloud-native infrastructure from day one. We wanted to stay product-focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical, and most efficient for our product.
CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability, and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon's Amazon EC2, Microsoft's Microsoft Azure, and Google Compute Engine.
AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half the age of AWS, but they also satisfied our technical requirements.
It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive, and complete when it comes to #Machinelearning and #AI.
The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific: I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.
In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.
Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.
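To make the billing-granularity point in the post above concrete, here is a toy calculation with a made-up hourly rate; as the post describes it, per-minute billing (Azure/GCP at the time) charges only for the burst, while per-hour billing (AWS as described there) charges the full hour:

```python
# Toy illustration of per-minute vs per-hour billing for a short autoscaling burst.
hourly_rate = 0.40      # hypothetical $/hour for one extra instance
burst_minutes = 10      # instance only needed for 10 minutes

per_minute_cost = hourly_rate * burst_minutes / 60  # billed for 10 minutes
per_hour_cost = hourly_rate * 1                     # billed for a full hour

print(f"per-minute billing: ${per_minute_cost:.3f}, per-hour billing: ${per_hour_cost:.2f}")
```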
Pros of Google Cloud Platform
- Good app marketplace for beginners and advanced users (4)
- 1-year free trial credit of USD 300 (3)
- Live chat support (2)
- Cheap (2)
- Premium tier IP address (2)
related Google Cloud Platform posts
My days of using Firebase are over! I want to move to something scalable and possibly less cheap. In the past seven days I have done my research on what type of DB best fits my needs, and I have chosen to go with a non-relational DB: MongoDB. Although I understand it, I need help understanding how to set up the architecture. I have the client app (Flutter/Dart) that would make HTTP requests to the web server (Node/Express), and from there the web server would query data from MongoDB.
How should I go about hosting the web server and MongoDB; do they have to be hosted together (this is where a lot of my confusion is)? Based on the research I've done, it seems like the standard practice would be to host on a VM provided by services such as Amazon Web Services, Google Cloud Platform, Microsoft Azure, etc. If there are better ways, such as possibly self-hosting (more responsibility), should I? Anyway, I just want to confirm with a community (you guys) to make sure I do this right; all input is highly appreciated.
I am currently working on a long-term mobile app project. Current stack: frontend in Dart/Flutter; backend in Go with AWS resources (AWS Lambda, Amazon DynamoDB, etc.). Since there are only two developers and we have limited time and resources, we are looking for a BaaS like Firebase or AWS Amplify to handle auth and push notifications for now. We are prioritizing development speed so we can iterate quickly. The only problem is that AWS Amplify support for Flutter is in developer preview and has limited capabilities (we have tested it in our app). Firebase is the more mature option. It has great support for Flutter and has more than we need for auth, notifications, etc. My question is: if we choose Firebase, we would be stuck using two different cloud providers. Is this bad, or is this even a problem? I am willing to change anything architecture-wise on the backend, so any suggestions would be greatly appreciated as I am somewhat unfamiliar with Google Cloud Platform. Thank you.