Cloudinary
Needs advice on CloudFlare and Cloudinary

We are currently on Cloudinary but are looking at alternatives. We've recently upgraded our e-store's functionality to stay competitive in the e-commerce space.

Our current key focus is to increase sales volume by driving more traffic and conversions to our e-store, but the skyrocketing cost of the Cloudinary monthly subscription is holding us back. We needed to take a moment to review the scalability of our current setup.

We are even hesitant to run online advertising campaigns right now because we are worried that:

  • The e-store will take too long to load
  • Cloudinary costs will skyrocket when more visitors hit our pages

Summary overview (what we want to achieve):

  • Keep the cost sustainable
  • Scalability - manage and control costs
  • Specifications that cannot be compromised - high-res images, acceptable load times, etc.

What is the baseline, and how can we measure it accurately? (e.g. a new user in an incognito window, not fetching from local browser caches)

Findings:

  • Assets: 2500 images on average per month
  • Image size: 2500x3500 (original), compressed to 900x1260
  • Bandwidth: May-Oct traffic totalled 100k visits, roughly 20k per month
  • Location: 80% of the traffic is local traffic
  • Does internal traffic affect CDN usage? Our internal team has been accessing and updating the live site a lot.
  • Measurements: a scientific way to measure the differences between the offered solutions (TTFB, FCP); see the sketch after this list
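
As a rough starting point for those measurements, here is a minimal sketch that reports TTFB and FCP from real visitors using the open-source web-vitals library; the /metrics endpoint and the field names sent are assumptions for illustration, not part of any existing setup.

```typescript
// Collect TTFB and FCP from real visitors and ship them to an analytics endpoint.
// Assumes the `web-vitals` npm package; the /metrics endpoint is a placeholder.
import { onTTFB, onFCP, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,   // "TTFB" or "FCP"
    value: metric.value, // milliseconds
    id: metric.id,       // unique per page load
    page: location.pathname,
  });
  // sendBeacon survives page unloads; fall back to fetch if it is unavailable.
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/metrics', body);
  } else {
    fetch('/metrics', { method: 'POST', body, keepalive: true });
  }
}

onTTFB(report); // time to first byte
onFCP(report);  // first contentful paint
```

Lab checks in an incognito window give a clean cache-free baseline, but field numbers like these show what real (mostly local) visitors actually experience.
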
Replies (3)

I would highly recommend using your cloud provider's CDN, whether that's AWS, GCP, or Azure, on top of your object store (S3, GCS, or Blob Storage). It will greatly reduce the cost while maintaining speed and performance. For image compression, https://github.com/imazen/imageflow is amazing; plus it's written in Rust, which really helps with safety and speed.
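
To make that concrete, here is a minimal sketch of the object-store-plus-CDN idea: downscale an original 2500x3500 upload to the 900x1260 delivery size and push it to S3, with CloudFront (or the GCP/Azure equivalent) caching it at the edge. It uses sharp purely for illustration in place of imageflow, and the bucket name, key scheme, and CDN domain are assumptions.

```typescript
// Resize an original upload and store the delivery variant in S3 behind a CDN.
// `sharp` stands in for imageflow here; bucket, key, and domain are placeholders.
import sharp from 'sharp';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

export async function storeDeliveryVariant(original: Buffer, id: string): Promise<string> {
  // Downscale 2500x3500 originals to fit within 900x1260, re-encoding as JPEG.
  const resized = await sharp(original)
    .resize(900, 1260, { fit: 'inside', withoutEnlargement: true })
    .jpeg({ quality: 80, mozjpeg: true })
    .toBuffer();

  const key = `images/${id}-900x1260.jpg`;
  await s3.send(new PutObjectCommand({
    Bucket: 'my-store-assets', // placeholder bucket
    Key: key,
    Body: resized,
    ContentType: 'image/jpeg',
    CacheControl: 'public, max-age=31536000, immutable', // let the CDN cache aggressively
  }));

  // The CDN (e.g. CloudFront) is configured with the bucket as its origin,
  // so the public URL is the distribution domain plus the key.
  return `https://cdn.example.com/${key}`;
}
```

The long Cache-Control header is what keeps repeat traffic on the CDN edge instead of the origin, which is where most of the cost saving comes from.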

Founder at Shivy

Hello, we have been in a similar situation to yours. We serve around 1TB of images for one of our clients, and the price skyrockets way too quickly if we use any cloud provider's services. If, and only if, you have the time or DevOps people to spare for the research, I'd recommend setting up your own CDN service on one or more cloud providers. In my experience, going with a custom-built solution is cheaper. We are using https://github.com/ilhaan/kubeCDN, which is a CDN setup on a Kubernetes cluster.

For image compression, we have our in-house library to compress images without losing much resolution, making it almost impossible for a regular human to detect the changes: [GitHub link](https://github.com/Comet-App/CImage).

Testing and benchmarking are always messy; in our case we can't scale it with existing tools, so we simulate an actual 10k requests against the client's platform (a sketch of that is below). For individual scores, we prefer [GTmetrix](https://gtmetrix.com/) and [loader.io](https://loader.io/). We are also working on a custom multi-region webpage/API benchmarking platform, which I'll share with you if you're interested.
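
For the request-simulation part, here is a minimal sketch using autocannon, a Node.js load-testing tool chosen purely for illustration; the target URL, connection count, and request volume are assumptions.

```typescript
// Fire a fixed number of requests at the storefront and report latency numbers.
// autocannon is used for illustration; the URL and volumes are placeholders.
import autocannon from 'autocannon';

async function run(): Promise<void> {
  const result = await autocannon({
    url: 'https://shop.example.com/', // placeholder target
    connections: 100,                 // concurrent connections
    amount: 10_000,                   // total requests, matching the 10k simulation
  });

  // Latencies are in milliseconds; p99 is a useful "worst realistic case" figure.
  console.log('avg latency (ms):', result.latency.average);
  console.log('p99 latency (ms):', result.latency.p99);
  console.log('errors:', result.errors, 'timeouts:', result.timeouts);
}

run().catch(console.error);
```

Running the same script against each candidate setup, ideally from the region where most of the traffic originates, gives directly comparable numbers.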

Shared insights

Overview: To put it simply, we plan to use the MERN stack to build our web application. MongoDB will be used as our primary database. We will use ExpressJS alongside Node.js to set up our API endpoints. Additionally, we plan to use React to build our SPA on the client side and use Redis on the server side as our primary caching solution. Initially, while working on the project, we plan to deploy both our server and client on Heroku. However, Heroku is very limited, and we will need the benefits of an Infrastructure as a Service, so we will later use Amazon EC2 to deploy the final version of the application.
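
A minimal sketch of how the Express, Redis, and MongoDB pieces might fit together as described: an API endpoint that checks the Redis cache before querying MongoDB through Mongoose. The route, model, cache key, and TTL are illustrative assumptions, not part of the actual project.

```typescript
// Express endpoint with a Redis read-through cache in front of MongoDB.
// Route, model fields, cache key, and TTL are illustrative placeholders.
import express from 'express';
import mongoose from 'mongoose';
import { createClient } from 'redis';

const app = express();
const redis = createClient({ url: process.env.REDIS_URL });

const Product = mongoose.model(
  'Product',
  new mongoose.Schema({ name: String, price: Number })
);

app.get('/api/products/:id', async (req, res) => {
  const cacheKey = `product:${req.params.id}`;

  // 1. Serve from Redis if this product was fetched recently.
  const cached = await redis.get(cacheKey);
  if (cached) return res.json(JSON.parse(cached));

  // 2. Otherwise hit MongoDB and cache the result for an hour.
  const product = await Product.findById(req.params.id).lean();
  if (!product) return res.status(404).json({ error: 'Not found' });

  await redis.set(cacheKey, JSON.stringify(product), { EX: 3600 });
  return res.json(product);
});

async function start(): Promise<void> {
  await mongoose.connect(process.env.MONGO_URL ?? 'mongodb://localhost/shop');
  await redis.connect();
  app.listen(3000, () => console.log('API listening on :3000'));
}

start().catch(console.error);
```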

Server side: nodemon will allow us to automatically restart a running instance of our Node app when file changes take place. We decided to use MongoDB because it is a non-relational database that uses a document data model. This allows a lot of flexibility compared to an RDBMS, which requires a very structured data model that does not change much. Another strength of MongoDB is its ease of scalability. We will use Mongoose alongside MongoDB to model our application data. Additionally, we will host our MongoDB cluster remotely on MongoDB Atlas. Bcrypt will be used to hash user passwords before they are stored in the DB; this avoids the risks of storing plain-text passwords. Moreover, we will use Cloudinary to store images uploaded by the user. We will also use the Twilio SendGrid API to enable automated emails sent by our application. To protect private API endpoints, we will use JSON Web Token and Passport. Also, PayPal will be used as a payment gateway to accept payments from users.
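
A minimal sketch of the password hashing and token flow described above, using bcrypt and jsonwebtoken with a Mongoose user model; the field names, salt rounds, secret, and expiry are illustrative assumptions.

```typescript
// Hash passwords with bcrypt before saving, and issue a JWT on successful login.
// Model fields, salt rounds, secret, and expiry are illustrative placeholders.
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';
import mongoose from 'mongoose';

const User = mongoose.model(
  'User',
  new mongoose.Schema({
    email: { type: String, required: true, unique: true },
    passwordHash: { type: String, required: true },
  })
);

const JWT_SECRET = process.env.JWT_SECRET ?? 'dev-only-secret';

export async function register(email: string, password: string) {
  // bcrypt embeds the salt in the hash, so only the hash needs to be stored.
  const passwordHash = await bcrypt.hash(password, 10);
  return User.create({ email, passwordHash });
}

export async function login(email: string, password: string): Promise<string | null> {
  const user = await User.findOne({ email });
  if (!user) return null;

  const ok = await bcrypt.compare(password, user.passwordHash);
  if (!ok) return null;

  // Protected routes verify this token, e.g. via Passport's JWT strategy.
  return jwt.sign({ sub: String(user._id), email }, JWT_SECRET, { expiresIn: '1h' });
}
```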

Client side: As mentioned earlier, we will use React to build our SPA. React uses a virtual DOM, which is very efficient for rendering a page. React will also allow us to reuse components. Furthermore, it is very popular, and the large community around React can be helpful if we run into issues. We also plan to make a cross-platform mobile application later, and using React will allow us to reuse a lot of our code with React Native. Redux will be used to manage state. Redux works great with React and will help us manage a global state in the app and avoid the complications of each component having its own state. Additionally, we will use Bootstrap components and custom CSS to style our app.
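
A minimal sketch of that global state setup, written with Redux Toolkit (the currently recommended way to configure Redux); the slice name and item shape are illustrative assumptions.

```typescript
// A cart slice plus store setup with Redux Toolkit, shared across React components.
// Slice name and item shape are illustrative placeholders.
import { configureStore, createSlice, type PayloadAction } from '@reduxjs/toolkit';

interface CartItem {
  id: string;
  name: string;
  price: number;
  quantity: number;
}

const cartSlice = createSlice({
  name: 'cart',
  initialState: { items: [] as CartItem[] },
  reducers: {
    addItem(state, action: PayloadAction<CartItem>) {
      // Immer lets us "mutate" the draft state inside createSlice reducers.
      const existing = state.items.find((i) => i.id === action.payload.id);
      if (existing) existing.quantity += action.payload.quantity;
      else state.items.push(action.payload);
    },
    removeItem(state, action: PayloadAction<string>) {
      state.items = state.items.filter((i) => i.id !== action.payload);
    },
  },
});

export const { addItem, removeItem } = cartSlice.actions;

export const store = configureStore({
  reducer: { cart: cartSlice.reducer },
});

export type RootState = ReturnType<typeof store.getState>;
```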

Other: Git will be used for version control. During the later stages of our project, we will use Google Analytics to collect useful data regarding user interactions. Moreover, Slack will be our primary communication tool. Also, we will use Visual Studio Code as our primary code editor because it is very lightweight and has a wide variety of extensions that boost productivity. Postman will be used to interact with and debug our API endpoints.

John Akhilomen · April 1st 2020 at 3:00PM

I like the tech stack you guys have selected. You guys seem to have it all figured out, and well planned. Good luck!

Ne Labs · March 9th 2020 at 12:34PM

An RDBMS like Postgres can also store, index, and query schemaless data as JSON fields, while also supporting relations where they make sense. A document model is actually a downside, since the data will usually still have relations, and it makes modeling them inconvenient.
