Find the right developer tools and the companies that use them


StackShare Editors
Node.js
npm
Yarn

From a StackShare Community member: “I’m a freelance web developer (I mostly use Node.js), and for future projects I’m debating between npm and Yarn as my default package manager. I’m a minimalist, so I hate installing software if I don’t need to; in this case, that would be Yarn. For those who made the switch from npm to Yarn, what benefits have you noticed? For those who stuck with npm, are you happy with it?”

See more
Tony Ko
Front End Developer at Brandfire Marketing Group · 5 upvotes · 2.3K views
JavaScript
jQuery

I prefer native JavaScript over jQuery where possible to avoid bloat. I also find the documentation for native JS methods better than jQuery's documentation site.

Most jQuery methods can be replaced with native code. For the exceptions, I'd rather use a dedicated library. For example, axios.get is much better than $.get. You can also pick any number of animation libraries that are better than jQuery's.

However, I don't mind using it in a team environment, where communication & maintainability > code size. jQuery can help in those cases because most team members will already know it.
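The point about native replacements can be sketched with a few of jQuery's utility helpers. This is a hypothetical illustration, not code from the original post; the jQuery calls are shown only as comments next to their native counterparts:

```typescript
// Hypothetical side-by-side of jQuery utility calls (as comments)
// and their native JavaScript/TypeScript replacements.

// $.extend({}, defaults, overrides)  →  object spread
const defaults = { retries: 3, verbose: false };
const overrides = { verbose: true };
const config = { ...defaults, ...overrides };

// $.inArray("npm", tools) !== -1  →  Array.prototype.includes
const tools: string[] = ["npm", " yarn "];
const hasNpm = tools.includes("npm");

// $.map(tools, fn) and $.trim(t)  →  Array.prototype.map and String.prototype.trim
const cleaned = tools.map((t) => t.trim());

console.log(config, hasNpm, cleaned);
```

For DOM work, document.querySelector and element.classList cover most everyday uses of $(), and fetch (or a dedicated client like axios) replaces $.get and $.ajax for HTTP.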

See more
Eric Colson
Chief Algorithms Officer at Stitch Fix · 19 upvotes · 336.9K views
at Stitch Fix
Kafka
PostgreSQL
Amazon S3
Apache Spark
Presto
Python
R
PyTorch
Docker
Amazon EC2 Container Service
#AWS
#Etl
#ML
#DataScience
#DataStack
#Data

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into our systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan gives our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

#DataScience #DataStack #Data

See more
Jerome Dalbert
Senior Backend Engineer at StackShare · 2 upvotes · 5K views
at StackShare
Rails
Redis
#Performance

In our Rails app, I recently had to track the number of views for multiple records at once. Because these views happen on a high traffic page, #Performance is a concern, so I use our Redis instance for storage.

The following code benchmarks at 8 seconds for 50k values:

record_ids.each do |record_id|
  $redis.incr("my_records:#{record_id}:views_count")
end

It is not very efficient, because we are performing 50k Redis calls over the network, each with its own round trip. Since I am already optimizing by using Redis, why not optimize all the way and make only one round trip to Redis?

It wasn’t immediately obvious how to do this. Redis' INCR command doesn’t accept multiple keys. But after a bit of research, I found this solution, which benchmarks at only 0.9 seconds for 50k values:

$redis.pipelined do
  record_ids.each do |record_id|
    $redis.incr("my_records:#{record_id}:views_count")
  end
end

Here, pipelined queues the 50k INCR commands and sends them without waiting for an individual response to each one; the replies come back from the Redis server in a single batch at the end.

Much faster than performing 50k request+response cycles!

See more