
Alternatives to PySpark

Scala, Python, Apache Spark, Pandas, and Hadoop are the most popular alternatives and competitors to PySpark.

What is PySpark and what are its top alternatives?

PySpark is the collaboration of Apache Spark and Python: a Python API for Spark that lets you harness the simplicity of Python and the power of Apache Spark to tame Big Data.
PySpark is a tool in the Data Science Tools category of a tech stack.
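
As a quick sketch of what that API looks like (the file and column names below are hypothetical, not taken from this page), a grouped aggregation reads like ordinary Python while Spark distributes the execution:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

# Read a CSV into a distributed DataFrame; "events.csv" and the
# "country" column are illustrative.
df = spark.read.csv("events.csv", header=True, inferSchema=True)

# Aggregate with Python syntax; Spark plans and runs the work
# across the cluster.
(df.groupBy("country")
   .agg(F.count("*").alias("events"))
   .orderBy(F.desc("events"))
   .show())

spark.stop()
```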

Top Alternatives to PySpark

  • Scala

    Scala is an acronym for “Scalable Language”. This means that Scala grows with you. You can play with it by typing one-line expressions and observing the results. But you can also rely on it for large mission-critical systems, as many companies, including Twitter, LinkedIn, and Intel, do. To some, Scala feels like a scripting language. Its syntax is concise and low ceremony; its types get out of the way because the compiler can infer them. ...

  • Python

    Python is a general purpose programming language created by Guido van Rossum. Python is most praised for its elegant syntax and readable code; if you are just beginning your programming career, Python suits you best. ...

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Pandas

    Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more. ... (See the Pandas sketch after this list.)

  • Hadoop

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. ... (See the Hadoop Streaming sketch after this list.)

  • PyTorch

    PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use numpy / scipy / scikit-learn etc. ... (See the PyTorch sketch after this list.)

  • Dask

    It is a versatile tool that supports a variety of workloads. It is composed of two parts: dynamic task scheduling optimized for computation (similar to Airflow, Luigi, Celery, or Make, but optimized for interactive computational workloads), and "Big Data" collections such as parallel arrays, dataframes, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of the dynamic task schedulers. ... (See the Dask sketch after this list.)

  • NumPy

    Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. ... (See the NumPy sketch after this list.)
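
A minimal Pandas sketch (the column names and values are made-up examples, not from this page):

```python
import pandas as pd

# Labeled, in-memory data structures similar to R's data.frame;
# the columns and values are illustrative.
df = pd.DataFrame({
    "country": ["US", "DE", "US", "FR"],
    "events": [3, 1, 2, 5],
})

# Label-based selection and a grouped aggregate.
totals = df.groupby("country")["events"].sum()
print(totals)
print(df["events"].describe())
```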
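
Hadoop itself is JVM-based, but its "simple programming models" are often driven from Python via Hadoop Streaming, which pipes data through any executable. A hedged sketch of a word-count mapper (the file names are assumptions for illustration):

```python
#!/usr/bin/env python3
# mapper.py -- an illustrative Hadoop Streaming word-count mapper.
# Hadoop feeds input splits to stdin and collects the tab-separated
# key/value pairs this script writes to stdout.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

A matching reducer would sum the counts per word; both scripts are submitted with something like `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input in/ -output out/` (the jar path and directories vary by installation and are illustrative here).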
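
A minimal PyTorch sketch of the use-it-like-NumPy claim (the values are arbitrary):

```python
import torch

# Tensors behave much like NumPy arrays, with autograd layered on top.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()

# Backpropagate: dy/dx = 2x, so x.grad is [2., 4., 6.].
y.backward()
print(x.grad)

# Interop with NumPy is direct.
print(x.detach().numpy())
```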
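
A minimal Dask sketch of a parallel collection mirroring the NumPy interface (the shape and chunk sizes are arbitrary):

```python
import dask.array as da

# A chunked, parallel array: work is split into per-chunk tasks and
# run by Dask's scheduler, so data can exceed memory.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
result = (x + x.T).mean(axis=0)

# Nothing has executed yet; compute() triggers the task graph.
print(result.compute()[:5])
```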
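
A minimal NumPy sketch of a custom structured dtype used as a container for generic data (the field names are hypothetical):

```python
import numpy as np

# Define an arbitrary record dtype and pack rows into one typed array.
record = np.dtype([("id", np.int32), ("score", np.float64)])
data = np.array([(1, 0.5), (2, 0.9), (3, 0.1)], dtype=record)

# Field access and vectorized operations work per column.
print(data["score"].mean())
print(np.sort(data, order="score"))
```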

PySpark alternatives & related posts

Scala

A pure-bred object-oriented language that runs on the JVM
PROS OF SCALA
  • Static typing (188)
  • Pattern-matching (179)
  • JVM (177)
  • Scala is fun (172)
  • Types (138)
  • Concurrency (95)
  • Actor library (88)
  • Solve functional problems (86)
  • Open source (83)
  • Solve concurrency in a safer way (80)
  • Functional (44)
  • Generics (23)
  • Fast (23)
  • It makes me a better engineer (18)
  • Syntactic sugar (17)
  • Scalable (13)
  • First-class functions (10)
  • Type safety (10)
  • Interactive REPL (9)
  • Expressive (8)
  • SBT (7)
  • Implicit parameters (6)
  • Case classes (6)
  • Used by Twitter (4)
  • JVM, OOP and Functional programming, and static typing (4)
  • Rapid and Safe Development using Functional Programming (4)
  • Object-oriented (4)
  • Functional Programming (3)
  • Spark (2)
  • Beautiful Code (2)
  • Safety (2)
  • Growing Community (2)
  • DSL (1)
  • Rich Static Types System and great Concurrency support (1)
  • Naturally enforce high code quality (1)
  • Akka Streams (1)
  • Akka (1)
  • Reactive Streams (1)
  • Easy embedded DSLs (1)
  • Mill build tool (1)
  • Freedom to choose the right tools for a job (0)
CONS OF SCALA
  • Slow compilation time (11)
  • Multiple ropes and styles to hang yourself (7)
  • Too few developers available (6)
  • Complicated subtyping (4)
  • My coworkers using scala are racist against other stuff (2)

related Scala posts

Shared insights on Java, Scala, and Apache Spark

I am new to Apache Spark and Scala both. I am basically a Java developer and have around 10 years of experience in Java.

I wish to work on some Machine learning or AI tech stacks. Please assist me in the tech stack and help make a clear Road Map. Any feedback is welcome.

Technologies apart from Scala and Spark are also welcome. Please note that the tools should be relevant to Machine Learning or Artificial Intelligence.

Marc Bollinger, Infra & Data Eng Manager at Thumbtack · 5 upvotes · 583.6K views

Lumosity is home to the world's largest cognitive training database, a responsibility we take seriously. For most of the company's history, our analysis of user behavior and training data has been powered by an event stream--first a simple Node.js pub/sub app, then a heavyweight Ruby app with stronger durability. Both supported decent throughput and latency, but they lacked some major features supported by existing open-source alternatives: replaying existing messages (also lacking in most message queue-based solutions), scaling out many different readers for the same stream, the ability to leverage existing solutions for reading and writing, and, possibly most importantly, the ability to hire someone externally who already had expertise.

We ultimately migrated to Kafka in early- to mid-2016, citing both industry trends among companies we'd talked to with similar durability and throughput needs, and the extremely strong documentation and community. We pored over Kyle Kingsbury's Jepsen post (https://aphyr.com/posts/293-jepsen-Kafka), as well as Jay Kreps' follow-up (http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen), talked at length with Confluent folks and community members, and still wound up running parallel systems for quite a long time, but ultimately, we've been very, very happy. Understanding the internals and proper levers takes some commitment, but it's taken very little maintenance once configured. Since then, the Confluent Platform community has grown and grown; we've gone from doing most development using custom Scala consumers and producers to being 60/40 Kafka Streams/Connects.

We originally looked into Storm / Heron, and we'd moved on from Redis pub/sub. Heron looks great, but we already had a programming model across services that was more akin to consuming a message stream than to a topology of bolts, etc. Heron also had just come out while we were starting to migrate things, and the community momentum and direction of Kafka felt more substantial than the older Storm. If we were to start the process over again today, we might check out Pulsar, although the ecosystem is much younger.

To find out more, read our 2017 engineering blog post about the migration!

Python

A clear and powerful object-oriented programming language, comparable to Perl, Ruby, Scheme, or Java.
PROS OF PYTHON
  • Great libraries (1.1K)
  • Readable code (947)
  • Beautiful code (834)
  • Rapid development (780)
  • Large community (682)
  • Open source (426)
  • Elegant (385)
  • Great community (278)
  • Object oriented (268)
  • Dynamic typing (214)
  • Great standard library (75)
  • Very fast (56)
  • Functional programming (51)
  • Scientific computing (43)
  • Easy to learn (43)
  • Great documentation (33)
  • Matlab alternative (26)
  • Productivity (25)
  • Easy to read (25)
  • Simple is better than complex (21)
  • It's the way I think (18)
  • Imperative (17)
  • Very programmer and non-programmer friendly (15)
  • Free (15)
  • Powerful language (14)
  • Machine learning support (14)
  • Powerful (14)
  • Fast and simple (13)
  • Scripting (12)
  • Explicit is better than implicit (9)
  • Unlimited power (8)
  • Ease of development (8)
  • Clear and easy and powerful (8)
  • Import antigravity (7)
  • It's lean and fun to code (6)
  • Print "life is short, use python" (6)
  • Great for tooling (5)
  • Fast coding and good for competitions (5)
  • I love snakes (5)
  • Python has great libraries for data processing (5)
  • There should be one-- and preferably only one --obvious way to do it (5)
  • Highly documented language (5)
  • Flat is better than nested (5)
  • Although practicality beats purity (5)
  • Rapid Prototyping (4)
  • Readability counts (4)
  • Great for analytics (3)
  • Web scraping (3)
  • Now is better than never (3)
  • Plotting (3)
  • Lists, tuples, dictionaries (3)
  • Socially engaged community (3)
  • Complex is better than complicated (3)
  • Multiple inheritance (3)
  • Beautiful is better than ugly (3)
  • CG industry needs (3)
  • No cruft (2)
  • Many types of collections (2)
  • Easy to learn and use (2)
  • Special cases aren't special enough to break the rules (2)
  • If the implementation is hard to explain, it's a bad idea (2)
  • If the implementation is easy to explain, it may be a good idea (2)
  • List comprehensions (2)
  • Generators (2)
  • Simple and easy to learn (2)
  • Easy to set up and run smoothly (2)
  • Import this (2)
  • Powerful language for AI (1)
  • Because of Netflix (1)
  • A-to-Z (1)
  • Only one way to do it (1)
  • Easy to understand for those who are new to programming (1)
  • Flexible and easy (1)
  • Better outcome (1)
  • Batteries included (1)
  • Good for hacking (1)
  • Should START with this but not STICK with This (1)
  • Pip install everything (1)
  • It is very easy and simple, and you will love programming (1)
  • Powerful (0)
CONS OF PYTHON
  • Still divided between Python 2 and Python 3 (51)
  • Performance impact (28)
  • Poor syntax for anonymous functions (26)
  • GIL (21)
  • Package management is a mess (19)
  • Too imperative-oriented (14)
  • Hard to understand (12)
  • Dynamic typing (12)
  • Very slow (10)
  • Not everything is an expression (8)
  • Explicit self parameter in methods (7)
  • Indentations matter a lot (7)
  • Poor DSL capabilities (6)
  • Incredibly slow (6)
  • No anonymous functions (6)
  • Requires C functions for dynamic modules (6)
  • Hard to obfuscate (5)
  • Threading (5)
  • Fake object-oriented programming (5)
  • The "lisp style" whitespaces (5)
  • Official documentation is unclear (4)
  • Circular import (4)
  • Lack of syntax sugar leads to "the pyramid of doom" (4)
  • Not suitable for autocomplete (4)
  • The benevolent-dictator-for-life quit (4)
  • Meta classes (2)
  • Training wheels (forced indentation) (1)

related Python posts

Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber · 41 upvotes · 5.5M views

How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:

Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

https://eng.uber.com/distributed-tracing/

(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark

Nick Parsons, Building cool things on the internet 🛠️ at Stream · 35 upvotes · 1.7M views

Winds 2.0 is an open source Podcast/RSS reader developed by Stream with a core goal to enable a wide range of developers to contribute.

We chose JavaScript because nearly every developer knows or can, at the very least, read JavaScript. With ES6 and Node.js v10.x.x, it’s become a very capable language. Async/Await is powerful and easy to use (Async/Await vs Promises). Babel allows us to experiment with next-generation JavaScript (features that are not in the official JavaScript spec yet). Yarn allows us to consistently install packages quickly (and is filled with tons of new tricks).

We’re using JavaScript for everything – both front and backend. Most of our team is experienced with Go and Python, so Node was not an obvious choice for this app.

Sure... there will be haters who refuse to acknowledge that there is anything remotely positive about JavaScript (there are even rants on Hacker News about Node.js); however, without writing completely in JavaScript, we would not have seen the results we did.

#FrameworksFullStack #Languages

Apache Spark

Fast and general engine for large-scale data processing
PROS OF APACHE SPARK
  • Open-source (59)
  • Fast and Flexible (48)
  • One platform for every big data problem (8)
  • Great for distributed SQL-like applications (7)
  • Easy to install and to use (6)
  • Works well for most data science use cases (3)
  • Interactive Query (2)
  • In-memory computation (2)
  • Machine learning library, streaming in real time (2)
CONS OF APACHE SPARK
  • Speed (3)
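
As a hedged illustration of the "Interactive Query" pro above (the table and column names are made up), the same engine that runs batch jobs also answers ad hoc SQL through PySpark:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").getOrCreate()

# Register a small DataFrame as a temporary view; the data is
# illustrative only.
df = spark.createDataFrame([("alice", 34), ("bob", 45)], ["name", "age"])
df.createOrReplaceTempView("people")

# Interactive, ad hoc SQL against the same engine used for batch work.
spark.sql("SELECT name FROM people WHERE age > 40").show()
spark.stop()
```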

related Apache Spark posts

Eric Colson, Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2.6M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 1.2M views

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse to any sink leveraging the use of Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)

Pandas

High-performance, easy-to-use data structures and data analysis tools for the Python programming language
PROS OF PANDAS
  • Easy data frame management (21)
  • Extensive file format compatibility (1)
CONS OF PANDAS
None listed yet.

related Pandas posts

Server side

We decided to use Python for our backend because it is one of the industry standard languages for data analysis and machine learning. It also has a lot of support due to its large user base.

• Web Server: We chose Flask because we want to keep our machine learning / data analysis and the web server in the same language. Flask is easy to use and we all have experience with it. Postman will be used for creating and testing APIs due to its convenience.

• Machine Learning: We decided to go with PyTorch for machine learning since it is one of the most popular libraries. It is also known to have an easier learning curve than other popular libraries such as TensorFlow. This is important because our team lacks ML experience and learning the tool as fast as possible would increase productivity.

• Data Analysis: Some common Python libraries will be used to analyze our data. These include NumPy, Pandas, and matplotlib. These tools combined will help us learn the properties and characteristics of our data. Jupyter Notebook will be used to help organize the data analysis process and improve code readability.

Client side

• UI: We decided to use React for the UI because it helps organize the data and variables of the application into components, making it very convenient to maintain our dashboard. Since React is one of the most popular front end frameworks right now, there will be a lot of support for it as well as a lot of potential new hires that are familiar with the framework. CSS 3 and HTML5 will be used for the basic styling and structure of the web app, as they are the most widely used front end languages.

• State Management: We decided to use Redux to manage the state of the application since it works naturally with React. Our team also already has experience working with Redux, which gave it a slight edge over the other state management libraries.

• Data Visualization: We decided to use the React-based library Victory to visualize the data. They have very user-friendly documentation on their official website, which we find easy to learn from.

Cache

• Caching: We decided between Redis and Memcached because they are two of the most popular open-source cache engines. We ultimately decided to use Redis to improve our web app performance, mainly due to the extra functionality it provides, such as fine-tuning cache contents and durability.

Database

• Database: We decided to use a NoSQL database over a relational database because of the flexibility of not having a predefined schema. The user behavior analytics have to be flexible since the data we plan to store may change frequently. We decided on MongoDB because it is lightweight and we can easily host the database with MongoDB Atlas. Everyone on our team also has experience working with MongoDB.

Infrastructure

• Deployment: We decided to use Heroku over AWS, Azure, and Google Cloud because it is free. Although there are advantages to the other cloud services, Heroku makes the most sense for our team because our primary goal is to build an MVP.

Other Tools

• Communication: Slack will be used as the primary means of communication. It provides all the features needed for basic discussions. For more interactive meetings, Zoom will be used for its video calls and screen-sharing capabilities.

• Source Control: The project will be stored on GitHub, and all code changes will be done through pull requests. This will help us keep the codebase clean and make it easy to revert changes when we need to.

Should I continue learning Django or take this Spring opportunity? I have been coding in Python for about 2 years. I am currently learning Django and I am enjoying it. I also have some knowledge of data science libraries (Pandas, NumPy, scikit-learn, PyTorch). I am currently enhancing my web development and software engineering skills and may shift later into data science, since I came from a medical background. The issue is that I am now offered a very trustworthy 9-month program teaching Java/Spring. The graduates of this program work directly in well-known tech companies. Although I have been planning to continue with my Python, the other opportunity makes me hesitant, since it will put me on a specific roadmap with deadlines and mentors. I also found on Glassdoor that there are far more Spring jobs than Django jobs. Should I apply for this program or continue my journey?

Hadoop

Open-source software for reliable, scalable, distributed computing
PROS OF HADOOP
  • Great ecosystem (39)
  • One stack to rule them all (11)
  • Great load balancer (4)
  • Amazon AWS (1)
  • Java syntax (1)
CONS OF HADOOP
None listed yet.

related Hadoop posts

Shared insights on Kafka and Hadoop at Pinterest

The early data ingestion pipeline at Pinterest used Kafka as the central message transporter, with the app servers writing messages directly to Kafka, which then uploaded log files to S3.

For databases, a custom Hadoop streamer pulled database data and wrote it to S3.

Challenges cited for this infrastructure included high operational overhead, as well as potential data loss occurring when Kafka broker outages led to an overflow of in-memory message buffering.

PyTorch

A deep learning framework that puts Python first
PROS OF PYTORCH
  • Easy to use (14)
  • Developer Friendly (11)
  • Easy to debug (10)
  • Sometimes faster than TensorFlow (7)
CONS OF PYTORCH
  • Lots of code (3)
  • It eats poop (1)

related PyTorch posts

Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber · 8 upvotes · 1.5M views

Why we built an open source, distributed training framework for TensorFlow, Keras, and PyTorch:

At Uber, we apply deep learning across our business; from self-driving research to trip forecasting and fraud prevention, deep learning enables our engineers and data scientists to create better experiences for our users.

TensorFlow has become a preferred deep learning library at Uber for a variety of reasons. To start, the framework is one of the most widely used open source frameworks for deep learning, which makes it easy to onboard new users. It also combines high performance with an ability to tinker with low-level model details—for instance, we can use both high-level APIs, such as Keras, and implement our own custom operators using NVIDIA’s CUDA toolkit.

Uber has introduced Michelangelo (https://eng.uber.com/michelangelo/), an internal ML-as-a-service platform that democratizes machine learning and makes it easy to build and deploy these systems at scale. In this article, we pull back the curtain on Horovod, an open source component of Michelangelo’s deep learning toolkit which makes it easier to start—and speed up—distributed deep learning projects with TensorFlow:

https://eng.uber.com/horovod/

(Direct GitHub repo: https://github.com/uber/horovod)

Dask

A flexible library for parallel computing in Python
PROS OF DASK
None listed yet.
CONS OF DASK
None listed yet.

related Dask posts
None yet.

NumPy

Fundamental package for scientific computing with Python
PROS OF NUMPY
  • Great for data analysis (8)
  • Faster than list (2)
CONS OF NUMPY
None listed yet.

related NumPy posts

(The related posts here are the same "Server side" stack write-up and Django-vs-Spring question shown under related Pandas posts above.)