Here are some stack decisions, common use cases, and reviews from companies and developers who chose Pandas for their tech stack.
My process is like this: once a month I get data, either from Google BigQuery or as parquet files from Azure Blob Storage. A script does some cleaning and then stores the result as partitioned parquet files, because the following process cannot load all of the data into memory.
The next process runs a heavy computation in parallel (per partition) and stores three intermediate versions as parquet files: two are used for statistics, and the third is filtered to produce the final files.
I build a report from the two statistics files in a Jupyter notebook and convert it to HTML.
- Everything is done with vanilla Python and Pandas (a minimal sketch of the per-partition step follows this list).
- Sometimes the data arrives in a different format.
- The cloud provider is Microsoft Azure.
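As a rough illustration, here is a minimal sketch of the per-partition pattern described above; the directory layout, column names, and the compute step are hypothetical stand-ins, not the actual pipeline:

```python
# Minimal sketch of the per-partition workflow described above.
# Paths, column names, and heavy_computation are hypothetical.
from pathlib import Path

import pandas as pd

RAW_DIR = Path("cleaned_parquet")   # partitioned output of the cleaning step
OUT_DIR = Path("intermediate")
OUT_DIR.mkdir(exist_ok=True)

def heavy_computation(df: pd.DataFrame) -> pd.DataFrame:
    # Placeholder for the real per-partition computation.
    return df.assign(rolling_mean=df["value"].rolling(7, min_periods=1).mean())

# One partition at a time, so the full dataset never has to fit in memory.
for part in sorted(RAW_DIR.glob("*.parquet")):
    df = pd.read_parquet(part)               # needs pyarrow or fastparquet
    heavy_computation(df).to_parquet(OUT_DIR / part.name, index=False)
```

The loop could be parallelized per partition with concurrent.futures.ProcessPoolExecutor, matching the "heavy computation in a parallel fashion" step.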
What I'm considering is the following:
Ingest the data with Kafka or with native Python, do the first processing pass, and store the data in Druid. The second processing pass would be done with Apache Spark, reading the data from Apache Druid.
The intermediate states could be stored in Druid too, and visualization would be handled by Apache Superset.
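For the Kafka ingestion step being considered, a minimal consumer sketch with the kafka-python client might look like this; the topic name, broker address, and batch size are assumptions:

```python
# Hypothetical Kafka ingestion: read JSON records from a topic and land them
# as parquet batches that the existing pandas pipeline could pick up.
import json

import pandas as pd
from kafka import KafkaConsumer  # kafka-python package

consumer = KafkaConsumer(
    "monthly-data",                          # assumed topic name
    bootstrap_servers="localhost:9092",      # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

batch, BATCH_SIZE = [], 10_000
for msg in consumer:
    batch.append(msg.value)
    if len(batch) >= BATCH_SIZE:
        pd.DataFrame(batch).to_parquet(f"landing/batch_{msg.offset}.parquet")
        batch.clear()
```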
Should I continue learning Django or take this Spring opportunity? I have been coding in Python for about two years. I am currently learning Django and enjoying it. I also have some knowledge of data science libraries (Pandas, NumPy, scikit-learn, PyTorch). I am strengthening my web development and software engineering skills, and I may shift into data science later since I come from a medical background. The issue is that I have now been offered a place in a very reputable nine-month program teaching Java/Spring. Graduates of this program go directly to well-known tech companies. Although I had been planning to continue with Python, this opportunity makes me hesitant, since it would put me on a structured roadmap with deadlines and mentors. I also found on Glassdoor that there are far more Spring jobs than Django jobs. Should I apply for this program or continue my current journey?
Server side
We decided to use Python for our backend because it is one of the industry-standard languages for data analysis and machine learning. It also has a lot of support thanks to its large user base.
Web Server: We chose Flask because we want to keep our machine learning / data analysis and the web server in the same language. Flask is easy to use and we all have experience with it. Postman will be used for creating and testing APIs due to its convenience.
Machine Learning: We decided to go with PyTorch for machine learning since it is one of the most popular libraries. It is also known to have a gentler learning curve than other popular libraries such as TensorFlow. This matters because our team lacks ML experience, and learning the tool quickly will increase productivity.
Data Analysis: Common Python libraries will be used to analyze our data, including NumPy, Pandas, and Matplotlib. Together, these tools will help us learn the properties and characteristics of our data. Jupyter notebooks will be used to organize the data analysis process and improve code readability.
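As a sketch of how these server-side pieces could fit together, here is a hypothetical Flask endpoint serving a placeholder PyTorch model; the route, model, and feature names are all made up, not the team's actual code:

```python
# Hypothetical Flask endpoint serving a (placeholder) PyTorch model,
# with pandas handling the tabular input.
import flask
import pandas as pd
import torch

app = flask.Flask(__name__)
model = torch.nn.Linear(3, 1)   # stand-in for a real trained model

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON list of records, e.g. [{"f1": 1.0, "f2": 2.0, "f3": 3.0}]
    df = pd.DataFrame(flask.request.get_json())
    with torch.no_grad():
        preds = model(torch.tensor(df.values, dtype=torch.float32))
    return flask.jsonify(predictions=preds.squeeze(1).tolist())

if __name__ == "__main__":
    app.run(debug=True)
```

An endpoint like this is also easy to exercise from Postman, as described above, by POSTing a JSON body to /predict.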
Client side
UI: We decided to use React for the UI because it organizes the application's data and variables into components, which makes our dashboard easy to maintain. Since React is one of the most popular front-end frameworks right now, there is plenty of support for it, as well as a large pool of potential new hires who are familiar with the framework. CSS3 and HTML5 will be used for the basic styling and structure of the web app, as they are the most widely used front-end languages.
State Management: We decided to use Redux to manage application state since it integrates naturally with React. Our team also already has experience working with Redux, which gave it a slight edge over the other state management libraries.
Data Visualization: We decided to use the React-based library Victory to visualize the data. Its official documentation is very user-friendly, which makes it easy to learn from.
Cache
- Caching: We decided between Redis and Memcached because they are two of the most popular open-source cache engines. We ultimately chose Redis to improve our web app's performance, mainly for the extra functionality it provides, such as configurable eviction policies and on-disk persistence.
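A minimal sketch of this caching pattern with the redis-py client; the key naming, TTL, and slow path are illustrative assumptions:

```python
# Cache an expensive computation in Redis with a TTL, so repeat
# requests within five minutes skip the slow path entirely.
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def compute_user_stats(user_id: str) -> dict:
    return {"user": user_id, "visits": 42}   # placeholder for the slow path

def get_user_stats(user_id: str) -> dict:
    key = f"user-stats:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    stats = compute_user_stats(user_id)
    r.set(key, json.dumps(stats), ex=300)    # expire after 300 seconds
    return stats
```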
Database
- Database: We decided to use a NoSQL database over a relational database for the flexibility of not having a predefined schema. Our user-behavior analytics data has to stay flexible, since the fields we plan to store may change frequently. We chose MongoDB because it is lightweight and we can easily host the database with MongoDB Atlas. Everyone on our team also has experience working with MongoDB.
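A short sketch of what the schemaless event storage might look like with pymongo; the database, collection, and field names are assumptions:

```python
# Events with different shapes can live in the same MongoDB collection,
# which is the schema flexibility argued for above.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # or a MongoDB Atlas URI
events = client["analytics"]["events"]

events.insert_one({"user": "u1", "action": "click", "target": "signup",
                   "ts": datetime.now(timezone.utc)})
events.insert_one({"user": "u2", "action": "search", "query": "pandas",
                   "ts": datetime.now(timezone.utc)})
```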
Infrastructure
- Deployment: We decided to use Heroku over AWS, Azure, and Google Cloud because it has a free tier. Although the other cloud services have their advantages, Heroku makes the most sense for our team because our primary goal is to build an MVP.
Other Tools
Communication: Slack will be used as the primary means of communication. It provides all the features needed for basic discussions. For more interactive meetings, Zoom will be used for its video calls and screen sharing capabilities.
Source Control: The project will be stored on GitHub, and all code changes will be made through pull requests. This will help us keep the codebase clean and make it easy to revert changes when needed.
Pandas's Features
- Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
- Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
- Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
- Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data
- Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into DataFrame objects
- Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
- Intuitive merging and joining data sets
- Flexible reshaping and pivoting of data sets
- Hierarchical labeling of axes (possible to have multiple labels per tick)
- Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving/loading data from the ultrafast HDF5 format
- Time series-specific functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
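A short, self-contained example touching several of these features on made-up data (all names are illustrative):

```python
# Demonstrates missing data, group by, alignment, merging,
# pivoting, and time series resampling from the list above.
import numpy as np
import pandas as pd

# Missing data is represented as NaN and handled gracefully.
sales = pd.DataFrame({
    "region": ["north", "north", "south", "south"],
    "month": pd.date_range("2023-01-01", periods=4, freq="MS"),
    "revenue": [100.0, np.nan, 150.0, 175.0],
})

# Split-apply-combine: group by a key and aggregate.
by_region = sales.groupby("region")["revenue"].sum()

# Label-based alignment: arithmetic aligns on the index automatically.
targets = pd.Series({"north": 120.0, "south": 300.0})
gap = by_region - targets

# Merging/joining data sets on a common column.
managers = pd.DataFrame({"region": ["north", "south"],
                         "manager": ["Ana", "Bo"]})
merged = sales.merge(managers, on="region")

# Reshaping/pivoting.
pivot = merged.pivot_table(index="month", columns="region", values="revenue")

# Time series functionality: resample monthly data to quarterly totals.
quarterly = sales.set_index("month")["revenue"].resample("QS").sum()

print(gap, pivot, quarterly, sep="\n\n")
```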