COBOL vs Python: What are the differences?
What is COBOL? Imperative, procedural and, since 2002, object-oriented programming language. COBOL was one of the first programming languages to be standardised: the first COBOL standard was issued by ANSI in 1968. COBOL is primarily used in business, finance, and administrative systems for companies and governments.
What is Python? A clear and powerful object-oriented programming language, comparable to Perl, Ruby, Scheme, or Java. Python is a general-purpose programming language created by Guido van Rossum. Python is most praised for its elegant syntax and readable code; if you are just beginning your programming career, Python suits you well.
COBOL and Python can be categorized as "Languages" tools.
Python is an open source tool with 25.3K GitHub stars and 10.5K GitHub forks; its source repository is hosted on GitHub.
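As a quick illustration of the readable syntax that description praises, here is a trivial Python sketch (the function is invented for this example):

```python
# A small example of Python's readable, expression-oriented style.
def sum_of_squares(numbers):
    """Return the sum of the squares of the given numbers."""
    return sum(n * n for n in numbers)

print(sum_of_squares([1, 2, 3]))  # prints 14
```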
How Uber developed Jaeger, its open source, end-to-end distributed tracing system and now a CNCF project:
Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.
Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:
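To make that adoption concrete, here is a minimal sketch of instrumenting a Python service with the jaeger-client library; the service name, span name, and tag are hypothetical, and this is illustrative rather than Uber's actual instrumentation:

```python
# Minimal Jaeger instrumentation sketch (hypothetical names throughout).
from jaeger_client import Config

config = Config(
    config={
        "sampler": {"type": "const", "param": 1},  # sample every trace
        "logging": True,
    },
    service_name="checkout-service",
    validate=True,
)
tracer = config.initialize_tracer()

# Each traced operation becomes a span; spans emitted by different
# services are stitched into one end-to-end trace by the Jaeger backend.
with tracer.start_span("charge-card") as span:
    span.set_tag("order.id", "12345")
    # ... the work being traced ...

tracer.close()  # flush buffered spans before the process exits
```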
We initially started out with Heroku as our PaaS provider because our original developer wanted to use it for our Ruby on Rails application/website at the time. Response times were painfully slow, sometimes taking 10 seconds to start loading the main page, and moving up to the next "compute" level was going to be very expensive.
We moved our site over to AWS Elastic Beanstalk, and not only did response times on the site become practically instant, our cloud bill for the application was cut in half.
On the database side, we currently use Amazon RDS for PostgreSQL, and we also have MariaDB and Microsoft SQL Server hosted on Amazon RDS. The plan is to migrate all three database systems to AWS Aurora Serverless.
Additional services we use for our public applications: AWS Lambda, Python, Redis, Memcached, AWS Elastic Load Balancing (ELB), Amazon Elasticsearch Service, Amazon ElastiCache
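As one small example of how the Lambda + Python pieces of that stack fit together, here is a minimal handler; the event fields and response shape are hypothetical, modeled on a simple HTTP-style invocation:

```python
# A minimal AWS Lambda handler in Python (hypothetical event shape).
import json

def handler(event, context):
    # Lambda calls this function with the triggering event payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```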
Following its migration from vanilla instances with autoscaling groups to Kubernetes, Postmates began facing challenges while “migrating workloads that needed to scale up very quickly.”
The built-in Horizontal Pod Autoscaler (HPA) automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization. The challenge for Postmates was that there is no way to configure the scale velocity of a particular cluster with an HPA.
For Postmates, which runs at least three different types of applications with distinct performance and scaling characteristics, this proved problematic.
To overcome these challenges, the team created and open sourced the Configurable Horizontal Pod Autoscaler, which allows for fine-grained tuning on a per-HPA object basis. The result is that “you can configure critical services to scale down very slowly, while every other service could be configured to scale down instantly to reduce costs.”
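For context, here is a minimal sketch of creating a stock HPA with the official Kubernetes Python client; the deployment name, namespace, and thresholds are hypothetical. Postmates' Configurable HPA extends this built-in resource with the per-object scale-velocity tuning described above:

```python
# Create a standard autoscaling/v1 HPA with the Kubernetes Python client.
# Names and thresholds are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # authenticate via the local kubeconfig
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=3,
        max_replicas=50,
        target_cpu_utilization_percentage=70,  # scale on observed CPU
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```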
At our company, and I've noticed this at a lot of other ones... application developers and dev-ops people tend to use Ruby, while our statisticians and data scientists love Python. Like most companies, our stack is kind of split that way. Ruby is used as glue in most of our production systems (Java being the main backend language), and all of our data scientists and their various pipelines tend towards Python.
Multiple systems mean data has to be carted across them.
We started off with Talend scripts. This was great: what we initially had were PHP/Python scripts, and Talend allowed for a more systematic approach to ETL.
But we ended up with a massive repository of scripts, complex crontab entries, and regular failures due to memory issues.
Using Stitch or similar services is a better approach:
- no need to worry about the infrastructure needed for the ETL processes
- a more formal mapping of data from source to destination, as opposed to a script developer doing his/her voodoo magic
- many common source and destination integrations are already built in and available out of the box
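For contrast, here is roughly what one of those hand-rolled scripts looks like; the tables and connection strings are hypothetical. Everything inside the loop is the ad-hoc mapping logic that a service like Stitch formalizes and manages for you:

```python
# A sketch of a hand-rolled Python ETL script (hypothetical databases).
import psycopg2

src = psycopg2.connect("dbname=source_db host=src.example.com")
dst = psycopg2.connect("dbname=dest_db host=dst.example.com")

with src.cursor() as read_cur, dst.cursor() as write_cur:
    read_cur.execute("SELECT id, email, created_at FROM users")
    for row in read_cur:
        # Upsert each row into the destination; this loop is where the
        # informal, developer-specific mapping logic tends to live.
        write_cur.execute(
            "INSERT INTO users (id, email, created_at) "
            "VALUES (%s, %s, %s) "
            "ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email",
            row,
        )

dst.commit()
src.close()
dst.close()
```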