Scala vs Apache Spark: What are the differences?
What is Scala? A pure-bred object-oriented language that runs on the JVM. Scala is an acronym for “Scalable Language”, meaning that Scala grows with you. You can play with it by typing one-line expressions and observing the results, but you can also rely on it for large mission-critical systems, as many companies, including Twitter, LinkedIn, and Intel, do. To some, Scala feels like a scripting language: its syntax is concise and low-ceremony, and its types get out of the way because the compiler can infer them.
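For example, in the Scala REPL (a minimal illustration; the exact echo format varies across Scala versions):

```scala
scala> val greeting = "Hello, " + "Scala"   // the type String is inferred, not declared
val greeting: String = Hello, Scala

scala> List(1, 2, 3).map(_ * 2)             // a concise one-line expression
val res0: List[Int] = List(2, 4, 6)
```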
What is Apache Spark? A fast and general engine for large-scale data processing. Spark is a fast, general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or in Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and newer workloads like streaming, interactive queries, and machine learning.
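As a minimal sketch of the batch side (the input path is hypothetical), a word count with Spark's Dataset API in Scala:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("WordCount").getOrCreate()
import spark.implicits._

// Read text from HDFS, split it into words, and count occurrences of each word.
val counts = spark.read.textFile("hdfs:///data/input.txt") // hypothetical path
  .flatMap(_.split("\\s+"))
  .groupBy("value") // textFile yields a Dataset[String] whose single column is "value"
  .count()

counts.show()
```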
Scala and Apache Spark are primarily classified as "Languages" and "Big Data" tools respectively.
"Static typing" is the primary reason why developers consider Scala over the competitors, whereas "Open-source" was stated as the key factor in picking Apache Spark.
Scala and Apache Spark are both open source tools. Apache Spark, with 22.5K GitHub stars and 19.4K forks, appears to be more popular than Scala, with 11.8K stars and 2.75K forks.
Twitter, Coursera, and 9GAG are some of the popular companies that use Scala, whereas Apache Spark is used by Uber Technologies, Slack, and Shopify. Scala has broader approval, being mentioned in 437 company stacks and 324 developer stacks, compared to Apache Spark, which is listed in 266 company stacks and 112 developer stacks.
I am new to both Apache Spark and Scala. I am basically a Java developer, with around 10 years of experience in Java.
I wish to work on some machine learning or AI tech stacks. Please suggest a tech stack and help me lay out a clear roadmap. Any feedback is welcome.
Technologies apart from Scala and Spark are also welcome; please note that the tools should be relevant to machine learning or artificial intelligence.
I may be a little biased, but if you need some good introductions to Scala, have a look at the free books from https://underscore.io/training/ - click through to each course and there is a free book.
We have a Kafka topic containing events of type A and type B. We need to perform an inner join on the two event types using a common field (primary key), and insert the joined events into Elasticsearch.
In typical cases, type A and type B events (with the same key) are observed within 15 minutes of each other. But in some cases they may be far apart, say 6 hours, and sometimes an event of one of the types never arrives.
In all cases, we should be able to find joined events instantly after they are joined, and un-joined events within 15 minutes.
The first solution that came to me is to use upserts to update Elasticsearch:
- Use the primary key as the ES document id
- Upsert the records to ES as soon as you receive them. Because you are using upsert, the 2nd record with the same primary key will not overwrite the 1st one but will be merged with it (see the sketch after this list)
Cons: the load on ES will be higher, due to the upserts.
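A minimal sketch of that upsert, using the Elasticsearch high-level REST client from Scala (the index name, field layout, and helper are assumptions, not part of the original answer):

```scala
import org.apache.http.HttpHost
import org.elasticsearch.action.update.UpdateRequest
import org.elasticsearch.client.{RequestOptions, RestClient, RestHighLevelClient}
import org.elasticsearch.common.xcontent.XContentType

val client = new RestHighLevelClient(
  RestClient.builder(new HttpHost("localhost", 9200, "http")))

// Hypothetical helper: each event arrives as a JSON document keyed by its primary key.
def upsertEvent(primaryKey: String, eventJson: String): Unit = {
  val request = new UpdateRequest("joined-events", primaryKey) // hypothetical index name
    .doc(eventJson, XContentType.JSON)
    .docAsUpsert(true) // insert if absent; otherwise merge fields into the existing doc
  client.update(request, RequestOptions.DEFAULT)
}
```

As long as the A and B events use distinct field names, the merge leaves both halves in one document once the 2nd event arrives.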
To use Flink:
- Create a KeyedStream by the primary key
- In the ProcessFunction, save the first record in a State. At the same time, create a Timer for 15 minutes in the future
- When the 2nd record comes, read the 1st record from the State, merge the two, send out the result, and clear the State and the Timer if it has not fired
- When the Timer fires, read the 1st record from the State and send it out as the output record
- Have a 2nd Timer of 6 hours (or more) to clean up the State, if you are not using windowing
Pro: worth it if you already have Flink ingesting this stream (see the sketch below). Otherwise, I would just go with the 1st solution.
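A rough sketch of that join in a Flink KeyedProcessFunction (the event model, field names, and class names are assumptions):

```scala
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

// Hypothetical event model: A and B events share a primary key.
case class Event(key: String, kind: String, payload: String)
case class Joined(key: String, a: Option[Event], b: Option[Event])

class JoinFunction extends KeyedProcessFunction[String, Event, Joined] {

  // Buffers the first event seen for this key.
  private lazy val firstEvent: ValueState[Event] =
    getRuntimeContext.getState(new ValueStateDescriptor("first-event", classOf[Event]))

  override def processElement(
      event: Event,
      ctx: KeyedProcessFunction[String, Event, Joined]#Context,
      out: Collector[Joined]): Unit = {
    val stored = firstEvent.value()
    if (stored == null) {
      // 1st event for this key: buffer it and set a 15-minute timer.
      firstEvent.update(event)
      ctx.timerService().registerProcessingTimeTimer(
        ctx.timerService().currentProcessingTime() + 15 * 60 * 1000L)
    } else {
      // 2nd event: merge, emit, and clear the buffered state
      // (a leftover timer becomes a no-op once the state is cleared).
      val (a, b) = if (stored.kind == "A") (stored, event) else (event, stored)
      out.collect(Joined(event.key, Some(a), Some(b)))
      firstEvent.clear()
    }
  }

  override def onTimer(
      timestamp: Long,
      ctx: KeyedProcessFunction[String, Event, Joined]#OnTimerContext,
      out: Collector[Joined]): Unit = {
    val stored = firstEvent.value()
    if (stored != null) {
      // No partner within 15 minutes: emit the unmatched event. A longer 2nd
      // timer (or state TTL) of ~6 hours would then evict the remaining state.
      if (stored.kind == "A") out.collect(Joined(stored.key, Some(stored), None))
      else out.collect(Joined(stored.key, None, Some(stored)))
    }
  }
}
```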
Please refer to the "Structured Streaming" feature of Spark, specifically "Stream-Stream Joins" at https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#stream-stream-joins . In short, you need to define watermark delays on both inputs and define a constraint on event time across the two inputs.
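A minimal sketch following that guide, assuming dfA and dfB are streaming DataFrames already parsed from Kafka, each with a primary-key column (pkA / pkB) and an event-time column (tsA / tsB); all of those names are assumptions:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.expr

def joinStreams(dfA: DataFrame, dfB: DataFrame): DataFrame = {
  // Watermarks bound how much state Spark keeps for late events (6 hours here).
  val a = dfA.withWatermark("tsA", "6 hours")
  val b = dfB.withWatermark("tsB", "6 hours")

  // Inner join on the primary key, constrained to events within 6 hours of each other.
  a.join(b, expr(
    """
    pkA = pkB AND
    tsB >= tsA - interval 6 hours AND
    tsB <= tsA + interval 6 hours
    """))
}
```

A left or right outer join with the same watermark and time constraint would also surface the unmatched events once the watermark passes.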
Finding the best server-side tool for building a personal information organizer that focuses on performance, simplicity, and scalability.
- performance and scalability
- get a prototype going fast by keeping the codebase simple
- find hosting that is affordable and scales well (Java/Scala-based options might not be affordable)
I've picked Node.js here, but honestly it's a toss-up between that and Go. It really depends on your background and skill set with respect to "getting something going fast" in one of these languages. Not knowing that, I've suggested Node because it can be easier to prototype with quickly and, built right, it is performant enough. The scaffolding available for Node.js services (Koa, Restify, NestJS) means you can get up and running pretty easily. It's also important to note that the surrounding tooling is good, such as tracing and metrics (important when you're building production-ready services).
You'll get more scalability and performance from Go, but balancing the two out, I would say that you'll get pretty far with a well-built Node.js service (our entire site, at over 1.5k requests/m, scales easily and holds its own with 4 pods in production).
Without knowing the scale you are building for and the systems you are using around it, it's hard to say for certain that this is the right route.
We needed to incorporate a Big Data framework for data stream analysis, specifically Apache Spark or Apache Storm. Three languages were the most suitable for the job: Python, Java, and Scala.
The winner was Python, for its best-in-class, high-performance data analysis libraries (NumPy, Pandas) written in C, its quick learning curve, the rapid prototyping it allows, and its great integration with future machine learning tools such as TensorFlow.
The whole codebase was shorter and more readable, which made it easier to develop and maintain.
Pros of Scala
- Static typing (187)
- Pattern matching (178)
- JVM (177)
- Scala is fun (172)
- Types (138)
- Concurrency (95)
- Actor library (88)
- Solves functional problems (86)
- Open source (81)
- Solves concurrency in a safer way (80)
- Functional (44)
- Fast (24)
- Generics (23)
- It makes me a better engineer (18)
- Syntactic sugar (17)
- Scalable (13)
- First-class functions (10)
- Type safety (10)
- Interactive REPL (9)
- Expressive (8)
- SBT (7)
- Case classes (6)
- Implicit parameters (6)
- Rapid and safe development using functional programming (4)
- JVM, OOP, functional programming, and static typing (4)
- Object-oriented (4)
- Used by Twitter (4)
- Functional programming (3)
- Spark (2)
- Beautiful code (2)
- Safety (2)
- Growing community (2)
- DSL (1)
- Rich static type system and great concurrency support (1)
- Naturally enforces high code quality (1)
- Akka Streams (1)
- Akka (1)
- Reactive Streams (1)
- Easy embedded DSLs (1)
- Mill build tool (1)
- Freedom to choose the right tools for a job (0)
Pros of Apache Spark
- Open-source (60)
- Fast and flexible (48)
- One platform for every big data problem (8)
- Great for distributed SQL-like applications (8)
- Easy to install and to use (6)
- Works well for most data science use cases (3)
- Interactive query (2)
- In-memory computation (2)
- Machine learning library, streaming in real time (2)
Cons of Scala
- Slow compilation time (11)
- Multiple ropes and styles to hang yourself with (7)
- Too few developers available (6)
- Complicated subtyping (4)
- My coworkers using Scala are biased against other stacks (2)
Cons of Apache Spark
- Speed (4)