Wowza vs Ziggeo: What are the differences?
Developers describe Wowza as *"A customizable live streaming platform"*. It offers a customizable live streaming platform to build, deploy and manage high-quality video, live and on-demand. It powers professional-grade streaming for any use case and any device. On the other hand, *Ziggeo* is detailed as *"An asynchronous interface to record and playback videos on websites"*. It is a cloud-based video technology SaaS (Software as a Service) company that provides asynchronous video APIs, mobile SDKs and tools to deliver enterprise-grade WebRTC capabilities.
Wowza and Ziggeo can be primarily classified as "Video Streaming" tools.
We want to make a live streaming platform demo to show off our video compression technology.
Simply put, we will stream content from 12 × 4K cameras → to one or more edge servers containing our compression software → to either Bitmovin or Wowza → to a media player.
What we would like to know is: is one of the above streaming engines better suited to multiple feeds (we will eventually be using more than 100 4K cameras for the actual streaming platform), 4K content streaming, low latency, and functions such as being able to zoom in on the 4K content?
If anyone has any insight into the above, we would be grateful for your advice. We are a Japanese company and were recommended the above two streaming engines, but we know nothing about them as they are literally "foreign" to us.
Thanks so much.
I've been working with Wowza Streaming Engine for more than 10 years, and it's likely very well suited to your application, particularly if you intend to host the streaming engine software yourself. But you should confirm that both the encoding format (e.g. H.264) and the transport protocol (e.g. RTMP) you intend to use are supported by Wowza.
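One quick way to verify a codec/protocol pair is to push a short test clip to a Wowza instance with ffmpeg. The sketch below is just that, a sketch: the host name, application name (`live`), stream name, and input file are placeholders, and it assumes ffmpeg is installed with libx264 support.

```python
import subprocess

# Placeholder values - replace with your Wowza host, application, and stream name.
WOWZA_RTMP_URL = "rtmp://your-wowza-host:1935/live/test-stream"

# Push a local test file to Wowza as H.264 over RTMP.
# -re    : read input at its native frame rate (simulates a live source)
# -c:v   : encode video as H.264 with libx264
# -f flv : RTMP expects an FLV container
cmd = [
    "ffmpeg",
    "-re",
    "-i", "sample-4k.mp4",   # hypothetical local test clip
    "-c:v", "libx264",
    "-preset", "veryfast",
    "-an",                   # drop audio for a pure video test
    "-f", "flv",
    WOWZA_RTMP_URL,
]
subprocess.run(cmd, check=True)
```

If the stream then plays in Wowza's built-in test players, that encoding format and transport protocol combination is confirmed.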
We would like to connect a number of video streams (about 25) from an Amazon S3 bucket containing video data to endpoints accessible to a Docker image, which, when run, will process the input video streams and emit some JSON statistics.
The 25 video streams should be synchronized. Could people share their experiences with a similar scenario and perhaps offer advice about which is better (Wowza or Amazon Kinesis Video Streams) for this kind of problem, or why they chose one technology over the other?
The video stream durations will be quite long (about 8 hours each × 25 camera sources). The 25 video streams will have no audio component. If you have worked on a similar problem, what was your experience with scaling, latency, resource requirements, config, etc.?
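For concreteness, one way to wire the S3 objects to the container is to hand it presigned URLs; the sketch below assumes a hypothetical bucket layout (`my-video-bucket`, `camera-NN/recording.mp4`) and uses boto3, not any particular streaming engine.

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "my-video-bucket"  # placeholder bucket name
KEYS = [f"camera-{i:02d}/recording.mp4" for i in range(25)]  # hypothetical layout

# Generate a time-limited HTTPS URL for each of the 25 video objects,
# so the Docker container can read them without holding AWS credentials.
urls = {
    key: s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=10 * 60 * 60,  # 10 hours, a bit longer than the 8-hour videos
    )
    for key in KEYS
}

# Hand the URL map to the container, e.g. as a JSON file it reads at startup.
with open("stream_urls.json", "w") as f:
    json.dump(urls, f, indent=2)
```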
I have different experience with processing video files that I'll describe below. It might be helpful, or at least make you think a bit differently about the problem.

What I did (part of it was a mistake): to increase the level of parallelism at the most time-consuming step, the video upload, I used a custom command-line tool written in Python that split the input videos into much smaller chunks (without losing their ordering; the file names were labeled with timestamps). It then uploaded the chunks to S3. That triggered a set of Lambdas that each pulled a chunked video and did the processing with ffmpeg. (The Lambdas were the mistake: at that time local Lambda storage was capped at 512 MB, so lots of chunks and lots of Lambdas had to be in place, and Lambdas are hell to debug.) Each Lambda then called Rekognition, and finally AWS Elemental MediaConvert was used to rebuild the full-length video.

Today I would use some sort of ECS deployment where processing is triggered by an S3 event, and scale the number of Fargate nodes based on the number of chunks/videos. Each processor then pulls its video (not a stream) to its local storage (a local EBS drive) and works on it there; see the chunking sketch below.

One thing I fail to understand: why are you trying to stream videos that are basically static files? Or is putting the files on S3 a current limitation (while your input videos are actually 'live' and streaming) that you're trying to remove?
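To illustrate the chunking step, here is a minimal sketch using ffmpeg's segment muxer. The input file name, chunk length, and output pattern are assumptions for illustration, not my original tool.

```python
import subprocess

# Split a long recording into ordered ~60-second chunks using ffmpeg's
# segment muxer. "-c copy" avoids re-encoding, so splitting is fast
# (chunks cut at keyframes, so durations are approximate).
# Output names like chunk_000.mp4, chunk_001.mp4 preserve ordering.
subprocess.run(
    [
        "ffmpeg",
        "-i", "recording.mp4",     # hypothetical 8-hour source file
        "-c", "copy",              # no re-encode, just remux
        "-f", "segment",
        "-segment_time", "60",     # target chunk length in seconds
        "-reset_timestamps", "1",  # each chunk starts at t=0
        "chunk_%03d.mp4",
    ],
    check=True,
)
```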