Overcast MAX #4: Max Recognition
AI image recognition is one of the new-fangled terms being bandied around. So, what does it mean and what can it do for you?
In today’s media landscape, there’s a vast volume of unstructured content that organisations are trying to manage. Let’s face it, we’ve all been there: spending hours searching through content for an outstanding soundbite or a stunning shot that we know we filmed but, alas, it eludes us.
[Drum roll, please] Ladies and gentlemen, welcome Max Recognition. Not only will it save you time, it will also rescue you from frustration.
Max Recognition uses the latest AI (artificial intelligence) and Machine Learning to identify people, objects, places, and events in videos and images. It’s like magic! Take a bow, Max!
Image recognition case studies
Television broadcasters, by their nature, create huge amounts of content. But they are subject to strict broadcasting guidelines in relation to sensitive content.
The traditional method of identifying such content was for people to painstakingly view hours and hours of tapes. But now, broadcasters can save hundreds if not thousands of hours a year by automating this moderation of inappropriate content using image recognition.
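To make the idea concrete, here is a deliberately simplified sketch of that kind of automated moderation: flag any frame whose AI-detected labels match a sensitive-content list above a confidence threshold. The label names, scores and threshold are invented for illustration and are not how any particular product works.

```python
# Toy content-moderation sketch: flag video frames whose detected labels
# match a sensitive-content list with high enough confidence.
# All labels, timestamps and confidences below are made up.

SENSITIVE_LABELS = {"violence", "weapon", "alcohol"}

def flag_frames(frame_labels, threshold=0.8):
    """frame_labels: list of (timestamp_sec, [(label, confidence), ...]).
    Returns the timestamps and labels that need human review."""
    flagged = []
    for timestamp, labels in frame_labels:
        hits = [lbl for lbl, conf in labels
                if lbl in SENSITIVE_LABELS and conf >= threshold]
        if hits:
            flagged.append((timestamp, hits))
    return flagged

detections = [
    (12.0, [("crowd", 0.95), ("weapon", 0.91)]),
    (34.5, [("beach", 0.88), ("alcohol", 0.55)]),  # below threshold, ignored
    (60.0, [("studio", 0.99)]),
]
print(flag_frames(detections))  # [(12.0, ['weapon'])]
```

Instead of viewing every tape, a human moderator only reviews the handful of flagged timestamps.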
An online marketplace is a website or mobile application that connects buyers and sellers. Its success depends on adeptly assessing the suitability or compatibility of those traders in order to match them up.
For example, an influencer marketplace could use object and scene detection to segment its influencers based on what media they publish alongside their social media posts.
Benefits of Max Recognition
The great news is that you don’t have to tag anything manually: Max Recognition analyses your content automatically, making it identifiable and searchable.
Multiple use cases
Max Recognition empowers you to find what you need quickly and accurately; for example, identifying a celebrity, providing health analysis, finding sensitive data, enriching metadata or accessing events.
Big Data accessibility
We’re all creating and amassing a lot more content than is possible to consume. As a result, we’re building up archives of material. Max Recognition allows you to log and manage your archive effectively.
Get in touch
Image recognition is one of nine Overcast MAX products that facilitate easy video collaboration. If you’d like to learn more about this fantastic suite of solutions, please contact Philippe on email@example.com or click here to get in touch.
IBC 2019 is only a few weeks away and we are delighted to be attending as part of the Amazon Partner Network. Yes, that’s right, we’ll be on AWS booth 5.C80 from 13–17 September in Amsterdam. Click here to read an AWS blog post about the technological advances in video creation and distribution they will be showcasing.
Artificial intelligence and machine learning are changing video workflows. Two of the key innovations that we’ll be highlighting with AWS are:
- Searching video clips for objects, colours, settings, events, words, sounds and faces;
- Automating transcription and translation to generate captions, subtitles, and audio in multiple languages for live and on-demand streams.
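As a rough illustration of the second point, here is a minimal sketch of turning timed transcript segments (the kind of output automated transcription produces) into SubRip (.srt) captions. The segment timings and text are invented for the example.

```python
# Convert timed transcript segments into SubRip (.srt) caption blocks.
# Segment data here is illustrative, not real transcription output.

def to_srt_timestamp(seconds):
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """segments: list of (start_sec, end_sec, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)

print(segments_to_srt([(0.0, 2.5, "Welcome to the show."),
                       (2.5, 5.0, "Today we talk about AI.")]))
```

Once the transcript exists as data, generating captions in another language is just one more automated step over the same segments.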
Why not find out from our CEO Philippe Brodeur at IBC how to make video collaboration effortless? Click here to schedule a meeting.
Are you involved in any aspect of sports production, content creation or distribution? If so, why not swing by the SVG Summit 2018 at the New York Hilton Hotel today and tomorrow (10th and 11th December). There’s so much to check out in terms of the latest in digital sports video technology such as augmented reality, machine learning, workflows, digital ad insertion and the impact of 5G.
All Hail The Cloud
As in previous years, the SVG Summit will feature a Cloud and Virtualisation Workshop. This seminar will be full of tips and tricks about cloud-based production and media asset management workflows which are being used by sports-media organisations. It will give insight into how the cloud, virtualisation and SaaS are transforming video production.
This transformation will be discussed by a panel that will include Scott Bounds of Microsoft and Rick Gilpin of Google Cloud. They’ll talk about how the entire video-production ecosystem is undergoing a makeover: from storage and editing to encoding and transcoding and beyond.
Cutting Edge Technologies
Another fascinating panel discussion will explore what’s new for digital sports content creators and distributors. 5G cellular connectivity is on the horizon; AI is putting its stamp on the clipping and enhancing of highlights and social media content; and UHD/HDR streaming has the potential to make a big difference to those live streaming to compatible Smart TVs. Experts will share what needs to happen next.
Every minute counts when it comes to getting post-produced content out across social media and other digital channels. Your team’s creative vision is challenged by tight timelines. Your business obligations might not match internal resources. So what do you do? A case study at the summit will show how production teams at MLB Network have adopted an agile workflow to get both internal and external stakeholders reviewing media faster from the first version to final delivery.
The summit will also feature keynotes by some of the biggest names in sports TV.
Brad Boim will talk about how, heading into the 2018 NFL season, NFL Media upgraded its media asset management (MAM) system to enhance metadata tagging, ingesting GSIS data, Next-Gen Stats and speech-to-text transcriptions. This extra metadata has enabled producers to run deeper searches within the database.
Charlie Ebersol, who has created a new American pro football league, will give a presentation on how video production and distribution technology will play a key role in it.
Fresh off a busy year when NBC Broadcasting/NBC Sports Group broadcast the Super Bowl, the Winter Olympic Games and the FIFA World Cup, its chairman Mark Lazarus will reflect on the successes and challenges of 2018 and examine issues and opportunities facing the live sports production industry.
If you’re interested in how to fill stadiums for matches, then don’t miss the panel discussion on how teams in the New York area attract fans to the stands game after game, using tech tools from larger-than-life video displays to mobile gaming.
AR, AI and Machine Learning
Other panel discussions will look at new ways to use augmented reality, camera tracking and data visualisation to give much more impact to a sports production, and at whether 1080p HDR sports production workflows are the sweet spot in terms of cost and performance.
There will also be a Sports Content Management Workshop exploring AI versus machine learning. It will tease out how major sports-media organisations are leveraging AI and machine learning for speech-to-text and translation; object and facial recognition; automated personalisation and content discovery.
The 2018 SVG Summit is also the go-to place for one-on-one conversations, case studies, tech showcases and insights into trends shaping the future of the sports-media business.
Overcast HQ CEO Philippe Brodeur will be at the SVG Summit 2018 and would be delighted to show you our video platform. If you would like to meet up with him and chat about all things video, please tweet him: @PhilippeBrodeur or email firstname.lastname@example.org
Can you really use Artificial Intelligence and Machine Learning to create video content faster?
To be honest, the more I hear and read about Artificial Intelligence and Machine Learning, the more I think people are trying to pull the wool over my eyes. I honestly don’t believe most people know what they are talking about or what AI and ML actually are. So I set myself the task of defining them, and of explaining why Overcast is investing so much in them.
One of our advisors, Hugh O’Byrne (a former senior head of Digital Sales at IBM), started by telling me that everyone has actually got it wrong. What most people are talking about is “augmented” intelligence: machines that can help (not replace) humans in the workplace. Machines might be able to learn and get better at doing manual tasks, but ultimately the work still needs a guiding human hand, so it is augmented.
Understanding AI and ML
So if we keep that in mind, here is how we define AI: “Artificial Intelligence” is the science of making computers good at doing tasks that were previously done by people.
It’s pretty broad and probably covers what so many people claim as their “AI solution”. Perhaps far more interesting is “Machine Learning”, a subset of AI that focuses on the ability of computers to use large sets of data to “learn” a task and improve their performance at it over time.
If you take these two statements for what they are, it’s actually machine learning that is far more interesting and far more powerful. AI has been talked about pretty much since the beginning of computers — but machine learning has only been possible since the introduction of large data sets that can lead to machines being “trained” or, in fact, “training themselves” according to a set of rules.
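The “learning from data” part is easy to illustrate. Below is a deliberately tiny, purely pedagogical example of a model that classifies clips from labelled examples: as the training set grows, it recognises more vocabulary and makes better predictions. Every label and category here is invented; real systems train statistical models on vastly larger datasets.

```python
# Toy "machine learning": count which labels appear with each category
# in training data, then score new clips by label overlap.
from collections import Counter

def train(examples):
    """examples: list of (labels, category). Returns per-category label counts."""
    model = {}
    for labels, category in examples:
        model.setdefault(category, Counter()).update(labels)
    return model

def predict(model, labels):
    # Score each category by how often its training labels match the input.
    scores = {cat: sum(counts[lbl] for lbl in labels)
              for cat, counts in model.items()}
    return max(scores, key=scores.get)

examples = [({"pitch", "ball", "crowd"}, "sport"),
            ({"studio", "desk", "anchor"}, "news")]
model = train(examples)
print(predict(model, {"crowd", "ball"}))  # sport

# Add more labelled data and the same code recognises new vocabulary:
examples.append(({"stadium", "scoreboard"}, "sport"))
model = train(examples)
print(predict(model, {"stadium"}))  # sport
```

The code never changes; only the data does, which is exactly why machine learning had to wait for large datasets to exist.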
With video we are at the early stages. Up until now, very little data existed about a video that was not entered manually. Sure, you could get technical details like length, file size and codec automatically, but anything descriptive about the story had to be typed in by hand. That’s the metadata.
It’s all about business needs
Recent advances in AI and machine learning have enabled all of this to change. We can now extract a considerable amount of “descriptive” data that in turn can be used for a number of different content solutions.
A short list of what data we can extract from a video includes:
- Voice to text
- Image recognition
- Scene recognition
- Facial recognition
- Sentiment recognition
This is just a sample of the information that can make a number of tasks easier. Caption creation, search, archiving, metadata enhancement and compliance are just some of the tasks that machines are getting better at doing without the need for human intervention.
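As a minimal, hypothetical sketch of how that extracted data pays off, here is a toy inverted index that maps AI-detected labels to the clips containing them, which is the basic mechanism that makes an archive searchable without manual tagging. All clip names and labels are invented.

```python
# Build an inverted index from detected labels to clips, then search it.
# Archive contents below are made-up examples.

def build_index(clip_labels):
    """clip_labels: dict of clip_id -> set of detected labels."""
    index = {}
    for clip_id, labels in clip_labels.items():
        for label in labels:
            index.setdefault(label, set()).add(clip_id)
    return index

def search(index, *terms):
    """Return the clips matching every search term."""
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

archive = {
    "clip_001": {"beach", "sunset", "crowd"},
    "clip_002": {"studio", "interview"},
    "clip_003": {"beach", "surfing"},
}
index = build_index(archive)
print(search(index, "beach"))             # {'clip_001', 'clip_003'}
print(search(index, "beach", "surfing"))  # {'clip_003'}
```

The expensive part is the recognition step that produces the labels; once they exist, finding that elusive shot is a lookup rather than hours of scrubbing through footage.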
Ultimately these advances in AI and Machine Learning should help to solve problems for creatives who waste much of their time on mundane tasks like searching for content, wondering if brand guidelines are being adhered to and even putting captions with the right punctuation on their content. You know, real business needs.
The result: yes, AI and Machine Learning can help you make video content faster. But machines take time to learn, so it is taking time for these solutions to become accurate enough to deploy at scale.