How deep learning could revolutionize broadcasting



About the author

Max Kalmykov is vice president for media and entertainment at DataArt.

Broadcasters and movie studios are beginning to explore the tremendous potential of modern technology to bring a new generation of movie entertainment to our televisions and cinemas. Artificial intelligence, machine learning and deep learning are the buzzwords inspiring video managers with the promise of revolutionary new video creation and editing capabilities.

In particular, deep learning is the new frontier for the video industry, allowing video professionals to automate things that would once have taken weeks of work, as well as some things that would not have been possible at all. How does deep learning differ from other machine learning approaches? What are its practical applications for broadcast and filmed entertainment? And what are the technological and business implications?

Artificial intelligence, machine learning and deep learning

Artificial intelligence is any attempt to make a computer appear as if it has intelligence. The computer may simply be told exactly what to do in a given situation, in which case it has learned nothing. Machine learning, by contrast, aims to teach the computer how to perform certain tasks. There are a variety of methods, most of which rely on the computer repeatedly adjusting its parameters by trial and error. One of the more sophisticated approaches is to mimic the neurons in a biological brain. When we make these artificial brains, or neural networks, deeper and more complex, we have deep learning.

Deep learning allows a computer to take something complex as input, such as all the pixels in a video frame, and produce something equally complex as output, such as all the pixels of a new, altered frame. For example, frames with unwanted grain can be fed in as input and the output compared against clean versions of the same frames. By trial and error, the network learns to remove the grain from the input. As more and more images pass through, it learns to do the same for images it has never been shown before.
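To make the idea concrete, here is a minimal Python (PyTorch) sketch of that kind of training loop. It is only an illustration of the principle: the tiny network, the random tensors standing in for clean frames and the synthetic "grain" are all assumptions, not a production denoising pipeline.

import torch
import torch.nn as nn

# Tiny image-to-image network: noisy pixels in, (hopefully) clean pixels out.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    clean = torch.rand(8, 3, 64, 64)                # stand-in for clean frames
    noisy = clean + 0.1 * torch.randn_like(clean)   # add synthetic "grain"
    denoised = model(noisy)
    loss = loss_fn(denoised, clean)                 # compare the output to the clean frames
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

With real footage in place of the random tensors, the same loop is what gradually teaches the network to strip grain from frames on its own.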

Perhaps the first impressive use of deep learning came when Google's DeepMind trained a neural network to play Go, the famously complex board game. The game is far too complex for hand-written instructions to produce a viable opponent, and a shallow neural network would never have been enough. Deep learning made it possible.

Deep learning is also used for a variety of other tasks. It is used to match generated speech to human speech, so that text-to-speech programs sound more natural. In a similar vein, translation companies use it to teach computers to translate from one language to another. The self-driving cars that several companies are working on are driven by deep learning. Marketing departments use it to learn customers' habits, predict how a particular customer will behave and identify which strategies they will respond to best. Digital assistants use it to better understand the requests we make of them.

Deep Learning for TV and Filmed Entertainment

There are many ways to apply deep learning techniques in video production, editing and cataloging. The technology is not limited to automating repetitive tasks, however. It can also enhance the creative process, improve video delivery, and help preserve the extensive video archives that many studios hold.

Video creation and editing

Warner Bros. reportedly spent $25 million on reshoots for "Justice League", and part of that money went toward digitally removing a mustache that star Henry Cavill had grown and could not shave off due to an overlapping engagement. It's not just "Justice League": the post-production phase of a movie is time-consuming and expensive. Deep learning is set to change the game for exactly these kinds of tasks.

With easy-to-use solutions like Flo, you can use deep learning to automatically create a video simply by describing what you want it to be. The software finds the relevant clips in your library and automatically edits them together.

Google has a neural network that can automatically separate the foreground and background of a video. What used to require a green screen can now be done without special equipment.

Deepfakes have been making quite a few headlines lately: a person's face is superimposed onto another person's body in video, and "deep portraits" apply motion to still images such as the Mona Lisa. The potential uses of this technology for special effects are enormous.

Take, for example, the mustache problem at Warner Bros., which landed Henry Cavill in a controversy with fans. Cavill had grown a mustache for Mission: Impossible – Fallout while reshoots for Justice League were taking place. He needed the mustache for Fallout but had to be clean-shaven as Superman, and because he kept it, the Justice League post-production team had to digitally remove it in every scene he reshot.

Unfortunately, the results were noticed by fans and caused a stir. If hobbyists at home can use deep learning tools to put Nicolas Cage into movies he never appeared in, one can only guess how much time and money Warner Bros. could have saved had Henry Cavill simply been replaced with older footage of him.

Video restoration

According to the UCLA Film & Television Archive, almost half of all films produced before 1950 have been lost. Worse, 90% of classic movie prints are in poor condition. Restoring these films is slow, tedious and expensive. This is an area where deep learning will make a big difference.

The colorization of black-and-white footage has always been laborious. A movie consists of thousands of frames, and coloring them takes a long time. Even with advanced tools, the process can only be automated so far. Thanks to Nvidia, deep learning can now significantly speed up the process, with tools that require an artist to color only a single frame of a scene. From there, the deep learning network does the rest automatically.

Another long-standing problem is missing or corrupted frames in a video. You cannot re-record something that happened years ago.

Restoring this kind of footage used to mean painstakingly recreating the missing frames by hand. Google's deep learning networks are set to change that: the company has developed a technique that realistically reconstructs part of a scene based on the start and end frames.
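As a toy illustration of filling in missing frames from a start and an end frame, the Python sketch below simply blends the two linearly with NumPy. Google's actual approach uses a learned network to synthesize plausible in-between motion; this naive baseline, and the random arrays standing in for decoded frames, are purely illustrative assumptions.

import numpy as np

def interpolate_frames(start, end, n_missing):
    # Return n_missing frames that fade linearly from start to end.
    frames = []
    for i in range(1, n_missing + 1):
        t = i / (n_missing + 1)               # fractional position between the two known frames
        frames.append((1 - t) * start + t * end)
    return frames

start_frame = np.random.rand(64, 64, 3)   # stand-in for the last surviving frame
end_frame = np.random.rand(64, 64, 3)     # stand-in for the next surviving frame
missing = interpolate_frames(start_frame, end_frame, n_missing=3)
print(len(missing), missing[0].shape)      # 3 reconstructed frames of the same size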

Face / object recognition

By recognizing the faces of everyone in a video, deep learning lets you quickly classify a video collection. For example, you can search for clips or movies featuring a specific performer. Alternatively, you can use the technology to count the exact screen time of each actor in a video. Sky News recently used facial recognition to identify famous faces at the royal wedding.
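A rough Python sketch of how such cataloging could work is shown below. The recognition step itself is assumed: in practice the face embeddings would come from a pretrained face-recognition model, and here random vectors stand in for them. Clips are then tagged by comparing embeddings with cosine similarity.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tag_clip(face_embeddings, known_actors, threshold=0.8):
    # Return the names of known actors whose reference embedding matches
    # any face detected in the clip.
    tags = set()
    for face in face_embeddings:
        for name, reference in known_actors.items():
            if cosine_similarity(face, reference) >= threshold:
                tags.add(name)
    return sorted(tags)

# Random vectors stand in for real face embeddings in this sketch.
known_actors = {"Actor A": np.random.rand(128), "Actor B": np.random.rand(128)}
clip_faces = [np.random.rand(128), np.random.rand(128)]
print(tag_clip(clip_faces, known_actors))

Counting screen time would work the same way, tallying how many frames each matched actor appears in.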

The technology is not limited to recognizing faces, however. Sports broadcasts rely on camera operators tracking the ball and identifying other key elements of the game, such as the goal. With object recognition, AI-based tools can automate much of the production of a sports program.

Video analysis

While Flo can identify what a scene is about and use that data to create a video on any subject, the same technology can be used to sort and classify videos, making it easier to find a particular piece of footage simply by searching for the people or actions that appear in it.

In the same way, unwanted content can be detected and removed from videos to make sure they are appropriate for a target audience. Similarly, the technology could match new videos against ones a viewer has already shown interest in and provide a personalized recommendation list.

Better streaming

As we move to 4K streaming and TV makers start to introduce 8K displays, streaming consumes more data than ever before. Anyone with a bad connection knows what a problem this can be. A glossy 4K display is of little use if your internet connection does not have the bandwidth to take full advantage of it. Thanks to neural networks that can recover high-resolution frames from low-resolution input, we might soon be sending low-resolution streams over our internet connections while still enjoying the high-definition sheen our displays are capable of.
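As a sketch of how such upscaling could look on the receiving end, the Python (PyTorch) snippet below doubles the resolution of a frame with a small sub-pixel convolution network. The layer sizes and the random tensor standing in for a decoded frame are illustrative assumptions; a real system would use a much larger model trained on real footage.

import torch
import torch.nn as nn

upscale = 2
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3 * upscale ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(upscale),   # rearranges channels into an image twice the size
)

low_res = torch.rand(1, 3, 270, 480)   # stand-in for one low-resolution frame
high_res = model(low_res)
print(high_res.shape)                  # torch.Size([1, 3, 540, 960])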

The future

The use of deep learning in film and broadcasting has only begun to nibble at the edges of what it will be used for in the future. I believe the future of the video industry is particularly bright. However, as with all new technologies, deep learning is not without its drawbacks. With deepfakes and the potential abuse of facial recognition, there are legitimate concerns about privacy and trust arising from the rapid development of this technology.

As with any new technology, the industry has a number of issues to address. The video industry and technology experts need to work together to develop standards for what tomorrow's new normal might look like. With the right approach, however, the benefits of this extended toolbox will be greater than currently imagined, and just as the advent of "talkies" and color film did before, deep learning will take film and television to a whole new level.
