Actus Digital to Demonstrate Intelligent Compliance and Unified Media Platform at IBC2019

“We have enhanced the Actus Digital intelligent media platform to include more functionality, additional AI options, and extended automation related to compliance, clips creation and export to social media, and content monitoring and analysis,” said Raphael Renous, CTO at Actus Digital. “At IBC2019, we will present how the new platform improves the daily workflow for media companies.”

A Compliance Platform That Goes Beyond Standard Requirements

Media companies today need a compliance solution that pushes the boundary of innovation, going beyond simple tasks such as logging, monitoring, and regulatory compliance. They expect the compliance solution to provide quality assurance tools, video analysis reports, advanced clip creation, OTT monitoring, and multiviewer capabilities, all from a single platform. Moreover, because the industry is changing rapidly, media companies need flexible solutions that support a range of deployment models. At IBC2019, Actus Digital will demonstrate the expansive functionality of the Actus media monitoring platform, which offers an intelligent approach to monitoring and supports multiple deployment environments, including on-premises, virtualization, the cloud, and hybrid combinations.

[Image: Actus Digital media platform]

Clip Factory Makes Content Repurposing Fast and Easy

Publishing content as quickly as possible to new media platforms, integrating with social media and OTT platforms, and leveraging the benefits of AI are increasingly important requirements for media companies today. The content repurposing workflow has to be simple and fast.

Actus Digital’s Clip Factory clip-creation workflow offers a simple web browser interface for creating and exporting clips, enabling content repurposing from anywhere, at any time, and on any device. Clip Factory ensures that all content is exported quickly, whether the process is manual or automatic. With Clip Factory, media companies no longer require a special workstation or professional editors.

THE MEDIAPRO STUDIO hires Juliana Barrera, the producer of “The White Slave” and “Always a Witch”, as Head of Content in Colombia

Juliana Barrera, the well-known Colombian producer and showrunner, has joined THE MEDIAPRO STUDIO team. Barrera, one of the most respected professionals in the Latin American audiovisual scene, will be Head of Content at MEDIAPRO’s office in Colombia. By hiring Barrera, THE MEDIAPRO STUDIO once again confirms its commitment to high-quality international content.

With a career spanning two decades in which she has specialised in creating, developing and producing content for television, Juliana Barrera has amassed extensive experience in writing and producing fiction series and entertainment formats, and in directing content teams.

Since 2011 she has been linked to Caracol Televisión as a Producer, Head of Content and Creative Director, a role in which she generated new business opportunities. Highlights of her long career include the executive production of the first season of “La Voz Colombia” and the development and production of fiction series such as “Always a Witch”, the first Netflix original series in Colombia; “The Secret Law”, which brought to the screen for the first time the true story of a special police unit made up solely of women; “The White Slave”, an ambitious historical series about slavery in Colombia that has been sold to almost 150 countries; “Perfect Lies”, an adaptation for Colombia of the U.S. series “Nip/Tuck”; and “Lynch”, the first fiction written by Barrera and also the first premium content in Colombia to be acquired by a pay channel.

In the area of entertainment, she has been Head of Content of several editions of the reality show “The Challenge” and of “Big Brother”, “The Farm”, “The Apprentice” and “The Hairdresser’s”, among other formats.

She has received a dozen awards for her work, including the TVyNovelas Prize for Series of the Year for “Perfect Lies” and the India Catalina Audiovisual Industry Prize (awarded at the Cartagena de Indias International Film Festival) for Best Reality Show-Contest for “La Voz Colombia”.

Laura Fernández Espeso, TV Director at THE MEDIAPRO STUDIO, has commented: “THE MEDIAPRO STUDIO has become a benchmark in the international market for the production of high quality content and, in this sense, Juliana’s becoming part of the team will help us to continue implementing our strategy. We feel very fortunate to have a professional with her talent and experience”.

Juliana Barrera has added: “Joining THE MEDIAPRO STUDIO as Head of Content in Colombia at a time when the industry is so eager for content is a very important step in my career. Now the challenge is to attract the best talent, offer the best stories, and turn THE MEDIAPRO STUDIO Colombia into a benchmark of creativity and production in Latin America that appeals to all types of channels and platforms”.

Juliana Barrera joins the extensive list of creators at THE MEDIAPRO STUDIO, which includes Diego San José, Marc Vigil, Ran Tellem, Iván Escobar, Javier Olivares, Fernando González Molina, Daniel Burman and Lorenzo Silva, with whom THE MEDIAPRO STUDIO has recently signed a collaboration agreement to adapt fiction content inspired by his novels.

Videon Products at IBC2019

IBC2019 Highlight: EdgeCaster Ultra-Low Latency Encoder

Videon’s EdgeCaster product is at the forefront of edge computing. The EdgeCaster delivers 4K HEVC- and H.264-encoded signals as part of an HLS, DASH, and CMAF workflow while simultaneously creating six different encoded output versions. By performing at the edge functions that are traditionally carried out in the cloud, the EdgeCaster enables faster-than-broadcast latency while also reducing the cost of streaming.

The key to the EdgeCaster’s handling of time-intensive, expensive cloud functions such as transcoding, format repackaging, multiple-bit-rate creation, and other computationally heavy processes is Videon’s intellectual property developed using Qualcomm® technology. With the processing power of the Snapdragon™ chip, the EdgeCaster streams at resolutions up to 4K at 30 fps using either H.264 or H.265/HEVC compression. The EdgeCaster can also output up to six streams simultaneously, in multiple bit rates and resolutions, using chunked HLS or DASH, while still offering the flexibility of Power over Ethernet (PoE).
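
To make the multi-rendition output concrete, here is a minimal sketch, in Python, of how such an ABR ladder is advertised to players through an HLS master playlist. The bitrates, resolutions and file names are invented for illustration and are not Videon’s published output ladder.

```python
# Sketch: build an HLS master playlist describing six simultaneous
# renditions of one live feed. The ladder below is hypothetical,
# not the EdgeCaster's actual output configuration.

RENDITIONS = [
    # (bandwidth in bits/s, resolution, variant playlist URI)
    (12_000_000, "3840x2160", "uhd.m3u8"),
    (6_000_000,  "1920x1080", "1080p.m3u8"),
    (3_000_000,  "1280x720",  "720p.m3u8"),
    (1_500_000,  "960x540",   "540p.m3u8"),
    (800_000,    "640x360",   "360p.m3u8"),
    (400_000,    "416x234",   "234p.m3u8"),
]

def master_playlist(renditions):
    """Return the text of an HLS master playlist, one variant per rendition."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:7"]
    for bandwidth, resolution, uri in renditions:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(uri)
    return "\n".join(lines) + "\n"

print(master_playlist(RENDITIONS))
```

A player fetches this playlist once and then switches between the variant streams as network conditions change, which is how a single edge encoder can serve phones and 4K screens from the same feed.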

[Image: Videon EdgeCaster HD]

Videon is an AWS Elemental Technology Partner, and the EdgeCaster interfaces directly to MediaStore and CloudFront, enabling less than three seconds of latency, at scale, over public internet connections using standard HTTP-compatible applications for playback. As a result, EdgeCaster users can easily launch and scale up services, including live, interactive services and other delay-sensitive applications.

In addition to HTTP-based streaming, the EdgeCaster can simultaneously support two additional low-latency formats. Videon’s support for SRT on both encode and decode allows users to stream from building to building or across campuses while achieving latency of less than half a second. The EdgeCaster’s robust feature set ensures maximum flexibility with three low-latency options: worldwide delivery in three seconds, one-second interactive streaming, and sub-second point-to-point streaming.

“By providing ultra-low latency and enabling users to support multiple workflows simultaneously, our EdgeCaster edge compute encoder is transforming the manner in which live events, esports, enterprise connectivity, betting, auctions, and other applications — where video is distributed to audiences across various platforms — achieve social video engagement,” said Todd Erdley, CEO of Videon. “Whether for local or global live streaming, Videon’s cost-effective EdgeCaster delivers ultra-low latency by using edge encoding to deliver HTTP CMAF formats with multiple bit rates that bypass the need for costly, time-intensive cloud operations, allowing users to take full advantage of current and future demand for streaming content.”

How mature is AI development throughout the broadcasting industry?

By Chem Assayag. Chem Assayag is VO’s Executive Vice President of Sales and Marketing. He has deep experience in digital television and content services; during his tenure at OpenTV, the worldwide leader in interactive television, he managed operations in Europe and the Middle East, growing revenues in the company’s largest business region.


IBC recently ran a feature looking at AI in the broadcast industry and came across real-world projects as diverse as using AI to automatically generate metadata; generate intermediate frames, and thus super slow motion, from regular camera feeds; analyze audience behaviour patterns; provide speech-to-text; refine complex workflows; and much more.

Much of this is to do with automation in the value chain, and it is mostly automation that is currently driving uptake in the broadcast industry. Broadcast is a very process-oriented business, and many of its processes map well onto automation: think of quality control, monitoring channels for rights purposes, conforming, closed caption generation, ingest… the list is a long one. The use of technology to increase speed and efficiency while at the same time cutting costs is an attractive proposition.

Where is AI in Broadcast in 2019?

First off, it has to be said that, at the time of writing, there is a lot of hyperbole regarding AI technologies and their impact on the value chain. A lot of what is being talked about is either projection or prototype, or concerns projects with a so-far rather limited scope. The hype machine is in full flow as companies looking to leverage AI technologies seek to raise funding and maximize interest.

This is not helped by media coverage that buys fully into the more fanciful elements of the AI story. Sometimes these stories showcase legitimate concerns as well as rattling the funding tip jar, such as the one headlined ‘New AI fake text generator may be too dangerous to release, say creators’, which highlighted the problems of text-based deep fakes. On the whole, though, there is a depressing willingness to buy into the trope of the inanimate becoming animate and leading to disaster, a cultural artefact you can trace from Talmudic stories of golems, through Frankenstein, to the Terminator movies and SkyNet; a point the same newspaper adroitly made a few months earlier in a piece titled ‘“The discourse is unhinged”: how the media gets AI alarmingly wrong’.

Indeed, in its introduction to a gated report examining the hype cycle in relation to AI, the analyst firm Gartner wrote: “AI is almost a definition of hype. Yet, it is still early: New ideas will surface and some current ideas will not live up to expectations.” Away from the alarmist mainstream headlines, the AI-based technologies Gartner sees as currently sliding into the Trough of Disillusionment (see here for an explanation of the Hype Cycle phases) include such high-profile cases as Computer Vision, Autonomous Vehicles, Commercial Drones, and Augmented Reality.

So, where are we in broadcast? Gartner has also produced a useful AI Maturity Model (see below) where companies and industries can measure their progress along a line that illustrates the growing deployment and impact of the technology.

[Image: Gartner AI Maturity Model diagram]

As a whole, at this stage early in 2019, the broadcast industry is probably strung out somewhere between the start of Level 2 and the early stages of Level 3. Level 2, ‘Active’, is defined as AI appearing in proofs of concept and pilot projects, with meetings about AI focusing on knowledge sharing and the beginnings of standardization.

The Level 3 Operational stage sees at least one AI project having moved to production and best practices, while experts and technology are increasingly accessible to the enterprise. AI also has an executive sponsor within the organisation and a dedicated budget.

Things are moving swiftly, though. IBC2017 was, after all, only eighteen months ago. One of the accelerants for the introduction of the technology is that the broadcast industry has been moving into the cloud at the same time. Companies no longer need to invest in their own infrastructure, hardware and software to implement AI in the value chain; they can outsource it via the cloud.

This is becoming easier to do than ever. As an illustration of what is available, AWS has a Free Tier that it bills as offering ‘Free offers and services you need to build, deploy, and run machine learning applications in the cloud’. The cloud-based machine learning services organizations and individuals can hook up to at no cost include the following (a minimal usage sketch follows the list):

  • Text to speech: 5 million characters per month
  • Speech to text: 60 minutes per month
  • Image recognition: Analyze 5,000 images and store 1,000 face metadata records per month
  • Natural language processing: 5 million characters per month
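
As a rough illustration of how little code such a hookup takes, here is a minimal Python sketch that calls Amazon Polly, the text-to-speech service behind the first bullet, through the boto3 SDK. The region, voice and text are placeholder choices, and AWS credentials are assumed to be configured.

```python
# Sketch: text-to-speech with Amazon Polly via boto3.
# Region, voice and text are examples; AWS credentials assumed configured.
import boto3

polly = boto3.client("polly", region_name="eu-west-1")

response = polly.synthesize_speech(
    Text="IBC2019 opens in Amsterdam on September 13.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's built-in voices
)

# The synthesized audio comes back as a streaming body.
with open("announcement.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```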

Even the fully costed versions can make a compelling argument. The Amazon Transcribe API is billed monthly at a rate of $0.0004 per second, for instance, meaning a transcript of a 60-minute show would cost $1.44. And, of course, though it is enormous and has a global reach, AWS is only one of a growing number of cloud-based companies offering AIaaS.
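
For instance, a minimal boto3 sketch of submitting such a transcription job, with the job name and S3 location invented for illustration, looks like this; the cost arithmetic from the paragraph above is repeated in the comments.

```python
# Sketch: submit a job to Amazon Transcribe via boto3.
# Job name and S3 URI are placeholders; credentials assumed configured.
import boto3

transcribe = boto3.client("transcribe", region_name="eu-west-1")

transcribe.start_transcription_job(
    TranscriptionJobName="show-episode-42",            # hypothetical name
    LanguageCode="en-US",
    MediaFormat="mp4",
    Media={"MediaFileUri": "s3://my-bucket/episode-42.mp4"},
)

# At $0.0004 per second of audio, a 60-minute show costs:
print(f"${60 * 60 * 0.0004:.2f}")  # -> $1.44
```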

AI is Augmentation over Automation

One key point to make about AI in 2019 is that the industry is still largely working out the use cases. Automation is only the start of it; looking only at the places in a broadcast workflow where AI can automate processes is to underestimate the potential of the technology. AI holds out the promise of augmenting human actions, of being able to analyze information and make predictions based on those results faster than humans can.

That means when examining loci in a workflow where the technology can help, there are a few key considerations:

  • Does expert knowledge add value?
  • Is there a large amount of data to be processed?
  • Is the organization looking to affect an outcome included in that data?

If the answer to all three questions is yes, then that is a point in the business that can be further augmented by AI.

AI in Broadcasting: All About Context

As we’ve said, there are many use cases and projects involving AI currently underway across the industry, but we’ll end by highlighting one example of what can be achieved in the field: the UK’s Channel 4 and its trials of Contextual Moments. This is an AI-driven technology that uses image recognition and natural language processing to analyze scenes in pre-recorded content, producing a list of positive moments that can then be matched to a brand.

Low-scoring moments are discarded, while all candidate moments are checked by humans to ensure brand safety. To use C4’s example, where a baking brand might previously have contextually advertised around a show such as ‘The Great British Bake Off’, it can now be presented with a list of programs where baking features in a positive light, from dramas to reality shows.
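
Channel 4 has not published its implementation, but the pipeline as described is easy to sketch in outline. The following Python is purely illustrative: the moment data, the positivity scores and the threshold stand in for the output of real image-recognition and NLP models.

```python
# Purely illustrative sketch of a "contextual moments" filter.
# Positivity scores and topics stand in for real model output.
from dataclasses import dataclass

@dataclass
class Moment:
    programme: str
    timestamp: float      # seconds into the programme
    topics: set
    positivity: float     # 0.0 (negative) .. 1.0 (positive)

POSITIVITY_THRESHOLD = 0.7  # invented cut-off

def candidate_moments(moments, brand_topics):
    """Yield positive, brand-relevant moments for human brand-safety review."""
    for m in moments:
        # Low-scoring or off-topic moments are simply discarded.
        if m.positivity >= POSITIVITY_THRESHOLD and m.topics & brand_topics:
            yield m

moments = [
    Moment("Drama X", 1214.0, {"baking", "kitchen"}, 0.9),
    Moment("Drama X", 2050.0, {"argument"}, 0.2),
    Moment("Reality Y", 310.0, {"baking"}, 0.8),
]

for m in candidate_moments(moments, {"baking"}):
    print(f"send to human review: {m.programme} @ {m.timestamp}s")
```

Everything that survives the filter still goes to a human checker, matching the brand-safety step described above.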

Channel 4’s initial testing with 2,000 people showed that this AI-driven contextual version of targeted advertising boosted brand awareness and doubled ad recall to 64%. It will be interesting to see how those results are mirrored in real-world data.

As yet, the Level 3 ideal of AI having a dedicated budget within an organization is largely fanciful; we are at too early a stage for ROI data to be reliably collated. The tendency for it to be applied mainly for automation purposes limits its impact, even though cost reductions can be impressive. More complex projects and thus more strategic impacts are on the way, though.

Gartner’s Level 4 of AI implementation sees all new digital projects at least considering AI, new products and services shipping with embedded AI, and AI-powered applications interacting productively (and, presumably, with a degree of autonomy) within the broadcast organization. Given the speed of progress so far, you wouldn’t want to bet against some of the companies at the forefront of AI development starting to push into that territory towards the end of the year.

Read the full article on the Viaccess-Orca website


MultiDyne Extends Signal Processing Reach with MD9200 Compression Series

MultiDyne Fiber Optic Solutions will bring its first innovations in video compression to international audiences for the first time at IBC2019, where the fiber transport leader will demonstrate its now-shipping MD9200 range of encoders and decoders. MultiDyne exhibits at Stand 11.D40 from September 13-17 at RAI Amsterdam.

Ideal for linear broadcast/production and streaming media workflows (notably OTT and IPTV systems), the MD9200 series extends MultiDyne’s reach deeper into the signal processing chain. This strengthens MultiDyne’s value proposition as a core supplier of multi-purpose solutions from the content acquisition stage through to the studio/headend delivery point.

“The MD9200 series brings MultiDyne firmly into the compression world, which gives our core customers compelling end-to-end options to encode, compress, transport and deliver live HD, 4K and 8K content when paired with our fiber transport systems,” said Frank Jachetta, President, MultiDyne. “We also see many opportunities to expand our customer base with OTT/IPTV and ISPs that require high-end compression for IP-based contribution and distribution applications, and even service digital signage networks and CDNs that exist outside our core broadcast and production markets.”

The MD9200 Family

[Image: MultiDyne MD9200 banner]

The initial MD9200 series consists of encoders and decoders in stand-alone desktop configurations and in the popular openGear card-module form factor. All MD9200 series products deliver exceptional bandwidth efficiency while maintaining signal integrity and visual quality. The openGear encoding and decoding modules unlock the potential to reach an even broader customer base thanks to their compatibility with the entire openGear community of infrastructure products.

Both encoder models enable highly secure streaming from one to many destinations, supporting virtually any mix of streaming protocols. Each independent encoder accommodates unique frame-size, frame-rate, video/audio codec, and data configurations, and can independently encode separate HD and SD streams at high and low bit rates from a single SDI input.

Both decoders are optimized for 1080p AVC (H.264) and 1080i MPEG-2 decompression, with optional support for very high bit-rate 2160p HEVC (H.265) UHD streams. When using the openGear decoder (MD9200-DEC-OG), customers can house up to 20 decoding modules in a single 2RU openGear chassis for high-density signal decompression to SDI and HDMI. The standalone MD9200-DEC delivers the same operational benefits and visual quality for point-to-point streaming.

The MD9200 series will provide exceptional value in IP-based contribution applications for news, sports and studio links, according to Jesse Foster, MultiDyne’s Director of Product Development and Western Region Sales. Foster adds that MultiDyne’s engineering team has emphasized network security in the product design, giving systems integrators and IT professionals peace of mind that their networks and content are safe from outside intrusion. That security, plus an attractive price point and broad feature set, provides outstanding value and flexibility for any customer.

“The MD9200 line-up brings together a very broad array of streaming and protocol transmission capabilities while addressing visual quality, signal latency and total cost of ownership,” said Foster. “This series represents an important new direction for MultiDyne that will continue to grow yet interoperate with our core fiber transport systems. Furthermore, these compact, lightweight, and low-power products will solve problems for broadcast engineers, IT technicians and other users, especially as more broadcasters and service providers adopt HEVC compression.”

Qligent Brings Multiplatform TV Monitoring and Analysis Vision to TecnoTelevisión México

Qligent, a specialist in cloud-based, enterprise-level content monitoring and analysis, is ramping up its visibility in Latin America with its first regional, standalone exhibition at the 2019 TecnoTelevisión México Conference. The industry’s first vendor to offer a truly cloud- and service-based monitoring and analysis platform for multiplatform television, Qligent will demonstrate its flagship Vision Compliance/QoS/QoE media quality assurance platform along with its groundbreaking Vision Analytics solution. Qligent exhibits at Stand 816 from August 14-16 at the World Trade Center in Mexico City, Mexico.

While this occasion marks the first year this trade show is taking place in Mexico City, the event is expected to attract a broad swath of media creation and delivery professionals from the broadcast, cable TV, digital post production, and new media markets. Qligent will share its booth with GAD Electronics of Argentina, its manufacturer’s representative and regional partner serving the Central and South American markets. The two companies will remain tightly integrated across all sales and marketing throughout Latin and South America as they work to bring Qligent’s message and value proposition to customers throughout the region.

Qligent CTO Ted Korte and GAD Electronics Chief Executive Officer Alejandro Pasika will be among the Conference’s 10 featured speakers, giving back-to-back presentations on August 15 at Stand 819:

  • Korte will address “How to Use Big Data and AI to Prevent Abandonment,” a subject that’s especially pertinent to LATAM cable TV and OTT online video service providers, from 2:45-3:30pm.
  • Immediately following Korte’s presentation, Pasika will shed light on “Multi-screen Monitoring and Measurement of 4K and IP Instruments” from 3:30-4:15pm.

Monitoring and Analysis Innovations

Qligent’s Vision platform solves the problem of monitoring today’s complex, rapidly evolving media distribution landscape, and the sheer volume of premium content that media companies deliver. Vision provides broadcasters, MVPDs and OTT service providers with a centralized platform to review, verify, and ensure a high Quality of Service (QoS) and regulatory compliance. Leveraging a global, broadly dispersed network of virtual and/or IoT probes, Vision provides comprehensive, holistic monitoring and analysis of four quality management pillars—compliance errors, objective errors (QoS), subjective Quality of Experience (QoE) errors, and programmatic errors—from the origin facility out to the last mile.

[Image: Qligent Vision Analytics and Big Data]

“Vision gives broadcasters and service operators a vast, flexible toolset to satisfy their own standards of compliance, and quickly address problems that could negatively impact the viewer experience and hurt the bottom line,” said Korte. “Using Vision, users can verify anything content providers are trying to deliver to their audiences, and do so in multiple ways across any number of locations.”

Scalable from simple to complex configurations, the Vision platform is rich with compliance tools and modules, such as the As-Run option that enables operators to visually correlate compliance events and anomalies against an imported program schedule.

Deep analysis of the full user experience can be achieved with Qligent’s award-winning Vision Analytics. Qligent® Vision Analytics™ is an open platform that leverages IoT, Big Data, ML, and AI to address three main problems for online delivery: user engagement, silent sufferers, and customer churn.

Vision Analytics is a strong addition to any data-driven strategy, scaling gradually to meet new needs and fill gaps. For example, users can start by monitoring the QoE of high-value media as it reaches viewers via multiple content distribution networks, including those owned and controlled by third parties. Vision Analytics can then be scaled to predict and prevent customer churn based on a host of contributing factors. A rich set of KPIs and KQIs arms system administrators with real-time performance and predictive insights, enabling swift corrective and preventative action.

“The value proposition for MVPDs and other media entertainment professionals is that our solution can identify and even predict technical problems—in an automated way—before they impact the end-user experience,” Korte added. “With the speedy relay of this comprehensive data to a central dashboard, system administrators now have the means to pinpoint and solve QoS/QoE issues before their subscribers become disgruntled and abandon their service. That’s the power of cloud-based quality assurance conducted across every distribution network out to viewers’ homes and mobile devices.”

More info

ENCO to Highlight enTranslate Automated Live Translation System with Machine Learning at IBC2019

Two dozen languages are officially spoken in the European Union, making the upcoming IBC exhibition in Amsterdam an ideal forum for the first showing outside North America of ENCO’s new enTranslate automated live translation system. Making broadcast and AV content accessible to non-native speakers for a fraction of the cost of traditional translation services, the award-winning solution will make its European debut at Stand 8.A59 at the event from September 13 to 17.

[Image: ENCO enTranslate]

enTranslate offers broadcasters, media producers and live presenters an easy and affordable solution to automatically translate television programming, corporate meetings, government sessions, lectures, sermons, training materials and other content. TV broadcasters can offer subtitles and secondary or tertiary closed captions in alternative languages to expand their audiences, while government institutions, universities, houses of worship and corporations can embed translated captions in short and long-form VOD content or show live, open-captioned subtitles on local displays to assist in-person attendees.

enTranslate combines the highly accurate, low-latency speech-to-text engine from ENCO’s patented enCaption open and closed captioning solution with advanced translation technology powered by Veritone, enabling automated, near-real-time translation of live or pre-recorded content for multi-language captioning, subtitling and more. Blending artificial intelligence with sophisticated linguistics modelling, enTranslate uses a Neural Machine Translation methodology to provide high-quality translations based on the context surrounding the current words and phrases. enTranslate supports 46 languages, including English, French and Spanish.
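
ENCO has not published enTranslate’s interfaces, and the sketch below does not use its Veritone-powered engine; it is a generic illustration of the caption-translation stage of such a pipeline, using Amazon Translate via boto3 as a stand-in neural machine translation service. The caption text is an invented example.

```python
# Generic sketch of translating one caption cue with a cloud NMT service.
# Amazon Translate stands in here; this is NOT ENCO's Veritone-based engine.
import boto3

translate = boto3.client("translate", region_name="eu-west-1")

caption = "Welcome to tonight's city council session."
result = translate.translate_text(
    Text=caption,
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])  # Spanish subtitle for the same cue
```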

“By making video content understandable to viewers who don’t speak its original language, enTranslate continues our long-standing mission of helping broadcasters and producers make their media accessible to a wider range of people,” said Ken Frommert, President of ENCO. “Traditional translation services can be prohibitively expensive and, particularly for live content, inconvenient, requiring advance scheduling of human translators. enTranslate makes translating live and pre-recorded content both practical and affordable for organizations large or small, and we are looking forward to showcasing it to IBC attendees.”

Deployable on-premises or in the cloud, enTranslate’s flexible architecture supports a wide range of live baseband and network-based audio and video inputs including analog, AES, HDMI, SDI, AoIP, NDI® and MADI. Translated results can be output in standard caption file formats; embedded as closed captions using an external or optional integrated encoder; or keyed as open captions over an SDI, HDMI or NDI® output. For offline, file-based applications, audio or video clips can be easily ingested into the system and captioned with translations in any supported language, enabling users to quickly and affordably process large libraries of previously recorded content.

More info