Digital Nirvana Adds Cloud Storage and Processing, AI to MonitorIQ Compliance Logger

MonitorIQ 6.1 is the first compliance logger to combine cloud services and cloud storage with on-premises compliance recording, and the first to run in on-premises, cloud, and hybrid environments.

“MonitorIQ 6.1 lets today’s broadcasters do more than they ever could before. For example, they can now use cloud-based, AI-driven microservices to add transcription to video assets automatically. They can also search stored video using face detection, logo recognition, and on-screen text detection,” said Russell Wise, senior vice president of sales and marketing for Digital Nirvana. “The integration of MonitorIQ and Media Services Portal opens up a whole new world of AI-enhanced possibilities for broadcasters to record, store, and repurpose content. Through intelligent and immediate logging and feedback of content quality and compliance, broadcasters are better-positioned to meet regulatory, compliance, and licensing requirements for closed captioning, decency, and advertising monitoring.”

The new capabilities in MonitorIQ 6.1 are:

  • A reliable, scalable, and expandable architecture that enables on-premises, cloud, or hybrid implementation. MonitorIQ was designed to run locally as a full turnkey solution on customer hardware, in a virtual environment, in the cloud, or in a hybrid configuration. MonitorIQ also provides an extensible list of open APIs for easy integration into broadcast workflows or third-party systems.
  • Seamless integration with AI-based cloud microservices that can be spun up quickly as needed and run on top of the compliance system (an illustrative API sketch follows this list). Microservices include:
    – Automated speech to text for closed-caption generation and transcription.
    – Closed-caption/teletext conformance and correction to meet streaming services’ standards (Hulu, Netflix, Amazon, etc.).
    – Powerful text and video-intelligence functions for detecting faces, logos, images, and on-screen text.
    – Detection of ads in competitive programming and identification of each ad’s category and advertiser.
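
Digital Nirvana does not publish the API surface in this announcement, so the following is only a minimal sketch of what calling such a transcription microservice through an open API might look like; the base URL, endpoints, payload fields, and job lifecycle are all hypothetical.

```python
# Minimal sketch of driving a cloud transcription microservice from a
# compliance-recording workflow. The endpoint, payload fields, and job
# lifecycle are hypothetical; MonitorIQ's actual open APIs may differ.
import time

import requests

API_BASE = "https://monitoriq.example.com/api/v1"  # hypothetical base URL
API_KEY = "REPLACE_WITH_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def request_transcription(asset_id: str) -> str:
    """Ask the speech-to-text microservice to caption a recorded asset."""
    resp = requests.post(
        f"{API_BASE}/assets/{asset_id}/transcriptions",
        headers=HEADERS,
        json={"language": "en-US", "output": "closed_captions"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]


def wait_for_job(job_id: str, poll_seconds: int = 10) -> dict:
    """Poll the hypothetical job resource until the microservice finishes."""
    while True:
        resp = requests.get(f"{API_BASE}/jobs/{job_id}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        job = resp.json()
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(poll_seconds)


if __name__ == "__main__":
    job = wait_for_job(request_transcription("asset-1234"))
    print(job["status"], job.get("caption_url"))
```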

MonitorIQ 6.1 provides tight integration with Digital Nirvana’s Media Services Portal, a suite of smart self-service tools, which empowers broadcasters to apply AI capabilities to broadcast workflows. These microservices include speech-to-text for automated closed-caption generation, quality assessment for postproduction closed captioning, and transcription of recorded assets. A video intelligence engine provides logo, face, and object recognition.

More info

TVU Networks’ AI Tech Helps Broadcasters Deliver Customized Content

Already incorporated into several TVU products, AI is at the center of TVU’s story-centric workflow, which automates content distribution and allows broadcasters to customize – and therefore better monetize – live and archived media assets.

Customers across the world already rely on TVU’s mature AI solutions to accelerate news production and distribution. CNN Newsource, for example, provides an IP-based worldwide network of affiliates with access to breaking news, trending stories, and archival video content. When ingested through the TVU MediaMind Appliance, its live feeds are enhanced with real-time metadata generated by AI-based voice and facial recognition technologies.

As a result, CNN affiliates can find the live content they need in seconds by searching person of interest, geographic location, or other details in the metadata. With easier access to more appropriate content, broadcasters enjoy improved productivity and significant time savings.
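
As an illustration of that kind of metadata-driven search, here is a minimal sketch in Python; the record layout and field names are assumptions for illustration and are not TVU MediaMind’s actual schema or API.

```python
# Toy model of searching live-feed metadata produced by AI-based voice and
# facial recognition. Field names and values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class FeedMetadata:
    feed_id: str
    location: str
    people: set = field(default_factory=set)    # names from facial recognition
    keywords: set = field(default_factory=set)  # terms from speech-to-text


def search_feeds(feeds, person=None, location=None, keyword=None):
    """Return feeds whose metadata matches every filter that was supplied."""
    results = []
    for feed in feeds:
        if person and person not in feed.people:
            continue
        if location and feed.location != location:
            continue
        if keyword and keyword not in feed.keywords:
            continue
        results.append(feed)
    return results


feeds = [
    FeedMetadata("feed-01", "Brussels", {"Jane Doe"}, {"summit", "trade"}),
    FeedMetadata("feed-02", "Atlanta", {"John Smith"}, {"weather", "storm"}),
]
print([f.feed_id for f in search_feeds(feeds, keyword="storm")])  # ['feed-02']
```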

Today’s consumers have a wide variety of platforms for consuming content and can choose to see a story on a smartphone, website, or TV. However, broadcast production is historically programmatic rather than story-centric. In the story-centric workflow, all media assets are shared, with content collected in a central location so that any production group within a station can produce a story the way it wants for its viewers. This mass content customization for an audience of one is driven by Media 4.0.

TVU MediaMind
With TVU MediaMind, live feeds are enhanced with real-time metadata generated by AI-based voice and facial recognition technologies.

TVU MediaMind can automate the content distribution process for specific platforms, from traditional broadcast to social media sites, for even more efficiency. The TVU CAS (Contribution Automation Solution), which streamlines the capture and automatic metadata tagging of live video content, integrates with major MAM platforms including AP ENPS, Dalet, and Primestream.

A context-based AI solution is also used effectively in TVU Transcriber, a real-time, speech-to-text transcription service. TVU Transcriber can embed text into a video stream for closed or open captioning, as well as output a file for auditing purposes. While the caption information is important for compliance, it also provides accurate and detailed metadata that makes assets easier to search.
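
To make the caption and file output concrete, here is a minimal sketch that renders timed transcript segments as an SRT caption file; the segment structure is an assumption, and TVU Transcriber’s actual output formats are not documented in this announcement.

```python
# Convert (start, end, text) transcript segments into SRT caption text.
# The segment tuples stand in for whatever a real-time transcriber emits.

def to_srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    millis = int(round(seconds * 1000))
    hours, rem = divmod(millis, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, millis = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"


def segments_to_srt(segments) -> str:
    """Render numbered SRT blocks from timed transcript segments."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)


segments = [
    (0.0, 2.4, "Good evening, and welcome to the six o'clock news."),
    (2.4, 5.1, "Our top story tonight comes from the state capital."),
]
print(segments_to_srt(segments))
```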

The TVU MediaMind user interface, which continues to receive regular upgrades, offers improved facial recognition accuracy and management, as well as a new Microsoft celebrity face database of more than one million celebrities. Users can also add or delete faces and search the database using a photo. TVU MediaMind also offers three times faster cloud interface loading for video preview, recording time increased to up to 24 hours, and auto-prompted search keywords.

“Machine learning and AI are more than just buzzwords, and TVU is harnessing the power of these technologies to deliver ‘smart studio’ solutions and bring about the evolution of Media 4.0,” said Paul Shen, CEO of TVU. “Our industry-leading AI is improving the efficiency at every stage of the media supply chain, from transcription services to real-time live feed searches, which benefits the story-centric workflows of everyone from small-market stations to major international networks. At IBC, we look forward to showing attendees how our established AI and machine learning technologies are helping to accelerate news production and reduce costs.”

More info

Viaccess-Orca and Smart Bring Together the Best in New Targeted TV Advertising Solution

At DMEXCO 2019 and IBC2019, Viaccess-Orca (VO) and its programmatic advertising partner Smart Adserver will introduce an end-to-end, targeted TV advertising solution. Using the preintegrated solution, service providers can generate revenues through data and inventory monetization, in particular for linear multicast and on-demand TV. The solution notably allows the creation of granular audience segments leveraging VO’s AI-enriched TV data-management capabilities and the activation of these segments on Smart’s advertising platform. It further allows the distribution of targeted ads on any screen through VO’s secure player, which offers versatile ad replacement and insertion capabilities.

“Leveraging Smart’s and VO’s unique awareness around data privacy and security aspects, service providers can now enjoy new revenue channels generated by our targeted TV advertising solution while keeping control over vital data assets,” said Romain Job, Chief Strategy Officer at Smart Adserver. “This alliance between two recognized specialists allows service providers to access a single, preintegrated solution for managing ad campaigns, activating audience data, and distributing targeted ads on any device, including on the fast-growing Android TV-enabled market segment.”

The end-to-end solution supports the entire targeted ad cycle and offers seamless integration with Smart’s programmatic advertising environment. This includes data-management enablers that are natively compliant with GDPR and ensure the service provider retains full control from segment creation to monetization. In particular, VO’s behavioral insights on TV-usage data, the result of VO’s extensive AI research, provide an elegant and innovative route to TV audience monetization while reducing reliance on controversial legacy targeting techniques based on third-party data. The solution’s holistic ad monetization capabilities also cover cross-channel advertising, from direct IO management to programmatic transactions. It also supports KPI-focused advertising implementations, with real-time ad analytics to evaluate the effectiveness of ads and make advertising content as relevant as possible to the audience.
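
As a rough illustration of segment-based ad decisioning of the sort described here, the sketch below picks a creative whose targeting segments overlap a household’s audience segments, falling back to an untargeted house promo; the segment names, campaign fields, and selection rule are illustrative assumptions, not VO’s or Smart’s actual logic.

```python
# Toy ad-decision rule: serve the first creative whose target segments
# intersect the household's segments; untargeted creatives match everyone.

campaigns = [
    {"creative": "ev-car-30s", "segments": {"auto_intenders", "urban_households"}},
    {"creative": "broadband-upgrade-20s", "segments": {"heavy_streamers"}},
    {"creative": "house-promo-15s", "segments": set()},  # untargeted fallback
]


def select_ad(household_segments: set) -> str:
    """Return the first eligible creative for the household's segments."""
    for campaign in campaigns:
        if not campaign["segments"] or campaign["segments"] & household_segments:
            return campaign["creative"]
    return "house-promo-15s"


# A household placed in two AI-derived audience segments:
print(select_ad({"heavy_streamers", "sports_fans"}))  # broadband-upgrade-20s
```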

VO’s playback solutions, combined with Smart’s performance optimization techniques including peak management, ad-call pacing, local creative storage, validation, and provisioning, enable targeted ad delivery for any TV use case (e.g. linear, on-demand, or catch-up) onto any device (including Android TV-enabled) over IPTV or OTT.

“Targeted TV advertising is a huge revenue opportunity for operators and broadcasters that can help turn viewers’ experiences into personal ones, thereby driving higher engagement,” said Alain Nochimowski, Executive Vice President of Innovation at Viaccess-Orca. “This partnership between two mature technology providers results in an easy, fast, and secure way to enter the targeted TV advertising market and generate a new line of revenue. It’s exactly the kind of targeted TV advertising solution the industry needs to succeed.”

VO and Smart will demonstrate the targeted TV advertising solution at:
• DMEXCO, Sept. 11-12 in Cologne at Smart’s Booth Hall 6 – C061, and
• IBC2019, Sept. 13-17 in Amsterdam at Viaccess-Orca Stand 1.A51 and Harmonic Stand 1.B20.

More info

Actus Digital Drives Media Monitoring Efficiency With New Artificial Intelligence Capabilities

Actus Digital announced that its world-renowned media monitoring system has been enhanced with artificial intelligence (AI) capabilities that dramatically speed up workflows. Using Actus Digital’s intelligent, data-driven platform, media companies can automatically tag, organize, and categorize video recordings to enable rapid retrieval of relevant content and clip creation for social media outlets and the web.

“In today’s media environment, companies are dealing with a massive amount of content and data. How fast they can analyze data, find relevant content, and turn that content into engaging clips is a major differentiator,” said Raphael Renous, CTO, Actus Digital. “AI is a game changer for media monitoring, as it opens up an entire new range of workflows and automation options. With our AI media monitoring platform, tagging and clips creation is an instantaneous process based on comprehensive content analysis, and we’re excited to bring that unique value prop to our customers.”

Actus Digital’s media monitoring platform, newly enhanced with AI, allows media companies to monitor and search content more intelligently, beyond the channel name, date/time, extracted metadata (e.g., as-run/EPG, closed captions), and manually entered metadata. Users can also search for spoken words (speech to text), text that appears in the video, specific faces (facial recognition), and logos and logo changes, as well as detect advertising. The AI capabilities are among ongoing improvements to the compliance solution, which offers detailed reports on loudness, SCTE, and closed captions, plus automatic configuration options, OTT monitoring, and more.
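
As an illustration of how such AI-extracted facets could be made searchable together, the sketch below builds a simple inverted index over spoken words, on-screen text, faces, and logos; the facet names and record layout are assumptions, not Actus Digital’s internal schema.

```python
# Build a (facet, value) -> recording IDs index so clips can be retrieved by
# any combination of AI-extracted facets. Data is illustrative only.
from collections import defaultdict

recordings = {
    "rec-101": {"speech": {"election", "turnout"}, "ocr": {"BREAKING"},
                "faces": {"Candidate A"}, "logos": {"Channel 5"}},
    "rec-102": {"speech": {"storm", "flooding"}, "ocr": {"WEATHER ALERT"},
                "faces": set(), "logos": {"Channel 5"}},
}


def build_index(recs):
    """Map each (facet, value) pair to the recordings that contain it."""
    index = defaultdict(set)
    for rec_id, facets in recs.items():
        for facet, values in facets.items():
            for value in values:
                index[(facet, value)].add(rec_id)
    return index


index = build_index(recordings)
# Recordings where a given face AND a given logo both appear:
hits = index[("faces", "Candidate A")] & index[("logos", "Channel 5")]
print(sorted(hits))  # ['rec-101']
```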

AI capabilities have also been added to Actus Digital’s Clip Factory clip-creation workflow for increased speed and efficiency. With these new options, media companies can decrease manual labor and gain an edge on the competition.

The Actus Digital media monitoring platform delivers an intelligent approach to monitoring and support for multiple deployment environments, including on-premises, virtualized, cloud, and hybrid.

Actus Digital will demonstrate the latest innovations for its media compliance and monitoring platforms at IBC2019, Sept. 13-17 in Amsterdam at stand 3.C69.

More info

Harmonic Brings Real-World OTT and Broadcast Success to IBC2019

At IBC2019, Harmonic will demonstrate real-world deployments of its unified OTT and broadcast delivery solutions. A variety of deployment strategies will be showcased, including cloud, on-premises, and hybrid environments, showing how service providers, broadcasters, and content owners can originate and distribute video channels smarter, faster, and simpler. Harmonic’s unique approach to video delivery includes cloud-based SaaS solutions that dramatically speed up the launch of new services, including live OTT channels, increase business agility, and enhance video experiences.

“At IBC2019, we’re drawing from our extensive deployment experience and showcasing concrete solutions deployed in the field by leading service providers, broadcasters and content owners,” said Tim Warren, senior vice president and chief technology officer at Harmonic. “The video market is highly dynamic, and consumer demand for live OTT services is growing. Harmonic’s expansive solutions portfolio and enhancements in cloud, targeted advertisement and 8K technologies, with a SaaS business approach are designed with a sharp eye to help our customers stay one step ahead of the video evolution.”

Unified OTT and Broadcast Delivery

Harmonic will demonstrate unified delivery workflows running in the cloud and on premises, offering service providers a low-latency OTT solution for multiprotocol environments that keeps OTT feeds in step with the broadcast delay, which is key for delivering live sports. In addition, Harmonic will showcase a range of unique workflows for VOS®360 SaaS to launch skinny bundles, create new pop-up channels, deploy disaster recovery, and stream live sports in UHD.

Maximize Efficiency & Flexibility with Virtualized, Software and Cloud Playout Solutions

At IBC2019, Harmonic will demonstrate real-world deployments of hybrid SDI/IP, UHD and HDR playout and on-premises/cloud-based channel origination for OTT and broadcast workflows powered by its virtualized Spectrum™ X media servers and VOS360 SaaS. By supporting the SMPTE ST 2110 suite of standards and AMWA IS-04 and IS-05 specifications, Harmonic enables video content and service providers to make a smooth transition to all-IP workflows.
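
For readers less familiar with the AMWA specifications, the sketch below shows the general shape of discovering senders through an IS-04 query registry and staging a receiver connection through IS-05; the registry and node URLs, API versions, and receiver ID are assumptions, and real deployments differ in detail.

```python
# Minimal sketch of NMOS-style control: list senders via the IS-04 Query API,
# then stage and activate a connection on a receiver via the IS-05
# Connection API. URLs, versions, and IDs are placeholders.
import requests

QUERY_API = "http://registry.example.net/x-nmos/query/v1.3"        # IS-04
CONNECTION_API = "http://node.example.net/x-nmos/connection/v1.1"  # IS-05


def list_senders():
    """Ask the IS-04 registry for all registered senders."""
    resp = requests.get(f"{QUERY_API}/senders", timeout=10)
    resp.raise_for_status()
    return resp.json()


def connect(receiver_id: str, sender_id: str):
    """Stage a sender onto a receiver and activate it immediately (IS-05)."""
    payload = {
        "sender_id": sender_id,
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
    }
    resp = requests.patch(
        f"{CONNECTION_API}/single/receivers/{receiver_id}/staged",
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    senders = list_senders()
    if senders:
        print(connect("00000000-0000-0000-0000-000000000000", senders[0]["id"]))
```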

Simplify Live Content Distribution with CDN-Enabled Primary Distribution SaaS

Harmonic’s CDN-enabled Primary Distribution solution, a powerful workflow supported by the company’s VOS360 SaaS, will also be highlighted. With the Primary Distribution SaaS solution, programmers can deliver linear channels to distributors, whether they are traditional pay-TV operators, virtual MVPDs or local broadcasters, anywhere in the world via CDN.

Increase Monetization with Video SaaS for OTT and Broadcast Services

Harmonic will also showcase the powerful dynamic ad insertion (DAI) capabilities of its video SaaS solutions, which let operators deliver advanced targeted advertisements and optimize ad campaigns so that personalized advertisements reach viewers in real time. Harmonic’s solutions give operators and advertisers a unified approach to reaching audiences over hybrid broadcast and broadband delivery networks.

Future Zone Brings 8K, AI Technologies to the Forefront

The company will demonstrate high-quality 8K video delivery on 8K connected TVs and legacy mobile devices with EyeQ™ content-aware encoding (CAE), which reduces the required bandwidth by up to 50% while improving quality of experience. AI-based video compression will also be demonstrated, showing how it can reduce bit rates for broadcast and IPTV delivery, improve QoE for OTT and increase density for all applications.
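
To put the “up to 50%” figure in context, here is a back-of-the-envelope calculation of what such a bit-rate reduction could mean for CDN egress cost; the bit rates, viewer-hours, and per-gigabyte price are illustrative assumptions only.

```python
# Rough CDN-egress saving from halving the top-rung bit rate with
# content-aware encoding. All inputs are illustrative assumptions.
GB_PER_HOUR_PER_MBPS = 3600 / 8 / 1000  # 1 Mbps streamed for 1 hour = 0.45 GB

baseline_mbps = 6.0       # assumed fixed-bit-rate 1080p rung
cae_mbps = 3.0            # same rung with content-aware encoding at -50%
viewer_hours = 1_000_000  # assumed monthly viewer-hours on that rung
cdn_cost_per_gb = 0.02    # assumed USD per GB of egress


def monthly_egress_cost(mbps: float) -> float:
    return mbps * GB_PER_HOUR_PER_MBPS * viewer_hours * cdn_cost_per_gb


saving = monthly_egress_cost(baseline_mbps) - monthly_egress_cost(cae_mbps)
print(f"Estimated monthly CDN saving: ${saving:,.0f}")  # about $27,000
```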

Thought Leadership on AI Video Compression, 8K and 5G at the Future Zone Theatre

Thierry Fautier, vice president of video strategy at Harmonic, will outline the opportunities that 5G networks promise and the challenges ahead for 8K in his “Taking a Look at 8K and 5G Technologies” presentation on Sept. 14 at 1:50 p.m. in the Future Zone Theatre (Hall 8, Stand F40).

Also in the Future Zone Theatre, Stephane Cloirec, senior director, video appliances at Harmonic, will speak on how “AI-Driven Video Compression Brings New Revenue Opportunities and Cost Savings for Service Providers” on Sept. 16 at 4:10 p.m.

Harmonic is showcasing its innovative solutions for OTT and next-gen TV delivery, along with real-world deployment examples, at IBC2019, Sept. 13-17, in Amsterdam at Stand 1.B20.

More info

Veritone Inks Deal With ART19 to Use aiWARE for Intelligent Ad Targeting in Podcasting

Veritone announced an agreement with ART19, a leading podcast hosting, content management, monetization, and measurement platform. ART19 will use aiWARE to run cognitive analysis and topical extraction on publisher podcasts that are part of ART19’s targeted ad sales platform, making it possible for ART19’s advertising partners to target ads based on the content of an individual podcast episode.

“ART19 knows podcast advertising works. And advertisers know that ads can be incredibly successful if they’re reaching the right listeners at the right time,” said Lex Friedman, chief revenue officer at ART19. “By adding aiWARE to our platform, we’re empowering buyers to target ads to consumers based on the content of the episode they’re listening to. Listeners appreciate contextual and relevant ads — and that means they pay more attention to them.”

ART19 will integrate aiWARE’s transcription and topic analysis capabilities to enhance its targeted marketplace. The ART19 sales team will then be able to place contextually relevant advertisements within the podcast based on its subject matter.

Programmatic digital audio ad serving generally allows advertisers to reach users based on a combination of targetable attributes derived from first-party and third-party user data (such as zip code, age, gender, device ID, or internet browsing behavior). ART19 will use aiWARE to identify the topics discussed in a podcast episode, enabling the digital audio ad server to target or exclude ads based not only on who is listening to the content, but on the topical taxonomy of that content. Now, for example, a financial services firm can target ads to be played in podcast episodes that are discussing personal finance and are being listened to by 25- to 34-year-olds. Conversely, aiWARE can also support brand safety use cases, in which an advertiser requests that its ads be excluded from any podcast episodes that discuss controversial topics.
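
A minimal sketch of that include/exclude decisioning might look like the following; the topic labels, demographic buckets, and rule fields are illustrative assumptions, not aiWARE’s or ART19’s actual taxonomy or ad-server rule syntax.

```python
# Toy eligibility check: an ad serves only if the episode's extracted topics
# match the campaign, no brand-safety exclusion applies, and the listener's
# demographic bucket is targeted. All labels are illustrative.

episode = {
    "topics": {"personal finance", "retirement planning"},  # topical extraction
    "listener_age_bucket": "25-34",                         # audience data
}

campaign = {
    "creative": "robo-advisor-30s",
    "include_topics": {"personal finance", "investing"},
    "exclude_topics": {"politics", "true crime"},            # brand-safety list
    "age_buckets": {"25-34", "35-44"},
}


def ad_is_eligible(ep: dict, camp: dict) -> bool:
    """Apply topic include/exclude rules, then the demographic rule."""
    if camp["exclude_topics"] & ep["topics"]:
        return False
    if camp["include_topics"] and not (camp["include_topics"] & ep["topics"]):
        return False
    return ep["listener_age_bucket"] in camp["age_buckets"]


print(ad_is_eligible(episode, campaign))  # True
```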

Before aiWARE, there was no way for ART19’s employees to analyze and categorize topical metadata efficiently across thousands of podcast episodes. Now, with aiWARE’s topical extraction capabilities, ART19 can automate this workflow and pass the metadata to the ad server, which will automatically include or exclude ads based on rule sets.

“We’re always looking for powerful tools that benefit everyone in our ecosystem to keep pushing podcasting forward — for publishers, advertisers, and listeners alike,” Friedman added. “aiWARE will lead to contextually relevant ads, which will serve publishers well in terms of revenue; yield great results for advertisers; and provide listeners with ads that feel native, custom, and in line with their interests.”

Unlike a single-point solution that solves only one use case, aiWARE affords ART19 a true AI operating system that makes its investment in AI future-proof. The aiWARE platform’s open architecture allows ART19 to integrate cognition from multiple machine learning algorithms into its existing technology stack seamlessly and to add more cognitive categories easily as new needs arise.

“aiWARE enables ad-tech capabilities within audio and video formats that were previously impractical because they exceeded the capacity and financial resources of human workforces. In a parallel to Google’s indexing of websites, Veritone’s aiWARE automates the systematic indexing and contextual extraction of hundreds of thousands of hours of audio and video content to enable new targeting and brand safety workflows,” said Veritone President Ryan Steelberg. “ART19’s deployment shows how refined targeting capabilities can deliver more efficiency for advertisers with a higher ROI on ad spend, all while creating a better listener experience.”

More info

Vionlabs To Debut AI-Powered Content Discovery Platform

“Viewers are spending 25% or more of screen-time looking for something to watch. Solving this issue is our major focus at Vionlabs and early customer engagement has shown that our AI-powered content analysis approach delivers game-changing results,” said Marcus Bergström, CEO, Vionlabs.

At deployment, Vionlabs measures everything in the videos in an operator’s catalogue and trains its AI engines to work out what matters in each asset. The platform uses AI and deep learning and combines this video analysis with a viewer’s detailed watch-history to provide content discovery services to operators through a cloud-based SaaS model. The platform is designed for linear TV, catch-up, streaming and all flavours of VOD.

“Our solution is based on deep thinking from our engineers and their key realisation that we could train AI engines to learn what variables should be measured in the video and audio and combine this with viewers’ watch history to significantly out-perform existing content discovery solutions. Our platform is now providing live content discovery that improves critical operator metrics such as VOD buy-rates and customer engagement, and we’re excited to make our public debut at IBC2019.”

The Vionlabs content discovery platform analyses each video in detail and combines this with the viewer’s watch-history. The platform doesn’t need to know anything more about the consumer, and it doesn’t try to guess who the viewer is. All that matters is the content they watch, for how long and how often.

“During the year we have productised our technology and deployed our content discovery platform for operators that are now enjoying a significant uplift in VOD buy-rates and engagement,” said Giles Wilson, CTO, Vionlabs. “By serving up the right content at the right time, our content discovery platform maximises the time viewers spend watching the content they love and minimises the time they spend searching for it. Leading to happier viewers that spend more time and money on a video platform.”

Vionlabs has trained multiple AI engines to measure everything in the video, from object recognition using computer vision to colours, pace, audio, and many more variables, producing a fingerprint timeline throughout the content.

The AI engine learns what matters and how changes in these fingerprint timelines are connected to the content individual viewers enjoy. Knowing similarities between content is vital, but it’s not enough on its own. Vionlabs has also trained an AI engine to analyse a viewer’s watch-history. Finally, a further AI engine combines the outputs of the others to produce the platform’s content discovery results.
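
As a rough illustration of how fingerprint timelines and watch history might combine into a recommendation score, the sketch below pools each title’s fingerprint into a single vector and ranks unwatched titles by similarity to what the viewer has already finished; the feature values, pooling, and weighting are illustrative assumptions, not Vionlabs’ actual models.

```python
# Toy recommender: score unwatched titles by cosine similarity of pooled
# fingerprint vectors to watched titles, weighted by how much was watched.
import math

# One pooled fingerprint vector per title (e.g., averaged colour/pace/audio features).
fingerprints = {
    "noir-thriller": [0.9, 0.1, 0.7],
    "slow-drama":    [0.8, 0.2, 0.6],
    "kids-cartoon":  [0.1, 0.9, 0.3],
}

# Watch history: title -> fraction of the asset the viewer actually watched.
watch_history = {"noir-thriller": 0.95}


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def recommend(history, catalogue):
    """Rank unwatched titles by completion-weighted similarity to watched ones."""
    scores = {}
    for title, vec in catalogue.items():
        if title in history:
            continue
        scores[title] = sum(
            completion * cosine(vec, catalogue[watched])
            for watched, completion in history.items()
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


print(recommend(watch_history, fingerprints))  # slow-drama ranks above kids-cartoon
```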

The content discovery results produced by the Vionlabs platform are made available through a cloud-based SaaS model integrated with operators’ back-office systems. The platform is already deployed and is generating significant uplifts in VOD buy-rates and viewer engagement for its operator customers.