AES and AIMS Team Up to Host AV-over-IP Technology Pavilion at 2019 AES NY Convention

Following a successful partnership at last year’s convention, the Audio Engineering Society (AES) and the Alliance for IP Media Solutions (AIMS) will again collaborate to create a professional media networking pavilion at the AES Convention in New York on Oct. 16-18.

Central to the pavilion will be the AV-over-IP Technology Pavilion Theater, featuring a continuous program of 30-minute presentations covering a wide range of topics relating to audio and video over IP. In addition to offering visitors valuable insights into audio-over-IP networking, this year’s pavilion will expand its scope to include video topics.

The AES and AIMS have also announced a call for presentations for the AV-over-IP Technology Pavilion Theater. End users, industry associations, solutions providers, and technology developers are invited to share their knowledge and perspectives on how developments in IP networking will impact the pro audio, broadcast, and pro AV industries. More information and the abstract submission form are available online.

The selection committee welcomes the following types of presentations, which must be non-commercial in nature:

  • Technical tutorials (basic, intermediate, advanced)
  • Case studies
  • Market/business case analysis
  • Point-of-view/advocacy
  • Standards progress/update

Preference will be given to presentations related to the following technology areas on the Joint Task Force on Networked Media (JT-NM) Roadmap:

  • AES67
  • SMPTE ST 2110 suite of standards for Professional Media Over Managed IP Networks
  • AMWA NMOS specifications such as IS-04, IS-05, IS-07, and IS-08
  • Timing and synchronization using SMPTE ST 2059-1/2
  • JT-NM TR-1001-1 framework for installing, configuring, and interconnecting equipment
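The AMWA NMOS IS-04 specification listed above defines discovery through a simple HTTP/JSON Query API. As a rough illustration only (the endpoint path follows the published spec, but the sender data below is invented for the example), here is a sketch of filtering a Query API sender list for RTP transports:

```python
import json

# Invented example data shaped like an IS-04 Query API response:
# GET /x-nmos/query/v1.3/senders returns a JSON array of sender
# resources, each carrying an id, a label, and a transport URN.
SAMPLE_RESPONSE = json.dumps([
    {"id": "d7c0d0a6-0001-4e2a-9c1b-111111111111",
     "label": "Camera 1 Video",
     "transport": "urn:x-nmos:transport:rtp"},
    {"id": "d7c0d0a6-0002-4e2a-9c1b-222222222222",
     "label": "Camera 1 Audio",
     "transport": "urn:x-nmos:transport:rtp"},
    {"id": "d7c0d0a6-0003-4e2a-9c1b-333333333333",
     "label": "Legacy WebSocket Feed",
     "transport": "urn:x-nmos:transport:websocket"},
])

def rtp_sender_labels(response_text):
    """Return labels of senders that use an RTP-based transport."""
    senders = json.loads(response_text)
    return [s["label"] for s in senders
            if s["transport"].startswith("urn:x-nmos:transport:rtp")]
```

In a real deployment the JSON would come from an HTTP GET against the registry rather than an embedded string; the parsing step is the same.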

End users and facility operators are particularly encouraged to present their insight and experience related to deployments of the technology areas above. Systems integrators, service providers, and manufacturers are also encouraged to apply.

Speaking times are limited during the three-day exhibition, and potential presenters are encouraged to act quickly and submit proposals early. The deadline for submitting presentation proposals is Aug. 30.

Call for Presentations Now Open for IP Showcase at IBC2019

The call for presentations is now open for the IP Showcase, an education and demonstration pavilion at IBC2019 that will highlight the benefits of, and momentum behind, a common set of IP standards for real-time professional media applications. The IP Showcase is sponsored by a consortium of industry partners including the Audio Engineering Society (AES), the Alliance for IP Media Solutions (AIMS), the Advanced Media Workflow Association (AMWA), the European Broadcasting Union (EBU), the Society of Motion Picture and Television Engineers® (SMPTE®), and the Video Services Forum (VSF).

The submission deadline for speaking proposals is July 12. More details and the submission form are available at http://www.ipshowcase.org/call-for-presentations-ibc2019/.

“The IP Showcase Theatre provides an exceptional opportunity for presenters to offer unique viewpoints on IP video and audio for production and playout. This year’s edition will have a particular emphasis on JT-NM TR-1001-1 automation and will cover the full range of standards including SMPTE ST 2110, AES67, and AMWA NMOS,” said Brad Gilmer, IP Showcase executive director. “As in past years, we are expecting hundreds of IBC show attendees to benefit from the wide range of expertise on offer.

“As broadcasters increasingly adopt and expand their commitment to IP implementations, the industry has come to rely on IP Showcase presentations as a valuable ongoing industry reference for media professionals worldwide,” Gilmer added.

The call for presentations is open to end users, industry associations, solution providers, and technology developers that can share their knowledge and perspectives on how developments in IP workflows are impacting the broadcast industry today and in the future. Presentations may take the form of tutorials (basic and advanced), case studies, panel discussions, market and business case analyses, point-of-view or advocacy, and standards progress updates. This year, the IP Showcase Theatre will also feature an expanded stage area, which provides the opportunity for presenting select hands-on technology demos.

Speaking times are limited during the five-day exhibition, and organizations are encouraged to act quickly and submit proposals early. Product marketing presentations are discouraged, as the theatre is an opportunity to discuss advances in working with media using open, IP-based technologies.

The IP Showcase returns this year to Room E106/E107 at the RAI Amsterdam during IBC2019, Sept. 13-17.

IP Showcase to Make Australia Debut at Media + Entertainment Tech Expo 2019 in Sydney

The Australia Section of the Society of Motion Picture and Television Engineers® (SMPTE®), the Alliance for IP Media Solutions (AIMS), and the IABM have announced that they will stage an IP Showcase for the first time in Australia in conjunction with Media + Entertainment Tech Expo (METExpo 2019) in Sydney, 17-19 July.

The IP Showcase at METExpo 2019 will be an education and demonstration pavilion highlighting the benefits of and momentum behind the move to standards-based IP for real-time professional media across an extensive range of applications and users, from television and film to pro AV, small facilities, and independent operations.

Visitors will be able to see firsthand a collection of hardware and software, integrated and operating, with presentations on technical topics, case studies, and architectures enabled by the SMPTE ST 2110 family of open standards.

The IP Showcase will feature the leading-edge collaborative industry work being facilitated by the following industry partners: the Audio Engineering Society (AES), AIMS, the Advanced Media Workflow Association (AMWA), European Broadcasting Union (EBU), the IABM, SMPTE, and the Video Services Forum (VSF).

Also today, the sponsors announced a call for presentations for the IP Showcase Theatre. The submission deadline for speaking proposals is 7 June 2019.

The presentation stages at previous IP Showcase Theatres have drawn hundreds of attendees each day at shows since 2016, often with standing-room-only crowds. With the growing interest in IP implementation by broadcasters worldwide, attendance at METExpo 2019 is expected to be strong. The significant early adoption of SMPTE ST 2110 products and architectures by broadcasters and outside broadcast providers in Australia is paving the way for uptake by the audiovisual industry segment and other adjacent industries. Companies presenting at the METExpo 2019 IP Showcase Theatre gain a matchless opportunity to offer their unique viewpoints on IP video and audio for production using SMPTE ST 2110 and AES67, as well as on the latest developments in the AMWA NMOS technology stack.

End users, industry associations, solutions providers, and technology developers are all invited to share their knowledge and perspectives on how the developments in IP signal transport will impact the broadcast industry, pro AV, small independent operations, and freelancers today and in the future.

Speaking times are limited during the three-day exhibition, and companies are encouraged to act quickly and submit proposals early. The selection committee is looking for the following types of presentations:

  • Tutorial (basic, intermediate, advanced)
  • Case study
  • Panel discussion
  • Market/business case analysis
  • Point-of-view/advocacy
  • Standards progress/update

Preference will be given to presentations related to the following technology areas on the Joint Task Force on Networked Media (JT-NM) Roadmap:

  • AES67
  • SMPTE ST 2110 suite of standards for Professional Media Over Managed IP Networks
  • AMWA NMOS specifications such as IS-04, IS-05, IS-07, and IS-08
  • SMPTE ST 2059 suite of standards for timing and synchronization
  • JT-NM TR-1001-1 framework for installing, configuring, and interconnecting equipment
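The ST 2059 timing item above has a concrete rule behind it: ST 2059-1 aligns media signals to the PTP epoch, so a signal behaves as if it had been running since 1970-01-01 00:00:00 TAI, with frame boundaries at integer multiples of the frame period. A minimal sketch of that alignment-point rule (a simplification of the standard, which also covers other signal types and offsets):

```python
from fractions import Fraction

def next_alignment_point(tai_seconds, rate):
    """First frame boundary at or after tai_seconds, in exact arithmetic.

    Simplified ST 2059-1 model: frame boundaries occur at integer
    multiples of the frame period (1/rate seconds) counted from the
    PTP epoch, 1970-01-01 00:00:00 TAI.
    """
    t = Fraction(tai_seconds)
    period = 1 / Fraction(rate)
    n = -((-t) // period)          # ceiling of t / period
    return n * period
```

Exact rationals matter here: at 29.97 (30000/1001) fps the period is exactly 1001/30000 s, so computed boundaries stay drift-free over arbitrarily long runs.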

End users and facility operators in broadcast and pro AV are particularly encouraged to present their insight and experience related to deployments of the technology areas above. Systems integrators, service providers, and manufacturers are also encouraged to apply. Product marketing presentations are discouraged, as the theatre is an opportunity to discuss advances in working with media using open IP-based technologies.

More info

ENCO Launches enTranslate Automated, Live Translation and Captioning System

Furthering its mission to help make content more accessible and fully understandable to a wider range of people, ENCO will take the wraps off its new enTranslate automated translation and captioning system in booth N2524 at the 2019 NAB Show. Combining the powerful speech-to-text engine from ENCO’s patented, market-leading enCaption automated captioning solution with advanced translation algorithms, enTranslate provides near-real-time translation of live or pre-recorded content for alternative-language closed captioning, subtitling and more.

enTranslate offers broadcasters and live presenters an easy and affordable solution to automatically translate television programming, board meetings, legislative sessions, lectures, sermons and other content. TV broadcasters can engage wider audiences by offering additional closed caption options – such as CC2 or CC3 – in multiple languages, while government institutions, universities, houses of worship and corporations can embed translated captions in recorded content for subsequent viewing or display live, open-captioned subtitles on local displays for in-person attendees.

“Just like enCaption makes the audio element of video content accessible to hearing-impaired audiences, enTranslate makes it understandable to viewers who aren’t native speakers of the material’s original language,” said Ken Frommert, President of ENCO. “enTranslate delivers fast, high-quality translations at a fraction of the cost of traditional translation services, making live translation practical and affordable for both broadcast and non-broadcast applications. And because it’s available 24/7/365, users can translate content such as breaking news alerts or ad hoc presentations at a moment’s notice, without having to wait for a human translator to arrive.”

enTranslate builds on the highly accurate, low-latency speech recognition core first implemented in ENCO’s award-winning enCaption closed and open captioning system. Leveraging machine learning technology and a deep neural network approach, the speech-to-text engine interprets incoming live or file-based audio in near real time with continually improving accuracy.

enTranslate then feeds the resulting text to its advanced translation process, which supports 46 languages including English, Spanish, French and more. Blending artificial intelligence with sophisticated linguistics modelling, enTranslate uses a Neural Machine Translation (NMT) methodology to provide high-quality translations based on the context surrounding the current words and phrases.

Deployable on-premises or in the cloud, enTranslate’s flexible architecture supports a wide range of live baseband and network-based audio and video inputs including analog, AES, HDMI, SDI, AoIP, NDI® and MADI. Translated results can be output in standard caption file formats; embedded as closed captions using an external or optional integrated encoder; or keyed as open captions over an SDI, HDMI or NDI® video output.

enTranslate offers both live and offline translation. For file-based applications, audio or video clips can be easily ingested into the system and translated into any supported language, enabling users to quickly and affordably process large libraries of previously recorded content.

enTranslate is expected to be commercially released in Q2 and, like all ENCO solutions, will be backed by the company’s renowned, world-class, 24/7 technical support services.

More info

Zylia to Speak on Immersive Audio Capture at AES Dublin 2019 and AES York 2019

“The AES events in Dublin and York offer great opportunities for us to provide practical advice on how to best capture immersive audio. We’re seeing a lot of interest in our 360-degree recording technology and a high demand for guidance in maximizing the potential of spatial recordings that capture the entire sound scene,” said Tomasz Żernicki, co-founder and chief technology officer of Zylia.

Zylia at AES Dublin 2019

Żernicki will chair a March 22 workshop titled “Video Creations for Music, Virtual Reality, Six Degrees of Freedom (6DoF) VR, and 3D Productions — Case Studies.” The workshop’s panel of professional audio engineers and musicians will include Przemysław Danowski from the department of sound engineering at the Fryderyk Chopin University of Music in Warsaw, Poland; Hans-Peter Gasselseder of Aalborg University in Aalborg, Denmark; Maria Kallionpää of Hong Kong Baptist University in Hong Kong; and Eduardo Patricio, sound engineer and designer at Zylia.

The workshop will examine spatial audio-video creations in practice. Panelists will talk about how their 360-degree, 3D, and ambient productions combine sound and vision and discuss spatial recordings of concert music. The workshop will focus on the usage of spherical microphone arrays that enable users to record the entire 3D sound scene as well as 6DoF VR. Panelists will discuss how separation of individual sound sources in postproduction and Ambisonics enables creatives to achieve unique audio effects.

Also, during AES Dublin 2019, Zylia audio research engineer Łukasz Januszkiewicz will join Żernicki and Patricio to present a paper titled “Toward Six Degrees of Freedom Audio Recording and Playback Using Multiple Ambisonics Sound Fields.” The paper session is scheduled for 2:30 p.m. GMT on March 20.

Furthermore, Żernicki has been invited to judge the Saul Walker Student Design Competition during the 146th AES Convention in Dublin. The contest is an opportunity for aspiring hardware and software engineers to gain recognition for their hard work, technical creativity, and ingenuity.

“It is a great privilege for me to be in the jury of this year’s Saul Walker Student Design Competition. The idea of this contest is very close to me for many reasons: On one hand, Zylia’s mission is to create technologies and products that set trends in the audio industry; on the other, hard work, creativity, and courage are values we cherish the most in our team,” Żernicki said.

Zylia at the 2019 AES International Conference on Immersive and Interactive Audio

At the Immersive and Interactive Audio Conference, Żernicki, Patricio, and Januszkiewicz will hold a workshop titled “Six Degrees of Freedom Audio Capture and Playback Using Multiple Higher Order Ambisonics (HOA) Microphones.” Designed for professionals who wish to capture immersive audio scenes for VR or music production, this workshop will focus on recording sound and enabling 6DoF playback by using multiple simultaneous and synchronized HOA recordings. Zylia experts will explain how this strategy enables users to navigate a simulated 3D space and listen to the 6DoF recordings from different perspectives. They will also address challenges related to creating a Unity-based navigable 3D audiovisual playback system.

  • AES Dublin 2019 will take place March 20-23 at the Convention Centre Dublin.
  • 2019 AES International Conference on Immersive and Interactive Audio will be held March 27-29 at the Contemporary Music Research Centre at the University of York.

More info

ENCO Brings Power of Award-Winning enCaption Live Automated Closed Captioning System to Radio

ENCO is already regarded as a pioneer and market leader in automated closed and open captioning, with its award-winning enCaption solutions successfully serving television audiences and video users in corporate, educational and government applications for many years. Now, the company is again breaking new ground by bringing enCaption’s renowned accuracy, speed and efficiency to radio, enabling hearing-impaired audiences to consume radio programming online or via over-the-top (OTT) services.

ENCO will showcase enCaption’s benefits for radio broadcasters and audiences in booth N2524 at the upcoming 2019 NAB Show, taking place April 8 to 11 in Las Vegas.

While captioning is most commonly associated with television rather than radio, Internet and OTT delivery have created the opportunity for radio broadcasters to display corresponding text in the “listener’s” browser or app. This opens the door for hearing-impaired individuals to enjoy content such as talk radio that was previously inaccessible to them. Automated captioning also enables immediate creation of searchable transcripts that broadcasters can post alongside recorded audio clips, enhancing SEO for their websites while improving content discovery for site visitors.

Non-hearing-impaired OTT and mobile users often consume content with their devices on “mute” while on trains or at work, so the availability of captions will make radio a viable new option for them. Last but not least, automated captioning can bolster Visual Radio programming, which is already popular in many international regions and expanding in the U.S. market.

ENCO’s cost-effective, software-defined enCaption solution helps broadcasters and content producers effortlessly provide closed or open captioning for both live and pre-recorded content in near-real-time. Leveraging machine learning technology and a deep neural network voice recognition approach, enCaption’s speech-to-text engine delivers exceptional accuracy with extremely low latency. Integration with newsroom systems and other external vocabulary sources further enhances enCaption’s word recognition and spelling precision, which combine with the solution’s robust punctuation, capitalization and speaker change detection to ensure high-quality captions.

“There is a lot of fantastic content that is only available on radio, and which should be accessible to everyone regardless of any hearing disabilities,” said Ken Frommert, President of ENCO. “We’re excited to combine our deep expertise in the radio market with the proven yet continually advancing capabilities of enCaption to break through these accessibility barriers. Just as captioning enables hearing-impaired audiences to understand and enjoy television content, the same technology now allows them to do the same for radio programming.”

Washington, DC-area NPR member station WAMU is in the process of integrating live captions created by enCaption into its website, with the goal of providing 24/7 captioning of all audio content on WAMU. “This will enable us to make all of our content, regardless of its producer, accessible to our entire community,” said Rob Bertrand, senior director of technology at WAMU. “Our integration is still in the proof-of-concept stage, but we are happy with what we’ve been able to demonstrate so far. We look forward to being able to deliver our content to all members of our community, including those whom audio content has historically been unable to reach.”

enCaption configurations for radio can be flexibly deployed on-premises or in the cloud. On-premises enCaption systems support discrete analog or AES audio sources, while both on-premises and cloud-based deployments can input IP-based audio streams. Captions created by enCaption can be output as files or streams in standard WebVTT format or as a raw text data stream for integration with the station’s website media player, mobile or OTT app. enCaption can also turn an audio-only source into a video stream with open captions overlaid on a plain background or graphic, or combine the audio with a separate video stream while embedding closed captions for display in a web-based video player.
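WebVTT, the caption format named above, is a plain-text format: a “WEBVTT” header followed by cues, each a “start --> end” timing line plus the caption text. A minimal illustrative serializer (not ENCO’s implementation; the cue input is assumed here to be simple (start, end, text) tuples):

```python
def format_timestamp(seconds):
    # WebVTT timestamps use the form HH:MM:SS.mmm
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def to_webvtt(cues):
    """Serialize an iterable of (start_seconds, end_seconds, text) cues."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{format_timestamp(start)} --> {format_timestamp(end)}")
        lines.append(text)
        lines.append("")          # blank line terminates each cue
    return "\n".join(lines)
```

For example, `to_webvtt([(0, 1.5, "Hello")])` produces a header, a blank line, and one cue spanning the first 1.5 seconds.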

enCaption solutions tailored for radio broadcasters are available immediately, and are backed by ENCO’s world-class customer service.

More info

AIMS Creates New Working Group Focused on Pro AV Market

AIMS has appointed David Chiappini as chair and Scott Barella as deputy chair of the new working group.

The AIMS Pro AV Working Group will define an open-standards approach to addressing the pro AV world’s move toward IP media. The group’s efforts will be based on evaluating and recommending existing standards and specifications from AES, AMWA, VSF, SMPTE, IEEE, and IETF that have already been broadly adopted by the media and entertainment industry, incorporating pro AV-market-specific features including security, HDCP support for protected content, and I/O control. The anticipated result of the Pro AV Working Group’s efforts is a flexible, future-proof method for meeting the video, audio, and data requirements of current and future pro AV solutions.

Chiappini, the new working group’s chair, is vice president of research and development for AIMS member Matrox. He has served in a variety of roles at Matrox since 1994 and currently manages the engineering teams behind the Matrox pro AV and broadcast product lines. Chiappini has extensive expertise and experience in the AV-over-IP movement, and he is involved in developing compressed and uncompressed products for that space.

Barella, who is CTO at PESA, provides leadership and insight for the company’s delivery of robust security components to video, audio, and data signals in the Ethernet fabric, as well as compression and distribution technologies. In addition to his new role as deputy chair of the Pro AV Working Group, he also serves as the AIMS Technical Working Group deputy chair. Barella has a strong background in broadcast systems design and architecture, OEM manufacturing and design, systems integration, and operations, with particular emphasis on IP video.